Thank you for this! I should get this up and running on a site I run for a complex fantasy football league...I've been meaning to with Cron jobs and whatnot, but it's just such a hassle and, so far, there hasn't been much site traffic, but still...a catastrophe could be hiding around an upcoming corner!
I've been horribly slow about getting this implemented (had to do a bunch of work on the site first) and I went about doing this stuff last night. The backup script is working brilliantly and I've restored from one of the backups to confirm it, so thank you for the writeup!
However, the Cron portion is proving to be quite the headache. I'm at Digital Ocean with an Ubuntu 12.10 server. Running crontab -e, I was getting errors about nano permissions but found out how to get rid of that nonsense (I can't find the link now, but it was removing a file and commenting out a line about logging stuff). Anyway, the cron job is in my file:
0 * * * * /bin/bash -l -c '/home/kkerley/.rvm/gems/ruby-1.9.3-p392/bin/backup perform -t sqwid_backup'
(got this by running crontab -l just now) but it's never firing. I thought maybe it was due to not having a newline at the end, but I've put one in three times. I also ran pgrep cron to confirm that it's running and it is (returned 598). I just don't understand what I'm doing wrong/why this isn't firing off hourly.
Are there any gotchas that I'm just not aware of? This is the first time I've messed with cron.
One thing (and I've updated the line) is that it should be
backup perform -t production_backup, to match the name of the backup trigger you created earlier.
That should be it I imagine.
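For reference, a corrected entry with the job's output captured to a log file makes cron failures much easier to diagnose (the log path below is just an example):

```shell
# Corrected crontab entry; redirect stdout/stderr so errors aren't silently
# dropped (log path is an example, adjust to taste):
0 * * * * /bin/bash -l -c '/home/kkerley/.rvm/gems/ruby-1.9.3-p392/bin/backup perform -t production_backup' >> /tmp/backup_cron.log 2>&1

# On Ubuntu, cron also logs each attempted run to syslog, which tells you
# whether the job fired at all:
#   grep CRON /var/log/syslog | tail
```

The -l (login shell) flag is what loads RVM under cron's minimal environment, so keep it if you're relying on a gem-installed binary.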
Ahhh! I didn't even notice. I saw that last night and thought it was weird when I was typing it in but figured it was just some weird cron command. :)
I've updated my cronjob and will hopefully know in 51 minutes if it worked or not.
Thanks for the quick reply and again for the tutorial!
s3.path = "/production/database"
Is this the endpoint, or the path within the bucket? I get: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
Does one need to set any permission on the bucket?
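That endpoint error usually means the bucket was created in a different region than the one the client is connecting to. One way to check, sketched here with the AWS CLI (bucket name is a placeholder):

```shell
# Ask S3 which region the bucket actually lives in (requires AWS CLI
# credentials with access to the bucket):
aws s3api get-bucket-location --bucket YOUR_BUCKET
# A null/empty LocationConstraint means us-east-1; otherwise, use the value
# returned here as the region in your Backup model's S3 settings.
```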
For anyone getting this warning.
[fog][WARNING] fog: the specified s3 bucket name(BUCKET_NAME) is not a valid dns name, which will negatively impact performance.
Fog does not like dashes in the BUCKET_NAME; it's best to use BUCKETNAME.
Thanks Chris for this very useful guide. I just implemented the Backup gem and the cron job. And it is successfully storing a backup of the database to Amazon S3.
At the end of the guide you say:
"Always be sure to test your backups and make sure you can safely restore from them!"
So how do you restore your database with a backup on Amazon S3?
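One way to do it, sketched under the assumption that the archive follows Backup's default layout (a tar containing a gzipped SQL dump; all bucket names, paths, and database names below are examples):

```shell
# Hedged sketch of restoring from a Backup-gem archive. First, download the
# archive from S3, e.g. with the AWS CLI (path mirrors your s3.path setting):
#   aws s3 cp s3://YOUR_BUCKET/production/database/production_backup/<timestamp>/production_backup.tar .

# --- demo only: fabricate an archive with Backup's typical layout so the
# --- extraction steps below can be shown end to end
mkdir -p production_backup/databases
echo "-- dump contents --" | gzip > production_backup/databases/PostgreSQL.sql.gz
tar -cf production_backup.tar production_backup
rm -r production_backup
# --- end demo ---

# Unpack the tar and decompress the SQL dump inside it:
tar -xf production_backup.tar
gunzip production_backup/databases/PostgreSQL.sql.gz

# Load the dump back into the database (run manually; needs credentials):
#   psql -U deploy -d myapp_production -f production_backup/databases/PostgreSQL.sql
cat production_backup/databases/PostgreSQL.sql
```

If you used a compressor or encryption in your Backup model, there will be extra layers (e.g. a .gz or .enc suffix on the tar) to undo first.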
Hello Chris, thanks for this useful website! I am getting the following error with Amazon S3:
AuthorizationHeaderMalformed: The authorization header is malformed; the authorization header requires three components: Credential, SignedHeaders, and Signature.
I have tried to use GIYF without success. I have the paperclip gem in my Rails app working fine with the same S3 credentials and the same bucket, but the Backup gem is not! Can you help me please?
Is there a way to use the Backup gem on a Heroku instance, and how would you set it up? Thanks!
Hey Chris, Backup gem link is broken, new link should go here: https://github.com/backup/b...
[fog][WARNING] fog: followed redirect to my-bucket.s3-eu-west-1.amaz..., connecting to the matching region will be more performant
[info] CloudIO::Error: Retry #1 of 10
[info] Operation: PUT 'path/production_backup.tar'
[info] --- Wrapped Exception ---
[info] Excon::Errors::BadRequest: Expected(200) <=> Actual(400 Bad Request)
[info] :body=> "\n<error>
IncompleteBody<message>The request body terminated unexpectedly<
Hi Chris, my application's database size is 84 GB. If the backup runs every hour, won't it hamper the application? What would be the ideal way?
Lots of options. Depending on how long it takes you might want to do this every 4 hours or something instead. You might want to do a live replica database for realtime backups to another server and then archive a copy of it nightly.
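In crontab syntax, those schedules would look something like this (trigger name assumed from the tutorial; keep whatever full path and shell wrapper your existing entry uses):

```shell
# Every 4 hours instead of hourly:
0 */4 * * * /bin/bash -l -c 'backup perform -t production_backup'

# Or a nightly archive at 3am:
0 3 * * * /bin/bash -l -c 'backup perform -t production_backup'
```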
Hey Chris, this is interesting. But I need the restore functionality. Do you think there's a way around that?
Hey Chris, how do you do an incremental backup? Are there good tools for that, or a procedure using Postgres features like WAL to get incremental backups?
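For Postgres specifically, continuous WAL archiving plus periodic base backups is the usual route. A rough sketch, with paths and settings as examples only (option names vary slightly across Postgres versions):

```shell
# In postgresql.conf (configuration, not shell commands):
#   wal_level = replica
#   archive_mode = on
#   archive_command = 'test ! -f /var/backups/wal/%f && cp %p /var/backups/wal/%f'
#
# Postgres then copies each completed WAL segment to /var/backups/wal/,
# which is your incremental stream. Periodically take a base backup that
# the archived WAL can be replayed onto during a restore:
pg_basebackup -D /var/backups/base -Ft -z
```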