commit 33b5b323d07030ee67dc9fc4e52f03178cc102cb 1 parent a75a75f
Brian D. Burns authored
Showing with 140 additions and 387 deletions.
@@ -1,5 +1,5 @@
-Copyright (c) 2009-2011 Michael van Rooijen ( [@meskyanichi](!/meskyanichi) )
+Copyright (c) 2009-2013 Michael van Rooijen ( [@meskyanichi](!/meskyanichi) )
The "Backup" RubyGem is released under the **MIT LICENSE**
@@ -1,38 +1,45 @@
-Backup is a RubyGem, written for Linux and Mac OSX, that allows you to easily perform backup operations on both your remote and local environments. It provides you with an elegant DSL in Ruby for modeling your backups. Backup has built-in support for various databases, storage protocols/services, syncers, compressors, encryptors and notifiers which you can mix and match. It was built with modularity, extensibility and simplicity in mind.
+Backup is a system utility for Linux and Mac OS X, distributed as a RubyGem, that allows you to easily perform backup
+operations. It provides an elegant DSL in Ruby for _modeling_ your backups. Backup has built-in support for various
+databases, storage protocols/services, syncers, compressors, encryptors and notifiers which you can mix and match. It
+was built with modularity, extensibility and simplicity in mind.
-[![Build Status](](
-[![Still Maintained](](
+## Installation
+To get the latest stable version:
-### Author
-**[Michael van Rooijen]( ( [@meskyanichi](!/meskyanichi) )**
-Drop me a message for any questions, suggestions, requests, bugs or submit them to the [issue log](
+ gem install backup
-### Installation
+See [Release Notes]( in the wiki for changes in the latest release.
-To get the latest stable version
+Backup supports Ruby versions 1.8.7, 1.9.2, 1.9.3 and 2.0.0.
- gem install backup
+## Overview
-You can view the list of released versions over at [ (Backup)](
+Backup allows you to _model_ your backup jobs using a Ruby DSL:
+```rb
+Backup::Model.new(:my_backup, 'Description for my_backup') do
+  # ... components here ...
+end
+```
-### Getting Started
+The `:my_backup` symbol is the model's `trigger` and is used to perform the job:
-I recommend you read this README first, and refer to the [wiki pages]( afterwards. There's also a [Getting Started wiki page](
+ $ backup perform --trigger my_backup
-What Backup 3 currently supports
+Backup's _components_ are added to the backup _model_ to define the actions to be performed.
+All of Backup's components are fully documented in the [Backup Wiki](
+The following is a brief overview of the components Backup provides:
-Below you find a list of components that Backup currently supports. If you'd like support for components other than the ones listed here, feel free to request them or to fork Backup and add them yourself. Backup is modular and easy to extend.
+### Archives and Databases
-### Database Support
+[Archives]( create basic `tar` archives. Both **GNU** and **BSD**
+`tar` are supported.
+[Databases]( create backups of one of the following supported databases:
- PostgreSQL
@@ -40,78 +47,76 @@ Below you find a list of components that Backup currently supports. If you'd lik
- Redis
- Riak
-[Database Wiki Page](
+Any number of Archives and Databases may be defined within a backup _model_.
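For instance, a model pairing one Archive with one Database might be sketched as follows (names, paths and credentials are placeholders; this is a Backup model fragment that assumes the `backup` gem, not a standalone script):

```ruby
Backup::Model.new(:app_backup, 'App files and database') do

  # Collect application files into a tar archive
  archive :app_files do |archive|
    archive.add '/var/apps/my_app/public/uploads'
  end

  # Dump a PostgreSQL database alongside the archive
  database PostgreSQL do |db|
    db.name     = 'my_app_production'
    db.username = 'username'
    db.password = 'password'
  end
end
```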
-### Filesystem Support
+### Compressors and Encryptors
-- Files
-- Directories
+Adding a [Compressor]( to your backup will compress all the
+Archives and Database backups within your final archive package.
+`Gzip`, `Bzip2` and other similar compressors are supported.
-[Archive Wiki Page](
+Adding an [Encryptor]( allows you to encrypt your final backup package.
+Both `OpenSSL` and `GPG` are supported.
-### Storage Locations and Services
+Your final backup _package_ might look something like this:
+```
+$ gpg --decrypt my_backup.tar.gpg --outfile my_backup.tar
+$ tar -tvf my_backup.tar
+  my_backup/
+  my_backup/archives/
+  my_backup/archives/user_avatars.tar.gz
+  my_backup/archives/log_files.tar.gz
+  my_backup/databases/
+  my_backup/databases/PostgreSQL/
+  my_backup/databases/PostgreSQL/pg_db_name.sql.gz
+  my_backup/databases/Redis/
+  my_backup/databases/Redis/redis_db_name.rdb.gz
+```
+### Storages
+Once your final backup package is ready, you can use any number of the following
+[Storages]( to store it:
- Amazon Simple Storage Service (S3)
- Rackspace Cloud Files (Mosso)
- Ninefold Cloud Storage
- Dropbox Web Service
-- Remote Servers *(Available Protocols: FTP, SFTP, SCP and RSync)*
-- Local Storage
-[Storage Wiki Page](
-### Storage Features
+- Remote Servers _(Available Protocols: FTP, SFTP, SCP and RSync)_
+- Local Storage _(including network mounted locations)_
-- **Backup Cycling, applies to:**
- - Amazon Simple Storage Service (S3)
- - Rackspace Cloud Files (Mosso)
- - Ninefold Cloud Storage
- - Dropbox Web Service
- - Remote Servers *(Only Protocols: FTP, SFTP, SCP)*
- - Local Storage
+All of the above Storages _(except RSync)_ support:
-[Cycling Wiki Page](
+- [Cycling]( to keep and rotate multiple copies
+of your stored backups.
-- **Backup Splitting, applies to:**
- - Amazon Simple Storage Service (S3)
- - Rackspace Cloud Files (Mosso)
- - Ninefold Cloud Storage
- - Dropbox Web Service
- - Remote Servers *(Only Protocols: FTP, SFTP, SCP)*
- - Local Storage
+- [Splitter]( to break up a large
+backup package into smaller files.
-[Splitter Wiki Page](
-- **Incremental Backups, applies to:**
- - Remote Servers *(Only Protocols: RSync)*
+When using the RSync Storage, once a full backup has been stored, subsequent backups only need to
+transmit the changed portions of the final archive to bring the remote copy up-to-date.
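Inside a model, the RSync Storage might be configured roughly like this (a sketch only; the option names mirror the SFTP example in this README and should be verified against the Storages wiki page for your Backup version):

```ruby
store_with RSync do |rsync|
  rsync.username = 'my_username'    # placeholder credentials
  rsync.ip       = '123.45.678.90'  # placeholder host
  rsync.path     = '~/backups/'     # destination path on the remote server
end
```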
### Syncers
-- RSync (Push, Pull and Local)
-- Amazon S3
-- Rackspce Cloud Files
-[Syncer Wiki Page](
+[Syncers]( are processed after your final backup archive has been
+stored and allow you to perform file synchronization.
-### Compressors
+Backup includes two types of Syncers:
-- Gzip
-- Bzip2
-- Pbzip2
-- Lzma
+- `RSync`: Used to sync files locally, local-to-remote (`Push`), or remote-to-local (`Pull`).
+- `Cloud`: Used to sync files to remote storage services like Amazon S3 and Rackspace.
-[Compressors Wiki Page](
+A backup _model_ may also contain _only_ Syncers, with no Archives or Databases.
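As an illustration, a `Cloud::S3` syncer might be configured like this (a sketch with placeholder credentials and paths, based on options shown elsewhere in this README):

```ruby
sync_with Cloud::S3 do |s3|
  s3.access_key_id     = 'my_access_key_id'
  s3.secret_access_key = 'my_secret_access_key'
  s3.bucket = 'my-bucket'
  s3.path   = '/backups'
  s3.mirror = true  # also remove remote files that no longer exist locally

  s3.directories do |directory|
    directory.add '/var/apps/my_app/public/videos'
    directory.add '/var/apps/my_app/public/music'
  end
end
```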
-### Encryptors
+### Notifiers
-- OpenSSL
-- GPG
+[Notifiers]( are used to send notifications upon successful and/or
+failed completion of your backup _model_.
-[Encryptors Wiki Page](
+Supported notification services include:
-### Notifiers
-- Mail
+- Email _(SMTP, Sendmail, Exim and File delivery)_
- Twitter
- Campfire
- Presently
@@ -119,360 +124,108 @@ Below you find a list of components that Backup currently supports. If you'd lik
- Hipchat
- Pushover
-[Notifiers Wiki Page](
-### Supported Ruby versions (Tested with RSpec)
-- Ruby 1.9.3
-- Ruby 1.9.2
-- Ruby 1.8.7
-A sample Backup configuration file
+## Generators
-This is a Backup configuration file. Check it out and read the explanation below.
-Backup has a [great wiki]( which explains each component of Backup in detail.
+Backup makes it easy to set up new backup _model_ files with its [Generator]( command.
-``` rb, 'A sample backup configuration') do
+```
+$ backup generate:model -t my_backup --archives --databases=postgresql,redis --compressors=gzip \
+    --encryptors=gpg --storages=sftp,s3 --notifiers=mail,twitter
+```
- split_into_chunks_of 4000
+Simply generate a new _model_ using the options you need, then update the configuration for each component using the
+[Wiki]( documentation.
- database MySQL do |database|
- = 'my_sample_mysql_db'
- database.username = 'my_username'
- database.password = 'my_password'
- database.skip_tables = ['logs']
- database.additional_options = ['--single-transaction', '--quick']
- end
+The following is an example of what this Backup _model_ might look like:
- database MongoDB do |database|
- = 'my_sample_mongo_db'
- database.only_collections = ['users', 'events', 'posts']
- end
+```rb
+Backup::Model.new(:my_backup, 'Description for my_backup') do
+ split_into_chunks_of 250
archive :user_avatars do |archive|
archive.add '/var/apps/my_sample_app/public/avatars'
- archive :logs do |archive|
- archive.add '/var/apps/my_sample_app/logs/production.log'
- archive.add '/var/apps/my_sample_app/logs/newrelic_agent.log'
- archive.add '/var/apps/my_sample_app/logs/other/'
- archive.exclude '/var/apps/my_sample_app/logs/other/exclude-this.log'
+ archive :log_files do |archive|
+ archive.add '/var/apps/my_sample_app/logs'
+ archive.exclude '/var/apps/my_sample_app/logs/exclude-this.log'
- encrypt_with OpenSSL do |encryption|
- encryption.password = 'my_secret_password'
+ database PostgreSQL do |db|
+ = "pg_db_name"
+ db.username = "username"
+ db.password = "password"
+ end
+ database Redis do |db|
+ = "redis_db_name"
+ db.path = "/usr/local/var/db/redis"
+ db.password = "password"
+ db.invoke_save = true
compress_with Gzip
- store_with SFTP, "Server A" do |server|
- server.username = 'my_username'
- server.password = 'secret'
- server.ip = ''
- server.port = 22
- server.path = '~/backups'
- server.keep = 25
+ encrypt_with GPG do |encryption|
+ encryption.mode = :symmetric
+ encryption.passphrase = 'my_password'
- store_with SFTP, "Server B" do |server|
- server.username = 'my_username'
- server.password = 'secret'
- server.ip = ''
- server.port = 22
- server.path = '~/backups'
- server.keep = 25
+ store_with SFTP do |server|
+ server.username = "my_username"
+ server.password = "my_password"
+ server.ip = "123.45.678.90"
+ server.port = 22
+ server.path = "~/backups/"
+ server.keep = 5
store_with S3 do |s3|
- s3.access_key_id = 'my_access_key_id'
- s3.secret_access_key = 'my_secret_access_key'
- s3.region = 'us-east-1'
- s3.bucket = 'my_bucket/backups'
- s3.keep = 20
- end
- sync_with Cloud::S3 do |s3|
s3.access_key_id = "my_access_key_id"
s3.secret_access_key = "my_secret_access_key"
- s3.bucket = "my-bucket"
- s3.path = "/backups"
- s3.mirror = true
- s3.directories do |directory|
- directory.add "/var/apps/my_app/public/videos"
- directory.add "/var/apps/my_app/public/music"
- end
+ s3.region = "us-east-1"
+ s3.bucket = "bucket-name"
+ s3.path = "/path/to/my/backups"
+ s3.keep = 10
notify_by Mail do |mail|
- mail.on_success = false
- mail.on_warning = true
- mail.on_failure = true
+ mail.on_success = false
+ mail.from = ""
+ = ""
+ mail.address = ""
+ mail.port = 587
+ mail.user_name = ""
+ mail.password = "my_password"
+ mail.authentication = "plain"
+ mail.enable_starttls_auto = true
notify_by Twitter do |tweet|
- tweet.on_success = true
- tweet.on_warning = true
- tweet.on_failure = true
+ tweet.consumer_key = "my_consumer_key"
+ tweet.consumer_secret = "my_consumer_secret"
+ tweet.oauth_token = "my_oauth_token"
+ tweet.oauth_token_secret = "my_oauth_token_secret"
+  end
+end
+```
-### Brief explanation for the above example configuration
-First, it will dump the two Databases (MySQL and MongoDB). The MySQL dump will be piped through the Gzip Compressor into
-`sample_backup/databases/MySQL/my_sample_mysql_db.sql.gz`. The MongoDB dump will be dumped into
-`sample_backup/databases/MongoDB/`, which will then be packaged into `sample_backup/databases/MongoDB-#####.tar.gz`
-(`#####` will be a simple unique identifier, in case multiple dumps are performed.)
-Next, it will create two _tar_ Archives (user\_avatars and logs). Each will be piped through the Gzip Compressor into
-`sample_backup/archives/` as `user_archives.tar.gz` and `logs.tar.gz`.
-Finally, the `sample_backup` directory will be packaged into an uncompressed _tar_ archive, which will be piped through
-the OpenSSL Encryptor to encrypt this final package into `YYYY-MM-DD-hh-mm-ss.sample_backup.tar.enc`. This final
-encrypted archive will then be transfered to your Amazon S3 account. If all goes well, and no exceptions are raised,
-you'll be notified via the Twitter notifier that the backup succeeded. If any warnings were issued or there was an
-exception raised during the backup process, then you'd receive an email in your inbox containing detailed exception
-information, as well as receive a simple Twitter message that something went wrong.
-Aside of S3, we have also defined two `SFTP` storage methods, and given them two unique identifiers `Server A` and
-`Server B` to distinguish between the two. With these in place, a copy of the backup will now also be stored on two
-separate servers: `` and ``.
-As you can see, you can freely mix and match **archives**, **databases**, **compressors**, **encryptors**, **storages**
-and **notifiers** for your backups. You could even specify 4 storage locations if you wanted: Amazon S3, Rackspace Cloud
-Files, Ninefold and Dropbox, it'd then store your packaged backup to 4 separate locations for high redundancy.
-Also, notice the `split_into_chunks_of 4000` at the top of the configuration. This tells Backup to split any backups
-that exceed in 4000 MEGABYTES of size in to multiple smaller chunks. Assuming your backup file is 12000 MEGABYTES (12GB)
-in size, then Backup will take the output which was piped from _tar_ into the OpenSSL Compressor and additionally pipe
-that output through the _split_ utility, which will result in 3 chunks of 4000 MEGABYTES with additional file extensions
-of `-aa`, `-ab` and `-ac`. These files will then be individually transfered. This is useful for when you are using
-Amazon S3, Rackspace Cloud Files, or other 3rd party storage services which limit you to "5GB per file" uploads. So with
-this, the backup file size is no longer a constraint.
-Additionally we have also defined a **S3 Syncer** ( `sync_with Cloud::S3` ), which does not follow the above process of
-archiving/compression/encryption, but instead will directly sync the whole `videos` and `music` folder structures from
-your machine to your Amazon S3 account. (very efficient and cost-effective since it will only transfer files that were
-added/changed. Additionally, since we flagged it to 'mirror', it'll also remove files from S3 that no longer exist). If
-you simply wanted to sync to a separate backup server that you own, you could also use the RSync syncer for even more
-efficient backups that only transfer the **bytes** of each file that changed.
-There are more **archives**, **databases**, **compressors**, **encryptors**, **storages** and **notifiers** than
-displayed in the example, all available components are listed at the top of this README, as well as in the
-[Wiki]( with more detailed information.
-### Running the example
-Notice the `, 'A sample backup configuration') do` at the top of the above example. The
-`:sample_backup` is called the **trigger**. This is used to identify the backup procedure/file and initialize it.
-``` sh
-backup perform -t [--trigger] sample_backup
+The [Getting Started]( page provides a simple
+walk-through to familiarize you with setting up, configuring and running a backup job.
-Now it'll run the backup, it's as simple as that.
+## Suggestions, Issues, etc...
-### Automatic backups
+If you have any suggestions or problems, please submit an Issue or Pull Request using Backup's
+[Issue Log](
-Since Backup is an easy-to-use command line utility, you should write a crontask to invoke it periodically. I recommend
-using [Whenever]( to manage your crontab. It'll allow you to write to the crontab
-using pure Ruby, and it provides an elegant DSL to do so. Here's an example:
+If you find any errors or omissions in Backup's documentation [Wiki](,
+please feel free to edit it!
-``` rb
-every 6.hours do
- command "backup perform --trigger sample_backup"
+Backup has seen many improvements over the years thanks to its
+[Contributors](, as well as those who have helped discuss issues and
+improve the documentation, and looks forward to continuing to provide users with a reliable backup solution.
-With this in place, run `whenever --update-crontab backup` to write the equivalent of the above Ruby syntax to the
-crontab in cron-syntax. Cron will now invoke `backup perform --trigger sample_backup` every 6 hours. Check out the
-Whenever project page for more information.
-### Documentation
-See the [Wiki Pages](
-### Suggestions, Bugs, Requests, Questions
-View the [issue log]( and post them there.
-### Contributors
- <tr>
- <th>Contributor</th>
- <th>Contribution</th>
- </tr>
- <tr>
- <td><a href="" target="_blank"><b>Brian D. Burns ( burns )</b></a></td>
- <td><b>Core Contributor</b></td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Aditya Sanghi ( asanghi )</a></td>
- <td>Twitter Notifier, Dropbox Timeout Configuration</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Phil Cohen ( phlipper )</a></td>
- <td>Exclude Option for Archives</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Arun Agrawal ( arunagw )</a></td>
- <td>Campfire notifier</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Stefan Zimmermann ( szimmermann )</a></td>
- <td>Enabling package/archive (tar utility) support for more Linux distro's (FreeBSD, etc)</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Mark Nyon ( trystant )</a></td>
- <td>Helping discuss MongoDump Lock/FSync problem</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Bernard Potocki ( imanel )</a></td>
- <td>Helping discuss MongoDump Lock/FSync problem + Submitting a patch</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Tomasz Stachewicz ( tomash )</a></td>
- <td>Helping discuss MongoDump Lock/FSync problem + Submitting a patch</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Paul Strong ( lapluviosilla )</a></td>
- <td>Helping discuss MongoDump Lock/FSync problem</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Ryan ( rgnitz )</a></td>
- <td>Helping discuss MongoDump Lock/FSync problem</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Robert Speicher ( tsigo )</a></td>
- <td>Adding the --quiet [-q] feature to Backup to silence console logging</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Jon Whitcraft ( jwhitcraft )</a></td>
- <td>Adding the ability to add additional options to the S3Syncer</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Benoit Garret ( bgarret )</a></td>
- <td>Presently notifier</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Lleïr Borràs Metje ( lleirborras )</a></td>
- <td>Lzma Compressor</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Jonathan Lassoff ( jof )</a></td>
- <td>Bugfixes and more secure GPG storage</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Michal Cichra ( mikz )</a></td>
- <td>Wildcard Triggers</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Dmitry Novotochinov ( trybeee )</a></td>
- <td>Dropbox Storage</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Emerson Lackey ( Emerson )</a></td>
- <td>Local RSync Storage</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">digilord</a></td>
- <td>OpenSSL Verify Mode for Mail Notifier</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">stemps</a></td>
- <td>FTP Passive Mode</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">David Kowis ( dkowis )</a></td>
- <td>Fixed PostgreSQL Password issues</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Jonathan Otto ( jotto )</a></td>
- <td>Allow for running PostgreSQL as another UNIX user</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">João Vitor ( joaovitor )</a></td>
- <td>Changed default PostgreSQL example options to appropriate ones</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Manuel Alabor ( swissmanu )</a></td>
- <td>Prowl Notifier</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Joseph Crim ( josephcrim )</a></td>
- <td>Riak Database, exit() suggestions</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Jamie van Dyke ( fearoffish )</a></td>
- <td>POpen4 implementation</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Harry Marr ( hmarr )</a></td>
- <td>Auth URL for Rackspace Cloud Files Storage</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Manuel Meurer ( manuelmeurer )</a></td>
- <td>Ensure the storage file (YAML dump) has content before reading it</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Jesse Dearing ( jessedearing )</a></td>
- <td>Hipchat Notifier</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Szymon ( szymonpk )</a></td>
- <td>Pbzip2 compressor</td>
- </tr>
- <tr>
- <td><a href="" target="_blank">Steve Newson ( SteveNewson )</a></td>
- <td>Pushover Notifier</td>
- </tr>
-### Want to contribute?
-- Fork the project
-- Write RSpec tests, and test against:
- - Ruby 1.9.3
- - Ruby 1.9.2
- - Ruby 1.8.7
-- Try to keep the overall *structure / design* of the gem the same
-I can't guarantee I'll pull every pull request. Also, I may accept your pull request and drastically change parts to improve readability/maintainability. Feel free to discuss about improvements, new functionality/features in the [issue log]( before contributing if you need/want more information.
-### Easily run tests against all three Ruby versions

This part of the README was very valuable and I wouldn't mind seeing it added back. I probably never would have gotten started with Backup without this section. I wasn't familiar with "bundle exec guard" previously, so I had to look at the history of this file to figure out how to run the tests again.

Cool =)

-Install [RVM]( and use it to install Ruby 1.9.3, 1.9.2 and 1.8.7.
- rvm get latest && rvm reload
- rvm install 1.9.3 && rvm install 1.9.2 && rvm install 1.8.7
-Once these are installed, go ahead and install all the necessary dependencies.
- cd backup
- rvm use 1.9.3 && gem install bundler && bundle install
- rvm use 1.9.2 && gem install bundler && bundle install
- rvm use 1.8.7 && gem install bundler && bundle install
-The Backup gem uses [Guard]( along with [Guard::RSpec]( to quickly and easily test Backup's code against all four Rubies. If you've done the above, all you have to do is run:
- bundle exec guard
-from Backup's root and that's it. It'll now test against all Ruby versions each time you adjust a file in the `lib` or `spec` directories.
-### Or contribute by writing blogs/tutorials/use cases
+**Copyright (c) 2009-2013 [Michael van Rooijen]( ( [@meskyanichi](!/meskyanichi) )**
+Released under the **MIT** [License](
