# Dumpdb

Dump and restore your databases.
```ruby
require 'dumpdb'

class MysqlFullRestore
  include Dumpdb

  dump_file { "dump.bz2" }

  source do
    { :host        => 'production.example.com',
      :port        => 1234,
      :user        => 'admin',
      :pw          => 'secret',
      :db          => 'myapp_db',
      :output_root => '/some/source/dir'
    }
  end

  target do
    { :host        => 'localhost',
      :user        => 'admin',
      :db          => 'myapp_db',
      :output_root => '/some/target/dir'
    }
  end

  dump { "mysqldump -u :user -p\":pw\" :db | bzip2 > :dump_file" }

  restore { "mysqladmin -u :user -p\":pw\" -f -b DROP :db; true" }
  restore { "mysqladmin -u :user -p\":pw\" -f CREATE :db" }
  restore { "bunzip2 -c :dump_file | mysql -u :user -p\":pw\" :db" }
end
```
Dumpdb provides a framework for scripting database backups and restores. You configure your source and target db settings. You define the set of commands needed for your script to dump the (local or remote) source database and optionally restore the dump to the (local) target database.
Once you have created an instance of your script with its database settings, you can run it:

```ruby
MysqlFullRestore.new.run
```
Dumpdb runs the dump commands using source settings and runs the restore commands using target settings. By default, Dumpdb assumes both the dump and restore commands are to be run on the local system.
Dumpdb supports defining callbacks for your script. These get fired as the script is being run.
```ruby
class MysqlFullRestore
  include Dumpdb

  # ...

  def after_dump
    # this will be called after the dump commands have been run
  end
end
```
Available callbacks:

* `{before|after}_run` - called before/after any commands have been executed
* `{before|after}_setup` - called before/after the runner sets up the script run
* `{before|after}_dump` - called before/after the dump cmds are executed
* `{before|after}_copy_dump` - called before/after the dump file is copied from source to target
* `{before|after}_restore` - called before/after the restore cmds are executed
* `{before|after}_teardown` - called before/after the runner tears down the script run
* `{before|after}_cmd_run` - called before/after each cmd is run; passes the cmd obj being run

Phases occur in this order: setup, dump, copy_dump, restore, teardown.
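As a rough sketch of that ordering (a hypothetical stand-in runner, not dumpdb's actual implementation), a script that defines only some of the hooks would see them fire like this:

```ruby
# Hypothetical stand-in illustrating the phase/callback order above;
# dumpdb's real runner does much more -- this only fires the hooks.
PHASES = [:setup, :dump, :copy_dump, :restore, :teardown]

class LoggingScript
  attr_reader :log

  def initialize
    @log = []
  end

  # define only the hooks we care about; undefined hooks are skipped
  def before_run;  @log << :before_run;  end
  def after_dump;  @log << :after_dump;  end
  def after_run;   @log << :after_run;   end

  def run
    before_run
    PHASES.each do |phase|
      ["before_#{phase}", "after_#{phase}"].each do |hook|
        send(hook) if respond_to?(hook)
      end
    end
    after_run
  end
end

LoggingScript.new.tap(&:run).log # => [:before_run, :after_dump, :after_run]
```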
To run your dump commands on a remote server, specify the optional `ssh` setting:
```ruby
class MysqlFullRestore
  include Dumpdb

  ssh { 'user@host' }

  # ...
end
```
This tells Dumpdb to run the dump commands using ssh on a remote host and to download the dump file using sftp.
Every Dumpdb script assumes there are two types of commands involved: dump commands that run using source settings and restore commands that run using target settings. The dump commands should produce a single "dump file" (typically a compressed file or tar). The restore commands restore the local db from the dump file.
You specify the name of the dump file using the `dump_file` setting:

```ruby
# ...

dump_file { "dump.bz2" }

# ...
```
This tells Dumpdb which file the dump generates and the restore uses: the dump commands should produce it and the restore commands should read from it.
Dump commands are system commands that should produce the dump file:

```ruby
# ...

dump { "mysqldump -u :user -p\":pw\" :db | bzip2 > :dump_file" }

# ...
```
Restore commands are system commands that should restore the local db from the dump file:

```ruby
# ...

restore { "mysqladmin -u :user -p\":pw\" -f -b DROP :db; true" }   # drop the local db, whether it exists or not
restore { "mysqladmin -u :user -p\":pw\" -f CREATE :db" }          # recreate the local db
restore { "bunzip2 -c :dump_file | mysql -u :user -p\":pw\" :db" } # unzip the dump file and apply it to the db

# ...
```
Dump and restore commands are templated. You define the command with placeholders and appropriate setting values are substituted in when the script is run.
Command placeholders should correspond with keys in the source or target settings. Dump commands use the source settings and restore commands use the target settings.
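The idea can be sketched in a few lines (the `fill` helper here is hypothetical; dumpdb's actual substitution logic may differ):

```ruby
# Minimal sketch of templated command substitution: each :key
# placeholder is replaced with the matching settings value, and
# unknown placeholders are left as-is.
def fill(template, settings)
  template.gsub(/:(\w+)/) { settings.fetch($1.to_sym, ":#{$1}") }
end

settings = { :user => 'admin', :pw => 'secret', :db => 'myapp_db',
             :dump_file => '/tmp/dump.bz2' }

fill("mysqldump -u :user -p\":pw\" :db | bzip2 > :dump_file", settings)
# => "mysqldump -u admin -p\"secret\" myapp_db | bzip2 > /tmp/dump.bz2"
```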
There are two special placeholders that are added to the source and target settings automatically:
* `:output_dir` - the dir the dump file is written to or read from (depending on whether dumping or restoring). This is generated by the script instance. By default, no specific root value is used - pass an `:output_root` value in the source and target settings to specify one.
* `:dump_file` - the path of the dump file - uses the `:output_dir` setting
You should at least use the `:dump_file` placeholder in your dump and restore commands to ensure proper dump handling and usage.

```ruby
dump_file { "dump.bz2" }

dump { "mysqldump :db | bzip2 > :dump_file" }

restore { "bunzip2 -c :dump_file | mysql :db" }
```
A Dumpdb script needs to be told about its source and target settings. You tell it these when you define your script:
```ruby
class MysqlFullRestore
  include Dumpdb

  source do
    { :user      => 'something',
      :pw        => 'secret',
      :db        => 'something_production',
      :something => 'else'
    }
  end

  target do
    { :user => 'root',
      :pw   => 'supersecret',
      :db   => 'something_development'
    }
  end

  # ...
end
```
Any settings keys can be used as command placeholders in dump and restore commands.
As you may have noticed, the script DSL settings methods all take a proc as their argument. This is because the procs are lazy-eval'd in the scope of the script instance. This allows you to use interpolation to help build commands with dynamic data.
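To see why this matters, here is a small standalone sketch (the `Script` class is hypothetical, not part of dumpdb): a proc written outside the class can still call instance methods once it is evaluated in the instance's scope with `instance_exec`:

```ruby
# Hypothetical example: the proc references db_name, which only exists
# on Script instances, so it cannot be resolved until it is evaluated
# in the scope of a particular instance.
class Script
  def initialize(env)
    @env = env
  end

  def db_name
    "myapp_#{@env}"
  end
end

db_setting = proc { db_name }                       # not evaluated yet
Script.new('production').instance_exec(&db_setting) # => "myapp_production"
```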
Take this example where you want your dump script to honor ignored tables.
```ruby
require 'dumpdb'

class MysqlIgnoredTablesRestore
  include Dumpdb

  # ...

  dump { "mysqldump -u :user -p\":pw\" :db #{ignored_tables} | bzip2 > :dump_file" }

  # ...

  def initialize(opts = {})
    opts[:ignored_tables] ||= []
    @opts = opts
  end

  def ignored_tables
    @opts[:ignored_tables].map{ |t| "--ignore-table=#{source.db}.#{t}" }.join(' ')
  end
end
```
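For a source db named `myapp_db` (a hypothetical value for illustration), the same expansion that `ignored_tables` performs produces mysqldump `--ignore-table` flags like so:

```ruby
# Standalone version of the ignored_tables expansion above, assuming
# a source db named 'myapp_db'.
flags = ['logs', 'sessions'].map { |t| "--ignore-table=myapp_db.#{t}" }.join(' ')
flags # => "--ignore-table=myapp_db.logs --ignore-table=myapp_db.sessions"
```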
See the `examples/` dir. (TODO)
## Installation

Add this line to your application's Gemfile:

```ruby
gem 'dumpdb'
```

And then execute:

```
$ bundle
```

Or install it yourself as:

```
$ gem install dumpdb
```
## Contributing

1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Added some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request