Snapshot/Restore API - Phase I #3826
Comments
Assuming this supports alias names, you might want to mention that explicitly. Also assuming aliases are supported, you may want to mention whether an index restored with an alias automatically gets the alias. If it does, you may want to add an option to restore to prevent it from doing so. I could see this being a nasty surprise when people start complaining about results getting doubled.
Might want to mention that this doesn't mean that the snapshot will be just a list of diffs. It sounds like this will work similarly to file based replication so merges will cause big files.
Can I use this to restore my cluster to another cluster? This seems like a good opportunity for such a useful feature.
Is there a way to perform a full snapshot or merge all the incrementals into a single snapshot? I am thinking of replicating an existing system that has maybe 10+ snapshots already. Instead of restoring the 10+ incremental snapshots, doing a single restore would be much easier. Maybe allow specifying multiple snapshot names in a single call and they will be restored in order?
@nik9000 Thanks! Good points, I will clarify the docs. And yes, you can restore into a different cluster.

@mattweber I wasn't quite clear in the description and I will fix it in the next iteration. By incremental, I meant that each snapshot only copies files that were changed since the last snapshot. Each snapshot always points to a complete view of the snapshotted indices. However, if two snapshots share the same subset of files, they will point to the same physical files in the repository and these files will be copied only once. In order to restore the cluster to a particular state, you simply restore the corresponding snapshot. In other words, to restore the cluster to the state it was in during the 4th snapshot, you simply execute the restore command for that snapshot. It doesn't matter how many snapshots you created before it. Moreover, you can delete the intermediate snapshots (snap1, snap2, snap3), which will leave only the files referenced by snap4.
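Concretely, that restore call might look something like this (a sketch assuming a repository named `my_backup` and the `_restore` endpoint; the exact URL shape is an assumption here):

```sh
# Restore the cluster to the state captured by the 4th snapshot;
# earlier snapshots (snap1..snap3) do not need to be restored first.
curl -XPOST "localhost:9200/_snapshot/my_backup/snap4/_restore"
```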
@imotov ok great, that make sense. Can you clarify how we would replicate an existing cluster using this functionality? I imagine is it something like:
Thanks!
@mattweber yes, that's the idea, except step 3 is optional. You can restore from the same location where you backed up. There are no locks at the moment, so you need to make sure that you don't back up to the same location from two clusters at the same time, but you should be able to back up from one cluster and restore from another without any issues.
A note on compression: I would default it to false since, by default, we compress the index.
Thanks for starting the backup/restore effort, this is great news. In phase I, shared file system (NFS?) and S3 will be supported (the old deprecated gateways). What is planned for phase II? Maybe auto backup? Cross data center synchronization? If there is a plan for phase II, will it also be scheduled for 1.0?
@jprante Not sure yet. Most likely, these features will appear after 1.0. |
It would be nice if we could specify a retention policy when posting to the snapshot endpoint, e.g. one that deletes backups older than 30 days but keeps a minimum of 20 (possibly older than 30 days).
@Mpdreamz for now you will need to delete snapshots manually, but in future releases we might add a retention policy and automatic scheduled snapshots.
I find the interface for initiating a snapshot vs initiating a restore a little troubling: the only difference is the HTTP verb. It seems very problematic that PUT will initiate the snapshot and POST will initiate the restore. This is waiting for someone to screw them up (and potentially cause a catastrophe that might be difficult to recover from). A better approach might be to use an explicit path parameter to initiate a restore, since it is the destructive operation.
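Such an explicit path parameter could look like the following (a hypothetical sketch of the suggestion; the commenter's original example was not preserved in this thread):

```sh
# Snapshot: PUT to the snapshot URL
curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1"

# Restore: POST to an explicit _restore sub-resource,
# so the destructive operation cannot be triggered by a verb mix-up
curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"
```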
nice API, thanks |
Hi, just to report a small typo: the docs say the keyword to use to list all repositories is `_all`, but the example curl line uses `all`, which fails with an error. Using `_all` works fine.

Cheers,
Another comment: I think the attributes `duration` and `duration_in_millis` contain each other's values and should be interchanged. Look at the values returned by `curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_2?wait_for_completion=true&pretty"`.
@sebaes do you mind opening issues for each of your findings - both of them are valid! |
Hi, I am trying to make a snapshot of my disk in Solaris and send it to another server, but it is giving me an error. Could you help me please? `root@srvdth03 # zfs receive -v rpool/vdisk/vdisk-hdd0@snap < /rpool/snaps/vdisk-hdd0.snap2`
@keziacp you are performing a ZFS snapshot, not ES snapshot. |
Snapshot And Restore
The snapshot and restore module allows creating snapshots of individual indices or of an entire cluster into a remote repository, and restoring these indices back to the same or a different cluster afterwards. Phase I will only support the shared file system repository and the S3 repository.
Repositories
Before any snapshot or restore operation can be performed, a snapshot repository should be registered in Elasticsearch. The following command registers a shared file system repository with the name `my_backup` that will use the location `/mount/backups/my_backup` to store snapshots.

Once a repository is registered, its information can be obtained using the following command:
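A sketch of both requests (the original code blocks were lost in this copy; the settings shown are the ones documented in the Shared File System Repository section below):

```sh
# Register a shared file system repository named my_backup
curl -XPUT "localhost:9200/_snapshot/my_backup" -d '{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup"
  }
}'

# Retrieve information about the registered repository
curl -XGET "localhost:9200/_snapshot/my_backup"
```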
If a repository name is not specified, or `_all` is used as the repository name, Elasticsearch will return information about all repositories currently registered in the cluster:
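A sketch of the two equivalent forms described above (the original examples were lost in this copy):

```sh
# No repository name specified
curl -XGET "localhost:9200/_snapshot"
# or _all as the repository name
curl -XGET "localhost:9200/_snapshot/_all"
```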
Shared File System Repository

The shared file system repository (`"type": "fs"`) uses a shared file system to store snapshots. The path specified in the `location` parameter should point to the same location in the shared filesystem and be accessible on all data and master nodes. The following settings are supported:

- `location` - Location of the snapshots. Mandatory.
- `compress` - Turns on compression of the snapshot files. Defaults to `true`.
- `concurrent_streams` - Throttles the number of streams (per node) performing the snapshot operation. Defaults to `5`.
- `chunk_size` - Big files can be broken down into chunks during snapshotting if needed. Defaults to unlimited.

Snapshot
A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following command:
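A sketch of the request (the original example was lost in this copy; `wait_for_completion` is the parameter explained next):

```sh
curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
```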
The `wait_for_completion` parameter specifies whether the request should return immediately or wait for snapshot completion. By default, a snapshot of all open and started indices in the cluster is created. This behavior can be changed by specifying the list of indices in the body of the snapshot request.

The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which supports multi-index syntax. The snapshot request also supports the `ignore_indices` option. Setting it to `missing` will cause indices that do not exist to be ignored during snapshot creation. By default, when the `ignore_indices` option is not set and an index is missing, the snapshot request will fail.

The index snapshot process is incremental. In the process of making the index snapshot, Elasticsearch analyses the list of the index files that are already stored in the repository and copies only files that were created or changed since the last snapshot. This allows multiple snapshots to be preserved in the repository in a compact form. The snapshotting process is executed in a non-blocking fashion. All indexing and searching operations can continue to be executed against the index that is being snapshotted. However, a snapshot represents the point-in-time view of the index at the moment the snapshot was created, so no records added to the index after the snapshot process started will be present in the snapshot.
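For example, a snapshot restricted to specific indices might be requested like this (a sketch using the `indices` and `ignore_indices` options described above; the index names are placeholders):

```sh
curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1" -d '{
  "indices": "index_1,index_2",
  "ignore_indices": "missing"
}'
```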
Besides creating a copy of each index, the snapshot process can also store global cluster metadata, which includes persistent cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of the snapshot.
Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being created, that shard cannot be moved to another node, which can interfere with the rebalancing process and allocation filtering. Once the snapshot of the shard is finished, Elasticsearch will be able to move the shard to another node according to the current allocation filtering settings and rebalancing algorithm.
Once a snapshot is created, information about this snapshot can be obtained using the following command:
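A sketch of the request, following the `_snapshot/<repository>/<snapshot>` pattern used above (the original example was lost in this copy):

```sh
curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1"
```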
All snapshots currently stored in the repository can be listed using the following command:
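Presumably this uses `_all` in place of the snapshot name, mirroring the repository listing above (an assumption, since the original example was lost in this copy):

```sh
curl -XGET "localhost:9200/_snapshot/my_backup/_all"
```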
A snapshot can be deleted from the repository using the following command:
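A sketch of the delete request (the original example was lost in this copy):

```sh
curl -XDELETE "localhost:9200/_snapshot/my_backup/snapshot_1"
```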
When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being created, the snapshotting process will be aborted and all files created as part of it will be cleaned up. Therefore, the delete snapshot operation can be used to cancel long-running snapshot operations that were started by mistake.
Restore
A snapshot can be restored using the following command:
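A sketch of the restore request (assuming the `_restore` endpoint; note that POST, not PUT, initiates the restore):

```sh
curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"
```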
By default, all indices in the snapshot as well as the cluster state are restored. It's possible to select the indices that should be restored, as well as to prevent the global cluster state from being restored, by using the `indices` and `restore_global_state` options in the restore request body. The list of indices supports multi-index syntax. The `rename_pattern` and `rename_replacement` options can also be used to rename indices on restore using a regular expression that supports referencing the original text as explained here.

The restore operation can be performed on a functioning cluster. However, an existing index can only be restored if it's closed. The restore operation automatically opens restored indices if they were closed and creates new indices if they didn't exist in the cluster. If the cluster state is restored, restored templates that don't currently exist in the cluster are added and existing templates with the same name are replaced by the restored templates. The restored persistent settings are added to the existing persistent settings.
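The rename options behave like a regex search-and-replace over each restored index name. A minimal Python sketch of those semantics (this is an illustration, not the Elasticsearch implementation; the pattern and replacement values are made-up examples, and Python's `re` uses `\1` for group references where Elasticsearch's replacement syntax may differ):

```python
import re

# Hypothetical example values for the two restore options
rename_pattern = r"index_(.+)"
rename_replacement = r"restored_index_\1"

def rename_on_restore(index_name: str) -> str:
    """Rewrite an index name the way a regex-based restore rename would:
    names matching rename_pattern are rewritten using rename_replacement,
    which may reference capture groups; non-matching names are unchanged."""
    return re.sub(rename_pattern, rename_replacement, index_name)

print(rename_on_restore("index_1"))   # restored_index_1
print(rename_on_restore("kibana"))    # kibana (no match, unchanged)
```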