
Snapshot/Restore API - Phase I #3826

Closed
imotov opened this issue Oct 3, 2013 · 19 comments · Fixed by #3953

Comments

@imotov
Contributor

imotov commented Oct 3, 2013

Snapshot And Restore

The snapshot and restore module will allow creating snapshots of individual indices or an entire cluster in a remote repository and restoring these indices to the same or a different cluster afterwards. Phase I will only support shared file system and S3 repositories.

Repositories

Before any snapshot or restore operation can be performed, a snapshot repository should be registered in Elasticsearch. The following command registers a shared file system repository with the name my_backup that will use the location /mount/backups/my_backup to store snapshots.

$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/mount/backups/my_backup",
        "compress": true
    }
}'

Once a repository is registered, its information can be obtained using the following command:

$ curl -XGET 'http://localhost:9200/_snapshot/my_backup?pretty'
{
  "my_backup" : {
    "type" : "fs",
    "settings" : {
    "compress" : "true",
      "location" : "/mount/backups/my_backup"
    }
  }
}

If a repository name is not specified, or _all is used as the repository name, Elasticsearch will return information about all repositories currently registered in the cluster:

$ curl -XGET 'http://localhost:9200/_snapshot'

or

$ curl -XGET 'http://localhost:9200/_snapshot/_all'

Shared File System Repository

The shared file system repository ("type": "fs") uses a shared file system to store snapshots. The path specified in the location parameter should point to the same location in the shared file system and be accessible on all data and master nodes. The following settings are supported:

location - Location of the snapshots. Mandatory.
compress - Turns on compression of the snapshot files. Defaults to true.
concurrent_streams - Throttles the number of streams (per node) performing the snapshot operation. Defaults to 5.
chunk_size - Big files can be broken down into chunks during snapshotting if needed. Defaults to unlimited.
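As an illustration of what the chunk_size setting controls, here is a minimal Python sketch of fixed-size chunking. The helper name and the byte-string model are ours for illustration, not Elasticsearch code:

```python
def split_into_chunks(data, chunk_size):
    """Split a byte string into chunks of at most chunk_size bytes,
    modeling how a big snapshot file could be broken down."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# A 10-byte "file" with a 4-byte chunk size yields chunks of 4, 4 and 2 bytes.
chunks = split_into_chunks(b"x" * 10, 4)
print([len(c) for c in chunks])  # → [4, 4, 2]
```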

Snapshot

A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the cluster. A snapshot with the name snapshot_1 in the repository my_backup can be created by executing the following command:

$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

The wait_for_completion parameter specifies whether the request should return immediately or wait for snapshot completion. By default, a snapshot of all open and started indices in the cluster is created. This behavior can be changed by specifying the list of indices in the body of the snapshot request.

$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1" -d '{
    "indices": "index_1,index_2",
    "ignore_indices": "missing"
}'

The list of indices that should be included in the snapshot can be specified using the indices parameter, which supports multi-index syntax. The snapshot request also supports the ignore_indices option. Setting it to missing causes indices that do not exist to be ignored during snapshot creation. By default, when the ignore_indices option is not set and an index is missing, the snapshot request will fail.
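The ignore_indices behavior can be sketched in a few lines of Python; the function and variable names below are illustrative only, not Elasticsearch internals:

```python
def resolve_indices(requested, existing, ignore_indices=None):
    """Return the indices to snapshot, or raise if a requested index
    is missing and ignore_indices is not set to "missing"."""
    names = requested.split(",")
    missing = [n for n in names if n not in existing]
    if missing and ignore_indices != "missing":
        raise ValueError("missing indices: %s" % ",".join(missing))
    return [n for n in names if n in existing]

cluster = {"index_1", "index_2"}

# With "ignore_indices": "missing", absent indices are silently skipped.
print(resolve_indices("index_1,index_3", cluster, ignore_indices="missing"))
# Without it, a missing index fails the whole request with an error.
```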

The index snapshot process is incremental. While making an index snapshot, Elasticsearch analyses the list of index files already stored in the repository and copies only files that were created or changed since the last snapshot, which allows multiple snapshots to be preserved in the repository in a compact form. The snapshotting process is executed in a non-blocking fashion: all indexing and search operations can continue to be executed against the index that is being snapshotted. However, a snapshot represents a point-in-time view of the index at the moment the snapshot was created, so no records added to the index after the snapshot process started will be present in the snapshot.
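The incremental copy described above can be modeled roughly as follows; this is a hypothetical sketch with files modeled as name/checksum pairs, not actual Elasticsearch code:

```python
def incremental_copy(index_files, repository_files):
    """Copy into the repository only the files that are not already
    stored there. Both arguments map file name -> checksum."""
    to_copy = {name: checksum
               for name, checksum in index_files.items()
               if repository_files.get(name) != checksum}
    repository_files.update(to_copy)
    return to_copy

repo = {"seg_1": "aaa"}                   # left over from a previous snapshot
index = {"seg_1": "aaa", "seg_2": "bbb"}  # current index files
copied = incremental_copy(index, repo)
# Only seg_2 is copied; seg_1 is reused from the earlier snapshot.
print(copied)  # → {'seg_2': 'bbb'}
```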

Besides creating a copy of each index the snapshot process can also store global cluster metadata, which includes persistent cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of the snapshot.

Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being created, this shard cannot be moved to another node, which can interfere with the rebalancing process and allocation filtering. Once the snapshot of the shard is finished, Elasticsearch will be able to move the shard to another node according to the current allocation filtering settings and rebalancing algorithm.

Once a snapshot is created, information about it can be obtained using the following command:

$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1"

All snapshots currently stored in the repository can be listed using the following command:

$ curl -XGET "localhost:9200/_snapshot/my_backup/_all"

A snapshot can be deleted from the repository using the following command:

$ curl -XDELETE "localhost:9200/_snapshot/my_backup/snapshot_1"

When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted snapshot and not used by any other snapshot. If the delete snapshot operation is executed while the snapshot is being created, the snapshotting process will be aborted and all files created as part of it will be cleaned up. The delete snapshot operation can therefore be used to cancel long-running snapshot operations that were started by mistake.
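The cleanup rule, deleting only files no other snapshot still references, can be sketched like this (a hypothetical model, not the actual implementation):

```python
def delete_snapshot(snapshots, name):
    """snapshots maps snapshot name -> set of repository file names.
    Removes the named snapshot and returns the files that can actually
    be deleted from the repository (i.e. referenced by no other snapshot)."""
    victim = snapshots.pop(name)
    still_used = set().union(*snapshots.values()) if snapshots else set()
    return victim - still_used

snaps = {
    "snapshot_1": {"seg_1", "seg_2"},
    "snapshot_2": {"seg_1", "seg_3"},
}
removable = delete_snapshot(snaps, "snapshot_1")
# seg_2 can be removed; seg_1 is still referenced by snapshot_2.
print(sorted(removable))  # → ['seg_2']
```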

Restore

A snapshot can be restored using the following command:

$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"

By default, all indices in the snapshot, as well as the cluster state, are restored. It's possible to select which indices should be restored, and to prevent the global cluster state from being restored, by using the indices and restore_global_state options in the restore request body. The list of indices supports multi-index syntax. The rename_pattern and rename_replacement options can also be used to rename indices on restore using a regular expression that supports referencing the original text as explained here.

$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -d '{
    "indices": "index_1,index_2",
    "ignore_indices": "missing",
    "restore_global_state": false,
    "rename_pattern": "index_(.+)",
    "rename_replacement": "restored_index_$1"
}'
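The rename mechanics can be demonstrated with Python's re.sub standing in for the actual implementation; backreferences written as \1 in Python correspond to $1 in the request body:

```python
import re

# Illustrative only: the pattern captures everything after "index_" and
# the replacement reuses it via a group backreference.
rename_pattern = r"index_(.+)"
rename_replacement = r"restored_index_\1"

for name in ["index_1", "index_2"]:
    print(re.sub(rename_pattern, rename_replacement, name))
# restored_index_1
# restored_index_2
```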

The restore operation can be performed on a functioning cluster. However, an existing index can only be restored if it is closed. The restore operation automatically opens restored indices if they were closed and creates new indices if they didn't exist in the cluster. If the cluster state is restored, restored templates that don't currently exist in the cluster are added, and existing templates with the same name are replaced by the restored templates. Restored persistent settings are added to the existing persistent settings.

@nik9000
Member

nik9000 commented Oct 3, 2013

The list of indices that should be included into the snapshot can be specified using the indices parameter that supports multi index syntax.

Assuming this supports alias names you might want to mention that explicitly. Also assuming aliases are supported you may want to mention if restoring an index with an alias automatically gets the alias. If it does you may want to add an option to restore to prevent it from doing so. I could see this being a nasty surprise when people start complaining about results getting doubled.

The index snapshot process is incremental. In the process of making the index snapshot Elasticsearch analyses the list of the index files that are already stored in the repository and copies only files that were created or changed since the last snapshot.

Might want to mention that this doesn't mean that the snapshot will be just a list of diffs. It sounds like this will work similarly to file based replication so merges will cause big files.

The restore operation can be performed on a functioning cluster.

Can I use this to restore my cluster to another cluster? This seems like a good opportunity for such a useful feature.

@mattweber
Contributor

Is there a way to perform a full snapshot or merge all the incrementals into a single snapshot? I am thinking of replicating an existing system that has maybe 10+ snapshots already. Instead of restoring the 10+ incremental snapshots, doing a single restore would be much easier. Maybe allow specifying multiple snapshot names in a single call and they will be restored in order?

curl -XPOST "localhost:9200/_snapshot/my_backup/snap1,snap2,snap3,snap4"

@imotov
Contributor Author

imotov commented Oct 3, 2013

@nik9000 Thanks! Good points, I will clarify the docs. And yes, you can restore into a different cluster.

@mattweber I wasn't quite clear in the description and I will fix it in the next iteration. By incremental, I meant that each snapshot only copies files that were changed since the last snapshot. Each snapshot always points to a complete view of snapshotted indices. However, if two snapshots share the same subset of files they will point to the same physical files in the repository and these files will be copied only once.

In order to restore the cluster to a particular state, you simply restore the corresponding snapshot. In other words, to restore the cluster to the state it was in during the 4th snapshot, you simply execute

curl -XPOST "localhost:9200/_snapshot/my_backup/snap4"

It doesn't matter how many snapshots you created before it. Moreover, you can delete intermediate snapshots (snap1, snap2, snap3), which will leave only files referenced by snap4.

@mattweber
Contributor

@imotov ok great, that makes sense. Can you clarify how we would replicate an existing cluster using this functionality? I imagine it is something like:

  1. Start up new cluster
  2. Create your "my_backup" repository on the new cluster.
  3. Copy/Rsync contents of the "my_backup" repository from existing cluster to the new cluster's "my_backup" repository location
  4. Execute the restore of a snapshot.

Thanks!

@imotov
Contributor Author

imotov commented Oct 4, 2013

@mattweber yes, that's the idea, except step 3 is optional. You can restore from the same location that you back up to. There are no locks at the moment, so you need to make sure that you don't back up to the same location from two clusters at the same time, but you should be able to back up from one cluster and restore from another without any issues.

@kimchy
Member

kimchy commented Oct 5, 2013

a note on compression, I would default it to false since by default, we compress the index.

@jprante
Contributor

jprante commented Oct 6, 2013

Thanks for starting the backup/restore effort, this is great news. In phase I, shared file system (NFS?) and S3 will be supported (the old deprecated gateways). What is planned for phase II? Maybe auto backup? Cross data center synchronization? If there is a plan for phase II, will it also be scheduled for 1.0?

@imotov
Contributor Author

imotov commented Oct 6, 2013

@jprante Not sure yet. Most likely, these features will appear after 1.0.

@Mpdreamz
Member

It would be nice if we could specify a retention policy by posting to localhost:9200/_snapshot/_retention:

{
   "max_age": "30d",
   "min_number_of_backups": 20
}

The above would delete backups older than 30 days but keep a minimum of 20 (possibly older than 30 days).
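No such retention API exists; as a sketch of the suggested rule only, the selection logic might look like this in Python (all names are illustrative):

```python
from datetime import datetime, timedelta

def to_delete(backups, now, max_age, min_keep):
    """backups: list of (name, created_at) pairs, in any order.
    Delete backups older than max_age, but always keep at least
    min_keep of them, preferring the newest."""
    newest_first = sorted(backups, key=lambda b: b[1], reverse=True)
    keep = set(n for n, _ in newest_first[:min_keep])
    cutoff = now - max_age
    return [n for n, created in newest_first
            if n not in keep and created < cutoff]

now = datetime(2013, 10, 10)
# snap_0 .. snap_4, aged 0, 10, 20, 30 and 40 days respectively
backups = [("snap_%d" % i, now - timedelta(days=10 * i)) for i in range(5)]
# Keep the newest 2 regardless; delete anything older than 30 days.
print(to_delete(backups, now, timedelta(days=30), 2))  # → ['snap_4']
```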

@imotov
Contributor Author

imotov commented Oct 10, 2013

@Mpdreamz for now you will need to delete snapshots manually, but in the future releases we might add retention policy and automatic scheduled snapshots.

imotov added a commit to imotov/elasticsearch that referenced this issue Nov 4, 2013
@dspangen

dspangen commented Nov 6, 2013

I find the interface for initiating a snapshot vs initiating a restore a little troubling: the only difference is the HTTP verb. It seems very problematic that PUT will initiate the snapshot and POST will initiate the restore. This is waiting for someone to screw them up (and potentially cause a catastrophe that might be difficult to recover from).

A better approach might be to use an explicit path parameter to initiate a restore since it is the destructive operation:

$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"

imotov added a commit to imotov/elasticsearch that referenced this issue Nov 8, 2013
@imotov imotov closed this as completed in 510397a Nov 11, 2013
@thienchi

nice API, thanks

@sebaes

sebaes commented Jan 6, 2014

Hi,

Just to report a small typo, in the docs it says the keyword to use to list all repositories is "_all", but the example curl line:

curl -XGET 'http://localhost:9200/_snapshot/all'

uses "all", which fails with:
{"error":"RepositoryMissingException[[all] missing]","status":404}

This works fine:

curl -XGET 'http://localhost:9200/_snapshot/_all'

Cheers,
Sebastian.

@sebaes

sebaes commented Jan 6, 2014

Another comment: I think the attributes "duration" and "duration_in_millis" contain each other's values and should be interchanged; look at the values below:

curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_2?wait_for_completion=true&pretty"
{
  "snapshot" : {
    "snapshot" : "snapshot_2",
    "indices" : [ "......MY INDICES....." ],
    "state" : "SUCCESS",
    "start_time" : "2014-01-06T14:42:54.957Z",
    "start_time_in_millis" : 1389019374957,
    "end_time" : "2014-01-06T14:43:04.128Z",
    "end_time_in_millis" : 1389019384128,
    "duration" : 9171,
    "duration_in_millis" : "9.1s",
    "failures" : [ ],
    "shards" : {
      "total" : 65,
      "failed" : 0,
      "successful" : 65
    }
  }
}

@s1monw
Contributor

s1monw commented Jan 6, 2014

@sebaes do you mind opening issues for each of your findings - both of them are valid!

@imotov
Contributor Author

imotov commented Jan 7, 2014

@sebaes thanks for reporting. Fixed by 5d98341 and 2b49ec1.

@sebaes

sebaes commented Jan 9, 2014

Hi @s1monw and @imotov,
Sorry for not opening the issues sooner, I have just seen your comments with the fix already implemented.
Thanks!

@keziacp

keziacp commented Sep 30, 2014

Hi, I am trying to make a snapshot of my disk in Solaris and send it to another server, but it is giving me an error. Could you help me please?

root@srvdth03 # zfs receive -v rpool/vdisk/vdisk-hdd0@snap < /rpool/snaps/vdisk-hdd0.snap2
cannot receive: can not specify snapshot name for multi-snapshot stream

@jprante
Contributor

jprante commented Sep 30, 2014

@keziacp you are performing a ZFS snapshot, not ES snapshot.
