snapshot should work when cluster is in read_only mode. #8102
Comments
@webmstr Snapshots are still moment-in-time while updates are happening. You don't need to lock anything. A snapshot will only back up the state of the index at the point that the backup starts; it won't take any later changes into account.
As I mentioned, snapshots - as currently implemented - are an unreasonable method of performing a consistent backup prior to an upgrade. This enhancement would have allowed that option. Without the enhancement, snapshots should not be used before an upgrade, because the indexes may have changed while the snapshot was running. As such, the upgrade documentation should be changed to not propose the use of snapshots as backups, and a "full" backup procedure should be documented in its place.
Out of interest, why don't you just stop writing to your cluster? Reopening for discussion.
@imotov what are your thoughts?
I could turn off logstash, but that's just one potential client. Someone could be curl'ing, or using an ES plugin (like head), etc. If you need a consistent backup, you have to disconnect and lock out the clients from the server side.
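For the server-side lockout described above, one sketch looks like the following. It uses the standard transient cluster settings API and the cluster.blocks.read_only setting; the host address is an assumption, and this is illustrative rather than a recommended backup procedure.

```shell
#!/bin/sh
# Illustrative sketch: block writes from the server side around a backup
# window, so no client (logstash, curl, plugins) can modify the indices.
ES_HOST="localhost:9200"  # assumption: local single-node cluster

enable='{"transient":{"cluster.blocks.read_only":true}}'
disable='{"transient":{"cluster.blocks.read_only":false}}'

# Enable the cluster-wide read-only block.
curl -s -XPUT "http://$ES_HOST/_cluster/settings" -d "$enable"

# ... perform the backup here ...

# Lift the block once the backup is done.
curl -s -XPUT "http://$ES_HOST/_cluster/settings" -d "$disable"
```

Note that this is exactly the situation the issue describes: with the block enabled, the snapshot API itself is rejected, which is why a filesystem-level backup is the workaround in the meantime.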
@clintongormley see #5876 I think this one is similar.
@clintongormley Actually, I discovered that the EDIT: I may have a theory on why the snapshots were taking so long... I was taking a snapshot every two hours, and the S3 bucket has a LOT of snapshots now (49). I'm thinking that the calls the ES AWS plugin makes to the S3 endpoint slow down over time as the number of snapshots increases. Or maybe it's just the number of snapshots that's causing the slowness, i.e. regardless of whether the backend repository is S3 or fs? I guess I should have an additional cron job that deletes older snapshots. Is there a good rule of thumb on the number of snapshots to retain?
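On retention: there is no fixed rule of thumb to cite, but pruning old snapshots with the snapshot API keeps the repository listing small. A hypothetical cleanup sketch follows; the repository name, the retention count of 48, and the helper function are all inventions for illustration.

```shell
#!/bin/sh
# select_deletable: read snapshot names (oldest first) on stdin and print
# the ones that fall outside the newest $1 entries. Purely illustrative.
select_deletable() {
  keep=$1
  names=$(sed '/^$/d')   # read stdin, dropping blank lines
  total=$(printf '%s\n' "$names" | sed '/^$/d' | wc -l)
  extra=$((total - keep))
  if [ "$extra" -gt 0 ]; then
    printf '%s\n' "$names" | head -n "$extra"
  fi
}

# Usage sketch against a repository (host, repo name, and jq usage are
# assumptions; the _all listing and DELETE call are the standard API):
#   curl -s "http://localhost:9200/_snapshot/my_s3_repo/_all" \
#     | jq -r '.snapshots[].snapshot' \
#     | select_deletable 48 \
#     | while read -r s; do
#         curl -s -XDELETE "http://localhost:9200/_snapshot/my_s3_repo/$s"
#       done
```

One caveat worth flagging: this assumes the listing comes back oldest first, which should be verified against the repository in use before deleting anything.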
@imotov we discussed this issue but were unclear on what the differences between the index.blocks.* options are, and why the snapshot fails with read_only set to false?
@colings86 there is an ongoing effort to resolve this issue in #9203
This commit splits the current ClusterBlockLevel.METADATA into two distinct ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows a distinction to be made between an operation that modifies the index or cluster metadata and an operation that does not change any metadata. Before this commit, many operations were blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when the cluster or the index is read-only. Related to elastic#8102, elastic#2833 Closes elastic#3703 Closes elastic#5855
This commit splits the current ClusterBlockLevel.METADATA into two distinct ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows a distinction to be made between an operation that modifies the index or cluster metadata and an operation that does not change any metadata. Before this commit, many operations were blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when the cluster or the index is read-only. Related to #8102 Closes #3703 Closes #5855 Closes #10521 Closes #10522 Closes #2833
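The read/write split can be illustrated with a toy classifier. The operation names are taken from the commit message above; the function and its mapping are illustrative, not the actual Java ClusterBlockLevel API.

```shell
#!/bin/sh
# Toy model of the METADATA_READ / METADATA_WRITE split: operations that
# only read metadata stay allowed under a read-only block, while operations
# that change metadata remain blocked.
block_level() {
  case "$1" in
    "cluster stats"|"get mappings"|"get snapshot"|"get index settings")
      echo "METADATA_READ" ;;    # allowed when cluster is read-only
    "put mapping"|"create index"|"update settings"|"delete index")
      echo "METADATA_WRITE" ;;   # still blocked when cluster is read-only
    *)
      echo "UNKNOWN" ;;
  esac
}

block_level "get snapshot"   # prints METADATA_READ
block_level "create index"   # prints METADATA_WRITE
```

The design point is that a single METADATA level forced read-only clusters to reject harmless inspection calls; splitting the level lets the block apply only to the mutating half.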
After discussing this with @tlrx it looks like the best way to address this issue is by moving snapshot and restore cluster state elements from cluster metadata to a custom cluster element where it seems to belong (since information about currently running snapshot and restore hardly qualifies as metadata).
…rom custom metadata to custom cluster state part Information about in-progress snapshot and restore processes is not really metadata and should be represented as a part of the cluster state, similar to discovery nodes, routing table, and cluster blocks. Since in-progress snapshot and restore information is no longer part of metadata, this refactoring also enables us to handle cluster blocks in a more consistent manner and allows creation of snapshots of a read-only cluster. Closes elastic#8102
I was trying to make a full, consistent backup before an upgrade. Snapshots are point-in-time, which doesn't work if clients are still updating your indexes.
I tried putting the cluster into read_only mode by setting cluster.blocks.read_only: true, but running a snapshot returned this error:
Please consider allowing snapshots to provide a consistent backup by running when in read-only mode.
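A sketch of the workflow this request would enable. The repository and snapshot names are placeholders (the repository is assumed to be registered already); the settings and snapshot calls are the standard REST API, and step 2 is the call that this issue reports as failing while the block is in place.

```shell
#!/bin/sh
# Sketch of the requested pre-upgrade backup flow. Assumes a repository
# named "my_backup" has already been registered; names are placeholders.
ES_HOST="localhost:9200"
REPO="my_backup"
SNAP="pre_upgrade_$(date +%Y%m%d)"

# 1. Block all writes cluster-wide so the data can no longer change.
curl -s -XPUT "http://$ES_HOST/_cluster/settings" \
  -d '{"transient":{"cluster.blocks.read_only":true}}'

# 2. Take a snapshot of the now-quiescent cluster. This is the step that
#    is currently rejected while the read_only block is set.
curl -s -XPUT "http://$ES_HOST/_snapshot/$REPO/$SNAP?wait_for_completion=true"

# 3. Lift the block once the snapshot completes.
curl -s -XPUT "http://$ES_HOST/_cluster/settings" \
  -d '{"transient":{"cluster.blocks.read_only":false}}'
```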