
Write Consistency Level for index/delete/delete_by_query/bulk with one/quorum/all. Defaults to quorum. #444

Closed
kimchy opened this issue Oct 22, 2010 · 6 comments

Comments

@kimchy
Member

kimchy commented Oct 22, 2010

When performing a "write" operation, this allows controlling whether it can be performed only when one shard is active, when a quorum of shards is active (within a replication group), or when all are.

This can be controlled per API call (the REST parameter is consistency and can be set to one, quorum, or all). It defaults to the node-level setting action.write_consistency, which in turn defaults to quorum. This basically means that the default is quorum.

What does this mean? Basically, with 1 shard and 2 replicas, at least 2 shard copies (a quorum) will have to be active within the cluster for the operation to be performed.

With 1 shard and 1 replica, at least 1 shard copy will need to be active (in this case quorum and one are the same).
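The rule described above can be sketched as a small function. This is an illustrative sketch, not Elasticsearch's actual code; the function name is hypothetical, and the 1-or-2-copy special case is taken from the "quorum and one are the same" remark in this thread.

```python
def required_active_copies(total_copies: int, consistency: str) -> int:
    """How many shard copies must be active for a write to proceed.

    total_copies = primary shard + its replicas within one replication group.
    """
    if consistency == "one":
        return 1
    if consistency == "all":
        return total_copies
    if consistency == "quorum":
        # A strict majority, except that with only 1 or 2 copies a single
        # active copy suffices (so quorum degenerates to one).
        return total_copies // 2 + 1 if total_copies > 2 else 1
    raise ValueError(f"unknown consistency level: {consistency}")

# 1 shard + 2 replicas = 3 copies: quorum needs 2 active
print(required_active_copies(3, "quorum"))  # 2
# 1 shard + 1 replica = 2 copies: quorum needs only 1 active
print(required_active_copies(2, "quorum"))  # 1
```

With `all`, every copy must be active; with `one`, a single active copy (typically the primary) is enough.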

@kimchy
Member Author

kimchy commented Oct 22, 2010

Write Consistency Level for index/delete/delete_by_query/bulk with one/quorum/all. Defaults to quorum, closed by 5d1d927.

@clintongormley

What happens if a quorum isn't available - will the write hang or throw an error?

@kimchy
Member Author

kimchy commented Nov 5, 2010

It will wait for the provided timeout parameter for the expected consistency level to be met, and if it's not, it will bail with an exception.
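The wait-then-fail semantics kimchy describes can be sketched as follows. This is a minimal model of the behavior, not Elasticsearch code: `active_copies` is a hypothetical callable standing in for a cluster-state lookup, and the exception name is made up for illustration.

```python
import time

class WriteConsistencyError(Exception):
    """Raised when the consistency level is not met within the timeout."""

def await_consistency(active_copies, required: int, timeout: float,
                      poll_interval: float = 0.01) -> None:
    """Wait up to `timeout` seconds for `required` shard copies to be active.

    Returns normally once enough copies are active (the write may proceed);
    otherwise raises WriteConsistencyError when the timeout expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if active_copies() >= required:
            return  # consistency level met; perform the write
        time.sleep(poll_interval)
    raise WriteConsistencyError(
        f"timed out waiting for {required} active shard copies")
```

For example, `await_consistency(lambda: 2, required=2, timeout=0.1)` returns immediately, while `await_consistency(lambda: 1, required=2, timeout=0.1)` raises after the timeout elapses.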

@ppearcy
Contributor

ppearcy commented Nov 10, 2010

Hey,
This should never affect the consistency of search results, correct? Fundamentally, what does this setting get you? Maybe extra durability, in case of a node getting killed before snapshotting?

Not sure if this is completely on topic for this, but when a doc is indexed, it can never fail on one replica and succeed on another, correct? I ask, because I have been and still am seeing minor consistency issues when I test bringing nodes up and down on latest master.

Thanks!

@ppearcy
Contributor

ppearcy commented Nov 10, 2010

Err... nevermind about the second part... Everything has stayed consistent on 0.13-SNAPSHOT.

@kimchy
Member Author

kimchy commented Nov 10, 2010

No, it's just for writes; it basically lets you require that a specific number of shards be available in order for a write to succeed.

This issue was closed.