Allow retries for statuses other than 429 in streaming_bulk #1004
Comments
I was having the same problem, but then I set `client = Elasticsearch(hosts, retry_on_timeout=True)`. Maybe passing a

I have to say, adding an option to handle other codes with exponential backoff works for both those errors, but elasticsearch-py detects only one of them :(.
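The exponential-backoff idea mentioned above can be sketched generically. Everything below is hypothetical: `TransportishError` and `retry_with_backoff` are illustrative stand-ins that only mimic the shape of elasticsearch-py's `TransportError` (which carries a status code), not the library's actual API.

```python
import time

class TransportishError(Exception):
    """Stand-in for a transport error carrying an HTTP status code."""
    def __init__(self, status):
        super().__init__(f"status {status}")
        self.status = status

def retry_with_backoff(fn, retry_on_status=(429,), max_retries=3,
                       initial_backoff=0.1, backoff_factor=2.0,
                       sleep=time.sleep):
    """Call fn(), retrying with exponential backoff when it raises a
    TransportishError whose status is in retry_on_status."""
    delay = initial_backoff
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TransportishError as err:
            # Give up on non-retryable statuses or when retries are exhausted.
            if err.status not in retry_on_status or attempt == max_retries:
                raise
            sleep(delay)
            delay *= backoff_factor
```

The point of the `retry_on_status` tuple is exactly the request in this issue: the caller decides which statuses (e.g. 403 during a cluster block) are worth backing off and retrying, instead of hard-coding 429.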
Closes elastic#1004. This updates elastic#1005 to work for both the async and sync clients, and adds tests.
Please allow retries on other statuses as well, not just 429. For example, you could take an argument that defaults to `[429]`, or a callback to test the status or the error type.

Use case: sometimes the Elasticsearch cluster returns a 403 `cluster_block_exception`, for example during maintenance, and we want to retry only the failed items. Currently, with `raise_on_error=False` the errors are aggregated but without their data (because `_process_bulk_chunk` only adds the data when `raise_on_error=True`, or in the case of a `TransportError`), so we don't know which of the items failed. With `raise_on_error=True`, the bulk stops whenever it encounters an error, and you can't tell in which chunk the error was found or which items should be retried.

https://github.com/elastic/elasticsearch-py/blob/master/elasticsearch/helpers/actions.py
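The retry-only-the-failed-items workflow described above can be sketched as follows. This is an illustration, not the library's API: `fake_streaming_bulk`, `partition_failures`, and `RETRYABLE_STATUSES` are hypothetical names. The one assumption borrowed from the real helper is that `streaming_bulk` yields one `(ok, item)` result per action, in input order, which is what lets us zip results back onto the source documents.

```python
# Statuses the caller considers transient, e.g. 403 cluster_block_exception
# during maintenance, alongside the usual 429.
RETRYABLE_STATUSES = {429, 403}

def fake_streaming_bulk(actions, statuses):
    """Stand-in for a streaming_bulk-style helper: pretend action i came
    back with HTTP status statuses[i], yielding (ok, item) in order."""
    for action, status in zip(actions, statuses):
        ok = status < 300
        yield ok, {"index": {"status": status}}

def partition_failures(actions, results):
    """Split actions into (succeeded, retryable, fatal) using the per-item
    results, so only the retryable documents are re-queued."""
    succeeded, retryable, fatal = [], [], []
    for action, (ok, item) in zip(actions, results):
        status = item["index"]["status"]
        if ok:
            succeeded.append(action)
        elif status in RETRYABLE_STATUSES:
            retryable.append(action)  # re-queue this document on the next pass
        else:
            fatal.append(action)     # permanent error, surface it to the caller
    return succeeded, retryable, fatal
```

Because the source actions are kept on the caller's side and matched positionally, this sidesteps the problem that aggregated errors lack their data: the failed documents are recovered from the input list rather than from the error items.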