delete_by_query can take quite some time, so the recommendation is to either increase the timeout by calling .params(request_timeout=3600) (or any value higher than the 10-second default) on the search before the delete, or pass .params(wait_for_completion=False) so the API returns immediately instead of blocking until the delete finishes.
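A minimal sketch of the two workarounds. The keyword names (request_timeout, wait_for_completion) are the actual parameters forwarded by elasticsearch-dsl's .params(); the helper function itself is just for illustration:

```python
def timeout_params(blocking: bool) -> dict:
    """Return the .params(...) kwargs for a long-running delete_by_query."""
    if blocking:
        # Option 1: keep blocking, but allow up to an hour
        # instead of the 10-second default read timeout.
        return {"request_timeout": 3600}
    # Option 2: don't block; the API responds immediately with a task id.
    return {"wait_for_completion": False}

# Usage with elasticsearch-dsl (requires a running cluster; index and
# query are placeholders):
# s = Search(index="my-index").query("term", status="stale")
# s.params(**timeout_params(blocking=False)).delete()
```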
It should, but there can always be errors if something unexpected happens. You will, however, get a task id back from the initial API call, which you can use later (via the low-level client) to query the job's status and results through the tasks API [0].
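A sketch of that pattern, assuming the low-level elasticsearch-py client: delete_by_query with wait_for_completion=False returns a response containing a "task" id, which can then be polled through es.tasks.get. The poll_task helper and the polling interval are illustrative, not part of the library:

```python
import time

def poll_task(es, task_id, interval=30):
    """Poll the tasks API until the background delete_by_query finishes."""
    while True:
        status = es.tasks.get(task_id=task_id)
        if status.get("completed"):
            return status  # includes the task's progress/results
        time.sleep(interval)

# Usage (requires a running cluster; index and query are placeholders):
# es = Elasticsearch("http://127.0.0.1:9200")
# resp = es.delete_by_query(index="my-index",
#                           body={"query": {"match_all": {}}},
#                           wait_for_completion=False)
# result = poll_task(es, resp["task"])
```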
I often have to delete entries which can sometimes exceed 100,000 entries. Thus I have been using:
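The original snippet is not shown; a typical bulk delete along these lines would reproduce the problem (field names and values are placeholders). With over 100,000 matching documents, a blocking call easily exceeds the 10-second read timeout:

```python
def build_delete_query(field: str, value) -> dict:
    """Body sent to POST /<index>/_delete_by_query for a term-based delete."""
    return {"query": {"term": {field: value}}}

# Blocking usage with the low-level client (requires a running cluster):
# es.delete_by_query(index="my-index",
#                    body=build_delete_query("user_id", 42))
```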
But this causes timeout issues:
elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host='127.0.0.1', port=9200): Read timed out. (read timeout=10))
Is there perhaps a way to let it run in the background and then query for its status every now and then? Just wondering what the usual recommendation is.