I am in the process of upgrading our cluster and client from ES 6.1.2 to ES 7.2 and have run into some confusion regarding timeout settings with the Python client.
I have reduced this down to a case where I create my client like this: `client = Elasticsearch([{'host': host, 'port': 9200}], timeout=60*60*5, max_retries=0, retry_on_timeout=False, sniff_on_start=True)`
Note that I am specifying a `timeout` value of 5 hours (probably too long for production, but this is for experimental purposes, to diagnose what's happening here).
I then make a query like this: `client.search(index=index, body=query, size=0)`
with no `request_timeout` setting (which should inherit the `timeout` value from the client), or alternatively with an explicit request timeout of 5 hours: `client.search(index=index, body=query, size=0, request_timeout=60 * 60 * 5)`
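My understanding of the fallback behavior described above can be sketched as follows (a hypothetical helper for illustration only, not the client's actual internals):

```python
DEFAULT = object()  # sentinel meaning "request_timeout not specified"

def resolve_timeout(request_timeout=DEFAULT, client_timeout=60 * 60 * 5):
    """Return the effective read timeout (in seconds) for one request.

    A per-request ``request_timeout`` overrides the client-level default;
    otherwise the ``timeout`` passed to the Elasticsearch() constructor
    should apply.
    """
    if request_timeout is DEFAULT:
        return client_timeout
    return request_timeout

print(resolve_timeout())                       # inherits the client default: 18000
print(resolve_timeout(request_timeout=18000))  # explicit per-request value: 18000
```

Either way, I expect the effective read timeout to be 18000 seconds.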
In either case, the query fails after ~45 seconds with a timeout exception: `ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'10.110.182.171', port=9200): Read timed out. (read timeout=18000))`
Note that the timeout value I specified is reflected in the error message.
If I've said the timeout should be 18000 seconds, why is it giving up after 45? How do I get my queries to wait as long as needed?
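While diagnosing this, the only workaround I've found is to retry at the application level when the timeout fires. A minimal sketch (using a stand-in for `elasticsearch.exceptions.ConnectionTimeout` so the snippet is self-contained; the helper name and structure are my own, not part of the client):

```python
class ConnectionTimeout(Exception):
    """Stand-in for elasticsearch.exceptions.ConnectionTimeout."""

def search_with_retries(do_search, max_attempts=3):
    """Call do_search(), retrying up to max_attempts times on timeout."""
    for attempt in range(1, max_attempts + 1):
        try:
            return do_search()
        except ConnectionTimeout:
            if attempt == max_attempts:
                raise  # give up after the last attempt

# Example: a search that times out twice, then succeeds on the third try.
calls = {"n": 0}
def flaky_search():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionTimeout("Read timed out.")
    return {"hits": {"total": 0}}

print(search_with_retries(flaky_search))  # {'hits': {'total': 0}}
```

But this only papers over the problem; I'd still like to know why the configured timeout isn't honored.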
This is on Python 2.7.
By removing the `request_timeout` parameter from the decorator, we no longer try to escape it, which was causing an issue with urllib3. More details: #1049 (comment)