urllib socket.timeout, no indication as to why, timeout=0.1 #730
Comments
This exception isn't thrown until my worker has been running for several hours, I should add. Per the doc suggestion, I'm creating a single instance within my worker and reusing it.
I've done more testing, and this issue actually appears to be (still) random, stemming from my use of sniffing. The setup I'm using is a single master/ingest combo and two data nodes. The master/ingest is a container, and the two data nodes are physical boxes. After removing sniffing, these exceptions disappear. I suppose we can consider this solved... but... 🤒
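For anyone wondering what "removing sniffing" amounts to in code: sniffing in the Python client is controlled entirely by constructor flags, so it is a one-line change. A minimal sketch, assuming elasticsearch-py 7.x and hypothetical host addresses:

```python
from elasticsearch import Elasticsearch

# Sniffing enabled: the client periodically rediscovers nodes from the cluster
# state, which is where the unexplained timeouts appeared to originate.
es_sniffing = Elasticsearch(
    ["10.0.0.1:9200"],              # hypothetical master/ingest address
    sniff_on_start=True,
    sniff_on_connection_fail=True,
    sniffer_timeout=60,             # seconds between automatic sniffs
)

# Sniffing disabled: the client only ever talks to the hosts listed explicitly.
es_static = Elasticsearch(["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"])
```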
I'm trying to recreate this and am having trouble doing so. This is weird behavior. I've been able to verify that sniffing works: I started 3 containers clustered together and took one node down; when I tried the request again I was able to successfully hit the cluster, this time through a different node, and could see that there were now 2 nodes in the cluster. Have you been able to verify that there have been no cluster issues during your ingestion process? (Maybe your master/ingest crashes/restarts itself.)
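As a concrete way to check that, cluster health and the set of known nodes can be polled from the same client during ingestion; a minimal sketch, assuming elasticsearch-py 7.x and a hypothetical localhost node:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# Ask the cluster whether anything has been unstable during ingestion;
# status, number_of_nodes and unassigned_shards are the interesting fields.
health = es.cluster.health()
print(health["status"], health["number_of_nodes"], health["unassigned_shards"])

# The nodes the cluster currently reports (useful for spotting restarts
# of the master/ingest container).
for node_id, info in es.nodes.info()["nodes"].items():
    print(node_id, info["name"], info.get("ip"))
```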
Another possible issue and strange behaviour with socket timeouts and Python exceptions: es.index(something, request_timeout=0.1). If I put this in a try/except-pass block, I get errors with the chain socket.timeout --> urllib3.exceptions.*, and execution after the except block stops. So where am I going wrong, or can a timeout not be caught in a try? I have no trouble catching other ES exceptions.
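If it helps: the client normally wraps socket.timeout / urllib3 read timeouts in its own exception types, so catching elasticsearch.exceptions.ConnectionTimeout (or the broader ConnectionError) is usually the way to handle this. A minimal sketch, assuming elasticsearch-py 7.x and a hypothetical index and document:

```python
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import ConnectionTimeout
from elasticsearch.exceptions import ConnectionError as ESConnectionError

es = Elasticsearch(["localhost:9200"])

try:
    # Deliberately tiny timeout to provoke the failure described above.
    es.index(index="my-index", body={"field": "value"}, request_timeout=0.1)
except ConnectionTimeout:
    # socket.timeout / urllib3 ReadTimeoutError end up wrapped in this type.
    print("request timed out; retry, back off, or skip")
except ESConnectionError as exc:
    # Any other transport-level connection failure.
    print(f"connection error: {exc}")
```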
I'm having the same issue and would love to find a way to catch this exception and make sure it's handled correctly.
I have the same issue as well; while on some clusters I never get the error, the majority of the ES clusters produce the symptoms.
We have also seen this, or at least a very similar, error. In our case it seems to be related to the fact that we are ingesting large documents, and the bulk request sometimes takes longer than we had configured for sniffer_timeout. Increasing sniffer_timeout resolved our problem. We are also running with sniff_on_connection_fail disabled, if that makes any difference.
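For anyone hitting the same interaction, the relevant knobs are all constructor arguments. A minimal sketch of that kind of configuration, assuming elasticsearch-py 7.x and hypothetical hosts and values:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["es-node1:9200", "es-node2:9200"],  # hypothetical hosts
    sniff_on_start=True,
    sniff_on_connection_fail=False,      # as in the setup described above
    sniffer_timeout=300,                 # generous interval between automatic sniffs,
                                         # so long-running bulk requests don't race it
    timeout=120,                         # default per-request timeout for large bulks
)
```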
This issue mentions a number of possibly related (or possibly unrelated) connection issues. Unfortunately, there is not enough information here to recreate the scenario locally. Given also that this issue has been inactive for over a year, I am going to close it. Please feel free to open a new issue if connection issues are still occurring; it would be most helpful if you provide as much detail as possible so that we can reproduce it. Also, unless you are certain that your issue is the same as the one posted, please refrain from "me too" posting and open a new issue with your specific problem.
For what it's worth, we're seeing this as well (older Elastic 7x, admittedly). |
I'm seeing an infrequent exception being raised with a strange timeout value, related to connection pooling. The exception below is the entire traceback, with zero reference to the code that I'm writing. The only redacted info is the host IP; nothing else has been altered.
My ES instance is being created as follows
When I'm inserting data, I'm using
Am I missing something painfully obvious here? It has been "one of those weekends" 😛
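The original snippets aren't reproduced above, so purely as a hypothetical reconstruction of the setup described in this thread (a single, reused client instance with sniffing enabled, assuming elasticsearch-py 7.x):

```python
from elasticsearch import Elasticsearch

# Hypothetical client setup matching the description: one instance created in the
# worker and reused, with sniffing enabled against the master/ingest node.
es = Elasticsearch(
    ["x.x.x.x:9200"],            # host IP redacted, as in the original report
    sniff_on_start=True,
    sniff_on_connection_fail=True,
)

# Hypothetical insert call; note that no 0.1 s timeout is set explicitly here,
# which is what makes the timeout=0.1 in the traceback surprising.
es.index(index="my-index", body={"field": "value"})
```

Given the later comments, the 0.1-second value in the traceback plausibly comes from the sniffing machinery rather than from the indexing call itself, though that is only a guess.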