I/O timeout between logstash-forwarder and logstash server #360
Comments
A timeout while connecting generally means something is blocking the TCP SYN packets, or at least disrupting the TCP handshake. Usually that's a firewall problem.
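To test the firewall theory, you can attempt the handshake yourself with a short timeout. A minimal Python sketch (the hostname and port below are placeholders, not values from this thread's configs):

```python
import socket

def can_connect(host, port, timeout_seconds=3.0):
    """Return True only if the full TCP handshake completes in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:  # covers connection refused as well as socket.timeout
        return False

# e.g. can_connect("logstash.example.com", 5043)
```

If this returns False with a long delay you are likely seeing dropped SYNs (firewall); an immediate False usually means the port is closed but reachable.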
No. This also doesn't seem related to your connection timeout problem.
I'm running into this problem with
I tried patching the lumberjack/server.rb and that helped a bit, but I'm really unsure how to move forward. Rolling back to 0.3.1 also had problems.
@davedash can you attach your config and your command line for logstash-forwarder?
Truncated: https://gist.github.com/davedash/ecb9c520e10d79115287 Here's the command line:
Same here, using 0.4.0 and 1.5.0rc2: the first 4 batches get shipped, then it starts to time out. I've set it up so only one server is running logstash-forwarder at the moment, so it can't be that the logstash server is overloaded.
Make sure your error is definitely "read error looking for ack: i/o timeout", as there are many variations with different causes. I'll assume you are getting this. The timeout happens if Logstash is unable to fully process the batch before the forwarder gives up waiting for the ack.
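To illustrate what produces that exact message: the forwarder sends a batch and then blocks reading the ack frame with a deadline. A rough Python sketch of that read (the 6-byte size matches the lumberjack ack frame, '2' + 'A' + a 4-byte sequence number, but treat the details as an approximation of the forwarder's actual Go code, not a copy of it):

```python
import socket

def read_ack(sock, timeout_seconds):
    """Block waiting for an ack frame; raise socket.timeout if none arrives."""
    sock.settimeout(timeout_seconds)
    return sock.recv(6)  # '2' + 'A' + 4-byte sequence number

# A listener that accepts connections but never acks, simulating a
# Logstash instance whose pipeline has stalled.
stalled = socket.socket()
stalled.bind(("127.0.0.1", 0))
stalled.listen(1)

client = socket.create_connection(stalled.getsockname())
try:
    read_ack(client, timeout_seconds=0.5)
except socket.timeout:
    print("read error looking for ack: i/o timeout")
```

The point is that the TCP connection itself is healthy here; the timeout fires because nothing downstream ever acknowledges the batch.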
Yes, it definitely is that error. Do you have any guidance on those settings, as I'm having a hard time finding their definitions.
OK, I seem to have got it fixed. I checked my Elasticsearch (which I output to) and it was running a rather old version (I was using the elasticsearch .deb repository, which I guess is not maintained anymore :D). After upgrading it, everything looks to be working well. So I guess there was some issue with the old version that made Logstash buffer the queue until it died. If anyone stumbles on the same problem: run everything on the latest version (as you always should).
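Following up on the "run the latest version" advice: Elasticsearch reports its version in the JSON it serves at GET / on port 9200, so a deploy script can check it before shipping logs. A sketch of the comparison (the minimum version below is an arbitrary placeholder, not an official compatibility bound):

```python
import json

MINIMUM = (1, 4, 4)  # placeholder floor; consult the real compatibility matrix

def es_version_ok(info_json, minimum=MINIMUM):
    """Parse the JSON from GET / on Elasticsearch and compare version numbers."""
    number = json.loads(info_json)["version"]["number"]
    release = number.split("-")[0]               # drop suffixes like "-rc2"
    parts = tuple(int(p) for p in release.split("."))
    return parts >= minimum
```

Feed it the response body from `curl http://localhost:9200/` and it returns False for clusters older than the floor.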
I had the same problem when following the instructions at: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-centos-7 Logstash would hang and stop responding, resulting in the errors. It would then fail to gracefully shut down, holding open the port. Killing the java process and starting it again would bring it back for a while. To resolve it, I did the following:
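If you end up scripting the kill-and-restart workaround described above, it helps to verify that the dead Logstash has actually released its listen port before starting a new one. A bind attempt is a simple probe (a sketch; whether your deployment listens on 5043 or elsewhere is an assumption):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind the port, i.e. no process is holding it."""
    with socket.socket() as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# e.g. loop on port_is_free(5043) after kill, before restarting Logstash
```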
I had the same issue and found it was caused by my logstash server's Java defaulting to the IPv6 stack. The IPv6 connections weren't being closed properly and got stuck in CLOSE_WAIT. I had to add -Djava.net.preferIPv4Stack=true to the start() function in /etc/init.d/logstash:

    LS_JAVA_OPTS="${LS_JAVA_OPTS} -Djava.io.tmpdir=${LS_HOME} -Djava.net.preferIPv4Stack=true"
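For what it's worth, that flag makes the JVM use the IPv4 stack exclusively. In other runtimes you can get a similar effect by filtering getaddrinfo() results down to IPv4 before dialing; a Python sketch (the hostname in the comment is a placeholder):

```python
import socket

def ipv4_only(addrinfos):
    """Keep only AF_INET results, roughly what preferIPv4Stack=true enforces."""
    return [ai for ai in addrinfos if ai[0] == socket.AF_INET]

# Typical use: ipv4_only(socket.getaddrinfo("logstash.example.com", 5043))
```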
Hi,
I have a strange issue wherein my logstash-forwarder connects to the logstash server only after I reboot the instance running the forwarder. When I run the Docker container the first time, I just get the following:
Failure connecting to x.x.x.x dial tcp x.x.x.x:5043: i/o timeout
Then, after I reboot and run the same container again, it connects and can ship logs smoothly.
But I can't restart my production servers this easily, so I am stuck here. I use self-signed SSL keys that support IP addresses (to prevent the "no IP SAN" issue), and everything works fine after a reboot.
But rebooting is not a permanent fix, and it can't be done on dedicated production machines.
Is this anything related to the firewall or iptables? Please let me know your suggestions or insights. Is it possible to avoid SSL authentication? I love logstash-forwarder, and this is the only issue I need to fix to get it working on my production servers.