
SSL connection error with es230_l230_k450 #32

Closed
clamor opened this issue Apr 7, 2016 · 2 comments


clamor commented Apr 7, 2016

I followed the instructions at readthedocs to send log files with Filebeat. With the es221_l222_k442 image, logs are sent and processed:

using

filebeat -c /etc/filebeat/filebeat.yml -e -d "*"

2016/04/07 11:28:30.730432 client.go:90: DBG connect
2016/04/07 11:28:30.832034 outputs.go:126: INFO Activated logstash as output plugin.
2016/04/07 11:28:30.832063 publish.go:232: DBG Create output worker

After changing the Docker image to es230_l230_k450 (latest), Filebeat can no longer connect:

2016/04/07 11:28:34.671825 client.go:90: DBG connect
2016/04/07 11:28:34.672001 transport.go:125: ERR SSL client failed to connect with: dial tcp 192.168.12.66:5044: getsockopt: connection refused
2016/04/07 11:28:34.672008 single.go:126: INFO Connecting error publishing events (retrying): dial tcp 192.168.12.66:5044: getsockopt: connection refused
2016/04/07 11:28:34.672011 single.go:152: INFO send fail
2016/04/07 11:28:34.672015 single.go:159: INFO backoff retry: 2s
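
(For anyone hitting the same thing: a quick way to confirm that nothing is accepting connections on the Beats port is to probe it directly from the Filebeat host. Host and port below are taken from the logs above, and nc/openssl are generic tools, nothing specific to the ELK image.)

# check whether anything accepts TCP connections on the Beats port
nc -vz 192.168.12.66 5044
# or attempt the TLS handshake directly
openssl s_client -connect 192.168.12.66:5044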

ELK container logs:

==> /var/log/kibana/kibana4.log <==
{"type":"log","@timestamp":"2016-04-07T11:28:10+00:00","tags":["status","plugin:elasticsearch","info"],"pid":211,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2016-04-07T11:28:13+00:00","tags":["status","plugin:elasticsearch","info"],"pid":211,"name":"plugin:elasticsearch","state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-04-07T11:28:33.553000+0000", :message=>"Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.", "exception"=>#<NoMethodError: undefined method `multi_filter' for nil:NilClass>, "backtrace"=>["(eval):191:in `cond_func_4'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):188:in `cond_func_4'", "(eval):130:in `filter_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:271:in `filter_batch'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:852:in `inject'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:269:in `filter_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:227:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:205:in `start_workers'"], :level=>:error}
{:timestamp=>"2016-04-07T11:28:33.654000+0000", :message=>"Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.", "exception"=>#<NoMethodError: undefined method `multi_filter' for nil:NilClass>, "backtrace"=>["(eval):191:in `cond_func_4'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):188:in `cond_func_4'", "(eval):130:in `filter_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:271:in `filter_batch'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:852:in `inject'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:269:in `filter_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:227:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:205:in `start_workers'"], :level=>:error}

spujadas (Owner) commented Apr 7, 2016

OK, reproduced it – it isn't so much an SSL error as a Logstash-stopped-running error: because Logstash has stopped, the (SSL) connection can't be opened, hence the error on the Filebeat side.
I've narrowed the culprit down to Logstash's --auto-reload option, which I added to the image yesterday, but I don't yet understand why it causes this behaviour.
Anyway, I've removed the option while I investigate, so you should be able to safely go back to using the latest/es230_l230_k450 image.
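
(For reference, --auto-reload is Logstash 2.3's config auto-reloading flag. In the image's Logstash start command it would look roughly like this; the paths are illustrative, not the image's exact startup script.)

# with config auto-reloading enabled (the change that triggered the problem)
/opt/logstash/bin/logstash -f /etc/logstash/conf.d --auto-reload
# option removed again while I investigate
/opt/logstash/bin/logstash -f /etc/logstash/conf.d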

spujadas (Owner) commented Apr 7, 2016

The bug I mentioned in my previous comment was corrected in Logstash 2.3.1, so I upgraded Logstash (and Elasticsearch, which had also been updated) in the image. Did some tests, everything's working nicely, so you should be good to go with latest or es231_l231_k450 if you want to use the very latest version of the image.
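
If you want to pull the rebuilt image explicitly, something along these lines should do (assuming the sebp/elk image from Docker Hub and the usual Kibana/Elasticsearch/Beats ports; adjust to your own setup):

docker pull sebp/elk:es231_l231_k450
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk:es231_l231_k450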
