constant "A plugin had an unrecoverable error" #18

Closed
bobrik opened this Issue Mar 19, 2015 · 19 comments

bobrik commented Mar 19, 2015

I'm running logstash 1.5.0.rc2 with the following config:

input {
    file {
        path => "/var/log/syslog-ng/uberlog/uberlog.log"
        codec => json
    }
}

output {
    file {
        path => "/var/log/logstash/%{type}/%{channel}/%{channel}.log"
        message_format => "%{extra_formatted}"
    }
}

My log lines look like this:

{"@timestamp":"2015-03-19T11:11:06.415585+03:00","@version":1,"host":"web540","message":"Cron DatingRateCron[{\"copy\":\"96\",\"server_id\":null,\"priority\":\"1\"}] used too much memory: 115605504","type":"topface","channel":"CronManager","level":"WARNING","extra_elapsed":144.54081296921,"extra_revision":"20150318040","extra_mode":"CLI","extra_formatted":"web540 r20150318040 2015-03-19 11:11:06,415 +144.541 CLIWARN  [pid:91812 controller:DatingRateCron] Cron DatingRateCron[{\"copy\":\"96\",\"server_id\":null,\"priority\":\"1\"}] used too much memory: 115605504","ctx_pid":91812,"ctx_controller":"DatingRateCron"}
{"@timestamp":"2015-03-19T11:11:06.485589+03:00","@version":1,"host":"web540","message":"Fixing not existed key for age , user 94358774","type":"topface","channel":"DatingReport","level":"WARNING","extra_elapsed":112.38282799721,"extra_revision":"20150318040","extra_mode":"CLI","extra_formatted":"web540 r20150318040 2015-03-19 11:11:06,485 +112.383 CLIWARN  [pid:92796 controller:DatingRateCron] Fixing not existed key for age , user 94358774","ctx_pid":92796,"ctx_controller":"DatingRateCron"}

After an incorrect JSON line (I assume, not sure), Logstash falls into the following state:

{:timestamp=>"2015-03-19T08:09:16.320000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:17.324000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:18.328000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:19.331000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:20.335000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:21.338000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:22.342000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:23.346000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:24.349000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:25.353000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:26.356000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-03-19T08:09:27.360000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}

Incorrect lines happen because syslog-ng splits long lines into several. I'm going to fix this, but it should not lead to constant unrecoverable errors. If a line cannot be parsed with the json codec, it should be skipped or passed through as-is with a _jsonparsefailure tag. The incorrect lines are going to stop at some point anyway.
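
(For illustration only: a minimal Ruby sketch of the fallback behaviour being asked for here, not the actual logstash-codec-json code. The idea is to rescue the parse failure and pass the raw line through, tagged so downstream filters can route or drop it.)

# Hypothetical sketch of the requested fallback; names and structure are illustrative.
require "json"

def decode_line(line)
  JSON.parse(line)
rescue JSON::ParserError
  # Instead of raising (and crashing the input), emit the raw line tagged
  # with _jsonparsefailure so it can be skipped or inspected later.
  { "message" => line, "tags" => ["_jsonparsefailure"] }
end

decode_line('{"channel":"CronManager","level":"WARNING"}')
# => {"channel"=>"CronManager", "level"=>"WARNING"}
decode_line('half of a line split by syslog-ng')
# => {"message"=>"half of a line split by syslog-ng", "tags"=>["_jsonparsefailure"]}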

bobrik commented Mar 19, 2015

Another kind of error:

{:timestamp=>"2015-03-19T09:33:37.209000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: undefined method `[]' for 0.15:Float", :level=>:error}
{:timestamp=>"2015-03-19T09:33:38.215000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: can't convert String into Integer", :level=>:error}
{:timestamp=>"2015-03-19T09:33:39.220000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: can't convert String into Integer", :level=>:error}
{:timestamp=>"2015-03-19T09:33:40.224000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/var/log/syslog-ng/uberlog/uberlog.log\"], start_position=>\"end\", delimiter=>\"\\n\">\n  Error: can't convert String into Integer", :level=>:error}

bobrik commented Mar 19, 2015

Looks like 1.4.2 recovers from errors just fine.

Contributor

purbon commented Mar 25, 2015

Digging into it, will keep you posted on my progress here. I agree the plugin should be able to recover from errors like this and keep working.

nabam commented Apr 1, 2015

Full exception for "string not matched":

string not matched {:level=>:error, :exception=>IndexError, :backtrace=>["org/jruby/RubyString.java:3910:in `[]='", "/opt/logstash/lib/logstash/event.rb:62:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-json-0.1.6/lib/logstash/codecs/json.rb:35:in `decode'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:139:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:182:in `_read_file'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:181:in `_read_file'", "org/jruby/RubyKernel.java:1507:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:177:in `_read_file'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:83:in `subscribe'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:56:in `each'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:53:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:112:in `subscribe'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:74:in `subscribe'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:136:in `run'", "/opt/logstash/lib/logstash/pipeline.rb:174:in `inputworker'", "/opt/logstash/lib/logstash/pipeline.rb:168:in `start_input'"]}

Contributor

purbon commented Apr 1, 2015

Sorry for the delay on this, got caught up with other stuff ... will get back to this soon.

nabam commented Apr 2, 2015

I did some investigation and found that the issue happens when sincedb points to a position inside a JSON string (I don't know why that happens), so the data read from that point starts with a quote (or a number, or whatever else). In that case JrJackson in json.rb returns a String instead of a Hash, which is not compatible with the event constructor. I have no idea how to fix that in a smart way.
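
(A small Ruby sketch of the failure mode described above, using the stdlib JSON parser as a stand-in for JrJackson; the stand-in is an assumption, but the type problem is the same. When the decoded document is a bare scalar rather than an object, treating it like a Hash raises exactly the "string not matched" IndexError from the logs, and a type guard before building the event would avoid it.)

require "json"

# Reading from a bad sincedb offset can hand the codec a fragment that is
# valid JSON but not an object, e.g. a bare string:
decoded = JSON.parse('"web540"')   # => "web540" (a String, not a Hash, on modern Ruby;
                                   #    older json gems needed quirks_mode for scalars)

begin
  # LogStash::Event#initialize assumes a Hash; indexing a String this way
  # raises the "string not matched" IndexError from String#[]=.
  decoded["tags"] = ["_jsonparsefailure"]
rescue IndexError => e
  puts e.message   # => "string not matched"
end

# A defensive codec could check the type before constructing the event:
data = decoded.is_a?(Hash) ? decoded : { "message" => decoded.to_s, "tags" => ["_jsonparsefailure"] }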

Contributor

purbon commented Apr 2, 2015

Cool, good work...

nabam commented Apr 9, 2015

@purbon do you know if there is someone who maintains the json codec?

dev-head commented May 27, 2015

@purbon is this still an active issue? My logstash implementation suddenly started exhibiting this behavior and I'm grasping at straws to find the root cause and how to resolve it going forward. I'm in production, which limits my course of action in this specific case. Any help would be greatly appreciated.

UPDATE:
I turned debugging on and saw that the file input plugin was crashing, reloading, and repeating. The output gave the line it was crashing on, which showed it was missing the first character in that row, a curly brace {. I checked the log file and there wasn't an issue with that line at all. I then just bounced that line down one line and restarted logstash... and it started running again. This looks like an issue with .sincedb and what column it thought it left off at (shouldn't it always be zero?)
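
(Side note on the "column" question: as far as I understand, sincedb records track a byte offset into the file, not a column. A minimal Ruby sketch below assumes the classic four-field sincedb layout, which is an assumption about this plugin version; if that offset ever points mid-line, the next read starts inside a JSON document, exactly like the fragments quoted in this thread.)

# Sketch only: assumes the classic sincedb layout "inode major_dev minor_dev byte_offset".
require "tempfile"

log = Tempfile.new("uberlog")
log.write(%({"channel":"DatingReport","level":"WARNING"}\n))
log.flush

record = "5639247 0 2049 12"            # hypothetical sincedb line; 12 is a mid-line offset
_inode, _dev_major, _dev_minor, offset = record.split.map(&:to_i)

# On restart the file input seeks to the stored offset and resumes reading there;
# if that offset falls inside a line, the codec receives a mid-line fragment
# rather than a full JSON document.
log.seek(offset)
puts log.gets    # => DatingReport","level":"WARNING"}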


Logstash repeating error

{:timestamp=>"2015-05-27T19:09:53.794000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/logstash/dataset-*-log.csv\"], start_position=>\"beginning\", type=>\"dataset\", sincedb_path=>\"/srv/data/logstash/.sincedb\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}
{:timestamp=>"2015-05-27T19:09:54.814000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::File path=>[\"/logstash/dataset-*-log.csv\"], start_position=>\"beginning\", type=>\"dataset\", sincedb_path=>\"/srv/data/logstash/.sincedb\", delimiter=>\"\\n\">\n  Error: string not matched", :level=>:error}

Input

  file {
    path => "/logstash/dataset-*-log.csv"
    start_position => "beginning"
    stat_interval => 1
    type => "dataset"
    codec => "json"
    sincedb_path => "/srv/data/logstash/.sincedb"
  }

Filter

    if [type]  == "dataset" {
        mutate {
            remove_field => ["message", "@version", "host", "path"]
        }
    }

Output

    if [type] == "dataset" {

        elasticsearch {
            action => "index"
            document_id => "%{_id}"
            index => "dataset"
            index_type => "words"
            flush_size => 1000
            workers => 2
            host => "127.0.0.1"
            cluster => "PinkyRing"
            node_name => "StinkyPinky"
            embedded => false
        }
    }

nabam added a commit to nabam/logstash-codec-json that referenced this issue Jun 8, 2015

dev-head commented Jun 30, 2015

👍 @purbon will the work @nabam did be in the next 5.2 release?

Contributor

jordansissel commented Jul 16, 2015

A user in IRC is reporting this problem.

datreic commented Jul 16, 2015

Implementing the patch referenced by nabam @ nabam/logstash-codec-json@0e5eaf7 sidesteps the issue and allows me to continue to evaluate data. However, once the data gets over to the indexer it causes another failure:

{:timestamp=>"2015-07-16T13:52:51.074000-0500", :message=>"Got error to send bulk of actions: Null key for a Map not allowed in JSON (use a converting NullKeySerializer?)", :level=>:error}
{:timestamp=>"2015-07-16T13:52:51.074000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>129, :exception=>#<JrJackson::ParseError: Null key for a Map not allowed in JSON (use a converting NullKeySerializer?)>, :backtrace=>["com/jrjackson/JrJacksonRaw.java:146:in `generate'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.2.9/lib/jrjackson/jrjackson.rb:37:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.1/lib/multi_json/adapters/jr_jackson.rb:15:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.1/lib/multi_json/adapter.rb:25:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.1/lib/multi_json.rb:136:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/utils.rb:97:in `__bulkify'", "org/jruby/RubyArray.java:2412:in `map'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/utils.rb:97:in `__bulkify'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/actions/bulk.rb:75:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch/protocol.rb:103:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:505:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:504:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:529:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:528:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1511:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}

To add further information, the actual log files have the correct JSON format. Here is an example that fails:

{"MessageSourceAddress":"192.168.15.1","EventReceivedTime":"2015-07-16 13:09:52","SourceModuleName":"eventlog","SourceModuleType":"im_msvistalog","EventTime":"2015-07-16 13:09:50","Hostname":"testbox.domain.local","Keywords":-9215364837300034816,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":4776,"SourceName":"Microsoft-Windows-Security-Auditing","ProviderGuid":"{A48496C5-5478-4994-A5BA-3E3B0348C30D}","Version":0,"Task":14336,"OpcodeValue":0,"RecordNumber":2282932509,"ProcessID":700,"ThreadID":47308,"Channel":"Security","Message":"The domain controller attempted to validate the credentials for an account.\r\n\r\nAuthentication Package:\tMICROSOFT_AUTHENTICATION_PACKAGE_V1_0\r\nLogon Account:\tjonny\r\nSource Workstation:\tJIMPC\r\nError Code:\t0x0","Category":"Credential Validation","Opcode":"Info","PackageName":"MICROSOFT_AUTHENTICATION_PACKAGE_V1_0","TargetUserName":"jonny","Workstation":"JIMPC","Status":"0x0","ostype":"server","NXLogSourceAddress":"192.168.15.88"}

And the file input plugin seems to retrieve it as

geName":"MICROSOFT_AUTHENTICATION_PACKAGE_V1_0","TargetUserName":"jonny","Workstation":"JIMPC-STG","Status":"0x0","ostype":"server","NXLogSourceAddress":"192.168.15.88"}

which is clearly not correct.

suyograo added the crashes label Jul 16, 2015

nabam added a commit to nabam/logstash-codec-json that referenced this issue Jul 16, 2015

kennytan commented Jul 23, 2015

Hi everyone, this error seems to occur with the lumberjack plugin as well:


{:timestamp=>"2015-07-23T10:11:07.609000+0900", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-07-23T10:11:08.199000+0900", :message=>"CircuitBreaker::Close", :name=>"Lumberjack input", :level=>:warn}
{:timestamp=>"2015-07-23T12:59:02.518000+0900", :message=>"Got error to send bulk of actions: Null key for a Map not allowed in JSON (use a converting NullKeySerializer?)", :level=>:error}
{:timestamp=>"2015-07-23T12:59:02.520000+0900", :message=>"Failed to flush outgoing items", :outgoing_count=>18, :exception=>#<JrJackson::ParseError: Null key for a Map not allowed in JSON (use a converting NullKeySerializer?)>, :backtrace=>["com/jrjackson/JrJacksonRaw.java:146:in `generate'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/jrjackson-0.2.9/lib/jrjackson/jrjackson.rb:37:in `dump'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/multi_json-1.11.1/lib/multi_json/adapters/jr_jackson.rb:15:in `dump'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/multi_json-1.11.1/lib/multi_json/adapter.rb:25:in `dump'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/multi_json-1.11.1/lib/multi_json.rb:136:in `dump'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/utils.rb:97:in `__bulkify'", "org/jruby/RubyArray.java:2412:in `map'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/utils.rb:97:in `__bulkify'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/actions/bulk.rb:75:in `bulk'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch/protocol.rb:103:in `bulk'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:505:in `submit'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:504:in `submit'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:529:in `flush'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:528:in `flush'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216:in `buffer_flush'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193:in `buffer_flush'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1511:in `loop'", "/home/cout-user/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}
...

Seems to be the exact same error; the only difference is I am not using the file input.
Any chance this could be a concurrency issue? I observed that this happens quite frequently whenever there is a surge in the event stream.

dev-head commented Oct 17, 2015

None of the logstash releases pushed since this was brought up in March 2015 has addressed the issue. Is there more information we can provide to get a fix?

🎱 when will it be addressed??

ranlei commented Dec 22, 2015

I'm running logstash 2.1.0 with this config:

input {
    file {
        path => "access.log"
        start_position => "end"
        type => "b2c.dapp.access"
    }
}

output {
    redis {
        data_type => "list"
        key => "log"
        db => 44
        host => ["hostname"]
        password => "redispass"
    }
}

The error log is:
{:timestamp=>"2015-12-22T17:14:28.542000+0800", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::File path=>["/export/loganalyse/tuan/xuelei/vocation/loganaly/log/dapp1/b2c.dapp.access.log"], start_position=>"end", type=>"b2c.dapp.access", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, stat_interval=>1, discover_interval=>15, sincedb_write_interval=>15, delimiter=>"\n">\n Error: undefined method `+' for nil:NilClass", :level=>:error}

Contributor

guyboertje commented Apr 26, 2016

Unless told otherwise, we believe this behaviour is restricted to the 1.5.X releases.

Please upgrade to the latest version of Logstash.

Closing.

guyboertje closed this Apr 26, 2016

get2arun commented Nov 18, 2016

I am also facing this issue during execution of the logstash pipeline. I have the latest Logstash (5.0.1) installed but am seeing the same error.

Error in the line:

[2016-11-18T14:11:11,695][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"3e41b992054e8ff9dabbc5e4da48cdf1b344fcd6-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_483a8f13-8c7f-4280-938b-40369032b893", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: Address already in use

Partial output log copied below; I cannot copy or share the full logs due to restrictions.

[2016-11-18T14:11:05,399][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://hidden:hidden@myelkserver.domain.name:9200"]}}
[2016-11-18T14:11:05,400][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["myelkserver.domain.name:9200"]}
[2016-11-18T14:11:05,403][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2016-11-18T14:11:05,407][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-18T14:11:05,415][DEBUG][logstash.agent ] Starting puma
[2016-11-18T14:11:05,416][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2016-11-18T14:11:05,418][DEBUG][logstash.api.service ] [api-service] start
[2016-11-18T14:11:05,434][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9601}
[2016-11-18T14:11:05,435][DEBUG][logstash.api.service ] [api-service] start
[2016-11-18T14:11:05,438][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
[2016-11-18T14:11:05,536][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:05 +0100}
[2016-11-18T14:11:06,545][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:06 +0100}
[2016-11-18T14:11:07,547][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:07 +0100}
[2016-11-18T14:11:08,551][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:08 +0100}
[2016-11-18T14:11:09,553][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:09 +0100}
[2016-11-18T14:11:10,408][DEBUG][logstash.pipeline ] Pushing flush onto pipeline
[2016-11-18T14:11:10,554][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:10 +0100}
[2016-11-18T14:11:11,557][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:11 +0100}
[2016-11-18T14:11:11,695][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"3e41b992054e8ff9dabbc5e4da48cdf1b344fcd6-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_483a8f13-8c7f-4280-938b-40369032b893", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
sun.nio.ch.ServerSocketAdaptor.bind(sun/nio/ch/ServerSocketAdaptor.java:74)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:125)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:554)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1258)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:511)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:496)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:980)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:250)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:363)
io.netty.util.concurrent.SingleThreadEventExecutor.safeExecute(io/netty/util/concurrent/SingleThreadEventExecutor.java:451)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:418)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:401)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:877)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(io/netty/util/concurrent/DefaultThreadFactory.java:144)
java.lang.Thread.run(java/lang/Thread.java:745)
[2016-11-18T14:11:12,560][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:12 +0100}
[2016-11-18T14:11:12,695][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2016-11-18T14:11:12,705][WARN ][io.netty.channel.AbstractChannel] Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x6ab3d2e5]
java.util.concurrent.RejectedExecutionException: event executor terminated

Any workaround to push the data from Logstash to Elasticsearch is appreciated.

Contributor

jsvd commented Nov 18, 2016

@get2arun can you please open a new issue since it seems to be a different problem? I'll respond there

get2arun commented Nov 22, 2016

Thanks, somehow the issue is resolved but I'm not sure how. Sure, will do that.

Arun
