
Logstash - datadog - Logs missing #18

Closed

bl85774 opened this issue Dec 16, 2019 · 9 comments

bl85774 commented Dec 16, 2019

Good morning,

In our Logstash environment (versions 7.4 and 7.5), we installed these Datadog plugins:
• logstash-output-datadog (3.0.5)
• logstash-output-datadog_logs (0.3.1)

After installing the plugins, we installed Filebeat on a Linux server; this Filebeat process sends all syslog messages to Logstash, and Logstash takes care of forwarding these logs to our Datadog platform.

The issue we have is that Logstash does not send all logs to our Datadog platform. Some logs are missing when I look in the Log Explorer in Datadog.
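
For reference, the relevant output section of our pipeline looks roughly like this (a minimal sketch; the API key value is a placeholder, and the full configuration is in the files attached below):

output {
  datadog_logs {
    api_key => "<DATADOG_API_KEY>"
  }
}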

In the Logstash log file, when a log does not arrive in Datadog, we see this error:
Dec 12 14:39:11 lvz-logstash-p001 logstash[1112]: [2019-12-12T14:39:11,133][WARN ][logstash.outputs.datadoglogs][main] Could not send payload {:exception=>#<IOError: Broken pipe>, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:950:in `syswrite'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/jopenssl23/openssl/buffering.rb:322:in `do_write'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/jopenssl23/openssl/buffering.rb:339:in `block in write'", "org/jruby/RubyArray.java:1800:in `each'", "org/jruby/RubyEnumerable.java:1093:in `inject'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/jopenssl23/openssl/buffering.rb:338:in `write'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.3.1/lib/logstash/outputs/datadog_logs.rb:36:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-json-3.0.5/lib/logstash/codecs/json.rb:42:in `encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:31:in `block in encode'", "org/logstash/instrument/metrics/AbstractSimpleMetricExt.java:45:in `time'", "org/logstash/instrument/metrics/AbstractNamespacedMetricExt.java:44:in `time'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:30:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-datadog_logs-0.3.1/lib/logstash/outputs/datadog_logs.rb:55:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:89:in `block in multi_receive'", "org/jruby/RubyArray.java:1800:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:89:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:250:in `block in start_workers'"]}
Dec 12 14:39:12 lvz-logstash-p001 logstash[1112]: [2019-12-12T14:39:12,164][INFO ][logstash.outputs.datadoglogs][main] Starting SSL connection {:host=>"intake.logs.datadoghq.com", :port=>10516}

The message logged just before this error was not sent to Datadog. We do not receive this error all the time.
Do you have any idea about this problem?

We have this error with both Logstash versions 7.4 and 7.5.

@ajacquemot

I speak French...

Erreur logstash - DD - logs missing.txt

output installation plugin logstash-output-datadog.txt

NBParis (Contributor) commented Dec 18, 2019

Hello @bl85774 ,

Thanks for reporting this issue and providing the full error message.
At a glance, it seems that there is indeed an issue in the multi_receive path that somehow generates an error when sending the logs.

We will definitely look into it and come back to you.

In the meantime, do you already have a support ticket open with Datadog? (Just to link it with this issue if that is the case.)
And would you have some more information about your Logstash configuration so we can replicate with the exact same setup?

Thanks a lot

bl85774 (Author) commented Dec 18, 2019 via email

NBParis (Contributor) commented Dec 18, 2019

Thanks a lot,

I have linked this issue and the support case together. I believe we will start discussing this in more detail in the support ticket.

Just to double-check: the API key you shared has been edited so that it does not reflect the real value, right? We strongly recommend never sharing an API key publicly.

bl85774 (Author) commented Dec 18, 2019

Hi,

Thank you for your answer.

I hope we will discuss this more in the Datadog support ticket.

The API key does not reflect the real value.

mkoleva commented Dec 30, 2019

I observed a similar error and was wondering which versions of Logstash are supported. Is it all versions? I am seeing broken pipe errors on Logstash 6.8, and they have made the Datadog plugin unusable.

LawrenceLin690 commented Jan 9, 2020

The issue described here and in the linked support case is now solved. The broken pipe errors generally indicate a closed connection: the TCP log ingestion endpoint shuts a connection down after a given number of seconds of inactivity. For this situation, HTTP forwarding is better suited.

If you are experiencing missing logs due to broken pipe errors while using this plugin, you can currently use the default Logstash HTTP output plugin instead, which might look something like:

output {
  http {
    url => "https://http-intake.logs.datadoghq.com/v1/input/<DATADOG_API_KEY>?host=<HOST>&service=<SERVICE>&ddsource=<SOURCE>"
    http_method => "post"
    headers => ["Content-Type", "application/json"]
  }
}
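
Replace <DATADOG_API_KEY>, <HOST>, <SERVICE> and <SOURCE> with your own API key and tag values. The Content-Type header tells the HTTP intake that each event is JSON-encoded, which matches the http output plugin's default json format.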

If you'd like to be added to the feature request for an HTTP forwarder in the Datadog output plugin, contact Datadog Support.

@remiville

Thanks @LawrenceLin690, your workaround worked for me.

@gaetan-deputier (Contributor)

Safe to close?

seanmuth commented Apr 1, 2020

@gaetan-deputier I upgraded my plugin from 0.3.1 to 0.4.1 and it appears to have solved this issue for me. The screenshot shows the past hour of a process that runs every 5 minutes; I upgraded the plugin halfway through, and all logs appear to be coming in correctly since then.
Probably safe to close!
[screenshot: Log Explorer showing the 5-minute process's logs arriving correctly after the plugin upgrade]
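
For anyone else hitting this, the fix on my side was just updating the plugin in place and restarting Logstash (a sketch, assuming a standard package install under /usr/share/logstash as in the backtrace above):

/usr/share/logstash/bin/logstash-plugin update logstash-output-datadog_logs

My understanding from the release notes is that the 0.4.x versions of the plugin send logs over HTTPS by default, which avoids the idle TCP connection problem described above.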
