
Logstash integration #328

Closed
sparrc opened this issue Oct 28, 2015 · 12 comments · Fixed by #1320
Labels: help wanted (Request for community participation, code, contribution)

Comments

@sparrc (Contributor) commented Oct 28, 2015

Looking for community input.

We'd like Telegraf to have some integration with logstash. Not being a user of logstash myself, what should this look like? An output that can send arbitrary telegraf data to logstash, or a plugin that could consume logstash output and forward to InfluxDB?

@sparrc added the "help wanted" and "Need More Info" labels Oct 28, 2015
@jrxFive (Contributor) commented Oct 28, 2015

I'm a pretty heavy Logstash user. I think it would be cool to do both if possible, but I favor a pure output. Since Logstash is a great router to different services, it could itself send to InfluxDB via a Logstash output.

It could be a lot like the Kafka/NSQ output, with the data formatted into the Logstash protocol. Since we are already using tags, those can map exactly onto Logstash fields. The type would be configured from the configuration file, defaulting to "telegraf". I'm unsure if anyone has created a Logstash codec for the line protocol yet.

Additionally, we already have outputs that Logstash uses very heavily as inputs: amqp, kafka, redis. It could be as simple as adding the Logstash protocol option to those. It's not very often that you will send directly to Logstash without a broker in the middle.
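The exact tag mapping suggested above could look something like this minimal sketch. This is purely illustrative: the function name, metric structure, and event field names are assumptions, not Telegraf's actual internal types.

```python
import json

def metric_to_logstash_event(name, tags, fields, timestamp_iso, event_type="telegraf"):
    """Map a metric's tags and fields onto a flat Logstash-style JSON event.

    Tags become top-level event fields (an exact mapping, as suggested above);
    the 'type' field is configurable and defaults to "telegraf".
    """
    event = {"@timestamp": timestamp_iso, "type": event_type, "measurement": name}
    event.update(tags)    # tags map one-to-one onto event fields
    event.update(fields)  # numeric fields carry the actual values
    return json.dumps(event)

print(metric_to_logstash_event(
    "cpu",
    tags={"host": "web01", "cpu": "cpu0"},
    fields={"usage_idle": 98.2},
    timestamp_iso="2015-10-28T12:00:00Z",
))
```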

@jrxFive (Contributor) commented Nov 11, 2015

@sparrc, any suggestion on which direction? I probably have some time to start this.

@sparrc (Contributor, Author) commented Jan 22, 2016

@jrxFive Sorry I missed this comment. I think that supporting the Logstash encoding would be good. I was recently thinking about ways to do this; one option would be to have "protocol" plugins that could serialize the client.Point object into various different protocols, such as Logstash's. Currently we pretty much only do line protocol; JSON would be an easy and obvious one to support as well.

So let me think about how that would look architecturally. It would be nice if users could plug different protocols into input and output plugins ad hoc.
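The "protocol plugin" idea could be sketched in miniature as a registry of serializers keyed by protocol name. None of these names come from Telegraf's actual code; this is a hypothetical illustration of the architecture being discussed.

```python
import json

# Hypothetical registry: protocol name -> serializer function
SERIALIZERS = {}

def serializer(name):
    """Decorator that registers a serializer under a protocol name."""
    def register(fn):
        SERIALIZERS[name] = fn
        return fn
    return register

@serializer("line")
def to_line_protocol(name, tags, fields):
    # InfluxDB line protocol: measurement,tag=val field=val
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{name},{tag_str} {field_str}"

@serializer("json")
def to_json(name, tags, fields):
    return json.dumps({"name": name, "tags": tags, "fields": fields})

def serialize(protocol, name, tags, fields):
    """Output plugins would call this with whichever protocol the user configured."""
    return SERIALIZERS[protocol](name, tags, fields)

print(serialize("line", "cpu", {"host": "web01"}, {"usage_idle": 98.2}))
```

With something like this, an output plugin would only need a `protocol` option in its configuration to switch between encodings.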

@sslupsky commented
The current InfluxDB output in Logstash does not work well. We have experienced issues where InfluxDB drops data because the Logstash output does not back off when InfluxDB is busy. I am assuming Telegraf knows to do so? So a plugin that consumes Logstash output would be nice.

@zstyblik (Contributor) commented
@sslupsky what do you mean by "busy"?

@sslupsky commented
@zstyblik When InfluxDB cannot accept data, it returns a 500 error. When it does that, you need to back off for a while. Unfortunately, the InfluxDB output for Logstash does not back off and keeps shoving data at InfluxDB.

My developer mentioned that if the InfluxDB plugin for Logstash were a codec, then he thought you could perhaps use the Logstash http output to send data to InfluxDB instead.
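The back-off behaviour being asked for here is the classic exponential-back-off-on-5xx pattern. A minimal sketch, where `send_batch`, the retry limit, and the delays are all illustrative assumptions rather than anything from Telegraf or Logstash:

```python
import time

def send_with_backoff(send_batch, batch, max_retries=5, base_delay=0.5):
    """Retry a batch send with exponential back-off when the server is busy.

    send_batch(batch) is assumed to return an HTTP status code; a 5xx
    response means the database cannot accept data right now, so we wait
    and retry instead of immediately shoving more data at it.
    """
    for attempt in range(max_retries):
        status = send_batch(batch)
        if status < 500:
            return status
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, 4s, ...
    raise RuntimeError("server still busy after %d retries" % max_retries)
```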

@zstyblik (Contributor) commented
@sslupsky I see, and thank you for the explanation. As for the http plugin in Logstash, it doesn't allow batching, as far as my experience goes.

@deanefrati commented
@sparrc Another idea for Logstash integration could be to integrate with Elastic's Beats framework. It's also written in Go, so the integration may not be that difficult. The Beats framework knows how to ship data to various outputs, including Logstash and Elasticsearch, and does some nice things like reducing the send frequency when Logstash/Elasticsearch is "busy". We would be very interested in a Logstash output plugin.

@elvarb commented Apr 12, 2016

Having Kafka as a middleman for both input and output plugins in Logstash would, in my opinion, be the best way. It solves all possible performance problems.

Another use case for a Logstash integration would be an InfluxDB filter. That way you could configure Logstash to query InfluxDB when a certain event comes in. For example, when an error message comes in, it could query InfluxDB for the current active sessions and the average sessions for the past 10 minutes and the past 60 minutes.

sparrc added a commit that referenced this issue Jun 2, 2016
@sparrc mentioned this issue Jun 2, 2016
@alimousazy (Contributor) commented
@sparrc Will this handle aggregation as well? Please take a look at #1349.

@sparrc (Contributor, Author) commented Jun 8, 2016

Aggregation will not be specific to any one plugin; see #380.

@robinsmidsrod commented May 10, 2017

If you want to get specific events from Logstash into InfluxDB, you can also use this output configuration in Logstash:

# Output HTTP access log info to Telegraf TCP listener which ends up in InfluxDB
if "http-access" in [type] {
    tcp {
        host => "127.0.0.1"
        port => 8094
        codec => line {
            format => "http_access,program=%{program},host=%{host},vhost=%{vhost},port=%{port},status=%{status},scheme=%{scheme},method=%{method},severity=%{severity} bytes=%{bytes}i,duration=%{duration},count=1i %{@timestamp_ns}"
        }
    }
}

You'll need this in your filter section as well:

if [@timestamp] {
    ruby {
        # event.get('@timestamp').time is an object of class 'Time'
        # See http://ruby-doc.org/core-2.2.0/Time.html for details
        # Convert to a rational number (fraction), multiply by 1e9, and store as an integer
        code => "event.set('@timestamp_ns', ( event.get('@timestamp').time.to_r * 1000000000 ).to_i )"
    }
}
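For reference, the nanosecond conversion done by that Ruby filter (an epoch timestamp scaled to integer nanoseconds, which is the default timestamp precision in InfluxDB line protocol) looks roughly like this in Python; the function name is just for illustration:

```python
from datetime import datetime, timezone

def to_influx_ns(dt):
    """Convert a datetime to integer nanoseconds since the Unix epoch,
    matching what the Ruby filter stores in @timestamp_ns."""
    return int(dt.timestamp() * 1_000_000_000)

ts = datetime(2017, 5, 10, 12, 0, 0, tzinfo=timezone.utc)
print(to_influx_ns(ts))  # 1494417600000000000
```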
