feat(blackhole sink): accept metric events, too #1237
Merged
loony-bean merged 1 commit into vectordotdev:master on Nov 23, 2019
Conversation
Contributor
Thanks @thoughtpolice! We'll get this reviewed. @loony-bean do you mind reviewing this? Thanks.
lukesteensen (Member) approved these changes on Nov 23, 2019 and left a comment:
Thanks for this! We still have some work to do with the Any type for sources and transforms, but we should definitely be using it here.
Member
@thoughtpolice Would you mind running
The Vector website indicates that the blackhole sink accepts both log and metric events, but this isn't true: it only accepts logs. There's no reason this should be the case.

In particular, metric sources (such as `statsd` in my case) that want to accept/transform metrics but just send everything to `/dev/null` for the moment (e.g. when debugging your configuration) have no easy way to do this, resulting in an annoying error in the startup topology check.

I also like enabling blackholes with high `print_amount` settings anyway, so that my `journald` logs show some progress occasionally. This will allow me to send all streams to one blackhole that prints that info.

This patch simply accepts events and distinguishes metrics, adding them to the total event count and taking the length of the `.to_string()` output, AKA the JSON representation, as the byte size. I believe this is roughly the right metric, since it is basically a measure of "how many bytes this stream will roughly push out", which will generally be some rendered form of the metric data, such as JSON.

Signed-off-by: Austin Seipp <aseipp@pobox.com>
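The accounting described above can be sketched roughly as follows. This is a hedged, simplified illustration, not Vector's actual code: the `Event` enum, `BlackholeState`, and the field names here are hypothetical stand-ins, and the rendered-string length stands in for the `.to_string()`/JSON size mentioned in the description.

```rust
// Hypothetical sketch of the blackhole sink's per-event accounting.
// All names here are illustrative, not taken from the Vector codebase.

enum Event {
    Log(String),
    Metric { name: String, value: f64 },
}

struct BlackholeState {
    total_events: usize,
    total_raw_bytes: usize,
}

impl BlackholeState {
    fn new() -> Self {
        BlackholeState { total_events: 0, total_raw_bytes: 0 }
    }

    // Accept any event. For metrics, approximate the byte size with the
    // length of a rendered string, analogous to using the length of the
    // `.to_string()` (JSON-ish) representation as the patch describes.
    fn accept(&mut self, event: &Event) {
        let byte_size = match event {
            Event::Log(message) => message.len(),
            Event::Metric { name, value } => format!("{}={}", name, value).len(),
        };
        self.total_events += 1;
        self.total_raw_bytes += byte_size;
    }
}

fn main() {
    let mut state = BlackholeState::new();
    state.accept(&Event::Log("hello".into()));
    state.accept(&Event::Metric { name: "requests".into(), value: 42.0 });
    println!("{} events, {} bytes", state.total_events, state.total_raw_bytes);
}
```

The point of the change is only that metrics flow through the same counters as logs, so a pipeline ending in a blackhole passes the topology check regardless of event type.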
Force-pushed a7175c8 to 17d90b9
loony-bean approved these changes on Nov 23, 2019