New grpc source #573

Comments
As some prior art, here is what the Stackdriver proto looks like.
We already have protobuf definitions for our internal event type: https://github.com/timberio/vector/blob/master/proto/event.proto. It looks like we could easily expose a gRPC service using https://github.com/tower-rs/tower-grpc that uses our internal protobuf structure.
That would be great!
This adds an initial gRPC source that provides non-TLS HTTP/2 access. It exposes a single unary RPC call that accepts a batch of `EventWrapper`s from `event.proto`; these get translated directly into `Event`. Closes #573. Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
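For concreteness, a unary batch interface along those lines might look like the sketch below. The service and message names here are illustrative, not taken from the PR; only `EventWrapper` is the existing message from `proto/event.proto`:

```proto
// Illustrative sketch only: service/message names are hypothetical.
syntax = "proto3";
package event;

// A batch of events pushed in a single unary call.
message PushEventsRequest {
  repeated EventWrapper events = 1; // EventWrapper from proto/event.proto
}

// Empty acknowledgement for the batch.
message PushEventsResponse {}

service EventService {
  // Single unary RPC accepting a batch of EventWrapper messages.
  rpc PushEvents(PushEventsRequest) returns (PushEventsResponse);
}
```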
This is basically complete, but it raised a number of design questions that we need to answer before we proceed to merge this. See #617 (review). Once we obtain consensus on this we can reopen that PR, adjust, and then merge.
This would be very interesting as a sink for Envoy access logs too, replacing the file-based approach (which entails dealing with log rotation, mounting a hostPath in the container, etc.).
I've been thinking about this more, especially as we start to ramp up our marketing efforts around application monitoring and observability. I'm going to respond to this comment.
No, to start, clients will deal with this themselves. We have discussed repurposing the Timber libraries as Vector in-app extensions, but that is not a near-term change.
This is where I think we deviate from the initial implementation. I do not think we should expose our exact internal data model; instead, we should expose a raw event protobuf (the highest form of our data model). The act of deriving metrics, traces, exceptions, etc. from events should be done within Vector itself. This fits nicely into some of our upcoming marketing efforts around application monitoring and observability. Ideally, applications would emit raw events that describe their internal state and that's it. Deriving specific observability data types from this data should happen external to the application. If an application emits other data types, it can use the source designed for that data type. For example, if the application exposes metrics data through a prometheus endpoint, then Vector can scrape that with the `prometheus` source.
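To make that concrete, a raw-event protobuf in this spirit could be as loose as the following. This is purely a sketch; every name in it is hypothetical, not a proposed schema:

```proto
// Hypothetical sketch of a "raw event" schema: loose key/value data
// describing application state, not Vector's internal Event model.
syntax = "proto3";
package raw_event;

message Value {
  oneof kind {
    string string_value = 1;
    double float_value  = 2;
    int64  int_value    = 3;
    bool   bool_value   = 4;
  }
}

message RawEvent {
  // Nanoseconds since the Unix epoch; when the event occurred.
  uint64 timestamp = 1;
  // Arbitrary structured payload; metrics, traces, etc. would be
  // derived from these fields inside Vector.
  map<string, Value> fields = 2;
}
```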
Yes. The
I actually think these are too opinionated for this specific source. Raw events are the true normalized form of all of these data types. If we did decide to support SSF or OpenTelemetry, we should support them explicitly through specific sources or decoding transforms.

Final thoughts

The reason I've been thinking about this is that I find it superior to the traditional app -> disk (file) -> agent design. This allows the application to talk directly to Vector. The use of the disk as a buffer is a design decision the user can make in the form of how they configure Vector's buffers. @lukesteensen let me know what you think.
This makes sense to me. I'm curious what kind of schema you have in mind for these events. I assume it would be a pretty loose key/value type of structure with various types of values available. Would we support higher level things like metrics, or just simple scalar values?
I wonder if it would make sense to rename them in that case (e.g.
This is always the sticky bit in my mind. Direct, "smart" logging to some probably-network endpoint puts a pretty large burden on the logging library to handle things like load shedding, never blocking, safe buffering, etc, etc. Going to stdout or local disk is pretty well understood and supported in all languages, so I'd just want to be careful that whatever we recommend is similarly unlikely to cause an application outage for users.
Agree, I've just been seeing a lot of advice to avoid the disk, which got me thinking about direct app -> Vector communication, especially since Vector has configurable disk buffers. I still agree stdout is the best method. It's simple, decoupled, 12-factor, etc.
Stackdriver seems to do some funky `Any` magic for their format, which we could adapt to avoid exposing our proto internally, but I'm not sure how worthwhile it is then to have a gRPC sink. https://github.com/googleapis/googleapis/blob/master/google/logging/v2/log_entry.proto#L72
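For reference, the payload portion of Stackdriver's `LogEntry` that the link points to is roughly the following. This is paraphrased from memory of the linked file; check it for the authoritative definition and field numbers:

```proto
// Paraphrased excerpt of google/logging/v2/log_entry.proto.
import "google/protobuf/any.proto";
import "google/protobuf/struct.proto";

message LogEntry {
  // ...other fields elided...

  // The payload is a oneof, so a caller can ship an arbitrary protobuf
  // wrapped in Any without that schema being baked into LogEntry itself.
  oneof payload {
    google.protobuf.Any proto_payload = 2;
    string text_payload = 3;
    google.protobuf.Struct json_payload = 6;
  }
}
```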
@binarylogic Did having the grpc input fizzle out? |
@ckdarby The
@jszwedko Does that also mean |
It is indeed 😄 This is with
Repurposing this to be an issue for documenting the gRPC interface supported by the
We currently use the Stackdriver Go client compiled into our applications because I hate the extra stdout/stderr parsing overhead. I already have a structured log in hand, so why would I serialize it to JSON, print it to stdout, and have it parsed back in, with significant chances for parsing errors? I'd prefer to be able to ship the structured log directly to Vector and then have it directed to the appropriate sinks.