Send observability signals to Kafka #290
Comments
Hi, thank you for your suggestion! It would be great to leverage Kafka's resiliency, but I'm not sure how we could do that. The Agent is normally the one which sends the signals to the end location. Would each Agent component send data to Kafka, only for another component on the same Agent process to pick it up? That seems like a lot of unnecessary networking overhead. Another issue is that the user of the Agent would have to be able to run Kafka. The Agent is normally a self-sufficient executable, so this would be a big break from convention for us. I think a more realistic solution would be something like #323. This article explains it in a bit more detail.
@ptodev, once we have all the data in Kafka, we can use either native consumers (Tempo/Mimir) to consume the data, or the Agent to consume it and push it to the backend system.
And again, this shouldn't be the only way data can be ingested. Having a Kafka exporter (Kafka as a destination) would allow building systems that have Kafka in place, while anyone who doesn't have it or doesn't want it can still use the existing approach of pushing directly to the backend system.
It seems to me you're looking for an equivalent of Filebeat's Kafka output or Vector's Kafka sink? Having said that, the PR you've linked is for Kafka message consumption, which can already be done by the agent's …
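For reference, here is a minimal sketch of that consumption path in the Agent's flow-mode (River) configuration, using the documented `otelcol.receiver.kafka` and `otelcol.exporter.otlp` components. Broker and endpoint addresses are placeholders:

```river
// Consume OTLP-encoded data from Kafka and forward it to a backend over OTLP.
otelcol.receiver.kafka "default" {
  brokers          = ["kafka-1:9092"] // placeholder broker address
  protocol_version = "2.0.0"

  output {
    metrics = [otelcol.exporter.otlp.backend.input]
    logs    = [otelcol.exporter.otlp.backend.input]
    traces  = [otelcol.exporter.otlp.backend.input]
  }
}

otelcol.exporter.otlp "backend" {
  client {
    endpoint = "backend:4317" // placeholder backend address
  }
}
```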
@hainenber, I'm looking for something like this: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/kafkaexporter. The idea is to use the Agent to collect all kinds of observability data and push it to Kafka. Then it can be consumed by different systems (another Agent, Mimir, Tempo, or another external system).
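To make the request concrete, a hypothetical sketch of what such a component could look like in River, with attribute names modeled on the Collector kafkaexporter's settings (brokers, protocol_version, topic, encoding). Note that `otelcol.exporter.kafka` does not exist in the Agent at the time of writing; the component name and attributes here are assumptions:

```river
// Hypothetical component — not implemented in the Agent at the time of
// writing. Attribute names mirror the Collector's kafkaexporter settings.
otelcol.exporter.kafka "buffer" {
  brokers          = ["kafka-1:9092", "kafka-2:9092"] // placeholder brokers
  protocol_version = "2.0.0"
  topic            = "otlp_spans"  // the kafkaexporter's default traces topic
  encoding         = "otlp_proto"
}

// Collect locally, then publish to Kafka instead of pushing straight to a backend.
otelcol.receiver.otlp "default" {
  grpc {}

  output {
    traces = [otelcol.exporter.kafka.buffer.input]
  }
}
```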
I see, thank you for clarifying. It would certainly be possible to port the Collector's Kafka exporter. I suspect it won't be a lot of effort - most of the time would be spent in documenting the features. However, I do not know to what extent databases such as Mimir, Loki, and Tempo can ingest signals from Kafka (especially on Grafana Cloud). Also, if we reuse the OTel Collector's component, we wouldn't be able to send "profile" signals, because they are not yet part of the OTel standard.
@ptodev I think "profiles" will eventually be onboarded in OTel as well.
Realistically, I don't think the core development team can work on adding an …
Request
Hello,
Grafana Tempo can consume traces directly from Kafka already. Grafana Mimir merged the changes to do that recently (grafana/mimir#6929).
I think it would be good to have the ability to send all kinds of observability data from the agent to Kafka.
Is it somewhere on the roadmap?
Use case
Kafka can be used as an intermediate buffer for all observability signals, so metrics scraping, trace/log/profile collection, and actual data ingestion can each go at their own pace. It would also prevent data loss when ingesters are overloaded: data would not be discarded, only delayed.
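As an illustration of the buffering idea for the logs path, the Agent can already drain log lines from Kafka with the documented `loki.source.kafka` component, so an overloaded Loki only delays consumption rather than dropping data at the collection edge. Broker, topic, label, and URL values below are placeholders:

```river
// Consume log lines from a Kafka topic at whatever pace Loki can accept.
loki.source.kafka "buffered" {
  brokers    = ["kafka-1:9092"]          // placeholder broker address
  topics     = ["observability-logs"]    // placeholder topic name
  labels     = {source = "kafka"}
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push" // placeholder Loki URL
  }
}
```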