Add Service for daemonsets as well #623

Conversation

jpkrohling
Member

When running the OTel Demo chart, I realized that I could just change the collector to grab the pods' logs. That requires the collector to run as a daemonset, which caused traces to break, as no service is created for it in that mode. This PR changes the chart so that a service is also created in that case.

I'm not at all familiar with Helm charts, and at first I tried to add another "or" clause to the service template, but that changed way more things than I wanted. I confirmed locally that the service this change generates is enough to get traces into the collector.

Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
@puckpuck
Contributor

puckpuck commented Feb 1, 2023

Daemonsets shouldn't have services, since a service will route to any selected pod, which may not necessarily be on the local node.

The use case here, running the OTel demo pointed at a collector deployed as a daemonset instead of a deployment, is what we should focus on.

I would like to see a way to abstract the collector's endpoint into its own environment variable (part of the defaults), which then gets reused, via Kubernetes environment variable expansion, in the OTEL_EXPORTER_OTLP*_ENDPOINT variables each demo component needs. Then the user only needs to override a single environment variable at the default level to point toward the local K8s node instead. This might require us to lock onto a port (4317/4318), but should be doable.
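A minimal sketch of that idea, assuming a hypothetical OTEL_COLLECTOR_NAME variable in the chart's defaults that Kubernetes then expands in the OTLP endpoint variables (names and values are illustrative, not the chart's actual ones):

```yaml
# Illustrative per-component env section. Kubernetes expands $(VAR) references
# to variables defined earlier in the same env list.
env:
  - name: OTEL_COLLECTOR_NAME                    # overridden once at the defaults level
    value: otel-demo-otelcol                     # e.g. the collector Service name
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(OTEL_COLLECTOR_NAME):4317"  # locks onto the OTLP gRPC port
```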

@jpkrohling
Member Author

> Daemonsets shouldn't have services, since a service will route to any selected pod, which may not necessarily be on the local node.

That's true if you need to reach the daemonset pod on the local node. That is not the case for the spans, which is why I think a service in front of the daemonset is appropriate here.
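For illustration, a plain Service fronting the collector daemonset would look roughly like this; the name, labels, and ports are assumptions for the sketch, not the chart's actual values:

```yaml
# Sketch only: metadata and selector labels are illustrative, not taken from the chart.
apiVersion: v1
kind: Service
metadata:
  name: otelcol-daemonset            # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: otelcol  # must match the daemonset's pod labels
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
    - name: otlp-http
      port: 4318
      targetPort: 4318
```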

@povilasv
Contributor

povilasv commented Feb 3, 2023

@jpkrohling why not send to the local daemonset?

I think that in daemonset mode, if you send traces through a Service, it might cause issues with k8sattributes, i.e. the source IP could end up being the Service's rather than the pod's?

Maybe we could use a headless service (https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) instead?
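For reference, the only difference from a regular Service would be clusterIP: None, so DNS resolves to the pod IPs directly instead of a virtual IP (names again illustrative):

```yaml
# Headless variant of the sketch above: clients resolve and connect to pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: otelcol-daemonset            # hypothetical name
spec:
  clusterIP: None                    # makes the Service headless
  selector:
    app.kubernetes.io/name: otelcol
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
```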

@jpkrohling
Member Author

> why not send to the local daemonset?

Using the downward API for the node IP? I considered that, but the demo has 10 services, and each one would need to be changed to use the downward API only when collecting logs from pods is desired (otherwise a Deployment is used).
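For context, the downward API change being discussed would mean adding something like this to every component's env section (a hedged sketch; NODE_IP is an illustrative name):

```yaml
# Downward API sketch: expose the node's IP to the pod and point OTLP at the
# collector daemonset running on that node.
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP       # IP of the node this pod is scheduled on
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"
```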

@puckpuck
Contributor

puckpuck commented Feb 6, 2023

> why not send to the local daemonset?
>
> Using the downward API for the node IP? I considered that, but the demo has 10 services, and each one would need to be changed to use the downward API only when collecting logs from pods is desired (otherwise a Deployment is used).

I described in detail in this comment how we would change the demo's Helm chart to make it work. I have a company offsite this week, so I don't think I will get to it, but doing this is at the top of my priority list.

@jpkrohling
Member Author

Sounds good to me :-)
