[BUG] Server exporter config does not match the documentation and fails to achieve the same flexibility after refactor #4157
Comments
Why not raise those points in the original PR (#3932)? We could have discussed the config layout and resolved it in time. And if it didn't meet the requirements and definitions, we should have fixed it during code review, not rushed to merge it and quickly raised another PR. We are already developing based on the previous config layout. It's quite common to send o11y data to both a cloud provider's infrastructure and an internal platform at the same time, with different configs to filter out different types of spans. Last but not least, this is not a case of developing a new config layout (like the current one) that we could optimize further later. We are changing a config layout that ALREADY provides higher flexibility to a more limited one. I mean, it's not a bad thing to provide and retain that flexibility. I am not sure why this should not be discussed in an issue (or maybe we are just discussing it now).
The PR you submitted (#3932) was already complete at the time; no further handling was needed.
So some of the questions have conclusions (and maybe some advice) here.
/todo doc update
You have already fixed the first point.
I still suggest making them configurable separately, similar to what we had before, or following the OpenTelemetry processor approach: link to OpenTelemetry Tailsampling Processor README ↗. The key point is that each exporter instance should be able to work independently, without being affected by the configuration of other exporters. The current approach still doesn't achieve the goal of sending different types of data to different receivers under the same protocol or export type. For example, sending basic trace data to an external platform like Datadog or New Relic, while sending all data to an internal platform.

Also, we must consider the cost of hardware resources. Users need to be aware that running an additional exporter incurs an additional 100% cost; I assume this is easily understandable. However, sharing the same configuration doesn't reduce that cost, while fully separating the exporters lets each one take advantage of its own settings.

If you consider it necessary, we could also provide something like "multiple exporters under the same type, with multiple addrs for each exporter", so that we can save a lot of time handling situations like: 10 exporters with 10 addrs, where 5 of them need a smaller trace dataset and the others need full trace data. (But yes, that is over-designed.)

I am going to provide config layouts for the two solutions we discussed above, and see if others want to discuss them and come to a conclusion.

Layout 1:

```yaml
exporters:
  enabled: true
  export-datas: [cbpf-net-span, ebpf-sys-span]
  otlp-exporters:
    - enabled: false
      addr: [127.0.0.1:4317, www.my_sw_receiver.com]
      queue-count: 4
      ...
```

Layout 2:

```yaml
exporters:
  export-datas: [cbpf-net-span, ebpf-sys-span]
  - name: tencent-otlp-exporter
    type: otlp-exporter
    otlp-exporter-config:
      addr: 127.0.0.1:4317
      queue-count: 4
  - name: skywalking-platform-exporter
    type: otlp-exporter
    otlp-exporter-config:
      addr: www.my_sw_receiver.com
      queue-count: 8
```
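To illustrate the independence argument, here is a minimal Go sketch (hypothetical types and names, not DeepFlow's actual implementation): each exporter carries its own span-type filter, so the same span stream fans out differently to each receiver without one exporter's config affecting another.

```go
package main

import "fmt"

// Span is a hypothetical exported span with a data-type tag,
// e.g. "cbpf-net-span" or "ebpf-sys-span".
type Span struct {
	Kind string
	Name string
}

// ExporterConfig is a hypothetical per-exporter config: each exporter
// has its own address and its own export-datas filter.
type ExporterConfig struct {
	Name        string
	Addr        string
	ExportDatas map[string]bool
}

// Accepts reports whether this exporter's filter includes the span's kind.
func (c ExporterConfig) Accepts(s Span) bool {
	return c.ExportDatas[s.Kind]
}

func main() {
	exporters := []ExporterConfig{
		{
			Name:        "external-platform",
			Addr:        "www.my_sw_receiver.com",
			ExportDatas: map[string]bool{"cbpf-net-span": true},
		},
		{
			Name:        "internal-platform",
			Addr:        "127.0.0.1:4317",
			ExportDatas: map[string]bool{"cbpf-net-span": true, "ebpf-sys-span": true},
		},
	}
	spans := []Span{
		{Kind: "cbpf-net-span", Name: "http-request"},
		{Kind: "ebpf-sys-span", Name: "sys-read"},
	}
	// Each exporter applies its own filter independently:
	// the external platform only receives cbpf-net-span,
	// while the internal platform receives everything.
	for _, e := range exporters {
		for _, s := range spans {
			if e.Accepts(s) {
				fmt.Printf("%s -> %s (%s)\n", s.Name, e.Name, e.Addr)
			}
		}
	}
}
```

With a shared filter, the external platform would be forced to receive (or drop) the same span types as the internal one, which is exactly the loss of flexibility described above.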
I'm reviewing the code changes again, and found that separate configuration for each exporter is already supported. Sorry for the misunderstanding.
This is refactored and solved in #4177 . Docs updated as well. I'm closing the issue now. |
Search before asking
DeepFlow Component
Server
What you expected to happen
It seems the document is not updated after #4050: https://github.com/deepflowio/docs/blob/main/docs/zh/07-server-integration/02-export/02-opentelemetry-collector.md
Also, there are some small issues with this refactor: for example, one exporter needs `ebpf-sys-span` only, while the Loki exporter needs `cbpf-net-span` + `ebpf-sys-span`.
How to reproduce
No response
DeepFlow version
No response
DeepFlow agent list
No response
Kubernetes CNI
No response
Operation-System/Kernel version
No response
Anything else
No response
Are you willing to submit a PR?
Code of Conduct