Description
📋 Prerequisites
- I have searched the existing issues to avoid creating a duplicate
- By submitting this issue, you agree to follow our Code of Conduct
- I am using the latest version of the software
- I have tried to clear cache/cookies or used incognito mode (if ui-related)
- I can consistently reproduce this issue
🎯 Affected Service(s)
Multiple services / System-wide issue
🚦 Impact/Severity
Minor inconvenience
🐛 Bug Description
When deploying kagent v0.7.13 with OpenTelemetry (OTel) tracing and logging enabled, traces are successfully exported to the configured OTLP endpoint, but logs are never sent to the OpenTelemetry collector.
No errors appear in the OTel collector logs, yet logs never arrive in the collector (and therefore never reach Loki).
Additionally, the kagent runtime repeatedly emits the following warning in the agent logs:
/.kagent/.venv/lib/python3.13/site-packages/opentelemetry/sdk/_events/__init__.py:53:
LogDeprecatedInitWarning: LogRecord init with `trace_id`, `span_id`, and/or `trace_flags` is deprecated since 1.35.0.
Use `context` instead.
log_record = LogRecord(
This suggests that kagent may be using a deprecated OpenTelemetry logging API that could be preventing proper log export.
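For reference, this is the constructor change the warning points at. A minimal sketch, assuming opentelemetry-sdk >= 1.35.0 and that the replacement `context` parameter takes the active OpenTelemetry Context; this is not kagent's actual code, only an illustration of the deprecated versus current form:

# Illustration only (not kagent code): the LogRecord constructor change that
# LogDeprecatedInitWarning complains about, assuming opentelemetry-sdk >= 1.35.0.
from opentelemetry import context as otel_context
from opentelemetry import trace
from opentelemetry._logs.severity import SeverityNumber
from opentelemetry.sdk._logs import LogRecord

span_ctx = trace.get_current_span().get_span_context()

# Deprecated since 1.35.0 -- passing the individual trace fields triggers the warning:
deprecated_record = LogRecord(
    trace_id=span_ctx.trace_id,
    span_id=span_ctx.span_id,
    trace_flags=span_ctx.trace_flags,
    severity_number=SeverityNumber.INFO,
    body="example log line",
)

# Current form -- pass the active Context instead of the individual trace fields:
current_record = LogRecord(
    context=otel_context.get_current(),
    severity_number=SeverityNumber.INFO,
    body="example log line",
)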
Configuration Used
otel:
  tracing:
    enabled: true
    exporter:
      otlp:
        endpoint: http://obs-agent-ai-signals.observability.svc.cluster.local:4317
        timeout: 15
        insecure: true
  logging:
    enabled: true
    exporter:
      otlp:
        endpoint: http://obs-agent-ai-signals.observability.svc.cluster.local:4317
        timeout: 15
        insecure: true

$ kubectl get svc obs-agent-ai-signals -n observability
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
obs-agent-ai-signals ClusterIP 172.17.236.35 <none> 12345/TCP,4317/TCP,4318/TCP 60m
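As a cross-check of the endpoint and exporter settings above, a standalone script along these lines can push a single log record over OTLP/gRPC to the same service, independently of kagent. This is only a sketch, assuming the stock opentelemetry-sdk and opentelemetry-exporter-otlp-proto-grpc packages are available in a pod inside the cluster; the service name otlp-log-smoke-test is made up for illustration:

# Standalone smoke test (not kagent code): emit one log record over OTLP/gRPC
# to the same endpoint kagent is configured with, to isolate the collector path.
import logging

from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

provider = LoggerProvider(
    resource=Resource.create({"service.name": "otlp-log-smoke-test"})  # hypothetical name
)
provider.add_log_record_processor(
    BatchLogRecordProcessor(
        OTLPLogExporter(
            endpoint="obs-agent-ai-signals.observability.svc.cluster.local:4317",
            insecure=True,
        )
    )
)

# Route stdlib logging through the OTel handler and emit one test record.
logger = logging.getLogger("otlp-smoke-test")
logger.addHandler(LoggingHandler(level=logging.INFO, logger_provider=provider))
logger.warning("otlp log export smoke test")

# Flush the batch processor before exiting so the record is actually sent.
provider.shutdown()

If that record shows up in Alloy (and Loki), the collector path is fine and the drop is likely on the kagent side; if it does not, the problem is in the pipeline rather than in kagent.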
🔄 Steps To Reproduce
1. Deploy kagent v0.7.13 in a Kubernetes cluster.
2. Enable OpenTelemetry tracing and logging using the configuration above.
3. Confirm that traces are successfully received by the OpenTelemetry collector.
4. Interact with the agent and observe kagent logs in the pod.
5. Check the OpenTelemetry collector and downstream Loki for logs from kagent.
🤔 Expected Behavior
Logs should be exported to the configured OTLP endpoint and arrive in the collector (and downstream Loki), just as traces do.
📱 Actual Behavior
OTel tracing works, but logs never reach the collector.
💻 Environment
- Kagent version: v0.7.13
- Kubernetes cluster version: v1.34.3
- Kubernetes provider: kubeadm
🔧 CLI Bug Report
No response
🔍 Additional Context
I followed these docs: https://kagent.dev/docs/kagent/observability/audit-prompts
The main differences were:
- We already had a Tempo and Loki installation up and running.
- We didn't use an OTel collector like the example; instead we used Grafana Alloy with the otelcol.receiver.otlp component (which is basically a wrapper around the OTel Collector).
- For reference, the full Alloy config is the following:
otelcol.receiver.otlp "otlp" {
http {}
grpc {}
output {
logs= [otelcol.processor.attributes.default.input]
traces= [otelcol.processor.filter.drop_health_checks.input]
}
}
otelcol.exporter.otlphttp "loki" {
client {
endpoint = "http://obs-agent-logs.monitoring-system.svc.cluster.local:4317"
tls {
insecure = true
insecure_skip_verify = true
}
}
}
otelcol.exporter.otlp "traces" {
client {
endpoint = "http://obs-agent-traces.monitoring-system.svc.cluster.local:4317"
tls {
insecure = true
insecure_skip_verify = true
}
}
}
📋 Logs
/.kagent/.venv/lib/python3.13/site-packages/opentelemetry/sdk/_events/__init__.py:53:
LogDeprecatedInitWarning: LogRecord init with `trace_id`, `span_id`, and/or `trace_flags` is deprecated since 1.35.0.
Use `context` instead.
log_record = LogRecord(
📷 Screenshots
No response
🙋 Are you willing to contribute?
- I am willing to submit a PR to fix this issue