diff --git a/docs/shipping/Code/dotnet-traces-kafka.md b/docs/shipping/Code/dotnet-traces-kafka.md
index 908d06a3..2110c454 100644
--- a/docs/shipping/Code/dotnet-traces-kafka.md
+++ b/docs/shipping/Code/dotnet-traces-kafka.md
@@ -74,7 +74,7 @@ builder.Services.AddOpenTelemetry()
 var app = builder.Build();
 ```
 
-You can configure the otlp endpoint and protocol using envurinment variables:
+You can configure the OTLP endpoint and protocol using environment variables:
 
-- OTEL_EXPORTER_OTLP_ENDPOINT (The destination of your otel collector 'http://localhost:4318`)
-- OTEL_EXPORTER_OTLP_PROTOCOL (`grpc` or `http/protubuf`)
+- OTEL_EXPORTER_OTLP_ENDPOINT (The destination of your otel collector, e.g. `http://localhost:4318`)
+- OTEL_EXPORTER_OTLP_PROTOCOL (`grpc` or `http/protobuf`)
@@ -180,7 +180,7 @@ It will deploy a small pod that extracts your cluster domain name from your Kube
 
 ### Configure your .NET Kafka application to send spans to `logzio-monitoring`
 
-You can configure the otlp endpoint and protocol using envurinment variables:
+You can configure the OTLP endpoint and protocol using environment variables:
 
 - OTEL_EXPORTER_OTLP_ENDPOINT (The destination of your otel collector `<>`)
-- OTEL_EXPORTER_OTLP_PROTOCOL (`grpc` or `http/protubuf`)
+- OTEL_EXPORTER_OTLP_PROTOCOL (`grpc` or `http/protobuf`)
diff --git a/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting.md b/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting.md
index a99a71b4..36165b02 100644
--- a/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting.md
+++ b/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting.md
@@ -10,6 +10,173 @@ slug: /distributed-tracing/troubleshooting/tracing-troubleshooting/
 
 Setting up a Distributed Tracing account might require some additional help. In this doc, we'll walk through troubleshooting some common issues.
 
+## Post-install troubleshooting
+
+There are a few common issues that may prevent data from reaching Logz.io. Use the following checklist to validate your setup:
+
+**1. No traces showing in the UI**
+
+Even if installation succeeded, you might not see any trace data. To troubleshoot:
+
+* Confirm that your application is sending traces to the correct collector endpoint.
+
+* Check the collector/agent logs for connection errors or dropped spans.
+
+* Ensure that the correct token is configured for the tracing account.
+
+**2. Missing service or operation names**
+
+If the UI doesn't show services or operations in the dropdown menus:
+
+* Restart the collector to re-send metadata (especially in OpenTelemetry setups).
+
+* Verify that your instrumentation sets the `service.name` resource attribute.
+
+* Search for recent trace logs in OpenSearch Dashboards to confirm data is reaching Logz.io.
+
+**3. Metrics dashboards showing "No data"**
+
+If you're using Service Performance Monitoring and dashboards show no results:
+
+* Make sure your metrics account is configured to receive tracing-related metrics (like `calls_total`).
+
+* Confirm that the collector is configured to export metrics as well as traces.
+
+**4. Use debug mode to test your setup**
+
+If you're still unsure whether trace data is being collected:
+
+* Temporarily enable a debug exporter in your collector config to log incoming spans locally.
+
+* Run a trace from your application and check the collector output.
+
+This helps confirm that data is generated and received by the collector before it is shipped to Logz.io.
+
+**5. Missing data after installing the chart**
+
+If you've installed the Logz.io Helm chart for tracing but no data appears in the UI:
+
+* Check that the collector pods are running and healthy (see the commands after this list).
+
+* Ensure your instrumented applications are correctly configured to send traces to the OTLP endpoint exposed by the chart (e.g., `otel-collector.monitoring.svc.cluster.local:4317`).
+
+* Verify that your token and endpoint are set correctly in your app's environment variables.
+
+* Use the debug exporter to confirm data is reaching the collector.
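+
+As a quick sketch of that first check, you can inspect collector health with `kubectl`. The `monitoring` namespace and the `logzio-monitoring-otel-collector` Deployment name below are assumptions; adjust them to match your release:
+
+```shell
+# List the collector pods and confirm they are Running and Ready
+kubectl get pods -n monitoring
+
+# Scan recent collector logs for export errors or dropped spans
+kubectl logs -n monitoring deployment/logzio-monitoring-otel-collector --tail=100
+
+# Look for restarts or scheduling problems on a specific pod
+# (replace <POD-NAME> with a pod name from the first command)
+kubectl describe pod <POD-NAME> -n monitoring
+```
+
+A wrong shipping token typically surfaces in these logs as authentication or export errors, which makes this a fast first check before deeper debugging.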
+
+## Routing data to the collector
+
+When tracing data doesn't appear in your account, one of the most common root causes is misconfigured instrumentation or data that never reaches the collector. Before diving into deeper debugging steps, verify that your application is correctly set up to export tracing data.
+
+Your instrumented application needs to send telemetry data (traces, metrics, or logs) to the OpenTelemetry Collector using the OTLP protocol.
+
+Set environment variables to configure the destination collector and other tracing options.
+
+If your collector runs on the same host or as a sidecar, set the OTLP endpoint to localhost, for example with Dockerfile `ENV` directives:
+
+```dockerfile
+ENV OTEL_TRACES_SAMPLER=always_on
+ENV OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
+ENV OTEL_RESOURCE_ATTRIBUTES="service.name=<YOUR-SERVICE-NAME>"
+```
+
+If you're running your app on ECS, define the environment variables in your task definition:
+
+```json
+"environment": [
+  {
+    "name": "OTEL_TRACES_SAMPLER",
+    "value": "always_on"
+  },
+  {
+    "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
+    "value": "http://localhost:4317"
+  },
+  {
+    "name": "OTEL_RESOURCE_ATTRIBUTES",
+    "value": "service.name=<YOUR-SERVICE-NAME>"
+  }
+]
+```
+
+Replace `<YOUR-SERVICE-NAME>` with the name you want to appear in the Tracing UI.
+
+If the collector runs in another service or namespace, set the OTLP endpoint to its DNS name or internal IP:
+
+```dockerfile
+ENV OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.monitoring.svc.cluster.local:4317"
+```
+
+Make sure the collector is reachable and exposes the OTLP receiver on that port.
+
+## Debug mode for instrumentation and collector
+
+To enable debug mode for the OpenTelemetry Operator, add the `OTEL_LOG_LEVEL` environment variable with the value `debug`.
+
+### Enable debug mode for a single pod
+
+To enable debug mode for a specific pod, add the following environment variable directly to its spec:
+
+```yaml
+spec:
+  template:
+    spec:
+      containers:
+        - name: "<CONTAINER-NAME>"
+          env:
+            - name: OTEL_LOG_LEVEL
+              value: "debug"
+```
+
+### Enable debug mode for all instrumented pods
+
+To apply debug mode to all pods instrumented by the OpenTelemetry Operator, update your Logz.io Helm chart with the following configuration, replacing `<LANGUAGE>` with your application's programming language:
+
+```yaml
+instrumentation:
+  <LANGUAGE>:
+    extraEnv:
+      - name: OTEL_LOG_LEVEL
+        value: "debug"
+```
+
+:::tip
+`<LANGUAGE>` can be one of `dotnet`, `java`, `nodejs`, or `python`.
+:::
+
+:::caution
+Enabling debug mode generates highly verbose logs. It is recommended to enable it per pod rather than for all pods.
+:::
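+
+If you prefer not to edit manifests while testing, here is a minimal sketch that toggles the variable on a single workload with `kubectl` and confirms it was injected; the Deployment name `my-app` is a placeholder:
+
+```shell
+# Set OTEL_LOG_LEVEL=debug on every container in the Deployment (triggers a new rollout)
+kubectl set env deployment/my-app OTEL_LOG_LEVEL=debug
+
+# Confirm the variable is visible inside the restarted pod
+kubectl exec deploy/my-app -- env | grep OTEL_LOG_LEVEL
+
+# Remove the variable again once you're done debugging
+kubectl set env deployment/my-app OTEL_LOG_LEVEL-
+```
+
+Because `kubectl set env` updates the pod template, the change rolls out automatically and the new pods pick up the variable on startup.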
+
+### Debugging with the Collector
+
+To troubleshoot data routing issues in your OpenTelemetry Collector, you can use the built-in debug exporter. This exporter prints telemetry data directly to the console or logs, making it easy to validate what the collector is receiving.
+
+To enable it, add this to your Collector config:
+
+```yaml
+exporters:
+  debug:
+    verbosity: detailed # or: basic, normal
+    sampling_initial: 5
+    sampling_thereafter: 200
+```
+
+Then attach it to the relevant pipeline:
+
+```yaml
+service:
+  pipelines:
+    traces:
+      receivers: [otlp]
+      exporters: [debug]
+```
+
+With `verbosity: detailed`, you'll get full trace/span information, which is useful for checking attributes like `service.name`, the trace ID, and `span.kind`, and for verifying routing logic.
+
+This debug output is for local validation only and is not intended for production use.
+
+For more, see the [OpenTelemetry Collector Debug Exporter README](https://github.com/open-telemetry/opentelemetry-collector/blob/v0.111.0/exporter/debugexporter/README.md).
 
 ## Distributed Tracing is not showing data in Service/Operations dropdown lists