diff --git a/troubleshoot/ingest/opentelemetry/429-errors-motlp.md b/troubleshoot/ingest/opentelemetry/429-errors-motlp.md index 25d05751ba..546c3ae6d4 100644 --- a/troubleshoot/ingest/opentelemetry/429-errors-motlp.md +++ b/troubleshoot/ingest/opentelemetry/429-errors-motlp.md @@ -51,7 +51,7 @@ A 429 status means that the rate of requests sent to the Managed OTLP endpoint h Refer to the [Rate limiting section](opentelemetry://reference/motlp.md#rate-limiting) in the mOTLP reference documentation for details. * In {{ech}}, the {{es}} capacity for your deployment might be underscaled for the current ingest rate. -* In {{serverless-full}}, rate limiting should not result from {{es}} capacity, since the platform automatically scales ingest capacity. If you suspect a scaling issue, [contact Elastic Support](contact-support.md). +* In {{serverless-full}}, rate limiting should not result from {{es}} capacity, since the platform automatically scales ingest capacity. If you suspect a scaling issue, [contact Elastic Support](/troubleshoot/ingest/opentelemetry/contact-support.md). * Multiple Collectors or SDKs are sending data concurrently without load balancing or backoff mechanisms. ## Resolution @@ -62,7 +62,7 @@ To resolve 429 errors, identify whether the bottleneck is caused by ingest limit If you’ve confirmed that your ingest configuration is stable but still encounter 429 errors: -* {{serverless-full}}: [Contact Elastic Support](contact-support.md) to request an increase in ingest limits. +* {{serverless-full}}: [Contact Elastic Support](/troubleshoot/ingest/opentelemetry/contact-support.md) to request an increase in ingest limits. 
* {{ech}} (ECH): Increase your {{es}} capacity by scaling or resizing your deployment: * [Scaling considerations](../../../deploy-manage/production-guidance/scaling-considerations.md) * [Resize deployment](../../../deploy-manage/deploy/cloud-enterprise/resize-deployment.md) @@ -106,7 +106,7 @@ exporters: enabled: true ``` -This ensures the Collector buffers data locally while waiting for the ingest endpoint to recover from throttling. +This ensures the Collector buffers data locally while waiting for the ingest endpoint to recover from throttling. For more information on export failures and queue configuration, refer to [Export failures when sending telemetry data](/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors.md). ## Best practices diff --git a/troubleshoot/ingest/opentelemetry/connectivity.md b/troubleshoot/ingest/opentelemetry/connectivity.md index 6d2aa988ad..c548743fe6 100644 --- a/troubleshoot/ingest/opentelemetry/connectivity.md +++ b/troubleshoot/ingest/opentelemetry/connectivity.md @@ -2,7 +2,7 @@ navigation_title: Connectivity issues description: Troubleshoot connectivity issues between EDOT SDKs, the EDOT Collector, and Elastic. applies_to: - serverless: all + serverless: ga product: edot_collector: ga products: @@ -75,14 +75,14 @@ Connectivity errors usually trace back to one of the following issues: Errors can look similar whether they come from an SDK or the Collector. Identifying the source helps you isolate the problem. :::{note} -Note: Some SDKs support setting a proxy directly (for example, using `HTTPS_PROXY`). Refer to [Proxy settings for EDOT SDKs](../opentelemetry/edot-sdks/proxy.md) for details. +Some SDKs support setting a proxy directly (for example, using `HTTPS_PROXY`). Refer to [Proxy settings for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md) for details. ::: #### SDK Application logs report failures when the SDK cannot send data to the Collector or directly to Elastic.
These often appear as `connection refused` or `timeout` messages. If seen, verify that the Collector endpoint is reachable. -For guidance on enabling logs in your SDK, see [Enable SDK debug logging](../opentelemetry/edot-sdks/enable-debug-logging.md). +For guidance on enabling logs in your SDK, refer to [Enable SDK debug logging](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). Example (Java SDK): @@ -154,6 +154,6 @@ If basic checks and configuration look correct but issues persist, collect more * Review proxy settings. For more information, refer to [Proxy settings](opentelemetry://reference/edot-collector/config/proxy.md). -* If ports are confirmed open but errors persist, [enable debug logging in the SDK](../opentelemetry/edot-sdks/enable-debug-logging.md) or [in the Collector](../opentelemetry/edot-collector/enable-debug-logging.md) for more detail. +* If ports are confirmed open but errors persist, [enable debug logging in the SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md) or [in the Collector](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md) for more detail. * Contact your network administrator with test results if you suspect firewall restrictions. \ No newline at end of file diff --git a/troubleshoot/ingest/opentelemetry/contact-support.md b/troubleshoot/ingest/opentelemetry/contact-support.md index 57a4a779ec..b7a1178d92 100644 --- a/troubleshoot/ingest/opentelemetry/contact-support.md +++ b/troubleshoot/ingest/opentelemetry/contact-support.md @@ -77,7 +77,7 @@ To help Elastic Support investigate the problem efficiently, please include the ### Logs and diagnostics -* Recent Collector logs with relevant errors or warning messages +* Recent Collector logs with relevant errors or warning messages. 
For guidance on enabling debug logging, refer to [Enable debug logging for the EDOT Collector](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md) or [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). * Output from: ```bash @@ -92,7 +92,7 @@ To help Elastic Support investigate the problem efficiently, please include the ### Data and UI symptoms -* Are traces, metrics, or logs missing from the UI? +* Are traces, metrics, or logs missing from the UI? For troubleshooting steps, refer to [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md) or [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md). * Are you using the [Elastic Managed OTLP endpoint](https://www.elastic.co/docs/observability/apm/otel/managed-otel-ingest/)? * If data is missing or incomplete, consider enabling the [debug exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/debugexporter/README.md) to inspect the raw signal data emitted by the Collector. diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/collector-not-starting.md b/troubleshoot/ingest/opentelemetry/edot-collector/collector-not-starting.md index 3fee510ead..feaf86ab8b 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/collector-not-starting.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/collector-not-starting.md @@ -66,7 +66,7 @@ If you're deploying the EDOT Collector in a standalone configuration, try to: ./otelcol --set=service.telemetry.logs.level=debug ``` - This is especially helpful for diagnosing configuration parsing issues or startup errors. + This is especially helpful for diagnosing configuration parsing issues or startup errors. For more information on enabling debug logging, refer to [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md). 
* Confirm required components are defined @@ -95,7 +95,7 @@ If you're deploying the EDOT Collector in a standalone configuration, try to: lsof -i :4317 ``` - If needed, adjust your configuration or free up the port. + If needed, adjust your configuration or free up the port. For network connectivity issues, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md). ### Kubernetes EDOT Collector @@ -117,6 +117,8 @@ If you're deploying the EDOT Collector using the Elastic Helm charts, try to: Common issues include volume mount errors, image pull failures, or misconfigured environment variables. +If the Collector starts but no data appears in {{kib}}, refer to [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md) for additional troubleshooting steps. + ## Resources * [Collector configuration documentation](https://opentelemetry.io/docs/collector/configuration/) diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/collector-oomkilled.md b/troubleshoot/ingest/opentelemetry/edot-collector/collector-oomkilled.md index d5f93cb0e8..555c2de8dd 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/collector-oomkilled.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/collector-oomkilled.md @@ -17,6 +17,8 @@ products: If your EDOT Collector pods terminate with an `OOMKilled` status, this usually indicates sustained memory pressure or potentially a memory leak due to an introduced regression or a bug. You can use the Performance Profiler (`pprof`) extension to collect and analyze memory profiles, helping you identify the root cause of the issue. +If you're running the Collector in Kubernetes and experiencing resource allocation issues, refer to [Insufficient resources in Kubernetes](/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md) for troubleshooting steps. 
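The `pprof` extension mentioned in the added intro paragraph can be turned on with a minimal Collector configuration sketch like the following (the bind address and port are illustrative; adjust to where you can reach the pod for diagnostics):

```yaml
extensions:
  pprof:
    # Bind locally; port-forward or exec into the pod to reach it
    endpoint: 127.0.0.1:1777

service:
  extensions: [pprof]
```

With the extension active, a heap profile can then be pulled from the affected pod, for example with `go tool pprof http://127.0.0.1:1777/debug/pprof/heap`.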
+ ## Symptoms These symptoms typically indicate that the EDOT Collector is experiencing a memory-related failure: @@ -25,6 +27,8 @@ These symptoms typically indicate that the EDOT Collector is experiencing a memo - Memory usage steadily increases before the crash. - The Collector's logs don't show clear errors before termination. +For more detailed diagnostics, refer to [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md). + ## Resolution Turn on runtime profiling using the `pprof` extension and then gather memory heap profiles from the affected pod: diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md b/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md index fa94469390..784cbcb67f 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md @@ -88,4 +88,4 @@ Debug logging for the Collector is not currently configurable through {{fleet}}. ## Resources -To learn how to enable debug logging for the EDOT SDKs, refer to [Enable debug logging for EDOT SDKs](../edot-sdks/enable-debug-logging.md). +To learn how to enable debug logging for the EDOT SDKs, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/index.md b/troubleshoot/ingest/opentelemetry/edot-collector/index.md index e67c50d8f0..96a5c105a0 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/index.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/index.md @@ -13,18 +13,36 @@ products: # Troubleshoot the EDOT Collector -Perform these checks when troubleshooting common Collector issues: - -* Check logs: Review the Collector’s logs for error messages. -* Validate configuration: Use the `--dry-run` option to test configurations. 
-* Enable debug logging: Run the Collector with `--log-level=debug` for detailed logs. -* Check service status: Ensure the Collector is running with `systemctl status ` (Linux) or `tasklist` (Windows). -* Test connectivity: Use `telnet ` or `curl` to verify backend availability. -* Check open ports: Run netstat `-tulnp or lsof -i` to confirm the Collector is listening. -* Monitor resource usage: Use top/htop (Linux) or Task Manager (Windows) to check CPU & memory. -* Validate exporters: Ensure exporters are properly configured and reachable. -* Verify pipelines: Use `otelctl` diagnose (if available) to check pipeline health. -* Check permissions: Ensure the Collector has the right file and network permissions. -* Review recent changes: Roll back recent config updates if the issue started after changes. - -For in-depth details on troubleshooting refer to the [OpenTelemetry Collector troubleshooting documentation](https://opentelemetry.io/docs/collector/troubleshooting/). \ No newline at end of file +Use the topics in this section to troubleshoot issues with the EDOT Collector. + +If you're not sure where to start, review the Collector's logs for error messages and validate your configuration using the `--dry-run` option. For more detailed diagnostics, refer to [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md). + +## Resource issues + +* [Collector out of memory](/troubleshoot/ingest/opentelemetry/edot-collector/collector-oomkilled.md): Diagnose and resolve out-of-memory issues in the EDOT Collector using Go's Performance Profiler. + +* [Insufficient resources in {{k8s}}](/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md): Troubleshoot resource allocation issues when running the EDOT Collector in {{k8s}} environments. 
+ +## Configuration issues + +* [Collector doesn't start](/troubleshoot/ingest/opentelemetry/edot-collector/collector-not-starting.md): Resolve startup failures caused by invalid configuration, port conflicts, or missing components. + +* [Missing or incomplete traces due to Collector sampling](/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md): Troubleshoot missing or incomplete traces caused by sampling configuration. + +* [Collector doesn't propagate client metadata](/troubleshoot/ingest/opentelemetry/edot-collector/metadata.md): Learn why the Collector doesn't extract custom attributes and how to propagate such values using EDOT SDKs. + +## Connectivity and export issues + +* [Export failures when sending telemetry data](/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors.md): Resolve export failures caused by `sending_queue` overflow and {{es}} exporter timeouts. + +## Debugging + +* [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md): Learn how to enable debug logging for the EDOT Collector in supported environments. + +## See also + +* [EDOT SDKs troubleshooting](/troubleshoot/ingest/opentelemetry/edot-sdks/index.md): For end-to-end issues that may involve both the Collector and SDKs. + +* [Troubleshoot EDOT](/troubleshoot/ingest/opentelemetry/index.md): Overview of all EDOT troubleshooting resources. + +For in-depth details on troubleshooting, refer to the contrib [OpenTelemetry Collector troubleshooting documentation](https://opentelemetry.io/docs/collector/troubleshooting/). 
diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md b/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md index 629b092551..a38ca4e6fc 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md @@ -25,6 +25,8 @@ These symptoms are common when the Kube-Stack chart is deployed with insufficien - Cluster or Daemon pods are unable to export data to the Gateway collector due being `OOMKilled` (high memory usage). - Pods have logs similar to: `error internal/queue_sender.go:128 Exporting failed. Dropping data.` +For detailed diagnostics on OOMKilled issues, refer to [Collector out of memory](/troubleshoot/ingest/opentelemetry/edot-collector/collector-oomkilled.md). For more information on enabling debug logging, refer to [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md). + ## Resolution Follow these steps to resolve the issue. diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/metadata.md b/troubleshoot/ingest/opentelemetry/edot-collector/metadata.md index 002df51ada..40a13c6ddd 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/metadata.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/metadata.md @@ -62,7 +62,7 @@ This will not work, as the Collector doesn't automatically extract such values f ## Resolution -If you want to propagate customer IDs or project names into spans or metrics, you must instrument this in your code using one of the SDKs. +If you want to propagate customer IDs or project names into spans or metrics, you must instrument this in your code using one of the SDKs. For SDK-specific troubleshooting guidance, refer to [EDOT SDKs troubleshooting](/troubleshoot/ingest/opentelemetry/edot-sdks/index.md). 
Use `span.set_attribute` in your application code, where OpenTelemetry spans are created. For example: diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md b/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md index c8ddfd7614..dc2f2b847e 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md @@ -2,7 +2,7 @@ navigation_title: Collector sampling issues description: Learn how to troubleshoot missing or incomplete traces in the EDOT Collector caused by sampling configuration. applies_to: - serverless: all + serverless: ga product: edot_collector: ga products: @@ -12,11 +12,11 @@ products: # Missing or incomplete traces due to Collector sampling -If traces or spans are missing in {{kib}}, the issue might be related to the Collector’s sampling configuration. +If traces or spans are missing in {{kib}}, the issue might be related to the Collector's sampling configuration. For general troubleshooting when no data appears in {{kib}}, refer to [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md). {applies_to}`stack: ga 9.2` Tail-based sampling (TBS) allows the Collector to evaluate entire traces before deciding whether to keep them. If TBS policies are too strict or not aligned with your workloads, traces you expect to see may be dropped. -Both Collector-based and SDK-level sampling can lead to gaps in telemetry if not configured correctly. See [Missing or incomplete traces due to SDK sampling](../edot-sdks/misconfigured-sampling-sdk.md) for more information. +Both Collector-based and SDK-level sampling can lead to gaps in telemetry if not configured correctly. Refer to [Missing or incomplete traces due to SDK sampling](/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md) for more information. 
## Symptoms @@ -79,4 +79,4 @@ Follow these steps to resolve sampling configuration issues: - [Tail sampling processor (Collector)](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor) - [OpenTelemetry sampling concepts - contrib documentation](https://opentelemetry.io/docs/concepts/sampling/) -- [Missing or incomplete traces due to SDK sampling](../edot-sdks/misconfigured-sampling-sdk.md) \ No newline at end of file +- [Missing or incomplete traces due to SDK sampling](/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md) \ No newline at end of file diff --git a/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors.md b/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors.md index ca63be3900..c3bd8a9d75 100644 --- a/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors.md +++ b/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors.md @@ -2,7 +2,7 @@ navigation_title: Export errors from the EDOT Collector description: Learn how to resolve export failures caused by `sending_queue` overflow and Elasticsearch exporter timeouts in the EDOT Collector. applies_to: - serverless: all + serverless: ga product: edot_collector: ga products: @@ -14,6 +14,8 @@ products: During high traffic or load testing scenarios, the EDOT Collector might fail to export telemetry data (traces, metrics, or logs) to {{es}}. This typically happens when the internal queue for outgoing data fills up faster than it can be drained, resulting in timeouts and dropped data. +If you're experiencing network connectivity issues, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md). If no data appears in {{kib}}, refer to [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md). 
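The queue and retry behavior described above is controlled by the exporter's `sending_queue` and `retry_on_failure` settings. A minimal sketch follows, assuming an OTLP exporter; the endpoint and values are illustrative and depend on your pipeline:

```yaml
exporters:
  otlp:
    # Illustrative endpoint; replace with your actual ingest endpoint
    endpoint: https://example-ingest.elastic.example:443
    sending_queue:
      enabled: true
      queue_size: 5000        # items buffered before data is dropped
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s  # give up after this total retry window
```

A larger `queue_size` absorbs bursts at the cost of memory; tune it together with the Collector's resource limits.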
+ ## Symptoms You might see one or more of the following messages in the EDOT Collector logs: @@ -90,6 +92,8 @@ For a complete list of available metrics, refer to the upstream OpenTelemetry me * Ensure sufficient CPU and memory for the EDOT Collector. * Scale vertically (more resources) or horizontally (more replicas) as needed. + +For Kubernetes deployments, refer to [Insufficient resources in Kubernetes](/troubleshoot/ingest/opentelemetry/edot-collector/insufficient-resources-kubestack.md) for detailed resource configuration guidance. :::: ::::{step} Optimize Elasticsearch performance @@ -105,6 +109,8 @@ Focus tuning efforts on {{es}} performance, Collector resource allocation, and q ::: +For more detailed diagnostics, refer to [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md) to troubleshoot export failures. + ## Resources * [Upstream documentation - OpenTelemetry Collector configuration](https://opentelemetry.io/docs/collector/configuration) diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/android/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/android/index.md index e090ce7eb8..033ea6fbad 100644 --- a/troubleshoot/ingest/opentelemetry/edot-sdks/android/index.md +++ b/troubleshoot/ingest/opentelemetry/edot-sdks/android/index.md @@ -25,11 +25,11 @@ If you have an Elastic support contract, create a ticket in the [Elastic Support The SDK creates logs that allow you to see what it's working on and what might have failed at some point. You can find the logs in [logcat](https://developer.android.com/studio/debug/logcat), filtered by the tag `ELASTIC_AGENT`. -For more information about the SDK's internal logs, as well as how to configure them, refer to the [internal logging policy](apm-agent-android://reference/edot-android/configuration.md#internal-logging-policy) configuration. 
+For more information about the SDK's internal logs, as well as how to configure them, refer to the [internal logging policy](apm-agent-android://reference/edot-android/configuration.md#internal-logging-policy) configuration. For details on enabling debug logging, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). ## Connectivity to the {{stack}} -If after following the [getting started](apm-agent-android://reference/edot-android/getting-started.md) guide and configuring your {{stack}} [endpoint parameters](apm-agent-android://reference/edot-android/configuration.md#export-connectivity), you can't see your application's data in {{kib}}, you can follow the following tips to try and figure out what could be wrong. +If after following the [getting started](apm-agent-android://reference/edot-android/getting-started.md) guide and configuring your {{stack}} [endpoint parameters](apm-agent-android://reference/edot-android/configuration.md#export-connectivity), you can't see your application's data in {{kib}}, follow these tips to figure out what could be wrong. For more detailed connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md). If telemetry data isn't appearing in {{kib}}, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md) or [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md).

### Check out the logs diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/dotnet/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/dotnet/index.md index 499fe8106d..5668b176e5 100644 --- a/troubleshoot/ingest/opentelemetry/edot-sdks/dotnet/index.md +++ b/troubleshoot/ingest/opentelemetry/edot-sdks/dotnet/index.md @@ -21,7 +21,7 @@ If you have an Elastic support contract, create a ticket in the [Elastic Support ## Obtain EDOT .NET diagnostic logs -For most problems, such as when you don't see data in your Elastic Observability backend, first check the EDOT .NET logs. These logs show initialization details and OpenTelemetry SDK events. If you don't see any warnings or errors in the EDOT .NET logs, switch the log level to `Trace` to investigate further. +For most problems, such as when you don't see data in your {{product.observability}} backend, first check the EDOT .NET logs. These logs show initialization details and OpenTelemetry SDK events. If you don't see any warnings or errors in the EDOT .NET logs, switch the log level to `Trace` to investigate further. For more information on enabling debug logging, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). If telemetry data isn't appearing in {{kib}}, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md). The {{edot}} .NET includes built-in diagnostic logging. You can direct logs to a file, STDOUT, or, in common scenarios, an `ILogger` instance. EDOT .NET also observes the built-in diagnostics events from the contrib OpenTelemetry SDK and includes those in its logging output. You can collect the log output and use it to diagnose issues locally during development or when working with Elastic support channels. 
@@ -74,7 +74,7 @@ In the preceding code, you have filtered `Elastic.OpenTelemetry` to only emit lo ## Enable global file logging -Integrated logging is helpful because it requires little to no setup. The logging infrastructure is not present by default in some application types, such as console applications. EDOT .NET also offers a global file logging feature, which is the easiest way for you to get diagnostics and debug information. You must enable file logging when you work with Elastic support, as trace logs will be requested. +Integrated logging is helpful because it requires little to no setup. The logging infrastructure is not present by default in some application types, such as console applications. EDOT .NET also offers a global file logging feature, which is the easiest way for you to get diagnostics and debug information. You must enable file logging when you work with Elastic support, as trace logs will be requested. For more details, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). Specify at least one of the following environment variables to make sure that EDOT .NET logs into a file. diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md b/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md index fa7f7ac1fc..84a02c849c 100644 --- a/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md +++ b/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md @@ -189,4 +189,4 @@ Disable diagnostic collection when you're done by unsetting the variable or rest ## Resources -To learn how to enable debug logging for the EDOT Collector, refer to [Enable debug logging for EDOT Collector](../edot-collector/enable-debug-logging.md). +To learn how to enable debug logging for the EDOT Collector, refer to [Enable debug logging for EDOT Collector](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md). 
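The EDOT .NET global file logging described above is driven by environment variables. A hedged sketch follows; the variable names are assumptions based on EDOT .NET's global logging documentation and the values are illustrative, so verify them against the reference docs before use:

```bash
# Assumed variable names; directory must exist and be writable by the app
export ELASTIC_OTEL_LOG_DIRECTORY=/var/log/edot
export ELASTIC_OTEL_LOG_LEVEL=Trace
dotnet run
```

Remember to unset these variables (or lower the level) once the trace logs have been collected, as `Trace` output is verbose.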
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/index.md index 3fb76f3b2f..647a383ec1 100644 --- a/troubleshoot/ingest/opentelemetry/edot-sdks/index.md +++ b/troubleshoot/ingest/opentelemetry/edot-sdks/index.md @@ -13,14 +13,40 @@ products: # Troubleshooting the EDOT SDKs -Find solutions to common issues with EDOT SDKs. +Find solutions to common issues with EDOT SDKs for various programming languages and platforms. -- [.NET](/troubleshoot/ingest/opentelemetry/edot-sdks/dotnet/index.md) -- [Java](/troubleshoot/ingest/opentelemetry/edot-sdks/java/index.md) -- [Node.js](/troubleshoot/ingest/opentelemetry/edot-sdks/nodejs/index.md) -- [PHP](/troubleshoot/ingest/opentelemetry/edot-sdks/php/index.md) -- [Python](/troubleshoot/ingest/opentelemetry/edot-sdks/python/index.md) +* [Android SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/android/index.md): Troubleshoot common problems affecting the {{product.edot-android}} SDK. + +* [.NET SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/dotnet/index.md): Troubleshoot common problems affecting the EDOT .NET SDK. + +* [iOS SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/ios/index.md): Troubleshoot common problems affecting the {{product.edot-ios}} agent. + +* [Java SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/java/index.md): Troubleshoot common problems affecting the EDOT Java agent, including connectivity, agent identification, and debugging. + +* [Node.js SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/nodejs/index.md): Troubleshoot issues using the EDOT Node.js SDK. + +* [PHP SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/php/index.md): Troubleshoot issues using the EDOT PHP agent. + +* [Python SDK](/troubleshoot/ingest/opentelemetry/edot-sdks/python/index.md): Troubleshoot issues using the EDOT Python agent.
+ +## Shared troubleshooting topics + +These guides apply to all EDOT SDKs: + +* [Enable debug logging](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md): Learn how to enable debug logging for EDOT SDKs to troubleshoot application-level instrumentation issues. + +* [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md): Diagnose lack of telemetry flow due to issues with EDOT SDKs. + +* [Proxy settings for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md): Configure proxy settings for EDOT SDKs when your application runs behind a proxy. + +* [Missing or incomplete traces due to SDK sampling](/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md): Troubleshoot missing or incomplete traces caused by SDK-level sampling configuration. + +## See also + +* [EDOT Collector troubleshooting](/troubleshoot/ingest/opentelemetry/edot-collector/index.md): For end-to-end issues that may involve both the Collector and SDKs. + +* [Troubleshoot EDOT](/troubleshoot/ingest/opentelemetry/index.md): Overview of all EDOT troubleshooting resources. :::{warning} -Avoid using EDOT SDKs alongside any other APM agent, including Elastic APM agents. Running multiple agents in the same application process may lead to unexpected behavior, conflicting instrumentation, or duplicated telemetry. +Avoid using EDOT SDKs alongside any other {{apm-agent}}, including Elastic {{product.apm}} agents. Running multiple agents in the same application process may lead to unexpected behavior, conflicting instrumentation, or duplicated telemetry. 
::: diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/ios/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/ios/index.md index 676387e294..1231a5e80e 100644 --- a/troubleshoot/ingest/opentelemetry/edot-sdks/ios/index.md +++ b/troubleshoot/ingest/opentelemetry/edot-sdks/ios/index.md @@ -24,7 +24,7 @@ When troubleshooting the EDOT iOS agent, ensure your app is compatible with the ## SDK fails to export data -If your app is running but no telemetry reaches Elastic, the SDK might be failing to send data to the configured endpoint. +If your app is running but no telemetry reaches Elastic, the SDK might be failing to send data to the configured endpoint. For connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md). If telemetry data isn't appearing in {{kib}}, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md) or [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md). ### Symptoms [symptoms-fail-to-export] @@ -157,7 +157,7 @@ If problems persist: * Review the [iOS SDK reference documentation](apm-agent-ios://reference/edot-ios/index.md). -* [Enable debug logging for the Collector](../../edot-collector/enable-debug-logging.md) and [the SDKs](../enable-debug-logging.md). +* [Enable debug logging for the Collector](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md) and [the SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md). ::::{important} **Upload your complete debug logs** to a service like [GitHub Gist](https://gist.github.com) so that we can analyze the problem. Logs should include everything from when the application starts up until the first request executes. 
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/java/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/java/index.md
index 5989d7b335..c6e7a69c4a 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/java/index.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/java/index.md
@@ -25,7 +25,7 @@ Make you have set a service name, for example `-Dotel.service.name=Service1` or
 
 ## Connectivity to endpoint
 
-Check from the host, VM, pod, container, or image running the app that connectivity is available to the Collector.
+Check from the host, VM, pod, container, or image running the app that connectivity is available to the Collector. For more detailed connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md).
 
 The following examples use a default URL, `http://127.0.0.1:4318/`, which you should replace with the endpoint you are using:
 
@@ -47,12 +47,14 @@ Determine if the issue is related to the agent by following these steps:
 
 ## Agent debug logging
 
-As debugging output is verbose and might produce noticeable overhead on the application, follow one of these strategies when you need logging: 
+As debugging output is verbose and might produce noticeable overhead on the application, follow one of these strategies when you need logging:
 
 - In case of a technical issue or exception with the agent, use [agent debugging](#agent-debugging).
 - If you need details on the captured data, use [per-signal debugging](#per-signal-debugging).
 
-In case of missing data, check first that the technology used in the application is supported in [OpenTelemetry Java Instrumentation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md) and in [EDOT Java](elastic-otel-java://reference/edot-java/supported-technologies.md).
+For more information, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md).
+
+In case of missing data, check first that the technology used in the application is supported in [OpenTelemetry Java Instrumentation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md) and in [EDOT Java](elastic-otel-java://reference/edot-java/supported-technologies.md). For more troubleshooting guidance, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md).
 
 ### Agent debugging
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/java/proxy-issues.md b/troubleshoot/ingest/opentelemetry/edot-sdks/java/proxy-issues.md
index 2e088c1818..35fce10936 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/java/proxy-issues.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/java/proxy-issues.md
@@ -15,7 +15,7 @@ products:
 
 # Proxy issues with EDOT Java SDK
 
-If your Java SDK sends telemetry but fails to communicate with the APM server, the issue might be due to missing or misconfigured proxy settings, which are required for outbound HTTP/S communication in some environments.
+If your Java SDK sends telemetry but fails to communicate with the APM server, the issue might be due to missing or misconfigured proxy settings, which are required for outbound HTTP/S communication in some environments. For general proxy configuration guidance, refer to [Proxy settings for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md). For connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md).
 
 ## Symptoms
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md b/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md
index e84db33de4..dd59b6824a 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md
@@ -2,9 +2,7 @@ navigation_title: SDK sampling issues
 description: Learn how to troubleshoot missing or incomplete traces in EDOT SDKs caused by head sampling configuration.
 applies_to:
-  serverless: all
-  product:
-    elastic-otel-sdk: ga
+  serverless: ga
 products:
   - id: observability
   - id: edot-sdk
@@ -12,9 +10,9 @@ products:
 
 # Missing or incomplete traces due to SDK sampling
 
-If traces or spans are missing in Kibana, the issue might be related to SDK-level sampling configuration. By default, SDKs use head-based sampling, meaning the decision to record or drop a trace is made when the trace is first created.
+If traces or spans are missing in {{kib}}, the issue might be related to SDK-level sampling configuration. By default, SDKs use head-based sampling, meaning the decision to record or drop a trace is made when the trace is first created. For general troubleshooting when no data appears in {{kib}}, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md) or [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md).
 
-Both SDK-level and Collector-based sampling can result in gaps in telemetry if misconfigured. Refer to [Missing or incomplete traces due to Collector sampling](../edot-collector/misconfigured-sampling-collector.md) for more details.
+Both SDK-level and Collector-based sampling can result in gaps in telemetry if misconfigured. Refer to [Missing or incomplete traces due to Collector sampling](/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md) for more details.
 
 ## Symptoms
 
@@ -73,7 +71,7 @@ Follow these steps to resolve SDK sampling configuration issues:
   - Head sampling can't evaluate the full trace context before making a decision.
   - For more control (for example "keep all errors, sample 10% of successes"), use Collector tail sampling.
 
-    For more information, refer to [Missing or incomplete traces due to Collector sampling](../edot-collector/misconfigured-sampling-collector.md).
+    For more information, refer to [Missing or incomplete traces due to Collector sampling](/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md).
   :::
 ::::
 
@@ -82,4 +80,4 @@ Follow these steps to resolve SDK sampling configuration issues:
 
 - [OTEL_TRACES_SAMPLER environment variable specifications](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#otel_traces_sampler)
 - [OpenTelemetry sampling concepts - contrib documentation](https://opentelemetry.io/docs/concepts/sampling/)
-- [Missing or incomplete traces due to Collector sampling](../edot-collector/misconfigured-sampling-collector.md)
+- [Missing or incomplete traces due to Collector sampling](/troubleshoot/ingest/opentelemetry/edot-collector/misconfigured-sampling-collector.md)
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md b/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md
index 4863dcfa16..01bc4e119e 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md
@@ -90,7 +90,7 @@ To fix the issue, try the following:
 
 Restart after changing any configuration. Some SDKs only read environment variables at startup.
 
-If telemetry is still missing, you can enable debug logging. Refer to [Enable debug logging for EDOT SDKs](enable-debug-logging.md) for guidance. Make sure to [verify that you're looking at the right logs](enable-debug-logging.md#verify-youre-looking-at-the-right-logs).
+If telemetry is still missing, you can enable debug logging. Refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md) for guidance. Make sure to [verify that you're looking at the right logs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md#verify-youre-looking-at-the-right-logs). If traces are missing due to sampling configuration, refer to [Missing or incomplete traces due to SDK sampling](/troubleshoot/ingest/opentelemetry/edot-sdks/misconfigured-sampling-sdk.md).
 
 ## Auto-instrumentation isn’t attaching [auto-instrumentation-not-attached]
 
@@ -127,7 +127,7 @@ Check the following:
 
 * **PHP:** Ensure the extension is loaded and restart PHP-FPM/Apache so bootstrap hooks are active. Refer to [PHP SDK setup](opentelemetry://reference/edot-sdks/php/setup/index.md).
 
-  If using Docker or Kubernetes confirm preloading flags or environment variables are placed where the actual process starts.
+  If using Docker or Kubernetes, confirm preloading flags or environment variables are placed where the actual process starts. For connectivity issues that might prevent telemetry from reaching the Collector, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md).
 
 ### Resolution [res-instrumentation]
 
@@ -174,3 +174,5 @@ To fix the issue, try the following:
 
 * **Retest with a minimal app**
   Strip down to core dependencies to rule out issues introduced by third-party libraries.
+
+If you're not seeing any telemetry data in {{kib}} at all, refer to [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md) for additional troubleshooting steps.
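The sampling guide above notes that head-based sampling decides at trace creation time, so a low ratio silently drops whole traces rather than individual spans. To make that concrete, here is a minimal sketch in the spirit of the OpenTelemetry `TraceIdRatioBased` sampler; the exact bit arithmetic is an illustrative assumption, not the SDKs' verbatim implementation:

```python
def head_sample(trace_id: int, ratio: float) -> bool:
    # Decide once, at trace start, using only the trace ID: keep the trace
    # when its low 63 bits fall below ratio * 2**63. Every span in the trace
    # inherits this decision, which is why a misconfigured ratio produces
    # entirely missing traces instead of partial ones.
    bound = int(ratio * (1 << 63))
    return (trace_id & ((1 << 63) - 1)) < bound
```

Because the decision is a pure function of the trace ID, all SDKs participating in a trace agree on it, but none of them can take the trace's outcome (such as an error) into account; that is what Collector tail sampling adds.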
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/nodejs/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/nodejs/index.md
index c59cb3a001..5b096ce874 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/nodejs/index.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/nodejs/index.md
@@ -27,7 +27,7 @@ Make sure you have set a service name set using `OTEL_SERVICE_NAME=my-service` o
 
 ## Check connectivity
 
-Check from the host, VM, pod, container running your application, that connectivity is available to the Collector. Run the following command:
+Check from the host, VM, pod, or container running your application that connectivity to the Collector is available. For more detailed connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md). Run the following command:
 
 ```bash
 curl -i $ELASTIC_OTLP_ENDPOINT \
@@ -85,9 +85,11 @@ node --import @elastic/opentelemetry-node my-app.js
 
 Turn on verbose diagnostic or debug logging from EDOT Node.js:
 
 1. Set the `OTEL_LOG_LEVEL` environment variable to `verbose`.
-2. Restart your application, and reproduce the issue. If the issue is about not seeing telemetry that you expect to see, be sure to use your application so that telemetry data is generated.
+2. Restart your application and reproduce the issue. If the issue is about not seeing telemetry that you expect to see, be sure to use your application so that telemetry data is generated. For troubleshooting missing telemetry, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md).
 3. Gather the full verbose log from application start until after the issue was reproduced.
 
+For more information, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md).
+
 
 The start of the diagnostic log will look something like this:
 
 ```
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/php/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/php/index.md
index c2c7eaf282..b11a6d2f69 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/php/index.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/php/index.md
@@ -23,7 +23,7 @@ As a first step, review the [supported technologies](elastic-otel-php://referenc
 
 ## Turn on logging
 
-When diagnosing issues with the agent's operation, logs play a key role. You can find a detailed explanation of the logging configuration options in [Configuration](elastic-otel-php://reference/edot-php/configuration.md#logging-configuration).
+When diagnosing issues with the agent's operation, logs play a key role. You can find a detailed explanation of the logging configuration options in [Configuration](elastic-otel-php://reference/edot-php/configuration.md#logging-configuration). For more information on enabling debug logging, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md).
 
 In most cases, setting the logging level to `debug` is sufficient. You can also use `trace`, but keep in mind that the amount of generated data might be significant.
 
@@ -50,7 +50,7 @@ You need to restart your application for the changes to apply.
 
 ## Agent is not instrumenting code
 
-If the agent doesn't seem to be instrumenting code from your application, try the following actions.
+If the agent doesn't seem to be instrumenting code from your application, try the following actions. For more troubleshooting guidance, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md).
 
 ### Native OTLP serializer issues
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md b/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md
index dee09ba411..b9c2f04cea 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/proxy.md
@@ -15,7 +15,7 @@ products:
 
 # Proxy settings for EDOT SDKs
 
-EDOT SDKs generally use the standard proxy environment variables. However, there are exceptions and limitations depending on the language and exporter type.
+EDOT SDKs generally use the standard proxy environment variables. However, there are exceptions and limitations depending on the language and exporter type. For general connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md).
 
 ## Python SDK
 
@@ -33,7 +33,7 @@ The Node.js SDK does not currently support `HTTP_PROXY`, `HTTPS_PROXY`, or `NO_P
 
 ## Java SDK
 
-If you’re using Java SDK, you must configure Java system properties using the Java Virtual Machine (JVM). Refer to [Troubleshooting Java SDK proxy issues](/troubleshoot/ingest/opentelemetry/edot-sdks/java/proxy-issues.md) for more information.
+If you're using the Java SDK, you must configure Java system properties using the Java Virtual Machine (JVM). Refer to [Troubleshooting Java SDK proxy issues](/troubleshoot/ingest/opentelemetry/edot-sdks/java/proxy-issues.md) for more information.
 
 ## Other SDKs
diff --git a/troubleshoot/ingest/opentelemetry/edot-sdks/python/index.md b/troubleshoot/ingest/opentelemetry/edot-sdks/python/index.md
index c3557a2fa5..aa74a4b429 100644
--- a/troubleshoot/ingest/opentelemetry/edot-sdks/python/index.md
+++ b/troubleshoot/ingest/opentelemetry/edot-sdks/python/index.md
@@ -25,14 +25,14 @@ As a first step, review the [supported technologies](elastic-otel-python://refer
 
 Follow these recommended actions to make sure that EDOT Python is configured correctly.
 
-### EDOT Logging level
+### EDOT logging level
 
 ```{applies_to}
 product:
   edot_python: ga 1.9.0
 ```
 
-You can change the default verbosity of both EDOT Python and OpenTelemetry Python SDK code with `OTEL_LOG_LEVEL`, see [configuration](elastic-otel-python://reference/edot-python/configuration.md#differences-from-opentelemetry-python) for the possible values.
+You can change the default verbosity of both EDOT Python and OpenTelemetry Python SDK code with `OTEL_LOG_LEVEL`. See [configuration](elastic-otel-python://reference/edot-python/configuration.md#differences-from-opentelemetry-python) for the possible values. For more detailed debugging information, refer to [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md).
 
 ### Log configuration
 
@@ -69,6 +69,8 @@ If only a subset of instrumentation are causing disruptions, turn them off using
 
 Activating the Python logging module auto-instrumentation with `OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true` calls the [logging.basicConfig](https://docs.python.org/3/library/logging.html#logging.basicConfig) method that makes your own application calls to it a no-op. The side effect of this is that you won't see your application logs in the console. If you are already shipping logs by other means, you don't need to turn this on.
 
+If you're not seeing telemetry data in {{kib}}, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md) for troubleshooting steps.
+
 ## Check stability of semantic conventions
 
 For some semantic conventions, like HTTP, there is a migration path, but the conversion to stable HTTP semantic conventions is not done yet for all the instrumentations.
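Several of the hunks above stress that log-level variables such as `OTEL_LOG_LEVEL` are read once at process start, which is why the guides tell you to restart the application after changing them. A minimal sketch of that pattern, assuming a hypothetical `resolve_otel_log_level` helper and an illustrative value-to-level mapping (not EDOT's actual implementation):

```python
import logging
import os

_LEVELS = {
    "trace": logging.DEBUG,  # Python logging has no TRACE; DEBUG is closest.
    "debug": logging.DEBUG,
    "info": logging.INFO,
    "warn": logging.WARNING,
    "error": logging.ERROR,
}

def resolve_otel_log_level(default: int = logging.INFO) -> int:
    # Read the environment once, typically at startup; unknown or missing
    # values fall back to the default level.
    raw = os.environ.get("OTEL_LOG_LEVEL", "").strip().lower()
    return _LEVELS.get(raw, default)
```

Because the value is captured at startup, exporting the variable in a running shell has no effect on an already-running process.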
diff --git a/troubleshoot/ingest/opentelemetry/index.md b/troubleshoot/ingest/opentelemetry/index.md
index e543abf1d3..2fba51a7cb 100644
--- a/troubleshoot/ingest/opentelemetry/index.md
+++ b/troubleshoot/ingest/opentelemetry/index.md
@@ -15,5 +15,28 @@ products:
 
 Find solutions to common issues in EDOT components and SDKs.
 
-- [EDOT Collector troubleshooting](/troubleshoot/ingest/opentelemetry/edot-collector/index.md)
-- [EDOT SDKs troubleshooting](/troubleshoot/ingest/opentelemetry/edot-sdks/index.md)
+## Component troubleshooting
+
+* [EDOT Collector troubleshooting](/troubleshoot/ingest/opentelemetry/edot-collector/index.md): Troubleshoot issues with the EDOT Collector, including resource problems, configuration errors, and connectivity issues.
+
+* [EDOT SDKs troubleshooting](/troubleshoot/ingest/opentelemetry/edot-sdks/index.md): Troubleshoot issues with EDOT SDKs for Android, .NET, iOS, Java, Node.js, PHP, and Python.
+
+## Common troubleshooting topics
+
+These guides apply to both the Collector and SDKs:
+
+* [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md): Resolve connection problems between EDOT components and Elastic, including firewall, proxy, and network configuration issues.
+
+* [No data visible in {{kib}}](/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md): Diagnose why telemetry data (logs, metrics, traces) doesn't appear in {{kib}} after setting up EDOT.
+
+* [429 errors when using the mOTLP endpoint](/troubleshoot/ingest/opentelemetry/429-errors-motlp.md): Resolve HTTP 429 `Too Many Requests` errors when sending data through the Elastic Cloud Managed OTLP endpoint.
+
+* [Contact support](/troubleshoot/ingest/opentelemetry/contact-support.md): Learn how to contact Elastic Support and what information to include to help resolve issues faster.
+
+## Additional resources
+
+* [Troubleshoot ingestion tools](/troubleshoot/ingest.md): Overview of troubleshooting for all ingestion tools, including EDOT, Logstash, Fleet, and Beats.
+
+* [Elastic Support Portal](https://support.elastic.co/): Access support cases, subscriptions, and licenses.
+
+* [Elastic community forums](https://discuss.elastic.co): Get answers from experts in the community, including Elastic team members.
diff --git a/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md b/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md
index cfaf81c2dd..75bab1803e 100644
--- a/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md
+++ b/troubleshoot/ingest/opentelemetry/no-data-in-kibana.md
@@ -77,7 +77,7 @@ Also look for:
 
 * TLS handshake failures
 * Invalid character errors, which may indicate proxy or HTML redirect instead of JSON
 
-Increase verbosity using `--log-level=debug` for deeper insights.
+Increase verbosity using `--log-level=debug` for deeper insights. For more information, refer to [Enable debug logging for the EDOT Collector](/troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md) or [Enable debug logging for EDOT SDKs](/troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md).
 
 ### Test network connectivity
@@ -87,11 +87,11 @@ You can validate connectivity using `curl`:
 
 ```bash
 curl -v https:// -H "Authorization: ApiKey "
 ```
 
-Or use `telnet` or `nc` to verify port 443 is reachable.
+Or use `telnet` or `nc` to verify port 443 is reachable. For detailed connectivity troubleshooting, refer to [Connectivity issues](/troubleshoot/ingest/opentelemetry/connectivity.md).
 
@@ -109,4 +109,6 @@ service:
     exporters: [otlp]
 ```
 
-If only logs are configured, metrics and traces will not be sent.
\ No newline at end of file
+If only logs are configured, metrics and traces will not be sent.
+
+If you're using EDOT SDKs and not seeing application-level telemetry, refer to [No application-level telemetry visible in {{kib}}](/troubleshoot/ingest/opentelemetry/edot-sdks/missing-app-telemetry.md) for SDK-specific troubleshooting.
\ No newline at end of file
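The final hunk warns that a Collector only exports the signal types listed under `service.pipelines`; any signal without a pipeline is silently dropped. That check can be mechanized. The function name and config shape below are assumptions for illustration, not part of the Collector's tooling:

```python
def missing_signals(pipelines: dict) -> set:
    # Pipeline keys are "traces", "metrics", or "logs", optionally with a
    # "/name" suffix (for example "traces/sampled"). Any signal type with
    # no pipeline at all will never be exported by the Collector.
    configured = {name.split("/")[0] for name in pipelines}
    return {"traces", "metrics", "logs"} - configured
```

For example, feeding it only a `logs` pipeline reports that traces and metrics are unconfigured, which matches the symptom described in the hunk.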