---
navigation_title: LLM and agentic AI observability
---

# LLM and agentic AI observability

While LLMs hold incredible transformative potential, they also bring complex challenges in reliability, performance, and cost management. Traditional monitoring tools fall short here, so an evolved set of observability capabilities is needed to ensure these models operate efficiently and effectively.
To keep your LLM-powered applications reliable, efficient, cost-effective, and easy to troubleshoot, Elastic provides an LLM observability framework with key metrics, logs, and traces, along with pre-configured, out-of-the-box dashboards that deliver deep insight into model prompts and responses, performance, usage, and costs.

Elastic’s LLM observability includes:
- Metrics and logs ingestion for LLM and agentic AI platforms (via [Elastic integrations](integration-docs://reference/index.md))
- APM tracing for LLM and agentic AI applications (via [instrumentation](opentelemetry://reference/index.md))

## LLM and agentic AI platform observability with Elastic integrations

Elastic’s LLM integrations now support the most widely adopted models and agentic AI platforms, including OpenAI, Azure OpenAI, and a diverse range of models hosted on Amazon Bedrock and Google Vertex AI. The following table shows which type of data, metrics or logs, you can collect from each provider.

| **LLM or agentic AI platform** | **Metrics** | **Logs** |
|--------|------------|------------|
| [Amazon Bedrock](integration-docs://reference/aws_bedrock.md)| ✅ | ✅ |
| [Amazon Bedrock AgentCore](integration-docs://reference/aws_bedrock_agentcore.md)| ✅ | ✅ |
| [Azure AI Foundry](integration-docs://reference/azure_ai_foundry.md) | ✅| ✅ |
| [Azure OpenAI](integration-docs://reference/azure_openai.md)| ✅ | ✅ |
| [GCP Vertex AI](integration-docs://reference/gcp_vertexai.md) | ✅ | ✅ |
| [OpenAI](integration-docs://reference/openai.md) | ✅| 🚧 |
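
Once an integration is shipping data, you can query the collected metrics directly. The following Python sketch is illustrative only: the endpoint, API key placeholder, index pattern, and data stream naming are assumptions, not the integration’s documented schema, so check the data streams in your own deployment.

```python
# Illustrative sketch only: the index pattern below is an assumption about
# how the integration names its metrics data streams; verify it in Kibana.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://localhost:9200",       # hypothetical Elasticsearch endpoint
    api_key="<your-api-key>",
)

# Count ingested metric documents per day over the last week from a
# hypothetical Amazon Bedrock metrics data stream.
resp = es.search(
    index="metrics-aws_bedrock.*",  # assumed data stream pattern
    size=0,
    query={"range": {"@timestamp": {"gte": "now-7d"}}},
    aggs={
        "per_day": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "day"}
        }
    },
)

for bucket in resp["aggregations"]["per_day"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```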

## LLM and agentic AI application observability with APM (distributed tracing)

Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications that use LLMs hosted on Amazon Bedrock, OpenAI, Azure OpenAI, and GCP Vertex AI, providing a detailed view of request flows. This tracing captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. APM tracing is ideal for troubleshooting: it lets you pinpoint exactly where an issue occurs in your LLM-powered application.
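
As a minimal sketch of what such instrumentation can look like, the following Python example exports spans over OTLP to an APM endpoint and wraps a hypothetical chat completion in a span. The endpoint URL and token are placeholders, and the `gen_ai.*` attribute names follow the (incubating) OpenTelemetry Gen AI semantic conventions; in practice you would typically rely on auto-instrumentation for your LLM client rather than manual spans.

```python
# Minimal manual-tracing sketch, assuming an OTLP/HTTP endpoint exposed by
# your APM setup. Auto-instrumentation is usually preferable to manual spans.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://<your-apm-server>/v1/traces",          # placeholder
            headers={"Authorization": "Bearer <your-secret-token>"},  # placeholder
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm-demo")

# Wrap an LLM call in a span and record model and token usage as attributes.
with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4o")  # Gen AI semconv (incubating)
    # response = client.chat.completions.create(...)       # your LLM call goes here
    span.set_attribute("gen_ai.usage.input_tokens", 42)    # example values
    span.set_attribute("gen_ai.usage.output_tokens", 128)
```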
