diff --git a/img/integrations/opensearch-trace-details.png b/img/integrations/opensearch-trace-details.png
new file mode 100644
index 0000000..bbc7af2
Binary files /dev/null and b/img/integrations/opensearch-trace-details.png differ
diff --git a/mint.json b/mint.json
index 97cc52a..36ba2ad 100644
--- a/mint.json
+++ b/mint.json
@@ -135,6 +135,7 @@
"openllmetry/integrations/langsmith",
"openllmetry/integrations/middleware",
"openllmetry/integrations/newrelic",
+ "openllmetry/integrations/opensearch",
"openllmetry/integrations/otel-collector",
"openllmetry/integrations/oraclecloud",
"openllmetry/integrations/scorecard",
diff --git a/openllmetry/integrations/introduction.mdx b/openllmetry/integrations/introduction.mdx
index 33f7742..d958bb1 100644
--- a/openllmetry/integrations/introduction.mdx
+++ b/openllmetry/integrations/introduction.mdx
@@ -34,6 +34,7 @@ in any observability platform that supports OpenTelemetry.
+
+ This integration requires an OpenTelemetry Collector and Data Prepper as intermediaries between the Traceloop OpenLLMetry SDK and OpenSearch.
+ Data Prepper 2.0+ supports OTLP ingestion natively.
+
+
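+Trace data flows through the following pipeline:
+
+```text
+App (OpenLLMetry SDK) --OTLP--> OpenTelemetry Collector --OTLP gRPC--> Data Prepper --> OpenSearch --> OpenSearch Dashboards
+```
+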
+## Quick Start
+
+
+
+ Install the Traceloop SDK alongside your LLM provider client:
+
+ ```bash
+ pip install traceloop-sdk openai
+ ```
+
+
+
+ Configure your OpenTelemetry Collector to receive traces from OpenLLMetry and forward them to Data Prepper.
+
+Create an `otel-collector-config.yaml` file:
+
+```yaml
+receivers:
+  otlp:
+    protocols:
+      http:
+        endpoint: localhost:4318
+      grpc:
+        endpoint: localhost:4317
+
+processors:
+  batch:
+    timeout: 10s
+    send_batch_size: 1024
+
+  memory_limiter:
+    check_interval: 1s
+    limit_mib: 512
+
+  resource:
+    attributes:
+      - key: service.name
+        action: upsert
+        value: your-service-name # Must match the app_name passed to Traceloop.init()
+
+exporters:
+  # Export to Data Prepper via OTLP gRPC
+  otlp/data-prepper:
+    endpoint: http://localhost:21890
+    tls:
+      insecure: true # Demo only; enable TLS between the Collector and Data Prepper in production
+
+  # Logging exporter (deprecated in recent Collector releases; prefer the debug exporter below)
+  logging:
+    verbosity: normal
+    sampling_initial: 5
+    sampling_thereafter: 200
+
+  # Debug exporter to verify trace data
+  debug:
+    verbosity: detailed
+    sampling_initial: 10
+    sampling_thereafter: 10
+
+extensions:
+  health_check:
+    endpoint: localhost:13133
+
+service:
+  extensions: [health_check]
+
+  pipelines:
+    traces:
+      receivers: [otlp]
+      processors: [memory_limiter, resource, batch]
+      exporters: [otlp/data-prepper, logging, debug]
+
+    metrics:
+      receivers: [otlp]
+      processors: [memory_limiter, resource, batch]
+      exporters: [logging]
+
+    logs:
+      receivers: [otlp]
+      processors: [memory_limiter, resource, batch]
+      exporters: [logging]
+```
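+
+One way to run the Collector with this file is the contrib Docker image (the image name and in-container config path below follow the upstream Collector docs; verify them for your version). Note that inside a container, the `localhost` endpoints above should become `0.0.0.0` so the ports are reachable from the host:
+
+```bash
+docker run --rm \
+  -v "$(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml" \
+  -p 4317:4317 -p 4318:4318 -p 13133:13133 \
+  otel/opentelemetry-collector-contrib:latest
+```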
+
+
+In production, enable TLS and use authentication between the OpenTelemetry Collector and Data Prepper.
+Set `tls.insecure: false` and configure appropriate certificates.
+
+
+
+
+ Data Prepper receives traces from the OpenTelemetry Collector, processes them, and writes them to OpenSearch.
+
+Create a `data-prepper-pipelines.yaml` file:
+
+```yaml
+entry-pipeline:
+ source:
+ otel_trace_source:
+ ssl: false
+ sink:
+ - pipeline:
+ name: raw-trace-pipeline
+ - pipeline:
+ name: service-map-pipeline
+
+raw-trace-pipeline:
+ source:
+ pipeline:
+ name: entry-pipeline
+ processor:
+ - otel_traces:
+ sink:
+ - opensearch:
+ hosts: ["https://localhost:9200"]
+ index_type: trace-analytics-raw
+ username: admin
+ password: admin
+
+service-map-pipeline:
+ source:
+ pipeline:
+ name: entry-pipeline
+ processor:
+ - service_map:
+ sink:
+ - opensearch:
+ hosts: ["https://localhost:9200"]
+ index_type: trace-analytics-service-map
+ username: admin
+ password: admin
+```
+
+
+ Data Prepper automatically creates the `otel-v1-apm-span` and `otel-v1-apm-service-map` indices in OpenSearch.
+ The `entry-pipeline` listens on port 21890 by default for OTLP gRPC traffic.
+
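+To try the pipeline locally, one option is the official Docker image (image name and in-container pipeline path per the Data Prepper 2.x docs; verify both for your version):
+
+```bash
+docker run --rm \
+  -v "$(pwd)/data-prepper-pipelines.yaml:/usr/share/data-prepper/pipelines/pipelines.yaml" \
+  -p 21890:21890 \
+  opensearchproject/data-prepper:latest
+```
+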
+
+
+
+ Initialize Traceloop at startup, before your application makes any LLM calls:
+
+ ```python
+ from os import getenv
+
+ from traceloop.sdk import Traceloop
+ from openai import OpenAI
+
+ # Initialize Traceloop with OTLP endpoint
+ Traceloop.init(
+ app_name="your-service-name",
+ api_endpoint="http://localhost:4318"
+ )
+
+ # Traceloop.init() runs before the client is created, so the
+ # OpenAI client below is instrumented automatically
+ client = OpenAI(api_key=getenv("OPENAI_API_KEY"))
+
+ # Make LLM calls - automatically traced
+ response = client.chat.completions.create(
+ model="gpt-4o-mini",
+ messages=[{"role": "user", "content": "Hello!"}]
+ )
+ ```
+
+
+ The `app_name` parameter sets the service name visible in OpenSearch Dashboards' Trace Analytics.
+
+
+
+
+ Navigate to OpenSearch Dashboards to explore your LLM traces:
+
+ 1. Open OpenSearch Dashboards at `http://localhost:5601`
+ 2. Go to **Observability → Trace Analytics → Traces**
+ 3. Click on a trace to view the full span waterfall
+ 4. Inspect individual spans for LLM metadata
+
+ Each LLM call appears as a span containing:
+ - Model name (`gen_ai.request.model`)
+ - Token usage (`gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`)
+ - Prompts and completions (configurable)
+ - Request duration and latency
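+
+ Once spans are indexed, you can also query them directly. A minimal sketch, using only the Python standard library, that builds a search body for the `otel-v1-apm-span` index (`serviceName` is the `app_name` you passed to `Traceloop.init()`; send the body with any HTTP client):
+
+ ```python
+ import json
+
+ # Find the five slowest spans for the service.
+ search_body = {
+     "size": 5,
+     "query": {"term": {"serviceName": "your-service-name"}},
+     "sort": [{"durationInNanos": {"order": "desc"}}],
+ }
+
+ print(json.dumps(search_body))
+ ```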
+
+
+
+## Environment Variables
+
+Configure OpenLLMetry behavior using environment variables:
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `TRACELOOP_BASE_URL` | OpenTelemetry Collector endpoint (use `http://localhost:4318` for this setup) | `https://api.traceloop.com` |
+| `TRACELOOP_TRACE_CONTENT` | Capture prompts/completions | `true` |
+
+
+
+Set `TRACELOOP_TRACE_CONTENT=false` in production to prevent logging sensitive prompt content.
+
+
+## Using Workflow Decorators
+
+For complex applications with multiple steps, use workflow decorators to create hierarchical traces:
+
+```python
+from os import getenv
+from traceloop.sdk import Traceloop
+from traceloop.sdk.decorators import workflow, task
+from openai import OpenAI
+
+Traceloop.init(
+ app_name="recipe-service",
+ api_endpoint="http://localhost:4318",
+)
+
+# Traceloop.init() runs before the client is created, so the
+# OpenAI client below is instrumented automatically
+client = OpenAI(api_key=getenv("OPENAI_API_KEY"))
+
+@task(name="generate_recipe")
+def generate_recipe(dish: str):
+ """LLM call - creates a child span"""
+ response = client.chat.completions.create(
+ model="gpt-4o-mini",
+ messages=[
+ {"role": "system", "content": "You are a chef."},
+ {"role": "user", "content": f"Recipe for {dish}"}
+ ]
+ )
+ return response.choices[0].message.content
+
+
+@workflow(name="recipe_workflow")
+def create_recipe(dish: str, servings: int):
+ """Parent workflow - creates the root transaction"""
+ recipe = generate_recipe(dish)
+ return {"recipe": recipe, "servings": servings}
+
+# Call the workflow
+result = create_recipe("pasta carbonara", 4)
+```
+
+In OpenSearch Dashboards' Trace Analytics, you'll see:
+- `recipe_workflow.workflow` as the root span
+- `generate_recipe.task` as a child span
+- `openai.chat.completions` as the LLM API span with full metadata
+
+## Example Trace Visualization
+
+
+![LLM trace details in OpenSearch Dashboards Trace Analytics](/img/integrations/opensearch-trace-details.png)
+
+
+## Captured Metadata
+
+OpenLLMetry automatically captures these attributes in each LLM span:
+
+**Request Attributes:**
+- `gen_ai.request.model` - Model identifier
+- `gen_ai.request.temperature` - Sampling temperature
+- `gen_ai.system` - Provider name (OpenAI, Anthropic, etc.)
+
+**Response Attributes:**
+- `gen_ai.response.model` - Actual model used
+- `gen_ai.response.id` - Unique response identifier
+- `gen_ai.response.finish_reason` - Completion reason
+
+**Token Usage:**
+- `gen_ai.usage.input_tokens` - Input token count
+- `gen_ai.usage.output_tokens` - Output token count
+- `llm.usage.total_tokens` - Total tokens
+
+**Content (if enabled):**
+- `gen_ai.prompt.{N}.content` - Prompt messages
+- `gen_ai.completion.{N}.content` - Generated completions
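+
+In the raw-span index, Data Prepper flattens span attributes onto the document; the exact field names (e.g. whether dots in attribute keys are rewritten with `@` separators) vary by Data Prepper version, so inspect one indexed document before relying on them. As a sketch under that assumption, a per-model token-usage aggregation could be built like this:
+
+```python
+import json
+
+# Hypothetical field names -- verify against your otel-v1-apm-span mapping.
+MODEL_FIELD = "span.attributes.gen_ai@request@model.keyword"
+
+agg_body = {
+    "size": 0,
+    "aggs": {
+        "by_model": {
+            "terms": {"field": MODEL_FIELD},
+            "aggs": {
+                "input_tokens": {"sum": {"field": "span.attributes.gen_ai@usage@input_tokens"}},
+                "output_tokens": {"sum": {"field": "span.attributes.gen_ai@usage@output_tokens"}},
+            },
+        }
+    },
+}
+
+print(json.dumps(agg_body))
+```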
+
+## Production Considerations
+
+
+
+ Disable prompt/completion logging in production:
+
+ ```bash
+ export TRACELOOP_TRACE_CONTENT=false
+ ```
+
+ This prevents sensitive data from being stored in OpenSearch.
+
+
+
+ Configure sampling in the OpenTelemetry Collector to reduce trace volume:
+
+ ```yaml
+ processors:
+ probabilistic_sampler:
+ sampling_percentage: 10 # Sample 10% of traces
+ ```
+
+
+
+ Enable TLS and authentication for Data Prepper and OpenSearch:
+
+ **Data Prepper TLS:**
+
+ ```yaml
+ entry-pipeline:
+ source:
+ otel_trace_source:
+ ssl: true
+ sslKeyCertChainFile: "/path/to/cert.pem"
+ sslKeyFile: "/path/to/key.pem"
+ ```
+
+ **OpenSearch Authentication:**
+
+ ```yaml
+ sink:
+ - opensearch:
+ hosts: ["https://opensearch-node:9200"]
+ username: "${OPENSEARCH_USERNAME}"
+ password: "${OPENSEARCH_PASSWORD}"
+ ```
+
+
+
+## Resources
+
+- [OpenSearch Documentation](https://opensearch.org/docs/latest/)
+- [Data Prepper Documentation](https://opensearch.org/docs/latest/data-prepper/)
+- [OpenTelemetry Collector Configuration](https://opentelemetry.io/docs/collector/configuration/)
+- [Traceloop SDK Configuration](https://www.traceloop.com/docs/openllmetry/configuration)