---
id: running-python-app-with-opentelemetry-collector-and-tracetest
title: Python with OpenTelemetry manual instrumentation
description: Quick start on how to configure a Python app to use OpenTelemetry instrumentation with traces, and Tracetest for enhancing your e2e and integration tests with trace-based testing.
hide_table_of_contents: false
---
:::note
Check out the source code on GitHub here.
:::
Tracetest is a testing tool based on OpenTelemetry that allows you to test your distributed application. It uses data from distributed traces generated by OpenTelemetry to validate and assert that your application behaves as defined by your test definitions.
This is a simple quick start on how to configure a Python app to use OpenTelemetry instrumentation with traces, and Tracetest for enhancing your e2e and integration tests with trace-based testing.
You will need Docker and Docker Compose installed on your machine to run this quick start app!
The project is built with Docker Compose. It contains two distinct `docker-compose.yaml` files. The `docker-compose.yaml` file and `Dockerfile` in the root directory are for the Python app.
The `docker-compose.yaml` file, `collector.config.yaml`, `tracetest-provision.yaml`, and `tracetest-config.yaml` in the `tracetest` directory are for setting up Tracetest and the OpenTelemetry Collector.

The `tracetest` directory is self-contained and will run all the prerequisites for enabling OpenTelemetry traces and trace-based testing with Tracetest.
All services in the `docker-compose.yaml` are on the same network and will be reachable by hostname from within other services. For example, `tracetest:4317` in the `collector.config.yaml` will map to the `tracetest` service, where port `4317` is the port on which Tracetest accepts traces.
The Python app is a simple Flask app, contained in the `app.py` file.

The code below imports all the Flask and OpenTelemetry libraries and configures both manual and automatic OpenTelemetry instrumentation.
```python
from flask import Flask, request
import json

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.trace.export import ConsoleSpanExporter

provider = TracerProvider()
processor = BatchSpanProcessor(ConsoleSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
```
There are 3 endpoints in the Flask app. To see manual instrumentation, trigger the `"/manual"` endpoint. To see automatic instrumentation, trigger the `"/automatic"` endpoint.
```python
app = Flask(__name__)

@app.route("/manual")
def manual():
    with tracer.start_as_current_span(
        "manual",
        attributes={ "endpoint": "/manual", "foo": "bar" }
    ):
        return "App works with a manual instrumentation."

@app.route('/automatic')
def automatic():
    return "App works with automatic instrumentation."

@app.route("/")
def home():
    return "App works."
```
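The manual pattern above boils down to wrapping work in a context manager that starts a named span with attributes and ends it on exit. Here is a stdlib-only toy sketch of that pattern, to show the mechanics in isolation — `ToyTracer` and `ToySpan` are hypothetical names, not the OpenTelemetry API:

```python
import time
from contextlib import contextmanager

class ToySpan:
    """A toy stand-in for an OpenTelemetry span (not the real API)."""
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = attributes or {}
        self.start_time = time.time()
        self.end_time = None

    def end(self):
        self.end_time = time.time()

class ToyTracer:
    """Collects finished spans so they can be inspected or exported."""
    def __init__(self):
        self.finished_spans = []

    @contextmanager
    def start_as_current_span(self, name, attributes=None):
        span = ToySpan(name, attributes)
        try:
            yield span
        finally:
            # The span is ended and handed off even if the handler raises.
            span.end()
            self.finished_spans.append(span)

tracer = ToyTracer()
with tracer.start_as_current_span("manual", attributes={"endpoint": "/manual"}):
    pass  # handler work would run here

print(tracer.finished_spans[0].name)  # → manual
```

Real OpenTelemetry spans carry much more (trace IDs, parent context, status), but the shape is the same: start on entry, end and export on exit, exactly what `start_as_current_span` does in the Flask handler above.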
The `Dockerfile` includes bootstrapping the needed OpenTelemetry packages. As you can see, it does not have a `CMD` command. Instead, the command is configured in the `docker-compose.yaml` below.
```docker
FROM python:3.10.1-slim
WORKDIR /opt/app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
RUN opentelemetry-bootstrap -a install
EXPOSE 8080
```
The `docker-compose.yaml` contains just one service for the Python app. The service is started with the `command` parameter.
```yaml
version: '3'
services:
  app:
    image: quick-start-python
    platform: linux/amd64
    extra_hosts:
      - "host.docker.internal:host-gateway"
    build: .
    ports:
      - "8080:8080"
    # using the command here instead of the Dockerfile
    command: opentelemetry-instrument --traces_exporter otlp --service_name app --exporter_otlp_endpoint otel-collector:4317 --exporter_otlp_insecure true flask run --host=0.0.0.0 --port=8080
    depends_on:
      tracetest:
        condition: service_started
```
To start it, run this command:
```bash
docker compose build # optional if you haven't already built the image
docker compose up
```
This will start the Python app. But you're not sending the traces anywhere yet.

Let's fix that by configuring Tracetest and the OpenTelemetry Collector.
The `docker-compose.yaml` in the `tracetest` directory is configured with three services.
- Postgres - Postgres is a prerequisite for Tracetest to work. It stores trace data when running the trace-based tests.
- OpenTelemetry Collector - A vendor-agnostic implementation of how to receive, process and export telemetry data.
- Tracetest - Trace-based testing that generates end-to-end tests automatically from traces.
```yaml
version: "3"
services:
  tracetest:
    image: kubeshop/tracetest:latest
    platform: linux/amd64
    volumes:
      - type: bind
        source: ./tracetest/tracetest-config.yaml
        target: /app/tracetest.yaml
      - type: bind
        source: ./tracetest/tracetest-provision.yaml
        target: /app/provisioning.yaml
    ports:
      - 11633:11633
    command: --provisioning-file /app/provisioning.yaml
    depends_on:
      postgres:
        condition: service_healthy
      otel-collector:
        condition: service_started
    healthcheck:
      test: ["CMD", "wget", "--spider", "localhost:11633"]
      interval: 1s
      timeout: 3s
      retries: 60
    environment:
      TRACETEST_DEV: ${TRACETEST_DEV}
  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    healthcheck:
      test: pg_isready -U "$$POSTGRES_USER" -d "$$POSTGRES_DB"
      interval: 1s
      timeout: 5s
      retries: 60
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.59.0
    command:
      - "--config"
      - "/otel-local-config.yaml"
    volumes:
      - ./tracetest/collector.config.yaml:/otel-local-config.yaml
```
Tracetest depends on both Postgres and the OpenTelemetry Collector. Both Tracetest and the OpenTelemetry Collector require config files to be loaded via a volume. The volumes are mapped from the root directory into the `tracetest` directory and the respective config files.
The `tracetest-config.yaml` file contains the basic setup for connecting Tracetest to the Postgres instance.
```yaml
postgres:
  host: postgres
  user: postgres
  password: postgres
  port: 5432
  dbname: postgres
  params: sslmode=disable
```
The `tracetest-provision.yaml` file provisions the trace data store and the polling that stores trace data in the Postgres database. The data store is set to OTLP, meaning the traces will be stored in Tracetest itself.
```yaml
---
type: DataStore
spec:
  name: OpenTelemetry Collector
  type: otlp
  isdefault: true
```
But how are traces sent to Tracetest?

The `collector.config.yaml` explains that. It receives traces via either `grpc` or `http`. Then, it exports them to Tracetest's OTLP endpoint `tracetest:4317`.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    timeout: 100ms

exporters:
  logging:
    loglevel: debug
  otlp/1:
    endpoint: tracetest:4317
    # Send traces to Tracetest.
    # Read more in docs here: https://docs.tracetest.io/configuration/connecting-to-data-stores/opentelemetry-collector
    tls:
      insecure: true

service:
  pipelines:
    traces/1:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/1]
```
To start both the Python app and Tracetest, we will run this command:

```bash
docker-compose -f docker-compose.yaml -f tracetest/docker-compose.yaml up # add --build if the images are not built already
```
This will start your Tracetest instance on `http://localhost:11633/`. Go ahead and open it up.
Start creating tests! Make sure to use the `http://app:8080/` URL in your test creation, because your Python app and Tracetest are in the same network.
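As a starting point, a test definition triggering the manual endpoint could look roughly like the sketch below. This is a hedged example: the exact field names follow Tracetest's test resource format and may differ between versions, so check the Tracetest documentation before using it.

```yaml
type: Test
spec:
  name: Manual instrumentation span exists
  trigger:
    type: http
    httpRequest:
      url: http://app:8080/manual
      method: GET
  specs:
    - selector: span[name="manual"]
      assertions:
        - attr:tracetest.span.duration < 500ms
```

The selector targets the `"manual"` span created by `tracer.start_as_current_span` in the Flask app, so the test asserts against the trace your instrumentation actually produced.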
Feel free to check out our examples in GitHub, and join our Slack Community for more info!