example: add new opentelemetry filter to filter tracetest traces #3422

Merged 5 commits on Dec 4, 2023
43 changes: 27 additions & 16 deletions docs/docs/configuration/sampling-tracetest-spans.mdx
@@ -16,9 +16,6 @@ your test spans not sampled by your probabilistic sampler. There are a couple of

## Add a Separate Pipeline for Tracetest in your OpenTelemetry Collector

> :warning: Note: This requires the [OpenTelemetry collector contrib](https://hub.docker.com/r/otel/opentelemetry-collector-contrib) instead of the core release
> of the collector

Your existing OpenTelemetry Collector already receives traces from your applications and sends them to your datastore, and you have a set of processors configured to ensure the quality of the traces your datastore receives. It probably looks something like this:

```yaml
@@ -59,22 +56,20 @@ receivers:
http:

processors:
batch:

probabilistic_sampler:
hash_seed: 22
sampling_percentage: 5.0

batch:

# Filters spans that have the tracestate `tracetest=true` in their context. This value
# is injected by Tracetest when triggering the test
#
# Note: this requires the `collector-contrib` version of the collector
tail_sampling:
decision_wait: 5s
policies:
- name: tracetest-spans
type: trace_state
trace_state: { key: tracetest, values: ["true"] }
# If this configuration fails on your collector, make sure to update it to a newer version.
# This is the recommended way of filtering spans based on the `trace_state`. It's faster and less
# resource intensive than using a `tail_sampling` approach.
filter/tracetest:
error_mode: ignore
traces:
span:
- 'trace_state["tracetest"] != "true"'
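      # The filter processor drops spans that match this condition, so only spans whose
      # tracestate contains tracetest=true continue through this pipeline.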

exporters:
otlp/jaeger:
@@ -92,10 +87,26 @@ service:
pipelines:
traces/tracetest:
receivers: [otlp]
processors: [tail_sampling, batch]
processors: [filter/tracetest, batch]
exporters: [otlp/jaeger]
```

### Tail sampling approach

Before December 2023, we suggested using tail sampling to filter the traces generated by Tracetest.
However, the new `filter` processor performs better than tail sampling because it requires less memory
to decide whether a trace should be sampled. If, even with those arguments, you still want to use the
tail sampling approach, this is the processor you can use:

```yaml
processors:
tail_sampling:
decision_wait: 5s
policies:
- name: tracetest-spans
type: trace_state
trace_state: { key: tracetest, values: ["true"] }
```

With this configuration, you will still get 5% of all your traces, but you will also ensure that all your test traces are collected and sent to
Jaeger.
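
If you go with tail sampling, the processor still needs to be wired into the dedicated Tracetest pipeline, just like the `filter` processor above. A minimal sketch, reusing the receiver and exporter names from the example above:

```yaml
service:
  pipelines:
    traces/tracetest:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [otlp/jaeger]
```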

31 changes: 31 additions & 0 deletions examples/collector-filtering/collector.config.yaml
@@ -0,0 +1,31 @@
receivers:
otlp:
protocols:
grpc:
http:

processors:
batch:
timeout: 100ms

filter/tracetest:
error_mode: ignore
traces:
span:
- 'trace_state["tracetest"] != "true"'

exporters:
logging:
loglevel: debug

otlp/1:
endpoint: ${TRACETEST_ENDPOINT}
tls:
insecure: true

service:
pipelines:
traces/1:
receivers: [otlp]
processors: [filter/tracetest, batch]
exporters: [otlp/1]
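      # Debugging tip (an assumption, not part of the original example): the `logging` exporter
      # declared above can also be listed here to print the spans that pass the filter, e.g.:
      # exporters: [otlp/1, logging]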
54 changes: 54 additions & 0 deletions examples/collector-filtering/docker-compose.yml
@@ -0,0 +1,54 @@
version: '3'
services:

tracetest:
image: kubeshop/tracetest:${TAG:-latest}
# uncommenting this line breaks the portability of this file, which is the base for the installer
# platform: linux/amd64
volumes:
- type: bind
source: ./tracetest-config.yaml
target: /app/tracetest.yaml
- type: bind
source: ./tracetest-provision.yaml
target: /app/provision.yaml
command: --provisioning-file /app/provision.yaml
ports:
- 11633:11633
extra_hosts:
- "host.docker.internal:host-gateway"
depends_on:
postgres:
condition: service_healthy
otel-collector:
condition: service_started
healthcheck:
test: ["CMD", "wget", "--spider", "localhost:11633"]
interval: 1s
timeout: 3s
retries: 60
environment:
TRACETEST_DEV: ${TRACETEST_DEV}

postgres:
image: postgres:14
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
healthcheck:
test: pg_isready -U "$$POSTGRES_USER" -d "$$POSTGRES_DB"
interval: 1s
timeout: 5s
retries: 60

otel-collector:
image: otel/opentelemetry-collector:0.90.1
command:
- "--config"
- "/otel-local-config.yaml"
volumes:
- ./collector.config.yaml:/otel-local-config.yaml
ports:
- 4317:4317
environment:
- TRACETEST_ENDPOINT=tracetest:4317
17 changes: 17 additions & 0 deletions examples/collector-filtering/tests/list-tests.yaml
@@ -0,0 +1,17 @@
type: Test
spec:
id: e9c6cff9-974d-4263-8a23-22f1e9f975aa
name: List all tracetest tests
description: List all existing tests from tracetest API
trigger:
type: http
httpRequest:
url: http://localhost:11633/api/tests
method: GET
headers:
- key: Content-Type
value: application/json
specs:
- selector: span[tracetest.span.type="http" name="GET /api/tests"]
assertions:
- attr:tracetest.selected_spans.count = 1
21 changes: 21 additions & 0 deletions examples/collector-filtering/tracetest-config.yaml
@@ -0,0 +1,21 @@
postgres:
host: postgres
user: postgres
password: postgres
port: 5432
dbname: postgres
params: sslmode=disable

telemetry:
exporters:
collector:
serviceName: tracetest
sampling: 100 # 100%
exporter:
type: collector
collector:
endpoint: otel-collector:4317

server:
telemetry:
exporter: collector
24 changes: 24 additions & 0 deletions examples/collector-filtering/tracetest-provision.yaml
@@ -0,0 +1,24 @@
---
type: PollingProfile
spec:
name: Default
strategy: periodic
default: true
periodic:
retryDelay: 5s
timeout: 10m

---
type: DataStore
spec:
name: OpenTelemetry Collector
type: otlp
default: true
---
type: TestRunner
spec:
id: current
name: default
requiredGates:
- analyzer-score
- test-specs