Description
I ran into a memory leak in OTel span batch processing. There was a network connectivity issue on our production server: Grafana Tempo was unavailable for about a minute. After that, `otel_export_table1` never gets cleaned up and keeps growing for days until an OOM kill happens.
Here are the stats of this ETS table. Pay attention to the `memory` field (which `:ets.info/1` reports in machine words):
```elixir
[
  id: #Reference<0.1085774251.2960785409.231555>,
  decentralized_counters: false,
  read_concurrency: false,
  write_concurrency: true,
  compressed: false,
  memory: 97652967,
  owner: #PID<0.13917038.0>,
  heir: :none,
  name: :"otel_batch_processor_otel_export_table1_<0.13917038.0>",
  size: 799875,
  node: :"node@name",
  named_table: true,
  type: :duplicate_bag,
  keypos: 16,
  protection: :public
]
```
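For scale: since `:ets.info/1` reports `memory` in machine words rather than bytes, the table above holds roughly 745 MiB. A quick sanity check (assuming a 64-bit VM, where one word is 8 bytes):

```python
# memory field from :ets.info/1 is in machine words (8 bytes each on 64-bit)
words = 97_652_967
bytes_total = words * 8
print(f"{bytes_total / 1024 ** 2:.0f} MiB")  # ≈ 745 MiB for ~800k buffered spans
```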
This happens with opentelemetry 1.3.1.
Possibly related to https://github.com/open-telemetry/opentelemetry-erlang/blob/v1.3.0/apps/opentelemetry/src/otel_batch_processor.erl#L211-L212