
Remove thread.name from metrics #14061


Open
wants to merge 6 commits into base: main

Conversation

jhayes2-chwy

As outlined in #13407 and #14047, the thread.name attribute can create very high cardinality in many cases and also contributes to a memory leak in the collection mechanism for those metrics. This PR aims to fix both issues.

The affected metrics are:

  • jvm.memory.allocation
  • jvm.cpu.longlock
  • jvm.network.io
  • jvm.network.time
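For context, here is a minimal sketch (with hypothetical class and variable names, not the PR's actual code) of how a thread.name attribute turns one histogram into one time series per distinct thread name, and what recording without it looks like, using the OpenTelemetry Java metrics API:

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongHistogram;
import io.opentelemetry.api.metrics.Meter;

class AllocationRecordingSketch {
  private static final AttributeKey<String> THREAD_NAME = AttributeKey.stringKey("thread.name");

  private final Meter meter = GlobalOpenTelemetry.getMeter("example");
  private final LongHistogram allocation =
      meter.histogramBuilder("jvm.memory.allocation").setUnit("By").ofLongs().build();

  // Before: every distinct thread name produces its own attribute set, i.e. its own series.
  void recordWithThreadName(long allocatedBytes, String threadName) {
    allocation.record(allocatedBytes, Attributes.of(THREAD_NAME, threadName));
  }

  // After: a single series regardless of how many threads come and go.
  void recordWithoutThreadName(long allocatedBytes) {
    allocation.record(allocatedBytes);
  }
}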

@jhayes2-chwy requested a review from a team as a code owner on June 18, 2025 at 15:29

linux-foundation-easycla bot commented Jun 18, 2025

CLA Signed


The committers listed above are authorized under a signed CLA.

Comment on lines 18 to 26
// Use an access-ordered LinkedHashMap so we get a bounded LRU cache
private final Map<String, Consumer<RecordedEvent>> perThread =
    new LinkedHashMap<String, Consumer<RecordedEvent>>(16, 0.75F, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, Consumer<RecordedEvent>> eldest) {
        // Bound this map to prevent memory leaks with fast-cycling thread frameworks
        return size() > 512;
      }
    };
Member

is this map needed now that the thread name isn't being put on the metrics?

Author

Certainly not for correctness reasons. I didn't see any explicit documentation around this map, but after reading through the code my impression was that it's mostly a performance optimization to reduce allocations of Consumer<RecordedEvent> instances.

Since this was previously using an unsynchronized HashMap, it appears to me that the invocation of these consumers is all single-threaded (I haven't worked directly with JFR before, so maybe that's not true?); it smells to me like there's no contention or throughput reason to keep this cache other than to reduce allocations.
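For readers unfamiliar with JFR streaming, a rough sketch of the kind of setup being described, where all registered consumers run on the stream's single dispatch thread (the event name and handler body here are illustrative, not the module's actual wiring):

import java.util.function.Consumer;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingStream;

class JfrStreamingSketch {
  public static void main(String[] args) {
    // Hypothetical handler; a real one would record to an OpenTelemetry histogram.
    Consumer<RecordedEvent> allocationHandler =
        event -> System.out.println("allocated " + event.getLong("allocationSize") + " bytes");

    try (RecordingStream stream = new RecordingStream()) {
      stream.enable("jdk.ObjectAllocationInNewTLAB");
      // Consumers registered here are invoked sequentially on the stream's
      // dispatch thread, which is why an unsynchronized map was safe to use.
      stream.onEvent("jdk.ObjectAllocationInNewTLAB", allocationHandler);
      stream.start(); // blocks; startAsync() runs the dispatch loop on a daemon thread
    }
  }
}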

Author
@jhayes2-chwy Jun 18, 2025

If that jibes with your understanding, I could simply remove the cache. While all the little allocations of the PerThread*Handler inner classes probably aren't a problem for the use cases I'm coming from (high-scale ecommerce), I imagine there are definitely existing OTel use cases where they would be, especially on older and/or smaller JVMs.

With that in mind, I'd actually prefer to jump all the way to inlining the PerThread*Handler-based logic directly into the AbstractThreadDispatchingHandler subclasses.
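For illustration, inlining roughly means going from a dispatching handler that caches a Consumer per thread name to a single handler that records directly (a sketch with hypothetical names, not the actual diff):

import java.util.function.Consumer;
import io.opentelemetry.api.metrics.LongHistogram;
import jdk.jfr.consumer.RecordedEvent;

// Sketch: the per-thread handler plus its per-thread-name cache collapse into
// one handler that never looks at the event's thread.
class MemoryAllocationHandlerSketch implements Consumer<RecordedEvent> {
  private final LongHistogram histogram;

  MemoryAllocationHandlerSketch(LongHistogram histogram) {
    this.histogram = histogram;
  }

  @Override
  public void accept(RecordedEvent event) {
    // No per-thread Consumer allocation and no thread.name attribute.
    histogram.record(event.getLong("allocationSize"));
  }
}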

Author

With that in mind, I'd actually prefer to jump all the way to inlining the PerThread*Handler-based logic directly into the AbstractThreadDispatchingHandler subclasses.

I've gone ahead and done this, should be easy to revert if needed.


// FIXME doesn't actually do any grouping, but should be safe for now
// FIXME only handles substrings of contiguous digits -> a single `x`, but should be good
// enough for now
@Nullable
public String groupedName(RecordedEvent ev) {
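For reference, the digit-collapsing behavior that the FIXME in the snippet above describes amounts to something like the following (a sketch over a plain thread-name string; the actual method takes a RecordedEvent):

// Collapse each run of contiguous digits into a single "x", so e.g.
// "pool-17-thread-42" and "pool-3-thread-8" both group to "pool-x-thread-x".
static String groupedName(String threadName) {
  return threadName.replaceAll("\\d+", "x");
}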
Contributor

is this used anywhere now that AbstractThreadDispatchingHandler was deleted?

Author
@jhayes2-chwy Jun 19, 2025

Good catch, addressed.

@laurit
Contributor

laurit commented Jun 19, 2025

@jhayes2-chwy you need to sign the CLA in order to get the PR merged

@jhayes2-chwy
Author

@jhayes2-chwy you need to sign the CLA in order to get the PR merged

Indeed; I've been working with my company to determine if we have a Corporate CLA, so I'm still waiting on that.
