
libcoreclr.so!EventPipeInternal::GetNextEvent high CPU use #43985

Closed
djluck opened this issue Oct 27, 2020 · 5 comments · Fixed by #48435

Comments


djluck commented Oct 27, 2020

Issue Title

When using EventListener over prolonged periods of time, CPU consumption slowly climbs until the majority of CPU time is spent forwarding events.

General

.NET core version: v3.1.3
OS: Docker on linux (base image= mcr.microsoft.com/dotnet/core/aspnet:3.1.3-bionic)

When using the prometheus-net.DotNetRuntime package to capture CLR events, I see a gradual increase in CPU over many days:
[image: graph showing gradual CPU increase over several days]

The application itself is a simple ASP.NET core web application that does not perform intensive processing. After collecting a CPU trace using perfcollect, I see this call stack standing out by a mile:
[image: perfcollect CPU trace with libcoreclr.so!EventPipeInternal::GetNextEvent dominating the call stack]

This pattern has shown itself consistently over the last month, suggesting that the CPU consumed by the event pipes increases over time, to the point where we need to restart our applications.

Some notes about the prometheus-net.DotNetRuntime package (which I authored):

  • The purpose of this package is to instrument .NET runtime internals by listening to telemetry events via EventListeners
  • Multiple EventListeners are spun up to listen to events, one per "type" of event (e.g. one for counting exceptions, one for listening to GC events, etc.)
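As a hypothetical sketch (the keyword values below are assumptions drawn from the runtime provider's documented keywords, not the package's actual source), the per-"type" listener pattern looks roughly like:

```csharp
using System;
using System.Diagnostics.Tracing;

// One EventListener per event category, each enabling only the
// keywords it cares about on the runtime provider.
sealed class GcEventListener : EventListener
{
    // 0x1 is the GC keyword of Microsoft-Windows-DotNETRuntime (assumed here).
    private const EventKeywords GcKeyword = (EventKeywords)0x1;

    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "Microsoft-Windows-DotNETRuntime")
            EnableEvents(source, EventLevel.Informational, GcKeyword);
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        // e.g. update a GC pause metric here.
    }
}

sealed class ExceptionEventListener : EventListener
{
    // 0x8000 is the Exception keyword (assumed here).
    private const EventKeywords ExceptionKeyword = (EventKeywords)0x8000;

    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "Microsoft-Windows-DotNETRuntime")
            EnableEvents(source, EventLevel.Error, ExceptionKeyword);
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        // e.g. increment an exception counter here.
    }
}
```

Each listener keeps its own EventPipe session alive for as long as the process runs, which is the usage pattern under which the CPU growth was observed.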
carlossanlop transferred this issue from dotnet/core on Oct 28, 2020
Dotnet-GitSync-Bot added the area-Tracing-coreclr and untriaged labels on Oct 28, 2020
tommcdon removed the untriaged label on Oct 29, 2020
tommcdon added this to the 6.0.0 milestone on Oct 29, 2020
tommcdon added this to On Deck in .NET Core Diagnostics on Oct 29, 2020
ghost added the in-pr label on Feb 22, 2021

pcwiese commented Mar 8, 2021

Is there any way to mitigate this in 3.1/5.0 without tearing down the entire application? Would destroying/re-creating the EventListeners have any effect?


josalem commented Mar 9, 2021

Yes, you should be able to mitigate this problem for previous versions by restarting the EventPipe session used by EventListeners periodically. For EventListeners, you should be able to call EventListener.EnableEvents with the Microsoft-Windows-DotNETRuntime provider and the same keywords as the last call to refresh the session.
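A rough sketch of that workaround, assuming the GC keyword (0x1) and informational level purely for illustration (in practice, pass the same keywords and level as your original EnableEvents call):

```csharp
using System;
using System.Diagnostics.Tracing;

// Hypothetical sketch of the session-refresh workaround described above:
// re-issuing EnableEvents for the runtime provider restarts the
// underlying EventPipe session.
sealed class RefreshingRuntimeListener : EventListener
{
    private const EventKeywords Keywords = (EventKeywords)0x1; // assumed: GC keyword
    private EventSource _runtime;

    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "Microsoft-Windows-DotNETRuntime")
        {
            _runtime = source;
            EnableEvents(source, EventLevel.Informational, Keywords);
        }
    }

    // Call periodically (e.g. from a timer) to refresh the session.
    public void Refresh()
    {
        if (_runtime != null)
            EnableEvents(_runtime, EventLevel.Informational, Keywords);
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        // Forward/count events as usual.
    }
}
```

Calling Refresh on an interval (say, once a day) re-issues EnableEvents with the same provider and keywords, which per the comment above refreshes the session without restarting the process.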

.NET Core Diagnostics automation moved this from On Deck to Done on Mar 16, 2021
ghost removed the in-pr label on Mar 16, 2021

djluck commented Mar 16, 2021

@josalem will #48435 be merged into .NET 3.1?


josalem commented Mar 16, 2021

I don't believe it will. The patch targets the C version of EventPipe that we switched to in 6.0. There is a workaround for pre-6.0 (see my previous comment on this thread).


djluck commented Mar 16, 2021

Thanks @josalem. Unfortunately the workaround (disabling/re-enabling events) is not always tenable, due to deadlocks that occur on 3.1 under a high load of events. I haven't had time to write up an issue for this yet but have a reliable reproduction on my machine; hopefully I'll find time to create an issue later today.

ghost locked as resolved and limited conversation to collaborators on Apr 15, 2021
5 participants