TelemetryClient creates InMemoryChannel in Track but never disposes it #364
Proper Dispose cleanup is hampered by the fact that the TelemetryChannel setter on TelemetryConfiguration is public. Someone can replace the channel instance on a configuration instance without disposing of the old channel, which makes for a messed-up anti-pattern.
We have several scenarios supported in the existing code:
TelemetryConfiguration currently has no non-default constructor and needs a setter on the TelemetryChannel property, but this poses the problem of when a channel is created and who is responsible for disposing of it.
I'd like us to separate creation of a default and custom …
Reopened until the PR is complete and merged.
Is there any guidance if we're seeing this very specific stack trace in production? It's not particularly clear, but for instance, we have code that looks similar to this (in a shared DLL which is used all over)
then I'm guessing our "fix" should be something like this (
We're using v2.4.0.0 of the AppInsights DLL, and I'm not seeing the lines of code mentioned in the OP (using dotPeek), so it's a little confusing, but we've had about half a year of mysterious timeouts, which I finally traced to the same stack trace as the OP.
@stofte Yes, you should change your code to avoid repeatedly creating a new TelemetryConfiguration each time DefaultTelemetryConfiguration is called, i.e. make it a static field-backed property. Like this, for example:

```csharp
public static TelemetryConfiguration DefaultTelemetryConfiguration { get; } =
    new TelemetryConfiguration { InstrumentationKey = "ikey" };
```

You could also do the same for DefaultTelemetryClient (i.e. cache the result in a static field) to avoid constructing a new one each time, but that's less of a problem.
Even if I cache my instance of TelemetryClient in a static field, I still observe the number of open threads growing. Is there any ETA on a fix for this?
@mark-janos, the known issue is that repetitive creation of TelemetryConfiguration objects leads to orphaned threads. If you only use one TelemetryConfiguration object, you should not see an ever-increasing number of threads. If you do, it may be a new issue we are not yet aware of: can you post example stacks from such threads and their approximate quantity?
I have an Azure Function App where, when I add the TelemetryClient, the thread count of the Function App continues to grow until it crashes. The growth of threads is shown in the attached screenshot (they only decrease when I manually restart the Function App); the timescale is about 24 hours. I note that this issue does not occur when I remove the TelemetryClient from the Function App. As suggested above, I use static fields for both the TelemetryConfiguration and TelemetryClient, and all I do is call TrackAvailability inside the 'Run' method of the Function App:
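As a hypothetical sketch (the reporter's actual code and screenshot are not included in this thread; the function name, timer schedule, and "ikey" placeholder below are illustrative), the pattern described above looks roughly like:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

public static class AvailabilityFunction
{
    // Static fields, as suggested earlier in the thread: one configuration
    // and one client shared across all invocations of the Function.
    private static readonly TelemetryConfiguration Configuration =
        new TelemetryConfiguration { InstrumentationKey = "ikey" };

    private static readonly TelemetryClient Client =
        new TelemetryClient(Configuration);

    [FunctionName("AvailabilityCheck")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        Client.TrackAvailability(new AvailabilityTelemetry
        {
            Name = "availability-check",
            Success = true,
            Timestamp = DateTimeOffset.UtcNow,
            Duration = TimeSpan.Zero
        });
    }
}
```

Even with this static caching in place, the reporter still observed thread growth.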
@mark-janos
Same issue as @mark-janos (static or DI does not help). When using the ApplicationInsights-aspnetcore package, the behaviour is not reproducible.
@dradoaica in my case I reverted to using the instance of ILogger that's passed into the Function (as per the example posted by @cijothomas).
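The workaround mentioned above, relying on the ILogger that the Functions runtime injects rather than owning a TelemetryClient yourself, can be sketched like this (the function name and schedule are hypothetical, not from this thread):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;
using Microsoft.Extensions.Logging;

public static class AvailabilityFunction
{
    [FunctionName("AvailabilityCheck")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        // The runtime-supplied ILogger forwards to Application Insights
        // without the caller owning any channel or configuration lifetime,
        // so no orphaned transmitter threads are created.
        log.LogInformation("Availability check ran at {Time}", DateTime.UtcNow);
    }
}
```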
I have an issue that looks similar to this; one particular instance of the affected service had over 55,000 threads. I took a dump, and most of them had the following stack:
This is a SF ASP.NET Core (netcoreapp3.0) service, using the following packages:
The diagnostics pipeline is initialised in
And using the following eventFlow.config:
Interestingly, all other services in the application, including other ASP.NET Core services, don't exhibit the same issue. I haven't yet spotted any difference in implementation, but I'll keep looking. For now, I've had to remove everything above from the service.
After a bit more research, I realised that we had a … I thought an update might help anyone else in the same self-inflicted situation!
Did that help? I'm facing a similar issue too. Instead of memory, CPU goes to 100%, and there are way too many threads stuck in 'defaultaggregationperiodcycle'.
Yes, that change sorted the issue. We had to restart our nodes one by one in order to free enough resources to deploy the fix, but it didn't happen again.
TelemetryClient has logic to create an InMemoryChannel in case no other channel is configured.
The code is here.
InMemoryChannel creates an InMemoryTransmitter, which in turn starts a task (Runner()) on a thread-pool thread. When the TelemetryClient is no longer used and a new TelemetryClient is created, the old thread stays in WaitOneNative(), waiting for the next send interval. The Dispose() method of InMemoryTransmitter/InMemoryChannel would fire the event to stop waiting and would disable Runner(); however, neither appears to be invoked if the TelemetryClient is GC'd.
The issue reproduces if a custom or default configuration object is passed into TelemetryClient on each creation; it should not reproduce if TelemetryConfiguration.Active is used, since in that case a single channel remains active.
The stack of the stale threads is:

```
System.Threading.WaitHandle.WaitOneNative
System.Threading.WaitHandle.InternalWaitOne
System.Threading.WaitHandle.WaitOne
Microsoft.ApplicationInsights.Channel.InMemoryTransmitter.Runner()
```
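A minimal sketch of the lifecycle described above (the loop count and "ikey" are illustrative, and the using-based cleanup assumes TelemetryConfiguration.Dispose() disposes its channel — the behaviour this issue's PR is meant to guarantee):

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Leaky pattern: each TelemetryConfiguration lazily gets its own
// InMemoryChannel on the first Track call, and each channel's
// InMemoryTransmitter parks a Runner() task on the thread pool.
// Nothing calls Dispose(), so those runners outlive their clients.
for (int i = 0; i < 100; i++)
{
    var config = new TelemetryConfiguration { InstrumentationKey = "ikey" };
    var client = new TelemetryClient(config);
    client.TrackEvent("leaky"); // forces channel creation
    // config and client go out of scope here; the transmitter thread
    // stays blocked in WaitOneNative() until the process exits.
}

// Correct pattern: create one configuration, reuse it, and dispose it
// on shutdown so the channel signals Runner() to stop.
using (var config = new TelemetryConfiguration { InstrumentationKey = "ikey" })
{
    var client = new TelemetryClient(config);
    client.TrackEvent("clean");
    client.Flush(); // push any buffered telemetry before shutdown
} // disposing the configuration tears down the channel and its thread
```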