Possible deadlock in ConfigurationManager in .NET 6 #61747
Comments
Tagging subscribers to this area: @safern
Tagging subscribers to this area: @maryamariyan, @safern
Did you mean to link to a repository here? This appears to link to the current issue.
Thanks Kevin - fat-fingered copy-paste.
I've been having a play with this, the repro is super useful to recreate this. There are 2 resources involved in this deadlock:
The deadlock
I presume the reload path is fine when the … Perhaps the …
Nice investigation, Stu. I wonder though if maybe there's something in the lock implementation that should be reworked in … The class summary says it should be frozen once … Any thoughts on this @halter73?
From a customer/production perspective, this is an example of the behaviour we saw when the issue first arose. The application was in a steady state serving requests (each coloured line is a different HTTP endpoint); then, where the red arrow is, the application's configuration was reloaded in each of the 3 AWS EC2 instances that were in service at the time, in response to a change made in our remote configuration store. The application then very quickly went into a state of deadlock in each instance, with health checks eventually also deadlocking, leading to our load-balancer marking the instances all as unhealthy and taking them out of service. Traffic then flatlines until new instances come into service to take up the load.
It looks like it's very easy to have application code run with the … I think the best solution might be to make a copy-on-write version of the …
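To illustrate the general copy-on-write idea (a simplified sketch of the pattern only, not the actual ConfigurationManager change; the collection being wrapped here is an assumption for illustration): readers take a reference to an immutable snapshot without locking, while writers clone and swap the snapshot under a lock.

```csharp
using System.Collections.Generic;

// Simplified copy-on-write list: illustrative only, not the runtime's implementation.
public sealed class CopyOnWriteList<T>
{
    private readonly object _writeLock = new();
    private volatile List<T> _snapshot = new();

    // Readers never take the lock; they just read the current snapshot reference.
    public IReadOnlyList<T> Snapshot => _snapshot;

    public void Add(T item)
    {
        lock (_writeLock)
        {
            // Copy, mutate the copy, then publish it by swapping the reference.
            var copy = new List<T>(_snapshot) { item };
            _snapshot = copy;
        }
    }
}
```

With this shape, reads that happen during a reload see either the old or the new snapshot, but never block on the writer.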
Thanks for the great repro @martincostello. One possible workaround for now, if you control the code calling …, is:

```csharp
foreach (var provider in ((IConfigurationRoot)Configuration).Providers)
{
    provider.Load();
}
```

This isn't exactly the same as …
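As a usage sketch (the "/reload-config" route and the minimal API wiring are assumptions for illustration, not part of the suggestion above), the workaround could be applied from an endpoint in place of calling Reload():

```csharp
// Assumes 'app' is a WebApplication built by WebApplicationBuilder.
app.MapPost("/reload-config", (IConfiguration configuration) =>
{
    // Reload each provider individually instead of calling IConfigurationRoot.Reload().
    foreach (var provider in ((IConfigurationRoot)configuration).Providers)
    {
        provider.Load();
    }

    return Results.NoContent();
});
```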
Thanks @halter73 - I'll give that workaround a try tomorrow and see if it provides an alternative. Today in the meantime we backed out Minimal Hosting to go back to …
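For context, a minimal sketch of the kind of non-Minimal-Hosting setup being reverted to (assuming the Generic Host with a Startup class, since the exact type referenced in the comment above is not preserved; the Startup class itself is assumed to exist):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
            .Build()
            .Run();
}
```

With this setup the application's IConfigurationRoot is built by the host rather than being the ConfigurationManager that WebApplicationBuilder uses.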
Yep, the workaround works for our use case. Thanks!
Will the fix be backported to 6.0?
I opened a backport PR at #63816. If you're aware of any others who have run into this issue, that might make it easier to pass the servicing bar.
@martincostello Is it possible for you to verify the …
Sure, I'll try this out tomorrow in our staging environment.
Looks like the installer repo still isn't producing the zip packages for the SDK on main for v7 😞

```
The required version of the .NET Core SDK is not installed. Expected 7.0.100-alpha.1.22068.3.
dotnet-install: Note that the intended use of this script is for Continuous Integration (CI) scenarios, where:
dotnet-install: - The SDK needs to be installed without user interaction and without admin rights.
dotnet-install: - The SDK installation doesn't need to persist across multiple CI runs.
dotnet-install: To set up a development environment or to run apps, use installers rather than this script. Visit https://dotnet.microsoft.com/download to get the installer.
dotnet-install: Downloading primary link https://dotnetcli.azureedge.net/dotnet/Sdk/7.0.100-alpha.1.22068.3/dotnet-sdk-7.0.100-alpha.1.22068.3-win-x64.zip
dotnet-install: The resource at https://dotnetcli.azureedge.net/dotnet/Sdk/7.0.100-alpha.1.22068.3/dotnet-sdk-7.0.100-alpha.1.22068.3-win-x64.zip is not available.
dotnet-install: Downloading legacy link https://dotnetcli.azureedge.net/dotnet/Sdk/7.0.100-alpha.1.22068.3/dotnet-dev-win-x64.7.0.100-alpha.1.22068.3.zip
dotnet-install: The resource at https://dotnetcli.azureedge.net/dotnet/Sdk/7.0.100-alpha.1.22068.3/dotnet-dev-win-x64.7.0.100-alpha.1.22068.3.zip is not available.
```

I can test this out locally with the installer exe, but our build-deploy process relies on acquiring the SDK using the dotnet-install scripts, so until that's resolved I won't be able to put it into one of our dev/QA/staging envs for a more detailed exercise.
I ran a variant of the steps in this repro app with the real application that surfaced this issue locally for 15 minutes with no issues, using SDK version … If necessary I can do some further validation of the fix in our staging environment under more load once the issues with the v7 ZIP availability in the … are resolved.
Thanks @martincostello. If you could do further validation that would be great. It would help give us more confidence that it fixes the deadlock without causing other regressions.
No problem. Is the installer team looking into the issue? It's been broken since before the holidays. There was an issue open about it, but it got closed once the exe started working: dotnet/installer#12850
Is the installer at https://aka.ms/dotnet/7.0.1xx/daily/dotnet-sdk-win-x64.exe not up to date? I'm seeing build number 7.0.100-alpha.1.22069.1 which looks recent.
The exes are fine, but the zips 404 when using the CI scripts.
Oh! I should have read your previous comments more carefully. If you want to download 7.x previews using the … This was fixed by dotnet/install-scripts#233, but it doesn't look like the scripts at dot.net/v1/dotnet-install.sh or dot.net/v1/dotnet-install.ps1 have been updated yet. @bekir-ozturk do you have any idea how long it will take to get the official scripts updated?
Aha! Cool, I'll add that argument to our bootstrapper script tomorrow morning, then I should be able to get something up into our staging environment for a load/soak test.
@halter73 I deployed a .NET 7 build of our application with the fix to our staging environment today for 2 hours, and there were no functional or performance issues observed. Our load test sends constant synthetic load at the application, and I additionally fired requests at it to reload the configuration continuously in a loop for an hour. We didn't observe any deadlocks during the period, compared with .NET 6, where we could reproduce the deadlock in these circumstances within a few minutes.
Description
We have a production HTTP application that we updated recently to .NET 6.0.0 from .NET 6.0.0-rc.2 and have observed a number of issues where the application appeared to become suddenly unresponsive to HTTP requests. This would cause application health checks to fail, and the instances to be taken out of service.
Having dug into this over the last day or so (#60654 (comment)), I think I've tracked this down to a deadlock that occurs in ConfigurationManager if the application's configuration is manually reloaded at runtime.

Overall the issue appears to be that if an options class is bound to configuration via a type such as IOptionsMonitor<T> and there is a change callback bound to IConfigurationRoot.Reload(), then the application will deadlock trying to get configuration values to bind to the options class, as the lock around getting an option's value (runtime/src/libraries/Microsoft.Extensions.Configuration/src/ConfigurationManager.cs, line 46 at 13024af) will be waiting for the lock acquired during the reload (runtime/src/libraries/Microsoft.Extensions.Configuration/src/ConfigurationManager.cs, line 109 at 13024af).
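A deliberately simplified illustration of this kind of lock-order inversion (not the actual ConfigurationManager internals; the lock names here are invented for the sketch):

```csharp
using System.Threading.Tasks;

object configurationLock = new(); // stands in for the lock taken by the indexer and by Reload()
object optionsLock = new();       // stands in for the lock taken while (re)building options

var reload = Task.Run(() =>
{
    lock (configurationLock)      // the reload holds the configuration lock...
    {
        lock (optionsLock)        // ...while its change callback tries to rebuild options.
        {
        }
    }
});

var bind = Task.Run(() =>
{
    lock (optionsLock)            // an in-flight request is building an options snapshot...
    {
        lock (configurationLock)  // ...and reads IConfiguration[string] under it.
        {
        }
    }
});

Task.WaitAll(reload, bind);       // with unlucky timing, neither task can ever proceed
```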
I've captured a memory dump from the application after triggering the issue in our staging environment, and a screenshot of the Parallel Stacks window from Visual Studio taken from inspecting the memory dump is below.

Thread 852 has called IConfigurationRoot.Reload(), which is blocked on thread 3516 waiting on an options monitor callback for an options class. Thread 3516 is deadlocked on a call to IConfiguration[string] to create an options class. IConfiguration and IConfigurationRoot are both the same instance of ConfigurationManager.

I haven't ruled out this being a latent bug in our application that .NET 6 has brought to the surface, but we've only had the issue with .NET 6.0.0. We've reverted the application to .NET 6.0.0-rc2 for the time being, and the problem has gone away. I figured I would log the issue now in case someone looks at it and can quickly find the root cause while I'm continuing to repro this independently or determine it's an actual bug in our app.

Reproduction Steps
To reproduce this issue, follow the instructions in this repo: https://github.com/martincostello/ConfigurationManagerDeadlock
A conceptual repro is to do the following two actions concurrently in an app using WebApplicationBuilder, so ConfigurationManager is the app's IConfigurationRoot:

1. Reload the IConfigurationRoot from an HTTP request in a loop;
2. Issue an HTTP request, in a loop, that resolves IOptionsMonitor<T> or IOptionsSnapshot<T> from the service provider which is bound to configuration.

After a period of time (in testing I found this happened within 10 minutes), the application will deadlock.
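A minimal sketch of that conceptual repro (this is not the linked repro project; the endpoint routes and the MyOptions class are illustrative assumptions):

```csharp
// Assumes the ASP.NET Core web SDK's implicit usings for WebApplication, Results, etc.
using Microsoft.Extensions.Options;

var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<MyOptions>(builder.Configuration.GetSection("MyOptions"));

var app = builder.Build();

// Action 1: reload the IConfigurationRoot (the ConfigurationManager) from a request, called in a loop.
app.MapGet("/reload", (IConfiguration configuration) =>
{
    ((IConfigurationRoot)configuration).Reload();
    return Results.Ok();
});

// Action 2: resolve IOptionsMonitor<T> bound to configuration from a request, called in a loop.
app.MapGet("/options", (IOptionsMonitor<MyOptions> monitor) => Results.Ok(monitor.CurrentValue));

app.Run();

public class MyOptions
{
    public string? Value { get; set; }
}
```

Hitting /reload and /options concurrently in tight loops exercises the contention described above.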
Expected behavior
Configuration reloads successfully and does not deadlock requests in flight.
Actual behavior
The application deadlocks the thread reloading the configuration and other threads accessing the configuration to bind options.
Regression?
Compared to using IConfigurationRoot directly with Program/Startup (i.e. a non-Minimal API), yes.

Known Workarounds
I'm not aware of any workarounds at this point, other than not using Minimal APIs when doing configuration reloading at runtime.
Configuration
6.0.0-rtm.21522.10
6.0.0-rtm.21522.10+4822e3c3aa77eb82b2fb33c9321f923cf11ddde6
6.0.0+ae1a6cbe225b99c0bf38b7e31bf60cb653b73a52
Other information
No response