Why IConcurrencyLock in DefaultCache<T>? #659
Comments
Hi, thanks for getting in touch. We haven't encountered many performance problems with the DefaultCache implementation, so it's really helpful to hear when folks have issues. In many cases, the cached items are updated very infrequently, so locking while they are updated seems like a sensible defensive strategy at face value to me. But if you're seeing performance problems, my assumptions might not always hold. Have you tried replacing the cache implementation as you describe? If so, did it resolve your performance problem?
I will do that... but it will take some time for that release to get into production. And this is one of those cases where you can only see the results in production... But I don't understand your point of view. Say you start the service and the first two requests arrive at /connect/token at the same time, with different clients. Is it reasonable to have one request wait for the other? Moreover, suppose the database becomes slow for some reason. Is it reasonable for a request to fail because you are not able to add the client to the cache?
These are all good points that you're raising. While we probably want some mechanism to prevent someone from reading the cache as it is being written, the locking is probably more aggressive than it needs to be. We'll consider updating the cache lock behavior in a future release, and also look forward to hearing more about how your custom implementation of the cache works. For now, I'll close this issue, but feel free to reopen it if you get interesting results re: your custom cache implementation that you want to share. Thanks!
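For reference, a custom cache can be swapped in through the dependency injection container, since IdentityServer resolves its cache through the ICache<T> abstraction. The sketch below is an assumption about a typical setup; MyCustomCache<T> is a hypothetical placeholder for your own implementation, and depending on how the default is registered (Add vs. TryAdd), you may need to place your registration before or after AddIdentityServer().

```csharp
// Hypothetical registration of a custom cache implementation.
// MyCustomCache<T> is a placeholder for your own ICache<T> type.
var builder = services.AddIdentityServer(options =>
{
    // ... your existing IdentityServer options ...
});

// Open-generic registration: one class serves all cached item types.
// In Microsoft.Extensions.DependencyInjection, the last registration
// wins when a single ICache<T> is resolved.
services.AddTransient(typeof(ICache<>), typeof(MyCustomCache<>));
```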
Hi @hugoqribeiro,
Hi @alirezanet. I replaced the default implementation of
Which version of Duende IdentityServer are you using?
6.2.3
Which version of .NET are you using?
.NET 7
Describe the bug/question
While investigating an issue in our production environment - from time to time, one of our servers rises above 95% CPU - I found a lot of exceptions raised by DefaultCache<T> around the same time.
One could argue that, given the CPU is loaded, the connection to the database becomes much slower, so we hit this problem with the cache lock timeout. I have yet to find the reason for the CPU being under stress.
My issue is with the implementation of GetOrAddAsync(). Isn't this a huge bottleneck? You are blocking the retrieval of client X while retrieving client Y.
Does that make sense? Why the lock in the first place? What would be the problem with two threads reading the same client from the database at the same time?
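To illustrate the alternative being suggested, here is a minimal per-key cache sketch in which reads of different keys never block each other, while concurrent reads of the same key still share a single database fetch. This is a hypothetical illustration under the assumptions above, not Duende's actual DefaultCache<T>, and it deliberately omits concerns the real cache handles, such as expiration and eviction.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch of a per-key cache: no cache-wide lock, so fetching client X
// never waits on a fetch of client Y. Concurrent callers asking for the
// same key share one in-flight fetch instead of hitting the database twice.
public class PerKeyCache<T> where T : class
{
    private readonly ConcurrentDictionary<string, Lazy<Task<T>>> _items = new();

    public Task<T> GetOrAddAsync(string key, Func<Task<T>> factory)
    {
        // ExecutionAndPublication guarantees the factory runs at most once
        // per key, even under concurrency, without any global lock.
        var lazy = _items.GetOrAdd(
            key,
            _ => new Lazy<Task<T>>(factory, LazyThreadSafetyMode.ExecutionAndPublication));
        return lazy.Value;
    }
}
```

One trade-off of the Lazy<Task<T>> pattern: if the factory throws, the faulted task stays cached for that key, so a production implementation would also want to evict failed entries.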
On a side note, it would be much better if ALL of your service implementations could be easily overridden. This is something I think you should consider to improve IdentityServer's extensibility at large. I could then change GetOrAddAsync() without having to implement the whole service...
Log output/exception with stacktrace