Add method RemoveOrUpdate to ConcurrentDictionary #23644
Comments
To clarify, @jnm2: you should be able to fake "removal" by checking the count during the "get" update and recreating the item if the count is 0 (presumably the item was previously disposed when the count was set to 0).
In the case of
ConcurrentDictionary with the current API is useless for this case.
Since I doubt you have that many
In my case the actual type of
What if you use
I wonder if I can bother @StephenCleary - what's the best way to implement a keyed semaphore? Seems like mixing and matching synchronization primitives may not be ideal.
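For context, a keyed semaphore is commonly sketched with a plain `Dictionary` guarded by a `lock` for the reference-count bookkeeping. The sketch below uses illustrative names (`KeyedSemaphore`, `Entry`, `LockAsync` are mine, not an API from this thread); the release path needs the lock precisely because `ConcurrentDictionary` has no atomic "update or remove", which is what this issue asks for.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Minimal keyed-semaphore sketch (illustrative names, not a real API).
public sealed class KeyedSemaphore<TKey> where TKey : notnull
{
    private sealed class Entry
    {
        public readonly SemaphoreSlim Semaphore = new SemaphoreSlim(1, 1);
        public int RefCount;
    }

    private readonly Dictionary<TKey, Entry> _entries = new Dictionary<TKey, Entry>();

    public async Task<IDisposable> LockAsync(TKey key)
    {
        Entry? entry;
        lock (_entries)
        {
            if (!_entries.TryGetValue(key, out entry))
                _entries[key] = entry = new Entry();
            entry.RefCount++; // count holders so the entry is removed only when unused
        }
        await entry.Semaphore.WaitAsync().ConfigureAwait(false);
        return new Releaser(this, key, entry);
    }

    private void Release(TKey key, Entry entry)
    {
        lock (_entries)
        {
            entry.Semaphore.Release();
            if (--entry.RefCount == 0)
                _entries.Remove(key); // safe: the count is checked under the same lock
        }
    }

    private sealed class Releaser : IDisposable
    {
        private KeyedSemaphore<TKey>? _owner;
        private readonly TKey _key;
        private readonly Entry _entry;

        public Releaser(KeyedSemaphore<TKey> owner, TKey key, Entry entry)
        {
            _owner = owner;
            _key = key;
            _entry = entry;
        }

        public void Dispose()
        {
            _owner?.Release(_key, _entry);
            _owner = null; // make double-dispose a no-op
        }
    }
}
```

Replacing the `lock` with a lock-free `ConcurrentDictionary` is exactly where the decrement-then-maybe-remove race discussed below appears.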
Hm, the other option would be just to remove the entry (capture the value during the initial update to check later), then add/update it again if it turned out to have been updated in between the update-for-decrement and the remove. Of course, that would mean the reference count could go negative, but we don't particularly care what the actual value is, so long as it's not zero.
@jnm2 The shared object actually is not shared. This is an ASP.NET Core app, so every request restores a personal object from the DB. There are many of them pointing to the same DB entity simultaneously. So there is no way to move the sync object into the object itself.
@Clockwork-Muse If you remove an entry thinking that the reference count is zero, but it is not, and after that somebody creates a new entry, you will have two objects with two different AsyncLocks.
@alewmt I meant stop using
@jnm2 A tuple with a counter is still required. Otherwise it is impossible to know when to delete the entry.
@alewmt - no, you're right, that was a silly proposal. I guess I was just thinking of the reference count, which wouldn't be helpful. @jnm2 - unfortunately, lock-taken-count (or really, "number of people waiting to enter the semaphore" remaining count, given you'd need to check the result of
Ah, got it. You might be referencing it even though you don't need to use it right then. |
I have always avoided keyed semaphores. In every scenario I've been in where I've wanted one, I take a step back and reconsider the design at a higher level. Every time (so far), I've been able to use an alternative design that does not require a keyed semaphore, and it's usually a cleaner design, too. |
@StephenCleary Simultaneously received requests require some synchronization. I'm curious: what is wrong with keyed semaphores?
As you have been experiencing, semaphore management is a pain. And there's almost always a better place for it that doesn't require keyed semaphores. With the example you provided, you have a request context object and then a keyed semaphore that maps those requests to semaphores. Well, why not have the request context object contain its own semaphore? It's a cleaner design and removes the need for a keyed semaphore in the first place. Have the context contain its own components rather than looking them up in a shared, static dictionary.
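The design described above can be sketched roughly like this (`RequestContext`, `Counter`, and `IncrementAsync` are illustrative names, not anything from this thread): the semaphore lives on the object that owns the state, so there is no shared dictionary to manage. As the follow-up comments note, this only helps when the locked instance really is shared between callers.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch: the context owns its own semaphore, so callers lock
// the resource directly instead of looking up a keyed semaphore.
public sealed class RequestContext
{
    public SemaphoreSlim Mutex { get; } = new SemaphoreSlim(1, 1);
    public int Counter; // example of state guarded by Mutex
}

public static class Demo
{
    public static async Task IncrementAsync(RequestContext ctx)
    {
        await ctx.Mutex.WaitAsync();
        try
        {
            ctx.Counter++; // critical section
        }
        finally
        {
            ctx.Mutex.Release();
        }
    }
}
```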
@StephenCleary I’m not sure how it can help abandon keyed semaphores. Two request contexts need the same semaphore. Certainly, it is possible to move a dictionary of semaphores from the application layer to somewhere in the pipeline infrastructure. It just hides the usage of the keyed semaphore. Did I miss something?
@alewmt I thought you meant that each request would have a semaphore. If you are talking about some other resource, then just move the semaphore into whatever that resource is. So instead of locking a keyed semaphore based on some lookup, you'd just lock the resource's semaphore directly.
@StephenCleary In my case it is impossible. The resource is an ORM entity for a DB row. It is instantiated on every request.
@alewmt I see. I've never been in that situation, but I can see how it's possible, depending on your DB concurrency strategy. I'd say that modifying
When I've been in this scenario, I use a database-table-backed mutex implementation that not only locks within the same process but also any other instance running on the same machine or a different machine. Acquiring, maintaining and releasing the lock takes I/O, which is slower than concurrency primitives, but in my case I specifically needed cross-process and cross-machine locking of resources by row ID. (Of course, none of this will help you if you're not interested in locking across multiple instances of your application.)
@StephenCleary Thank you for your recommendation.
@alewmt -
Um what? The
@alewmt If your application is a website, and you're relying on only a single instance of it ever running (per database), and you ever want to scale it horizontally, you're going to have to use some central system like the database for pessimistic locking, or you'll have concurrency bugs again. Or eventual consistency, but that also solves your problem by not needing mutexes. Also, say it's a website running on IIS: by default (I think) IIS starts up a new instance of your application before shutting down the last one, so that recycling doesn't interrupt web traffic. If your application is a desktop or console app, @Clockwork-Muse's point is exactly right. How are you guaranteeing that no two instances of your app are sharing the same DB?
I was thinking of the web situation, too, really. If you need to lock a resource, it's usually better to do it at the resource level, not the thing that's running, because otherwise there's a possibility that something previously unanticipated is allowed access to the resource, and now you're in trouble.
There's also https://github.com/dotnet/corefx/issues/24770 which I think is reasonable, though maybe not exactly the same scenario.
Related: reactiveui/splat#370 (comment)
What would be the proposed API shape for such a method? Does the proposed method account for cases where it might make sense to add, update or remove an entry depending on the current state? Note that a transaction on a given key could support four possible outcomes, depending on the current state of the dictionary:

- key absent, no value produced: no change
- key absent, value produced: add
- key present, value produced: update
- key present, no value produced: remove

Assuming we had some form of optional type, it might be possible to expose a method that supports all four combinations dynamically:

```csharp
public partial class ConcurrentDictionary<TKey, TValue>
{
    public Option<TValue> Transact(TKey key, Func<TKey, Option<TValue>, Option<TValue>> updater);
    public Option<TValue> Transact<TState>(TKey key, Func<TState, TKey, Option<TValue>, Option<TValue>> updater, TState state);
}
```

which could be used as follows (using pseudo C# DU syntax):

```csharp
var dictionary = new ConcurrentDictionary<string, int>();
dictionary.Transact("key", (key, value) => value switch
{
    case None => None, // attempting to remove a key that doesn't exist; do not make any changes
    case Some { Value: <= 1 } => Option<int>.None, // value decreased to zero, remove the key
    case Some { Value: int v } => Option<int>.Some(--v), // update the entry with a decremented counter
});
```

This pattern might not be possible until C# receives discriminated union support, but I think it might be worth the wait before we do something in this space.
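Until something like `Transact` exists, the decrement-or-remove case can be approximated today with a compare-and-swap loop, at least when the stored value has meaningful equality (such as an `int` counter). The helper name `DecrementOrRemove` below is mine; the sketch relies on the documented behavior of `ConcurrentDictionary`'s explicit `ICollection<KeyValuePair<TKey, TValue>>.Remove` implementation, which removes an entry only if the key still maps to the given value.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class ConcurrentDictionaryExtensions
{
    // Sketch: decrement the counter stored under `key`, removing the entry
    // once it would reach zero. Returns false if the key was not present.
    public static bool DecrementOrRemove(
        this ConcurrentDictionary<string, int> dict, string key)
    {
        while (dict.TryGetValue(key, out var current))
        {
            if (current <= 1)
            {
                // The explicit ICollection implementation removes the entry
                // only if the key still maps to `current`, giving an atomic
                // compare-and-remove.
                var collection = (ICollection<KeyValuePair<string, int>>)dict;
                if (collection.Remove(new KeyValuePair<string, int>(key, current)))
                    return true;
            }
            else if (dict.TryUpdate(key, current - 1, current))
            {
                return true;
            }
            // Lost a race with another thread; re-read and retry.
        }
        return false;
    }
}
```

Newer runtimes also expose a public `TryRemove(KeyValuePair<TKey, TValue>)` overload that avoids the interface cast. The pattern still does not cover values that are mutable reference types holding the counter, which is essentially the gap this issue describes.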
It would be very useful to have the ability to conditionally remove or update a `ConcurrentDictionary` entry. It already has `AddOrUpdate`. Right now I can't use `ConcurrentDictionary` in ref-counting scenarios, because I can't check the reference count and remove the entry if the value is 0, and I am forced to use a common `Dictionary` with `lock`.