Are confluent set and map thread-safe? #1
Comments
Nodes are reference counted using atomic counters, and updating operations will access hash tables that are guarded by mutexes. Otherwise the sets and maps are backed by immutable trees that make them inherently thread-safe by design. You can think of the sets and maps as smart pointers that point to nodes in a forest where all nodes are immutable. Updating a set or a map can add new nodes to the forest and/or delete old nodes that are no longer reachable, but can never modify any existing node that can be reached from other sets or maps.

As with smart pointers, one instance of a set or a map should not be updated simultaneously from different threads (as a smart pointer itself is usually not guarded in that way), but it is fine to perform read operations from different threads. It is also fine to use the copy constructor to clone a set or map in O(1) and then update the cloned instances concurrently.
yeah, the smart pointer itself is usually not guarded when updated simultaneously. Usage:

thread1:
thread2-threadN:
The following is not safe:
An obvious problem is that thread2 would explore the tree without holding a reference count on the root node, so the searched nodes could disappear while the find operation is in progress.

I have worked with similar implementations in the past that allow this kind of concurrency by making the root node pointer atomic and also increasing the reference count on the root node, to protect against deallocation while operations like find() are exploring the tree. It comes with a severe performance penalty though. First, it requires load and store fences in all entry points to ensure cache consistency. Then, all usage of atomics, mutexes and memory barriers inserts compiler barriers that prevent the optimizations a compiler could otherwise do. With the current implementation, read operations perform similarly to ephemeral implementations, but that would not be possible if additional synchronization were always added just because it would be useful in rare cases. On the other hand, it should be fully possible to wrap the current implementation to add more synchronization when needed. The following should be fine:
yeah, wrapping with a mutex is simple, and I have a similar mutex-guarded treap implementation ;-)
There are no atomic operations in the source code; are they thread-safe?