
Improvement of node synchronization #29

Closed
maypok86 opened this issue Jan 6, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

maypok86 commented Jan 6, 2024

Clarification and motivation

Currently otter uses spinlocks to synchronize nodes when reading values from or updating information in them. This blocks concurrent reads of the same node and slows reads down considerably. Unfortunately, we can't use sync.RWMutex because of the xadd operations inside it: the amount of work done under the lock is so small that we would get roughly the same speed while spending more memory than a spinlock. There are several possible solutions to this problem:

  • Using atomic.Value to store values. This gives fantastic read/update speed, but costs an extra 16 bytes per node to store the empty interface, plus an unnecessary allocation for every value.
  • Using a seqlock to synchronize nodes. This synchronization mechanism, used in the Linux kernel, needs only two load operations on the read path, which should help a lot in our case. But it has its own problems (see the sketch after this list):
    1. According to the Go data race detector, a seqlock is a data race: it deliberately reads dirty data and then decides, via an additional check, whether to return it. This could be handled with build flags, compiling otter with the seqlock only when the -race flag is not passed, but that is still not very nice.
    2. As was found out recently, the Go compiler does not force barriers between two atomic load operations and may reorder the value copy after the second load, so such code often returns dirty data (gossa). It seems this can be worked around by wrapping the value copy in lock and unlock operations on a mutex local to each function, which adds no contention but does trigger the needed memory barriers.
  • Using RCU operations. Instead of updating nodes in place, we create new ones and update the eviction policy; then reads need no synchronization at all. This unlocks fantastic read speed, but slows updates down considerably.
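For illustration, here is a minimal sketch of the seqlock read/write paths described above. The node layout, field names, and fixed-size value are hypothetical, not otter's actual types; the function-local mutex in Read is the barrier workaround mentioned in point 2:

```go
package main

import (
	"sync"
	"sync/atomic"
)

// node is a hypothetical cache node guarded by a seqlock: an even
// sequence number means the value is stable, an odd one means a
// writer is in progress.
type node struct {
	seq   atomic.Uint32
	value [16]byte   // inline value storage, copied under the seqlock
	mu    sync.Mutex // serializes writers only
}

// Read retries until it observes the same even sequence number before
// and after copying the value, i.e. no writer ran in between.
func (n *node) Read() [16]byte {
	for {
		s1 := n.seq.Load()
		if s1&1 != 0 {
			continue // writer in progress, retry
		}
		// Point 2 above: the compiler may reorder this copy past the
		// second load, so the copy is wrapped in Lock/Unlock on a
		// function-local mutex to force memory barriers without
		// introducing any contention.
		var barrier sync.Mutex
		barrier.Lock()
		v := n.value
		barrier.Unlock()
		if n.seq.Load() == s1 {
			return v // sequence unchanged: the copy is consistent
		}
	}
}

// Write bumps the sequence to odd, mutates the value, then bumps the
// sequence back to even so readers can proceed.
func (n *node) Write(v [16]byte) {
	n.mu.Lock()
	n.seq.Add(1) // odd: readers will retry
	n.value = v
	n.seq.Add(1) // even: value is stable again
	n.mu.Unlock()
}
```

Note that under -race this still reports a race on the value copy, which is exactly point 1 above.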

Acceptance criteria

  1. The type of node synchronization to use going forward has been selected.
  2. A new version of node synchronization has been implemented.
  3. Tests for the implementation have been written.
maypok86 commented Jan 9, 2024

Okay, seqlocks didn't work for us, let's try RCU.
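A minimal sketch of the RCU-style direction, assuming a hypothetical entry type and an atomic.Pointer slot rather than otter's real node layout: reads become a single atomic load, while every update allocates and publishes a fresh node (and, in otter's case, would also have to update the eviction policy, which is where the update cost comes from):

```go
package main

import "sync/atomic"

// entry is a hypothetical immutable node payload. Readers can never
// observe a partially written value because entries are replaced as a
// whole, never mutated in place.
type entry struct {
	key   string
	value string
}

// slot holds the current entry; updates swap in a fresh copy
// (RCU-style), so the read path is a single atomic pointer load.
type slot struct {
	ptr atomic.Pointer[entry]
}

// Load is the entire read path: no locks, no retries.
func (s *slot) Load() *entry {
	return s.ptr.Load()
}

// Store publishes a brand-new entry. The old one is reclaimed by the
// GC once no reader still holds it, which is what makes RCU cheap to
// express in Go.
func (s *slot) Store(key, value string) {
	s.ptr.Store(&entry{key: key, value: value})
}
```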

maypok86 commented
I think I'm fine with that for now
