
read-write lock with writer-preference #1553

Closed

Conversation

@Karry (Collaborator) commented Jan 5, 2024

C++ std::shared_mutex may lead to writer starvation with multiple highly concurrent readers, and it doesn't provide an API to configure the priority. Luckily, with the pthread implementation, we may configure writer preference. It expects that write operations are rare compared to read ones.

This problem (writer starvation) is visible during navigation, when multiple threads read the database with every position update (the renderer thread and the navigation thread). When you drive into a tunnel and the application changes the style to the night one (the daylight flag is set to false), it may take several minutes to change the map style, because a style-sheet change requires a write lock on the databases...
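
For reference, a minimal sketch of how writer preference can be requested from the pthread API (glibc-specific and non-portable; this is illustrative only, not the exact code of this PR):

```cpp
#include <pthread.h>

// Illustrative only: glibc exposes a non-portable rwlock attribute to ask
// for writer preference; the default policy prefers readers and can starve
// writers under heavy read load.
pthread_rwlock_t rwlock;

void InitWriterPreferredLock() {
  pthread_rwlockattr_t attr;
  pthread_rwlockattr_init(&attr);
  pthread_rwlockattr_setkind_np(&attr,
      PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
  pthread_rwlock_init(&rwlock, &attr);
  pthread_rwlockattr_destroy(&attr);
}
```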

@Karry Karry marked this pull request as ready for review January 6, 2024 17:51
@Karry Karry changed the title provide read-write lock with writer-preference read-write lock with writer-preference Jan 6, 2024

@janbar (Contributor) commented Jan 23, 2024

For this case you could use a "no lock" concept, using atomic operations on an int, as follows:

  • the reader atomically loads the value and tests it for 0; if it is not 0, it yields and loops back to the next atomic load ...
  • the writer "locks" by an atomic fetch_add of 1; when its work is finished, the writer atomically stores 0.

Using this algorithm the writer always takes priority and you don't need a mutex. The performance is therefore much better.

edit: I will post a sample implementation of ReadWriteLock using this algorithm.
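
As an illustration only (not @janbar's actual sample, which came later as a separate PR), a minimal sketch of the scheme described above, assuming C++11 std::atomic; the class and member names are invented for this example:

```cpp
#include <atomic>
#include <thread>

// Illustrative sketch of the described scheme: a writer announces itself by
// incrementing an atomic counter, readers back off (yield and retry) while
// the counter is non-zero, and the writer clears it when done. Waiting for
// in-flight readers to drain is intentionally left out here; that question
// is discussed below.
class WriterPreferenceFlag {
  std::atomic<int> pendingWriters{0};

public:
  // Reader side: spin (politely) until no writer is pending.
  void WaitForWriters() {
    while (pendingWriters.load(std::memory_order_acquire) != 0) {
      std::this_thread::yield();
    }
  }

  // Writer side: "lock" by fetch_add(1), "unlock" by storing 0.
  void WriterEnter() { pendingWriters.fetch_add(1, std::memory_order_acq_rel); }
  void WriterLeave() { pendingWriters.store(0, std::memory_order_release); }
};
```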

@Karry (Collaborator, Author) commented Jan 23, 2024

> the reader atomically loads the value and tests it for 0; if it is not 0, it yields and loops back to the next atomic load ...

but this is a spinlock - it is good when write operations are really fast, but in our case I believe it is better to block the thread...

@janbar (Contributor) commented Jan 23, 2024

I am preparing a PR with the sample. Currently the writer thread must wait until all readers have released the lock. So one question: must a writer wait until all readers have released it?

@Karry (Collaborator, Author) commented Jan 23, 2024

Yes, because readers expect that the structures will not be modified by a writer while they are holding the lock...

@janbar (Contributor) commented Jan 23, 2024

@Karry, I made the PR against your branch.

@Framstag (Owner) commented
Hello @Karry, hello @janbar: I missed this PR, sorry. Should I still wait, or merge?

@Karry (Collaborator, Author) commented Jan 30, 2024

Please wait - @janbar prepared another implementation (Karry#2) that does not use pthread, just standard C++ locking primitives. It will provide writer precedence on all platforms...
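
For illustration, a writer-preferred read-write lock can be built from standard C++ primitives roughly like this (a sketch only; the actual implementation lives in Karry#2 / #1555, and the names here are invented):

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative sketch: readers are admitted only while no writer is waiting
// or active, which gives writers precedence over new readers.
class WriterPreferredRWLock {
  std::mutex m;
  std::condition_variable cv;
  int activeReaders = 0;
  int waitingWriters = 0;
  bool writerActive = false;

public:
  void ReadLock() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [this] { return waitingWriters == 0 && !writerActive; });
    ++activeReaders;
  }

  void ReadUnlock() {
    std::lock_guard<std::mutex> lk(m);
    if (--activeReaders == 0) {
      cv.notify_all();  // a waiting writer may proceed now
    }
  }

  void WriteLock() {
    std::unique_lock<std::mutex> lk(m);
    ++waitingWriters;  // block new readers while we wait
    cv.wait(lk, [this] { return activeReaders == 0 && !writerActive; });
    --waitingWriters;
    writerActive = true;
  }

  void WriteUnlock() {
    std::lock_guard<std::mutex> lk(m);
    writerActive = false;
    cv.notify_all();  // wake both readers and other writers
  }
};
```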

@janbar (Contributor) commented Jan 30, 2024

@Framstag, I created the new PR #1555 with the new implementation of the lock.

@Framstag (Owner) commented Feb 1, 2024

@Karry: This one is now obsolete?

@Karry (Collaborator, Author) commented Feb 2, 2024

> @Karry: This one is now obsolete?

Yes, but I still have some doubts about @janbar's implementation...

@Karry Karry closed this Feb 9, 2024