
add RwLock to tokio-sync #1699

Merged
merged 23 commits into from
Jan 4, 2020

Conversation

jxs
Member

@jxs jxs commented Oct 28, 2019

Motivation

Following a conversation on the tokio-dev channel about implementing a RwLock for tokio, where @carllerche suggested looking at Watch and its watchers and watcher_id parts, I came up with this solution.

Solution

I defined a RwLock struct, similar to the Watch struct, that wraps a Shared struct composed of an internal std::sync::RwLock, an fnv HashMap of watchers (id and inner waker), an AtomicWaker, and an id.
RwLock calls try_read and try_write on read and write. If try_read or try_write returns Err::WouldBlock, the current context's waker is registered by ref on the inner AtomicWaker.
If try_read/try_write returns a read/write guard, a new Read/Write struct is returned, composed of the inner guard and the list of watchers.
When the guard goes out of scope, its Drop iterates the watchers and calls wake on each one's inner AtomicWaker.
On Clone, RwLock adds a watcher to the watchers list, and on Drop it removes the current RwLock instance from the watchers list.
If there are multiple read and write attempts from the same RwLock instance, their contexts are registered multiple times on the same waker; I am not so sure about this. I added a couple of tests and also imported @asomers' futures-locks ones.
Thanks!
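A heavily simplified, hypothetical sketch of the shape described above (not the actual PR code): a Shared struct wrapping a std::sync::RwLock plus a watcher map, whose guard wakes every watcher on Drop. A plain boolean flag stands in for futures' AtomicWaker so the sketch stays self-contained; all names here are illustrative.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock, RwLockReadGuard};

// Hypothetical stand-in for AtomicWaker: a flag recording "was woken".
type WakeFlag = Arc<Mutex<bool>>;

// Shared state: the inner lock plus the registered watchers (id -> waker).
struct Shared<T> {
    lock: RwLock<T>,
    watchers: Mutex<HashMap<usize, WakeFlag>>,
}

// Read guard: wraps the std guard and keeps a handle back to the watchers.
struct ReadGuard<'a, T> {
    inner: RwLockReadGuard<'a, T>,
    shared: &'a Shared<T>,
}

impl<'a, T> Drop for ReadGuard<'a, T> {
    // On Drop, wake every watcher so pending read/write attempts can retry.
    fn drop(&mut self) {
        for waker in self.shared.watchers.lock().unwrap().values() {
            *waker.lock().unwrap() = true;
        }
    }
}

impl<T> Shared<T> {
    // Mirrors the described approach: delegate to the std lock's try_read;
    // the async layer would register a waker on WouldBlock instead.
    fn try_read(&self) -> Option<ReadGuard<'_, T>> {
        self.lock
            .try_read()
            .ok()
            .map(|inner| ReadGuard { inner, shared: self })
    }
}

// Exercises the drop-to-wake path; returns whether the watcher was woken.
fn demo_wake_on_drop() -> bool {
    let shared = Shared {
        lock: RwLock::new(5),
        watchers: Mutex::new(HashMap::new()),
    };
    let flag: WakeFlag = Arc::new(Mutex::new(false));
    shared.watchers.lock().unwrap().insert(1, Arc::clone(&flag));
    {
        let guard = shared.try_read().expect("uncontended try_read succeeds");
        assert_eq!(*guard.inner, 5);
    } // guard dropped here, waking all watchers
    let woken = *flag.lock().unwrap();
    woken
}
```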

@jxs jxs force-pushed the master branch 2 times, most recently from 366d6bc to 73fcaa8 on October 28, 2019 10:25
Member

@carllerche carllerche left a comment


Thanks for getting this started 👍 I did a quick skim and had a question inline.

@jonhoo
Contributor

jonhoo commented Nov 1, 2019

@jxs I'm curious — why wrap a std::sync::RwLock rather than build directly on tokio::sync::Semaphore (like tokio::sync::Mutex does)? I believe it should be possible to do that. In fact, one perhaps even easier implementation here is to follow this algorithm that lets you implement a RwLock using just two regular Mutexes (which already exists!). It's not terribly efficient, but a decent start!
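The two-Mutex construction jonhoo mentions can be sketched in blocking std form (an async version would swap in async mutexes). Because a std MutexGuard cannot be released from a different scope than the one that acquired it, a Condvar-backed binary semaphore stands in for the second mutex; all names are illustrative, not the PR's code.

```rust
use std::sync::{Condvar, Mutex};

// Binary semaphore from Mutex + Condvar: unlike a std MutexGuard, it can
// be released from a different scope than the one that acquired it.
struct BinarySem {
    free: Mutex<bool>,
    cv: Condvar,
}

impl BinarySem {
    fn new() -> Self {
        BinarySem { free: Mutex::new(true), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut free = self.free.lock().unwrap();
        while !*free {
            free = self.cv.wait(free).unwrap(); // loop handles spurious wakeups
        }
        *free = false;
    }
    fn release(&self) {
        *self.free.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

// The two-lock construction: `r` guards the reader count, `g` excludes
// writers whenever at least one reader (or another writer) holds the lock.
struct TwoLockRwLock {
    r: Mutex<usize>,
    g: BinarySem,
}

impl TwoLockRwLock {
    fn new() -> Self {
        TwoLockRwLock { r: Mutex::new(0), g: BinarySem::new() }
    }
    fn read_lock(&self) {
        let mut count = self.r.lock().unwrap();
        *count += 1;
        if *count == 1 {
            self.g.acquire(); // first reader locks out writers
        }
    }
    fn read_unlock(&self) {
        let mut count = self.r.lock().unwrap();
        *count -= 1;
        if *count == 0 {
            self.g.release(); // last reader lets writers back in
        }
    }
    fn write_lock(&self) {
        self.g.acquire();
    }
    fn write_unlock(&self) {
        self.g.release();
    }
}

// Single-threaded exercise of the state machine (no contention).
fn demo_two_lock() -> bool {
    let l = TwoLockRwLock::new();
    l.read_lock();
    l.read_lock(); // a second reader is admitted while g is already held
    l.read_unlock();
    l.read_unlock();
    l.write_lock(); // the last reader released g, so this succeeds
    l.write_unlock();
    true
}
```

As jonhoo notes, this is not terribly efficient (every reader serializes on `r`), but it builds a readers-writer lock out of primitives that already exist.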

Also, as an aside, I think we probably want a similar design to std::sync::RwLock's, in that it does not internally keep an Arc but instead expects to be wrapped in an Arc to be used.

@jxs jxs force-pushed the master branch 5 times, most recently from 53db2de to f3008ff on November 3, 2019 21:10
@jxs
Member Author

jxs commented Nov 4, 2019

> @jxs I'm curious — why wrap a std::sync::RwLock rather than build directly on tokio::sync::Semaphore (like tokio::sync::Mutex does)?

Following @carllerche's tip on Discourse to look into Watch and the watcher's Id, I saw its usage of std::sync::Mutex and std::sync::RwLock. So using std::sync::RwLock seemed the simplest approach to match.

Trying to implement the Wikipedia algorithm was not trivial, due to having to keep the second mutex locked across multiple spread-out reads, so I looked into other implementations. The simplest to start from was the one implemented by Qt, which also seems to be the approach followed by Go:
define a max number of reads (permits); when a write happens, a mutex is used to prevent other writes while it gathers all the permits.

I have updated the code with that implementation.
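The Qt/Go-style scheme described above can be sketched with a Condvar-based counting semaphore (std has no stable semaphore type; MAX_READS and all other names here are illustrative). Readers take one permit each; a writer serializes with other writers on a mutex and then waits for every permit. Note that this sketch takes all permits atomically, whereas gathering them one by one, as the real code does, is where the deadlock discussed later can arise.

```rust
use std::sync::{Condvar, Mutex};

const MAX_READS: usize = 10; // illustrative cap on concurrent readers

// Counting semaphore from Mutex + Condvar (std has no stable semaphore).
struct Sem {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Sem {
    fn new(n: usize) -> Self {
        Sem { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self, n: usize) {
        let mut p = self.permits.lock().unwrap();
        while *p < n {
            p = self.cv.wait(p).unwrap(); // loop handles spurious wakeups
        }
        *p -= n;
    }
    fn release(&self, n: usize) {
        *self.permits.lock().unwrap() += n;
        self.cv.notify_all();
    }
}

// Each reader takes one permit; a writer serializes with other writers on
// `w`, then waits until it holds every permit (i.e. all readers have left).
struct PermitRwLock {
    sem: Sem,
    w: Mutex<()>,
}

impl PermitRwLock {
    fn new() -> Self {
        PermitRwLock { sem: Sem::new(MAX_READS), w: Mutex::new(()) }
    }
    fn read_lock(&self) {
        self.sem.acquire(1);
    }
    fn read_unlock(&self) {
        self.sem.release(1);
    }
    fn write_lock(&self) {
        let _w = self.w.lock().unwrap(); // exclude other writers meanwhile
        self.sem.acquire(MAX_READS); // drain all permits: wait out readers
    }
    fn write_unlock(&self) {
        self.sem.release(MAX_READS);
    }
}

// Single-threaded exercise: readers, then a writer, then a reader again.
fn demo_permit() -> bool {
    let l = PermitRwLock::new();
    l.read_lock();
    l.read_lock();
    l.read_unlock();
    l.read_unlock();
    l.write_lock(); // all MAX_READS permits are free, so the writer proceeds
    l.write_unlock();
    l.read_lock();
    l.read_unlock();
    true
}
```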

tokio/src/sync/rwlock.rs (outdated review thread, resolved)
tokio/tests/sync_rwlock.rs (3 outdated review threads, resolved)
@jonhoo
Contributor

jonhoo commented Nov 6, 2019

This is looking really good now! Now we just need to iron out the tests.

@jonhoo
Contributor

jonhoo commented Nov 7, 2019

@jxs It'd be better if you avoided force-pushing to this branch now — GitHub isn't great about tracking reviews across force pushes. We can always rebase at the end!

@jonhoo
Contributor

jonhoo commented Nov 7, 2019

@asomers Ah, so, I agree completely that we want the deterministic polling tests. I think we also want the racy version as a loom test so that we ensure we actually test that all the interleavings are right.

@asomers
Contributor

asomers commented Nov 7, 2019

> @asomers Ah, so, I agree completely that we want the deterministic polling tests. I think we also want the racy version as a loom test so that we ensure we actually test that all the interleavings are right.

Yeah. The loom tests will help to detect races we don't know about, and the deterministic tests will cover races that we already know about.

update read_write_contested loom test
tokio/tests/sync_rwlock.rs (outdated review thread, resolved)
@jxs
Member Author

jxs commented Nov 18, 2019

> @jxs Looks like GitHub is hiding some of @asomers' comments above. Make sure you unhide them in case you missed them :) Any idea why the tests fail?

Tests were failing due to the renaming of tokio_threadpool and the new location of poll_fn; I have updated since!
I think I have addressed all of @asomers' comments. Am I missing any?

@jonhoo
Contributor

jonhoo commented Nov 18, 2019

If you look carefully above, you'll see a little box that says:

5 hidden conversations
Load more…

If you click it, you'll see five unresolved comments from @asomers. If you have actually addressed them, then just mark them as resolved :)

# Conflicts:
#	tokio/src/sync/mod.rs
#	tokio/src/sync/tests/mod.rs
@skeet70

skeet70 commented Dec 6, 2019

Is there anything left to do here that I could help out with? I see that builds failed but I'm not able to see what failed in them. Are there any outstanding review comments that I could address to help get it moved forward?

@jxs
Member Author

jxs commented Dec 6, 2019

Builds are failing due to the loom tests timing out, I think; apart from that, I think it's ready to be merged. cc @carllerche @jonhoo

@jonhoo
Contributor

jonhoo commented Dec 10, 2019

Yup, I think this is just missing a final review from @carllerche

pub async fn write(&self) -> RwLockWriteGuard<'_, T> {
    let _lock = self.w.lock().await;

    let mut permits = vec![];
Contributor


My understanding is that this Vec will eventually contain MAX_READS elements, so it should probably be pre-allocated with that capacity.
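The suggestion above amounts to a one-line change; a minimal sketch, where MAX_READS and the element type are placeholders for the crate's actual constant and permit type:

```rust
const MAX_READS: usize = 10; // placeholder for the crate's actual constant

// Reserving capacity up front means the Vec never reallocates while the
// writer loop pushes its MAX_READS permits.
fn preallocated_permits() -> Vec<usize> {
    Vec::with_capacity(MAX_READS)
}
```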

Member

@carllerche carllerche left a comment


First, sorry for the HUGE delay in reviewing this. It does look like you got some great reviews from @jonhoo and @asomers (thank you both). I think we are almost there. I made a note inline where I think there could be a deadlock. Unfortunately, I think it requires fixing the semaphore.

@carllerche
Member

I merged master into this PR. I will investigate changing the semaphore shortly.

@carllerche
Member

In preparation for the semaphore changes landing, could you add a loom test that checks two concurrent threads attempting to acquire the write lock? This should deadlock.

@carllerche carllerche mentioned this pull request Dec 21, 2019
tokio/src/sync/rwlock.rs (outdated review thread, resolved)
Member

@carllerche carllerche left a comment


Yay 👍 thanks for sticking w/ this!

@carllerche carllerche merged commit 32e15b3 into tokio-rs:master Jan 4, 2020
@Sherlock-Holo

I notice there are no try_read or try_write methods on this RwLock; the docs don't show any such methods.

If I want to try_read or try_write, how do I do that?
