Add a fiber-affine read write mutex implementation #1287

Merged
merged 21 commits into vibe-d:master from mfrischknecht:rwmutex on Oct 19, 2015

Conversation

3 participants
@mfrischknecht
Contributor

mfrischknecht commented Oct 8, 2015

cf. this forum thread.

I figured it would be easier to discuss the last few changes to the code in a pull request itself rather than the forum.
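
For illustration, a minimal usage sketch of the proposed read/write mutex. This is a hypothetical example that assumes the class mirrors the reader/writer lock proxies of core.sync.rwmutex.ReadWriteMutex (as its doc comment suggests); the exact API should be checked against the merged code.

import vibe.core.sync : TaskReadWriteMutex;

__gshared TaskReadWriteMutex s_rwMutex;
__gshared int s_sharedValue;

shared static this() { s_rwMutex = new TaskReadWriteMutex; }

void readerTask()
{
    s_rwMutex.reader.lock();                 // shared lock: many reader tasks at once
    scope (exit) s_rwMutex.reader.unlock();
    auto v = s_sharedValue;                  // read-only access to the shared state
}

void writerTask()
{
    s_rwMutex.writer.lock();                 // exclusive lock: blocks readers and writers
    scope (exit) s_rwMutex.writer.unlock();
    s_sharedValue++;                         // mutate the shared state
}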

final class Mutex(LockingIntent INTENT): core.sync.mutex.Mutex, Lockable
{
    /** Try to lock the mutex. cf. $(D core.sync.mutex.Mutex) */
    override bool tryLock() { return m_state.tryLock!INTENT(); }

@mfrischknecht

mfrischknecht Oct 8, 2015

Contributor

Using a core.sync.mutex.Mutex (as suggested) means that the non-interruptible Mutex implementation can no longer be nothrow, either.
That doesn't seem to be a real downside, though, as I can't see much sense in using a TaskReadWriteMutex as an object monitor anyway...

@s-ludwig

s-ludwig Oct 8, 2015

Member

That, plus the fact that the initial reason for building the uninterruptible mutex implementations was that the one in core.sync.mutex was supposed to be made nothrow, but doing so broke the ones in vibe.d. It was postponed for that reason, but the switch is planned to be made soon, now that the vibe.d classes have been made forward-compatible.

@mfrischknecht

mfrischknecht Oct 8, 2015

Contributor

In that case, I'd propose delaying the nothrow attribute until the switch is made and re-adding it in TaskReadWriteMutex afterwards (thanks to inference, only the definitions of TaskReadWriteMutex.Mutex.tryLock(), lock() and unlock() should be affected).
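
A small, generic sketch of the inference being relied on here (hypothetical code, not from this PR): the compiler infers attributes such as nothrow for these functions, so the attribute only needs to be spelled out on the explicit entry points.

struct Wrapper(T)
{
    T value;
    auto get() { return value; }  // no "nothrow" spelled out; the compiler infers it
                                  // because the body cannot throw
}

void main() nothrow
{
    auto w = Wrapper!int(42);
    auto x = w.get();             // callable from a nothrow function thanks to inference
}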

*
* cf. $(D core.sync.mutex.ReadWriteMutex)
*/
class TaskReadWriteMutex

@mfrischknecht

mfrischknecht Oct 11, 2015

Contributor

As I already noted, I had similar issues creating a global shared TaskMutex instance in the same fashion as in my previous tests for the read-write version, so I decided to just revert to the basic non-template version for the two mutex classes.

If the missing shared overloads are an issue, I'd propose solving that in a separate PR for all mutex classes in one go.

@s-ludwig

s-ludwig Oct 11, 2015

Member

Sounds good!

version(RWMutexPrint)
    printInfo!("unlock",INTENT)();
debug assert(activeCounter!INTENT > 0);

@Geod24

Geod24 Oct 12, 2015

Contributor

debug assert is redundant here; just use assert.

@s-ludwig

s-ludwig Oct 12, 2015

Member

Not completely true: assert is still active when neither -debug nor -release is used, but debug statements are only included if -debug is given. I usually use debug assert in performance-critical places, so that it's possible to build a fast "release" build without resorting to the unsafe -release switch.
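
To illustrate the difference with a throwaway example (hypothetical code, not from this PR):

struct Counter
{
    int value;

    void decrement()
    {
        assert(value > 0);           // active in default and -debug builds; removed only by -release
        debug assert(value < 1000);  // the whole statement is only compiled in with -debug,
                                     // so a plain "fast" build skips this extra check
        value--;
    }
}

void main()
{
    auto c = Counter(3);
    c.decrement();
}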

@Geod24

Geod24 Oct 12, 2015

Contributor

Hmm, right, I forgot that default builds are not actually debug builds ;)
It looks like the extra check won't hurt here.

Manuel Frischknecht added some commits Oct 12, 2015

Manuel Frischknecht
Resorted to a "poor man's join" for now.
Tried multiple Task joining methods but didn't have much luck so far.
synchronized(s_taskMutex) {
    do s_taskCondition.wait();
    while(s_runningTasks > 0);
}

@mfrischknecht

mfrischknecht Oct 17, 2015

Contributor

The example now uses a TaskMutex for joining. Thanks for your advice and patience!

@s-ludwig

s-ludwig Oct 19, 2015

Member

Please, no need to worry about my patience; after all, this has brought a couple of important issues to light, and it's never wrong to take a detailed look at these places! I'll do a final pass over the changes.

@s-ludwig

s-ludwig Oct 19, 2015

Member

Theoretically, in the current version, the worker tasks could have finished before the wait loop at the bottom is reached, thus causing an infinite wait on s_taskCondition. This will of course never happen in this example, but maybe it's better to just initialize s_runningTasks with workerThreadCount before starting up the tasks. Damnit, lazy worker thread construction looks like it gets in the way of this and workerThreadCount would just return 0.

I'm leaning towards letting workerThreadCount() simply call setupWorkerThreads(), as 0 is certainly not the expected answer in most practical situations.

@mfrischknecht

mfrischknecht Oct 19, 2015

Contributor

In the meantime, to ensure the correctness of this example, I could just extend the synchronized block so that it includes the runWorkerTaskDist() call. It's perhaps not the most readable option, but it would guarantee that the mutex stays locked until s_taskCondition.wait(); is performed. This means that the started tasks would block (yield) on the increment until that is the case.
The only assumption left would then be that runWorkerTaskDist() does start more than zero threads (which seems sensible ;-) ).

@s-ludwig

s-ludwig Oct 19, 2015

Member

I've committed the change to workerThreadCount now and also updated the docs to mention that runWorkerTaskDist will start exactly workerThreadCount tasks; workerThreadCount also has an out contract guaranteeing that it will always be > 0.

Extending the synchronized block is also an interesting solution, but I fear that most people looking at the example will have trouble understanding the implications. With the workerThreadCount change done, my vote goes for setting up the counter up-front.
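
For reference, an out contract of that kind looks roughly like this (a sketch only; the real workerThreadCount lives in vibe.core.core, and the globals here are hypothetical stand-ins so the snippet is self-contained):

__gshared Object[] s_workerThreads;
void setupWorkerThreads() { if (s_workerThreads.length == 0) s_workerThreads ~= new Object; }

// Lazily set up the workers and guarantee a positive result via an out contract:
size_t workerThreadCount()
out (count) { assert(count > 0, "no worker threads available"); }
body
{
    setupWorkerThreads();
    return s_workerThreads.length;
}

void main() { assert(workerThreadCount() > 0); }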

s-ludwig added a commit that referenced this pull request Oct 17, 2015

Add an example to `TaskCondition` and improve docs. See #1287.
Also mentions that using interruptible mutex types is not safe with conditions (#1292).
exitEventLoop(true);
//Start multiple tasks performing requests on "http://localhost:8080/" concurrently
synchronized(s_taskMutex) s_runningTasks = workerThreadCount();

@mfrischknecht

mfrischknecht Oct 19, 2015

Contributor

I added the counter initialization with workerThreadCount() and rebased onto the upstream master branch. Interestingly, workerThreadCount() even worked before the rebase (at least on my system).
I'm quite sure the synchronized(s_taskMutex) is not technically necessary, but since this line sets up the condition variable, I figured it doesn't hurt to have it there.
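
Condensed, the joining pattern the example converges on looks roughly like this (a sketch based on the discussion above, assuming TaskMutex/TaskCondition from vibe.core.sync and runWorkerTaskDist/workerThreadCount from vibe.core.core; the merged example may differ in detail):

import vibe.core.core : runWorkerTaskDist, workerThreadCount;
import vibe.core.sync : TaskMutex, TaskCondition;

__gshared TaskMutex s_taskMutex;
__gshared TaskCondition s_taskCondition;
__gshared size_t s_runningTasks;

shared static this()
{
    s_taskMutex = new TaskMutex;
    s_taskCondition = new TaskCondition(s_taskMutex);
}

void workerTask()
{
    // ... the actual work of the example goes here ...
    synchronized (s_taskMutex) {
        if (--s_runningTasks == 0)
            s_taskCondition.notifyAll();   // wake the waiting task once all workers are done
    }
}

void runExample()
{
    // Initialize the counter before starting any task, so a fast worker cannot
    // finish (and notify) before the wait loop below is entered.
    synchronized (s_taskMutex) s_runningTasks = workerThreadCount();

    runWorkerTaskDist(&workerTask);        // starts exactly workerThreadCount tasks

    // "Poor man's join": wait until every worker has signalled completion.
    synchronized (s_taskMutex) {
        while (s_runningTasks > 0)
            s_taskCondition.wait();
    }
}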

@s-ludwig

s-ludwig Oct 19, 2015

Member

Thanks! I'll merge now.

Yeah, I'd consider the synchronized a display of good practice for the sake of a generic example.

s-ludwig added a commit that referenced this pull request Oct 19, 2015

Merge pull request #1287 from mfrischknecht/rwmutex
Add a fiber-affine read write mutex implementation

@s-ludwig s-ludwig merged commit 88f73f4 into vibe-d:master Oct 19, 2015

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed