
Lock-free IOQueue #6154

Open · wants to merge 3 commits into master from benaadams:lock-free-IOQueue
benaadams (Contributor) commented Dec 27, 2018

Second half of #4060

The lock contention shows up on my 4-core (8-HT) machine at around 12% of IOQueue.Schedule:

[profiler screenshot]

and is the highest source (33%) of lock contention (1.8% of run) in the traces @sebastienros sent me (from back in Nov) from the 14-core (28 hyper-threads) benchmarking machines:

[profiler screenshot]

Resolves #6153

/cc @davidfowl @pakrym @halter73

davidfowl (Member) commented Dec 28, 2018

cc @stephentoub Can you take a look when you return from vacay?

stephentoub commented Dec 28, 2018

Yup.

benaadams force-pushed the benaadams:lock-free-IOQueue branch from 9314fc6 to bad1591 Jan 11, 2019

halter73 (Member) commented Jan 11, 2019

If the perf improvement is there, even if it takes microbenchmarks to prove it, I think we should take this. It reads nearly as easily as the lock-based version, and seems pretty clearly correct. I would remove the Debug.Asserts before merging though.

benaadams (Contributor) commented Jan 11, 2019

Seems a shame to use a concurrent queue with a lock :)
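For readers outside the thread, the pattern being debated can be sketched roughly like this: work items go into a concurrent queue, and a single compare-and-swap on a "doorbell" flag decides whether a drain pass needs to be scheduled, so Schedule never takes a lock. Below is a minimal, hypothetical sketch in Java (the PR itself is C#, and all names here are illustrative, not the PR's; the real code would hand the drain to the thread pool rather than run it inline):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the lock-free "doorbell" pattern under discussion.
final class LockFreeIoQueue {
    private final ConcurrentLinkedQueue<Runnable> workItems = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean doingWork = new AtomicBoolean(false);

    void schedule(Runnable action) {
        workItems.offer(action);
        // Only schedule a drain if no drain is already pending or running.
        if (doingWork.compareAndSet(false, true)) {
            // In the real code this would queue the drain to the thread pool.
            drain();
        }
    }

    private void drain() {
        while (true) {
            Runnable item;
            while ((item = workItems.poll()) != null) {
                item.run();
            }
            // Release the flag FIRST, then re-check the queue: an enqueue may
            // have raced in between the last poll() and the flag reset.
            doingWork.set(false);
            if (workItems.isEmpty() || !doingWork.compareAndSet(false, true)) {
                return;
            }
        }
    }
}
```

The subtle part is the clear-then-recheck at the end of the drain: without it, an item enqueued between the last dequeue and the flag reset could be stranded with no drain scheduled for it.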

benaadams force-pushed the benaadams:lock-free-IOQueue branch from 3a47920 to 61a2f87 Jan 11, 2019

davidfowl (Member) left a comment

OK I read the code enough and it's green and there are no more explicit barriers 😄

davidfowl (Member) commented Jan 11, 2019

So I ran the PlaintextPlatform benchmark before and after and didn't see a difference in throughput. However that might be on too small of a machine or with too little concurrency.

stephentoub commented Jan 11, 2019

My $.02 would be to only take this if/when you can demonstrate a real win. Even with all of our eyes on this, it's a lot easier to prove the correctness of the lock-based code.

Daniel-Svensson left a comment

Looks good apart from the place where I recommend double-checking the memory ordering. I haven't looked at the implementation of IsEmpty (whether it takes a lock or is just a volatile read), which might be required to understand whether reordering can happen or not.

The code definitely looks to work fine on x86 platforms at least.
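To make the ordering concern concrete, here is a hypothetical illustration (Java, using AtomicBoolean for volatile semantics; names are illustrative, not the PR's). The consumer must publish "not draining" before it re-checks emptiness; if those two steps could be reordered, a racing producer that enqueues and then still observes the flag as set would skip scheduling a drain, stranding its item:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical illustration of the reordering hazard raised above.
// Dangerous interleaving if the consumer's two steps were reordered:
//   consumer: sees queue empty          (re-check ran first)
//   producer: queue.offer(item)
//   producer: reads doingWork == true   -> does not schedule a drain
//   consumer: doingWork = false         -> item is stranded forever
// With volatile ordering and clear-then-recheck, either the consumer sees
// the new item or the producer sees doingWork == false and schedules.
class DrainEpilogue {
    final ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
    final AtomicBoolean doingWork = new AtomicBoolean(true);

    // Returns true if the drain may safely stop; false means work remains.
    boolean tryFinish() {
        doingWork.set(false);   // step 1: publish that no drain is running
        return queue.isEmpty(); // step 2: only then re-check for stragglers
    }
}
```

This is why the volatility (or lock) inside IsEmpty matters: the emptiness read must not float above the flag write.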

benaadams (Contributor) commented Jan 11, 2019

My $.02 would be to only take this if/when you can demonstrate a real win.

The lock contention shows up on my 4-core (8-HT) machine at around 12% of IOQueue.Schedule:

[profiler screenshot]

and is the highest source (33%) of lock contention (1.8% of run) in the traces @sebastienros sent me (from back in Nov) from the 14-core (28 hyper-threads) benchmarking machines:

[profiler screenshot]

System.IO.Pipelines is a bit lock-heavy too, making up the majority of the rest; however, what it does inside its locks is more complicated, so it's not as straightforward to remove (I've attempted it before 😄)

benaadams (Contributor) commented Jan 11, 2019

Of course, not having a lock here may just add more contention to ThreadPool.QUWI (QueueUserWorkItem)...

Eilon added the area-servers label Jan 18, 2019

benaadams force-pushed the benaadams:lock-free-IOQueue branch from c6160ee to 89b82a5 Jan 19, 2019

benaadams requested review from jkotalik and Tratcher as code owners Jan 19, 2019

benaadams force-pushed the benaadams:lock-free-IOQueue branch 5 times, most recently from d176ed0 to 24d798e Jan 19, 2019

benaadams force-pushed the benaadams:lock-free-IOQueue branch from 24d798e to 9f34f84 Jan 20, 2019

benaadams force-pushed the benaadams:lock-free-IOQueue branch 2 times, most recently from 143c98d to deacf7c Jan 20, 2019
