Improved ParkTakeStrategy #227

Merged
merged 1 commit into JCTools:master on Apr 6, 2019

Conversation

@franz1981
Contributor

franz1981 commented Jan 13, 2019

This PR includes two improvements for the park take strategies:

  • single-consumer: uses a field updater to avoid allocating a separate AtomicReference instance
  • multi-consumer: uses intrinsic locks to avoid garbage under contention, and no longer takes the lock while checking for waiting threads

(Illustrative sketches of both ideas appear below and after the benchmark numbers.)
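As a rough illustration of the single-consumer change (a hypothetical sketch, not the actual ScParkTakeStrategy code; class and method names are made up), an AtomicReferenceFieldUpdater over a plain volatile field provides the same atomic getAndSet as an AtomicReference without allocating an extra object per strategy instance:

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.concurrent.locks.LockSupport;

// Hypothetical sketch of a single-consumer park/take strategy.
public final class ScParkSketch {

    // The updater gives atomic getAndSet over the plain volatile field below,
    // so no separate AtomicReference object has to be allocated.
    private static final AtomicReferenceFieldUpdater<ScParkSketch, Thread> WAITER =
        AtomicReferenceFieldUpdater.newUpdater(ScParkSketch.class, Thread.class, "waiter");

    // Published by the single consumer before it parks.
    private volatile Thread waiter;

    // Consumer side: called when the queue looks empty.
    public void waitForElement() {
        waiter = Thread.currentThread();
        // Real code would re-check the queue here (and loop around park())
        // so that an element offered concurrently is not slept through.
        LockSupport.park();
    }

    // Producer side: called after offering an element.
    public void signal() {
        Thread t = WAITER.getAndSet(this, null); // claim the waiter at most once
        if (t != null) {
            LockSupport.unpark(t);
        }
    }
}
```

On the producer's path this is a single volatile read-modify-write plus, only when a consumer is actually parked, an unpark; no lock and no allocation either way.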
@franz1981

Contributor Author

franz1981 commented Jan 13, 2019

Using my tiny but quiet box, I got these numbers with McParkTakeStrategy:

MASTER (with ReentrantLock):
Benchmark                 (burstSize)  (consumerCount)  (qCapacity)  (qType)  (warmup)  Mode  Cnt      Score      Error  Units
QueueBurstCost.burstCost            1                1       132000      608      true  avgt   20    196.313 ±    4.873  ns/op
QueueBurstCost.burstCost          100                1       132000      608      true  avgt   20  10302.968 ± 1110.205  ns/op


MASTER with intrinsic lock:
Benchmark                 (burstSize)  (consumerCount)  (qCapacity)  (qType)  (warmup)  Mode  Cnt     Score     Error  Units
QueueBurstCost.burstCost            1                1       132000      608      true  avgt   20   196.327 ±   6.058  ns/op
QueueBurstCost.burstCost          100                1       132000      608      true  avgt   20  9722.554 ± 148.198  ns/op


PR with intrinsic lock:

Benchmark                 (burstSize)  (consumerCount)  (qCapacity)  (qType)  (warmup)  Mode  Cnt     Score     Error  Units
QueueBurstCost.burstCost            1                1       132000      608      true  avgt   20   195.678 ±   6.314  ns/op
QueueBurstCost.burstCost          100                1       132000      608      true  avgt   20  5574.261 ± 146.196  ns/op

PR with ReentrantLock:

Benchmark                 (burstSize)  (consumerCount)  (qCapacity)  (qType)  (warmup)  Mode  Cnt     Score     Error  Units
QueueBurstCost.burstCost            1                1       132000      608      true  avgt   20   192.659 ±   6.307  ns/op
QueueBurstCost.burstCost          100                1       132000      608      true  avgt   20  5519.689 ± 142.494  ns/op

It seems that for single-producer/single-consumer usage of this strategy the improvement is dramatic:

10302.968 ± 1110.205  ns/op VS 5574.261 ± 146.196  ns/op

The weird thing (I probably need to check the generated assembly) is that the original ScParkTakeStrategy performs worse than McParkTakeStrategy for burstSize=100:

Benchmark                 (burstSize)  (consumerCount)  (qCapacity)  (qType)  (warmup)  Mode  Cnt      Score      Error  Units
QueueBurstCost.burstCost            1                1       132000      608      true  avgt  100    197.587 ±    3.148  ns/op
QueueBurstCost.burstCost          100                1       132000      608      true  avgt  100   7724.747 ±  223.752  ns/op

FYI, the newborn MPSC blocking array queue:

Benchmark                 (burstSize)  (consumerCount)  (qCapacity)  (qType)  (warmup)  Mode  Cnt      Score      Error  Units
QueueBurstCost.burstCost            1                1       132000       72      true  avgt  100    197.071 ±    3.232  ns/op
QueueBurstCost.burstCost          100                1       132000       72      true  avgt  100   5637.344 ±  259.194  ns/op

I was expecting even better behaviour, to be honest, but I probably need to play with the producer count, consumer count, DELAY_CONSUMER and DELAY_PRODUCER.
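To make it clearer where the burstSize=100 improvement in the PR plausibly comes from, here is a rough, hypothetical sketch of the multi-consumer idea (not the actual McParkTakeStrategy source; names are illustrative): waiting consumers use a synchronized block, which creates no queue-node garbage under contention the way a contended ReentrantLock can, and producers check a volatile waiters count before ever entering the monitor:

```java
// Hypothetical sketch of a multi-consumer park/take strategy.
public final class McParkSketch {

    private final Object lock = new Object();

    // Number of consumers currently blocked in wait(). It is only written
    // while holding the lock, but it is volatile so producers can read it
    // without locking.
    private volatile int waiters;

    // Consumer side: called when the queue looks empty.
    public void waitForElement() throws InterruptedException {
        synchronized (lock) {
            waiters++;
            try {
                // Real code would re-check the queue in a loop here to cope
                // with spurious wake-ups and elements offered before wait().
                lock.wait();
            } finally {
                waiters--;
            }
        }
    }

    // Producer side: called after offering an element.
    public void signal() {
        if (waiters == 0) {
            return; // fast path: nobody is parked, the monitor is never touched
        }
        synchronized (lock) {
            lock.notify();
        }
    }
}
```

In the single-producer/single-consumer burst case the consumer is rarely parked, so signal() usually returns after the volatile read alone; that lock-free check on the signalling path is presumably where most of the drop from ~10.3 µs to ~5.6 µs per 100-element burst comes from (both PR variants, ReentrantLock and intrinsic, land around 5.5 µs, which points at the unlocked waiters check rather than the lock type).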

@coveralls


coveralls commented Jan 13, 2019

Pull Request Test Coverage Report for Build 456

  • 18 of 18 (100.0%) changed or added relevant lines in 2 files are covered.
  • 23 unchanged lines in 3 files lost coverage.
  • Overall coverage decreased (-0.2%) to 69.175%

Files with Coverage Reduction                                                 New Missed Lines       %
jctools-core/src/main/java/org/jctools/maps/NonBlockingHashMap.java                         4  79.21%
jctools-core/src/main/java/org/jctools/maps/NonBlockingHashMapLong.java                     5  79.15%
jctools-core/src/main/java/org/jctools/maps/NonBlockingIdentityHashMap.java                14  75.06%

Totals
Change from base Build 455: -0.2%
Covered Lines: 5810
Relevant Lines: 8399

💛 - Coveralls

@nitsanw merged commit 49e2355 into JCTools:master on Apr 6, 2019

1 of 2 checks passed

  • coverage/coveralls: Coverage decreased (-0.2%) to 69.175%
  • continuous-integration/travis-ci/pr: The Travis CI build passed