Is anyone interested in this result? #55
Comments
Actually, that test is a little bit of an inside joke on my part, demonstrating how you can lie with a benchmark. All it does is test how fast the sequencer can signal the consumer, and only on every 10th update; it doesn't do any actual useful work. How does the modest-lock fare with the OnePublisherToOneProcessorUniCastThroughputTest? Also, do you have a link to the code? If all I wanted to do was improve that test, I could just have one thread polling a sequence while another thread updates it in batches of 10. Existing Disruptor: Simple polling code:
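To illustrate the point being made here, below is a minimal sketch (not code from the Disruptor itself; the class name and iteration count are my own) of the trivial two-thread setup described: one thread advancing a sequence in batches of 10 while another simply busy-polls it. This is roughly all the work the RawBatchThroughputTest actually measures.

```java
import java.util.concurrent.atomic.AtomicLong;

public class PollingSketch {
    // Run the trivial two-thread test once: the producer advances the
    // sequence in batches of 10 while the consumer busy-polls it.
    // Returns the final sequence value the consumer observed.
    static long runOnce(long iterations) throws InterruptedException {
        AtomicLong sequence = new AtomicLong(-1);

        Thread consumer = new Thread(() -> {
            long expected = iterations - 1;
            while (sequence.get() < expected) {
                // busy-spin until the producer catches up
            }
        });
        consumer.start();

        long start = System.nanoTime();
        for (long i = 0; i < iterations; i += 10) {
            sequence.lazySet(i + 9); // publish a whole batch of 10 at once
        }
        consumer.join();
        long elapsedNanos = Math.max(System.nanoTime() - start, 1);
        System.out.printf("%,d ops/sec%n", iterations * 1_000_000_000L / elapsedNanos);
        return sequence.get();
    }

    public static void main(String[] args) throws InterruptedException {
        runOnce(10_000_000L);
    }
}
```

There is no ring buffer, no event handling, and no memory traffic beyond one counter, which is why a high number here says little about real workloads.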
OK, let's get back to the 10-to-1 pattern, using `if ((counter & 1) == 1) Thread.yield();`. These results are from OnePublisherToOneProcessorRawBatchThroughputTest after applying the modest-lock, still with the write-10:read-1 pattern: I changed SingleProducerSequencer's next-reserve operation to use Thread.yield instead of parkNanos, and applied the modest-lock to the next-reserve operation. BTW, a strange feeling: indeed, we are just guessing at what the OS scheduler will do.
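The back-off pattern described above can be sketched in isolation; this is my reconstruction of the idea (the class and method names are hypothetical, not Disruptor API): spin on a sequence, yielding on every second iteration instead of calling LockSupport.parkNanos(1L).

```java
import java.util.concurrent.atomic.AtomicLong;

public class YieldingSpin {
    // Spin until `gate` reaches `required`, yielding on every second
    // iteration -- the if ((counter & 1) == 1) Thread.yield(); pattern
    // from the comment above -- instead of parking with parkNanos.
    static long waitFor(AtomicLong gate, long required) {
        long counter = 0;
        long value;
        while ((value = gate.get()) < required) {
            if ((counter & 1) == 1) {
                Thread.yield(); // hand the core back to the scheduler every other spin
            }
            counter++;
        }
        return value;
    }

    public static void main(String[] args) {
        AtomicLong gate = new AtomicLong(0);
        // a second thread releases the gate after a short delay
        new Thread(() -> {
            try { Thread.sleep(10); } catch (InterruptedException e) { }
            gate.set(42);
        }).start();
        System.out.println(waitFor(gate, 42)); // prints 42
    }
}
```

Whether yield beats parkNanos depends entirely on the OS scheduler and core count, which is the "guessing" complaint above.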
I created a gist here: https://gist.github.com/qinxian/5771879
Are you running with HyperThreading enabled? |
NO! |
BTW, I tried the JDK 8 @Contended annotation on the field.
The difference I see with ModestLock is not as marked as in your results, and the difference on the OnePublisherToOneProcessorUniCastThroughputTest is lower than the noise. I think these small optimisations will vary between hardware platforms. One of the reasons we made the WaitStrategy pluggable is to allow these types of optimisations. If it speeds up your system end to end, then go for it, but don't base your decision on the OnePublisherToOneProcessorRawBatchThroughputTest, as it doesn't test anything useful; base it on your own macro-benchmarks. I've also had a go with @Contended. It didn't make a massive difference, but it should be a little bit quicker, as it would remove one indirection. Unfortunately it will be a while before Java 8 is the standard. I might do a Java 8 specific version if there is enough interest.
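For context on what @Contended buys you: without it, false sharing is avoided by hand-padding hot fields onto their own cache line, in the style the Disruptor uses; @Contended (sun.misc.Contended on JDK 8, which requires -XX:-RestrictContended for application classes) asks the JVM to insert that padding itself. A minimal hand-padded counter (this class is my own illustration, not Disruptor code) might look like:

```java
// Manual cache-line padding around a hot field. The unused longs keep
// `value` from sharing a 64-byte cache line with unrelated fields;
// @Contended would let the JVM insert this padding automatically.
public class PaddedCounter {
    long p1, p2, p3, p4, p5, p6, p7;        // left-hand padding
    volatile long value = 0L;               // the contended field
    long p9, p10, p11, p12, p13, p14, p15;  // right-hand padding

    public void increment() { value = value + 1; } // illustration only: not atomic
    public long get() { return value; }
}
```

One caveat: HotSpot may reorder fields within a class, so padding in the same class is not guaranteed to land where you expect, which is one reason the Disruptor spreads its padding across a class hierarchy instead.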
I expected the P10-C10 pattern to show a 2.x improvement over the P10-C1 pattern, based on your results. Indeed, I always use JDK 8 with Windows 8 on AMD hardware. Of course, these results are from only one test case; whether that is useful or useless depends. BTW, as with my earlier "guess" remark, there is some sadness about the kernel: someone like me works only at a high level, with no way (and perhaps no ability) to reach the lower level, and maybe the kernel cannot help either. A real world! BTW, do you plan to refactor WaitStrategy into something more generalized? I have done some work on that.
I'm going to close this, as it's not really an issue but a discussion, which can happen on the Google Groups page.
Here are some results for OnePublisherToOneProcessorRawBatchThroughputTest:
Run 0, Disruptor=806,451,612 ops/sec
Run 1, Disruptor=821,018,062 ops/sec
Run 2, Disruptor=1,122,964,626 ops/sec
Run 3, Disruptor=1,164,144,353 ops/sec
Run 4, Disruptor=1,133,144,475 ops/sec
Run 5, Disruptor=1,186,239,620 ops/sec
Run 6, Disruptor=1,175,088,131 ops/sec
Run 7, Disruptor=1,143,510,577 ops/sec
Run 8, Disruptor=1,174,398,120 ops/sec
Run 9, Disruptor=1,153,402,537 ops/sec
Run 10, Disruptor=1,154,068,090 ops/sec
Run 11, Disruptor=1,175,088,131 ops/sec
Run 12, Disruptor=1,133,144,475 ops/sec
Run 13, Disruptor=1,049,868,766 ops/sec
Run 14, Disruptor=1,094,690,749 ops/sec
Run 15, Disruptor=1,164,144,353 ops/sec
Run 16, Disruptor=1,186,239,620 ops/sec
Run 17, Disruptor=1,219,512,195 ops/sec
Run 18, Disruptor=1,207,729,468 ops/sec
Run 19, Disruptor=1,196,888,090 ops/sec
From memory, the best result I ever saw was about 1,500M ops/sec, just from some experimenting.
// my busy-spin:
Run 0, Disruptor=1,583,531,274 ops/sec
Run 1, Disruptor=1,454,545,454 ops/sec
Run 2, Disruptor=1,803,426,510 ops/sec
Run 3, Disruptor=1,728,608,470 ops/sec
Run 4, Disruptor=1,777,777,777 ops/sec
Run 5, Disruptor=1,728,608,470 ops/sec
Run 6, Disruptor=1,908,396,946 ops/sec
Run 7, Disruptor=1,754,385,964 ops/sec
Run 8, Disruptor=1,937,984,496 ops/sec
Run 9, Disruptor=1,706,484,641 ops/sec
Run 10, Disruptor=1,801,801,801 ops/sec
Run 11, Disruptor=1,776,198,934 ops/sec
Run 12, Disruptor=1,855,287,569 ops/sec
Run 13, Disruptor=1,828,153,564 ops/sec
Run 14, Disruptor=1,471,670,345 ops/sec
Run 15, Disruptor=1,801,801,801 ops/sec
Run 16, Disruptor=1,752,848,378 ops/sec
Run 17, Disruptor=1,910,219,675 ops/sec
Run 18, Disruptor=1,828,153,564 ops/sec
Run 19, Disruptor=1,855,287,569 ops/sec
Haha, this is my new modest-lock, applied at both ends:
Run 0, Concurrentor=1,644,736,842 ops/sec
Run 1, Concurrentor=1,640,689,089 ops/sec
Run 2, Concurrentor=1,968,503,937 ops/sec
Run 3, Concurrentor=1,968,503,937 ops/sec
Run 4, Concurrentor=1,968,503,937 ops/sec
Run 5, Concurrentor=1,968,503,937 ops/sec
Run 6, Concurrentor=1,998,001,998 ops/sec
Run 7, Concurrentor=1,968,503,937 ops/sec
Run 8, Concurrentor=1,968,503,937 ops/sec
Run 9, Concurrentor=1,968,503,937 ops/sec
Run 10, Concurrentor=2,000,000,000 ops/sec
Run 11, Concurrentor=1,968,503,937 ops/sec
Run 12, Concurrentor=1,968,503,937 ops/sec
Run 13, Concurrentor=1,968,503,937 ops/sec
Run 14, Concurrentor=1,968,503,937 ops/sec
Run 15, Concurrentor=2,000,000,000 ops/sec
Run 16, Concurrentor=2,000,000,000 ops/sec
Run 17, Concurrentor=2,000,000,000 ops/sec
Run 18, Concurrentor=1,968,503,937 ops/sec
Run 19, Concurrentor=1,968,503,937 ops/sec
So, is this of any interest?