
Some feedback and performance tests with CPU heavy pipeline #2

Open
samuell opened this issue May 12, 2017 · 3 comments


samuell commented May 12, 2017

Hi @textnode, and thanks so much for sharing this code!

I also got really interested in the disruptor pattern after reading about it somewhere, and was happy to find this Go implementation.

I've been thinking it might speed up pipelines created with my experimental flowbase and scipipe libraries (perhaps mostly relevant to flowbase), so I set out to experiment a little with a slightly modified version of gringo; the results are available here.

It is a pretty CPU-heavy pipeline, but I still manage to get speedups, although mostly for 1 or 2 CPUs, as can be seen in the example output (the times vary a bit, so one should really do some averaging).

If you are interested in having a look at whether I've made any silly mistakes, my slightly adapted version of gringo is available on these lines in the disruptor version of the pipeline.
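For readers who want a feel for the shape of such a test: below is a minimal sketch of a single-producer/single-consumer throughput measurement using a plain buffered channel. It is illustrative only, not the linked benchmark — the disruptor-style variant would swap the channel for gringo's ring buffer, and all names here are my own.

```go
package main

import (
	"fmt"
	"time"
)

// pump sends n ints from a producer goroutine to the calling
// goroutine through a buffered channel and returns their sum.
// The channel stands in for the queue under test; a disruptor-style
// variant would replace it with a ring buffer.
func pump(n int) int {
	ch := make(chan int, 1024)
	go func() {
		for i := 0; i < n; i++ {
			ch <- i
		}
		close(ch)
	}()
	sum := 0
	for v := range ch {
		sum += v
	}
	return sum
}

func main() {
	const n = 1000000
	start := time.Now()
	sum := pump(n)
	fmt.Printf("sum=%d elapsed=%s\n", sum, time.Since(start))
}
```

Summing the items on the consumer side keeps the compiler from optimizing the loop away and doubles as a correctness check on the transport.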


samuell commented May 12, 2017

Wait ... I just realized I have some data races there ... will have to dig into that ...
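For anyone following along, the Go race detector (`go run -race` / `go test -race`) is the quickest way to confirm this kind of bug. A minimal illustration (not the pipeline code itself) of the pattern the detector flags, alongside the race-free `sync/atomic` fix:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// racyCount increments a plain int from many goroutines — an
// unsynchronized read-modify-write that the race detector reports
// and that can silently lose updates.
func racyCount(n int) int {
	var counter int
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			counter++ // data race: concurrent unsynchronized writes
			wg.Done()
		}()
	}
	wg.Wait()
	return counter
}

// atomicCount is the race-free version using sync/atomic.
func atomicCount(n int) int {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			atomic.AddInt64(&counter, 1)
			wg.Done()
		}()
	}
	wg.Wait()
	return int(counter)
}

func main() {
	fmt.Println(atomicCount(1000))
}
```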

@MarcMagnin

Hey @samuell,
Looking at your benchmark, it suggests that channels get better when you specify a higher GOMAXPROCS. Any idea why? I was expecting channels to be slower anyway.


samuell commented Jan 31, 2018

Hi @MarcMagnin ,

I'd say these numbers are sometimes hard to get a complete picture of, and they can vary due to various circumstances. You'd also optimally want to do at least three replicates of these kinds of tests, to be sure you're not catching a temporary anomaly.

The only common pattern I have seen in my (many) channel experiments is that running times typically decrease as GOMAXPROCS increases, up to NUMCORES-1, where NUMCORES is the number of virtual cores on the machine (4 on my laptop).

Hope this helps!
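A sweep like the one described above can also be scripted from inside Go by calling `runtime.GOMAXPROCS` directly. A sketch (my own, not the linked benchmark, which may set GOMAXPROCS differently):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// sendRecv pushes n ints through a buffered channel between one
// producer goroutine and the caller, returning how many items were
// received and the elapsed wall-clock time.
func sendRecv(n int) (int, time.Duration) {
	ch := make(chan int, 1024)
	start := time.Now()
	go func() {
		for i := 0; i < n; i++ {
			ch <- i
		}
		close(ch)
	}()
	got := 0
	for range ch {
		got++
	}
	return got, time.Since(start)
}

func main() {
	const n = 1000000
	// Sweep GOMAXPROCS from 1 up to the number of virtual cores.
	// Times vary between runs, so average several replicates.
	for p := 1; p <= runtime.NumCPU(); p++ {
		runtime.GOMAXPROCS(p)
		_, elapsed := sendRecv(n)
		fmt.Printf("GOMAXPROCS=%d: %s\n", p, elapsed)
	}
}
```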
