Parallel execution is slower than sequential execution for trivial tasks. #868

Closed
unboundval opened this Issue Mar 21, 2013 · 3 comments


unboundval commented Mar 21, 2013

Hi!

First of all, I would like to thank you for developing this great tool; it really makes my life much easier.

I encountered a strange problem when using Fabric in parallel mode: for trivial jobs like getting the hostname, sequential execution across multiple hosts is much faster than parallel execution. I didn't find any configuration option in the documentation to speed this up, nor anyone reporting a similar issue online. After reading the code, I found that this is caused by JobQueue.__fill_results() trying to retrieve additional results from the _comms_queue. It is called both inside the main loop that processes jobs and again after the main loop. In version 1.6 the function uses a 1-second timeout, which causes an unnecessarily long delay when executing trivial jobs.

I am wondering whether there is any specific reason to use a 1-second timeout. For my application I need it to be fast, so I plan to change the timeout to 0.01 seconds in the statement
datum = self._comms_queue.get(timeout=0.01)

Will there be any issues? Thank you very much!
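
For illustration only, here is a minimal, simplified sketch of the pattern in question -- this is not Fabric's actual JobQueue code, and the job, queue, and function names are made up -- showing why a long get() timeout on the results queue dominates the runtime of trivially short parallel jobs:

```python
# Simplified illustration (NOT Fabric's JobQueue) of a result-draining loop
# built on Queue.get(timeout=...), as used for trivially short parallel jobs.
import multiprocessing
import queue
import time


def trivial_job(comms_queue):
    # Stand-in for a fast task such as fetching a hostname.
    comms_queue.put("host-result")


def fill_results(comms_queue, expected, timeout):
    """Drain `expected` results, blocking at most `timeout` per get()."""
    results = []
    while len(results) < expected:
        try:
            results.append(comms_queue.get(timeout=timeout))
        except queue.Empty:
            # With timeout=1 every empty poll wastes up to a full second;
            # with timeout=0.01 the wasted wait is ~100x smaller.
            pass
    return results


if __name__ == "__main__":
    comms_queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=trivial_job, args=(comms_queue,))
               for _ in range(5)]

    start = time.time()
    for w in workers:
        w.start()
    results = fill_results(comms_queue, expected=len(workers), timeout=0.01)
    for w in workers:
        w.join()
    print("collected %d results in %.3fs" % (len(results), time.time() - start))
```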

bitprophet commented Mar 21, 2013

If you can look at the Git history for those lines, it might be useful re: finding out why 1s was used as the default. Many other places use a 0.1 or 0.01 (or similar) loop sleep time -- see e.g. fabric/io.py, which has a timer length used for various IO loops. Assuming the 1s wasn't chosen for some other reason, like "anything shorter seems to run into race conditions", I would be in favor of lowering it. Thanks!

unboundval commented Mar 22, 2013

Thank you! I am happy to know it won't have any side effects on the code. Lowering the value significantly improves performance in my application.

bitprophet commented Apr 29, 2013

Closing in favor of #877

bitprophet closed this Apr 29, 2013
