Parallel execution is slower than sequential execution for trivial tasks. #868

unboundval opened this Issue Mar 21, 2013 · 3 comments





First of all I would like to thank you for developing this great tool, it really makes my life much easier.

I encountered a strange problem when using Fabric in parallel mode: for trivial jobs like getting the hostname, sequential execution on multiple hosts is much faster than parallel mode. I didn't find anything in the documentation about configuring Fabric to speed this up, and I couldn't find anyone with similar issues on the Internet. After reading the code, I found that this is caused by JobQueue.__fill_results() trying to retrieve additional results from _comms_queue. It is called both in the main loop that processes jobs and again after the main loop. As of version 1.6, these calls use a timeout of 1 second, which causes an unnecessarily long delay when executing trivial jobs.

I am wondering whether there is any specific reason for the 1-second timeout. My application needs to be fast, so I plan to change the statement to a 0.01-second timeout:
datum = self._comms_queue.get(timeout=0.01)
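To illustrate why the timeout dominates runtime for trivial jobs, here is a minimal sketch (not Fabric's actual code) of the drain pattern described above. The `drain` function and queue setup are hypothetical; the key point is that the final `get()` on an empty queue always blocks for the full timeout before raising `queue.Empty`:

```python
import queue
import time
from multiprocessing import Queue

def drain(comms_queue, timeout):
    """Pull results until the queue stays empty for `timeout` seconds.

    Each get() blocks up to `timeout`, so the last (empty) poll always
    costs the full timeout -- 1s vs 0.01s makes a visible difference
    when the jobs themselves finish almost instantly.
    """
    results = []
    while True:
        try:
            results.append(comms_queue.get(timeout=timeout))
        except queue.Empty:
            break
    return results

if __name__ == "__main__":
    for timeout in (1.0, 0.01):
        q = Queue()
        for i in range(3):
            q.put(i)
        start = time.time()
        drain(q, timeout=timeout)
        print("timeout=%.2f took %.2fs" % (timeout, time.time() - start))
```

With trivial jobs, nearly all of the elapsed time is that final empty poll, which is why lowering the timeout speeds things up so dramatically.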

Will there be any issues? Thank you very much!


If you can look at the Git history for those lines, it might be useful for finding out why 1s was used as the default. Many other situations use a 0.1 or 0.01 (or similar) loop sleep time -- see e.g. fabric/ which has a timer length used for various IO loops. Assuming the 1s wasn't for some other reason like "anything shorter seems to run into race conditions", I would be in favor of lowering it. Thanks!


Thank you! I am happy to know it won't have any side effects on the code. Lowering the value significantly improves the performance of my application.


Closing in favor of #877

@bitprophet bitprophet closed this Apr 29, 2013