ProxyCommand performance issues #420

acdha opened this Issue Oct 15, 2014 · 2 comments



acdha commented Oct 15, 2014

I noticed Python chewing 100% CPU and transferring very slowly while using a host with ProxyCommand set.

Looking at the code shows two performance issues:

  1. It hammers datetime.now() (17% of total time) to get the time in milliseconds, which is a LOT slower than calling time.time():
cadams@ganymede:~ $ python -m timeit -s 'from datetime import datetime' 'datetime.now()'
1000000 loops, best of 3: 1.08 usec per loop
cadams@ganymede:~ $ python -m timeit -s 'from time import time' 'time()'
10000000 loops, best of 3: 0.0801 usec per loop

(To avoid portability issues, from timeit import default_timer selects the best timer function on both Unix and Windows.)
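To make the suggested fix concrete, here is a minimal sketch (not paramiko's actual code) of computing a millisecond elapsed time with timeit.default_timer instead of constructing datetime objects; elapsed_ms is a hypothetical helper name:

```python
from timeit import default_timer  # time.time on Unix, time.clock on Windows (Py2)
import time

def elapsed_ms(start):
    """Milliseconds elapsed since `start`, a value returned by default_timer()."""
    return (default_timer() - start) * 1000.0

start = default_timer()
time.sleep(0.01)
print(elapsed_ms(start))  # roughly 10 ms or more
```

The per-call cost is a single fast C-level timer read rather than allocating and subtracting datetime objects, which is where the 17% overhead above came from.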

  2. The select call specifies 0.0 for the timeout, which causes the loop to churn endlessly, reading only a few bytes on each call. It looks like simply calling select with self.timeout would match the default blocking semantics.
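A minimal sketch of the second fix, assuming a subprocess-backed proxy (read_with_timeout is a hypothetical helper, not paramiko's API): blocking in select() for up to `timeout` seconds instead of polling with timeout=0.0 avoids spinning the CPU while waiting for data.

```python
import os
import select
import subprocess

def read_with_timeout(proc, timeout=10.0, bufsize=4096):
    """Wait up to `timeout` seconds for data on proc.stdout, then read it."""
    fd = proc.stdout.fileno()
    # select() blocks here until data is ready or the timeout expires,
    # instead of returning immediately (timeout=0.0) and busy-looping.
    ready, _, _ = select.select([fd], [], [], timeout)
    if not ready:
        return b""  # nothing arrived within the timeout
    return os.read(fd, bufsize)

proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
data = read_with_timeout(proc, timeout=2.0)
proc.wait()
print(data)
```

With a timeout of 0.0 the surrounding loop re-enters select() thousands of times per second to move a few bytes each pass, which matches the 100% CPU symptom reported above.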
lndbrg commented Oct 15, 2014

Could you create a patch/pull request of your findings? :)

acdha commented Oct 15, 2014

Ack, forgot to close this earlier – this is the same issue as in #413 / #414. I'm not entirely sure of the merits of io_sleep vs. select() with a timeout, but either should avoid the massive performance hit.

@acdha acdha closed this Oct 15, 2014