Comparing _heapq with our own legacy C implementation, blue.heapq at
CCP, I noticed that ours was somewhat faster.
I discovered that a lot of effort is spent dynamically searching
for an __lt__ operator, to provide backwards compatibility. I think we
should consider dropping that after this much time, especially for a
new Python version. Running this code:
from timeit import *
s = """
import heapq
import random
l = [random.random() for i in xrange(10000)]
"""
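The excerpt shows only the setup string, not the Timer calls that produced the figures below. A sketch of what the full benchmark might look like, assuming Python 3 (range instead of xrange) and illustrative statement strings and iteration counts (not the original post's exact benchmark):

```python
from timeit import Timer

setup = """
import heapq
import random
l = [random.random() for i in range(10000)]
h = l[:]
heapq.heapify(h)
"""

# Labels match the results quoted below; the statements themselves are
# assumptions about what each label measured.
cases = [
    ("heapify", "heapq.heapify(l[:])"),
    ("pushpop", "for x in l: heapq.heappushpop(h, x)"),
    ("replace", "for x in l: heapq.heapreplace(h, x)"),
    ("push and pop",
     "for x in l: heapq.heappush(h, x)\nfor x in l: heapq.heappop(h)"),
]

results = {}
for label, stmt in cases:
    # Best of three runs, 10 loops each, as a rough per-case timing.
    results[label] = min(Timer(stmt, setup).repeat(3, 10))
    print(label, results[label])
```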
The compatibility code was just added in Py2.6 and is needed for apps
like Twisted that currently rely on __le__ being tested. In 3.0, the
compatibility code is removed and full speed is restored.
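To make the dispatch concrete, here is a hedged sketch of the kind of fallback being discussed (illustrative only, not the actual heapq source): prefer __lt__, and fall back to __le__ for classes that only define the latter. The real 2.6-era code checks for the operator up front; this sketch uses try/except so it also runs on Python 3, where every object inherits an __lt__ slot.

```python
def cmp_lt(x, y):
    # Illustrative dispatch: try the preferred __lt__ comparison first,
    # then fall back to the __le__-based equivalent (x < y  <=>  not y <= x
    # for a total ordering). It is this extra layer that costs time on
    # every comparison inside the heap.
    try:
        return x < y
    except TypeError:
        return not (y <= x)

class OnlyLE:
    # Hypothetical class in the style of older code that defines only
    # __le__, which is what the compatibility layer exists to support.
    def __init__(self, v):
        self.v = v
    def __le__(self, other):
        return self.v <= other.v

print(cmp_lt(OnlyLE(1), OnlyLE(2)))  # True, via the __le__ fallback
```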
Also, the timing suite exaggerates the effect. A more typical use of
heaps involves a heap of tuples with the first tuple element being used
as a priority level. That increases the comparison time and decreases
the relative significance of the dispatch logic.
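For example, the common (priority, payload) pattern looks like this; each sift now pays for a tuple comparison, so the dispatch overhead is a smaller fraction of the total (task names are made up for illustration):

```python
import heapq

# A typical priority queue: the first tuple element is the priority,
# so heap ordering compares tuples rather than bare floats.
tasks = []
heapq.heappush(tasks, (2, "write report"))
heapq.heappush(tasks, (1, "fix bug"))
heapq.heappush(tasks, (3, "refile tickets"))

popped = heapq.heappop(tasks)  # lowest priority value comes out first
print(popped)  # (1, 'fix bug')
```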
I am sorry for not doing my research about the age of the compatibility
fix.
However, modifying the test slightly to work with tuples of
(random.random(), random.random())
shows a performance increase from:
heapify 0.366187741738
pushpop 0.541365033824
replace 2.69348946584
push and pop 3.12545093022
to:
heapify 0.186918030085
pushpop 0.405662172148
replace 1.46039447751
push and pop 1.75253663524
This does look like a large price to pay for this compatibility layer.