Faster implementation to collapse non-consecutive ip-addresses #67455
Comments
I found the code used to collapse addresses to be very slow on a large number (64k) of island addresses which are not collapsible. The code at Line 349 in 0f164cc was found to be guilty, especially the index lookup. The patch changes the code to discard the index lookup and has _find_address_range return the number of items consumed, so the set operation used to deduplicate the addresses can be dropped as well. Numbers from the test rig I adapted from http://bugs.python.org/issue20826, with 8k non-consecutive addresses: Execution time: 0.6893927365541458 seconds. Regards
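A minimal sketch of the idea described above (not the actual patch): have the range finder report how many items it consumed so the caller can advance without an addresses.index(...) lookup. The helper name mirrors the private stdlib function _find_address_range, but the implementation here is illustrative and assumes a sorted, deduplicated input list.

```python
import ipaddress

def _find_address_range(addresses):
    """Return (first, last, consumed) for the leading run of consecutive
    addresses in a sorted, deduplicated list."""
    first = last = addresses[0]
    consumed = 1
    for ip in addresses[1:]:
        if int(ip) != int(last) + 1:
            break
        last = ip
        consumed += 1
    return first, last, consumed

def collapse_sorted(addresses):
    """Collapse a sorted, deduplicated list of IPv4Address/IPv6Address
    objects into summary networks, advancing by the consumed count
    instead of searching for the last address with index()."""
    nets = []
    i = 0
    while i < len(addresses):
        first, last, consumed = _find_address_range(addresses[i:])
        i += consumed  # skip everything the helper already covered
        nets.extend(ipaddress.summarize_address_range(first, last))
    return nets

ips = [ipaddress.ip_address('192.0.2.%d' % i) for i in (1, 2, 3, 10, 11)]
print(collapse_sorted(sorted(ips)))
```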
Added the test rig.
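The attached test rig itself is not reproduced in this thread; purely as an illustration, a timing harness in the spirit described above (8k non-consecutive "island" addresses that cannot be merged at all) might look something like this:

```python
import time
import ipaddress

# every second address, so no two addresses are consecutive
ips = [ipaddress.ip_address('2001:db8::') + 2 * i for i in range(8192)]

start = time.perf_counter()
result = list(ipaddress.collapse_addresses(ips))
print("Execution time: %s seconds" % (time.perf_counter() - start))
print(len(result), "networks")  # one /128 per input address, nothing collapses
```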
This is great, thank you. Can you sign the contributor's agreement?
Here is an updated patch with a fix to the tests and docstrings.
I just signed the agreement; ewa@ is processing it.
New changeset f7508a176a09 by Antoine Pitrou in branch 'default':
Ok, I've committed the patch. Thank you!
Deduplication should not be omitted: dropping it slowed down collapsing of duplicated addresses.
$ ./python -m timeit -s "import ipaddress; ips = [ipaddress.ip_address('2001:db8::1000') for i in range(1000)]" -- "ipaddress.collapse_addresses(ips)"
Before f7508a176a09:
After f7508a176a09:
The proposed patch restores performance for duplicated addresses and simplifies the code using generators.
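A rough sketch (not the committed patch) of what a generator-based range finder could look like; it assumes a sorted, deduplicated, non-empty iterable of address objects and yields one (first, last) pair per consecutive run:

```python
import ipaddress

def find_address_ranges(addresses):
    # assumes sorted, deduplicated, non-empty input
    it = iter(addresses)
    first = last = next(it)
    for ip in it:
        if int(ip) != int(last) + 1:
            yield first, last  # end of a consecutive run
            first = ip
        last = ip
    yield first, last          # final run

ips = sorted({ipaddress.ip_address('192.0.2.%d' % i) for i in (1, 2, 3, 7)})
for first, last in find_address_ranges(ips):
    print(list(ipaddress.summarize_address_range(first, last)))
```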
Good catch. What is the performance on the benchmark posted here?
The same as with the current code.
Then +1. The patch looks fine to me.
New changeset 021b23a40f9f by Serhiy Storchaka in branch 'default':
My initial patch was wrong with respect to _find_address_range. Here is a patch to fix _find_address_range, drop the set, and improve performance again.
python3 -m timeit -s "import bipaddress; ips = [bipaddress.ip_address('2001:db8::1000') for i in range(1000)]" -- "bipaddress.collapse_addresses(ips)"
python3 -m timeit -s "import aipaddress; ips = [aipaddress.ip_address('2001:db8::1000') for i in range(1000)]" -- "aipaddress.collapse_addresses(ips)"
A single duplicated address is a degenerate case. When there are a lot of duplicated addresses in a range, the patch causes a regression.
$ ./python -m timeit -s "import ipaddress; ips = [ipaddress.ip_address('2001:db8::%x' % (i%100)) for i in range(100000)]" -- "ipaddress.collapse_addresses(ips)"
Unpatched: 10 loops, best of 3: 369 msec per loop
Eliminating duplicates before processing is faster once the overhead of the set operation is less than the time required to sort the larger dataset with duplicates. So we are basically comparing sort(data) to sort(set(data)).
python3 -m timeit -s "import random; import bipaddress; ips = [bipaddress.ip_address('2001:db8::') + i for i in range(100000)]; random.shuffle(ips)" -- "bipaddress.collapse_addresses(ips)"
10 loops, best of 3: 1.49 sec per loop
If the data is pre-sorted, which is possible if you retrieve it from a database, things are drastically different:
python3 -m timeit -s "import random; import bipaddress; ips = [bipaddress.ip_address('2001:db8::') + i for i in range(100000)]" -- "bipaddress.collapse_addresses(ips)"
So for my use case, where I have less than 0.1% duplicates (if any), dropping the set would be better, but other use cases exist. Still, it is easy to "emulate" sorted(set()) from a user's perspective: just call collapse_addresses(set(data)) if you expect duplicates, and get a speedup by passing in unique, possibly even pre-sorted, data. On the other hand, if you have a huge load of 99.99% sorted, non-collapsible addresses, there is no way for the user to drop the set() operation inside sorted(set()) and speed things up, and the slowdown you get is 10x. That said, I'd drop the set().
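An illustration of the workaround mentioned above (the input sizes are made up for the example, not taken from the thread): a caller who expects many duplicates can deduplicate up front, since collapse_addresses() accepts any iterable of address objects.

```python
import random
import ipaddress

# 100000 addresses, but only 100 unique values
ips = [ipaddress.ip_address('2001:db8::%x' % (i % 100)) for i in range(100000)]
random.shuffle(ips)

# Deduplicate before the call so the library only has to sort 100 items
collapsed = list(ipaddress.collapse_addresses(set(ips)))
print(collapsed)  # the 100 unique addresses merge into a few summary networks
```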