
@oberstet oberstet commented Nov 8, 2020

this is an attempt to fix #180

this does not change any existing behavior, but adds new behavior:

when PYTHON_ECDSA_ENABLE_LAZY_PRECOMPUTE env var is set, the multiplication tables for curves are only precomputed upon first use.

without the env var being set (the current behavior), tables for all supported curves are precomputed at import time regardless of whether the curves are used in the user app at all.
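The mechanism can be sketched roughly like this (a minimal illustration with hypothetical names; the real `PointJacobi` internals in python-ecdsa differ):

```python
import os

# Hypothetical sketch: class and attribute names are illustrative,
# not the actual python-ecdsa internals.
LAZY = os.environ.get("PYTHON_ECDSA_ENABLE_LAZY_PRECOMPUTE") is not None


class PointJacobi:
    def __init__(self, x, y, generator=False):
        self._x, self._y = x, y
        self._precompute = []
        if generator and not LAZY:
            # eager path: build the multiplication table at
            # construction time, i.e. at import time for the
            # built-in curves
            self._maybe_precompute()

    def _maybe_precompute(self):
        if not self._precompute:
            # stand-in for the real doubling/addition table build
            self._precompute = [(self._x * 2 ** i, self._y * 2 ** i)
                                for i in range(4)]

    def __mul__(self, scalar):
        # lazy path: build the table on first multiplication
        self._maybe_precompute()
        x, y = self._precompute[0]
        return (x * scalar, y * scalar)
```

With the env var set, construction of a generator point skips the table build, so importing a curve module becomes cheap; the cost is paid on the first multiplication instead.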

effect can be seen in this test:

(cpy382_2) oberstet@intel-nuci7:~/scm/3rdparty/python-ecdsa$ python test.py 
python-ecdsa v0.16.0+1.g3388afc
0.4552568449998944
(cpy382_2) oberstet@intel-nuci7:~/scm/3rdparty/python-ecdsa$ PYTHON_ECDSA_ENABLE_LAZY_PRECOMPUTE=1 python test.py 
python-ecdsa v0.16.0+1.g3388afc
0.020854599999438506
(cpy382_2) oberstet@intel-nuci7:~/scm/3rdparty/python-ecdsa$ 

using test.py:

import timeit

def test1():
    from ecdsa.curves import SECP256k1

print(timeit.timeit(test1, number=1))

@oberstet oberstet changed the title add option to lazily precompute multiplication tables via env var PYT… add option to lazily precompute multiplication tables Nov 8, 2020
coveralls commented Nov 8, 2020

Coverage Status

Coverage decreased (-0.02%) to 97.996% when pulling e0e6c38 on oberstet:lazy-precompute-jacobi into f27b20f on warner:master.

@tomato42 tomato42 added the feature functionality to be implemented label Nov 8, 2020
tomato42 commented Nov 8, 2020

oberstet commented Nov 8, 2020

ok, pushed adjustments based on your feedback, and tested on cpy and pypy:

  • https://gist.github.com/oberstet/905251d53a5168ea8c56996618a3a4cb
  • https://gist.github.com/oberstet/c8ed75f1be47f2eacbfccc9b25d5565d

2 notes:

  • in the original code, the precomputation was called with the constructor x/y, whereas in the new code, x/y can be gmp-wrapped. I tested both the non-gmp and gmp tox envs, so that should be good?
  • I now added a writer (single-writer, I guess) lock within the precompute block. new locks mean new risks of deadlock. also, that lock will now be taken later on .. lazily (which is the whole point).

tomato42 commented Nov 8, 2020

> ok, pushed adjustments based on your feedback, and tested on cpy and pypy:
>
> * https://gist.github.com/oberstet/905251d53a5168ea8c56996618a3a4cb
> * https://gist.github.com/oberstet/c8ed75f1be47f2eacbfccc9b25d5565d
>
> 2 notes:
>
> * in the original code, the precomputation was called with constructor x/y whereas in the new code, x/y can be a gmp wrapped x/y. I tested both the non-gmp and gmp tox envs, so that should be good?

check the CI results: https://travis-ci.org/github/warner/python-ecdsa/pull_requests

> * I now added a writer (single I guess) lock within the precompute block. new locks mean new risks of deadlock. also that lock will now happen later on .. lazy (which is the whole point).

the solution to deadlocking is not locking in fewer places but locking in the correct places; here we need to acquire the reader lock when checking if the precompute is initialised, free it, and then acquire a writer lock if it wasn't initialised; then, after getting the writer lock, we need to check again for initialisation.
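The cycle described above can be sketched like this (a minimal stand-in RWLock and illustrative names, not the library's actual `_rwlock` implementation or its real table build):

```python
import threading


class RWLock:
    """Minimal reader/writer lock, kept deliberately small for illustration."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def reader_acquire(self):
        with self._cond:
            self._readers += 1

    def reader_release(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def writer_acquire(self):
        # hold the underlying lock exclusively, waiting out active readers
        self._cond.acquire()
        while self._readers:
            self._cond.wait()

    def writer_release(self):
        self._cond.release()


class LazyPoint:
    def __init__(self):
        self._update_lock = RWLock()
        self._precompute = []

    def _build_table(self):
        return [1, 2, 4, 8]  # stand-in for the real doubling table

    def _maybe_precompute(self):
        # fast path: take the read lock only to check initialisation
        self._update_lock.reader_acquire()
        try:
            initialised = bool(self._precompute)
        finally:
            self._update_lock.reader_release()
        if initialised:
            return
        # slow path: take the write lock, then RE-CHECK, because another
        # thread may have completed the precompute between the two locks
        self._update_lock.writer_acquire()
        try:
            if not self._precompute:
                self._precompute = self._build_table()
        finally:
            self._update_lock.writer_release()
```

The re-check under the write lock is what makes the pattern safe without read-to-write lock promotion: at worst two threads race to the write lock, and the loser sees the table already built and does nothing.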


oberstet commented Nov 8, 2020

> here we need to acquire reader lock when checking if the precompute is initialised, free it, and then acquire a writer lock if it wasn't initialised; then after getting the writer lock we need to check again for initialisation

I've updated the PR which should reflect ^ now ..

@@ -34,6 +34,7 @@
 # Written in 2005 by Peter Pearson and placed in the public domain.

 from __future__ import division
+import os
tomato42 (review comment):

unused import

# since we lack promotion of read-locks to write-locks, we do a
# "acquire-read-lock check, acquire-write-lock plus recheck" cycle ..
try:
    self._scale_lock.reader_acquire()
tomato42 (review comment):
it's _scale_lock because it was used for scale() but now we also use it for precompute... maybe change its name to _update_lock?

tomato42 commented Nov 8, 2020

also, a few previous versions failed in CI because of excessive line length; you may want to run tox -e codechecks before pushing a new version

oberstet commented Nov 8, 2020

thanks for your time and for reviewing! pls merge.

sidenote: the CI settings on travis for this repo don't let newer pushes to the PR supersede earlier queued builds (so CI run requests pile up), and for whatever reason, a single CI run takes hours.

tomato42 commented Nov 8, 2020

the builds take hours because travis recently limited the number of free workers: https://www.traviscistatus.com/#month. don't know why exactly. unfortunately, I don't have access to administer it, so I can't fix the dismissal of stale PRs or the lack of CI status in PRs either... I'll send an email to Warner tomorrow


@tomato42 tomato42 left a comment


ok, looks good, but I'll wait until the CI finishes before merging

oberstet commented Nov 8, 2020

thanks for approving!

re: travis: ah, right. interesting, that's news to me. guess we'll run into this soonish as well on our OSS stuff ..

@tomato42 tomato42 merged commit 73f3d3e into tlsfuzzer:master Nov 9, 2020
@tomato42 tomato42 changed the title add option to lazily precompute multiplication tables add code to lazily precompute multiplication tables Nov 9, 2020

tomato42 commented Nov 9, 2020

merged, thank you!

Labels
feature functionality to be implemented
Development

Successfully merging this pull request may close these issues.

Importing anything from ecdsa is too expensive
3 participants