Precompute-related updates #213
Conversation
As we now precompute lazily, force the precomputation to happen when the method is called, not when the point is first used; also allow the `lazy` option to delay it until the first time the point is used (at the price of first-verification latency).
As the lock now guards not just scaling the point but precomputation too, rename it so that's more apparent.
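The pattern described above, lazy precomputation guarded by the (renamed) lock, can be sketched roughly as follows. This is an illustrative mock, not the library's actual implementation: the names `PointJacobi`, `_update_lock`, `_maybe_precompute`, and `_build_table` are assumptions for the sketch, and the "table" is a stand-in for the real precomputed multiples of the base point.

```python
import threading


class PointJacobi:
    """Hypothetical sketch of lazy vs eager precomputation.

    All names here are illustrative; they do not mirror ecdsa's real API.
    """

    def __init__(self, lazy=False):
        # the lock guards both point scaling and precomputation,
        # hence the more general name
        self._update_lock = threading.Lock()
        self._precomputed = None
        if not lazy:
            # eager mode: pay the one-time cost up front
            self._maybe_precompute()

    def _maybe_precompute(self):
        # double-checked locking so repeated calls stay cheap
        # and concurrent first uses don't duplicate the work
        if self._precomputed is None:
            with self._update_lock:
                if self._precomputed is None:
                    self._precomputed = self._build_table()

    def _build_table(self):
        # stand-in for building the expensive table of point multiples
        return [2 ** i for i in range(8)]

    def __mul__(self, scalar):
        # lazy mode: the first multiplication triggers precomputation
        self._maybe_precompute()
        return sum(self._precomputed) * scalar
```

With `lazy=True`, construction is cheap and the first multiplication absorbs the setup cost; with `lazy=False`, the cost is paid in the constructor, which is why the method call can force it immediately.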
Force-pushed from df5daed to 38e5253.
The math is the same for the NIST P-256 and Brainpool 160r1 curves, but since P-256 is larger it takes longer to process, which in turn causes random timeouts (tlsfuzzer#206). Decrease the size of the numbers to hopefully make CI pass more consistently.
Looks good to me. My only question is why the precompute has a lazy flag at all. Why not just always have it lazy? Is there a use case where being lazy would be slower? |
yes, the first operation with lazy precompute will be much slower (a few hundred times slower), so if the code is latency-sensitive, the first time the key is actually used may be too late to do it |
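The trade-off being discussed, paying the setup cost at start-up versus on first use, looks roughly like this in application code. The `Verifier` class, its `_table`, and the `precompute`/`verify` names are hypothetical stand-ins for the expensive point-multiples table, not ecdsa's actual interface.

```python
class Verifier:
    """Illustrative sketch of amortizing a one-time setup cost.

    The names and the fake "table" are assumptions for this sketch.
    """

    def __init__(self):
        self._table = None  # nothing precomputed yet (lazy by default)

    def precompute(self):
        # force the expensive one-time work now, e.g. at service start-up,
        # so it doesn't land inside the first latency-sensitive request
        if self._table is None:
            self._table = {i * i for i in range(1000)}

    def verify(self, x):
        # lazy fallback: the first call pays the full setup cost
        self.precompute()
        return x in self._table


# latency-sensitive daemon: warm up before serving requests
v = Verifier()
v.precompute()
v.verify(4)  # already fast, the table exists

# one-shot script: just call verify and eat the first-use cost
w = Verifier()
w.verify(9)
```

On a desktop the difference is negligible, but (as noted below) on a slow platform the lazy first call can block for seconds, which is why latency-sensitive callers want the explicit warm-up.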
But is that really a factor? If performance were that critical, would you use Python? Wouldn't this need to use ctypes? If you start up a program to do calculations, you always take this hit anyway before you can "get to work". So the only use case we are talking about is a daemon waiting to do signing operations. Is "a few hundred times slower" meaningful compared to, say, 20 ms of network latency? It feels a bit more like feature creep than an actual real-life use case to me. But having said that, I don't object to any of it. |
because on very underperforming platforms (like a Raspberry Pi) that "few hundred times slower" translates to a few seconds, see #211 and linked issues
yes, on a regular PC it's just 20 ms so it doesn't matter, but on a phone it may be 2-3 seconds
well, yeah, but it's reusing the code we need for library init anyway so... meh |