Reduce the number of PublicKey decompressions #38
Allowing this library to take in the uncompressed public key would be super useful. A project I'm working on has a use case for the uncompressed version: when verifying many signatures from the same pubkey, we might as well cache the decompression and avoid that performance hit.
Hi @coltfred,
Out of curiosity, which library is doing that? Apologies if you're already familiar with this background: the point is kept in a decompressed internal representation precisely so that repeated decompressions can be avoided. Anyway, hope that helps. If you'd like further reading on the internals of the point representations, the library's documentation covers them.
EDIT: Never mind, this library is already doing the much smarter thing of having a public key object that avoids repeated decompressions by design. (My reference was golang/crypto/ed25519; I should have read this API before commenting, apologies.)
Hi @ValarDragon,
I might be mistaken in my understanding of what you and @coltfred are asking for, but the uncompressed form in this case would be an extended, twisted Edwards point (as described above) in the form
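For concreteness, here is a hedged sketch (standard notation, not quoted from this thread) of what the extended twisted Edwards representation looks like:

```latex
% Extended twisted Edwards coordinates: a point is (X : Y : Z : T) with
%   x = X / Z,   y = Y / Z,   x y = T / Z,
% i.e. four field elements, versus the 32-byte compressed encoding,
% which stores only y plus one sign bit of x. Decompression recovers
%   x^2 = (y^2 - 1) / (d y^2 + 1)
% for the ed25519 curve  -x^2 + y^2 = 1 + d x^2 y^2.
```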
On the surface, one would think this would be a good idea, until one realises what the verification routine must compute.
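I believe the computation being alluded to is the hash in the verification equation, which is taken over the compressed encoding of the key, so the compressed form is needed regardless (standard Ed25519, sketched here rather than quoted from the code):

```latex
% Ed25519 verification for signature (R, s), public key A, message M:
%   k = H( enc(R) || enc(A) || M )      % enc(.) is the 32-byte
%   check: [8][s]B == [8]R + [8][k]A    % compressed encoding
% Since enc(A) is hashed, a verifier handed only the decompressed A
% must re-compress it anyway; a cache should keep *both* forms.
```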
My bad, you're totally right. I had also originally misread the issue and hurriedly replied. What I intended to write was: could we cache both the compressed and internal representations of the pubkey curve point, e.g. cache the computation done on those lines? The intent of this would be a space/time trade-off: if you're using the same pubkey repeatedly, this becomes a worthwhile trade imo.
@isislovecruft To my embarrassment, I had mixed up which value was 64 bytes in my other library. I'm so sorry about the issue.
@coltfred No worries at all! Did you figure out what it was or if there was a way to be compatible? |
… key. Note that I've yet to benchmark this to see if it's faster, but there are two new comparison benchmark functions in the benches/ directory. * ADD a feature suggested by Dev Ojha (@ValarDragon) to reduce the number of point decompressions done when verifying a batch of signatures created all with the same key: dalek-cryptography#38 (comment)
@ValarDragon Aha, I see what you're saying. Sorry for misunderstanding! I wasn't sure how much noticeably faster it would be, but since it only took ten minutes to write the code, I tried it out; you may want to benchmark it on the architecture(s) and with the batch sizes you're using to see the tradeoffs. On an i9-7800X using the avx2 backend, for each batch size below, the percent improvement of the new function is:
If it's something you'll use, I'm happy to merge it! (What are you doing with so many signatures created with the same key, if you don't mind my asking?)
Thanks for writing this, it would be awesome to have! I can easily optimize the batch size around the benchmarks.

The idea is for a Proof of Stake blockchain (namely Tendermint): we have validators signing every block. If someone is trying to sync by downloading the entire chain, the bottleneck (assuming sufficient bandwidth) is verifying tons of signatures, which come from this same set of 100 people. (Especially so for light clients, who basically just download the signatures and a couple of hashes.) So instead of verifying signatures block by block, we could have them download the blocks and then verify a batch of the given validators' signatures (or, alternatively, verify a batch of all 100 per block at once). We could cache the compressed and internal formats of the pubkey in that setting as well. It will probably be a while before we integrate this (currently we are using agl's golang ed25519 library, so we'll have to use cgo to use the Rust API), but my hope is to switch to this library soon. I do think this is something we would use!

I benchmarked with an avx2 backend as well, on an i7-7700; the benchmarks are essentially in the same ratio as on your system. As an aside, thanks for writing such well-documented batch verification code in the main dalek repo! I was previously under the impression that DJB's suggestion in the ed25519 paper, using heaps, was the fastest way short of switching to Pippenger's, which seemed slightly odd.
This may be more re-usable for other projects as well if there were a "cached pubkey" struct (or something to that extent) which you can batch-verify and normal-verify with. The point of such a struct would be to store both the Edwards point and the compressed point. Then it would be easier to batch-verify signatures from multiple cached pubkeys together. Additionally, you could single-verify with an already-cached pubkey a bit faster than you could otherwise. Doing that cleanly may require a pubkey trait, though; I'm unsure if Rust incurs any performance overhead from that. (I'm still fairly new to Rust.)
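A minimal sketch of the "cached pubkey" idea, with placeholder types standing in for curve25519-dalek's `CompressedEdwardsY` and `EdwardsPoint` (the real decompression is expensive field arithmetic; here it is stubbed out, and names like `CachedPublicKey` are hypothetical, not this library's API):

```rust
// Hypothetical sketch: cache both the compressed (wire) form and the
// decompressed (internal) form of a public key, so repeated verifications
// pay the decompression cost only once.

// Stand-ins for curve25519-dalek's CompressedEdwardsY / EdwardsPoint.
#[derive(Clone, Copy, PartialEq, Debug)]
struct CompressedPoint([u8; 32]);

#[derive(Clone, Copy, PartialEq, Debug)]
struct DecompressedPoint([u8; 32]); // real type holds (X, Y, Z, T) field elements

impl CompressedPoint {
    // Stub for the expensive decompression (a square root in the field).
    fn decompress(&self) -> DecompressedPoint {
        DecompressedPoint(self.0)
    }
}

struct CachedPublicKey {
    compressed: CompressedPoint,
    point: DecompressedPoint, // cached at construction time
}

impl CachedPublicKey {
    fn new(bytes: [u8; 32]) -> Self {
        let compressed = CompressedPoint(bytes);
        let point = compressed.decompress(); // done exactly once
        CachedPublicKey { compressed, point }
    }

    // Every verify (single or batch) reuses the cached point directly
    // instead of calling decompress() per signature.
    fn point(&self) -> &DecompressedPoint {
        &self.point
    }
}

fn main() {
    let pk = CachedPublicKey::new([7u8; 32]);
    // Verifying many signatures reuses the same cached point:
    for _ in 0..3 {
        let _ = pk.point();
    }
    println!("cached point reused without re-decompressing");
}
```

Both forms are kept because batch verification needs the Edwards point for the arithmetic, while the hash input needs the compressed encoding.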
Since the |
I believe this should be implemented by #61 and #62, which transparently cache the decompressed point. Potential remaining work is to implement a generalised interface to verification which takes, e.g., either representation of the key.
When interacting with other Ed25519 libraries, one might need to be able to bring in a public key which is in x + y form (`[u8; 64]`). One might also want to export the x + y form. Neither of these use cases is currently supported by ed25519-dalek, which can be a deal-breaker for some users. Is there something I'm missing that would allow the export of the public key in expanded form? Is there interest in adding this kind of support if it's not already possible?
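For illustration, a hedged sketch of the 64-byte x ‖ y layout being asked about, assuming each half is a 32-byte field element as in the compressed form (the helper names here are hypothetical, not ed25519-dalek API):

```rust
// Hypothetical sketch of importing/exporting a public key in
// uncompressed x || y form ([u8; 64]): pack the two 32-byte field
// elements side by side, and split them back apart on import.

fn to_uncompressed(x: [u8; 32], y: [u8; 32]) -> [u8; 64] {
    let mut out = [0u8; 64];
    out[..32].copy_from_slice(&x);
    out[32..].copy_from_slice(&y);
    out
}

fn from_uncompressed(bytes: [u8; 64]) -> ([u8; 32], [u8; 32]) {
    let mut x = [0u8; 32];
    let mut y = [0u8; 32];
    x.copy_from_slice(&bytes[..32]);
    y.copy_from_slice(&bytes[32..]);
    (x, y)
}

fn main() {
    let x = [1u8; 32];
    let y = [2u8; 32];
    let packed = to_uncompressed(x, y);
    let (x2, y2) = from_uncompressed(packed);
    assert_eq!((x, y), (x2, y2));
    println!("round-trip ok");
}
```

An importer would still have to validate that (x, y) actually lies on the curve; the byte shuffling above is only the serialization half of the problem.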