Ed25519 leaks private key if public key is incorrect #170
Comments
On Monday, June 16, 2014, MaartenBaert notifications@github.com wrote:
Because of the multitude of real-world entropy failures with random nonces …
Here's a djb blog post on entropy attacks: http://blog.cr.yp.to/20140205-entropy.html

I'd also note that the CFRG is moving towards deterministic protocols …
Just wondering why this was closed with no explanation? It's an interesting idea.
Because as the original poster admitted, it's not a problem thanks to the Ed25519 API:
One of the suggested functions already exists (I contributed it):
Regarding any other changes to Ed25519: it's far too late for those (nor do I think the proposed changes are a good idea). The final draft of the EdDSA/Ed25519 RFC has already been sent to the RFC editor.
I agree that the original issue is not a problem with the current API. However, I think my point about fault attacks still stands. Suppose you are signing a large block of data on a device that does not have ECC memory. It's feasible that a bit gets flipped in M between the first and second hash, so the nonce r is derived from the original message while h is computed over a corrupted one.

The receiver tries to verify the signature and finds that it is invalid. So he makes the same request again; the second time no bit flip happens and he gets a valid signature. Both signatures share the same r but different values of h, so he can now extract the private key using `a = (S1 - S2)/(h1 - h2)`.

Even if the receiver only gets … Of course such bit flips are extremely rare, but they do happen; otherwise there would be no need for ECC memory. And it's absolutely not intuitive to normal programmers that a simple bit flip in the message will somehow leak the private key.

Luckily there is an easy way to avoid the entire problem: just verify your own signature before you transmit it. This also covers the previous issue where the public key is invalid, and it protects against implementation flaws in the math routines that could result in invalid signatures. It could easily be added to libsodium; the processing cost is not that high, and it requires no changes to the Ed25519 algorithm or the proposed standard.
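The "verify your own signature before transmitting" countermeasure can be illustrated with a toy deterministic Schnorr-style scheme. This is a sketch, not Ed25519 and not the libsodium API: the group parameters are tiny and insecure, and all names are made up. A simulated bit flip between the second hash and the scalar arithmetic yields a signature that the self-check rejects:

```python
# Toy deterministic Schnorr-style scheme over a tiny prime-order group,
# just to illustrate checking one's own signature before sending it.
import hashlib

P = 2039   # safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup mod P

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha512(b"|".join(parts)).digest(), "big") % Q

def sign(z: bytes, a: int, A: int, M: bytes, fault: bool = False) -> tuple:
    r = H(z, M)                                   # deterministic nonce (first hash)
    R = pow(G, r, P)
    h = H(str(R).encode(), str(A).encode(), M)    # second hash
    if fault:
        h ^= 1        # simulate a bit flip between hashing and the scalar math
    S = (r + h * a) % Q
    return R, S

def verify(A: int, M: bytes, sig: tuple) -> bool:
    R, S = sig
    h = H(str(R).encode(), str(A).encode(), M)
    return pow(G, S, P) == (R * pow(A, h, P)) % P

a = 123                # secret scalar
A = pow(G, a, P)       # public key
z = b"nonce seed"
M = b"message"

ok = sign(z, a, A, M)
assert verify(A, M, ok)                 # healthy signature passes

faulty = sign(z, a, A, M, fault=True)
assert not verify(A, M, faulty)         # self-check catches the fault before transmission
```

The faulty signature would leak the key material if it ever reached an attacker alongside a good one; verifying before sending keeps it on the device.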
Interesting. These bit flips become much less scary if you sign the hash of M instead of M itself.
Correct, and signing the hash of M is actually faster if M is large, since you only have to calculate the large hash once. Still, it's not 100% secure: bit flips in the hash are still possible, even though they are far less likely, and if they happen they are equally damaging. There are also other points in the signature computation where any error will leak the private key. Faults can also be introduced intentionally through glitches on the power supply if the attacker has access to it (a known problem for smartcards). AFAIK, the easiest way to be sure is to just verify your own signatures.
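The prehashing idea sketched: hash the large message once up front, so everything downstream, including both internal hashes of the signing step, only ever touches a fixed 64-byte digest (`toy_sign` is a hypothetical stand-in, not a real API):

```python
# Prehash sketch: hash the large message once, sign only the short digest.
import hashlib

def toy_sign(sk: bytes, data: bytes) -> bytes:
    # stand-in for a deterministic signature over a short input
    return hashlib.sha512(sk + data).digest()

M = b"x" * 1_000_000                  # large message, hashed exactly once
digest = hashlib.sha512(M).digest()
sig = toy_sign(b"secret key", digest)
assert len(digest) == 64              # signing now only touches these 64 bytes
```

The window in which a flip in M can desynchronize the two hashes shrinks to the time between the prehash and the signing call.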
Note the soon-to-be-RFC EdDSA draft specifies an IUF-capable prehashing variant of Ed25519, called Ed25519ph.
A more general note: yes, fault attacks on deterministic signature schemes are definitely possible. However, in practice we've seen entropy failures in nondeterministic signature schemes as a fairly common real-world problem. Likewise, there's an entire class of malleability attacks that applies to systems which themselves include a signature under a random nonce in a subsequent hash calculation, and which does not apply to deterministic signature schemes. (Edit: in retrospect these attacks are best addressed by omitting signatures from hashed content entirely.) Overall I think the rewards greatly outweigh the risks.

Re-verifying the signature protects against random(ly injected) faults, but not ones that can be deliberately caused in a deterministic manner: an attacker who can precisely inject faults can simply cause the same fault on the second pass, so now you've just made your system substantially slower for nothing. It might be worth backing up and asking what your threat model is.
I'm not arguing against deterministic signatures, I understand their value. There are other ways to protect against fault attacks. Verifying signatures does not follow the same code path as signing, so this is not trivial. Also, it's quite easy to inject a random fault by messing with the supply, but much harder to reliably inject the same fault twice, even in the same code path. I'm not focusing on any threat model in particular, Ed25519 is a general-purpose algorithm after all. I don't see why it couldn't be used in smartcards at some point. |
The important thing to keep in mind about adding signature verification to each signing operation is that verification is much slower than signing (over 3X slower on my computer), so using this approach to mitigate fault attacks comes with pretty significant performance overhead. I would also note that while there have been multiple demonstrations of fault attacks on deterministic ECDSA, to my knowledge there has not yet been a practical demonstration of a fault attack on EdDSA, and papers analyzing their relative resistance to fault attacks have found EdDSA to be more resilient: https://books.google.com/books?id=EC0DDQAAQBAJ&lpg=PA192&ots=UHwHH8LGA4&pg=PA182#v=onepage&q&f=false
My tests gave me 42 us for signing and 103 us for verifying. So yes, the overhead is very significant, but it is still much faster than a single RSA-2048 signature (559 us) or RSA-4096 signature (5873 us). So is it really a problem? Maybe it should not be the default behaviour, but I think it's reasonable for applications where the private key must be protected at all costs and the overhead is acceptable. I also like the fact that it protects me from a lot of potential implementation bugs in the signing code. I will definitely implement it in my applications, even if the risk is small. |
As I said upthread, I'd think even a mediocre checksum of R, M and A would do. Rowhammer-type attacks only inject random faults, except in crazy smartcard-type situations, but even there, obtaining, say, an md5 collision against an unknown seed with your rowhammer sounds pretty hard. Are you worried about someone corrupting the running sha512 function somehow?
My main concern was smartcard-like scenarios, and bit flips in M on regular hardware. Ignoring those, I would be worried about implementation errors in the math routines. I agree that checksums on R, M and A will probably cover 99.9% of the problems on regular hardware, just like ECC RAM would.
@tarcieri I'm reading the paper you linked, and maybe I'm missing something, but I don't agree with what they claim here (page 189):
What they call … Their second proposed attack considers bit flips induced in …
"... will produce an incorrect …"
I just realized that verifying the signature isn't sufficient if the bit flips occur in … Also, if you decide to sign …
If you're worried about corruption of the sha512 state, then you can always rerun it too after the signature is produced. Along with a checksum on …, I think the remaining weak point is actually …
Yes, in case of intentional fault attacks (the smartcard scenario), …
Lots of discussion about EdDSA fault attacks on @trevp's curves list: https://moderncrypto.org/mail-archive/curves/2016/000772.html |
An interesting suggestion re: fault attacks: compute the signature twice and compare. This should be considerably faster than a signature verification (although it won't save you from an attacker who can inject the same fault twice).
That is totally fine with short messages or with Ed25519ph, but doubling the time it takes for computing a signature may not be acceptable in other contexts. We also need to duplicate the code, since a fault can also affect the code itself. That's doable, but maybe it should require the library to be compiled with a specific flag in order to improve its resistance against this class of attacks, even if computations then become more expensive. |
@jedisct1 yeah, I wasn't necessarily suggesting libsodium do this per default. @MaartenBaert was suggesting performing a signature verification after each signing operation (in his own code), which would be considerably slower than signing twice. |
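The compute-twice check being discussed might look like this sketch, where `toy_sign` stands in for any deterministic signing function (not a real API); a random fault during either run would make the two results disagree and trigger the exception:

```python
# Sketch of "compute the signature twice and compare" as a fault check.
import hashlib
import hmac

def toy_sign(sk: bytes, M: bytes) -> bytes:
    # stand-in for a deterministic signature computation
    return hashlib.sha512(sk + M).digest()

def sign_checked(sk: bytes, M: bytes) -> bytes:
    s1 = toy_sign(sk, M)
    s2 = toy_sign(sk, M)                  # independent second computation
    if not hmac.compare_digest(s1, s2):   # constant-time comparison
        raise RuntimeError("fault detected during signing")
    return s1

assert sign_checked(b"key", b"message") == toy_sign(b"key", b"message")
```

As noted above, this only defends against random faults: an attacker who can reproduce the exact same fault in both runs defeats the comparison.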
I just noticed something while reading the Ed25519 paper. The signature algorithm works like this:
Key generation: hash the secret key with sha512; one half of the digest becomes (after clamping) the secret scalar a, and the other half becomes a secret value z. The public key is A = aB, where B is the standard base point.

Sign message M: r = sha512(z, M); R = rB; h = sha512(R, A, M); S = (r + h·a) mod l. The signature is (R, S).
Ed25519 is only secure if r is secret and random. If it isn't, an attacker can easily recover the private key using `a = (S - r)/h`. Also, signing two messages with the same value for r would leak the private key even when r is unknown, because `a = (S1 - S2)/(h1 - h2)`. This was all explained in the paper.

r is the hash of a secret value z and the message M, so whenever the message changes, r changes too. However, if an attacker can somehow convince the victim to sign a message with the correct private key but the wrong public key, the value of the second hash (h) changes while the value of the first hash (r) stays the same. Thus, the attacker can recover the private key from two signatures of the same message made with different public keys, using `a = (S1 - S2)/(h1 - h2)`.

Obviously a properly written program should never use the wrong public key in the first place, but it's an easy mistake to make. This doesn't seem to affect NaCl or Sodium though, because of how the API stores the keys.
A copy of the public key is stored as part of the private key, and crypto_sign_ed25519 uses that copy. So it's pretty hard to accidentally use the wrong public key. Unless you try to do what I did: I noticed that the last 32 bytes of the private key were just a copy of the public key, so I decided not to store them to save some space. That's probably a bad idea, so I'm looking for a less error-prone alternative.
I can think of two possible solutions:

- …
- Replace `r = sha512(z,M)` with `r = sha512(z,A,M)`. With this modification, A effectively becomes part of the message. Incorrect values of A would no longer affect the security of the signature scheme (since you're just signing a different message, with a different r). This change does not require any changes to the verification algorithm; in fact, only the owner of the private key can tell whether a signature was generated with the original or the modified algorithm. The signatures will still be different though, and if the same message is signed with both the original and the modified algorithm, it leaks the difference between r1 and r2. I think an attacker would have to break sha512 to do anything useful with that information, but I'm not quite sure.

Again, this is not a problem with Ed25519 or a bug in NaCl or Sodium. It's just a potential source of vulnerabilities, because the current design encourages programmers to mess with the public key.
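The effect of the modified nonce can be checked numerically at the scalar level: once A is hashed into r, two signatures of the same message under different public keys no longer share r, so the subtraction `(S1 - S2)/(h1 - h2)` stops isolating a. This sketch uses a stand-in prime modulus and placeholder byte strings, not Ed25519's actual group order, and abstracts R away as a fixed label:

```python
# Scalar-only sketch: with r = sha512(z, A, M), a wrong public key changes
# r as well as h, and the two-signature subtraction no longer recovers a.
import hashlib

Q = 2**255 - 19   # stand-in prime modulus (not Ed25519's group order)

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha512(b"|".join(parts)).digest(), "big") % Q

a = H(b"private scalar")            # long-term secret scalar
z = b"secret nonce seed"
M = b"message"
A_good, A_bad = b"right pubkey", b"wrong pubkey"

def S(A: bytes) -> int:
    r = H(z, A, M)                  # modified nonce: the public key is hashed in
    h = H(b"R", A, M)
    return (r + h * a) % Q

h1, h2 = H(b"R", A_good, M), H(b"R", A_bad, M)
attempt = (S(A_good) - S(A_bad)) * pow((h1 - h2) % Q, -1, Q) % Q
assert attempt != a                 # the two-signature attack no longer isolates a
```

Since the two nonces now differ, the difference S1 - S2 contains the unknown term r1 - r2 and the attacker learns nothing about a without breaking sha512.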
PS: By the same reasoning, Ed25519 is quite sensitive to fault attacks. If the message gets corrupted after the first hash but before the second hash, an attacker can use the same trick to extract the private key, thanks to the deterministic nonce. I assume this kind of memory corruption could happen in real life, since there would be no need for ECC RAM otherwise. I don't really understand why Ed25519 uses a deterministic nonce in the first place - these problems wouldn't exist if r was random.
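The fault scenario in the PS can be verified numerically at the scalar level: with the nonce fixed and two different values of h, the subtraction isolates a exactly. This sketch keeps only the arithmetic `S = r + h·a (mod Q)`; Q is a stand-in prime modulus rather than Ed25519's real group order, and the byte strings are placeholders:

```python
# Scalar-only sketch of the key-recovery algebra behind the fault attack.
import hashlib

Q = 2**255 - 19   # stand-in prime modulus (not Ed25519's group order)

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha512(b"|".join(parts)).digest(), "big") % Q

a = H(b"private scalar")            # long-term secret scalar
z = b"secret nonce seed"
M = b"message"
r = H(z, M)                         # deterministic nonce: identical in both runs

# The victim signs the same message twice, but h differs between the runs
# (a bit flip in M before the second hash, or wrong public key bytes):
h1 = H(b"R", b"public key", M)
h2 = H(b"R", b"public key", bytes([M[0] ^ 1]) + M[1:])
S1 = (r + h1 * a) % Q
S2 = (r + h2 * a) % Q

# The attacker knows S1, S2, h1, h2 (all derivable from public data):
recovered = (S1 - S2) * pow((h1 - h2) % Q, -1, Q) % Q
assert recovered == a               # full private scalar recovered
```

The common nonce r cancels in S1 - S2 = (h1 - h2)·a, which is exactly why a single flipped bit between the two hashes is enough.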