
Ed25519 leaks private key if public key is incorrect #170

Closed

MaartenBaert opened this issue Jun 16, 2014 · 22 comments

@MaartenBaert

I just noticed something while reading the Ed25519 paper. The signature algorithm works like this:

Key generation

seed = random 32-byte string
(a,z) = sha512(seed)    # split the 64-byte hash: a = secret scalar (after clamping), z = secret prefix
A = a*B                 # public key; B is the standard base point

Sign message M

r = sha512(z,M)         # deterministic nonce
R = r*B
h = sha512(R,A,M)
S = r+h*a               # mod the group order
signature = (R,S)

Ed25519 is only secure if r is secret and unique per message. If r is known, an attacker can easily recover the private key as a = (S-r)/h (all arithmetic mod the group order). Signing two different messages with the same r likewise leaks the private key, even when r itself is unknown, because a = (S1-S2)/(h1-h2). This was all explained in the paper.

r is the hash of a secret value z and the message M, so whenever the message changes, r changes too. However, if an attacker can somehow convince the victim to sign a message with the correct private key but the wrong public key, the value of the second hash (h) changes while the value of the first hash (r) stays the same. Thus, the attacker can recover the private key from two signatures of the same message made with different public keys, using a = (S1-S2)/(h1-h2).
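
To make the algebra concrete, here is a toy sketch in Python using plain integers modulo the Ed25519 group order (the curve arithmetic is omitted; the values of a, r, h1 and h2 are made up and only stand in for the real quantities):

# Recover the secret scalar a from two signatures that share the nonce r
# but have different challenge hashes h1 and h2 (e.g. because A differed).
L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

a = 1234567890123456789          # toy secret scalar
r = 987654321098765432109876     # toy nonce (shared by both signatures)
h1, h2 = 111111111111111, 222222222222222

S1 = (r + h1 * a) % L
S2 = (r + h2 * a) % L

# The attacker knows S1, S2, h1 and h2, but neither r nor a:
recovered = (S1 - S2) * pow(h1 - h2, -1, L) % L
assert recovered == a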

Obviously a properly written program should never use the wrong public key in the first place, but it's an easy mistake to make. This doesn't seem to affect NaCl or Sodium though, because the API stores the keys like this:

public key = A
private key = (seed,A)

A copy of the public key is stored as part of the private key, and crypto_sign_ed25519 uses that copy. So it's pretty hard to accidentally use the wrong public key. Unless you try to do what I did: I noticed that the last 32 bytes of the private key were just a copy of the public key, so I decided not to store them to save some space. That's probably a bad idea, so I'm looking for a less error-prone alternative.

I can think of two possible solutions:

  • Replace r = sha512(z,M) with r = sha512(z,A,M), so that A effectively becomes part of the message (see the sketch after this list). Incorrect values of A would no longer affect the security of the scheme, since you would just be signing a different message with a different r. This requires no changes to the verification algorithm; in fact, only the owner of the private key can tell whether a signature was generated with the original or the modified algorithm. The signatures will still differ, though, and if the same message is signed with both the original and the modified algorithm, it would leak the difference between r1 and r2. I think an attacker would have to break sha512 to do anything useful with that information, but I'm not quite sure.
  • Add a function that extracts the public key from the private key (i.e. copies the last 32 bytes), so programmers who want to save 32 bytes can store only the private key and extract the public key when needed, without any risk of messing up the public key.
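
As a quick sketch of the first option (plain Python with hashlib; scalar reduction and curve arithmetic omitted): once A is mixed into the nonce derivation, signing under a wrong public key also changes r, so the two signatures no longer share a nonce and the subtraction trick above falls apart.

import hashlib

def nonce(z, A, M):
    # proposed derivation: r = sha512(z, A, M) instead of sha512(z, M)
    return hashlib.sha512(z + A + M).digest()

z = b"\x11" * 32        # secret half of the expanded key
A = b"\x22" * 32        # correct public key
A_wrong = b"\x33" * 32  # wrong public key supplied by mistake
M = b"message"

# With the original derivation sha512(z, M), both signings would share one r.
# With the modified derivation, a wrong A yields a different r:
assert nonce(z, A, M) != nonce(z, A_wrong, M)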

Again, this is not a problem with Ed25519 or a bug in NaCl or Sodium. It's just a potential source of vulnerabilities because the current design encourages programmers to mess with the public key.

PS: By the same reasoning, Ed25519 is quite sensitive to fault attacks. If the message gets corrupted after the first hash but before the second hash, an attacker can use the same trick to extract the private key, thanks to the deterministic nonce. I assume this kind of memory corruption could happen in real life, since there would be no need for ECC RAM otherwise. I don't really understand why Ed25519 uses a deterministic nonce in the first place - these problems wouldn't exist if r was random.

@tarcieri
Contributor

tarcieri commented Jun 16, 2014

On Monday, June 16, 2014, MaartenBaert notifications@github.com wrote:

I don't really understand why Ed25519 uses a deterministic nonce in the first place - these problems wouldn't exist if r was random.

Because of the multitude of real-world entropy failures with random nonces in practice. The PS3 master ECDSA key was compromised this way, and over 50 Bitcoins were stolen from buggy Android clients.

@tarcieri
Contributor

Here's a djb blog post on entropy attacks:

http://blog.cr.yp.to/20140205-entropy.html

I'd also note that the CFRG is moving towards deterministic protocols to avoid these sorts of attacks, including data exfiltration via "random" nonces (e.g. Dual_EC). See Kenny Paterson et al.'s recent paper:

http://eprint.iacr.org/2014/438

jedisct1 closed this as completed Aug 6, 2014
@ioquatix

Just wondering why this was closed with no explanation? It's an interesting idea.

@tarcieri
Contributor

tarcieri commented Oct 23, 2016

Because as the original poster admitted, it's not a problem thanks to the Ed25519 API:

This doesn't seem to affect NaCl or Sodium though, because the API stores the keys like this

One of the suggested functions already exists (I contributed it):

Add a function that extracts the public key from the private key

This is what crypto_sign_seed_keypair() does.
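
For illustration, here's a minimal Python sketch of the same pattern using the PyNaCl bindings (the thread is about the C API, but the idea is identical): regenerate the keypair from the 32-byte seed, so the public key never has to be stored separately.

import nacl.signing

seed = b"\x01" * 32  # in practice: 32 cryptographically random bytes

# The Python analogue of crypto_sign_seed_keypair(): the full keypair is
# deterministically re-derived from the seed, so storing only the seed
# leaves no opportunity to pair the private key with a wrong public key.
signing_key = nacl.signing.SigningKey(seed)
public_key = signing_key.verify_key.encode()
assert len(public_key) == 32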

Regarding any other changes to Ed25519: it's far too late for those (and I don't think the proposed changes are a good idea anyway). The final draft of the EdDSA/Ed25519 RFC has already been sent to the RFC editor.

@MaartenBaert
Author

MaartenBaert commented Oct 23, 2016

I agree that the original issue is not a problem with the current API. However, I think my point about fault attacks still stands. Suppose you are signing a large block of data on a device that does not have ECC memory. It's feasible that a bit gets flipped between the first and second hash, replacing M with M':

r = sha512(z,M)        # nonce derived from the original message M
R = r*B
h' = sha512(R,A,M')    # a bit flip has turned M into M'
S' = r+h'*a
signature = (R,S')

The receiver tries to verify the signature and finds that it is invalid. So he makes the same request again; the second time, no bit flip happens and he gets a valid signature. He can now extract the private key like this:

h = sha512(R,A,M)
h' = sha512(R,A,M')
a = (S-S')/(h-h')

Even if the receiver only gets M or M' rather than both, he can just try this procedure repeatedly for every possible bit flip until he finds one where A = a*B.
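
A toy sketch of that search at the scalar level (pure Python; hashes are reduced mod the group order, curve arithmetic is omitted, and the final A = a*B check is replaced by a comparison against the known test scalar):

import hashlib

L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

def H(*parts):
    return int.from_bytes(hashlib.sha512(b"".join(parts)).digest(), "little") % L

# --- signer side (simulated) ---
a, r = 31415926535897932384626, 27182818284590452353602  # toy scalar and nonce
R, A = b"R-bytes", b"A-bytes"
M = bytearray(b"hello world")

S_good = (r + H(R, A, bytes(M)) * a) % L       # fault-free signature
M_faulty = bytearray(M)
M_faulty[3] ^= 0x10                            # one bit flips in memory
S_bad = (r + H(R, A, bytes(M_faulty)) * a) % L

# --- attacker side: knows R, A, M, S_good and S_bad, but not M_faulty ---
h = H(R, A, bytes(M))
for i in range(len(M) * 8):                    # try every single-bit flip of M
    M2 = bytearray(M)
    M2[i // 8] ^= 1 << (i % 8)
    h2 = H(R, A, bytes(M2))
    cand = (S_good - S_bad) * pow(h - h2, -1, L) % L
    if cand == a:  # a real attacker would check A == cand*B on the curve
        print("recovered the private scalar; the flip was at bit", i)
        break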

Of course such bit flips are extremely rare, but they do happen; otherwise there would be no need for ECC memory. And it's absolutely not intuitive to normal programmers that a simple bit flip in the message will somehow leak the private key. Luckily there is an easy way to avoid the entire problem: just verify your own signature before you transmit it. This also covers the previous issue where the public key is invalid, and it protects against implementation flaws in the math routines that could result in invalid signatures. This could easily be added to libsodium; the processing cost is not that high, and it requires no changes to the Ed25519 algorithm or the proposed standard.
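
A minimal sketch of that verify-before-release pattern, using PyNaCl for illustration (the message here is embedded in the signed blob, so, as discussed further down, this does not catch a persistent flip in the message buffer itself):

import nacl.signing
import nacl.exceptions

signing_key = nacl.signing.SigningKey.generate()
message = b"important payload"

signed = signing_key.sign(message)

# Re-verify our own signature before releasing it; if a transient fault
# corrupted the computation, withhold the signature instead of leaking it.
try:
    signing_key.verify_key.verify(signed)
except nacl.exceptions.BadSignatureError:
    raise RuntimeError("fault detected during signing; signature withheld")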

@burdges

burdges commented Oct 23, 2016

Interesting. These bit flips become much less scary if M is a hash of your actual message, presumably keeping it in cache. If you're still worried, then you could do your own checks that R, M, and A do not change.

@MaartenBaert
Author

Correct, and signing the hash of M is actually faster if M is large, since you only have to calculate the large hash once. Still, it's not 100% secure: bit flips in the hash are still possible, even though they are far less likely, and if they happen they are equally damaging. There are also other points in the signature computation where any error will leak the private key. Faults can also be introduced intentionally through glitches on the power supply if the attacker has access to it (this is a known problem for smartcards). AFAIK, the easiest way to be sure is to just verify your own signatures.

@tarcieri
Contributor

Note that the soon-to-be-RFC EdDSA draft specifies an IUF-capable (init/update/finalize) prehashing variant of Ed25519 called Ed25519ph.

@tarcieri
Contributor

tarcieri commented Oct 23, 2016

A more general note: yes, fault attacks on deterministic signature schemes are definitely possible. However, in practice we've seen entropy failures in nondeterministic signature schemes as a fairly common real-world problem. Likewise, there's an entire class of malleability attacks on systems that include a signature made under a random nonce in a subsequent hash calculation; those attacks do not apply to deterministic signature schemes. (Edit: in retrospect, these attacks are best addressed by omitting signatures from hashed content entirely.) Overall I think the rewards greatly outweigh the risks.

Re-verifying the signature protects against randomly injected faults, but not against faults that can be deliberately caused in a deterministic manner: an attacker who can precisely inject faults can simply cause the same fault on the second pass, so you've just made your system substantially slower for nothing.

It might be worth backing up and asking what your threat model is.

@MaartenBaert
Author

I'm not arguing against deterministic signatures, I understand their value. There are other ways to protect against fault attacks.

Verifying a signature does not follow the same code path as signing, so injecting the same fault into both is not trivial. Also, it's quite easy to inject a random fault by messing with the power supply, but much harder to reliably inject the same fault twice, even in the same code path.

I'm not focusing on any threat model in particular, Ed25519 is a general-purpose algorithm after all. I don't see why it couldn't be used in smartcards at some point.

@tarcieri
Contributor

The important thing to keep in mind about adding signature verification to each signing operation is that verification is much slower than signing (over 3x slower on my computer), so using this approach to mitigate fault attacks comes with pretty significant performance overhead.

I would also note that while there have been multiple demonstrations of fault attacks on deterministic ECDSA, to my knowledge there has not yet been a practical demonstration of a fault attack on EdDSA, and papers analyzing their relative resistance to fault attacks have found EdDSA to be more resilient:

https://books.google.com/books?id=EC0DDQAAQBAJ&lpg=PA192&ots=UHwHH8LGA4&pg=PA182#v=onepage&q&f=false

@MaartenBaert
Author

My tests gave me 42 µs for signing and 103 µs for verifying. So yes, the overhead is very significant, but it is still much faster than a single RSA-2048 signature (559 µs) or RSA-4096 signature (5873 µs). So is it really a problem?

Maybe it should not be the default behaviour, but I think it's reasonable for applications where the private key must be protected at all costs and the overhead is acceptable. I also like the fact that it protects me from a lot of potential implementation bugs in the signing code. I will definitely implement it in my applications, even if the risk is small.

@burdges

burdges commented Oct 23, 2016

As I said upthread, I'd think even a mediocre checksum of R, M, and A should do the trick, without the cost of verifying.

Rowhammer-type attacks only inject random faults (except in exotic smartcard-type situations), and even there, obtaining say an md5 collision against an unknown seed with your Rowhammer sounds pretty hard.

Are you worried about someone corrupting the running sha512 function somehow?

@MaartenBaert
Author

My main concern was smartcard-like scenarios, and bit flips in M on regular hardware.

Ignoring those, I would be worried about implementation errors in S = r+h*a. The code is really complex and unreadable, especially after SIMD optimization, and can't be tested exhaustively. There could be bugs in there that make certain edge cases produce incorrect results, leaking information about the private key as a result. It's purely hypothetical of course, but given the low cost of verification and the huge damage done by leaking the key, I think it's worth it for many applications.

I agree that checksums on R, M and A will probably cover 99.9% of the problems on regular hardware, just like ECC RAM would.

@MaartenBaert
Author

@tarcieri I'm reading the paper you linked, and maybe I'm missing something, but I don't agree with what they claim here (page 189):

However, since the value of k depends on both msg and an unknown portion of the hash of d, the attacker will not be able to exploit it to successfully forge a signature for any message different from msg.

What they call k corresponds to r in my pseudocode. They claim that even when the attacker knows a, they can't forge a signature for any new message because they can't calculate r = sha512(z,M) since they don't know z. However, I don't think that's relevant. An attacker can pick an arbitrary value for r and complete the signature; there is no way for the verifier to know whether r was valid or not. From the point of view of the verifier, r is just a random value anyway.

Their second proposed attack considers bit flips induced in r, which, as I suspected (see my first post), doesn't do any damage. However, my attack uses bit flips in h, which leaks a just like their first attack. Basically, any fault in this section:

R = r*B
h = sha512(R,A,M)

... will produce an incorrect S' which leaks a, provided that the attacker can guess h'. If R is damaged in any way, guessing h' is trivial because R' is transmitted as part of the signature. If R, A or M is damaged, h' can be found by exhaustively trying all possible bit flips of R, A and M. This works as long as the fault is limited to a few bits or bytes, so that the exhaustive search remains feasible.

@MaartenBaert
Author

I just realized that verifying the signature isn't sufficient if the bit flips in M' are permanent. The verification will use the same faulty M' and get the same h', so the verification will succeed. The verification doesn't reveal that r was derived from M rather than M'; and if the attacker sees that there is a bit flip in M', he can get the signer to sign M again, which produces a signature for a different message M but with the same r. This leaks the private key a. So even when the signature is verified by the signer, the signer still needs to checksum M.

Also, if you decide to sign hash(M) instead of M, you have to recompute the hash when verifying; otherwise you get the same problem.

@burdges

burdges commented Oct 24, 2016

If you're worried about corruption of the sha512 state, then you can always rerun it too after the signature is produced. Along with a checksum on A and M protected by a secret seed, this should be pretty tough to attack.

I think the remaining weak point is actually R itself because R = r*B is the slowest part of the computation, making it the easiest place to cause a task switch to attack registers, etc. And S = r+h*a should be fast by comparison. Any bit flips during r*B propagate into big changes in R, but that provides no protection since R' is revealed. I think that's your argument for verification.
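
For illustration, a minimal sketch of such a keyed checksum in Python (BLAKE2b's keyed mode stands in for "protected by a secret seed"; the checksummed buffers are made up for the example):

import hashlib
import hmac

def tag(key, A, M):
    # keyed checksum over the inputs; without the key, an attacker
    # cannot forge a matching tag for a flipped A or M
    return hashlib.blake2b(A + M, key=key, digest_size=16).digest()

key = b"\x42" * 32  # secret seed held by the signer
A, M = b"A-bytes", b"the message to sign"

before = tag(key, A, M)
# ... run the signing computation here ...
after = tag(key, A, M)  # recompute from the buffers that were actually used

if not hmac.compare_digest(before, after):
    raise RuntimeError("A or M changed during signing; withhold the signature")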

@MaartenBaert
Author

Yes, in case of intentional fault attacks (smartcard scenario), R = r*B is the easiest target.

@tarcieri
Contributor

Lots of discussion about EdDSA fault attacks on @trevp's curves list:

https://moderncrypto.org/mail-archive/curves/2016/000772.html

@tarcieri
Contributor

An interesting suggestion re: fault attacks: compute the signature twice and compare. This should be considerably faster than a signature verification (although won't save you from an attacker who can inject the same fault twice)
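
A sketch of that sign-twice check, again in Python with PyNaCl (Ed25519 signing is deterministic, so two fault-free runs over the same message must produce byte-identical signatures):

import hmac
import nacl.signing

signing_key = nacl.signing.SigningKey.generate()
message = b"payload"

sig1 = signing_key.sign(message).signature
sig2 = signing_key.sign(message).signature  # deterministic: must equal sig1

# A mismatch means one of the two computations was faulted.
if not hmac.compare_digest(sig1, sig2):
    raise RuntimeError("fault detected: signatures differ; withhold output")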

@jedisct1
Owner

That is totally fine with short messages or with Ed25519ph, but doubling the time it takes to compute a signature may not be acceptable in other contexts.

We also need to duplicate the code, since a fault can also affect the code itself.

That's doable, but maybe it should require the library to be compiled with a specific flag in order to improve its resistance against this class of attacks, even if computations then become more expensive.

@tarcieri
Contributor

tarcieri commented Oct 30, 2016

@jedisct1 yeah, I wasn't necessarily suggesting libsodium do this by default.

@MaartenBaert was suggesting performing a signature verification after each signing operation (in his own code), which would be considerably slower than signing twice.
