KeyGen, weak keys, and interop #25
In general, regarding any suggestion to declare some of KeyGen's output invalid/unsuitable:
Arguably this can hardly be a concern. KeyGen already requires a modular reduction, so the implementation is already equipped with whatever it takes to perform such operations, including modulo …
Formally speaking this would take a reduction and still check for suitability. It doesn't really buy you much; it merely rearranges some terms in the probability equation. Besides, some applications out there were observed to reject non-reduced sk... [But don't ask me why, as I don't know.]
On an even more general note, I'm inclined to say that the root of the problem is a tendency to consider the spec[s] and even its[their] parts in isolation. The abstract does mention "cryptographic properties," and SK is specified as "infeasible to guess" (as in "MUST be"), yet that doesn't prevent readers from interpreting … (*)
(*) A malicious user would be interested in having the victim pick …
Discussed with @kwantam offline. Here is what we think:
Beyond this, there are three options:
At the moment we are leaning towards method 2. We are happy to hear other suggestions and recommendations.
Let's not forget that besides setting the most significant bit to 1, the referred code also clears the 3 least significant bits. This works because that specific modulus has a specific form, such that clearing the 3 least significant bits ensures that the resulting value is less than the modulus. In other words, just setting the most significant bit to 1 is not a universal solution; it would have to be more nuanced than that.
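For reference, the bit manipulation being described matches X25519-style scalar "clamping" (whether that is exactly the code being referred to is an assumption); a minimal Python sketch:

```python
# Sketch of X25519-style clamping (assumed to be the kind of code referenced
# above; not part of the BLS draft's KeyGen).
def clamp(okm: bytes) -> int:
    b = bytearray(okm[:32])   # 32 random bytes
    b[0] &= 0xF8              # clear the 3 least significant bits
    b[31] &= 0x7F             # clear the very top bit
    b[31] |= 0x40             # force the most significant remaining bit to 1
    return int.from_bytes(b, "little")
```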
Agreed. I guess I should clarify the 2nd option:
// q ~ 2^254.85
I didn't mean to suggest that any 32-byte string with the first bit == 1 is a valid secret key.
Ah, @dot-asm's point is a good one: for curves where the order is slightly less than a power of 2, forcing MSB = 1 is fine. But for curves where the order is slightly more than a power of 2, forcing MSB = 1 is bad, because it shrinks the keyspace too much. In the case of BLS12-381, q is roughly 2^255 - 2^252, so forcing the MSB to 1 means that there are still more than 2^253 keys. But imagine a different curve where q is roughly 2^255 + 2^64. In this case, setting the MSB leaves us with only 2^64 keys. Here the right approach is probably to clear the MSB and set the 2nd MSB, which would mean 2^254 <= sk < 2^255. Specifying this in a clear way isn't impossible, but it could be annoying.
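A quick sanity check of this counting argument (the orders below are stand-ins chosen to match the rough magnitudes above, not values taken from the draft):

```python
# Rough illustration of the keyspace argument above (toy orders, not the draft's).
def keys_with_msb_forced(q: int) -> int:
    """Count scalars sk < q whose top bit (bit q.bit_length() - 1) is set to 1."""
    msb = 1 << (q.bit_length() - 1)
    return min(q, 2 * msb) - msb   # size of [msb, 2*msb) intersected with [0, q)

q_like_bls = 2**255 - 2**252   # order slightly *below* a power of two (roughly BLS12-381's r)
q_above    = 2**255 + 2**64    # hypothetical order slightly *above* a power of two

assert keys_with_msb_forced(q_like_bls) > 2**253   # keyspace stays large
assert keys_with_msb_forced(q_above) == 2**64      # keyspace collapses to 2^64
```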
Maybe the right move is a "light touch" --- option (1) that @zhenfeizhang mentions. I would really love to make it harder for naive implementations to badly mess up constant-timeness, but perhaps increasing the complexity of KeyGen's description is not worth that.
A few quick thoughts on this, from a theoretical cryptographer's perspective :)
I agree that as long as sk is sampled reasonably there's no way anyone is generating sk = 0 honestly. But I do worry a little about how maliciously choosing sk = 0 can break things that rely on implicit properties of BLS signatures. My main concern right now is that sk = 0 allows equivocation about the message being signed. For sk != 0, given …
Arguably, the mistake here is relying on any property other than EUF-CMA. But in reality people will use BLS signatures for VRF-like properties, and we should try to remove obvious footguns if we can.
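To make the equivocation concern concrete, here is a deliberately degenerate toy model (group elements are represented by their discrete logs, so the "pairing" is just multiplication mod a toy order; this is an illustration, not real BLS):

```python
# Toy model of BLS verification: e(a*G1, b*G2) is modeled as a*b mod r.
# Only meant to show why sk = 0 allows equivocating about the signed message.
r = 7919  # toy group order (stand-in for BLS12-381's r)

def hash_to_g2(msg: bytes) -> int:
    return int.from_bytes(msg, "big") % r        # toy hash-to-curve

def sign(sk: int, msg: bytes) -> int:
    return (sk * hash_to_g2(msg)) % r            # sig = sk * H(msg)

def verify(pk: int, msg: bytes, sig: int) -> bool:
    # real check: e(pk, H(msg)) == e(G1, sig); here both sides are dlog products
    return (pk * hash_to_g2(msg)) % r == sig

# With sk = 0: pk and sig are the identity, and the same "signature" verifies
# for *any* message, so the signer can later claim to have signed either one.
assert verify(0, b"message A", sign(0, b"message A"))
assert verify(0, b"message B", 0)
```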
So I guess my vote is for (1) with a rejection sampling loop in KeyGen that will ~never be triggered, and to reject PKs that are the identity point during pubkey validation.
Thanks, Riad, and everyone! That's a good point (sk = 0 being more "devastating" than sk = 1, q - 1, etc.), and (1) does sound like a small and worthwhile tweak!
I'd like you guys to consider the overall messaging. KeyGen is defined as …
Sorry, I think I wasn't being clear. I completely agree that KeyGen should always give suitable output.
One possibility is to include a loop inside KeyGen that executes until sk != 0. Depending on one's perspective, this may not count as unconditionally suitable: for any bound on execution time, there is a minuscule but nonzero probability that no suitable sk is found. (Presumably in practice the loop would be bounded to, say, 255 iterations, and KeyGen would return an error if it couldn't find a suitable sk.)
If this is a problem, we could instead specify reducing OKM mod (r - 2) and then adding 1. This is slightly annoying because it requires implementations to include reduction mod (r - 2), which I assume many would not otherwise do (I'd expect many implementations to include only arithmetic mod r and arithmetic mod q). But maybe this is OK.
Does this (partially) address your concern?
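A rough Python sketch of both variants, under the assumption that `prf` stands in for the draft's HKDF-based derivation (the helper names and output lengths here are placeholders, not the draft's pseudocode):

```python
import hashlib, hmac

# BLS12-381 subgroup order
r = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def prf(ikm: bytes, counter: int) -> bytes:
    # placeholder for the draft's HKDF-Extract/Expand derivation of OKM
    return hmac.new(ikm, counter.to_bytes(1, "big"), hashlib.sha384).digest()

def os2ip(octets: bytes) -> int:
    return int.from_bytes(octets, "big")

# Variant (a): bounded rejection-sampling loop; error if no nonzero sk is found.
def keygen_loop(ikm: bytes, max_tries: int = 255) -> int:
    for i in range(max_tries):
        sk = os2ip(prf(ikm, i)) % r
        if sk != 0:
            return sk
    raise ValueError("no suitable sk found")  # hit only with negligible probability

# Variant (b): reduce mod (r - 2) and add 1, so 1 <= sk <= r - 2 unconditionally.
def keygen_shifted(ikm: bytes) -> int:
    return (os2ip(prf(ikm, 0)) % (r - 2)) + 1
```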
Only partially, because I (for one?) don't view "KeyGen's output would measure up to IKM" as meaning just "sk != 0."
KeyGen is just a way to generate sk. If an application developer finds it too troublesome to implement, they are free to do something else. Not to mention that "troublesome" will have to be weighed against the value it brings to the table. For this reason it makes way more sense to put as much value into it as possible, rather than just the bare minimum of sk != 0. In other words, I'd advocate not being concerned with how annoying it might appear, and going for something more sophisticated than r - 2.
In essence, the objection to …
I've thought more about this, and I'm now convinced that the sk = 0 and small-sk cases are qualitatively different, and only the first one should be disallowed by the standard.
The main reason to exclude sk = 0 is that a malicious person can choose it to do bad things in some cases. In contrast, the main reason to exclude "small" sk is that an honest person might accidentally choose something that's easy to guess, and then their key will be broken. The first one happens with probability 1. The second one happens with probability essentially zero. These things are very, very different.
If the standard disallows sk = 0 (as it should, I think), then libraries can still do fancier things if they really want to avoid small-sk issues. But to be clear, with overwhelming probability there will never be a person who accidentally chooses a small sk. In this view, making the standard more complicated to rule out an event that already has practically zero probability of happening is just not a good idea. Even if the folks in this thread decide that we are really convinced disallowing small sk is a good idea, I'm almost certain CFRG will take one look and say no way.
So: I'm happy to do r - 2, or I'm happy to just return an error in the event that sk = 0, or I guess I could be convinced to add a loop that tries a few times before returning an error. But really the loop is crazy, because the difference between 2^-255 and 2^-(255 * 255) is undetectable in practice.
Just to clarify, I (for one) am not advocating for disallowing sk = small_n, but merely for specifying KeyGen in such a manner that it would preclude the generation of sk = small_n, ...
... because libraries would be better off if KeyGen were specified to meet the said criterion by its very algorithm. I mean, users will ask "are you compliant with The Standard?", and "100% yes" would be so much better than "not quite, we really wanted to do fancier things."
The proposed standard suggests choosing IKM, not sk, and it implies that the rest will be taken adequate care of.
And what would be the suggested course of action for a user (who did nothing wrong)? Generate another IKM? What if that's not an option? For example, it was generated during device manufacturing, as a high-entropy value the manufacturer offers to derive application-specific keys from... Again, I do recognize the astronomical scale of the improbability, but one can also recognize that it doesn't actually have to be there at all.
Making implementations more complicated to turn probability 2^-255 into probability 0 is counter-productive. The bad event will never happen, period. The computer is astronomically more likely to spontaneously catch fire than to fail to generate sk. |
I'm happy to make clear in the standard that other KeyGen algorithms are fine as long as they do not return sk = 0 and they generate keys with high min-entropy. Then "fancier" approaches are fine and explicitly standards-compliant. But, again, small sk simply will not happen, for the same reason that the error case will not happen for sk = 0, when KeyGen is run honestly. |
Please see #26. Discussion on the PR should go in that thread.
Elsewhere folks have been discussing whether it makes sense to allow sk = 0 and/or whether the specification should be interpreted as requiring implementations to accept signatures under sk = 0.
It seems to me like we would much rather people reject sk = 0 than accept it, and certainly we should prefer that people not interpret the draft as requiring sk = 0 to be a valid key.
Related actions we could take (not mutually exclusive):
Make explicit that even though KeyGen can output sk = 0, it is acceptable (or required?) for implementations to reject public keys and signatures under "weak" keys (sk = 0, sk = 1, ???).
Change KeyGen so that it cannot output sk = 0. Lots of ways to do this.
We could say something like …, which ensures that 2^64 <= sk < q - 2^64. But this is annoying because it requires implementing reduction mod some weird value (note that since q - 2^65 is odd, Montgomery reduction is possible, but I don't love asking people to implement it).
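The exact formula is elided above, but one reduction that satisfies the stated bound (a reconstruction for illustration, not necessarily the comment's original text) would be:

```python
# One way to get 2^64 <= sk < q - 2^64: reduce mod the "weird value" q - 2^65,
# then shift up by 2^64. (Reconstruction of the elided formula; q is the
# subgroup order and okm the KeyGen output bytes.)
def derive_sk(okm: bytes, q: int) -> int:
    x = int.from_bytes(okm, "big")
    return (x % (q - 2**65)) + 2**64   # range: [2^64, q - 2^64)
```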
Another possibility is to just do rejection sampling where we reject small (in "absolute value") sk. The probability of looping more than once is negligible, so observing KeyGen timing would give away negligible information in expectation.
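A sketch of that rejection-sampling variant, reusing the 2^64 bound from above for what counts as "small" (the bound and the `sample_okm` callable are illustrative assumptions):

```python
# Rejection sampling that rejects sk which is small "in absolute value",
# i.e. within `bound` of either 0 or q. It loops more than once only with
# negligible probability, so the timing leak is negligible in expectation.
def keygen_reject_small(sample_okm, q: int, bound: int = 2**64) -> int:
    while True:
        sk = int.from_bytes(sample_okm(), "big") % q
        if bound <= sk < q - bound:
            return sk
```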
Another possibility is just to specify
2^64 <= sk < q - 2^64
and let people do whichever of the above two they want.
Another possibility is to force the most significant bit of sk to 1. This is ugly to do using our current notation (we would, e.g., need to convert from integer to bit string, set the bit, and convert back to integer), but it is arguably kinder to implementors because it guarantees that all sk have exactly the same bit length, which eliminates a class of implementation mistakes related to constant-time signing operations.
State two sets of requirements for KeyGen---interop requirements and security requirements---and allow any KeyGen that conforms to both. State that (a modified version of) the current KeyGen algorithm is an example, but implementations MAY use any approach that meets the security requirements.
I don't love this, because it's not so great to leave things too open-ended. Also, writing down the security requirements may carry us into the weeds---e.g., do we require a KDF to try and avoid (certain) randomness failures? Seems like this direction could end up being a headache for everyone.
Thoughts?