
Infinity pubkey and signature #11

Open
paulhauner opened this issue Jul 18, 2020 · 37 comments · Fixed by Consensys/teku#2453

@paulhauner
Contributor

The Eth2 test vectors contain a test that requires the infinity signature to be a valid signature from the infinity pubkey over any message:

input:
  pubkey: '0xc00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
  message: '0xabababababababababababababababababababababababababababababababab'
  signature: '0xc00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
output: true

The output: true here means that we should consider this pubkey/message/signature combination to be valid.

Presently blst returns output: false for the above test vector. I'm wondering whether this project would expect to change its behaviour to match the Eth2 spec, or whether it plans to leave it as-is?

Thank you for your time :)

@nisdas

nisdas commented Jul 18, 2020

@paulhauner we currently have this check in Go:

if pk.Equals(&zeroPk) && sig.Equals(&zeroSig) {

if len(sig) == BLST_P2_COMPRESS_BYTES && sig[0] == 0xc0 &&

so that we are in conformance with the spec. It will probably have to be added as a check in the Rust wrapper too.
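For reference, the byte-level check amounts to something like the following self-contained sketch; the function name and the accepted lengths (48-byte G1 pubkey, 96-byte G2 signature, per the min-pk scheme and the test vector above) are illustrative, not Prysm's or blst's actual API:

```go
package main

import "fmt"

// isCompressedInfinity reports whether b is the compressed serialization of
// the point at infinity: a leading flag byte 0xc0 (compression and infinity
// bits set) followed by all-zero bytes. 48 bytes is a G1 pubkey and 96 bytes
// a G2 signature in the min-pk scheme Eth2 uses.
func isCompressedInfinity(b []byte) bool {
	if (len(b) != 48 && len(b) != 96) || b[0] != 0xc0 {
		return false
	}
	for _, v := range b[1:] {
		if v != 0 {
			return false
		}
	}
	return true
}

func main() {
	pk := make([]byte, 48) // the test vector's pubkey: 0xc0 then 47 zero bytes
	pk[0] = 0xc0
	fmt.Println(isCompressedInfinity(pk)) // true
}
```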

@paulhauner
Contributor Author

Thanks @nisdas. I'm also wondering if there's variation in the batch operations.

I'm writing some tests to compare milagro and blst too. I'll update once I have them :)

@dot-asm
Collaborator

dot-asm commented Jul 19, 2020

Here is the thing. Which qualities make a sequence of bytes a signature? Or alternatively, can one call a sequence of bytes a signature if it's the same value for any message you sign with the corresponding secret key, and everybody can forge it at no cost and at any time? Do you see the dilemma? The question is what the library is supposed to do if you pass a non-signature. In essence, individual infinite public keys should be rejected, and one can wonder who should be responsible for doing so. Maybe everybody should, application and library... because if the application fails for whatever reason... no? Of course one can refer to the test vector, but does the specification actually say that infinite signature points are in fact to be treated as signatures(*)?

As for the discrepancy between Go and Rust: it's appropriate to resolve it...

As for non-individual, i.e. aggregated, signatures and public keys: it's formally reasonable to assume that an aggregated signature can be infinity, so they should be treated differently from individual ones. Currently they'll be mishandled in the min-sig case, and treated suboptimally in min-pk (everybody's favourite :-)), and it will be addressed...

(*) An analogy would be the gcc test suite exercising patterns that are formally undefined by the language standard :-)

@Nashatyrev

Just wanted to open exactly the same issue 😃

@benjaminion

benjaminion commented Jul 23, 2020

but does the specification actually say that infinite signature points are in fact to be treated as signatures(*)?

This is the key issue. We explicitly adopt the (draft) IETF BLS spec in Eth2: https://tools.ietf.org/html/draft-irtf-cfrg-bls-signature-02

I looked into this previously when the Eth2 spec test was added, and I've just rechecked: the BLS signature spec says nothing about signing with the infinity/identity pubkey/privkey pair. That is, it is not special-cased.

Thus, given that 0 is a valid secret key (0 <= SK < r), and the identity is a valid G1 point, there seems to be nothing in the BLS signature spec that would prevent using them for signing.

(We actually had an instance of an infinite/identity public key on the Eth2 Witti testnet. As per the spec, all client implementations accepted it.)

@dot-asm
Collaborator

dot-asm commented Jul 23, 2020

BLS spec [says] (0 <= SK < r)

At the same time it also says that "SK MUST be indistinguishable from uniformly random modulo r and infeasible to guess." There is no full stop after "modulo r." And the customary context for "feasibility" is "given that all other information is available [to the one who tries to guess]." In other words, it's not about a guess in isolation. With this in mind, can one actually argue that SK==0 would be infeasible to guess from the looks of the public key?

We actually had an incidence of an infinite/identity public key

Well, one might be able to argue that it's up to the client to protect itself, and if it chooses not to, then it's the client's problem, not the network's. But is it actually? Recall that it means anybody would be able to produce a "signature" in this client's name, and everything that was "signed" could be modified afterwards...

@benjaminion

At the same time...

Generating the SK is out of scope for this discussion. This is about what your library should accept as a valid signature. In that respect, nothing rules out SK==0 leading to an acceptable signature. In fact, the formulation 0 <= SK < r explicitly rules it in.

Of course it's idiotic for a client to use SK=0, but it is not the signature verifier's role to enforce this. Are you going to check for and return false for SK==1, since everybody knows the G1 generator, or for SK < 10^6, since we can easily pre-compute a table and check known public keys?

@dot-asm
Collaborator

dot-asm commented Jul 23, 2020

SK==1

The question I'm struggling with is a little bit more subtle. Yes, from a forging viewpoint SK==0 and SK==small_n are equally bad. But the trouble is that SK==0 specifically facilitates retrospective manipulation. The thing is that infinite signatures will simply be skipped over, and all bets are off. And the question is whether that's good for the entire network, as opposed to a specific client. I mean, SK==1 is kind of "it's your money," while SK==0 is not necessarily... Or what am I missing?

To clarify, it's not like I'm just refusing to fix it; I just want to see the issue deliberated.

bors bot pushed a commit to sigp/lighthouse that referenced this issue Jul 25, 2020
## Issue Addressed

NA

## Proposed Changes

- Refactor the `bls` crate to support multiple BLS "backends" (e.g., milagro, blst, etc).
- Removes some duplicate, unused code in `common/rest_types/src/validator.rs`.
- Removes the old "upgrade legacy keypairs" functionality (these were unencrypted keys that haven't been supported for a few testnets, no one should be using them anymore).

## Additional Info

Most of the files changed are just inconsequential changes to function names.

## TODO

- [x] Optimization levels
- [x] Infinity point: supranational/blst#11
- [x] Ensure milagro *and* blst are tested via CI
- [x] What to do with unsafe code?
- [x] Test infinity point in signature sets
Nashatyrev added a commit to Consensys/teku that referenced this issue Jul 31, 2020
* Replace BLSPublicKey.isValid() with fromBytesCompressedValidate() method
* Make BLSPublicKey fromBytesCompressed/toBytesCompressed operating with Bytes48 type. SecretKey.toBytes() returns Bytes32
* Rename BLSPublicKey/BLSSignature.from/toBytes() methods to from/toSSZBytes() to be more explicit
* Hide constructors. Make PublicKey and Signature implementations to be lazily evaluated inside corresponding BLS wrappers
* Fix DLL binary attribute
* Validate BlstSignature when creating from bytes
* Add mikuli and blst specific infinity signature tests
* Add BlstPublicKey.fromBytesUncompressed for testing
* For BLSPublicKey/BLSSignature use bytes representation for hashCode() and equals() since we need ability to compare valid (from SSZ standpoint) 'empty' instances, but shouldn't decode them to real instances
* In case of fail fast when deserializing invalid signature bytes, just mark the signature invalid. This is to deal with empty signatures (valid from SSZ perspective)
* Fix reference tests running on Windows
* Add Mikuli fallback when couldn't load Blst
* Move SWIG wrapper to a separate project
* Update license for jblst
* Implement BlstSecretKey.destroy()
* Temporary handling infinite pubkey/signature as a special case until supranational/blst#11 is resolved
* Generate KeyPair from seed in an implementation independent way. Throw exception when instantiating SecretKey from non-valid BLS12381 scalar value
* Make BLSPublicKey.getPublicKey() package private
* Make BLSSignature.getSignature() package private
* Make BlstBLS12381.INSTANCE initialization more reliable and make it as Optional
* Move BLS constants to BLSConstants class. Make a clear separation of BLSPublicKey fromSSZBytes() and fromBytesCompressed() though they are equivalent with current SSZ implementation
* Make a distinction of BLSSignature from/toSSZBytes() and from/toBytesCompressed() though they are equivalent with the current SSZ implementation
@kirk-baird
Contributor

I think that it is acceptable to have a secret key of zero, as all functions will handle it adequately:

  • Pairings: e(inf, x) = e(x, inf) = 1
  • Multiplication: [0]G = inf
  • Addition: Y + inf = Y
  • The subgroup check passes.

Hence signing, aggregation and verification all operate correctly with the point at infinity and secret key 0, so it does not need to be treated separately.

It's also beneficial to be consistent between implementations for interoperability, and I believe ZCash, Apache and Herumi all allow a secret key of 0.
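To spell out the verification claim above (this is standard min-pk BLS algebra, nothing blst-specific): with sk = 0 both the pubkey and the signature are the identity point O, and both sides of the core verification equation collapse to the identity in GT, so the check passes for every message:

```latex
\sigma = [\mathrm{sk}]\,H(m), \qquad \mathrm{pk} = [\mathrm{sk}]\,g_1,
\qquad \text{verify: } e(\mathrm{pk}, H(m)) \stackrel{?}{=} e(g_1, \sigma);
\quad \mathrm{sk}=0 \;\Rightarrow\; e(O, H(m)) = 1_{G_T} = e(g_1, O).
```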

@dot-asm
Collaborator

dot-asm commented Aug 5, 2020

Hence signing [with secret key 0]

Quoting Wikipedia: "A valid digital signature [...] gives a recipient very strong reason to believe that the message was created by a known sender, and that the message was not altered in transit." A signature produced with SK==0 has neither of these properties. It doesn't qualify as "a valid signature," hence it's only appropriate to reject it. As invalid.

I believe ZCash, Apache and Herumi all allow a secret key of 0.

And I can imagine that everybody simply kicks the can down the road, assuming that no client application is dumb enough not to ensure that its SK!=0. But my question is: what if a client application does it with malicious intent? Whose responsibility would it be to stop it?

Well, I can also imagine that the library can recognize that it can be oblivious to whether a signature is individual or aggregated, and therefore reckon that it's not in a formal position to judge whether an SK is 0, and thus assume that it's the application's direct responsibility to pass meaningful inputs to the library. Is this what I'm missing? However! Even if so, on a grand scale, I can't imagine that it would actually be appropriate for a network to tolerate SK==0 for interoperability reasons. Any reason, actually. And I would be very surprised if it can be shown that an infinite aggregated[!] signature would necessarily mean infinity on the other side of the pairing equation...

@benjaminion

it's only appropriate to reject it. As invalid.

@dot-asm - you have a valid point of view. But this is simply not what the (draft) BLS signature standard says. It is important to us to be standard-conforming, and it would be preferable if that were reflected in the libraries that we use. In consensus systems like blockchains, agreement is of the essence. For now, we have special-cased our integration of Blst so that we trivially allow the infinite signature, but that's extra code I'd rather not have in there. Other teams have done the same.

By all means take this conversation to the IETF standards body; it's not too late to get it changed. But the definitions of the verify functions that are currently there clearly allow SK == 0.

If you really hate this, how about a compile-time or run-time flag standard-conforming=true/false?

@kwantam
Contributor

kwantam commented Aug 6, 2020

From my perspective it seems totally reasonable for implementations to refuse to accept signatures from preposterous keys. It seems much less reasonable for the Eth2 spec to require people to accept the identity point as a valid public key. I would certainly recommend revisiting that decision. (ping @JustinDrake?)

It's true, as others have pointed out, that it is not so easy (:smile:) to exhaustively list all of the "bad" keys---but that does not mean that we cannot identify ones that surely ought to be considered bad!

I'll ping the rest of the BLS sigs authors about this. Seems like we should add language to the draft to the effect that implementations MAY reject signatures made under weak public keys (e.g., the identity point).

Come to think of it, it might be good to suggest (but probably not require) fixing the most significant bit of the secret key to 1. Has negligible effect on guessing entropy of sk, but makes it more obvious how to implement constant-time signing via double-and-add-always.

(EDIT: I suppose I should point out, I'm one of the authors of the bls sigs draft. Also made small edits to text above.)
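As an aside, the double-and-add-always pattern mentioned above looks roughly like this. The toy additive group (integers mod a prime) and every name here are illustrative stand-ins for real curve arithmetic, not blst's implementation:

```go
package main

import "fmt"

const order = 2147483647 // toy group: integers mod 2^31 - 1, illustration only

// add is the stand-in group operation (point addition on a real curve).
func add(a, b uint64) uint64 { return (a + b) % order }

// scalarMulAlways computes [k]g over a fixed number of bits, performing a
// double and an add on every iteration, so the sequence of group operations
// is independent of the bits of k (the double-and-add-always pattern).
func scalarMulAlways(k, g uint64, bits int) uint64 {
	var acc uint64
	for i := bits - 1; i >= 0; i-- {
		acc = add(acc, acc)                // always double
		sum := add(acc, g)                 // always add
		bit := (k >> uint(i)) & 1          // current scalar bit
		mask := -bit                       // all-ones if bit==1, zero if bit==0
		acc = (sum & mask) | (acc &^ mask) // constant-time select
	}
	return acc
}

func main() {
	fmt.Println(scalarMulAlways(13, 5, 255)) // 65; every key treated as 255-bit
}
```

The point of the always-add shape is that the operation sequence doesn't leak key bits; fixing the top bit of sk, as suggested, would make the same property easier to obtain with a plain double-and-add.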

@JustinDrake

It seems much less reasonable for the Eth2 spec to require people to accept the identity point as a valid public key. I would certainly recommend revisiting that decision. (ping @JustinDrake?)

My position is that the Eth2 BLS spec should be essentially the same as the IETF spec. As I see it, deviating from the IETF spec goes against the standardisation effort and could lead to consensus bugs (as well as complicate things like cross-chain interoperability).

As I understand it, SK == 0 is IETF-valid and should be supported by IETF-compliant implementations. Consensus is very black and white: we can't have some implementations reject what they feel are "bad" keys, because that's a recipe for consensus failures. I guess, similar to @benjaminion, I would advocate for BLST to stick exactly to the IETF spec (which is the Eth2 BLS spec). Also, the formal verification effort for BLST will be checking for compliance with the IETF standard.

@dot-asm
Collaborator

dot-asm commented Aug 6, 2020

Come to think of it, it might be good to suggest (but probably not require) fixing the most significant bit of the secret key to 1. Has negligible effect on guessing entropy of sk, but makes it more obvious how to implement constant-time signing via double-and-add-always.

Just in case, for reference: blst handles zero-padded scalars in a constant-time manner as it stands now. In other words, it doesn't reveal information about the factual bit-width of the secret key. All secret keys are treated as 255-bit values, without making any assumptions about the most significant bits.

@kwantam
Contributor

kwantam commented Aug 6, 2020

As I understand it, SK == 0 is IETF-valid and should be supported by IETF-compliant implementations. Consensus is very black and white: we can't have some implementations reject what they feel are "bad" keys, because that's a recipe for consensus failures.

Hm. This does not seem like a consensus issue, it seems like a security issue.

The probability of an honestly generated sk = 0 is roughly 2^-255. By comparison, the probability that every Ethereum node experiences a cosmic ray--induced bit flip that causes everyone to spontaneously accept a bad block seems, from a back-of-the-envelope calculation, to be astronomically higher---better than 2^-160 [1].

So with overwhelming probability any node claiming to have sk = 0 is malicious, and probably the right thing to do is require nodes to reject malicious signatures, not accept them.

Obviously you are far more of an expert on how things ought to be run than I am, so maybe I am missing something obvious, but I just do not see how "the spec allows it" should be interpreted as "all nodes must accept obviously preposterous signing keys."

[1] Boeing says 1 upset per 10^13 hours, so let's call this 10^-15 in any 5-minute period which is roughly 2^-55. Roughly 2^13 Ethereum nodes gives 2^-68. Each node has roughly 2^40 bits in memory; generously assume memory is independent every clock cycle, so in 5 minutes there are about 2^80 bits that could be flipped, and one critical bit that says "yes" or "no" to a single block; this gives 2^-148.

@JustinDrake

Hm. This does not seem like a consensus issue, it seems like a security issue. The probability of an honestly generated sk = 0 is roughly 2^-255

Eth2's security model is adversarial! We do not assume 100% of the validators are honest. If someone can "maliciously" make an Eth2 deposit to the pubkey corresponding to sk = 0 to cause a fork between two Eth2 clients, such a deposit will happen with probability essentially 1.

@kwantam
Contributor

kwantam commented Aug 7, 2020

Hm. This does not seem like a consensus issue, it seems like a security issue. The probability of an honestly generated sk = 0 is roughly 2^-255

Eth2's security model is adversarial! We do not assume 100% of the validators are honest. If someone can "maliciously" make an Eth2 deposit to the pubkey corresponding to sk = 0 to cause a fork between two Eth2 clients, such a deposit will happen with probability essentially 1.

Right, so my point is: the correct choice is to forbid sk == 0, not to require supporting it, because any node whose sk == 0 is malicious with probability essentially 1.

@JustinDrake

the correct choice is to forbid sk == 0, not to require supporting it, because any node whose sk == 0 is malicious with probability essentially 1

As I see it the correct choice is to strictly follow the IETF standard for simplicity 😂 Every exceptional case that diverges from the IETF standard (e.g. explicitly disallowing sk == 0) complicates the story and ultimately makes it harder to reach intra-blockchain consensus and inter-blockchain interoperability. The simplest situation is for everyone to be compliant with the IETF standard (as opposed to Eth2 introducing its custom BLS spec with non-compliant edge cases).

@kwantam
Contributor

kwantam commented Aug 8, 2020

I completely understand the concern with interop, and like I said in my first comment you know way more about this than I ever will.

But allowing sk = 0 violates a bunch of implicit security properties that people are very likely to rely upon. For example, for an honestly-generated key it is infeasible to sign some message m and then later claim that the signature was really for message m'---this would require finding a second preimage of the hash function. In contrast, for sk = 0, equivocation is trivial: every message has the same signature. It's really easy to imagine this causing problems.

In any case, the right solution is to forbid sk = 0 in the spec. We'll plan to do this.

@JustinDrake

the right solution is to forbid sk = 0 in the spec. We'll plan to do this.

Are you suggesting making signature verification fail when the pubkey is the identity? If so, given that Eth2 phase 0 (which includes BLS) accepts the identity pubkey, changing the IETF standard this late in the game would create two BLS standards. cc @djrtwo @CarlBeek

for sk = 0, equivocation is trivial

There are other secret keys where equivocation is trivial, right? For example, when sk = 1, equivocation is trivial because the signature of a message is the hash of that message. Going against the most mathematically natural construction and forbidding "special" secret keys does not feel like the right solution to me (in addition to forcing two BLS standards).

@kwantam
Contributor

kwantam commented Aug 8, 2020

There are other secret keys where equivocation is trivial, right?

No, sk = 0 is special.

sk = 1 (or any other easily guessable key) allows trivial forgery --- a valid signature is just H(m) when sk = 1 --- but equivocation is infeasible for any nonzero secret key because it's infeasible to find m and m' such that H(m) = H(m'), or to find m' matching a given m.

The problem with sk = 0 is that the correct signature for every message is the same. This is the only secret value with this property.
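In symbols, restating the argument above (standard BLS signing, not from the draft text):

```latex
\sigma_{\mathrm{sk}}(m) = [\mathrm{sk}]\,H(m)
\;\Longrightarrow\;
\sigma_1(m) = H(m)\ \text{(trivially forgeable, yet equivocation needs } H(m)=H(m')\text{)},
\qquad
\sigma_0(m) = O \ \text{for every } m.
```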

Are you suggesting making signature verification fail when the pubkey is the identity?

I was assuming step 1 would be to change KeyGen so that it never outputs sk = 0 (discussion).

It could also make sense to specify that public key verification (e.g., for proof of possession) should reject the identity pubkey.

changing the IETF standard this late in the game would create two BLS standards

Well, I hope not. Wasn't the whole point of this discussion that strict adherence to the IETF spec was necessary?

@JustinDrake

JustinDrake commented Aug 9, 2020

I was assuming step 1 would be to change KeyGen so that it never outputs sk = 0

FWIW, doing this is fine for Eth2. KeyGen is not part of consensus.

It could also make sense to specify that public key verification (e.g., for proof of possession) should reject the identity pubkey.

This would not be so great for Eth2. Pubkey verification is part of Eth2 consensus and currently we accept the identity pubkey, as per the current IETF draft standard.

Wasn't the whole point of this discussion that strict adherence to the IETF spec was necessary?

The BLS part of Eth2 phase 0 (which is compliant with the current IETF draft standard) is frozen and is almost certainly not changing. If a modification to the IETF draft standard is made there will effectively be two standards: the current draft one, and the future modified one. It would be an unfortunate situation, a bit like Keccak/SHA3.

@kwantam
Contributor

kwantam commented Aug 9, 2020

The BLS part of Eth2 phase 0 (which is compliant with the current IETF draft standard) is frozen and is almost certainly not changing. If a modification to the IETF draft standard is made there will effectively be two standards: the current draft one, and the future modified one. It would be an unfortunate situation, a bit like Keccak/SHA3.

My position is that the Eth2 BLS spec should be essentially the same as the IETF spec. As I see it, deviating from the IETF spec goes against the standardisation effort and could lead to consensus bugs (as well as complicate things like cross-chain interoperability).

It's not obvious to me how to resolve these two statements.

In any case, sk = 0 looks like a big enough footgun that it should be forbidden in the IETF standard. If you are confident that Eth2 will never ever ever attempt to use the VRF-ish properties of BLS signatures, then maybe sk = 0 really isn't a problem for you. But it seems to me like in general people will do this, and even if we regard it as a mistake we should try to design the standard so that slight misuses are not fatal. Allowing sk = 0 goes against this principle.

@JustinDrake

JustinDrake commented Aug 10, 2020

If you are confident that Eth2 will never ever ever attempt to use the VRF-ish properties of BLS signatures, then maybe sk = 0 really isn't a problem for you.

Eth2 phase 0 is using BLS as a VRF (see code here). The logic is to sign a single well-defined message (the epoch number), and given BLS signature uniqueness (i.e. every (message, pubkey) pair has a unique signature) it's all good, even for sk = 0.

It's not obvious to me how to resolve these two statements.

One possible way forward is to disallow sk = 0 in KeyGen and not disallow sk = 0 in pubkey verification.

@kirk-baird
Contributor

If you are confident that Eth2 will never ever ever attempt to use the VRF-ish properties of BLS signatures, then maybe sk = 0 really isn't a problem for you.

Eth2 phase 0 is using BLS as a VRF (see code here). The logic is to sign a single well-defined message (the epoch number), and given BLS signature uniqueness (i.e. every (message, pubkey) pair has a unique signature) it's all good, even for sk = 0.

The issue is that the signatures under sk = 0 are not unique. The signature will be the 0 point for all messages. That is, sign(msg0, 0) = sign(msg1, 0) = sign(msg2, 0) = point at infinity.

@mratsim
Contributor

mratsim commented Aug 11, 2020

What happens with the 0 point on aggregation?

@JustinDrake

The issue is that the signatures under sk = 0 are not unique.

How is that an issue for RANDAO in Eth2 phase 0 @kirk-baird? As I see it, all that is required is that for every (msg, pubkey) pair there's a unique signature. Using BLS as a VRF is fine even allowing sk = 0.

@kirk-baird
Contributor

The issue is that the signatures under sk = 0 are not unique.

How is that an issue for RANDAO in Eth2 phase 0 @kirk-baird? As I see it, all that is required is that for every (msg, pubkey) pair there's a unique signature. Using BLS as a VRF is fine even allowing sk = 0.

A validator using sk = 0 will have the same randao_reveal for every epoch. Not so bad an issue, but it would make their portion of the randomness constant and predictable: always sha2(point-at-infinity bytes). Since we combine this randomness with other users', it wouldn't have much of an impact.

@kirk-baird
Contributor

What happens with the 0 point on aggregation?

This is a good point: if sk = 0 is invalid, then we should really reject it in aggregate signature verification (and all the variants) by ensuring no public key is the point at infinity.

@JustinDrake

A validator using sk = 0 will have the same randao_reveal for every epoch. Not so bad an issue, but it would make their portion of the randomness constant and predictable: always sha2(point-at-infinity bytes). Since we combine this randomness with other users', it wouldn't have much of an impact.

A validator using sk = 0 (or a simple-to-guess secret key like sk = 1, sk = 2, etc.) will quickly get slashed. There's no loss in accepting sk = 0 for Eth2.

@kirk-baird
Contributor

For reference, this has now been merged into the BLS specs.

@michaelsproul
Contributor

michaelsproul commented Oct 13, 2020

When running the tests for conformance to BLSv04, I noticed that blst (as of v0.2.0 and 414ac6b) now allows the 0 secret key... Did the behaviour of blst change between when this issue was opened (sk=0 rejected) and now (sk=0 accepted)? We now need to reject sk=0 for Eth2.

Specifically, I'm referring to the behaviour of SecretKey::deserialize in Rust, which calls out to blst_scalar_from_bendian and blst_scalar_fr_check.

@dot-asm
Collaborator

dot-asm commented Oct 13, 2020

Specifically, I'm referring to the behaviour of ... blst_scalar_fr_check

But blst_scalar_fr_check never tested for zero; it tested for < modulus. In other words, it's not, and never was, a check for secret-key suitability, but a check for a value being properly bounded. But yes, one can argue that if a key is to be checked for being zero, it would be appropriate to do that specifically in SecretKey::deserialize, the subroutine that doesn't treat the input as "just a value." Though see the last paragraph...

As for accepting sk==0 in a more general sense: it's kind of a trick question. blst rejects signatures produced with such a key, and it doesn't generate such a key. And that's sufficient to claim compliance, because deserialization of a secret key falls outside the draft's scope. The draft does say sk can't be zero, but that's in reference to the alternative key-generation[!] procedure rather than deserialization.

With this in mind one can actually wonder whether secret-key deserialization should perform any sanity checks at all. Indeed, a secret key is something you generate yourself, so it's never "rogue." So why would you rely on deserialization to vet it instead of ensuring that it's generated properly bounded to begin with? Well, I can imagine one might say it's a protection against corruption when pulling the key from permanent storage at a later occasion. But it's pretty weak protection, virtually none. The right thing to do would be to generate the public key and compare it to a previously saved copy. But that is obviously not the deserialization subroutine's business...

@dot-asm
Collaborator

dot-asm commented Oct 13, 2020

With this in mind one can actually wonder whether secret-key deserialization should perform any sanity checks at all.

Well, the question ought rather to be about handling test vectors with sk==0. But even from this viewpoint, which of the two possible ways to handle it gives you more confidence? a) Reject sk==0 upfront and fail the test without going beyond deserialization. b) Have deserialization accept sk==0, generate the signature, and have the signature rejected. I for one would be more comfortable with b).

@kirk-baird
Contributor

I agree that we should never reach a case where sk == 0 without a previous error occurring.

That said, it would be a cheap check, deserialising secret keys is likely infrequent, and it prevents us from continuing to execute with an invalid key, so I don't see the harm in rejecting it at deserialisation.

@dot-asm
Collaborator

dot-asm commented Oct 14, 2020

Assuming that we agree that it's appropriate to prevent the application from producing an infinite signature, as opposed to relying on everybody else to reject such a signature.

We have to recognize one thing. It's not exclusively about what feels right or convenient in this (or any) specific context, but also about what would be the right thing to do in a more general sense. Maybe there is something else that would be appropriate to do, or maybe that something else is the only thing one needs to do. Consider the following. We seem to be in agreement that sk==0 [at the moment the sk object is instantiated] can be viewed as a case of the application kind of "messing" with the library, right? Because an abnormal sk has to originate from outside the library. But as long as we are talking about "messing," why would we assume that the caller is so well behaved that it would call deserialization at all, or pay attention to the error code, or whatever? With this in mind, wouldn't it be appropriate to reject sk==0 in the sign procedure? (Just in case: yes, one can just as well argue that the sign procedure could perform the equivalent of blst_fr_check too.) However, formally speaking, before we take this approach, we have to recognize that the specification doesn't directly justify returning errors from CoreSign; instead it suggests that the input is always sane. And we have to accept that the option of failure in the sign procedure doesn't go against the spirit of the specification(*). Yes, one can just as well turn this argument around and use it to put the complete burden of checks on deserialization... Thoughts?

As a more concrete suggestion: what if deserialization zeroed the buffer in case of blst_fr_check failure and counted on the sign procedure to reject it? That would cover the case of not paying attention to the error code from deserialization...

(*) Just in case, in my mind it doesn't, but it might be just me.
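For concreteness, here is roughly what that zero-the-buffer-and-let-Sign-reject pattern could look like. The types, the method names, and the elided frCheck are illustrative assumptions, not blst's actual Rust/C API:

```go
package main

import "errors"

// SecretKey is an illustrative 32-byte big-endian scalar container.
type SecretKey struct{ b [32]byte }

// frCheck stands in for blst_scalar_fr_check (value < r); the real range
// check is elided here, so this stub always passes.
func frCheck(b []byte) bool { return true }

// Deserialize zeroes the key material on a failed range check, so a caller
// that ignores the returned error still cannot sign with the bad value.
func (sk *SecretKey) Deserialize(in []byte) error {
	if len(in) != 32 {
		return errors.New("secret key must be 32 bytes")
	}
	copy(sk.b[:], in)
	if !frCheck(sk.b[:]) {
		sk.b = [32]byte{} // poison: leave a key that Sign will refuse
		return errors.New("scalar out of range")
	}
	return nil
}

// Sign independently rejects a zero key, the second line of defence that
// the suggestion above counts on.
func (sk *SecretKey) Sign(msg []byte) ([]byte, error) {
	if sk.b == ([32]byte{}) {
		return nil, errors.New("zero secret key")
	}
	// actual signing elided
	return nil, nil
}
```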

@kirk-baird
Contributor

Assuming that we agree that it's appropriate to prevent the application from producing an infinite signature, as opposed to relying on everybody else to reject such a signature.

I think this is a good assumption to make; as you mention, the Sign() function has a prerequisite that "SK, a secret key in the format output by KeyGen." So we should then only provide a secret key in the range 0 < SK < r.

The library will handle this when generating secret keys using KeyGen. If we then also prevent it during deserialisation, the only way a user could end up with an invalid secret key is by deliberately manipulating the library or its underlying data. From this point of view I'm happy to reject the 0 secret key at deserialisation.

Otherwise, if you prefer that prerequisite checks are handled by the client rather than the library, we could have the check in our client instead.
