FROST #138
Conversation
@jesseposner Interesting work! I am currently thinking of a system where a large group of people would maintain ownership over a pool of funds, for example, within a single Bitcoin wallet. The FROST scheme looks like it could be a primitive in this scheme, especially if combined with an accountability protocol to detect/exclude malicious signers. Would this PR be ready for some preliminary experiments in a P2P setting?
@devos50 The PR is currently in a very early state and isn't that useful yet; in particular, it doesn't yet have support for signing or generating nonces. However, I've made significant progress in my local branch, and I will be pushing substantial changes to the PR by the end of the month that will have nonce generation and signing and should be sufficient for experimentation.
@jesseposner thanks for your response! I will wait for your changes then and try to build something around it later when the PR is more feature-complete 👍 (I'm going on holiday tomorrow so no need to rush).
Here's another BIP340 FROST implementation (in Go): I think it would be nice to have a chat with them.
By the way, there's a new paper about FROST that has a nice pseudocode in Figure 10: https://eprint.iacr.org/2021/1375.pdf |
Just curious, are there any significant changes to the protocol itself (ignoring implementation details -- of course the line is fuzzy)? |
@real-or-random I'll have a full write-up ready very soon, but the general idea is that I'm using a method for combining MuSig and FROST that provides the following advantages: (1) MuSig keys can be converted to FROST keys without changing the aggregate public key, (2) a "broadcast channel" is implemented using MuSig to sign a hashed list of coefficient commitments, and (3) MuSig APIs are utilized.

The FROST protocol is modified such that the first step is to generate a MuSig key. From there, the participants generate random polynomial coefficients; however, for their first coefficient the participants use their respective individual MuSig private keys multiplied by the key aggregation coefficient. This results in the FROST and MuSig aggregate public keys being the same, because the FROST aggregate public key is the sum of the commitments to each participant's first polynomial coefficient. This also means that the participants no longer need to distribute a proof of knowledge of their first coefficient, as specified by FROST, because the rogue-key attack is prevented via MuSig and the key aggregation coefficients.

Then, when the participants distribute their shares and coefficient commitments, the FROST protocol is modified such that they also distribute a MuSig nonce commitment pair. When they have received coefficient commitments from all the other participants, they generate a hash of the list (i.e. the vss_hash).

At that point, each participant has a FROST share that they can use for threshold signing; however, they use MuSig nonces (MuSig nonce generation and FROST nonce generation are nearly identical: in both cases the protocols use a pair of nonces and nonce commitments that are combined with a binding value). The example file currently implements this method of key generation, nonce generation, and signing: https://github.com/ElementsProject/secp256k1-zkp/blob/36eba62b3b5450f0acde3459cf7600af2116bc32/examples/frost.c
If we want to use the more standard FROST protocol, the code changes required will be relatively modest (e.g. remove the keyagg coefficient from share generation, add a proof of knowledge in share generation, create a …)
I had a closer look at the modified KeyGen (the picture in the first comment) and here are few comments:
@real-or-random Thank you for the comments!
Reuse is potentially helpful if you have funds secured with a MuSig setup and later decide you'd like to convert to a threshold setup without needing to move the funds. In practice I'm not sure how useful or desirable this actually is and I'm open to the idea that the security downsides outweigh the benefits. Even if that's the case, we can still use MuSig for the broadcast channel (assuming we want to do that), and change the API such that it doesn't allow for converting a pre-existing MuSig key.
I agree that we need a security proof for the modified protocol if we think that's worth pursuing. However, the MuSig partial signatures are also proofs of knowledge, so I think the modified protocol still provides that.
The MuSig partial signatures are not generated until every participant has received their shares, verified their shares, and aggregated their shares. So when a participant receives partial signatures from every other participant, they know that each participant has the shares required to sign for the aggregate public key assuming that all participants verified their shares against the same commitments. This final assumption is validated when the partial signatures are aggregated and the aggregate is verified as a valid signature. So at that point, I believe the participants are left in that same final state you described at the end, where they are potentially uncertain (because they don't know if the other participants were able to verify the aggregate signature), however signing will be possible assuming they don't timeout and delete.
That looks very useful, thanks for the suggestion!
Great point!
Hm yes, I tend to believe that converting from a MuSig pubkey to a FROST pubkey is rare enough that it doesn't matter to change the pubkey.
The partial sigs are one part where I'm not sure. A single partial sig is not a proof of knowledge because the hashed nonce is not the partial nonce of the participant. A maybe surprising property is that partial sigs during a MuSig2 run can be forged -- just not all partial signatures of one run, because this would imply a real forgery. Now in this case we get all partial sigs (and a full sig), but one would need to see if this really means that all keys can be extracted. I never thought about this, to be honest.
Ok yes, I think you're right.
examples/frost.c

```c
if (!secp256k1_musig_nonce_agg(ctx, &agg_pubnonce, pubnonces, THRESHOLD)) {
    return 0;
}
if (!secp256k1_musig_nonce_process(ctx, &session, &agg_pubnonce, msg32, &cache, NULL)) {
```
The combination of using MuSig2-style nonce aggregation and reusing MuSig2's way of computing the coefficient for the nonce ("b" in the notation of the MuSig2 paper) seems to make this vulnerable to Wagner/BLLOR-style attacks (see the MuSig2 paper) if the attacker can choose the set of t participants after seeing an honest signer's nonce pair. Letting the attacker choose the set of participants gives the attacker some control over the Lagrange coefficient, which has a similar effect to giving the attacker control over the Fiat-Shamir challenge, because the two values are simply multiplied together. I say "similar" because the Lagrange coefficient is structured and not random, but I don't expect that this makes an attack significantly harder or easier.
The fix is to include the set of participants in the derivation of b.
Thanks, that's a great point. This is fixed in changes I will be pushing shortly that make nonce generation conform to the latest FROST spec (https://datatracker.ietf.org/doc/draft-irtf-cfrg-frost/).
Awesome work @jesseposner, thank you. Do you consider stress testing your code with 10M users?
Thanks @synctext! I'm excited to see that your students were able to leverage this code for their project, I will check it out. Please be careful: the code is still early, has not yet been reviewed, and should not be considered secure software. Great suggestion regarding the stress testing, I will add it to the to-do list.
@jesseposner I had a shallow look at your PR. In particular, I like the implementation of the vss_hash because it allows one to see when key generation has worked. Ideally, we would also be able to assign blame if the vss_hashes are not identical and then resolve the issue through some other process.
As far as I can see, there are no proofs of knowledge involved in key aggregation and, instead, MuSig aggregation is assumed to make PoKs unnecessary. I think this would require a security proof/argument.
Agree with @real-or-random that we should try to get rid of signer indices entirely if possible.
I pushed a branch with a few fixup! commits and CI integration. Feel free to cherry-pick: https://github.com/jonasnick/secp256k1-zkp/commits/frost-jn.
Thanks for the comments @jonasnick!
The vss_hash signatures are also PoKs. By moving the PoK to the second round and including the vss_hash in the PoK, we fulfill the broadcast channel requirement with the same set of signatures used for the PoK.
Great point, I added this to the to-do list.
Thanks, will do!
Ah, clever idea to use this signature as the PoK. I pushed two more commits to my branch. Feel free to cherry-pick if you agree that they're useful.
I think one open todo that may be useful to keep in the OP is:
- decide on whether to use MuSig key aggregation
Btw, I mentioned earlier that:
> Ideally, we would also be able to assign blame if the vss_hashes are not identical and then resolve the issue through some other process.
It seems like this API is already close to providing the possibility to do something like this (it requires share_agg(...) to indicate which share failed to verify).
- When distributing shares, signers sign the vss_commitment and share, and send the signature to the recipient along with the vss_commitment.
- The receiving signer does vss verification of the share. If it fails, the share, the vss_commitment and the signature can be used to complain. Perhaps it makes sense to make the vss_verify function in my branch public because it would allow a third party with minimal information about the key generation session to verify the complaint.
- If the vss_hash turns out to not be identical, every signer submits all received vss_commitments and their signatures to the complaint handler, which should be sufficient to identify disruptive signers.
One problem is that there's no agreed-upon session identifier that is covered by the signature, so signatures are transferable between sessions. A malicious signer could send a received vss_commitment and signature from a different session to the complaint handler to blame an innocent signer that was present in two keygen sessions.
done
Here's an idea for how we might incorporate a session ID by building on what you described:
Regarding the issue of identifying dishonest signers in the key generation phase, @real-or-random rightfully points out that transcript + signatures only helps if the signers already agree on the individual public keys. If there's disagreement, the signature is worthless. Establishing agreement is difficult. I'm not sure how much sense it makes in practice to simply assume that there is already agreement on the public keys.
@jonasnick Perhaps we can use the key pair provided to share_gen.
Force-pushed from 54120ad to 28fa632
The session IDs prevent data from another session being used to blame an innocent signer.
Let me try to expand on the argument: If malicious signer A complains that innocent signer B sent a bad vss_comm that resulted in A having the wrong vss_hash, A must provide sig_B(vss_comm, share, ID) as well as sig_B(vss_hash, ID). If these signatures are from a different session, then the vss_hash differs and therefore B can prove its innocence by showing that it computed the vss_hash correctly. As an aside, the vss_hash may be the same across sessions because they are derived deterministically from the inputs to keygen.
Cool idea. But it's really hard to convince yourself that it works in every case, and it risks inventing complicated machinery that turns out to be broken. I think this needs more formal treatment. In any case, blaming (even with your idea) does not seem to require API changes. Blaming seems to be something that we can focus on later.
Responds to comment by @ariard:
> It could be valuable to add an example for a FROST-based adaptor signature protocol.

I can try to add one later. See BlockstreamResearch#138 (comment)
Reviewed until second commit (ec537f5)
include/secp256k1_frost.h

```c
 * each generate 2 shares, distribute 1 share to each other using a secure
 * channel, and keep 1 for themselves.
 *
 * Each participant _must_ have a secure channel with each other participant
```
Should we maybe use the pk of the recipient to do ECDH+encryption such that only the recipient can decrypt this?
That way we remove the need for a secure channel and everything can be sent publicly. The downside is that this potentially mixes multiple cryptographic schemes (the pk might be used for schnorr + ecdh + the share at 0 in the dkg).
```c
if (!secp256k1_xonly_pubkey_serialize(ctx, buf, pk)) {
    return 0;
}
secp256k1_sha256_initialize_tagged(&sha, (unsigned char*)"FROST/index", sizeof("FROST/index") - 1);
```
Should we also use the contextString here?
src/modules/frost/keygen_impl.h

```c
secp256k1_sha256_initialize_tagged(&sha, (unsigned char*)"FROST/index", sizeof("FROST/index") - 1);
secp256k1_sha256_write(&sha, buf, sizeof(buf));
/* TODO: add sub_indices for weights > 1 */
secp256k1_sha256_write(&sha, zerobyte, 1);
```
What's the point of the zero byte? AFAIR this function is used to generate the index ID of a party using its public key, so I'm not sure why it would also hash some sort of index that is always zero?
The structure tagged_hash(pk || idx) includes sub-indices (idx) that allow weighting for participants. For example, a participant with weight 2 would have an index at tagged_hash(pk || 0) and tagged_hash(pk || 1), with the same pk in both, and would receive 2 sets of shares, one for each index, giving that participant a greater weight.
src/modules/frost/keygen_impl.h

```c
secp256k1_scalar idx;
secp256k1_scalar sk;
secp256k1_scalar share_i;
secp256k1_ge ge_tmp;
```
Can you rename this to sender_pk and the pk in the input to recipient_pk?
```c
ARG_CHECK(session_id != NULL);
ARG_CHECK(keypair != NULL);
ARG_CHECK(pk != NULL);
ARG_CHECK(threshold > 1);
```
Should we really disallow threshold=1? This will just create a constant polynomial with the secret key as the only term, and every party will have the full secret key (I understand this will defeat the purpose of using a threshold scheme, but it's technically possible).
My thinking is that we should presume this is user error because, even though it's possible, it doesn't seem useful. But if we think that flexibility might have some utility, I think it's fine to allow.
include/secp256k1_frost.h

```c
 * threshold: the minimum number of signers required to produce a
 *            signature
 */
SECP256K1_API int secp256k1_frost_share_gen(
```
According to the FROST KeyGen figure on page 12 of the FROST paper, step 4 requires a broadcast of commitments + zkproofs before round 2, where you send the actual shares. I don't see any attack stemming from this change, but I wouldn't do this without a justification.
Also, if I understand this correctly, each participant P_i should run this function in a loop over all P_j (i != j) participants? Isn't that a bit wasteful (regenerating all the coefficients)? Why not pass in a list of pks, and output a state + all messages?
Also, it seems like the PoK of the first coefficient (secret key) is missing? (step 2 of the figure) This is required to prevent a t-1 rogue-key attack (we could also maybe use a MuSig-like trick, but we might need a security proof for the Feldman + MuSig trick @real-or-random)
@elichai I need to call this out more clearly in the docs and PR description and BIP, but see my comment here:
The vss_hash signatures are also PoKs. Instead of distributing PoKs in the first round, we distribute them in the second round, and instead of simply using the index and a context string as the message for the signature that serves as the PoK, we use the vss_hash as the message.
By moving the PoK to the second round and including the vss_hash in the PoK, we fulfill the broadcast channel requirement with the same set of signatures used for the PoK.
> Also, if I understand this correctly, each participant P_i should run this function in a loop over all P_j (i != j) participants? Isn't that a bit wasteful (regenerating all the coefficients)? Why not pass in a list of pks, and output a state + all messages?
For devices with limited resources, the API provides the flexibility to generate the data sequentially instead of simultaneously. A device only needs to store its private key and session_id (because shares are generated deterministically from those values) and can go offline between share generations if it doesn't have all the public keys for each participant when it begins generating shares.
The commitments are an optional argument to avoid requiring that they be regenerated over and over again, but it's a good point that the coefficients get regenerated each time.
@l2xl also makes the suggestion for even more API flexibility:
Please split the secp256k1_frost_share_gen() function in two:
- secp256k1_frost_keyshare_commitment_gen()
- secp256k1_frost_keyshare_gen()
I'm still unsure if sending the evaluation of f(index) together with the coefficients is actually safe. This removes a round from the FROST keygen algorithm, so it seems like something they'd want to do in the paper if it were safe.
I'd need to go over their security proof and see the reasoning there to see if this is used; if you have any insights/justifications I'd love to hear them.
@elichai That's a good point. This optimization doesn't even save us a round, it just saves sending an extra signature.
In the prescribed protocol in this PR, there's 2 rounds:
- distribute shares and coefficient commitments
- distribute signatures of VSS hashes
If we stick to the paper (and add VSS hash sigs for the broadcast requirement), there's also 2 rounds:
- distribute coefficient commitments and signatures of a context string
- distribute shares and signatures of the VSS hash
With the second approach (paper + VSS hash sigs) there's 2 distributions of signatures, the first signs a context string to prevent a rogue key attack and the second signs the VSS hash for the broadcast requirement.
It would only take some slight revisions to the docs and maybe the APIs as well to support this change. The optimization is probably not worth the time and effort required to prove it secure so I'm currently thinking I should make this change.
Hmm you're right. the "3rd round" in my head was actually not a round but a local computation.
I'll try to figure out from the proof in the paper if it's safe to send the evaluation before first committing to the coefficients + the zkproof of ai_0
did you write up something about this signature satisfying the broadcast requirement?
did you write up something about this signature satisfying the broadcast requirement?
The only reference to it is here.
In Section 2.1 of the FROST paper, it states:
In practice, implementations guarantee consistency of participants’ views by using techniques such as posting commitments to a centralized server that is trusted to provide a single view to all participants, or adding another protocol round where participants compare their received commitment values to ensure they are identical.
The VSS hash signatures are intended to serve as that comparison of received commitment values.
src/modules/frost/keygen_impl.h

```c
for (i = 0; i < 8; i++) {
    rngseed[i + 0] = threshold / (1ull << (i * 8));
}
```
Maybe add a secp256k1_write_be64, like secp256k1_write_be32, for better readability?
src/modules/frost/keygen_impl.h

```c
}
/* Compute commitment to each coefficient */
secp256k1_ecmult_gen(&ctx->ecmult_gen_ctx, &rj, &rand[i % 2]);
secp256k1_ge_set_gej(&rp, &rj);
```
secp256k1_ge_set_gej_var should be enough; the commitments here are public.
src/modules/frost/keygen_impl.h

```c
/* Compute seed which commits to threshold and session ID */
secp256k1_scalar_get_b32(buf, &sk);
secp256k1_sha256_initialize(&sha);
secp256k1_sha256_write(&sha, session_id, 32);
```
So the session_id is used as a seed to generate the coefficients? If so, it must have ~128 bits of secure randomness, so we should document that it must be uniformly random and not just recommend this.
```c
#ifdef __cplusplus
extern "C" {
#endif
```

```c
#include <stdint.h>
```
It doesn't seem like this header is actually being used here? I think you meant to include stddef but that's already transitively included from secp256k1.h
My go-to answer is always chacha20-poly1305 :). Now that we have chacha20, I don't think adding a poly1305 is that terrible, but I will let others weigh in on this.
include/secp256k1_frost.h

```c
secp256k1_pubkey *vss_commitment,
secp256k1_frost_share *share,
const unsigned char *session_id32,
const secp256k1_keypair *keypair,
```
Is it a good idea to pass the "secret that is shared" as input? This secret shouldn't be used anywhere else, because it's no longer your secret after you've shared it between n other people (assuming t < n).
Meaning that if a user uses his regular private key here, he actually allows any t parties to steal his money, which is unrelated to the generated public key at the end of the protocol.
I think we need some sort of secret value passed to this API because, as far as I understand it, the secp APIs don't provide entropy directly (e.g. from urandom) but require that entropy be provided as an input.
It's a good call-out though, perhaps some strongly worded warnings and instructions can warn users not to use that secret for anything else. Do you have something else in mind? Is there a way we can avoid passing the secret that is shared as input?
We can replace it with 32 bytes of ephemeral random entropy (documenting that it has to be ephemeral) and also generate the first coefficient from chacha.
But maybe this limits some use cases I'm not seeing.
The availability of the keypair for the first coefficient makes it easy to reuse this keypair for the signed VSS hash, which is also serving as a proof of knowledge.
However, as we discussed above, we might want to opt for a more conservative approach of sending the proof of knowledge in the first round independently of the signed VSS hash. If we do that, then share_gen can export a proof of knowledge, so the availability of the keypair for the first coefficient is not needed.
We'll still need some keypair to use for the signed VSS hash. Perhaps there should be a set of authentication public keys exchanged between the participants that can be used for signing the VSS hash and also for encrypting/decrypting shares.
Responds to comment: BlockstreamResearch#138 (comment)
Proposed refactor based upon the review comments received so far: https://gist.github.com/jesseposner/0168e88f9cc911a3dc9095a78f6efc15
Just found this fork of libsecp256k1 with a FROST module: https://github.com/bancaditalia/secp256k1-frost/blob/frost/src/modules/frost/README.md
Not specifically related to this PR, but I'd like to flag that there is now a 2 BTC bounty for a FROST-powered wallet that allows the signer membership set to change: https://twitter.com/gladstein/status/1684567113430892545
I added three commits to a forked branch that applies the global updates introduced with the recent sync of -zkp to upstream libsecp (see #266): https://github.com/jonasnick/secp256k1-zkp/commits/frost-jn Feel free to cherry-pick.
```c
for (j = 0; j < N_SIGNERS; j++) {
    assigned_shares[j] = &shares[j][i];
}
/* Each participant aggregates the shares they received. */
```
We might want to perform the verification of the VSS hashes of all participants inside of share_agg to enforce this verification requirement as a prerequisite to share aggregation. Scratch that, share_agg generates the vss_hash.
src/modules/frost/keygen_impl.h

```c
/* Computes indexhash = tagged_hash(pk || idx) */
static int secp256k1_frost_compute_indexhash(const secp256k1_context *ctx, secp256k1_scalar *indexhash, const secp256k1_xonly_pubkey *pk) {
```
This comment seems to be outdated. It's just tagged_hash(pk||0)
Eventually we might want tagged_hash(pk || sub_idx), as referred to below on line 67.
Is there a specific reason to add the zero byte before we support weights? Is there a downside to adding the sub_idx only once we start supporting weights?
I don't think there's a downside, so it's probably better to remove the cruft until we decide to implement it.
examples/frost.c

```c
printf("Signing VSS.............");
if (!sign_vss(ctx, signer_secrets, signers, sigs)) {
    printf("FAILED\n");
    return 1;
}
printf("ok\n");
printf("Verifying VSS...........");
for (i = 0; i < N_SIGNERS; i++) {
    if (!secp256k1_schnorrsig_verify(ctx, sigs[i], signers[0].vss_hash, 32, &signers[i].pubkey)) {
        printf("FAILED\n");
        return 1;
    }
}
printf("ok\n");
```
Do we still need this now that we verify the pok already inside share_gen?
These signatures, unlike the pok, sign the vss_hash, an output of share_agg, to fulfill the broadcast channel requirement.
Is this a requirement for the broadcast channel? It seems quite likely to me that the broadcast channel has its own means of ensuring authenticity.
Note to self: the share_agg doc instructs the user to sign the vss_hash (but doesn't mention secure broadcast).
Sorry, I entirely forgot that we tried to instantiate the broadcast channel. However, I don't think that the doc in share_agg correctly explains how to do that.
As pointed out by @real-or-random, a malicious signer could send the correct vss_hash to Alice and a fake vss_hash to Bob. Alice would think the DKG has succeeded, while Bob thinks it has failed. Bob deletes his key material, which may result in the inability to sign despite Alice believing everything succeeded. We'd need to describe an echo-broadcast protocol (https://eprint.iacr.org/2002/040.pdf, Protocol 1 and Protocol 2, H/T @real-or-random). It's probably best to describe this option in detail in the BIP and refer to it in the function documentation.
I don't think the malicious signer can make n-of-n succeed unless the malicious signer's PoK is invalid. I think the most the malicious signer can do with an inconsistent VSS, but valid PoK, is trick participants into thinking they have valid shares that won't actually be usable for signing.
Hm, this is how I'd think a malicious signer could make it work in a 2-of-3 setup:
- Eve generates random key x
- Eve generates polynomial f for x, sends commitment to f, share and pok to Alice
- Eve generates polynomial g for x distinct from f, sends commitment to g, share and pok to Bob
- When creating partial sigs, Alice and Bob use their aggregate shares multiplied with the Lagrange coefficient
- Eve creates her "normal" partial signature, but subtracts Alice's and Bob's partial signatures for the garbage shares and adds a partial signature for the key x

This should result in a valid signature.
But wouldn't this be a signature for key x, not for the output of the DKG, which would be the sum of Alice, Bob, and Eve's keys? If Eve generates random key x, and Alice generates a random key y, and Bob generates a random key z, then how would Eve make the DKG output the key xG for the public key, when it should output (x+y+z)G for the public key? And if the DKG does output (x+y+z)G, then wouldn't Eve need to produce a signature for (x+y+z)G and not simply xG?
Sorry, I'm doing a lot of handwaving. It's true that the DKG would have to produce a signature for (x+y+z)G and not just x. Assume Eve has shares y' from Alice and z' from Bob. Then the sum of Alice's and Bob's partial signatures is a signature for (y-y'+z-z'+g)G, where g are the garbage shares sent by Eve. Eve produces a partial sig for (x + y' + z' - g)G such that the sum of all partial sigs should be a signature for (x+y+z)G.
x, y, and z are secrets, and y' and z' are polynomial shares that reconstruct to secrets, and typically shares and secrets are not added to each other. So I'm not following how we can compute a pubkey by adding secrets to shares with (y-y'+z-z'+g)G and (x + y' + z' - g)G.

Maybe it would help to define a full set of terms:
- Eve generates random secret x, and shares x_e, x_a, and x_b. x_a is from polynomial f and x_b is from polynomial g.
- Alice generates random secret y, and shares y_e, y_a, and y_b
- Bob generates random secret z, and shares z_e, z_a, and z_b
- The DKG pubkey is P = (x + y + z)G
- Eve's aggregate share is e = x_e + y_e + z_e
- Alice's aggregate share is a = x_a + y_a + z_a
- Bob's aggregate share is b = x_b + y_b + z_b
- For the signature, the nonce pubkey is R = r_e*G + r_a*G + r_b*G
- Alice's partial sig is s_a = r_a + H(P || R || m) * a * l_a
- Bob's partial sig is s_b = r_b + H(P || R || m) * b * l_b

With this setup, how does Eve produce a valid signature for (P, R, m)?
@real-or-random and I have been brainstorming about the potential structure of the frost module's DKG, as well as what a potential bip-frost's DKG section might look like. We've consolidated our ideas so far in this repository: https://github.com/jonasnick/bip-frost-dkg. In summary, we've been considering SimplPedPop and variants. This is pretty close to what we've already attempted here, but we now have a much better picture of the requirements on the caller and the broadcast channel.
for (i = 0; i < N_SIGNERS; i++) {
    /* Generate a polynomial share for the participants */
    if (!secp256k1_frost_shares_gen(ctx, shares[i], signer[i].vss_commitment, signer[i].pok, signer_secrets[i].seed, THRESHOLD, N_SIGNERS, ids)) {
I checked why the valgrind CI jobs fail and it looks like signer_secrets[i].seed is never initialized.
Ah, good catch. Thanks for tracking that down!
This commit adds the foundational configuration and building scripts and an initial structure for the project.
This commit adds share generation, as well as share serialization and parsing.
This commit adds share aggregation and verification, as well as computation of public verification shares.
This commit adds nonce generation, as well as serialization and parsing.
This commit adds BIP-341 ("Taproot") and BIP-32 ("ordinary") public key tweaking.
This commit adds nonce aggregation, as well as adaptor signatures.
This commit adds signature generation and aggregation, as well as partial signature serialization and parsing.
This commit adds an example file to demonstrate how to use the module.
Add API tests, nonce tests, tweak tests, SHA256 tag tests, and constant-time tests.
This commit adds a documentation file with detailed instructions for how to use the module properly.
This PR implements a BIP-340 compatible threshold signature system based on FROST (Flexible Round-Optimized Schnorr Threshold Signatures).
TODO
FROST Paper
This PR is based upon the FROST paper by Chelsea Komlo and Ian Goldberg and the draft RFC.
Python Implementation [WIP]
https://github.com/jesseposner/FROST-BIP340
Prior Work
@apoelstra worked on a threshold implementation in #46 and this PR builds on the techniques introduced there.
This PR patterns its APIs on, and in many instances duplicates code from, the MuSig implementation, due to the two protocols' similarities in nonce generation and signing.