
FROST #138

Open · wants to merge 10 commits into master
Conversation

@jesseposner (Contributor) commented Jul 21, 2021

This PR implements a BIP-340 compatible threshold signature system based on FROST (Flexible Round-Optimized Schnorr Threshold Signatures).

TODO

  • Key generation APIs
  • Nonce generation APIs
  • Signing APIs
  • Basic tests
  • Refactor APIs
  • Resolve merge conflicts
  • Get CI working
  • Sign list of VSS proofs from all participants to simulate broadcast channel
  • Verify shares with VSS
  • Update keygen protocol based on feedback from @LLFourn
  • refactor nonce and index handling
  • key tweaking (e.g. taproot and bip32)
  • adaptor support
  • Partial sig verification API
  • change indexes to x-only pubkeys
  • revert change to indexes
  • Public VSS Verify
  • Verification checks
  • Serialization APIs
  • Example file
  • API for generating public verification shares
  • Full tests
  • more detailed code comments, including references to RFC
  • Documentation file
  • Update API diagram [in-progress]
  • Update Python implementation [in-progress]
  • FROST BIP [in-progress]
  • Benchmarks and Stress Testing [in-progress]

FROST Paper

This PR is based upon the FROST paper by Chelsea Komlo and Ian Goldberg and the draft RFC.

Python Implementation [WIP]

https://github.com/jesseposner/FROST-BIP340

Prior Work

@apoelstra worked on a threshold implementation in #46 and this PR builds on the techniques introduced there.

This PR is patterned on the APIs of the MuSig implementation, and in many instances duplicates its code, due to the similarities in nonce generation and signing.

@devos50 commented Aug 20, 2021

@jesseposner Interesting work! I am currently thinking of a system where a large group of people would maintain ownership over a pool of funds, for example, within a single Bitcoin wallet. The FROST scheme looks like it could be a primitive in this scheme, especially if combined with an accountability protocol to detect/exclude malicious signers. Would this PR be ready for some preliminary experiments in a P2P setting?

@jesseposner (Contributor Author)

@devos50 The PR is currently in a very early state and isn't that useful yet, in particular, it doesn't yet have support for signing or generating nonces. However, I've made significant progress in my local branch and I will be pushing substantial changes to the PR by the end of the month that will have nonce generation and signing and should be sufficient for experimentation.

@devos50 commented Aug 20, 2021

@jesseposner thanks for your response! I will wait for your changes then and try to build something around it later when the PR is more feature-complete 👍 (I'm going on holiday tomorrow so no need to rush).

@real-or-random (Collaborator)

Here's another BIP340 FROST implementation (in Go):
https://github.com/taurusgroup/multi-party-sig

I think it would be nice to have a chat with them.

@real-or-random (Collaborator)

By the way, there's a new paper about FROST that has a nice pseudocode in Figure 10: https://eprint.iacr.org/2021/1375.pdf

@real-or-random (Collaborator)

> • Describe modifications to the FROST protocol [in-progress]

Just curious, are there any significant changes to the protocol itself (ignoring implementation details -- of course the line is fuzzy)?

@jesseposner (Contributor Author) commented Jan 7, 2022

@real-or-random I'll have a full write-up ready very soon, but the general idea is that I'm using a method for combining MuSig and FROST that provides the following advantages: (1) MuSig keys can be converted to FROST keys without changing the aggregate public key, (2) a "broadcast channel" is implemented using MuSig to sign a hashed list of coefficient commitments, and (3) MuSig APIs are utilized (secp256k1_musig_pubkey_agg, secp256k1_musig_nonce_gen, secp256k1_musig_nonce_agg, secp256k1_musig_nonce_process, secp256k1_musig_partial_sig_agg) allowing for simpler code implementation and maintenance.

The FROST protocol is modified such that the first step is to generate a MuSig key. From there, the participants generate random polynomial coefficients, however, for their first coefficient the participants use their respective individual MuSig private key multiplied by the key aggregation coefficient. This results in the FROST and MuSig aggregate public key being the same, because the FROST aggregate public key is the sum of the commitments to each participant's first polynomial coefficient. This also means that the participants no longer need to distribute a proof of knowledge of their first coefficient, as specified by FROST, because the rogue key attack is prevented via MuSig and the key aggregation coefficients.
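The key identity described here can be sketched in a toy model. This is purely illustrative and assumes nothing about the real code: the "group" is just integers mod a stand-in prime, a "commitment" s*G is modeled as s itself, and the keyagg coefficient hash is a placeholder.

```python
# Toy model (NOT real cryptography): scalars mod a stand-in prime; a
# "public key" x*G is modeled as x itself. All names are illustrative.
import hashlib

P = 2**255 - 19  # stand-in prime; the real code uses the secp256k1 group order

def h(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

# Individual MuSig keys and key-aggregation coefficients mu_i = H(L, pk_i)
secret_keys = [11, 22, 33]
pubkeys = [x % P for x in secret_keys]   # "pk = x*G" in the toy
L = tuple(pubkeys)
mu = [h(L, pk) for pk in pubkeys]

# MuSig aggregate key: sum_i mu_i * pk_i
musig_agg = sum(m * pk for m, pk in zip(mu, pubkeys)) % P

# Modified FROST keygen: participant i's first polynomial coefficient is
# a_{i,0} = mu_i * x_i, so the FROST aggregate key (the sum of the
# commitments to the first coefficients) equals the MuSig aggregate key.
first_coeffs = [m * x % P for m, x in zip(mu, secret_keys)]
frost_agg = sum(first_coeffs) % P        # "sum of a_{i,0}*G" in the toy

assert frost_agg == musig_agg
```

The sketch only demonstrates the algebraic identity; the real protocol operates on curve points and serialized keys.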

Then, when the participants distribute their shares and coefficient commitments, the FROST protocol is modified such that they also distribute a MuSig nonce commitment pair. When they have received coefficient commitments from all the other participants, they generate a hash of the list (i.e. the vss_hash), and sign it using MuSig. The partial MuSig signatures are aggregated, and a valid signature verifies that each participant has received the same commitments (thus simulating the broadcast channel, whose implementation is not specified in FROST). This completes key generation.
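The broadcast-channel check can be sketched as follows; this is a toy model of only the hash comparison (the actual PR signs the hash with MuSig, which this sketch omits, and the encoding here is an assumption):

```python
import hashlib

def vss_hash(commitment_lists):
    # Hash the ordered list of everyone's coefficient commitments
    # (commitments are toy integers here; real code hashes group elements).
    h = hashlib.sha256()
    for commitments in commitment_lists:
        for c in commitments:
            h.update(c.to_bytes(32, "big"))
    return h.hexdigest()

# Commitments as received by two participants
seen_by_alice = [[1, 2], [3, 4], [5, 6]]
seen_by_bob = [[1, 2], [3, 4], [5, 6]]
assert vss_hash(seen_by_alice) == vss_hash(seen_by_bob)

# One tampered commitment changes the hash, so a MuSig signature over it
# could not aggregate into a valid signature across all participants.
seen_by_carol = [[1, 2], [3, 999], [5, 6]]
assert vss_hash(seen_by_carol) != vss_hash(seen_by_alice)
```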

At that point, each participant has a FROST share that they can use for threshold signing, however, they use MuSig nonces (MuSig nonce generation and FROST nonce generation are nearly identical: in both cases the protocols use a pair of nonces and nonce commitments that are combined with a binding value).
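The nonce structure can be sketched in the same toy arithmetic (names and hash inputs here are assumptions, not the PR's actual binding-value derivation):

```python
import hashlib

P = 2**255 - 19  # stand-in for the secp256k1 group order

def binding(agg_pk, pubnonces, msg):
    # b commits to the session data; real derivations hash serialized
    # group elements, this just hashes a string encoding.
    data = str((agg_pk, tuple(pubnonces), msg)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

k1, k2 = 123, 456          # one signer's secret nonce pair
R1, R2 = k1 % P, k2 % P    # public nonce pair ("k*G" in the toy)
b = binding(42, [(R1, R2)], "m")
k_eff = (k1 + b * k2) % P  # effective secret nonce
R_eff = (R1 + b * R2) % P  # effective public nonce
assert k_eff == R_eff      # "k_eff*G == R_eff" holds in the toy
```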

The example file currently implements this method of key generation, nonce generation, and signing: https://github.com/ElementsProject/secp256k1-zkp/blob/36eba62b3b5450f0acde3459cf7600af2116bc32/examples/frost.c. If we want to use the more standard FROST protocol, the code changes required will be relatively modest (e.g. remove keyagg coefficient from share generation, add a proof of knowledge in share generation, create a secp256k1_frost_nonce_process function based on secp256k1_musig_nonce_process that uses a slightly different binding value and removes the keyagg coefficient when generating the challenge hash, derive aggregate public key in secp256k1_frost_share_agg.). It would also be possible to provide APIs for both protocols (standard FROST and hybrid FROST/MuSig).

@real-or-random (Collaborator)

I had a closer look at the modified KeyGen (the picture in the first comment) and here are a few comments:

  • I'm not sure if it's actually an anti-feature that this is compatible with MuSig key aggregation. I think it's a good mental model to keep threshold and multisigs separate as much as possible because the setup is so different. Do we really want users to reuse their FROST pubkey shares as MuSig public keys? It's strictly less secure than having a separate key for MuSig because you gave others shares of the key. And I don't see where reuse would be helpful. I tend to believe that it is good to use a FROST key only for running FROST, with exactly the group of signers it was supposed to be used with.
  • It's a very neat idea to use MuSig during KeyGen, but isn't it dangerous right now without a security proof for the modified protocol? This is in particular true when we drop the proofs of knowledge. I'm not entirely sure that MuSig2 signing can replace those. In any case, it will then require the AGM to prove the protocol secure, whereas ROM+OMDL will probably be enough for a proof of the unmodified FROST.
  • I don't think that MuSig can replace a broadcast channel. To be more specific, it may be able to replace a reliable broadcast channel but not a terminating broadcast channel. That is, the current KeyGen protocol still needs broadcast: if you terminate with an aggregate pubkey, you don't know whether the others terminated or not. So you can't go ahead and use the pubkey (e.g., send money to it). For example, if everyone else has aborted, it's impossible to sign for that pubkey. This can be mitigated by a third round in which everyone confirms that the first two rounds were correct. If you receive a third-round confirmation from everybody, then you know that signing is possible and you can return the aggregate pubkey to the caller. (This still does not mean that everyone else will have received valid third-round confirmations... So others may be in some uncertain state, but that's okayish as long as they don't time out and delete their secrets. Signing will be possible; it's just that not everyone knows that it will be possible. To be really sure, you need terminating reliable broadcast, which is almost as strong as consensus.)
  • Have you considered using hashes of the pubkey shares as identities for the Lagrange coefficients (instead of 1, 2, 3, ...)? This may be nicer in practice because then you don't need to bother with an ordering of the pubkeys. (I learned this trick from https://blog.trailofbits.com/2021/12/21/disclosing-shamirs-secret-sharing-vulnerabilities-and-announcing-zkdocs/ which explains how to use it properly. Hashing is certainly safe.)
  • nit: It shouldn't be necessary to verify the partial sigs and the final sig.
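The hashed-identities trick from the fourth bullet can be sketched as follows (toy prime field and illustrative encoding; the real code would hash serialized public keys into the secp256k1 scalar field):

```python
import hashlib

P = 2**255 - 19  # toy prime; real code would use the secp256k1 group order

def identity(pk_bytes):
    # Shamir x-coordinate derived from the public key, not an ordering.
    return int.from_bytes(hashlib.sha256(pk_bytes).digest(), "big") % P

def lagrange_at_zero(ids, i):
    # Standard Lagrange coefficient for interpolation at x = 0.
    num = den = 1
    for j in ids:
        if j != ids[i]:
            num = num * (-j) % P
            den = den * (ids[i] - j) % P
    return num * pow(den, -1, P) % P

ids = [identity(pk) for pk in (b"pkA", b"pkB", b"pkC")]
# Every share of the degree-0 polynomial f(x) = 7 equals the secret, so
# interpolating at x = 0 recovers 7 regardless of the identities used.
shares = [7] * 3
secret = sum(lagrange_at_zero(ids, i) * shares[i] for i in range(3)) % P
assert secret == 7
```

This works as long as the hashed identities are distinct and nonzero, which holds with overwhelming probability.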

@jesseposner (Contributor Author)

@real-or-random Thank you for the comments!

> I'm not sure if it's actually an anti-feature that this is compatible with MuSig key aggregation. I think it's a good mental model to keep threshold and multisigs separate as much as possible because the setup is so different. Do we really want users to reuse their FROST pubkey shares as MuSig public keys? It's strictly less secure than having a separate key for MuSig because you gave others shares of the key. And I don't see where reuse would be helpful. I tend to believe that it is good to use a FROST key only for running FROST, with exactly the group of signers it was supposed to be used with.

Reuse is potentially helpful if you have funds secured with a MuSig setup and later decide you'd like to convert to a threshold setup without needing to move the funds. In practice I'm not sure how useful or desirable this actually is and I'm open to the idea that the security downsides outweigh the benefits. Even if that's the case, we can still use MuSig for the broadcast channel (assuming we want to do that), and change the API such that it doesn't allow for converting a pre-existing MuSig key.

> It's a very neat idea to use MuSig during KeyGen, but isn't it dangerous right now without a security proof for the modified protocol? This is in particular true when we drop the proofs of knowledge. I'm not entirely sure that MuSig2 signing can replace those. In any case, it will then require the AGM to prove the protocol secure, whereas ROM+OMDL will probably be enough for a proof of the unmodified FROST.

I agree that we need a security proof for the modified protocol if we think that's worth pursuing. However, the MuSig partial signatures are also proofs of knowledge, so I think the modified protocol still provides that.

> I don't think that MuSig can replace a broadcast channel. To be more specific, it may be able to replace a reliable broadcast channel but not a terminating broadcast channel. That is, the current KeyGen protocol still needs broadcast: if you terminate with an aggregate pubkey, you don't know whether the others terminated or not. So you can't go ahead and use the pubkey (e.g., send money to it). For example, if everyone else has aborted, it's impossible to sign for that pubkey. This can be mitigated by a third round in which everyone confirms that the first two rounds were correct. If you receive a third-round confirmation from everybody, then you know that signing is possible and you can return the aggregate pubkey to the caller. (This still does not mean that everyone else will have received valid third-round confirmations... So others may be in some uncertain state, but that's okayish as long as they don't time out and delete their secrets. Signing will be possible; it's just that not everyone knows that it will be possible. To be really sure, you need terminating reliable broadcast, which is almost as strong as consensus.)

The MuSig partial signatures are not generated until every participant has received their shares, verified their shares, and aggregated their shares. So when a participant receives partial signatures from every other participant, they know that each participant has the shares required to sign for the aggregate public key assuming that all participants verified their shares against the same commitments. This final assumption is validated when the partial signatures are aggregated and the aggregate is verified as a valid signature. So at that point, I believe the participants are left in that same final state you described at the end, where they are potentially uncertain (because they don't know if the other participants were able to verify the aggregate signature), however signing will be possible assuming they don't timeout and delete.

> Have you considered using hashes of the pubkey shares as identities for the Lagrange coefficients (instead of 1, 2, 3, ...)? This may be nicer in practice because then you don't need to bother with an ordering of the pubkeys. (I learned this trick from https://blog.trailofbits.com/2021/12/21/disclosing-shamirs-secret-sharing-vulnerabilities-and-announcing-zkdocs/ which explains how to use it properly. Hashing is certainly safe.)

That looks very useful, thanks for the suggestion!

> nit: It shouldn't be necessary to verify the partial sigs and the final sig.

Great point!

@real-or-random (Collaborator)

> I'm not sure if it's actually an anti-feature that this is compatible with MuSig key aggregation. I think it's a good mental model to keep threshold and multisigs separate as much as possible because the setup is so different. Do we really want users to reuse their FROST pubkey shares as MuSig public keys? It's strictly less secure than having a separate key for MuSig because you gave others shares of the key. And I don't see where reuse would be helpful. I tend to believe that it is good to use a FROST key only for running FROST, with exactly the group of signers it was supposed to be used with.

> Reuse is potentially helpful if you have funds secured with a MuSig setup and later decide you'd like to convert to a threshold setup without needing to move the funds. In practice I'm not sure how useful or desirable this actually is and I'm open to the idea that the security downsides outweigh the benefits. Even if that's the case, we can still use MuSig for the broadcast channel (assuming we want to do that), and change the API such that it doesn't allow for converting a pre-existing MuSig key.

Hm yes, I tend to believe that converting from a MuSig pubkey to a FROST pubkey is rare enough that it doesn't matter to change the pubkey.

> It's a very neat idea to use MuSig during KeyGen, but isn't it dangerous right now without a security proof for the modified protocol? This is in particular true when we drop the proofs of knowledge. I'm not entirely sure that MuSig2 signing can replace those. In any case, it will then require the AGM to prove the protocol secure, whereas ROM+OMDL will probably be enough for a proof of the unmodified FROST.

> I agree that we need a security proof for the modified protocol if we think that's worth pursuing. However, the MuSig partial signatures are also proofs of knowledge, so I think the modified protocol still provides that.

The partial sigs are one part where I'm not sure. A single partial sig is not a proof of knowledge because the hashed nonce is not the partial nonce of the participant. A maybe surprising property is that partial sigs during a MuSig2 run can be forged -- just not all partial signatures of one run, because that would imply a real forgery. Now in this case we get all partial sigs (and a full sig), but one would need to see whether this really means that all keys can be extracted. I never thought about this, to be honest.

> I don't think that MuSig can replace a broadcast channel. To be more specific, it may be able to replace a reliable broadcast channel but not a terminating broadcast channel. That is, the current KeyGen protocol still needs broadcast: if you terminate with an aggregate pubkey, you don't know whether the others terminated or not. So you can't go ahead and use the pubkey (e.g., send money to it). For example, if everyone else has aborted, it's impossible to sign for that pubkey. This can be mitigated by a third round in which everyone confirms that the first two rounds were correct. If you receive a third-round confirmation from everybody, then you know that signing is possible and you can return the aggregate pubkey to the caller. (This still does not mean that everyone else will have received valid third-round confirmations... So others may be in some uncertain state, but that's okayish as long as they don't time out and delete their secrets. Signing will be possible; it's just that not everyone knows that it will be possible. To be really sure, you need terminating reliable broadcast, which is almost as strong as consensus.)

> The MuSig partial signatures are not generated until every participant has received their shares, verified their shares, and aggregated their shares. So when a participant receives partial signatures from every other participant, they know that each participant has the shares required to sign for the aggregate public key assuming that all participants verified their shares against the same commitments. This final assumption is validated when the partial signatures are aggregated and the aggregate is verified as a valid signature. So at that point, I believe the participants are left in that same final state you described at the end, where they are potentially uncertain (because they don't know if the other participants were able to verify the aggregate signature), however signing will be possible assuming they don't timeout and delete.

Ok yes, I think you're right.

examples/frost.c Outdated
if (!secp256k1_musig_nonce_agg(ctx, &agg_pubnonce, pubnonces, THRESHOLD)) {
return 0;
}
if (!secp256k1_musig_nonce_process(ctx, &session, &agg_pubnonce, msg32, &cache, NULL)) {
Collaborator:

The combination of using MuSig2-style nonce aggregation and reusing MuSig2's way of computing the coefficient for the nonce ("b" in the notation of the MuSig2 paper) seems to make this vulnerable to Wagner/BLLOR-style attacks (see the MuSig2 paper) if the attacker can choose the set of t participants after seeing an honest signer's nonce pair. Letting the attacker choose the set of participants gives the attacker some control over the Lagrange coefficient, which has a similar effect as giving the attacker control over the Fiat-Shamir challenge, because the two values are simply multiplied together. I say "similar" because the Lagrange coefficient is structured and not random, but I don't expect that this makes an attack significantly harder or easier.

The fix is to include the set of participants in the derivation of b.
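A minimal sketch of that fix, assuming a hash-based derivation of b (my construction for illustration, not the PR's code):

```python
import hashlib

def binding_b(participant_ids, agg_pk, pubnonces, msg):
    # Fold the chosen signer set into the binding-hash input so an
    # attacker cannot pick participants after seeing an honest nonce.
    h = hashlib.sha256()
    for pid in sorted(participant_ids):   # canonical ordering of the set
        h.update(pid.to_bytes(32, "big"))
    h.update(str((agg_pk, tuple(pubnonces), msg)).encode())
    return int.from_bytes(h.digest(), "big")

b1 = binding_b({1, 2, 3}, 42, [7, 8], "m")
b2 = binding_b({1, 2, 4}, 42, [7, 8], "m")
assert b1 != b2  # a different signer set yields a different coefficient
```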

Contributor Author:

Thanks, that's a great point. This is fixed in changes I will be pushing shortly that make nonce generation conform to the latest FROST spec (https://datatracker.ietf.org/doc/draft-irtf-cfrg-frost/).

@synctext commented May 13, 2022

Awesome work @jesseposner, thank you. Have you considered stress testing your code with 10M users?
By the way, a proof-of-concept of your FROST efforts is now operational on Android. Delft University students took your code and used it to build a fully decentralised Taproot-based DAO, based on joint ownership of Bitcoin funds which can be spent by participating in a k-out-of-n Schnorr signature. Specifically, it is expanded for distributed usage with our "Trustchain ledger" for potentially numerous signatures from Android users. Interesting?

@jesseposner (Contributor Author)

Thanks @synctext! I'm excited to see that your students were able to leverage this code for their project; I will check it out. Please be careful: the code is still early, has not yet been reviewed, and should not be considered secure software.

Great suggestion regarding the stress testing, I will add it to the to-do list.

@jonasnick (Contributor) left a comment

@jesseposner I had a shallow look at your PR. In particular, I like the implementation of the vss_hashes because it makes it possible to see whether key generation has worked. Ideally, we would also be able to assign blame if the vss_hashes are not identical and then resolve the issue through some other process.

As far as I can see, there are no proofs of knowledge involved in key aggregation and, instead, MuSig aggregation is assumed to make PoKs unnecessary. I think this would require a security proof/argument.

Agree with @real-or-random that we should try to get rid of signer indices entirely if possible.

I pushed a branch with a few fixup! commits and CI integration. Feel free to cherry-pick: https://github.com/jonasnick/secp256k1-zkp/commits/frost-jn.

@jesseposner (Contributor Author) commented May 17, 2022

Thanks for the comments @jonasnick!

> As far as I can see, there are no proofs of knowledge involved in key aggregation and, instead, MuSig aggregation is assumed to make PoKs unnecessary. I think this would require a security proof/argument.

The vss_hash signatures are also PoKs. Instead of distributing PoKs in the first round, we distribute them in the second round, and instead of simply using the index and a context string as the message for the signature that serves as the PoK, we use the vss_hash as the message.

By moving the PoK to the second round and including the vss_hash in the PoK, we fulfill the broadcast channel requirement with the same set of signatures used for the PoK (otherwise we would need an additional set of signatures to fulfill the broadcast channel requirement). As you observed, we already have secure channels between the participants, so we could distribute the vss_hash separately. However, by using the PoKs, that step becomes unnecessary, so it's a little more efficient.

> Agree with @real-or-random that we should try to get rid of signer indices entirely if possible.

Great point, I added this to the to-do list.

> I pushed a branch with a few fixup! commits and CI integration. Feel free to cherry-pick: https://github.com/jonasnick/secp256k1-zkp/commits/frost-jn.

Thanks, will do!

@jonasnick (Contributor) left a comment

Ah, clever idea to use this signature as the PoK. I pushed two more commits to my branch. Feel free to cherry-pick if you agree that they're useful.

I think one open todo that may be useful to keep in the OP is:

  • decide on whether to use MuSig key aggregation

Btw, I mentioned earlier that

> Ideally, we would also be able to assign blame if the vss_hashes are not identical and then resolve the issue through some other process.

It seems like this API is already close to making something like this possible (it would require share_agg(...) to indicate which share failed to verify).

  1. When distributing shares, signers sign the vss_commitment and share, and send the signature along with the vss_commitment.
  2. The receiving signer does vss verification of the share. If it fails, the share, the vss_commitment and the signature can be used to complain. Perhaps it makes sense to make the vss_verify function in my branch public because it would allow a third party with minimal information about the key generation session to verify the complaint.
  3. If the vss_hash turns out to not be identical, every signer submits all received vss_commitments and their signatures to the complaint handler, which should be sufficient to identify disruptive signers.

One problem is that there's no agreed-upon session identifier that is covered by the signature, so signatures are transferable between sessions. A malicious signer could send a received vss_commitment and signature from a different session to the complaint handler to blame an innocent signer that was present in two keygen sessions.

@jesseposner (Contributor Author)

@jonasnick

> I think one open todo that may be useful to keep in the OP is:
>
>   • decide on whether to use MuSig key aggregation

done

> One problem is that there's no agreed-upon session identifier that is covered by the signature, so signatures are transferable between sessions. A malicious signer could send a received vss_commitment and signature from a different session to the complaint handler to blame an innocent signer that was present in two keygen sessions.

Here's an idea for how we might incorporate a session ID by building on what you described:

  1. Each signer generates a random session ID for the DKG session. When distributing shares, signers sign the (1) VSS commitment, (2) share, (3) index, and (4) session ID. This signature could serve as the PoK. If it's not serving as the PoK, then it can just be a hash, and we presume that the hash is signed by the secure channel between the signers when it is sent.
  2. Upon share verification failure, the signer submits the signature and its inputs to the complaint handler. I agree, we should have a verification API that a third party handler can use for this purpose.
  3. Instead of signing the vss_hash by itself, the vss_hash is signed along with the session ID. Also, we might not need to double hash here: the signature inputs could just be the inputs to the vss_hash rather than the hash. If this signature isn't serving as the PoK, then we can use a hash instead of a signature and presume the hash is signed via the secure channel.
  4. If a signer receives an unexpected VSS set from another signer, all signers submit all the signatures they received from step 1 and from step 3 to the complaint handler, along with the inputs to those signatures. The session IDs prevent data from another session being used to blame an innocent signer.
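The session-ID binding in steps 1-4 can be sketched as follows (illustrative only: a plain hash stands in for the signature/PoK, and the tuple encoding is an assumption):

```python
import hashlib
import secrets

def share_tag(vss_commitment, share, index, session_id):
    # Bind (vss_commitment, share, index, session_id) together so that
    # evidence cannot be replayed across keygen sessions. Real code
    # would sign this data rather than just hash it.
    data = str((tuple(vss_commitment), share, index, session_id)).encode()
    return hashlib.sha256(data).hexdigest()

session_a = secrets.token_hex(16)
session_b = secrets.token_hex(16)
tag = share_tag([1, 2, 3], 99, 0, session_a)

# The same inputs verify within the session; evidence from session A
# does not verify against session B.
assert tag == share_tag([1, 2, 3], 99, 0, session_a)
assert tag != share_tag([1, 2, 3], 99, 0, session_b)
```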

@jonasnick (Contributor)

Regarding the issue of identifying dishonest signers in the key generation phase, @real-or-random rightfully points out that the transcript + signatures only help if the signers already agree on the individual public keys. If there's disagreement, the signature is worthless. Establishing agreement is difficult. I'm not sure how much sense it makes in practice to simply assume that there is already agreement on the public keys.

@jesseposner (Contributor Author)

@jonasnick Perhaps we can use the key pair provided to secp256k1_frost_share_gen when keygen is initialized as an authentication foundation. We would require the participants to agree upon individual public keys before beginning the protocol, and they provide their respective key pairs to secp256k1_frost_share_gen. The participants can use this key for signing various commitments during keygen, we require explicit signatures instead of implied secure channels in all cases, and the third-party handler would also be familiar with these keys as part of the public key agreement phase.

@jesseposner jesseposner force-pushed the frost branch 2 times, most recently from 54120ad to 28fa632 Compare May 28, 2022 04:22
@jonasnick (Contributor) left a comment

@jesseposner

> The session IDs prevent data from another session being used to blame an innocent signer.

Let me try to expand on the argument: if malicious signer A complains that innocent signer B sent a bad vss_comm that resulted in A having the wrong vss_hash, A must provide sig_B(vss_comm, share, ID) as well as sig_B(vss_hash, ID). If these signatures are from a different session, then the vss_hash differs, and therefore B can prove its innocence by showing that it computed the vss_hash correctly. As an aside, the vss_hash may be the same across sessions because it is derived deterministically from the inputs to keygen.

Cool idea. But it's really hard to convince yourself that it works in any case and risks inventing complicated machinery that turns out to be broken. I think this needs more formal treatment. In any case, blaming - even with your idea - does not seem to require API changes. Blaming seems to be something that we can focus on later.

jesseposner added a commit to jesseposner/secp256k1-zkp that referenced this pull request Dec 9, 2022
jesseposner added a commit to jesseposner/secp256k1-zkp that referenced this pull request Dec 9, 2022
Responds to comment by @ariard:

> It could be valuable to add an example for a FROST-based adaptor signature protocol. I can try to add one later.

See
BlockstreamResearch#138 (comment)
@elichai (Contributor) left a comment

Reviewed until second commit (ec537f5)

* each generate 2 shares, distribute 1 share to each other using a secure
* channel, and keep 1 for themselves.
*
* Each participant _must_ have a secure channel with each other participant
Contributor:

Should we maybe use the pk of the recipient to do ECDH+encryption such that only the recipient can decrypt this?
That way we remove the need for a secure channel and everything can be sent publicly. The downside is that this potentially mixes multiple cryptographic schemes (the pk might be used for Schnorr + ECDH + the share at 0 in the DKG).
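A toy sketch of this ECDH idea, with integers mod a prime standing in for the real group and XOR standing in for a proper KDF/cipher (all choices here are assumptions, not a proposal for the actual scheme):

```python
import hashlib

P = 2**255 - 19  # stand-in prime; "x*G" is modeled as x itself

def pub(x):
    return x % P

def ecdh_key(my_secret, their_pub):
    # "x*Y == y*X" holds in the toy just as on a real curve.
    shared = (my_secret * their_pub) % P
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()

def xor_encrypt(key, share):
    # One-time pad from the derived key; encrypt and decrypt are the same op.
    pad = int.from_bytes(key, "big")
    return share ^ pad

a, b = 111, 222                          # sender / recipient secrets
A, B = pub(a), pub(b)
ct = xor_encrypt(ecdh_key(a, B), 12345)  # sender encrypts the share
pt = xor_encrypt(ecdh_key(b, A), ct)     # recipient decrypts
assert pt == 12345
```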

if (!secp256k1_xonly_pubkey_serialize(ctx, buf, pk)) {
return 0;
}
secp256k1_sha256_initialize_tagged(&sha, (unsigned char*)"FROST/index", sizeof("FROST/index") - 1);
Contributor:

Should we also use the contextString here?

secp256k1_sha256_initialize_tagged(&sha, (unsigned char*)"FROST/index", sizeof("FROST/index") - 1);
secp256k1_sha256_write(&sha, buf, sizeof(buf));
/* TODO: add sub_indices for weights > 1 */
secp256k1_sha256_write(&sha, zerobyte, 1);
Contributor:

What's the point of the zero byte? AFAIR this function is used to generate the index ID of a party using its public key, so I'm not sure why that hash would also include some sort of index that is always zero.

Contributor Author

The structure tagged_hash(pk || idx) includes a sub_index (idx) that allows weighting for participants. For example, a participant with weight 2 would have an index at tagged_hash(pk || 0) and tagged_hash(pk || 1), with the same pk in both, and would receive 2 sets of shares, one for each index, giving that participant a greater weight.
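A minimal sketch of how such weighted indices could be derived. The tag, helper names, and reduction mod the curve order are illustrative assumptions, not the module's exact scheme:

```python
import hashlib

# secp256k1 group order (for reducing the hash to a scalar index)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def tagged_hash(tag: bytes, msg: bytes) -> bytes:
    """BIP-340 style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    t = hashlib.sha256(tag).digest()
    return hashlib.sha256(t + t + msg).digest()

def frost_index(pk32: bytes, sub_idx: int) -> int:
    """Hypothetical index derivation: tagged_hash(pk || sub_idx) mod N."""
    digest = tagged_hash(b"FROST/index", pk32 + bytes([sub_idx]))
    return int.from_bytes(digest, "big") % N

pk = bytes(range(32))  # placeholder 32-byte x-only pubkey
# a weight-2 participant holds two distinct indices for the same pk
assert frost_index(pk, 0) != frost_index(pk, 1)
```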

secp256k1_scalar idx;
secp256k1_scalar sk;
secp256k1_scalar share_i;
secp256k1_ge ge_tmp;
Contributor

can you rename this to sender_pk and the pk in the input to recipient_pk?

ARG_CHECK(session_id != NULL);
ARG_CHECK(keypair != NULL);
ARG_CHECK(pk != NULL);
ARG_CHECK(threshold > 1);
Contributor

Should we really disallow threshold=1? This will just create a constant polynomial with the secret key as the only term, and every party will have the full secret key (I understand this will defeat the purpose of using a threshold scheme, but it's technically possible).

Contributor Author

@jesseposner jesseposner Jan 13, 2023

My thinking is that we should presume this is user error because even though it's possible it doesn't seem useful. But if we think that flexibility might have some utility, I think it's fine to allow.

* threshold: the minimum number of signers required to produce a
* signature
*/
SECP256K1_API int secp256k1_frost_share_gen(
Contributor

According to the FROST KeyGen figure on page 12 of the FROST paper, step 4 requires a broadcast of commitments + zkproofs before round 2, where you send the actual shares. I don't see any attack stemming from this change, but I wouldn't do this without a justification.

Also, if I understand this correctly, each participant P_i should run this function in a loop over all P_j (i != j) participants? Isn't that a bit wasteful (regenerating all the coefficients)? Why not pass in a list of pks, and output a state + all messages?

Contributor

Also, it seems like the PoK of the first coefficient (secret key) is missing? (step 2 of the figure) This is required to prevent a t-1 rogue-key attack (we could also maybe use a MuSig-like trick, but we might need a security proof for the Feldman + MuSig trick @real-or-random )

Contributor Author

@elichai I need to call this out more clearly in the docs and PR description and BIP, but see my comment here:

The vss_hash signatures are also PoKs. Instead of distributing PoKs in the first round, we distribute them in the second round, and instead of simply using the index and a context string as the message for the signature that serves as the PoK, we use the vss_hash as the message.

By moving the PoK to the second round and including the vss_hash in the PoK, we fulfill the broadcast channel requirement with the same set of signatures used for the PoK.

Contributor Author

Also, if I understand this correctly, each participant P_i should run this function in a loop over all P_j (i != j) participants? Isn't that a bit wasteful (regenerating all the coefficients)? Why not pass in a list of pks, and output a state + all messages?

For devices with limited resources, the API provides the flexibility to generate the data sequentially instead of all at once. A device only needs to store its private key and session_id (because shares are generated deterministically from those values), and it can go offline between share generations if it doesn't have all the public keys for each participant when it begins generating shares.

The commitments are an optional argument to avoid requiring that they be regenerated over and over again, but it's a good point that the coefficients get regenerated each time.

@l2xl also makes the suggestion for even more API flexibility:

Please split the secp256k1_frost_share_gen() function in two:

  • secp256k1_frost_keyshare_commitment_gen()
  • secp256k1_frost_keyshare_gen()

Contributor

I'm still unsure if sending the evaluation of f(index) together with the coefficients is actually safe. This removes a round from the FROST keygen algorithm, so it seems like something they'd have done in the paper if it were safe.

I'd need to go over their security proof and see the reasoning there to see if this is used; if you have any insights or justifications I'd love to hear them.

Contributor Author

@jesseposner jesseposner Jan 19, 2023

@elichai That's a good point. This optimization doesn't even save us a round; it just saves sending an extra signature.

In the prescribed protocol in this PR, there's 2 rounds:

  1. distribute shares and coefficient commitments
  2. distribute signatures of VSS hashes

If we stick to the paper (and add VSS hash sigs for the broadcast requirement), there's also 2 rounds:

  1. distribute coefficient commitments and signatures of a context string
  2. distribute shares and signatures of the VSS hash

With the second approach (paper + VSS hash sigs) there's 2 distributions of signatures: the first signs a context string to prevent a rogue-key attack, and the second signs the VSS hash for the broadcast requirement.

It would only take some slight revisions to the docs and maybe the APIs as well to support this change. The optimization is probably not worth the time and effort required to prove it secure so I'm currently thinking I should make this change.

Contributor

@elichai elichai Jan 19, 2023

Hmm, you're right. The "3rd round" in my head was actually not a round but a local computation.

I'll try to figure out from the proof in the paper if it's safe to send the evaluation before first committing to the coefficients + the zkproof of ai_0

did you write up something about this signature satisfying the broadcast requirement?

Contributor Author

did you write up something about this signature satisfying the broadcast requirement?

The only reference to it is here.

In Section 2.1 of the FROST paper, it states:

In practice, implementations guarantee consistency of participants’ views by using techniques such as posting commitments to a centralized server that is trusted to provide a single view to all participants, or adding another protocol round where participants compare their received commitment values to ensure they are identical.

The VSS hash signatures are intended to serve as that comparison of received commitment values.
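A sketch of that comparison of received commitment values. Names are hypothetical and the signatures over the digest are omitted; the point is only that every honest participant must arrive at the same hash over everyone's commitments before the DKG can be considered to have succeeded:

```python
import hashlib

def vss_hash(all_commitments):
    """Hash one participant's view of everyone's VSS commitments
    (sorted for a canonical ordering). Signing this digest and comparing
    views emulates the broadcast-channel consistency check."""
    h = hashlib.sha256()
    for c in sorted(all_commitments):
        h.update(c)
    return h.digest()

# Alice and Bob received identical commitment sets -> their views agree,
# regardless of the order in which the commitments arrived
commits = [b"commitA", b"commitB", b"commitC"]
assert vss_hash(commits) == vss_hash(list(reversed(commits)))

# A malicious signer sent Bob a different commitment -> Bob's hash
# differs, and the run must be aborted when the signed hashes are compared
bobs_view = [b"commitA", b"commitB-forged", b"commitC"]
assert vss_hash(commits) != vss_hash(bobs_view)
```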

Comment on lines 120 to 122
for (i = 0; i < 8; i++) {
rngseed[i + 0] = threshold / (1ull << (i * 8));
}
Contributor

maybe add a secp256k1_write_be64 like secp256k1_write_be32 for better readability?
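For reference, the existing loop writes the 64-bit threshold in little-endian byte order, whereas a write_be64 helper mirroring secp256k1_write_be32 would emit big-endian, so adopting it as-is would change the hashed bytes unless they are reversed. A quick check of both encodings (a sketch, not the library's code):

```python
def loop_encode(threshold: int) -> bytes:
    """Mirror of the C loop: rngseed[i] = threshold / (1ull << (i * 8)),
    truncated to one byte per iteration."""
    return bytes((threshold >> (i * 8)) & 0xFF for i in range(8))

def write_be64(threshold: int) -> bytes:
    """What a hypothetical secp256k1_write_be64-style helper would produce."""
    return threshold.to_bytes(8, "big")

t = 3
assert loop_encode(t) == t.to_bytes(8, "little")  # the loop is little-endian
assert write_be64(t) == loop_encode(t)[::-1]      # be64 is the byte reverse
```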

}
/* Compute commitment to each coefficient */
secp256k1_ecmult_gen(&ctx->ecmult_gen_ctx, &rj, &rand[i % 2]);
secp256k1_ge_set_gej(&rp, &rj);
Contributor

secp256k1_ge_set_gej_var should be enough, the commitments here are public
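Since these per-coefficient commitments are what receivers check their shares against, here is a toy Feldman VSS sketch. A small multiplicative group mod 23 stands in for secp256k1 points, so the arithmetic is illustrative only; the structure (commit to each coefficient, verify g^share against the commitments) is the technique in question:

```python
# Toy Feldman VSS over the order-11 subgroup of Z_23* (generator g = 4).
# Coefficients live mod q = 11; commitments are group elements mod p = 23.
p, q, g = 23, 11, 4

def commit(coeffs):
    """Public commitment to each polynomial coefficient: A_j = g^{a_j}."""
    return [pow(g, a, p) for a in coeffs]

def share(coeffs, i):
    """Participant i's share: f(i) = sum a_j * i^j mod q."""
    return sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q

def verify(commitments, i, s):
    """Check g^s == prod A_j^{i^j}: the share matches the commitments."""
    rhs = 1
    for j, A in enumerate(commitments):
        rhs = (rhs * pow(A, pow(i, j, q), p)) % p
    return pow(g, s, p) == rhs

coeffs = [7, 3]            # f(x) = 7 + 3x, i.e. threshold 2
A = commit(coeffs)
assert verify(A, 1, share(coeffs, 1))
assert verify(A, 2, share(coeffs, 2))
assert not verify(A, 2, (share(coeffs, 2) + 1) % q)  # tampered share fails
```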

/* Compute seed which commits to threshold and session ID */
secp256k1_scalar_get_b32(buf, &sk);
secp256k1_sha256_initialize(&sha);
secp256k1_sha256_write(&sha, session_id, 32);
Contributor

So the session_id is used as a seed to generate the coefficients? If so, this must have ~128 bits of secure randomness, so we should document that it must be uniformly random and not just recommend this.

#ifdef __cplusplus
extern "C" {
#endif

#include <stdint.h>
Contributor

It doesn't seem like this header is actually being used here. I think you meant to include stddef.h, but that's already transitively included from secp256k1.h.

@elichai
Contributor

elichai commented Jan 13, 2023

Do you have any thoughts about what encryption we might use? Perhaps something built on secp256k1_scalar_chacha20?

My go-to answer is always ChaCha20-Poly1305 :). Now that we have ChaCha20, I don't think adding Poly1305 is that terrible, but I'll let others weigh in on this.

secp256k1_pubkey *vss_commitment,
secp256k1_frost_share *share,
const unsigned char *session_id32,
const secp256k1_keypair *keypair,
Contributor

Is it a good idea to pass the "secret that is shared" as input? This secret shouldn't be used anywhere else, because it's no longer your secret after you've shared it among n other people (assuming t < n).
Meaning that if a user uses his regular private key here, he actually allows any t parties to steal his money, which is unrelated to the generated public key at the end of the protocol.

Contributor Author

@jesseposner jesseposner Feb 23, 2023

I think we need some sort of secret value passed to this API because, as far as I understand it, the secp APIs don't provide entropy directly (e.g. from urandom) but require that entropy be provided as an input.

It's a good call-out though, perhaps some strongly worded warnings and instructions can warn users not to use that secret for anything else. Do you have something else in mind? Is there a way we can avoid passing the secret that is shared as input?

Contributor

We can replace it with 32 bytes of ephemeral random entropy (and document that it has to be ephemeral) and also generate the first coefficient from ChaCha.
But maybe this limits some use cases I'm not seeing.
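A rough sketch of that idea, with a SHA-256 counter PRG standing in for the library's ChaCha20-based generator (the function name and derivation details are illustrative assumptions):

```python
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def derive_coefficients(seed32: bytes, threshold: int):
    """Derive all polynomial coefficients, including the constant term
    (the shared secret), from a 32-byte ephemeral seed via a counter-mode
    hash. No separate keypair input is needed."""
    assert len(seed32) == 32
    coeffs = []
    for i in range(threshold):
        digest = hashlib.sha256(seed32 + i.to_bytes(4, "big")).digest()
        coeffs.append(int.from_bytes(digest, "big") % N)
    return coeffs

seed = bytes(32)  # in practice: fresh uniform randomness, used exactly once
a = derive_coefficients(seed, 3)
assert a == derive_coefficients(seed, 3)  # deterministic given the seed
```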

Contributor Author

The availability of the keypair for the first coefficient makes it easy to reuse this keypair for the signed VSS hash that also serves as a proof of knowledge.

However, as we discussed above, we might want to opt for a more conservative approach of sending the proof of knowledge in the first round independently of the signed VSS hash. If we do that, then share_gen can export a proof of knowledge, so the availability of the keypair for the first coefficient is not needed.

We'll still need some keypair to use for the signed VSS hash. Perhaps there should be a set of authentication public keys exchanged between the participants that can be used for signing the VSS hash and also for encrypting/decrypting shares.

jesseposner added a commit to jesseposner/secp256k1-zkp that referenced this pull request Mar 29, 2023
@jesseposner
Contributor Author

Proposed refactor based upon the review comments received so far: https://gist.github.com/jesseposner/0168e88f9cc911a3dc9095a78f6efc15

@real-or-random
Collaborator

Just found this fork of libsecp256k1 with a FROST module: https://github.com/bancaditalia/secp256k1-frost/blob/frost/src/modules/frost/README.md

@leishman

Not specifically related to this PR but I'd like to flag that there is now a 2 BTC bounty for a FROST powered wallet that allows signer membership set to change: https://twitter.com/gladstein/status/1684567113430892545

@jonasnick
Contributor

jonasnick commented Aug 21, 2023

I added three commits to a forked branch that applies the global updates introduced with the recent sync of -zkp to upstream libsecp (see #266):

https://github.com/jonasnick/secp256k1-zkp/commits/frost-jn

Feel free to cherry-pick.

for (j = 0; j < N_SIGNERS; j++) {
assigned_shares[j] = &shares[j][i];
}
/* Each participant aggregates the shares they received. */
Contributor Author

@jesseposner jesseposner Aug 21, 2023

We might want to perform the verification of the VSS hashes of all participants inside of share_agg to enforce this verification requirement as a prerequisite to share aggregation. Scratch that, share_agg generates the vss_hash.

Comment on lines 56 to 57
/* Computes indexhash = tagged_hash(pk || idx) */
static int secp256k1_frost_compute_indexhash(const secp256k1_context *ctx, secp256k1_scalar *indexhash, const secp256k1_xonly_pubkey *pk) {
Contributor

This comment seems to be outdated. It's just tagged_hash(pk||0)

Contributor Author

Eventually we might want tagged_hash(pk || sub_idx) as referred to below on line 67.

Contributor

Is there a specific reason to add the zero byte before we support weights? Is there a downside to adding the sub_idx only once we start supporting weights?

Contributor Author

@jesseposner jesseposner Aug 23, 2023

I don't think there's a downside, so it's probably better to remove the cruft until we decide to implement it.

examples/frost.c
Comment on lines 299 to 312
printf("Signing VSS.............");
if (!sign_vss(ctx, signer_secrets, signers, sigs)) {
printf("FAILED\n");
return 1;
}
printf("ok\n");
printf("Verifying VSS...........");
for (i = 0; i < N_SIGNERS; i++) {
if (!secp256k1_schnorrsig_verify(ctx, sigs[i], signers[0].vss_hash, 32, &signers[i].pubkey)) {
printf("FAILED\n");
return 1;
}
}
printf("ok\n");
Contributor

Do we still need this now that we verify the pok already inside share_gen?

Contributor Author

@jesseposner jesseposner Aug 22, 2023

These signatures, unlike the pok, sign the vss_hash, an output of share_agg, to fulfill the broadcast channel requirement.

Contributor

Is this a requirement for the broadcast channel? It seems quite likely to me that the broadcast channel has its own means of ensuring authenticity.

Contributor

Note to self: the share_agg doc instructs the user to sign the vss_hash (but doesn't mention secure broadcast).

Contributor

@jonasnick jonasnick Aug 23, 2023

Sorry, I entirely forgot that we tried to instantiate the broadcast channel. However, I don't think that the doc in share_agg correctly explains how to do that.

As pointed out by @real-or-random, a malicious signer could send the correct vss_hash to Alice and a fake vss_hash to Bob. Alice would think the DKG has succeeded, while Bob thinks it has failed. Bob deletes his key material, which may result in the inability to sign despite Alice believing everything succeeded. We'd need to describe an echo-broadcast protocol (https://eprint.iacr.org/2002/040.pdf Protocol 1 and Protocol 2, H/T @real-or-random). It's probably best to describe this option in detail in the BIP and refer to it in the function documentation.

Contributor Author

I don't think the malicious signer can make n-of-n succeed unless the malicious signer's PoK is invalid. I think the most the malicious signer can do with an inconsistent VSS, but valid PoK, is trick participants into thinking they have valid shares that won't actually be usable for signing.

Contributor

Hm, this is how I'd think a malicious signer could make it work in a 2-of-3 setup:

  1. Eve generates random key x
  2. Eve generates polynomial f for x, sends commitment to f, share and pok to Alice
  3. Eve generates polynomial g for x distinct from f, sends commitment to g, share and pok to Bob
  4. When creating partial sigs, Alice and Bob use their aggregate shares multiplied with the lagrange coefficient
  5. Eve creates its "normal" partial signature, but subtracts Alice's and Bob's partial signatures for the garbage shares and adds a partial signature for the key x

This should result in a valid signature

Contributor Author

But wouldn't this be a signature for key x, not for the output of the DKG, which would be the sum of Alice, Bob, and Eve's keys? If Eve generates random key x, and Alice generates a random key y, and Bob generates a random key z, then how would Eve make the DKG output the key xG for the public key, when it should output (x+y+z)G for the public key? And if the DKG does output (x+y+z)G, then wouldn't Eve need to produce a signature for (x+y+z)G and not simply xG?

Contributor

@jonasnick jonasnick Aug 25, 2023

Sorry, I'm doing a lot of handwaving. It's true that the DKG would have to produce a signature for (x+y+z)G and not just x. Assume Eve has shares y' from Alice and z' from Bob. Then the sum of Alice's and Bob's partial signatures is a signature for (y-y'+z-z'+g)G where g are the garbage shares sent by Eve. Eve produces a partial sig for (x + y' + z' - g)G such that the sum of all partial sigs should be (x+y+z)G.

Contributor Author

x, y, and z are secrets, and y' and z' are polynomial shares that reconstruct to secrets, and typically shares and secrets are not added to each other. So I'm not following how we can compute a pubkey by adding secrets to shares with (y-y'+z-z'+g)G and (x + y' + z' - g)G.

Maybe it would help to define a full set of terms:

  1. Eve generates random secret x, and shares x_e, x_a, and x_b. x_a is from polynomial f and x_b is from polynomial g.
  2. Alice generates random secret y, and shares y_e, y_a, and y_b
  3. Bob generates random secret z, and shares z_e, z_a, and z_b
  4. The DKG pubkey is P = (x + y + z)G
  5. Eve's aggregate share is e = x_e + y_e + z_e
  6. Alice's aggregate share is a = x_a + y_a + z_a
  7. Bob's aggregate share is b = x_b + y_b + z_b
  8. For the signature, the nonce pubkey is R = r_e*G + r_a*G + r_b*G
  9. Alice's partial sig is s_a = r_a + H(P || R || m) * a * l_a
  10. Bob's partial sig is s_b = r_b + H(P || R || m) * b * l_b

With this setup, how does Eve produce a valid signature for (P, R, m)?
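The share arithmetic in the terms above can be checked numerically. This toy sketch works in a small prime field standing in for the secp256k1 scalar field and confirms one point of the setup: any threshold subset of the aggregate shares interpolates to x + y + z, the discrete log of the DKG output P:

```python
# Toy Shamir arithmetic mod a small prime q, illustrating the terms above.
q = 101

def shares(secret, coeff, ids):
    """Degree-1 polynomial f(i) = secret + coeff*i, evaluated per index."""
    return {i: (secret + coeff * i) % q for i in ids}

def lagrange(subset, j):
    """Lagrange coefficient l_j for interpolating f(0) from `subset`."""
    num, den = 1, 1
    for m in subset:
        if m != j:
            num = (num * -m) % q
            den = (den * (j - m)) % q
    return num * pow(den, q - 2, q) % q  # divide via Fermat inverse

ids = [1, 2, 3]           # Eve, Alice, Bob
x, y, z = 11, 22, 33      # their secrets
sx = shares(x, 5, ids)    # x_e, x_a, x_b
sy = shares(y, 7, ids)    # y_e, y_a, y_b
sz = shares(z, 9, ids)    # z_e, z_a, z_b
agg = {i: (sx[i] + sy[i] + sz[i]) % q for i in ids}  # e, a, b

# any 2-of-3 subset of aggregate shares reconstructs x + y + z
for subset in ([1, 2], [1, 3], [2, 3]):
    rec = sum(agg[j] * lagrange(subset, j) for j in subset) % q
    assert rec == (x + y + z) % q
```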

@jonasnick
Contributor

@real-or-random and I have been brainstorming about the potential structure of the frost module's DKG, as well as what a potential bip-frost's DKG section might look like. We've consolidated our ideas so far in this repository: https://github.com/jonasnick/bip-frost-dkg. In summary we've been considering SimplPedPop and variants. This is pretty close to what we've already attempted here, but we now have a much better picture of the requirements on the caller and the broadcast channel.


for (i = 0; i < N_SIGNERS; i++) {
/* Generate a polynomial share for the participants */
if (!secp256k1_frost_shares_gen(ctx, shares[i], signer[i].vss_commitment, signer[i].pok, signer_secrets[i].seed, THRESHOLD, N_SIGNERS, ids)) {
Contributor

I checked why the valgrind CI jobs fail and it looks like signer_secrets[i].seed is never initialized.

Contributor Author

Ah, good catch. Thanks for tracking that down!

This commit adds the foundational configuration and building scripts
and an initial structure for the project.
This commit adds share generation, as well as share serialization and
parsing.
This commit adds share aggregation and verification, as well as
computation of public verification shares.
This commit adds nonce generation, as well as serialization and
parsing.
This commit adds BIP-341 ("Taproot") and BIP-32 ("ordinary") public key
tweaking.
This commit adds nonce aggregation, as well as adaptor signatures.
This commit adds signature generation and aggregation, as well as
partial signature serialization and parsing.
This commit adds an example file to demonstrate how to use the module.
Add API tests, nonce tests, tweak tests, SHA-256 tag tests, and
constant-time tests.
This commit adds a documentation file with detailed instructions for how
to use the module properly.
9 participants