diff --git a/byzcoin/README.md b/byzcoin/README.md
index 7da4325f78..4e072ef5c8 100644
--- a/byzcoin/README.md
+++ b/byzcoin/README.md
@@ -6,7 +6,7 @@ ByzCoin
 # ByzCoin
 
 This implementation of ByzCoin has its goal to implement the protocol
-described in the [ByzCoin Paper](https://eprint.iacr.org/2017/406.pdf).
+described in the [OmniLedger Paper](https://eprint.iacr.org/2017/406.pdf).
 As the paper is only describing the network interaction and very few
 of the details of how the transactions themselves are handled, we
 will include them as seem fit.
diff --git a/eventlog/README.md b/eventlog/README.md
index a14b50993c..f1ce77c0a1 100644
--- a/eventlog/README.md
+++ b/eventlog/README.md
@@ -53,7 +53,7 @@ to initialise it, the first for when you do not have an existing eventlog
 instance on ByzCoin to connect to, the other when you do.
 
 ```java
-// Create the eventlog instance. It expects an ByzCoin RPC, a list of
+// Create the eventlog instance. It expects a ByzCoin RPC, a list of
 // signers that have the "spawn:eventlog" permission and the darcID for where
 // the permission is stored.
 EventLogInstance el = new EventLogInstance(bcRPC, admins, darcID);
diff --git a/eventlog/el/README.md b/eventlog/el/README.md
index fc711b044f..7ab719648e 100644
--- a/eventlog/el/README.md
+++ b/eventlog/el/README.md
@@ -16,11 +16,11 @@ private key the right to make new event logs.
 $ PRIVATE_KEY=$priv el create -ol $file
 ```
 
-The ByzCoin admin will give you an ByzCoin config file, which you will
-use with the -bc argument, or you can set the BC environment
-variable to the name of the ByzCoin config file. A new event log will be spawned,
-and the event log ID will be printed. Set the EL environment variable to
-communicate it to future calls to the `el` program.
+The ByzCoin admin will give you a ByzCoin config file, which you will
+use with the -bc argument, or you can set the BC environment variable to the
+name of the ByzCoin config file. A new event log will be spawned, and the
+event log ID will be printed. Set the EL environment variable to communicate
+it to future calls to the `el` program.
 
 You need to give the private key from above, using the PRIVATE_KEY
 environment variable or the `-priv` argument.
diff --git a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/ByzCoinRPC.java b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/ByzCoinRPC.java
index f2ba73978c..b763543a06 100644
--- a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/ByzCoinRPC.java
+++ b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/ByzCoinRPC.java
@@ -74,7 +74,7 @@ public ByzCoinRPC(Roster r, Darc d, Duration blockInterval) throws CothorityExce
     }
 
     /**
-     * Constructs an ByzCoinRPC from a known configuration. The constructor will communicate with the service to
+     * Constructs a ByzCoinRPC from a known configuration. The constructor will communicate with the service to
      * populate other fields and perform verification.
      *
      * @param roster the roster to talk to
@@ -101,16 +101,6 @@ public ByzCoinRPC(Roster roster, SkipblockId skipchainId) throws CothorityExcept
         latest = skipchain.getLatestSkipblock();
     }
 
-    /**
-     * Instantiates an byzcoin object given the byte representation. The byzcoin must already have been
-     * initialized on the cothority.
-     *
-     * @param buf is the representation of the basic byzcoin parameters, it should have a Roster and a skipchain ID.
-     */
-    public ByzCoinRPC(byte[] buf) {
-        throw new RuntimeException("Not implemented yet");
-    }
-
     /**
      * Sends a transaction to byzcoin, but doesn't wait for the inclusion of this transaction in a block.
      * Once the transaction has been sent, you need to poll to verify if it has been included or not.
@@ -258,7 +248,7 @@ public Block getBlock(SkipblockId id) throws CothorityCommunicationException, Co
     /**
      * Fetches the latest block from the Skipchain and returns the corresponding Block.
      *
-     * @return an Block representation of the skipblock
+     * @return a Block representation of the skipblock
      * @throws CothorityCommunicationException if it couldn't contact the nodes
      * @throws CothorityCryptoException if the omniblock is invalid
      */
diff --git a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/Config.java b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/Config.java
index d0fd83e94d..c7bf8898db 100644
--- a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/Config.java
+++ b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/Config.java
@@ -10,8 +10,8 @@
 import static java.time.temporal.ChronoUnit.NANOS;
 
 /**
- * Config is the genesis configuration of an byzcoin instance. It can be stored only once in byzcoin
- * and defines the basic running parameters of byzcoin.
+ * Config is the genesis configuration of a ByzCoin instance. It can be stored only once in ByzCoin
+ * and defines the basic running parameters of the ledger and its underlying skipchain.
 */
 public class Config {
     private Duration blockInterval;
diff --git a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/DarcInstance.java b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/DarcInstance.java
index 40e26480b0..67ccd1e14f 100644
--- a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/DarcInstance.java
+++ b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/DarcInstance.java
@@ -35,7 +35,7 @@ public class DarcInstance {
      * the current darcInstance. If the instance is not found, or is not of
      * contractId "darc", an exception will be thrown.
      *
-     * @param bc is a link to an byzcoin instance that is running
+     * @param bc is a ByzCoin instance that is running
      * @param id of the darc-instance to connect to
      * @throws CothorityException
      */
diff --git a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/ValueInstance.java b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/ValueInstance.java
index ddfaf4ff3d..779e3f5ca9 100644
--- a/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/ValueInstance.java
+++ b/external/java/src/main/java/ch/epfl/dedis/lib/byzcoin/contracts/ValueInstance.java
@@ -31,7 +31,7 @@ public class ValueInstance {
      * the current valueInstance. If the instance is not found, or is not of
      * contractId "Value", an exception will be thrown.
      *
-     * @param bc is a link to an byzcoin instance that is running
+     * @param bc is a ByzCoin instance that is running
      * @param id of the value-instance to connect to
      * @throws CothorityException
      */
diff --git a/external/js/cothority/lib/byzcoin/ByzCoinRPC.js b/external/js/cothority/lib/byzcoin/ByzCoinRPC.js
index 6a2b166387..f820ac8aef 100644
--- a/external/js/cothority/lib/byzcoin/ByzCoinRPC.js
+++ b/external/js/cothority/lib/byzcoin/ByzCoinRPC.js
@@ -12,7 +12,7 @@ const protobuf = require("protobufjs");
  */
 class ByzCoinRPC {
     /**
-     * Constructs an ByzCoinRPC when the complete configuration is known
+     * Constructs a ByzCoinRPC when the complete configuration is known
      *
      * @param {Config} config - the configuration of the ByzCoin
      * @param {Socket|LeaderSocket|RosterSocket} socket - the socket to communicate with the ByzCoin ledger
@@ -167,7 +167,7 @@
     }
 
     /**
-     * Constructs an ByzGenRPC from a known configuration. The constructor will communicate with the service to
+     * Constructs a ByzGenRPC from a known configuration. The constructor will communicate with the service to
      * populate other fields and perform verification.
      *
      * @param {Socket|LeaderSocket|RosterSocket} socket - the socket to communicate with the conode
diff --git a/external/js/cothority/lib/byzcoin/Config.js b/external/js/cothority/lib/byzcoin/Config.js
index 8aa1959577..78c6c5905e 100644
--- a/external/js/cothority/lib/byzcoin/Config.js
+++ b/external/js/cothority/lib/byzcoin/Config.js
@@ -2,8 +2,8 @@ const root = require("../protobuf/index.js").root;
 const identity = require("../identity");
 
 /**
- * Config is the genesis configuration of an byzcoin instance. It can be stored only once in byzcoin
- * and defines the basic running parameters of byzcoin.
+ * Config is the genesis configuration of a ByzCoin instance. It can be stored only once in ByzCoin
+ * and defines the basic running parameters of ByzCoin.
 */
 class Config {
     /**
diff --git a/randhound/README.md b/randhound/README.md
deleted file mode 100644
index 2fb74b884d..0000000000
--- a/randhound/README.md
+++ /dev/null
@@ -1,42 +0,0 @@
-Navigation: [DEDIS](https://github.com/dedis/doc/tree/master/README.md) ::
-[Cothority](../README.md) ::
-[Building Blocks](../doc/BuildingBlocks.md) ::
-RandHound
-
-# RandHound
-
-Bias-resistant public randomness is a critical component
-in many (distributed) protocols. Generating public randomness
-is hard, however, because active adversaries may behave
-dishonestly to bias public random choices toward their advantage.
-Existing solutions do not scale to hundreds or thousands
-of participants, as is needed in many decentralized systems.
-We propose two large-scale distributed protocols, RandHound
-and RandHerd, which provide publicly-verifiable, unpredictable,
-and unbiasable randomness against Byzantine adversaries. RandHound
-relies on an untrusted client to divide a set of randomness
-servers into groups for scalability, and it depends on the pigeonhole
-principle to ensure output integrity, even for non-random,
-adversarial group choices. RandHerd implements an efficient,
-decentralized randomness beacon.
-
-RandHerd is structurally
-similar to a BFT protocol, but uses RandHound in a one-time
-setup to arrange participants into verifiably unbiased random
-secret-sharing groups, which then repeatedly produce random
-output at predefined intervals. Our prototype demonstrates that
-RandHound and RandHerd achieve good performance across
-hundreds of participants while retaining a low failure probability
-by properly selecting protocol parameters, such as a group size
-and secret-sharing threshold. For example, when sharding 512
-nodes into groups of 32, our experiments show that RandHound
-can produce fresh random output after 240 seconds. RandHerd,
-after a setup phase of 260 seconds, is able to generate fresh
-random output in intervals of approximately 6 seconds. For this
-configuration, both protocols operate at a failure probability of
-at most 0.08% against a Byzantine adversary.
-
-## Research Papers
-
-- [RandHound](https://eprint.iacr.org/2016/1067.pdf)Scalable Bias-Resistant
-Distributed Randomness
diff --git a/randhound/proof.go b/randhound/proof.go
deleted file mode 100644
index e7d279d8a9..0000000000
--- a/randhound/proof.go
+++ /dev/null
@@ -1,302 +0,0 @@
-// +build experimental
-
-package randhound
-
-import (
-	"errors"
-
-	"github.com/dedis/kyber"
-	"github.com/dedis/kyber/poly"
-	"github.com/dedis/kyber/util/random"
-	"github.com/dedis/onet/crypto"
-)
-
-// Package proof provides functionality to create and verify non-interactive
-// zero-knowledge (NIZK) proofs for the equality of discrete logarithms (dlog).
-
-// Proof resembles a NIZK dlog-equality proof. Allows to handle multiple proofs.
-type Proof struct {
-	suite kyber.Suite
-	Base  []ProofBase
-	Core  []ProofCore
-}
-
-// ProofBase contains the base points against which the core proof is created.
-type ProofBase struct {
-	g kyber.Point
-	h kyber.Point
-}
-
-// ProofCore contains the core elements of the NIZK dlog-equality proof.
-type ProofCore struct {
-	C  kyber.Scalar // challenge
-	R  kyber.Scalar // response
-	VG kyber.Point  // public commitment with respect to base point G
-	VH kyber.Point  // public commitment with respect to base point H
-}
-
-// NewProof creates a new NIZK dlog-equality proof.
-func NewProof(suite kyber.Suite, g []kyber.Point, h []kyber.Point, core []ProofCore) (*Proof, error) {
-
-	if len(g) != len(h) {
-		return nil, errors.New("Received non-matching number of points")
-	}
-
-	n := len(g)
-	base := make([]ProofBase, n)
-	for i := range base {
-		base[i] = ProofBase{g: g[i], h: h[i]}
-	}
-
-	return &Proof{suite: suite, Base: base, Core: core}, nil
-}
-
-// Setup initializes the proof by randomly selecting a commitment v,
-// determining the challenge c = H(xG,xH,vG,vH) and the response r = v - cx.
-func (p *Proof) Setup(scalar ...kyber.Scalar) ([]kyber.Point, []kyber.Point, error) {
-
-	if len(scalar) != len(p.Base) {
-		return nil, nil, errors.New("Received unexpected number of scalars")
-	}
-
-	n := len(scalar)
-	p.Core = make([]ProofCore, n)
-	xG := make([]kyber.Point, n)
-	xH := make([]kyber.Point, n)
-	for i, x := range scalar {
-
-		xG[i] = p.suite.Point().Mul(p.Base[i].g, x)
-		xH[i] = p.suite.Point().Mul(p.Base[i].h, x)
-
-		// Commitment
-		v := p.suite.Scalar().Pick(random.Stream)
-		vG := p.suite.Point().Mul(p.Base[i].g, v)
-		vH := p.suite.Point().Mul(p.Base[i].h, v)
-
-		// Challenge
-		cb, err := crypto.HashArgsSuite(p.suite, xG[i], xH[i], vG, vH)
-		if err != nil {
-			return nil, nil, err
-		}
-		c := p.suite.Scalar().Pick(p.suite.Cipher(cb))
-
-		// Response
-		r := p.suite.Scalar()
-		r.Mul(x, c).Sub(v, r)
-
-		p.Core[i] = ProofCore{c, r, vG, vH}
-	}
-
-	return xG, xH, nil
-}
-
-// SetupCollective is similar to Setup with the difference that the challenge
-// is computed as the hash over all base points and commitments.
-func (p *Proof) SetupCollective(scalar ...kyber.Scalar) ([]kyber.Point, []kyber.Point, error) { - - if len(scalar) != len(p.Base) { - return nil, nil, errors.New("Received unexpected number of scalars") - } - - n := len(scalar) - p.Core = make([]ProofCore, n) - v := make([]kyber.Scalar, n) - xG := make([]kyber.Point, n) - xH := make([]kyber.Point, n) - vG := make([]kyber.Point, n) - vH := make([]kyber.Point, n) - for i, x := range scalar { - - xG[i] = p.suite.Point().Mul(p.Base[i].g, x) - xH[i] = p.suite.Point().Mul(p.Base[i].h, x) - - // Commitments - v[i] = p.suite.Scalar().Pick(random.Stream) - vG[i] = p.suite.Point().Mul(p.Base[i].g, v[i]) - vH[i] = p.suite.Point().Mul(p.Base[i].h, v[i]) - } - - // Collective challenge - cb, err := crypto.HashArgsSuite(p.suite, xG, xH, vG, vH) - if err != nil { - return nil, nil, err - } - c := p.suite.Scalar().Pick(p.suite.Cipher(cb)) - - // Responses - for i, x := range scalar { - r := p.suite.Scalar() - r.Mul(x, c).Sub(v[i], r) - p.Core[i] = ProofCore{c, r, vG[i], vH[i]} - } - - return xG, xH, nil -} - -// Verify validates the proof(s) against the given input by checking that vG == -// rG + c(xG) and vH == rH + c(xH) and returns the indices of those proofs that -// are valid (good) and non-valid (bad), respectively. 
-func (p *Proof) Verify(xG []kyber.Point, xH []kyber.Point) ([]int, []int, error) { - - if len(xG) != len(xH) { - return nil, nil, errors.New("Received unexpected number of points") - } - - var good, bad []int - for i := range p.Base { - if xG[i].Equal(p.suite.Point().Null()) || xH[i].Equal(p.suite.Point().Null()) { - bad = append(bad, i) - } else { - rG := p.suite.Point().Mul(p.Base[i].g, p.Core[i].R) - rH := p.suite.Point().Mul(p.Base[i].h, p.Core[i].R) - cxG := p.suite.Point().Mul(xG[i], p.Core[i].C) - cxH := p.suite.Point().Mul(xH[i], p.Core[i].C) - a := p.suite.Point().Add(rG, cxG) - b := p.suite.Point().Add(rH, cxH) - - if p.Core[i].VG.Equal(a) && p.Core[i].VH.Equal(b) { - good = append(good, i) - } else { - bad = append(bad, i) - } - } - } - - return good, bad, nil -} - -// PVSS implements public verifiable secret sharing. -type PVSS struct { - suite kyber.Suite // Suite - h kyber.Point // Base point for polynomial commits - t int // Secret sharing threshold -} - -// NewPVSS creates a new PVSS struct using the given suite, base point, and -// secret sharing threshold. -func NewPVSS(s kyber.Suite, h kyber.Point, t int) *PVSS { - return &PVSS{suite: s, h: h, t: t} -} - -// Split creates PVSS shares encrypted by the public keys in X and -// provides a NIZK encryption consistency proof for each share. -func (pv *PVSS) Split(X []kyber.Point, secret kyber.Scalar) ([]int, []kyber.Point, []ProofCore, []byte, error) { - - n := len(X) - - // Create secret sharing polynomial - priPoly := new(poly.PriPoly).Pick(pv.suite, pv.t, secret, random.Stream) - - // Create secret set of shares - shares := new(poly.PriShares).Split(priPoly, n) - - // Create public polynomial commitments with respect to basis H - pubPoly := new(poly.PubPoly).Commit(priPoly, pv.h) - - // Prepare data for encryption consistency proofs ... 
- share := make([]kyber.Scalar, n) - H := make([]kyber.Point, n) - idx := make([]int, n) - for i := range idx { - idx[i] = i - share[i] = shares.Share(i) - H[i] = pv.h - } - - // ... and create them - proof, err := NewProof(pv.suite, H, X, nil) - if err != nil { - return nil, nil, nil, nil, err - } - _, sX, err := proof.SetupCollective(share...) - if err != nil { - return nil, nil, nil, nil, err - } - - polyBin, err := pubPoly.MarshalBinary() - if err != nil { - return nil, nil, nil, nil, err - } - - return idx, sX, proof.Core, polyBin, nil -} - -// Verify checks that log_H(sH) == log_X(sX) using the given proof(s) and -// returns the indices of those proofs that are valid (good) and non-valid -// (bad), respectively. -func (pv *PVSS) Verify(H kyber.Point, X []kyber.Point, sH []kyber.Point, sX []kyber.Point, core []ProofCore) (good, bad []int, err error) { - - n := len(X) - Y := make([]kyber.Point, n) - for i := 0; i < n; i++ { - Y[i] = H - } - proof, err := NewProof(pv.suite, Y, X, core) - if err != nil { - return nil, nil, err - } - return proof.Verify(sH, sX) -} - -// Commits reconstructs a list of commits from the given polynomials and indices. -func (pv *PVSS) Commits(polyBin [][]byte, index []int) ([]kyber.Point, error) { - - if len(polyBin) != len(index) { - return nil, errors.New("Inputs have different lengths") - } - - n := len(polyBin) - sH := make([]kyber.Point, n) - for i := range sH { - P := new(poly.PubPoly) - P.Init(pv.suite, pv.t, pv.h) - if err := P.UnmarshalBinary(polyBin[i]); err != nil { - return nil, err - } - sH[i] = P.Eval(index[i]) - } - return sH, nil -} - -// Reveal decrypts the shares in xS using the secret key x and creates an NIZK -// decryption consistency proof for each share. 
-func (pv *PVSS) Reveal(x kyber.Scalar, xS []kyber.Point) ([]kyber.Point, []ProofCore, error) { - - // Decrypt shares - S := make([]kyber.Point, len(xS)) - G := make([]kyber.Point, len(xS)) - y := make([]kyber.Scalar, len(xS)) - for i := range xS { - S[i] = pv.suite.Point().Mul(xS[i], pv.suite.Scalar().Inv(x)) - G[i] = pv.suite.Point().Base() - y[i] = x - } - - proof, err := NewProof(pv.suite, G, S, nil) - if err != nil { - return nil, nil, err - } - if _, _, err := proof.Setup(y...); err != nil { - return nil, nil, err - } - return S, proof.Core, nil -} - -// Recover recreates the PVSS secret from the given shares. -func (pv *PVSS) Recover(pos []int, S []kyber.Point, n int) (kyber.Point, error) { - - if len(S) < pv.t { - return nil, errors.New("Not enough shares to recover secret") - } - - //log.Lvlf1("%v %v %v %v", pos, pv.t, len(pos), len(S)) - - pp := new(poly.PubPoly).InitNull(pv.suite, pv.t, pv.suite.Point().Base()) - ps := new(poly.PubShares).Split(pp, n) // XXX: ackward way to init shares - - for i, s := range S { - ps.SetShare(pos[i], s) - } - - return ps.SecretCommit(), nil -} diff --git a/randhound/proof_test.go b/randhound/proof_test.go deleted file mode 100644 index 83984b283d..0000000000 --- a/randhound/proof_test.go +++ /dev/null @@ -1,154 +0,0 @@ -// +build experimental - -package randhound_test - -import ( - "testing" - - "github.com/dedis/cothority/randhound" - "github.com/dedis/kyber" - "github.com/dedis/kyber/util/random" - "github.com/dedis/onet/log" -) - -var tSuite = ed25519.NewBlakeSHA256Ed25519() - -func TestProof(t *testing.T) { - // 1st set of base points - g1, _ := tSuite.Point().Pick([]byte("G1"), random.Stream) - h1, _ := tSuite.Point().Pick([]byte("H1"), random.Stream) - - // 1st secret value - x := tSuite.Scalar().Pick(random.Stream) - - // 2nd set of base points - g2, _ := tSuite.Point().Pick([]byte("G2"), random.Stream) - h2, _ := tSuite.Point().Pick([]byte("H2"), random.Stream) - - // 2nd secret value - y := 
tSuite.Scalar().Pick(random.Stream) - - // Create proofs - g := []kyber.Point{g1, g2} - h := []kyber.Point{h1, h2} - p, err := randhound.NewProof(suite, g, h, nil) - log.ErrFatal(err) - - xG, xH, err := p.Setup(x, y) - log.ErrFatal(err) - - // Verify proofs - q, err := randhound.NewProof(suite, g, h, p.Core) - log.ErrFatal(err) - - _, bad, err := q.Verify(xG, xH) - log.ErrFatal(err) - - if len(bad) != 0 { - log.Fatalf("Some proofs failed: %v", bad) - } -} - -func TestProofCollective(t *testing.T) { - // 1st set of base points - g1, _ := tSuite.Point().Pick([]byte("G1"), random.Stream) - h1, _ := tSuite.Point().Pick([]byte("H1"), random.Stream) - - // 1st secret value - x := tSuite.Scalar().Pick(random.Stream) - - // 2nd set of base points - g2, _ := tSuite.Point().Pick([]byte("G2"), random.Stream) - h2, _ := tSuite.Point().Pick([]byte("H2"), random.Stream) - - // 2nd secret value - y := tSuite.Scalar().Pick(random.Stream) - - // Create proof - g := []kyber.Point{g1, g2} - h := []kyber.Point{h1, h2} - p, err := randhound.NewProof(suite, g, h, nil) - log.ErrFatal(err) - - xG, xH, err := p.SetupCollective(x, y) - log.ErrFatal(err) - - // Verify proof - q, err := randhound.NewProof(suite, g, h, p.Core) - log.ErrFatal(err) - - _, bad, err := q.Verify(xG, xH) - log.ErrFatal(err) - - if len(bad) != 0 { - log.Fatalf("Some proofs failed: %v", bad) - } - -} - -func TestPVSS(t *testing.T) { - G := tSuite.Point().Base() - H, _ := tSuite.Point().Pick(nil, tSuite.Cipher([]byte("H"))) - - n := 10 - threshold := 2*n/3 + 1 - x := make([]kyber.Scalar, n) // trustee private keys - X := make([]kyber.Point, n) // trustee public keys - index := make([]int, n) - for i := 0; i < n; i++ { - x[i] = tSuite.Scalar().Pick(random.Stream) - X[i] = tSuite.Point().Mul(nil, x[i]) - index[i] = i - } - - // Scalar of shared secret - secret := tSuite.Scalar().Pick(random.Stream) - - // (1) Share-Distribution (Dealer) - pvss := randhound.NewPVSS(suite, H, threshold) - idx, sX, encProof, pb, err := 
pvss.Split(X, secret) - log.ErrFatal(err) - - // (2) Share-Decryption (Trustee) - pbx := make([][]byte, n) - for i := 0; i < n; i++ { - pbx[i] = pb // NOTE: polynomials can be different - } - sH, err := pvss.Commits(pbx, index) - log.ErrFatal(err) - - // Check that log_H(sH) == log_X(sX) using encProof - _, bad, err := pvss.Verify(H, X, sH, sX, encProof) - log.ErrFatal(err) - - if len(bad) != 0 { - log.Fatalf("Some proofs failed: %v", bad) - } - - // Decrypt shares - S := make([]kyber.Point, n) - decProof := make([]randhound.ProofCore, n) - for i := 0; i < n; i++ { - s, d, err := pvss.Reveal(x[i], sX[i:i+1]) - log.ErrFatal(err) - S[i] = s[0] - decProof[i] = d[0] - } - - // Check that log_G(S) == log_X(sX) using decProof - _, bad, err = pvss.Verify(G, S, X, sX, decProof) - log.ErrFatal(err) - - if len(bad) != 0 { - log.Fatalf("Some proofs failed: %v", bad) - } - - // (3) Secret-Recovery (Dealer) - recovered, err := pvss.Recover(idx, S, len(S)) - log.ErrFatal(err) - - // Verify recovered secret - if !(tSuite.Point().Mul(nil, secret).Equal(recovered)) { - log.Fatalf("Recovered incorrect shared secret") - } -} diff --git a/randhound/randhound.go b/randhound/randhound.go deleted file mode 100644 index 833cda095b..0000000000 --- a/randhound/randhound.go +++ /dev/null @@ -1,995 +0,0 @@ -// +build experimental - -// Package randhound is a client/server protocol for creating public random -// strings in an unbiasable and verifiable way given that a threshold of -// participants is honest. The protocol is driven by the client which scavenges -// the public randomness from the servers over the course of two round-trips. 
-package randhound - -import ( - "bytes" - "encoding/binary" - "errors" - "fmt" - "reflect" - "time" - - "github.com/dedis/kyber" - "github.com/dedis/kyber/sign/schnorr" - "github.com/dedis/kyber/util/hash" - "github.com/dedis/kyber/util/random" - "github.com/dedis/onet" - "github.com/dedis/onet/log" - "github.com/dedis/onet/network" -) - -// TODO: -// - Import / export transcript in JSON -// - Signatures of I-messages are currently not checked by the servers since -// the latter are assumed to be stateless; should they know the public key of the client? - -func init() { - onet.GlobalProtocolRegister("RandHound", NewRandHound) -} - -// NewRandHound generates a new RandHound instance. -func NewRandHound(node *onet.TreeNodeInstance) (onet.ProtocolInstance, error) { - - // Setup RandHound protocol struct - rh := &RandHound{ - TreeNodeInstance: node, - } - - // Setup message handlers - h := []interface{}{ - rh.handleI1, rh.handleI2, - rh.handleR1, rh.handleR2, - } - err := rh.RegisterHandlers(h...) - - return rh, err -} - -// Setup configures a RandHound instance on client-side. Needs to be called -// before Start. -func (rh *RandHound) Setup(nodes int, faulty int, groups int, purpose string) error { - - rh.nodes = nodes - rh.groups = groups - rh.faulty = faulty - rh.purpose = purpose - - rh.server = make([][]*onet.TreeNode, groups) - rh.group = make([][]int, groups) - rh.threshold = make([]int, groups) - rh.key = make([][]kyber.Point, groups) - rh.ServerIdxToGroupNum = make([]int, nodes) - rh.ServerIdxToGroupIdx = make([]int, nodes) - - rh.i1s = make(map[int]*I1) - rh.i2s = make(map[int]*I2) - rh.r1s = make(map[int]*R1) - rh.r2s = make(map[int]*R2) - rh.polyCommit = make(map[int][]kyber.Point) - rh.secret = make(map[int][]int) - rh.chosenSecret = make(map[int][]int) - - rh.Done = make(chan bool, 1) - rh.SecretReady = false - - return nil -} - -// Start initiates the RandHound protocol run. 
The client pseudo-randomly -// chooses the server grouping, forms an I1 message for each group, and sends -// it to all servers of that group. -func (rh *RandHound) Start() error { - - var err error - - // Set timestamp - rh.time = time.Now() - - // Choose client randomness - rand := random.Bytes(rh.Suite().Hash().Size(), random.Stream) - rh.cliRand = rand - - // Determine server grouping - rh.server, rh.key, err = rh.Shard(rand, rh.groups) - if err != nil { - return err - } - - // Set some group parameters - for i, group := range rh.server { - rh.threshold[i] = 2 * len(group) / 3 - rh.polyCommit[i] = make([]kyber.Point, len(group)) - g := make([]int, len(group)) - for j, server0 := range group { - s0 := server0.RosterIndex - rh.ServerIdxToGroupNum[s0] = i - rh.ServerIdxToGroupIdx[s0] = j - g[j] = s0 - } - rh.group[i] = g - } - - // Compute session id - rh.sid, err = rh.sessionID(rh.nodes, rh.faulty, rh.purpose, rh.time, rh.cliRand, rh.threshold, rh.Public(), rh.key) - if err != nil { - return err - } - - // Multicast first message to grouped servers - for i, group := range rh.server { - - index := make([]uint32, len(group)) - for j, server := range group { - index[j] = uint32(server.RosterIndex) - } - - i1 := &I1{ - SID: rh.sid, - Threshold: rh.threshold[i], - Group: index, - Key: rh.key[i], - } - - rh.mutex.Lock() - - // Sign I1 and store signature in i1.Sig - if err := signSchnorr(rh.Suite(), rh.Private(), i1); err != nil { - rh.mutex.Unlock() - return err - } - - rh.i1s[i] = i1 - - rh.mutex.Unlock() - - if err := rh.Multicast(i1, group...); err != nil { - return err - } - } - return nil -} - -// Shard produces a pseudorandom sharding of the network entity list -// based on a seed and a number of requested shards. 
-func (rh *RandHound) Shard(seed []byte, shards int) ([][]*onet.TreeNode, [][]kyber.Point, error) { - - if shards == 0 || rh.nodes < shards { - return nil, nil, fmt.Errorf("Number of requested shards not supported") - } - - // Compute a random permutation of [1,n-1] - prng := rh.Suite().Cipher(seed) - m := make([]int, rh.nodes-1) - for i := range m { - j := int(random.Uint64(prng) % uint64(i+1)) - m[i] = m[j] - m[j] = i + 1 - } - - // Create sharding of the current roster according to the above permutation - el := rh.List() - sharding := make([][]*onet.TreeNode, shards) - keys := make([][]kyber.Point, shards) - for i, j := range m { - sharding[i%shards] = append(sharding[i%shards], el[j]) - keys[i%shards] = append(keys[i%shards], el[j].ServerIdentity.Public) - } - - return sharding, keys, nil -} - -// Random creates the collective randomness from the shares and the protocol -// transcript. -func (rh *RandHound) Random() ([]byte, *Transcript, error) { - - rh.mutex.Lock() - defer rh.mutex.Unlock() - - if !rh.SecretReady { - return nil, nil, errors.New("Secret not recoverable") - } - - H, _ := rh.Suite().Point().Pick(nil, rh.Suite().Cipher(rh.sid)) - rnd := rh.Suite().Point().Null() - - // Gather all valid shares for a given server - for source, target := range rh.secret { - - var share []kyber.Point - var pos []int - for _, t := range target { - r2 := rh.r2s[t] - for _, s := range r2.DecShare { - if s.Source == source { - share = append(share, s.Val) - pos = append(pos, s.Pos) - } - } - } - - grp := rh.ServerIdxToGroupNum[source] - pvss := NewPVSS(rh.Suite(), H, rh.threshold[grp]) - ps, err := pvss.Recover(pos, share, len(rh.server[grp])) - if err != nil { - return nil, nil, err - } - rnd = rh.Suite().Point().Add(rnd, ps) - } - - rb, err := rnd.MarshalBinary() - if err != nil { - return nil, nil, err - } - - transcript := &Transcript{ - SID: rh.sid, - Nodes: rh.nodes, - Groups: rh.groups, - Faulty: rh.faulty, - Purpose: rh.purpose, - Time: rh.time, - CliRand: 
rh.cliRand, - CliKey: rh.Public(), - Group: rh.group, - Threshold: rh.threshold, - ChosenSecret: rh.chosenSecret, - Key: rh.key, - I1s: rh.i1s, - I2s: rh.i2s, - R1s: rh.r1s, - R2s: rh.r2s, - } - - return rb, transcript, nil -} - -// Verify checks a given collective random string against a protocol transcript. -func (rh *RandHound) Verify(suite kyber.Suite, random []byte, t *Transcript) error { - - rh.mutex.Lock() - defer rh.mutex.Unlock() - - // Verify SID - sid, err := rh.sessionID(t.Nodes, t.Faulty, t.Purpose, t.Time, t.CliRand, t.Threshold, t.CliKey, t.Key) - if err != nil { - return err - } - - if !bytes.Equal(t.SID, sid) { - return fmt.Errorf("Wrong session identifier") - } - - // Verify I1 signatures - for _, i1 := range t.I1s { - if err := verifySchnorr(suite, t.CliKey, i1); err != nil { - return err - } - } - - // Verify R1 signatures - for src, r1 := range t.R1s { - var key kyber.Point - for i := range t.Group { - for j := range t.Group[i] { - if src == t.Group[i][j] { - key = t.Key[i][j] - } - } - } - if err := verifySchnorr(suite, key, r1); err != nil { - return err - } - } - - // Verify I2 signatures - for _, i2 := range t.I2s { - if err := verifySchnorr(suite, t.CliKey, i2); err != nil { - return err - } - } - - // Verify R2 signatures - for src, r2 := range t.R2s { - var key kyber.Point - for i := range t.Group { - for j := range t.Group[i] { - if src == t.Group[i][j] { - key = t.Key[i][j] - } - } - } - if err := verifySchnorr(suite, key, r2); err != nil { - return err - } - } - - // Verify message hashes HI1 and HI2; it is okay if some messages are - // missing as long as there are enough to reconstruct the chosen secrets - for i, msg := range t.I1s { - for _, j := range t.Group[i] { - if _, ok := t.R1s[j]; ok { - if err := verifyMessage(suite, msg, t.R1s[j].HI1); err != nil { - return err - } - } else { - log.Lvlf2("Couldn't find R1 message of server %v", j) - } - } - } - - for i, msg := range t.I2s { - if _, ok := t.R2s[i]; ok { - if err := 
verifyMessage(suite, msg, t.R2s[i].HI2); err != nil { - return err - } - } else { - log.Lvlf2("Couldn't find R2 message of server %v", i) - } - } - - // Verify that all servers received the same client commitment - for server, msg := range t.I2s { - c := 0 - // Deterministically iterate over map[int][]int - for i := 0; i < len(t.ChosenSecret); i++ { - for _, cs := range t.ChosenSecret[i] { - if int(msg.ChosenSecret[c]) != cs { - return fmt.Errorf("Server %v received wrong client commitment", server) - } - c++ - } - } - } - - H, _ := suite.Point().Pick(nil, suite.Cipher(t.SID)) - rnd := suite.Point().Null() - for i, group := range t.ChosenSecret { - - for _, src := range group { - - var poly [][]byte - var encPos []int - var encShare []kyber.Point - var encProof []ProofCore - var X []kyber.Point - - var decPos []int - var decShare []kyber.Point - var decProof []ProofCore - - // All R1 messages of the chosen secrets should be there - if _, ok := t.R1s[src]; !ok { - return errors.New("R1 message not found") - } - r1 := t.R1s[src] - - for j := 0; j < len(r1.EncShare); j++ { - - // Check availability of corresponding R2 messages, skip if not there - target := r1.EncShare[j].Target - if _, ok := t.R2s[target]; !ok { - continue - } - - // Gather data on encrypted shares - poly = append(poly, r1.CommitPoly) - encPos = append(encPos, r1.EncShare[j].Pos) - encShare = append(encShare, r1.EncShare[j].Val) - encProof = append(encProof, r1.EncShare[j].Proof) - X = append(X, t.Key[i][r1.EncShare[j].Pos]) - - // Gather data on decrypted shares - r2 := t.R2s[target] - for k := 0; k < len(r2.DecShare); k++ { - if r2.DecShare[k].Source == src { - decPos = append(decPos, r2.DecShare[k].Pos) - decShare = append(decShare, r2.DecShare[k].Val) - decProof = append(decProof, r2.DecShare[k].Proof) - } - } - } - - // Remove encrypted shares that do not have a corresponding decrypted share - j := 0 - for j < len(decPos) { - if encPos[j] != decPos[j] { - poly = append(poly[:j], poly[j+1:]...) 
-					encPos = append(encPos[:j], encPos[j+1:]...)
-					encShare = append(encShare[:j], encShare[j+1:]...)
-					encProof = append(encProof[:j], encProof[j+1:]...)
-					X = append(X[:j], X[j+1:]...)
-				} else {
-					j++
-				}
-			}
-			// If all of the first values were equal, remove trailing data on encrypted shares
-			if len(decPos) < len(encPos) {
-				l := len(decPos)
-				poly = poly[:l]
-				encPos = encPos[:l]
-				encShare = encShare[:l]
-				encProof = encProof[:l]
-				X = X[:l]
-			}
-
-			pvss := NewPVSS(suite, H, t.Threshold[i])
-
-			// Recover polynomial commits
-			polyCommit, err := pvss.Commits(poly, encPos)
-			if err != nil {
-				return err
-			}
-
-			// Check encryption consistency proofs
-			goodEnc, badEnc, err := pvss.Verify(H, X, polyCommit, encShare, encProof)
-			if err != nil {
-				return err
-			}
-			_ = goodEnc
-			_ = badEnc
-
-			// Remove bad values
-			for j := len(badEnc) - 1; j >= 0; j-- {
-				k := badEnc[j]
-				X = append(X[:k], X[k+1:]...)
-				encShare = append(encShare[:k], encShare[k+1:]...)
-				decShare = append(decShare[:k], decShare[k+1:]...)
-				decProof = append(decProof[:k], decProof[k+1:]...)
-			}
-
-			// Check decryption consistency proofs
-			goodDec, badDec, err := pvss.Verify(suite.Point().Base(), decShare, X, encShare, decProof)
-			if err != nil {
-				return err
-			}
-			_ = goodDec
-			_ = badDec
-
-			// Remove bad shares
-			for j := len(badDec) - 1; j >= 0; j-- {
-				k := badDec[j]
-				decPos = append(decPos[:k], decPos[k+1:]...)
-				decShare = append(decShare[:k], decShare[k+1:]...)
-			}
-
-			// Recover secret and add it to the collective random point
-			ps, err := pvss.Recover(decPos, decShare, len(t.Group[i]))
-			if err != nil {
-				return err
-			}
-			rnd = rh.Suite().Point().Add(rnd, ps)
-		}
-	}
-
-	rb, err := rnd.MarshalBinary()
-	if err != nil {
-		return err
-	}
-
-	if !bytes.Equal(random, rb) {
-		return errors.New("Bad randomness")
-	}
-
-	return nil
-}
-
-func (rh *RandHound) handleI1(i1 WI1) error {
-
-	msg := &i1.I1
-
-	// Compute hash of the client's message
-	msg.Sig = []byte{} // XXX: hack
-	i1b, err := network.Marshal(msg)
-	if err != nil {
-		return err
-	}
-
-	hi1, err := hash.Bytes(rh.Suite().Hash(), i1b)
-	if err != nil {
-		return err
-	}
-
-	// Find out the server's index (we assume servers are stateless)
-	idx := 0
-	for i, j := range msg.Group {
-		if msg.Key[i].Equal(rh.Public()) {
-			idx = int(j)
-			break
-		}
-	}
-
-	// Init PVSS and create shares
-	H, _ := rh.Suite().Point().Pick(nil, rh.Suite().Cipher(msg.SID))
-	pvss := NewPVSS(rh.Suite(), H, msg.Threshold)
-	idxShare, encShare, encProof, pb, err := pvss.Split(msg.Key, nil)
-	if err != nil {
-		return err
-	}
-
-	share := make([]Share, len(encShare))
-	for i := 0; i < len(encShare); i++ {
-		share[i] = Share{
-			Source: idx,
-			Target: int(msg.Group[i]),
-			Pos:    idxShare[i],
-			Val:    encShare[i],
-			Proof:  encProof[i],
-		}
-	}
-
-	r1 := &R1{
-		HI1:        hi1,
-		EncShare:   share,
-		CommitPoly: pb,
-	}
-
-	// Sign R1 and store signature in R1.Sig
-	if err := signSchnorr(rh.Suite(), rh.Private(), r1); err != nil {
-		return err
-	}
-
-	return rh.SendTo(rh.Root(), r1)
-}
-
-func (rh *RandHound) handleR1(r1 WR1) error {
-
-	msg := &r1.R1
-
-	idx := r1.RosterIndex
-	grp := rh.ServerIdxToGroupNum[idx]
-	pos := rh.ServerIdxToGroupIdx[idx]
-
-	rh.mutex.Lock()
-	defer rh.mutex.Unlock()
-
-	// Verify R1 message signature
-	if err := verifySchnorr(rh.Suite(), rh.key[grp][pos], msg); err != nil {
-		return err
-	}
-
-	// Verify that server replied to the correct I1 message
-	if err := verifyMessage(rh.Suite(), rh.i1s[grp], msg.HI1); err != nil {
-		return err
-	}
-
-	// Record R1 message
-	rh.r1s[idx] = msg
-
-	// Prepare data for recovery of polynomial commits and verification of shares
-	n := len(msg.EncShare)
-	poly := make([][]byte, n)
-	index := make([]int, n)
-	encShare := make([]kyber.Point, n)
-	encProof := make([]ProofCore, n)
-	for i := 0; i < n; i++ {
-		poly[i] = msg.CommitPoly
-		index[i] = msg.EncShare[i].Pos
-		encShare[i] = msg.EncShare[i].Val
-		encProof[i] = msg.EncShare[i].Proof
-	}
-
-	// Init PVSS and recover polynomial commits
-	H, _ := rh.Suite().Point().Pick(nil, rh.Suite().Cipher(rh.sid))
-	pvss := NewPVSS(rh.Suite(), H, rh.threshold[grp])
-	polyCommit, err := pvss.Commits(poly, index)
-	if err != nil {
-		return err
-	}
-
-	// Record polynomial commits
-	rh.polyCommit[idx] = polyCommit
-
-	// Return, if we already committed to secrets previously
-	if len(rh.chosenSecret) > 0 {
-		return nil
-	}
-
-	// Verify encrypted shares
-	good, _, err := pvss.Verify(H, rh.key[grp], polyCommit, encShare, encProof)
-	if err != nil {
-		return err
-	}
-
-	// Record valid encrypted shares per secret/server
-	for _, g := range good {
-		if _, ok := rh.secret[idx]; !ok {
-			rh.secret[idx] = make([]int, 0)
-		}
-		rh.secret[idx] = append(rh.secret[idx], msg.EncShare[g].Target)
-	}
-
-	// Check if there is at least a threshold number of reconstructable secrets
-	// in each group. If yes, proceed to the next phase. Note the double-usage
-	// of the threshold which is used to determine if enough valid shares for a
-	// single secret are available and if enough secrets for a given group are
-	// available
-	goodSecret := make(map[int][]int)
-	for i, group := range rh.server {
-		var secret []int
-		for _, server := range group {
-			j := server.RosterIndex
-			if share, ok := rh.secret[j]; ok && rh.threshold[i] <= len(share) {
-				secret = append(secret, j)
-			}
-		}
-		if rh.threshold[i] <= len(secret) {
-			goodSecret[i] = secret
-		}
-	}
-
-	// Proceed, if there are enough good secrets
-	if len(goodSecret) == rh.groups {
-
-		// Reset secret for the next phase (see handleR2)
-		rh.secret = make(map[int][]int)
-
-		// Choose secrets that contribute to collective randomness
-		for i := range rh.server {
-
-			// Randomly remove some secrets so that a threshold of secrets remain
-			rand := random.Bytes(rh.Suite().Hash().Size(), random.Stream)
-			prng := rh.Suite().Cipher(rand)
-			secret := goodSecret[i]
-			for j := 0; j < len(secret)-rh.threshold[i]; j++ {
-				k := int(random.Uint32(prng) % uint32(len(secret)))
-				secret = append(secret[:k], secret[k+1:]...)
-			}
-			rh.chosenSecret[i] = secret
-		}
-
-		log.Lvlf3("Grouping: %v", rh.group)
-		log.Lvlf3("ChosenSecret: %v", rh.chosenSecret)
-
-		// Transformation of commitments from map[int][]int to []uint32 to avoid protobuf errors
-		var chosenSecret = make([]uint32, 0)
-		for i := 0; i < len(rh.chosenSecret); i++ {
-			for _, cs := range rh.chosenSecret[i] {
-				chosenSecret = append(chosenSecret, uint32(cs))
-			}
-		}
-
-		// Prepare a message for each server of a group and send it
-		for i, group := range rh.server {
-			for j, server := range group {
-
-				// Among the good secrets chosen previously collect all valid
-				// shares, proofs, and polynomial commits intended for the
-				// target server
-				var encShare []Share
-				var polyCommit []kyber.Point
-				for _, k := range rh.chosenSecret[i] {
-					r1 := rh.r1s[k]
-					pc := rh.polyCommit[k]
-					encShare = append(encShare, r1.EncShare[j])
-					polyCommit = append(polyCommit, pc[j])
-				}
-
-				i2 := &I2{
-					Sig:          []byte{},
-					SID:          rh.sid,
-					ChosenSecret: chosenSecret,
-					EncShare:     encShare,
-					PolyCommit:   polyCommit,
-				}
-
-				if err := signSchnorr(rh.Suite(), rh.Private(), i2); err != nil {
-					return err
-				}
-
-				rh.i2s[server.RosterIndex] = i2
-
-				if err := rh.SendTo(server, i2); err != nil {
-					return err
-				}
-			}
-		}
-	}
-
-	return nil
-}
-
-func (rh *RandHound) handleI2(i2 WI2) error {
-
-	msg := &i2.I2
-
-	// Compute hash of the client's message
-	msg.Sig = []byte{} // XXX: hack
-	i2b, err := network.Marshal(msg)
-	if err != nil {
-		return err
-	}
-
-	hi2, err := hash.Bytes(rh.Suite().Hash(), i2b)
-	if err != nil {
-		return err
-	}
-
-	// Prepare data
-	n := len(msg.EncShare)
-	X := make([]kyber.Point, n)
-	encShare := make([]kyber.Point, n)
-	encProof := make([]ProofCore, n)
-	for i := 0; i < n; i++ {
-		X[i] = rh.Public()
-		encShare[i] = msg.EncShare[i].Val
-		encProof[i] = msg.EncShare[i].Proof
-	}
-
-	// Init PVSS and verify encryption consistency proof
-	H, _ := rh.Suite().Point().Pick(nil, rh.Suite().Cipher(msg.SID))
-	pvss := NewPVSS(rh.Suite(), H, 0)
-
-	good, bad, err := pvss.Verify(H, X, msg.PolyCommit, encShare, encProof)
-	if err != nil {
-		return err
-	}
-
-	// Remove bad shares
-	for i := len(bad) - 1; i >= 0; i-- {
-		j := bad[i]
-		encShare = append(encShare[:j], encShare[j+1:]...)
-	}
-
-	// Decrypt good shares
-	decShare, decProof, err := pvss.Reveal(rh.Private(), encShare)
-	if err != nil {
-		return err
-	}
-
-	share := make([]Share, len(encShare))
-	for i := 0; i < len(encShare); i++ {
-		X[i] = rh.Public()
-		j := good[i]
-		share[i] = Share{
-			Source: msg.EncShare[j].Source,
-			Target: msg.EncShare[j].Target,
-			Pos:    msg.EncShare[j].Pos,
-			Val:    decShare[i],
-			Proof:  decProof[i],
-		}
-	}
-
-	r2 := &R2{
-		HI2:      hi2,
-		DecShare: share,
-	}
-
-	// Sign R2 and store signature in R2.Sig
-	if err := signSchnorr(rh.Suite(), rh.Private(), r2); err != nil {
-		return err
-	}
-
-	return rh.SendTo(rh.Root(), r2)
-}
-
-func (rh *RandHound) handleR2(r2 WR2) error {
-
-	msg := &r2.R2
-
-	idx := r2.RosterIndex
-	grp := rh.ServerIdxToGroupNum[idx]
-	pos := rh.ServerIdxToGroupIdx[idx]
-
-	rh.mutex.Lock()
-	defer rh.mutex.Unlock()
-
-	// If the collective secret is already available, ignore all further incoming messages
-	if rh.SecretReady {
-		return nil
-	}
-
-	// Verify R2 message signature
-	if err := verifySchnorr(rh.Suite(), rh.key[grp][pos], msg); err != nil {
-		return err
-	}
-
-	// Verify that server replied to the correct I2 message
-	if err := verifyMessage(rh.Suite(), rh.i2s[idx], msg.HI2); err != nil {
-		return err
-	}
-
-	// Record R2 message
-	rh.r2s[idx] = msg
-
-	// Get all valid encrypted shares corresponding to the received decrypted
-	// shares and intended for the target server (=idx)
-	n := len(msg.DecShare)
-	X := make([]kyber.Point, n)
-	encShare := make([]kyber.Point, n)
-	decShare := make([]kyber.Point, n)
-	decProof := make([]ProofCore, n)
-	for i := 0; i < n; i++ {
-		src := msg.DecShare[i].Source
-		//tgt := msg.DecShare[i].Target
-		//X[i] = rh.key[grp][pos] //r2.ServerIdentity.Public
-		X[i] = rh.key[grp][pos]
-		encShare[i] = rh.r1s[src].EncShare[pos].Val
-		decShare[i] = msg.DecShare[i].Val
-		decProof[i] = msg.DecShare[i].Proof
-	}
-
-	// Init PVSS and verify shares
-	H, _ := rh.Suite().Point().Pick(nil, rh.Suite().Cipher(rh.sid))
-	pvss := NewPVSS(rh.Suite(), H, rh.threshold[grp])
-	good, bad, err := pvss.Verify(rh.Suite().Point().Base(), decShare, X, encShare, decProof)
-	if err != nil {
-		return err
-	}
-	_ = bad
-	_ = good
-
-	// Record valid decrypted shares per secret/server
-	for i := 0; i < len(good); i++ {
-		j := good[i]
-		src := msg.DecShare[j].Source
-		if _, ok := rh.secret[src]; !ok {
-			rh.secret[src] = make([]int, 0)
-		}
-		rh.secret[src] = append(rh.secret[src], msg.DecShare[j].Target)
-	}
-
-	proceed := true
-	for i, group := range rh.chosenSecret {
-		for _, server := range group {
-			if len(rh.secret[server]) < rh.threshold[i] {
-				proceed = false
-			}
-		}
-	}
-
-	if len(rh.r2s) == rh.nodes-1 && !proceed {
-		rh.Done <- true
-		return errors.New("Some chosen secrets are not reconstructable")
-	}
-
-	if proceed && !rh.SecretReady {
-		rh.SecretReady = true
-		rh.Done <- true
-	}
-	return nil
-}
-
-func (rh *RandHound) sessionID(nodes int, faulty int, purpose string, time time.Time, rand []byte, threshold []int, clientKey kyber.Point, serverKey [][]kyber.Point) ([]byte, error) {
-
-	buf := new(bytes.Buffer)
-
-	if len(threshold) != len(serverKey) {
-		return nil, fmt.Errorf("Non-matching number of group thresholds and keys")
-	}
-
-	if err := binary.Write(buf, binary.LittleEndian, uint32(nodes)); err != nil {
-		return nil, err
-	}
-
-	if err := binary.Write(buf, binary.LittleEndian, uint32(faulty)); err != nil {
-		return nil, err
-	}
-
-	if _, err := buf.WriteString(purpose); err != nil {
-		return nil, err
-	}
-
-	t, err := time.MarshalBinary()
-	if err != nil {
-		return nil, err
-	}
-
-	if _, err := buf.Write(t); err != nil {
-		return nil, err
-	}
-
-	if _, err := buf.Write(rand); err != nil {
-		return nil, err
-	}
-
-	cb, err := clientKey.MarshalBinary()
-	if err != nil {
-		return nil, err
-	}
-	if _, err := buf.Write(cb); err != nil {
-		return nil, err
-	}
-
-	for _, t := range threshold {
-		if err := binary.Write(buf, binary.LittleEndian, uint32(t)); err != nil {
-			return nil, err
-		}
-	}
-
-	for _, gk := range serverKey {
-		for _, k := range gk {
-			kb, err := k.MarshalBinary()
-			if err != nil {
-				return nil, err
-			}
-			if _, err := buf.Write(kb); err != nil {
-				return nil, err
-			}
-		}
-	}
-
-	return hash.Bytes(rh.Suite().Hash(), buf.Bytes())
-}
-
-func signSchnorr(suite kyber.Suite, key kyber.Scalar, m interface{}) error {
-
-	// Reset signature field
-	reflect.ValueOf(m).Elem().FieldByName("Sig").Set(reflect.ValueOf([]byte{})) // XXX: hack
-
-	// Marshal message
-	mb, err := network.Marshal(m) // TODO: change m to interface with hash to make it compatible to other languages (network.Marshal() adds struct-identifiers)
-	if err != nil {
-		return err
-	}
-
-	// Sign message
-	sig, err := schnorr.Sign(suite, key, mb)
-	if err != nil {
-		return err
-	}
-
-	// Store signature
-	reflect.ValueOf(m).Elem().FieldByName("Sig").Set(reflect.ValueOf(sig)) // XXX: hack
-
-	return nil
-}
-
-func verifySchnorr(suite kyber.Suite, key kyber.Point, m interface{}) error {
-
-	// Make a copy of the signature
-	x := reflect.ValueOf(m).Elem().FieldByName("Sig")
-	sig := reflect.New(x.Type()).Elem()
-	sig.Set(x)
-
-	// Reset signature field
-	reflect.ValueOf(m).Elem().FieldByName("Sig").Set(reflect.ValueOf([]byte{})) // XXX: hack
-
-	// Marshal message
-	mb, err := network.Marshal(m) // TODO: change m to interface with hash to make it compatible to other languages (network.Marshal() adds struct-identifiers)
-	if err != nil {
-		return err
-	}
-
-	// Copy back original signature
-	reflect.ValueOf(m).Elem().FieldByName("Sig").Set(sig) // XXX: hack
-
-	return schnorr.Verify(suite, key, mb, sig.Interface().([]byte))
-}
-
-func verifyMessage(suite kyber.Suite, m interface{}, hash1 []byte) error {
-
-	// Make a copy of the signature
-	x := reflect.ValueOf(m).Elem().FieldByName("Sig")
-	sig := reflect.New(x.Type()).Elem()
-	sig.Set(x)
-
-	// Reset signature field
-	reflect.ValueOf(m).Elem().FieldByName("Sig").Set(reflect.ValueOf([]byte{})) // XXX: hack
-
-	// Marshal ...
-	mb, err := network.Marshal(m) // TODO: change m to interface with hash to make it compatible to other languages (network.Marshal() adds struct-identifiers)
-	if err != nil {
-		return err
-	}
-
-	// ... and hash message
-	hash2, err := hash.Bytes(suite.Hash(), mb)
-	if err != nil {
-		return err
-	}
-
-	// Copy back original signature
-	reflect.ValueOf(m).Elem().FieldByName("Sig").Set(sig) // XXX: hack
-
-	// Compare hashes
-	if !bytes.Equal(hash1, hash2) {
-		return errors.New("Message has a different hash than the given one")
-	}
-
-	return nil
-}
diff --git a/randhound/randhound_test.go b/randhound/randhound_test.go
deleted file mode 100644
index 1fca16554f..0000000000
--- a/randhound/randhound_test.go
+++ /dev/null
@@ -1,64 +0,0 @@
-// +build experimental
-
-package randhound_test
-
-import (
-	"testing"
-	"time"
-
-	"github.com/dedis/cothority/randhound"
-	"github.com/dedis/onet"
-	"github.com/dedis/onet/log"
-)
-
-func TestRandHound(t *testing.T) {
-
-	var name = "RandHound"
-	var nodes int = 28
-	var faulty int = 2
-	var groups int = 4
-	var purpose string = "RandHound test run"
-
-	local := onet.NewLocalTest()
-	_, _, tree := local.GenTree(int(nodes), true)
-	defer local.CloseAll()
-
-	// Setup and start RandHound
-
-	log.Lvlf1("RandHound - starting")
-	protocol, err := local.CreateProtocol(name, tree)
-	if err != nil {
-		t.Fatal("Couldn't initialise RandHound protocol:", err)
-	}
-	rh := protocol.(*randhound.RandHound)
-	err = rh.Setup(nodes, faulty, groups, purpose)
-	if err != nil {
-		t.Fatal("Couldn't initialise RandHound protocol:", err)
-	}
-	if err := protocol.Start(); err != nil {
-		t.Fatal(err)
-	}
-
-	select {
-	case <-rh.Done:
-		log.Lvlf1("RandHound - done")
-
-		random, transcript, err := rh.Random()
-		if err != nil {
-			t.Fatal(err)
-		}
-		log.Lvlf1("RandHound - collective randomness: ok")
-
-		//log.Lvlf1("RandHound - collective randomness: %v", random)
-
-		err = rh.Verify(rh.Suite(), random, transcript)
-		if err != nil {
-			t.Fatal(err)
-		}
-		log.Lvlf1("RandHound - verification: ok")
-
-	case <-time.After(time.Second * time.Duration(nodes) * 2):
-		t.Fatal("RandHound – time out")
-	}
-
-}
diff --git a/randhound/simulation/.gitignore b/randhound/simulation/.gitignore
deleted file mode 100644
index 913ce3da12..0000000000
--- a/randhound/simulation/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-simulation
diff --git a/randhound/simulation/randhound.go b/randhound/simulation/randhound.go
deleted file mode 100644
index 7a56a2a449..0000000000
--- a/randhound/simulation/randhound.go
+++ /dev/null
@@ -1,99 +0,0 @@
-// +build experimental
-
-// This package contains the randhound simulation configuration and the code
-// needed to run it.
-package main
-
-import (
-	"github.com/BurntSushi/toml"
-	"github.com/dedis/cothority/randhound"
-	"github.com/dedis/onet"
-	"github.com/dedis/onet/log"
-	"github.com/dedis/onet/simul"
-	"github.com/dedis/onet/simul/monitor"
-)
-
-func init() {
-	onet.SimulationRegister("RandHound", NewRHSimulation)
-}
-
-// RHSimulation implements a RandHound simulation
-type RHSimulation struct {
-	onet.SimulationBFTree
-	Groups    int
-	GroupSize int
-	Faulty    int
-	Purpose   string
-}
-
-// NewRHSimulation creates a new RandHound simulation
-func NewRHSimulation(config string) (onet.Simulation, error) {
-	rhs := &RHSimulation{}
-	_, err := toml.Decode(config, rhs)
-	if err != nil {
-		return nil, err
-	}
-	return rhs, nil
-}
-
-// Setup configures a RandHound simulation with certain parameters
-func (rhs *RHSimulation) Setup(dir string, hosts []string) (*onet.SimulationConfig, error) {
-	sim := new(onet.SimulationConfig)
-	rhs.CreateRoster(sim, hosts, 2000)
-	err := rhs.CreateTree(sim)
-	return sim, err
-}
-
-// Run initiates a RandHound simulation
-func (rhs *RHSimulation) Run(config *onet.SimulationConfig) error {
-	randM := monitor.NewTimeMeasure("tgen-randhound")
-	bandW := monitor.NewCounterIOMeasure("bw-randhound", config.Server)
-	client, err := config.Overlay.CreateProtocol("RandHound", config.Tree, onet.NilServiceID)
-	if err != nil {
-		return err
-	}
-	rh, _ := client.(*randhound.RandHound)
-	if rhs.Groups == 0 {
-		if rhs.GroupSize == 0 {
-			log.Fatal("Need either Groups or GroupSize")
-		}
-		rhs.Groups = rhs.Hosts / rhs.GroupSize
-	}
-	err = rh.Setup(rhs.Hosts, rhs.Faulty, rhs.Groups, rhs.Purpose)
-	if err != nil {
-		return err
-	}
-	if err := rh.Start(); err != nil {
-		log.Error("Error while starting protocol:", err)
-	}
-
-	select {
-	case <-rh.Done:
-		log.Lvlf1("RandHound - done")
-		random, transcript, err := rh.Random()
-		if err != nil {
-			return err
-		}
-		randM.Record()
-		bandW.Record()
-		log.Lvlf1("RandHound - collective randomness: ok")
-
-		verifyM := monitor.NewTimeMeasure("tver-randhound")
-		err = rh.Verify(rh.Suite(), random, transcript)
-		if err != nil {
-			return err
-		}
-		verifyM.Record()
-		log.Lvlf1("RandHound - verification: ok")
-
-	//case <-time.After(time.Second * time.Duration(rhs.Hosts) * 5):
-	//log.Print("RandHound - time out")
-	}
-
-	return nil
-
-}
-
-func main() {
-	simul.Start()
-}
diff --git a/randhound/simulation/randhound.toml b/randhound/simulation/randhound.toml
deleted file mode 100644
index 8e22f16b6f..0000000000
--- a/randhound/simulation/randhound.toml
+++ /dev/null
@@ -1,11 +0,0 @@
-Servers = 4
-Simulation = "RandHound"
-BF = 2
-Rounds = 1
-Faulty = 0
-Purpose = "RandHound Test"
-Suite = "Ed25519"
-
-Hosts, GroupSize
-8, 4
-16, 8
diff --git a/randhound/simulation/randhound_test.go b/randhound/simulation/randhound_test.go
deleted file mode 100644
index 2d9e97f16c..0000000000
--- a/randhound/simulation/randhound_test.go
+++ /dev/null
@@ -1,13 +0,0 @@
-// +build experimental
-
-package main
-
-import (
-	"testing"
-
-	"github.com/dedis/onet/simul"
-)
-
-func TestSimulation(t *testing.T) {
-	simul.Start("randhound.toml")
-}
diff --git a/randhound/struct.go b/randhound/struct.go
deleted file mode 100644
index ea7fcba119..0000000000
--- a/randhound/struct.go
+++ /dev/null
@@ -1,146 +0,0 @@
-// +build experimental
-
-package randhound
-
-import (
-	"sync"
-	"time"
-
-	"github.com/dedis/kyber"
-	"github.com/dedis/onet"
-	"github.com/dedis/onet/network"
-)
-
-func init() {
-	for _, p := range []interface{}{I1{}, R1{}, I2{}, R2{},
-		WI1{}, WR1{}, WI2{}, WR2{}} {
-		network.RegisterMessage(p)
-	}
-}
-
-// RandHound is the main protocol struct and implements the
-// onet.ProtocolInstance interface.
-type RandHound struct {
-	*onet.TreeNodeInstance
-
-	mutex sync.Mutex
-
-	// Session information
-	nodes   int       // Total number of nodes (client + servers)
-	groups  int       // Number of groups
-	faulty  int       // Maximum number of Byzantine servers
-	purpose string    // Purpose of protocol run
-	time    time.Time // Timestamp of initiation
-	cliRand []byte    // Client-chosen randomness (for initial sharding)
-	sid     []byte    // Session identifier
-
-	// Group information
-	server              [][]*onet.TreeNode // Grouped servers
-	group               [][]int            // Grouped server indices
-	threshold           []int              // Group thresholds
-	key                 [][]kyber.Point    // Grouped server public keys
-	ServerIdxToGroupNum []int              // Mapping of global server index to group number
-	ServerIdxToGroupIdx []int              // Mapping of global server index to group server index
-
-	// Message information
-	i1s          map[int]*I1           // I1 messages sent to servers (index: group)
-	i2s          map[int]*I2           // I2 messages sent to servers (index: server)
-	r1s          map[int]*R1           // R1 messages received from servers (index: server)
-	r2s          map[int]*R2           // R2 messages received from servers (index: server)
-	polyCommit   map[int][]kyber.Point // Commitments of server polynomials (index: server)
-	secret       map[int][]int         // Valid shares per secret/server (source server index -> list of target server indices)
-	chosenSecret map[int][]int         // Chosen secrets contributing to collective randomness
-
-	// Misc
-	Done        chan bool // Channel to signal the end of a protocol run
-	SecretReady bool      // Boolean to indicate whether the collective randomness is ready or not
-
-	//Byzantine map[int]int // for simulating byzantine servers (= key)
-}
-
-// Share encapsulates all information for encrypted or decrypted shares and the
-// respective consistency proofs.
-type Share struct {
-	Source int         // Source server index
-	Target int         // Target server index
-	Pos    int         // Share position
-	Val    kyber.Point // Share value
-	Proof  ProofCore   // ZK-verification proof
-}
-
-// Transcript represents the record of a protocol run created by the client.
-type Transcript struct {
-	SID          []byte          // Session identifier
-	Nodes        int             // Total number of nodes (client + server)
-	Groups       int             // Number of groups
-	Faulty       int             // Maximum number of Byzantine servers
-	Purpose      string          // Purpose of protocol run
-	Time         time.Time       // Timestamp of initiation
-	CliRand      []byte          // Client-chosen randomness (for initial sharding)
-	CliKey       kyber.Point     // Client public key
-	Group        [][]int         // Grouped server indices
-	Key          [][]kyber.Point // Grouped server public keys
-	Threshold    []int           // Grouped secret sharing thresholds
-	ChosenSecret map[int][]int   // Chosen secrets that contribute to collective randomness
-	I1s          map[int]*I1     // I1 messages sent to servers
-	I2s          map[int]*I2     // I2 messages sent to servers
-	R1s          map[int]*R1     // R1 messages received from servers
-	R2s          map[int]*R2     // R2 messages received from servers
-}
-
-// I1 is the message sent by the client to the servers in step 1.
-type I1 struct {
-	Sig       []byte        // Schnorr signature
-	SID       []byte        // Session identifier
-	Threshold int           // Secret sharing threshold
-	Group     []uint32      // Group indices
-	Key       []kyber.Point // Public keys of trustees
-}
-
-// R1 is the reply sent by the servers to the client in step 2.
-type R1 struct {
-	Sig        []byte  // Schnorr signature
-	HI1        []byte  // Hash of I1
-	EncShare   []Share // Encrypted shares
-	CommitPoly []byte  // Marshalled commitment polynomial
-}
-
-// I2 is the message sent by the client to the servers in step 3.
-type I2 struct {
-	Sig          []byte        // Schnorr signature
-	SID          []byte        // Session identifier
-	ChosenSecret []uint32      // Chosen secrets (flattened)
-	EncShare     []Share       // Encrypted shares
-	PolyCommit   []kyber.Point // Polynomial commitments
-}
-
-// R2 is the reply sent by the servers to the client in step 4.
-type R2 struct {
-	Sig      []byte  // Schnorr signature
-	HI2      []byte  // Hash of I2
-	DecShare []Share // Decrypted shares
-}
-
-// WI1 is a onet-wrapper around I1.
-type WI1 struct {
-	*onet.TreeNode
-	I1
-}
-
-// WR1 is a onet-wrapper around R1.
-type WR1 struct {
-	*onet.TreeNode
-	R1
-}
-
-// WI2 is a onet-wrapper around I2.
-type WI2 struct {
-	*onet.TreeNode
-	I2
-}
-
-// WR2 is a onet-wrapper around R2.
-type WR2 struct {
-	*onet.TreeNode
-	R2
-}
diff --git a/stable/directories b/stable/directories
index 67660e34bf..cc640b8067 100644
--- a/stable/directories
+++ b/stable/directories
@@ -14,8 +14,6 @@ ftcosi/protocol
 ftcosi/service
 ftcosi/simulation
 messaging
-randhound
-randhound/simulation
 skipchain
 status
 status/service
diff --git a/timestamper/README.md b/timestamper/README.md
deleted file mode 100644
index 96a0a0b6a5..0000000000
--- a/timestamper/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-Navigation: [DEDIS](https://github.com/dedis/doc/tree/master/README.md) ::
-[Cothority](../README.md) ::
-[Applications](../doc/Applications.md) ::
-Timestamper
-
-# Timestamper
-
-*WARNING* - this thing doesn't exist at all - this is just some documentation
-that looked nice to be kept around...
-
-This service offers a collective signature at regular intervals (epochs) of a
-hash the client provides. The collective signature is done on the
-merkle-tree-root of all hashes sent in one epoch concatenated with the time of
-the signature.
-
-## NSDI-version
-The following calls should be implemented in the service:
-See https://github.com/dedis/cothority/issues/554#issuecomment-243092585
-
-# API calls
-
-## SetupStamper
-Update: This will only be done locally (not as a message which is sent around). The description below is for what might be implemented later (after NSDI).
-
-Destination: first conode in the ‘roster to be used’
-* Input:
-  * roster to be used
-  * epoch-length
-* Action:
-  * pings all elements of the ‘roster to be used’ to make sure they are alive
-* Saves:
-  * ID of stamper and corresponding ‘roster to be used’
-* Returns:
-  * ID of stamper
-  * Collective public key
-  * error if a threshold of conodes in ‘roster to be used’ are not responding
-
-## SignHash
-* Destination: first conode in the ‘roster to be used’
-* Input:
-  * ID of stamper
-  * hash to be signed
-* Action:
-  * Collects all hashes during one epoch
-  * When the epoch is over
-    * creates a merkle-tree of all hashes
-    * Asks the roster belonging to ID to CoSi the merkle-tree-root concatenated with the time (seconds since start of Unix-epoch)
-* Saves:
-  * nothing
-* Returns:
-  * CoSi on merkle-tree-root concatenated with time
-  * merkle-tree-root and inclusion-proof of ‘hash to be signed’
-  * time
-
-## VerifyHash
-* Destination: none - verifies locally only
-* Input:
-  * structure from SignHash
-* Action:
-  * checks the inclusion-proof
-  * verifies the signature
-* Returns:
-  * OK if the check and the verification pass, an error otherwise
-
-# Improvements
-
-These are improvements that can be done once the basic service is working.
-This list also defines what does not need to be included in the first version:
-
-* all nodes of the roster verify the time
-* if the time is off by more than a threshold, they should refuse to sign
-* the root-node will simply restart a round with all nodes that accepted to sign and update the mask of the cosi-signature
-* all nodes accept hashes to be signed
-* every node needs to build its own merkle tree at the end of an epoch
-* every individual merkle-tree-root needs to be sent up to the root