
multi-chain bottom-up consensus model prototype #2457

Closed
synctext opened this issue Jul 12, 2016 · 54 comments

@synctext
Member

@synctext synctext commented Jul 12, 2016

Research superior consensus models, resilient against 51% attacks.

Create an operational prototype of a mechanism which spreads the last known record of all multi-chains.

  • All peers crawl to some extent
  • They create one BitTorrent swarm with a database dump of all these records and seed it
  • Download a swarm with records from the nearest neighbor
  • Merge, check for double spending, and remove duplicates
  • Together with this neighbor, form a group and seed your unified database dump
  • Repeat with exponentially increasing group size

expanding_consensus
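The merge and double-spend check in the steps above could be sketched as follows. The record layout (a unique `hash` plus a `(public_key, seq)` pair giving the record's position in its owner's chain) and the function name are illustrative assumptions, not taken from an actual implementation:

```python
# Hypothetical record layout: each record carries a unique hash plus a
# (public_key, seq) pair giving its position in its owner's chain.
def merge_dumps(dump_a, dump_b):
    """Merge two record dumps: drop exact duplicates, flag double spends."""
    merged = {}       # record hash -> record
    positions = {}    # (public_key, seq) -> record hash
    double_spends = []
    for record in dump_a + dump_b:
        if record["hash"] in merged:
            continue  # exact duplicate from the other dump, drop it
        pos = (record["public_key"], record["seq"])
        if pos in positions and positions[pos] != record["hash"]:
            # two distinct records at the same chain position: a fork,
            # i.e. evidence of a double spend
            double_spends.append(pos)
        else:
            positions[pos] = record["hash"]
            merged[record["hash"]] = record
    return merged, double_spends
```

Two different records at the same chain position cannot both be honest, which is what makes the pairwise merge a natural place to detect double spending.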

@kc1212
Contributor

@kc1212 kc1212 commented Dec 16, 2016

The idea of merging shares some similarities with the GHS minimum-spanning-tree (MST) algorithm. But unlike the GHS algorithm, bottom-up consensus needs to work in a dynamic environment (multichain is always changing) and also under churn, so that is a challenge.

@synctext
Member Author

@synctext synctext commented Dec 20, 2016

Please read some related work:

  • The Quest for Scalable Blockchain Fabric: Proof-of-Work vs. BFT Replication
  • Difficulty control for blockchain-based consensus systems
  • The Sleepy Model of Consensus
  • Hybrid Consensus: Efficient Consensus in the Permissionless Model
  • SCP: A Computationally-Scalable Byzantine Consensus Protocol For Blockchains
  • Enabling Blockchain Innovations with Pegged Sidechains

My advice would be to take a boring, incremental-steps approach and learn what you need to solve:

  • get live edges operational
  • get TrustChain edges from neighbor and pack into a single file
  • publish all edges in a .torrent, broadcast hash to neighbor and download all neighbors
  • join and merge operation on two files
  • etc.

@synctext
Member Author

@synctext synctext commented Jan 6, 2017

Currently the closest related work to the Delft blockchain (MultiChain, TrustChain) is "The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance".

It is also graph-based, instead of a single chain. They provide an amazingly polished description of the idea we call "offline voting", which they describe as gossip-about-gossip. Their patented technology uses a graph-based blockchain and mentions a HashDAG. It uses proof-of-stake and single-party signed messages.

Corda is also transaction-focused. However, they lack a gossip mechanism. Transactions are listed on a chain, but stored in "vaults": http://r3cev.com/s/corda-introductory-whitepaper-final.pdf

@synctext
Member Author

@synctext synctext commented Jan 17, 2017

Brainstorm outcome, detailing the three algorithm phases per round:

  • consensus on ordered merge candidate list
  • consensus on group merge
  • consensus on group outcome

Using a commitment scheme we can generate guesses; the winner becomes the temporarily elected leader for coordinating group consensus. Drawback: we don't like leader dependencies, DDoS vulnerabilities, and failure/resume problems. Kelong: proof-of-luck
20170117_165312
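A toy sketch of such a commit-reveal election, assuming SHA-256 commitments over a random nonce and "lowest hash wins" as the luck criterion; both choices and all names are illustrative, not the actual protocol:

```python
import hashlib

def commit(nonce):
    # published during the commit phase, before any reveals
    return hashlib.sha256(nonce).hexdigest()

def elect_leader(reveals, commitments):
    """Pick the 'luckiest' node: lowest hash among the valid reveals.

    reveals: {node: nonce} from the reveal phase
    commitments: {node: hash} published earlier in the commit phase
    """
    valid = {node: nonce for node, nonce in reveals.items()
             if commit(nonce) == commitments.get(node)}
    return min(valid, key=lambda node: commit(valid[node]))
```

Because nonces are committed before anyone reveals, no node can bias the outcome after seeing the others' values; a node whose reveal doesn't match its commitment is simply excluded.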

@devos50
Contributor

@devos50 devos50 commented Jan 18, 2017

Just came across this consensus algorithm, might be interesting to read: https://www.stellar.org/papers/stellar-consensus-protocol.pdf

@kc1212
Contributor

@kc1212 kc1212 commented Jan 18, 2017

The main disadvantage, also mentioned in the paper, is that a quorum intersection must not consist solely of faulty nodes. This requirement depends on the assumption that individual nodes choose good quorum slices. In Stellar's setting this is reasonable, because nodes represent banks or other organisations, but the same isn't true for Tribler. Nevertheless, the idea of Federated Byzantine Agreement is very interesting.

@Zhijieren

@Zhijieren Zhijieren commented Jan 19, 2017

I will spend some time on this in the coming months.

@kc1212
Contributor

@kc1212 kc1212 commented Jan 19, 2017

I'm currently writing up my idea, expect something by the end of the week.

@synctext
Member Author

@synctext synctext commented Jan 20, 2017

First write-up by Kelong: bottom-up.pdf

@kc1212
Contributor

@kc1212 kc1212 commented Jan 20, 2017

Updated my write-up with some negative results and my reasoning, see "2.5.1. Merging consensus groups does not guarantee BFT with a probability of 1"

bottom-up.pdf

@kc1212
Contributor

@kc1212 kc1212 commented Jan 23, 2017

Created a repository for my thesis work, which contains the PDF. It'll be a bit unstructured in the beginning, just me writing ideas and results down. Hopefully it'll converge into a proper thesis by summer.

New section since last time - "2.5.2 Tail bounding the number of Byzantine nodes in a consensus group"

@pimotte
Contributor

@pimotte pimotte commented Jan 23, 2017

Some remarks:

  • Section 2.3.2: The querying for the valid set of blocks: is this indeed the set itself, or just the hash of the set? (If it is the former, does this scale?)
  • Section 2.5.2: Are we really interested in the probability that the number of white balls is lower than the expected value? I would suppose we are interested in the probability that the number of white balls is so low that there are too many black balls to reach Byzantine agreement?

@kc1212
Contributor

@kc1212 kc1212 commented Jan 23, 2017

@pimotte thanks for the remarks

  • It is the hash of the set, I'll be more specific.
  • If the number of white balls is lower than (2N + 1)/3, then that implies the number of black balls is more than f = (N - 1)/3. In this case Byzantine agreement cannot be achieved, because (2N + 1)/3 is the necessary and sufficient number of honest nodes to reach agreement (http://zoo.cs.yale.edu/classes/cs426/2013/bib/bracha85asynchronous.pdf).
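The threshold is easy to check numerically; the helper names below are made up for illustration:

```python
def max_faulty(n):
    # f = floor((n - 1) / 3): the most Byzantine nodes tolerable among n
    return (n - 1) // 3

def agreement_possible(n, honest):
    # agreement needs at least n - f honest nodes, i.e. (2n + 1)/3 rounded up
    return honest >= n - max_faulty(n)
```

For example, with N = 10 nodes, f = 3 faults are tolerable and 7 honest nodes are required; losing one more honest node makes agreement impossible.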

@synctext
Member Author

@synctext synctext commented Jan 26, 2017

Ideas:

  • magic for scalability by Kelong: self-reporting! The only service provided is that each node has signed their hash/checkpoint value and the signature is correct. Thus consensus is reached on valid self-reported chain checkpoints. A fraud detection mechanism is still required to check the validity of each self-signed report.
  • 1 hash represents 1 unique chain proof
  • each consensus group currently has just 1 lucky node. Improve the elegance of the algorithm: each node proposes a consensus, and all nodes follow the luckiest responsive node that also executes the algorithm truthfully.
  • probabilistic fraud detection; full chain/full graph equals proven 100% fraud detection (determine this P())

@kc1212
Contributor

@kc1212 kc1212 commented Jan 26, 2017

I can't take all the credit; a lot of it resulted from the fruitful discussions with Zhijie.

@Zhijieren

@Zhijieren Zhijieren commented Jan 31, 2017

Consensus Protocol.pdf
I have written my ideas down. It might look totally different from the initial ideas, but Kelong knows how I got here.

@kc1212
Contributor

@kc1212 kc1212 commented Feb 1, 2017

Started reading the PDF. In the Transaction Block, I'm assuming it should also have a number of outputs (probably 2). I'm thinking in terms of Bitcoin, where one of the outputs is "spare change" which the sender can re-use later.

@Zhijieren

@Zhijieren Zhijieren commented Feb 1, 2017

The spare change is used so that the receiver of a transaction doesn't need to trace all the way back to the root to make sure the sender has enough for this transaction. In this sense, we should add this spare change for the sake of simplicity and storage requirements. However, we still need the trace to the root for validation.

@kc1212
Contributor

@kc1212 kc1212 commented Feb 1, 2017

Yeah, we need to traverse to the root to validate a Tx regardless of spare change. But I was just thinking it'll be useful to freely set the transaction amount. For example if I only have 1 unit from 1 source, but I want to send 0.1 unit, then I can set 0.9 as UTXO (the spare change) and 0.1 as the actual output.
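That splitting of one input into a payment plus spare change can be sketched as below; amounts are in smallest indivisible units to avoid rounding, and the function name is hypothetical:

```python
def make_outputs(input_amount, pay_amount, recipient, sender):
    """Split one input into a payment output plus spare change (UTXO style)."""
    assert pay_amount <= input_amount, "insufficient funds"
    outputs = [(recipient, pay_amount)]
    change = input_amount - pay_amount
    if change > 0:
        outputs.append((sender, change))  # spare change back to the sender
    return outputs
```

So spending 1 unit out of a 10-unit input yields a 1-unit payment output and a 9-unit change output the sender can reuse later.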

@Zhijieren

@Zhijieren Zhijieren commented Feb 1, 2017

I see what you mean. Yes, we should do that as well. It will eventually save storage.

@kc1212
Contributor

@kc1212 kc1212 commented Feb 1, 2017

Just spoke to @synctext. He suggested that we should focus on something more generic: probably no Bitcoin-style transactions, just arbitrary data in the block (at least for my thesis). I guess we need to re-think the validation protocol in this case. My initial feeling is that there will be more overhead in the validation protocol.

@kc1212
Contributor

@kc1212 kc1212 commented Feb 2, 2017

The validation protocol, I think, can be done using a BFS or DFS. At every step, check that the checkpoints surrounding the node (transaction) are in some consensus result and that the node is not a fork of previously traversed nodes. We can prune branches if they've been validated previously. This should guarantee validity; I'm still thinking about a way to prove it.
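A rough sketch of that traversal, assuming each transaction node exposes its surrounding checkpoints and predecessors, and that `in_consensus` and the `validated` cache are supplied by the caller; all names are hypothetical, not from the thesis code:

```python
from collections import deque

def validate(tx, in_consensus, validated):
    """BFS back through the transaction graph from tx towards the root."""
    queue = deque([tx])
    seen = {}  # (owner, seq) -> tx id, for fork detection along the walk
    while queue:
        node = queue.popleft()
        if node.id in validated:
            continue  # this branch was validated earlier, prune it
        if not all(in_consensus(cp) for cp in node.checkpoints):
            return False  # a surrounding checkpoint never reached consensus
        pos = (node.owner, node.seq)
        if pos in seen and seen[pos] != node.id:
            return False  # fork of a previously traversed node
        seen[pos] = node.id
        validated.add(node.id)
        queue.extend(node.predecessors)
    return True
```

Sharing the `validated` set across calls is what makes the pruning pay off: a branch checked during one transaction's validation never has to be re-walked for the next.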

@synctext
Member Author

@synctext synctext commented Feb 7, 2017

Idea by Pim: keep the scalability, use checkpoints. Prove that you have a time-bounded worst-case detection time for double spends if a certain fraction of the network is non-malicious.

@synctext
Member Author

@synctext synctext commented Feb 7, 2017

As a harmed party you can detect a double spend by agents you interacted with: just check once after doing a transaction whether it made it into the consensus, as reported by your counterparty.

@kc1212
Contributor

@kc1212 kc1212 commented Feb 13, 2017

Made a lot of changes since last time - pdf.

Defined a few things formally, i.e. the basic mode, what is a fork, etc.

The consensus part should work if we assume the 2n/3 promoters (the nodes that execute the consensus algorithm) are honest. I also provide a proof sketch. OTOH a lot of it will change if we use an invitation system. Personally I'm not a fan, because that's using humans as part of the protocol, so it won't be possible to prove useful results, unless the invitation is based on some sort of reputation score.

I also wrote down a few things about fraud detection. Basically it's quite simple to detect a fork (the way I defined it) if one of the parties in the transaction is honest. The hard part is to detect a fork when both parties are malicious. The more forks there are, the harder it is to detect them all.

@kc1212
Contributor

@kc1212 kc1212 commented Mar 7, 2017

Implemented a simplified version (no erasure coding, no threshold signatures) of HoneybadgerBFT using Twisted - code is here.

The algorithms described in the papers are written in a synchronous (blocking) style, so they don't fit nicely with the asynchronous (event loop + futures/promises) style of Twisted. The algorithms are therefore implemented as state machines: every time a message comes in, I call a handler function to mutate the state.
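Such a message-driven state machine might look like the following. This is a hedged illustration of the style only, not the actual checo code; the `n - f` ECHO threshold is borrowed from reliable broadcast, and all names are made up:

```python
class BroadcastState:
    """One instance per broadcast; the network event loop feeds it messages."""
    INIT, DELIVERED = range(2)

    def __init__(self, n, f):
        self.n, self.f = n, f
        self.echoes = set()   # senders we have received an ECHO from
        self.state = self.INIT

    def handle_echo(self, sender):
        """Called for every incoming ECHO; mutates state, never blocks."""
        self.echoes.add(sender)
        if self.state != self.DELIVERED and len(self.echoes) >= self.n - self.f:
            self.state = self.DELIVERED   # threshold reached
            return "deliver"
        return None
```

Instead of blocking on "wait for n - f ECHO messages", each incoming message advances the explicit state, which fits naturally into Twisted's event loop.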

@kc1212
Contributor

@kc1212 kc1212 commented May 1, 2017

As of 1336bb5, true halves and compact blocks are implemented.

Initial experiments show better results than 6 days ago, that is, a higher (verified) transaction rate, and the graph looks a bit more reasonable.

figure_1

figure_2

@kc1212
Contributor

@kc1212 kc1212 commented May 11, 2017

A lot of performance improvements were merged these past couple of weeks; the most notable one is changing the serialisation library from jsonpickle to protobuf (PR).

I'm pleased to say the performance results improved significantly: previously we were just below 1000 tx/s, while the new code can reach 5000 tx/s. Here is the same graph but with the new code.

figure_1

figure_2

@qstokkink
Contributor

@qstokkink qstokkink commented May 12, 2017

@kc1212 If you are still unhappy with performance, you can make protobuf another 12-20 times faster by using the C++ backend:

import os

# these must be set before any protobuf module is imported
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "cpp"
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION"] = "2"
import google.protobuf

Official doc: https://developers.google.com/protocol-buffers/docs/reference/python-generated
My serializer using this: https://github.com/qstokkink/TriblerProtobufSerialization/blob/master/serializer.py

@kc1212
Contributor

@kc1212 kc1212 commented May 12, 2017

@qstokkink Thanks for the suggestion, 12-20 times faster is hard to ignore.

The document mentions that the "environment variable needs to be set before installing the protobuf library". Is the protobuf on the TUD DAS5 a bit different from what you get by simply running pip install protobuf?

@qstokkink
Contributor

@qstokkink qstokkink commented May 12, 2017

@kc1212 pip install protobuf will work without a problem on most *nix-based systems; macOS requires a brew install protobuf, and on Windows... well... it's a bit more work to get the cpp backend.

tl;dr Yes, that should work just fine on the DAS5. (I have a manual build script here for CentOS, but I think it is obsolete now)

@synctext
Member Author

@synctext synctext commented May 12, 2017

Sybil defense and incentives need to come from the application architect.
A scientific publication requires theorem material: proof of spam vulnerability, consensus guarantees,
agreement, gossip for signature propagation?
Efficient, scalable, eventual double-spending and fork detection.

This work provides the scalable blockchain fabric. Our efficient consensus mechanism is generic and does not pose any restrictions on the Sybil defense mechanism. Within our architecture you can select a proof-of-work, proof-of-stake or our own preferred trust-based approach.

ToDo:

  • malicious behavior experiment ?
  • correctness proof
  • guaranteed eventual detection of cheaters
  • performance bounds and scalability

@kc1212
Contributor

@kc1212 kc1212 commented May 16, 2017

My current model is fairly loose and fails to capture state transitions.

I'm considering modelling my system using Canetti's technique, described in Universally Composable Security, if I can digest it. This appears to be the technique used in The Bitcoin Backbone Protocol and Hybrid Consensus.

@synctext
Member Author

@synctext synctext commented Jun 14, 2017

THESIS Raw .pdf download

  • Holy Grail: we present a consensus model with proven correctness. Avoid probabilistic behavior?
  • bit more intro + blockchain picture in intro (1 MB block size, global consistent broadcast primitive, near civil war for the blockchain, governance problem, 5 nerds in control, R3 consensus == NULL, ... )
  • Chapter Problem Description (30 years of prior BFT work; quote: "PBFT triggered a renaissance in BFT replication research, with protocols like Q/U", etc.)
  • Security and Fault Tolerance Analysis: Agreement, validity, termination as key topics ("However, not everything is perfect. Now we show a negative result, where the liveness property cannot be attained.")
    Malicious nodes and collusion ?
  • impact of 51% attack, how fast would it crumble to bits and pieces?

Current results :
image

@synctext
Member Author

@synctext synctext commented Jul 3, 2017

@synctext
Member Author

@synctext synctext commented Jul 3, 2017

Latest Thesis Raw .pdf

Solid progress: a mathematical proof that the transaction rate now scales linearly with the number of participants!

  • 2.4 Non-goals
    Rephrase that as 'limitations'. Our work removes the transaction bound from the blockchain; however, the price to pay is that the Sybil attack prevention mechanism moves from the blockchain towards the application layer. This is similar to the approach seen successfully in the database world, where for large-scale systems the costly ACID guarantees moved to the application layer (DynamoDB, NoSQL, ..)
  • 3 System Architecture
    We present a design which removes the transaction rate restrictions. We combine HoneyBadgerBFT with our prior work on providing each entity with their own blockchain. We duplicate the essence of the architecture of the Internet itself, creating loosely coupled autonomous blockchain entities. Our TrustChain's only interaction with others is the exchange of a checkpoint.
  • Chapter: 4 Correctness and Complexity Analysis
  • 4.2 Performance and Complexity Analysis
  • Fig 5.3 excellent stuff
  • Figures: compare to prior work with real or estimated performance bounds!

throughput

@kc1212
Contributor

@kc1212 kc1212 commented Jul 11, 2017

figure_1
10k+ tx/s graph

@devos50
Contributor

@devos50 devos50 commented Jul 13, 2017

@kc1212 very related work, also quite new (credits for @synctext for finding this one): http://poseidon.it.usyd.edu.au/~concurrentsystems/doc/ConsensusRedBellyBlockchain.pdf

@kc1212
Contributor

@kc1212 kc1212 commented Jul 17, 2017

I was unable to verify the claims in the "Red-Bellied blockchain" work, especially the 250k transaction throughput for "CGLR" and "Boosted-MMR". I couldn't find any information on these keywords, and the document also did not provide any references.

@synctext
Member Author

@synctext synctext commented Jul 18, 2017

Title: Blockchain Consensus Protocol with Horizontal Scalability

Can the mathematical proof of linear scalability also be given without a global consistent reliable broadcast? How about diminishing return factors? Can a gossip-based approach also yield the same guaranteed performance and scale? It can be assumed that we have a 2D network model where nodes are aware of their lowest latency neighbors.

DDoS and spam vulnerability analysis/considerations?

Thesis: "Our consensus protocol runs on top of Extended TrustChain."
More like: we used unmodified ACS as the key building block for our scalable consensus model.

More like: our key difference from related work is the independence of our transaction validation. Prior systems such as Bitcoin and Ethereum entangled the consensus algorithm with transaction validation, restricting performance. By splitting this functionality we achieved mathematically proven horizontal scalability for the first time.

typo: Meaning that trasanctions with adversaries

Attempt matters like: no-forks-possible proof ?

Another proof? protocol will always terminate for any finite-sized input. (with minimal time assumptions)

4.2.4 Global "linear" throughput

"then the network cannot keep up with the large number of fragments". Our default cautious policy is to download all their new blocks since the last checkpoint before transacting with a stranger. This causes a bottleneck which could be avoided if encounters are predictable and pre-fetching is employed.

we dedicate this chapter on comparing our results with related work == no entropy.
Use word taxonomy.
6.4 Hybrid systems; not informative

What is the name of the thing you did in your thesis?
(trick you use is hash-based state consensus; leaderless)

@devos50
Contributor

@devos50 devos50 commented Jul 20, 2017

@kc1212 is your 10k+ transaction graph with fixed or random neighbors?

@kc1212
Contributor

@kc1212 kc1212 commented Jul 20, 2017

@devos50
Contributor

@devos50 devos50 commented Jul 20, 2017

@kc1212 thanks! I expected that already. You probably don't have time to do this but I was thinking, maybe it's helpful to make an interaction model (i.e. based on interactions in the Bitcoin or Ethereum blockchain) and run your consensus algorithm on top of that. This closely resembles real-world transactions.

@synctext
Member Author

@synctext synctext commented Jul 22, 2017

Fast AUS blockchain: http://poseidon.it.usyd.edu.au/~concurrentsystems/rbbc/Benchmark.html
shows a single graph with 400k transactions/second...

@kc1212
Contributor

@kc1212 kc1212 commented Aug 15, 2017

A 9-page paper version of my thesis is available:
https://github.com/kc1212/consensus-thesis/raw/master/paper/paper.pdf

The thesis also went through many polishing rounds; it can be found at the usual place:
https://github.com/kc1212/consensus-thesis/raw/master/thesis/thesis.pdf

@synctext
Member Author

@synctext synctext commented Sep 19, 2017

The source code for 10k transactions per second: https://github.com/kc1212/checo

@nyrosis

@nyrosis nyrosis commented Jun 28, 2018

@synctext
Member Author

@synctext synctext commented Jun 29, 2018

The PARSEC algorithm is certainly interesting, thanks!

A gossip protocol is used to allow efficient communication between nodes, as in Hashgraph
and [5]. Propagating a message, and indeed, reaching consensus only costs O(N log N)
communications and O(log N) stages.

@synctext
Member Author

@synctext synctext commented May 21, 2019

@synctext
Member Author

@synctext synctext commented Jul 15, 2021

@synctext synctext closed this Jul 15, 2021