Istanbul Byzantine Fault Tolerance #650
Istanbul byzantine fault tolerant consensus protocol
Note, this work is deeply inspired by Clique POA. We've tried to design as similar a mechanism as possible in the protocol layer, such as with validator voting. We've also followed its EIP style of putting the background and rationale behind the proposed consensus protocol to help developers easily find technical references. This work is also inspired by Hyperledger's SBFT, Tendermint, HydraChain, and NCCU BFT.
Istanbul BFT is inspired by the Castro-Liskov 99 paper. However, the original PBFT needed quite a bit of tweaking to make it work with blockchain. First, there is no specific "client" that sends out requests and waits for the results; instead, all of the validators can be seen as clients. Furthermore, to keep the blockchain progressing, a proposer is selected in each round to create a block proposal for consensus. Also, for each consensus result, we expect to generate a verifiable new block rather than a bunch of read/write operations to the file system.
Istanbul BFT inherits from the original PBFT by using 3-phase consensus: PRE-PREPARE, PREPARE, and COMMIT.
Blocks in the Istanbul BFT protocol are final, which means that there are no forks and any valid block must be somewhere in the main chain. To prevent a faulty node from generating a totally different chain from the main chain, each validator appends the 2F + 1 received COMMIT signatures to the extraData field of the header before inserting the block into the chain, which makes the block self-verifiable.
Istanbul BFT is a state machine replication algorithm. Each validator maintains a state machine replica in order to reach block consensus.
Round change flow
Currently we support two policies: round robin and sticky proposer.
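The difference between the two policies can be sketched roughly as follows. This is illustrative Go, not the Quorum implementation; the function name and signature are assumptions.

```go
package main

import "fmt"

// Proposer selection sketch for the two supported policies:
//   - Round robin: the proposer advances on every round change AND every block.
//   - Sticky: the proposer stays the same and only advances on a round change.
// Illustrative only; not the Quorum implementation.

type Policy int

const (
	RoundRobin Policy = iota
	Sticky
)

// selectProposer picks a proposer from the validator set given the index of
// the previous block's proposer, the current round number, and the policy.
func selectProposer(validators []string, lastProposerIdx int, round uint64, p Policy) string {
	n := uint64(len(validators))
	switch p {
	case RoundRobin:
		// Move one past the last proposer, then advance once more per round.
		return validators[(uint64(lastProposerIdx)+1+round)%n]
	default: // Sticky
		// Keep the same proposer unless a round change occurred.
		return validators[(uint64(lastProposerIdx)+round)%n]
	}
}

func main() {
	vals := []string{"A", "B", "C", "D"}
	fmt.Println(selectProposer(vals, 0, 0, RoundRobin)) // B: next validator
	fmt.Println(selectProposer(vals, 0, 0, Sticky))     // A: same proposer
	fmt.Println(selectProposer(vals, 0, 1, Sticky))     // B: round change moved it
}
```

With a sticky proposer, a healthy node keeps proposing indefinitely; only a round change (e.g. after a timeout) moves the role to the next validator.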
Validator list voting
We use a similar validator voting mechanism as Clique and copy most of the content from the Clique EIP. Every epoch transition resets the validator voting, meaning that if an authorization or de-authorization vote is still in progress, that voting process is terminated.
For all transaction blocks:
Future message and backlog
In an asynchronous network environment, one may receive future messages which cannot be processed in the current state. For example, a validator can receive COMMIT messages while it is still in an earlier state. We call this kind of message a future message.
To speed up the consensus process, a validator that receives a future message stores it in a backlog and processes it as soon as its state allows, instead of dropping it and waiting for retransmission.
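The backlog behavior above can be sketched as follows. The types and method names are illustrative assumptions; the real implementation keeps per-validator backlogs and more message kinds.

```go
package main

import "fmt"

// Future-message backlog sketch: a message whose (sequence, round) is ahead
// of the validator's current view cannot be processed yet, so it is queued
// and replayed once the state catches up.

type View struct {
	Sequence, Round uint64
}

type Message struct {
	View View
	Code string // e.g. "PREPARE", "COMMIT"
}

// isFuture reports whether msg belongs to a later view than cur.
func isFuture(cur, msg View) bool {
	return msg.Sequence > cur.Sequence ||
		(msg.Sequence == cur.Sequence && msg.Round > cur.Round)
}

type Core struct {
	current View
	backlog []Message
}

// Handle processes a message immediately, or backlogs it if it is a future
// message. Returns true when the message was processed in the current state.
func (c *Core) Handle(m Message) bool {
	if isFuture(c.current, m.View) {
		c.backlog = append(c.backlog, m)
		return false
	}
	return true // processing in the current state (omitted here)
}

// AdvanceTo moves to a new view and replays any backlogged messages that are
// no longer in the future.
func (c *Core) AdvanceTo(v View) (replayed []Message) {
	c.current = v
	var still []Message
	for _, m := range c.backlog {
		if isFuture(c.current, m.View) {
			still = append(still, m)
		} else {
			replayed = append(replayed, m)
		}
	}
	c.backlog = still
	return replayed
}

func main() {
	c := &Core{current: View{Sequence: 5, Round: 0}}
	c.Handle(Message{View{6, 0}, "COMMIT"})   // future: backlogged
	fmt.Println(len(c.backlog))               // 1
	fmt.Println(len(c.AdvanceTo(View{6, 0}))) // 1 message replayed
}
```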
We define the following constants:
We also define the following per-block constants: N, the total number of validators, and F, the number of faulty validators the network can tolerate, where F = floor((N − 1) / 3).
We didn't invent a new block header for Istanbul BFT. Instead, we follow Clique in repurposing the extraData field of the block header to carry the consensus information.
Block hash, proposer seal, and committed seals
The Istanbul block hash calculation is different from the standard Ethereum block hash calculation, because the proposer seal and the committed seals are carried in the header's extraData.
The calculation is still similar to the standard one, except for the special handling of extraData described below.
Proposer seal calculation
By the time of proposer seal calculation, the committed seals are still unknown, so we calculate the seal with those unknowns empty. The calculation is as follows:
Block hash calculation
While calculating the block hash, we need to exclude committed seals since that data is dynamic between different validators. Therefore, we make the committed seals empty while calculating the hash.
Before inserting a block into the blockchain, each validator needs to collect 2F + 1 committed seals to prove that the block has gone through consensus.
Committed seal calculation:
The committed seal is calculated by each validator signing the block hash, concatenated with the COMMIT message code, using its private key.
Block locking mechanism
A locking mechanism is introduced to resolve safety issues. In general, when a proposer is locked at a certain block height, it can only propose the block it is locked on at that height until it is unlocked.
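The locking rule can be sketched roughly as follows. The types and the unlock conditions are illustrative assumptions (e.g. a lock is typically released when the locked block is committed or proven invalid), not the Quorum implementation.

```go
package main

import "fmt"

// Locking sketch: once a validator has locked on a proposal at some height,
// it may only propose/vote for that same proposal at that height until it is
// unlocked. Proposals at other heights are unaffected.

type Lock struct {
	height    uint64
	blockHash string
	locked    bool
}

// LockOn records the proposal this validator is locked on.
func (l *Lock) LockOn(height uint64, hash string) {
	l.height, l.blockHash, l.locked = height, hash, true
}

// Unlock clears the lock, allowing new proposals at the locked height again.
func (l *Lock) Unlock() { l.locked = false }

// MayVote reports whether a vote for the given proposal is allowed:
// anything when unlocked, only the locked block while locked.
func (l *Lock) MayVote(height uint64, hash string) bool {
	if !l.locked || l.height != height {
		return true
	}
	return l.blockHash == hash
}

func main() {
	var l Lock
	l.LockOn(100, "0xaaa")
	fmt.Println(l.MayVote(100, "0xbbb")) // false: locked on a different block
	fmt.Println(l.MayVote(100, "0xaaa")) // true
	l.Unlock()
	fmt.Println(l.MayVote(100, "0xbbb")) // true
}
```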
Lock and unlock
Can you explain when block insertion might fail? I'm struggling to see why block insertion would ever fail for a valid proposal.
Why not just accept zero-gasprice transactions?
Have you tried running the network with >=1/3 faulty nodes? If so, what does the result look like; what kinds of failures do you see in practice?
Before actually inserting the block into the chain, the consensus engine only validates the block header. Insertion performs more checks, so it can fail for other reasons.
You're right. We've updated the EIP accordingly.
Theoretically it's also possible to finalize two conflicting blocks, if the proposer is one of the Byzantine nodes and makes two proposals, each of which gets 2/3 prepares+commits. Though I guess that's fairly unlikely to happen in practice and so won't appear in that many random tests.
I know the meaning of block validity, but outside the PoW this is a little bit ambiguous.
Yes, I think you are right. Suppose there are f+1 faulty nodes, 2f good nodes, and the proposer is among the faulty nodes. The proposer can send block A to the first f good nodes and block B to the second f good nodes. Then both groups can collect 2f+1 prepares+commits, for block A and block B respectively. Thus two conflicting blocks can be finalized.
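The arithmetic behind this scenario can be checked directly. With N = 3f+1 validators and f+1 of them faulty (double-voting), each half of the 2f honest nodes sees a quorum of 2f+1 for a different block. A small sketch:

```go
package main

import "fmt"

// Quorum arithmetic for the conflicting-finalization scenario: with
// N = 3f+1 validators, f+1 faulty (including the proposer), and the honest
// nodes split into two groups of f, both blocks reach the 2F+1 quorum.

// quorum returns 2F + 1 with F = floor((N-1)/3).
func quorum(n int) int { return 2*((n-1)/3) + 1 }

func main() {
	f := 2
	n := 3*f + 1    // 7 validators
	faulty := f + 1 // 3 faulty nodes, voting for BOTH blocks
	honestHalf := f // each honest group: 2 nodes
	votesPerBlock := faulty + honestHalf
	fmt.Println(quorum(n))     // 5
	fmt.Println(votesPerBlock) // 5: both block A and block B reach quorum
}
```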
Each validator puts the collected 2F + 1 committed seals into the extraData of the block header as the consensus proof before inserting the block into the chain.
Great! I was a little confused about "valid block" versus "consensus proof"; your response also helps with the meaning of validation in Clique. Thank you.
Can you clarify when this timer starts? Is there one timer for the whole round, like in PBFT (well, in PBFT the timer starts once the client request is received), or is there a new timer at each phase (pre-prepared, prepared, etc.) as the figure seems to suggest?
Unless there is additional mechanism not described above (or perhaps I am just missing something), I think this protocol may have safety issues across round changes, as there does not seem to be anything stopping validators from committing a new block in a new round after others have committed in the previous round. This is what the "locking" mechanism in Tendermint addresses. In PBFT it's handled by broadcasting much more information during the round change. When you "blockchainify" PBFT, you can do away with this extra information if you're careful to introduce something like Tendermint's locking mechanism. I suspect that if you address these issues, you will end up with a protocol that is roughly identical (if not exactly identical) to Tendermint. Happy to discuss further and collaborate on this - great initiative!
Yes, there is only one timer, and it is reset/triggered at the beginning of every new round.
Yes, in some extreme cases there might be safety issues. For example, say there is only one validator which receives 2F + 1 COMMIT messages and inserts the block, while the other validators time out and move on to a new round; without locking, the new round could then commit a different block at the same height.
Yes, the sticky proposer policy can lead to this issue. We've listed "faulty proposer detection" in the remaining-tasks section, aiming to resolve it. One possible way is to switch to the round robin policy whenever a validator sees an empty block. However, a sticky proposer can still game this by generating a very small block every round.
Detecting a faulty node deterministically is hard, which makes penalizing faulty nodes even harder. For simplicity, this PR doesn't dive into that topic; it might be worth exploring in a follow-up EIP and further research.
In our preliminary testing with a 4-validator setup, consensus took around 10 ms to 100 ms, depending on how many transactions were in each block. In our tests, each block could contain up to 2000 transactions.
Great work on developing Istanbul!
One comment on "Does it still make sense to use gas?"
I've developed a testnet (using Ethermint) and modified the client to not charge gas. I wanted to bounce this idea off others to see whether it is valid...
To avoid the infinite loop problem, the validators ensure that the smart contracts being published to the blockchain are sent from a small set of white-listed accounts.
These accounts are trusted by the consortium to only publish smart contracts that have gone through a strict review process.
I suppose in the extreme edge case that a computationally expensive contract slipped through and was published by mistake, the validators would stop and roll back to before the event.
Does this sound reasonable?
Appreciate any feedback on the faults with such an implementation.
The current implementation (as found in Quorum) breaks the concept of the "pending" block, used in several calls, but most notably in eth_getTransactionCount with the "pending" block parameter.
In Ethereum, the pending block means the latest confirmed block + all pending transactions the node is aware of. This means that directly after a transaction is sent to the node (through RPC), the transaction count (aka nonce) in the "pending" block is increased. A lot of tools, like abigen in this repo or any other tool where tx signing occurs at the application level instead of in geth, rely on this for making multiple transactions at once. After the first one, the result of the pending transaction count supplies the nonce for the next.
With the current implementation of Istanbul, the definition of the "pending block" seems to be different. When submitting a transaction, the result for the pending transaction count does not change until the transaction is actually included in a mined block.
So this seems to mean that the "pending block" definition changed from "latest block + pending txs" to "the block that is currently being voted on". I consider this a bug; if this was done on purpose, it breaks a lot of existing applications (all users of abigen, for example) and should be reconsidered.
I originally reported about this issue in the Quorum repo, but there doesn't seem to be a good place to report bugs in Istanbul other than here.
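A toy model of the two nonce semantics makes the breakage concrete. The types below are hypothetical; they only model the behavior described in the report, not the actual client code.

```go
package main

import "fmt"

// Toy model of "pending block" nonce semantics. Standard Ethereum behavior:
// pending nonce = nonce in the latest block + the sender's transactions
// already queued, so tools can sign several transactions back to back.
// The reported Istanbul behavior returns only the confirmed nonce until the
// block is actually mined, causing the second tx to reuse a nonce.

type Node struct {
	confirmedNonce uint64 // nonce as of the latest mined block
	pendingTxs     int    // sender's txs submitted but not yet mined
}

// pendingNonceStandard mirrors eth_getTransactionCount(addr, "pending") on
// upstream geth: it accounts for transactions not yet mined.
func (n *Node) pendingNonceStandard() uint64 {
	return n.confirmedNonce + uint64(n.pendingTxs)
}

// pendingNonceReported models the behavior described in the bug report,
// where "pending" means the block currently being voted on.
func (n *Node) pendingNonceReported() uint64 {
	return n.confirmedNonce
}

func main() {
	n := &Node{confirmedNonce: 3}
	n.pendingTxs++ // a transaction is submitted over RPC
	fmt.Println(n.pendingNonceStandard()) // 4: next tx can be signed immediately
	fmt.Println(n.pendingNonceReported()) // 3: second tx would reuse nonce 3
}
```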
I'm sorry to disrupt the technical discussion here with a non-technical question: What is the intention for including this in the EIP repository? In particular I was wondering:
(1) Is this proposal seeking public protocol adoption? (It seems private-chain focused, really aimed at extending existing clients.)
@renuseabhaya I had the same issue. My problem was that with Istanbul, you do not use a "regular" account (meaning, an account that you generate using geth account new); the validator identity is instead derived from the node key.
@yutelin Can you explain what the rationale was behind using an account address, derived from the node key, to identify validators instead of using the regular enode ID that is already being used for identifying nodes?
@yutelin Yes, correct. So currently we are using the account address derived from the node key to identify validators.
Since the enode ID is also derived from the private node key (that is its original purpose), would it be possible to use the enode ID instead of the address? This would save the extra step of generating an address from the node key.
I have tried that with the newest geth
but I get a
So it is not yet part of vanilla geth? Only quorum?
In quorum the switch
but then it does not sync.
Please update the hardcoded IP addresses of the bootnodes, or publish a script / list of current bootnodes. Thanks.
Hi, I have some issues with block creation (mining) using IBFT. I'm testing with 7 validator nodes: when I bring 4 nodes up, wait some time (around 30 minutes), and then bring the 5th node up, there is no block creation even after more than another half hour. However, if I bring all 5 nodes up at the same time, block creation happens normally. What might be the issue?
I have given more details here getamis/istanbul-tools#113
One more question: In Clique, with