
High-Performance and Secure Multi-BFT Consensus via Dynamic Global Ordering (Cerberus)

ABSTRACT

In Multi-BFT consensus, multiple leader-based consensus instances, e.g., PBFT or HotStuff, run in parallel, circumventing the leader bottleneck of a single instance. Despite the improved performance, Multi-BFT consensus has an Achilles' heel: the need to globally order output blocks across instances. Deriving this global ordering is challenging because it must cope with different rates at which blocks are produced by different instances, and simultaneously, it must respect inter-block causality constraints for security. In prior Multi-BFT designs, each block is assigned a global index before creation, leading to poor performance and causality violations. We propose Cerberus, a high-performance and secure Multi-BFT protocol that considers varying block rates of instances while ensuring block causality. Our key idea is to dynamically order consensus results across instances. First, the dynamic ordering decouples the dependencies between the replicas' partial logs to ensure fast generation of the global log. This eliminates blocking on slow instances. Second, blocks are ordered by their generation sequence, which respects inter-block causality. To this end, we assign monotonic ranks to the output blocks of instances for global ordering. We further pipeline these monotonic ranks with the consensus process and signature aggregation to reduce protocol overhead. We implemented and evaluated Cerberus by initializing consensus instances with PBFT. Our evaluation shows that Cerberus improves the peak throughput of ISS, a state-of-the-art Multi-BFT design, by 6x and reduces transaction latency of ISS by 94%, when deployed with one straggling replica out of 32 replicas in a WAN.
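To make the key idea above concrete, here is a highly simplified, hypothetical Go sketch of ordering output blocks by a monotonic rank assigned at generation time, rather than by a global index assigned before creation. It illustrates only the idea stated in the abstract, not Cerberus's actual protocol; all names (`block`, `globalOrder`) are invented for this illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// block is a consensus instance's output, tagged with a monotonic rank
// assigned when the block was generated (an assumption of this sketch).
type block struct {
	instance int    // which consensus instance produced the block
	rank     uint64 // monotonically increasing rank at generation time
	payload  string
}

// globalOrder sorts delivered blocks by rank, breaking ties by instance ID,
// so fast instances never block on a slow instance's pre-assigned slot.
func globalOrder(delivered []block) []block {
	sort.Slice(delivered, func(i, j int) bool {
		if delivered[i].rank != delivered[j].rank {
			return delivered[i].rank < delivered[j].rank
		}
		return delivered[i].instance < delivered[j].instance
	})
	return delivered
}

func main() {
	out := globalOrder([]block{
		{instance: 1, rank: 3, payload: "c"},
		{instance: 0, rank: 1, payload: "a"},
		{instance: 2, rank: 2, payload: "b"},
	})
	fmt.Println(out) // ordered by generation sequence: a, b, c
}
```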

Cerberus is implemented on top of ISS's implementation from Hyperledger Labs. For detailed deployment setup, please see ISS's documentation below.


Insanely Scalable State-Machine Replication (ISS)

This is a modular framework for implementing, deploying and testing a distributed ordering service. The main task of such a service is maintaining a totally ordered Log of client Requests. This implementation uses multiple instances of an ordering protocol and multiplexes their outputs into the final Log. The ordering protocol instances running on each peer are orchestrated by a Manager module that decides which instance is responsible for which part of the Log, when to execute a checkpoint protocol and which client requests are to be ordered by which ordering instance. The decisions of the Manager must be consistent across all peers.
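As a rough illustration of this multiplexing, the following hypothetical Go sketch has each ordering instance emit its decided (SN, Batch) pairs on its own channel, with a fan-in routine committing them at their Log positions. All names are illustrative, not the framework's real API.

```go
package main

import (
	"fmt"
	"sync"
)

type entry struct {
	SN    int32    // global position in the Log
	Batch [][]byte // opaque client Requests
}

// multiplex fans in the decided (SN, Batch) pairs of all ordering instances.
// Because each instance is responsible for a disjoint set of SNs, entries can
// be committed directly at their Log positions; commit must be safe for
// concurrent use.
func multiplex(instances []<-chan entry, commit func(entry)) {
	var wg sync.WaitGroup
	for _, out := range instances {
		wg.Add(1)
		go func(out <-chan entry) {
			defer wg.Done()
			for e := range out {
				commit(e)
			}
		}(out)
	}
	wg.Wait()
}

func main() {
	a, b := make(chan entry), make(chan entry)
	committed := make(map[int32][][]byte)
	var mu sync.Mutex
	go func() { a <- entry{SN: 0, Batch: [][]byte{[]byte("tx1")}}; close(a) }()
	go func() { b <- entry{SN: 1, Batch: [][]byte{[]byte("tx2")}}; close(b) }()
	multiplex([]<-chan entry{a, b}, func(e entry) {
		mu.Lock()
		defer mu.Unlock()
		committed[e.SN] = e.Batch
	})
	fmt.Println(len(committed), "Entries committed") // 2 Entries committed
}
```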

The Log is a sequence of Entries. Each Entry has a sequence number (SN) defining its position in the Log, and contains a Batch of Requests. The Log is logically partitioned into Segments - parts of the Log attributed to a single instance of an ordering protocol. It is the Manager's task to create these Segments and to instantiate the ordering protocol for each created Segment.

The set of all possible client Requests is partitioned (based on their hashes) into subsets called Buckets. The Manager assigns a Bucket to each Segment it creates. The ordering protocol instance responsible for a Segment creates Batches only from Requests in the assigned Bucket. It is the Manager's task to create Segments and assign Buckets in a way that ensures no two Segments being ordered concurrently are assigned the same Bucket. This is required to prevent request duplication.
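The hash-based Request-to-Bucket mapping can be illustrated with a short, hypothetical Go function (the actual mapping lives in the request package and may differ in detail):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// bucketOf maps a Request payload to one of numBuckets Buckets by hashing it
// and reducing the hash modulo the number of Buckets. Every Request therefore
// maps to exactly one Bucket, deterministically on all peers.
func bucketOf(request []byte, numBuckets uint64) uint64 {
	h := sha256.Sum256(request)
	return binary.BigEndian.Uint64(h[:8]) % numBuckets
}

func main() {
	fmt.Println(bucketOf([]byte("transfer 10 from A to B"), 16))
}
```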

The Manager observes the Log and creates new Segments as the Log fills up. When the Manager creates a new Segment, it triggers the Orderer that orders the Segment. Ordering a Segment means committing new Entries with the SNs of that Segment. Periodically, the Manager triggers the Checkpointer to create checkpoints of the Log. The Manager observes the created checkpoints and issues new Segments as the checkpoints advance, respecting the watermark window.
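The Manager's control loop might be sketched as follows; this is a hypothetical illustration of the behavior described above, and none of these names (`managerLoop`, `issueSegments`, etc.) are the framework's real API.

```go
// Package sketch illustrates the Manager's orchestration, not the real code.
package sketch

type segment struct{ firstSN, lastSN int32 }

// managerLoop observes Log progress, triggers the Checkpointer at every
// checkpointPeriod-th SN, and issues new Segments whenever a stable
// checkpoint advances the watermark window.
func managerLoop(
	logProgress <-chan int32, // SN reached by the Log
	stableCheckpoints <-chan int32, // SN of a checkpoint that became stable
	triggerCheckpoint func(sn int32),
	issueSegments func(first, last int32) []segment,
	orderSegment func(seg segment),
	checkpointPeriod, watermarkWindow int32,
) {
	for {
		select {
		case sn := <-logProgress:
			if sn%checkpointPeriod == 0 {
				triggerCheckpoint(sn) // one checkpoint protocol instance per such SN
			}
		case cp := <-stableCheckpoints:
			// All issued Segments must fall within the watermark window
			// [cp, cp+watermarkWindow).
			for _, seg := range issueSegments(cp, cp+watermarkWindow-1) {
				orderSegment(seg)
			}
		}
	}
}
```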

Installation

Cloning the repository

Create a GOPATH directory and make sure you are the owner of it:

sudo mkdir -p /opt/gopath/

sudo chown -R $user:$group /opt/gopath/

where $user and $group are your user and group, respectively.

Create a directory to clone the repository into:

mkdir -p /opt/gopath/src/github.com/JeffXiesk/

Clone this repository under the directory you created:

cd /opt/gopath/src/github.com/JeffXiesk/

git clone https://github.com/JeffXiesk/cerberus.git

Check out the research-iss branch.

Installing Dependencies

With /opt/gopath/src/github.com/JeffXiesk/cerberus as working directory, go to the deployment directory:

cd deployment

Configure the user and group in vars.sh.

To install Golang and requirements:

source scripts/install-local.sh

NOTE: The install-local.sh script, among other dependencies, installs Go in the home directory, sets GOPATH to /opt/gopath/bin/ and edits ~/.bashrc.

The default path to the repository is set to: /opt/gopath/src/github.com/JeffXiesk/cerberus/.

ISS Installation

The run-protoc.sh script needs to be run from the project root directory (i.e., the cerberus directory) before compiling the Go files.

IMPORTANT: Go modules are not supported. Disable them with export GO111MODULE=off before installation.

Compile and install the Go code by running go install ./... from the project root directory.

Deployment & Performance Metrics

Detailed instructions can be found in the deployment directory.

Glossary of terms

Batch

An ordered sequence of client Requests. All Requests in a Batch must belong to the same Bucket. The Batch is defined in the request package.

Bucket

A subset of all possible client Requests. Each Request maps to exactly one Bucket (mapping is based on the Request's hash). The Manager assigns one Bucket to each Segment and the Orderer of the Segment only uses Requests from the assigned Bucket to propose new Batches. The Bucket is defined in the request package.

Checkpointer

Module responsible for creating checkpoints of the Log. The Checkpointer listens to the Manager, which notifies the Checkpointer about each SN at which a checkpoint should occur. The Checkpointer triggers a separate instance of the checkpointing protocol for each such SN. When a checkpoint is stable, the Checkpointer submits it to the Log. Defined in the checkpointer package.

Entry

One element of the Log. It contains a sequence number (SN) defining its position in the Log and a Batch of Requests. Defined in the log package.

Log

A sequence of Entries replicated by the peers. The log package implements this abstraction and all related functionality.

Manager

Module orchestrating all components of the ordering service implementation. The Manager observes the Log, issues Segments and triggers the Checkpointer. It maintains a watermark window into which all the issued Segments must fall. The decisions of the Manager must be consistent across all peers. Defined in the manager package.

Orderer

Module implementing the actual ordering of Batches, i.e., committing new Entries to the Log. The Orderer listens to the Manager for new Segments. Whenever the Manager issues a new Segment, the Orderer creates a new instance of the ordering protocol that proposes and agrees on Request Batches, one for each SN that is part of the Segment. When a Batch has been agreed upon for a particular SN, the Orderer commits the (SN, Batch) pair as an Entry to the Log. Defined in the orderer package.
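A hypothetical view of what ordering one Segment amounts to, in Go; here `agree` stands in for a full ordering protocol instance (e.g., PBFT) and all names are assumptions of this sketch, not the orderer package's real interface.

```go
package sketch

type batch [][]byte

type segment struct {
	sns []int32 // the (not necessarily contiguous) SNs of this Segment
}

// orderSegment agrees on one Batch per SN of the Segment, cutting proposals
// exclusively from the Segment's assigned Bucket, and commits each decided
// (SN, Batch) pair as an Entry to the Log.
func orderSegment(
	seg segment,
	nextBatch func() batch, // cuts a Batch from the assigned Bucket only
	agree func(sn int32, proposal batch) batch, // consensus on the Batch for sn
	commit func(sn int32, decided batch), // appends the Entry to the Log
) {
	for _, sn := range seg.sns {
		decided := agree(sn, nextBatch())
		commit(sn, decided)
	}
}
```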

Request

Opaque client data. Each Request deterministically maps to a Bucket. Defined in the request package.

Segment

Part of the Log, i.e., a subset of (not necessarily contiguous) SNs, ordered independently by an Orderer. Segments are disjoint: no SN appears in more than one Segment. The Segment data structure (defined in the manager package) completely describes an instance of the ordering protocol: the SNs it is responsible for, the sequence of leaders, the set of followers, the assigned Bucket, as well as information on when it is safe to start ordering it.
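Rendered as a Go struct, the description above might look like the following hypothetical sketch; field names are illustrative, not those of the manager package.

```go
package sketch

type Segment struct {
	SNs         []int32 // the (not necessarily contiguous) SNs this Segment covers
	Leaders     []int32 // sequence of leaders of the ordering protocol instance
	Followers   []int32 // set of follower peers
	Bucket      int     // the single Bucket Batches may be drawn from
	StartsAfter int32   // SN whose commitment makes it safe to start ordering
}
```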

Sequence number (SN)

32-bit integer referencing a particular position in the Log.

Watermark window

A range of SNs for which Entries can be proposed. The watermark window starts at the last stable checkpoint and has a certain length that is a system parameter.
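A minimal, hypothetical Go check of the watermark-window rule stated above:

```go
package sketch

// inWatermarkWindow reports whether an Entry may be proposed for sn, given
// the SN of the last stable checkpoint and the window length (a system
// parameter).
func inWatermarkWindow(sn, lastStableCheckpoint, windowLength int32) bool {
	return sn >= lastStableCheckpoint && sn < lastStableCheckpoint+windowLength
}
```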
