[FAB-3760] Update README in orderer dir
This file was written a long time ago, and needs to be updated to
reflect the current status of the ordering service component.

This changeset addresses that.

Change-Id: I949702ab5b75e274b95ddfe11ac7e022783130b7
Signed-off-by: Kostas Christidis <kostas@christidis.io>
kchristidis committed May 10, 2017
1 parent 9d1da95 commit f105cc1
64 changes: 43 additions & 21 deletions orderer/README.md
# Hyperledger Fabric Ordering Service

The Hyperledger Fabric ordering service is intended to provide an atomic broadcast ordering service for consumption by the peers. This means that many clients may submit messages for ordering, and all clients receive the same series of ordered batches in response.

## Protocol definition

The atomic broadcast ordering protocol for Hyperledger Fabric is described in `hyperledger/fabric/protos/orderer/ab.proto`. There are two services, the `Broadcast` service for injecting messages into the system, and the `Deliver` service for receiving ordered batches from the service.

## Service types
* Solo ordering service (testing): The solo ordering service is intended to be an extremely easy-to-deploy, non-production ordering service. It consists of a single process which serves all clients, so no "consensus" is required as there is a single central authority. There is correspondingly no high availability or scalability. This makes solo ideal for development and testing, but not for deployment.

* Kafka-based ordering service (production): The Kafka-based ordering service leverages the Kafka pub/sub system to perform the ordering, but wraps this in the familiar `ab.proto` definition so that the peer orderer client code does not need to be written specifically for Kafka. Kafka is currently the preferred choice for production deployments which demand high throughput and high availability, but do not require Byzantine fault tolerance.

* PBFT ordering service (pending): The PBFT ordering service will use the Hyperledger Fabric PBFT implementation (currently under development) to order messages in a Byzantine fault tolerant way.

### Choosing a service type

To set the service type, the ordering service administrator needs to set the appropriate value in the genesis block that the ordering service nodes are bootstrapped from.

Specifically, the value corresponding to the `ConsensusType` key of the `Values` map of the `Orderer` config group on the system channel should be set to either "solo" or "kafka".

For details on the configuration structure of channels, refer to the [Channel Configuration](../source/configtx.rst) guide.

`configtxgen` is a tool that allows for the creation of a genesis block using profiles, or grouped configuration parameters -- refer to the [Configuring using the configtxgen tool](../source/configtxgen.rst) guide for more.
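
For example, a genesis block for a solo-based ordering service could be generated as follows. This is a minimal sketch: the profile name `SampleInsecureSolo` is an assumption, and should be replaced with whatever profile in your `configtx.yaml` sets `ConsensusType` to the desired value.

```sh
# Generate the genesis block for the ordering service nodes
# (the profile name below is illustrative)
configtxgen -profile SampleInsecureSolo -outputBlock genesis.block
```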

The location of this block can be set using the `ORDERER_GENERAL_GENESISFILE` environment variable. As is the case with all the configuration paths for Fabric binaries, this location is relative to the path set via the `FABRIC_CFG_PATH` environment variable.
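
For example, assuming the configuration files live in `/etc/hyperledger/fabric` (an illustrative path) and the genesis block generated above was placed there:

```sh
# Directory holding orderer.yaml and the other configuration files
export FABRIC_CFG_PATH=/etc/hyperledger/fabric
# Genesis block location, relative to FABRIC_CFG_PATH
export ORDERER_GENERAL_GENESISFILE=genesis.block
```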

## Ledger types

Because the ordering service must allow clients to seek within the ordered batch stream, orderers need a backing ledger, where they maintain a local copy of past batches. Not all ledgers are crash fault tolerant, so care should be taken when selecting a ledger for an application. Because the orderer ledger interface is abstracted, the ledger type for a particular orderer may be selected at runtime. The following options are available:

* File ledger (production): The file-based ledger stores blocks directly on the file system. The block locations on disk are 'indexed' in a lightweight LevelDB database by number so that clients can efficiently retrieve a block by number. This is the default, and the suggested option for production deployments.

* RAM ledger (testing): The RAM ledger implementation is a simple, development-oriented ledger which stores batches purely in memory, with a configurable history size for retention. This ledger is not crash fault tolerant; restarting the process will reset the ledger to the genesis block.

* JSON ledger (testing): The JSON ledger implementation is a simple, development-oriented ledger which stores batches as JSON-encoded files on the filesystem. This is intended to make inspecting the ledger easy and to allow for crash fault tolerance. This ledger is not intended to be performant, but is intended to be simple and easy to deploy and understand.

### Choosing a ledger type

The ledger type can be set via the `ORDERER_GENERAL_LEDGERTYPE` environment variable before executing the `orderer` binary. Acceptable values are "file" (default), "ram", and "json".
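
For example, to launch an ordering service node backed by the JSON ledger (assuming the `orderer` binary has been built in the current directory, as described in the next section):

```sh
# Override the default ("file") ledger type for this run
ORDERER_GENERAL_LEDGERTYPE=json ./orderer
```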

## Experimenting with the orderer service

To experiment with the orderer service you may build the orderer binary by simply typing `go build` in the `hyperledger/fabric/orderer` directory. You may then invoke the orderer binary with no parameters, or you can override the bind address, port, and backing ledger by setting the environment variables `ORDERER_GENERAL_LISTENADDRESS`, `ORDERER_GENERAL_LISTENPORT`, and `ORDERER_GENERAL_LEDGERTYPE` respectively.
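
A minimal sketch, assuming the usual `GOPATH` workspace layout (the address shown is illustrative; 7050 is the conventional orderer port):

```sh
cd $GOPATH/src/github.com/hyperledger/fabric/orderer
go build

# Run with the defaults from the configuration file ...
./orderer

# ... or override the bind address, port, and backing ledger
ORDERER_GENERAL_LISTENADDRESS=127.0.0.1 \
ORDERER_GENERAL_LISTENPORT=7050 \
ORDERER_GENERAL_LEDGERTYPE=ram \
./orderer
```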

There are sample clients in the `fabric/orderer/sample_clients` directory.

* The `broadcast_timestamp` client sends a message containing the timestamp to the `Broadcast` service.

* The `deliver_stdout` client prints received batches to stdout from the `Deliver` interface.

These may both be built simply by typing `go build` in their respective directories. Note that neither of these clients supports config (so editing the source manually to adjust address and port is required), or signing (so they can only work against channels where no ACL is enforced).
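
For example, with an ordering service node already running on its default address, the clients can be built and run as follows (the binary names assume the `go build` default of naming the output after its directory):

```sh
cd sample_clients/broadcast_timestamp
go build
./broadcast_timestamp   # sends a timestamped message via Broadcast

cd ../deliver_stdout
go build
./deliver_stdout        # prints batches received via Deliver to stdout
```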

## Profiling

Profiling the ordering service is possible through a standard HTTP interface documented [here](https://golang.org/pkg/net/http/pprof). The profiling service can be configured using the **orderer.yaml** file, or through environment variables. To enable profiling set `ORDERER_GENERAL_PROFILE_ENABLED=true`, and optionally set `ORDERER_GENERAL_PROFILE_ADDRESS` to the desired network address for the profiling service. The default address is `0.0.0.0:6060` as in the Golang documentation.
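
For example, the following starts the orderer with profiling enabled and then collects a 30-second CPU profile using the standard Go tooling (the address shown is the default):

```sh
ORDERER_GENERAL_PROFILE_ENABLED=true \
ORDERER_GENERAL_PROFILE_ADDRESS=0.0.0.0:6060 \
./orderer

# In another terminal: grab a 30-second CPU profile
go tool pprof ./orderer http://localhost:6060/debug/pprof/profile
```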

Note that failures of the profiling service, either at startup or anytime during the run, will cause the overall orderer service to fail. Therefore it is currently not recommended to enable profiling in production settings.
