Technical Details 

dgomezferro edited this page · 10 revisions

Architecture

(Diagram: Omid's architecture)

The diagram above shows Omid's architecture, which is composed of a centralized server (the Status Oracle), a set of nodes acting as a Write Ahead Log (WAL), and the client, which communicates with both the Status Oracle and the HBase cluster. We use BookKeeper to replicate the WAL efficiently across multiple remote nodes (Bookies). Note that the Status Oracle and the HBase cluster know nothing about each other.

Code Overview

The project is organized into two main packages, com.yahoo.omid.tso and com.yahoo.omid.client, corresponding to the Status Oracle and the HBase client API respectively. The server's main class is TSOHandler, which, as the name implies, handles all client requests. The shared buffer described in the next section is implemented by TSOSharedMessageBuffer and TSOBuffer.

The client side of the communication with the server is implemented in TSOClient.

Communication

There are two distinct communication channels between the clients and the Status Oracle. The first carries the usual requests and responses the client initiates, such as a Timestamp Request or a Commit Response. Some of the responses sent through this channel must be delayed until the relevant information has been logged to the WAL, so each such response is appended to a queue of messages that are sent once the Status Oracle receives a confirmation of persistence from BookKeeper.
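The deferred-response pattern described above can be sketched as follows. This is a hypothetical illustration, not Omid's actual implementation (the real logic lives in TSOHandler and uses BookKeeper callbacks); the class and method names here are invented for clarity.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch: responses that depend on WAL-persisted state are
// queued, and only released once BookKeeper acknowledges persistence.
class DeferredResponseQueue {
    private final Queue<String> pending = new ArrayDeque<>();

    // Called when a response cannot be sent yet because the information
    // it reports has not been confirmed as logged to the WAL.
    void defer(String response) {
        pending.add(response);
    }

    // Called when BookKeeper confirms persistence up to this point;
    // returns every queued response that can now be sent to clients.
    List<String> onPersisted() {
        List<String> ready = new ArrayList<>(pending);
        pending.clear();
        return ready;
    }
}
```

The point of the queue is ordering: clients only ever observe commit responses whose outcome is already durable in the WAL, so a Status Oracle crash cannot contradict a response a client has already seen.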

The second channel is used to replicate state to the clients. For this purpose, the server keeps a buffer shared across all clients. Whenever new information has to be replicated, it is appended to the buffer right away; the server never waits for a confirmation from BookKeeper. When a client asks for a new transaction, the server piggybacks all of that client's pending information from the shared buffer onto the timestamp response message. This way clients always have an up-to-date view of the committed transactions, up to the transaction's start timestamp.
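The piggybacking scheme can be sketched as a single append-only buffer plus a per-client cursor recording how much of the buffer each client has already received. Again, this is an assumed simplification for illustration (the real code is TSOSharedMessageBuffer and TSOBuffer); all names below are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: one commit buffer shared by all clients, with a
// cursor per client marking the last position that client has seen.
class SharedCommitBuffer {
    private final List<String> commits = new ArrayList<>();
    private final Map<String, Integer> cursors = new HashMap<>();

    // Appended immediately when a transaction commits; never waits
    // for BookKeeper confirmation.
    void append(String commitRecord) {
        commits.add(commitRecord);
    }

    // On a timestamp request, return everything this client has not yet
    // seen and advance its cursor; this delta rides on the response.
    List<String> pendingFor(String clientId) {
        int from = cursors.getOrDefault(clientId, 0);
        List<String> delta = new ArrayList<>(commits.subList(from, commits.size()));
        cursors.put(clientId, commits.size());
        return delta;
    }
}
```

Because the delta is bounded by the client's own cursor, each client receives every commit record exactly once, and a slow client simply gets a larger delta on its next timestamp request.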
