HBase High Update Throughput

Released under Apache License 2.0.

Mailing List:
To participate in the discussion, join the group at

HBaseHUT stands for High Update Throughput for HBase. It was inspired by
discussions on the HBase mailing lists around the problem of performing a
Get and a Put for each update operation, which hurts write throughput
dramatically. Another force behind the approach used in HBaseHUT was recent
activity in Coprocessors (CPs) development. Although the use of CPs is very
limited in the current implementation (see the cp package), HBaseHUT is
designed with broader use of CPs in mind: they add flexibility by enabling
alternatives to MapReduce for data processing and by allowing the logic to
be integrated seamlessly at the places where the work can be performed most
efficiently.

The idea behind HBaseHUT is:
* Don't update existing data on each Put (and hence don't perform a Get
  operation for each Put operation). All Puts are plain Puts, with the
  corresponding pure-insert write performance.
* Defer processing of updates to a scheduled job (not necessarily a
  MapReduce job), or perform updates on an as-needed basis.
* Serve updated data in an "online" manner: the user always gets the
  updated record immediately after new data was Put, i.e., users "see"
  their updates immediately after writing the data.
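The idea above can be illustrated with a small, self-contained sketch. It
uses an in-memory sorted map as a stand-in for an HBase table (no cluster
needed), and the class and method names are purely illustrative, not
HBaseHUT's actual API: writes are plain inserts under unique keys, and the
merge happens at read time, so readers always see the up-to-date value even
before any compaction job has run.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative merge-on-read sketch; a TreeMap plays the role of a
// sorted HBase table. Names here are assumptions for the example only.
public class MergeOnReadSketch {
    // Each write gets a unique key suffix (a sequence number, analogous
    // to a timestamp), so writes never overwrite each other: every
    // update is a plain, pure-insert Put.
    private final TreeMap<String, Long> table = new TreeMap<>();
    private long seq = 0;

    // Write path: no Get before the Put -- just append a new record.
    public void put(String row, long value) {
        table.put(row + "#" + (seq++), value);
    }

    // Read path: merge all records for the row on the fly. The merge
    // function here (summing a counter) is just an example.
    public long get(String row) {
        long sum = 0;
        // '#' < '$' in ASCII, so this subMap covers all "row#..." keys.
        for (Map.Entry<String, Long> e :
                table.subMap(row + "#", row + "$").entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        MergeOnReadSketch t = new MergeOnReadSketch();
        t.put("user1", 1);
        t.put("user1", 5); // second update: still a plain insert
        t.put("user2", 3);
        System.out.println(t.get("user1")); // merged on read
        System.out.println(t.get("user2"));
    }
}
```

A periodic compaction job would simply replace the accumulated records for
a row with one merged record, which is what HBaseHUT's deferred processing
step amounts to.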

In addition to enabling real-time data processing where it wasn't possible
before (where batch processing was used due to update-throughput
limitations), HBaseHUT also adds a major feature: the ability to roll back
changes.

For more information please refer to the github project wiki:

For a clear introductory post with a good HBaseHUT use-case read/watch:

Build Notes:
Note: unit tests take some time to execute (up to several minutes); to skip
their execution use -Dmaven.test.skip=true.

The latest stable version can be linked from your Maven project with:

      <id>sonatype release</id>


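The fragment above belongs in a Maven repository definition. A fuller
sketch of the pom.xml additions might look like the following; the
repository URL is the standard Sonatype releases repository, and the
groupId/artifactId/version below are placeholders, not confirmed
coordinates (check the project wiki for the actual values):

```xml
<repositories>
  <repository>
    <id>sonatype release</id>
    <url>https://oss.sonatype.org/content/repositories/releases</url>
  </repository>
</repositories>

<dependencies>
  <!-- Placeholder coordinates; see the project wiki for actual values -->
  <dependency>
    <groupId>com.sematext.hbasehut</groupId>
    <artifactId>hbasehut</artifactId>
    <version>X.Y.Z</version>
  </dependency>
</dependencies>
```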

For running MapReduce jobs on hadoop-2.0+ (which is part of CDH4.1+) use:


Alex Baranau