H2O scales statistics, machine learning, and math over Big Data.
H2O uses familiar interfaces like R, Python, Scala, the Flow notebook graphical interface, Excel, and JSON so that Big Data enthusiasts and experts can explore, munge, model, and score datasets using a range of algorithms, including advanced ones like Deep Learning. H2O is extensible, so developers can add data transformations and model algorithms of their choice and access them through all of those clients.
Data collection is easy. Decision making is hard. H2O makes it fast and easy to derive insights from your data through faster and better predictive modeling. H2O allows online scoring and modeling in a single platform.
- Downloading H2O-3
- Open Source Resources
- Using H2O-3 Code Artifacts (libraries)
- Building H2O-3
- Launching H2O after Building
- Building H2O on Hadoop
- Sparkling Water
- Documentation
- Community / Advisors / Investors
While most of this README is written for developers who do their own builds, most H2O users just download and use a pre-built version. If that's you, just follow these steps:
- Point your browser to http://h2o.ai
- Click on Download
- Scroll down to find the section for H2O-3
- Click the version you want (generally the latest numbered release)
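The download page includes instructions for getting started. As a minimal sketch, the zip you download unpacks into a directory containing h2o.jar (the file and directory names below are placeholders for the release you chose):

unzip h2o-x.y.z.nnnn.zip
cd h2o-x.y.z.nnnn
java -jar h2o.jar
# Point browser to http://localhost:54321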
Most people interact with three primary open source resources: GitHub (which you've already found), JIRA (for issue tracking), and h2ostream (a community discussion forum).
You can browse and create new issues in our open source JIRA: http://jira.h2o.ai
- You can browse and search for issues without logging in to JIRA:
  - Click the Issues menu
  - Click Search for issues
- To create an issue (either a bug or a feature request), please create an account first:
  - Click the Log In button on the top right of the screen
  - Click Create an account near the bottom of the login box
  - Once you have created an account and logged in, use the Create button on the menu to create an issue
  - Create H2O-3 issues in the PUBDEV project

(Note: There is only one issue tracking system for the project. GitHub issues are not enabled; you must use JIRA.)
- GitHub
- JIRA - file issues here (PUBDEV contains issues for the current H2O-3 project)
- h2ostream community forum - ask your questions here
- Documentation
- Bleeding edge nightly build page: http://s3.amazonaws.com/h2o-release/h2o-3/master/latest.html
- FAQ: http://h2o.ai/product/faq/
- Download (pre-built packages)
- Jenkins
- Website
- Follow us on Twitter, @h2oai
Every nightly build publishes R, Python, Java, and Scala artifacts to a build-specific repository. In particular, you can find Java artifacts in the maven/repo directory.
Here is an example snippet of a gradle build file using h2o-3 as a dependency. Replace x, y, z, and nnnn with valid numbers.
// h2o-3 dependency information
def h2oBranch = 'master'
def h2oBuildNumber = 'nnnn'
def h2oProjectVersion = "x.y.z.${h2oBuildNumber}"
repositories {
  // h2o-3 dependencies
  maven {
    url "https://s3.amazonaws.com/h2o-release/h2o-3/${h2oBranch}/${h2oBuildNumber}/maven/repo/"
  }
}

dependencies {
  compile "ai.h2o:h2o-core:${h2oProjectVersion}"
  compile "ai.h2o:h2o-algos:${h2oProjectVersion}"
  compile "ai.h2o:h2o-web:${h2oProjectVersion}"
  compile "ai.h2o:h2o-app:${h2oProjectVersion}"
}
Refer to the latest H2O-3 bleeding edge nightly build page for information about installing nightly build artifacts.
Refer to the h2o-droplets GitHub repository for a working example of how to use Java artifacts with gradle.
Note: Stable H2O-3 artifacts are periodically published to Maven Central but may substantially lag behind H2O-3 bleeding edge nightly builds.
Getting started with H2O development requires JDK 1.7, Node.js, and Gradle. We use the Gradle wrapper (called gradlew) to ensure that up-to-date local versions of Gradle and other dependencies are installed in your development directory.
To build H2O from the repository, perform the following steps.
# Build H2O
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew build -x test
# Start H2O
java -jar build/h2o.jar
# Point browser to http://localhost:54321
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew syncSmalldata
./gradlew build
Note: Running tests starts five test JVMs that form an H2O cluster and requires at least 8GB of RAM (preferably 16GB of RAM).
git pull
./gradlew syncSmalldata
./gradlew clean
./gradlew build
A ./gradlew clean is recommended after each git pull.
Skip tests by adding -x test at the end of the gradle build command line. Tests typically run for 7-10 minutes on a MacBook Pro laptop with 4 CPUs (8 hyperthreads) and 16 GB of RAM.
Syncing smalldata is not required after each pull, but if tests fail due to missing data files, then try ./gradlew syncSmalldata as the first troubleshooting step. Syncing smalldata downloads data files from AWS S3 into the smalldata directory in your workspace. The sync is incremental. Do not check in these files; the smalldata directory is in .gitignore. If you do not run any tests, you do not need the smalldata directory.
Install the required Python packages (prefix each pip command with sudo if necessary):
pip install grip
pip install tabulate
pip install wheel
pip install scikit-learn
Python tests require:
pip install scikit-learn
pip install numpy
pip install scipy
pip install pandas
pip install statsmodels
pip install patsy
Step 1: Download and install WinPython.
From the command line, validate that python is using the newly installed package by using which python (or sudo which python). Update the Environment variable with the WinPython path.
pip install grip
pip install tabulate
pip install wheel
Install Java 1.7 and add the appropriate directory C:\Program Files\Java\jdk1.7.0_65\bin with java.exe to PATH in Environment Variables. To make sure the command prompt is detecting the correct Java version, run:
javac -version
The CLASSPATH variable also needs to be set to the lib subfolder of the JDK:
CLASSPATH=/<path>/<to>/<jdk>/lib
Install Node.js and add the installed directory C:\Program Files\nodejs, which must include node.exe and npm.cmd, to PATH if not already prepended.
To install these packages from within an R session, enter:
R> install.packages("RCurl")
R> install.packages("jsonlite")
R> install.packages("statmod")
R> install.packages(c("devtools", "roxygen2", "testthat"))
Install R and add the preferred bin\i386 or bin\x64 directory to your PATH.
Note: Acceptable versions of R are >= 2.13 and <= 3.0.0, or >= 3.1.1.
To manually install packages, download the releases of the following R packages:
cd Downloads
R CMD INSTALL bitops_x.x-x.zip
R CMD INSTALL RCurl_x.xx-x.x.zip
R CMD INSTALL jsonlite_x.x.xx.zip
R CMD INSTALL statmod_x.x.xx.zip
R CMD INSTALL Rcpp_x.xx.x.zip
R CMD INSTALL digest_x.x.x.zip
R CMD INSTALL testthat_x.x.x.zip
R CMD INSTALL stringr_x.x.x.zip
R CMD INSTALL roxygen2_x.x.x.zip
R CMD INSTALL devtools_x.x.x.zip
Finally, install Rtools, which is a collection of command line tools to facilitate R development on Windows.
NOTE: During Rtools installation, do not install Cygwin.dll.
Step 6. Install Cygwin
NOTE: During installation of Cygwin, deselect the Python packages to avoid a conflict with the Python.org package.
If Cygwin is already installed, remove the Python packages or ensure that Native Python is before Cygwin in the PATH variable.
Step 8. Git Clone h2o-3
If you don't already have a Git client, please install one. The default one can be found here: http://git-scm.com/downloads. Make sure that command prompt support is enabled during the installation.
Download and update the h2o-3 source code:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew.bat build
If you encounter errors, run again with --stacktrace for more instructions on missing dependencies.
If you don't have Homebrew, we recommend installing it. It makes package management for OS X easy.
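If you need to install it, the Homebrew homepage (http://brew.sh) provides a one-line install command; at the time of writing it looks roughly like the following, but check that page for the current version:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"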
Install Java 1.7. To make sure the command prompt is detecting the correct Java version, run:
javac -version
Using Homebrew:
brew install node
Otherwise, install it from the Node.js website.
Install R and add the bin directory to your PATH if not already included.
Install the following R packages:
cd Downloads
R CMD INSTALL bitops_x.x-x.tgz
R CMD INSTALL RCurl_x.xx-x.x.tgz
R CMD INSTALL jsonlite_x.x.xx.tgz
R CMD INSTALL statmod_x.x.xx.tgz
R CMD INSTALL Rcpp_x.xx.x.tgz
R CMD INSTALL digest_x.x.x.tgz
R CMD INSTALL testthat_x.x.x.tgz
R CMD INSTALL stringr_x.x.x.tgz
R CMD INSTALL roxygen2_x.x.x.tgz
R CMD INSTALL devtools_x.x.x.tgz
To install these packages from within an R session:
R> install.packages("RCurl")
R> install.packages("jsonlite")
R> install.packages("statmod")
R> install.packages(c("devtools", "roxygen2", "testthat"))
Step 4. Git Clone h2o-3
OS X should already have Git installed. To download and update the h2o-3 source code:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew build
If you encounter errors, run again with --stacktrace for more instructions on missing dependencies.
sudo apt-get install npm
sudo ln -s /usr/bin/nodejs /usr/bin/node
npm install -g bower
Install Java 1.7. Installation instructions can be found here: JDK installation. To make sure the command prompt is detecting the correct Java version, run:
javac -version
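As one option on Ubuntu, the OpenJDK 7 package should work (the package name is an assumption and may vary by release; Oracle's JDK 7 is also an option):

sudo apt-get install openjdk-7-jdk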
Install R. Installation instructions can be found here: R installation. Click “Download R for Linux”. Click “ubuntu”. Follow the given instructions.
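As a minimal sketch, assuming the stock Ubuntu packages are recent enough for your needs (otherwise follow the CRAN instructions above to add the CRAN repository first):

sudo apt-get install r-base r-base-dev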
To install the required packages, follow the same instructions as for OS X above.
Step 4. Git Clone h2o-3
If you don't already have a Git client:
sudo apt-get install git
Download and update the h2o-3 source code:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew build
If you encounter errors, run again using --stacktrace for more instructions on missing dependencies.
Make sure that you are not running as root, since bower will reject such a run.
On Ubuntu 13.10, the default Node.js (v0.10.15) is sufficient, but the default npm (v1.2.18) is too old, so use a fresh install from the npm website:
sudo apt-get install nodejs
sudo ln -s /usr/bin/nodejs /usr/bin/node
wget http://npmjs.org/install.sh
sudo apt-get install curl
sudo sh install.sh
For users of IntelliJ IDEA, generate project files with:
./gradlew idea
For users of Eclipse, generate project files with:
./gradlew eclipse
To start H2O locally after building, run:
java -jar build/h2o.jar
Pre-built H2O-on-Hadoop zip files are available on the download page. Each Hadoop distribution version has a separate zip file in h2o-3.
To build H2O with Hadoop support yourself, first install Sphinx for Python:
pip install sphinx
Then start the build by entering the following from the top-level h2o-3 directory:
(export BUILD_HADOOP=1; ./gradlew build -x test)
./gradlew dist
This will create a directory called 'target' and generate zip files there. Note that BUILD_HADOOP is the default behavior when the username is jenkins (refer to settings.gradle); otherwise you have to request it, as shown above.
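The zip file for each Hadoop distribution bundles an h2odriver jar that launches H2O as a MapReduce job on the cluster. A hedged sketch of a typical launch follows; the node count, mapper memory, and HDFS output directory are illustrative, so see the H2O on Hadoop documentation for the full set of options:

hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -output hdfsOutputDirName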
In the h2o-hadoop directory, each Hadoop version has a build directory for the driver and an assembly directory for the fatjar.
You need to:
- Add a new driver directory and assembly directory (each with a build.gradle file) in h2o-hadoop
- Add these new projects to h2o-3/settings.gradle
- Add the new Hadoop version to HADOOP_VERSIONS in make-dist.sh
- Add the new Hadoop version to the list in h2o-dist/buildinfo.json
These are the required steps to debug HDFS in IDEA as a standalone H2O process.
Debugging H2O on Hadoop as a hadoop jar MapReduce job is difficult. However, what you can do relatively easily is tweak the gradle settings for the project so that H2OApp has HDFS as a dependency. Here are the steps:
- Make the following changes to the gradle build files below:
  - Change the hadoop-client version in h2o-persist-hdfs to the desired version
  - Add h2o-persist-hdfs as a dependency to h2o-app
- Close IDEA
./gradlew cleanIdea
./gradlew idea
- Re-open IDEA
- Run or debug H2OApp, and you will now be able to read from HDFS inside the IDE debugger
h2o-persist-hdfs is normally only a dependency of the assembly modules, since those are not used by any downstream modules. We want the final module to define its own version of HDFS, if any is desired.
Note: this example is for MapR 4, which requires the additional org.json dependency to work properly.
$ git diff
diff --git a/h2o-app/build.gradle b/h2o-app/build.gradle
index af3b929..097af85 100644
--- a/h2o-app/build.gradle
+++ b/h2o-app/build.gradle
@@ -8,5 +8,6 @@ dependencies {
compile project(":h2o-algos")
compile project(":h2o-core")
compile project(":h2o-genmodel")
+ compile project(":h2o-persist-hdfs")
}
diff --git a/h2o-persist-hdfs/build.gradle b/h2o-persist-hdfs/build.gradle
index 41b96b2..6368ea9 100644
--- a/h2o-persist-hdfs/build.gradle
+++ b/h2o-persist-hdfs/build.gradle
@@ -2,5 +2,6 @@ description = "H2O Persist HDFS"
dependencies {
compile project(":h2o-core")
- compile("org.apache.hadoop:hadoop-client:2.0.0-cdh4.3.0")
+ compile("org.apache.hadoop:hadoop-client:2.4.1-mapr-1408")
+ compile("org.json:org.json:chargebee-1.0")
}
Sparkling Water combines two open-source technologies: Apache Spark and H2O, our machine learning engine. It makes H2O’s library of advanced algorithms, including Deep Learning, GLM, GBM, K-Means, and Distributed Random Forest, accessible from Spark workflows. Spark users can select the best features from either platform to meet their machine learning needs. Users can combine Spark's RDD API and Spark MLlib with H2O’s machine learning algorithms, or use H2O independently of Spark for the model building process and post-process the results in Spark.
Sparkling Water Resources:
- Download page for pre-built packages (Scroll down for Sparkling Water)
- Sparkling Water GitHub repository
- README
- Developer documentation
To generate the REST API documentation, use the following commands:
cd ~/h2o-3
cd py
python ./generate_rest_api_docs.py # to generate Markdown only
python ./generate_rest_api_docs.py --generate_html --github_user GITHUB_USER --github_password GITHUB_PASSWORD # to generate Markdown and HTML
The default location for the generated documentation is build/docs/REST.
If the build fails, try gradlew clean, then git clean -f.
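For example, from the top-level h2o-3 directory (note that git clean -f removes untracked files from your workspace):

./gradlew clean
git clean -f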
Documentation for each bleeding edge nightly build is available on the nightly build page.
We will breathe and sustain a vibrant community focused on taking a software engineering approach to data science and empowering everyone interested in data to hack data using math and algorithms. Join us on Google Groups at h2ostream, and feel free to file issues directly in our JIRA.
Team & Committers
SriSatish Ambati
Cliff Click
Tom Kraljevic
Tomas Nykodym
Michal Malohlava
Kevin Normoyle
Spencer Aiello
Anqi Fu
Nidhi Mehta
Arno Candel
Josephine Wang
Amy Wang
Max Schloemer
Ray Peck
Prithvi Prabhu
Brandon Hill
Jeff Gambera
Ariel Rao
Viraj Parmar
Kendall Harris
Anand Avati
Jessica Lanford
Alex Tellez
Allison Washburn
Amy Wang
Erik Eckstrand
Neeraja Madabhushi
Sebastian Vidrio
Ben Sabrin
Matt Dowle
Mark Landry
Erin LeDell
Oleg Rogynskyy
Nick Martin
Nancy Jordan
Nishant Kalonia
Nadine Hussami
Jeff Cramer
Stacie Spreitzer
Vinod Iyengar
Charlene Windom
Parag Sanghavi
Scientific Advisory Council
Stephen Boyd
Rob Tibshirani
Trevor Hastie
Systems, Data, FileSystems and Hadoop
Doug Lea
Chris Pouliot
Dhruba Borthakur
Jishnu Bhattacharjee, Nexus Venture Partners
Anand Babu Periasamy
Anand Rajaraman
Ash Bhardwaj
Rakesh Mathur
Michael Marks
Egbert Bierman
Rajesh Ambati