This repository has been archived by the owner on Jul 18, 2018. It is now read-only.

Commit: First commit
Tong Zhang committed Aug 31, 2016
0 parents commit 9735020
Showing 45 changed files with 10,089 additions and 0 deletions.
7 changes: 7 additions & 0 deletions CHANGES
@@ -0,0 +1,7 @@
---
0.2 (Aug 2016)
This is the first release. It supports only binary classification and regression, with significant simplifications from the original RGF algorithm for the sake of training speed. Additional functionality will be supported in future releases.




21 changes: 21 additions & 0 deletions CMakeLists.txt
@@ -0,0 +1,21 @@
# CMakeLists file
#
cmake_minimum_required (VERSION 2.8.0)

project (FastRGF)

set(CMAKE_CXX_FLAGS "-O3 -std=c++11")

# you may need to use the following for g++-4.8
#set(CMAKE_CXX_FLAGS "-O3 -std=c++11 -pthread")

#set(CMAKE_CXX_FLAGS "-g -std=c++11 -Wall")

include_directories(include)


add_subdirectory(src/base)
add_subdirectory(src/forest)
add_subdirectory(src/exe)


22 changes: 22 additions & 0 deletions LICENSE
@@ -0,0 +1,22 @@

The MIT License (MIT)
Copyright (c) 2016 Baidu, Inc. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

71 changes: 71 additions & 0 deletions README.md
@@ -0,0 +1,71 @@
----------
# FastRGF
### Multi-core implementation of Regularized Greedy Forest [RGF]

### Version 0.2 (August 2016) by Tong Zhang
---------
#### 1. Introduction

This software package provides a multi-core implementation of a simplified Regularized Greedy Forest (RGF) described in **[RGF]**. Please cite the paper if you find the software useful.

RGF is a machine learning method for building decision forests that has been used to win some Kaggle competitions. In our experience it works better than *gradient boosting* on many relatively large datasets.

The implementation employs the following concepts described in the **[RGF]** paper:

- tree node regularization
- fully-corrective update
- greedy node expansion with a trade-off between leaf-node splitting for the current tree and root splitting for a new tree

However, various simplifications are made to speed up training. Therefore, unlike the original RGF program (see <http://stat.rutgers.edu/home/tzhang/software/rgf/>), this software does not reproduce the results in the paper.

The greedy tree-node optimization employs a second-order Newton approximation for general loss functions. For the logistic regression loss, which works especially well for many binary classification problems, this approach was considered in **[PL]**; for general loss functions, the second-order approximation was considered in **[ZCS]**.
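
As a rough illustration of this idea (a sketch only, not the package's actual code; in particular, how the L2 regularizer enters is an assumption here), the second-order approximation of the logistic loss yields a closed-form Newton step for a node value:

    // Sketch only: one Newton step for a node value under logistic loss,
    // assuming labels y_i in {0,1} and current model outputs f(x_i).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    double newton_node_value(const std::vector<double>& f,  // current predictions f(x_i)
                             const std::vector<double>& y,  // labels y_i in {0,1}
                             double lamL2) {                // L2 penalty (cf. dtree.lamL2)
        double g_sum = 0.0, h_sum = 0.0;
        for (std::size_t i = 0; i < f.size(); ++i) {
            double p = 1.0 / (1.0 + std::exp(-f[i]));  // predicted probability
            g_sum += p - y[i];                         // first derivative of the loss
            h_sum += p * (1.0 - p);                    // second derivative
        }
        // minimize sum_i (g_i*w + 0.5*h_i*w^2) + 0.5*lamL2*w^2  =>  w = -G / (H + lamL2)
        return -g_sum / (h_sum + lamL2);
    }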

#### 2. Installation
Please see the file [CHANGES](CHANGES) for version information.
The software is written in C++11 and has been tested under Linux and macOS. It requires g++ version 4.8 or above and cmake version 2.8 or above.

If you use *g++-4.8*, after running the examples, you may get error messages similar to the following:

    terminate called after throwing an instance of 'std::system_error'
      what(): Enable multithreading to use std::thread: Operation not permitted

If this occurs, you need to add the **-pthread** flag to the variable CMAKE_CXX_FLAGS in [CMakeLists.txt](CMakeLists.txt) in order to enable multi-threading. This problem seems to be a bug in the g++ compiler, and there may be variations of it specific to your system that require different fixes.
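
For example, the flags line in [CMakeLists.txt](CMakeLists.txt) (already present there as a comment) becomes:

    set(CMAKE_CXX_FLAGS "-O3 -std=c++11 -pthread")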

To install the binaries, unpack the software into a directory.

* The source files are in the subdirectories include/ and src/.
* The executables are under the subdirectory bin/.
* The examples are under the subdirectory examples/.

To create the executables, do the following:

    cd build/
    cmake ..
    make
    make install

The following executables will be installed under the subdirectory bin/.

* forest_train: train an RGF model and save it
* forest_predict: apply a trained model to test data

You may use the option -h to show command-line options (options can also be provided in a configuration file).
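
For example, from the directory where the software was unpacked:

    bin/forest_train -h      # show the training options
    bin/forest_predict -h    # show the prediction options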

#### 3. Examples
Go to the subdirectory examples/ and follow the instructions in [README.md](examples/README.md) (it also contains some tips for parameter tuning).

#### 4. Contact
Tong Zhang

#### 5. Copyright
The software is distributed under the MIT license. Please read the file [LICENSE](LICENSE).

#### 6. References

**[RGF]** Rie Johnson and Tong Zhang. [Learning Nonlinear Functions Using Regularized Greedy Forest](http://arxiv.org/abs/1109.0887), *IEEE Trans. on Pattern Analysis and Machine Intelligence, 36:942-954*, 2014.

**[PL]** Ping Li. Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost, *UAI* 2010.

**[ZCS]** Zhaohui Zheng, Hongyuan Zha, Tong Zhang, Olivier Chapelle, Keke Chen, Gordon Sun. A general boosting method and its application to learning ranking functions for web search, *NIPS* 2007.

4 changes: 4 additions & 0 deletions bin/.gitignore
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
4 changes: 4 additions & 0 deletions build/.gitignore
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
34 changes: 34 additions & 0 deletions examples/README.md
@@ -0,0 +1,34 @@
### Examples
---
* ex1: This is a binary classification problem, in libsvm's sparse feature format.
Use the *shell script* [run.sh](ex1/run.sh) to perform training/test.
The dataset is from <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#madelon>.


* ex2: This is a regression problem, in dense feature format. Use the *shell script* [run.sh](ex2/run.sh) to perform training/test.
The dataset is from <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html#housing>.


Note that for these small examples, running with multiple threads may be slower than running with a single thread due to the threading overhead. However, for large datasets, one can observe an almost linear speed-up.

The program can directly handle high-dimensional sparse features in the libsvm format, as in ex1. This is the recommended format when the dataset is relatively large (although some other formats are supported).
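
For reference, each line of a libsvm-format file contains a label followed by index:value pairs for the nonzero features; the lines below are made-up examples for illustration only:

    +1 3:0.5 12:1.0 2047:0.25
    -1 7:2.0 12:1.0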

---
### Tips for Parameter Tuning

There are multiple training parameters that can affect performance. The following are the more important ones:

* **dtree.loss**: default is LS, but for binary classification, LOGISTIC often works better.
* **forest.ntrees**: typical range is [100,10000], and a typical value is 1000.
* **dtree.lamL2**: use a relatively large value such as 1000 or 10000. The larger dtree.lamL2 is, the larger forest.ntrees you need to use: the resulting accuracy is often better, at the cost of longer training time.
* **dtree.lamL1**: try values in [0,1000], and a large value induces sparsity.
* **dtree.max_level**, **dtree.max_nodes**, and **dtree.new_tree_gain_ratio**: these parameters control the tree depth and size (and when to start a new tree). One can try different values (such as dtree.max_level=4, dtree.max_nodes=10, or dtree.new_tree_gain_ratio=0.5) to fine-tune performance.

You may also modify the discretization options below (a sample configuration combining these parameters follows this list):

* **discretize.dense.max_buckets**: try values in the range [10, 65000].
* **discretize.sparse.max_buckets**: try values in the range [10, 250]. If you want to try a larger value, up to 65000, you need to edit [../include/header.h](../include/header.h) and replace
"*using disc_sparse_value_t=unsigned char;*"
with "*using disc_sparse_value_t=unsigned short;*". However, this increases the memory usage.
* **discretize.sparse.max_features**: you may try a different value in [1000,10000000].
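
Putting the parameters above together, a possible starting configuration might look like the following. The values are only illustrative choices within the ranges suggested above, and the exact key=value syntax expected on the command line or in a configuration file is an assumption here; check the -h output of forest_train for the actual form.

    dtree.loss=LOGISTIC
    forest.ntrees=1000
    dtree.lamL2=1000
    dtree.lamL1=0
    dtree.new_tree_gain_ratio=0.5
    discretize.sparse.max_buckets=250
    discretize.sparse.max_features=100000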
