

eXtreme Gradient Boosting


Documentation | Resources | Installation | Release Notes | RoadMap

XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way. The same code runs on major distributed environments (Hadoop, SGE, MPI) and can solve problems with billions of examples and beyond.
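To illustrate the general idea behind the Gradient Boosting framework mentioned above, here is a minimal pure-Python sketch that boosts depth-1 regression stumps against squared-error residuals. This is only a conceptual illustration, not XGBoost's actual implementation: XGBoost additionally uses second-order gradients, regularized objectives, sparsity-aware splits, and parallel tree construction. The function names (`fit_stump`, `boost`) are hypothetical and chosen for this sketch.

```python
def fit_stump(x, residuals):
    """Find the threshold on a single feature that minimizes squared
    error, returning (threshold, left_value, right_value)."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lv = sum(left) / len(left)
        rv = sum(right) / len(right)
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    return best[1:]

def boost(x, y, n_rounds=50, lr=0.1):
    """Fit an ensemble of stumps, each trained on the residuals of the
    current ensemble; return a prediction function."""
    base = sum(y) / len(y)          # start from the mean prediction
    stumps = []
    pred = [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, residuals)
        stumps.append((t, lv, rv))
        # shrink each stump's contribution by the learning rate
        pred = [p + lr * (lv if xi <= t else rv)
                for xi, p in zip(x, pred)]

    def predict(xi):
        out = base
        for t, lv, rv in stumps:
            out += lr * (lv if xi <= t else rv)
        return out
    return predict
```

Each round fits a new weak learner to the shortfall of the current ensemble, so the training error shrinks geometrically; XGBoost applies the same additive principle at scale with far stronger trees and objectives.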

What's New

Ask a Question

Help to Make XGBoost Better

XGBoost has been developed and used by a group of active community members. Your help is very valuable in making the package better for everyone.


© Contributors, 2016. Licensed under the Apache-2.0 license.