
Highlights

The Kaggle Higgs Boson challenge ended recently, and xgboost was used by many competitors. This list highlights the xgboost solutions shared by participants.

Guide for Kaggle Higgs Challenge

This folder gives an example of how to use the XGBoost Python module to run the Kaggle Higgs competition.

The script achieves an AMS score of about 3.600 on the public leaderboard. To get started, follow these steps:

  1. Compile the XGBoost Python library:

     cd ../..
     make

  2. Put training.csv and test.csv in the folder './data' (you can create a symbolic link).

  3. Run ./run.sh (a minimal Python sketch of what this pipeline does follows this list).
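
As a rough illustration of what the steps above amount to, the sketch below loads the Kaggle training CSV with the Python module and trains a booster. This is a minimal sketch rather than the script invoked by run.sh: the column layout (EventId, 30 features, Weight, Label), the -999.0 missing-value marker, and the parameter values are assumptions based on the Kaggle Higgs data format.

    # Minimal sketch (assumed Kaggle Higgs CSV layout: EventId, 30 features, Weight, Label).
    import numpy as np
    import xgboost as xgb

    # Load training data; skip the header row and map the label column ('s'/'b') to 1/0.
    data = np.loadtxt('./data/training.csv', delimiter=',', skiprows=1,
                      converters={32: lambda v: int(v in (b's', 's'))})
    X = data[:, 1:31]      # the 30 physics features (EventId in column 0 is dropped)
    weight = data[:, 31]   # per-event weights provided by the competition
    y = data[:, 32]        # 1 = signal, 0 = background

    # -999.0 marks missing values in the Higgs data.
    dtrain = xgb.DMatrix(X, label=y, weight=weight, missing=-999.0)

    # Illustrative parameters; the actual scripts may use different settings.
    param = {'objective': 'binary:logitraw', 'eta': 0.1, 'max_depth': 6,
             'eval_metric': 'auc',
             'scale_pos_weight': float(weight[y == 0].sum() / weight[y == 1].sum())}

    bst = xgb.train(param, dtrain, num_boost_round=120)
    bst.save_model('higgs.model')

Predictions for test.csv can then be produced with bst.predict on a DMatrix built from the test features; run.sh ties the training and prediction steps together.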

Speed

speedtest.py compares xgboost's speed on this dataset with sklearn's GBM (GradientBoostingClassifier).
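
The sketch below shows the kind of head-to-head timing such a comparison involves. It is an illustration rather than the contents of speedtest.py; the data loading and parameter choices are assumptions.

    # Illustrative timing comparison (not the actual speedtest.py).
    import time
    import numpy as np
    import xgboost as xgb
    from sklearn.ensemble import GradientBoostingClassifier

    data = np.loadtxt('./data/training.csv', delimiter=',', skiprows=1,
                      converters={32: lambda v: int(v in (b's', 's'))})
    X, y = data[:, 1:31], data[:, 32]

    # Time xgboost with roughly comparable settings.
    start = time.time()
    xgb.train({'objective': 'binary:logistic', 'eta': 0.1, 'max_depth': 6},
              xgb.DMatrix(X, label=y, missing=-999.0), num_boost_round=120)
    print('xgboost:     %.1f s' % (time.time() - start))

    # Time sklearn's gradient boosting on the same data.
    start = time.time()
    GradientBoostingClassifier(n_estimators=120, learning_rate=0.1, max_depth=6).fit(X, y)
    print('sklearn GBM: %.1f s' % (time.time() - start))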

Using the R module

  • Alternatively, you can run the example in R using higgs-train.R and higgs-pred.R.