codecookinging/XGBoost_vs_LightGBM

Comparison of XGBoost and LightGBM (speed, accuracy and complexity)

Comparisons of XGB a8f670d (2017/11/02) and LGB 7a166fb (2017/11/01)

  • LightGBM.ipynb: Modified version of marugari's work
  • exp013 (see the setup sketch after this list)
    • model : XGB(hist_depthwise, hist_lossguide, hist_GPU, GPU), LGB
    • objective : Binary classification
    • metric : Logloss
    • dataset : make_classification
    • n_train : 0.5M, 1M, 2M, 4M
    • n_valid : n_train/4
    • n_features : 32
    • n_clusters_per_class : 8
    • n_rounds : 100
    • max_depth : 5, 10, 15
    • num_leaves : 2 ** max_depth
  • exp014
    • model : XGB(hist_depthwise, hist_lossguide, hist_GPU, GPU), LGB
    • objective : Binary classification
    • metric : Logloss
    • dataset : make_classification
    • n_train : 1, 2, 4, 8, 16, 32, 64 * 10K
    • n_valid : n_train/4
    • n_features : 256
    • n_clusters_per_class : 8
    • n_rounds : 100
    • max_depth : 5, 10
    • num_leaves : 2 ** max_depth
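
The notebooks in the repository drive these runs. As a rough illustration of what a single exp013-style configuration looks like with current library APIs, here is a minimal Python sketch. The mapping of hist_depthwise to tree_method="hist" with grow_policy="depthwise", and the n_informative=16 setting for make_classification (the informative-feature count is not recorded above), are assumptions rather than the notebooks' exact code.

```python
# Minimal sketch of one exp013-style run (assumed parameter mapping, not the
# repo's exact notebook code): binary classification, logloss, 100 rounds.
import time

import lightgbm as lgb
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss

n_train = 500_000                      # smallest exp013 setting (0.5M)
n_valid = n_train // 4                 # n_valid = n_train / 4
max_depth = 10
num_leaves = 2 ** max_depth            # num_leaves = 2 ** max_depth

# n_informative=16 is an assumption; the README only fixes n_features and
# n_clusters_per_class (make_classification needs 2**n_informative >= 2 * 8).
X, y = make_classification(n_samples=n_train + n_valid, n_features=32,
                           n_informative=16, n_clusters_per_class=8,
                           random_state=0)
X_tr, y_tr, X_va, y_va = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

# XGBoost histogram method with depth-wise growth ("hist_depthwise")
dtrain, dvalid = xgb.DMatrix(X_tr, label=y_tr), xgb.DMatrix(X_va, label=y_va)
xgb_params = {"objective": "binary:logistic", "eval_metric": "logloss",
              "tree_method": "hist", "grow_policy": "depthwise",
              "max_depth": max_depth}
t0 = time.time()
bst = xgb.train(xgb_params, dtrain, num_boost_round=100)
print(f"XGB hist_depthwise: {time.time() - t0:.1f}s  "
      f"logloss={log_loss(y_va, bst.predict(dvalid)):.5f}")

# LightGBM with the matching leaf budget
lgb_params = {"objective": "binary", "metric": "binary_logloss",
              "max_depth": max_depth, "num_leaves": num_leaves,
              "verbosity": -1}
t0 = time.time()
gbm = lgb.train(lgb_params, lgb.Dataset(X_tr, label=y_tr), num_boost_round=100)
print(f"LGB:                {time.time() - t0:.1f}s  "
      f"logloss={log_loss(y_va, gbm.predict(X_va)):.5f}")
```

The GPU variants (hist_GPU, GPU) would swap in the GPU tree methods available at the time; everything else in the run stays the same.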

The following experiments were run on older versions of XGBoost and LightGBM; a sketch of how their parameter grid maps to current parameter names follows the list.

  • exp010
    • model : XGB(CPU, EQBIN_depthwise, EQBIN_lossguide, GPU), LGB
    • objective : Binary classification
    • metric : Logloss
    • dataset : make_classification
    • n_train : 0.5M, 1M, 2M
    • n_valid : n_train/4
    • n_features : 32
    • n_rounds : 100
    • n_clusters_per_class : 8
    • max_depth : 5, 10, 15
  • exp011
    • model : XGB(EQBIN_depthwise, EQBIN_lossguide), LGB
    • objective : Binary classification
    • metric : Logloss
    • dataset : make_classification
    • n_train : 0.5M, 1M, 2M
    • n_valid : n_train/4
    • n_features : 32
    • n_clusters_per_class : 8
    • n_rounds : 200
    • max_depth : 5, 10, 15, 20
    • num_leaves : 32, 256, 1024, 4096, 16384
  • exp012
    • model : XGB(EQBIN_depthwise, EQBIN_lossguide), LGB
    • objective : Binary classification
    • metric : Logloss
    • dataset : make_classification
    • n_train : 1, 2, 4, 8, 16, 32 * 10000
    • n_valid : n_train/4
    • n_features : 256
    • n_clusters_per_class : 8
    • n_rounds : 100
    • max_depth : 5, 10, 15, 20
    • num_leaves : 32, 256, 1024, 4096
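
"EQBIN" here appears to refer to XGBoost's early histogram ("equal-bin") implementation; in current releases the closest equivalents are tree_method="hist" with grow_policy="depthwise" or "lossguide". Under that assumed mapping, the sketch below shows how an exp011-style (max_depth, num_leaves) grid pairs the two libraries and which combinations are meaningful.

```python
# Sketch of an exp011-style parameter grid (assumed mapping to current
# parameter names, not the repo's exact code).
from itertools import product

max_depths = [5, 10, 15, 20]
num_leaves_grid = [32, 256, 1024, 4096, 16384]

def xgb_depthwise(max_depth):
    # Depth-wise growth: the depth cap is the binding constraint.
    return {"objective": "binary:logistic", "eval_metric": "logloss",
            "tree_method": "hist", "grow_policy": "depthwise",
            "max_depth": max_depth}

def xgb_lossguide(num_leaves):
    # Leaf-wise ("lossguide") growth: cap the leaves, leave depth unbounded (0).
    return {"objective": "binary:logistic", "eval_metric": "logloss",
            "tree_method": "hist", "grow_policy": "lossguide",
            "max_depth": 0, "max_leaves": num_leaves}

def lgb_config(max_depth, num_leaves):
    # LightGBM always grows leaf-wise; num_leaves is its primary constraint.
    return {"objective": "binary", "metric": "binary_logloss",
            "max_depth": max_depth, "num_leaves": num_leaves}

# A tree of depth d has at most 2**d leaves, so larger leaf budgets are
# redundant for shallow depths and can be skipped in the sweep.
grid = [(d, nl) for d, nl in product(max_depths, num_leaves_grid)
        if nl <= 2 ** d]
```

The max_depth=0 plus max_leaves pairing is how XGBoost expresses a pure leaf-count limit, which is the regime LightGBM operates in by default.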
