Benchmarks for model builders from XGBoost and LightGBM models #36
Conversation
…bench into latest_stable
@PetrovKP, @Alexsandruss, @ShvetsKS
.gitignore
Outdated
@@ -11,3 +11,4 @@ __work*
# Datasets
dataset
*.csv
*.npy
Add a newline at EOF.
Done.
@@ -0,0 +1,509 @@
import argparse
Copyright?
Added.
modelbuilders/lgbm_mb.py
Outdated
             'lgbm_predict', 'lgbm_to_daal', 'daal_compute'],
             times=[t_creat_train, t_creat_test, t_train, t_lgbm_pred, t_trans, t_daal_pred],
             accuracy_type=metric_name, accuracies=[0, 0, train_metric, test_metric_xgb, 0, test_metric_daal],
             data=[X_train, X_test, X_train, X_test, X_train, X_test])
Missing newline at EOF.
Added here too.
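
For context, the 'lgbm_to_daal' and 'daal_compute' stages in the quoted call correspond to converting the trained LightGBM booster into a daal4py model and running prediction with it. Below is a minimal sketch of that flow, assuming daal4py's model-builder API (get_gbt_model_from_lightgbm, gbt_classification_prediction); the synthetic data and variable names are illustrative and not taken from the benchmark:

```python
import daal4py as d4p
import lightgbm as lgb
import numpy as np

# Small synthetic binary-classification problem, only to make the sketch runnable.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((500, 8)), rng.integers(0, 2, 500)
X_test = rng.random((100, 8))

# Train a LightGBM booster (timed as t_train in the quoted call).
booster = lgb.train({'objective': 'binary', 'verbosity': -1},
                    lgb.Dataset(X_train, label=y_train), num_boost_round=20)

# Convert the booster into a daal4py GBT model ('lgbm_to_daal' stage).
daal_model = d4p.get_gbt_model_from_lightgbm(booster)

# Predict with the converted model ('daal_compute' stage).
result = d4p.gbt_classification_prediction(nClasses=2).compute(X_test, daal_model)
daal_predictions = result.prediction
```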
modelbuilders/bench.py
Outdated
import json


def columnwise_score(y, yp, score_func):
The bench.py file should be similar in all folders (sklearn, daal4py, etc.).
Thank you. Added the same bench.py as in the other folders. I also added a utils.py file with the functions I use in both of my benchmarks.
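
For context, a helper with the signature quoted above typically applies a metric column by column when the target has multiple outputs. A minimal sketch of what it could look like; only the signature appears in the diff, so the body below is an assumption:

```python
import numpy as np


def columnwise_score(y, yp, score_func):
    # Assumed behaviour: apply score_func per column for 2-D targets,
    # or directly for 1-D targets, and return the per-column scores.
    y, yp = np.asarray(y), np.asarray(yp)
    if y.ndim == 1:
        return score_func(y, yp)
    return np.array([score_func(y[:, i], yp[:, i]) for i in range(y.shape[1])])


# Example with mean squared error on a two-output target:
y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.1, 2.1], [2.9, 3.8]])
print(columnwise_score(y_true, y_pred, lambda a, b: np.mean((a - b) ** 2)))
```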
modelbuilders/lgbm_mb.py
Outdated
             'lgbm_predict', 'lgbm_to_daal', 'daal_compute'],
             times=[t_creat_train, t_creat_test, t_train, t_lgbm_pred, t_trans, t_daal_pred],
             accuracy_type=metric_name, accuracies=[0, 0, train_metric, test_metric_xgb, 0, test_metric_daal],
             data=[X_train, X_test, X_train, X_test, X_train, X_test])
Add newline
Done.
modelbuilders/xgb_mb.py
Outdated
print_output(library='modelbuilders', algorithm=f'xgboost_{task}_and_modelbuilder',
             stages=['xgb_train_dmatrix_create', 'xgb_test_dmatrix_create', 'xgb_training', 'xgb_prediction',
                     'xgb_to_daal_conv', 'daal_prediction'],
Use flake8 to correct formatting:
pip/conda install flake8
flake8 <folder or file names>
You can ignore 'line too long' warnings if the line is complicated.
Done for every file I added or changed.
I also applied autopep8 formatting, so every line is now at most 100 characters.
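
For readers skimming the quoted print_output call: each entry in stages is paired with a measured time and an accuracy value (the zeros appear to mark stages that produce no predictions). A minimal sketch of how the per-stage times might be collected before being handed to print_output; the measure helper and the synthetic data are illustrative, not the benchmark's actual code:

```python
import timeit

import numpy as np
import xgboost as xgb


def measure(func, *args, **kwargs):
    # Illustrative helper: return (elapsed seconds, result of the call).
    start = timeit.default_timer()
    result = func(*args, **kwargs)
    return timeit.default_timer() - start, result


# Synthetic data, only to make the sketch runnable.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 10)), rng.integers(0, 2, 1000)

# Time each benchmark stage separately, mirroring the stages/times lists
# that print_output receives in the quoted code.
t_dmatrix_create, dtrain = measure(xgb.DMatrix, X_train, label=y_train)
t_training, booster = measure(xgb.train, {'objective': 'binary:logistic'}, dtrain, 10)
t_prediction, y_pred = measure(booster.predict, dtrain)

print({'xgb_train_dmatrix_create': t_dmatrix_create,
       'xgb_training': t_training,
       'xgb_prediction': t_prediction})
```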
                    help='Count DMatrix creation in time measurements')
parser.add_argument('--single-precision-histogram', default=False, action='store_true',
                    help='Use single precision to build histograms instead of double precision')
parser.add_argument('--enable-experimental-json-serialization', default=True,
default=False is better if this feature affects performance.
It's True by default in XGBoost, so, as we discussed, I decided to leave it as is.
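
As a side note on that default: one way to keep the benchmark flag aligned with the library default while still allowing it to be switched off from the command line is to parse an explicit boolean value instead of using a store_true action. A hedged sketch; the str2bool helper is illustrative and not part of the benchmark:

```python
import argparse


def str2bool(value):
    # Illustrative parser for explicit 'True'/'False' command-line values.
    return str(value).lower() in ('true', '1', 'yes')


parser = argparse.ArgumentParser()
# default=True mirrors the XGBoost default discussed above, but the user can
# still pass an explicit False to measure the impact of the feature.
parser.add_argument('--enable-experimental-json-serialization',
                    type=str2bool, default=True,
                    help='Toggle experimental JSON serialization in XGBoost')
args = parser.parse_args(['--enable-experimental-json-serialization', 'False'])
print(args.enable_experimental_json_serialization)  # -> False
```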
No description provided.