[R-package] Provide recommendation for `mnative`? #348
Comments
@Laurae2, does that mean that we should alter the C++ build rather than just the R libraries? I think we can make that a suggestion rather than a compulsory process.
@Laurae2
@chivee no, this would just be a suggestion to users who want to achieve better local training speed. I'm not sure it has a major impact though; I'll test all that thoroughly before I make a PR. As @guolinke noted, there are very small differences. When I get time on my server I'll try it, and I'll get back here once my new benchmarks are done.
@guolinke Some results here. I am not posting the exact details of the benchmark because there will be more at a mini-conference I am doing next month. Settings:
"Best" means the best compilation flags for maximum speed, with the default settings overriding all the others if the difference is not significant (< ~1%) and not consistent (similar flags giving inconsistent results).
Summary (tl;dr): we notice LightGBM v2 with [...]. Therefore, the following recommendations could be made:
I will follow up with more in the next month. Bosch, 12 threads, LightGBM v1:
Bosch, 12 threads, LightGBM v2:
Bosch, 6 threads, LightGBM v1:
Bosch, 6 threads, LightGBM v2:
Bosch, 1 thread, LightGBM v1:
Bosch, 1 thread, LightGBM v2:
@Laurae2 Thanks for your benchmark 👍.
@guolinke I'll open a PR to add a recommendation once I have some good charts ready; the mini-conference will be ready soon (early next month) and I'll link to it in the PR. I also have xgboost benchmarks for comparison, do you want to see them? (I also got for
Sure, comparison benchmarks are always welcome. They can help us find out which parts we can improve further.
@guolinke Here for xgboost:
xgboost was "slow", so I skipped [...]. To compare xgboost and LightGBM, the best approach is to copy & paste the results into Excel (or anything similar) and make charts. See the end of this comment for the Excel table example. Default run:
Default flag:
Summary (tl;dr): configuration to choose; the difference might be large depending on the case:
See dmlc/xgboost#1950 to understand more of the xgboost implementation details. More to come next month (on 10 May). Bosch, 12 threads, xgboost depth-wise at b4d97d3:
Bosch, 12 threads, xgboost loss guide at b4d97d3:
Bosch, 6 threads, xgboost depth-wise at b4d97d3:
Bosch, 6 threads, xgboost loss guide at b4d97d3:
Bosch, 1 thread, xgboost depth-wise at b4d97d3:
Bosch, 1 thread, xgboost loss guide at b4d97d3:
Excel table example: Copy & paste:
Closed with #511 |
…microsoft#348)
* Added numClasses and objective, infer actualNumClasses from objective
* Update LightGBM notebook example
* Remove numClasses since it is now inferred from dataset
@guolinke I am just wondering if recommending using `mnative` can yield better performance for those installing directly from `install_github` (the default is `mcore2` in R).

Installation log when installing using `install_github` on Windows, for instance: we can see it is tuned for the Core 2 architecture:

This would require adding in the README.md of the R-package that, to maximize performance, `-march=native` should be added, but it might break packages.

Regarding `-O3` (if we were to push for even more), I know it is refused by CRAN for compatibility issues (some packages break with `-O3`).
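As a rough sketch of what such a recommendation could look like for users (an illustration only, not an endorsed configuration: `-march=native` ties the binary to the build machine and, as noted above, may break some packages), the flags could be set in a personal `~/.R/Makevars`, which R consults when compiling packages from source (e.g. via `install_github`):

```make
# ~/.R/Makevars -- hypothetical example; these flags override R's
# defaults for packages compiled from source on this machine only.
# -march=native may break some packages; remove it if builds fail.
CFLAGS   += -O2 -march=native
CXXFLAGS += -O2 -march=native
```

Keeping this in the user's own `Makevars` rather than the package itself would make it opt-in, which matches the "suggestion rather than compulsory" direction discussed above.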