[Tabular] Remove TorchThreadManager in TabularNN and hack in LightGBM #2472
Conversation
Job PR-2472-c4e3e95 is done.
To verify the difference, we ran a simple experiment on a few datasets on macOS. With the baseline implementation on the current master branch, we get:
After removing TorchThreadManager:
I would conclude that removing the thread-reset statement does not make a significant difference in performance. More visualized details follow.
Job PR-2472-2d50e04 is done.
Job PR-2472-b670121 is done.
Analysis on AdultIncome shows a major speedup on m6i.16xlarge: small-batch inference is ~2.5x faster!
LGTM, great work!
Issue #, if available:
Description of changes:
a. See original topic: Ray parallel #1329 (comment)
For an ensemble containing both a TabularFastAI model and an XGBoost model, overall inference latency drops from 49 ms to 33 ms (batch size = 1). Because TorchThreadManager is removed, XGBoost prediction time alone drops from 16 ms to 1.6 ms. (Tested on Linux with the AdultIncomeBinaryClassification task, trained with the high_quality preset.)
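To illustrate where the overhead came from: a guard such as TorchThreadManager clamps the backend thread count around each predict call and restores it afterwards, so with batch size 1 the set/restore pair runs on every single prediction. A minimal sketch of that pattern (the `get_num_threads`/`set_num_threads` stubs stand in for `torch.get_num_threads`/`torch.set_num_threads`; this is an assumed simplification, not the actual AutoGluon code):

```python
from contextlib import contextmanager

# Stub thread-count state; in the real code this would be
# torch.get_num_threads() / torch.set_num_threads().
_num_threads = 8

def get_num_threads():
    return _num_threads

def set_num_threads(n):
    global _num_threads
    _num_threads = n

@contextmanager
def thread_manager(num_threads):
    """Clamp the thread count for the duration of a call, then restore it.

    Repeating this save/set/restore dance on every predict() call is the
    per-call overhead this PR removes.
    """
    prev = get_num_threads()
    set_num_threads(num_threads)
    try:
        yield
    finally:
        set_num_threads(prev)
```

Wrapping every batch-size-1 prediction in such a context manager means two thread-count mutations per row, which dominates when the model call itself takes only a few milliseconds.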
Code snippet to reproduce the results:
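The original snippet is not shown here; as a stand-in, a minimal sketch of that kind of single-row latency measurement (the lambda predictor is a placeholder for a trained `TabularPredictor.predict` call on a one-row DataFrame, which is an assumption about the setup, not the actual benchmark script):

```python
import time
import statistics

def bench_single_row(predict_fn, rows, repeats=5):
    """Median per-row latency in ms for batch-size-1 inference."""
    per_run = []
    for _ in range(repeats):
        start = time.perf_counter()
        for row in rows:
            predict_fn(row)  # one prediction per call, i.e. batch size 1
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        per_run.append(elapsed_ms / len(rows))
    return statistics.median(per_run)

# Placeholder predictor; in the real experiment this would be a trained
# AutoGluon predictor's predict() invoked row by row.
latency_ms = bench_single_row(lambda row: sum(row), [[1.0, 2.0]] * 100)
```

Comparing the median per-row latency before and after the change on the same trained artifact is what the 16 ms vs. 1.6 ms numbers above reflect.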
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.