Predict Speed Dependency on Number of Trees #2094
I am trying to understand how prediction speed depends on the number of trees. My use case has very stringent requirements on single-sample prediction latency because of a real-time streaming setting. In simple tests with everything else held exactly the same, increasing the number of trees from 700 to 5600 did not increase prediction time by a factor of 8, but by something smaller: on average I measured about 4ms for the smaller model and 7ms for the larger one. This is within the requirement, but I was expecting a linear dependency at best, so I was hoping to gain some clarity on this.
Having looked at some of the other issues (e.g. #144) and also the C++ code (https://github.com/Microsoft/LightGBM/blob/master/src/boosting/gbdt.cpp#L613), it seems that multithreading can be used even in the case of a single-sample prediction. This would be consistent with my findings. Can someone confirm this?
Are there any benchmarks available showing how prediction speed (for a single sample) depends on the number of trees? Can someone also confirm that at prediction time each tree can be traversed independently, unlike at training time, where tree induction proceeds sequentially? Since the final prediction is the sum of the leaf values, prediction for a single sample would be map-reducible in this regard.
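To make the map-reduce point concrete, here is a toy sketch (not LightGBM's actual implementation; the node layout and names are hypothetical): each tree is traversed independently down to one leaf, and the per-tree leaf values are then summed.

```python
# Toy sketch of single-sample prediction in a tree ensemble.
# The "map" step (one traversal per tree) is independent per tree and can be
# parallelized; the "reduce" step is just a sum of the leaf values.

class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, leaf_value=None):
        self.feature = feature
        self.threshold = threshold
        self.left = left
        self.right = right
        self.leaf_value = leaf_value  # set only on leaf nodes

def traverse(tree, sample):
    """Walk one tree down to a leaf; needs no information from other trees."""
    node = tree
    while node.leaf_value is None:
        node = node.left if sample[node.feature] <= node.threshold else node.right
    return node.leaf_value

def predict(trees, sample):
    # Map (traverse each tree), then reduce (sum the leaf values).
    return sum(traverse(t, sample) for t in trees)

# Two depth-1 trees splitting on feature 0 at threshold 0.5.
trees = [
    Node(feature=0, threshold=0.5,
         left=Node(leaf_value=1.0), right=Node(leaf_value=2.0)),
    Node(feature=0, threshold=0.5,
         left=Node(leaf_value=0.5), right=Node(leaf_value=-0.25)),
]
print(predict(trees, [0.3]))  # 1.0 + 0.5 = 1.5
```

Since each `traverse` call touches only its own tree, the loop over trees is exactly the part a multithreaded predictor can split across threads.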
LightGBM focuses more on training efficiency. For prediction efficiency, you can try https://github.com/dmlc/treelite, which supports fast inference for both LightGBM and XGBoost.
I am also facing latency issues. I am working on a real-time application where prediction speed is critical.
Do you think treelite will help with single-instance prediction too? Their website mentions that it helps with prediction over large numbers of examples.
Also, do you have any suggestions for optimizing prediction time? Does changing the num_iteration parameter help?
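Since the prediction is a sum over trees, limiting the number of trees used at predict time cuts the work roughly proportionally. A minimal sketch of that idea, with toy functions standing in for trees (with a real trained LightGBM Booster, the analogous knob is the num_iteration argument of Booster.predict()):

```python
# Conceptual sketch of what a num_iteration-style predict argument does:
# use only the first k trees of the ensemble, truncating the sum of leaf
# values. Fewer trees traversed means proportionally less work per sample.

def make_toy_ensemble(n_trees):
    # Each "tree" is just a function mapping a sample to a leaf value.
    return [lambda x, i=i: 0.1 * (i + 1) * x for i in range(n_trees)]

def predict(trees, x, num_iteration=None):
    used = trees if num_iteration is None else trees[:num_iteration]
    return sum(t(x) for t in used)

trees = make_toy_ensemble(4)
full = predict(trees, 1.0)                         # all 4 trees
truncated = predict(trees, 1.0, num_iteration=2)   # first 2 trees only
```

The trade-off is accuracy versus latency: a truncated ensemble is effectively an earlier boosting iterate, so you would want to validate how much quality you lose at each cutoff.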
I have not worked with treelite myself.
Prediction speed can be affected by quite a few things, but one of the big ones should be the number of trees. LightGBM uses a second-order approximation, so in theory it should reach a reasonable solution after a fairly small number of iterations [1, 2].
Maybe set up an experiment where you vary the number of trees and measure single-sample prediction time?
I'd be interested to know what you find!
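A minimal version of such a timing experiment might look like this (toy per-tree work standing in for real tree traversals, so the absolute numbers are meaningless; only the scaling trend as the ensemble grows matters; with a real model you would instead train with different num_boost_round values and time Booster.predict on one row):

```python
import random
import time

# Hypothetical benchmark: time single-sample prediction as the ensemble grows.

def predict_one(tree_values, sample):
    # Stand-in for an ensemble prediction: a fixed amount of work per "tree".
    return sum(v * sample for v in tree_values)

def benchmark(tree_counts, repeats=1000):
    results = []
    for n in tree_counts:
        trees = [random.random() for _ in range(n)]
        start = time.perf_counter()
        for _ in range(repeats):
            predict_one(trees, 0.5)
        elapsed = (time.perf_counter() - start) / repeats
        results.append((n, elapsed))
    return results

for n, t in benchmark([100, 800]):
    print(f"{n} trees: {t * 1e6:.2f} us/prediction")
```

If the measured times grow sub-linearly in the number of trees, that would point to per-call fixed overhead (or multithreading) dominating, which would be consistent with the 4ms-to-7ms observation above.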
Mukherjee et al., "Parallel Boosting with Momentum", ECML PKDD, 2013.