
XGBoost-FastForest

Minimal library code to deploy XGBoost models in C++.


In science, it is very common to prototype algorithms with Python and then put them in production with fast C++ code. Transitioning models from Python to C++ should be as easy as possible to make sure new ideas can be tried out rapidly. The FastForest library helps you to get your XGBoost model into a C++ production environment as quickly as possible.

The mission of this library is to be:

  • Easy: deploying your XGBoost model should be as painless as it can be
  • Fast: thanks to efficient data structures for storing the trees, this library goes easy on your CPU and memory
  • Safe: the FastForest objects are not mutated when used, and therefore they are an excellent choice in multithreading environments (see the sketch after this list)
  • Portable: FastForest has no dependency other than the C++ standard library, and the minimum required C++ standard is C++98
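
For example, since evaluation does not mutate the forest, a single FastForest instance can be shared across threads without any locking. Here is a minimal sketch of concurrent evaluation; it assumes a model.txt with five features, as in the usage example below, and uses C++11 threads for brevity:

#include "fastforest.h"

#include <string>
#include <thread>
#include <vector>

int main() {
    std::vector<std::string> features{"f0", "f1", "f2", "f3", "f4"};
    const auto fastForest = fastforest::load_txt("model.txt", features);

    std::vector<float> input{0.0, 0.2, 0.4, 0.6, 0.8};

    // Both threads evaluate the same const FastForest; no synchronization needed.
    float score1 = 0.f;
    float score2 = 0.f;
    std::thread t1([&] { score1 = fastForest(input.data()); });
    std::thread t2([&] { score2 = fastForest(input.data()); });
    t1.join();
    t2.join();
}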

Installation

You can clone this repository, compile and install the library with cmake:

git clone git@github.com:guitargeek/XGBoost-FastForest.git
cd XGBoost-FastForest
mkdir build
cd build
cmake ..
make
sudo make install

Usage Example

Usually, XGBoost models are trained via the scikit-learn interface, like in this example with a random toy dataset. At the end, we save the model both in binary format, so it can still be read by XGBoost, and in text format, so it can be opened with FastForest.

from xgboost import XGBClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10000, n_features=5, random_state=42, n_classes=2, weights=[0.5])

model = XGBClassifier().fit(X, y)
booster = model.get_booster()

booster.save_model("model.bin")  # binary format, readable by XGBoost
booster.dump_model("model.txt")  # text format, readable by FastForest

In C++, you can now quickly load the model into a FastForest and obtain predictions by calling the FastForest object with an array of features.

#include "fastforest.h"
#include <cmath>

int main() {
    std::vector<std::string> features{"f0",  "f1",  "f2",  "f3",  "f4"};

    const auto fastForest = fastforest::load_txt("model.txt", features);

    std::vector<float> input{0.0, 0.2, 0.4, 0.6, 0.8};

    float score = 1./(1. + std::exp(-fastForest(input.data())));
}

Some things to keep in mind:

  • You need to pass the names of the features that you will later use for the prediction to fastforest::load_txt. This argument is necessary because the features are not ordered in the text file, so you need to define an order yourself.
  • Alternatively, you can let the FastForest determine an order automatically by passing an empty vector of strings; afterwards, you will see that the vector has been filled with the automatically determined feature names (see the sketch after this list).
    • The original order of the features used in the training process can't be recovered.
  • The FastForest does not apply the logistic transformation, so you will not have any precision loss when you need the untransformed output. Therefore, you need to apply the logistic transformation manually if you trained with objective='binary:logistic' and want to reproduce the results of predict_proba(), like in the code snippet above.
    • If you train with the objective='binary:logitraw' parameter, the output you'll get from predict_proba() will be without the logistic transformation, just like from the FastForest.
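
A minimal sketch of the automatic feature-name determination described above; the loop over the filled vector is just for illustration:

#include "fastforest.h"

#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> features;  // empty: let FastForest pick an order

    const auto fastForest = fastforest::load_txt("model.txt", features);

    // The vector is now filled with the automatically determined feature names.
    for (const std::string& name : features) {
        std::cout << name << std::endl;
    }
}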

Models with non-zero base response

If you still see a mismatch between the xgboost and FastForest outputs even with the logistic transformation, your xgboost model probably has a base score that is not equal to 0.5. You can inspect the base score of an XGBClassifier as follows:

import json
import xgboost as xgb


def get_basescore(model: xgb.XGBModel) -> float:
    """Get base score from an XGBoost sklearn estimator."""
    config = json.loads(model.get_booster().save_config())
    return float(config["learner"]["learner_model_param"]["base_score"])


print(get_basescore(model))  # usually 0.5

If the base score is not 0.5, note it down, apply the inverse logistic transformation, and use it as the base response parameter for the FastForest evaluation:

float base_score = /* the output of get_basescore from Python */;
float base_response = std::log(base_score / (1.0 - base_score));
float score = 1./(1. + std::exp(-fastForest(input.data(), base_response)));
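
For example, if get_basescore prints a base score of 0.9, the base response is std::log(0.9 / 0.1) ≈ 2.197.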

Unfortunately, the base score is not saved in the .txt dump of XGBoost, which is why this manual procedure is necessary.

In the future, FastForest might migrate to XGBoost's .json format for the model input, since this schema encodes the base score.

Multiclass classification with softmax

Multiclass classification models trained with the multi:softmax objective are also supported.

In this case, you should pass the number of classes to fastforest::load_txt and use the FastForest::softmax function for evaluation.

The function returns a vector with the probabilities, one entry per class.

std::vector<float> probas = fastForest.softmax(input.data());

In performance-critical applications, this interface should be avoided, because constructing the vector incurs a heap allocation. Use either the std::array interface or the old-school interface that writes the output into a function parameter instead.

{
    // std::array interface: the number of classes is a compile-time template parameter
    std::array<float, 3> probas = fastForest.softmax<3>(input.data());
}
// or
{
    // old-school interface: write the output into a buffer that is
    // allocated somewhere outside your loop over entries
    std::vector<float> probas(3);
    fastForest.softmax(input.data(), probas.data());
}

Performance Benchmarks

So far, FastForest has been benchmarked against the inference engine in the XGBoost Python library (which calls the underlying C library), the code-generation approaches treelite and m2cgen, and the TMVA framework. For every engine, the same tree ensemble of 1000 trees is used, and inference is performed on a single thread.

Engine                             Benchmark time
FastForest (GCC 11.1.0)            1.5 s
treelite (GCC 11.1.0)              2.7 s
m2cgen (GCC 11.1.0 with -O1) [1]   1.3 s
xgboost 1.5.2 in Python 3.10.1     2.0 s
ROOT 6.24/00 TMVA                  5.6 s

The benchmark can be reproduced with the files found in the benchmark directory. The Python scripts have to be run first, as they also train and save the models. The input type in the code generated by m2cgen was changed from double to float for a fairer comparison with FastForest.

The tests were performed on an AMD Ryzen 9 3900 12-Core Processor.

Serialization

A FastForest can be serialized to a binary file. The binary format reflects the memory layout of the FastForest class, so saving and loading are as fast as they can be. Serialization to file is done with the write_bin method.

fastForest.write_bin("forest.bin");

The serialized FastForest can be read back with fastforest::load_bin, which, unlike load_txt, does not take a reference to a vector of feature names.

const auto fastForest = fastforest::load_bin("forest.bin");
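
A minimal round-trip sketch, assuming a fastForest and input as in the usage example above; the reloaded forest gives identical predictions:

fastForest.write_bin("forest.bin");

const auto reloaded = fastforest::load_bin("forest.bin");

// Predictions from the original and the reloaded forest are identical.
float before = fastForest(input.data());
float after = reloaded(input.data());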

Footnotes

  1. Different optimization flags were compared, and -O1 was by far the best for the m2cgen model performance. Also note that the performance of the m2cgen approach of hardcoding the model as C code is very sensitive to the compiler. In particular, older compilers result in slower evaluation.