LESS is a supervised learning algorithm that is based on training many local estimators on subsets of a given dataset, and then passing their predictions to a global estimator. You can find the details about LESS in our manuscript.
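To give a rough idea of how this works, below is a minimal, self-contained sketch of the subset-stacking idea for regression. This is an illustration only, not the implementation in this package; among other things, LESS also makes use of distance-based weighting of the local predictions (see the note on scaling below), and the full details are in the manuscript.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def toy_subset_stacking(X_train, y_train, X_test, n_subsets=10, seed=0):
    rng = np.random.default_rng(seed)
    n_samples = X_train.shape[0]
    subset_size = max(2, n_samples // n_subsets)
    # Step 1: fit one local estimator on each random subset of the training data
    local_models = []
    for _ in range(n_subsets):
        idx = rng.choice(n_samples, size=subset_size, replace=False)
        local_models.append(LinearRegression().fit(X_train[idx], y_train[idx]))
    # Step 2: stack the local predictions and pass them to a global estimator
    Z_train = np.column_stack([m.predict(X_train) for m in local_models])
    global_model = DecisionTreeRegressor(random_state=seed).fit(Z_train, y_train)
    # Step 3: predict on new data by feeding the local predictions to the global model
    Z_test = np.column_stack([m.predict(X_test) for m in local_models])
    return global_model.predict(Z_test)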
You can install LESS with pip:
pip install less-learn
or with conda:
conda install -c conda-forge less-learn
(see also conda-smithy repository)
Here is how you can use LESS:
from sklearn.datasets import make_regression, make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
from less import LESSRegressor, LESSClassifier
### CLASSIFICATION ###
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3, \
n_clusters_per_class=2, n_informative=10, random_state=42)
# Train and test split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=42)
# LESS fit() & predict()
LESS_model = LESSClassifier(random_state=42)
LESS_model.fit(X_train, y_train)
y_pred = LESS_model.predict(X_test)
print('Test accuracy of LESS: {0:.2f}'.format(accuracy_score(y_test, y_pred)))
### REGRESSION ###
X, y = make_regression(n_samples=1000, n_features=20, random_state=42)
# Train and test split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=42)
# LESS fit() & predict()
LESS_model = LESSRegressor(random_state=42)
LESS_model.fit(X_train, y_train)
y_pred = LESS_model.predict(X_test)
print('Test error of LESS: {0:.2f}'.format(mean_squared_error(y_test, y_pred)))
Our two-part tutorial on Colab aims to get you familiar with LESS regression. If you want to try the tutorials on your own computer, then you also need to install the following additional packages: pandas, matplotlib, and seaborn.
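These can be installed with pip, for example:
pip install pandas matplotlib seaborn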
The default implementation of LESS uses Euclidean distances with a radial basis function. Therefore, it is a good idea to scale the input data before fitting. This can be done by setting the parameter scaling in LESSRegressor or LESSClassifier to True (this is the default value), or by preprocessing the data as follows:
from sklearn.preprocessing import StandardScaler
# Fit the scaler on the training data only, then apply the same transform to the test data
SC = StandardScaler()
X_train = SC.fit_transform(X_train)
X_test = SC.transform(X_test)
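Since LESS follows the scikit-learn estimator interface used in the examples above, the scaler and the model can also be combined in a pipeline. Below is a minimal sketch for regression; it assumes that setting scaling=False turns off the internal scaling of LESSRegressor, so that the data is not standardized twice:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from less import LESSRegressor

# Standardize the features and fit LESS in a single pipeline;
# scaling=False (assumed) avoids scaling the data a second time inside the model
pipe = make_pipeline(StandardScaler(), LESSRegressor(scaling=False, random_state=42))
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)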
An openmpi implementation of LESS is also available in another repository.
An R implementation of LESS is available in another repository.
Our software can be cited as:
@misc{LESS,
author = "Ilker Birbil",
title = "LESS: LEarning with Subset Stacking",
year = 2021,
url = "https://github.com/sibirbil/LESS/"
}
- Classification is added (LESSClassifier)
- Scaling is automatically done by default (scaling = True)
- The default global estimator for regression is now DecisionTreeRegressor instead of LinearRegression (global_estimator=DecisionTreeRegressor)
- Warnings can be turned on or off with a flag (warnings = True)
- Typos are corrected
- The hidden class for the binary classifier is now separate
- Local subsets with a single class are handled (the case of ConstantPredictor)
We thank Oguz Albayrak for his help with structuring our Python scripts.