
ACPI - Adaptive Conformal Prediction Intervals

ACPI is a Python package that enhances the Prediction Intervals provided by the split conformal approach by employing a weighting strategy. Unlike split conformal, which uses a constant correction for all test points to ensure the coverage guarantee, ACPI uses an adaptive correction term that depends on the specific test observation (for contrast, see the split-CP sketch below).

ACPI gives Prediction Intervals that better represent the uncertainty of the model (epistemic uncertainty) and are backed by strong theoretical guarantees: marginal and training-conditional coverage, as well as an asymptotic coverage guarantee.

Paper: Adaptive Conformal Prediction By Reweighting the Nonconformity Score
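For contrast, here is a minimal sketch (plain NumPy, not part of the ACPI API) of the constant split-CP correction that ACPI replaces with an adaptive one:

import numpy as np

def split_conformal_interval(pred_cal, y_cal, pred_test, alpha=0.1):
    # Plain split-CP: a single correction shared by every test point.
    scores = np.abs(pred_cal - y_cal)  # nonconformity scores on the calibration set
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample adjustment
    q = np.quantile(scores, level, method='higher')
    return pred_test - q, pred_test + q  # same width for all test points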

🔗 Requirements

Python 3.7+

OSX: ACPI uses Cython extensions that need to be compiled with multi-threading support enabled. The default Apple Clang compiler does not support OpenMP. To solve this issue, obtain the latest gcc version with Homebrew, which has multi-threading enabled: see, for example, the pysteps installation instructions for OSX.
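For example, a sketch assuming Homebrew installs a versioned gcc such as gcc-13 (adjust the suffix to the version you actually get):

$ brew install gcc
$ export CC=gcc-13
$ export CXX=g++-13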

Windows: Install MinGW (a Windows distribution of gcc) or Microsoft's Visual C++.

Install the required packages:

$ pip install -r requirements.txt

🛠 Installation

Clone the repo and run the following command in the ACPI directory to install ACPI:

$ pip install .

For an all-in-one installation, you can run the bash script install.sh:

$ bash install.sh

⚡️ Quickstart

We propose 3 methods to compute PI: LCP-RF, LCP-RF-G, and QRF-TC.

  • LCP-RF: Random Forest Localizer. It uses the learned weights of the RF to give more importance, in the calibration step, to calibration samples whose residuals are similar to that of the test point.

  • LCP-RF-G: Groupwise Random Forest Localizer. It extends the previous approach by conformalizing by group. The groups are computed using the weights of the RF, which make it possible to find clusters/partitions of the input space with similar residuals, allowing more efficient and more adaptive Prediction Intervals.

  • QRF-TC: It directly calibrates the Random Forest Localizer to achieve training-conditional coverage.

Remark: We use QRF-TC by default, as it provides the same level of accuracy as the other methods while being faster.

  • Assume we have trained a mean estimator (XGBRegressor) on the California house prices dataset.
from xgboost import XGBRegressor
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

test_ratio = 0.25
calibration_ratio = 0.5

sklearn_data = fetch_california_housing()
X, y = sklearn_data.data, sklearn_data.target
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=test_ratio, random_state=2023)
x_train, x_cal, y_train, y_cal = train_test_split(x_train, y_train, test_size=calibration_ratio, random_state=2023)

model = XGBRegressor()
model.fit(x_train, y_train)
  • To calculate Prediction Intervals using QRF-TC, the first step is to define the ACPI class, which has the same parameters as a traditional Random Forest. These parameters should be fine-tuned so that ACPI predicts the nonconformity scores of the calibration set well.
from sklearn.metrics import mean_absolute_error
from acpi import ACPI

# ACPI has the same parameters as a Random Forest; tune them so it predicts the scores of the calibration set well.
acpi = ACPI(model_cali=model, n_estimators=100, max_depth=20, min_node_size=10)

acpi.fit(x_cal, y_cal, nonconformity_func=None)
# You can use a custom nonconformity score via the argument 'nonconformity_func'.
# It takes a Callable[[ndarray, ndarray], ndarray] that returns the nonconformity
# scores given (predictions, y_true). By default, it uses the absolute residual if the
# model is a mean estimator, and max(pred_lower - y, y - pred_upper) if the model is a
# quantile estimator. A hypothetical custom score is sketched after this block.

v_cal = acpi.nonconformity_score(model.predict(x_cal), y_cal) 

# Check how well the RF predicts the nonconformity scores (tune ACPI's parameters above to reduce this error)
mae = mean_absolute_error(acpi.predict(x_cal), v_cal)
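For illustration, here is a hypothetical custom score (scaled absolute residuals; the name and the scaling choice are ours, not part of the package) that could be passed via 'nonconformity_func':

import numpy as np

# Hypothetical custom score: absolute residuals scaled by the prediction magnitude.
def scaled_residual(predictions, y_true):
    return np.abs(y_true - predictions) / (np.abs(predictions) + 1e-6)

# acpi.fit(x_cal, y_cal, nonconformity_func=scaled_residual)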
  • Then, we call the 'fit_calibration' method to run the training-conditional calibration.
alpha = 0.1
acpi.fit_calibration(x_cal, y_cal, nonconformity_func=None, quantile=1-alpha, only_qrf=True)
  • You can compute the Prediction Intervals as follows.
y_lower, y_upper = acpi.predict_pi(x_test, method='qrf')

# Or the prediction sets if the model is a classifier (NOT TESTED YET)
# y_pred_set = acpi.predict_pi(x_test, method='qrf')
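As a quick sanity check (generic NumPy, reusing the variables defined above), the empirical coverage on the test set should be close to 1 - alpha:

import numpy as np

# Empirical marginal coverage and average width of the intervals
coverage = np.mean((y_lower <= y_test) & (y_test <= y_upper))
print(f'Empirical coverage: {coverage:.3f} (target: {1 - alpha:.2f})')
print(f'Average interval width: {np.mean(y_upper - y_lower):.3f}')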

Improvements over split-CP

  • As an example, we compare the widths of the Prediction Intervals generated by our method with those produced by split-CP on the California house prices dataset. We use the MAPIE library to compute the split-CP intervals.
from mapie.regression import MapieRegressor

mapie = MapieRegressor(model, method='base', cv='prefit')
mapie.fit(x_cal, y_cal)
y_test_pred, y_test_pis = mapie.predict(x_test, alpha=alpha)
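A quick numeric comparison of the widths (reusing the variables computed above) makes the contrast explicit: split-CP produces a constant width, whereas QRF-TC adapts it per test point.

# Width of the split-CP intervals (constant) vs. QRF-TC intervals (adaptive)
split_width = y_test_pis[:, 1, 0] - y_test_pis[:, 0, 0]
qrf_width = y_upper - y_lower
print(f'split-CP width: mean {split_width.mean():.3f}, std {split_width.std():.3f}')
print(f'QRF-TC width: mean {qrf_width.mean():.3f}, std {qrf_width.std():.3f}')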
  • Below, we plot the interval widths of split-CP, QRF-TC, and the true errors of the model. The middle panel demonstrates that our method provides varying interval widths, whereas split-CP's width remains constant. The left and right panels show that our Prediction Intervals are better aligned with the true errors of the model.
import matplotlib.pyplot as plt
import numpy as np

idx = list(range(len(y_test)))
sort_id = np.argsort(y_test)
max_size = 500

# 'True' intervals: centered at the prediction, with half-width equal to the absolute error
y_hat = model.predict(x_test)
y_lower_true = y_hat - np.abs(y_hat - y_test)
y_upper_true = y_hat + np.abs(y_hat - y_test)
r = {'QRF_TC': y_upper - y_lower, 'SPLIT': y_test_pis[:, 1, 0] - y_test_pis[:, 0, 0], 'True': y_upper_true - y_lower_true}

fig, ax = plt.subplots(1, 3, figsize=(20, 6))

ax[0].errorbar(idx[:max_size], y_test[sort_id][:max_size], yerr=r['SPLIT'][sort_id][:max_size], fmt='o', label='SPLIT', color='tab:orange')
ax[0].errorbar(idx[:max_size], y_test[sort_id][:max_size], yerr=r['QRF_TC'][sort_id][:max_size], fmt='o', label='QRF_TC', color='tab:blue')
ax[0].errorbar(idx[:max_size], y_test[sort_id][:max_size], yerr=r['True'][sort_id][:max_size], fmt='o', label='True errors', color='tab:green')
ax[0].set_ylabel('Interval width')
ax[0].set_xlabel('Order of True values')
ax[0].legend()

ax[1].scatter(y_test, r['QRF_TC'], label='QRF_TC')
ax[1].scatter(y_test, r['SPLIT'], label='SPLIT')
ax[1].set_xlabel("True values", fontsize=12)
ax[1].set_ylabel("Interval width", fontsize=12)
ax[1].set_xscale("linear")
ax[1].set_ylim([0, np.max(r['QRF_TC'])*1.1])
ax[1].legend()

ax[2].scatter(y_test, r['True'], label='True errors', color='tab:green')
ax[2].scatter(y_test, r['QRF_TC'], label='QRF_TC', color='tab:blue')
ax[2].scatter(y_test, r['SPLIT'], label='SPLIT', color='tab:orange')
ax[2].set_xlabel("True values", fontsize=12)
ax[2].set_ylabel("Interval width", fontsize=12)
ax[2].set_xscale("linear")
ax[2].set_ylim([0, np.max(r['QRF_TC'])*1.1])
ax[2].legend()

plt.suptitle('Interval width comparison between SPLIT, QRF-TC, and the true errors', size=20)
plt.show()

[Figure: interval width comparisons between SPLIT, QRF-TC, and the true errors]

Notebooks

The notebook below shows how to use ACPI for mean regression and quantile regression.
