Use the quantile_forest to predict probabilities and other statistics #19
Thanks for your interest in the package! It should be possible to use the package to get everything you need for these tasks. To answer your specific questions:

1. Quantile ranks will essentially return the position of the objects in the ECDF, but not the ECDF itself. However, as mentioned below, it should be possible to retrieve the training probabilities and reconstruct the ECDF from model outputs.
2. While the package does not provide direct methods for calculating these statistics, they can be calculated as below.
3. Yes, you can extract the training sample frequencies or weights for each prediction via the proximity counts method. As you note, these weights can be used to calculate the statistics of interest, such as skewness or kurtosis.

To make this concrete, here's a simple example on a toy dataset. For each test sample, we construct a vector of the training responses from the proximity counts (by repeating them according to their frequency/weight) and then use these vectors to calculate some statistics of interest:

```python
import numpy as np
import scipy as sp
from quantile_forest import RandomForestQuantileRegressor
from sklearn import datasets
from sklearn.model_selection import train_test_split

X, y = datasets.load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

reg = RandomForestQuantileRegressor().fit(X_train, y_train)
proximities = reg.proximity_counts(X_test)

# For each test sample, construct an array of the training responses used for prediction.
samples = [np.concatenate([np.repeat(y_train[i[0]], i[1]) for i in prox]) for prox in proximities]

# The arrays can differ in length across test samples, so compute each statistic per sample.
print(f"Mean:\n{[np.mean(s) for s in samples]}\n")
print(f"Variance:\n{[np.var(s) for s in samples]}\n")
print(f"Skewness:\n{[sp.stats.skew(s) for s in samples]}\n")
print(f"Kurtosis:\n{[sp.stats.kurtosis(s) for s in samples]}\n")
```

I see. Thank you very much for your help! With best wishes.
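As a hedged sketch of the ECDF reconstruction mentioned in the first answer above: assuming a per-prediction vector of training responses is available (such as the `samples` arrays built from proximity counts), the empirical CDF can be evaluated directly with NumPy. The helper name `ecdf` here is hypothetical, not part of the package:

```python
import numpy as np

def ecdf(sample, x):
    """Evaluate the empirical CDF of `sample` at the point(s) `x`."""
    sample = np.sort(np.asarray(sample))
    # Fraction of sample values less than or equal to each x.
    return np.searchsorted(sample, x, side="right") / len(sample)

# Toy check on a known sample: four equally weighted values.
sample = np.array([1, 2, 3, 4])
print(ecdf(sample, [0.5, 2, 4]))  # ECDF is 0 below the minimum, 0.5 at the median, 1 at the maximum
```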
Hello author, thank you for your excellent work!
For quantile regression forests, I have been using Nicolai Meinshausen's R package "quantregForest" (http://github.com/lorismichel/quantregForest), but the performance of the R package is clearly worse than Python's, so when I came across quantile-forest, I was delighted. I need your help with three questions.
First, can quantile ranks be used to calculate probabilities? In quantregForest, you can achieve probability prediction by setting the `what = ecdf` parameter:

```r
predict(object, newdata = NULL, what = , ...)
```

Nicolai Meinshausen explains the `what` parameter: it can be a vector of quantiles or a function. The default for `what` is a vector of quantiles (with numerical values in [0, 1]) for which the conditional quantile estimates should be returned. If a function, it has to take a numeric vector as argument and return either a summary statistic (such as mean, median, or sd to get the conditional mean, median, or standard deviation), a vector of values (such as with quantiles or via sample), or a function (for example with ecdf).
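A rough Python analogue of the default quantile-vector form of `what`, as a sketch under the assumption that a per-prediction vector `sample` of training responses is available (e.g. built from proximity counts):

```python
import numpy as np

# Hypothetical vector of training responses behind one prediction.
sample = np.array([2.0, 3.0, 5.0, 7.0, 11.0])

# Analogue of quantregForest's `what = c(0.1, 0.5, 0.9)`:
# conditional quantile estimates at the requested levels.
print(np.quantile(sample, [0.1, 0.5, 0.9]))
```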
Second, can quantile-forest be used to calculate other statistics, such as kurtosis, skewness, or variance?
Third, can the weights of the training samples corresponding to a prediction sample be obtained? If so, the first two problems would be solved.
I hope to get your feedback. I wish you a happy life!
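Regarding the third question, a minimal sketch (assuming per-prediction pairs of training values and weights, which is the form quantile-forest's proximity counts provide) that computes weighted statistics directly, without materializing the repeated sample. The helper `weighted_moments` is hypothetical, not a package function:

```python
import numpy as np

def weighted_moments(values, weights):
    """Weighted mean, variance, skewness, and excess kurtosis."""
    values = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights to probabilities
    mean = np.sum(w * values)
    var = np.sum(w * (values - mean) ** 2)
    std = np.sqrt(var)
    skew = np.sum(w * ((values - mean) / std) ** 3)
    kurt = np.sum(w * ((values - mean) / std) ** 4) - 3.0
    return mean, var, skew, kurt

# Weighted pairs (1 with weight 2, 2 and 4 with weight 1) match the
# moments of the expanded sample [1, 1, 2, 4].
m, v, s, k = weighted_moments([1, 2, 4], [2, 1, 1])
print(m, v)  # 2.0 1.5
```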