Using this package on machine learning results #22
Comments
Hey Ivan, thanks so much for your interest in the toolbox! To use our evaluations, your model needs to produce some uncertainty estimate (something our toolbox does not provide). There are many ways to do this, but one easy way is to alter your regression model so that its outputs are the parameters of a conditional Gaussian (i.e. instead of producing y_pred, produce mu_pred and sigma_pred) and then train on the negative log likelihood. You can make this even better by training an ensemble of these models. I would recommend checking out this paper as a good starting point:
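To make the suggestion concrete, here is a minimal, hypothetical sketch (not part of the toolbox) of the idea: a model that outputs a mean `mu_pred` and a standard deviation `sigma_pred` and is fit by gradient descent on the Gaussian negative log likelihood. A 1-D linear model with a single global log-std keeps the math transparent; in practice you would use a neural network with a two-headed output.

```python
import numpy as np

# Toy data: y = 3x + noise with true noise std 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 3.0 * x + rng.normal(scale=0.5, size=200)

# Parameters: mean head (w, b) and a global log-std s, so sigma = exp(s) > 0.
w, b, s = 0.0, 0.0, 0.0
lr = 0.05
for _ in range(2000):
    mu = w * x + b
    sigma = np.exp(s)
    resid = y - mu
    # Per-point NLL of N(mu, sigma^2): s + resid^2 / (2 * exp(2s)) + const.
    # Gradients of the mean NLL w.r.t. w, b, and s:
    g_mu = -resid / sigma**2
    g_w = np.mean(g_mu * x)
    g_b = np.mean(g_mu)
    g_s = np.mean(1.0 - resid**2 / sigma**2)
    w -= lr * g_w
    b -= lr * g_b
    s -= lr * g_s

mu_pred, sigma_pred = w * x + b, np.exp(s)
print(f"slope ~ {w:.2f}, learned sigma ~ {sigma_pred:.2f}")
```

Note the trade-off the NLL encodes: the learned `sigma_pred` settles near the residual scale, so a poorly fit mean automatically reports higher uncertainty.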
Hi @IanChar, many thanks for your prompt answer! What about using bootstrap confidence intervals? They can be applied to estimate uncertainty in the quality of the classification (or prediction), so I could use the "+/- error" from the confidence interval to estimate lower/upper bounds on the standard deviation of the prediction samples. What do you think? Ivan
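The bootstrap idea above can be sketched as follows (a hypothetical illustration, not toolbox code): resample the validation (prediction, truth) pairs with replacement and recompute the statistic of interest on each resample, here the standard deviation of the residuals, to get a confidence interval on it.

```python
import numpy as np

# Stand-in validation data: pretend model predictions with noise std 0.4.
rng = np.random.default_rng(1)
y_true = rng.normal(size=500)
y_pred = y_true + rng.normal(scale=0.4, size=500)

resid = y_true - y_pred
n_boot = 2000
stats = np.empty(n_boot)
for i in range(n_boot):
    # Resample indices with replacement and recompute the residual std.
    idx = rng.integers(0, len(resid), size=len(resid))
    stats[i] = resid[idx].std()

# Percentile bootstrap 95% confidence interval.
low, high = np.percentile(stats, [2.5, 97.5])
print(f"residual std ~ {resid.std():.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```

Note this gives a confidence interval on an aggregate error statistic over the validation set, not a per-input predictive uncertainty, which is what the conditional-Gaussian approach provides.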
Yep, that's definitely a valid method too!
Marking this issue as closed.
Hi,
Thanks for making this package available to us! I have a simple question:
I have a data set split into train/validation sets. A regressor (or classifier) is trained, and then I use the validation set to get predictions Y_prediction; from the same validation set I also have the ground truth Y_true. How would you suggest computing the standard deviation of the predictions from a regressor (or classifier)?
Many thanks,
Ivan