Feature/tf probability #110
Conversation
Use tensorflow_probability to build probabilistic models. The goal is to provide better uncertainty estimates. We also needed to change X_meta into X_meta.to_numpy() to allow the __call__ function on the list of inputs.
…darray. Now we cast X_meta to a np.array in every case. This fixes a bug with the __call__ method of TensorFlow models.
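The cast described above can be sketched as follows (a minimal illustration, not the PR's actual code; the helper name `to_model_input` is hypothetical):

```python
import numpy as np
import pandas as pd

def to_model_input(X_meta):
    # Cast the metadata features to np.ndarray in every case, so the
    # Keras model's __call__ accepts them whether X_meta comes in as
    # a pandas DataFrame or is already an array.
    return np.asarray(X_meta)
```

np.asarray is a no-op on inputs that are already arrays, so the cast is safe to apply unconditionally.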
…name add bayesian models
WIP: clean it up to make a readable tutorial instead of an exploration
…del architecture function and simplify flipout and variational models
I cleaned up this pull request and managed to test it on a larger dataset (the movies dataset: 100,000 movie plots to classify).
This pull request is now ready for review, please :)
Idea for improvement (as a next step): another way to compute the lower/upper bounds is to sample the outputs thousands of times and determine a bootstrap 95% uncertainty area.
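The sampling idea above could look roughly like this (a sketch, assuming `predict_fn` is the stochastic forward pass of a trained probabilistic model, so repeated calls on the same inputs yield different probabilities):

```python
import numpy as np

def uncertainty_bounds(predict_fn, X, n_samples=1000, alpha=0.05):
    # Run the stochastic forward pass many times on the same inputs,
    # then take percentiles of the sampled outputs as lower/upper
    # bounds of a (1 - alpha) uncertainty area.
    samples = np.stack([predict_fn(X) for _ in range(n_samples)])
    lower = np.percentile(samples, 100 * (alpha / 2), axis=0)
    upper = np.percentile(samples, 100 * (1 - alpha / 2), axis=0)
    return lower, upper
```

With alpha=0.05 this gives the 2.5th and 97.5th percentiles, i.e. an empirical 95% interval per prediction.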
…n and another example with no automation
This pull request is linked to #104
The changes are optional: the user will need to run pip install melusine[tf-probability] to use this feature.
The base Melusine is thus not impacted.
Description
New models with a feature for uncertainty estimation are available, using TFP (tensorflow-probability).
The TFP-based models do not output point estimates but distributions over probabilities.
In other words, for each prediction, a distribution over the estimated probability is computed.
We propose to use this distribution to get lower/upper uncertainty bounds.
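To illustrate what "a distribution over the estimated probability" means (a toy numpy sketch, not the PR's TFP code; the weight means/standard deviations are made-up stand-ins for a learned posterior):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy posterior over the weights of a logistic classifier: a mean and
# a standard deviation per weight (stand-ins, not learned values).
w_mean = np.array([0.8, -0.5])
w_std = np.array([0.3, 0.2])

def sample_probability(x, n_samples=5000):
    # Each draw samples a weight vector from the posterior and computes
    # the resulting class probability, so the output for a single input
    # x is a whole distribution of probabilities, not one number.
    w = rng.normal(w_mean, w_std, size=(n_samples, len(x)))
    logits = w @ x
    return 1.0 / (1.0 + np.exp(-logits))

probs = sample_probability(np.array([1.0, 2.0]))
```

The spread of `probs` (e.g. its standard deviation or percentiles) quantifies the model's uncertainty for that prediction; TFP layers such as Flipout provide this behavior inside a Keras model.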
Fixes #104
Type of change
Please delete options that are not relevant.
How Has This Been Tested?
I plan to test it using an open-source dataset to check whether the classification performance stays the same (point estimates).
Test Configuration:
Checklist: