
Default values for Ensemble.from_sample #730

Merged
merged 8 commits into develop from feature_defaults_for_ensemble_from_sample
Oct 14, 2021

Conversation

PaulJonasJost
Collaborator

fixes #727.

@PaulJonasJost PaulJonasJost changed the title added default values to ensemble.from_sample Default values for Ensemble.from_sample Sep 30, 2021
@codecov-commenter

codecov-commenter commented Sep 30, 2021

Codecov Report

Merging #730 (deaf478) into develop (9d84d3e) will decrease coverage by 0.41%.
The diff coverage is 54.54%.


@@             Coverage Diff             @@
##           develop     #730      +/-   ##
===========================================
- Coverage    89.74%   89.33%   -0.42%     
===========================================
  Files           99       99              
  Lines         6720     6730      +10     
===========================================
- Hits          6031     6012      -19     
- Misses         689      718      +29     
Impacted Files Coverage Δ
pypesto/ensemble/utils.py 62.62% <ø> (-9.10%) ⬇️
pypesto/ensemble/ensemble.py 77.10% <54.54%> (-0.95%) ⬇️
pypesto/objective/amici_util.py 75.43% <0.00%> (-8.78%) ⬇️
pypesto/predict/amici_predictor.py 91.75% <0.00%> (-6.19%) ⬇️
pypesto/objective/amici_calculator.py 91.66% <0.00%> (-1.67%) ⬇️
pypesto/sample/geweke_test.py 94.36% <0.00%> (+2.81%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Member

@jvanhoefer jvanhoefer left a comment


Wouldn't we need some checks for whether x_names, lower_bound, ... are in **kwargs?

@PaulJonasJost
Collaborator Author

Wouldn't we need some checks for whether x_names, lower_bound, ... are in **kwargs?

Yes, definitely. Currently on it. 👍
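The check discussed here could be sketched roughly as below: fill in defaults for `x_names`, `lower_bound`, and `upper_bound` only when the caller did not already supply them via `**kwargs`. The helper name and the chosen default values are illustrative assumptions, not the implementation that was merged.

```python
# Hypothetical sketch of the kwargs check discussed above; not the merged
# pyPESTO code. Defaults (generic names, infinite bounds) are assumptions.
import numpy as np


def fill_ensemble_defaults(n_parameters: int, **kwargs) -> dict:
    """Add defaults for ensemble arguments missing from kwargs."""
    if kwargs.get("x_names") is None:
        kwargs["x_names"] = [f"x_{i}" for i in range(n_parameters)]
    if kwargs.get("lower_bound") is None:
        kwargs["lower_bound"] = np.full(n_parameters, -np.inf)
    if kwargs.get("upper_bound") is None:
        kwargs["upper_bound"] = np.full(n_parameters, np.inf)
    return kwargs


# Caller-supplied values take precedence; only missing ones get defaults.
kwargs = fill_ensemble_defaults(3, x_names=["k1", "k2", "k3"])
```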

@dilpath
Member

dilpath commented Sep 30, 2021

The docstring for lower_bound in EnsemblePrediction looks incorrect (it also seems completely different from upper_bound). Is lower_bound used with a different meaning in different places, or is this just a typo?

lower_bound:
Array of potential lower bounds for the predictions, should have
the same shape as the output of the predictions, i.e., a list of
numpy array (one list entry per condition), with the arrays having
the shape of n_timepoints x n_outputs for each condition.
upper_bound:
array of potential upper bounds for the parameters

@yannikschaelte
Member

It's just shortened, right? Arguably too shortened; it should at least say that it is similar to lower_bound.

@yannikschaelte
Member

OK, thanks for pointing out that it's predictions vs. parameters. Agreed: while conceptually both may make sense, from usage it should probably be parameters.

Member

@jvanhoefer jvanhoefer left a comment


Cool, looks good!

@PaulJonasJost
Collaborator Author

Given the return statement of Ensemble.predict(), I agree, it should be parameters if anything, but does it even make sense? As far as I can see, the EnsemblePredictor does not contain any info about the parameters anymore and is just meant as simulation result. But in that case lower bounds for parameters do not make sense? Should I remove it entirely?

@dilpath
Member

dilpath commented Oct 4, 2021

Given the return statement of Ensemble.predict(), I agree, it should be parameters if anything, but does it even make sense? As far as I can see, the EnsemblePredictor does not contain any info about the parameters anymore and is just meant as simulation result. But in that case lower bounds for parameters do not make sense? Should I remove it entirely?

Seems like the intention is for bounds to be on parameters in Ensemble (see the usage in Ensemble.check_identifiability), and on predictions in EnsemblePrediction (currently no usage...). Hence it doesn't make sense to pass bounds from Ensemble to EnsemblePrediction, as done in Ensemble.predict with lower_bound=self.lower_bound, upper_bound=self.upper_bound.

As @yannikschaelte said, both make sense, but only bounds on parameters are currently used. Maybe add a NotImplementedError to EnsemblePrediction.__init__ if bounds are passed? The names lower_bound and upper_bound could also be renamed in EnsemblePrediction to avoid confusion with the meaning of these names elsewhere in pyPESTO (and in Ensemble).
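The NotImplementedError suggestion could look roughly like the sketch below. The class is a minimal stand-in used only to illustrate the idea; pyPESTO's actual EnsemblePrediction constructor takes more arguments.

```python
# Minimal stand-in for the suggestion above: reject bounds in
# EnsemblePrediction.__init__ until bounds on predictions are actually
# used. Not pyPESTO's real constructor signature.
class EnsemblePredictionSketch:
    def __init__(self, lower_bound=None, upper_bound=None):
        if lower_bound is not None or upper_bound is not None:
            raise NotImplementedError(
                "Bounds on predictions are not yet supported; bounds are "
                "currently only used on parameters in Ensemble."
            )


EnsemblePredictionSketch()  # fine: no bounds passed
```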

@PaulJonasJost PaulJonasJost merged commit 0c88715 into develop Oct 14, 2021
@PaulJonasJost PaulJonasJost deleted the feature_defaults_for_ensemble_from_sample branch October 14, 2021 12:49
@yannikschaelte yannikschaelte mentioned this pull request Oct 28, 2021