Predictions #544
Conversation
Codecov Report
@@            Coverage Diff             @@
##           develop     #544      +/-   ##
===========================================
- Coverage    90.43%   84.07%   -6.36%
===========================================
  Files           65       69       +4
  Lines         4225     4403     +178
===========================================
- Hits          3821     3702     -119
- Misses         404      701     +297
Continue to review full report at Codecov.
Looks good 👍 Making states available to the user would be useful for state uncertainties.
The notebook fails during optimization.
# loop over parameters
for i_par in range(output_sensi[i_out].shape[1]):
    # create filename for this condition and parameter
    tmp_filename = os.path.join(output_path, output_file_dummy +
Could use the Path object here too.
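A minimal sketch of that suggestion with pathlib; the suffix actually appended to output_file_dummy is truncated in the diff above, so the variable values and file name below are purely illustrative:

from pathlib import Path

# hypothetical stand-ins for the variables in the loop above
output_path = Path('outputs')
output_file_dummy = 'prediction'
i_par = 0

# the real suffix built from output_file_dummy is elided in the diff
# above, so this file name is only illustrative
tmp_filename = output_path / f'{output_file_dummy}_par_{i_par}'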
Co-authored-by: Dilan Pathirana <59329744+dilpath@users.noreply.github.com>
outputs_sensi = amici_sy
timepoints = amici_t
if self.post_processor is not None:
    outputs = self.post_processor(outputs)
Suggested change:
-    outputs = self.post_processor(outputs)
+    outputs = self.post_processor(amici_y)
No change in logic, but better consistency with code below.
One important aspect here, though, is that this code may have issues if the user-provided post-processing routines apply their changes in-place. May want to add documentation about this or add safeguards in the code.
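One possible safeguard, sketched here as a hypothetical helper rather than the actual pyPESTO code, is to hand the post-processor a deep copy of the raw outputs:

import copy
from typing import Callable, List

import numpy as np


def apply_post_processor(
    post_processor: Callable[[List[np.ndarray]], List[np.ndarray]],
    amici_y: List[np.ndarray],
) -> List[np.ndarray]:
    # run the user-supplied post-processor on a deep copy, so in-place
    # modifications cannot corrupt the raw AMICI outputs
    return post_processor(copy.deepcopy(amici_y))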
Will change that a bit anyway, as this part of the code isn't really what I was aiming for. But yes, this is an important suggestion, will do so.
    n_simulations = 1
else:
    # simulate only a subset of conditions
    n_simulations = 1 + int(len(self.amici_objective.edatas) /
Would feel more comfortable with an explicit call to ceil here.
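For illustration (the variable names below are assumptions, not the PR's code), an explicit ceiling is clearer and also avoids an extra chunk when the number of conditions is an exact multiple of the chunk size:

import math

# hypothetical values for illustration
n_conditions = 10
chunk_size = 5

# `1 + int(n_conditions / chunk_size)` would give 3 here,
# while the explicit ceiling gives the intended 2
n_simulations = math.ceil(n_conditions / chunk_size)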
    of the simulations. Default are the timepoints of the amici model.
    This method takes a list of ndarrays (as returned in the field
    ['t'] of amici ReturnData objects) as input.
max_num_conditions:
I am not a fan of the name, since it's not really a max. What about condition_chunk_size?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
chunk_size is the word I was looking for and didn't find... :D 👍
test/petab/test_amici_prediction.py
def conversion_reaction_model():
    # read in sbml file
    model_name = 'conversion_reaction'
    sbml_file = f'../../doc/example/{model_name}/model_{model_name}.xml'
Please use os.path.dirname and os.path.join.
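A sketch of the suggested construction, reusing the relative path from the snippet above and anchoring it at the test file:

import os

model_name = 'conversion_reaction'
# resolve the SBML file relative to this test file instead of the
# current working directory
sbml_file = os.path.join(
    os.path.dirname(__file__), '..', '..', 'doc', 'example',
    model_name, f'model_{model_name}.xml')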
@@ -21,6 +21,10 @@
    Objective,
    NegLogPriors,
    ObjectiveBase)
from .prediction import (
Please add pypesto.prediction to the API documentation (update the Sphinx main file).
test/petab/test_amici_prediction.py
def test_petab_prediction():
    yaml_file = '../../doc/example/conversion_reaction/' \
Please use a relative path based on __file__ to make tests run from different working directories.
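For example (the YAML file name itself is elided in the snippet above, so only the directory is shown; pathlib is used here as one option):

from pathlib import Path

# anchor at this test file and reuse the relative path from the snippet
# above; the YAML file name itself is elided there
example_dir = (
    Path(__file__).parent / '..' / '..' / 'doc' / 'example' / 'conversion_reaction'
).resolve()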
All comments should be addressed now. In particular, the changes I did now were:
Fine to merge, or any further comments or remarks?
Didn't get through everything, but two reviewers are enough for me. GTG.
Looks good, fine for me, just a few suggestions but they're not so important I think.
from .constants import (MODE_FUN, OBSERVABLE_IDS, TIMEPOINTS, OUTPUT,
-                        OUTPUT_SENSI, TIME, CSV, H5, T, Y, SY, RDATAS)
+                        OUTPUT_SENSI, CSV, H5, T, X, SX, Y, SY, RDATAS)
In general, I think these constants should be unabbreviated and as unambiguous as possible (as in PEtab), e.g. OUTPUT_SENSI -> OUTPUT_SENSITIVITY and X -> STATE. In particular, STATE instead of X might be important here, since x is used for parameters elsewhere in pyPESTO; if a constant is added at some point to represent parameters, it might then also be PARAMETER instead of X.
Regarding OUTPUT_SENSITIVITY and others: should some of these constants be imported from AMICI? For now, this is fine as is for me.
Another alternative: X could be renamed to e.g. AMICI_X or AMICI_STATE.
👍
amici_outputs:
    List of Dicts with the fields 't', 'x', 'sx', 'y', and 'sy' as
    returned by an amici simulation
I guess this can be removed now
timepoints = [outputs['t'] for outputs in amici_outputs]
# add outputs and sensitivities if requested
if 0 in sensi_orders:
    outputs = [outputs['y'] for outputs in amici_outputs]
Suggested change:
-    outputs = [outputs['y'] for outputs in amici_outputs]
+    outputs = [amici_output['y'] for amici_output in amici_outputs]

A little confusing without understanding the scope, since outputs would be on both sides of the assignment operator.
Also, outputs could be renamed elsewhere, where it is used to refer to elements of amici_outputs and not to the outputs variable.
result.to_csv(tmp_filename, sep='\t')
# postprocess
if self.post_processor is not None:
    outputs = self.post_processor(amici_outputs)
outputs is overwritten here. If intended, then the above code that previously assigned a value to outputs (and similarly outputs_sensi) could be moved to be the elif of these conditions, e.g.
# add outputs and sensitivities if requested and post-processing was skipped
if self.post_processor is not None:
    ...
elif 0 in sensi_orders:
    outputs = [amici_output['y'] for amici_output in amici_outputs]
if self.post_processor_sensi is not None:
    ...
elif 1 in sensi_orders:
    outputs_sensi = [amici_output['sy'] for amici_output in amici_outputs]
Could have the post-processors be a dictionary such that any output can be processed, e.g.
AMICI_TIME = 't'
AMICI_STATE = 'x'
AMICI_OBSERVABLE = 'y'

# user input
post_processors = {
    AMICI_OBSERVABLE: my_custom_post_processor_callable,
    'sy': my_custom_post_processor_sensi_callable,
}

# default processor
default_post_processors = {
    AMICI_TIME: lambda t: t,
    AMICI_STATE: lambda x: x,
    AMICI_OBSERVABLE: lambda y: y,
    'sy': lambda sy: sy,
}

output_types = amici_outputs[0].keys()

# actual processing
outputs = {}
for output_type in output_types:
    outputs[output_type] = None
    if output_type in post_processors:
        outputs[output_type] = post_processors[output_type](amici_outputs)  # deepcopy amici_outputs?
    elif output_type in default_post_processors:
        outputs[output_type] = default_post_processors[output_type](amici_outputs)  # deepcopy amici_outputs?
Co-authored-by: Dilan Pathirana <59329744+dilpath@users.noreply.github.com>
# return dependent on sensitivity order
return results

def _get_outputs(self, x, sensi_orders, mode) -> Tuple[List, List, List]:
Type hints?
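A hedged sketch of what a fully annotated signature could look like; the concrete types of x, sensi_orders, and mode are assumptions, not taken from the PR:

from typing import List, Tuple

import numpy as np


class AmiciPredictor:  # hypothetical class name, for illustration only
    def _get_outputs(
        self,
        x: np.ndarray,
        sensi_orders: Tuple[int, ...],
        mode: str,
    ) -> Tuple[List[np.ndarray], List[np.ndarray], List[np.ndarray]]:
        ...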
Co-authored-by: Fabian Fröhlich <fabian@schaluck.com>
This is a concept for predictions within pyPESTO. I pushed this branch as a PR now, although two things are still missing:
Yet, it's just before Christmas and I wanted to have things pushed, as I think most cornerstones are set and we may want to discuss whether the overall concept is okay.
Why this whole concept of predictions? It's for four reasons: