
Hierarchical optimization: avoid extra simulations #973

Open
dweindl opened this issue Nov 15, 2022 · 4 comments · Fixed by #1245
Labels: enhancement (New feature or request), hierarchical (Related to hierarchical optimization), performance

Comments

@dweindl (Member) commented Nov 15, 2022

When using hierarchical optimization, two simulations are currently performed during each objective evaluation. The second one could be skipped when no gradient is required, which could significantly improve performance.

See here:

# TODO speed gain: if no gradient is required, then simulation can be
# skipped here, and rdatas can be updated in place
# (y, sigma, res, llh).

Ideally after AMICI-dev/AMICI#2215
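
A minimal sketch of the idea, not the actual pyPESTO implementation: `inner_problem` and its `solve`, `set_inner_parameters`, and `apply_to_rdatas` helpers are hypothetical placeholders. Only re-simulate when sensitivities are requested; otherwise update the existing ReturnData objects in place.

```python
import amici


def simulate_hierarchical(model, solver, edatas, inner_problem, sensi_orders):
    """Sketch: skip the second simulation when no sensitivities are needed."""
    # First simulation, run with placeholder values for the inner
    # (scaling/offset/sigma) parameters.
    rdatas = amici.runAmiciSimulations(model, solver, edatas)

    # Analytically compute the optimal inner parameters from the first run
    # (hypothetical helper).
    inner_parameters = inner_problem.solve(rdatas, edatas)

    if 1 in sensi_orders or 2 in sensi_orders:
        # Gradient (or Hessian approximation) requested: re-simulate with the
        # optimal inner parameters so that the sensitivities are consistent.
        inner_problem.set_inner_parameters(model, inner_parameters)
        rdatas = amici.runAmiciSimulations(model, solver, edatas)
    else:
        # Objective value only: update y, sigma, res, and llh in the existing
        # ReturnData objects in place instead of simulating again.
        rdatas = inner_problem.apply_to_rdatas(inner_parameters, rdatas)

    return rdatas
```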

@dweindl added the enhancement and performance labels on Nov 15, 2022
@dweindl added the hierarchical label on Nov 17, 2023
@Doresic (Contributor) commented Dec 4, 2023

Will look into it and update.

@dweindl (Member, Author) commented Apr 19, 2024

@Doresic : This is finished, right?

@Doresic (Contributor) commented Apr 19, 2024

Yes, mostly done.
That is now handled by the calculate_directly call of the RelativeCalculator.

AMICI is still called twice, through call_amici_twice, in some special cases. One of them is when 2 is in sensi_orders.
This part could be optimized further. As far as I know, AMICI does not actually compute 2nd-order sensitivities but uses the FIM or some BFGS-type approximation. That could be implemented in the calculate_directly call as well; in that case, one would not have to call AMICI twice even when 2nd-order derivatives are requested.
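
A hypothetical dispatch for that case (method and argument handling are simplified for illustration; only calculate_directly and call_amici_twice are names from the actual code):

```python
def calculate(self, sensi_orders, **kwargs):
    # Illustrative only: argument handling is simplified.
    if 2 in sensi_orders:
        # 2nd-order information requested (e.g. FIM / BFGS-type approximation):
        # currently handled by the two-simulation path.
        return self.call_amici_twice(sensi_orders=sensi_orders, **kwargs)
    # Otherwise a single simulation with in-place rescaling suffices.
    return self.calculate_directly(sensi_orders=sensi_orders, **kwargs)
```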

One special case that requests 2nd-order sensitivities, but can already be used with calculate_directly, is optimization with fides. If one just uses pypesto.optimize.FidesOptimizer(), call_amici_twice will be used, as fides always requests second-order sensitivities, which AMICI then returns as an approximation.
However, one can instead select pypesto.optimize.FidesOptimizer(hessian_update=fides.BFGS()), which makes fides compute the 2nd-order approximation itself, so it does not request it from the objective. Then the model is not simulated twice and calculate_directly is used.
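
For reference, a usage sketch of the two variants (assuming current pyPESTO and fides APIs; the exact argument form may differ between versions):

```python
import fides
import pypesto.optimize

# Default: fides requests 2nd-order information from the objective,
# which currently triggers the two-simulation path (call_amici_twice).
optimizer_default = pypesto.optimize.FidesOptimizer()

# BFGS Hessian approximation: fides builds its own curvature estimate and only
# requests function values and gradients, so calculate_directly can be used
# and the model is simulated once per objective evaluation.
optimizer_bfgs = pypesto.optimize.FidesOptimizer(hessian_update=fides.BFGS())
```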

TLDR: can be closed, but can also be slightly improved.

@dweindl (Member, Author) commented Apr 19, 2024

Ah, right. Thanks for the update.
