Towards linear optimization (dual problem, sensitivity problem, output gradient) #1110
Conversation
Not completely sure about the interface myself, but some remarks:

- I would rename `dual_model` to `dual`. What else should the dual of a model be if not a model?
- I would ditch `solve_dual` completely. The user can simply write `m.dual.solve()`.
- I don't like the name `solution_sensitivity`, but I'm not sure about a better name. How about `solve_d_mu`?
- I would say that the actual computation of `solution_sensitivity` should happen in `Model.compute`.

I'm going to prepare a PR for the discussed changes tomorrow.

`output_gradient` sounds better than `solution_sensitivity`. But maybe also `output_d_mu`?

- I would not add `P` and `adjoint_approach` to the signature in `Model`. I would assume this does not make much sense for arbitrary `Model`s.
- One could consider base class implementations via finite differences (see the sketch after the replies below).
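For concreteness, a minimal usage sketch of how the renamed interface might look from the user's side. The property `dual` and the keyword `output_d_mu` are taken from the suggestions above; the model variable `fom` and the exact signatures are assumptions, not a finalized API:

```python
# Hypothetical usage of the interface discussed above (names assumed).
mu = fom.parameters.parse([1.0])

# `dual` as a property returning a Model, so no separate `solve_dual`:
p = fom.dual.solve(mu)

# the actual gradient computation routed through Model.compute:
data = fom.compute(output_d_mu=True, mu=mu)
grad = data['output_d_mu']
```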
Thanks, @sdrave, for your very nice input.
I agree!
I think this is a good name. I agree.
Ah, even better!!
Agreed.
I think an adjoint approach is always possible for computing the gradient of arbitrary outputs of arbitrary models, but maybe that goes a little bit too far. So I agree with removing it there.
That is a very nice idea.
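To make the finite-difference fallback concrete, here is a minimal sketch of a forward-difference approximation of the output gradient. It only relies on `model.output`, so it would work for arbitrary models; the flattening via `to_numpy`/`parse` and the scalar output are simplifying assumptions, not code from this PR:

```python
import numpy as np

def output_d_mu_fd(model, mu, eps=1e-7):
    # Forward differences on the flattened parameter vector; a sketch of
    # the base class fallback idea discussed above, not pyMOR API.
    mu_array = mu.to_numpy()
    base = model.output(mu=mu)
    grad = np.empty(len(mu_array))
    for i in range(len(mu_array)):
        mu_eps = mu_array.copy()
        mu_eps[i] += eps
        # assumes a scalar output, so the difference quotient is a number
        grad[i] = (model.output(mu=model.parameters.parse(mu_eps)) - base) / eps
    return grad
```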
I adjusted this code to the new `compute` interface.
In my opinion the names of 1. and 2. do not really fit together, but so far I have not found a better name for these methods. Apart from that, I think this PR is a good start for an optimization tutorial, which I will prepare after this PR is merged.
How about extending `compute` to allow either passing a bool to `solution_d_mu`/`output_d_mu` or a tuple of parameter and index? Maybe rename `_compute_solution_d_mus` to `_compute_solution_d_mu` to align it with the signature of `compute`, and rename the current `_compute_solution_d_mu` to something like `_compute_solution_d_mu_single_direction`.
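A hedged sketch of that calling convention; the parameter name `'diffusion'` and the variable `fom` are hypothetical:

```python
# bool: differentiate the solution w.r.t. all parameter components
data = fom.compute(solution_d_mu=True, mu=mu)

# (parameter, index) tuple: a single direction only
data = fom.compute(solution_d_mu=('diffusion', 0), mu=mu)
```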
src/pymor/models/basic.py
```python
except AttributeError:
    assert self.output_functional is not None
    assert self.output_functional.linear
    # TODO: assert that the operator is symmetric
```
Before merging this, we need to find a solution here. One simple option would be to let the user pass an optional `dual_operator`. However, I'm unsure whether there are situations where the `output_functional` itself also needs to be modified to be usable as a rhs.
I mean, it is not even clear that a dual model has the `output_functional` as its right-hand side. This is probably usual for RB or optimization, but in general one could also go for a dual model with the actual rhs as its right-hand side.
We could thus also introduce an optional `dual_rhs` argument.
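As a rough illustration of how an optional `dual_rhs` could enter, a sketch under the assumption of a `StationaryModel`-like FOM with a scalar linear output. `with_` and the adjoint `operator.H` follow pyMOR conventions, but this helper is hypothetical, not the API adopted in this PR:

```python
def make_dual_model(fom, dual_rhs=None):
    # Classical dual problem for a linear output l: A(mu)^T p = l.
    # An explicit `dual_rhs` overrides l as the right-hand side.
    assert fom.output_functional is not None
    assert fom.output_functional.linear
    rhs = dual_rhs if dual_rhs is not None else fom.output_functional.H
    return fom.with_(operator=fom.operator.H, rhs=rhs)
```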
Thanks @sdrave for the nice feedback. I have tried to address all your comments. I am now using additional arguments …
PS: @renefritze Why did the enforce label check fail again?
Hey @TiKeil, it looks like this PR touches our CI config (and you are not in my contributor safelist), therefore I am not starting a gitlab-ci build for it and it will not be mergeable.
Yeah, shot myself in the foot there with the update, didn't I? 🙄
Approving for the bridge bot to start the PR build again
@renefritze, something seems to be wrong with the wheels?
Thinking about it again, I think it is better to leave the question of "how to define a dual model?" open and to open an issue for this instead. The reason is that there does not really exist a unique dual model. What is unique, though, is a …
@TiKeil, have you thought about my proposal of taking a discrete point of view and always using …
Yea, I did.
This PR adds code for handling PDE-constrained parameter optimization. We only cover simple linear optimization in the sense that the output functional is linear. For that, we add code to compute the gradient of the output functional w.r.t. the parameter.
To compute the gradient, two approaches are feasible: one is to compute the sensitivities of the solution, the other is the adjoint approach, where a dual solution is used.
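For a concrete picture, assume a parametric linear problem $A(\mu)\,u(\mu) = f(\mu)$ with linear output $J(\mu) = \ell(u(\mu))$; the two approaches then read as follows (standard adjoint calculus, not notation taken from this PR):

```latex
% sensitivity approach: one linear solve per parameter component
A(\mu)\,\partial_{\mu_i} u = \partial_{\mu_i} f - (\partial_{\mu_i} A)\,u(\mu),
\qquad
\partial_{\mu_i} J = \ell(\partial_{\mu_i} u)

% adjoint approach: a single dual solve, reused for every component
A(\mu)^{\mathsf T} p = \ell,
\qquad
\partial_{\mu_i} J = p^{\mathsf T}\bigl(\partial_{\mu_i} f - (\partial_{\mu_i} A)\,u(\mu)\bigr)
```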
We also added a demo script where we test all of the code by comparing the computation time of the FOM and ROM implementations.
For more information on the topic, we also have a tutorial in #1205.