Evaluator: compute reduced Hessian on GPU #118
Conversation
* new function to compute Hessian of objective with AutoDiff, via Hessian-vector product
* rename `cost_production` as `objective`
* rename `\partial cost()` as `adjoint_objective!`
Allow evaluating the Hessian on the GPU directly.

ReducedSpaceEvaluator:
* implement `hessprod!` with AutoDiff
* implement `hessprod_lagrangian_penalty!` with AutoDiff
* replace MATPOWER's Jacobian with AutoDiff's Jacobian
* remove old attribute `autodiff` in `ReducedSpaceEvaluator`
* improve type inference for AutoDiff
* remove method `get(nlp, AutoDiffBackend())`, which was ambiguous
* remove Hessian and Jacobian caches in `ReducedSpaceEvaluator`

Polar:
* remove `init_autodiff_factory` in Polar/
* add a new `ConstantJacobian` structure for linear functions
* add a new function `is_linear` to state whether a constraint is linear
* add new storage types `JacobianStorage` and `HessianStorage`
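As context for the `hessprod!` entries above, here is a minimal, self-contained sketch of the Hessian-vector-product principle using ForwardDiff. It only illustrates the idea (the full Hessian is never formed); it is not ExaPF's implementation, and `f` and `hessprod` are illustrative names:

```julia
using ForwardDiff

f(x) = 0.5 * sum(abs2, x) + x[1] * x[2]   # toy objective

# H(x) * v as the directional derivative d/dt ∇f(x + t*v) at t = 0,
# so the Hessian matrix itself is never materialized.
hessprod(f, x, v) = ForwardDiff.derivative(t -> ForwardDiff.gradient(f, x .+ t .* v), 0.0)

x = [1.0, 2.0]; v = [0.0, 1.0]
hessprod(f, x, v) ≈ ForwardDiff.hessian(f, x) * v   # true
```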
This looks really nice.
@@ -55,9 +55,9 @@ powerflow_solver = NewtonRaphson(tol=ntol)
nlp = ExaPF.ReducedSpaceEvaluator(polar, x0, u0;
    linear_solver=algo, powerflow_solver=powerflow_solver)
convergence = ExaPF.update!(nlp, u0)
ExaPF.reset!(nlp)
Lately I tried to get better at this. I have no idea why I spontaneously spread this around the code. Must be VisualCode's fault :-P.
No doubt that this is VisualCode's fault!
end

function ReducedSpaceEvaluator(
    model, x, u;
    constraints=Function[voltage_magnitude_constraints, active_power_constraints, reactive_power_constraints],
    linear_solver=DirectSolver(),
    powerflow_solver=NewtonRaphson(tol=1e-12),
    want_jacobian=true,
    want_hessian=true,
missing: `want_gpu` :-)
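For reference, a hypothetical call exercising these keywords, reusing `polar`, `x0`, `u0` from the README snippet quoted earlier. That `want_hessian=false` is a valid override is an assumption based on the signature above; this thread does not show it:

```julia
# Skip allocating the Hessian workspace when only first-order
# information is needed (assumed to be the purpose of this keyword).
nlp = ExaPF.ReducedSpaceEvaluator(polar, x0, u0;
    powerflow_solver=NewtonRaphson(tol=1e-12),
    want_jacobian=true,
    want_hessian=false,
)
```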
    reduced_gradient!(nlp, jv, jvx, jvx, jvu)
    return
end

function ojtprod!(nlp::ReducedSpaceEvaluator, jv, u, σ, v)
Is this the adjoint of the objective? Could be called gradient.
This is not exactly the adjoint of the objective. It's the adjoint of the vector-valued function `x -> [f(x); h(x)]`, with `f` the objective and `h` the nonlinear constraints.
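In other words, `ojtprod!` assembles `σ ∇f(u) + J_h(u)' v`, i.e. the adjoint of `u -> [f(u); h(u)]` applied to the weights `[σ; v]`. A toy dense sketch with ForwardDiff; `f`, `h`, and `ojtprod` are illustrative names, not ExaPF's internals:

```julia
using ForwardDiff

f(u) = sum(abs2, u)                  # scalar objective
h(u) = [u[1] * u[2], u[2] - 1.0]     # vector of nonlinear constraints

# Adjoint of u -> [f(u); h(u)] applied to [σ; v]
ojtprod(u, σ, v) = σ .* ForwardDiff.gradient(f, u) .+ ForwardDiff.jacobian(h, u)' * v

ojtprod([1.0, 2.0], 1.0, [0.5, -1.0])
```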
    # ψ = -(∇gₓ' \ (∇²fx .+ ∇²gx))
    if isa(hessvec, CUDA.CuArray)
        ∇gₓ = nlp.state_jacobian.Jx.J
        ∇gT = LinearSolvers.get_transpose(nlp.linear_solver, ∇gₓ)
Aha, I should have looked at that.
I am thinking of a cleaner implementation for all our linear operators. Let's discuss that today :-)
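For readers following the `ψ = -(∇gₓ' \ (∇²fx .+ ∇²gx))` comment in the hunk above, here is a dense toy version of that second-order adjoint solve. The GPU path instead reuses a transposed factorization via `LinearSolvers.get_transpose`; the values below are placeholders:

```julia
using LinearAlgebra

∇gx = [2.0 1.0; 0.0 3.0]      # toy state Jacobian ∇gₓ
w   = [1.0, -2.0]             # stands in for (∇²fₓ .+ ∇²gₓ) applied to a vector
ψ   = -(transpose(∇gx) \ w)   # second-order adjoint
```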
    # Jacobian
    ∇gₓ = nlp.autodiff.Jgₓ.J::SpMT
    ∇gᵤ = nlp.autodiff.Jgᵤ.J::SpMT
    # Two vector products
This is like a cache for preallocation?
Exactly!
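A sketch of that preallocation pattern: one struct holds the sparse Jacobians next to the work vectors they multiply into, so repeated Hessian-vector products allocate nothing. Field and type names here are illustrative, not ExaPF's actual `JacobianStorage`:

```julia
using SparseArrays

struct JacobianCache{SpMT, VT}
    Jx::SpMT   # ∇gₓ
    Ju::SpMT   # ∇gᵤ
    tx::VT     # preallocated work vector for products with Jx
    tu::VT     # preallocated work vector for products with Ju
end

JacobianCache(Jx::SparseMatrixCSC, Ju::SparseMatrixCSC) =
    JacobianCache(Jx, Ju, zeros(size(Jx, 1)), zeros(size(Ju, 1)))
```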
@@ -125,6 +125,36 @@ function AutoDiff.jacobian!(polar::PolarForm, jac::AutoDiff.Jacobian, buffer)
    return jac.J
end

# Properly handle the constant Jacobian case
function AutoDiff.ConstantJacobian(polar::PolarForm, func::Function, variable::Union{State,Control})
Nice. Have you run into this case?
Indeed! The constraints w.r.t. voltage magnitude are all linear, leading to a constant Jacobian.
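A hypothetical mirror of that pattern (not ExaPF's actual type): the Jacobian of a linear constraint is computed once, and every later evaluation is a no-op returning the cached matrix. For voltage-magnitude bounds the Jacobian is just a constant selection matrix:

```julia
using LinearAlgebra, SparseArrays

struct MyConstantJacobian{SpMT}
    J::SpMT
end

# "Evaluating" a constant Jacobian only returns the cached matrix.
evaluate_jacobian!(cj::MyConstantJacobian, x) = cj.J

# e.g. bounds on the first two voltage magnitudes of a 3-state toy system
cj = MyConstantJacobian(sparse(1.0I, 3, 3)[1:2, :])
evaluate_jacobian!(cj, rand(3))   # no recomputation, always cj.J
```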
* change signature of `hessian_prod_objective`
* remove deprecated functions
* rename `adjoint_objective!` as `gradient_objective!`