Evaluator: compute reduced Hessian on GPU #118

Merged: 4 commits into develop on Mar 10, 2021
Conversation

@frapac (Member) commented on Mar 9, 2021:

Allow evaluating the Hessian directly on the GPU

ReducedSpaceEvaluator:

  • implement hessprod! with AutoDiff (a toy sketch of the Hessian-vector-product pattern follows these lists)
  • implement hessprod_lagrangian_penalty! with AutoDiff
  • replace MATPOWER's Jacobian with AutoDiff's Jacobian
  • remove old attribute autodiff in ReducedSpaceEvaluator
  • improve type inference for AutoDiff
  • remove method get(nlp, AutoDiffBackend()), which
    was ambiguous
  • remove Hessian and Jacobian caches in ReducedSpaceEvaluator

Polar:

  • new function to compute Hessian of objective with AutoDiff,
    via Hessian-vector product
  • remove init_autodiff_factory in Polar/
  • add a new ConstantJacobian structure for linear functions
  • add a new function is_linear to state whether a constraint is linear
  • add new storage types JacobianStorage and HessianStorage
  • rename cost_production as objective
  • rename \partial cost() as adjoint_objective!
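
For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of a Hessian-vector product of a reduced objective computed with forward-mode AutoDiff, which is the kind of product `hessprod!` provides. This is not ExaPF's API: every name in the snippet (`solve_state`, `reduced_objective`, `reduced_hessprod`, the toy matrices) is invented for illustration.

```julia
# Toy sketch: reduced Hessian-vector product via forward-mode AD.
using ForwardDiff, LinearAlgebra

# Linear "power-flow" equations g(x, u) = A*x + B*u = 0 define x(u).
const A = [2.0 1.0; 1.0 3.0]
const B = [1.0 0.0; 0.0 1.0]
const Ainv = inv(A)
solve_state(u) = -Ainv * (B * u)                 # x(u) such that g(x, u) = 0

f(x, u) = 0.5 * dot(x, x) + 0.25 * sum(u .^ 4)   # objective f(x, u)
reduced_objective(u) = f(solve_state(u), u)      # F(u) = f(x(u), u)

# Hessian-vector product ∇²F(u) * w: differentiate the gradient of F
# along the direction w with dual numbers.
function reduced_hessprod(u, w)
    grad(v) = ForwardDiff.gradient(reduced_objective, v)
    return ForwardDiff.derivative(t -> grad(u .+ t .* w), 0.0)
end

u = [1.0, 2.0]; w = [0.5, -1.0]
hv = reduced_hessprod(u, w)
@assert hv ≈ ForwardDiff.hessian(reduced_objective, u) * w
```

ExaPF's evaluator obtains x(u) from a nonlinear power-flow solve and assembles the product from sparse adjoints (on the GPU in this PR); the toy above only illustrates the differentiate-the-gradient idea behind a Hessian-vector product.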

frapac requested a review from michel2323 on March 9, 2021.
@michel2323 (Member) left a comment:
This looks really nice.

@@ -55,9 +55,9 @@ powerflow_solver = NewtonRaphson(tol=ntol)
nlp = ExaPF.ReducedSpaceEvaluator(polar, x0, u0;
    linear_solver=algo, powerflow_solver=powerflow_solver)
convergence = ExaPF.update!(nlp, u0)
ExaPF.reset!(nlp)
michel2323 (Member):
Lately I tried to get better at this. I have no idea why I spontaneously spread this around the code. Must be VisualCode's fault :-P.

frapac (Member, Author):
No doubt that this is VisualCode's fault!

end

function ReducedSpaceEvaluator(
    model, x, u;
    constraints=Function[voltage_magnitude_constraints, active_power_constraints, reactive_power_constraints],
    linear_solver=DirectSolver(),
    powerflow_solver=NewtonRaphson(tol=1e-12),
    want_jacobian=true,
    want_hessian=true,
michel2323 (Member):
missing:
want_gpu :-)

    reduced_gradient!(nlp, jv, jvx, jvx, jvu)
    return
end

function ojtprod!(nlp::ReducedSpaceEvaluator, jv, u, σ, v)
michel2323 (Member):
Is this the adjoint of the objective? Could be called gradient.

frapac (Member, Author):
This is not exactly the adjoint of the objective. It's the adjoint of the vector-valued function x -> [f(x); h(x)], with f the objective and h the nonlinear constraints.
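
To make that concrete, here is a hedged toy sketch of the operation `ojtprod!` stands for: the transposed Jacobian-vector product of the stacked map u -> [f(u); h(u)] applied to the seed [σ; v]. The functions `f`, `h`, and `ojtprod` below are invented for the example and are not ExaPF's API.

```julia
using ForwardDiff, LinearAlgebra

f(u) = 0.5 * dot(u, u)                    # toy objective
h(u) = [u[1]^2 - u[2], u[1] + 2u[2]^3]    # toy "nonlinear constraints"

# jv = σ ∇f(u) + ∇h(u)ᵀ v, i.e. the adjoint of u -> [f(u); h(u)] seeded with [σ; v].
ojtprod(u, σ, v) = σ .* ForwardDiff.gradient(f, u) .+ ForwardDiff.jacobian(h, u)' * v

u = [1.0, 2.0]; σ = 0.5; v = [1.0, -1.0]
stacked(u) = vcat(f(u), h(u))             # the stacked map u -> [f(u); h(u)]
@assert ojtprod(u, σ, v) ≈ ForwardDiff.jacobian(stacked, u)' * vcat(σ, v)
```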

# ψ = -(∇gₓ' \ (∇²fx .+ ∇²gx))
if isa(hessvec, CUDA.CuArray)
    ∇gₓ = nlp.state_jacobian.Jx.J
    ∇gT = LinearSolvers.get_transpose(nlp.linear_solver, ∇gₓ)
michel2323 (Member):
Aha, I should have looked at that.

frapac (Member, Author):
I am thinking of a cleaner implementation for all our linear operators. Let's discuss that today :-)
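
For context, a small CPU-only sketch of the pattern behind `get_transpose` (all names invented, not ExaPF's types): the adjoint solve ∇gₓᵀ ψ = rhs recurs in every Hessian-vector product, so the transpose of the sparse state Jacobian is materialized and factorized once and reused, instead of handing a lazy `transpose` wrapper to a sparse solver, which a GPU sparse backend may not accept directly.

```julia
using SparseArrays, LinearAlgebra

struct AdjointSolver{M,F}
    JxT::M       # explicit transpose of the state Jacobian, stored once
    lu_JxT::F    # factorization reused for every adjoint solve
end

function AdjointSolver(Jx::SparseMatrixCSC)
    JxT = sparse(transpose(Jx))          # materialize ∇gₓᵀ once
    return AdjointSolver(JxT, lu(JxT))
end

adjoint_solve(s::AdjointSolver, rhs) = s.lu_JxT \ rhs

Jx = sparse([2.0 1.0 0.0; 0.0 3.0 1.0; 1.0 0.0 4.0])    # stand-in for ∇gₓ
solver = AdjointSolver(Jx)
ψ = adjoint_solve(solver, [1.0, 0.0, -1.0])
@assert transpose(Jx) * ψ ≈ [1.0, 0.0, -1.0]            # solves ∇gₓᵀ ψ = rhs
```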

# Jacobian
∇gₓ = nlp.autodiff.Jgₓ.J::SpMT
∇gᵤ = nlp.autodiff.Jgᵤ.J::SpMT
# Two vector products
michel2323 (Member):
This is like a cache for preallocation?

frapac (Member, Author):
Exactly!
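
As a hedged illustration of the preallocation idea behind `JacobianStorage` / `HessianStorage` (the struct and field names below are invented for the sketch): every work buffer is allocated once when the evaluator is built, so the hot Hessian-vector-product path can run allocation-free, with CPU or GPU array types alike.

```julia
struct HessianWorkspace{VT}
    ψ::VT       # adjoint of the power-flow equations
    tgt::VT     # forward (tangent) seed
    hv_x::VT    # state part of the product
    hv_u::VT    # control part of the product
end

HessianWorkspace{VT}(nx::Int, nu::Int) where {VT} =
    HessianWorkspace(VT(undef, nx), VT(undef, nx), VT(undef, nx), VT(undef, nu))

# CPU buffers; on the GPU one would pass e.g. VT = CUDA.CuVector{Float64}.
ws = HessianWorkspace{Vector{Float64}}(6, 2)
@assert length(ws.hv_u) == 2
```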

@@ -125,6 +125,36 @@ function AutoDiff.jacobian!(polar::PolarForm, jac::AutoDiff.Jacobian, buffer)
    return jac.J
end

# Handle properly constant Jacobian case
function AutoDiff.ConstantJacobian(polar::PolarForm, func::Function, variable::Union{State,Control})
michel2323 (Member):
Nice. Have you run into this case?

frapac (Member, Author):
Indeed! The constraints w.r.t. voltage magnitude are all linear, leading to a constant Jacobian.
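
As a hedged illustration of the constant-Jacobian idea (invented names, not ExaPF's `AutoDiff.ConstantJacobian`): for a linear constraint h(x) = C*x, the Jacobian is C at every operating point, so it is stored once and the `jacobian!`-style call simply returns the cache instead of re-running AutoDiff.

```julia
using SparseArrays

struct ConstantJacobianCache{M}
    J::M
end

# Linear "voltage magnitude"-style constraint that picks out two variables.
C = sparse([1, 2], [1, 3], [1.0, 1.0], 2, 4)
cache = ConstantJacobianCache(C)

# Mirrors the jacobian! calling pattern, but does no differentiation work.
jacobian!(cache::ConstantJacobianCache, x) = cache.J

@assert jacobian!(cache, rand(4)) === cache.J   # same matrix for any x
```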

Follow-up commits in this PR:

* change signature of `hessian_prod_objective`
* remove deprecated functions
* rename `adjoint_objective!` as `gradient_objective!`
frapac merged commit 6406b88 into develop on Mar 10, 2021.
frapac deleted the fp/adjhessian branch on March 12, 2021.