Integrate batch Hessian in ReducedSpaceEvaluator routine #179
Conversation
* add support for batch Hessian
* generate Hessian code with metaprogramming
* change the signature of hessian_lagrangian_penalty_prod!
* ReducedSpaceEvaluator now uses the batch Hessian by default, allowing the computation to be offloaded to the GPU while remaining usable with a CPU-compatible solver
* add CUSOLVERRF to the dependencies

Side note: this PR targets PR #178, to ease the reviewing process.
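The batch-Hessian idea above can be sketched as follows: instead of computing Hessian-vector products one seed at a time, a whole block of tangent directions is propagated in a single pass, which maps to one dense (BLAS-3) operation that runs well on the GPU. This is a minimal illustration on a quadratic model; the names `QuadModel` and `batch_hessprod!` are hypothetical and are not the PR's actual API.

```julia
using LinearAlgebra

# Toy objective for the sketch: f(x) = 0.5 * x' * Q * x, whose Hessian is Q.
struct QuadModel
    Q::Matrix{Float64}
end

# Compute H * V for a block of tangent vectors V in one pass,
# instead of looping over one Hessian-vector product per column.
function batch_hessprod!(HV::Matrix, model::QuadModel, x::Vector, V::Matrix)
    mul!(HV, model.Q, V)   # a single BLAS-3 call covers the whole batch
    return HV
end

Q = [4.0 1.0; 1.0 3.0]
model = QuadModel(Q)
x = zeros(2)
V = Matrix{Float64}(I, 2, 2)      # seeding with the identity recovers the full Hessian
HV = similar(V)
batch_hessprod!(HV, model, x, V)  # HV == Q
```

Seeding with the identity, as above, reconstructs the full Hessian; in practice a smaller batch of seeds (e.g. obtained from graph coloring) is enough when the Hessian is sparse.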
@@ -19,7 +20,7 @@
 SparseDiffTools = "47a9eef4-7e08-11e9-0b38-333d64bd3804"
 TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"

 [compat]
-CUDA = "~2.3, 3.0"
+CUDA = "~2.3, ~3.2"
Sorry, this creates a conflict with my push to develop, where I forced 2.6 instead of 2.3. 2.3 has issues on Summit.
This is resolved
Closing in favor of #185
This PR allows any optimization routine to use the batch Hessian.
Further changes:
* src/Evaluators/reduced_evaluator.jl: add a BridgeDeviceEvaluator to wrap an Evaluator running on the GPU and move the results back to the CPU
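The bridging idea can be sketched as a thin wrapper: callbacks accept and return host (CPU) arrays, while the inner evaluator works on device buffers. This is a hypothetical sketch, not the actual implementation in src/Evaluators/; for simplicity the "device" buffers here are plain CPU arrays, whereas in the real use case they would be CuArrays and the inner solve would run on the GPU.

```julia
# Wrapper holding an inner evaluator plus preallocated device-side buffers.
# Field and function names are illustrative only.
struct BridgeDeviceEvaluator{E, VT}
    inner::E
    x_device::VT   # device buffer for the input
    g_device::VT   # device buffer for the gradient
end

# Toy inner evaluator standing in for the GPU-resident one:
# gradient of f(x) = ||x||^2 is 2x.
struct ToyEvaluator end
grad!(::ToyEvaluator, g, x) = (g .= 2 .* x; g)

function grad!(b::BridgeDeviceEvaluator, g::Vector, x::Vector)
    copyto!(b.x_device, x)                   # host -> device transfer
    grad!(b.inner, b.g_device, b.x_device)   # evaluate on the device
    copyto!(g, b.g_device)                   # device -> host transfer
    return g
end

bridge = BridgeDeviceEvaluator(ToyEvaluator(), zeros(2), zeros(2))
g = zeros(2)
grad!(bridge, g, [1.0, 3.0])   # g == [2.0, 6.0]
```

Preallocating the device buffers in the wrapper keeps the per-call cost down to two memory transfers, so a CPU-side solver can drive a GPU-resident evaluator without any API change.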