Add reverse-forward hprod #171
Conversation
Codecov Report
Patch coverage:

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #171      +/-   ##
==========================================
+ Coverage   95.32%   95.47%   +0.14%
==========================================
  Files          14       14
  Lines        1647     1744      +97
==========================================
+ Hits         1570     1665      +95
- Misses         77       79      +2

☔ View full report in Codecov by Sentry.
Connected to #165. When it works, the results are promising: https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl/blob/benchmark/benchmark/2023-07-07_adnlpmodels_benchmark_hprod_optimized_nscal_1000_mono.png

For instance, the following fails:

This can straightforwardly be adapted to compute the sparse objective Hessian.
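For context, a hedged sketch of the nested-AD idea behind an hprod: the Hessian-vector product ∇²f(x)·v is the directional derivative of the gradient along v, so one differentiates a gradient closure in forward mode. The sketch below uses ForwardDiff at both levels for simplicity (the PR itself combines reverse and forward mode); `hprod_sketch` and the test problem are illustrative, not the ADNLPModels internals.

```julia
# Minimal nested-AD Hessian-vector product: Hv = d/dt ∇f(x + t v) at t = 0.
# ForwardDiff is used at both levels here for simplicity; the PR's
# implementation mixes reverse and forward mode, but the nesting idea is
# the same. All names are illustrative.
using ForwardDiff

hprod_sketch(f, x, v) =
    ForwardDiff.derivative(t -> ForwardDiff.gradient(f, x .+ t .* v), 0.0)

# Extended Rosenbrock objective as a test problem.
f(x) = sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x)-1)
x = ones(5)
v = randn(5)
Hv = hprod_sketch(f, x, v)  # ≈ ∇²f(x) * v, without ever forming the Hessian
```

Computed this way, the product costs only a small constant multiple of one gradient evaluation, which is what makes it attractive inside Krylov-based solvers.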
Force-pushed from cce7f22 to 2afe018
@jbcaillau @ocots @amontoison @BaptisteCbl I added the hprod for the Lagrangian Hessian and optimized the sparse Hessian.
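For reference, a small usage sketch of these features through the standard NLPModels API (`hprod`, `hess`); the toy model below is illustrative, not taken from the PR:

```julia
# Usage sketch, assuming the standard NLPModels API; the model is a toy.
using ADNLPModels, NLPModels

f(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2
c(x) = [x[1]^2 + x[2]^2]                 # a single equality constraint
nlp = ADNLPModel(f, [-1.2; 1.0], c, [1.0], [1.0])

x = nlp.meta.x0
y = ones(nlp.meta.ncon)                  # Lagrange multipliers
v = ones(nlp.meta.nvar)

Hv  = hprod(nlp, x, y, v)  # Lagrangian Hessian-vector product ∇²L(x, y) * v
Hv0 = hprod(nlp, x, v)     # objective-only variant
H   = hess(nlp, x, y)      # sparse Lagrangian Hessian
```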
Fantastic Tangi! 🎉
Test with this file: https://github.com/ocots/JumpVSADNLP/blob/master/bench.jl

Hessian of Lagrangian:
ADNLP: 1.191 ms (157 allocations: 141.55 KiB)
JuMP: 60.402 μs (137 allocations: 24.08 KiB)

Jacobian of constraints:
ADNLP: 10.509 μs (22 allocations: 20.70 KiB)
JuMP: 13.329 μs (15 allocations: 19.80 KiB)

Seems to rock!
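For anyone wanting to reproduce this kind of measurement, a sketch with BenchmarkTools; the linked bench.jl is the authoritative script, and the tiny model below is only illustrative:

```julia
# Benchmark sketch; bench.jl in the linked repository is the real script.
using ADNLPModels, NLPModels, BenchmarkTools

n = 100
f(x) = sum((x .- 1) .^ 2)
c(x) = [sum(x) - 1]
nlp = ADNLPModel(f, zeros(n), c, [0.0], [0.0])

x = nlp.meta.x0
y = ones(nlp.meta.ncon)

@btime hess($nlp, $x, $y)  # Hessian of the Lagrangian
@btime jac($nlp, $x)       # Jacobian of the constraints
```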
Great job! New release please 🙏🙏🙏 (JuliaCon next Tuesday)
Let me add some documentation on the new features and improvements, and then I'll make the new release (likely tomorrow).
Many thanks @tmigot for the timely update. Doing some more tests with @PierreMartinon (Goddard test case).
Amazing results @tmigot! Congratulations!
@tmigot @amontoison check this
Thanks @jbcaillau! Our computation of the sparse Hessian is 20x to 30x slower than JuMP's.
@amontoison Thanks for the feedback. Probably some profiling to do on our side, as our objective and constraint evaluations might be far from efficiently evaluable compared to the pure Julia / JuMP code written for the comparison.

PS: greetings from JuliaCon 2023 with @ocots @gergaud ... and @tmigot!
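As a starting point for that profiling, a sketch with Julia's built-in Profile stdlib; `nlp`, `x`, and `y` are assumed to be set up as in the benchmark sketch above:

```julia
# Profiling sketch using the Profile stdlib; `nlp`, `x`, `y` as above.
using Profile

hess(nlp, x, y)            # warm-up call so compilation is not profiled
Profile.clear()
@profile for _ in 1:1_000
    hess(nlp, x, y)
end
Profile.print(format = :flat, sortedby = :count)  # flat listing by sample count
```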
Only for the objective function hprod.