Use type-stable nlpmodels #50
I am curious how much this impacts the results on NLPModels. It can live in the variants for testing, but we should defer to the developers of NLPModels on whether they want this to be the default implementation or not.
On the 118 case, it went from

```
Dict{String, Any} with 10 entries:
  "cost"        => 97213.6
  "variables"   => 1088
  "constraints" => 1539
  "case"        => "data/pglib_opf_case118_ieee.m"
  "time_total"  => 3.76066
  "time_build"  => 1.11021
  "solution"    => Dict("p_102_66_65"=>-1.66211, "p_132_84_85"=>-0…
  "time_solve"  => 2.60632
  "time_data"   => 0.0441308
  "feasible"    => true
```

to

```
Dict{String, Any} with 10 entries:
  "cost"        => 97213.6
  "variables"   => 1088
  "constraints" => 1539
  "case"        => "data/pglib_opf_case118_ieee.m"
  "time_total"  => 2.30815
  "time_build"  => 1.22802
  "solution"    => Dict("p_102_66_65"=>-1.66211, "p_132_84_85"=>-0…
  "time_solve"  => 1.03217
  "time_data"   => 0.0479541
  "feasible"    => true
```

so the solve is much faster.
The 793 case went from
to
So again, the solve time is about 2x faster, but there is still only a small reduction in total runtime, from 46 seconds to 39 seconds.
@tmigot @amontoison thoughts?
See #51
Hi @ccoffrin @odow! Thank you for maintaining this, and for the feedback. In general, type-stable functions behave better in Julia, so they help the auto-diff. Overall, I am fine with both solutions; it depends more on the benchmark philosophy.
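As an aside, the type-stability property being discussed here is easy to check in isolation with `Test.@inferred`, which throws if the compiler cannot infer a concrete return type (a minimal sketch, unrelated to the actual model code):

```julia
using Test

unstable(x) = x > 0 ? 1 : 1.0     # may return Int or Float64 → not inferrable
stable(x)   = x > 0 ? 1.0 : -1.0  # always returns Float64 → inferrable

@inferred stable(2.0)              # passes silently
# @inferred unstable(2.0)          # would throw: inferred Union{Float64, Int64}
```

Running a model's objective and constraint functions through a check like this is a cheap way to confirm a variant really is type-stable before benchmarking it.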
@tmigot thank you for following up! I have completed a detailed study of the two models for your consideration. The key decisions I would like you to weigh in on are:
While you consider these choices, let me note that my original conception of "rosetta-opf" was as an educational tool for transfer learning in the Julia optimization ecosystem (much as Rosetta Code is for programming languages). It should help folks understand how to model across the different frameworks. Over time it has become well known as a performance benchmark; however, I still maintain that performance is not the primary objective of the project. For example, the Symbolic AD variant of the JuMP model is more performant on AC-OPF than the default one, yet I have not made it the default implementation here because it is not the best default AD choice for NLP modeling in JuMP.

With that said, here are the two versions of NLPModels we now have:
- NLPModels: this is the current implementation

Based on the table below, the NLPModels-CS version is consistently faster (around 10%-25% in the large-size limit).
Thanks again @ccoffrin for the amount of work this represents. This is very interesting. Do you have the logs of what failed in

Back to your primary question:
The reason is that this benchmark uses Ipopt, so essentially it requires three derivatives from the model:
- the gradient of the objective,
- the Jacobian of the constraints, and
- the Hessian of the Lagrangian.
We are not (at the moment) investigating more ways to compute the sparse Jacobian/Hessian with automatic differentiation (even though one issue, JuliaSmoothOptimizers/ADNLPModels.jl#204, is directly linked to this topic), but are more interested in ways to compute matrix-vector products directly with AD, i.e. without having to compute/evaluate the whole matrix.
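A matrix-free Hessian-vector product of the kind referred to here can be sketched with ForwardDiff alone, by differentiating the gradient along the direction `v` so the full Hessian is never materialized (an illustrative sketch under that assumption, not the ADNLPModels implementation):

```julia
using ForwardDiff

f(x) = x[1]^2 + 2x[2]^2

# Directional derivative of the gradient: Hv = d/dt ∇f(x + t*v) at t = 0,
# computed without ever forming the Hessian matrix itself.
hvp(f, x, v) = ForwardDiff.derivative(t -> ForwardDiff.gradient(f, x .+ t .* v), 0.0)

x = [0.5, 0.5]
v = [1.0, 0.0]
hvp(f, x, v)   # ≈ [2.0, 0.0], i.e. ∇²f(x) * v, since ∇²f = diag(2, 4)
```

For large sparse problems like AC-OPF, products like this are what matrix-free solvers consume directly, which is why they are attractive compared to assembling the whole sparse Hessian.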
Copied from the Optimization example. I'll make a PR once CI is merged.