To show the validity of my experiments, I want to use PowerModels.jl to provide reference solutions.
For my current work, I am specifically interested in the SOC approximation, corresponding to SOCWRPowerModel.
However, I had little success consistently reproducing its results.
Thus, I went on to try to reproduce SOCWRPowerModel as exactly as possible.
Here is the current model:
I used the notation in Jabr 2006, so wr = Rij, wi = Iij, w = ui.
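For clarity, these are the standard lifted variables from Jabr's formulation (written out here for reference; this is the textbook form, not copied from my model):

```latex
w_i = v_i^2, \qquad
w^R_{ij} = v_i v_j \cos(\theta_i - \theta_j), \qquad
w^I_{ij} = v_i v_j \sin(\theta_i - \theta_j)
```

The nonconvex consistency condition \((w^R_{ij})^2 + (w^I_{ij})^2 = w_i w_j\) is then relaxed to the second-order cone constraint

```latex
(w^R_{ij})^2 + (w^I_{ij})^2 \le w_i \, w_j
```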
For "pglib_opf_case3_lmbd.m" I included a comparison of the printed JuMP models of PowerModels.jl (left) and the model below (right).
In the model below pt[e] corresponds to p(e, fbus[e], tbus[e]), while pf[e] corresponds to p(e, tbus[e], fbus[e]).
While PowerModels.jl produces the following accurate result:
Dict{String, Any} with 3 entries:
"1" => Dict{String, Any}("qg"=>0.224946, "pg"=>0.950497)
"2" => Dict{String, Any}("qg"=>0.18509, "pg"=>2.24491)
"3" => Dict{String, Any}("qg"=>0.096942, "pg"=>0.0)
The model below produces this result:
Dict{String, Dict{String, Float64}} with 3 entries:
"1" => Dict("qg"=>-0.997569, "pg"=>0.756516)
"2" => Dict("qg"=>-0.514129, "pg"=>2.24656)
"3" => Dict("qg"=>-0.483644, "pg"=>1.0e-8)
Most recently, I
- added the start values,
- added additional constraints to tighten the problem (lifted nonlinear cuts), and
- lowered the solver tolerance,

all with no success.
Are there any glaring issues that I am just missing, or anything else PowerModels does under the hood?
If you have any suggestions, I would appreciate it a lot.
Hi @antonhinneck! Thanks for this post. I unfortunately do not have time to work on understanding the differences between your model and the one implemented in PowerModels. However, if you post this question to Julia's Discourse forum, others may be able to help you. See https://discourse.julialang.org/c/domain/opt/13
Methodologically, the way I would approach debugging this is to check the two generated JuMP models side-by-side (as you have done in this post), match up the constraints one by one, and find out where the differences are. If at some point it is not clear how PowerModels arrives at some constraint or parameter, I can point you to the source where it comes from.
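A minimal sketch of that workflow, assuming a recent PowerModels/JuMP/Ipopt (exact function names may differ between PowerModels versions):

```julia
using PowerModels, JuMP, Ipopt

# Parse the case file and build the SOC-relaxed OPF model without solving it,
# so that the generated JuMP model can be printed and compared line by line.
data = PowerModels.parse_file("pglib_opf_case3_lmbd.m")
pm = instantiate_model(data, SOCWRPowerModel, PowerModels.build_opf)

# Print all variables, constraints, and the objective of the generated model.
print(pm.model)

# Solve and inspect the reference solution for comparison.
result = optimize_model!(pm, optimizer = Ipopt.Optimizer)
println(result["solution"]["gen"])
```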
Thanks @ccoffrin.
The pointer to the concise SOC formulation was already a big help.
I will direct any further questions to the Julia Discourse forum.
Best,
Anton