WIP: SDDiP #237
Conversation
@variable(sp, x, SDDP.SDDiP.BinaryState(), initial_value = 1.0)
@variable(sp, 0 <= x <= 10, SDDP.SDDiP.ContinuousState(), initial_value = 1.0)
@variable(sp, 0 <= x <= 10, SDDP.SDDiP.IntegerState(), initial_value = 1.0)
See previous comment. Make the new JuMP variable types and sets. Bonus points for including a way to check for them.
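For illustration, a minimal sketch of how a tag like BinaryState could plug into JuMP's variable-extension API via build_variable and add_variable. The type names, the initial_value handling, and the comment about recording state are assumptions for this sketch, not the PR's actual implementation:

using JuMP

# Hypothetical tag type for the @variable syntax above.
struct BinaryState end

# Carrier produced by @variable and consumed by add_variable.
struct BinaryStateVariable <: JuMP.AbstractVariable
    info::JuMP.VariableInfo
    initial_value::Float64
end

# Called for @variable(sp, x, BinaryState(), initial_value = 1.0);
# unrecognized keywords such as initial_value are forwarded here.
function JuMP.build_variable(
    _error::Function,
    info::JuMP.VariableInfo,
    ::BinaryState;
    initial_value::Real = _error("Missing keyword `initial_value`."),
)
    return BinaryStateVariable(info, Float64(initial_value))
end

function JuMP.add_variable(model::JuMP.Model, v::BinaryStateVariable, name::String)
    x = JuMP.add_variable(model, JuMP.ScalarVariable(v.info), name)
    JuMP.set_binary(x)  # a binary state is restricted to {0, 1}
    # A real implementation would also record v.initial_value against
    # the node's state at this point.
    return x
end

This also gives a natural place to check for the new types: isa(v, BinaryStateVariable) at build time, or JuMP.is_binary(x) on the resulting variable.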
👍
Why is it needed at the moment?
More the merrier.
It's not, I pushed it without realizing.
So, regarding Line 167 in 67378f2, which relies on SDDP knowing about SDDiP: I couldn't think of how to get this working without changing code in algorithm.jl, or else parameterizing Node. Would you parameterize Node by the type of states it has (SDDP vs SDDiP)?
There is also a minor gotcha here (Line 868 in 67378f2), but basically we just need to tell a Node whether it will be involved in SDDiP.
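For concreteness, a sketch of the two options being weighed, with illustrative type names that do not appear in SDDP.jl; Node's real fields are elided:

using JuMP

abstract type AbstractStateHandling end
struct SDDPStates <: AbstractStateHandling end   # continuous states, LP duals
struct SDDiPStates <: AbstractStateHandling end  # binarized states, Lagrangian duals

# Option 1: parameterize Node, so algorithm.jl can dispatch on S at the
# few points where the backward pass differs.
struct Node{S<:AbstractStateHandling}
    subproblem::JuMP.Model
    # ... remaining fields elided ...
end

# Option 2: keep Node concrete and store a handler (or boolean) field,
# branching on it only where the dual computation diverges.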
It doesn't have to be a submodule. You could also go ... I don't really want to parameterize Node.
After a think, I don't think there is a reason to introduce a submodule.
Is it possible to add the keyword to train? Then the default remains as-is, and we can solve the LP relaxation. You could even pass in a type:
abstract type AbstractSMIPSolver end
struct ContinuousRelaxation <: AbstractSMIPSolver end
struct SDDiP <: AbstractSMIPSolver end
struct MyCoolNewSolver <: AbstractSMIPSolver end
SDDP.train(model, integrality_method = SDDP.SDDiP())
(Edit: although, then we get in trouble if you try to solve multiple times. Maybe an argument to ...)
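One way such a keyword could then be consumed is by dispatching on the method object at the point where the backward pass computes duals; solve_subproblem_duals is a hypothetical name used only to illustrate the dispatch:

# Dispatch replaces an if/else on a flag like ext[:issddip].
function solve_subproblem_duals(node, ::ContinuousRelaxation)
    # Relax integrality, solve the LP, and query the usual duals.
end

function solve_subproblem_duals(node, ::SDDiP)
    # Solve the Lagrangian dual of the binarized subproblem instead.
end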
Oops, I pushed before reading the comment. Edit: I'll think about the keyword to train.
What exactly is the problem you see if we try to solve multiple times, if the keyword is added to train?
Basically I would worry that someone would go:
SDDP.train(model, integrality_method = SDDP.SDDiP(), iteration_limit = 10)
SDDP.train(model, integrality_method = SDDP.ContinuousRelaxation(), iteration_limit = 10)
Would anything problematic happen the second time?
Yes, this is actually worse than what I was thinking, because the first train would add a bunch of new state variables that we don't want for the second train. An argument to ...
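A sketch of the kind of guard that would catch this, assuming the model exposes an ext dictionary; this is an illustration, not what the PR implements:

function train(model; integrality_method = ContinuousRelaxation(), kwargs...)
    previous = get(model.ext, :integrality_method, nothing)
    if previous !== nothing && typeof(previous) != typeof(integrality_method)
        error(
            "Cannot re-train with a different integrality_method: the " *
            "first train already rewrote the model's state variables.",
        )
    end
    model.ext[:integrality_method] = integrality_method
    # ... training loop ...
end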
Looking good @lkapelevich. Modulo the test failures. But I guess you're on those.
Codecov Report
@@ Coverage Diff @@
## master #237 +/- ##
==========================================
+ Coverage 93.82% 94.07% +0.25%
==========================================
Files 14 17 +3
Lines 1231 1351 +120
==========================================
+ Hits 1155 1271 +116
- Misses 76 80 +4
Continue to review full report at Codecov.
Looks like you forgot to commit ...
Great! The doc-test failures aren't your fault. The other PRs are also failing; I'm trying to fix it.
If you rebase onto the latest master, the documentation should be fixed.
@odow anything else?
Great @lkapelevich!
The only other thing is some docs. If you have time, great. If you don't, I'll merge this as-is and add a TODO for some docs.
I don't want to spend time writing docs right now, but I'm happy to answer questions for the docs.
Remaining TODOs:
- Replace ext[:issddip] = true and find a nice way to distinguish, for a node/subproblem, that we want Lagrangian duals (half done)
- Better initial values; get rid of extra vectors being created
- Raise an issue: state variables are assumed to be nonnegative here (just mentioned in docstrings)
- Closes SDDiP #195 and Complete rewrite lkapelevich/SDDiP.jl#21