Markov State behavior #120

Closed
rodripor opened this issue Apr 17, 2018 · 8 comments · Fixed by #152

rodripor commented Apr 17, 2018

Hello,

When running the SDDP algorithm with a Markov state model, the first-stage Markov state is 1 in every forward pass. Is there a way to modify this behavior through a parameter?

Once the model is solved, I can plot the value function for each Markov state, but I can't explicitly get the set of cuts associated with that state in that stage. Is there a way to obtain those cuts from the framework? I didn't find a structure where these cuts are stored.

Thanks for your help,
Rodrigo

odow commented Apr 17, 2018

Is there a way to modify this behavior through a parameter?

The markov_transition argument to SDDPModel takes a Vector{Array{Float64, 2}} with one element for each stage. Each element is an MxN matrix, and we read the entry markov_transition[t][i,j] as the probability of transitioning from Markov state i in stage t-1 to Markov state j in stage t. The first stage is special: it assumes there is a "zeroth" stage (which I will call the root node) with one Markov state.

If you want two Markov states in the first stage, then the first element is a 1x2 array.

For example, here is the transition matrix for a problem with two stages, and two Markov states within each stage.

transition = Array{Float64, 2}[
    [ 0.5 0.5 ],             # stage 1: root node → two Markov states
    [ 0.6 0.4 ; 0.4 0.6 ]    # stage 2: transitions between the two states
]
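As a quick sanity check (my own helper, not an SDDP.jl function), every row of every element should sum to one, since each row is a conditional probability distribution:

# Hypothetical helper, not part of SDDP.jl: verify that every row of every
# transition matrix sums to 1 (each row is a conditional distribution).
function check_transition(transition::Vector{Array{Float64, 2}})
    for (t, P) in enumerate(transition)
        for i in 1:size(P, 1)
            @assert isapprox(sum(P[i, :]), 1.0; atol = 1e-8) "stage $t, row $i"
        end
    end
end

check_transition(transition)  # passes for the example above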

Is there a way to obtain those cuts from the framework?

You have two options. The first programmatically returns a list of SDDP.Cut objects:

STAGE, MARKOV = 1, 2                       # query stage 1, Markov state 2
sp = SDDP.getsubproblem(m, STAGE, MARKOV)  # the subproblem for that node
oracle = SDDP.cutoracle(sp)                # the oracle that stores its cuts
cuts = SDDP.validcuts(oracle)              # Vector of SDDP.Cut objects

The second is to use the cut_output_file argument to solve.

solve(m, cut_output_file = "mycuts.csv")

This will produce a CSV file containing a list of all the cuts. The columns are
STAGE, MARKOV STATE, intercept, state 1 coefficient, state 2 coefficient ...
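
For what it's worth, here is a minimal sketch of reading that file back in, assuming the column layout above (nothing in this snippet is an SDDP.jl API):

# Sketch: load the cut file written by solve(m, cut_output_file = "mycuts.csv").
# Assumes the layout above: stage, Markov state, intercept, then coefficients.
using DelimitedFiles  # readdlm; on Julia 0.6 this function lives in Base

raw = readdlm("mycuts.csv", ',')
for row in 1:size(raw, 1)
    stage  = Int(raw[row, 1])
    markov = Int(raw[row, 2])
    if stage == 1 && markov == 2          # e.g. keep cuts for one node only
        intercept    = raw[row, 3]
        coefficients = raw[row, 4:end]
        # a cut has the form: theta >= intercept + dot(coefficients, x)
    end
end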

odow commented Apr 17, 2018

Also, if you have some examples with multiple Markov states in the first stage, it would be great to include them in the library. I'm always looking for new models.

rodripor commented Apr 17, 2018

Also, if you have some examples with multiple Markov states in the first stage, it would be great to include them in the library. I'm always looking for new models.

I am trying to build a model with several Markov states in the first stage, and I have taken the transition matrix into account. All matrices in the model are 5x5, so there are 5 Markov states in the first stage. The problem is that if I run N forward passes of the SDDP algorithm, all N scenarios start with Markov state 1 in the initial stage, but I want the initial (first-stage) Markov state to be drawn uniformly from the 5 possible states.

When I get the model working, I'll gladly include it in the library.

Thanks!

odow commented Apr 18, 2018

You need something like this:

transition = Array{Float64, 2}[]
push!(transition, [0.2 0.2 0.2 0.2 0.2])  # 1x5 row: uniform initial distribution
for t in 2:T
    push!(transition, [ ... the 5x5 matrix ... ])
end
m = SDDPModel(
    markov_transition = transition
) do sp, t, i

end
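
To make that concrete, here is a runnable version with a made-up uniform 5x5 matrix standing in for the real one (T and P are illustrative only):

# T and P are made up for illustration; substitute your own stage count
# and estimated 5x5 transition matrix.
T = 12
P = fill(0.2, 5, 5)                 # placeholder 5x5 matrix; rows sum to 1

transition = Array{Float64, 2}[]
push!(transition, fill(0.2, 1, 5))  # 1x5 first stage: uniform over 5 states
for t in 2:T
    push!(transition, P)
end
@assert length(transition) == T     # one element per stage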

I should disable the ability to pass a single matrix, or provide a better constructor for this, e.g.,

m = SDDPModel(
    markov_transition = [ 5x5 matrix ],
    initial_markov_probability = fill(0.2, 5)
) do sp, t, i

end

rodripor commented

transition = Array{Float64, 2}[]
push!(transition, [0.2 0.2 0.2 0.2 0.2])
for t in 2:T
    push!(transition, [ ... the 5x5 matrix ... ])
end
m = SDDPModel(
    markov_transition = transition
) do sp, t, i

end

Ah! Now I understand that zeroth stage; I was confused about that. Thanks! When I have some examples I'll include them.

odow commented Jun 9, 2018

Tutorial Four: Markovian policy graphs addresses the Markov chain aspect of this question.
Tutorial Six: cut selection addresses the issue of querying the cuts.

Please re-open this issue if anything is unclear!

odow closed this as completed Jun 9, 2018
rodripor commented

Hello, I'm re-opening this issue to report a bug in the bound column of the console output.

You need something like this:
transition = Array{Float64, 2}[]
push!(transition, [0.2 0.2 0.2 0.2 0.2])
for t in 2:T
    push!(transition, [ ... the 5x5 matrix ... ])
end
m = SDDPModel(
    markov_transition = transition
) do sp, t, i

When I use the initial distribution vector (in this case, uniform over five states), the bound column of the console output reports an incorrect bound.
I think the bound is miscalculated because it is not weighted according to the Markov state probabilities. In particular, the value in the simulation result column is much smaller than the bound; coincidentally, in this case it is approximately five times smaller (five Markov states). When I call the getbound function I get an error. I tried to correct the bug but couldn't; I think the problem is in the modifyvalue function in the valuefunctions.jl module. I got confused when modifyprobability is called to handle risk measures, but I think the bug is inside that function. I'm sorry I couldn't fix it.

If you want to reproduce the bug, you can change the Markov transition matrix definition in the hydro valley example by changing the initial vector. I used this matrix to capture the bug:

# Transition matrix
if hasmarkovprice
    transition = Array{Float64, 2}[
        [ 0.2 0.2 0.6 ],    # 1x3 initial distribution over three Markov states
        [ 0.6 0.4 0.0; 0.3 0.7 0.0; 0.3 0.7 0.0],
        [ 0.6 0.4 0.0; 0.3 0.7 0.0; 0.3 0.7 0.0]
    ]
else
    transition = [ones(Float64, (1,1)) for t in 1:3]
end

Thanks.

odow reopened this Jul 17, 2018
odow added the bug label and removed the documentation label Jul 17, 2018
odow commented Jul 17, 2018

From a quick look, the bug is likely in this line:

return dot(m.storage.objective, m.storage.probability)

It probably needs to be something more sophisticated than a dot.
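
To illustrate the suspected failure mode with toy numbers (a sketch, not SDDP.jl internals): if the first-stage objectives are combined without weighting by the initial Markov distribution, the bound overshoots by roughly the number of first-stage states, which matches the ~5x discrepancy reported above.

# Toy numbers, not SDDP.jl internals: five first-stage Markov states,
# each entered with probability 0.2.
objectives = [10.0, 11.0, 9.0, 10.5, 9.5]  # hypothetical per-state objectives
initial_p  = fill(0.2, 5)                  # initial Markov distribution

unweighted = sum(objectives)               # 50.0 -- roughly five times too big
weighted   = sum(objectives .* initial_p)  # 10.0 -- probability-weighted bound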

This wasn't caught because none of the existing tests has a problem with multiple Markov states in the first stage.
