In an example of your work "Stochastic dual dynamic programming with stagewise dependent objective uncertainty", you use value_function = priceprocess(USE_AR1) as a parameter of SDDPModel. I want to know if we can modify the priceprocess function to allow noises drawn from different discrete distributions. For example, think of a tree with different branches at stage 2 than at stage 3.
Here is an example
const DISTRIBUTIONS = [
    [1, 2, 3],  # distributions in Markov state 1
    [3, 4, 5]   # distributions in Markov state 2
]

function buildvaluefunction(stage::Int, markov::Int)
    return DynamicPriceInterpolation(
        # ... other arguments omitted ...
        dynamics = (price, noise) -> price + noise,
        noises = DISTRIBUTIONS[markov]
    )
end

m = SDDPModel(
    stages = 3,
    # The transition matrix
    #   x - x
    #  / \ /
    # x   X
    #  \ / \
    #   x - x
    markov_transition = [
        [1.0]',
        [0.5 0.5],
        [0.5 0.5; 0.5 0.5]
    ],
    # Pass a function that takes (stage, Markov state) as arguments
    # and returns a value function.
    value_function = buildvaluefunction
) do sp, t, i
    # You can also use the stage t and Markov state i
    # inside the subproblem, e.g.
    @state(sp, x >= 2 * t + i, x0 == 0)
    @stageobjective(sp, p -> (t + i) * p * x)
end
Dear Oscar,
I'm trying to apply the previous code, but I can't make it work. buildvaluefunction is defined with arguments, but no arguments are passed when it is used in SDDPModel. I've tried making some changes based on this example, with no success. Any help is appreciated.
Nice package by the way.
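For what it's worth, the likely confusion is that value_function expects the function object itself, not the result of calling it: the SDDPModel constructor invokes that function internally, once per (stage, Markov state) pair. Here is a minimal sketch of that callback pattern in plain Julia, where fake_model is a hypothetical stand-in for SDDPModel (it is not part of the package, just an illustration of why buildvaluefunction is passed without arguments):

```julia
# Hypothetical stand-in for SDDPModel: the constructor, not the user,
# supplies the (stage, markov) arguments to the builder function.
function fake_model(; stages, markov_states, value_function)
    return [value_function(t, i) for t in 1:stages, i in 1:markov_states]
end

const DISTRIBUTIONS = [
    [1, 2, 3],  # noises in Markov state 1
    [3, 4, 5]   # noises in Markov state 2
]

# Builder: in SDDP.jl this would return a DynamicPriceInterpolation;
# here it just returns the per-state noise support.
buildvaluefunction(stage::Int, markov::Int) = DISTRIBUTIONS[markov]

# Note: we pass buildvaluefunction itself, not buildvaluefunction(t, i).
vfs = fake_model(stages = 3, markov_states = 2,
                 value_function = buildvaluefunction)
```

After this runs, vfs is a 3x2 matrix whose (t, i) entry is whatever the builder returned for stage t and Markov state i, which is the same mechanism SDDPModel uses when you hand it a function for value_function.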