Framing and periodicity #798
Yes. Attributes named in poststate_vars are the ones that are initialized
at birth, and thus also when initializeSim() is called. They are the
variables that describe the agents' state at the moment that time t becomes
time t+1: they're the end of time t and the beginning of time t+1, and thus
the variables that are mandated to exist for a new agent who comes into
existence at some time.
The permanent income level `pLvlNow` is a state variable in the sense that
its value in time t is a function of the post-state variable from t-1 and
the shock variables from time t. For an IndShockConsumerType, `pLvlNow` is
only trivially used to make a consumption decision, as the problem is
normalized by permanent income. For a GenIncProcessConsumerType, `pLvlNow`
is a direct argument into the consumption function in a non-trivial way.
But it's *also* a post-state variable in the sense that it exists at the
moment of intertemporal transition. Non-trivial post-state variables are
determined as a function of period t state variables and period t control
variables, e.g. end-of-period assets are market resources less
consumption. But permanent income as a post-state variable is a trivial
(identity) function of its value as a state variable: permanent income at t
equals permanent income at t.
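The distinction can be made concrete with a toy sketch (hypothetical function names and shock timing, not HARK's actual API): permanent income has a genuine transition as a state variable, but its role as a post-state is just the identity.

```python
# Illustrative only -- hypothetical names, not HARK's actual API.

def get_states(p_prev, a_prev, perm_shk, tran_shk, R=1.03):
    """Period-t states from the t-1 post-states and the period-t shocks."""
    p_now = p_prev * perm_shk        # permanent income: a genuine transition
    b_now = a_prev * R / perm_shk    # normalized bank balances
    m_now = b_now + tran_shk         # normalized market resources
    return p_now, m_now

def get_post_states(p_now, m_now, c_now):
    """End-of-period (post-state) variables."""
    a_now = m_now - c_now    # non-trivial: a function of a state and a control
    p_post = p_now           # trivial: permanent income at t equals itself
    return a_now, p_post
```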
On Thu, Aug 13, 2020 at 5:36 PM Sebastian Benthall wrote:
The PerfForesightConsumerType has two poststate_vars: aNrmNow and pLvlNow
https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1450
However pLvlNow is set in getStates():
https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1665
...and not in getPostStates(). Rather, aLvlNow is updated as a poststate
in the code:
https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1706
Correcting the poststate_var field to list aLvlNow instead of pLvlNow
raises an AttributeError.
I guess this is because the class depends on the automatic initialization of the poststates to create the attribute.
|
Hmmm. For context, I bumped into this while working on #761. I'm trying to functionalize out some of the behavior, and that requires consistency in how the functions used in simulations and the variable designations are used.

If I understand correctly, you are saying:

- pLvlNow is a state variable because it is a function of past state and stochastic shocks and is an input to the determination of a period's control values.
- It is also an intertemporal transition, so in that respect it is a 'poststate'.

I find this confusing. If the variable is being used intertemporally, wouldn't its value in the previous period be pLvlPrev, or $p_{t-1}$, and the new value be pLvlNow, or $p_t$? It seems like some notational slippage was introduced into the code here. I.e., why not:

- in getStates, assign a value to state variable pLvlNow based on pLvlPrev and the stochastic shocks;
- in getPostStates, assign to poststate variable pLvlPrev the value of pLvlNow.

(I expect that once the state variables are all properly listed out, it should be possible to automatically store the previous value for all of them. But that would be a separate issue.)

Also, what about aLvlNow? Can you explain why that is being set in getPostStates(), but not listed as a poststate_var? |
I think I do that exact thing in some of the models, where there's a line
of code in getStates:
`pLvlPrev = self.pLvlNow`
`pLvlNow = pLvlPrev*self.PermShkNow`
The span of time in the code where what *was* pLvlNow is now pLvlPrev
is extremely short. That can be moved to `getPostStates` if you want. In
some sense it would make things more explicit, *but* it would introduce the
bad feature that if you added `pLvlPrev` to track_vars, hoping it would
track p_{t-1}, it would instead track p_t.
`aLvlNow` isn't in poststate_vars for an IndShockConsumerType because it isn't needed at simulation initialization; note that it *is* in poststate_vars for GenIncProcessConsumerType. It's calculated here merely for user convenience. I guess it's also a slight code speedup for when the simulation is being run for an AggShockConsumerType as part of a Market being solved, and thus might be computed in parallel (with other instances of AggShockConsumerType).
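The tracking pitfall can be made concrete with a toy loop (hypothetical, not HARK's actual simulator): if `pLvlPrev = pLvlNow` ran in getPostStates, then by the time variables are recorded at the end of the period, a tracked `pLvlPrev` would hold p_t, not p_{t-1}.

```python
# Toy loop: moving the pLvlPrev assignment to the end of the period
# (getPostStates) makes the recorded pLvlPrev identical to pLvlNow.
track = {"pLvlPrev": [], "pLvlNow": []}
pLvlNow = 1.0
for perm_shk in [1.1, 0.9, 1.05]:
    pLvlNow = pLvlNow * perm_shk   # getStates: the genuine transition
    pLvlPrev = pLvlNow             # getPostStates: stash for next period
    # track_vars records at the end of the period:
    track["pLvlPrev"].append(pLvlPrev)
    track["pLvlNow"].append(pLvlNow)

# every recorded pLvlPrev equals the same period's pLvlNow, i.e. p_t
assert track["pLvlPrev"] == track["pLvlNow"]
```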
|
Ok. As I said, the main thing that's needed for refactoring is consistency. That's interesting about the tracking issue.
In the short term, I'd like to go ahead with having the previous state stored in |
This is related to the discussion Seb and I had with @albop last week. I believe we agreed to move to a system in which there was a clearly defined point at which each exogenous shock was realized and at which each endogenous decision was made. The transition equations would define the state variables. But we would need to have a name for every state variable that exists at any point -- including some that might not have names in the mathematical formulation of the problem. For example, in the portfolio problem:

| Subperiod | State | Shock | Operation |
|---|---|---|---|
| 0 | b_{t} = a_{t-1} \RPort | \Risky | Transition |
| 1 | p_{t} = p_{t-1} \PermShk_{t} | \PermShk_{t} | Transition |
| 2 | y_{t} = p_{t} \TranShk_{t} | \TranShk_{t} | Transition |
| 3 | m_{t} = b_{t} + y_{t} | | Transition |
| 4 | a_{t} = m_{t} - c_{t} | | c_{t} is chosen |
| 5 | \PortShare | | \PortShare chosen optimally |

There are several virtues of disarticulating things in this way:

1. There are no more "post-state" and "pre-state" variables. There are just the states at each subperiod.
2. The state variables are all defined by the transition equations -- there's no need for any separate articulation of what is a state and what is a control.
3. All the variables have a unique value within the period, so we don't need extra notation to decide whether the value in question is a "pre" or a "post" value.
4. In models where we really think of shocks as being realized simultaneously (like the risky return, the transitory shock, and the permanent shock), the order in which we lay out the draws of those shocks does not matter for the mathematical solution, but might sometimes matter for computational purposes.
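The subperiod scheme Chris describes (ordered transitions within the period, one variable determined per subperiod) can be sketched in a few lines. This is a deliberately simplified hypothetical (shock realizations passed in directly, no mortality), not HARK code:

```python
# Hypothetical sketch: subperiods as an ordered list of (name, function)
# pairs, evaluated in sequence over a growing dict of variables.
def make_period(R_port, perm_shk, tran_shk, c_func):
    return [
        ("b", lambda s: s["a_prev"] * R_port),
        ("p", lambda s: s["p_prev"] * perm_shk),
        ("y", lambda s: s["p"] * tran_shk),
        ("m", lambda s: s["b"] + s["y"]),
        ("c", lambda s: c_func(s["m"])),   # chosen, not a pure transition
        ("a", lambda s: s["m"] - s["c"]),
    ]

def run_period(state, steps):
    for name, f in steps:
        state[name] = f(state)   # each subperiod adds one named variable
    return state

state = run_period(
    {"a_prev": 1.0, "p_prev": 1.0},
    make_period(R_port=1.05, perm_shk=1.0, tran_shk=1.0, c_func=lambda m: 0.5 * m),
)
```

Every variable gets a unique within-period value, so there is no need for separate "pre" and "post" notation.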
|
I support what Chris just wrote, with the extension that I'd like to make
*every single thing* its own subperiod or step or whatever we call it.
Then there are only three basic kinds of operations:
1) Assign: determine a variable as a *function* of other variables.
2) Draw: determine variable(s) by drawing from a distribution.
3) Choose: determine a variable by selecting among alternatives.
In some sense, Choose is a subset of Assign: `cNrm = cFunc(mNrm)`. When
"acted out", assigning consumption based on the consumption function
evaluated at market resources is just another kind of assignment. However,
`cFunc` is not part of the agent's problem, but is instead determined as
part of its solution. All Assign and Draw steps could be executed once the
*problem* has been specified, but any Choose step only has a value when the
model has been "solved" in some sense.
This is also the way to indicate to the future "magic solver" that *this
thing* is to be optimized, and *these* are the variables in the information
set at the time the choice is made.
Note that I specifically wrote Draw as potentially multivariate, whereas
the other ones have one variable on the LHS. Any multivariable
distribution can be written as a sequence of conditional multivariate
distributions, but that can add work for the user if we mandate that.
Somewhat related, a Choose step might have a multivariate LHS purely for
computational speed reasons.
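A minimal sketch of the three step kinds (all names hypothetical, not a settled design). Note that the Choose step can only execute once a solved policy (here `cFunc`) is supplied, which is exactly the property that marks it for the "magic solver":

```python
# Sketch of Assign / Draw / Choose steps. Draw may have a multivariate
# LHS; Assign and Choose here have a single target.
import random

steps = [
    ("Draw",   ("PermShk", "TranShk"),
     lambda s: (random.lognormvariate(0, 0.1), random.lognormvariate(0, 0.1))),
    ("Assign", ("pLvl",),
     lambda s: s["pLvlPrev"] * s["PermShk"]),
    ("Assign", ("mNrm",),
     lambda s: s["aNrmPrev"] * 1.03 / s["PermShk"] + s["TranShk"]),
    ("Choose", ("cNrm",),
     lambda s: s["cFunc"](s["mNrm"])),   # requires a *solved* cFunc
]

def execute(state, steps):
    for kind, targets, f in steps:
        out = f(state)
        if len(targets) == 1:      # scalar result for a single target
            out = (out,)
        state.update(zip(targets, out))
    return state
```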
|
Ok cool. I think this is something we are getting consensus around then. I'm working towards making this happen. I'm trying to start with some more basic refactoring: #760 #761 #660 When everything has been standardized around that, it should be possible to write a "magic simulator" that uses them, along with a decision rule for the control variables. Then "magic solvers" can be written that algorithmically find good decision rules. |
@mnwhite makes a good point about multivariate statistical draws, and that for computational efficiency we might want to have several LHS variables rather than one. Let me expound a point that I forgot to make earlier, then elaborate on it in light of Matt's response. The point was that we should refer to subperiods by the name of the product(s) returned therein, rather than by a numerical order (0, 1, etc). So the earlier figure should have been:

| Subperiod | State | Shock | Operation |
|---|---|---|---|
| b | b_{t} = a_{t-1} \RPort | \Risky | Transition |
| p | p_{t} = p_{t-1} \PermShk_{t} | \PermShk_{t} | Transition |
| y | y_{t} = p_{t} \TranShk_{t} | \TranShk_{t} | Transition |
| m | m_{t} = b_{t} + y_{t} | | Transition |
| a | a_{t} = m_{t} - c_{t} | | c_{t} is chosen |
| portshare | \PortShare | | \PortShare chosen optimally |

If (counterfactually) the most efficient way to solve this problem were a simultaneous method that produced both c and portshare, the last line could be modified as below. And if it were necessary to draw the transitory and permanent shocks jointly (because they are mutually correlated), we could modify the representation as below:

| Subperiod | State | Shock | Operation |
|---|---|---|---|
| b | b_{t} = a_{t-1} \RPort | \Risky | Transition |
| PermShk_TranShk | $\psi,\theta$ | \PermShkTranShkMatrix | Draw |
| p | p_{t} = p_{t-1} \PermShk_{t} | \PermShk_{t} | Transition |
| y | y_{t} = p_{t} \TranShk_{t} | \TranShk_{t} | Transition |
| m | m_{t} = b_{t} + y_{t} | | Transition |
| c_portshare | \PortShare | | c and \PortShare chosen optimally |
| a | a_{t} = m_{t} - c_{t} | | Transition |

Then we would be able to write sentences like "In the period labeled c_portshare, both the level of consumption and the share of the portfolio invested in risky assets are jointly determined." |
I'd prefer more descriptive indexing/naming, but I agree they shouldn't be
absolutely numbered (at least by us; the code will look at their order and
number them).
|
The equations/rules for assignments will specify a partial order, but not necessarily a total order, on the variables. In my view, this means we should not consider them to be numbered. It also means that the exogenous shocks can be both "simultaneous" and differentiated. What's important is that they can be evaluated prior to the state or control variables. |
I might not get to the full refactor of post-post-state glory in the next PR. For the transitional work, @mnwhite could you tell me if aLvlNow is a state or a post-state in AggShockConsumerType? |
I'm getting back to this discussion by pure chance. There is one argument I didn't convey properly.

In my view, the definition of a subperiod does not result from a succession of operations, but rather from a change in the state-space that does not correspond to a change in the information set (the information set being indexed by t). It defines the succession of functional spaces that you go through when doing your time-iteration. When you do $m_{t} = b_{t} + y_{t}$ you move to a new subperiod, not because there is a new period (there isn't) but because the state-space has changed from $b_t$ to $m_t$. It defines how you iterate "backward": i.e., you start with functions of $m_t$ (`c(m_t)`), then functions of $b_t$ (`psi(b_t)`, `p(b_t)`, ...). Conceptually, all these functions could be computed as functions of $m_t$ or $b_t$, but we would lose useful information.

So this is not the same as defining an order in which you compute the transitions (b, then p, then y, then m). It is possible to keep this order to compute transitions from one subperiod to the next. When doing time-iteration, if there is a system of several functions to solve for which share the same state-space, one can also look for the right succession of operations (either provide it or autodetect it).

Now, the idea of implicitly labeling a subperiod (in the sense I introduced) with the name of the last computed variable does make some sense: after you have computed `m`, then `m` is the new state-space where functions are defined. But what if you have more than one state? Which variables do you keep to define the state-space in a given subperiod? I suppose one could name $(m,p)$ the subperiod obtained just after m and p have both been computed. I'm not saying it doesn't work, but it requires a bit more thought compared to the 0,1,2,3... version, which certainly works.
|
In my view, there is not a linear series of subperiods. Rather, over a period at time t, there is a directed acyclic graph of variables. In simulation, a decision rule turns each choice variable into a random variable with conditional dependencies on its inputs. State variables are like this as well. Exogenous shocks are the roots (no parents) of the DAG structure. Simulation proceeds down the DAG. In principle, the evaluation of two nodes at the same "depth" can be parallelized at the computational level; nothing makes it sequential, necessarily. When doing the solution, I expect you can work in reverse in a similar way: start with the "deepest" nodes in the DAG, solve them, then go by backwards induction to the "back" of the graph. So I think I agree with what @albop is saying, in that what matters about a "period" is which variables' states are being sampled/solved. But calling them "subperiods" imposes a linearity on this process needlessly. |
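The DAG view of a period sketched above can be made concrete with the standard library (variable names are hypothetical; `graphlib` requires Python 3.9+). Simulation walks the graph forward from the shock roots; a solver would work backward from the leaves:

```python
# Sketch of the DAG view: variables within period t, each mapped to its
# within-period parents.
from graphlib import TopologicalSorter

dag = {
    "PermShk": set(),              # exogenous shocks are roots
    "TranShk": set(),
    "pLvl": {"PermShk"},
    "mNrm": {"PermShk", "TranShk"},
    "cNrm": {"mNrm"},              # control: decision rule reads mNrm
    "aNrm": {"mNrm", "cNrm"},
}

# any topological order is a valid evaluation order; nodes at equal
# depth (e.g. the two shocks) could be evaluated in parallel
order = list(TopologicalSorter(dag).static_order())
```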
I've renamed this issue to make it clear that it is now about the new proposed architecture. |
Actually I find the term subperiod confusing. I prefer post-state, which is less ambiguous. How do we generalize it? Post-post-state, post-k-state, or just k-state? |
If I understand correctly from @mnwhite, "post state" refers to any variables (state, control, shocks) that need to have values at the start of period t in order to resolve the downstream variables. One major function of the "post state" designation has been to mark variables that need to be initialized with some value when an agent is "born". [This gets into HARK's multiple-agent simulation paradigm, which includes mortality. I believe this paradigm is another extension beyond the MDP formalism.] Because these variables are always evaluated/initialized before period t, I propose that we call them "antecedent variables" instead of "post states"--as they are neither always "post" nor always "states". I think that @albop you may be using "post-state" slightly differently--I mean, with something different in mind--in your most recent comment. Maybe it would be better to decouple the terminology around the model (which contains variables, which may be state, shock, or control variables) and the terminology around the solution/simulation algorithms (which may approach the problem of solving or simulating a model in sequential stages). "Period" I tend to use unambiguously to mean that which is denoted by the discrete time parameter t, which has special meaning within the MDP formalism. |
@sbenthall: I've been trying to formalize a bit the representation I had in mind, in order to clarify the terminology (sorry it's a bit long).

In the above I've been using specific vocabulary which partially conflicts with the terms we've been using so far. In my mind the post-state is essentially what I have described as a notional state. And I gather "sub-period" has been used either as a synonym of a notional date or as a slightly more general notion that I don't really grasp. I feel rather strongly that whichever naming scheme we choose should at least be compatible with the ones in 1., which I think are quite universal. For this reason, I would avoid talking about sub-periods. The spirit of my proposal was that after you define the exogenous process, you have defined the actual time structures (atomic periods), and that suitable notational shortcuts (namely, filtering on exogenous values) were enough to define the "model". In the table above with the list of operations (transition, choice, random), I perceive a desire to unify everything into one description of what happens between t1 and t2, whatever t1 and t2 represent. I'm a bit skeptical about it for now, though I can't say for sure it won't work. It is enough for an RL algorithm, after all. I suspect that operationally you would end up reconstructing a set of structures similar to what I described. In that case, the definition of time is a result of the model (for instance, some transitions do t->t+1, others don't). |
You have introduced both mathematical concepts and notation that I have been unfamiliar with. You bring up "filtration". I'm afraid I'm unfamiliar with this idea. Looking it up, and looking up stochastic processes, I believe I see t as the index into the index set, which are the natural numbers in most cases we consider here. When discussing an MDP, we typically formalize it in terms of a set S of states, A of actions, R of rewards, and P, the transition probabilities of states and actions. The decision rule pi specifies a probability distribution over actions given states. This induces the MDP into a markov process over the state space S. All of this is over the index set of the natural numbers, the index is denoted with t. This is the view from "1,000 feet (or meters) up". As is quite typical with MDPs, the state space S is the Cartesian product of many substates; the action space A is a Cartesian product over many sub-actions. The structural relationships between the substates and subactions are determined by the joint probability distribution P. Note that in this formalism, the entirety of S_{t-1} is the "post state" with respect to state S_t. The terminology gets muddled at this point because, I must presume, Economics training is focused at a smaller resolution than this. Let us, consistently with our earlier discussion, call these 'sub-states' and 'sub-actions' "variables"; we have been calling the latter "control variables". Because the joint probability distribution P involves many independent relations between the variables, it is possible to represent their structure graphically as a Bayesian network. When this is done, it makes it clear which variables are endogenous and which are exogenous: exogenous variables are those with no control variables as ancestors. If you are following me to this point, then I think I can precisely state my disagreement with your formulation above. 
t cannot index a filtration over the structured distribution of the sub-state variables, because a filtration must be totally ordered, whereas the sub-state variables are only guaranteed to be partially ordered. So the premise of your formulation is incorrect. |
Which premise? |
I'm saying time t indexes a filtration on the states S and actions A. One could attempt to impose a filtration on the sub-state/sub-action process, but that would be cumbersome and would violate the use of t in the MDP formalism. To put it another way: it is better to think of t as a full day (September 16th), rather than a particular point in time (September 16, 12:00 AM). Many things can happen during the day (the sub-actions, the sub-state variables). But, at the lower level of resolution, I can take two actions on September 16th with different information available at each time. Your use of "information" here confuses me a bit, to be honest. At the high level, yes, given a decision rule, (S_t, A_t) is determined based on the information from (S_{t-1}, A_{t-1}). But this is not the case at the sub-state level. For example, there is no mathematical reason that an exogenous shock, which would contribute information, needs to be resolved "before" a control variable that does not depend on it. |
So, rather than a period being understood as a span between t and t+1, a period would be, simply, t. Then the question is what to name, and whether and how to order, what is happening within t. |
I really don't see the incompatibility. When you define an MDP, using S, A, R, P, you get a discrete random process. One issue is that the time defined by each markov chain update (each epoch) does not directly match physical time, so one might want to replace it. The other problem is that some of our model reformulations implicitly change the definition of the markov chain. Take the updating equation. Incidentally, I've started to use the term "epoch" here to denote the update of the markov chain, as distinct from the flow of time. It is not 100% clarifying though, as it is not immune to the kind of reformulation I just mentioned. |
Yes, I see there is some misunderstanding about the definition of the information set. In your case, assume the transitions of your markov chains rely on some i.i.d. distribution. I have also referred in the past to the concept of indexing variables by the state of the (exogenous) system, but it is not relevant to the discussion here. |
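For reference, the standard objects in this exchange can be written out explicitly (a sketch in my own notation, not necessarily the one either participant has in mind):

```latex
% Information available at date t: the sigma-algebra generated by the
% shocks realized so far. The increasing sequence is the filtration.
\mathcal{F}_t = \sigma\!\left(\varepsilon_1, \varepsilon_2, \dots, \varepsilon_t\right),
\qquad
\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots

% A decision rule is admissible if each choice is measurable with respect
% to the information available when it is made:
c_t \ \text{is } \mathcal{F}_t\text{-measurable}
\quad\Longleftrightarrow\quad
c_t = \pi_t(\varepsilon_1, \dots, \varepsilon_t).
```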
What is "F" in this notation?
True.
I see. Yes, seasonality. This case has been quite tricky. My impression of this discussion so far is that there are many equivalent ways to model the seasonality problem, and that the sticking point is computational tractability. My own preference would be, in terms of setting up the model:

- reserve *t* (time, time step) as the natural numbers, scoped such that the k=1 Markov property holds;
- a different variable, such as *c*, to represent the number of time steps within a cyclical superperiod (i.e., c = 4 for four seasons, where one time step t is a season);
- a different variable, such as *s*, advancing deterministically as *t* mod *c*.

But then have the solver and simulator know how to handle these cases intelligently and not waste time on, say, computing what happens if s_{t+1} = s_t + 2, because that never happens. I.e., separate the model-definition and computational levels of abstraction cleverly.
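The proposed indexing takes only a few lines (a sketch with hypothetical names): t is the Markov time step, c the cycle length, and the season advances deterministically.

```python
# Sketch: t is the Markov time step, c is the cycle length, and the
# season index s_t advances deterministically.
def season(t, c=4):
    """Season index s_t = t mod c."""
    return t % c

# the only feasible transition is s_{t+1} = (s_t + 1) mod c, so a solver
# can skip impossible branches such as s_{t+1} = s_t + 2
for t in range(12):
    assert season(t + 1) == (season(t) + 1) % 4
```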
I will wait at least a decade before trying to debate this point with you further.
Ah. This gets to the heart of the matter. Is there a way to formulate what "usefulness" is? If so, maybe there's a way to programmatically discover the best framings of a particular model. |
> And I wanted to have an example in mind where this is not the case. I can see why it would be the case for some limited-information problem, though. In the basic consumption-savings model it is not the case (in fact, another condition is met there: a frame does not lose information w.r.t. its parent frame). Do you have an easy characterization of one of the models you are working on?

Trying to get at this more directly... suppose you have a problem with several control variables:

- consumption, informed by current market assets;
- endogenous labor quantity, informed by an exogenous physical energy level;
- risky asset allocation share, informed by an exogenous and intermittent selective attention to stock prices.

I wonder if it matters whether:

- (a) these variables are determined in sequential order, each made with knowledge of the preceding choices, or
- (b) the variables are determined only in light of the state at t-1.

Maybe the consumer can only investigate their asset allocations on days that they don't labor. |
On Fri, Sep 18, 2020, 4:47 PM Sebastian Benthall wrote:

>> 1. d. In discrete time, one can define natural ("atomic") periods as intervals [t1, t2[ such that F_t = F_{t1} for any t in [t1, t2[ and F_{t1} ⊄ F_{t2}. With these notations, t+1 is "one unit of time later", but does not necessarily denote the "next" period, as in "the next time information will be released."
>
> What is "F" in this notation?

The infamous filtration ;-) In other words, a sigma-algebra used to measure information up to date t. In France we usually call it a "tribe", and there is a recurring joke that most annoying questions in a math seminar can be answered with "it is a custom of my tribe". It is the standard mathematical apparatus for defining random processes in both discrete and continuous time.

>> There is an alternative formulation of the Markov property where the future distribution of a process can be forecast using (X_t, X_{t-1}, ..., X_{t-k}) for some finite k.
>
> True.

>> Seasonality, for instance. As a modeller you might want to refer to t+1 as the next quarter, while t+4 would be the next year.
>
> I see. Yes, seasonality. This case has been quite tricky. My impression of this discussion so far is that there are many equivalent ways to model the seasonality problem, and that the sticking point is computational tractability.

Not necessarily. There is a difference between the modeling language, the conventions we take, and the internal representation. The internal representation can be a markov chain while the user-facing formalism is expressed differently.

> My own preference would be, in terms of setting up the model:
>
> - reserve *t* (time, time step) as the natural numbers, scoped such that the k=1 Markov property holds;
> - a different variable, such as *c*, to represent the number of time steps within a cyclical superperiod (i.e., c = 4 for four seasons, where one time step t is a season);
> - a different variable, such as *s*, advancing deterministically as *t* mod *c*.
>
> But then have the solver and simulator know how to handle these cases intelligently and not waste time on, say, computing what happens if s_{t+1} = s_t + 2, because that never happens. I.e., separate the model-definition and computational levels of abstraction cleverly.

Yes, this is the kind of thing for which we need to differentiate physical time from computational ticks.

>> This is a quite useless comment.
>
> I will wait at least a decade before trying to debate this point with you further.

But at that point, interplanetary macroeconomics may be viable and I would like to revisit the topic. By then quantum computing might be a thing, which will probably allow conflicting opinions to be simultaneously true.

>> The problem is not whether frames are clearly and uniquely defined by transitions, but which sets of frames are useful.
>
> Ah. This gets to the heart of the matter. Is there a way to formulate what "usefulness" is?

No, that is to be defined. It is probably not about simulating a model, though, but rather about specifying additional information needed to solve it.

> ... If so, maybe there's a way to programmatically discover the best framings of a particular model.
|
Yes, it is this kind of example which could make a difference. Some quick comments:

- if it's about changing the order of arrival of new information, or about taking all decisions at date t, then to me it is a problem of type 1, i.e. about cycles and about language shortcuts to define and redefine them;
- if it's about selectively ignoring some shocks when taking some decisions, then it becomes about "decision frames" (I actually like the addition of "decision" there).
|
It is getting to the point where it will be possible to implement a general framework for this. I expect it will be viable to work on it soon after #836 is pulled. I'm wondering, from a software design standpoint, how we would like to support this in HARK. My preference would be for this architecture to ultimately be in My understanding is that the status quo is to support the overwriting of the core functionality for custom models in cases like this. See the work of @Mv77 on the new multi-stage portfolio model, this comment from @mnwhite, and the work in PR #832 Recall that the goal for this work is twofold:
I'm somewhat ambivalent between trying to modify the Speaking of #696, I wonder how its experimental approach to dictionary-based model configuration looks in light of the more recent discussions in this thread. While in that experimental PR each state has its own individual transition function--because the dictionary of transition functions uses single variable names as keys--it would also be possible for it to use tuples of variables names as keys, allowing for transition functions that determine multiple variables 'at once'. |
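The tuple-keys idea at the end could look roughly like this (a sketch, not #696's actual format; all variable names are illustrative):

```python
# Sketch: transition functions keyed by tuples of target names, so one
# function can determine several variables 'at once'.
transitions = {
    ("pLvl",): lambda v: (v["pLvlPrev"] * v["PermShk"],),
    ("mNrm",): lambda v: (v["aNrmPrev"] * 1.03 / v["PermShk"] + v["TranShk"],),
    # consumption and portfolio share determined jointly by one function
    ("cNrm", "Share"): lambda v: (0.5 * v["mNrm"], 0.6),
}

def apply_transitions(values, transitions):
    for targets, f in transitions.items():   # insertion order, Python 3.7+
        values.update(zip(targets, f(values)))
    return values

v = apply_transitions(
    {"pLvlPrev": 1.0, "PermShk": 1.0, "aNrmPrev": 0.5, "TranShk": 1.0},
    transitions,
)
```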
I'm now trying to figure out a generic data structure for the frame architecture (this is informed by work on #862). @Mv77 wondering if you could weigh in on this especially.

I think we have consensus about what a frame is. The question is how to design the implementation so that it elegantly does all the things we want it to do. This is a list of desiderata for what a Frame implementation might have:

I have been playing with implementations in terms of the basic data structures that come with Python. But now I'm wondering if we should have a dedicated Frame class that can capture all this information.

The other question that's coming up for me is that when listing the variables that a transition function depends on, it needs to distinguish between current time values and previous time values for any referenced model variables. This can be done structurally (i.e., separate lists of variable names for 'now' and previous values) or notationally (i.e., adding 'prev' to indicate a previous value). I can see pros and cons to either approach.

I have a prototype implementation ready for your review at PR #865, but it sidesteps these issues, which I do think will matter moving forward. |
On the data-structure question, my own uninformed prior is that the most future-proof way to do things would be for frames to have their own class. They seem like complex enough objects that at some point in the future you might want to add properties or attributes that might not be accommodated by primitive structures without substantial changes. It might also make it more user friendly, in the sense of, e.g., knowing that the 'target' is at |
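A minimal sketch of what a dedicated Frame class might look like, using the notational convention (a `_prev` suffix) for lagged variables discussed above. Every name and field here is a placeholder, not a committed design:

```python
# Sketch of a dedicated Frame class; field names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Frame:
    target: Tuple[str, ...]                # variables this frame determines
    scope: Tuple[str, ...]                 # inputs; '_prev' marks lagged values
    transition: Optional[Callable] = None  # computes target from scope
    objective: Optional[Callable] = None   # present only for control frames
    aggregate: bool = False                # whether the frame is aggregate-level

m_frame = Frame(
    target=("mNrm",),
    scope=("aNrm_prev", "Rfree", "theta"),
    transition=lambda aNrm_prev, Rfree, theta: (aNrm_prev * Rfree + theta,),
)
print(m_frame.target)  # ('mNrm',)
```

A class like this leaves room to grow (validation, repr, solver hooks) in a way that a bare tuple or dict would not, which is the future-proofing argument above.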
Just a broader example of what the Frame initialization fragment may look like:
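The fragment this comment referred to did not survive extraction; purely as a hypothetical stand-in, an ordered list of frames for a simple consumption problem might look like this (all names and the dict layout are assumptions):

```python
# Hypothetical illustration only: the original fragment is not preserved here.
# Frames as an ordered list for a normalized consumption problem.

frames = [
    # shocks arrive first
    {"type": "shock", "target": ("psi", "theta")},
    # transition: market resources from last period's assets and the shocks
    {"type": "transition", "target": ("mNrm",),
     "scope": ("aNrm_prev", "Rfree", "psi", "theta")},
    # control: consumption chosen with market resources in scope
    {"type": "control", "target": ("cNrm",), "scope": ("mNrm",)},
    # transition: end-of-period assets are resources less consumption
    {"type": "transition", "target": ("aNrm",), "scope": ("mNrm", "cNrm")},
]
print([f["target"] for f in frames])
```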
|
One point to make here is that "arbitrage equation" is a legacy from the Representative Agent world in which one of the chief cheats is that all optimization solutions are interior solutions for which a first order condition holds. That often will not be true. (Like, in our dcegm tool). So I think I'd advocate replacing "arbitrage" with "optimality" in the above, which allows for the possibility of dcegm-like tools. |
Roger that. What does 'optimality' refer to in general? A function to be optimized? |
Definition of an objective function that should be maximized by the agent.
Maybe the category should be "objective" or something like that rather than
"optimality". We should consult Pablo on this.
…On Tue, Nov 17, 2020 at 6:50 PM Sebastian Benthall wrote:
> Roger that.
> What does 'optimality' refer to in general? A function to be optimized?
--
Chris Carroll
|
I have another question about the design of frames. Control variables may be subject to bounds that are functions of other state variables: https://dolo.readthedocs.io/en/latest/model_specification.html#boundaries

In @llorracc's examples in this thread, which I'm working from, it's not clear where the bounds are specified. My intuition is:
What I'm unclear about is how a frame-specific constraint would impact the complexity of a frame's solution problem, and whether these constraints are any impediment to the composability of frame solutions in backwards induction. |
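One way bounds might attach to a frame, dolo-style, is as state-dependent lower/upper functions alongside the control's scope. This is a sketch under assumed names, not a proposal for HARK's actual interface:

```python
# Sketch: state-dependent bounds attached to a control frame, dolo-style.
# All names are illustrative.

def clip(x, lo, hi):
    return max(lo, min(hi, x))

control_frame = {
    "target": "cNrm",
    "scope": ("mNrm",),
    "lower": lambda mNrm: 0.0,    # consumption cannot be negative
    "upper": lambda mNrm: mNrm,   # cannot consume more than resources
}

def enforce_bounds(frame, choice, **states):
    lo = frame["lower"](**states)
    hi = frame["upper"](**states)
    return clip(choice, lo, hi)

print(enforce_bounds(control_frame, 5.0, mNrm=3.0))  # clipped to 3.0
```

Because the bound functions take the same states that are in the frame's scope, a solver could in principle treat them as part of the frame's problem without affecting how frames compose in backwards induction, but that is exactly the open question above.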
The `PerfForesightConsumerType` has two `poststate_vars`: `aNrmNow` and `pLvlNow`:
https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1450
However, `pLvlNow` is set in `getStates()`:
https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1665
...and not in `getPostStates()`. Rather, `aLvlNow` is updated as a poststate in the code:
https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L1706
Correcting the `poststate_vars` field to list `aLvlNow` instead of `pLvlNow` raises an `AttributeError`. I guess this is because the class is depending on the automatic initialization of the poststates to create the attribute.
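A toy schematic of the mechanism being described (this is not HARK's actual code): attributes named in `poststate_vars` are created automatically at `initializeSim()`, so any method that reads such an attribute silently depends on its name appearing in that list.

```python
# Schematic only: mimics the auto-initialization behavior described above.
class ToyAgent:
    poststate_vars = ["aNrmNow", "pLvlNow"]

    def initializeSim(self):
        # attributes named in poststate_vars are created automatically
        for var in self.poststate_vars:
            setattr(self, var, 0.0)

    def getStates(self):
        # reads pLvlNow, which exists only via the auto-initialization
        self.pLvlNow = self.pLvlNow * 1.01

agent = ToyAgent()
agent.initializeSim()
agent.getStates()  # fine: pLvlNow was created by initializeSim()

agent2 = ToyAgent()
agent2.poststate_vars = ["aNrmNow", "aLvlNow"]  # the 'corrected' list
agent2.initializeSim()
try:
    agent2.getStates()
except AttributeError:
    print("AttributeError: pLvlNow was never initialized")
```

This reproduces the reported symptom: with the 'corrected' list, `getStates()` fails because nothing ever created `pLvlNow`.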