Hi there,

Thanks so much for making this repo public. It's really helped me understand a lot of the inner workings of LFADS as well as the CD and other recent enhancements.
When going over the LearnableAutoRegressive1Prior, I'm a little bit confused by something. It's quite possible I don't have the prerequisite knowledge, so I apologize if this question is just the result of my ignorance.
I hope you don't mind, but I'll use the Wikipedia nomenclature found here.
So the process model takes the form:

X_t = c + phi * X_{t-1} + e_t

where c is a constant, typically 0, and e_t is white noise at time t.

When there is no previous sample, X_t = c + e_t, which, I believe, is a normal distribution: N(c, sigma_e**2).
When there is a previous sample, we have a normal distribution: N(c + phi * prev, sigma_p**2), where prev is a draw from X_{t-1} and the combined variance is sigma_p**2 = phi**2 * var(X_{t-1}) + sigma_e**2; at stationarity, var(X_t) = var(X_{t-1}) = sigma_p**2, which gives sigma_p**2 = sigma_e**2 / (1 - phi**2).
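As a sanity check on that identity (this is my own NumPy snippet, not code from the repo), one can simulate a long AR(1) trace and compare its empirical variance against the closed form:

```python
import numpy as np

# Simulate X_t = c + phi * X_{t-1} + e_t and check that the empirical
# variance of a long trace matches sigma_e**2 / (1 - phi**2).
rng = np.random.default_rng(0)
c, phi, sigma_e = 0.0, 0.9, 0.5
n_steps = 200_000

x = np.empty(n_steps)
x[0] = c + rng.normal(0.0, sigma_e)  # first sample, as described above
for t in range(1, n_steps):
    x[t] = c + phi * x[t - 1] + rng.normal(0.0, sigma_e)

print(np.var(x))                    # empirical variance of the trace
print(sigma_e**2 / (1.0 - phi**2))  # closed-form sigma_p**2, ~1.316 here
```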
In your code, I think sigma_e**2 is stored in the more tractable logevars and sigma_p**2 in logpvars, confirmed by the fact that logpvars is a transformation of logevars and phis (lfads-cd/helper_funcs.py, lines 364 to 365 in 1d6bb5e).
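In log space, that transformation would presumably read like this; a minimal standalone sketch in NumPy (the function name is mine, not the repo's):

```python
import numpy as np

# sigma_p**2 = sigma_e**2 / (1 - phi**2) becomes, in log space,
# logpvars = logevars - log(1 - phis**2)
def pvars_from_evars(logevars: np.ndarray, phis: np.ndarray) -> np.ndarray:
    return logevars - np.log(1.0 - phis**2)
```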
So then later, in the logp_t method, I would expect the t == 0 branch to use logevars and the t >= 1 branch to use logpvars, but it seems the opposite is the case (lfads-cd/helper_funcs.py, lines 386 to 393 in 1d6bb5e).
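To make the expectation concrete, here is the branching I have in mind, as a rough standalone sketch (NumPy; diag_gauss_logp and expected_logp_t are hypothetical names, not the repo's):

```python
import numpy as np

def diag_gauss_logp(z, mean, logvar):
    # log-density of N(mean, exp(logvar)) evaluated at z
    return -0.5 * (np.log(2.0 * np.pi) + logvar + (z - mean) ** 2 / np.exp(logvar))

def expected_logp_t(z_t, z_prev, c, phis, logevars, logpvars):
    if z_prev is None:
        # t == 0: no previous sample, so I expected N(c, sigma_e**2), i.e. logevars
        return diag_gauss_logp(z_t, c, logevars)
    # t >= 1: given the previous sample, I expected N(c + phi * prev, sigma_p**2),
    # i.e. logpvars -- but the code appears to do the reverse
    return diag_gauss_logp(z_t, c + phis * z_prev, logpvars)
```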
I'm inclined to think I'm misunderstanding something, but I suppose it's also possible that there are a couple of typos here (e <-> p swapped), so I wanted to check with you first.
Cheers, and thanks again for this great repo!