
ENH: Adding Discrete Approximation of VAR Methods #640

Merged: 29 commits from crondonm:VAR-approximation into QuantEcon:main on Feb 24, 2023

Conversation

@crondonm (Contributor) commented Oct 5, 2022

Hi,

As discussed with @jstac, I am submitting the routine discrete_var.py discussed in #639.

The routine is based on "Finite-State Approximation of VAR Processes: A Simulation Approach" by Stephanie Schmitt-Grohé and Martín Uribe, July 11, 2010.

I am attaching the authors' MATLAB file for comparison, as well as a notebook to facilitate testing:

Discrete_VAR.zip

@jstac (Contributor) commented Oct 5, 2022

Many thanks @crondonm , this will be a very valuable contribution.

@chien-y and @shlff, would you mind taking a first pass at reviewing this PR?

You'll need to quickly review the underlying paper, get some idea of what the code does, and write up a small notebook that tries to explain it. Also,

  • can you make the Python code more Pythonic? For example, function names should be longer and more descriptive.
  • I think we should break up the large function tpm. In particular, simulation of the process should be separate.
  • Should tpm return an instance of MarkovChain, like qe.tauchen?
  • If so, do we need a separate simulation routine or can we use the one in MarkovChain?

@Smit-create , I know that you have quite a lot on but I'm hoping that you can finish this PR in the next month or two, reviewing whether the code can be made more efficient and turning the example into a test.
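To illustrate the MarkovChain point with toy values (this is only a sketch, not the proposed implementation): once the routine produces a stochastic matrix P and an array S of state values, wrapping them in qe.MarkovChain would give us simulation essentially for free.

import numpy as np
import quantecon as qe

# Toy values standing in for the output of the discretization routine
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # stochastic matrix over the grid points
S = np.array([[-1.0, -1.0],
              [ 1.0,  1.0]])          # one row of state values per grid point

mc = qe.MarkovChain(P, state_values=S)
path = mc.simulate(ts_length=5, random_state=1234)   # rows of S, one per period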

@Smit-create (Member):

> @Smit-create , I know that you have quite a lot on but I'm hoping that you can finish this PR in the next month or two, reviewing whether the code can be made more efficient and turning the example into a test.

Thanks, @jstac. Once the code is reviewed and verified, I can look into it, make the APIs more user-friendly, and handle the clean-ups.

@shlff (Member) commented Oct 7, 2022

Thanks @jstac and @Smit-create .

@chien-y and I chatted about this just now. We will work on it, meet again on Monday, and find a time to report back to you later.

@shlff (Member) commented Oct 9, 2022

Thanks @jstac . Here is a list of reviewing tasks:

  • understand the Tauchen method in the QuantEcon library
  • quickly read the paper: http://www.columbia.edu/~mu2166/tpm/tpm.pdf
  • present the algorithm to Chien (11 am Thursday 13 October)
  • read and understand the notebook written by Carlos
  • make the code more Pythonic following PEP8: https://peps.python.org/pep-0008/
  • explain the code to Chien and discuss how to improve its efficiency (3-4 pm Friday 21 October)
  • design different experiments to check and improve the time complexity of different parts of the code, especially with jax or jit
  • break the large functions into smaller pieces, especially the simulation part (a rough sketch of the simulation approach is included after this list)
  • make the return of tpm consistent with the return of qe.tauchen, including three components: the state space, the stochastic matrix, and a simulation of the stochastic process.
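For reference while we review, here is a rough, unoptimized sketch of the simulation approach as we understand it from the paper (names and details are illustrative only, not the ones in discrete_var.py): simulate the VAR, map each realization to the nearest grid point, count transitions, and normalize the rows.

import numpy as np

def discretize_by_simulation(A, Omega, S, sim_length=100_000, burn_in=1_000, seed=0):
    """Rough sketch: approximate x_t = A x_{t-1} + u_t with u_t ~ N(0, Omega)
    on a given grid S of shape (n, m) by simulating and counting transitions."""
    rng = np.random.default_rng(seed)
    n, m = S.shape
    counts = np.zeros((n, n))
    x = np.zeros(m)
    prev = None
    for t in range(burn_in + sim_length):
        x = A @ x + rng.multivariate_normal(np.zeros(m), Omega)
        idx = np.argmin(np.sum((S - x)**2, axis=1))   # nearest grid point
        if t >= burn_in and prev is not None:
            counts[prev, idx] += 1
        prev = idx
    row_sums = counts.sum(axis=1, keepdims=True)
    # rows that were never visited are left as zeros in this sketch
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)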

@jstac (Contributor) commented Oct 24, 2022

@shlff and @chien-y, please update me on your progress.

@shlff (Member) commented Oct 24, 2022

Hey @jstac, please refer to the list above. I've designed and have been implementing some experiments to test different parts of the code:

  • computing the unconditional covariance $\sigma_i$
  • computing the state space matrix $S$
  • simulating $\Pi$

Please find the code with Shu's comments, breakdowns, and experiments here: https://colab.research.google.com/drive/1116BYYOw0XiJjvedRNUSf5DrSQkeRchK?usp=sharing

@jstac (Contributor) commented Oct 26, 2022

Docs need to be reformulated in line with QuantEcon.py standards -- see, e.g., https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/markov/ddp.py

Note that S(i, j) should be S[i, j]. The meaning of i also needs to be clarified here: are we counting elements of the state (which is built from a grid) using column-major or row-major order?

@jstac (Contributor) commented Oct 26, 2022

Adding to the last comment: for Python we should use row-major order.
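To make the two conventions concrete, using toy grids rather than the code's actual variables:

import numpy as np

grid_0 = np.array([-1.0, 1.0])         # values of the first state component
grid_1 = np.array([-2.0, 0.0, 2.0])    # values of the second state component

# Row-major (C order): the last component varies fastest
S_row = np.array([(a, b) for a in grid_0 for b in grid_1])

# Column-major (F order): the first component varies fastest (MATLAB convention)
S_col = np.array([(a, b) for b in grid_1 for a in grid_0])

print(S_row[1])   # [-1.  0.]  -> state 1 pairs grid_0[0] with grid_1[1]
print(S_col[1])   # [ 1. -2.]  -> state 1 pairs grid_0[1] with grid_1[0]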

@jstac closed this Oct 26, 2022
@jstac reopened this Oct 26, 2022
@coveralls:

Coverage Status

Coverage decreased (-1.04%) to 93.96% when pulling 3506341 on crondonm:VAR-approximation into 060f3e6 on QuantEcon:master.

@coveralls commented Oct 26, 2022:

Coverage Status

Coverage: 92.903% (+0.06%) from 92.844% when pulling 08dd321 on crondonm:VAR-approximation into 65ac63d on QuantEcon:main.

@jstac (Contributor) commented Oct 26, 2022

tpm should have a default seed in the argument list in order to help reproducibility.
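For example, something along these lines (the signature and parameter name are illustrative only, not a proposal for the final API):

import numpy as np

def tpm(A, Omega, sim_length=1_000, random_state=12345):
    # A default value for random_state makes results reproducible out of the box,
    # while still letting users pass their own seed or generator.
    rng = np.random.default_rng(random_state)
    m = A.shape[0]
    shocks = rng.multivariate_normal(np.zeros(m), Omega, size=sim_length)
    return shocks  # the real routine would run the full discretization here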

@jstac (Contributor) commented Oct 26, 2022

The function mom solves a discrete Lyapunov equation. Replace with existing QE code.
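For reference, the unconditional covariance of $x_t = A x_{t-1} + u_t$ with $\mathrm{Var}(u_t) = \Omega$ solves the discrete Lyapunov equation $\Sigma = A \Sigma A^\top + \Omega$, which quantecon's solver already handles (toy matrices below):

import numpy as np
from quantecon import solve_discrete_lyapunov

A = np.array([[0.7, 0.1],
              [0.0, 0.5]])
Omega = np.array([[0.02, 0.00],
                  [0.00, 0.01]])

Sigma = solve_discrete_lyapunov(A, Omega)              # stands in for `mom`
print(np.allclose(A @ Sigma @ A.T + Omega, Sigma))     # True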

mean = [0] * m

for t in range(T+Tburn):
    drw = multivariate_normal(mean, omega).reshape(1,2)
Contributor:

@crondonm, does this line assume that m = 2? I'm guessing the 2 should be m in my edit.

Contributor (author):

Hi @jstac,

m should be equal to the number of states you are discretizing. In the example, m = 2.

Contributor:

Thanks @crondonm. In the specified example that's true, but for the code to work for arbitrary examples we need to replace the 2 with m. I've taken care of this.

return Pi, Xvec, S


def mom(gx, hx, varshock, J=0, method=True):
Contributor:

@crondonm, FYI, this code is already available in quantecon as solve_discrete_lyapunov. I've changed it to use that function (after confirming identical results for the test matrices you use).

Contributor (author):

Noted. Thanks for the suggestion.

@jstac mentioned this pull request Oct 26, 2022
@jstac (Contributor) commented Oct 26, 2022

@shlff I couldn't see your code (due to lack of access). I edited @crondonm's code and pushed my own commits. Please comment on the new version if I've missed obvious changes.

We still need to accelerate the simulation, which is very slow (as well as add tests, etc.).

I suggest that we also return a multi-index version of the transition probability matrix P. Then S doesn't need to contain a lot of unnecessary values --- we can just return the grid.
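Concretely, something like the following (toy sizes; in practice the grid sizes come from the user):

import numpy as np

n1, n2 = 3, 4                      # grid sizes for the two state components
n = n1 * n2
rng = np.random.default_rng(0)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)  # toy stochastic matrix over the flat index

# With row-major ordering, flat state i corresponds to (i1, i2) = divmod(i, n2),
# so a reshape exposes the same data under a multi-index without copying values.
P_multi = P.reshape(n1, n2, n1, n2)
i1, i2, j1, j2 = 0, 1, 2, 3
print(P_multi[i1, i2, j1, j2] == P[i1 * n2 + i2, j1 * n2 + j2])   # True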

@shlff (Member) commented Oct 26, 2022

Thanks @jstac. I've updated the access, so you and others should now be able to view, edit, and comment on it.

I've updated the new version of the code, commented on it (please see comments 1-7), broken the code down into parts, and identified the most time-consuming parts of the simulation. For example, this part:

drw = multivariate_normal(mean, Omega).reshape(m, 1)
x = A @ x0 + drw

@jstac (Contributor) commented Oct 26, 2022

@crondonm, FYI, the documentation states that the code discretizes $x_t = A x_{t-1} + \Omega e_t$ but in fact it discretizes $x_t = A x_{t-1} + u_t$ where $u_t$ is zero-mean normal with variance-covariance matrix $\Omega$.

(Note that $\Omega e_t$ has variance-covariance matrix $\Omega \Omega^\top$.)

If this is a mistake in the original MATLAB documentation perhaps we should let the authors know.
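A quick numerical check of the point above, with a generic toy matrix: if $u_t = \Omega e_t$ with $e_t$ standard normal, the sample covariance of $u_t$ matches $\Omega \Omega^\top$ rather than $\Omega$.

import numpy as np

rng = np.random.default_rng(0)
Omega = np.array([[0.10, 0.00],
                  [0.05, 0.20]])
e = rng.standard_normal((1_000_000, 2))
u = e @ Omega.T                       # each row is Omega @ e_t

print(np.round(np.cov(u.T), 3))       # close to Omega @ Omega.T
print(np.round(Omega @ Omega.T, 3))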

@jstac (Contributor) commented Oct 26, 2022

@crondonm , FYI, following up from our conversation at the CBC, to avoid tiling and hence creating unnecessary arrays,

xx = np.tile(x.T, (n, 1))
d = np.sum((S - xx)**2, axis=1)

can be replaced by

xx = np.reshape(x, (1, m))
d = np.sum((S - xx)**2, axis=1)

@jstac (Contributor) commented Oct 26, 2022

Efficiency issues are now solved. I chose Numba to accelerate the code because it's largely serial. With jit compilation and some memory savings mentioned above (reshape vs tile), execution time for the example provided by @crondonm drops from 1 min 13 seconds to 2.65 seconds.
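For the record, the kind of jitted helper involved looks roughly like this (a stripped-down sketch, not the committed code): the nearest-grid-point search is a plain double loop, which Numba compiles well and which avoids building the tiled (n, m) array.

import numpy as np
from numba import njit

@njit
def nearest_state_index(S, x):
    # Return the row of S (an n x m grid of states) closest to x in Euclidean distance.
    n, m = S.shape
    best, best_d = 0, np.inf
    for i in range(n):
        d = 0.0
        for j in range(m):
            diff = S[i, j] - x[j]
            d += diff * diff
        if d < best_d:
            best, best_d = i, d
    return best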

@crondonm (author):

Hi @jstac,

This is fantastic. Thanks a lot for all your work optimizing the code.

Thanks a lot for pointing out the issue with the variance-covariance matrix.

In fact, the MATLAB code does compute the variance-covariance matrix as $\Omega \Omega^\top$, which requires the input to be the Cholesky factor of the variance-covariance matrix. I missed using the right matrix when producing the simulation.

Regarding your question about XVEC, I find it helpful to get it as it saves time when you are simulating the full model. However, in fairness, it is not always needed and it could be an option. I guess the minimum output should be the states and the stochastic matrix.

@jstac (Contributor) commented Oct 26, 2022

> Regarding your question about XVEC, I find it helpful to get it as it saves time when you are simulating the full model. However, in fairness, it is not always needed and it could be an option. I guess the minimum output should be the states and the stochastic matrix.

Good idea, I'll make that change now.

@jstac (Contributor) commented Oct 26, 2022

Python tradition suggests using long, descriptive names for functions (unlike MATLAB). Hence I propose that we change the name of tpm to discretize_var (which is the same as the filename). Is this OK with you, @crondonm?

@crondonm (author):

> Is this OK with you, @crondonm?

Sounds great!

@jstac (Contributor) commented Oct 26, 2022

OK @crondonm, I've changed the name and also provided options to construct the state space using row-major order as well as column-major order. (The first will be a better fit for most Python code, whereas the second is consistent with the original.)

@jstac (Contributor) commented Oct 26, 2022

This is about ready. I've added tests within the file so the code just needs to be reviewed and then embedded properly in the library.

I suggest that we add the code to markov/approximation.

All help for these last steps is appreciated (CC @Smit-create @mmcky)

@Smit-create (Member):

> This is about ready. I've added tests within the file so the code just needs to be reviewed and then embedded properly in the library.

Sure, will try to add the final touch-ups this weekend.

@crondonm (author):

Given the rest of the description, it seems to me that the last sentence is redundant:

> Martín Uribe, July 11, 2010. In particular, we follow Schmitt-Grohé
> and Uribe's method in contructing the grid for approximation.

@mmcky (Contributor) commented Feb 10, 2023

@oyamad the test added here comes from the file tpm_test.ipynb that was included in the original zip file at the top of this PR. I am happy to accept that as the source of truth for the test, as it aligns with @crondonm's submission. Do you agree?

@jstac (Contributor) commented Feb 10, 2023

> @oyamad the test added here comes from the file tpm_test.ipynb that was included in the original zip file at the top of this PR. I am happy to accept that as the source of truth for the test, as it aligns with @crondonm's submission. Do you agree?

It's not easy to find other tests for this function.

@oyamad (Member) commented Feb 10, 2023

@mmcky Yes.

Remaining are:

@HumphreyYang (Collaborator) commented Feb 15, 2023

Hi @mmcky,

CI on the Windows platform has some issues with pip. Could you please have a quick look when you have time?

Requirement already satisfied: pip in c:\miniconda\envs\test\lib\site-packages (22.3.1)
Collecting pip
  Downloading pip-23.0-py3-none-any.whl (2.1 MB)
     ---------------------------------------- 2.1/2.1 MB 16.4 MB/s eta 0:00:00
ERROR: To modify pip, please run the following command:
C:\Miniconda\envs\test\python.exe -m pip install -U pip

Many thanks in advance.

@HumphreyYang (Collaborator):

Hi @oyamad,

Many thanks for editing the docstring. It is a wonderful exemplar for me to follow in the future.

I have added the three tests and fixed a small bug in the code while testing. For the record, tests with order='F' and a scipy.stats multivariate distribution are relatively straightforward. However, a test for a non-square C is tricky, given that random samples of u change when the size r is changed under the same seed. My approach is to hold the first two columns of u unchanged, add a new column at the end (a copy of the first), and check whether it returns the same solution as before.

u = random_state.standard_normal(size=(sim_length-1, 2))
u = np.insert(u, 2, u[:, 0], axis=1)

The test result is the same as the previous P_out, which indicates that the change in P results from the changes in the random u. Therefore, I conclude that the code is correct and have added the result as a test case.

Could you please kindly review these changes when you are available?

Many thanks in advance.

drawn from a multivariate t-distribution

>>> mc = discrete_var(A, C, grid_sizes,
... rv=scipy.stats.multivariate_t(shape=np.identity(2)),
@HumphreyYang (Collaborator) commented Feb 17, 2023

> Thanks @HumphreyYang, but it is the covariance matrix that must be the identity matrix.

Many thanks @oyamad. Sorry I forgot the constraint on the covariance matrix and accidentally removed your comment.

Do you think we should add these lines below into the docstring as well, in addition to my update to emphasize our constraint on the distribution?

>>> df = 100
>>> Sigma = np.diag(np.tile((df-2)/df, 2))
>>> t_dis = scipy.stats.multivariate_t(shape=Sigma, df=df).rvs(10_000_000)
>>> np.round(np.cov(t_dis.T), 2)
array([[1., 0.],
       [0., 1.]])

@QuantEcon deleted a comment from oyamad Feb 17, 2023
@oyamad (Member) commented Feb 17, 2023

> Do you think we should add these lines below into the docstring as well, in addition to my update to emphasize our constraint on the distribution?

No, I think the update in a85a6cd is just fine. Thanks!

@oyamad (Member) commented Feb 17, 2023

@HumphreyYang Will you cherry-pick the minor changes in 496942d? Thanks!

@oyamad (Member) left a comment:

Tests all green. I think this is ready to merge.

@jstac (Contributor) commented Feb 24, 2023

Many thanks @oyamad, @crondonm, @HumphreyYang, and all other contributors. @mmcky, I think this is ready to merge.

@mmcky (Contributor) commented Feb 24, 2023

Thanks everyone for your input on this. LGTM.

@mmcky merged commit 71d4d88 into QuantEcon:main on Feb 24, 2023