Negative values obtained for stationary distribution in mc_tools.py #18

Closed
jstac opened this issue Jul 15, 2014 · 9 comments

@jstac
Contributor

jstac commented Jul 15, 2014

The following is a message from Daisuke Oyama at the University of Tokyo. It shows instances where the mc_compute_stationary function in mc_tools.py produces negative values in its output. (The code also contains a nice set of tests for that function --- I think we can adopt them.)


I reproduced instances in which the output of mc_compute_stationary contains
negative elements:

https://github.com/oyamad/test_mc_compute_stationary

It may be inevitable given the limited precision of floating-point arithmetic
(especially in np.linalg.solve),
but it matters in some contexts, such as the Kandori-Mailath-Rob model
with a large number of players and/or a very small probability of mutations.
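
For concreteness, here is a minimal sketch of the linear-solve approach the message refers to (an assumption about the method for illustration; the actual code lives in mc_tools.py). It solves the stationarity conditions with one equation replaced by the normalization:

    import numpy as np

    def stationary_via_solve(P):
        """Solve (P.T - I) pi = 0 with one equation replaced by sum(pi) = 1."""
        n = P.shape[0]
        A = P.T - np.eye(n)
        A[-1, :] = 1.0           # normalization row: probabilities sum to one
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

When P is nearly reducible (a second eigenvalue very close to one, as with the KMR matrices discussed below), this system is badly conditioned and some entries of the computed solution can come out slightly negative.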

@cc7768
Member

cc7768 commented Jul 15, 2014

One of the failing test cases is fixed by approaching the problem through eigenvalues and not limiting ourselves to a single stationary distribution. I'm still playing with it, but thought I'd give an update.

@jstac
Contributor Author

jstac commented Jul 17, 2014

This sounds interesting. It would be nice if we could change the mc_compute_stationary function to return all possible solutions (stationary distributions) --- or a description of the set in the infinite case. I'm sure this can be done. It would be something like: compute the set of eigenvectors that span the eigenspace associated with the unit eigenvalue. Now take the nonnegative ones and normalize them so that they sum to one.

Is that what you're thinking of? Such code must exist somewhere. (Having said that I did a quick search and couldn't find anything.)

I should add that Tom knows this stuff better than me...
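
Concretely, here is a minimal numpy sketch of the procedure described above: take the eigenvectors of P.T whose eigenvalue is numerically one, keep those of a uniform sign, and normalize them to sum to one. The tolerance is an arbitrary illustrative choice, not something settled in this thread:

    import numpy as np

    def stationary_candidates_numpy(P, tol=1e-10):
        """Eigenvectors of P.T with eigenvalue within tol of 1, kept when
        numerically of one sign and normalized to sum to one."""
        eigvals, eigvecs = np.linalg.eig(P.T)
        out = []
        for lam, v in zip(eigvals, eigvecs.T):
            if abs(lam - 1.0) < tol:
                v = np.real(v)        # the unit eigenvector is real up to a phase
                if (v >= -tol).all() or (v <= tol).all():
                    v = np.abs(v)
                    out.append(v / v.sum())
        return out

One caveat: when the unit eigenspace has dimension greater than one (a reducible chain), eig returns some basis of that space whose vectors need not be nonnegative, so the sign filter can discard valid directions; a complete treatment has to work with the recurrent classes rather than raw eigenvectors.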

@jstac
Contributor Author

jstac commented Jul 17, 2014

I now see that this comment should have been under issue 19...

@cc7768
Member

cc7768 commented Jul 18, 2014

Following up on some things I was thinking about earlier:

  1. If you look at the eigenvalues of the KMR_Markov_matrix_sequential matrices, then as you increase the number of players it looks like a second eigenvalue approaches (but shouldn't reach) unity. This isn't a proof, just a numerically observed phenomenon. For example, try the following:

    import numpy as np
    import numpy.linalg as la
    from test_mc_compute_stationary import KMR_Markov_matrix_sequential

    # Watch the two largest eigenvalues as the number of players grows;
    # the second-largest creeps toward unity.  (range replaces Python 2's xrange.)
    for i in range(2, 30):
        eigvals = la.eig(KMR_Markov_matrix_sequential(i, 1./3, 1e-2))[0]
        eigvals.sort()           # numpy sorts complex values by real part, then imaginary part
        print(eigvals[-2:])      # the two eigenvalues closest to one
    
  2. Because of the previously mentioned issue of a second eigenvalue approaching unity, maybe there is some way to leverage the fact that we don't need to find the eigenvalue: we know the eigenvalue we need is 1, and it feels like that should give us some leverage in computing a stationary distribution (see the sketch below).

These are just some random thoughts I had while walking to campus. Maybe they'll be useful, but maybe not. Just thought I'd write them down here.
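
One way to use the known unit eigenvalue, sketched here purely as an illustration of the idea in point 2 (nothing in the thread pins down this method): compute a basis for the null space of P.T - I directly, for example from the singular value decomposition, instead of asking eig to locate the eigenvalue.

    import numpy as np

    def stationary_from_nullspace(P, tol=1e-12):
        """Vectors v with (P.T - I) v = 0, taken from the right singular
        vectors whose singular values are numerically zero, then filtered
        for a uniform sign and normalized to sum to one."""
        n = P.shape[0]
        U, s, Vh = np.linalg.svd(P.T - np.eye(n))
        out = []
        for v in Vh[s < tol]:                # rows of Vh spanning the null space
            if (v >= -tol).all() or (v <= tol).all():
                v = np.abs(v)
                out.append(v / v.sum())
        return out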

@oyamad
Member

oyamad commented Jul 22, 2014

I wrote some code that picks out the eigenvectors whose eigenvalue is sufficiently close to one, using eig from mpmath, which allows floating-point arithmetic with arbitrary precision:
https://github.com/oyamad/test_mc_compute_stationary#mc_compute_stationary_mpmathpy

  • As remarked, there is a second eigenvalue that is close to one. The function in the code simply sets a high enough precision to distinguish the largest eigenvalue (theoretically equal to 1) from the second in the list of eigenvalues produced by the eig routine, and returns a list of the corresponding eigenvectors.
  • It passes my previous tests; or I should say I chose the default parameter values so that it works as expected for the instances in the tests. I have not tried any other instances.
  • mpmath (0.18) is bundled with the latest SymPy (0.7.5), which is included in the latest Anaconda.
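
The linked repository has the actual implementation; the following is only a rough sketch of the idea, with the function name, default precision, and tolerance all being assumptions made here for illustration:

    import mpmath

    def stationary_candidates_mp(P, dps=50, tol=1e-20):
        """Eigenvectors of P.T with eigenvalue within tol of 1, computed with
        mpmath at dps decimal digits of precision and normalized to sum to one.
        P can be a list of rows or anything else mpmath.matrix accepts."""
        mpmath.mp.dps = dps                  # working precision in decimal digits
        A = mpmath.matrix(P).T               # stationary dists are left eigenvectors of P
        E, ER = mpmath.eig(A)                # eigenvalues E, right eigenvectors in columns of ER
        n = A.rows
        out = []
        for j, lam in enumerate(E):
            if abs(lam - 1) < tol:
                col = [ER[i, j].real for i in range(n)]   # drop tiny imaginary parts
                s = sum(col)
                out.append([x / s for x in col])
        return out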

@cc7768
Member

cc7768 commented Jul 22, 2014

mpmath looks cool. I hadn't seen that before.

@cc7768
Member

cc7768 commented Aug 7, 2014

@oyamad Thanks for that code. I incorporated it into the new version of mc_compute_stationary; see #43. mpmath is certainly slower at computing eigenvalues/eigenvectors than numpy, so I made numpy the default but left it flexible enough to use either. @jstac let me know what needs to change for the PR and we can close this issue and #19.
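
Purely for illustration, here is a hypothetical shape such a switch could take (the function signature, the method argument, and the backends called are assumptions, not the interface actually adopted in #43):

    def mc_compute_stationary(P, method="numpy", **kwargs):
        """Hypothetical dispatch: fast numpy path by default,
        arbitrary-precision mpmath path on request."""
        if method == "numpy":
            return stationary_candidates_numpy(P, **kwargs)   # numpy sketch earlier in the thread
        elif method == "mpmath":
            return stationary_candidates_mp(P, **kwargs)      # mpmath sketch above
        else:
            raise ValueError("method must be 'numpy' or 'mpmath'")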

@oyamad
Member

oyamad commented Aug 8, 2014

@cc7768 Great, I am happy to see it nicely incorporated!

(I am not sure if this is the right place to make a comment, but is the class DMarkov still under development? Its methods do not match those described in the docstrings.)

@cc7768
Member

cc7768 commented Aug 8, 2014

@oyamad Thanks for pointing that out. I had used different names before @spencerlyon2 's suggestion, but have since made them consistent; the methods and docstrings should now match. Let me know if anything else is out of line.

Closing issue.
