FEAT: Add tools for repeated games #347
Force-pushed 460c55b to 649ab5b, then 649ab5b to fe1c996.
I haven't followed the details. Only small comments at the moment.
General comments:
- Try `%prun` to detect bottlenecks.
- Try `method='interior-point'` (ENH: added "interior-point" method for scipy.optimize.linprog, scipy/scipy#7123) as a `linprog` option, which is available with the latest dev version of scipy.
- Consider providing a `linprog_method` option to `outerapproximation`, to be passed to `linprog`.
- Modification of the docstring in `ce_util.py` should belong to a separate PR.
- Did you compare `gridmake` with `cartesian`?
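The `%prun` suggestion can also be reproduced outside IPython with the standard-library profiler. A minimal sketch (the profiled `hotspot` function is a stand-in, not code from this PR):

```python
import cProfile
import io
import pstats

def hotspot(n):
    # stand-in for a candidate bottleneck such as outerapproximation
    return sum(i * i for i in range(n))

pr = cProfile.Profile()
pr.enable()
hotspot(10**5)
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # per-function timings, similar to %prun's output
```

In IPython, `%prun outerapproximation(...)` would produce the same kind of per-function report interactively.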
quantecon/game_theory/__init__.py (outdated)
    best_dev_payoff_i, best_dev_payoff_1, best_dev_payoff_2, initialize_hpl,
    worst_value_i, worst_value_1, worst_value_2, worst_values, RepeatedGame,
    outerapproximation
)
I don't think all the routines should be imported. Just import important routines (perhaps `RepeatedGame` and `outerapproximation`?).
@@ -0,0 +1,384 @@
"""
Filename: repeated_game.py
Author: Quentin Batista
The author of the original code (Chase Coleman) should also be listed here.
class RepeatedGame:
    """
    Class representing an N-player repeated form game.
Remove "form".
# Create the unit circle, points, and hyperplane levels
C, H, Z = initialize_sg_hpl(rpd, nH)
Cnew = copy.copy(C)
Wouldn't `C.copy()` work?
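It would: for a NumPy array, `copy.copy` delegates to the array's `__copy__`, so the two spellings produce the same shallow copy. A quick check:

```python
import copy

import numpy as np

C = np.array([1.0, 2.0, 3.0])
a = copy.copy(C)   # what the PR currently does
b = C.copy()       # the reviewer's suggestion
C[0] = 99.0        # mutate the original; neither copy should change
```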
warn("Maximum Iteration Reached")

# Update hyperplane levels
C = copy.copy(Cnew)
Wouldn't `C[:] = Cnew` work?
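The difference is that `C[:] = Cnew` writes into the existing buffer instead of binding `C` to a fresh copy, which matters when other names alias `C`. A small illustration:

```python
import numpy as np

C = np.zeros(3)
alias = C                        # second reference to the same buffer
Cnew = np.array([1.0, 2.0, 3.0])
C[:] = Cnew                      # in-place: the alias sees the update, no new allocation
```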
# Set iterative parameters and iterate until converged
itr, dist = 0, 10.0
while (itr < maxiter) & (dist > tol):
Use `and` instead of `&`.
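On plain Python bools the two operators give the same answer, but `and` short-circuits and avoids the surprising precedence of `&` (without the parentheses, `itr < maxiter & dist > tol` would parse as `itr < (maxiter & dist) > tol`). A quick comparison:

```python
itr, maxiter = 0, 500
dist, tol = 10.0, 1e-8

# `&` is bitwise and: it happens to work on bools but always evaluates
# both operands. `and` is the idiomatic logical operator and short-circuits.
via_amp = (itr < maxiter) & (dist > tol)
via_and = (itr < maxiter) and (dist > tol)
```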
tol_int = int(round(abs(np.log10(tol))) - 1)

# Find vertices that are unique within tolerance level
vertices = np.vstack({tuple(row) for row in np.round(vertices, tol_int)})
Are these two blocks really necessary?
They seem to be -- here is the output I get without them:
array([[ 10.00000001, 3.97266052],
[ 10.00000001, 2.99999998],
[ 2.99999998, 10.00000001],
[ 3.97266052, 10.00000001],
[ 9.00000001, 8.99999999],
[ 8.99999999, 9.00000001],
[ 2.99999998, 3.00000001],
[ 2.99999998, 3. ],
[ 2.99999999, 2.99999999],
[ 2.99999998, 3. ],
[ 3.00000001, 2.99999998],
[ 3. , 2.99999998],
[ 2.99999999, 2.99999999],
[ 3. , 2.99999998],
[ 9.00000001, 9. ],
[ 9.00000001, 9. ],
[ 9. , 9.00000001],
[ 9. , 9.00000001]])
I wonder why we have these duplications. We should look into the algorithm.
In `numpy` version 1.13.1 they have updated the `np.unique` function to accept an `axis` argument. Once this is released, we could do something like:
_, inds = np.unique(np.round(vertices, tol_int), axis=0, return_index=True)
vertices = vertices[inds, :]
or just
vertices = np.unique(np.round(vertices, tol_int), axis=0)
depending on whether we want the returned values to be rounded or not.
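A small demonstration of the suggestion, on data shaped like the duplicated output above (the sample `vertices` array is illustrative, not from the PR):

```python
import numpy as np

tol_int = 7
vertices = np.array([[3.00000001, 2.99999998],
                     [3.0,        3.0       ],
                     [9.00000001, 9.0       ]])

# numpy >= 1.13: np.unique accepts an axis argument, so rows that coincide
# after rounding can be dropped without the set-of-tuples / vstack trick
_, inds = np.unique(np.round(vertices, tol_int), axis=0, return_index=True)
deduped = vertices[np.sort(inds), :]   # keep the original, unrounded values
```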
quantecon/game_theory/utilities.py (outdated)

class RGUtil:
    def frange(start, stop, step=1.):
What's wrong with using `np.linspace`?
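For a uniform grid, `np.linspace` with `endpoint=False` reproduces what the generator yields, with the point count given directly. A sketch (this `frange` is a minimal stand-in for the one under review):

```python
import numpy as np

def frange(x0, stop, step=1.0):
    # minimal stand-in: x0, x0 + step, ... strictly below stop
    i, x = 0, x0
    while x < stop:
        yield x
        i += 1
        x = x0 + i * step

gen_grid = np.array(list(frange(0.0, 1.0, 0.2)))
lin_grid = np.linspace(0.0, 1.0, num=5, endpoint=False)
```

`np.linspace` is also vectorized and sidesteps the question of whether the stopping test hits `stop` exactly under floating-point arithmetic.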
quantecon/game_theory/utilities.py (outdated)
        x = x0 + i * step
        yield x

def unitcircle(npts):
I would put this in `repeated_game.py`, as it's very specific to the code there.
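The function's job is to generate the `npts` subgradient directions used by the outer approximation; a hypothetical NumPy version (a sketch, not the PR's implementation):

```python
import numpy as np

def unitcircle(npts):
    # npts evenly spaced points on the unit circle, playing the role of
    # the subgradient matrix H in the JYC outer approximation
    angles = np.linspace(0.0, 2 * np.pi, num=npts, endpoint=False)
    return np.column_stack((np.cos(angles), np.sin(angles)))
```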
pure_nash_exists = pure_nash_brute(sg)

if not pure_nash_exists:
    raise ValueError('No pure action Nash equilibrium exists in stage game')
No need to compute all the pure Nash equilibria.
Try:
try:
    next(pure_nash_brute_gen(sg))
except StopIteration:
    raise ValueError('No pure action Nash equilibrium exists in stage game')
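The pattern works because `pure_nash_brute_gen` is a generator: `next` stops at the first equilibrium found instead of enumerating all of them. A stand-in generator (hypothetical, since the real one needs a stage game) shows the control flow:

```python
def fake_nash_gen(equilibria):
    # hypothetical stand-in for pure_nash_brute_gen(sg)
    for eq in equilibria:
        yield eq

def require_pure_nash(equilibria):
    try:
        next(fake_nash_gen(equilibria))   # cheap: only the first equilibrium
    except StopIteration:
        raise ValueError('No pure action Nash equilibrium exists in stage game')
```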
Force-pushed c7f46e4 to 90646c2, then 90646c2 to a60656a.
This reverts commit b3e720a.
Here is the comparison between
It is interesting that
@oyamad and @QBatista: I refreshed my memories by looking at the implementation of `cartesian` and `gridmake`.
Here is a small gist to test the claims above: https://gist.github.com/albop/a4e6af9311fe9a9a392462ed757018bb
I haven't had time to properly review this, but it looks like @oyamad has done a pretty thorough review. The code looks well organized and nicely written in the 10 minutes I spent reading through it.

One "Python" vs "Julia" comment that I have: in Julia you write functions that take a type as an argument, but in Python you typically attach these functions to the class itself as methods. This means it might make sense to have some of these functions (in particular, any of the functions that take the [...])

I'm not surprised that this code is a bit slower (this is precisely the type of example where one would expect Julia to perform better). It would be nice to investigate whether this could be sped up a little, but I don't think that is a first order priority for now.
@cc7768 I agree that [...]

For speedups we might implement an LP solver (by a simple simplex method) in Numba (as a medium-term project).
Force-pushed a6a5918 to 9f22ddc, then 9f22ddc to 0cd2faa.
@@ -353,7 +353,7 @@ def outerapproximation(rpd, nH=32, tol=1e-8, maxiter=500, check_pure_nash=True,
     b[nH+1] = (1-delta)*flow_u_2(rpd, a1, a2) - \
               (1-delta)*best_dev_payoff_2(rpd, a1) - delta*_w2

-    lpout = linprog(c, A_ub=A, b_ub=b, bounds=(lb, ub))
+    lpout = linprog(c, A_ub=A, b_ub=b, bounds=(lb, ub), method='interior-point')
Should be something like:
def outerapproximation(..., linprog_method='simplex'):
    ...
    lpout = linprog(c, A_ub=A, b_ub=b, bounds=(lb, ub), method=linprog_method)
(`scipy` version 1.0.0 has not been released.)
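A runnable sketch of the forwarding pattern the review asks for. Note that in current SciPy the 2017-era `'simplex'` and `'interior-point'` methods have since been removed in favor of `'highs'`, so the default below is an assumption for illustration, not the PR's; the wrapper name is hypothetical:

```python
from scipy.optimize import linprog

def solve_stage_lp(c, A, b, bounds, linprog_method="highs"):
    # hypothetical wrapper: expose a linprog_method option and pass it
    # straight through to linprog, as suggested for outerapproximation
    return linprog(c, A_ub=A, b_ub=b, bounds=bounds, method=linprog_method)

# tiny LP: minimize x subject to -x <= -1, i.e. x >= 1
res = solve_stage_lp([1.0], [[-1.0]], [-1.0], [(None, None)])
```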
@oyamad Here is a visualization of how the algorithm works on a few experiments: https://nbviewer.jupyter.org/github/QBatista/Notebooks/blob/master/JYC_Algo_Visualization.ipynb

An important observation is that the quality of the approximation is not weakly increasing with the number of gradients. For this game, choosing 32 gradients seems to give a better approximation than choosing 127 gradients, which I suspect is because of the symmetry of payoffs. Additionally, it appears that the approximation is very sensitive to the geometry of the initial guess.
These visualizations are very cool. Nice work @QBatista. I think @thomassargent30 would be quite interested in seeing these visualizations.

Is there a reason you think that the value set corresponding to 32 subgradients looks much better than the one with 127? I kind of see what you mean, since there are a few extra points between (3, 3) and the other two vertices, but they don't seem far off that line. I agree the solution will have some dependence on the geometry of the initial guess (I think the difference between the set with 127 points and with 128 points actually illustrates this nicely -- the 127 point version doesn't necessarily have (0, 1), (1, 0), (0, -1), and (-1, 0) to work with, which seem to be important components of this value set, so it has to place more points along what should be the vertical line).

This algorithm finds the smallest convex set (for a given set of subgradients!) that contains the fixed point of the B operator. A natural way to investigate this further would be to work on writing up the inner approximation, which is also described by JYC. The fixed point of the B operator should lie between the inner and outer approximations -- my suspicion is that there are some very cool graphs you could draw that show the dependence on the initial geometry and how the inner and outer approximations differ for different games/geometries.

I'm hesitant to be too aggressive with popping vertices. It does seem that not all vertices end up mattering very much, but I suspect it is hard to determine algorithmically which are the ones that we should keep. For example, in this game (-1, 0) and (0, -1) seem to be important vertices. I would be interested in seeing some of the tests you described above, though, as a proof-of-concept.
@QBatista Animations look very nice. Maybe we need "a better understanding of the manner in which extreme points of the equilibrium payoff set are generated" (Abreu and Sannikov, 2014). We should study Abreu and Sannikov (as we discussed).
@oyamad Is your last comment suggesting further development of this PR or a new project?
@mmcky I suggested a new project. (I am afraid the JYC algorithm is too inefficient for pure Python/NumPy.)
Adds tools for repeated games, including the outer hyperplane approximation described by Judd, Yeltekin, Conklin 2002. The implementation is currently much slower than the one in Julia.
Julia: 1.532440 seconds (1.17 M allocations: 87.199 MiB, 0.92% gc time)
Python: 1 loop, best of 3: 1min 8s per loop