ENH: Add AS algorithm. #433

Merged
merged 14 commits from shizejin:add-AS-algorithm into QuantEcon:master on Nov 25, 2018

Conversation

7 participants
@shizejin
Member

shizejin commented Sep 17, 2018

A Python implementation of the AS algorithm (Abreu and Sannikov 2014). (The Julia version is QuantEcon/Games.jl#65.)

Compared to the Julia version, the running time is significantly reduced here by using scipy.spatial.ConvexHull. On my computer, the example from AS runs in 30 ms on average, which is even faster than the original Java implementation (which usually runs in 50 ms, and sometimes more than 100 ms). (Downloadable from here)

A demonstration shows the AS example and a Prisoner's Dilemma example.

Detailed docstrings need to be added. I will see if it's possible to improve the efficiency further.

@coveralls

coveralls commented Sep 17, 2018

Coverage Status

Coverage increased (+0.2%) to 94.597% when pulling 070e900 on shizejin:add-AS-algorithm into ba191fb on QuantEcon:master.

@mmcky mmcky added the in-work label Sep 17, 2018

@jstac

Contributor

jstac commented Sep 18, 2018

@shizejin This looks very nice. Thanks!

When you have time it would be great if you could add the demonstration to this site:

http://notes.quantecon.org/

Perhaps after the code has been merged and the demonstration has been updated (and a little commentary added)...

@cc7768 You reviewed the Julia version. Do you have any thoughts on this?

@mmcky @oyamad I'll leave the rest up to you.

@cc7768

Member

cc7768 commented Sep 19, 2018

This is very cool work @shizejin. The insights of AS are very useful -- it is a great example of "a little extra math can save you a lot of computations."

Happy to review this -- in fact, I may spend some of the time during the code sprint on Saturday reviewing this and going back to finish reviewing the Julia version (I noticed today that we never finished doing that).

@shizejin

Member

shizejin commented Sep 19, 2018

Thanks @cc7768! I will add detailed docstrings in the next few days to make it easier for you to review.

About the Julia version, yes, it has been suspended for a really long time... Actually, after writing up this Python version, I realized I had made some silly bugs in the Julia version. After you review this Python version, I will go back and fix it.

@oyamad

Member

oyamad commented Sep 19, 2018

@shizejin Looks great!

Some initial comments:

  • Consider setting up a class RepeatedGame and implementing the function as a method of that class; refer to #347.
  • Look for a more informative name than AS. I don't have a good idea though: RepeatedGame.get_equilibrium_payoff_set(method='abreu_sannikov') would be too long...
@shizejin

Member

shizejin commented Sep 21, 2018

@oyamad Thanks for the comments!

  • Sure. I will talk with @QBatista, as I believe he already has some code and I should be consistent with it.

I have one question: I am now returning a ConvexHull, as I think it will be convenient to have access to the vertices, simplices, and equations information. I am also thinking about returning a payoff matrix instead. Which one do you think would be better?
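For concreteness, a minimal sketch (not part of the PR itself) of what a returned scipy.spatial.ConvexHull exposes, using made-up 2-D payoff points:

import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical 2-D payoff points; the ConvexHull object gives direct access
# to the vertices, simplices, and equations mentioned above.
points = np.array([[3.0, 3.0], [9.75, 3.0], [9.0, 9.0], [3.0, 9.75], [5.0, 5.0]])
hull = ConvexHull(points)

hull.points[hull.vertices]  # extreme payoff vectors (hull vertices)
hull.simplices              # indices of the points forming each facet
hull.equations              # facet half-space equations [normal, offset]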


@shizejin shizejin force-pushed the shizejin:add-AS-algorithm branch from b543866 to a9d24dd Sep 22, 2018

@shizejin

Member

shizejin commented Sep 26, 2018

I have added a RepeatedGame class and implemented AS as a method of the class.

Here is the latest demonstration, with a Cournot Duopoly game example added.

To do:

  • Add docstrings for the internal functions.

  • Think about a more informative name for AS.

@shizejin shizejin force-pushed the shizejin:add-AS-algorithm branch from 5c5dafc to c273176 Sep 26, 2018

@shizejin shizejin changed the title from [WIP] Add AS algorithm. to Add AS algorithm. Nov 9, 2018

@shizejin

Member

shizejin commented Nov 9, 2018

This is ready for review.

Before the merge, we may want to decide whether to use RepeatedGame.get_equilibrium_payoff_set(method='abreu_sannikov') as suggested by @oyamad, or RepeatedGame.abreu_sannikov() (or just RepeatedGame.AS(), which I am using now; AS is the abbreviation of the algorithm proposed in the paper).

The former is preferable in the sense that it makes it easy to add the outerapproximation algorithm in the future, but I am also concerned that outerapproximation and AS take very different arguments, and I am not sure whether it is possible to use one get_equilibrium_payoff_set() for both of them.

@mmcky mmcky changed the title from Add AS algorithm. to ENH: Add AS algorithm. Nov 11, 2018

@mmcky

Contributor

mmcky commented Nov 11, 2018

@oyamad would you have time to review this PR?

@mmcky mmcky added the enhancement label Nov 11, 2018

@mmcky

Contributor

mmcky commented Nov 11, 2018

  • @mmcky to update documentation to include repeated_games once ready to be merged.
@oyamad
  • I like the idea of returning a ConvexHull instance.

  • I don't have a good suggestion for the name of the method. My only comment is that the name of a function/method should be in lower case (so I would vote against AS).

  • Add reference information for Abreu and Sannikov (2014) somewhere.

  • Inspect the code with pep8online.com or whatever.

  • Raise NotImplementedError if self.N != 2?

  • Consider prepending an underscore to the names of private functions.

        self.N = stage_game.N
        self.nums_actions = stage_game.nums_actions

    def AS(self, tol=1e-12, max_iter=500, u_init=np.zeros(2)):

@mmcky

mmcky Nov 15, 2018

Contributor

@oyamad @shizejin what about using compute_as() as a method name?

or alternatively .compute(method='AS')? Will there be alternative ways of computing the set of payoffs?

@oyamad

oyamad Nov 15, 2018

Member

I am afraid compute would leave it unclear what is being computed...

@mmcky

mmcky Nov 19, 2018

Contributor

@oyamad I think you're right.

@oyamad

Member

oyamad commented Nov 15, 2018

  • An alternative is to implement this method as a function that takes a RepeatedGame as (one of) its arguments, like lemke_howson (possibly in a separate file), and call it just abreu_sannikov.

  • Later, we may add a method, RepeatedGame.get_equilibrium_payoff_set(method='abreu_sannikov') or RepeatedGame.equilibrium_payoffs(method='abreu_sannikov'), also allowing an alias method='AS'.

  • Note that passing different options for different methods is not a big problem. It is usual for a unified interface that covers several different methods; see, for example, the options argument in scipy.optimize.minimize (a sketch of this pattern follows this list).
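To make the suggested pattern concrete, a minimal sketch of a minimize-style front end (the dispatch-table and front-end names here are hypothetical; the method-specific function mirrors the _equilibrium_payoffs_abreu_sannikov helper that appears later in this PR):

# Each method-specific implementation keeps its own signature; the unified
# front end just forwards the `options` dict, as scipy.optimize.minimize does.
_METHODS = {
    'abreu_sannikov': _equilibrium_payoffs_abreu_sannikov,
    # 'outerapproximation': ...,  # to be added later
}

def get_equilibrium_payoff_set(rpg, method='abreu_sannikov', options=None):
    return _METHODS[method](rpg, **(options or {}))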

@shizejin

Member

shizejin commented Nov 15, 2018

  • I think RepeatedGame.equilibrium_payoffs(method='abreu_sannikov') would be a good idea. It is clear and not too long.

  • scipy.optimize.minimize takes fun and x0 as key parameters for all methods, while there are no such common parameters in our case (for abreu_sannikov and outerapproximation). In that sense, we would need to put all the parameters in options, which is a little strange to me. But I guess it's fine.

@shizejin

Member

shizejin commented Nov 19, 2018

A unified interface RepeatedGame.equilibrium_payoffs() has been added, following the style of scipy.optimize.minimize.

The usage is now as follows:

  1. Abreu and Sannikov 2014 example
>>> p1 = gt.Player([[16, 3, 0], [21, 10, -1], [9, 5, -5]])
>>> p2 = gt.Player([[9, 1, 0], [13, 4, -4], [3, 0, -15]])
>>> sg = gt.NormalFormGame([p1, p2])
>>> rpg = gt.RepeatedGame(sg, 0.3)
>>> hull = rpg.equilibrium_payoffs()
>>> hull.points[hull.vertices]
array([[7.33770472e+00, 1.09826253e+01],
       [1.12568240e+00, 2.80000000e+00],
       [7.33770472e+00, 4.44089210e-16],
       [7.86308964e+00, 4.44089210e-16],
       [1.97917977e+01, 2.80000000e+00],
       [1.55630896e+01, 9.10000000e+00]])
  2. Prisoner's Dilemma example
>>> pd_payoff = [[9.0, 1.0], [10.0, 3.0]]
>>> A = gt.Player(pd_payoff)
>>> B = gt.Player(pd_payoff)
>>> sg = gt.NormalFormGame((A, B))
>>> rpg = gt.RepeatedGame(sg, 0.9)
>>> hull = rpg.equilibrium_payoffs(options={'u_init': np.array([3, 3])})
>>> hull.points[hull.vertices]
array([[3.  , 3.  ],
       [9.75, 3.  ],
       [9.  , 9.  ],
       [3.  , 9.75]])

Currently, only the abreu_sannikov method is supported. Therefore, if we pass method='outerapproximation', a NotImplementedError is raised.

>>> hull = rpg.equilibrium_payoffs(method='abreu_sannikov', options={'u_init': np.array([3, 3])})
>>> hull = rpg.equilibrium_payoffs(method='AS', options={'u_init': np.array([3, 3])})
>>> hull = rpg.equilibrium_payoffs(method='outerapproximation', options={'u_init': np.array([3, 3])})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/hyde/QuantEcon.py/quantecon/game_theory/repeated_game.py", line 69, in equilibrium_payoffs
    raise NotImplementedError(msg)
NotImplementedError: method outerapproximation not supported.

It will be very easy to incorporate outerapproximation into this interface in the future.

Would this implementation be better? @oyamad @mmcky

import numpy as np
from scipy.spatial import ConvexHull
from numba import jit, njit

@oyamad

oyamad Nov 19, 2018

Member

Remove jit.

    delta : scalar(float)
        The common discount rate at which all players discount the future.

    """

@oyamad

oyamad Nov 19, 2018

Member

Add docstring for attributes.

        raise NotImplementedError(msg)


def _equilibrium_payoffs_abreu_sannikov(rpg, tol=1e-12, max_iter=500,

@oyamad

oyamad Nov 19, 2018

Member

Does it make sense to export this function under a name like abreu_sannikov (this is what I meant in my previous comment)? In the current version, no documentation is exposed for the AS algorithm implementation.

@shizejin

shizejin Nov 19, 2018

Member

Would it be a little redundant if we export both abreu_sannikov and RepeatedGame.equilibrium_payoffs(method='abreu_sannikov')?

What scipy.optimize.minimize does is create a documentation page for each internal function corresponding to each method (e.g. minimize(method='COBYLA')) and then add a reference link in the documentation of scipy.optimize.minimize (e.g. here). Do you think we can follow this approach?

    return hull


def _best_dev_gains(sg, delta):

@oyamad

oyamad Nov 19, 2018

Member

What's wrong with passing a RepeatedGame (which is a container of sg and delta) as an argument to this function?

    best_dev_gains0 = (1-delta)/delta * \
        (np.max(sg.payoff_arrays[0], 0) - sg.payoff_arrays[0])
    best_dev_gains1 = (1-delta)/delta * \
        (np.max(sg.payoff_arrays[1], 0) - sg.payoff_arrays[1])

@oyamad

oyamad Nov 19, 2018

Member

Tuple comprehension seems to be a better approach here.
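For illustration, a minimal sketch of such a rewrite, assuming the same sg.payoff_arrays and delta as in the snippet above (the tuple is built from a generator expression):

# Hypothetical rewrite of the two assignments above as a single tuple,
# looping over both players' payoff arrays instead of writing each one out.
best_dev_gains = tuple(
    (1 - delta) / delta * (np.max(pa, 0) - pa)
    for pa in sg.payoff_arrays
)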

            return _equilibrium_payoffs_abreu_sannikov(self, **options)
        else:
            msg = f"method {method} not supported."
            raise NotImplementedError(msg)

@oyamad

oyamad Nov 19, 2018

Member

I would expect a ValueError if I pass method='something_nonsense'.

@shizejin

shizejin Nov 20, 2018

Member

That's true. I will change this exception.

Considering that the 'outerapproximation' method is going to be added in the future, shall we raise a separate NotImplementedError for it?
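A minimal sketch of the error handling being discussed, reusing the method names from this PR ('abreu_sannikov'/'AS' implemented, 'outerapproximation' planned); the exact messages are hypothetical:

def equilibrium_payoffs(self, method='abreu_sannikov', options=None):
    # Known but not-yet-implemented method: NotImplementedError.
    # Unknown method: ValueError, as suggested above.
    if options is None:
        options = {}
    if method in ('abreu_sannikov', 'AS'):
        return _equilibrium_payoffs_abreu_sannikov(self, **options)
    elif method == 'outerapproximation':
        raise NotImplementedError(f"method {method} is not implemented yet.")
    else:
        raise ValueError(f"method {method} is not supported.")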

@mmcky

Contributor

mmcky commented Nov 23, 2018

@oyamad this PR is looking close. Should I wait until after we merge this PR to release a new version through PyPI?

@oyamad

oyamad approved these changes Nov 23, 2018

@mmcky I think we can merge this now. We can modify the code afterwards if necessary.

@mmcky

Contributor

mmcky commented Nov 25, 2018

Thanks @oyamad and @shizejin for this contribution. I will merge now and make a new PyPI release this afternoon.

@mmcky mmcky merged commit 17a66a5 into QuantEcon:master Nov 25, 2018

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed
for d in self.game_dicts:
    rpg = RepeatedGame(d['sg'], d['delta'])
    for method in ('abreu_sannikov', 'AS'):
        hull = rpg.equilibrium_payoffs(options={'u_init': d['u']})

@natashawatkins

natashawatkins Nov 27, 2018

Member

@shizejin just wondering, where is method used here?

@shizejin

shizejin Nov 27, 2018

Member

@natashawatkins Thanks for reviewing. I should have passed method='abreu_sannikov' and method='AS' here. The test passed because currently only the "AS" method is supported and it is set as the default.

I will fix this in a few minutes.

@natashawatkins

natashawatkins Nov 27, 2018

Member

No problem, I'm just writing a news item about the latest release.

@natashawatkins

Member

natashawatkins commented Nov 27, 2018

@shizejin did you add the notebook to notes.quantecon.org?
