how to show game. #4

Closed
xiezhipeng-git opened this issue Mar 19, 2023 · 1 comment

Comments

@xiezhipeng-git

I cannot find a game render function.
How do I show the game? And how can I use this game with another program for comparison?

@baskuit
Owner

baskuit commented Mar 19, 2023

The game is an abstract tree and doesn't really have anything to render. Below I've generated a game with max_actions=2, depth_bound=2, max_transitions=1. I hope it will help you understand the nature of the 'tree game':

index: 0
index matrix:
tensor([[[0, 0],
         [0, 0]]])
value matrix:
tensor([[[0., 0.],
         [0., 0.]]])
index: 1
index matrix:
tensor([[[2, 3],
         [4, 5]]])
value matrix:
tensor([[[-1., -1.],
         [ 1.,  1.]]])
index: 2
index matrix:
tensor([[[0, 0],
         [0, 0]]])
value matrix:
tensor([[[-1., -1.],
         [-1., -1.]]])
index: 3
index matrix:
tensor([[[0, 0],
         [0, 0]]])
value matrix:
tensor([[[-1.,  1.],
         [-1., -1.]]])
index: 4
index matrix:
tensor([[[0, 0],
         [0, 0]]])
value matrix:
tensor([[[ 1.,  1.],
         [-1., -1.]]])
index: 5
index matrix:
tensor([[[0, 0],
         [0, 0]]])
value matrix:
tensor([[[ 1.,  1.],
         [-1.,  1.]]])

The index in the first dimension of the game data tensor identifies the state. With the parameters above, the generated game has 5 states besides the absorbing state: the root and the four states reached from the entries of the root's 2x2 matrix. These have indices 1 through 5, with the root at index 1. Index 0 is just an absorbing state.
The entries of the 'index matrix' are the indices of the states the game transitions to after the row and column players jointly select that entry's pair of actions.
Notice how the index matrix at the root has values 2, 3, 4, and 5. These are the subsequent states. All the other states are terminal, so their index matrices are all 0, meaning they transition to the absorbing state at index 0.
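The transition rule above can be sketched in a few lines of Python. The names here are illustrative (not this project's actual API), and plain nested lists stand in for the printed tensors:

```python
# index_matrices[s][row][col] is the state reached when, in state s, the row
# player picks action `row` and the column player picks action `col`.
# These lists mirror the tensors printed above.
index_matrices = [
    [[0, 0], [0, 0]],  # state 0: absorbing, transitions to itself
    [[2, 3], [4, 5]],  # state 1: root, transitions to states 2..5
    [[0, 0], [0, 0]],  # state 2: terminal
    [[0, 0], [0, 0]],  # state 3: terminal
    [[0, 0], [0, 0]],  # state 4: terminal
    [[0, 0], [0, 0]],  # state 5: terminal
]

def step(state, row_action, col_action):
    """Return the next state index after a joint action."""
    return index_matrices[state][row_action][col_action]

print(step(1, 0, 1))  # from the root, joint action (0, 1) leads to state 3
print(step(3, 0, 0))  # every terminal state transitions to the absorbing state 0
```

From any terminal state, every joint action leads back to index 0, so the game effectively stops there.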
Each entry of the value matrix is the Nash equilibrium payoff for the row player at the state that entry transitions to.
For example, the upper-left entry of the value matrix at the root is -1. This is because the state at index 2 has all values equal to -1 in its own value matrix: no matter what both players do at the index=2 state, they transition to the terminal/absorbing state and the row player receives reward = -1.
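This relationship between a parent's value matrix and its children's value matrices can be checked with a short sketch. The four child games printed above all happen to have pure-strategy saddle points, so their Nash value equals the maximin value; a general matrix game would need a matrix-game solver (e.g. a linear program) instead. Names are illustrative:

```python
# Value matrices of the root's four children, copied from the printout above.
child_value_matrices = {
    2: [[-1, -1], [-1, -1]],
    3: [[-1,  1], [-1, -1]],
    4: [[ 1,  1], [-1, -1]],
    5: [[ 1,  1], [-1,  1]],
}

def maximin(matrix):
    """Row player's guaranteed payoff. Equals the Nash value here because
    each of these 2x2 games has a pure-strategy saddle point."""
    return max(min(row) for row in matrix)

# Rebuild the root's value matrix entry by entry: each entry is the Nash
# payoff of the child state that the corresponding joint action leads to.
root_index_matrix = [[2, 3], [4, 5]]
root_value_matrix = [
    [maximin(child_value_matrices[root_index_matrix[r][c]]) for c in range(2)]
    for r in range(2)
]
print(root_value_matrix)  # [[-1, -1], [1, 1]], matching the printout at index 1
```

The result matches the value matrix printed for the root (index 1) above.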

Other games are outside the scope of this project. You will have to learn from the regularization code in rnad.py and apply it to your use case. Hope this helps!

@baskuit baskuit closed this as completed Mar 19, 2023