Application of proximal policy optimization algorithm to the card game Big 2
# Big 2 Self-Play Reinforcement Learning AI

Big 2 is a four-player game of imperfect information with quite a complicated action space: from an initial hand of 13 cards, a player can choose to play singles, pairs, three of a kinds, two pairs, straights, flushes, full houses and more. The aim is to be the first player to play all of your cards, but playing well requires formulating a long-term plan, reasoning about what your opponents are planning, and knowing when to play a hand and when to save it for later. This is my implementation of training an AI to learn the game purely via self-play deep reinforcement learning using the "Proximal Policy Optimization" algorithm. The results have been surprisingly good - my friends and I play this game A LOT every time we go on holiday, and it has got to the point where it convincingly beats all of us over a decent number of games.
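To give a rough sense of why the action space is awkward, here is a minimal sketch (not the repository's actual `enumerateOptions.py`; the hand and card encoding are made up for illustration) counting the candidate plays available from a single 13-card hand, before any poker-hand validity checks on the 5-card subsets:

```python
from itertools import combinations

def deal_hand():
    # Hypothetical fixed 13-card hand for illustration.
    # Cards are "<rank><suit>" strings, e.g. "3D" = three of diamonds.
    return ["3D", "3C", "4H", "5S", "6D", "7C", "8H",
            "9S", "TD", "JC", "QH", "KS", "AD"]

def candidate_plays(hand):
    """Count candidate plays by size: singles, rank-matched pairs and
    triples, and all 5-card subsets (an upper bound, since only
    straights/flushes/full houses etc. are actually legal)."""
    singles = len(hand)
    pairs = sum(1 for a, b in combinations(hand, 2) if a[0] == b[0])
    triples = sum(1 for a, b, c in combinations(hand, 3)
                  if a[0] == b[0] == c[0])
    five_card = sum(1 for _ in combinations(hand, 5))  # C(13, 5) = 1287
    return {"singles": singles, "pairs": pairs,
            "triples": triples, "5-card subsets": five_card}

print(candidate_plays(deal_hand()))
```

Even this crude count shows over a thousand 5-card subsets to filter per hand, which is why a fixed enumeration of action indices is useful when feeding the game into a policy network.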

If you run generateGUI.py you can play against the AI and also see the value it assigns to each state, as well as the probability it assigns to each available option. I've also made a web app using Django so that you can play against the trained networks in a more polished setting here (it may take a while to load). Here are the rules of the game.

I wrote up the details of how I trained the network and posted them to arXiv here!