
GTO Poker Bot that uses reinforcement learning and self-play


arnenoori/gto-poker-bot


GTO Poker Bot

Acknowledgements

This project is part of CSC 481 - Knowledge Based Systems at Cal Poly with Professor Rodrigo Canaan.

The autonomous part of our project is based on the self-operating computer project by HyperwriteAI: https://github.com/OthersideAI/self-operating-computer.

Getting started (usage instructions):


Clone the repository

git clone https://github.com/arnenoori/gto-poker-bot

Create venv

python3 -m venv env

Activate it (macOS/Linux)

source env/bin/activate

Install requirements

pip install -r requirements.txt

Set your OpenAI API key

export OPENAI_API_KEY=yourkeyhere

Go to your poker website of choice. We used www.247freepoker.com and played against its bots.

To run the bot with the fixed-strategy agent (the default), simply run:

python play.py

Three different agents


  • agent_random.py: an agent that makes random decisions (used for testing and comparison)
  • agent_dqn.py: a deep Q-network (DQN) agent
  • fixed.py: a fixed-strategy agent
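
The three agents above can be imagined as sharing a common interface. The sketch below is illustrative only: the class names, the `act` method, and the `hand_strength` field are assumptions, not the repository's actual code.

```python
import random

# Actions a poker agent can choose between (simplified; no bet sizing).
ACTIONS = ["fold", "call", "raise"]

class RandomAgent:
    """Analogous to agent_random.py: picks a uniformly random action."""
    def act(self, state):
        return random.choice(ACTIONS)

class FixedAgent:
    """Analogous to fixed.py: a fixed rule on a hand-strength estimate.
    state["hand_strength"] is assumed to be a float in [0, 1]."""
    def act(self, state):
        if state["hand_strength"] > 0.7:
            return "raise"
        if state["hand_strength"] > 0.3:
            return "call"
        return "fold"
```

A DQN agent would expose the same `act` method but choose the action with the highest learned Q-value instead of a hand-coded threshold.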

Evaluation

You can test the performance of the three agents playing against each other by running:

python evaluate.py
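
A head-to-head evaluation of this kind boils down to playing many hands and tallying the outcomes. The sketch below is a toy stand-in, not the repository's evaluate.py: agents are simple callables and the "showdown" rule is invented for illustration.

```python
import random
from collections import Counter

def showdown(action_a, action_b):
    # Toy rule for illustration: raise beats call beats fold; same action ties.
    rank = {"fold": 0, "call": 1, "raise": 2}
    if rank[action_a] == rank[action_b]:
        return "tie"
    return "a" if rank[action_a] > rank[action_b] else "b"

def evaluate(agent_a, agent_b, hands=1000):
    """Play `hands` hands and count wins for each side plus ties."""
    tally = Counter()
    for _ in range(hands):
        state = {"hand_strength": random.random()}
        tally[showdown(agent_a(state), agent_b(state))] += 1
    return dict(tally)

# Example: an always-raise agent vs an always-call agent.
results = evaluate(lambda s: "raise", lambda s: "call", hands=10)
```

The real evaluation plays full poker hands, but the tallying structure is the same.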

Results

Results after 1,000 hands:

| Outcome | Count |
| --- | --- |
| Fixed model wins | 927 |
| DQN agent wins | 51 |
| Random model wins | 6 |
| Ties | 16 |

How it works:

GPT-4V is used to extract the game state from screenshots of the poker table. The extracted information is then used to make decisions based on the current state of the game.

[Demo GIF]

Example of a screenshot sent to GPT-4V:

[Screenshot of the poker table]

Read more in our technical report.
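
Extracting the game state amounts to sending the screenshot to a vision model and parsing a structured reply. The sketch below shows one plausible shape for this; the prompt wording, the JSON schema, and the model name are assumptions for illustration, not the repository's exact code.

```python
import base64
import json

# Hypothetical prompt asking the vision model for a machine-readable state.
PROMPT = (
    "Extract the poker game state from this screenshot as JSON with keys: "
    "hole_cards, community_cards, pot, to_call, stack."
)

def build_vision_request(image_bytes, model="gpt-4-vision-preview"):
    """Build a chat-completions payload embedding the screenshot as base64."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def parse_game_state(reply_text):
    """Parse the model's JSON reply into a dict the agents can act on."""
    return json.loads(reply_text)

# Example of parsing a (fabricated) model reply:
state = parse_game_state('{"pot": 120, "to_call": 20, "hole_cards": ["Ah", "Kd"]}')
```

In the live loop, the payload would be POSTed to the OpenAI API and the parsed state handed to whichever agent is playing.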

Repository Structure

