Although Gin Rummy was one of the most popular card games of the 1930s and 1940s (Parlett 2020) and remains one of the most popular standard-deck card games (Ranker Community 2020), it has received relatively little Artificial Intelligence research attention. Here, we develop initial steps towards hand strength evaluation in the game of Gin Rummy.
Gin Rummy is a 2-player imperfect-information card game played with a standard (a.k.a. French) 52-card deck. Ranks run from aces low to kings high. The object of the game is to be the first player to score 100 or more points, accumulated through the scoring of individual rounds. We follow standard Gin Rummy rules (McLeod 2020) with North American 25-point bonuses for both gin and undercut.
This research was conducted as part of the EAAI Undergraduate Research Challenge in the summer of 2020. The Java implementation of the game was released by the competition organizer and can be found here. Although this research was published at AAAI-21, a significant extension of this work, on reinforcement learning over graph representations of game states, has been conducted by Sang Truong, Masayuki Nagai, and Shuto Araki.
- Java >= 8
- Python >= 3.5
- TensorFlow/Keras >= 2.3.0
The base Gin Rummy implementation is in Java to comply with the competition guidelines and to enable tournament play, but doing Artificial Intelligence and Machine Learning research in Python is much more convenient. Hence, we wrote the game-playing code in Java and the agent design in Python. To test an agent, first run the Server.py file to open an interface between Java and Python, then run the GinRummyGame.java file to initialize the game. Below is a list of currently supported players/strategies:
- Dual Inception: Uses a convolutional neural network operating on two 4x13 matrices representing the player's hand and an estimate of the opponent's hand. The opponent's hand is estimated using Bayesian reasoning. Training data for the network was generated using Monte Carlo simulation. For more details on this player, please see our associated paper.
- Simple Feedforward Network: Similar to Dual Inception, but without convolutional layers. We implemented this player to test the importance of pattern recognition in the decision-making process. For more details on this player, please see our associated paper.
- Linear Regressor: This player evaluates the game state using a linear combination of several hand-crafted features.
- Linear Regressor with Coevolution: A linear regressor whose value function is trained through coevolution. For more details on this player, please see Kotnik 2013.
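To illustrate the 4x13 input representation used by the network-based players, the sketch below encodes a hand as a binary suit-by-rank matrix. The card-string format and the suit ordering are assumptions for illustration, not the exact encoding used in the released code:

```python
import numpy as np

RANKS = "A23456789TJQK"  # aces low to kings high, per the rules above
SUITS = "CHSD"           # suit ordering is an assumption

def hand_to_matrix(hand):
    """Encode a hand as a 4x13 binary matrix (rows: suits, cols: ranks)."""
    m = np.zeros((4, 13), dtype=np.float32)
    for card in hand:
        rank, suit = card[0], card[1]
        m[SUITS.index(suit), RANKS.index(rank)] = 1.0
    return m

# Example: a 10-card hand containing a spade run and a set of sevens
hand = ["AS", "2S", "3S", "7C", "7H", "7D", "9C", "JH", "QD", "KC"]
matrix = hand_to_matrix(hand)
```

Two such matrices, one for the player's hand and one for the estimated opponent hand, form the two-channel input described for the Dual Inception player.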
As the project continues to evolve, please direct any questions, feedback, or comments to sttruong@stanford.edu.
This project was started by Sang Truong in summer 2020 under the mentorship of Professor Todd Neller and was financially supported by the Cornelsen Charitable Foundation Fund for Career Preparation at DePauw University. We would like to express our great appreciation to Seoul Robotics Co., Ltd. and Dr. Minh Truong for giving Sang Truong the opportunity to complete this research as part of his internship with the company during Summer 2020. We thank Hoang Pham and Hieu Tran for their help with game and software testing.
@inproceedings{truong2021ginrummy,
title={A Data-Driven Approach for Gin Rummy Hand Evaluation},
author={Sang Truong and Todd Neller},
booktitle={Proceedings of the 35th AAAI Conference on Artificial Intelligence},
year={2021},
url={https://ojs.aaai.org/index.php/AAAI/article/view/17843}
}