gitgutgit/mit-hardmode-ImposterAI

MIT Hackathon - Liar Game

🔗 Project Link :D

Setup

For a quick command reference, see commands.md.

1. Create and activate a virtual environment

python3 -m venv mitv
source mitv/bin/activate

2. Install dependencies

pip install -r requirements.txt

3. Configure environment variables

Create a .env file by copying .env.example, then fill in your real OpenAI API key. Use .env.example as the reference format.

cp .env.example .env
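After copying, the file should contain your key. This is an illustrative sketch; check .env.example for the exact variable name your code reads (OPENAI_API_KEY is an assumption here):

```
# .env — keep this file out of version control
OPENAI_API_KEY=sk-...your-key-here...
```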

How To Run

Use the client-server flow for the actual voice game:

Terminal 1

python3 -m uvicorn server:app --port 8000 --host 0.0.0.0

Terminal 2

Local development client for Mac/Windows testing

python3 pi_client.py  

Raspberry Pi deployment client

python3 pi_client_real.py

File Roles

server.py

  • FastAPI backend for the game.
  • Handles setup, player registration, role reveal flow, turn progression, voting, result generation, TTS prompt generation, and STT processing.
  • pi_client.py and pi_client_real.py both communicate with this server over HTTP.

pi_client.py

  • Local development client for Mac/Windows testing.
  • Hardware behavior is mocked with terminal prints, so no Raspberry Pi, LEDs, or LCD are required.
  • Works as the front-end controller that polls server.py, records audio, and sends player actions to the backend.
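The polling flow can be sketched roughly as below. The /state endpoint path, the state fields (phase, current_player), and the polling interval are assumptions for illustration, not the actual API of server.py:

```python
import json
import urllib.request

SERVER = "http://localhost:8000"  # matches the uvicorn command above

def fetch_state():
    """Poll the server for the current game state (hypothetical /state endpoint)."""
    with urllib.request.urlopen(f"{SERVER}/state") as resp:
        return json.load(resp)

def handle_state(state):
    """Mocked hardware: print what a real client would show on LEDs/LCD."""
    phase = state.get("phase")
    if phase == "reveal":
        print("[LCD] Check your role!")
    elif phase == "turn":
        print(f"[LED] Player {state.get('current_player')}'s turn")
    elif phase == "vote":
        print("[LCD] Vote now")
    return phase

# Typical loop (requires the server running):
#   import time
#   while handle_state(fetch_state()) != "finished":
#       time.sleep(1.0)
```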

pi_client_real.py

  • Raspberry Pi deployment client.
  • Same server communication flow as pi_client.py, but with real hardware control for serial LED commands and LCD output.
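One way to keep the two clients in sync is to share the game flow and swap only a hardware layer. The sketch below illustrates that idea; the class and method names are hypothetical, not the repository's actual structure:

```python
class MockHardware:
    """pi_client.py-style backend: prints instead of driving hardware."""
    def __init__(self):
        self.log = []  # record calls so behavior can be checked in tests
    def led(self, color):
        self.log.append(("led", color))
        print(f"[mock LED] {color}")
    def lcd(self, text):
        self.log.append(("lcd", text))
        print(f"[mock LCD] {text}")

class PiHardware:
    """pi_client_real.py-style backend: real serial LED commands and LCD output."""
    def __init__(self, serial_port):
        self.serial_port = serial_port  # e.g. a pyserial Serial object
    def led(self, color):
        self.serial_port.write(f"LED {color}\n".encode())
    def lcd(self, text):
        self.serial_port.write(f"LCD {text}\n".encode())

def show_turn(hw, player):
    """Shared flow code calls the same interface on either backend."""
    hw.led("green")
    hw.lcd(f"Player {player}'s turn")
```

With this split, the server-communication logic never changes between the Mac/Windows and Raspberry Pi clients; only the backend object passed in does.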

main.py

  • Standalone terminal-only version of the game.
  • It does not bridge pi_client.py or pi_client_real.py to server.py; it is independent of all three.
  • It runs its own local game flow without the FastAPI client-server architecture.
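The voting and turn-progression steps common to the local and server versions can be sketched as a simple majority tally. The functions below are an illustrative reconstruction, not the actual implementation:

```python
from collections import Counter

def tally_votes(votes):
    """votes maps voter -> accused player; returns (eliminated, is_tie)."""
    counts = Counter(votes.values())
    (top, top_n), *rest = counts.most_common()
    tie = any(n == top_n for _, n in rest)  # another player matched the top count
    return (None, True) if tie else (top, False)

def next_turn(current, num_players):
    """Advance turn order, wrapping around the table."""
    return (current + 1) % num_players
```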

Prompt Optimization

We treated prompting as an iterative optimization process. Rather than fine-tuning model weights, we iterated through different prompt versions based on what players experienced in play. At one point, the AI players' clues gave the liars enough information to identify the secret word easily, which undermined the game's core objective. We therefore revised and tested new prompts in real-time games and used our best judgment to keep the better-performing versions. The result was a human-in-the-loop prompt refinement cycle: instead of updating model parameters, we updated behavior by observing failure cases, identifying preferable responses, and continuously revising the instructions. We make no claim to a formal benchmark, but later prompt versions produced more balanced clues and more playable rounds than earlier ones.
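The refinement cycle can be illustrated with versioned prompt templates. The texts below are hypothetical examples of the kind of constraint added between iterations, not the prompts actually shipped:

```python
# Hypothetical prompt versions from the human-in-the-loop cycle.
CLUE_PROMPTS = {
    # v1: too loose — clues often revealed the secret word outright.
    "v1": "Give a clue about the secret word: {word}.",
    # v2: added after observing failures — the clue must stay indirect.
    "v2": (
        "Give a one-sentence clue about the secret word: {word}. "
        "Do not say the word itself, spell it, or use an obvious synonym. "
        "Keep the clue vague enough that the liar cannot guess it immediately."
    ),
}

def build_clue_prompt(version, word):
    return CLUE_PROMPTS[version].format(word=word)

# Selection was manual: play rounds with each version and keep
# whichever produced balanced clues (here, "v2").
BEST_VERSION = "v2"
```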
