# Minimal scaffold for a poker bot
- Serve the simulator so Playwright hits a consistent layout:
  `cd simulator && python -m http.server 8000`
- Keep `TABLE_URL` in `main.py` pointing at `http://localhost:8000`.
- Set `DEBUG_DUMP_IMAGES = True` (default) in `main.py` to dump screenshots/crops.
- Images land in `data/debug_frames/` with full frames plus hero/board crops.
- Viewport is fixed to `1200x800` for consistent template generation.
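Because the viewport is pinned to 1200x800, every crop can be expressed as a fixed pixel box. A minimal sketch of how a crop region might be validated and applied, assuming a frame represented as rows of pixels (the `Region` helper and its methods are illustrative, not the scaffold's actual API; the example coordinates mirror the calibration commands below):

```python
from dataclasses import dataclass

VIEWPORT_W, VIEWPORT_H = 1200, 800  # fixed viewport from the scaffold


@dataclass(frozen=True)
class Region:
    """A pixel-aligned crop box inside the fixed viewport."""
    x: int
    y: int
    w: int
    h: int

    def validate(self) -> None:
        # Reject boxes with a bad origin/size or that spill past 1200x800.
        if self.x < 0 or self.y < 0 or self.w <= 0 or self.h <= 0:
            raise ValueError("region must have a non-negative origin and positive size")
        if self.x + self.w > VIEWPORT_W or self.y + self.h > VIEWPORT_H:
            raise ValueError("region exceeds the fixed viewport")

    def crop(self, frame):
        """Cut this region out of a frame given as a list of pixel rows."""
        self.validate()
        return [row[self.x:self.x + self.w] for row in frame[self.y:self.y + self.h]]


# Example box matching the hero_region calibration values used in this README.
hero_region = Region(x=120, y=420, w=240, h=90)
```

In the real scaffold the frame would come from a Playwright screenshot; a plain list of rows keeps the geometry easy to check in isolation.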
Use the calibration CLI to capture screenshots, define regions, and generate card corner templates when you have a real table screenshot.
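The CLI has to keep named regions between invocations somehow; a minimal sketch of a JSON-backed store (the file path, schema, and function names here are assumptions, not the calibration module's documented interface):

```python
import json
from pathlib import Path


def save_region(path: Path, name: str, box: dict) -> None:
    """Merge one named region into a JSON file, creating it on first use."""
    regions = json.loads(path.read_text()) if path.exists() else {}
    regions[name] = box
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(regions, indent=2))


def load_region(path: Path, name: str) -> dict:
    """Look up one named region; raises KeyError if it was never set."""
    return json.loads(path.read_text())[name]
```

Merging rather than overwriting lets each `set-region` call add one box without clobbering the others.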
Capture a screenshot:

```
python -m vision.calibration capture --url http://localhost:8000 --out data/calibration
```

Set regions (example values):

```
python -m vision.calibration set-region --name hero_region --x 120 --y 420 --w 240 --h 90
python -m vision.calibration set-region --name board_region --x 110 --y 260 --w 320 --h 90
```

Preview the regions:

```
python -m vision.calibration preview --image data/calibration/screenshot.png
```

Extract templates (once you have a screenshot with known cards):

```
python -m vision.calibration extract-templates \
  --image data/calibration/screenshot.png \
  --hero-cards "As Kd" \
  --board-cards "2c 7h Jh"
```

- `main.py`: orchestration loop (perception → solver → act)
- `vision/`: screenshotting and card reading stubs (OpenCV/Tesseract)
  - `capture.py`: take screenshots of table
  - `card_reader.py`: template matching for cards
  - `ocr_utils.py`: Tesseract OCR helpers
- `solver/`: lookup and decision stubs
  - `lookup.py`: query precomputed GTO tables
  - `pio_interface.py`: optional live solver control
  - `decision.py`: pick action from solver frequencies
- `automation/`: UI clicker stubs (pyautogui / Playwright / Selenium)
  - `browser_control.py`: execute actions in UI
  - `ui_coords.json`: pre-recorded button coordinates
- `data/`: data files and lookup tables
  - `templates/`: card corner image templates
  - `lookups/`: precomputed GTO solution tables
  - `boards.json`: example board definitions
- `simulator/`: tiny fake-money table for testing
  - `index.html`: simple poker UI
  - `style.css`: green felt styling
  - `script.js`: button click handling
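The last step of the pipeline, picking an action from solver frequencies, is commonly done by sampling in proportion to those frequencies so the bot plays a mixed strategy rather than always taking the highest-frequency line. A minimal sketch (the `pick_action` name and frequency format are assumptions, not the stub's actual signature):

```python
import random


def pick_action(frequencies, rng=None):
    """Sample one action in proportion to solver-reported frequencies.

    `frequencies` maps action names ("fold", "call", "raise", ...) to
    non-negative weights; they need not sum to exactly 1.0.
    """
    if not frequencies:
        raise ValueError("no actions to choose from")
    if sum(frequencies.values()) <= 0:
        raise ValueError("at least one frequency must be positive")
    rng = rng or random.Random()
    actions, weights = zip(*frequencies.items())
    # random.choices normalizes the weights internally, so raw
    # solver frequencies can be passed straight through.
    return rng.choices(actions, weights=weights, k=1)[0]
```

Passing an explicit seeded `random.Random` makes runs reproducible, which helps when replaying logged hands against the simulator.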