A tool that converts chess scoresheets (game protocols) from images, for example photos taken with a phone, into PGN format for further analysis. The conversion is done using AI vision services; currently, only the OpenAI API is fully supported.
- Clone this repository
- Install dependencies:

  ```sh
  npm install
  ```

- Copy the `template.env` file to `.env` and fill in your API keys
This repository contains two related projects in the src/ folder:
- `src/cli`: the command-line conversion utility that converts chess game protocols from images to PGN using AI vision services.
- `src/mobile`: a React Native / Expo mobile app that lets you photograph scoresheets and convert them to PGN on-device.
Choose the subproject you want to use and follow the instructions below.
- `src/cli`: CLI tool (TypeScript)
- `src/mobile`: mobile Expo app
- `template.env`: example environment variables (copy to each subproject as described below)
- Install dependencies for the CLI:

  ```sh
  cd src/cli
  npm install
  ```

- Create a `.env` file for the CLI (copy from the repository root):

  ```sh
  cp ../../template.env .env
  # then edit .env and set OPENAI_API_KEY
  ```

- Run in development mode (uses `ts-node`):

  ```sh
  npm run dev -- -i path/to/image.jpg -o out.md
  ```

- Build and install the CLI globally (optional):

  ```sh
  npm run build
  npm link
  # now you can run: chess-ocr-cli -i image.jpg -o out.md
  ```

Notes:

- The dev/start scripts run the TypeScript source directly with `ts-node`.
- The `build` script compiles to `dist/`, and the package `bin` entry points to `dist/index.js`.
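For reference, the `bin` wiring described in the notes usually lives in `src/cli/package.json` and looks roughly like this (a sketch: only the `chess-ocr-cli` command name and the `dist/index.js` entry point come from this README, the remaining fields are assumptions):

```json
{
  "name": "chess-ocr-cli",
  "bin": {
    "chess-ocr-cli": "dist/index.js"
  },
  "scripts": {
    "dev": "ts-node src/index.ts",
    "build": "tsc"
  }
}
```

After `npm link`, npm symlinks the `bin` entry onto your `PATH`, which is why the global `chess-ocr-cli` command becomes available.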
- Install dependencies for the mobile app:

  ```sh
  cd src/mobile
  npm install
  ```

- Start the Expo dev server:

  ```sh
  npm start
  # or use: npx expo start
  ```

Scan the QR code with Expo Go on your phone (or use tunnel mode with `npx expo start --tunnel`).
The repository includes `template.env` with example variables:

```sh
OPENAI_API_KEY=your_openai_api_key_here
# GEMINI_API_KEY is reserved for future versions and currently not in use
GEMINI_API_KEY=your_gemini_api_key_here
```

Copy `template.env` into each subproject directory as `.env` and set the values before running the CLI or mobile app.
Run the CLI from `src/cli`:

```sh
cd src/cli
npm run dev -- -i ./sample/game1.jpg -o ./sample/game1.md
```

- Currently implemented: OpenAI Vision API integration for image analysis and PGN output.
- Planned: Gemini Vision integration, batch processing, and additional output formats.
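Vision models often return move text with inconsistent numbering and stray whitespace, so a small normalization pass is useful before writing the PGN output. The helper below is a minimal sketch of that idea (the function name and approach are hypothetical, not the CLI's actual implementation):

```typescript
// Normalize raw move text from a vision model into standard PGN
// movetext: strip whatever numbering the model emitted, then rebuild
// "1. e4 e5 2. Nf3 ..." numbering from the bare SAN tokens.
// Hypothetical helper; not part of the actual CLI source.
function normalizeMovetext(raw: string): string {
  const tokens = raw
    .replace(/\d+\.(\.\.)?/g, " ") // drop "1." / "1..." numbering
    .split(/\s+/)
    .filter((t) => t.length > 0);

  const parts: string[] = [];
  for (let i = 0; i < tokens.length; i += 2) {
    const moveNo = i / 2 + 1;
    const white = tokens[i];
    const black = tokens[i + 1];
    parts.push(black ? `${moveNo}. ${white} ${black}` : `${moveNo}. ${white}`);
  }
  return parts.join(" ");
}
```

For example, `normalizeMovetext("1.e4 e5 2.Nf3 Nc6")` yields `"1. e4 e5 2. Nf3 Nc6"`. A real implementation would also validate each SAN token against a legal-move generator, since OCR output can contain illegal or misread moves.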
- iPhone: Use the Camera app → tap the link to open in Expo Go
- Android: Open Expo Go → scan the QR code
⚠️ Your phone and computer must be on the same Wi-Fi network.
Use tunnel mode to share with anyone, anywhere:
```sh
npx expo start --tunnel
```

Send the QR code or the `exp://` URL to the other person. They scan it with Expo Go.
To build an installable .apk you can share directly:
```sh
npm install -g eas-cli
eas login
eas build --platform android --profile preview
```

Add this to `src/mobile/eas.json` first:
```json
{
  "build": {
    "preview": {
      "distribution": "internal",
      "android": { "buildType": "apk" }
    }
  }
}
```

When the build completes, you'll get a download link to share.
- Open the app → tap the ⚙️ gear icon to go to Settings
- Enter your OpenAI API key and tap Save
- Go back to the home screen
- Tap Take Photo or Pick from Gallery
- The AI will analyze the scoresheet and return PGN
- Use Copy or Share to export the result
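Under the hood, the analyze step boils down to a single OpenAI chat-completions request with the photo embedded as a base64 data URL. The sketch below shows the payload shape; the model name and prompt wording are assumptions, not taken from the app's source:

```typescript
// Build the JSON body for OpenAI's /v1/chat/completions endpoint with
// an image attached as a data URL. Sketch only: "gpt-4o" and the
// prompt text are assumed, not the app's actual values.
function buildVisionRequest(base64Jpeg: string) {
  return {
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Read this chess scoresheet and return the game as PGN.",
          },
          {
            type: "image_url",
            image_url: { url: `data:image/jpeg;base64,${base64Jpeg}` },
          },
        ],
      },
    ],
  };
}
```

The body would then be POSTed with `fetch`, sending the key saved in Settings as an `Authorization: Bearer` header.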