An MCP (Model Context Protocol) server that lets users interact with AI through eye tracking. Navigate options by winking and select by closing both eyes.
Built for hands-free interaction with Claude and other MCP-compatible AI assistants.
- AI presents a question with 2-6 options via the `present_options` tool
- A styled UI window appears with the options
- User navigates with eye movements:
  - Left wink - move selection up
  - Right wink - move selection down
  - Close both eyes - confirm selection
- The selected option is returned to the AI, which continues the conversation
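The navigation flow above can be sketched as a small state machine. This is an illustrative sketch only, not the actual code in `eye_ui.py`: the event names (`"left_wink"`, `"right_wink"`, `"both_closed"`) and the wrap-around behavior at the list edges are assumptions.

```python
class OptionSelector:
    """Tracks the highlighted option until a both-eyes close confirms it.

    Hypothetical sketch; event names and wrap-around are assumed, not
    taken from eye_ui.py.
    """

    def __init__(self, options):
        self.options = options
        self.index = 0          # currently highlighted option
        self.confirmed = None   # set once the user confirms

    def handle(self, event):
        if event == "left_wink":        # move selection up (wraps at the top)
            self.index = (self.index - 1) % len(self.options)
        elif event == "right_wink":     # move selection down (wraps at the bottom)
            self.index = (self.index + 1) % len(self.options)
        elif event == "both_closed":    # confirm the current selection
            self.confirmed = self.options[self.index]
        return self.confirmed

selector = OptionSelector(["Yes", "No", "Maybe"])
selector.handle("right_wink")           # highlight "No"
print(selector.handle("both_closed"))   # prints "No"
```

Once `confirmed` is set, the server would return that string to the AI as the tool result and tear down the UI window.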
The camera feed is used for face landmark detection only and is never shown or recorded.
The UI displays a dark-themed option selector with:
- Orange accent highlighting for the current selection
- Green/red dot indicating face detection status
- Progress bar when confirming a selection
- Python 3.10+
- Webcam
- macOS (tested), Linux (should work), Windows (untested)
```sh
# Clone the repo
git clone https://github.com/yunuscode/eyemcp.git
cd eyemcp

# Install dependencies
pip install opencv-python mediapipe mcp
```

The `face_landmarker.task` model file is included in the repo (~3.6 MB). It is the MediaPipe Face Landmarker model used for eye tracking.
Add to your Claude Code MCP config (`~/.claude/settings.json` or project `.mcp.json`):

```json
{
  "mcpServers": {
    "eyemcp": {
      "command": "python",
      "args": ["/path/to/eyemcp/server.py"],
      "type": "stdio"
    }
  }
}
```

Then ask Claude anything; it will present choices through the eye tracker UI instead of expecting typed responses.
```
eyemcp/
├── server.py              # MCP server (stdio transport)
├── eye_ui.py              # Eye tracking + UI subprocess
├── eye_tracker.py         # Standalone eye tracker module
└── face_landmarker.task   # MediaPipe model file
```
Uses MediaPipe Face Landmarker to detect 468 facial landmarks in real time. The Eye Aspect Ratio (EAR) is calculated for each eye from 6 landmark points: when the EAR drops below a threshold, that eye is considered closed. A differential check distinguishes winks (one eye closed) from blinks (both eyes closed).
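The EAR and the differential check can be sketched as follows. This is a minimal standalone sketch, not the code in `eye_tracker.py`: the threshold value, the landmark ordering, and the label strings are illustrative assumptions. Each eye is described by 6 `(x, y)` points, with `p1`/`p4` the horizontal eye corners, `p2`/`p3` on the upper lid, and `p6`/`p5` on the lower lid.

```python
import math

EAR_THRESHOLD = 0.2  # illustrative value; the real threshold lives in eye_tracker.py


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).

    Roughly the eye's height over its width: large when open,
    near zero when closed.
    """
    p1, p2, p3, p4, p5, p6 = pts
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = 2.0 * dist(p1, p4)
    return vertical / horizontal


def classify(left_pts, right_pts):
    """Differential check: one eye below threshold is a wink, both a blink."""
    left_closed = eye_aspect_ratio(left_pts) < EAR_THRESHOLD
    right_closed = eye_aspect_ratio(right_pts) < EAR_THRESHOLD
    if left_closed and right_closed:
        return "blink"
    if left_closed:
        return "left_wink"
    if right_closed:
        return "right_wink"
    return "open"


# Synthetic landmarks: an open eye (EAR = 0.5) and a nearly shut one (EAR = 0.05).
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]
print(classify(closed_eye, open_eye))  # prints "left_wink"
```

In practice a single low-EAR frame is noisy, so the real tracker would also require the state to persist for a few consecutive frames before emitting an event.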
MIT