
eyemcp

An MCP (Model Context Protocol) server that lets users interact with AI through eye tracking. Navigate options by winking and select by closing both eyes.

Built for hands-free interaction with Claude and other MCP-compatible AI assistants.

How It Works

  1. AI presents a question with 2-6 options via the present_options tool
  2. A styled UI window appears with the options
  3. User navigates with eye movements:
    • Left wink - move selection up
    • Right wink - move selection down
    • Close both eyes - confirm selection
  4. The selected option is returned to the AI, which continues the conversation

The camera feed is used for face landmark detection only and is never shown or recorded.
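The navigation in step 3 amounts to a small state update per eye event. A minimal sketch of that logic (function and event names here are illustrative, not the repo's actual code):

```python
def update_selection(index: int, n_options: int, event: str):
    """Map an eye event to (new_index, confirmed).

    Event names are assumptions: "left_wink", "right_wink", "both_closed".
    """
    if event == "left_wink":          # move selection up, clamped at the top
        return max(index - 1, 0), False
    if event == "right_wink":         # move selection down, clamped at the bottom
        return min(index + 1, n_options - 1), False
    if event == "both_closed":        # confirm the current option
        return index, True
    return index, False               # face lost / eyes open: no change
```

For example, starting at option 0 of 3, a right wink moves the selection to option 1, and closing both eyes then confirms it.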

Demo

The UI displays a dark-themed option selector with:

  • Orange accent highlighting for the current selection
  • Green/red dot indicating face detection status
  • Progress bar when confirming a selection

Requirements

  • Python 3.10+
  • Webcam
  • macOS (tested), Linux (should work), Windows (untested)

Installation

# Clone the repo
git clone https://github.com/yunuscode/eyemcp.git
cd eyemcp

# Install dependencies
pip install opencv-python mediapipe mcp

The face_landmarker.task model file is included in the repo (~3.6MB). It's the MediaPipe Face Landmarker model used for eye tracking.

Usage with Claude Code

Add to your Claude Code MCP config (~/.claude/settings.json or project .mcp.json):

{
  "mcpServers": {
    "eyemcp": {
      "command": "python",
      "args": ["/path/to/eyemcp/server.py"],
      "type": "stdio"
    }
  }
}

Then ask Claude anything — it will present choices through the eye tracker UI instead of expecting typed responses.
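For reference, a call to the tool from the model side might look roughly like this (the field names are a guess at the schema; the authoritative definition lives in server.py):

```json
{
  "name": "present_options",
  "arguments": {
    "question": "Which option do you want?",
    "options": ["Yes", "No", "Ask me later"]
  }
}
```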

Project Structure

eyemcp/
├── server.py           # MCP server (stdio transport)
├── eye_ui.py           # Eye tracking + UI subprocess
├── eye_tracker.py      # Standalone eye tracker module
└── face_landmarker.task # MediaPipe model file

How Eye Detection Works

Uses MediaPipe Face Landmarker to detect 478 facial landmarks in real time. The Eye Aspect Ratio (EAR) is calculated for each eye from 6 landmark points; when the EAR drops below a threshold, the eye is considered closed. A differential check between the two eyes distinguishes winks (one eye closed) from blinks (both eyes closed).
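The EAR computation and differential check described above can be sketched as follows (the landmark ordering and the 0.21 threshold are illustrative assumptions, not the repo's exact values):

```python
import math

def ear(eye):
    """Eye Aspect Ratio from six (x, y) landmarks, assumed ordered
    [outer corner, upper-1, upper-2, inner corner, lower-2, lower-1]."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # two vertical eyelid gaps, averaged over the horizontal eye width
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

EAR_THRESHOLD = 0.21  # assumed value; in practice tuned per camera and face

def classify(left_ear, right_ear, thresh=EAR_THRESHOLD):
    """Differential check: one closed eye is a wink, both closed is a blink."""
    left_closed = left_ear < thresh
    right_closed = right_ear < thresh
    if left_closed and right_closed:
        return "blink"        # both eyes closed -> confirm selection
    if left_closed:
        return "left_wink"    # move selection up
    if right_closed:
        return "right_wink"   # move selection down
    return "open"
```

An open eye yields a large EAR (tall vertical gaps relative to width), while a closed eye collapses the vertical distances toward zero, which is why a single threshold separates the two states.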

License

MIT
