An accessibility-focused Python application that enables mouse control through eye movement and clicking through intentional blinking.
EyeCursor uses computer vision and facial landmark detection to track eye gaze and translate it into cursor movement. It's designed with HCI principles in mind, prioritizing:
- Accessibility: Hands-free operation for users with motor disabilities
- Usability: Simple calibration and intuitive controls
- Reliability: Smooth cursor movement and intentional blink detection
- Performance: Real-time operation on standard hardware
- Eye Tracking Cursor Control: Move the cursor by looking where you want to go
- Blink Detection: Long blink (>300ms) for left click, double blink for right click
- Calibration System: 5-point calibration for accurate screen mapping
- Smooth Movement: Exponential moving average eliminates cursor jitter
- Visual Feedback: Real-time display of eye tracking status and detection points
- Safety Features: Cooldown periods prevent accidental multiple clicks
EyeCursor uses MediaPipe Face Mesh to detect 468 facial landmarks in real-time. For eye tracking, it specifically uses:
- Iris landmarks: Precise center points of each eye (4 points per iris)
- Eye aspect ratio (EAR): Vertical/horizontal eye ratio for blink detection
- Gaze point: Average of left and right eye centers
The gaze point is mapped from normalized camera coordinates (0-1) to screen coordinates using calibration data.
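As an illustration of that mapping step, the sketch below fits a least-squares affine transform from normalized gaze coordinates to screen pixels using the calibration samples. The function names and the affine model are assumptions for illustration; the project's calibration.py may use a different scheme.

```python
import numpy as np

def fit_linear_map(gaze_points, screen_points):
    """Fit an affine map from normalized gaze coords (0-1) to screen pixels.

    gaze_points: (N, 2) gaze samples collected during calibration.
    screen_points: (N, 2) pixel positions of the calibration targets.
    """
    g = np.asarray(gaze_points, dtype=float)
    s = np.asarray(screen_points, dtype=float)
    # Augment with a bias column so the fit includes an offset term.
    A = np.hstack([g, np.ones((len(g), 1))])
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coeffs  # shape (3, 2): scale-x, scale-y, and offset rows

def map_gaze(coeffs, gaze_xy):
    """Apply the fitted map to one normalized gaze point."""
    x, y = gaze_xy
    return tuple(np.array([x, y, 1.0]) @ coeffs)
```

With five calibration points, the least-squares fit absorbs both the camera-to-screen scaling and any constant offset in the gaze estimate.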
Intentional blinks are distinguished from natural blinking using:
- Eye Aspect Ratio (EAR): Measures eye openness
  - Open eye: EAR ≈ 0.25-0.35
  - Closed eye: EAR < 0.25
- Duration filtering: Intentional blinks last longer (>300ms)
- Cooldown periods: Prevent multiple triggers from a single blink
- Double-blink pattern: Two blinks within 400ms = right click
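Putting those rules together, here is a minimal sketch of how such a classifier could be structured. The class and method names, and the cooldown value, are illustrative assumptions; the project's blink_detector.py may be organized differently.

```python
EAR_CLOSED = 0.25      # eye counts as closed below this EAR (from the text)
LONG_BLINK_MS = 300    # minimum closure duration for a left click
DOUBLE_BLINK_MS = 400  # maximum gap between two blinks for a right click
COOLDOWN_MS = 500      # assumed cooldown between clicks

class BlinkClassifier:
    """Turns a stream of (EAR, timestamp) samples into click events."""

    def __init__(self):
        self.closed_since = None    # timestamp when the eyes closed
        self.last_blink_end = None  # end time of the previous short blink
        self.last_click = None      # time of the last emitted click

    def update(self, ear, now_ms):
        """Feed one sample; returns 'left', 'right', or None."""
        if ear < EAR_CLOSED:
            if self.closed_since is None:
                self.closed_since = now_ms
            return None
        if self.closed_since is None:
            return None  # eyes were already open
        duration = now_ms - self.closed_since
        self.closed_since = None
        if self.last_click is not None and now_ms - self.last_click < COOLDOWN_MS:
            return None  # still in cooldown after the previous click
        if duration > LONG_BLINK_MS:
            self.last_click = now_ms
            return "left"  # long deliberate blink
        if (self.last_blink_end is not None
                and now_ms - self.last_blink_end < DOUBLE_BLINK_MS):
            self.last_click = now_ms
            self.last_blink_end = None
            return "right"  # two quick blinks
        self.last_blink_end = now_ms
        return None
```

Because natural blinks are short single closures, they fall through every branch and produce no click.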
Raw gaze data is noisy. EyeCursor applies exponential moving average:
position = old_position × smoothing_factor + target × (1 - smoothing_factor)
This creates smooth, natural cursor movement with minimal perceptible lag.
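That update rule can be written directly as a small helper. The constant value here is an assumption; the real default lives in config.py.

```python
SMOOTHING_FACTOR = 0.8  # assumed value; closer to 1 = smoother but slower to respond

def smooth(position, target, factor=SMOOTHING_FACTOR):
    """One exponential-moving-average step from position toward target."""
    x, y = position
    tx, ty = target
    return (x * factor + tx * (1 - factor),
            y * factor + ty * (1 - factor))
```

Calling this once per frame blends each new gaze sample into the running position, so a single noisy frame moves the cursor only a fraction of the jump, while a sustained gaze shift converges quickly.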
- Python 3.8+
- Webcam
- 4GB RAM minimum
- Works on Windows, macOS, and Linux
- Clone or download the project:

  ```
  cd eyecursor
  ```

- Create a virtual environment (recommended):

  ```
  python -m venv venv
  ```

- Activate the virtual environment:
  - Windows:

    ```
    venv\Scripts\activate
    ```

  - macOS/Linux:

    ```
    source venv/bin/activate
    ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Run the application:

  ```
  python main.py
  ```
- Ensure good lighting on your face
- Position yourself about 50-70cm from the camera
- Keep your head relatively still during use
| Key | Action |
|---|---|
| `C` | Start calibration |
| `R` | Reset cursor position |
| `P` | Pause/Resume tracking |
| `Q` | Quit |
- Press `C` to start calibration
- Look at each circle that appears (5 points)
- Keep your gaze steady on each point
- Calibration completes automatically
| Action | How to Perform |
|---|---|
| Move cursor | Look at target location |
| Left click | Close eyes for >300ms, then open |
| Right click | Blink twice quickly (<400ms between) |
- Lighting: Bright, even lighting on your face
- Position: Center yourself in the frame
- Distance: 50-70cm from camera
- Background: Plain background works best
- Head position: Keep head still, move only eyes
- Blinking: Practice long, deliberate blinks for clicks
- Double-click: Blink-blink quickly for right click
- Calibration: Recalibrate if accuracy degrades
| Issue | Solution |
|---|---|
| Cursor jitter | Recalibrate; ensure good lighting |
| False clicks | Blink longer/more deliberately |
| Missed clicks | Blink faster (but still >300ms) |
| Face not detected | Check lighting; move closer to camera |
| Cursor drifts | Recalibrate; check head position |
```
eyecursor/
├── main.py               # Application entry point
├── config.py             # Configuration settings
├── eye_tracker.py        # Eye detection and tracking
├── blink_detector.py     # Blink detection logic
├── cursor_controller.py  # Cursor movement and smoothing
├── calibration.py        # Calibration system
├── ui.py                 # User interface
├── requirements.txt      # Dependencies
└── README.md             # This file
```
- OpenCV: Video capture and image processing
- MediaPipe: Face mesh detection (468 landmarks)
- PyAutoGUI: Mouse cursor control
- NumPy: Numerical operations
- Frame rate: 30 FPS on modern laptops
- Latency: ~50-100ms (eye movement to cursor response)
- CPU usage: ~15-25% on quad-core processors
PyAutoGUI's failsafe is disabled for this demo. To stop the program if the cursor becomes uncontrollable:
- Press `Alt+Tab` to switch windows
- Press `Q` in the EyeCursor window
- Or use `Ctrl+C` in the terminal
Edit `config.py` to adjust:

- `SMOOTHING_FACTOR`: Cursor smoothness (0-1)
- `BLINK_THRESHOLD`: Sensitivity of blink detection
- `MOVEMENT_SCALE_X/Y`: Cursor speed
- `BLINK_COOLDOWN_MS`: Time between clicks
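A sketch of what config.py might contain; all values below are illustrative defaults, not the project's actual settings.

```python
# config.py -- illustrative values only; tune for your hardware and preference
SMOOTHING_FACTOR = 0.8    # 0-1; higher = smoother but less responsive cursor
BLINK_THRESHOLD = 0.25    # EAR below this counts as a closed eye
MOVEMENT_SCALE_X = 1.5    # horizontal cursor speed multiplier
MOVEMENT_SCALE_Y = 1.5    # vertical cursor speed multiplier
BLINK_COOLDOWN_MS = 500   # minimum time between registered clicks
```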
- Multi-monitor support
- Adjustable sensitivity settings
- Click-and-drag gesture
- Scroll gesture (look up/down at screen edge)
- Voice command integration
- Profile system for multiple users
MIT License - See LICENSE file for details
- MediaPipe by Google for face mesh detection
- PyAutoGUI by Al Sweigart for mouse control
- OpenCV team for computer vision tools
For issues or questions:
- Check the troubleshooting section above
- Ensure all dependencies are correctly installed
- Verify your webcam is working
- Try recalibrating in different lighting
