Phantom is a breakthrough system that tackles phantom limb pain at multiple levels, from decoding neural signals to delivering real-time haptic, visual, and voice feedback. By combining EEG brain signal processing, prosthetic sensing, haptic stimulation, and an AI-powered assistant, Phantom creates a closed-loop system that interrupts, reduces, and manages phantom pain in amputees.
💡 Why this matters:
- Up to 80% of amputees suffer from phantom limb pain, one of the most persistent challenges in rehabilitation.
- A recent Nature article (Aug 2025) confirms that haptic feedback is one of the most reliable treatments for phantom limb pain.
- Phantom demonstrates how affordable, open-source hardware and AI tools can be combined into a scalable solution with clinical potential.
- Detects Brain Signals: Reads EEG activity from the somatosensory cortex & parietal lobe, where pain and touch are represented.
- Localizes Pain Perception: Maps brain activity to phantom pain/touch sensations in real time.
- Triggers Feedback Loops: Converts those signals into synchronized haptic (vibration), visual (LED), and audio cues.
- Prosthetic Integration: A servo-driven prosthetic hand triggers haptic vibration whenever it touches a surface, restoring feedback to the residual limb.
- AI Assistant: A voice-enabled chat system helps users interact with their prosthetic, adjust intensity, and monitor feedback.
👉 In short: Phantom creates a seamless closed-loop system that translates thought → detection → feedback → relief.
```bash
# Install Arduino server dependencies (run from project root)
npm install express cors serialport @serialport/parser-readline

# Upload the haptic feedback sketch to Arduino
arduino-cli upload --fqbn arduino:renesas_uno:unor4wifi --port /dev/tty.usbmodem* hardware/routines1_test

# From project root, run:
./start_haptic_system.sh
```

This starts both servers automatically!
Terminal 1 - Start Arduino Server:
```bash
# Run from project root directory
node arduino-server.js
```

You should see:

```
Arduino server running on port 3001
Found Arduino at /dev/tty.usbmodem...
Serial port opened successfully
```
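The server finds the board by scanning for a macOS-style USB modem device. A minimal sketch of that port-discovery step in Python (illustrative only; the actual server does this in Node.js with the serialport package):

```python
import glob

def find_arduino_port(pattern="/dev/tty.usbmodem*"):
    """Return the first serial device matching the macOS Arduino pattern, or None."""
    matches = sorted(glob.glob(pattern))
    return matches[0] if matches else None

port = find_arduino_port()
print(port)  # e.g. a /dev/tty.usbmodem... path, or None if no board is attached
```

If this returns None, the board is not enumerating; check the USB cable before debugging the server.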
Terminal 2 - Start Dashboard:
```bash
# Navigate to dashboard and start
cd phantom-dashboard
npm run dev
```

- Open browser at http://localhost:3000
- Go to the chat interface
- Type `play haptic feedback` - the Arduino will trigger the haptic motor!
- Board: Arduino UNO R4 WiFi
- Haptic Driver: Adafruit DRV2605
- Connection: Haptic driver connected via I2C/Qwiic
- FQBN: `arduino:renesas_uno:unor4wifi`
- Node.js (v18+)
- Arduino CLI (version 1.3.1+)
- VS Code (optional, for development)
You can run these tasks via `Cmd+Shift+P` → "Tasks: Run Task":
- Arduino: Compile - Compiles the sketch without uploading
- Arduino: Upload - Uploads the compiled sketch to the board
- Arduino: Compile and Upload - Compiles and uploads in sequence
- Arduino: List Boards - Shows connected Arduino boards
- Arduino: Stop/Interrupt Task - Interrupts running tasks
The following keyboard shortcuts are available to interrupt running tasks:
- `Ctrl+C` - Terminates the currently running task
- `Cmd+Shift+X` - Alternative shortcut to terminate tasks
- `Escape` - Terminates the task when the terminal is focused
These shortcuts work when any Arduino task is running (compile, upload, etc.).
```
phantom/
├── .vscode/
│   ├── tasks.json                  # VS Code task definitions
│   └── keybindings.json            # Keyboard shortcuts
├── arduino-cli.yaml                # Arduino CLI configuration
├── arduino-server.js               # Node.js server for Arduino serial communication
├── whisper-server.py               # Python server for voice transcription
├── hardware/
│   ├── routines1_test/
│   │   └── routines1_test.ino      # Haptic feedback Arduino sketch
│   ├── prosthetic/
│   │   └── proximity_buzzer.ino    # Proximity sensing with buzzer feedback
│   ├── led_blink/
│   │   └── led_blink.ino           # Basic LED blink sketch
│   └── rainbow_led/
│       └── rainbow_led.ino         # Rainbow breathing effect
├── model_training/                 # EEG signal processing & ML models
│   ├── preprocess.py               # EEG data preprocessing pipeline
│   ├── features.py                 # Feature extraction utilities
│   ├── train_model.py              # Binary touch detection training
│   ├── train_model_multiclass.py   # Multiclass marker detection
│   ├── inference.py                # Real-time prediction engine
│   ├── touch_detection_model.pkl   # Trained binary classifier (69.6% accuracy)
│   └── multiclass_model.pkl        # Multiclass marker detector
├── phantom-dashboard/              # Next.js web dashboard
│   ├── components/
│   │   └── coach-chat.tsx          # AI chat with voice & haptic integration
│   ├── hooks/
│   │   └── use-voice-recording.ts  # Voice recording hook
│   └── app/
│       └── api/
│           └── transcribe/         # Whisper API integration
├── start_haptic_system.sh          # Startup script for haptic system
├── start_complete_system.sh        # Full system startup (all services)
├── requirements.txt                # Python dependencies
└── README.md
```
When creating a new Arduino sketch:
- Create directory: `hardware/your_sketch_name/`
- Create sketch file: `hardware/your_sketch_name/your_sketch_name.ino`
- Let Windsurf generate the VS Code tasks automatically
Each sketch gets its own compile and upload tasks in VS Code for clean separation.
- Open VS Code in this workspace
- Press `Cmd+Shift+P`
- Type "Tasks: Run Task"
- Select "Arduino: Compile and Upload"
```bash
# Compile only
arduino-cli compile --fqbn arduino:renesas_uno:unor4wifi hardware/tests

# Upload (requires compilation first)
arduino-cli upload --fqbn arduino:renesas_uno:unor4wifi --port /dev/cu.usbmodem34B7DA631B182 hardware/tests

# Compile and upload in one command
arduino-cli compile --fqbn arduino:renesas_uno:unor4wifi hardware/tests && arduino-cli upload --fqbn arduino:renesas_uno:unor4wifi --port /dev/cu.usbmodem34B7DA631B182 hardware/tests
```

The hardware/tests/tests.ino file contains a simple LED blink program that:
- Blinks the built-in LED every second
- Outputs status messages to Serial Monitor
- Demonstrates basic Arduino functionality
To view serial output:
```bash
arduino-cli monitor --port /dev/cu.usbmodem34B7DA631B182 --config baudrate=9600
```

- Touch Detection Model: Achieved 69.6% test accuracy for binary classification (touch vs. no-touch)
- Multiclass Marker Detection: Supports marker 1, marker 2, or none classification
- Real-time Inference: Optimized prediction pipeline for live EEG data streams
- Feature Engineering: Advanced frequency band analysis (delta, theta, alpha, beta, gamma)
- Channel Differencing: Implements C3-C4, P3-P4, P7-P8, T7-T8 spatial features
- Whisper API: Integrated OpenAI Whisper for voice transcription
- Voice Recording Hook: Real-time audio capture in React dashboard
- Voice-to-Command: Voice input triggers haptic feedback and AI responses
- Proximity Sensing: Added ultrasonic sensor with buzzer feedback for obstacle detection
- Multi-modal Feedback: Combined haptic, audio, and visual feedback systems
- Complete Pipeline: EEG → Feature Extraction → ML Model → Command → Arduino → Haptic/Audio Feedback
- Unified Startup: Single script launches all services (EEG processing, web server, Arduino, voice)
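The pipeline above can be sketched end to end in plain Python. This is an illustrative mock, not the project's actual code: the threshold value, the stand-in classifier, and the `"HAPTIC"` command string are all assumptions; only the channel pairs (C3-C4, P3-P4, P7-P8, T7-T8) come from the feature list above.

```python
# Illustrative end-to-end sketch of the pipeline:
# EEG channels -> difference features -> (mock) model -> serial command.
CHANNEL_PAIRS = [("C3", "C4"), ("P3", "P4"), ("P7", "P8"), ("T7", "T8")]

def difference_features(band_power):
    """Spatial difference features (e.g. C3-C4) from per-channel band power values."""
    return {f"{a}-{b}": band_power[a] - band_power[b] for a, b in CHANNEL_PAIRS}

def mock_predict(features, threshold=0.2):
    """Stand-in for the trained classifier: 'touch' if any |difference| is large."""
    return "touch" if any(abs(v) >= threshold for v in features.values()) else "no_touch"

def prediction_to_command(label):
    """Map a prediction to the single-line command a serial link could carry."""
    return "HAPTIC\n" if label == "touch" else None

sample = {"C3": 1.2, "C4": 0.9, "P3": 0.5, "P4": 0.7,
          "P7": 0.3, "P8": 0.3, "T7": 1.0, "T8": 0.4}
command = prediction_to_command(mock_predict(difference_features(sample)))
print(command)
```

In the real system, `difference_features` corresponds to model_training/features.py, the mock classifier to the trained .pkl models, and the command write to arduino-server.js.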
- **"Coach is thinking" but no haptic:**
  - Check the Arduino server is running: `node arduino-server.js`
  - Verify the Arduino is connected and the sketch is uploaded
  - Check server logs for "HAPTIC command sent"
- **Arduino not detected:**
  - Unplug and reconnect the Arduino USB cable
  - Close the Arduino IDE Serial Monitor if open
  - Check the port with: `ls /dev/tty.usbmodem*`
- **Haptic motor not moving:**
  - Verify the DRV2605 is connected to the I2C/Qwiic port
  - Check the haptic motor is attached to the DRV2605
  - Test with curl: `curl -X POST http://localhost:3001/api/arduino/haptic`
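The same endpoint can be exercised from Python's standard library instead of curl. A minimal sketch, assuming the Arduino server from this README is listening on localhost:3001 (the request is built here but only sent when you call `urlopen`):

```python
import urllib.request

def haptic_request(base_url="http://localhost:3001"):
    """Build the same POST the curl test sends to the haptic endpoint."""
    return urllib.request.Request(f"{base_url}/api/arduino/haptic", method="POST")

req = haptic_request()
print(req.get_method(), req.full_url)  # POST http://localhost:3001/api/arduino/haptic
```

To actually trigger the motor, call `urllib.request.urlopen(req)` while the Arduino server is running.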
- Board not detected: Check USB connection and ensure the correct port is specified
- Compilation errors: Verify the sketch syntax and required libraries
- Upload fails: Ensure the board is not in use by another application
```bash
# Search for libraries
arduino-cli lib search <library_name>

# Install a library
arduino-cli lib install <library_name>

# List installed libraries
arduino-cli lib list
```