Robot control via eye action based on CNN+LSTM
* step 1: camera frame -- face detection (Haar + AdaBoost)
* step 2: face region from step 1 -- eye detection, left + right (Haar + AdaBoost); see the detection sketch after this list
* step 3: eye regions from step 2 -- eye selection (LeNet-5)
* step 4: eye regions (frame difference, optical flow, or the original image) -- CNN+LSTM classification; see the model sketch after this list
* step 5: prediction -- action control (finite state machine); see the FSM sketch after this list
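The detection stages in steps 1 and 2 map directly onto OpenCV's Haar cascade API. Below is a minimal sketch, assuming the cascade files listed in the train.py parameters further down; the `scaleFactor`/`minNeighbors` values are illustrative, not the project's tuned settings.

```python
# Minimal sketch of steps 1-2: face detection, then eye detection inside the face.
import cv2

face_cascade = cv2.CascadeClassifier('data/model/haarcascade_frontalface_alt.xml')
eyes_cascade = cv2.CascadeClassifier('data/model/haarcascade_eye.xml')

def detect_eye_regions(frame):
    """Return grayscale eye patches found inside detected faces."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    eye_patches = []
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        eyes = eyes_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=5)
        for (ex, ey, ew, eh) in eyes:
            eye_patches.append(face_roi[ey:ey + eh, ex:ex + ew])
    return eye_patches

cap = cv2.VideoCapture(0)   # default camera
ok, frame = cap.read()
if ok:
    print('eye candidates found:', len(detect_eye_regions(frame)))
cap.release()
```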
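For step 4, a CNN+LSTM can be expressed as a per-frame CNN wrapped in `TimeDistributed`, followed by an LSTM over the frame sequence. The sketch below uses Keras purely for illustration; the input size, sequence length, class count, and layer sizes are assumptions, and the network actually trained by this repo may differ.

```python
# Hedged sketch of a CNN+LSTM classifier over a sequence of eye patches (step 4).
from tensorflow.keras import layers, models

SEQ_LEN, H, W, N_CLASSES = 16, 32, 32, 3   # assumed: e.g. left / right / recover

def build_cnn_lstm():
    inputs = layers.Input(shape=(SEQ_LEN, H, W, 1))
    # Apply the same small CNN to every frame in the sequence.
    x = layers.TimeDistributed(layers.Conv2D(16, 3, activation='relu'))(inputs)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, activation='relu'))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
    # Aggregate the per-frame features over time with an LSTM.
    x = layers.LSTM(64)(x)
    outputs = layers.Dense(N_CLASSES, activation='softmax')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_cnn_lstm()
model.summary()
```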
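Step 5 turns the stream of predictions into robot commands through a finite state machine. The sketch below is an illustrative debounced FSM; the state names, labels ('left', 'right', 'recover'), commands, and confirmation threshold are assumptions, not the project's actual control logic.

```python
# Illustrative finite state machine for step 5: predictions -> robot commands.
class EyeActionFSM(object):
    def __init__(self, confirm_frames=3):
        self.state = 'idle'
        self.confirm_frames = confirm_frames
        self._streak_label = None
        self._streak_len = 0

    def step(self, label):
        """Feed one prediction ('left', 'right', 'recover'); return a command or None."""
        if label == self._streak_label:
            self._streak_len += 1
        else:
            self._streak_label, self._streak_len = label, 1
        # Only act once the same label has been seen several frames in a row.
        if self._streak_len < self.confirm_frames:
            return None
        if label == 'left' and self.state != 'turning_left':
            self.state = 'turning_left'
            return 'TURN_LEFT'
        if label == 'right' and self.state != 'turning_right':
            self.state = 'turning_right'
            return 'TURN_RIGHT'
        if label == 'recover' and self.state != 'idle':
            self.state = 'idle'
            return 'STOP'
        return None
```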
- numpy, scipy
sudo apt-get install python-dev python-numpy python-scipy
- skimage
sudo apt-get install python-skimage or pip install scikit-image
- sklearn
sudo apt-get install python-sklearn or pip install scikit-learn
- opencv
sudo apt-get install python-opencv
Before running the demo, you need to train the network on your own data. First prepare your data, then modify the following parameters in train.py:
face_cascade_file='data/model/haarcascade_frontalface_alt.xml',
eyes_cascade_file='data/model/haarcascade_eye.xml'
left_video_dir = 'data/video/left',
right_video_dir = 'data/video/right',
recover_video_path = 'data/video/recover/IMG_1950.MP4'
Then run train.py to train the model:
python train.py
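As a rough illustration of how the parameters above are typically consumed (the real train.py may be organized quite differently), the sketch below loads the two Haar cascades and pairs every video under the class directories with its label; the helper `list_videos` is hypothetical.

```python
# Hedged sketch: load the cascades and enumerate labeled training videos.
import os
import cv2

face_cascade_file = 'data/model/haarcascade_frontalface_alt.xml'
eyes_cascade_file = 'data/model/haarcascade_eye.xml'
left_video_dir = 'data/video/left'
right_video_dir = 'data/video/right'

face_cascade = cv2.CascadeClassifier(face_cascade_file)
eyes_cascade = cv2.CascadeClassifier(eyes_cascade_file)

def list_videos(video_dir, label):
    """Pair every video file in a class directory with its label."""
    return [(os.path.join(video_dir, name), label)
            for name in sorted(os.listdir(video_dir))
            if name.lower().endswith(('.mp4', '.avi', '.mov'))]

samples = list_videos(left_video_dir, 'left') + list_videos(right_video_dir, 'right')
print('training videos found:', len(samples))
```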
Finally, run the demo:
python demo.py