TensorFlow implementation of robot control using eye actions (left, right, and recovery)

YannZyl/py-actionnet

Introduction

Robot control via eye actions, based on a CNN + LSTM pipeline:

* step 1: camera frame -- face detection (Haar + AdaBoost)
* step 2: face region (step 1) -- eye detection (left + right, Haar + AdaBoost)
* step 3: eye regions (step 2) -- eye selection (LeNet-5)
* step 4: eye regions (frame difference, optical flow, or original picture) -- CNN + LSTM classification
* step 5: prediction -- action control (finite state machine)

Dependencies

  • numpy, scipy
sudo apt-get install python-dev python-numpy python-scipy
  • skimage
sudo apt-get install python-skimage  or  pip install scikit-image
  • sklearn
sudo apt-get install python-sklearn  or  pip install scikit-learn
  • opencv
sudo apt-get install python-opencv

Demo/Usage

Before running the demo, you must train the network on your own data, so prepare your data first. In train.py, modify these parameters:

face_cascade_file = 'data/model/haarcascade_frontalface_alt.xml',
eyes_cascade_file = 'data/model/haarcascade_eye.xml',
left_video_dir = 'data/video/left',
right_video_dir = 'data/video/right',
recover_video_path = 'data/video/recover/IMG_1950.MP4'

Then run train.py to train the model:

python train.py

Finally, run the demo:

python demo.py
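At runtime, step 5 turns the per-frame predictions into robot commands through a small finite state machine, so a single noisy prediction cannot trigger an action. A minimal sketch — the state names, command strings, and debounce threshold below are assumptions, not the repo's actual interface:

```python
# Finite-state-machine sketch for step 5: only fire a command after
# `hold` consecutive identical predictions (debouncing), and only on
# a state change (left/right/idle).
class ActionFSM:
    def __init__(self, hold=3):
        self.hold = hold          # consecutive frames required to fire
        self.state = "idle"
        self._streak = (None, 0)  # (last label, run length)

    def step(self, label):
        """Feed one prediction ('left', 'right', 'recover');
        return a command ('turn_left', 'turn_right', 'stop') or None."""
        last, count = self._streak
        count = count + 1 if label == last else 1
        self._streak = (label, count)
        if count < self.hold:
            return None
        if label == "left" and self.state != "left":
            self.state = "left"
            return "turn_left"
        if label == "right" and self.state != "right":
            self.state = "right"
            return "turn_right"
        if label == "recover" and self.state != "idle":
            self.state = "idle"
            return "stop"
        return None
```

With `hold=3`, three consecutive "left" predictions emit one `turn_left`, and the robot keeps turning until a sustained "recover" emits `stop`.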
