This project uses a Faster R-CNN ResNet model to detect and localize hands in an image, trained on GCP ML Engine.
```
pyenv virtualenv 3.6.5 my-virtual-env-3.6.5   # create a virtual environment
pyenv activate my-virtual-env-3.6.5           # activate the virtual environment
pip install -r requirements.txt               # install dependencies
```
- Install the TensorFlow Object Detection API.
- After installing it, you should have a `models/` folder in your project directory.
- Install Docker.
- Install the Google Cloud SDK.
I used GCP ML Engine to train the detection model.

Note: as soon as your training job starts, it begins incurring charges.
- Navigate to `scripts/preprocessing` and run `python generate_tf_record.py` to generate the data in TFRecord format.
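The contents of `generate_tf_record.py` are not shown here, but for the Object Detection API each TFRecord example stores bounding-box coordinates normalized to the `[0, 1]` range. A minimal sketch of that normalization (a hypothetical helper, not the actual script):

```python
def normalize_box(xmin, ymin, xmax, ymax, img_width, img_height):
    """Scale pixel box coordinates into the [0, 1] range that the
    TensorFlow Object Detection API expects inside a TFRecord example."""
    return (xmin / img_width, ymin / img_height,
            xmax / img_width, ymax / img_height)

# Example: a hand box at pixels (50, 100)-(150, 200) in a 200x400 image
print(normalize_box(50, 100, 150, 200, 200, 400))  # (0.25, 0.25, 0.75, 0.5)
```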
- Navigate to `models/research/` and run:

  ```
  export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
  ```
- Create a project on GCP.
- Set up the GCP SDK: run `./google-cloud-sdk/install.sh`, then `./google-cloud-sdk/bin/gcloud init`.
- Create a bucket:

  ```
  gsutil mb -l europe-north1 gs://[BUCKET_NAME]/
  ```
- Navigate to `detection/` and run `bash gcp_deploy.sh [BUCKET_NAME]`. All the required files are now in the `[BUCKET_NAME]/data` folder on GCP.
- Navigate to `models/research` and run the following commands in order:

  ```
  bash object_detection/dataset_tools/create_pycocotools_package.sh /tmp/pycocotools
  python setup.py sdist
  cd slim
  python setup.py sdist
  ```
- Run `bash gcp_run_job.sh [BUCKET_NAME]`.
- Monitor the training job with TensorBoard:

  ```
  tensorboard --logdir=gs://[BUCKET_NAME]/train/
  ```
I used TensorFlow Serving to deploy the model with Docker. Make sure the models are versioned: by default, TensorFlow Serving picks up the model with the latest version (the highest integer). Our model is versioned under `tf_serving/1`, where `1` is the version number.

Before deploying the model, we have to export it to the TensorFlow SavedModel format:
```
cd workspace/training_demo
python export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path gs://[BUCKET_NAME]/data/fast_rcnn_resnet101_coco.config \
    --trained_checkpoint_prefix gs://[BUCKET_NAME]/train/model.ckpt \
    --output_directory exported_graphs/
```
Now pick the model of your choice and move it into the `tf_serving` folder:

```
mv exported_graphs/* ../../tf_serving/1/
```

Now your model is versioned and we can start with TensorFlow Serving.
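After the move, the serving directory should look roughly like this (assuming a standard SavedModel export; variable file names may differ):

```
tf_serving/
└── 1/
    ├── saved_model.pb
    └── variables/
        ├── variables.data-00000-of-00001
        └── variables.index
```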
- Pull the Docker image of TensorFlow Serving: `docker pull tensorflow/serving`
- Run the base serving container: `docker run -d --name serving_base tensorflow/serving`
- Copy your model into the container: `docker cp tf_serving/ serving_base:/models/faster_rcnn`
- Commit the changes: `docker commit --change "ENV MODEL_NAME faster_rcnn" serving_base detection`
- Start your new container with the custom model:

  ```
  docker run -p 8501:8501 --mount type=bind,source=$PWD/tf_serving,target=/models/faster_rcnn -e MODEL_NAME=faster_rcnn -t detection
  ```
- Run the client to get predictions:

  ```
  python tf_serving.py IMAGE_PATH
  ```
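The contents of `tf_serving.py` are not shown here; a minimal client along these lines would work against the REST endpoint started above. The URL and payload shape follow TensorFlow Serving's REST predict API for a model exported with `--input_type encoded_image_string_tensor`; the function names are illustrative, not the actual script's:

```python
import base64
import json

SERVER_URL = "http://localhost:8501/v1/models/faster_rcnn:predict"

def build_predict_request(image_bytes):
    """Build the JSON body expected by TF Serving's REST predict API:
    binary inputs are sent as base64 under a "b64" key."""
    return json.dumps(
        {"instances": [{"b64": base64.b64encode(image_bytes).decode("utf-8")}]}
    )

def predict(image_path, url=SERVER_URL):
    import requests  # assumed installed: pip install requests
    with open(image_path, "rb") as f:
        body = build_predict_request(f.read())
    response = requests.post(url, data=body)
    response.raise_for_status()
    # For detection models, predictions typically include
    # detection_boxes, detection_scores, and detection_classes.
    return response.json()["predictions"]
```

You can also check that the model is up with a GET request to `http://localhost:8501/v1/models/faster_rcnn`.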
NOTE: The inference time was 26 seconds on the following CPU:

MacBook Pro (13-inch, 2017), 2.3 GHz Intel Core i5, 8 GB 2133 MHz LPDDR3, Intel Iris Plus Graphics 640 (1536 MB)
I tested the model with some test images. The results are given below: