mapr-streams-mxnet-face

implement mxnet face and insightface with mapr streams for near real time face detection and recognition with deep learning models

Get pre-trained models

After cloning the repo, download the face detection model from mxnet-face (the model is stored on Dropbox) and the face recognition model from insightface (the model is stored on Google Drive).

Put model-0000.params under "consumer/models/" and mxnet-face-fr50-0000.params under "consumer/deploy".
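
To confirm the files are in the right place, a quick sanity check can load both checkpoints with MXNet. This is a minimal sketch; it assumes the matching *-symbol.json files are already present in those directories.

import mxnet as mx

# load the detection checkpoint (expects consumer/deploy/mxnet-face-fr50-symbol.json and -0000.params)
det_sym, det_arg, det_aux = mx.model.load_checkpoint('consumer/deploy/mxnet-face-fr50', 0)

# load the recognition checkpoint (expects consumer/models/model-symbol.json and -0000.params)
rec_sym, rec_arg, rec_aux = mx.model.load_checkpoint('consumer/models/model', 0)

print('both checkpoints loaded')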

Prerequisites

A GPU-enabled MapR cluster; installation guidance can be found in this blog

Your laptop (with a camera if you want to use your own camera feed) should have the MapR Mac/Linux/Windows client installed and tested so it can connect to your GPU MapR cluster; the installation guide is here

Produce the content into a GPU MapR cluster

The producer code is straightforward (and currently hardcoded). Running "python mapr-producer-video.py" reads the Three Billboards trailer and produces it to a stream on the cluster: '/mapr/DLcluster/tmp/rawvideostream'. You could use your camera instead; the code is similar. A sketch of the producer loop is shown after the stream-creation commands below.

Before running the producer code, the stream should be created on the cluster or through the client. Simple commands to create the stream and topic:

maprcli stream delete -path /tmp/rawvideostream
maprcli stream create -path /tmp/rawvideostream
maprcli stream edit -path /tmp/rawvideostream -produceperm p -consumeperm p -topicperm p
maprcli stream topic create -path /tmp/rawvideostream -topic topic1 -partitions 1
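
For reference, here is a minimal sketch of what the producer loop does. It assumes the mapr_streams_python client (which mirrors the confluent_kafka API) and OpenCV; the file name and exact details in mapr-producer-video.py may differ.

import cv2
from mapr_streams_python import Producer

# default stream matches the one created above; topic1 is the topic
p = Producer({'streams.producer.default.stream': '/tmp/rawvideostream'})
cap = cv2.VideoCapture('three_billboards_trailer.mp4')  # or 0 for the laptop camera

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # encode each frame as JPEG bytes and publish it to the stream topic
    ok, jpeg = cv2.imencode('.jpg', frame)
    if ok:
        p.produce('topic1', jpeg.tobytes())

cap.release()
p.flush()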

Consume the content in the GPU MapR cluster

After making sure the stream is on the GPU cluster, run the consumer, which contains the facial recognition code that processes the stream: "python mapr_consumer.py". It reads from the stream '/tmp/rawvideostream', computes the face embedding vectors and bounding boxes, and writes them to the stream '/tmp/processedvideostream'; all identified faces are also written to the stream '/tmp/identifiedstream'. A sketch of the consumer loop is shown after the stream-creation commands below.

Similarly, the streams should be created beforehand:

maprcli stream delete -path /tmp/processedvideostream
maprcli stream create -path /tmp/processedvideostream
maprcli stream edit -path /tmp/processedvideostream -produceperm p -consumeperm p -topicperm p
maprcli stream topic create -path /tmp/processedvideostream -topic topic1 -partitions 1

maprcli stream create -path /tmp/identifiedstream
maprcli stream edit -path /tmp/identifiedstream -produceperm p -consumeperm p -topicperm p
maprcli stream topic create -path /tmp/identifiedstream -topic sam -partitions 1
maprcli stream topic create -path /tmp/identifiedstream -topic frances -partitions 1
maprcli stream topic create -path /tmp/identifiedstream -topic all -partitions 1
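
A simplified sketch of the consumer loop is below. The mapr_streams_python client mirrors the confluent_kafka API; detect_faces and get_embedding are hypothetical helpers standing in for the mxnet-face detector and insightface recognition model used in mapr_consumer.py.

import json
import cv2
import numpy as np
from mapr_streams_python import Consumer, Producer

c = Consumer({'group.id': 'face-consumer',
              'default.topic.config': {'auto.offset.reset': 'earliest'}})
c.subscribe(['/tmp/rawvideostream:topic1'])
p = Producer({'streams.producer.default.stream': '/tmp/processedvideostream'})

try:
    while True:
        msg = c.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        # decode the JPEG frame published by the producer
        frame = cv2.imdecode(np.frombuffer(msg.value(), dtype=np.uint8), cv2.IMREAD_COLOR)
        boxes = detect_faces(frame)                             # hypothetical: mxnet-face detector
        embeddings = [get_embedding(frame, b) for b in boxes]   # hypothetical: insightface embeddings
        payload = json.dumps({'boxes': [list(map(float, b)) for b in boxes],
                              'embeddings': [e.tolist() for e in embeddings]})
        p.produce('topic1', payload)
finally:
    p.flush()
    c.close()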

Identify a new person in the stream with a picture and a docker run command on your laptop

Since all face embeddings have already been calculated, you can launch a container on your laptop (CPU only is fine) to identify a new person from one or a few pictures of that person's face. The current code accepts only one picture.
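
Identification boils down to comparing the embedding of the supplied picture against the embeddings already computed for the stream; the THRESHOLD environment variable below is the cutoff for a match. A minimal sketch of that comparison, assuming cosine similarity over L2-normalized embeddings (the container's actual distance measure may differ):

import numpy as np

def is_same_person(new_embedding, stream_embedding, threshold=0.3):
    # cosine similarity between the new face and a face from the stream;
    # scores above the threshold are treated as the same person
    a = new_embedding / np.linalg.norm(new_embedding)
    b = stream_embedding / np.linalg.norm(stream_embedding)
    return float(np.dot(a, b)) > threshold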

After setting up the GitHub repo, run the following commands to launch the container. There is an option to decide whether to write the frames identified as containing that person back to a MapR stream.

docker pull mengdong/mapr-pacc-mxnet:new_person_identifier

docker run -it --privileged --cap-add SYS_ADMIN --cap-add SYS_RESOURCE --device /dev/fuse -e MAPR_CLUSTER=DLcluster  \
-v /home/mapr/GITHUB/mapr-streams-mxnet-face:/tmp/mapr-streams-mxnet-face:ro \
-e MAPR_CLDB_HOSTS=10.0.1.74 -e MAPR_CONTAINER_USER=mapr -e MAPR_CONTAINER_UID=5000 -e MAPR_CONTAINER_GROUP=mapr  \
-e MAPR_CONTAINER_GID=5000 -e MAPR_MOUNT_PATH=/mapr \
-e GROUPID=dong01 -e GPUID=-1 -e READSTREAM=/tmp/processedvideostream \
-e WRITESTREAM=/tmp/identifiedstream -e THRESHOLD=0.3 -e WRITETOSTREAM=0 \
-e WRITETOPIC=sam -e READTOPIC=topic1 \
-e TIMEOUT=0.3 -e PORT=5011 -e FILENAME=sam_.jpg \
-p 5011:5011 mengdong/mapr-pacc-mxnet:new_person_identifier

The visualization can be seen at the port you chose (go to 'http://localhost:5011').

Demo the processed stream from a running Docker container on your laptop

If you decided to write the processed stream into another MapR stream, you can always pull it for visualization.

docker pull mengdong/mapr-pacc-mxnet:5.2.2_3.0.1_ubuntu16_yarn_fuse_hbase_streams_flask_client_arguments

docker run -it --privileged --cap-add SYS_ADMIN --cap-add SYS_RESOURCE --device /dev/fuse -e MAPR_CLUSTER=DLcluster  \
-e MAPR_CLDB_HOSTS=10.0.1.74 -e MAPR_CONTAINER_USER=mapr -e MAPR_CONTAINER_UID=5000 -e MAPR_CONTAINER_GROUP=mapr  \
-e MAPR_CONTAINER_GID=5000 -e MAPR_MOUNT_PATH=/mapr \
-e GROUPID=YOURGROUPNAME -e STREAM=/tmp/identifiedstream -e TOPIC=all \
-e TIMEOUT=0.035 -e PORT=5010 \
-p 5010:5010 mengdong/mapr-pacc-mxnet:5.2.2_3.0.1_ubuntu16_yarn_fuse_hbase_streams_flask_client_arguments

Choose TOPIC from all/frances/sam. TIMEOUT is typically 0.035 when reading from topic all and 0.2 when reading from frances/sam (it can be flexible). PORT is a port of your choice, and the -p mapping must match it.

The video will show up at the port you chose (go to 'http://localhost:5010').
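
Under the hood, the container serves the frames it reads from the stream as an MJPEG feed over Flask, which is why the demo is just a browser pointed at the chosen port. A minimal sketch of that pattern follows; get_next_frame_from_stream is a hypothetical helper standing in for the stream consumer inside the image.

from flask import Flask, Response

app = Flask(__name__)

def frame_generator():
    while True:
        jpeg_bytes = get_next_frame_from_stream()  # hypothetical: JPEG bytes from the MapR stream
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpeg_bytes + b'\r\n')

@app.route('/')
def index():
    # multipart/x-mixed-replace lets the browser render a continuous MJPEG feed
    return Response(frame_generator(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5010)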
