
OpenCV Test Project

Instructions

Prerequisites

  • Docker

Suggested: test run a ready-to-run image

xhost +local:docker
docker run --device /dev/snd --env DISPLAY --interactive --net host --privileged --rm --tty zfields/kinect-opencv-face-detect

It will open a window with the video stream

  • Press [Esc] or [q] to exit
  • Press [d] to toggle depth heat map
  • Press [f] to toggle facial recognition
  • Press [s] to capture a screenshot

Download, modify, build and run

  • clone: git clone https://github.com/zfields/kinect-opencv-face-detect.git

  • If you want, modify the file kinect-opencv-face-detect.cpp as you wish

  • Build (this will take some time and may show some warnings): docker build --tag kinect-opencv-face-detect .

It will finish with something like the following (note the tag kinect-opencv-face-detect:latest):

Successfully built dfacdd726593
Successfully tagged kinect-opencv-face-detect:latest
  • Run with sudo docker run --device /dev/snd --env DISPLAY --interactive --net host --privileged --rm --tty kinect-opencv-face-detect:latest

Ideation

BOM (Bill of Materials)

Embedded Linux Machine

  • Raspberry Pi
  • BeagleBone Black

Camera

  • Windows Kinect v1
  • D-Link Network Camera

Plan

  1. Research
  2. Setup local laptop with the Kinect
  3. Create Dockerfile to isolate the build environment
  4. Install OpenCV
  5. Create ARM based Docker image from Dockerfile
  6. Setup a Raspberry Pi with the Kinect
  7. Track a person in the video

Research

Web Searches

zak::waitKey(millis)

Kinect

OpenCV

Journal

06 MAY 2020 - I was able to quickly and successfully set up the containerized build environment. However, I was unable to run the examples, because they require an X11 GUI for the video display. I found a couple of links to research possible paths forward, but in the interest of time I decided to install the example dependencies natively on my host machine in order to test the Kinect. The test was successful. The next steps will be to get the Kinect providing input to OpenCV.

07 MAY 2020 - I was able to get the application to run via the CONTAINER! Previously there were complications providing Docker access to the X11 host. I have resolved this in a hacky, unsafe manner, and made notes in the Dockerfile. I found several good blog posts (linked above) and upgraded the container to include OpenCV. Installing OpenCV in the container took over an hour and I fell asleep while it was running. The next step will be to test the OpenCV and Kinect sample software provided by the OpenKinect project.

09 MAY 2020 - I attempted to use the example code I found on the internet. Unfortunately, the examples were written for the Freenect API v1 and v2, and the project is currently at major version 4. Instead of installing an older version of the API, I am planning to understand the flow of the program, then remap the OpenCV usage to the newer Freenect examples.

12 MAY 2020 - I began walking through the glview.c example of the Kinect library. I spent time looking up the documentation for the OpenGL and OpenGLUT API calls. I made comments in the example code with my findings. I documented the main loop, which processes the OpenGL thread. I still have the libfreenect thread to review and document. For my next steps, there does not appear to be API documentation for libfreenect, so I will have to read the code to understand each API call.
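
For reference, here is a minimal sketch of the program structure I am describing, using the public libfreenect C API. The callback bodies, error handling, and header path are placeholders (and the include path may differ by installation); this is not code from glview.c:

#include <libfreenect/libfreenect.h>
#include <cstdint>
#include <cstdio>

// Placeholder callbacks; glview.c instead copies these buffers into textures for OpenGL.
static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp) {
    (void)dev; (void)depth;
    std::printf("depth frame at %u\n", timestamp);
}

static void video_cb(freenect_device *dev, void *rgb, uint32_t timestamp) {
    (void)dev; (void)rgb;
    std::printf("video frame at %u\n", timestamp);
}

int main() {
    freenect_context *ctx = nullptr;
    freenect_device *dev = nullptr;
    if (freenect_init(&ctx, nullptr) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;  // first Kinect on the USB bus

    freenect_set_video_mode(dev, freenect_find_video_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_RGB));
    freenect_set_depth_mode(dev, freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_video_callback(dev, video_cb);
    freenect_start_depth(dev);
    freenect_start_video(dev);

    // The libfreenect "thread": pump USB events so the callbacks above fire.
    while (freenect_process_events(ctx) >= 0) { }

    freenect_stop_depth(dev);
    freenect_stop_video(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}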

13 MAY 2020 - I am disappointed about the lack of API documentation for libfreenect. My goal was to learn OpenCV, a library leaned upon heavily by the industry. However, I have instead gotten myself into the weeds of libfreenect. I am going to continue learning the Kinect, because I have been fascinated by the hardware. That being said, I thought it was pertinent to call out the loss of productivity as a talking point for a retrospective. I spent 5 hours researching/documenting the API calls, and felt like I accomplished nothing. I stopped with only the DrawGLScene function left to research/document.

18 MAY 2020 - Documenting DrawGLScene was by far the most difficult, as it contained trigonometric functions to calculate compensation for video rotation angles. I dug deep into the weeds - even researching and studying trigonometric functions (which I had long since forgotten). While doing it, documenting the example felt exhausting and fruitless, but I believe it helps me identify exactly which code is necessary versus which code can be discarded. Stepping back, I think going deep on trigonometry was unnecessary when considering the goal of identifying non-critical code; however, the subject matter remains fascinating. Having completed the research/documentation exercise, I feel as though I have a firm grasp of the flow of a Kinect program. The next steps will be to review the examples I found early on, and see if I can integrate OpenCV into the existing Kinect examples.
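
As a generic illustration of the trigonometry involved (not the actual math from DrawGLScene), rotating a 2D point by an angle looks like this:

#include <cmath>

struct Point { double x, y; };

// Rotate a point counter-clockwise about the origin by the given angle in degrees.
Point rotate(Point p, double degrees) {
    constexpr double kPi = 3.14159265358979323846;
    const double radians = degrees * kPi / 180.0;
    return {
        p.x * std::cos(radians) - p.y * std::sin(radians),
        p.x * std::sin(radians) + p.y * std::cos(radians)
    };
}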

Later in the evening, I watched several YouTube video tutorials (linked above) describing how to get started with OpenCV. I now feel as though I have a basic enough understanding of OpenCV to decipher the original OpenCV examples I found early on in my research.
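
To capture the gist of those tutorials, here is a minimal OpenCV viewing loop. It assumes a generic V4L2 webcam (cv::VideoCapture device 0) rather than the Kinect, and the window title is arbitrary:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture(0);              // first camera on the system
    if (!capture.isOpened()) return 1;

    cv::Mat frame;
    for (;;) {
        capture >> frame;                     // grab the next frame
        if (frame.empty()) break;
        cv::imshow("preview", frame);         // HighGUI window
        int key = cv::waitKey(30) & 0xFF;     // pump the GUI event loop, poll the keyboard
        if (key == 27 || key == 'q') break;   // [Esc] or [q] to exit, as in the instructions above
    }
    return 0;
}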

20 MAY 2020 - I rehydrated the OpenCV + Kinect example from OpenKinect.org. I updated the API calls in the example code as well as the Makefile, and got the example running in my Docker container.
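
The core of that rehydration is handing the libfreenect frame buffer to OpenCV. A sketch of how that hand-off can look, assuming the default 640x480 RGB video mode (the buffer handling and window title are placeholders, not this project's code):

#include <libfreenect/libfreenect.h>
#include <opencv2/opencv.hpp>
#include <cstdint>

// Called by libfreenect for each video frame (see the callback sketch above).
static void video_cb(freenect_device *dev, void *rgb, uint32_t timestamp) {
    (void)dev; (void)timestamp;
    cv::Mat rgbFrame(cv::Size(640, 480), CV_8UC3, rgb);   // wrap the raw buffer, no copy
    cv::Mat bgrFrame;
    cv::cvtColor(rgbFrame, bgrFrame, cv::COLOR_RGB2BGR);  // Kinect delivers RGB, OpenCV expects BGR
    cv::imshow("rgb", bgrFrame);
}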

21 MAY 2020 - While updating the example code, I recognized several sections of unused or misleading code, and I'm removing the cruft before adding in the new facial recognition feature.

I updated the windowing scheme and added new key controls to toggle depth information and, finally, facial recognition. Facial recognition was simple to add; it required nearly verbatim usage of the example in the video (linked above).
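
The face detection itself boils down to a Haar cascade pass over each frame. A minimal sketch, assuming OpenCV's stock haarcascade_frontalface_default.xml and a BGR frame (the function name is illustrative, not this project's):

#include <opencv2/opencv.hpp>
#include <vector>

// Detect faces in a BGR frame and draw a green rectangle around each hit.
void detectFaces(cv::Mat &frame, cv::CascadeClassifier &faceCascade) {
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                        // boost contrast for the classifier

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3);   // scale factor, minimum neighbors

    for (const cv::Rect &face : faces)
        cv::rectangle(frame, face, cv::Scalar(0, 255, 0), 2);
}

// Usage:
//   cv::CascadeClassifier cascade("haarcascade_frontalface_default.xml");
//   detectFaces(bgrFrame, cascade);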

27 MAY 2020 - I have been fiddling with the API over the last few days, trying to upgrade the video resolution. Unfortunately, the API and wrappers are not very extensible. They would need to be completely rewritten to allow objects to be created with parameterized resolution. Creating such a composition would be an interesting use case for the CRTP (Curiously Recurring Template Pattern). However, I'll leave the refactor for another day. I plan to provide the examples I've created as-is, and I am electing to make a note of the shortcoming in the corresponding blog post.
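
For the record, a hypothetical sketch of what a CRTP-based, resolution-parameterized composition could look like; none of these class names exist in the project:

#include <cstddef>

// CRTP base: the derived class supplies its resolution as compile-time constants.
template <typename Derived>
struct KinectStream {
    std::size_t frameBytes() const {
        // Static dispatch into the derived class, no virtual call required.
        return Derived::width * Derived::height * Derived::bytesPerPixel;
    }
};

struct MediumRgbStream : KinectStream<MediumRgbStream> {
    static constexpr std::size_t width = 640;
    static constexpr std::size_t height = 480;
    static constexpr std::size_t bytesPerPixel = 3;
};

struct HighRgbStream : KinectStream<HighRgbStream> {
    static constexpr std::size_t width = 1280;
    static constexpr std::size_t height = 1024;
    static constexpr std::size_t bytesPerPixel = 3;
};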

28 MAY 2020 - As I was finalizing the source and preparing to share, I noticed in my notes that I had originally intended for this to run on the Raspberry Pi. Luckily, I had created Dockerfiles, so this really only amounted to rebuilding the image on ARM - or so I thought... It turns out I had configured my Raspberry Pi without a GUI. So I created a headless version of the program. This required rewriting cv::waitKey, because it has a dependency on the HighGUI library, the OpenCV windowing framework.
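
A minimal sketch of how such a HighGUI-free waitKey replacement can be written on Linux with termios and select; the zak namespace echoes the identifier noted under Web Searches, but this body is an assumption, not the project's implementation:

#include <sys/select.h>
#include <termios.h>
#include <unistd.h>

namespace zak {

// Wait up to `millis` milliseconds for a single keypress on stdin; return -1 on timeout.
int waitKey(int millis) {
    termios original;
    tcgetattr(STDIN_FILENO, &original);        // save current terminal settings

    termios raw = original;
    raw.c_lflag &= ~(ICANON | ECHO);           // unbuffered input, no echo
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);
    timeval timeout{ millis / 1000, (millis % 1000) * 1000 };

    int key = -1;                              // mirror cv::waitKey's timeout result
    if (select(STDIN_FILENO + 1, &readfds, nullptr, nullptr, &timeout) > 0) {
        unsigned char c;
        if (read(STDIN_FILENO, &c, 1) == 1) key = c;
    }

    tcsetattr(STDIN_FILENO, TCSANOW, &original);
    return key;
}

} // namespace zak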

About

Example application using the Xbox Kinect with OpenCV for facial recognition
