The purpose of this project is to detect and track people in an indoor environment and to recognize events in their movement using visual information. The visual information consists of an RGB stream and a depth stream from an ASUS Xtion Pro or Microsoft Kinect.
A custom CLI version of Logger1, a tool for logging RGB-D data from the Microsoft Kinect and ASUS Xtion Pro Live (original GUI version by Thomas Whelan). Suited to embedded applications.
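A logger like this writes each RGB-D frame as a timestamped binary record. The sketch below is a minimal, hypothetical writer in that spirit; the exact on-disk layout (a frame-count header followed by per-frame timestamp, depth size, RGB size, and the two payloads, with zlib-compressed depth) is my reading of Logger1-style `.klg` files and should be checked against the actual tool, which also JPEG-compresses the RGB payload.

```python
import struct
import zlib

import numpy as np


def write_klg(path, frames):
    """Write RGB-D frames in a Logger1-style binary layout (assumed format:
    int32 frame count, then per frame: int64 timestamp in microseconds,
    int32 compressed-depth size, int32 rgb size, depth bytes, rgb bytes)."""
    with open(path, "wb") as f:
        f.write(struct.pack("<i", len(frames)))
        for timestamp_us, depth, rgb in frames:
            # Depth maps compress well with zlib; RGB is stored raw here
            # for simplicity (Logger1 itself uses JPEG for the RGB stream).
            depth_bytes = zlib.compress(depth.tobytes())
            rgb_bytes = rgb.tobytes()
            f.write(struct.pack("<qii", timestamp_us,
                                len(depth_bytes), len(rgb_bytes)))
            f.write(depth_bytes)
            f.write(rgb_bytes)


if __name__ == "__main__":
    depth = np.zeros((480, 640), dtype=np.uint16)   # 16-bit depth in mm
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # 8-bit RGB
    write_klg("capture.klg", [(123456, depth, rgb)])
```

On an embedded device the main concern is keeping the write loop ahead of the 30 Hz sensor stream, which is why depth compression level and buffered I/O matter in practice.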
Gesture recognition for human-robot interaction: modelling, training, analysing, and recognising gestures using computer vision and machine learning techniques. This work was done at the Distributed Artificial Intelligence Lab (DAI-Labor), Berlin.
Silhouettes of multiple people are detected as contours, and features such as the height at extreme points/extremities and the height of the center are extracted from each. An OpenCV-based library that uses depth images/frames from a top-down-mounted depth camera, e.g. Kinect or RealSense.