Vitruvius is a set of easy-to-use Kinect utilities that will speed up the development of your projects. It supports gesture detection, skeleton drawing, frame capturing, voice recognition, and much more.
Vitruvius works with Kinect SDK versions 2 and 1.8.
- Joint scaling, suitable for on-screen display
- User height
- Distance between joints
- One-line body tracking
- Angle calculations (a joint-math sketch for distances and angles follows this list)
- Project points on screen
- Easily display color, depth and infrared frames
- Save Kinect frames as bitmap images
- One-line skeleton drawing
- Record color, depth and infrared streams and save into video files (WinRT only)
- Kinect Smart Viewer
- Kinect Angle
- WaveLeft
- WaveRight
- SwipeLeft
- SwipeRight
- JoinedHands
- SwipeUp
- SwipeDown
- ZoomIn
- ZoomOut
- Menu
- Recognize voice commands
- Speech synthesis
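Helpers such as the distance and angle calculations above boil down to simple vector math on joint positions. The sketch below shows that underlying math using plain Kinect SDK v1.8 types; the class and method names (`JointMath`, `DistanceBetween`, `AngleAt`) are illustrative only and are not part of the Vitruvius API.

```csharp
using System;
using Microsoft.Kinect;

public static class JointMath
{
    // Euclidean distance, in meters, between two joint positions.
    public static double DistanceBetween(Joint a, Joint b)
    {
        double dx = a.Position.X - b.Position.X;
        double dy = a.Position.Y - b.Position.Y;
        double dz = a.Position.Z - b.Position.Z;

        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Angle, in degrees, formed at the center joint by the segments
    // center->start and center->end (e.g. the elbow angle for shoulder-elbow-wrist).
    public static double AngleAt(Joint center, Joint start, Joint end)
    {
        double v1x = start.Position.X - center.Position.X;
        double v1y = start.Position.Y - center.Position.Y;
        double v1z = start.Position.Z - center.Position.Z;

        double v2x = end.Position.X - center.Position.X;
        double v2y = end.Position.Y - center.Position.Y;
        double v2z = end.Position.Z - center.Position.Z;

        double dot = v1x * v2x + v1y * v2y + v1z * v2z;
        double lengths = Math.Sqrt(v1x * v1x + v1y * v1y + v1z * v1z) *
                         Math.Sqrt(v2x * v2x + v2y * v2y + v2z * v2z);

        // Guard against untracked joints collapsing onto the same point.
        if (lengths == 0.0) return 0.0;

        // Clamp to avoid NaN from floating-point rounding.
        double cosine = Math.Max(-1.0, Math.Min(1.0, dot / lengths));

        return Math.Acos(cosine) * (180.0 / Math.PI);
    }
}
```

For example, the right elbow angle of a tracked skeleton would be `JointMath.AngleAt(skeleton.Joints[JointType.ElbowRight], skeleton.Joints[JointType.ShoulderRight], skeleton.Joints[JointType.WristRight])`.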
Supported sensors:
- Kinect version 2
- Kinect version 1
- Download the project's source code and build the solution that matches the version of your sensor. NuGet packages will be available soon.
- Displaying Kinect color frames:
```csharp
void Sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (var frame = e.OpenColorImageFrame())
    {
        if (frame != null)
        {
            // Display on screen.
            image.Source = frame.ToBitmap();

            // Save the JPEG file.
            frame.Save("C:\\ColorFrame.jpg");
        }
    }
}
```
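The handler above only fires once the sensor is started, its color stream is enabled, and the event is hooked up. A minimal setup sketch follows; it reuses the `SensorExtensions.Default()` helper shown in the voice example further down to grab the connected sensor, while the stream and event calls are the standard Kinect SDK v1.8 API.

```csharp
// Grab the connected sensor and enable the streams used by the handlers in these examples.
KinectSensor sensor = SensorExtensions.Default();

sensor.ColorStream.Enable();
sensor.DepthStream.Enable();
sensor.SkeletonStream.Enable();

sensor.ColorFrameReady += Sensor_ColorFrameReady;
sensor.DepthFrameReady += Sensor_DepthFrameReady;
sensor.SkeletonFrameReady += Sensor_SkeletonFrameReady;

sensor.Start();
```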
- Displaying Kinect depth frames:
```csharp
void Sensor_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (var frame = e.OpenDepthImageFrame())
    {
        if (frame != null)
        {
            // Display on screen.
            image.Source = frame.ToBitmap();

            // Save the JPEG file.
            frame.Save("C:\\DepthFrame.jpg");
        }
    }
}
```
- Getting the height of a body:
```csharp
void Sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (var frame = e.OpenSkeletonFrame())
    {
        if (frame != null)
        {
            canvas.ClearSkeletons();

            var skeletons = frame.Skeletons().Where(s => s.TrackingState == SkeletonTrackingState.Tracked);

            foreach (var skeleton in skeletons)
            {
                if (skeleton != null)
                {
                    // Get the skeleton height.
                    double height = skeleton.Height();
                }
            }
        }
    }
}
```
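The "one-line skeleton drawing" feature plugs into the same handler: the example above already clears the canvas with `canvas.ClearSkeletons()`, and the draw call sits next to the height calculation. The method name below is an assumption about the drawing extension; check the Vitruvius source for the exact name in your build.

```csharp
foreach (var skeleton in skeletons)
{
    if (skeleton != null)
    {
        // Hypothetical extension name: renders the skeleton's joints and bones on the canvas.
        canvas.DrawSkeleton(skeleton);
    }
}
```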
- Detecting gestures:
```csharp
GestureController gestureController = new GestureController();
gestureController.GestureRecognized += GestureController_GestureRecognized;

// ...

void Sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (var frame = e.OpenSkeletonFrame())
    {
        if (frame != null)
        {
            var skeletons = frame.Skeletons().Where(s => s.TrackingState == SkeletonTrackingState.Tracked);

            foreach (var skeleton in skeletons)
            {
                if (skeleton != null)
                {
                    // Update the gesture controller with the latest skeleton data.
                    gestureController.Update(skeleton);
                }
            }
        }
    }
}

// ...

void GestureController_GestureRecognized(object sender, GestureEventArgs e)
{
    // Display the recognized gesture's name.
    Debug.WriteLine(e.GestureType);
}
```
- Recognizing and synthesizing voice:
```csharp
VoiceController voiceController = new VoiceController();
voiceController.SpeechRecognized += VoiceController_SpeechRecognized;

KinectSensor sensor = SensorExtensions.Default();
List<string> phrases = new List<string> { "Hello", "Goodbye" };

voiceController.StartRecognition(sensor, phrases);

// ...

void VoiceController_SpeechRecognized(object sender, Microsoft.Speech.Recognition.SpeechRecognizedEventArgs e)
{
    string text = e.Result.Text;

    voiceController.Speak("I recognized the words: " + text);
}
```
- Vangos Pterneas from LightBuzz
- George Karakatsiotis from LightBuzz
- George Georgopoulos from LightBuzz
- Gesture detection is partly based on the Fizbin library by Nicholas Pappas
You are free to use these libraries in personal and commercial projects, provided you attribute the original creator of Vitruvius. Licensed under the Apache v2 License.