Demo scenes

Here are short descriptions of the available demo scenes, along with the major components they utilize and demonstrate.

| Scene | Description |
| ------ | ------ |
| AvatarsDemo / KinectAvatarsDemo1 | It shows two avatars controlled by the user, from a third-person perspective. The scene utilizes the KinectManager-component to manage the sensor and data, AvatarController-components to control each of the two avatars, as well as KinectGestures and SimpleGestureListener-components to provide gesture detection (the joint polling behind the avatar control is sketched after this table). |
| AvatarsDemo / KinectAvatarsDemo2 | This demonstrates avatar control from a first-person perspective. The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to get more precise head-orientation data, and AvatarController-component to control the avatar. |
| AvatarsDemo / KinectAvatarsDemo3 | This scene utilizes the KinectManager-component to manage the sensor and data, AvatarControllerClassic-component to control only the upper body of the avatar, and an offset node to make the avatar's motion relative to a game object. |
| AvatarsDemo / KinectAvatarsDemo4 | This demo shows how to instantiate and remove avatars in the scene, to match the users in front of the camera. It utilizes the KinectManager-component to manage the sensor and data, UserAvatarMatcher-component to instantiate and remove the avatars as users come and go, as well as AvatarController-components to control each of the instantiated avatars. |
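
Under the hood, the avatar and gesture components poll the KinectManager singleton for tracking data every frame. Below is a minimal sketch of that pattern, assuming the asset's usual KinectManager API (`Instance`, `IsUserDetected()`, `GetPrimaryUserID()`, `IsJointTracked()`, `GetJointPosition()`):

```csharp
using UnityEngine;

// Minimal sketch: move this game object to the primary user's right hand each frame.
public class JointFollower : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized() || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();
        int jointIndex = (int)KinectInterop.JointType.HandRight;

        if (manager.IsJointTracked(userId, jointIndex))
        {
            // joint position in Kinect space - meters, relative to the sensor
            transform.position = manager.GetJointPosition(userId, jointIndex);
        }
    }
}
```
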
| Scene | Description |
| ------ | ------ |
| BackgroundRemovalDemo / KinectBackgroundRemoval1 | Demonstrates the Kinect background-removal functionality, i.e. how to cut out only the user body silhouettes. The scene utilizes the KinectManager-component to manage the sensor and data, and BackgroundRemovalManager-component to manage the background removal and display the foreground image (i.e. the cut-out silhouettes) on the main camera's GUI layer (see the sketch after this table). |
| BackgroundRemovalDemo / KinectBackgroundRemoval2 | Shows how to set the user silhouettes as a second background layer and put 3D objects behind or in front of it. The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundRemovalManager-component to manage the background removal, and ForegroundToImage-component to set the texture of BackgroundImage2. This scene also utilizes the FacetrackingManager-component to get precise head-position data, as well as ModelHatController-component to move the halo along with the user's head. |
| BackgroundRemovalDemo / KinectBackgroundRemoval3 | Here you can see how to utilize a separate layer and camera to display objects and image effects "behind" the user silhouettes. The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundRemovalManager-component to manage the background removal, and ForegroundBlender-component to blend the background and foreground textures. |
| BackgroundRemovalDemo / KinectBackgroundRemoval4 | Here the result of the background removal is inverted: the user silhouettes are transparent, while the Kinect-captured background is visible. The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundRemovalManager-component to manage the background removal, and optionally ForegroundBlender-component to blend the background and foreground textures (for Kinect-v1). |
| BackgroundRemovalDemo / KinectBackgroundRemoval5 | This demo adds depth to the background-removal image. The user silhouette moves around in the scene, according to the user's position in front of the sensor. The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundRemovalManager-component to manage the background removal, ForegroundToRenderer-component to set the texture of the UserImage-object, as well as UserPlaneMover-component to move the UserImage-object in the scene. |
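
All of these scenes feed the output of the BackgroundRemovalManager into some renderer or UI element. A minimal sketch of that wiring - the `GetForegroundTex()` accessor is an assumption here, modeled on what the ForegroundToImage-component does:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: show the cut-out user silhouettes on a UI RawImage.
public class ForegroundToRawImage : MonoBehaviour
{
    public RawImage targetImage;   // hypothetical target; assign in the Inspector

    void Update()
    {
        BackgroundRemovalManager backManager = BackgroundRemovalManager.Instance;
        if (backManager == null || targetImage == null)
            return;

        // GetForegroundTex() is assumed, by analogy with ForegroundToImage
        targetImage.texture = backManager.GetForegroundTex();
    }
}
```
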
| Scene | Description |
| ------ | ------ |
| BlobDetectionDemo / BlobDetectionDemo | The blob-detection demo shows how to detect blobs (compact areas) of pixels in the raw depth image, within the min/max distance configured for the sensor. It utilizes the KinectManager-component to manage the sensor and data, and BlobDetector-component to detect the blobs in the raw depth image coming from the sensor (see the sketch after this table). |
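
The blob detector works on the raw depth frame, which the KinectManager exposes as a flat array of millimeter values. A toy sketch of the first step - thresholding the depth pixels into an in-range mask (assuming the asset's `GetRawDepthMap()` getter):

```csharp
using UnityEngine;

// Toy sketch: count depth pixels inside a min/max distance band -
// the same raw input the BlobDetector-component clusters into blobs.
public class DepthBandCounter : MonoBehaviour
{
    public ushort minDistanceMm = 500;    // hypothetical band, in millimeters
    public ushort maxDistanceMm = 3000;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depthMap = manager.GetRawDepthMap();  // one ushort per pixel, in mm
        if (depthMap == null)
            return;

        int inRange = 0;
        for (int i = 0; i < depthMap.Length; i++)
        {
            if (depthMap[i] >= minDistanceMm && depthMap[i] <= maxDistanceMm)
                inRange++;
        }
        Debug.Log("Depth pixels in range: " + inRange);
    }
}
```
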
| Scene | Description |
| ------ | ------ |
| ColliderDemo / ColorColliderDemo | Demonstrates ‘virtual touch’ triggers between the user's hands and scene objects. The scene utilizes the KinectManager-component to manage the sensor and data, HandColorOverlayer-component to move the hand-spheres (with attached colliders) that overlay the user's hands, and JumpTrigger-component to make the colliding objects jump. The hand-spheres trigger collisions with the other objects in the scene (see the sketch after this table). |
| ColliderDemo / DepthColliderDemo2D | It shows how your silhouette can interact with virtual objects in a 2D scene. The scene utilizes the KinectManager-component to manage the sensor and data, DepthSpriteViewer-component to display the depth image and create the overlaying skeleton colliders, and EggSpawner-component to spawn the virtual objects (spheres) in the scene. |
| ColliderDemo / DepthColliderDemo3D | It shows how your silhouette can interact with virtual objects in a 3D scene. The scene utilizes the KinectManager-component to manage the sensor and data, DepthImageViewer-component to display the depth image and create the overlaying skeleton colliders, and EggSpawner-component to spawn the virtual objects (eggs) in the scene. |
| ColliderDemo / SkeletonColliderDemo | This scene shows how the user’s skeleton can interact with virtual objects in the scene. It utilizes the KinectManager-component to manage the sensor and data, SkeletonCollider-component to display the user’s skeleton and create bone colliders, and BallSpawner-component to spawn virtual objects in the scene. |
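
What the collider demos have in common is a kinematic trigger collider that is re-positioned to a tracked joint every frame, so ordinary Unity physics can react to the user. A minimal sketch, reusing the joint-polling pattern from above; the sphere setup itself is illustrative:

```csharp
using UnityEngine;

// Minimal sketch: a kinematic trigger sphere that follows the user's left hand -
// the same idea the collider demos use to let Unity physics react to the user.
[RequireComponent(typeof(SphereCollider))]
[RequireComponent(typeof(Rigidbody))]
public class HandTriggerSphere : MonoBehaviour
{
    void Awake()
    {
        GetComponent<SphereCollider>().isTrigger = true;
        GetComponent<Rigidbody>().isKinematic = true;  // moved by code, not by physics
    }

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();
        int hand = (int)KinectInterop.JointType.HandLeft;
        if (manager.IsJointTracked(userId, hand))
            transform.position = manager.GetJointPosition(userId, hand);
    }

    void OnTriggerEnter(Collider other)
    {
        // react to the 'virtual touch', e.g. make the object jump as JumpTrigger does
        Debug.Log("Hand touched: " + other.name);
    }
}
```
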
| Scene | Description |
| ------ | ------ |
| FaceTrackingDemo / KinectFaceTrackingDemo1 | Shows how the Kinect-generated face model (textured or not) can be controlled and optionally overlay the real user face on screen. The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to control the face model, and SetBackgroundImage-component to display the color camera feed on BackgroundImage. |
| FaceTrackingDemo / KinectFaceTrackingDemo2 | Demonstrates how to move a virtual object (a hat) along with the user's head (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to get precise head position and orientation data, ModelHatController-component to control the virtual object's transform, and SetBackgroundImage-component to display the color camera feed on BackgroundImage. |
| FaceTrackingDemo / KinectFaceTrackingDemo3 | It textures an object in the scene with the user's face. As a result, the user's head replaces the model’s head in the scene. The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to get precise head-position data, BackgroundRemovalManager-component to create a user image with blurred edges, and SetFaceTexture-component to display the face image. |
| FaceTrackingDemo / KinectFaceTrackingDemo4 | Shows how a rigged head model can overlay the real user face on screen. It also controls the facial expressions of the rigged model, according to the tracked face animation units (AUs). The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to manage the head and face data, ModelFaceController-component to control the rigged face model, and SetBackgroundImage-component to display the color camera feed on BackgroundImage. |
| FaceTrackingDemo / KinectFaceTrackingDemo5 | It shows how to overlay a tracked face point with a virtual object. The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to manage the head and face data, and FacePointOverlayer-component to overlay the tracked face point with the virtual object. |
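
The face-tracking demos all start from the head pose provided by the FacetrackingManager. A minimal sketch of the ModelHatController idea - note that the method names and signatures here are assumptions, based on that component's role:

```csharp
using UnityEngine;

// Minimal sketch: keep a virtual hat at the tracked head pose each frame -
// the core of what ModelHatController does.
public class SimpleHatFollower : MonoBehaviour
{
    public bool mirroredMovement = true;
    public Vector3 hatOffset = new Vector3(0f, 0.12f, 0f);  // hypothetical offset above the head

    void Update()
    {
        FacetrackingManager faceManager = FacetrackingManager.Instance;
        if (faceManager == null || !faceManager.IsFaceTrackingInitialized())
            return;

        // GetHeadPosition()/GetHeadRotation() signatures are assumed here,
        // modeled on how ModelHatController uses the FacetrackingManager
        transform.position = faceManager.GetHeadPosition(mirroredMovement) + hatOffset;
        transform.rotation = faceManager.GetHeadRotation(mirroredMovement);
    }
}
```
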
| Scene | Description |
| ------ | ------ |
| FittingRoomDemo / KinectFittingRoom1 | This is the primary dressing-room demo scene. It demonstrates the use of a calibration pose, model overlay and blending with the color camera feed, as well as interaction with the Unity UI and taking pictures (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, KinectGestures-component to detect user gestures, InteractionManager-component to manage the user interactions with the UI, InteractionInputModule to convey the user interactions to the Unity event system, OverlayController-component to display the color camera feed on BackgroundImage, CategorySelector-component to manage the model categories, ModelSelector-component to manage the model-menu population and model selection, PhotoShooter-component to manage the photo shooting, and UserBodyBlender-component to blend the clothing model with the color camera feed. Two optional components - CloudFaceManager and CloudFaceDetector - detect the user's age and gender, in order to list only the appropriate clothing categories. |
| FittingRoomDemo / KinectFittingRoom2 | This is the second dressing-room demo scene. It shows how a general humanoid model (like Ironman, Cinderella, a Ninja turtle, etc.) can overlay the user's body on screen. The scene utilizes the KinectManager-component to manage the sensor and data, KinectGestures-component to detect user gestures, AvatarController-component to control the model, AvatarScaler-component to scale the model according to the user's dimensions, OverlayController-component to display the color camera feed on BackgroundImage, and optionally UserBodyBlender-component to blend the humanoid model with the color camera feed. |
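
One piece of the fitting-room flow is easy to show in isolation: the picture-taking step. Here is a PhotoShooter-like sketch using Unity's stock screenshot API (the actual PhotoShooter-component may be implemented differently):

```csharp
using UnityEngine;

// Minimal PhotoShooter-like sketch: save a screenshot when invoked,
// e.g. wired to a UI button the user "presses" via the InteractionManager.
public class SimplePhotoShooter : MonoBehaviour
{
    private int photoIndex = 0;

    public void TakePhoto()
    {
        string fileName = "photo" + photoIndex++ + ".png";  // hypothetical naming scheme
        ScreenCapture.CaptureScreenshot(fileName);
        Debug.Log("Saved " + fileName);
    }
}
```

Wired to a UI Button's OnClick event, this works with hand presses conveyed through the InteractionInputModule just as well as with mouse clicks.
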
| Scene | Description |
| ------ | ------ |
| GesturesDemo / KinectGesturesDemo1 | Demonstrates the detection of discrete gestures (here: hand swipes), used to control a presentation cube (see the listener sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, KinectGestures-component to detect user gestures, CubeGestureListener-component to set up and listen for the swipe gestures, and CubePresentationScript-component to control the presentation cube. |
| GesturesDemo / KinectGesturesDemo2 | Demonstrates the detection of continuous gestures (here: wheel, zoom-out & zoom-in), used to rotate and zoom a 3D model. The scene utilizes the KinectManager-component to manage the sensor and data, KinectGestures-component to detect user gestures, ModelGestureListener-component to set up and listen for the wheel and zoom gestures, and ModelPresentationScript-component to control the 3D model in the scene. |
| GesturesDemo / VisualGesturesDemo | Here you can see how to detect and use visual gestures (here: the Seated gesture), i.e. the ones created with the Visual Gesture Builder. The scene utilizes the KinectManager-component to manage the sensor and data, VisualGestureManager-component to detect the visual gestures, and SimpleVisualGestureListener-component to listen for the detected discrete and continuous visual gestures. |
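
All of these scenes follow the same contract: a listener component implements `KinectGestures.GestureListenerInterface`, registers the gestures it is interested in when a user is detected, and reacts when they complete. A minimal sketch, with the interface methods modeled on SimpleGestureListener:

```csharp
using UnityEngine;

// Minimal gesture-listener sketch: detect left/right swipes of the detected users.
public class SwipeListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(long userId, int userIndex)
    {
        // subscribe to the gestures this listener wants to track
        KinectManager manager = KinectManager.Instance;
        manager.DetectGesture(userId, KinectGestures.Gestures.SwipeLeft);
        manager.DetectGesture(userId, KinectGestures.Gestures.SwipeRight);
    }

    public void UserLost(long userId, int userIndex) { }

    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos) { }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        Debug.Log("Gesture completed: " + gesture);
        return true;  // true resets the gesture state, so it can be detected again
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}
```
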
| Scene | Description |
| ------ | ------ |
| InteractionDemo / KinectInteractionDemo1 | It demonstrates the hand-cursor control and hand interactions, by means of grips, releases, presses & clicks (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, InteractionManager-component to manage the hand cursor and user interactions, GrabDropScript-component to control (drag & drop) virtual objects in the scene, and InteractionInputModule to convey the user interactions to the Unity event system. |
| InteractionDemo / KinectInteractionDemo2 | Shows how to use hand interactions to grip and turn a virtual object in all directions. The scene utilizes the KinectManager-component to manage the sensor and data, InteractionManager-component to manage the hand cursor and user interactions, and GrabRotateScript-component to grip and turn the HandleObject in the scene. |
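
The interaction demos poll the InteractionManager for the last grip/release event and for the normalized hand-cursor position. A minimal sketch of that loop (the method names are assumptions, based on how GrabDropScript and similar components use the manager):

```csharp
using UnityEngine;

// Minimal sketch: log grips/releases of the right hand and its screen position.
public class HandEventLogger : MonoBehaviour
{
    private InteractionManager.HandEventType lastEvent = InteractionManager.HandEventType.None;

    void Update()
    {
        InteractionManager manager = InteractionManager.Instance;
        if (manager == null || !manager.IsInteractionInited())
            return;

        InteractionManager.HandEventType handEvent = manager.GetLastRightHandEvent();
        if (handEvent != lastEvent)
        {
            lastEvent = handEvent;
            // the hand-cursor position is normalized (0..1 in both axes)
            Debug.Log(handEvent + " at " + manager.GetRightHandScreenPos());
        }
    }
}
```
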
| Scene | Description |
| ------ | ------ |
| MocapAnimatorDemo / MocapAnimatorDemo | The Mocap-animator scene provides a simple tool to capture the user's motion as a Unity animation. The scene utilizes the KinectManager-component to manage the sensor and data, MocapRecorder-component to capture and record the user's motions as a Unity animation asset, MocapPlayer-component to play the recorded animation on a humanoid model, and AvatarController-component to provide the motion data. |
| MovieSequenceDemo / KinectMovieDemo | This demo scene shows how to control a set of movie frames with the user's position in front of the sensor, producing a short video that plays forward and backward (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, and UserMovieSequence-component to set the current movie frame, according to the current user position. |
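
The movie demo boils down to mapping the user's distance from the sensor onto a frame index. A sketch of the idea behind UserMovieSequence - the distance range and the frame handling here are illustrative:

```csharp
using UnityEngine;

// Sketch: map the user's distance to the sensor onto a movie-frame index.
[RequireComponent(typeof(Renderer))]
public class DistanceToFrame : MonoBehaviour
{
    public Texture2D[] movieFrames;    // the frame sequence, assigned in the Inspector
    public float minDistance = 1f;     // hypothetical range, in meters
    public float maxDistance = 3f;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected() ||
            movieFrames == null || movieFrames.Length == 0)
            return;

        // the Z coordinate of the user position is the distance to the sensor
        float userZ = manager.GetUserPosition(manager.GetPrimaryUserID()).z;
        float t = Mathf.InverseLerp(minDistance, maxDistance, userZ);
        int frame = Mathf.Clamp(Mathf.RoundToInt(t * (movieFrames.Length - 1)),
                                0, movieFrames.Length - 1);

        GetComponent<Renderer>().material.mainTexture = movieFrames[frame];
    }
}
```
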
| Scene | Description |
| ------ | ------ |
| MultiSceneDemo / Scene0-StartupScene, Scene1-AvatarsDemo & Scene2-GesturesDemo | Demonstrates how to use the KinectManager and other Kinect-related components across multiple scenes (see the sketch after this table). This set of scenes utilizes the KinectManager-component to manage the sensor and data, KinectGestures-component to detect user gestures, LoadFirstLevel-component in the startup scene to load the first real scene, LoadLevelWithDelay-component in the real scenes to cycle between the scenes, and LocateAvatarsAndGestureListeners-component to refresh the KinectManager's lists for the current scene. Other Kinect-related components like AvatarController, InteractionManager and KinectGestures are utilized in the real scenes (1 & 2), but they relate to the specifics of the respective scene, not to using the single KinectManager across multiple scenes. |
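
The key point of the multi-scene setup is that the single KinectManager from the startup scene persists across scene loads, while each real scene only refreshes the manager's component lists. A LoadLevelWithDelay-like sketch, using Unity's standard SceneManager (the delay and scene index are illustrative):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// LoadLevelWithDelay-like sketch: cycle to the next scene after a delay,
// relying on the single KinectManager that persists across scene loads.
public class SceneCycler : MonoBehaviour
{
    public float waitSeconds = 30f;   // hypothetical delay
    public int nextSceneIndex = 1;

    void Start()
    {
        Invoke(nameof(LoadNext), waitSeconds);
    }

    void LoadNext()
    {
        // the persistent KinectManager instance survives this scene load
        if (KinectManager.Instance != null && KinectManager.Instance.IsInitialized())
            SceneManager.LoadScene(nextSceneIndex);
    }
}
```
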
| Scene | Description |
| ------ | ------ |
| OverlayDemo / KinectOverlayDemo1 | It is the basic joint-overlay demo, showing how a virtual ball overlays the right hand of the user on screen (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, and JointOverlayer-component to overlay the given body joint of the tracked user with the given virtual object and, optionally, display the color camera feed on the scene background. |
| OverlayDemo / KinectOverlayDemo2 | This is a skeleton-overlay demo, with green balls overlaying the body joints and lines between them representing the bones. The scene utilizes the KinectManager-component to manage the sensor and data, and SkeletonOverlayer-component to overlay the body joints of the tracked user with the given objects and lines between them and, optionally, display the color camera feed on the scene background. |
| OverlayDemo / KinectOverlayDemo3 | This is a simple ‘draw-in-the-air’ application that utilizes hand grips and releases, and overlays the hand positions with lines. The scene utilizes the KinectManager-component to manage the sensor and data, InteractionManager-component to manage the user interactions, HandOverlayer-component to overlay the user's hand with the hand cursor, and LinePainter-component to draw a line while the user's hand is closed. |
| OverlayDemo / KinectPhotoBooth | This is a photo-booth application demo, courtesy of Sophiya Siraj. It uses swipe gestures to change the 2D models overlaying the user on screen. The scene utilizes the KinectManager-component to manage the sensor and data, JointOverlayer-components to overlay the body joints of the tracked user with virtual objects, KinectGestures-component to detect user gestures, InteractionManager-component to manage the user interactions, PhotoBoothController-component to control the changes of the overlaying models, and PhotoShooter-component to manage the photo shooting. |
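
The overlay demos all rely on one projection step: converting a joint's 3D position into a position over the color-camera image. A minimal JointOverlayer-like sketch - the `GetJointPosColorOverlay()` helper and its signature are assumptions based on this asset, and the background rect is simplified to the camera's pixel rect:

```csharp
using UnityEngine;

// Minimal sketch: keep a ball over the user's right hand in the color camera feed.
public class HandBallOverlay : MonoBehaviour
{
    public Camera uiCamera;   // the camera rendering the color-feed background

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected() || uiCamera == null)
            return;

        long userId = manager.GetPrimaryUserID();
        int joint = (int)KinectInterop.JointType.HandRight;

        // assumed helper: maps the joint to a world position that overlays
        // the color image, as rendered by the given camera into the given rect
        Rect backgroundRect = uiCamera.pixelRect;
        Vector3 overlayPos = manager.GetJointPosColorOverlay(userId, joint, uiCamera, backgroundRect);
        if (overlayPos != Vector3.zero)
            transform.position = overlayPos;
    }
}
```
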
| Scene | Description |
| ------ | ------ |
| PhysicsDemo / KinectPhysicsDemo | This is a ball-physics demo, where the user can raise a hand to get a virtual ball on screen, then throw it at the barrel in the scene. The scene utilizes the KinectManager-component to manage the sensor and data, ProjectorCamera-component to match the projector's point of view, and BallController-component to control the ball's state and physics in the scene. |
| ProjectorDemo / KinectProjectorDemo | This is a basic projector-overlay demo, where the skeleton displayed by the projector should overlay the tracked user's body in front of the sensor. The skeleton may be replaced with a humanoid model, controlled by the AvatarController and AvatarScaler-components. The projector needs to be calibrated first - see this tip. The scene utilizes the KinectManager-component to manage the sensor and data, ProjectorCamera-component to match the projector's point of view in the room setup, and SkeletonProjection-component to display the user's skeleton overlay. |
| RecorderDemo / KinectRecorderDemo | This is a simple body-data recorder and player, controlled by voice commands or key presses. The recorded body-data files may be replayed later on the same or another machine, without a physical sensor connected. The scene utilizes the KinectManager-component to manage the sensor and data, FacetrackingManager-component to get the precise head position, SpeechManager to manage the Kinect speech recognition, KinectRecorderPlayer-component to manage the recording and replaying of the body data to/from a file, KinectPlayerController-component to process the voice commands and key presses, and CubemanController-components to visualize the captured body data of the tracked users on the Cubeman-objects in the scene. |
| SpeechRecognitionDemo / KinectSpeechRecognition | Demonstrates how a grammar of preconfigured voice commands can be used to control a virtual robot on screen (see the sketch after this table). The grammar file ‘SpeechGrammar.grxml’ is located in the Assets/Resources-folder. The scene utilizes the KinectManager-component to manage the sensor and data, SpeechManager to manage the Kinect speech recognition, BotControlScript-component to process the recognized voice commands and control the robot, and GameControlScript-component to create the fence and display the list of voice commands. |
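
The recorder and speech-recognition demos both poll the SpeechManager for the last recognized phrase tag and then act on it. A minimal sketch of that loop - the method names are assumptions based on this asset's SpeechManager, and the tags come from the grammar file:

```csharp
using UnityEngine;

// Minimal sketch: poll for recognized voice commands and act on their tags.
public class VoiceCommandPoller : MonoBehaviour
{
    void Update()
    {
        SpeechManager speechManager = SpeechManager.Instance;  // assumed singleton accessor
        if (speechManager == null || !speechManager.IsSapiInitialized())
            return;

        if (speechManager.IsPhraseRecognized())
        {
            string phraseTag = speechManager.GetPhraseTagRecognized();
            Debug.Log("Voice command: " + phraseTag);  // a tag defined in SpeechGrammar.grxml

            speechManager.ClearPhraseRecognized();  // reset, so the next phrase can be caught
        }
    }
}
```
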
| Scene | Description |
| ------ | ------ |
| VariousDemos / KinectAudioTracker | This is a very simple demo, whose only purpose is to show the direction of the detected audio source, by means of the audio-beam angle with respect to the sensor. The scene utilizes the KinectManager-component to manage the sensor and data, and KinectAudioTracker-component to estimate the detected audio-beam angle, as well as its confidence. |
| VariousDemos / KinectHandObjectChecker | It shows how the sensor can detect an object in the user's hands, for instance a book that the user passes between his hands. The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundDepthImage-component to display the depth image on the scene background, and HandObjectChecker-component to check for objects in the user's hands. |
| VariousDemos / KinectHeightEstimator | This is a simple user-body measuring tool. It uses the depth image to estimate the user's height and several other body measures. The scene utilizes the KinectManager-component to manage the sensor and data, BodySlicer-component to estimate the user's height and other body measures, and HeightEstimator-component to visualize the detected height and body measures. |
| VariousDemos / KinectHolographicViewer | This is a simple holographic-view demo, courtesy of Davy Loots. It changes the camera projection matrix, according to the viewer's position in front of the sensor. The scene utilizes the KinectManager-component to manage the sensor and data, and SimpleHolographicCamera-component to manage the camera projection matrix based on the viewer's position. |
| VariousDemos / KinectPoseDetector | This is a simple pose-detection demo. It calculates the differences between the bone orientations of the model and of the user, and estimates a pose-matching factor (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, PoseDetectorScript-component to estimate the pose-matching factor, PoseModelHelper-component to provide the bone-orientation data for each model, and AvatarController-component to control the user's avatar model. |
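
The pose detector's matching factor is essentially an average similarity between corresponding bone directions of the model and of the user. A toy version of that computation - the bone list and weighting in the real PoseDetectorScript may differ:

```csharp
using UnityEngine;

// Toy sketch: pose-matching factor as the mean cosine similarity
// between corresponding bone-direction vectors of model and user.
public static class PoseMatch
{
    // each pair holds the same bone's direction on the model and on the user
    public static float MatchFactor(Vector3[] modelBoneDirs, Vector3[] userBoneDirs)
    {
        int count = Mathf.Min(modelBoneDirs.Length, userBoneDirs.Length);
        if (count == 0)
            return 0f;

        float sum = 0f;
        for (int i = 0; i < count; i++)
        {
            // 1 = same direction, 0 = orthogonal, -1 = opposite
            sum += Vector3.Dot(modelBoneDirs[i].normalized, userBoneDirs[i].normalized);
        }
        // map the mean from [-1, 1] to a [0, 1] matching factor
        return (sum / count + 1f) * 0.5f;
    }
}
```

A factor of 1 means all compared bones point the same way; 0.5 means they are orthogonal on average.
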
| Scene | Description |
| ------ | ------ |
| VisualizerDemo / KinectSceneVisualizer | Converts the real environment, as seen by the sensor, to a mesh and overlays the color camera feed with it (see the sketch after this table). The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundColorImage-component to display the color camera feed on the scene background, SceneMeshVisualizer-component to convert the real environment to a mesh, and MousePointOverlayer-component to put a virtual object (a ball) in the scene at the position where the user clicks with the mouse. |
| VisualizerDemo / KinectUserVisualizer | Converts the user's body, as seen by the sensor, to a mesh and puts it into the scene. This way the user can interact with the other virtual objects in the scene. The scene utilizes the KinectManager-component to manage the sensor and data, BackgroundRemovalManager-component to manage the background removal and smooth the edges of the detected bodies, UserMeshVisualizer-component to convert the user's body to a mesh, PointmanController-component to control the Pointman-object in the scene (used for collision detection), and BallSpawner-component to spawn virtual objects (balls and cubes) in the scene. |
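
Both visualizer scenes are built on the same primitive: converting depth pixels into world-space points, from which a mesh is rebuilt every frame. A heavily simplified sketch of the sampling step - the `MapDepthPointToSpaceCoords()` helper and its signature are assumptions based on this asset's KinectManager:

```csharp
using UnityEngine;

// Simplified sketch: sample the raw depth map into a coarse grid of 3D points -
// the raw material from which SceneMeshVisualizer/UserMeshVisualizer build meshes.
public class DepthPointSampler : MonoBehaviour
{
    public int sampleStep = 8;   // take every 8th pixel, to keep the point count small

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depthMap = manager.GetRawDepthMap();
        int width = manager.GetDepthImageWidth();
        int height = manager.GetDepthImageHeight();
        if (depthMap == null)
            return;

        for (int y = 0; y < height; y += sampleStep)
        {
            for (int x = 0; x < width; x += sampleStep)
            {
                ushort depth = depthMap[y * width + x];
                if (depth == 0)
                    continue;  // 0 means no depth reading at this pixel

                // assumed helper: depth pixel + depth value -> world-space position
                Vector3 point = manager.MapDepthPointToSpaceCoords(new Vector2(x, y), depth, true);
                Debug.DrawRay(point, Vector3.up * 0.01f);  // visualize as tiny marks in the Scene view
            }
        }
    }
}
```
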