
Demo scenes

Rumen Filkov edited this page Jan 25, 2024 · 4 revisions

Short Descriptions of the Demo Scenes

Here are short descriptions of the available demo scenes, along with the major components they utilize and demonstrate.

| Scene | Description |
| --- | --- |
| AvatarDemo / AvatarDemo1 | The scene shows two avatars controlled by the user, from a third-person perspective. It utilizes the KinectManager-component to manage the sensor and data, AvatarController-components to control each of the two avatars, as well as the SimpleGestureListener-component to demonstrate the gesture-detection process. |
| AvatarDemo / AvatarDemo2 | This scene utilizes an offset node to make the avatar's motion relative to a game object. It utilizes the KinectManager-component to manage the sensor and data, and the AvatarControllerClassic-component to control the upper body of the avatar. |
| AvatarDemo / AvatarDemo3 | This demo shows how to instantiate and remove avatars in the scene, to match the users in front of the camera. It utilizes the KinectManager-component to manage the sensor and data, the UserAvatarMatcher-component to instantiate and remove avatars as users come and go, as well as AvatarController-components to control each of the instantiated avatars. |
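The avatar components hide most of the plumbing, but the KinectManager can also be queried directly from your own scripts. Below is a minimal sketch of polling the primary user's joint data; the method names follow the asset's KinectManager API, but please verify the exact signatures against the scripts in your copy of the package:

```csharp
using UnityEngine;
using com.rfilkov.kinect;  // namespace of the asset's runtime scripts

public class PrimaryUserTracker : MonoBehaviour
{
    void Update()
    {
        // KinectManager is a singleton, created by the component in the scene
        KinectManager kinectManager = KinectManager.Instance;
        if (kinectManager == null || !kinectManager.IsInitialized())
            return;

        if (kinectManager.GetUsersCount() > 0)
        {
            ulong userId = kinectManager.GetPrimaryUserID();

            // joint position in sensor space, in meters
            Vector3 handPos = kinectManager.GetJointPosition(userId, KinectInterop.JointType.HandRight);
            Debug.Log("Right hand at: " + handPos);
        }
    }
}
```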
| Scene | Description |
| --- | --- |
| BackgroundRemovalDemo / BackgroundRemovalDemo1 | This demo shows how to display the user silhouettes on a virtual background, by utilizing the detected body indices. It utilizes the KinectManager-component to manage the sensor and data, the BackgroundRemovalManager-component to render the user silhouettes, the BackgroundRemovalByBodyIndex-component to filter the silhouettes according to the detected body indices, and the PortraitBackground-component to manage different screen resolutions. |
| BackgroundRemovalDemo / BackgroundRemovalDemo2 | This scene demonstrates how to display part of the virtual scene environment within the user's silhouette. It utilizes the KinectManager-component to manage the sensor and data, the BackgroundRemovalManager-component to render the user silhouettes, and the PortraitBackground-component to manage different screen resolutions. |
| BackgroundRemovalDemo / BackgroundRemovalDemo3 | This demo shows how to display the user silhouettes in a 3D virtual environment, according to the user's distance from the sensor. It utilizes the KinectManager-component to manage the sensor and data, the BackgroundRemovalManager-component to render the user silhouettes, the ForegroundToRenderer-component to render the user's silhouette on a game object, the UserImageMover-component to move the user's image object according to the user's distance from the sensor, the ColorImageJointOverlayer-component to overlay the user's joint with a virtual object, and the PortraitBackground-component to manage different screen resolutions. |
| Scene | Description |
| --- | --- |
| BlobDetectionDemo / BlobDetectionDemo | The blob-detection demo shows how to detect blobs (compact areas) of pixels in the raw depth image, within the min/max distance configured for the sensor. It utilizes the KinectManager-component to manage the sensor and data, the BlobDetector-component to detect the blobs in the raw depth image coming from the sensor, and the BackgroundDepthImage-component to display the depth-camera image on the scene background. |
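Blob detection of this kind boils down to connected-component labeling over the depth pixels that fall within the configured min/max distance. Here is a simplified, engine-independent sketch of the idea (an illustration of the technique, not the actual BlobDetector implementation):

```csharp
using System.Collections.Generic;

public static class DepthBlobs
{
    // Labels 4-connected regions of depth pixels within [minDepth, maxDepth] (in mm)
    // and returns the pixel count of each blob found.
    public static List<int> FindBlobSizes(ushort[] depth, int width, int height,
                                          ushort minDepth, ushort maxDepth)
    {
        bool[] visited = new bool[depth.Length];
        var blobSizes = new List<int>();
        var stack = new Stack<int>();

        for (int start = 0; start < depth.Length; start++)
        {
            if (visited[start] || depth[start] < minDepth || depth[start] > maxDepth)
                continue;

            // flood-fill one blob, counting its pixels
            int size = 0;
            stack.Push(start);
            visited[start] = true;

            while (stack.Count > 0)
            {
                int i = stack.Pop();
                size++;

                int x = i % width;
                foreach (int n in new[] { i - 1, i + 1, i - width, i + width })
                {
                    if (n < 0 || n >= depth.Length) continue;                       // top/bottom edges
                    if ((n == i - 1 && x == 0) || (n == i + 1 && x == width - 1))   // left/right wrap
                        continue;
                    if (visited[n] || depth[n] < minDepth || depth[n] > maxDepth)
                        continue;
                    visited[n] = true;
                    stack.Push(n);
                }
            }

            blobSizes.Add(size);
        }

        return blobSizes;
    }
}
```

In practice you would also track each blob's bounding box and centroid, so the blob can be mapped back to a position in the scene.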
| Scene | Description |
| --- | --- |
| ColliderDemo / ColorColliderDemo | This scene demonstrates how to trigger a 'virtual touch' between the user's hands and virtual scene objects. It utilizes the KinectManager-component to manage the sensor and data, the HandColorOverlayer-component to move the overlaying hand objects with colliders, the JumpTrigger-component to detect virtual-object collisions, and the BackgroundColorImage-component to display the color-camera image on the scene background. |
| ColliderDemo / DepthColliderDemo2D | This scene shows how the user's silhouette can interact with virtual objects in a 2D scene. It utilizes the KinectManager-component to manage the sensor and data, the DepthSpriteViewer-component to display the user's silhouette and create the overlaying skeleton colliders, and the EggSpawner-component to spawn the virtual objects (spheres) in the scene. |
| ColliderDemo / DepthColliderDemo3D | This scene shows how the user's silhouette can interact with virtual objects in a 3D scene. It utilizes the KinectManager-component to manage the sensor and data, the DepthImageViewer-component to display the user's silhouette and create the overlaying skeleton colliders, and the EggSpawner-component to spawn the virtual objects (eggs) in the scene. |
| ColliderDemo / SkeletonColliderDemo | This scene shows how the user's skeleton can interact with virtual objects in the scene. It utilizes the KinectManager-component to manage the sensor and data, the SkeletonCollider-component to display the user's skeleton and create bone colliders, and the BallSpawner-component to spawn virtual objects in the scene. |
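The 'virtual touch' in these scenes is plain Unity physics: the overlay objects that follow the user's hands or bones carry colliders, and the scene objects react in `OnTriggerEnter`. A hypothetical reactor in the spirit of JumpTrigger (the "PlayerHand" tag and the force value are illustrative assumptions, not names from the asset):

```csharp
using UnityEngine;

// Attach to a scene object that has a Rigidbody and a Collider marked 'Is Trigger'.
// Reacts when a hand-overlay object (tagged "PlayerHand" here) touches it.
public class TouchReactor : MonoBehaviour
{
    public float jumpForce = 5f;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("PlayerHand"))
            return;

        Rigidbody body = GetComponent<Rigidbody>();
        if (body != null)
        {
            // push the object upward when the user's hand touches it
            body.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }
    }
}
```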
| Scene | Description |
| --- | --- |
| FittingRoomDemo / FittingRoomDemo1 | This is the main dressing-room demo scene. It demonstrates the use of a calibration pose, model overlay and blending between virtual and physical objects. It utilizes the KinectManager-component to manage the sensor and data, the FollowSensorTransform-component to keep the camera pose in sync with the sensor, the BackgroundColorImage-component to display the color-camera feed on BackgroundImage2, the CategorySelector-component to change model categories as needed, the ModelSelector-component to populate the model menu and manage model selection, the PhotoShooter-component to manage taking photos, and the UserBodyBlender-component to blend the virtual objects with the physical ones. |
| FittingRoomDemo / FittingRoomDemo2 | This is the second dressing-room demo scene. It shows how to overlay the user's body with an arbitrary humanoid model (like Ironman, Cinderella, Ninja turtle, etc.). It utilizes the KinectManager-component to manage the sensor and data, the FollowSensorTransform-component to keep the camera pose in sync with the sensor, the AvatarController-component to control the virtual model, the AvatarScaler-component to scale the model according to the user's dimensions, the BackgroundColorImage-component to display the color-camera feed on BackgroundImage2, and optionally the UserBodyBlender-component to blend the virtual objects with the physical ones. |
| Scene | Description |
| --- | --- |
| GestureDemo / GestureDemo1 | This scene demonstrates the detection of discrete gestures (swipe-left, swipe-right & swipe-up in this demo), used to control a presentation cube. It utilizes the KinectManager-component to manage the sensor and data, the CubeGestureListener-component to listen for the swipe gestures, and the CubePresentationScript-component to control the presentation cube in the scene. |
| GestureDemo / GestureDemo2 | This scene demonstrates the detection of continuous gestures (wheel, zoom-out & zoom-in in this demo), used to rotate and zoom a 3D model. It utilizes the KinectManager-component to manage the sensor and data, the ModelGestureListener-component to set up and listen for the wheel and zoom gestures, and the ModelPresentationScript-component to control the 3D model in the scene. |
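A gesture listener is a component that implements the asset's GestureListenerInterface and gets notified as gestures progress and complete. The sketch below is modeled on the SimpleGestureListener script; the interface methods and the gesture-manager call follow that script, but their exact signatures may differ between package versions, so check the demo scripts in your copy:

```csharp
using UnityEngine;
using com.rfilkov.kinect;  // namespace of the asset's runtime scripts

// Listens for swipe gestures of each detected user.
public class SwipeListener : MonoBehaviour, GestureListenerInterface
{
    public void UserDetected(ulong userId, int userIndex)
    {
        // tell the gesture manager which gestures to track for this user
        // (SimpleGestureListener does this via the KinectManager's gesture manager)
        KinectGestureManager gestureManager = KinectManager.Instance.gestureManager;
        gestureManager.DetectGesture(userId, GestureType.SwipeLeft);
        gestureManager.DetectGesture(userId, GestureType.SwipeRight);
    }

    public void UserLost(ulong userId, int userIndex) { }

    public void GestureInProgress(ulong userId, int userIndex, GestureType gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos) { }

    public bool GestureCompleted(ulong userId, int userIndex, GestureType gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        Debug.Log(gesture + " completed by user " + userIndex);
        return true;  // reset the gesture, so it can be detected again
    }

    public bool GestureCancelled(ulong userId, int userIndex, GestureType gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}
```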
| Scene | Description |
| --- | --- |
| GreenScreenDemo / GreenScreenDemo1 | The scene shows how to utilize a green screen for background segmentation. It uses the KinectManager-component to manage the sensor and data, the BackgroundRemovalManager-component to render the real environment in the scene, the BackgroundRemovalByGreenScreen-component to filter out part of the real environment, according to its similarity or difference to the green-screen color, and the PortraitBackground-component to manage different screen resolutions. |
| GreenScreenDemo / GreenScreenDemo2 | This scene shows how to utilize a green screen for background segmentation and volumetric rendering. It uses the KinectManager-component to manage the sensor and data, the BackgroundRemovalManager-component to render the real environment in the scene, the BackgroundRemovalByGreenScreen-component to filter out part of the real environment, according to its similarity or difference to the green-screen color, the ForegroundBlendRenderer-component to provide volumetric rendering and lighting of the filtered environment, and the PortraitBackground-component to manage different screen resolutions. |
| Scene | Description |
| --- | --- |
| MocapAnimatorDemo / MocapAnimatorDemo | The Mocap-animator scene provides a simple tool to capture the user's motion as a Unity animation. It utilizes the KinectManager-component to manage the sensor and data, the MocapRecorder-component to capture and record the user's motion as a Unity animation asset, the MocapPlayer-component to play the recorded animation on a humanoid model, and the AvatarController-component to provide the motion data. |
| Scene | Description |
| --- | --- |
| MultiSceneDemo / Scene0-StartupScene, Scene1-AvatarsDemo & Scene2-GesturesDemo | This set of scenes shows how to use the KinectManager and other Kinect-related components across multiple scenes. It utilizes the KinectManager-component to manage the sensor and data, the LoadFirstLevel-component in the startup scene to load the first real scene, the LoadLevelWithDelay-component in the real scenes to cycle between scenes, and the RefreshGestureListeners-component to refresh the list of gesture listeners for the current scene. The other Kinect-related components, like AvatarController, the gesture listeners, etc., are utilized in the real scenes (1 & 2), but they relate to the specifics of the respective scene, not to using a single KinectManager across multiple scenes. |
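Keeping one KinectManager alive across scene loads follows the standard Unity persistent-singleton pattern, so the per-scene components only need to switch scenes and re-register their listeners. A generic sketch of delayed scene cycling, illustrative of the approach rather than the actual LoadLevelWithDelay script (the delay value and index handling are assumptions):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Loads the next scene in the build settings after a delay,
// wrapping back to scene 1 (scene 0 being the startup scene).
public class SceneCycler : MonoBehaviour
{
    public float waitSeconds = 30f;

    void Start()
    {
        Invoke(nameof(LoadNextScene), waitSeconds);
    }

    void LoadNextScene()
    {
        int next = SceneManager.GetActiveScene().buildIndex + 1;
        if (next >= SceneManager.sceneCountInBuildSettings)
            next = 1;  // skip the startup scene at index 0

        SceneManager.LoadScene(next);
    }
}
```

The KinectManager itself would call `DontDestroyOnLoad(gameObject)` once, so the sensor stays open while the scenes change around it.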
| Scene | Description |
| --- | --- |
| NetworkDemo / NetServer | This scene acts as a network server for the streams of the connected sensor. It utilizes the KinectManager-component to manage the sensor and data, the KinectNetServer-component to listen for clients and send the sensor data over the network, and the UserAvatarMatcher-component to instantiate and remove avatars in the scene as users come and go. |
| NetworkDemo / NetClientDemo1 | This scene demonstrates how to use the network-client sensor (in combination with KinectNetServer) to remotely control the avatars in the scene. It utilizes the KinectManager-component to manage the sensor and data, the NetClientInterface-component to act as a "network" sensor by receiving the remote sensor's data from the server over the network, and the UserAvatarMatcher-component to instantiate and remove avatars in the scene as users come and go. |
| Scene | Description |
| --- | --- |
| OverlayDemo / OverlayDemo1 | This is the most basic joint-overlay demo, showing how a virtual object can overlay the user's joint (right shoulder) on screen. It utilizes the KinectManager-component to manage the sensor and data, the JointOverlayer-component to overlay the user's body joint with the given virtual object, and the BackgroundColorImage-component to display the color-camera image on the scene background. |
| OverlayDemo / OverlayDemo2 | This is a skeleton-overlay demo, with green balls overlaying the body joints, and lines between them to represent the bones. It utilizes the KinectManager-component to manage the sensor and data, the SkeletonOverlayer-component to overlay the user's body joints with the virtual objects and lines, and the BackgroundColorImage-component to display the color-camera image on the scene background. |
| OverlayDemo / KinectPhotoBooth | This is a photo-booth demo, overlaying specific body joints of the user with 2D images. It utilizes the KinectManager-component to manage the sensor and data, the InteractionManager-component to manage the hand interactions, the JointOverlayer-component to overlay the user's body joints with 2D images, the PhotoBoothController-component to detect the user's swipe gestures and update the overlay images accordingly, and the BackgroundColorImage-component to display the color-camera image on the scene background. |
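Overlaying a joint means taking the joint's tracked 3D position each frame and moving a virtual object there, so it appears on top of the user in the camera feed. A naive sketch of the idea (the real JointOverlayer additionally maps the joint into color-camera coordinates via the asset's mapping helpers, for an exact fit over the video; the API calls below follow the KinectManager conventions and should be checked against your package version):

```csharp
using UnityEngine;
using com.rfilkov.kinect;  // namespace of the asset's runtime scripts

// Moves an overlay object to a tracked joint of the primary user.
public class SimpleJointOverlay : MonoBehaviour
{
    public Transform overlayObject;  // the virtual object to move
    public KinectInterop.JointType trackedJoint = KinectInterop.JointType.ShoulderRight;
    public float smoothFactor = 10f;

    void Update()
    {
        KinectManager km = KinectManager.Instance;
        if (km == null || !km.IsInitialized() || km.GetUsersCount() == 0)
            return;

        ulong userId = km.GetPrimaryUserID();

        // joint position in sensor space, in meters
        Vector3 jointPos = km.GetJointPosition(userId, trackedJoint);

        // smooth the motion a bit, to reduce tracking jitter
        overlayObject.position = Vector3.Lerp(overlayObject.position, jointPos,
                                              smoothFactor * Time.deltaTime);
    }
}
```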
| Scene | Description |
| --- | --- |
| PointCloudDemo / SceneMeshDemo | This demo shows how to integrate part of the real environment as a point cloud into the Unity scene at runtime. It utilizes the KinectManager-component to manage the sensor and data, the SceneMeshRendererGpu-component to render the real environment into the scene, and the FollowSensorTransform-component to move the mesh center according to the sensor's position and rotation. |
| PointCloudDemo / UserMeshDemo | This demo shows how to integrate the real user as a point cloud into the Unity scene at runtime. It utilizes the KinectManager-component to manage the sensor and data, the UserMeshRendererGpu-component to render the user into the scene, and the FollowSensorTransform-component to move the mesh center according to the sensor's position and rotation. |
| PointCloudDemo / VfxPointCloudDemo | This scene demonstrates how to combine the spatial and color data provided by the sensor with visual-effect graphs, to create a point cloud of the environment, with or without visual effects. It utilizes the KinectManager-component to manage the sensor and data, as well as the sensor-interface settings to create the attribute textures needed by the VFX graph. Please see the 'How to run VfxPointCloudDemo-scene' section below. |
| Scene | Description |
| --- | --- |
| PoseDetectionDemo / PoseDetectionDemo1 | This is a simple, static pose-detection demo. It calculates the differences between the bone orientations of a static pose model and the user, and estimates the pose-matching factor. It utilizes the KinectManager-component to manage the sensor and data, the StaticPoseDetector-component to estimate the pose-matching factor, the PoseModelHelper-component to provide the model's bone orientations, and the AvatarController-component to control the user's avatar. |
| PoseDetectionDemo / PoseDetectionDemo2 | In comparison to the previous demo, this scene estimates the pose-matching factor between the user and an animated pose model. The moving-pose detection may be divided into steps, to provide better feedback to the user. It utilizes the KinectManager-component to manage the sensor and data, the DynamicPoseDetector-component to estimate the pose-matching factor between the user and the animated model, the MovingPoseManager-component to estimate the user's performance in the given sequence of animated steps, the PoseModelHelper & PoseModelHelperClassic-components to estimate the user's and the model's bone orientations, and the AvatarController-component to control the user's avatar. |
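A pose-matching factor of this kind is essentially an aggregate of per-bone orientation differences between the pose model and the user. The core of such an estimate can be sketched with plain Unity math (a simplified illustration of the technique, not the StaticPoseDetector code):

```csharp
using UnityEngine;

public static class PoseMatch
{
    // Returns a matching factor in [0, 1]: 1 means identical bone orientations,
    // 0 means every compared bone differs by 180 degrees.
    public static float MatchFactor(Quaternion[] modelBones, Quaternion[] userBones)
    {
        if (modelBones.Length == 0 || modelBones.Length != userBones.Length)
            return 0f;

        float sum = 0f;
        for (int i = 0; i < modelBones.Length; i++)
        {
            // angular difference between the model's and the user's bone, in degrees
            float angle = Quaternion.Angle(modelBones[i], userBones[i]);
            sum += Mathf.Clamp01(1f - angle / 180f);  // map 0..180 degrees to 1..0
        }

        return sum / modelBones.Length;  // average over all compared bones
    }
}
```

The demo scenes get the two orientation arrays from the PoseModelHelper-components and then compare the averaged factor against a configurable match threshold.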
| Scene | Description |
| --- | --- |
| RecorderDemo / BodyDataRecorderDemo | This is a simple body-data recorder and player. The recorded body-data file may be played back later, on the same or another machine. It utilizes the KinectManager-component to manage the sensor and data, the BodyDataRecorderPlayer-component to record or play the body data to or from a file, the BodyDataPlayerController-component to process key presses and start or stop the recorder or player, and the UserAvatarMatcher-component to instantiate and remove avatars in the scene as needed. |
| RecorderDemo / PlayerDetectorDemo | This scene demonstrates how to play a body-data recording when no user has been detected for a while. It utilizes the KinectManager-component to manage the sensor and data, the BodyDataRecorderPlayer-component to play the recorded body data from a file, the PlayerDetectorController-component to check for users and start or stop the body-data player accordingly, and the UserAvatarMatcher-component to instantiate and remove avatars in the scene as needed. |