dp.kinect2 exposes the functionality of the Microsoft Kinect for Windows v2 sensor to the Max patching environment. Below you will find the system requirements, setup guide, and reference documentation.
- Verify Microsoft's hardware and operating system requirements. You will need the 64-bit version of Windows 8.1 or newer.
- Install Cycling '74 Max 6.x (32- or 64-bit) or newer.
- Install Kinect v2.x Runtime.
- Install Microsoft Visual C++ 2015 Redistributable Update 3 from primary or secondary links. Be sure to install the 32 or 64-bit version (or both) to match your Max platform version.
- Plug your Kinect for Windows v2 sensor into a USB 3.0 port. Please do not plug any other USB devices into the same USB 3.0 controller because the Microsoft Kinect v2 drivers need all the bandwidth.
- Download dp.kinect2 from http://hidale.com/shop/dp-kinect2/#download.
- Open the ZIP archive and move the whole `dp.kinect2` folder into your home directory -> `Max/Max7` -> `Packages` folder.
- Open a command prompt and run the following two commands. This step is a temporary workaround for issue 45. Either command may report that the directory already exists; this is OK.
  - `md "%localappdata%\Dale Phurrough"` and press Enter.
  - `md "%localappdata%\Dale Phurrough\dp.kinect2"` and press Enter.
- To use (optional) face tracking, download the face model ZIP file and follow the instructions found in the
- To use (optional) speech recognition:
  - Install the Microsoft Speech Platform v11 Runtime.
  - Install the Kinect for Windows 2.0 Language Packs for the languages you need. (English is already installed.)
- Next, you must register dp.kinect2.
- Open the dp.kinect2 help patch.
- If you want a trial period for dp.kinect2, you can register easily in two steps.
- When you purchase a license for dp.kinect2, you will receive several emails. Please check your inbox and spam folder. Please read these emails. They have important instructions and you must follow them to successfully download and register your license(s).
- After you follow those email instructions, you will have your registration name and a ZIP archive of all your registration key(s). Your registration keys (.dpreg files) are inside this ZIP file. You will need to decompress the ZIP file.
- Please use the same register tab in the help patch. Please type your registration name in the field beside the register button.
- Click this register button and use the dialog box that appears to select your registration key (.dpreg file).
- Color image, depth, and IR sensor output in many pixel formats
- User identification, location, and occlusion
- Skeleton joint tracking with orientations
- Body properties, hand tracking, lean, body restriction
- Point clouds, accelerometer, and gravity
- Sound location and strength; speech recognition
- Face tracking with pose, rotation, translation, bounding boxes, key 2D and 3D points, smiling, eye engagement, eye open/closed, mouth open/closed, skin/hair color, face 3D modeling with animation/shape units
- Data alignment, filtering, smoothing, rotation to gravity
- jit.anim.node can be connected to recalculate the Kinect data for a VR world, compensate for your Kinect's location, or to coordinate and combine multiple Kinect sensors
- Output data in native Max messages or OSC; compatible with the output of dp.kinect and jit.openni to aid in migration
- Two levels of error messages to aid in debugging
- Support for collections, packages, and executables
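The "rotation to gravity" and `@quat`-style alignment features above reduce to rotating every point by a quaternion. The sketch below is not dp.kinect2's actual code, only the standard quaternion rotation q·p·q⁻¹ it describes:

```python
import math

# Minimal sketch (not dp.kinect2's implementation) of the quaternion
# rotation behind alignment features such as rotation-to-gravity and
# the @quat attribute: rotate point p by unit quaternion q as q * p * q^-1.
def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = qmul(qmul(q, (0.0, *v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

# Example: a 90-degree rotation about the x axis (think of compensating
# for a sensor pointed straight down) sends (0, 0, 1) to (0, -1, 0).
half = math.radians(90) / 2
q_tilt = (math.cos(half), math.sin(half), 0.0, 0.0)
```

The same rotation applied to every point of a point cloud is how multiple sensors, or a tilted sensor, can be brought into one shared coordinate system.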
v1.1.20170110 provides more helpful error messages for problems during registration
v1.1.20160922 fix issue #28 (setting @facesuausmooth before starting Kinect causes crash)
v1.1.20160705 production release with new features and changes
v1.0.1 production release, no longer beta
Compatibility with dp.kinect
The output of the external is nearly compatible with dp.kinect and benefits from the new v2 sensor's higher resolution. Still, there are some known differences:
- The size of the output matrices cannot be changed on the Kinect v2. If you set matrix-size attributes on dp.kinect2, they are ignored. If you must have the original matrix sizes of dp.kinect, you can resize or crop the output matrices using jit.matrix or other matrix operators. Otherwise, the sizes are:
depthmap, IR, playermap, pointcloud = 512x424
colormap = 1920x1080
- When the IR (infrared) output is set to the long type with `@irtype=1`, values range from 0→65535. This differs from the 0→1023 range of dp.kinect. The wider range is due to the increased precision and fidelity of the Kinect v2 sensor. If you must have the original 0→1023 range, you can use jit.op, jit.expr, or other matrix operators to scale the values in the matrix output. Please note the IR data bug (described in the Known Issues section below) that Microsoft has not fixed.
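The 0→65535 to 0→1023 conversion is just an integer division by 64 per sample; inside Max you would apply the same operation to the whole matrix with jit.op. A sketch of the arithmetic:

```python
# Sketch of the rescaling arithmetic: map a Kinect v2 16-bit IR sample
# (0-65535) onto the legacy dp.kinect 0-1023 range. In Max, jit.op can
# apply the same integer division across the entire IR matrix.
def ir_v2_to_v1(sample: int) -> int:
    """Map one 0-65535 IR sample to the 0-1023 scale (divide by 64)."""
    return sample // 64  # 65535 // 64 == 1023
```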
- The face tracking pose `scale` value is always 1.0 until you enable `@face3dmodeloutput`. Once enabled, the `scale` value is updated.
- The face tracking pose `translation` coordinate values are the same as the head skeleton joint until you enable `@face3dmodeloutput`. Once enabled, the `translation` coordinate is refined to a more exact head pivot point to use with 3D face modeling.
- Face 2D points compatible with both dp.kinect and dp.kinect2 are limited to `@face2dpoints=2`. This provides the basic five 2D alignment points: left eye, right eye, nose, left mouth corner, and right mouth corner.
- Face animation units (AU) on the Kinect v2 are different than Kinect v1. There are now 17 AUs and they have different meanings.
- The `@soundinfo` attribute is only updated when dp.kinect2 is banged. This differs from dp.kinect, which constantly updated the attribute.
- The `@sync` attribute is not available in dp.kinect2. All frames are synced by Microsoft as described in the Known Issues section.
- Microsoft does not support control of the IR emitter, color camera processing, or audio processing. Therefore, the following parameters are not present on dp.kinect2:
iremitter, autoexposure, autowhitebalance, backlight, brightness, contrast, exposure, frameinterval, gain, gamma, hue, powerfreq, saturation, sharpness, whitebalance, autogain, echocancel.
- The Kinect v2 sensor works in seated, near, and far situations while also having a 0.5–8 meter range. Therefore, these parameters are not available on dp.kinect2:
@seated, @nearmode, @depthrange.
- The Kinect v2 does not have a built-in tilt motor. Therefore, the `@elevation` attribute only outputs the basic elevation relative to the horizon.
- The following esoteric parameters are not available on dp.kinect2:
@gravsmooth, @facerate, @sensorrate
Known Issues, Limitations, Report Problems
- Microsoft has isolated a performance problem that might occur with the Kinect v2 on new Intel CPUs. Microsoft believes it is isolated to Kinect software that uses eventing to get Kinect data. dp.kinect2 does not use eventing to process core Kinect data. Therefore, I do not think dp.kinect2 is affected. I asked Microsoft for more information but have not received any to date. Microsoft does provide a workaround in this forum post.
- With stress testing using a repeating Kinect Studio clip, a bug in the Kinect 2.0 runtime from Microsoft was found when HD face tracking features are used. A private thread in their runtime accesses memory which is unavailable, causing a crash in the runtime. The issue was escalated to Microsoft and they reproduced it. They have no fix at this time. It is possible that the crash will not be seen in real-world usage. I recommend solid testing of any solutions that use HD face tracking to ensure stability.
- All frames are synced by the Microsoft drivers. Out-of-sync or delayed frames are likely due to Microsoft limitations. Microsoft made a design decision to keep frames in sync even if it must drop the framerate to 15fps so the color camera has enough light, thereby penalizing the depth/body/IR/other frames. There is no workaround; therefore, the `@sync` parameter is not available on dp.kinect2.
- Microsoft has intentionally degraded the IR data. Intense IR regions do not increase linearly in v2.0 builds 1407 and higher, including the final release. Instead, the sensor oversaturates, uses half the intensity bit range, and the data is poor. A bug has been logged with Microsoft in the private and public forums. They have acknowledged this is their issue. The problem does not exist in 1406 or earlier releases of the Kinect v2 SDK/runtime. There is no workaround.
- Microsoft's driver for the Kinect v2 does not dependably report the existence or absence of a Kinect. This is a known Microsoft issue reported in the private and public forums. Therefore, you can "open" a Kinect even when one isn't plugged in, or the driver might report there is no Kinect even though one is plugged in. As a result, sometimes the "open" will fail, or you will get a successful open yet repeatedly receive the last known data (which could be zeros). If you set `@unique=1`, then no data will be output.
- If you experience bugs or failures with dp.kinect2, please research past issues at https://github.com/diablodale/dp.kinect2/issues. If you still do not find a solution to your issue, please open a new issue.
Strongly recommended sources of additional learning and documentation include
- Max tutorials. This core knowledge on sending messages and manipulating Jitter matrices is essential. You can find the tutorials under the Help menu in Max.
- Microsoft Kinect v2 programming guide. This guide is still in development by Microsoft and is regularly updated. This and the Kinect v1 programming guide teach you about the coordinate system for the Kinect, depth maps and their range, skeletal tracking, facial recognition, and much more.
- The help file included with dp.kinect2. It includes many simple examples using dp.kinect2 and links to online tutorials and documentation.
bang Causes the Kinect to be queried for any updated data with optional waiting/blocking. After this, matrices and message-based data are output from the outlets as configured using the attributes.
open Initializes and opens a connection to a specific Kinect. If only one Kinect is attached to the computer and no `@idkinect` has been set, then dp.kinect2 defaults to the only attached Kinect. Success/failure is reported at dumpout.
read Is supported primarily for compatibility with jit.openni. No XML file is read. Instead, the message is accepted and a result generated to allow for easier migration. Otherwise, its functionality is the same as
close Closes the connection to the currently open Kinect
getusbidlist Returns a list of symbols representing unique IDs for all Kinects attached to the computer on dumpout. Use one of these symbols to set the `@idkinect` attribute of a given dp.kinect2. At this time, Microsoft only supports one Kinect v2 per computer and the only ID value that will be output is `defaultkinectid`. The values returned by this message may change in the future when Microsoft supports more Kinect v2 sensors attached to the same computer.
getusbidlist <-- sent to the inlet
usbidlist defaultkinectid <-- received from the dumpout outlet
pixeltoskel Takes a series of three numbers representing a depth pixel (x column, y row, z depth value) from a depthmap and transforms it into real world (skeleton space) x, y, z coordinates. The behavior of this message is affected by the values of `@distmeter`, `@flipx`, `@align`, `@position`, `@quat`, `@rotate`, `@rotatexyz`, and `@scale`. An example:
pixeltoskel 110 85 2.4 <-- sent to the inlet
pixeltoskel -0.42012 0.294084 2.4 <-- received from the outlet
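Under the hood this is the standard pinhole-camera back-projection. The sketch below is not dp.kinect2's code; the focal length and principal point (fx = fy = 285.63, cx = 160, cy = 120) are illustrative constants chosen to reproduce the example numbers above, while the real external uses per-sensor calibration and honors the attributes listed:

```python
# Sketch of the pinhole-camera math behind pixeltoskel: convert a depth
# pixel (column, row, depth in meters) to world-space x, y, z.
# The default intrinsics here are illustrative only (they reproduce the
# example above); dp.kinect2 itself uses sensor calibration data and
# attributes such as @flipx and @align.
def pixel_to_skel(px, py, depth_m,
                  fx=285.63, fy=285.63, cx=160.0, cy=120.0):
    wx = (px - cx) * depth_m / fx   # right of the optical center is +x
    wy = -(py - cy) * depth_m / fy  # image rows grow down, world y grows up
    return wx, wy, depth_m

x, y, z = pixel_to_skel(110, 85, 2.4)
```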
This object also supports the normal complement of Max/Jitter messages to set/query attributes, get a summary via dumpout, etc. Its dumpout output follows standard conventions through use of standard Max/Jitter APIs.
Special Dumpout Messages
With the Kinect v1 and dp.kinect, some messages (e.g., `lostkinect`) were automatically generated and output simultaneously from the dumpout outlets of all dp.kinects created in Max. In dp.kinect2, those messages are not implemented due to unreliability in the Microsoft APIs for detecting plugged/unplugged sensors.
Attributes are now documented on their own page.
Depth, Color, IR, Playermap, Point Cloud
Depthmap, color image, infrared image, playermap, and point cloud data (all matrix-based) are now documented on their own page.
Skeleton Position, Joints, Face tracking, Sound Info, Speech Recognition, and Other Data
User position, skeleton joints, face tracking, sound information, speech recognition, floor identification, and all other message-based data of this nature are now documented on their own page.
Standalone Application and Collectives
Standalone applications can be created with dp.kinect2, and all standard features will work. If you use the optional face features, you will need to copy the `Kinect20.Face.xxx.dll` and the `NuiDatabase` folder into the `support` subfolder of your application.
Collectives can also be created and all standard features will work. Max 6 and Max 7 have a limitation: the optional files needed for face tracking cannot be directly included within the collective file. I have created a workaround in dp.kinect2 v1.1. In addition, you must copy the `Kinect20.Face.xxx.dll` and the `NuiDatabase` folder into the same directory as the collective.
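The copy step above can be scripted. The helper below is hypothetical (the source and target paths are assumptions for illustration); it matches the `Kinect20.Face.xxx.dll` naming from the text with a glob rather than hard-coding one filename:

```python
import glob
import os
import shutil

# Hypothetical helper: copy the face-tracking runtime files
# (Kinect20.Face.xxx.dll and the NuiDatabase folder) into a standalone's
# "support" subfolder or next to a collective. Paths are illustrative.
def copy_face_assets(kinect_redist_dir, target_dir):
    os.makedirs(target_dir, exist_ok=True)
    copied = []
    # The exact dll name varies, so match Kinect20.Face*.dll with a glob.
    for dll in glob.glob(os.path.join(kinect_redist_dir, "Kinect20.Face*.dll")):
        copied.append(shutil.copy2(dll, target_dir))
    src_db = os.path.join(kinect_redist_dir, "NuiDatabase")
    if os.path.isdir(src_db):
        dst_db = os.path.join(target_dir, "NuiDatabase")
        # Copy the whole NuiDatabase folder alongside the dll.
        shutil.copytree(src_db, dst_db, dirs_exist_ok=True)
        copied.append(dst_db)
    return copied
```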
Please remember that your license for dp.kinect2 is only for one computer. You or your customers will need to purchase additional licenses for each computer on which your standalone or collective runs.