Rsvfx is an example that shows how to connect an Intel RealSense depth camera to a Unity Visual Effect Graph.


System requirements

  • Unity 2019.1 or later
  • Intel RealSense D400 series
  • Git (required for importing external packages)

This repository only contains Windows and Linux versions of the RealSense plugin binaries. For macOS, the plugin binary must be installed separately.

This project uses the Git support in Package Manager to import external packages. To use this functionality, Git must be installed on the system. See the forum thread for further details.

How it works


The PointCloudBaker component converts a point cloud stream sent from a RealSense device into two dynamically animated attribute maps: position map and color map. These maps can be used in the "Set Position/Color from Map" blocks in a VFX graph, in the same way as attribute maps imported from a point cache file.
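As a rough illustration of the pattern described above, the baked maps can be handed to a VFX graph every frame through `VisualEffect.SetTexture`. This is only a minimal sketch, not the actual Rsvfx implementation; the property names `"PositionMap"` and `"ColorMap"` are assumptions and must match the exposed texture properties defined in your own VFX graph.

```csharp
// Minimal sketch (not the actual PointCloudBaker code): bind dynamically
// updated attribute maps to a Visual Effect Graph each frame.
using UnityEngine;
using UnityEngine.VFX;

public class AttributeMapBinder : MonoBehaviour
{
    [SerializeField] VisualEffect _vfx = null;
    [SerializeField] RenderTexture _positionMap = null; // XYZ positions in RGB
    [SerializeField] RenderTexture _colorMap = null;    // per-point colors

    void Update()
    {
        // The "Set Position/Color from Map" blocks in the graph sample
        // these textures; the property names below are assumptions.
        _vfx.SetTexture("PositionMap", _positionMap);
        _vfx.SetTexture("ColorMap", _colorMap);
    }
}
```

In practice the maps are updated on the GPU from the RealSense depth and color streams, so the graph picks up new point data every frame without any per-point CPU work.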


Frequently asked questions

Is it possible to use it with Kinect v2?

It's technically possible but would require significant changes to the PointCloudBaker component. I'm personally not motivated to support it because Kinect v2 is a discontinued product and its successor (Azure Kinect) is coming. Instead, I'd recommend following Depthkit; the developer seems to be experimenting with VFX Graph support in Depthkit.

Which RealSense model fits best?

I personally recommend the D415 because it gives the best sample density among the current models. Please see this thread for further information.
