BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras
Paper | Video | Slides | Supplementary
C++/OpenGL implementation of our real-time renderer BlendPCR for dynamic point clouds derived from multiple RGB-D cameras. It combines efficiency with high-quality rendering while effectively preventing common z-fighting-like seam flickering. The software is equipped to load and stream the CWIPC-SXR dataset for test purposes and comes with a GUI.
Andre Mühlenbrock¹, Rene Weller¹, Gabriel Zachmann¹
¹Computer Graphics and Virtual Reality Research Lab (CGVR), University of Bremen
28th October 2025
- Added Mesh LOD (Resolution) parameter
- Removed Geometry Shader for performance improvements
- Added framebuffer rendering (the parameters `resultWidth` and `resultHeight` currently have to be specified in `main.cpp`)
Previous
- CUDA filter replaced by full OpenGL 3.3 implementation
- WiP of Unreal Engine 5 VR integration (see the `unreal_engine_5_streamer` branch)
- Performance optimization
- The bottleneck of uploading point clouds to the GPU was resolved by uploading the `uint16_t*` depth image and generating the point cloud on the GPU
If you only want to test the BlendPCR renderer, without editing the implementation, we also offer pre-built binaries:
- Download Windows (64-Bit), main branch, (fixed shader paths)
(Note: currently does not contain the newest performance optimization.)
Rendering at 3840 x 2160 while fusing 7 Microsoft Azure Kinects @ 30 Hz simultaneously on an NVIDIA GeForce RTX 4090:
- Single Person: approx. 232 fps
- Whole Scene: approx. 204 fps
Rendering at 1920 x 1080 with Mesh LOD 3 (almost same quality):
- Single Person: approx. 666 fps (1.5 ms overall)
- Whole Scene: approx. 588 fps (1.7 ms overall)
- CMake ≥ 3.11
- OpenGL ≥ 3.3
- C++ Compiler, e.g. MSVC v143
- Azure Kinect SDK 1.4.1: Required to load and stream the CWIPC-SXR dataset.
Note: As the C++ compiler, we have currently only tested MSVC, but other compilers that support the Azure Kinect SDK 1.4.1 are likely to work as well.
- CUDA Toolkit 12.1 (no longer used): We have reimplemented the CUDA filters as GLSL passes for BlendPCR, so CUDA is no longer required.
Additionally, this project uses small open-source libraries that we have directly integrated into our source code, so no installation is required. You can find them in the lib folder.
A big thank you to the developers of
Dear ImGui 1.88,
nlohmann/json,
GLFW 3.3,
stb_image.h,
tinyobjloader,
imfilebrowser, and
GLAD.
This project has been tested with Azure Kinect SDK version 1.4.1, although other SDK versions may also be compatible.
On Windows, you can install the Azure Kinect SDK 1.4.1 installer (.exe) from the official website using the default paths. Once installed, the program should build and run out of the box, since the CMakeLists.txt is configured to search the default paths.
If you use custom paths or are building on Linux, please set the following CMake variables:
- `K4A_INCLUDE_DIR`: the directory containing the `k4a` and `k4arecord` folders with the include files
- `K4A_LIB`: the file path of `k4a.lib`
- `K4A_RECORD_LIB`: the file path of `k4arecord.lib`
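For example, a Linux configure step with custom paths might look like the following sketch (the installation paths and shared-library names are assumptions; adjust them to your local Azure Kinect SDK installation):

```shell
# Hypothetical paths; replace with your local Azure Kinect SDK installation.
cmake -S . -B build \
  -DK4A_INCLUDE_DIR=/opt/azure-kinect-sdk/include \
  -DK4A_LIB=/opt/azure-kinect-sdk/lib/libk4a.so \
  -DK4A_RECORD_LIB=/opt/azure-kinect-sdk/lib/libk4arecord.so
cmake --build build
```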
Note: Using the k4a and k4arecord libraries included with vcpkg might currently lead to errors, as both libraries seem to be configured to create an spdlog instance with the same name.
- After installing Azure Kinect SDK 1.4.1, simply clone the BlendPCR repository and run CMake.
- If you don't use Windows or installed the Azure Kinect SDK to a custom path, configure the variables above.
- Build & Run.
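On Windows with the SDK installed to its default path, the three steps above can be sketched as follows (the repository URL is a placeholder; use the actual clone URL of this project):

```shell
# Repository URL is a placeholder / assumption.
git clone https://github.com/<user>/BlendPCR.git
cd BlendPCR
cmake -S . -B build
cmake --build build --config Release
# Then run the built executable from the build directory.
```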
To use the renderer out-of-the-box, RGB-D recordings from seven Azure Kinect sensors are required, and these recordings must conform to the format of the CWIPC-SXR dataset.
You can download the CWIPC-SXR dataset here: CWIPC-SXR Dataset.
It is recommended to download only the dataset_hierarchy.tgz, which provides metadata for all scenes, as the entire dataset is very large (1.6TB). To download a specific scene, such as the S3 Flight Attendant scene, navigate to the s3_flight_attendant/r1_t1/ directory and run the download_raw.sh file, which downloads the .mkv recordings from all seven cameras. After downloading, ensure that the .mkv recordings are located in the raw_files folder. The scene is now ready to be opened in this software project.
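As a sketch of the steps above (directory names follow the CWIPC-SXR layout described here; adjust as needed for other scenes):

```shell
# Extract the metadata hierarchy (contains the per-scene folders).
tar -xzf dataset_hierarchy.tgz
# Download one scene, e.g. S3 Flight Attendant, take r1_t1.
cd s3_flight_attendant/r1_t1/
sh download_raw.sh      # fetches the .mkv recordings of all seven cameras
ls raw_files/           # the .mkv files should now be located here
```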
When loading the CWIPC-SXR dataset, you have two options:
- CWIPC-SXR (Streamed): This mode streams the RGB-D camera recordings directly from the hard drive. Operations such as reading from the hard drive, color conversion (MJPEG to BGRA32), and point cloud generation are performed on the fly. Real-time streaming is usually not feasible when using seven cameras.
- CWIPC-SXR (Buffered): This mode initially reads the complete RGB-D recordings, performs color conversion, and generates point clouds. While this process can be time-consuming and requires significant RAM, it enables subsequent real-time streaming of the recordings. (Note: Due to memory requirements, no high resolution color textures are loaded)
After choosing your preferred mode, a file dialog will appear, prompting you to select the cameraconfig.json file for the scene you wish to load. Playback will commence a few seconds or minutes after the selection, depending on the chosen Source Mode.
You can switch between the following rendering techniques:
- Splats (Uniform): Using uniform splats with a fixed (configurable) size.
- Naive Mesh: Separate meshes reconstructed for each camera, without blending between them.
- BlendPCR: The BlendPCR implementation, which we described in our paper. Note that you can activate the 'Reimpl. Filters' option, which enables a GLSL reimplementation of the CUDA filters we used in the evaluation of our paper.
Note: For High Resolution Color Textures - named BlendPCR (HR) in the paper -, enable High Resolution Encoding both in the Source Mode and in the BlendPCR renderer.
1: Note that both Pointersect and P2ENet are rendered from slightly different perspectives and use slightly different preprocessing filters (in terms of erosion & hole filling). Both renderings are taken from the Supplemental Material of HU Y., GONG R., SUN Q., WANG Y.: Low latency point cloud rendering with learned splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (June 2024), pp. 5752–5761.
The performance in the default configuration for different numbers of cameras, split into point cloud passes and screen passes, on an NVIDIA GeForce RTX 4090 at a resolution of 3840 × 2160 (please note that these plots differ from the benchmarks in the paper, as they include the performance optimizations implemented in this repository).
Measurements are given for the default BlendPCR version. The BlendPCR (HR) version adds approximately 16 ms of runtime (for seven cameras) due to the upload of high-resolution textures to the GPU. For further details, see our paper.
This work was presented at ICAT-EGVE 2024. If you use this code, please cite:
@inproceedings{10.2312:egve.20241366,
booktitle = {ICAT-EGVE 2024 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
editor = {Hasegawa, Shoichi and Sakata, Nobuchika and Sundstedt, Veronica},
title = {{BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras}},
author = {Mühlenbrock, Andre and Weller, Rene and Zachmann, Gabriel},
year = {2024},
publisher = {The Eurographics Association},
ISSN = {1727-530X},
ISBN = {978-3-03868-245-5},
DOI = {10.2312/egve.20241366}
}




