CoRL 2025
Authors: Sheng Wu · Fei Teng · Hao Shi · Qi Jiang · Kai Luo · Kaiwei Wang · Kailun Yang
QuaDreamer

Panoramic cameras capture comprehensive 360-degree environmental data, making them well suited for quadruped robots that must perceive and interact with complex surroundings. However, the scarcity of high-quality panoramic training data, caused by inherent kinematic constraints and complex sensor calibration challenges, fundamentally limits the development of robust perception systems tailored to these embodied platforms. To address this issue, we propose QuaDreamer, the first panoramic data generation engine specifically designed for quadruped robots. QuaDreamer mimics the motion paradigm of quadruped robots to generate highly controllable, realistic panoramic videos, providing a data source for downstream tasks. Specifically, to capture the distinctive vertical vibration of quadruped locomotion, we introduce Vertical Jitter Encoding (VJE), which extracts a controllable vertical signal through frequency-domain feature filtering and provides high-quality prompts. To enable high-quality panoramic video generation under jitter-signal control, we propose a Scene-Object Controller (SOC) that manages object motion and strengthens background jitter control through an attention mechanism. To address panoramic distortions in wide-FoV video generation, we propose the Panoramic Enhancer (PE), a dual-stream architecture that combines frequency-texture refinement for local detail enhancement with spatial-structure correction for global geometric consistency. We further demonstrate that the generated video sequences can serve as training data for a quadruped robot's panoramic visual perception model, improving multi-object tracking performance in 360-degree scenes.
Demo video: QuaDreamer.mp4
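The paper's exact VJE formulation is not reproduced here, but the idea of isolating a controllable vertical signal via frequency-domain filtering can be sketched as follows. The function name, the 1-5 Hz gait band, and the per-frame vertical offset trace are illustrative assumptions, not details from the paper:

```python
# Minimal sketch (not the paper's implementation) of extracting a
# vertical jitter signal by frequency-domain filtering, in the spirit of VJE.
import torch

def extract_vertical_jitter(y: torch.Tensor, fps: float,
                            band: tuple = (1.0, 5.0)) -> torch.Tensor:
    """Band-pass a per-frame vertical offset trace with an rFFT mask.

    y:    (T,) vertical camera offset per frame (e.g., from odometry); assumed input.
    fps:  frames per second of the source video.
    band: (low_hz, high_hz) gait-frequency band to keep; the 1-5 Hz
          default is an assumption, not a value from the paper.
    """
    T = y.shape[0]
    Y = torch.fft.rfft(y - y.mean())                # drop DC, go to the frequency domain
    freqs = torch.fft.rfftfreq(T, d=1.0 / fps)      # frequency of each rFFT bin, in Hz
    mask = (freqs >= band[0]) & (freqs <= band[1])  # keep only the gait band
    return torch.fft.irfft(Y * mask, n=T)           # back to a time-domain jitter signal
```

The band-passed trace could then be encoded as a conditioning prompt for the video generator, which is the role the abstract assigns to VJE's output.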
- Download the training dataset from Google Drive and place it in the project root as `./mydata`.
- Download the pretrained weights from Hugging Face and place them in the project root. Ensure the paths match `pretrained_model_name_or_path` and `pretrain_unet` in `scripts/stage1.sh` and `scripts/stage2.sh`.
- Install the environment (PyTorch 2.0.1 + CUDA 11.8):
```bash
cd ${ROOT}
pip install -r requirements.txt
pip install causal_conv1d==1.1.1 mamba-ssm==1.2.0
pip install https://download.openmmlab.com/mmcv/dist/cu117/torch2.0.0/mmcv-2.0.0-cp310-cp310-manylinux1_x86_64.whl
git clone https://github.com/open-mmlab/mmtracking.git -b dev-1.x
cd mmtracking
pip install -e .
```

- Training: run `scripts/stage1.sh` and `scripts/stage2.sh` from the project root for two-stage training.
- Evaluation: run `scripts/val.sh`.
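For orientation, after the steps above the project root should look roughly like this (a sketch assembled from the instructions; the exact pretrained-weight filenames depend on the Hugging Face release):

```
${ROOT}
├── mydata/                 # training dataset from Google Drive
├── <pretrained weights>    # from Hugging Face; referenced by
│                           # pretrained_model_name_or_path / pretrain_unet
├── mmtracking/             # cloned dev-1.x branch, installed with pip -e
├── scripts/
│   ├── stage1.sh
│   ├── stage2.sh
│   └── val.sh
└── requirements.txt
```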
- [✅] Release the arXiv preprint.
- [✅] Publish training and evaluation code.
- [✅] Publish training dataset.
- [✅] Add training and evaluation instructions.
Our model is based on Trackdiffusion-SVD, CameraCtrl, ObjCtrl-2.5D, and ZITS_inpainting. Thanks for their great work!
If our work is helpful to you, please consider citing us by using the following BibTeX entry:
```bibtex
@article{wu2025quadreamer,
  title={QuaDreamer: Controllable Panoramic Video Generation for Quadruped Robots},
  author={Wu, Sheng and Teng, Fei and Shi, Hao and Jiang, Qi and Luo, Kai and Wang, Kaiwei and Yang, Kailun},
  journal={arXiv preprint arXiv:2508.02512},
  year={2025}
}
```

