
Monocular, One-stage, Regression of Multiple 3D People

ROMP is a one-stage method for monocular multi-person 3D mesh recovery in real time. BEV further explores multi-person depth relationships and supports all age groups.
ROMP: [Paper] [Video] [Project Page] · BEV: [Paper] [Video] [RH Dataset]

We provide a cross-platform API (installable via pip) to run ROMP & BEV on Linux / Windows / Mac.


News

2022/06/21: Training & evaluation code of BEV is released. Please update the model_data.
2022/05/16: simple-romp v1.0 is released, supporting tracking, calling from Python, exporting .bvh files, etc.
2022/04/14: Inference code of BEV has been released in simple-romp v0.1.0.
2022/04/10: Added ONNX support, with faster inference speed on CPU/GPU.

Getting started

Please use simple-romp for inference; the rest of the code is for training only.

Installation

pip install --upgrade setuptools numpy cython
pip install --upgrade simple-romp
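
After installing, a quick import check confirms the package is available (a minimal sketch, assuming simple-romp's import name is `romp`):

```python
# Minimal post-install check: import the package and print where it lives.
# The import name `romp` is an assumption based on the simple-romp package;
# if the import fails, revisit install.md.
import romp
print(romp.__file__)
```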

For more details, please refer to install.md.

How to use it

Please refer to this guidance for inference & export (fbx/glb/bvh).
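
Beyond the command line, simple-romp can also be called from Python. The snippet below is a minimal sketch, assuming the `romp.ROMP` entry point and the `romp.main.default_settings` options object; check the guidance above for what your installed version actually supports:

```python
# A sketch of in-Python inference with simple-romp; `romp.main.default_settings`
# and `romp.ROMP` are assumptions based on the simple-romp package, so check
# the inference guidance for your installed version.
import cv2
import romp

settings = romp.main.default_settings    # argparse-style Namespace of options
model = romp.ROMP(settings)

image = cv2.imread('path/to/image.jpg')  # OpenCV loads images in BGR order
outputs = model(image)                   # e.g. SMPL parameters, 3D joints, camera
print(outputs.keys())
```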

Train

For training, please refer to installation.md for the full installation. Prepare the training datasets following dataset.md, then refer to train.md for training details.

Evaluation

Please refer to romp_evaluation.md and bev_evaluation.md for evaluation on benchmarks.

Extensions

[Blender addon]: Yan Chuanhang created a Blender add-on that drives a 3D character in Blender using ROMP, from image, video, or webcam input.

[VMC protocol]: Vivien Richter implemented VMC (Virtual Motion Capture) protocol support, enabling ROMP to drive various motion-capture solutions.

Docker usage

Please refer to docker.md.

Bug reports

You are welcome to submit issues for any bugs you encounter.

Contributors

This repository is currently maintained by Yu Sun.

We thank Peng Cheng for his constructive comments on Center map training.

ROMP has also benefited from the work of many other developers and contributors.

Citation

@InProceedings{BEV,
author = {Sun, Yu and Liu, Wu and Bao, Qian and Fu, Yili and Mei, Tao and Black, Michael J},
title = {Putting People in their Place: Monocular Regression of 3D People in Depth},
booktitle = {CVPR},
year = {2022}}
@InProceedings{ROMP,
author = {Sun, Yu and Bao, Qian and Liu, Wu and Fu, Yili and Black, Michael J. and Mei, Tao},
title = {Monocular, One-stage, Regression of Multiple 3D People},
booktitle = {ICCV},
year = {2021}}

Acknowledgement

We thank all contributors for their help!
This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0103800.
Disclosure: MJB has received research funds from Adobe, Intel, Nvidia, Facebook, and Amazon and has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While he was part-time at Amazon during this project, his research was performed solely at Max Planck.
