
RenderMe-360 Dataset

arXiv Project Demo

This is the PyTorch benchmark implementation of the paper "RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars".

teaser.mp4

Abstract: Synthesizing high-fidelity head avatars is a central problem for many applications in AR, VR, and the Metaverse. While head avatar synthesis algorithms have advanced rapidly, the best ones still face great obstacles in real-world scenarios. One of the vital causes is inadequate datasets -- 1) current public datasets can only support researchers to explore high-fidelity head avatars in one or two task directions, such as viewpoint, head pose, hairstyle, or facial expression; 2) these datasets usually contain digital head assets with limited data volume, and narrow distribution over different attributes, such as expressions, ages, and accessories. In this paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive advances in head avatar algorithms across different scenarios. RenderMe-360 contains massive data assets, with 250+ million complete head frames and over 800k video sequences from 500 different identities captured by synchronized HD multi-view cameras at 30 fps. It is a large-scale digital library for head avatars with three key attributes: 1) High Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K cameras to collect their portrait data in 360 degrees. 2) High Diversity: the collected subjects vary in age, era, ethnicity, and culture, providing abundant materials with distinctive styles in appearance and geometry. Moreover, each subject is asked to perform various dynamic motions, such as expressions and head rotations, which further extend the richness of assets. 3) Rich Annotations: the dataset provides annotations with different granularities: camera parameters, background matting, scans, 2D as well as 3D facial landmarks, FLAME fitting labeled by semi-automatic annotation, and text descriptions. Based on the dataset, we build a comprehensive benchmark for head avatar research, with 16 state-of-the-art methods evaluated on five main tasks: novel view synthesis, novel expression synthesis, hair rendering, hair editing, and talking head generation. Our experiments uncover the strengths and weaknesses of state-of-the-art methods, showing that extra efforts are needed for them to perform in such diverse scenarios. RenderMe-360 opens the door for future exploration in modern head avatars. All of the data, code, and models will be publicly available at https://renderme-360.github.io/.

Updates

  • 2024.06.18: 🔥🔥🔥 Raw data of 500 subjects has been released! Download Link 🔥🔥🔥
  • 2024.05.01: Please refer to RenderMe-360 Benchmark for our released benchmark code, training data, and models!
  • 2023.09.22: 🎉 Our paper has been accepted by the NeurIPS 2023 D&B Track.
  • 2023.09.21: 🔥🔥🔥 Data of 21 subjects have been released! Download Link 🔥🔥🔥
  • 2023.05.24: Data and code will be released around September. Please stay tuned!
  • 2023.05.24: 🔥🔥🔥 The technical report is released! 🔥🔥🔥
  • 2023.05.10: The demo video is uploaded.
  • 2023.05.08: The project page is created.

Contents

  1. Features
  2. Data Download
  3. Benchmark & Model Zoo
  4. Usage
  5. Related Works
  6. Citation
  7. Acknowledgement

Features

  • High Fidelity: We build a multi-view video capture cylinder called POLICY to record synchronized multi-view videos, with 60 cameras at 2448 × 2048 resolution and 30 FPS.
  • High Diversity: RenderMe-360 is a large-scale dataset with 500 identities and 243M frames in total, far exceeding other datasets, and covers a wide diversity of eras, ethnicities, accessories, and makeup. Each subject performs about 20-30 capture parts, including expressions, hair, and speech.
  • Rich Annotations: Rich, multimodal annotations that go far beyond other datasets: 2D & 3D facial landmarks, front/back matting, FLAME parameters, scan meshes, UV maps, action units, appearance annotations, and text descriptions (a small usage sketch follows this list).
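As a minimal sketch of how the per-view camera annotations might be consumed, the snippet below projects 3D facial landmarks into one of the 60 calibrated views. The variable names (`K`, `w2c`), the 68-landmark count, and the world-to-camera convention are illustrative assumptions, not the released annotation schema; please check the download documentation for the actual format.

```python
# Hypothetical example: project 3D landmarks into a calibrated view.
# K   : 3x3 camera intrinsic matrix (assumed)
# w2c : 4x4 world-to-camera extrinsic matrix (assumed)
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray, w2c: np.ndarray) -> np.ndarray:
    """Project Nx3 world-space points (e.g. 3D facial landmarks) to Nx2 pixel coordinates."""
    points_h = np.concatenate([points_3d, np.ones((len(points_3d), 1))], axis=1)  # Nx4 homogeneous
    cam = (w2c @ points_h.T).T[:, :3]   # world space -> camera space
    pix = (K @ cam.T).T                 # camera space -> image plane
    return pix[:, :2] / pix[:, 2:3]     # perspective divide

if __name__ == "__main__":
    # Dummy values standing in for one 2448x2048 view; replace with the released calibration.
    K = np.array([[2000.0, 0.0, 1224.0],
                  [0.0, 2000.0, 1024.0],
                  [0.0, 0.0, 1.0]])
    w2c = np.eye(4)
    landmarks_3d = np.random.rand(68, 3) + np.array([0.0, 0.0, 2.0])  # points in front of camera
    print(project_points(landmarks_3d, K, w2c).shape)  # (68, 2)
```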

Data Download

✨ We have released raw data and annotations of 21 subjects! Please refer to RenderMe-360 Download. 🎉🎉

Benchmark & Model Zoo

For each benchmark, we provide the pretrained models, the code for training and evaluation reimplementation, and the training data. Refer to RenderMe-360-Benchmark.
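As a rough illustration of how novel-view-synthesis results are commonly scored, the snippet below computes PSNR and SSIM between a rendered image and its ground truth. The exact metric configuration used in the released benchmark may differ; the SSIM call assumes scikit-image >= 0.19 (for the `channel_axis` argument), which is not a stated dependency of the benchmark code.

```python
# Sketch of standard image-quality metrics for novel view synthesis (assumed setup).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two HxWx3 images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    # Random stand-ins for a ground-truth view and a slightly noisy rendering.
    gt = np.random.rand(256, 256, 3).astype(np.float32)
    pred = np.clip(gt + 0.01 * np.random.randn(*gt.shape).astype(np.float32), 0.0, 1.0)
    print(f"PSNR: {psnr(pred, gt):.2f} dB")
    print(f"SSIM: {ssim(pred, gt, channel_axis=-1, data_range=1.0):.4f}")
```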

Usage

The code will be released around September!

TODO List

  • Release Code and pretrained model
  • Release Dataset
  • Technical Report
  • Project page

Related Works

Citation

@article{pan2024renderme,
      title={RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars},
      author={Pan, Dongwei and Zhuo, Long and Piao, Jingtan and Luo, Huiwen and Cheng, Wei and Wang, Yuxin and Fan, Siming and Liu, Shengqi and Yang, Lei and Dai, Bo and Liu, Ziwei and Loy, Chen Change and Qian, Chen and Wu, Wayne and Lin, Dahua and Lin, Kwan-Yee},
      journal={Advances in Neural Information Processing Systems},
      volume={36},
      year={2024}
}
