
OmniObject3D

Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation

Tong Wu  Jiarui Zhang  Xiao Fu  Yuxin Wang  Jiawei Ren  Liang Pan
Wayne Wu  Lei Yang  Jiaqi Wang  Chen Qian  Dahua Lin✉  Ziwei Liu✉

Accepted to CVPR 2023 as Award Candidate 🥳

Project | Paper | Data


Updates

  • [09/2023] Language annotations by human experts are released here.
  • [08/2023] Our ICCV 2023 challenge is now live! For more details, please check it out here.
  • [06/2023] Training set of OmniObject3D released!

Usage

Download the dataset

  • Sign up here.
  • Install OpenDataLab's CLI tools through pip install openxlab.
  • View and download the dataset from the command line:
openxlab login                                                   # Log in with your AK/SK
openxlab dataset info --dataset-repo OpenXDLab/OmniObject3D-New  # View dataset info
openxlab dataset ls --dataset-repo OpenXDLab/OmniObject3D-New    # List the dataset files
openxlab dataset get --dataset-repo OpenXDLab/OmniObject3D-New   # Download the whole dataset (the compressed files require ~1.2 TB of storage)

You can check out the full folder structure on the website above and download a specific portion of the data by specifying its path. For example:

openxlab dataset download --dataset-repo OpenXDLab/OmniObject3D-New \
                          --source-path /raw/point_clouds/ply_files \
                          --target-path <your-target-path> 

For more information, please refer to the documentation.

We are also maintaining the dataset on Google Drive.

Batch untar

To batch-untar a specific folder of compressed files, run bash batch_untar.sh <folder_name>. Once the extraction completes successfully, you can remove all the archives with rm -rf <folder_name>/*.tar.gz. A Python alternative is sketched below.
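If you prefer to drive the extraction from Python rather than the shell script, a minimal equivalent sketch (the folder path at the bottom is only an illustrative example) could look like this:

import glob
import os
import tarfile

def batch_untar(folder):
    # Extract every .tar.gz archive found directly inside `folder`.
    for archive in sorted(glob.glob(os.path.join(folder, "*.tar.gz"))):
        print(f"Extracting {archive} ...")
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(path=folder)

batch_untar("point_clouds/ply_files/1024")  # illustrative path; adapt to your layout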

Dataset format

OmniObject3D_Data_Root
    ├── raw_scans               
    │   ├── <category_name>
    │   │   ├── <object_id>
    │   │   │   ├── Scan
    │   │   │   │   ├── Scan.obj
    │   │   │   │   ├── Scan.mtl
    │   │   │   │   ├── Scan.jpg
    
    ├── blender_renders         
    │   ├── <category_name>
    │   │   ├── <object_id>
    │   │   │   ├── render
    │   │   │   │   ├── images
    │   │   │   │   ├── depths
    │   │   │   │   ├── normals
    │   │   │   │   ├── transforms.json    
    
    ├── videos_processed       
    │   ├── <category_name>
    │   │   ├── <object_id>
    │   │   │   ├── standard
    │   │   │   │   ├── images
    │   │   │   │   ├── matting
    │   │   │   │   ├── poses_bounds.npy           # raw results from COLMAP
    │   │   │   │   ├── poses_bounds_rescaled.npy  # rescaled to world scale
    │   │   │   │   ├── sparse

    ├── point_clouds    
    │   ├── hdf5_files
    │   │   ├── 1024
    │   │   ├── 4096
    │   │   ├── 16384
    │   ├── ply_files
    │   │   ├── 1024
    │   │   ├── 4096
    │   │   ├── 16384
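
The snippets below sketch how the per-object files might be read from Python. The layout of poses_bounds.npy is assumed to follow the common LLFF convention (one row of 17 values per image), and transforms.json is assumed to follow the usual NeRF/Blender convention; both are inferences from the pipeline names above, so verify them against an actual file. The HDF5 snippet makes no assumption about key names and simply lists what each file stores.

import json

import h5py
import numpy as np

# poses_bounds.npy: assuming the LLFF layout, each row holds a flattened
# 3x5 pose matrix ([R|t] plus [H, W, focal]) followed by near/far depth bounds.
poses_bounds = np.load("poses_bounds.npy")              # assumed shape: (N, 17)
poses = poses_bounds[:, :15].reshape(-1, 3, 5)          # per-image 3x5 pose matrices
bounds = poses_bounds[:, 15:]                           # per-image [near, far]

# transforms.json: assuming the NeRF-style Blender convention of a global
# camera_angle_x plus one 4x4 camera-to-world matrix per frame.
with open("transforms.json") as f:
    meta = json.load(f)
fov_x = meta["camera_angle_x"]                          # horizontal FoV in radians
c2w = np.array(meta["frames"][0]["transform_matrix"])   # 4x4 camera-to-world matrix

# HDF5 point clouds: the internal keys are not documented here, so list the
# stored datasets before reading anything.
with h5py.File("point_cloud.h5", "r") as f:             # hypothetical file name
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))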

Benchmarks

Please find example usage of the data for the benchmarks here.

TODO

  • Language annotations.

License

The OmniObject3D dataset is released under the CC BY 4.0 license.

Reference

If you find our dataset useful in your research, please use the following citation:

@inproceedings{wu2023omniobject3d,
    author = {Tong Wu and Jiarui Zhang and Xiao Fu and Yuxin Wang and Jiawei Ren and
    Liang Pan and Wayne Wu and Lei Yang and Jiaqi Wang and Chen Qian and Dahua Lin and Ziwei Liu},
    title = {OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception,
    Reconstruction and Generation},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2023}
}
