Kubric


A data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation masks, depth maps, and optical flow.

Motivation and design

We need better data for training and evaluating machine learning systems, especially in the context of unsupervised multi-object video understanding. Current systems succeed on toy datasets but fail on real-world data. Progress could be greatly accelerated if we had the ability to create suitable datasets of varying complexity on demand. Kubric is mainly built on top of PyBullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.
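The pluggable-backend idea mentioned above can be sketched in plain Python. Note that the names below (RenderBackend, DummyBackend) are illustrative only and are not Kubric's actual API:

```python
# A minimal sketch of the "pluggable renderer backend" idea: simulation code
# talks to an abstract interface, so the rendering engine can be swapped out.
# Names here are hypothetical, not Kubric's actual classes.
from abc import ABC, abstractmethod


class RenderBackend(ABC):
    """Common interface that any rendering backend must implement."""

    @abstractmethod
    def add_object(self, obj: dict) -> None: ...

    @abstractmethod
    def render_frame(self, frame: int) -> dict: ...


class DummyBackend(RenderBackend):
    """Stand-in backend that records calls instead of rendering."""

    def __init__(self) -> None:
        self.objects = []

    def add_object(self, obj: dict) -> None:
        self.objects.append(obj)

    def render_frame(self, frame: int) -> dict:
        # A real backend (e.g. Blender) would return RGB, depth, flow, etc.
        return {"frame": frame, "num_objects": len(self.objects)}


backend: RenderBackend = DummyBackend()
backend.add_object({"shape": "cube", "position": (0, 0, 1)})
result = backend.render_frame(0)
print(result)  # {'frame': 0, 'num_objects': 1}
```

Because the pipeline only depends on the abstract interface, swapping Blender for another engine only requires implementing a new subclass.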

Getting started

For instructions, please refer to https://kubric.readthedocs.io

Assuming you have Docker installed, you can generate sample data by executing:

git clone https://github.com/google-research/kubric.git
cd kubric
docker pull kubricdockerhub/kubruntu
docker run --rm --interactive \
           --user $(id -u):$(id -g) \
           --volume "$(pwd):/kubric" \
           kubricdockerhub/kubruntu \
           /usr/bin/python3 examples/helloworld.py
ls output

Kubric employs Blender 2.93, so if you want to open the generated *.blend scene file for interactive inspection (i.e. without needing to render the scene), please make sure you have the matching Blender version installed.

Requirements

  • A pipeline for conveniently generating video data.
  • Physics simulation for automatically generating physical interactions between multiple objects.
  • Good control over the complexity of the generated data, so that we can evaluate individual aspects such as variability of objects and textures.
  • Realism: Ideally, the ability to span the entire complexity range from CLEVR all the way to real-world video such as YouTube-8M. This is clearly not feasible yet, but we would like to get as close as possible.
  • Access to rich ground-truth information about the objects in a scene for the purpose of evaluation (e.g., object segmentations and properties).
  • Control over the train/test split to evaluate compositionality and systematic generalization (for example, on held-out combinations of features or objects).
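The last point, systematic generalization via held-out combinations, can be sketched in plain Python. The attribute values below are illustrative and not taken from any Kubric dataset:

```python
# Sketch of a held-out-combination train/test split: certain (shape, color)
# pairs never appear during training, so test performance measures whether a
# model generalizes compositionally rather than memorizing combinations.
from itertools import product

shapes = ["cube", "sphere", "cylinder"]
colors = ["red", "green", "blue"]

# Combinations reserved exclusively for the test split (hypothetical choice).
held_out = {("cube", "blue"), ("sphere", "red")}

all_combos = set(product(shapes, colors))   # 3 x 3 = 9 combinations
train_combos = all_combos - held_out        # 7 combinations for training
test_combos = held_out                      # 2 held-out combinations for testing

print(len(train_combos), len(test_combos))  # 7 2
```

A data generator with this kind of control can emit scenes whose object attributes are drawn only from the split-appropriate set.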

Challenges and datasets

Generally, we store datasets for the challenges in this Google Cloud Bucket. More specifically, these challenges are dataset contributions of the Kubric CVPR'22 paper:

Pointers to additional datasets/workers:

Bibtex

@inproceedings{greff2021kubric,
    title = {Kubric: a scalable dataset generator},
    author = {Klaus Greff and Francois Belletti and Lucas Beyer and Carl Doersch and
              Yilun Du and Daniel Duckworth and David J Fleet and Dan Gnanapragasam and
              Florian Golemo and Charles Herrmann and Thomas Kipf and Abhijit Kundu and
              Dmitry Lagun and Issam Laradji and Hsueh-Ti (Derek) Liu and Henning Meyer and
              Yishu Miao and Derek Nowrouzezahrai and Cengiz Oztireli and Etienne Pot and
              Noha Radwan and Daniel Rebain and Sara Sabour and Mehdi S. M. Sajjadi and Matan Sela and
              Vincent Sitzmann and Austin Stone and Deqing Sun and Suhani Vora and Ziyu Wang and
              Tianhao Wu and Kwang Moo Yi and Fangcheng Zhong and Andrea Tagliasacchi},
    booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2022},
}

Disclaimer

This is not an official Google product.
