
CHOC renderer

[DOI] [arXiv]

Official software to automatically render composite images of handheld containers (synthetic objects, hands and forearms) over real backgrounds using Blender and Python. The software was used to generate the mixed-reality set of the CORSMAL Hand-Occluded Containers (CHOC) dataset that consists of RGB images, segmentation masks (object, hand+forearm), depth maps, 6D object poses, and Normalised Object Coordinate Space (NOCS) maps.

[dataset] [webpage]

Table of Contents

  1. Installation
    1. Requirements
    2. Instructions
  2. Downloading data
  3. Running code
  4. Tooling
    1. Labelling the surface in the scene
    2. Generating grasps
    3. Creating NOCS textures
  5. Notes
  6. Enquiries, Questions and Comments
  7. Licence

Installation

The following instructions are meant for a Linux-based machine.

Requirements

This code has been tested on an Ubuntu 18.04 machine with the following dependencies:

  • Blender 3.3.0
  • Python 3.10
  • Conda 4.13.0
  • Pillow 9.3.0
  • OpenCV 4.6.0
  • SciPy 1.9.3

Setting up Blender

  1. Download Blender 3.3.0
  2. Run:
tar xf blender-3.3.0-linux-x64.tar.xz
  3. Open Blender:
cd blender-3.3.0-linux-x64
./blender

Alternative (latest version via snap): sudo snap install blender --classic (note: if you do this, this repository might not function properly, since it was tested with Blender 3.3.0)

Setting up a Conda environment

  1. Create and activate a conda environment:
conda create --name choc-render-env python=3.10
conda activate choc-render-env
  2. Install dependencies:
pip install Pillow opencv-python scipy

Linking Blender with Python dependencies

Add the path of the conda environment's packages at the top of the render_all.py script (line 22):

+ sys.path.append('/home/user/anaconda3/envs/choc-render-env/lib/python3.10/site-packages')

To find the path, run

conda info

and look for the "active env location" entry in the output:

active env location : /home/user/anaconda3/envs/choc-render-env

The full path to the Python libraries is: "/home/user/anaconda3/envs/choc-render-env/lib/python3.10/site-packages"
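
Alternatively, you can print the exact site-packages path from inside the activated environment with a short Python snippet (this helper is not part of the repository):

import sysconfig

# Prints the site-packages directory of the active Python environment,
# i.e. the path to append in render_all.py.
print(sysconfig.get_paths()["purelib"])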

Downloading data

To render mixed-reality images, you need background images, object files, and optionally grasps and textures. Here we explain how to download and unzip the data used for CHOC. The resulting file structure will look as follows:

CHOC-renderer
  |--data
  |   |--backgrounds
  |   |--object_models
  |   |--bodywithhands
  |   |--assets
  |   |--grasps
  | ...
  1. Make a local folder called data in CHOC-renderer:
mkdir data
  2. From the CHOC dataset, download:
  • backgrounds.zip (8.3 MB)
  • object_models.zip (14.6 MB)
  • grasps (1.3 GB) [optional]
  3. Request access here to download textures for the hands and forearms (grasps): bodyandhands.zip (267.2 MB) [optional].

  4. Unzip all the zip files and their contents into data.
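
After unzipping, you can optionally verify the layout with a short Python check (this helper is not part of the repository; the grasps and bodywithhands folders are only present if you downloaded the optional files):

from pathlib import Path

# Expected layout under CHOC-renderer/data.
data = Path("data")
for sub in ["backgrounds", "object_models", "bodywithhands", "assets", "grasps"]:
    status = "OK" if (data / sub).is_dir() else "MISSING"
    print(f"{sub}: {status}")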

Running code

Here we will explain how to run the code. For more information about how Blender works through the Python API, see here.

The general command to run the code is:

blender --python render_all.py -- <datafolder> <outputfolder>

Arguments (we need to give them in order after --):

  1. path to the data folder
  2. path to the output folder (where we will save the renders)

We can make an outputs folder as follows:

mkdir outputs

Example run commands:

To run the code with the Blender Graphical User Interface (GUI) open:

blender-3.3.0-linux-x64/blender --python render_all.py -- ./data ./outputs

Or, to run the code without opening the Blender GUI, add the --background argument:

blender-3.3.0-linux-x64/blender --background --python render_all.py -- ./data ./outputs
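
Blender passes everything after -- to the Python script unchanged. A minimal sketch (not the repository's actual code) of how a script such as render_all.py can read the two positional arguments:

import sys

# Blender's own options come before "--"; the script's arguments come after it.
argv = sys.argv
argv = argv[argv.index("--") + 1:] if "--" in argv else []

data_folder, output_folder = argv[0], argv[1]
print("data folder:", data_folder)
print("output folder:", output_folder)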

Configuration

You can change the settings, such as the camera and randomization parameters, in the config.py file.

Tooling

Here we highlight some instructions on how to label the flat surface in the scene, manually create grasps, and create NOCS textures in Blender.

Labelling the surface in the scene

We first used labelme to manually segment the table. Then, we used a 3D plane segmentation algorithm via Open3D to compute the normal of the flat surface and to remove outlier points from the table.

Annotate using the GUI

labelme

Convert segmentation masks

labelme_json_to_dataset file.json -o a_folder_name

Plane segmentation

To segment the plane in 3D and compute its surface normal, see compute_table_normals.py.
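
For reference, a minimal sketch of RANSAC plane segmentation with Open3D, similar in spirit to compute_table_normals.py (the input file name table.ply and the threshold values here are illustrative):

import numpy as np
import open3d as o3d

# Load the (manually segmented) table points; "table.ply" is a placeholder name.
pcd = o3d.io.read_point_cloud("table.ply")

# Fit a plane ax + by + cz + d = 0 with RANSAC and keep the inlier (table) points.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
a, b, c, d = plane_model
normal = np.array([a, b, c])
normal /= np.linalg.norm(normal)
print("surface normal:", normal)

table = pcd.select_by_index(inliers)                   # inlier points on the plane
outliers = pcd.select_by_index(inliers, invert=True)   # points to discard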

Creating grasps

Installing and using GraspIt!
  1. Install ROS Melodic (or another version).

http://wiki.ros.org/melodic/Installation/Ubuntu

  2. Install GraspIt!

First follow: https://graspit-simulator.github.io/build/html/installation_linux.html

Then follow: https://github.com/graspit-simulator/graspit_interface

  3. Install ManoGrasp

Follow the steps ‘Install’ and ‘Model’ in https://github.com/ikalevatykh/mano_grasp

  4. Open GraspIt!

roslaunch graspit_interface graspit_interface.launch

  5. Load Object (container) & Table

File > Import Object > Look for the .OFF files! (change XML to OFF in the drop-down, just above the ‘Open’ button). After you load an object, zoom out, so you can actually see it.

  6. Load the ManoHand

File > Import Robot > ManoHand (there are three versions, not sure if there's a difference). I loaded ManoHand.xml

  7. Use the GraspIt! GUI to make the grasp.

Note: when all the objects are loaded, they may interpenetrate, which prevents any movement. You can turn collision OFF via Element tab > Collision; then, before grasping, turn collision back ON.

  8. Save the world as an .xml file.

Converting from GraspIt! to Blender

  1. Use GraspIt_to_MANOparams.py to extract the MANO parameters from the GraspIt! world files (.xml).
  2. Use MANOparams_to_Mesh.py to generate the hand+forearm meshes from the MANO parameters.

Creating NOCS textures

To create the NOCS textures in Blender, we used the Texture Space function. This allows you to create a bounding box around the object, and give each vertex of the object an RGB color based on its coordinate in that bounding box (exactly like the NOCS space). This vertex coloring can then be converted into the object's material/texture. For the code, see create_nocs.py.
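
For illustration, a minimal Blender Python sketch of the idea: each vertex of the active object is coloured by its normalised position inside the object's local bounding box. This is a simplified version of the approach; see create_nocs.py for the actual code used.

import bpy

obj = bpy.context.active_object
mesh = obj.data

# Local-space bounding box of the mesh.
coords = [v.co for v in mesh.vertices]
mins = [min(c[i] for c in coords) for i in range(3)]
sizes = [max(c[i] for c in coords) - mins[i] for i in range(3)]

# Create a vertex-colour layer and map each coordinate to an RGB value in [0, 1].
layer = mesh.vertex_colors.new(name="NOCS")
for loop in mesh.loops:
    co = mesh.vertices[loop.vertex_index].co
    rgb = [(co[i] - mins[i]) / sizes[i] if sizes[i] > 0 else 0.0 for i in range(3)]
    layer.data[loop.index].color = (*rgb, 1.0)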

Notes

Objects used in this dataset

Enquiries, Questions and Comments

If you have any further enquiries, questions, or comments, or you would like to file a bug report or a feature request, use the GitHub issue tracker or send an email to corsmal-challenge@qmul.ac.uk or eey138@qmul.ac.uk.

Licence

This work is licensed under the MIT License. To view a copy of this license, see LICENSE.
