
# ContactPose Documentation

## Table of Contents

- [Getting Started](#getting-started)
- [Demo Notebook](#demo-notebook)
- [Downloading Data](#downloading-data)
- [Image Preprocessing](#image-preprocessing)
- [3D Models and 3D Printing](#3d-models-and-3d-printing)
- [Miscellaneous](#miscellaneous)

## Getting Started

1. Install Miniconda. Create the `contactpose` conda environment: `conda env create -f environment.yml`. Activate it:

```
$ conda activate contactpose
```

All the following commands should be run after activating the conda env.

2. We will use the Python `requests` library to download data. If you use proxies, set them in `data/proxies.json`, or set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables as described in the `requests` docs.
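For example, the proxy environment variables can be set in the shell before running any of the scripts (the proxy URLs below are placeholders):

```
$ export HTTP_PROXY="http://10.10.1.10:3128"
$ export HTTPS_PROXY="http://10.10.1.10:1080"
```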

3. Start by downloading grasp data (pose annotations and camera calibrations) for the entire dataset, contact maps for all of participant #28's 'use' intent grasps, and RGB-D images for participant #28's 'bowl' object, 'use' intent grasp. By default the data is downloaded to `data/contactpose_data`, but you can also provide a directory of your choice, which will be symlinked to `data/contactpose_data` for easy access.

```
$ python startup.py --data_dir <dir_name>
ContactPose data directory is data/contactpose_data
Downloading 3D model marker locations...
100%|████████████| 103k/103k [00:05<00:00, 17.3kiB/s]
Extracting...
Downloading grasps...
100%|████████████| 126M/126M [03:45<00:00, 559kiB/s]
Extracting...
100%|████████████| 100/100 [01:55<00:00,  1.15s/it]
Downloading full28_use contact maps...
100%|████████████| 96.8M/96.8M [00:03<00:00, 29.1MiB/s]
Extracting...
Downloading full28_use images...
  0%|            | 0/1 [00:00<?, ?it/s]bowl
100%|████████████| 2.08G/2.08G [00:45<00:00, 45.6MiB/s]
100%|████████████| 1/1 [00:47<00:00, 47.23s/it]
Extracting...
1it [00:33, 33.48s/it]
```
4. Download the MANO code and models from https://mano.is.tue.mpg.de and unzip them into `thirdparty/mano`. Note: the MANO code is written for Python 2, but we use it in Python 3. We have developed work-arounds for all issues except one: you should comment out the `print 'FINITO'` statement (last line) in `thirdparty/mano/webuser/smpl_handpca_wrapper_HAND_only.py` (see the one-liner below). The MPI license does not allow re-distribution of their MANO code.
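One way to apply that one-line edit (a convenience suggestion, not part of the repo; GNU sed shown, on macOS use `sed -i ''`):

```
$ sed -i "s/^print 'FINITO'/# print 'FINITO'/" thirdparty/mano/webuser/smpl_handpca_wrapper_HAND_only.py
```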

5. You can visualize contact maps and 3D hand joints in 3D:

```
$ python scripts/show_contactmap.py --p_num 28 --intent use --object_name mouse --mode simple_hands
```

(Screenshots of the `simple_hands`, `semantic_hands_fingers`, `semantic_hands_phalanges`, and `simple_mano` modes.)
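The other renderings can be produced by changing the `--mode` flag, e.g.:

```
$ python scripts/show_contactmap.py --p_num 28 --intent use --object_name mouse --mode simple_mano
```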

## Demo Notebook

A Jupyter notebook demonstrates the ContactPose dataset API - accessing images, poses, and calibration data.

## Downloading Data

NOTE: We fixed some annotation errors for participants 31-35 on 12 Jan 2021. If you downloaded data before that date, please re-download it. The changes affect the `grasps`, `images`, `depth_images`, and `color_images` download types (`--type` flag).

### Main Script

All downloads can be done through `scripts/download_data.py`:

```
$ python scripts/download_data.py --help
usage: download_data.py [-h] --type {grasps,color_images,depth_images,images,contact_maps,markers,3Dmodels}
                        [--p_nums P_NUMS] [--intents INTENTS]
                        [--images_dload_dir IMAGES_DLOAD_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --type {grasps,color_images,depth_images,images,contact_maps,markers,3Dmodels}
  --p_nums P_NUMS       Participant numbers, e.g. 1, 1,2, or 1-5
  --intents INTENTS     use, handoff, or use,handoff
  --images_dload_dir IMAGES_DLOAD_DIR
                        Directory where images will be downloaded. They will
                        be symlinked to the appropriate location
```

### Download Contact Maps

You can download more contact maps:

```
$ python scripts/download_data.py --p_nums 1-10 --intents use,handoff --type contact_maps
```

### Download RGB-D Images

And more RGB-D images:

```
$ python scripts/download_data.py --p_nums 1-10 --intents use,handoff --type images \
--images_dload_dir <dir_name>
```

The entire RGB-D collection is ~2.5 TB, so the script lets you specify an image download directory (on a large drive, for example). Downloaded data is automatically symlinked to the appropriate location in `data/contactpose_data` for easy access. The image download directory defaults to `data/contactpose_data` and can be set through `--images_dload_dir`.

### Download RGB Images only

New: Many users need only RGB images, so we have compressed the RGB images into videos and provide separate download links; this reduces the download size ~4.5x, and the compression is lossless. To use this, change the `--type` flag in the command above (see the example after this list):

- `--type images`: Both RGB and depth images, no video compression
- `--type depth_images`: Same download as `--type images`, extract only depth images
- `--type color_images`: Compressed RGB video download, 4-5x faster
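For example, to fetch only the compressed RGB videos for the same participants and intents as above:

```
$ python scripts/download_data.py --p_nums 1-10 --intents use,handoff --type color_images
```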

Depth images are (for now) still needed to preprocess images for ML. We are working on refactoring that code to allow RGB cropping without depth (see the GitHub issue).

### Download 3D Models

Download 3D models of the objects and the locations of markers placed on them (already done if you ran `startup.py`):

```
$ python scripts/download_data.py --type 3Dmodels
$ python scripts/download_data.py --type markers
```

### Download Grasps

Download all grasp information - 3D joints, MANO fits, and camera calibrations (already done if you ran `startup.py`):

```
$ python scripts/download_data.py --type grasps
```

## Image Preprocessing

`scripts/preprocess_images.py` crops RGB and depth images and randomizes the background of RGB images. It also saves information about the projected visible object mesh vertices. This data is useful for training image-based contact models.

```
(contactpose) $ python scripts/preprocess_images.py --p_num 28 --intent use --object_name bowl --background_images_dir <DIR>
Inspecting background images directory...
Found 128 images
28:use:bowl:kinect2_left
  2%|▎           | 10/558 [00:03<03:01,  3.02it/s]
28:use:bowl:kinect2_middle
  2%|▎           | 10/558 [00:03<02:47,  3.27it/s]
28:use:bowl:kinect2_right
  2%|▎           | 10/558 [00:08<07:28,  1.22it/s]
```

We also provide a wrapper script, `scripts/download_and_preprocess_images.sh`, to download the images and crop them.

## 3D Models and 3D Printing

STL files | STL files with cylindrical recesses for markers | High-poly PLY files

The cylindrical recesses were produced using this script, and their locations were aligned to the OptiTrack (mocap) "rigid body" using this script. Please see this README for more details about 3D printing the objects.

## Miscellaneous

### 21 Joint Format

The ordering and placement of joints follows the OpenPose format. In 3D, the joints are at the centers of the finger cylinders, not on the surface. The functions `mano2openpose()` and `mano_joints_with_fingertips()` in `utilities/misc.py` convert the joints from the MANO model to this format. For example, see how they are used in `load_mano_meshes()`.
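For reference, the OpenPose 21-joint ordering can be sketched as follows (the joint names here are illustrative, not identifiers from the repo):

```python
# Index 0 is the wrist; each finger then contributes 4 joints, base to tip:
# 1-4 thumb, 5-8 index, 9-12 middle, 13-16 ring, 17-20 little.
FINGERS = ('thumb', 'index', 'middle', 'ring', 'little')
JOINT_NAMES = ['wrist'] + ['{}_{}'.format(f, i) for f in FINGERS for i in range(4)]
assert len(JOINT_NAMES) == 21  # e.g. JOINT_NAMES[1:5] are the thumb joints
```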

### Transform Tree

Transform matrices are consistently named in the code according to the following transform tree. For example, `oTh` is the pose of the hand w.r.t. the object.

```
world
|
|----wTc----camera
|
|----wTo----object
            |
            |----oTh----hand
                        |
                        |----hTm----MANO
```

Other matrices can be composed; for example, the pose of an object in the camera coordinate frame is `cTo = inv(wTc) * wTo`. This naming convention, explained in this blog post, makes it easier to keep track of 3D transforms.
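A minimal `numpy` sketch of this convention, using identity matrices as placeholder poses:

```python
import numpy as np

# aTb is a 4x4 homogeneous matrix mapping points from frame b to frame a.
wTc = np.eye(4)  # camera pose w.r.t. world (placeholder)
wTo = np.eye(4)  # object pose w.r.t. world (placeholder)
oTh = np.eye(4)  # hand pose w.r.t. object (placeholder)

cTo = np.linalg.inv(wTc) @ wTo  # object pose in the camera frame
cTh = cTo @ oTh                 # compose down the tree: hand in the camera frame

p_h = np.array([0.0, 0.0, 0.1, 1.0])  # homogeneous point in the hand frame
p_c = cTh @ p_h                       # the same point in the camera frame
```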

### Contactmap Format

- Following the original ContactDB paper, the contact map is encoded as a per-vertex color in the object mesh. The value is in [0, 1]; all 3 components (R, G, and B) of the vertex color are set to the same value.
- Before use, the contact map needs to be pre-processed by fitting a sigmoid to its values such that the minimum maps to 0.05 and the maximum to 0.95 (see the paper). The `texture_proc()` function in `utilities/misc.py` does this (an illustrative sketch follows this list).
- A small fraction of vertices have their contact value set to 0. These vertices were not scanned with the thermal camera (mostly because of occlusion), so their contact value should be treated as "unknown", not "no contact". See how `texture_proc()` handles this case.
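For illustration only, here is a minimal sketch of such a sigmoid mapping, assuming per-vertex values in [0, 1] with 0 meaning "unknown"; the actual implementation is `texture_proc()` in `utilities/misc.py`, which should be treated as authoritative:

```python
import numpy as np

def scale_contactmap(values):
    # values: per-vertex contact in [0, 1]; exactly 0 means "not scanned".
    values = np.asarray(values, dtype=float)
    known = values > 0
    vmin, vmax = values[known].min(), values[known].max()
    assert vmax > vmin, 'degenerate contact map'
    # Choose sigmoid parameters so that vmin -> 0.05 and vmax -> 0.95.
    # For s(v) = 1 / (1 + exp(-k * (v - v0))), solving the two constraints gives:
    v0 = 0.5 * (vmin + vmax)                # midpoint of the observed range
    k = 2.0 * np.log(19.0) / (vmax - vmin)  # since logit(0.95) = log(19)
    out = np.zeros_like(values)             # "unknown" vertices stay at 0
    out[known] = 1.0 / (1.0 + np.exp(-k * (values[known] - v0)))
    return out
```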