
# Data preparation

Besides the demo code, we also provide training and evaluation code for our approach. To use this functionality, you need to download the relevant datasets. The datasets that our code supports are:

  1. Human3.6M
  2. 3DPW
  3. MPI-INF-3DHP
  4. MPII
  5. COCO
  6. Mannequin Challenge
  7. CMU-MOCAP

More specifically:

  1. Human3.6M: Unfortunately, due to license limitations, we are not allowed to redistribute the MoShed data that we used for training. However, you can use the SMPLify 3D fitting code to generate SMPL parameter annotations by fitting the model to the 3D keypoints provided by the dataset. To download the relevant data, please visit the website of the dataset and download the Videos, BBoxes MAT (under Segments) and 3D Positions Mono (under Poses) for Subjects S9 and S11. After downloading and uncompressing the data, store them in the folder `${Human3.6M root}`. The structure of the data should look like this:
```
${Human3.6M root}
|-- S9
    |-- Videos
    |-- Segments
    |-- Bboxes
|-- S11
    |-- Videos
    |-- Segments
    |-- Bboxes
```
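
If you want to sanity-check the layout before moving on, a minimal sketch along these lines can help. It is not part of the repo; replace the root path with your own, and the same pattern applies to the other dataset layouts below:

```python
# Illustrative layout check; not part of the ProHMR codebase.
from pathlib import Path

root = Path('/path/to/Human3.6M')  # replace with your ${Human3.6M root}
for subject in ('S9', 'S11'):
    for folder in ('Videos', 'Segments', 'Bboxes'):
        assert (root / subject / folder).is_dir(), f'missing {subject}/{folder}'
print('Human3.6M layout looks as expected')
```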

You also need to edit the dataset config file `prohmr/configs/datasets.yaml` to reflect the path `${Human3.6M root}` where you stored the data.
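
To confirm the edit took effect, you can load the config and inspect it. This is only a quick sanity check (it assumes PyYAML is installed); since the exact keys inside `datasets.yaml` depend on the repo, we just print the whole file. The same check applies to all the `datasets.yaml` edits mentioned below:

```python
# Quick sanity check for the dataset config edit (assumes PyYAML).
import yaml

with open('prohmr/configs/datasets.yaml') as f:
    cfg = yaml.safe_load(f)
print(cfg)  # the Human3.6M entry should point at your ${Human3.6M root}
```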

  2. 3DPW: We use this dataset only for evaluation. You need to download the data from the dataset website. After you unzip the dataset files, please complete the root path of the dataset in the file `prohmr/configs/datasets.yaml`.

  3. MPI-INF-3DHP: We use this dataset for both training and evaluation. You need to download the data from the dataset website. The expected folder structure at the end of the processing looks like this:

```
${MPI-INF-3DHP root}
|-- mpi_inf_3dhp_test_set
    |-- TS1
|-- S1
    |-- Seq1
        |-- imageFrames
            |-- video_0
```

Then, you need to edit the `prohmr/configs/datasets.yaml` file with the `${MPI-INF-3DHP root}` path.

  4. MPII: We use this dataset for training. You need to download the compressed file of the MPII dataset. After uncompressing, please complete the root path of the dataset in the file `prohmr/configs/datasets.yaml`.

  5. COCO: We use this dataset for training. You need to download the images and the annotations for the 2014 training set of the dataset. After you unzip the files, the folder structure should look like:

```
${COCO root}
|-- train2014
|-- annotations
```

Then, you need to edit the dataset config file `prohmr/configs/datasets.yaml` with the `${COCO root}` path.

  6. Mannequin Challenge: We use this dataset for multi-view evaluation. You need to download the dataset from the dataset website and follow all relevant instructions. We used the SMPL annotations generated by SMPLy Benchmarking 3D Human Pose Estimation in the Wild.

## Generate dataset files

After preparing the data, we continue with the preprocessing that produces the data/annotations for each dataset in the expected format. With the exception of Human3.6M, we already provide these files and you can get them with the `fetch_data.sh` script. For MPII, COCO and MPI-INF-3DHP these contain the SMPL pose parameters generated by SPIN. If you want to generate the files yourself, you need to generate them using the preprocessing scripts `preprocess_{h36m,3dpw,...}.py`.

For example, to generate the Human3.6M Protocol 2 validation file you need to run:

```
python preprocess_h36m.py --split=val-p2
```

To generate the Human3.6M multiview evaluation file, you first need to generate the npz file containing the validation examples from all views:

```
python preprocess_h36m.py --split=val
```

After generating the npz file from the previous step, you can run:

```
python preprocess_h36m.py --split=multiview
```
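
If you prefer to run both steps from a single script, a minimal sketch that chains the two documented commands in the required order is:

```python
# Sketch: run the two documented multiview preprocessing steps in order.
import subprocess

# Step 1: build the npz with the validation examples from all views.
subprocess.run(['python', 'preprocess_h36m.py', '--split=val'], check=True)
# Step 2: assemble the multiview evaluation file from that npz.
subprocess.run(['python', 'preprocess_h36m.py', '--split=multiview'], check=True)
```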