Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild (PHOSA)

Jason Y. Zhang*, Sam Pepose*, Hanbyul Joo, Deva Ramanan, Jitendra Malik, and Angjoo Kanazawa.

[arXiv] [Project Page] [Colab Notebook] [Bibtex]

In ECCV 2020

Requirements

There is currently no CPU-only support.

License

Our code is released under CC BY-NC 4.0. However, our code depends on other libraries, including SMPL, which each have their own respective licenses that must also be followed.

Installation

We recommend using a conda environment:

conda create -n phosa python=3.7
conda activate phosa
pip install -r requirements.txt

Install the torch version that corresponds to your version of CUDA, e.g. for CUDA 10.0, use:

conda install pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.0 -c pytorch

Note that CUDA versions above 10.2 do not support PyTorch 1.4, so we recommend using cudatoolkit=10.0. If you need support for PyTorch >1.4 (e.g. for updated versions of detectron), follow the suggested updates to NMR here.
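As a sanity check on the pairing above, the mapping from CUDA toolkit version to install command can be sketched as a small helper. This is only a sketch: the README recommends cudatoolkit=10.0, and the 9.2/10.1 entries are an assumption based on the CUDA builds PyTorch 1.4 shipped with.

```python
# Sketch: map a CUDA toolkit version to the conda install command used above.
# 9.2 and 10.1 are assumptions (PyTorch 1.4 release builds); the README
# itself only recommends 10.0.
SUPPORTED_CUDA = {"9.2", "10.0", "10.1"}

def torch14_install_command(cuda_version: str) -> str:
    """Return the conda command for PyTorch 1.4 matching cuda_version."""
    if cuda_version not in SUPPORTED_CUDA:
        raise ValueError(
            f"PyTorch 1.4 has no build for CUDA {cuda_version}; "
            "use cudatoolkit=10.0"
        )
    return (
        "conda install pytorch=1.4.0 torchvision=0.5.0 "
        f"cudatoolkit={cuda_version} -c pytorch"
    )
```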

Alternatively, you can check out our interactive Colab Notebook.

Setting up External Dependencies

Install the fast version of Neural Mesh Renderer:

mkdir -p external
git clone https://github.com/JiangWenPL/multiperson.git external/multiperson
pip install external/multiperson/neural_renderer

Install Detectron2:

mkdir -p external
git clone --branch v0.2.1 https://github.com/facebookresearch/detectron2.git external/detectron2
pip install external/detectron2
# Download pre-trained PointRend weights
gdown -O models/model_final_3c3198.pkl https://drive.google.com/uc\?id\=1SoFg6AjB17CIekGvAf_sLIuCE7wEmVfK

Install FrankMocap (the body module is the same regressor, trained on EFT data, that we used in the paper):

mkdir -p external
git clone https://github.com/facebookresearch/frankmocap.git external/frankmocap
sh external/frankmocap/scripts/download_data_body_module.sh

You will also need to download the SMPL model from the SMPLify website. Make an account, download the neutral model basicModel_neutral_lbs_10_207_0_v1.0.0.pkl, and place it at extra_data/smpl/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl.
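A quick stdlib check that the SMPL model landed where PHOSA expects it can be written as follows. The path mirrors the one stated above; nothing else about the model loading is assumed here.

```python
from pathlib import Path

# Expected location of the neutral SMPL model, per the instructions above.
SMPL_MODEL_PATH = Path("extra_data/smpl/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl")

def check_smpl_model(path: Path = SMPL_MODEL_PATH) -> bool:
    """Return True if the SMPL pickle exists at the expected path."""
    return path.is_file()
```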

If you did not clone detectron2 and frankmocap in the external directory, you will need to update the paths in the constants file.
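The external/ layout assumed by the default paths can be verified with a short stdlib script before running anything. The directory names come straight from the clone commands above; the constants file itself is not reproduced here.

```python
from pathlib import Path

# Directories created by the clone commands in this README.
EXTERNAL_DEPS = [
    "external/multiperson",
    "external/detectron2",
    "external/frankmocap",
]

def missing_dependencies(root: str = ".") -> list:
    """Return the external dependency directories not present under root."""
    base = Path(root)
    return [d for d in EXTERNAL_DEPS if not (base / d).is_dir()]
```

If this returns a non-empty list and you cloned the dependencies elsewhere, update the paths in the constants file instead.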

Currently, the mesh interpenetration loss is not included, so the results may look slightly different from the paper.

Meshes

The repository only includes a bicycle mesh that we created. For other object categories and mesh instances, you will need to download your own meshes. We list some recommended sources here.

Running the Code

python demo.py --filename input/000000038829.jpg
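To process a whole folder of images, the demo command above can be generated per file with a short script. This is a sketch: demo.py and its --filename flag are taken from the line above, and the input/ directory is the repository's sample folder; no other demo.py flags are assumed.

```python
from pathlib import Path

def demo_commands(image_dir: str = "input", pattern: str = "*.jpg") -> list:
    """Build one `python demo.py --filename ...` command per image."""
    return [
        f"python demo.py --filename {img}"
        for img in sorted(Path(image_dir).glob(pattern))
    ]
```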

We also have a Colab Notebook to interactively visualize the outputs.

Citing PHOSA

If you find this code helpful, please consider citing:

@InProceedings{zhang2020phosa,
    title = {Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild},
    author = {Zhang, Jason Y. and Pepose, Sam and Joo, Hanbyul and Ramanan, Deva and Malik, Jitendra and Kanazawa, Angjoo},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2020},
}
