Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild (PHOSA)

Jason Y. Zhang*, Sam Pepose*, Hanbyul Joo, Deva Ramanan, Jitendra Malik, and Angjoo Kanazawa.

[arXiv] [Project Page] [Colab Notebook] [Bibtex]

In ECCV 2020


There is currently no CPU-only support.


Our code is released under CC BY-NC 4.0. However, our code depends on other libraries, including SMPL, which each have their own respective licenses that must also be followed.


We recommend using a conda environment:

conda create -n phosa python=3.7
conda activate phosa
pip install -r requirements.txt

Install the PyTorch version that corresponds to your version of CUDA, e.g. for CUDA 10.0, use:

conda install pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.0 -c pytorch

Note that CUDA versions above 10.2 do not support PyTorch 1.4, so we recommend using cudatoolkit=10.0. If you need support for PyTorch >1.4 (e.g. for updated versions of detectron2), follow the suggested updates to NMR here.
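The version constraint above can be sketched as a small compatibility check. This helper is illustrative only (the function name and encoding of the cutoff are our own, not part of PHOSA):

```python
def cuda_supports_pytorch_14(cuda_version: str) -> bool:
    """Return True if this CUDA toolkit version can run PyTorch 1.4.

    Per the note above, CUDA releases newer than 10.2 do not support
    PyTorch 1.4; this sketch encodes that cutoff as a version comparison.
    """
    major, minor = (int(part) for part in cuda_version.split(".")[:2])
    return (major, minor) <= (10, 2)

print(cuda_supports_pytorch_14("10.0"))  # True: the recommended toolkit
print(cuda_supports_pytorch_14("11.1"))  # False: needs PyTorch > 1.4
```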

Alternatively, you can check out our interactive Colab Notebook.

Setting up External Dependencies

Install the fast version of Neural Mesh Renderer:

mkdir -p external
git clone https://github.com/JiangWenPL/multiperson.git external/multiperson
pip install external/multiperson/neural_renderer

Install Detectron2:

mkdir -p external
git clone --branch v0.2.1 https://github.com/facebookresearch/detectron2.git external/detectron2
pip install external/detectron2
# Download pre-trained PointRend weights
mkdir -p models
gdown https://drive.google.com/uc\?id\=1SoFg6AjB17CIekGvAf_sLIuCE7wEmVfK -O models/model_final_3c3198.pkl

Install FrankMocap (The body module is the same regressor trained on EFT data that we used in the paper):

mkdir -p external
git clone https://github.com/facebookresearch/frankmocap.git external/frankmocap
sh external/frankmocap/scripts/

You will also need to download the SMPL model from the SMPLify website. Make an account, download the neutral model basicModel_neutral_lbs_10_207_0_v1.0.0.pkl, and place it at extra_data/smpl/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl.
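A quick way to confirm that the downloaded data landed in the right places is a simple path check. The file list below assumes the repository layout described in this README:

```python
import os

# Expected locations per this README (an assumption about your checkout layout).
REQUIRED_FILES = [
    "extra_data/smpl/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl",  # SMPL neutral model
    "models/model_final_3c3198.pkl",  # pre-trained PointRend weights
]


def missing_files(root="."):
    """Return the required data files that are not found under root."""
    return [f for f in REQUIRED_FILES
            if not os.path.isfile(os.path.join(root, f))]


if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing files:")
        for f in missing:
            print("  " + f)
    else:
        print("All required data files found.")
```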

If you did not clone detectron2 and frankmocap in the external directory, you will need to update the paths in the constants file.
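The idea is simply to point the path constants at wherever your clones actually live. The variable names below are hypothetical, for illustration only; the repository's constants file uses its own names:

```python
import os

# Hypothetical sketch of dependency path constants; the actual constants
# file in the repository defines its own variable names.
EXTERNAL_DIR = os.environ.get("PHOSA_EXTERNAL_DIR", "external")
DETECTRON2_PATH = os.path.join(EXTERNAL_DIR, "detectron2")
FRANKMOCAP_PATH = os.path.join(EXTERNAL_DIR, "frankmocap")
```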

Currently, the mesh interpenetration loss is not included, so the results may look slightly different from the paper.


The repository only includes a bicycle mesh that we created. For other object categories and mesh instances, you will need to download your own meshes. We list some recommended sources here.
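Meshes downloaded from different sources rarely share a scale or origin, so a common preprocessing step is to center the vertices and rescale them to a canonical size. The sketch below shows one such normalization; it is an assumed convention for illustration, not the repository's own preprocessing:

```python
import numpy as np


def normalize_vertices(vertices):
    """Center mesh vertices at the origin and scale the bounding-box
    diagonal to unit length (an illustrative convention, not PHOSA's)."""
    v = np.asarray(vertices, dtype=np.float64)
    center = (v.max(axis=0) + v.min(axis=0)) / 2.0
    v = v - center
    diagonal = np.linalg.norm(v.max(axis=0) - v.min(axis=0))
    return v / diagonal


# Example: the 8 corners of a cube spanning [0, 2]^3.
cube = np.array([[x, y, z] for x in (0, 2) for y in (0, 2) for z in (0, 2)],
                dtype=np.float64)
normalized = normalize_vertices(cube)
```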

Running the Code

python --filename input/000000038829.jpg

We also have a Colab Notebook to interactively visualize the outputs.

Citing PHOSA

If you find this code helpful, please consider citing:

@InProceedings{zhang2020phosa,
    title = {Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild},
    author = {Zhang, Jason Y. and Pepose, Sam and Joo, Hanbyul and Ramanan, Deva and Malik, Jitendra and Kanazawa, Angjoo},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2020},
}

