Yasser-Cloud/2d-images-to-3d-meshes_app-deployment

Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild (PHOSA)

Jason Y. Zhang*, Sam Pepose*, Hanbyul Joo, Deva Ramanan, Jitendra Malik, and Angjoo Kanazawa.

[arXiv] [Project Page] [Colab Notebook] [Bibtex]

In ECCV 2020

Summary

In this work, we fixed problems in the original PHOSA project: we replaced the old Detectron2 model with a newer one, updated the torch version, made the rest of the pipeline compatible with the new versions, deployed the app on AWS EC2, and added new object categories (motorbike, bat, tennis racket, skiboard, laptop, ...). We also provide a Colab Notebook, linked below.

Challenges

  • Building on a sequence of prior works to reach the final pipeline: PointRend from Detectron2, BodyMocap, Neural Mesh Renderer, and PHOSA
  • Resolving dependencies among these projects, some of which are out of date, when assembling the pipeline
  • Creating new 3D objects and annotating them with MeshLab
  • Deploying the final work on the cloud, which required substantial computational resources, and building our Flask web application

Requirements

There is currently no CPU-only support.

License

Our code is released under CC BY-NC 4.0. However, our code depends on other libraries, including SMPL, which each have their own respective licenses that must also be followed.

Installation

We recommend using a conda environment:

conda create -n phosa python=3.7
conda activate phosa
pip install -r requirements.txt

Install the PyTorch version that corresponds to your CUDA version; e.g., for CUDA 10.0, use:

conda install pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.0 -c pytorch

Note that CUDA versions above 10.2 do not support PyTorch 1.4, so we recommend using cudatoolkit=10.0. If you need support for PyTorch >1.4 (e.g. for updated versions of Detectron2), follow the suggested updates to NMR here.
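After installing, a quick sanity check that torch sees the GPU (remember there is no CPU-only support):

import torch

print(torch.__version__)          # e.g. 1.4.0
print(torch.version.cuda)         # should match the installed toolkit, e.g. 10.0
print(torch.cuda.is_available())  # must be True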

Alternatively, you can check out our interactive Colab Notebook.

Setting up External Dependencies

Install the fast version of Neural Mesh Renderer:

mkdir -p external
git clone https://github.com/JiangWenPL/multiperson.git external/multiperson
pip install external/multiperson/neural_renderer
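If the CUDA extension compiled correctly, the package should import cleanly. A minimal smoke test (the import name neural_renderer is assumed from the package directory above):

import neural_renderer

print("neural_renderer imported OK")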

Install Detectron2:

mkdir -p external
git clone --branch v0.2.1 https://github.com/facebookresearch/detectron2.git external/detectron2
pip install external/detectron2
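A quick check that the pinned version is the one installed:

import detectron2

print(detectron2.__version__)  # expect 0.2.1 to match the branch cloned above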

Install FrankMocap (the body module is the same regressor, trained on EFT data, that we used in the paper):

mkdir -p external
git clone https://github.com/facebookresearch/frankmocap.git external/frankmocap
sh external/frankmocap/scripts/download_data_body_module.sh

You will also need to download the SMPL model from the SMPLify website. Make an account, download the neutral model basicModel_neutral_lbs_10_207_0_v1.0.0.pkl, and place it at extra_data/smpl/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl.
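A small sanity check for the expected location (a sketch using the path given above):

from pathlib import Path

smpl_path = Path("extra_data/smpl/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl")
assert smpl_path.is_file(), f"SMPL model missing at {smpl_path}"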

If you did not clone detectron2 and frankmocap in the external directory, you will need to update the paths in the constants file.
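For illustration, the updated entries might look like the following (the variable names here are hypothetical; use the actual identifiers in the repo's constants file):

# Hypothetical names -- consult the constants file for the real ones.
DETECTRON_PATH = "external/detectron2"
FRANKMOCAP_PATH = "external/frankmocap"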

Currently, the mesh interpenetration loss is not included, so the results may look slightly different from the paper.

Meshes

The repository only includes a bicycle mesh that we created. For other object categories and mesh instances, you will need to download your own meshes. We list some recommended sources here.
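Downloaded meshes often arrive in arbitrary scales and coordinate frames. A minimal preprocessing sketch, assuming trimesh is installed and using illustrative file paths:

import trimesh

mesh = trimesh.load("downloads/laptop.obj", force="mesh")  # path is illustrative

# Center the mesh at the origin and scale its largest extent into [-1, 1].
mesh.vertices -= mesh.vertices.mean(axis=0)
mesh.vertices /= abs(mesh.vertices).max()

mesh.export("models/meshes/laptop.obj")  # destination is illustrative too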

Running the Code

python demo.py --filename input/000000038829.jpg
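To run the demo over a folder of images, a simple driver sketch (assumes demo.py processes one image per --filename invocation, as above):

import subprocess
from pathlib import Path

for image in sorted(Path("input").glob("*.jpg")):
    subprocess.run(["python", "demo.py", "--filename", str(image)], check=True)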

We also have a Colab Notebook to interactively visualize the outputs.

Citing PHOSA

If you find this code helpful, please consider citing:

@InProceedings{zhang2020phosa,
    title = {Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild},
    author = {Zhang, Jason Y. and Pepose, Sam and Joo, Hanbyul and Ramanan, Deva and Malik, Jitendra and Kanazawa, Angjoo},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2020},
}

About

As the name of the repo suggests, this app takes a 2D image and converts it into a 3D model.
