yongjun823/pytorch3d

Install on an NVIDIA RTX 3070

  1. Install the requirements and set up CUB (reference: https://github.com/facebookresearch/pytorch3d/blob/master/INSTALL.md).
     If CUB is not already available, download the CUB library from https://github.com/NVIDIA/cub/releases and unpack it to a folder of your choice. Define the environment variable CUB_HOME before building and point it to the directory that contains CMakeLists.txt for CUB. For example, on Linux/Mac:
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0

  1.1 Install the conda packages

conda install -yc conda-forge fvcore
conda install -yc iopath iopath
  2. Set the CUDA architecture environment variable
export TORCH_CUDA_ARCH_LIST="8.0"
  3. Download the source code
git clone https://github.com/yongjun823/pytorch3d && cd pytorch3d
  4. Update setup.py
    extra_compile_args = {
        "cxx": ["-std=c++14"],
        "nvcc": ["--gpu-architecture=compute_80", "--gpu-code=sm_80"],
    }
  5. Install PyTorch3D
pip3 install -e .
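
After the editable install completes, a quick sanity check along these lines confirms that the extension built and the GPU is visible (a minimal sketch; the point-cloud shapes are arbitrary):

import torch
import pytorch3d
from pytorch3d.ops import knn_points

print(pytorch3d.__version__)      # the package imports
print(torch.cuda.is_available())  # CUDA is visible

# Exercise one compiled CUDA op end-to-end: k-nearest neighbors
# between two random point clouds on the GPU.
p1 = torch.rand(1, 128, 3, device="cuda")
p2 = torch.rand(1, 256, 3, device="cuda")
dists, idx, _ = knn_points(p1, p2, K=4)
print(dists.shape)  # torch.Size([1, 128, 4])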

Install in the NVIDIA Docker image nvcr.io/nvidia/pytorch:19.12-py3

  1. Update pip & apt
apt-get update
apt install -y libusb-1.0-0 libgl1-mesa-glx
pip install -U pip
  2. Create a conda environment
conda create -n pytorch3d python=3.8
conda init bash
source ~/.bashrc 
conda activate pytorch3d
  3. Install the conda packages (CUDA 10.2 & PyTorch 1.7)
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
conda install -yc conda-forge fvcore 
conda install -yc iopath iopath
conda install -yc bottler nvidiacub
  4. Set CUB_HOME
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0
  5. Install PyTorch3D
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"


Introduction

PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.

Key features include:

  • Data structure for storing and manipulating triangle meshes
  • Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)
  • A differentiable mesh renderer
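
For example, the mesh operations and loss functions compose into a differentiable objective in a few lines (a minimal sketch, using ico_sphere from pytorch3d.utils as a stand-in mesh):

from pytorch3d.utils import ico_sphere
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance, mesh_laplacian_smoothing

# A coarse source mesh and a finer target mesh.
src = ico_sphere(level=1)
tgt = ico_sphere(level=3)

# Sample point clouds from both surfaces and compare them with the
# differentiable chamfer loss, plus a Laplacian smoothness regularizer.
src_pts = sample_points_from_meshes(src, num_samples=2000)
tgt_pts = sample_points_from_meshes(tgt, num_samples=2000)
chamfer, _ = chamfer_distance(src_pts, tgt_pts)
loss = chamfer + mesh_laplacian_smoothing(src)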

PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. For this reason, all operators in PyTorch3D:

  • Are implemented using PyTorch tensors
  • Can handle minibatches of heterogeneous data
  • Can be differentiated
  • Can utilize GPUs for acceleration
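
Concretely, a single Meshes batch can mix meshes with different vertex and face counts, and any loss computed from it backpropagates to the input tensors (a minimal sketch):

import torch
from pytorch3d.structures import Meshes

# Two meshes of different sizes in one heterogeneous minibatch.
verts1 = torch.rand(4, 3, requires_grad=True)
faces1 = torch.tensor([[0, 1, 2], [0, 2, 3]])
verts2 = torch.rand(5, 3, requires_grad=True)
faces2 = torch.tensor([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
meshes = Meshes(verts=[verts1, verts2], faces=[faces1, faces2])

# The packed representation concatenates both meshes; gradients flow
# back to the original per-mesh vertex tensors.
loss = meshes.verts_packed().pow(2).sum()
loss.backward()
print(verts1.grad.shape)  # torch.Size([4, 3])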

Within FAIR, PyTorch3D has been used to power research projects such as Mesh R-CNN.

Installation

For detailed instructions refer to INSTALL.md.

License

PyTorch3D is released under the BSD-3-Clause License.

Tutorials

Get started with PyTorch3D by trying one of the tutorial notebooks.

  • Deform a sphere mesh to dolphin
  • Bundle adjustment
  • Render textured meshes
  • Camera position optimization
  • Render textured pointclouds
  • Fit a mesh with texture
  • Render DensePose data
  • Load & Render ShapeNet data
  • Fit Textured Volume
  • Fit A Simple Neural Radiance Field

Documentation

Learn more about the API by reading the PyTorch3D documentation.

We also have deep dive notes on several API components in the documentation.

Overview Video

We have created a short (~14 min) video tutorial providing an overview of the PyTorch3D codebase, including several code examples; the video is available on YouTube.

Development

We welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests and linter, and submit your pull requests.

Contributors

PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.

In alphabetical order:

  • Amitav Baruah
  • Steve Branson
  • Luya Gao
  • Georgia Gkioxari
  • Taylor Gordon
  • Justin Johnson
  • Patrick Labatut
  • Christoph Lassner
  • Wan-Yen Lo
  • David Novotny
  • Nikhila Ravi
  • Jeremy Reizenstein
  • Dave Schnizlein
  • Roman Shapovalov
  • Olivia Wiles

Citation

If you find PyTorch3D useful in your research, please cite our tech report:

@article{ravi2020pytorch3d,
    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
    title = {Accelerating 3D Deep Learning with PyTorch3D},
    journal = {arXiv:2007.08501},
    year = {2020},
}

If you are using the pulsar backend for sphere-rendering (the PulsarPointRenderer or pytorch3d.renderer.points.pulsar.Renderer), please cite the tech report:

@article{lassner2020pulsar,
    author = {Christoph Lassner and Michael Zollh\"ofer},
    title = {Pulsar: Efficient Sphere-based Neural Rendering},
    journal = {arXiv:2004.07484},
    year = {2020},
}

News

Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under Releases, and the builds can be installed using conda as per the instructions in INSTALL.md.

[November 2nd 2020]: PyTorch3D v0.3 released, integrating the pulsar backend.

[Aug 28th 2020]: PyTorch3D v0.2.5 released

[July 17th 2020]: PyTorch3D tech report published on ArXiv: https://arxiv.org/abs/2007.08501

[April 24th 2020]: PyTorch3D v0.2 released

[March 25th 2020]: SynSin codebase released using PyTorch3D: https://github.com/facebookresearch/synsin

[March 8th 2020]: PyTorch3D v0.1.1 bug fix release

[Jan 23rd 2020]: PyTorch3D v0.1 released. Mesh R-CNN codebase released: https://github.com/facebookresearch/meshrcnn
