plenoxels-pytorch

This is a (hopefully) well-commented implementation of Plenoxels in PyTorch (more comments are still being added to improve explainability). The relevant paper is Plenoxels: Radiance Fields without Neural Networks.

The theoretical background is explained in the series of posts below. Additionally, the incremental build-up of the code will hopefully aid a more ground-up understanding.

Usage

Currently, the code is set up to reconstruct a cube from 13 views of it. These views are stored in images/cube/training, and the corresponding camera positions are defined in cube_training_positions(). Running plenoxels.py performs the training and stores the reconstructed world in reconstructed.pt. It also renders the reconstructed views from the original training viewpoints into the images/frames directory. The renderings produced during training (which are stochastic samples) are stored in the images/reconstruction directory.
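
As a minimal sketch of the workflow (assuming PyTorch is installed, and that reconstructed.pt is written with torch.save, so its exact contents depend on how plenoxels.py serialises the grid):

```python
# Train: reads views from images/cube/training and writes reconstructed.pt,
# plus rendered frames under images/frames and images/reconstruction.
#   python plenoxels.py

import torch

# Load the saved world for inspection. The structure of the loaded object
# (plain tensor vs. a custom grid class) depends on plenoxels.py.
world = torch.load("reconstructed.pt", map_location="cpu")
print(type(world))
```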

Example Reconstructions

Note that, for speed, the voxel grid is restricted to 40×40×40 for the cube, and 40×40×60 for the table.
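
For a rough sense of scale, a dense grid of that size is small enough to optimise directly. The sketch below assumes the per-voxel layout from the Plenoxels paper (one density value plus 9 spherical-harmonic coefficients per colour channel, i.e. 28 floats per voxel); the layout actually used by this code may differ.

```python
import torch

# Assumed per-voxel layout (as in the Plenoxels paper):
# 1 density + 3 colour channels x 9 SH coefficients = 28 floats.
VALUES_PER_VOXEL = 1 + 3 * 9

grid = torch.zeros(40, 40, 40, VALUES_PER_VOXEL)  # cube-sized grid
print(f"parameters: {grid.numel():,}")                    # 1,792,000
print(f"float32 size: {grid.numel() * 4 / 1e6:.1f} MB")   # ~7.2 MB
```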

Training


Reconstruction


Partial Reconstruction of a Table from a Single Image

[Partial Reconstruction of a Table]

[Partial Reconstruction Sequence of a Table]

[Training Image of a Table]

Note: The table image above is taken from the Amazon Berkeley Objects Dataset. The ABO Dataset is made available under the Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0).
