This repository extends the work of Decomposing NeRF for Editing via Feature Field Distillation to render and extract features from 3D scenes that include transparent objects. We use TensoRF: Tensorial Radiance Fields instead of NeRF for faster scene rendering. We use feature field distillation to render specific features within a scene.
Install environment:
conda create -n RETrO python=3.8
conda activate RETrO
pip install torch torchvision
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard
Use this implementation of DINO ViT Feature Extractor to extract features from images.
python extractor.py --image_path <image_path> --output_path <output_path> --is_dir=True --reshape=True
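The extractor produces one descriptor per ViT patch, and with --reshape=True these can be arranged on a spatial grid for distillation. Below is a minimal sketch of that reshaping step; the patch-grid size and the 384-dim DINO ViT-S descriptor width are illustrative assumptions, not the extractor's exact output format:

```python
import numpy as np

# Hypothetical dimensions: a 224x224 image with patch size 8 and stride 8
# yields a 28x28 grid of patches; DINO ViT-S descriptors are 384-dim.
h, w, feat_dim = 28, 28, 384

# Stand-in for the extractor's flattened output: (num_patches, feat_dim).
flat_features = np.random.randn(h * w, feat_dim).astype(np.float32)

# Arrange the descriptors on the spatial grid, one feature vector per patch.
feature_grid = flat_features.reshape(h, w, feat_dim)

print(feature_grid.shape)  # (28, 28, 384)
```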
The training script is in train.py. To train RETrO:
python train.py --config configs/flower.txt
We provide a few example configurations in the configs folder. Please note:
- dataset_name, choices = ['llff', 'dexnerfrealtable', 'llff_features'];
- shadingMode, choices = ['MLP_Fea', 'SH'];
- model_name, choices = ['TensorVMSplit', 'TensorCP'], corresponding to the VM and CP decompositions. You need to uncomment the last few rows of the configuration file if you want to train with the TensorCP model;
- n_lamb_sigma and n_lamb_sh are string-typed and specify the number of basis components for density and appearance along the XYZ dimensions;
- N_voxel_init and N_voxel_final control the resolution of the matrix and vector factors;
- N_vis and vis_every control visualization during training;
- you need to set --render_test 1 / --render_path 1 if you want to render testing views or a path after training.
For more options, refer to opt.py.
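For reference, a configuration file covering the options above might look roughly like the sketch below; all values are illustrative placeholders, not the repo's actual flower.txt:

```
dataset_name = llff_features
datadir = ./data/nerf_llff_data/flower
expname = tensorf_flower_VM_features
model_name = TensorVMSplit
shadingMode = MLP_Fea

n_lamb_sigma = [16,16,16]
n_lamb_sh = [48,48,48]

N_voxel_init = 2097152
N_voxel_final = 27000000

N_vis = 5
vis_every = 10000
```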
Pretrained checkpoints: https://1drv.ms/u/s!Ard0t_p4QWIMgQ2qSEAs7MUk8hVw?e=dc6hBm
python train.py --config configs/flower.txt --ckpt checkpoints/flower.th --feat_ckpt log/tensorf_flower_VM_features/tensorf_flower_VM_features.th --query flower --render_only 1 --render_test 1
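With feature field distillation (as in the Decomposing NeRF for Editing paper this repo builds on), a query such as flower selects pixels whose rendered features are similar to the query's feature vector. A minimal sketch of that selection step, assuming per-pixel features and a query embedding are already available; the function name and the cosine-similarity threshold are illustrative, not this repo's exact implementation:

```python
import numpy as np

def query_mask(feature_map, query_vec, threshold=0.5):
    """Select pixels whose rendered feature matches the query.

    feature_map: (H, W, C) per-pixel distilled features.
    query_vec:   (C,) embedding of the query (e.g. averaged DINO
                 features of query patches) -- an assumption here.
    """
    # Cosine similarity between each pixel feature and the query vector.
    fm = feature_map / (np.linalg.norm(feature_map, axis=-1, keepdims=True) + 1e-8)
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    sim = fm @ q  # (H, W)
    return sim > threshold

# Toy usage: only the pixel aligned with the query vector is selected.
feats = np.zeros((2, 2, 3), dtype=np.float32)
feats[0, 0] = [1.0, 0.0, 0.0]   # matches the query
feats[1, 1] = [0.0, 1.0, 0.0]   # orthogonal to the query
mask = query_mask(feats, np.array([1.0, 0.0, 0.0], dtype=np.float32))
print(mask)
```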
You can simply pass --render_only 1 and --ckpt path/to/your/checkpoint to render images from a pre-trained checkpoint. You may also need to specify what you want to render: --render_test 1, --render_train 1, or --render_path 1.
The rendering results are located in your checkpoint folder.
You can also export the mesh by passing --export_mesh 1:
python train.py --config configs/flower.txt --ckpt path/to/your/checkpoint --export_mesh 1
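Mesh export extracts geometry from the learned density field; conceptually this is marching cubes over a sampled density grid. Here is a self-contained sketch of that idea using scikit-image (installed above) on a toy sphere density — not the repo's actual export code:

```python
import numpy as np
from skimage import measure

# Toy density grid: high density inside a sphere, zero outside.
n = 32
coords = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
density = (np.sqrt(x**2 + y**2 + z**2) < 0.5).astype(np.float32)

# Marching cubes at an iso-level between "empty" and "occupied".
verts, faces, normals, values = measure.marching_cubes(density, level=0.5)

print(verts.shape, faces.shape)  # shapes are (V, 3) and (F, 3)
```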
Note: Please re-train the model rather than using our pretrained checkpoints for mesh extraction, because some render parameters have changed.
We provide two options for training on your own image set:
- Follow the instructions in the NSVF repo, then set the dataset_name to 'tankstemple'.
- Calibrate images with the script from NGP:
python dataLoader/colmap2nerf.py --colmap_matcher exhaustive --run_colmap
then adjust the datadir in configs/your_own_data.txt. Please check the scene_bbox and near_far if you get abnormal results.
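The checks on scene_bbox and near_far amount to making sure the bounding box encloses the scene and the near plane sits in front of the far plane. A small sketch of the kind of values to verify; these numbers are illustrative placeholders, not defaults from this repo:

```python
# Illustrative bounds; adjust until your scene fits inside the box.
scene_bbox = [[-1.5, -1.5, -1.5], [1.5, 1.5, 1.5]]  # [min_xyz, max_xyz]
near_far = [0.1, 6.0]  # ray sampling range; tighten if the depth range is known

# Sanity checks: the box must have positive extent and near < far.
assert all(hi > lo for lo, hi in zip(scene_bbox[0], scene_bbox[1]))
assert near_far[0] < near_far[1]
print("bounds look consistent")
```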
If you find our code helpful, please consider citing:
@misc{retro,
  title={RETrO: Rendering and Extracting Transparent Objects using TensoRF and Feature Field Distillation},
  author={Wei, Megan and Xu, Katherine and Ray, Anushka},
  journal={GitHub repository},
  year={2022}
}