Official Code Release for the AAAI 2024 Paper: Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields
The released code is experimental and not fully stable; please raise an issue to help improve the project.
(a) Naively applying joint optimization to voxel-based NeRFs leads to dramatic failure, as premature high-frequency signals in the voxel volume cause the camera poses to get stuck in local minima. (b) We propose a computationally efficient way to directly control the spectrum of the radiance field by performing separable component-wise convolution of Gaussian filters on the decomposed tensor. The proposed training scheme allows the joint optimization to converge to a better solution.
Our method enables joint optimization of camera poses and a decomposed voxel representation by applying efficient separable component-wise convolution of Gaussian filters to the 3D tensor volume and the 2D supervision images (see the sketch below).
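To make the idea concrete, here is a minimal sketch (not the code in this repo) of a separable Gaussian blur on one vector-matrix (VM) component of a decomposed tensor; the function names, tensor shapes, and the 3-sigma kernel radius are our own assumptions. Because convolution with a separable kernel factorizes over outer products, blurring the low-rank 2D and 1D factors independently is equivalent to blurring the full 3D volume, at a fraction of the cost:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel1d(sigma: float, radius: int) -> torch.Tensor:
    """Normalized 1D Gaussian kernel of length 2 * radius + 1."""
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_vm_component(plane: torch.Tensor, line: torch.Tensor, sigma: float):
    """Separable component-wise Gaussian blur of one VM-decomposed component.

    plane: (1, R, H, W) matrix factor; line: (1, R, L, 1) vector factor,
    with R low-rank components (shapes assumed for this sketch).
    """
    R = plane.shape[1]
    radius = max(1, int(3.0 * sigma))
    k = gaussian_kernel1d(sigma, radius)
    kx = k.view(1, 1, 1, -1).repeat(R, 1, 1, 1)  # kernel along width
    ky = k.view(1, 1, -1, 1).repeat(R, 1, 1, 1)  # kernel along height
    # blur each component of the 2D plane factor along both spatial axes
    plane = F.conv2d(plane, kx, padding=(0, radius), groups=R)
    plane = F.conv2d(plane, ky, padding=(radius, 0), groups=R)
    # blur the 1D line factor along its single spatial axis
    line = F.conv2d(line, ky, padding=(radius, 0), groups=R)
    return plane, line
```

In a coarse-to-fine schedule, sigma would start large and shrink over training so that low-frequency signals dominate the early pose registration; the same 1D kernel can likewise blur the 2D supervision images.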
- Install the conda environment:

```bash
# activate conda
conda activate
# go to the project root
cd Bundle_Adjusting_TensoRF
# create the conda env (Bundle_Adjusting_TensoRF)
bash ./env_setup/install.sh
```
- Download the datasets by running the following scripts:

```bash
# activate the conda env
conda activate Bundle_Adjusting_TensoRF
# download and unzip the NeRF datasets
./env_setup/dataset.sh
```
If `dataset.sh` doesn't work, try to manually download the files from Google Drive:

- Download and unzip `nerf_synthetic.zip` and `nerf_llff_data.zip` from the NeRF Google Drive
- Rename the directories to `blender` and `llff` respectively
- Move the directories to `Bundle_Adjusting_TensoRF/data/blender` and `Bundle_Adjusting_TensoRF/data/llff`
- The project structure and training interface (options & yaml files) are inherited from BARF
- For common settings, users can specify options in the yaml files under `options/`
- When directly running `train_3d.py`, users can override options on the command line with `--<key1>.<key2>=<value12> --<key3>=<value3>`
- When running multiple experiments with our newly added `scripts/gpu_scheduler.py`, users can override default options with `{"key1.key2": value}` Python dictionary items (see the sketch after this list)
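As an illustration of the dictionary-style override (a hypothetical sketch, not code from this repo), the keys follow the same `<key1>.<key2>` convention as the yaml files; the variable name and value below are our own examples:

```python
# Hypothetical override dict in the {"key1.key2": value} style accepted by
# scripts/gpu_scheduler.py-style runners; keys mirror the nested yaml options
# in options/. The value here is purely illustrative.
overrides = {
    "data.test_sub": 4,  # shrink the test split (option discussed below)
}
```

The equivalent command-line form when running `train_3d.py` directly would be `--data.test_sub=4`.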
- It is strongly recommended to perform training and evaluation with `RunConfigsGPUScheduler.default_use_wandb=True` (the default behaviour), because we log a lot of useful information to the Weights & Biases platform, including:
  - All quantitative results
  - Visualizations of the training process and animations
  - Depth maps and depth animations
  - Camera poses and camera-pose animations
  - Final results and animations
- Option 1: Training + Evaluation in 1 Step (Blender, recommended)
  - It is recommended to lower the testing split `data.test_sub` in the yaml file or Python config; otherwise the evaluation time will be longer than the training time.

```bash
python -m scripts.train_and_evaluate_bat_blender
```
- Option 2: Separate Training & Evaluation (Blender, for timing purposes)

```bash
# training; saves a checkpoint in the `output` directory
python -m scripts.train_bat_blender
# don't change the config between the separate training and evaluation runs
# evaluation: auto-load the checkpoint, evaluate it, and upload the results to wandb as a separate run
python -m scripts.evaluate_bat_blender
```
- Option 1: Training + Evaluation in 1 Step (LLFF, recommended)

```bash
python -m scripts.train_and_evaluate_bat_llff
```
- Option 2: Separate Training & Evaluation (LLFF, for timing purposes)

```bash
# training; saves a checkpoint in the `output` directory
python -m scripts.train_bat_llff
# don't change the config between the separate training and evaluation runs
# evaluation: auto-load the checkpoint, evaluate it, and upload the results to wandb as a separate run
python -m scripts.evaluate_bat_llff
```