Nemo1999/Joint-TensoRF

Introduction

Official Code Release for AAAI 2024 Paper : Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields

The released code is experimental and not yet fully stable; please raise an issue to help improve the project.

Robustify Joint Pose Optimization with Randomized 2D/3D Filtering and Edge-Guided Loss Mask

(a) Naively applying joint optimization to voxel-based NeRFs leads to dramatic failure, as premature high-frequency signals in the voxel volume cause the camera poses to get stuck in local minima. (b) We propose a computationally efficient way to directly control the spectrum of the radiance field by performing separable component-wise convolution of Gaussian filters on the decomposed tensor. The proposed training scheme allows the joint optimization to converge to a better solution.

Efficient Separable Component-Wise Convolution


Our method enables joint optimization of camera poses and a decomposed voxel representation by applying efficient separable component-wise convolution of Gaussian filters to the 3D tensor volume and the 2D supervision images.
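Below is a minimal PyTorch sketch of the idea (not the repository's implementation), assuming a CP-style decomposition with one 1D factor per spatial axis: blurring each 1D factor with a 1D Gaussian is mathematically equivalent to blurring the reconstructed 3D volume with the separable 3D Gaussian, and TensoRF's vector-matrix factors are handled analogously with 1D and 2D blurs. Function names and shapes are illustrative.

import torch
import torch.nn.functional as F

def gaussian_kernel1d(sigma, radius=None):
    # normalized 1D Gaussian kernel
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_cp_factors(factors, sigma):
    # factors: list of three tensors, each of shape (R, N_axis), one per spatial axis.
    # Blurring every 1D factor costs O(R * N) instead of the O(N^3) cost of
    # convolving the reconstructed voxel volume directly.
    kernel = gaussian_kernel1d(sigma)
    pad = (kernel.numel() - 1) // 2
    blurred = []
    for f in factors:
        f = f.unsqueeze(1)                                   # (R, 1, N) for conv1d
        f = F.conv1d(f, kernel.view(1, 1, -1), padding=pad)  # 1D Gaussian along the axis
        blurred.append(f.squeeze(1))
    return blurred

The 2D supervision images can be smoothed with a matching 2D Gaussian (e.g., torchvision.transforms.functional.gaussian_blur), and the training scheme gradually shrinks the filter width so that high-frequency detail is introduced only after the poses have roughly converged.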

Environment Setup

Create Conda Environment

  1. Install conda (e.g., Miniconda or Anaconda)
  2. Create the conda env:
# activate the base conda env
conda activate
# go to the project root
cd Bundle_Adjusting_TensoRF
# create the conda env ( Bundle_Adjusting_TensoRF )
bash ./env_setup/install.sh

Download Datasets

Run the following scripts:

# activate conda env
conda activate Bundle_Adjusting_TensoRF
# download and unzip the NeRF datasets
./env_setup/dataset.sh

If dataset.sh doesn't work, try manually downloading the files from Google Drive:

  • Download and unzip nerf_synthetic.zip and nerf_llff_data.zip from the NeRF Google Drive
  • Rename the directories to blender and llff respectively
  • Move the directories to Bundle_Adjusting_TensoRF/data/blender and Bundle_Adjusting_TensoRF/data/llff (see the shell sketch below)
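
A rough shell sketch of these manual steps, assuming the two zip files were downloaded into the project root (adjust the paths to wherever you saved them):

# from the project root ( Bundle_Adjusting_TensoRF )
unzip nerf_synthetic.zip
unzip nerf_llff_data.zip
mkdir -p data
mv nerf_synthetic data/blender
mv nerf_llff_data data/llff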

Reproduce Experiments

  • The project structure and training interface (options & yaml files) are inherited from BARF
    • For common settings, users can specify options in the yaml files under options/
    • When running train_3d.py directly, users can override options on the command line with --<key1>.<key2>=<value12> --<key3>=<value3> (see the example after this list)
    • When running multiple experiments with the newly added scripts/gpu_scheduler.py, users can override default options with {"key1.key2": value} Python dictionary items
  • It is strongly recommended to perform training and evaluation with RunConfigsGPUScheduler.default_use_wandb=True (the default behaviour), because we log a lot of useful information to the Weights & Biases platform, including:
    • All Quantitative Results
    • Visualizations of the training process and animations
    • Depth maps and depth animations
    • Camera poses and camera pose animations
    • Final results and animations
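
For example, here is a hedged sketch of both override styles, using data.test_sub (the test-split subsample mentioned in the Blender section below) as the key; the value is illustrative, and the full set of keys lives in the yaml files under options/:

# override an option on the command line when running train_3d.py directly
python train_3d.py --data.test_sub=4
# the equivalent override for runs launched through scripts/gpu_scheduler.py is a
# Python dictionary item, e.g. {"data.test_sub": 4}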

Blender Dataset

  • Option1: Training + Evaluation in 1 Step
    • It is recommended to lower the test-split subsample data.test_sub in the yaml file or Python config (see the yaml sketch after this list), otherwise the evaluation time will exceed the training time.
python -m scripts.train_and_evaluate_bat_blender
  • Option2: Separate Training & Evaluation (for timing purposes)
# training, save the checkpoint in the `output` directory
python -m scripts.train_bat_blender

# do not change the config between the separate training and evaluation runs

# evaluation: automatically load the saved checkpoint, evaluate it, and upload the results to wandb as a separate run
python -m scripts.evaluate_bat_blender
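
A hedged yaml sketch of lowering the test split: the exact file name and surrounding structure live under options/ and may differ; the nesting below simply mirrors the data.test_sub key.

# e.g. inside the relevant yaml file under options/ (illustrative)
data:
  test_sub: 4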

LLFF Dataset

  • Option1: Training + Evaluation in 1 Step (recommended)
python -m scripts.train_and_evaluate_bat_llff
  • Option2: Separate Training & Evaluation (for timing purposes)
# training, save the checkpoint in the `output` directory
python -m scripts.train_bat_llff

# do not change the config between the separate training and evaluation runs

# evaluation: automatically load the saved checkpoint, evaluate it, and upload the results to wandb as a separate run
python -m scripts.evaluate_bat_llff
