
SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields

Official Implementation of SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields

Method Overview

*(Figure: overview of the SMURF method.)*

Environment Setup

1. Create the conda environment:

```shell
conda create -n smurf python=3.8
```

2. Activate the environment:

```shell
conda activate smurf
```

3. Clone the repository:

```shell
git clone https://github.com/Jho-Yonsei/SMURF.git
cd SMURF
```

4. Install the required packages:

```shell
pip3 install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip3 install -r requirements.txt
```

5. Find your environment path (`conda env list`) and copy the files in the `for_chrono_view_embedding` directory into the installed `torchdiffeq` package:

```shell
cp ./for_chrono_view_embedding/* {environment_path}/lib/python3.8/site-packages/torchdiffeq/_impl/
```
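Step 5 can also be scripted. This is a minimal sketch, assuming the `smurf` environment is active, that asks Python for its own `site-packages` directory instead of copying the path from `conda env list` by hand:

```shell
# Ask the active interpreter where its site-packages directory is
# ("purelib" is the standard sysconfig key for that location).
SITE_PACKAGES="$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"

# Copy the chrono-view embedding files into the installed torchdiffeq package
# (guarded so the command is a no-op when run outside the cloned repository).
if [ -d ./for_chrono_view_embedding ]; then
    cp ./for_chrono_view_embedding/* "${SITE_PACKAGES}/torchdiffeq/_impl/"
fi
```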

Data Preparation

The dataset is from Deblur-NeRF and can be downloaded from HERE. Download the `synthetic_camera_motion_blur` and `real_camera_motion_blur` directories from the drive and put them into the following structure:

```
SMURF/
└── data/
    ├── synthetic_camera_motion_blur/
    │   ├── blurcozy2room/
    │   ├── blurfactory/
    │   └── ...
    └── real_camera_motion_blur/
        ├── blurball/
        ├── blurbasket/
        └── ...
```
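If the directories are not yet in place, the two top-level data folders above can be created up front; the scene folders themselves (`blurfactory/`, `blurball/`, ...) come from extracting the downloaded archives:

```shell
# Create the expected data layout; extract the downloaded archives into
# these two directories so each scene folder sits directly underneath.
mkdir -p data/synthetic_camera_motion_blur data/real_camera_motion_blur
```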

How to Train

We provide config files for all scenes in the synthetic (5 scenes) and real-world (10 scenes) datasets.

For example, to train the factory scene from the synthetic dataset on GPU 0:

```shell
python3 train.py --config ./configs/camera_motion_blur_synthetic/factory.txt --device 0
```

To train the girl scene from the real-world dataset on GPU 1:

```shell
python3 train.py --config ./configs/camera_motion_blur_real/girl.txt --device 1
```

Training Options

You can adjust the following hyperparameters for ablative experiments:

- `--num_warp {N}` : number of warped rays (default: 8)
- `--chrono_view False` : use only the time embedding, without the chrono-view embedding (default: True)
- `--res_momentum False` : deactivate residual momentum (default: True)
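For example, a hypothetical ablation run combining these flags (4 warped rays, residual momentum disabled) on the factory scene would look like:

```shell
python3 train.py --config ./configs/camera_motion_blur_synthetic/factory.txt \
    --device 0 --num_warp 4 --res_momentum False
```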

How to Evaluate

If you have trained the factory scene and want to render the test images and the spiral-path video:

```shell
python3 train.py --config ./configs/camera_motion_blur_synthetic/factory.txt --device 0 --ckpt ./work_dir/camera_motion_blur_synthetic/factory/factory.th --render_only 1
```

Pretrained Weights

To be released.

Citation

Please cite this work if you find it useful:

```bibtex
@article{lee2024smurf,
  title={SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields},
  author={Lee, Jungho and Lee, Dogyoon and Lee, Minhyeok and Kim, Donghyung and Lee, Sangyoun},
  journal={arXiv preprint arXiv:2403.07547},
  year={2024}
}
```

Acknowledgements

This repo is based on TensoRF and Deblur-NeRF, and our work is heavily influenced by Neural ODEs.

Thanks to the original authors for their awesome works!

TODO

- Release source code.
- Update README file.
- Upload pretrained weights.
