BokehMe: When Neural Rendering Meets Classical Rendering (CVPR 2022 Oral)

Juewen Peng¹, Zhiguo Cao¹, Xianrui Luo¹, Hao Lu¹, Ke Xian¹*, Jianming Zhang²

¹Huazhong University of Science and Technology, ²Adobe Research

This repository is the official PyTorch implementation of the CVPR 2022 paper "BokehMe: When Neural Rendering Meets Classical Rendering".

NOTE: There is a citation mistake in the conference version of the paper. In Section 4.1, the disparity maps of the EBB400 dataset are predicted by MiDaS [1], not DPT [2].

[1] Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
[2] Vision Transformers for Dense Prediction

Installation

git clone https://github.com/JuewenPeng/BokehMe.git
cd BokehMe
pip install -r requirements.txt

Usage

python demo.py --image_path 'inputs/21.jpg' --disp_path 'inputs/21.png' --save_dir 'outputs' --K 60 --disp_focus 90/255 --gamma 4 --highlight
  • image_path: path of the input all-in-focus image
  • disp_path: path of the input disparity map (predicted by DPT in this example)
  • save_dir: directory to save the results
  • K: blur parameter
  • disp_focus: refocused disparity (range from 0 to 1)
  • gamma: gamma value (range from 1 to 5)
  • highlight: enhance the RGB values of highlights before rendering to produce more striking bokeh balls

See demo.py for more details.
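For instance, you can sweep disp_focus across its range to render a focal stack. The sketch below (not part of the repository) invokes demo.py through subprocess using only the documented flags; the per-run output directories are illustrative.

import subprocess
from pathlib import Path

# Sweep the refocused disparity to render a focal stack.
for v in (0, 64, 128, 192, 255):
    save_dir = Path('outputs') / f'focus_{v}'  # illustrative per-run directory
    save_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        'python', 'demo.py',
        '--image_path', 'inputs/21.jpg',
        '--disp_path', 'inputs/21.png',
        '--save_dir', str(save_dir),
        '--K', '60',
        '--disp_focus', f'{v}/255',  # same fractional format as the example above
        '--gamma', '4',
        '--highlight',
    ], check=True)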

BLB Dataset

The BLB dataset is synthesized with Blender 2.93. It contains 10 scenes, each consisting of an all-in-focus image, a disparity map, a stack of bokeh images with 5 blur amounts and 10 refocused disparities, and a parameter file. For each scene, we additionally provide 15 corrupted disparity maps (generated through Gaussian blur, dilation, and erosion). The BLB dataset can be downloaded from Google Drive or Baidu Netdisk.

Instructions:

  • EXR images can be loaded by image = cv2.imread(IMAGE_PATH, -1)[..., :3].astype(np.float32) ** (1/2.2) . The loaded images are in BGR, so you can convert them to RGB by image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if necessary.
  • EXR depth maps can be loaded by depth = cv2.imread(DEPTH_PATH, -1)[..., 0].astype(np.float32). You can convert them to disparity maps by disp = 1 / depth. Note that it is unnecessary to normalize the disparity maps since we have pre-processed them to ensure that the signed defocus maps calculated by K * (disp - disp_focus) are in line with the experimental settings of the paper.
  • NOTE: Some pixel values of the images may be larger than 1 at highlights (though most are smaller than 1). Since some rendering methods can only output values between 0 and 1, we clip the numerical ranges of both the predicted bokeh images and the real ones to [0, 1] before evaluation. The main reason image values exceed 1 is that the EXR images exported from Blender are in linear space, and we only apply gamma 2.2 correction without tone mapping. We will improve this in the future. A loading sketch follows this list.
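Putting the instructions above together, the sketch below loads one scene. The file paths and the K / disp_focus values are placeholders, and setting OPENCV_IO_ENABLE_OPENEXR may be required by recent OpenCV builds to read EXR files.

import os
os.environ['OPENCV_IO_ENABLE_OPENEXR'] = '1'  # may be needed before importing cv2

import cv2
import numpy as np

IMAGE_PATH = 'blb/scene1/image.exr'  # placeholder paths
DEPTH_PATH = 'blb/scene1/depth.exr'

# Load the linear-space EXR image and apply gamma 2.2 correction (see above).
image = cv2.imread(IMAGE_PATH, -1)[..., :3].astype(np.float32) ** (1 / 2.2)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR -> RGB

# Load the EXR depth map and convert it to a disparity map.
depth = cv2.imread(DEPTH_PATH, -1)[..., 0].astype(np.float32)
disp = 1 / depth

# Signed defocus map; K and disp_focus are example values, not dataset constants.
K, disp_focus = 60, 90 / 255
defocus = K * (disp - disp_focus)

# Clip to [0, 1] before evaluation, since highlight values can exceed 1.
image_eval = np.clip(image, 0, 1)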

Citation

If you find our work useful in your research, please cite our paper.

@inproceedings{Peng2022BokehMe,
  title = {BokehMe: When Neural Rendering Meets Classical Rendering},
  author = {Peng, Juewen and Cao, Zhiguo and Luo, Xianrui and Lu, Hao and Xian, Ke and Zhang, Jianming},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2022}
}
