(WACV 2023) I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images

This is a codebase for I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images.

Requirements

- Python 3.6
- PyTorch >= 1.0

Dataset

├── train
|    ├── src_imgs_train        -> Source occlusion-free light field images
|    ├── occ_imgs              -> Occlusion images without background
|    └── occ_msks              -> Occlusion masks for occ_imgs (1 for occlusion, 0 for background)
├── test_data_dir1
├── test_data_dir2
├── test_data_dir3
└── ...
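
The three train folders fit together as occlusion-free source, occluder, and binary mask; the subsections below describe each one. A minimal, illustrative sketch of how they can be combined into a synthetically occluded view is given here. The file names, pairing of images, and PIL/NumPy compositing are assumptions for illustration, not the repository's actual data pipeline.

```python
import numpy as np
from PIL import Image

# Placeholder file names; the actual files and pairing depend on your data.
src = Image.open("train/src_imgs_train/0001.png").convert("RGB")
occ = Image.open("train/occ_imgs/0001.png").convert("RGB").resize(src.size)
msk = Image.open("train/occ_msks/0001.png").convert("L").resize(src.size)

src_a, occ_a = np.asarray(src), np.asarray(occ)
msk_a = (np.asarray(msk) > 0)[..., None]          # 1 = occlusion, 0 = background

# Paste the occluder onto the occlusion-free source wherever the mask is 1.
occluded = np.where(msk_a, occ_a, src_a).astype(np.uint8)
Image.fromarray(occluded).save("occluded_example.png")
```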

src_imgs_train

We use the DUTLF-V2 training dataset for the source occlusion-free light field images. Since some scenes in the DUTLF-V2 dataset contain occlusions, we selected 1418 occlusion-free images from it. Please refer to DUTLF_V2_train_list.json for the selected images.
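
A sketch of filtering the DUTLF-V2 training set with the provided list is shown below. The JSON schema (a flat list of image names) and the DUTLF-V2 path are assumptions; check DUTLF_V2_train_list.json in this repository for the actual format.

```python
import json
import shutil
from pathlib import Path

with open("DUTLF_V2_train_list.json") as f:
    selected = set(json.load(f))               # assumed schema: ["0001", "0007", ...]

dutlf_root = Path("/path/to/DUTLF-V2/train")   # placeholder path to DUTLF-V2
out_dir = Path("train/src_imgs_train")
out_dir.mkdir(parents=True, exist_ok=True)

# Copy only the listed (occlusion-free) images into the training folder.
for img_path in sorted(dutlf_root.glob("*")):
    if img_path.is_file() and img_path.stem in selected:
        shutil.copy(img_path, out_dir / img_path.name)
```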

occ_imgs and occ_msks

We use occlusion images from DeOccNet and resize them to 600x400. The occlusion masks are binary masks for the occlusion images, where 1 indicates occlusion and 0 indicates background. The masks can be created by simply thresholding the occlusion images: we convert the occ_imgs to grayscale and threshold them at 229.
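
A minimal sketch of this mask generation follows. The direction of the comparison is an assumption: it treats the occ_imgs as having a near-white background, so pixels darker than 229 become occlusion (1) and the rest become background (0).

```python
import numpy as np
from PIL import Image
from pathlib import Path

occ_dir, msk_dir = Path("train/occ_imgs"), Path("train/occ_msks")
msk_dir.mkdir(parents=True, exist_ok=True)

for occ_path in sorted(occ_dir.glob("*.png")):
    gray = np.asarray(Image.open(occ_path).convert("L"))
    mask = (gray < 229).astype(np.uint8)       # 1 = occlusion, 0 = background
    # Save as 0/255 so the mask is viewable as an image.
    Image.fromarray(mask * 255).save(msk_dir / occ_path.name)
```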

test dataset

The test datasets can be downloaded from DeOccNet, Stanford Lytro, and EPFL-10.

Train

Command

bash command/train.sh

The pre-trained LBAM model should be located at ISTY/LBAMmodels/LBAM_G_500.pth.

Since we use additional occlusion images for training (as mentioned in the paper), the results may differ slightly if the model is re-trained from this repository alone. Adding occlusion images of thick and complex objects can improve performance.

Test

bash command/test.sh

The checkpoint should be in ./results/checkpoints/{scope}/LFGAN/.

Checkpoint and dataset

We provide the preprocessed dataset, the model checkpoint, and the pre-trained LBAM model used as the backbone of the Occlusion Inpainter here.

Please refer to each section for the proper location of each file.

Citations

@inproceedings{hur2023see,
  title={I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images},
  author={Hur, Jiwan and Lee, Jae Young and Choi, Jaehyun and Kim, Junmo},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={229--238},
  year={2023}
}

Acknowledgement

The code for the model architecture is based on DeOccNet and LBAM.
