Multi-feature Co-learning for Image Inpainting
Jiayu Lin, Yuan-Gen Wang*, Wenzhi Tang, Aifeng Li.
In ICPR'2022.
Clone this repo.
git clone https://github.com/GZHU-DVL/MFCL-Inpainting.git
Prerequisites
- Python=3.8
- Pytorch=1.4
- Torchvision=0.5.0
- Torchaudio=0.4.0
- Tensorboard=2.9.0
- Pillow=8.2.0
- Cudatoolkit=10.1
Image Dataset.
We evaluate the proposed method on the CelebA, Paris StreetView, and Places2 datasets. Download each dataset from its official website.
The structure images used in this paper follow StructureFlow and are generated with the RTV smoothing method. Run the generation function data/Matlab/generate_structre_images.m in MATLAB to build this dataset. For example, to generate smooth images for CelebA, run:
generate_structure_images("path to CelebA dataset root folder", "path to output folder");
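The MATLAB script mirrors the dataset folder and writes a smoothed copy of every image. Its overall control flow can be sketched in Python as below; the `smooth` callback is a hypothetical stand-in for the actual RTV smoothing step (here it simply copies the file), so this is an illustration of the directory layout, not a replacement for the MATLAB script.

```python
import os
import shutil

def generate_structure_images(src_root, dst_root,
                              smooth=lambda src, dst: shutil.copy(src, dst)):
    """Mirror src_root into dst_root, applying a smoothing step per image.

    `smooth` stands in for the RTV smoothing performed by
    data/Matlab/generate_structre_images.m; here it just copies the file.
    """
    for dirpath, _, files in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        out_dir = os.path.join(dst_root, rel)
        os.makedirs(out_dir, exist_ok=True)
        for name in files:
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                smooth(os.path.join(dirpath, name),
                       os.path.join(out_dir, name))
```

The output folder then has the same sub-directory structure as the input, which is what the training script expects for --st_root.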
Mask Dataset.
Irregular masks are obtained from the Irregular Masks dataset and are classified by the ratio of the hole area to the entire image, in 10% increments.
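The bucketing rule above can be sketched as follows. This is a minimal illustration, not code from the repository: `mask_bucket` is a hypothetical helper, and the mask is a plain 0/1 grid rather than a loaded image file.

```python
import math

def mask_bucket(mask):
    """Return the 10%-increment category of a binary mask.

    mask: 2D list of 0/1 values, where 1 marks a hole pixel.
    Bucket 0 covers hole ratios in (0.0, 0.1], bucket 1 covers
    (0.1, 0.2], and so on, mirroring the classification described above.
    """
    total = sum(len(row) for row in mask)
    holes = sum(sum(row) for row in mask)
    ratio = holes / total
    return math.ceil(ratio * 10) - 1 if ratio > 0 else 0

# Example: a mask where half of all pixels are holes falls in the
# (0.4, 0.5] bucket, i.e. category 4.
mask = [[1, 1, 0, 0]] * 4
```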
To train the model, run:
python train.py \
--de_root [the path of ground truth images] \
--st_root [the path of structure images] \
--mask_root [the path of mask images] \
--checkpoints_dir [models are saved here] \
--log_dir [the path to record log]
To test the model, set the checkpoint paths in the following lines of test.py:
model.netEN.module.load_state_dict(torch.load("")['net'])
model.netDE.module.load_state_dict(torch.load("")['net'])
model.netMEDFE.module.load_state_dict(torch.load("")['net'])
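The empty strings above are placeholders for your checkpoint paths. The `['net']` indexing implies each checkpoint file is a dict whose 'net' key holds the corresponding state_dict. A minimal illustration of that assumed layout, using pickle in place of torch.save/torch.load so it stays self-contained:

```python
import io
import pickle

# Hypothetical checkpoint layout implied by the ['net'] indexing above:
# a dict whose 'net' key maps to the network's state_dict.
checkpoint = {"net": {"conv1.weight": [0.0, 1.0], "conv1.bias": [0.5]}}

buf = io.BytesIO()
pickle.dump(checkpoint, buf)        # stands in for torch.save(checkpoint, path)
buf.seek(0)
restored = pickle.load(buf)["net"]  # stands in for torch.load(path)['net']
```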
This source code is made available for research purposes only.
Our code is built upon Rethinking-Inpainting-MEDFE.