Unofficial PyTorch implementation of CartoonGAN. We follow the original Lua training implementation released by the paper's author (Yang Chen).
├─checkpoints
├─data
│ ├─train
│ │ ├─cartoon # You put cartoon images here
│ │ ├─cartoon_edge_pair
│ │ └─photo # You put photo images here
│ └─val
│ └─photo # You put photo images here
└─results
├─test
└─train
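If you are starting from an empty checkout, a throwaway script like the following (not part of the repo; Python assumed) creates the expected layout:

import os

# Create the folder layout expected by the training and evaluation scripts.
for d in [
    "checkpoints",
    "data/train/cartoon",            # put cartoon images here
    "data/train/cartoon_edge_pair",  # presumably filled by edge_smooth.py
    "data/train/photo",              # put photo images here
    "data/val/photo",                # put validation photos here
    "results/test",
    "results/train",
]:
    os.makedirs(d, exist_ok=True)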
- Albumentations
pip install -U albumentations
- tqdm
pip install tqdm
- Read https://vinesmsuic.github.io/2022/01/21/i2i-cartoongan/ to understand the implementation
- Prepare the photo and cartoon data
- Get the pre-trained VGG19 weights and put them in the CartoonGAN folder: https://download.pytorch.org/models/vgg19-dcbb9e9d.pth
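If you prefer to fetch the weights from a script, torch.hub can download the file directly (a minimal sketch; it saves into the current directory, which is assumed to be the CartoonGAN folder):

import torch

# Download the pre-trained VGG19 weights used for the content loss.
torch.hub.download_url_to_file(
    "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth",
    "vgg19-dcbb9e9d.pth",
)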
- Preprocess the data through edge promoting
python edge_smooth.py
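The script implements the edge-promoting step from the paper. A simplified sketch of the idea (edge_smooth.py is the authoritative version; the kernel sizes and Canny thresholds here are illustrative):

import cv2
import numpy as np

def edge_promote(img_bgr: np.ndarray) -> np.ndarray:
    # Detect edges on a grayscale copy of the cartoon image.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Widen the edge regions so the smoothing covers a band around each edge.
    dilated = cv2.dilate(edges, np.ones((5, 5), np.uint8))
    # Replace the dilated edge regions with a Gaussian-blurred copy, producing
    # a cartoon image whose clear edges have been smoothed away.
    blurred = cv2.GaussianBlur(img_bgr, (5, 5), 0)
    out = img_bgr.copy()
    out[dilated > 0] = blurred[dilated > 0]
    return out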
- Edit config.py
- If you are using custom data of varying sizes, enable RandomCrop in config.py:
transform_cartoon_pairs = A.Compose(
    # additional_targets: apply the same augmentation on both images
    [
        # RandomCrop needs integer sizes, so cast the scaled value
        A.RandomCrop(width=int(IMAGE_SIZE * 1.2), height=int(IMAGE_SIZE * 1.2)),
        A.Resize(width=IMAGE_SIZE, height=IMAGE_SIZE),
        A.HorizontalFlip(p=0.5),
        A.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], max_pixel_value=255.0),
        ToTensorV2(),
    ],
    additional_targets={"image0": "image"},
)
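A hypothetical usage sketch, assuming it runs in the same scope as the transform above. additional_targets makes the same random crop and flip apply to a cartoon image and its edge-smoothed pair; the dummy arrays stand in for real images and must be at least int(IMAGE_SIZE * 1.2) pixels on each side:

import numpy as np

cartoon = np.zeros((512, 512, 3), dtype=np.uint8)        # stand-in for a cartoon image
edge_smoothed = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for its edge-smoothed pair

out = transform_cartoon_pairs(image=cartoon, image0=edge_smoothed)
cartoon_t, edge_t = out["image"], out["image0"]          # identically augmented tensors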
- Training
python train.py
- The training consists of an initialization phase and an adversarial training phase: the generator is first pre-trained to reconstruct photos using only the VGG content loss, then trained jointly with the discriminator (a structural sketch follows this list).
- Wait for a long time and see the results in the results folder
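For orientation, a structural sketch of the two phases (train.py is the authoritative version; G, D, and vgg below are tiny stand-ins, not the real architectures, and w_content is a hypothetical loss weight):

import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Conv2d(3, 3, 3, padding=1)    # stand-in generator
D = nn.Conv2d(3, 1, 3, padding=1)    # stand-in patch discriminator
vgg = nn.Conv2d(3, 8, 3, padding=1)  # stand-in for the frozen VGG19 feature extractor
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
w_content = 10.0                     # hypothetical content-loss weight

photo = torch.rand(1, 3, 256, 256)   # stand-ins for real batches
cartoon = torch.rand(1, 3, 256, 256)
edge_smoothed = torch.rand(1, 3, 256, 256)

# Phase 1 (initialization): pre-train G to reconstruct photos in VGG feature
# space, using only the content loss.
init_loss = F.l1_loss(vgg(G(photo)), vgg(photo))
opt_G.zero_grad(); init_loss.backward(); opt_G.step()

# Phase 2 (adversarial training): D treats real cartoons as real, and both
# edge-smoothed cartoons and G outputs as fake (the edge-promoting loss).
fake = G(photo)
real_lbl = torch.ones_like(D(cartoon))
fake_lbl = torch.zeros_like(real_lbl)
d_loss = (F.binary_cross_entropy_with_logits(D(cartoon), real_lbl)
          + F.binary_cross_entropy_with_logits(D(edge_smoothed), fake_lbl)
          + F.binary_cross_entropy_with_logits(D(fake.detach()), fake_lbl))
opt_D.zero_grad(); d_loss.backward(); opt_D.step()

# G is trained to fool D while staying close to the photo in feature space.
g_loss = (F.binary_cross_entropy_with_logits(D(fake), real_lbl)
          + w_content * F.l1_loss(vgg(fake), vgg(photo)))
opt_G.zero_grad(); g_loss.backward(); opt_G.step()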
- [ ] Automatic Mixed Precision
- [ ] LR Scheduler
- [ ] Loss visualization
- [ ] WandB visualization
- [ ] Inference Code
- Explaining Code
- Windows with CUDA
- Ubuntu with CUDA