Style-Transfer Print/Scan

This repository contains the complementary material for the paper "Generating Automatically Print/Scan Textures for Morphing Attack Detection Applications". The method simulates the handcrafted print/scan texture in order to create print/scan versions of bona fide images automatically. This scenario allows us to train single and differential morphing attack detection systems.

This repository describes the style-transfer method based on GANs, using the PyTorch-CycleGAN-and-pix2pix implementation.

Prerequisites

  • Linux
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

  • Clone this repo:
git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
cd pytorch-CycleGAN-and-pix2pix
  • Install PyTorch 0.4+ and other dependencies (e.g., torchvision, visdom)
  • For pip users, please type the command pip install -r requirements.txt.
  • For Conda users, you can create a new Conda environment using conda env create -f environment.yml.

Pre-trained models

  • All models were trained for 300 epochs.
  • The shared models are the latest checkpoints.
  • Checkpoints are available from epoch 1 to 300 (if you would like to look for the best among the 300 checkpoints, please feel free to request them by email).
    • Downloading
      • cycleps600-resnet9v1
      • cycleps600-resnet6v1

Generate your own PS600 images:

  • Pix2pix
  • CycleGAN

Original test.py instruction

python test.py --dataroot datasets/face/testA --name CycleGANps600-noflip-pixel --model test --no_dropout  

The option --model test is used to generate CycleGAN results for one side only. This option automatically sets --dataset_mode single, which loads images from only one set. In contrast, --model cycle_gan requires loading and generating results in both directions, which is sometimes unnecessary. The results will be saved at ./results/. Use --results_dir {directory_path_to_save_result} to specify the results directory.
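The option handling described above can be sketched with a minimal argparse fragment. This is illustrative only; the real logic lives in the repository's option and model classes, so the names and defaults below are assumptions, not the actual implementation:

```python
import argparse

# Sketch: `--model test` implies single-direction generation, so the
# dataset mode is forced to "single" (only one image set is loaded).
parser = argparse.ArgumentParser()
parser.add_argument("--model", default="cycle_gan")
parser.add_argument("--dataset_mode", default="unaligned")

opt = parser.parse_args(["--model", "test"])
if opt.model == "test":
    # One generator direction only: load images from a single folder.
    opt.dataset_mode = "single"

print(opt.dataset_mode)  # single
```

With --model cycle_gan, by contrast, the paired (unaligned) dataset mode would remain in effect and both directions would be generated.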

Suggested test.py instruction

python test.py --checkpoints_dir ...(your checkpoints folder) --dataroot ...(your bona fide or source images) --name cycleps600-resnet9v1 --netG resnet_9blocks --norm instance --preprocess scale_width --results_dir ...(folder to save the images) --dataset_mode single --no_dropout

For pix2pix and your own models, you need to explicitly specify --netG, --norm, and --no_dropout to match the generator architecture of the trained model.
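One way to keep those flags consistent with the trained generator is to assemble the command programmatically. This is a hedged sketch: the helper name and the paths are placeholders introduced here for illustration, not part of the repository:

```python
# Sketch: build a test.py command whose --netG / --norm / --no_dropout
# flags match the architecture the model was trained with.
# `build_test_cmd`, the paths, and the model name are illustrative.
def build_test_cmd(name, net_g, norm, dropout, dataroot, checkpoints):
    cmd = [
        "python", "test.py",
        "--checkpoints_dir", checkpoints,
        "--dataroot", dataroot,
        "--name", name,
        "--netG", net_g,
        "--norm", norm,
        "--dataset_mode", "single",
    ]
    if not dropout:
        # Dropout must be disabled if the generator was trained without it.
        cmd.append("--no_dropout")
    return cmd

cmd = build_test_cmd("cycleps600-resnet9v1", "resnet_9blocks",
                     "instance", False,
                     "datasets/face/testA", "./checkpoints")
print(" ".join(cmd))
```

Centralizing the flags this way avoids the common mistake of testing a resnet_9blocks checkpoint with the default generator settings.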

Example images

[Image: Example-ps600-caption]

Complementary model

Cite

@ARTICLE{10945320,
  author={Tapia, Juan E. and Russo, Maximilian and Busch, Christoph},
  journal={IEEE Access}, 
  title={Generating Automatically Print/Scan Textures for Morphing Attack Detection Applications}, 
  year={2025},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/ACCESS.2025.3555922}}

Disclaimer

This work and the proposed methods are intended for research purposes only, and the images are generated by chance. Any implementation or modification for commercial use must be analysed separately for each case; please direct inquiries to: juan.tapia-farias@h-da.de
