Face morphing attacks compromise biometric security by creating document images that verify against multiple identities, posing significant risks from document issuance to border control. Differential Morphing Attack Detection (D-MAD) offers an effective countermeasure, particularly when employing face demorphing to disentangle identities blended in the morph. However, existing methods lack operational generalizability due to limited training data and the assumption that all document inputs are morphs. This paper presents SFDemorpher, a framework designed for the operational deployment of face demorphing for D-MAD that performs identity disentanglement within joint StyleGAN latent and high-dimensional feature spaces. We introduce a dual-pass training strategy handling both morphed and bona fide documents, leveraging a hybrid corpus with predominantly synthetic identities to enhance robustness against unseen distributions. Extensive evaluation confirms state-of-the-art generalizability across unseen identities, diverse capture conditions, and 13 morphing techniques, spanning both border verification and the challenging document enrollment stage. Our framework achieves superior D-MAD performance by widening the margin between the score distributions of bona fide and morphed samples while providing high-fidelity visual reconstructions facilitating explainability.
SFDemorpher detects face morphing attacks by performing identity disentanglement within joint StyleGAN latent and high-dimensional feature spaces, achieving state-of-the-art generalizability across unseen identities, diverse capture conditions, and multiple morphing techniques.
You can find the full paper on [arXiv](https://arxiv.org/abs/2603.28322).
The framework has been validated using Python 3.12 on Ubuntu 24.04 LTS. Inference can run on CPU, but training should be done on an NVIDIA GPU with CUDA 12.9.
- Git clone this repo:

  ```shell
  git clone https://github.com/Raul2718/SFDemorpher
  cd SFDemorpher
  ```

- Set up a virtual Python environment:

  ```shell
  python -m venv .venv
  ```

- Activate your virtual environment:

  ```shell
  source .venv/bin/activate
  ```

- Install PyTorch:
  - with CUDA 12.9 support (NVIDIA GPU required):

    ```shell
    pip install torch==2.8.0+cu129 \
        torchvision==0.23.0+cu129 \
        --index-url https://download.pytorch.org/whl/cu129
    ```

  - CPU-only:

    ```shell
    pip install torch==2.8.0 \
        torchvision==0.23.0 \
        --index-url https://download.pytorch.org/whl/cpu
    ```

- Install packages from `requirements.txt`:

  ```shell
  pip install -r requirements.txt
  ```
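Before downloading any weights, you can confirm the environment resolved correctly. The helper below is not part of the repo; it only checks that `torch` and `torchvision` are importable in the active virtual environment:

```python
import importlib.util

def check_install(packages=("torch", "torchvision")):
    """Return a dict mapping each package name to whether it is importable."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, ok in check_install().items():
        print(f"{pkg}: {'found' if ok else 'MISSING - rerun the pip commands above'}")
```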
To run the code, you must download the following pre-trained weights and place them in their respective destination paths within the project directory.
| Model Link | Destination Path |
|---|---|
| SFDemorpher | experiments/SFDemorpher/checkpoints/iteration_68000_0.6277.pt |
| BiRefNet | models/BiRefNet/BiRefNet-portrait-epoch_150.pth |
| dlib | models/dlib/shape_predictor_68_face_landmarks.dat |
| MTCNN | models/mtcnn/checkpoints/(onet.pt, pnet.pt, rnet.pt) |
| SFE Inverter | models/psp/encoders/checkpoints/sfe_inverter_light.pt |
| StyleGAN2 (.pkl) | models/psp/stylegan2/checkpoints/stylegan2-ffhq-config-f.pkl |
| StyleGAN2 (.pt) | models/psp/stylegan2/checkpoints/stylegan2-ffhq-config-f.pt |
| AdaFace | models/face_recognition/adaface/checkpoints/adaface_ir101_webface12m.ckpt |
| ArcFace (optional) | models/face_recognition/arcface/checkpoints/ArcFace.pth |
| CurricularFace (optional) | models/face_recognition/curricularface/checkpoints/CurricularFace.pth |
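Since a missing file usually only surfaces at load time, a small standalone check (hypothetical, not shipped with the repo) can confirm every required checkpoint from the table is in place before the first run. The MTCNN row is expanded into its three files, and the two optional FRS weights are omitted:

```python
from pathlib import Path

# Destination paths from the table above, relative to the project root.
REQUIRED_WEIGHTS = [
    "experiments/SFDemorpher/checkpoints/iteration_68000_0.6277.pt",
    "models/BiRefNet/BiRefNet-portrait-epoch_150.pth",
    "models/dlib/shape_predictor_68_face_landmarks.dat",
    "models/mtcnn/checkpoints/onet.pt",
    "models/mtcnn/checkpoints/pnet.pt",
    "models/mtcnn/checkpoints/rnet.pt",
    "models/psp/encoders/checkpoints/sfe_inverter_light.pt",
    "models/psp/stylegan2/checkpoints/stylegan2-ffhq-config-f.pkl",
    "models/psp/stylegan2/checkpoints/stylegan2-ffhq-config-f.pt",
    "models/face_recognition/adaface/checkpoints/adaface_ir101_webface12m.ckpt",
]

def missing_weights(root=".", required=REQUIRED_WEIGHTS):
    """Return the subset of required weight files that are absent under root."""
    base = Path(root)
    return [p for p in required if not (base / p).is_file()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing weights:\n" + "\n".join(missing))
    else:
        print("All required weights found.")
```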
Before training, all input images must be preprocessed through two steps: background removal and face alignment.
Step 1: Background Removal (BiRefNet)
```shell
python -m utils.run_birefnet \
    --src /path/to/original/dataset/ \
    --dst /path/to/preprocessed/dataset/ \
    --batch-size 8 \
    --num-workers 4 \
    --device cuda:0
```

- `--src`: Source directory containing original images (scanned recursively)
- `--dst`: Destination directory for preprocessed images (structure preserved)
- `--batch-size`: Batch size for BiRefNet inference (default: 8)
- `--num-workers`: Number of data loading workers (default: 4)
- `--device`: Device for computation (e.g., `cuda:0` or `cpu`)
- `--image-size`: Resize dimensions before the model (default: 1024 1024)
- `--background`: Background color for replacement (default: `#808080`)
Step 2: Face Alignment (FFHQ Protocol)
```shell
python -m utils.align_images \
    --src /path/to/original/dataset/ \
    --dst /path/to/preprocessed/dataset/ \
    --batch_size 16 \
    --num_workers 4 \
    --output_size 1024 \
    --device cuda:0
```

- `--src`: Source directory containing original images (scanned recursively)
- `--dst`: Destination directory for aligned images (structure preserved)
- `--batch_size`: Batch size for processing (default: 16)
- `--num_workers`: Number of data loading workers (default: 4)
- `--output_size`: Output image size (default: 1024)
- `--transform_size`: Intermediate transform size (default: 4096)
- `--verify_tol`: Tolerance for landmark verification (default: 0.05)
- `--device`: Device for computation (e.g., `cuda:0` or `cpu`)
Both utilities automatically copy non-image files from source to destination and preserve the directory structure.
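To confirm both passes covered the whole dataset, a quick standalone check (illustrative, not part of the repo; the extension list is an assumption) can compare the source and destination trees:

```python
from pathlib import Path

# Assumed set of image extensions; adjust to match your dataset.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def unprocessed_images(src, dst):
    """List images under src whose relative path is missing under dst."""
    src, dst = Path(src), Path(dst)
    return [
        str(p.relative_to(src))
        for p in sorted(src.rglob("*"))
        if p.suffix.lower() in IMAGE_EXTS and not (dst / p.relative_to(src)).exists()
    ]
```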
Edit the configuration file configs/sfdemorpher_config.yaml to set your dataset paths and training parameters.
See configs/config_examples.yaml for some parameter examples.
The following datasets are supported out of the box:
| Dataset | Config Parameter |
|---|---|
| FLUXSynID | fluxsynid_demorphing |
| DemorphDB | demorph_db_demorphing |
| FRLL-Morphs | frll_morphs_demorphing |
| FEI Morph V2 | fei_morphs_demorphing |
| HNU-FM | hnu_fm_morphs_demorphing |
To add custom datasets, create a new dataset class in datasets/datasets.py following the existing implementations and register it with the @datasets_registry.add_to_registry decorator.
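The registration pattern looks roughly like the sketch below. The `Registry` internals and the string argument to `add_to_registry` are assumptions for illustration only; follow the actual decorator and base classes in `datasets/datasets.py`:

```python
# Illustrative sketch of the registry pattern; the real decorator lives in
# datasets/datasets.py and its internals may differ.
class Registry:
    def __init__(self):
        self._classes = {}

    def add_to_registry(self, name):
        """Decorator that registers a dataset class under a config name."""
        def decorator(cls):
            self._classes[name] = cls
            return cls
        return decorator

    def __getitem__(self, name):
        return self._classes[name]

datasets_registry = Registry()

@datasets_registry.add_to_registry("my_morphs_demorphing")  # hypothetical name
class MyMorphsDataset:
    """Custom dataset; implement loading by mirroring the existing classes."""
    def __init__(self, root):
        self.root = root
```

Once registered, the config can refer to the dataset by its registry name, just like the built-in datasets in the table above.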
```shell
python -m scripts.train \
    exp.config_dir=configs \
    exp.config=sfdemorpher_config.yaml \
    model.device=cuda:0 \
    exp.name=SFDemorpher_example
```

- `exp.config_dir`: Directory containing configuration files
- `exp.config`: Configuration file name (YAML)
- `model.device`: Training device (e.g., `cuda:0`)
- `exp.name`: Experiment name (creates a subdirectory under `experiments/`)
SFDemorpher supports two inference modes: single-pair mode and batch mode.
Process a single suspected document image with a trusted reference:
```shell
python -m runners.inference_runners \
    -s suspected_document.jpg \
    -t trusted_reference.jpg \
    -d ./output/ \
    --device cuda:0
```

Process multiple image pairs from a text file:
```shell
python -m runners.inference_runners \
    --pairs-file pairs.txt \
    -d ./output/ \
    --device cuda:0 \
    --batch-size 4 \
    --num-workers 2
```

Each line of the pairs file should contain two absolute paths separated by a comma or space:

```
/path/to/suspected1.jpg,/path/to/trusted1.jpg
/path/to/suspected2.jpg /path/to/trusted2.jpg
/path/to/suspected3.jpg, /path/to/trusted3.jpg
```

(Spaces after the comma are OK.)
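If your pairs live in two parallel directories, a small helper (illustrative, not part of the repo; it assumes `.jpg` files matched by filename stem) can generate the pairs file:

```python
from pathlib import Path

def write_pairs_file(suspected_dir, trusted_dir, out_path="pairs.txt"):
    """Pair files by matching filename stems; write one 'a,b' line per pair."""
    suspected = {p.stem: p for p in Path(suspected_dir).glob("*.jpg")}
    trusted = {p.stem: p for p in Path(trusted_dir).glob("*.jpg")}
    lines = [
        f"{suspected[stem].resolve()},{trusted[stem].resolve()}"
        for stem in sorted(suspected.keys() & trusted.keys())
    ]
    Path(out_path).write_text("\n".join(lines) + "\n")
    return len(lines)
```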
- `--no-preprocessing`: Skip BiRefNet background removal and FFHQ alignment. Use this if images are already preprocessed, for faster GPU-only inference.
- `--similarity-score`: Return the raw AdaFace cosine similarity score (-1 to 1) instead of the mapped score.
- `--checkpoint`: Path to the SFDemorpher checkpoint (default: uses the path from `configs/paths.py`).
For each processed pair, the following files are saved:
- Demorphed Image (`demorphed_<suspected>_by_<trusted>.jpg`): The reconstructed face image
- Score Report (`demorphed_<suspected>_by_<trusted>_report.txt`): Text file containing:
  - Result classification (MORPH DETECTED / BONA FIDE)
  - Score
  - Processing Status
Without `--similarity-score` (default):
- Score range: 0 to 1
- Score < 0.5: Classified as Bona Fide (same identity)
- Score ≥ 0.5: Classified as Morph (different identities)
With `--similarity-score`:
- Score range: -1 to 1 (raw cosine similarity)
- Higher values indicate higher similarity between demorphed image and trusted reference (less likely to be a morph)
- Threshold depends on the FRS configuration (default: ~0.331 for AdaFace)
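The decision rules above can be sketched as follows. The function name is illustrative, and the assumption that a raw similarity *below* the FRS threshold flags a morph is inferred from the description above (higher similarity means less likely to be a morph):

```python
def classify(score, similarity_mode=False, frs_threshold=0.331):
    """Map a demorphing score to a MORPH / BONA FIDE decision.

    Default mode: mapped score in [0, 1]; scores >= 0.5 indicate a morph.
    Similarity mode: raw AdaFace cosine similarity in [-1, 1]; values below
    the FRS threshold (default ~0.331 for AdaFace) indicate a morph.
    """
    if similarity_mode:
        return "MORPH DETECTED" if score < frs_threshold else "BONA FIDE"
    return "MORPH DETECTED" if score >= 0.5 else "BONA FIDE"
```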
If you use this code or the SFDemorpher model in your research, please cite our paper:
```bibtex
@misc{ismayilov2026sfdemorpher,
  title={{SFDemorpher}: Generalizable Face Demorphing for Operational Morphing Attack Detection},
  author={Raul Ismayilov and Luuk Spreeuwers},
  year={2026},
  eprint={2603.28322},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.28322},
}
```

The SFDemorpher framework was developed under the EINSTEIN project. The EINSTEIN project is funded by the European Union (EU) under G.A. no. 101121280 and the UKRI Funding Service under IFS reference 10093453. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect the views of the EU/Executive Agency or UKRI. Neither the EU nor the granting authority nor UKRI can be held responsible for them.
