LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks

Source code implementation of our paper accepted to the 32nd ACM International Conference on Multimedia (MM 2024).

Datasets used in this project

LampMark is trained on CelebA-HQ and tested on CelebA-HQ and LFW. We do not own these datasets; they can be downloaded from their official webpages.

After splitting the image data following the official CelebA-HQ documentation, the resulting folder should be named dataset_celeba_hq/ and placed under image_data/. For the cross-dataset evaluation under a balanced ratio, LFW is processed so that one image per identity is used. The directory should look like the following:

LampMark
└── image_data/
    ├── dataset_celeba_hq/
    │   ├── train/
    │   │   ├── 0.jpg
    │   │   ├── 1.jpg
    │   │   └── ...
    │   ├── val/
    │   │   ├── 1000.jpg
    │   │   └── ...         
    │   └── test/
    │       ├── 10008.jpg
    │       └── ...    
    └── lfw/
        └── test/
            ├── AJ_Cook_0001.jpg
            └──...
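Given the layout above, image paths for a split can be gathered with a short helper. This is an illustrative sketch, not code from the repository; the root and split names follow the tree shown.

```python
from pathlib import Path

def collect_split(root="image_data/dataset_celeba_hq", split="train",
                  exts=(".jpg", ".png")):
    """Return sorted image paths for one split under the expected layout."""
    split_dir = Path(root) / split
    return sorted(p for p in split_dir.iterdir() if p.suffix.lower() in exts)
```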

In this project, the 106-point landmarks are extracted via the paid Face++ service. We directly provide the watermarks used for training, validation, and testing under watermark_data/. The watermarks are stored as .npy files whose names match the corresponding image file names.
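Because the .npy names mirror the image names, pairing an image with its watermark reduces to a stem lookup. The helper below is an illustrative assumption; only the watermark_data/ folder name comes from this README.

```python
from pathlib import Path
import numpy as np

def load_watermark(image_path, watermark_root="watermark_data"):
    """Load the .npy watermark whose file name matches the image's stem."""
    stem = Path(image_path).stem  # e.g. "0" for .../train/0.jpg
    return np.load(Path(watermark_root) / f"{stem}.npy")
```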

Train the model from scratch

The model is trained following the configuration files located in configuration/.

We pre-train the framework against the common manipulation JPEG compression (quality 50) and then fine-tune it against SimSwap. Pre-training follows the settings in configuration/pretrain.json, and fine-tuning follows configuration/tune_deepfake.json.

We use main.py for both pre-training and fine-tuning by calling train_common() and tune_deepfake(), respectively. Switch between the two stages by commenting out the call that is not needed, modify the corresponding configuration file, and run

python main.py
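The stage switch can be pictured as follows. train_common() and tune_deepfake() are the entry points named above, but their bodies here are hypothetical stand-ins that only report which stage and configuration would run; the real functions perform the actual training.

```python
def train_common(config_path="configuration/pretrain.json"):
    """Pre-training stage against JPEG(50) (hypothetical stub)."""
    return ("pretrain", config_path)

def tune_deepfake(config_path="configuration/tune_deepfake.json"):
    """Fine-tuning stage against SimSwap (hypothetical stub)."""
    return ("finetune", config_path)

if __name__ == "__main__":
    stage = train_common()      # keep the stage you need active
    # stage = tune_deepfake()   # and comment out the other one
```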

Test the model

The model is tested following the configuration files located in configuration/.

We use configuration/test_common.json to derive the watermark recovery accuracies against all benign manipulations, and configuration/test_deepfake.json to derive them against Deepfake manipulations.

We use main.py to test the model against the desired adversaries (e.g., benign manipulations, SimSwap, InfoSwap).
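The watermark recovery accuracy reported by these tests is, conceptually, the fraction of watermark bits that survive a manipulation unchanged. The sketch below illustrates that metric; the exact computation in the repository may differ.

```python
import numpy as np

def recovery_accuracy(embedded_bits, recovered_bits):
    """Fraction of watermark bits recovered correctly after a manipulation."""
    embedded = np.asarray(embedded_bits, dtype=bool)
    recovered = np.asarray(recovered_bits, dtype=bool)
    return float((embedded == recovered).mean())
```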

Use Deepfake models for LampMark to defend against

LampMark is trained against SimSwap and tested against seven Deepfake models, including SimSwap. Since we do not own their source code, please download the model source code and weights yourself. The models should be placed under the model/ folder so that the classes in model/deepfake_manipulations.py can use the generative models.
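A wrapper around a downloaded model might look like the interface below. This is a hypothetical sketch: the class and method names in the real model/deepfake_manipulations.py may differ, and the actual wrappers invoke the third-party code you place under model/.

```python
from abc import ABC, abstractmethod

class DeepfakeManipulation(ABC):
    """Hypothetical base interface for a third-party Deepfake model wrapper."""

    def __init__(self, weights_path):
        self.weights_path = weights_path  # e.g. a checkpoint under model/

    @abstractmethod
    def manipulate(self, source_face, target_face):
        """Return the face-swapped image."""

class SimSwapManipulation(DeepfakeManipulation):
    def manipulate(self, source_face, target_face):
        # The real wrapper would call the downloaded SimSwap model here.
        raise NotImplementedError("place SimSwap code and weights under model/")
```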

The source code can be found at the following links:

Citation

If you find our work useful, please cite the following:

@inproceedings{LampMark2024Wang,
author = {Wang, Tianyi and Huang, Mengxiao and Cheng, Harry and Zhang, Xiao and Shen, Zhiqi},
title = {LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks},
year = {2024},
isbn = {9798400706868},
doi = {10.1145/3664647.3680869},
booktitle = {Proceedings of the 32nd ACM International Conference on Multimedia},
pages = {10515–10524}
}
