ShanglunFengatETHZ/PrivacyBackdoor

Privacy Backdoors in ML Models

Manipulate the weights of a pre-trained model to implant a backdoor that enables a data-stealing attack during fine-tuning.
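
The principle behind such data-stealing backdoors can be illustrated with a standard gradient identity (a minimal PyTorch sketch, not this repository's implementation): for a linear layer y = Wx + b, the weight gradient is the outer product of the output gradient and the input, so a captured input can be read directly off the weight update. The full attack engineers the malicious initialization so that individual fine-tuning examples leave recoverable traces of this kind in the weights.

```python
import torch
import torch.nn as nn

# Toy linear layer standing in for a backdoored "capture" module.
layer = nn.Linear(in_features=8, out_features=4)

x = torch.randn(1, 8)   # a single training input we want to steal
loss = layer(x).sum()   # any loss with nonzero gradient on the outputs
loss.backward()

# For y = W x + b:  dL/dW = (dL/dy) x^T  and  dL/db = dL/dy,
# so each row of the weight gradient is a scaled copy of the input.
i = 0  # any output unit whose bias gradient is nonzero
recovered = layer.weight.grad[i] / layer.bias.grad[i]
print(torch.allclose(recovered, x.squeeze(0), atol=1e-5))  # True
```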

Example: reconstructed images and the corresponding ground-truth images from the malicious ViT fine-tuned on the Caltech 101 dataset. The attack successfully exploits the pre-trained ViT weights.
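
For context, a fine-tuning setup like the one in this example might look as follows. This is a minimal sketch assuming timm and torchvision; the hyperparameters and tooling are illustrative, not the repository's actual recipes:

```python
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Pre-trained ViT with a fresh classification head for Caltech 101's 101 classes.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=101)

tfm = transforms.Compose([
    transforms.Lambda(lambda im: im.convert("RGB")),  # some Caltech images are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.Caltech101(root="./data", download=True, transform=tfm)
loader = DataLoader(data, batch_size=32, shuffle=True)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    opt.step()
```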

This repository provides:

  • configuration examples: malicious initializations and fine-tuning recipes
  • additional pre-trained weights for transformers that use ReLU activations, as well as smaller transformers
  • examples of the fine-tuned weights

Note: we provide pre-trained ViT and BERT weights with randomly initialized heads for downstream classification tasks. These pre-trained models can occasionally break down during fine-tuning. Breakdowns rarely occur several runs in a row, so if a run breaks down, simply train again (see the retry sketch below).
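
A simple retry wrapper matching this advice could look like the following sketch; `train_one_run` is a hypothetical placeholder for a full fine-tuning run from the released pre-trained weights that returns its final loss:

```python
import math

def fine_tune_with_retries(train_one_run, max_retries=3):
    """Re-run fine-tuning if a run breaks down (e.g. the loss becomes NaN)."""
    for attempt in range(1, max_retries + 1):
        final_loss = train_one_run()
        if math.isfinite(final_loss):
            return final_loss  # run converged; keep this model
        print(f"Run {attempt} broke down (loss={final_loss}); restarting.")
    raise RuntimeError("Fine-tuning broke down on every attempt.")
```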
