SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation

WARNING: This repository contains model outputs that may be offensive in nature.

Links: preprint · video · project page · issues

Venue: ICLR 2024 · License: MIT

Figure 1: Example comparison of before/after unlearning by SalUn.
(Left) Concept "Nudity"; (Middle) Object "Dog"; (Right) Style "Sketch".

This is the official code repository for the ICLR 2024 Spotlight paper SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation.

NEW RELEASE: Feel free to check out our new work on a new perspective for evaluating machine unlearning!

Abstract

With evolving data regulations, machine unlearning (MU) has become an important tool for fostering trust and safety in today's AI models. However, existing MU methods focusing on data and/or weight perspectives often suffer limitations in unlearning accuracy, stability, and cross-domain applicability. To address these challenges, we introduce the concept of 'weight saliency' for MU, drawing parallels with input saliency in model explanation. This innovation directs MU's attention toward specific model weights rather than the entire model, improving effectiveness and efficiency. The resultant method, which we call saliency unlearning (SalUn), narrows the performance gap with 'exact' unlearning (model retraining from scratch after removing the forgetting data points). To the best of our knowledge, SalUn is the first principled MU approach that can effectively erase the influence of forgetting data, classes, or concepts in both image classification and generation tasks. For example, SalUn yields a stability advantage in high-variance random data forgetting, e.g., with a 0.2% gap compared to exact unlearning on the CIFAR-10 dataset. Moreover, in preventing conditional diffusion models from generating harmful images, SalUn achieves nearly 100% unlearning accuracy, outperforming current state-of-the-art baselines like Erased Stable Diffusion and Forget-Me-Not.

Figure 2: Schematic overview of our proposal on Saliency Unlearning (SalUn).
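
As the abstract describes, the core recipe has two stages: compute a gradient-based weight saliency mask on the forgetting data, then apply the unlearning update only to the salient weights. Below is a minimal PyTorch sketch of the first stage, assuming a classifier model, a forget-set loader forget_loader, and a sparsity ratio; these names are illustrative, not this repository's API (see the repository code for the official implementation).

import torch
import torch.nn.functional as F

def compute_saliency_mask(model, forget_loader, device, sparsity=0.5):
    # Accumulate |gradient| of the forgetting loss over the forget set.
    model.eval()
    grads = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for x, y in forget_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                grads[name] += p.grad.abs()
    # Hard-threshold: keep the top (1 - sparsity) fraction of weights
    # by gradient magnitude as the salient set (1 = salient, 0 = frozen).
    all_vals = torch.cat([g.flatten() for g in grads.values()])
    k = max(1, int(all_vals.numel() * sparsity))
    threshold = all_vals.kthvalue(k).values  # k-th smallest magnitude
    return {name: (g >= threshold).float() for name, g in grads.items()}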

Getting Started

SalUn can be applied to different tasks, such as image classification and image generation. Click the links below for a more detailed installation guide for each task; a sketch of the core unlearning step follows.
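
For classification, one way the saliency mask is used is random-labeling fine-tuning: the forgetting samples are reassigned random labels, and only the salient weights receive gradient updates. Here is a hedged sketch under the same illustrative assumptions as above, with mask coming from compute_saliency_mask (the paper additionally fine-tunes on the remaining data to preserve utility; that term is omitted here for brevity).

import torch
import torch.nn.functional as F

def salun_unlearn_epoch(model, optimizer, mask, forget_loader, num_classes, device):
    # One epoch of random-labeling fine-tuning, gated by the saliency mask.
    model.train()
    for x, y in forget_loader:
        x = x.to(device)
        # Reassign random labels to the forgetting samples.
        rand_y = torch.randint(0, num_classes, y.shape, device=device)
        optimizer.zero_grad()
        F.cross_entropy(model(x), rand_y).backward()
        # Zero the gradients of non-salient weights so that only the
        # salient subset is changed by the optimizer step.
        for name, p in model.named_parameters():
            if p.grad is not None:
                p.grad.mul_(mask[name])
        optimizer.step()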

Examples of Unlearning on Stable Diffusion

See our project page for more visualizations!

Figure 3: Class-wise unlearning by SalUn on Stable Diffusion for the "Church" class.
Figure 4: Concept unlearning by SalUn on Stable Diffusion for the "Nudity" concept.

Contributors

Cite This Work

@article{fan2023salun,
  title={SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation},
  author={Fan, Chongyu and Liu, Jiancheng and Zhang, Yihua and Wong, Eric and Wei, Dennis and Liu, Sijia},
  journal={arXiv preprint arXiv:2310.12508},
  year={2023}
}
