A mitigation method against privacy violation attacks on face recognition systems
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
An implementation of the ICLR 2022 paper "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" in PyTorch
Testing membership inference attacks on deep learning models (LSTM, CNN)
The source code of the paper "Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks" (EuroS&P 2024)
This repository accompanies the paper "SynthShield: Leveraging Synthetic Distributions to Enhance Privacy Against Membership Inference", currently under review at the International Conference on Pattern Recognition (ICPR). It contains the main code used to apply and analyse the SynthShield technique presented in the paper.
The official implementation of the paper "Data Contamination Calibration for Black-box LLMs" (ACL 2024)
An implementation in PyTorch of the loss-thresholding attack for inferring membership status, as described in the paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" (CSF 2018)
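The loss-thresholding rule from that paper is simple enough to sketch: a sample is flagged as a training-set member when the target model's per-example loss on it falls below a calibrated threshold. Below is a minimal NumPy sketch, not the repository's code; the loss values and the threshold of 0.5 are illustrative assumptions (in practice the threshold is often set to the model's average training loss).

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Flag a sample as a training-set member when its per-example
    loss is below the threshold (loss-thresholding MIA rule).
    The threshold is an assumed calibration value here."""
    return np.asarray(losses) < threshold

# Hypothetical per-example cross-entropy losses from a target model:
member_losses = [0.05, 0.10, 0.02]   # training samples (typically low loss)
nonmember_losses = [1.2, 0.9, 2.1]   # held-out samples (typically higher loss)

preds = loss_threshold_attack(member_losses + nonmember_losses, threshold=0.5)
print(preds.tolist())  # members flagged True, non-members False
```

The attack exploits the generalization gap: overfit models assign systematically lower loss to data they were trained on, so a single threshold already separates members from non-members.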
Investigating the privacy vulnerabilities in deep learning steganography using membership inference attacks
Performing membership inference attack (MIA) against Korean language models (LMs).
Defending Privacy Against More Knowledgeable Membership Inference Attackers
Code for "Membership Inference Attack against Machine Learning Models" (Oakland 2017)
FederBoost's federated gradient boosting decision tree algorithm, with federated membership inference enabled
The source code for the ICML 2021 paper "When Does Data Augmentation Help With Membership Inference Attacks?"
Accompanying code for "Disparate Vulnerability to Membership Inference Attacks"
Codebase for Active Membership Inference Attack under Local Differential Privacy in Federated Learning
Min-K%++: Improved baseline for detecting pre-training data of LLMs https://arxiv.org/abs/2404.02936
DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
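DOMIAS's core idea, i.e. scoring membership by the ratio of the synthetic-data density to a reference-population density, can be illustrated in a few lines. The following is a minimal one-dimensional NumPy sketch, not the DOMIAS codebase: the training points, the overfit-generator construction, and the kernel bandwidths are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the generator was trained on a few points and
# overfits, so its synthetic output clusters tightly around them.
train_points = np.array([-2.0, 0.0, 2.0])          # generator's training data
synthetic = (np.repeat(train_points, 200)
             + rng.normal(0.0, 0.05, 600))         # overfit generator samples
reference = rng.normal(0.0, 1.5, 600)              # population reference data

def kde(x, samples, bw=0.2):
    """Plain Gaussian kernel density estimate (illustrative only)."""
    d = (np.atleast_1d(x)[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * d**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

def domias_score(x):
    """DOMIAS-style density ratio p_synthetic(x) / p_reference(x);
    a high ratio signals local overfitting of the generative model
    at x, i.e. x is likely a member of its training set."""
    return kde(x, synthetic) / kde(x, reference)

# Score is high at a training point, low at a nearby non-member point:
print(domias_score(0.0)[0], domias_score(1.0)[0])
```

Because the score normalizes by the reference density, it distinguishes genuine overfitting from regions that are merely dense in the population, which is what separates DOMIAS from attacks using the synthetic density alone.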
Differential Privacy Protection against Membership Inference Attack on Machine Learning for Genomic Data