Relevant papers
https://link.springer.com/chapter/10.1007/978-3-030-93944-1_8 (https://arxiv.org/abs/2007.13086)
http://link.springer.com/article/10.1007/s43681-021-00095-8 (https://arxiv.org/abs/2008.04113)
Membership Inference Attacks against Machine Learning Models (2016): https://arxiv.org/abs/1610.05820
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning (2018): https://ieeexplore.ieee.org/document/8835245
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models (2018): https://arxiv.org/abs/1806.01246
Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference (2020): https://arxiv.org/abs/1906.11798
Label-Only Membership Inference Attacks (2020): https://arxiv.org/abs/2007.14321
Membership Inference Attacks on Machine Learning: A Survey (2021): https://arxiv.org/abs/2103.07853
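Several of the attacks above (notably the model- and data-independent variant in ML-Leaks) reduce to thresholding the target model's output confidence: models tend to be more confident on examples they were trained on. A minimal sketch of that idea, assuming only a matrix of softmax outputs (the threshold value and all data here are illustrative, not from any of the papers):

```python
import numpy as np

def confidence_threshold_attack(softmax_outputs, tau=0.9):
    """Predict 'member' (1) when the model's top-class confidence
    exceeds tau; otherwise predict 'non-member' (0)."""
    return (softmax_outputs.max(axis=1) > tau).astype(int)

# Toy illustration: training-set examples get sharper predictions.
members = np.array([[0.97, 0.02, 0.01],
                    [0.95, 0.03, 0.02]])
non_members = np.array([[0.60, 0.25, 0.15],
                        [0.50, 0.30, 0.20]])
preds = confidence_threshold_attack(np.vstack([members, non_members]))
# → [1, 1, 0, 0]: both members flagged, both non-members passed over
```

Shadow-model attacks (Shokri et al.) replace the fixed threshold with a learned attack classifier, but the signal being exploited is the same.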
Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing (2014): https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures (2015): https://rist.tech.cornell.edu/papers/mi-ccs.pdf
Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations (2018): https://dl.acm.org/doi/10.1145/3243734.3243834
On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models (2021): https://arxiv.org/abs/2103.07101
Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning (2019): https://arxiv.org/abs/1904.01067
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models (2021): https://arxiv.org/abs/2102.02551
Towards Measuring Membership Privacy (2017): https://arxiv.org/abs/1712.09136
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting (2018): https://www.cs.cmu.edu/~mfredrik/papers/YeomCSF18.pdf
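Yeom et al. (above) make the overfitting connection concrete: a simple adversary guesses "member" whenever an example's loss falls below the average training loss, and its membership advantage (true-positive rate minus false-positive rate) grows with the generalization gap. A hedged numpy sketch on synthetic per-example losses (the loss distributions are invented to mimic an overfit model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses: training losses are systematically
# smaller than test losses, i.e. the model has a generalization gap.
train_losses = rng.exponential(scale=0.2, size=1000)  # members
test_losses = rng.exponential(scale=1.0, size=1000)   # non-members

threshold = train_losses.mean()  # Yeom-style loss threshold

tpr = (train_losses < threshold).mean()  # members correctly flagged
fpr = (test_losses < threshold).mean()   # non-members wrongly flagged
advantage = tpr - fpr                    # empirical membership advantage
```

With the gap above, the advantage comes out well above zero; shrinking the gap (e.g. setting both scales equal) drives it toward zero, which is the paper's point.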
Modelling and Quantifying Membership Information Leakage in Machine Learning (2020): https://arxiv.org/abs/2001.10648
ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning (2020): https://arxiv.org/abs/2007.09339
Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics (2020): https://arxiv.org/abs/2009.05669
Quantifying Membership Privacy via Information Leakage (2020): https://arxiv.org/abs/2010.05965
Measuring Data Leakage in Machine-Learning Models with Fisher Information (2021): https://arxiv.org/abs/2102.11673
Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage (2021): https://ieeexplore.ieee.org/document/9400316
Bounding Information Leakage in Machine Learning (2021): https://arxiv.org/abs/2105.03875
Deep Learning with Differential Privacy (2016): https://arxiv.org/abs/1607.00133
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (2016): https://arxiv.org/abs/1610.05755
Diffprivlib: The IBM Differential Privacy Library (2019): https://arxiv.org/abs/1907.02444
Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization (2020): https://arxiv.org/abs/2010.09063
A Survey on Differentially Private Machine Learning (2020): https://ieeexplore.ieee.org/document/9064731
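The core defense recipe in the DP-SGD papers above (Abadi et al.) is: clip each per-example gradient to L2 norm C, sum, add Gaussian noise with standard deviation sigma·C, then average and step. A minimal numpy sketch of one such update; the model, gradients, and the values of C, sigma, and lr are all illustrative, and real privacy accounting (tracking epsilon) is omitted:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, C=1.0, sigma=1.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm C,
    sum, add N(0, (sigma*C)^2) noise per coordinate, average, and step."""
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, size=w.shape)
    return w - lr * noisy_sum / len(per_example_grads)

w = np.zeros(3)
grads = np.array([[3.0, 0.0, 0.0],   # norm 3 -> clipped to norm 1
                  [0.0, 4.0, 0.0]])  # norm 4 -> clipped to norm 1
w_new = dp_sgd_step(w, grads)
```

Clipping bounds any single example's influence on the update, and the noise masks what remains; libraries like Opacus and TensorFlow Privacy implement the same step together with the privacy accounting left out here.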
https://trustworthy-machine-learning.github.io/
https://github.com/stratosphereips/awesome-ml-privacy-attacks
https://github.com/HongshengHu/membership-inference-machine-learning-literature