
Reproduction of Attack Effectiveness of Membership Inference Attacks #1

MiracleHH opened this issue Dec 14, 2021 · 0 comments

Thanks for sharing the source code of your excellent work!

I tried to reproduce the experimental results of the label-only membership inference attacks against various architectures in your paper. I followed the parameter settings from your paper (see Appendix B for details) and modified the parameters in membership.py as follows (a rough sketch of how I applied them is given after the list):

max_iter = 50
max_eval = 2500
sample_size = 1000
init_size = 100
init_eval = 100
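
For concreteness, here is roughly how I applied these settings. This is only a minimal sketch: it assumes the label-only attack scores each sample by its estimated distance to the decision boundary using a HopSkipJump-style attack (shown here with ART purely for illustration; membership.py may wire things differently), and the names `model`, `x_member`, `y_member`, `x_nonmember`, and `y_nonmember` are placeholders.

```python
# Sketch of a label-only (boundary-distance) membership inference attack,
# using ART's HopSkipJump for illustration only; membership.py may differ.
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import HopSkipJump
from sklearn.metrics import roc_auc_score

# Placeholders: `model` is a trained CIFAR10 classifier; `x_member` and
# `x_nonmember` are (sample_size, 3, 32, 32) float arrays (sample_size = 1000).
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
)

# The parameter settings listed above.
attack = HopSkipJump(
    classifier=classifier,
    targeted=False,
    max_iter=50,
    max_eval=2500,
    init_eval=100,
    init_size=100,
)

def boundary_distances(x):
    """Estimate each sample's L2 distance to the decision boundary."""
    x_adv = attack.generate(x)
    return np.linalg.norm((x_adv - x).reshape(len(x), -1), axis=1)

# Members are expected to lie farther from the boundary than non-members,
# so the distance itself serves as the membership score for the AUC.
scores = np.concatenate([boundary_distances(x_member), boundary_distances(x_nonmember)])
labels = np.concatenate([np.ones(len(x_member)), np.zeros(len(x_nonmember))])
print("AUC:", roc_auc_score(labels, scores))
```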

I also used your pretrained models from Google Drive to conduct the experiments on the CIFAR10 dataset. The results are shown below.

Architecture AUC
BiT 0.5392
DenseNet 0.5141
DLA 0.5060
ResNet 0.5049
ResNext 0.5043
VGG 0.6070
WideResnet 0.5352
AmoebaNet 0.5029
DARTS 0.5220
DrNAS 0.5192
ENAS 0.5069
NASNet 0.5285
PC-DARTS 0.5087
PDARTS 0.5271
SGAS 0.5038
SNAS 0.5081
Random 0.5023

However, these results show the opposite of what is reported in your paper: the manually designed architectures appear to be more vulnerable to membership inference attacks than the NAS architectures (e.g., VGG reaches 0.6070 AUC, while most NAS architectures stay close to 0.5).

Is there anything wrong with my parameter settings (I only changed the default parameters of membership.py listed above)? Or is anything else needed to reproduce the results in your paper?

Thanks in advance!
