This repository has been archived by the owner on Feb 19, 2022. It is now read-only.
Thanks for sharing the source code of your excellent work!

I tried to reproduce the results of the label-only membership inference attacks against the various architectures in your paper. I followed the parameter settings from the paper (see Appendix B for details) and modified the parameters in membership.py as follows:

I also used your pretrained models from Google Drive to run the experiments on CIFAR10. The results on CIFAR10 are shown below.
| Architecture | AUC |
| --- | --- |
| BiT | 0.5392 |
| DenseNet | 0.5141 |
| DLA | 0.5060 |
| ResNet | 0.5049 |
| ResNext | 0.5043 |
| VGG | 0.6070 |
| WideResnet | 0.5352 |
| AmoebaNet | 0.5029 |
| DARTS | 0.5220 |
| DrNAS | 0.5192 |
| ENAS | 0.5069 |
| NASNet | 0.5285 |
| PC-DARTS | 0.5087 |
| PDARTS | 0.5271 |
| SGAS | 0.5038 |
| SNAS | 0.5081 |
| Random | 0.5023 |
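For reference, here is how I computed the AUC values above. This is a minimal sketch of the standard evaluation, assuming the attack produces one real-valued score per sample (e.g. the estimated distance to the decision boundary in a label-only attack); the actual logic in membership.py may differ, and `attack_auc` is a hypothetical helper name.

```python
# Hypothetical sketch of the AUC computation for a membership inference
# attack; membership.py may implement this differently.
import numpy as np
from sklearn.metrics import roc_auc_score

def attack_auc(member_scores, nonmember_scores):
    """AUC of a membership inference attack.

    member_scores:    attack scores for training ("member") samples.
    nonmember_scores: attack scores for held-out ("non-member") samples.
    Higher scores are assumed to indicate membership.
    """
    scores = np.concatenate([member_scores, nonmember_scores])
    labels = np.concatenate([np.ones(len(member_scores)),
                             np.zeros(len(nonmember_scores))])
    return roc_auc_score(labels, scores)

# Toy example: members get slightly higher scores on average,
# so the AUC is only a little above the 0.5 random-guess baseline.
rng = np.random.default_rng(0)
print(attack_auc(rng.normal(0.6, 1.0, 1000), rng.normal(0.5, 1.0, 1000)))
```

An AUC of 0.5 corresponds to random guessing, so values like 0.50-0.53 in the table indicate only weak membership leakage.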
However, these results show the opposite of what you report in the paper: the manually designed architectures appear to be more vulnerable to membership inference attacks than the NAS architectures.

Is there anything wrong with my parameter settings (I only changed the default settings of membership.py)? Or do I need anything else to reproduce the results in your paper?

Thanks in advance!