some question with this code #4
Comments
Hi, thanks for the question! Yes, DP-SGD by nature is not compatible with BatchNorm, and we support converting BatchNorm layers to GroupNorm (you would need to disable the comment mode to allow the conversion); see gradattack/defenses/dpsgd.py, lines 140 to 142, at commit 4496f57.
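For readers landing here, a minimal sketch of what such a conversion can look like. This is an illustrative stand-in written from scratch, not the actual `convert_batchnorm_modules` implementation in GradAttack; the function name and `num_groups` default are my own choices:

```python
import torch.nn as nn

def convert_batchnorm_to_groupnorm(module: nn.Module, num_groups: int = 32) -> nn.Module:
    """Recursively replace every BatchNorm2d with a GroupNorm over the same channels."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # GroupNorm normalizes each sample independently (no batch-level
            # statistics), so it works with the per-sample gradients DP-SGD needs.
            groups = min(num_groups, child.num_features)
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            convert_batchnorm_to_groupnorm(child, num_groups)
    return module

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
model = convert_batchnorm_to_groupnorm(model)
```

This keeps the channel count intact, so the surrounding layers need no changes; only the normalization statistics differ.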
To launch the gradient inversion attack against DP-SGD training, you may want to design your own regularization term for GroupNorm statistics. The regularization term for BatchNorm statistics may be a good reference; see gradattack/attacks/gradientinversion.py, lines 325 to 334, at commit 4496f57.
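One way such a GroupNorm regularizer could be sketched, by analogy with the BatchNorm one: penalize the gap between the per-group statistics of the reconstructed activations and some reference statistics. Everything here is hypothetical attacker knowledge for illustration; the function name and shapes are my own, not GradAttack's API:

```python
import torch

def groupnorm_stats_loss(feat: torch.Tensor, ref_mean: torch.Tensor,
                         ref_var: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Penalize mismatch between the per-sample, per-group mean/variance of the
    dummy input's activations and reference statistics (assumed known)."""
    n, c, h, w = feat.shape
    g = feat.reshape(n, num_groups, -1)      # split channels into groups
    mean = g.mean(dim=-1)                    # shape (n, num_groups)
    var = g.var(dim=-1, unbiased=False)
    return ((mean - ref_mean) ** 2).mean() + ((var - ref_var) ** 2).mean()

feat = torch.randn(2, 16, 4, 4, requires_grad=True)
loss = groupnorm_stats_loss(feat, torch.zeros(2, 4), torch.ones(2, 4), num_groups=4)
```

Because the loss is differentiable in `feat`, it can simply be added to the gradient-matching objective during the inversion optimization.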
Happy to answer any further questions :) Best,
Thanks for your reply! I noticed that both the attack model and the target model are in evaluation mode.
Hi, Thanks for the follow-up question!
You are definitely right about this, and it is one of the main takeaways from Section 3 of our paper. Also, please note that the main results (Table 2) reported in the paper are for the strongest (and unrealistic) setting, where the attacker has access to the victim's BatchNorm statistics.
We evaluated such a scenario because it helps us understand the upper bound of realistic attack performance. Best,
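For anyone wondering why the train/eval distinction matters here: in `eval()` mode a BatchNorm layer normalizes with its stored running statistics rather than the current batch's statistics, so the same input produces different outputs in the two modes. A quick self-contained check:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(8)
# Draw inputs whose statistics are far from the default running stats (mean 0, var 1).
x = torch.randn(4, 8, 2, 2) * 5 + 3

bn.train()
out_train = bn(x)   # normalized with the batch's own mean/variance

bn.eval()
out_eval = bn(x)    # normalized with the stored running mean/variance
```

The two outputs differ noticeably, which is why the mode in which the target model is run changes what an attacker can exploit.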
I noticed that Opacus does not support BatchNorm2d, so we have to use convert_batchnorm_modules to convert BatchNorm2d modules to GroupNorm. But then we can no longer use BatchNorm statistics to conduct the gradient inversion attack. How can we solve this problem? Thanks for your reply.