
some question with this code #4

Closed
xiaoyangx0 opened this issue Oct 7, 2022 · 3 comments

@xiaoyangx0
I noticed that opacus does not support BatchNorm2d, so we have to use convert_batchnorm_modules to convert the BatchNorm2d modules to GroupNorm. But then we can no longer use the BatchNorm statistics to conduct the gradient attack. How can this be solved? Thanks for your reply.

@Hazelsuko07
Collaborator

Hi,

Thanks for the question!

Yes, DPSGD by nature is incompatible with BatchNorm, and we already support converting BatchNorm layers to GroupNorm (you would need to uncomment the following lines to enable the conversion):

# Converts all BatchNorm modules to another module (defaults to GroupNorm)
# that is privacy compliant
# FIXME: should update the privacy accountant
# pipeline.model._model = convert_batchnorm_modules(
#     pipeline.model._model)
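For reference, here is a minimal, self-contained sketch of that conversion, assuming an opacus release (< 1.0) that still ships convert_batchnorm_modules in opacus.utils.module_modification (newer opacus versions replaced it with ModuleValidator.fix); the ResNet-18 here is just an example model:

import torchvision
from opacus.utils.module_modification import convert_batchnorm_modules

model = torchvision.models.resnet18()
# Replaces every BatchNorm module with a privacy-compliant GroupNorm
model = convert_batchnorm_modules(model)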

To launch the gradient inversion attack with DPSGD, you may want to design your own regularization term for GroupNorm statistics. The regularization term for BatchNorm statistics may be a good reference:

if self.hparams["bn_reg"] > 0:
    # Weight the first BatchNorm layer more heavily than the later ones
    rescale = [self.hparams["first_bn_multiplier"]] + [
        1.0 for _ in range(len(self.loss_r_feature_layers) - 1)
    ]
    # Sum the per-layer statistics-matching losses (each hook exposes r_feature)
    loss_r_feature = sum([
        mod.r_feature * rescale[idx]
        for (idx, mod) in enumerate(self.loss_r_feature_layers)
    ])
    reconstruction_loss += self.hparams["bn_reg"] * loss_r_feature
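A hypothetical starting point for the GroupNorm analogue (the class GroupNormStatHook and the target_mean / target_var inputs are my own illustration, not part of this repo): register a forward hook on each GroupNorm layer, compute the per-group statistics of the dummy batch, and penalize their distance to the target statistics, mirroring the r_feature term above:

import torch
import torch.nn as nn

class GroupNormStatHook:
    # Hypothetical hook: exposes an r_feature loss measuring how far the
    # dummy batch's per-group statistics are from given target statistics.
    def __init__(self, module: nn.GroupNorm, target_mean, target_var):
        self.num_groups = module.num_groups
        self.target_mean = target_mean  # shape (num_groups,)
        self.target_var = target_var    # shape (num_groups,)
        self.r_feature = torch.tensor(0.0)
        module.register_forward_hook(self.hook)

    def hook(self, module, inputs, output):
        x = inputs[0]                        # (N, C, H, W)
        n = x.shape[0]
        g = x.reshape(n, self.num_groups, -1)
        mean = g.mean(dim=(0, 2))            # per-group mean over the batch
        var = g.var(dim=(0, 2), unbiased=False)
        self.r_feature = (torch.norm(mean - self.target_mean, 2)
                          + torch.norm(var - self.target_var, 2))

You would then sum r_feature over all hooked GroupNorm layers and add it to the reconstruction loss with a weight analogous to bn_reg.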

Happy to answer further questions if any :)

Best,
Yangsibo

@xiaoyangx0
Author

Thanks for your reply. I noticed that the attack model and the target model are both kept in evaluation mode:

if eval_mode:
    self.eval()
else:
    self.train()

But if we set them to training mode, the result is much worse than the one reported in the paper. Evaluation mode means the attack uses the BatchNorm statistics accumulated during earlier training, which is a strong assumption in a realistic application.
This is my question, thanks for your answer.
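To illustrate the difference (a minimal sketch, independent of this repo's code): in training mode BatchNorm normalizes with the current batch's statistics, while in evaluation mode it reuses the stored running statistics.

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
x = torch.randn(8, 3, 4, 4)

bn.train()
y_train = bn(x)  # normalized with this batch's mean/var; running stats updated

bn.eval()
y_eval = bn(x)   # normalized with the stored running_mean / running_var
print(torch.allclose(y_train, y_eval))  # False in general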

@Hazelsuko07
Collaborator

Hi,

Thanks for the follow-up question!

But if we set them to training mode, the result is much worse than the one reported in the paper.

You are definitely right about this, and this is one of the main takeaways from Section 3 of our paper.

Also, please note that the main results (Table 2) we reported in the paper are for the strongest (and unrealistic) setting where the attacker has access to

  • BatchNorm statistics of the private batch
  • labels of the private batch.

We evaluated such a scenario as it helps us understand the upper bound of the realistic attack performance.

Best,
Yangsibo
