Wrapper references can be easily replaced, consider using properties instead #649

Closed
andreatgretel opened this issue May 2, 2024 · 3 comments
Labels: bug

andreatgretel commented May 2, 2024

(Apologies for skipping the template; this is not really a bug report but rather a suggestion for an improvement.)

Currently, the DPOptimizer passes through the state, defaults and param_groups attributes by simply referencing them. This becomes an issue when another object sets these attributes from the outside: the assignment replaces the wrapper's reference instead of changing original_optimizer's attributes. For instance, this happens when HF's AcceleratedOptimizer wraps the DPOptimizer, causing issues such as this one.

A safer alternative might be to do what the AcceleratedOptimizer itself does, using properties instead of passing by reference. For instance, for param_groups, that would look something like

@property
def param_groups(self):
    return self.original_optimizer.param_groups

@param_groups.setter
def param_groups(self, param_groups):
    self.original_optimizer.param_groups = param_groups
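
The same pattern extends naturally to state and defaults. Below is a minimal sketch of a property-based passthrough; the wrapper class is purely illustrative (not Opacus' actual implementation) and only assumes that the wrapped optimizer is stored as original_optimizer, as DPOptimizer does:

class PropertyPassthroughOptimizer:
    # Illustrative wrapper: forwards param_groups, state and defaults to the
    # wrapped optimizer via properties instead of plain attribute references.

    def __init__(self, optimizer):
        self.original_optimizer = optimizer

    @property
    def param_groups(self):
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups):
        self.original_optimizer.param_groups = param_groups

    @property
    def state(self):
        return self.original_optimizer.state

    @state.setter
    def state(self, state):
        self.original_optimizer.state = state

    @property
    def defaults(self):
        return self.original_optimizer.defaults

    @defaults.setter
    def defaults(self, defaults):
        self.original_optimizer.defaults = defaults

With properties, an external assignment such as wrapper.param_groups = new_groups lands on the wrapped optimizer itself, so the two objects cannot drift apart.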

Reproducing the issue

Please take a look at this GitHub issue to see a particular situation where passing by reference becomes a problem. In summary: HF's accelerate puts yet another wrapper around the DPOptimizer, and ideally this "doubly-wrapped" optimizer should still function properly. However, because the AcceleratedOptimizer replaces the whole param_groups attribute instead of modifying its contents, the changes never reach the original_optimizer.

Consider the following MWE of how this can happen:

from copy import deepcopy

import opacus
import torch

# Wrap a plain AdamW optimizer in Opacus' DPOptimizer
m = torch.nn.Linear(10, 1)
o = torch.optim.AdamW(m.parameters())
odp = opacus.optimizers.DPOptimizer(o, noise_multiplier=0.5, max_grad_norm=0.1, expected_batch_size=32)

# Mimic what AcceleratedOptimizer does: replace the whole param_groups attribute
pg = deepcopy(odp.param_groups)
pg[0]['lr'] = 1.0
odp.param_groups = pg

# The wrapper and the wrapped optimizer now report different learning rates
print('LR at DPOptimizer:', odp.param_groups[0]['lr'])
print('LR at original_optimizer:', odp.original_optimizer.param_groups[0]['lr'])
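
With the reference-based passthrough, the two prints disagree: the DPOptimizer reports the new learning rate (1.0), while original_optimizer still reports AdamW's default (1e-3), because the assignment only replaced the wrapper's reference. With the property-based approach above, both would read 1.0.
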
@HuanyuZhang (Contributor)

Thanks for reporting this. We will investigate and launch a fix.

HuanyuZhang added the bug label Jul 18, 2024
iden-kalemaj added commits to iden-kalemaj/opacus that referenced this issue Jul 30 – Aug 1, 2024
…izer wrapper (pytorch#660)
facebook-github-bot pushed a commit that referenced this issue Aug 1, 2024
…izer wrapper (#660)

Summary:
Pull Request resolved: #660

Fix for GitHub issue [#649](#649)

**Background**: DPOptimizer is a wrapper for the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer passed `param_groups`, `state`, and `defaults` through simply by reference. Another object could therefore update `param_groups` on the DPOptimizer while neglecting to update them on the original Optimizer. This shows up, for example, with the learning rate (LR) scheduler: the learning rate looks as if it's being updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).

**Fix**: We use the property decorator to ensure that the three attributes stay identical between DPOptimizer and the original Optimizer.

Reviewed By: HuanyuZhang

Differential Revision: D60453849

fbshipit-source-id: 2f181986e55d853866e1f8492c4e77a8bc2aabb2
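
As a sanity check (mirroring the MWE above, and assuming an Opacus version that includes the fix from #660), re-running the reproduction should now show both views in agreement:

from copy import deepcopy

import opacus
import torch

m = torch.nn.Linear(10, 1)
o = torch.optim.AdamW(m.parameters())
odp = opacus.optimizers.DPOptimizer(o, noise_multiplier=0.5, max_grad_norm=0.1, expected_batch_size=32)

pg = deepcopy(odp.param_groups)
pg[0]['lr'] = 1.0
odp.param_groups = pg

# The property setter forwards the assignment to the wrapped optimizer,
# so both views report the new learning rate.
assert odp.param_groups[0]['lr'] == 1.0
assert odp.original_optimizer.param_groups[0]['lr'] == 1.0
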
@iden-kalemaj (Contributor)

Thanks again for reporting this. The issue has now been fixed.

@HuanyuZhang (Contributor)

#660
