Wrapper references can be easily replaced, consider using properties instead #649
Labels: bug
Thanks for reporting this. We will investigate and launch a fix.
iden-kalemaj added a series of commits to iden-kalemaj/opacus that referenced this issue between Jul 30 and Aug 1, 2024 (iterations of the fix in pytorch#660).
facebook-github-bot pushed a commit that referenced this issue on Aug 1, 2024:
…izer wrapper (#660)

Summary: Pull Request resolved: #660. Fix for GitHub issue #649.

**Background**: `DPOptimizer` is a wrapper for the original non-DP `Optimizer` selected by the user. `param_groups`, `state`, and `defaults` are attributes of `DPOptimizer` that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, `DPOptimizer` exposed `param_groups`, `state`, and `defaults` simply by reference. Another object could therefore update `param_groups` on the `DPOptimizer` while neglecting to update them on the original `Optimizer`. The issue shows up, for example, with a learning rate scheduler: the learning rate looks as if it is being updated on the `DPOptimizer`, but it is not actually updated on the original `Optimizer` (the one that matters).

**Fix**: Use the `property` decorator to ensure that the three attributes remain the same between `DPOptimizer` and `Optimizer`.

Reviewed By: HuanyuZhang

Differential Revision: D60453849

fbshipit-source-id: 2f181986e55d853866e1f8492c4e77a8bc2aabb2
Thanks again for reporting this. The issue has now been fixed.
(Apologies for skipping the template; this is not properly reporting a bug but rather suggesting an improvement.)
Currently, the `DPOptimizer` passes through the `state`, `defaults`, and `param_groups` attributes by simply referencing them. This can be an issue when another object tries to set these attributes from the outside, which may replace the wrapper's reference instead of changing `original_optimizer`'s attributes. For instance, this happens when HF's `AcceleratedOptimizer` wraps the `DPOptimizer`, causing issues such as this one.

A safer alternative might be to do what `AcceleratedOptimizer` itself does and use properties instead of passing by reference. For `param_groups`, that would look something like the sketch below.
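A minimal, hypothetical sketch of that property-based pass-through (a simplified stand-in, not the actual Opacus or Accelerate code; `PropertyForwardingWrapper` is an illustrative name):

```python
# Hypothetical sketch: forward `param_groups` to the wrapped optimizer through a
# property, so that both reads and writes reach `original_optimizer` instead of
# rebinding an attribute on the wrapper.
import torch


class PropertyForwardingWrapper:
    def __init__(self, optimizer: torch.optim.Optimizer):
        self.original_optimizer = optimizer

    @property
    def param_groups(self):
        # Reads always reflect the wrapped optimizer's current param_groups.
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups):
        # Writes are forwarded, so replacing the attribute from the outside
        # still updates the underlying optimizer.
        self.original_optimizer.param_groups = param_groups

    # `state` and `defaults` could be exposed the same way.
```

With a setter like this, an external assignment such as `wrapper.param_groups = new_groups` still reaches `original_optimizer`, rather than silently rebinding an attribute on the wrapper only.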
**Reproducing etc.**
Please take a look at this GitHub issue to see a particular situation where passing by reference becomes a problem. In summary: HF's accelerate will put yet another wrapper around the `DPOptimizer`, and ideally this "doubly-wrapped" optimizer should still function properly. However, because the `AcceleratedOptimizer` attempts to replace the whole `param_groups` attribute instead of mutating its contents, changes never reach the `original_optimizer`.

Consider the following MWE of how this could happen; a sketch is reconstructed below.
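The original MWE is not reproduced here; below is a hypothetical reconstruction under the assumption that the outer wrapper replaces `param_groups` wholesale. `ReferenceWrapper` and `OuterWrapper` are illustrative stand-ins, not the real `DPOptimizer` or `AcceleratedOptimizer` classes:

```python
# Hypothetical MWE: rebinding `param_groups` on a wrapper that only holds a
# plain reference silently detaches it from the underlying optimizer.
import torch


class ReferenceWrapper:
    """Passes `param_groups` through as a plain attribute reference."""

    def __init__(self, optimizer):
        self.original_optimizer = optimizer
        self.param_groups = optimizer.param_groups  # shared list object


class OuterWrapper:
    """Mimics an outer wrapper that replaces the attribute wholesale."""

    def __init__(self, optimizer):
        self.optimizer = optimizer

    def set_lr(self, lr):
        # Rebinds `param_groups` on the inner wrapper instead of mutating
        # the existing groups in place.
        self.optimizer.param_groups = [
            {**group, "lr": lr} for group in self.optimizer.param_groups
        ]


model = torch.nn.Linear(2, 2)
sgd = torch.optim.SGD(model.parameters(), lr=0.1)
doubly_wrapped = OuterWrapper(ReferenceWrapper(sgd))

doubly_wrapped.set_lr(0.01)
print(doubly_wrapped.optimizer.param_groups[0]["lr"])  # 0.01 on the wrapper...
print(sgd.param_groups[0]["lr"])                       # ...but still 0.1 on the real optimizer
```

With the property-based setter from the sketch above, the same `set_lr` call would propagate to `sgd.param_groups` as well.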