Standardized Distributions #45115
Comments
Keep in mind that the design goals of `torch.distributions` differ from those of the raw sampling functions. For details you could read the original design doc, but I would summarize the design goal of `torch.distributions` as providing composable, differentiable distribution objects rather than fast low-level samplers. Given these different design goals, maybe it would be more helpful to update the `torch.distributions` interface itself.
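For illustration (my sketch, not from the thread), this is the kind of usage the distributions API is built around — differentiable density evaluation and reparameterized sampling, as opposed to the plain RNG calls of the in-place samplers:

```python
import torch
from torch.distributions import Normal

# torch.distributions is organized around differentiable density
# evaluation and reparameterized sampling, not low-level RNG control.
loc = torch.zeros(3, requires_grad=True)
dist = Normal(loc, torch.ones(3))

x = dist.rsample()             # reparameterized sample; gradients flow back to loc
logp = dist.log_prob(x).sum()  # differentiable log-density
logp.backward()                # populates loc.grad

# By contrast, the in-place samplers are plain RNG calls with no log_prob:
y = torch.empty(3).normal_(mean=0.0, std=1.0)
```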
Yes, if the distribution interface in `torch.distributions` were extended, that would work for us. Not having the `generator` parameter is the main blocker, and having an explicit `device` parameter would be a nice bonus.

As for my other suggestions: I just thought a more uniform implementation, where the in-place functions used the distributions from `torch.distributions` under the hood, would be cleaner.

I'll take another look at this tomorrow. But if adjusting the distribution interface is the way to go, I could get behind that.
@pepper-jk great, it sounds like a workable solution might be to pass a `generator` into the distribution constructors.
The generator object that @pepper-jk mentioned is stateful, and I think it would be reasonable to pass that in at construction time.
Re: passing a `generator` into the constructors: one design concern is that most `Distribution` subclasses would then need extra boilerplate in their `__init__` to accept and store it.
I would prefer passing the `generator` to `.sample()` instead; that keeps the distribution objects themselves stateless and avoids the constructor boilerplate.
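To make the two options concrete, here is a sketch of what each signature could look like. Neither `Normal` variant exists in `torch.distributions` today, so both are hypothetical; only the final line reflects the current API:

```python
import torch
from torch.distributions import Normal

g = torch.Generator().manual_seed(42)

# Option A (hypothetical): generator passed at construction time;
# every Distribution.__init__ would have to accept and store it.
#   dist = Normal(0.0, 1.0, generator=g)
#   x = dist.sample((10,))

# Option B (hypothetical): generator passed per call, which keeps
# the distribution objects stateless:
#   dist = Normal(0.0, 1.0)
#   x = dist.sample((10,), generator=g)

# What already works today: the functional sampler takes a generator.
x = torch.normal(0.0, 1.0, size=(10,), generator=g)
```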
I agree about the boilerplate in `__init__`.
For me either would work fine. I'm not opposed to the idea of passing the `generator` at sampling time. What does the pytorch/opacus team say, @karthikprasad?
Also no strong opinions on our side at Opacus. We can certainly just pass the generator. Adding @pbelevich since he built torchcsprng (https://github.com/pytorch/csprng) and can have a more informed opinion than me here :)
Was this ever resolved? Adding a `generator` parameter to `torch.distributions` would still be very useful.
Not to my knowledge. I ended up creating a small library for our research projects. You can find the library here: https://github.com/tklab-tud/aDPtorch

EDIT: just noticed that our solution is probably not what you are looking for. Sorry if this is the case.
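For anyone landing here: until the distributions interface gains a `generator` parameter, a common workaround (my sketch, unrelated to aDPtorch; `sample_normal` is a hypothetical helper name) is to route sampling through the in-place tensor methods, which do accept one:

```python
import torch

def sample_normal(shape, mean=0.0, std=1.0, generator=None, device=None):
    # The in-place samplers accept a torch.Generator (including the
    # generators provided by torchcsprng), even though
    # torch.distributions has no generator parameter.
    return torch.empty(shape, device=device).normal_(mean, std, generator=generator)

g = torch.Generator().manual_seed(0)
noise = sample_normal((4, 4), std=2.0, generator=g)
```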
🚀 Feature

`torch.normal`, `torch.distributions.normal`, and `torch.Tensor.normal_` should allow the same parameters and execute the same function/method in the backend.

Motivation
This issue is related to pytorch/opacus/issues/59. The motivation behind it is to allow any distribution to be used for the noise generation of differential privacy in `opacus`, formerly known as `pytorch-dp`.

There are multiple ways to create random samples from the normal (or Gaussian) distribution, at least three that I know of (see above). However, not all of them accept the same parameters, and therefore they cannot be configured in the same way (a short demo follows the list):
- `torch.distributions.normal.Normal` does not allow a custom `generator` and therefore does not support `urandom`.
- unlike `torch.distributions`, the `torch.Tensor` in-place methods allow a `generator`.
- `torch.Tensor` only supports a subset of the distributions available in `torch.distributions`.
- `torch.distributions` only allows the device to be passed explicitly through a parameter `Tensor` instead of a `float` for each `Tensor` it samples for.
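To make the inconsistency concrete, a small demo (reflecting current behavior as of this issue):

```python
import torch
from torch.distributions import Normal

g = torch.Generator().manual_seed(1234)

# Both of these accept a generator...
a = torch.normal(0.0, 1.0, size=(5,), generator=g)
b = torch.empty(5).normal_(0.0, 1.0, generator=g)

# ...but the object-oriented API does not, and the target device is
# implied by the parameter tensors rather than by a device argument:
dist = Normal(torch.tensor(0.0), torch.tensor(1.0))
c = dist.sample((5,))  # passing generator=... here raises a TypeError
```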
Pitch

1. `torch.distributions` should allow the `generator` parameter and (optionally) an explicit `device` parameter (see the sketch after this list).
2. This would replace `torch.normal`, which as stated here is no longer the "best practice" (and also does not allow other distributions, as only normal is available).
3. `torch.Tensor` should be complete, meaning it should include all the distributions available in `torch.distributions`.
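As a sketch of what point 1 could look like (the `generator` and `device` keyword arguments are hypothetical; nothing like this exists yet), compared with what is required today:

```python
import torch
from torch.distributions import Laplace

# Hypothetical form of point 1 (neither kwarg exists today):
#   dist = Laplace(0.0, 1.0, device="cuda:0")
#   noise = dist.sample((128,), generator=g)

# Today the device has to be smuggled in through the parameter
# tensors, and there is no way to supply a generator at all:
dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dist = Laplace(torch.tensor(0.0, device=dev), torch.tensor(1.0, device=dev))
noise = dist.sample((128,))
```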
Alternatives
If 1. is not possible, at least each distribution in `torch.distributions` should also be available through `torch.Tensor.<distribution_name>_`, as stated in 3., as this allows the use of in-place sampling to replace `torch.normal` and allows the addition of Laplace as an alternative noise generation (a sketch of such a method follows below).
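A `torch.Tensor.laplace_` method does not exist; the sketch below is an interim stand-in built on `uniform_` (which already accepts a `generator`) via inverse-CDF sampling, assuming the standard Laplace parameterization:

```python
import torch

def laplace_(t: torch.Tensor, loc=0.0, scale=1.0, generator=None):
    # Stand-in for a hypothetical torch.Tensor.laplace_ method:
    # inverse-CDF sampling on top of uniform_. For u ~ U(-1/2, 1/2):
    #   x = loc - scale * sign(u) * log(1 - 2|u|)
    u = torch.empty_like(t).uniform_(-0.5, 0.5, generator=generator)
    return t.copy_(loc - scale * torch.sign(u) * torch.log1p(-2.0 * u.abs()))

g = torch.Generator().manual_seed(0)
noise = laplace_(torch.empty(3, 3), scale=2.0, generator=g)
```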
Additional context

I highly recommend reading the other issue pytorch/opacus/issues/59 for more context.
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw @nikitaved