Inconsistent Behavior in torch.distributions log_prob for float input in uniform distribution. #22970
Comments
This should be an easy fix. Those lines should be made to work with native Python floats as well, so the code in pytorch/torch/distributions/uniform.py, lines 73 to 75 (commit 7a99f39), should be replaced.
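A minimal sketch of the kind of change being suggested, assuming the `log_prob` implementation from PyTorch 1.1 (this is an illustration, not necessarily the merged patch; the monkey-patch and `patched_log_prob` name are for demonstration only):

```python
import torch
from torch.distributions import Uniform

# Sketch: flip each comparison so it is invoked on the tensor-valued
# parameters (self.low / self.high) rather than on `value`, which lets
# `value` be a native Python float as well as a torch.Tensor.
def patched_log_prob(self, value):
    if self._validate_args:
        self._validate_sample(value)
    lb = self.low.le(value).type_as(self.low)   # was: value.ge(self.low)
    ub = self.high.gt(value).type_as(self.low)  # was: value.lt(self.high)
    return torch.log(lb.mul(ub)) - torch.log(self.high - self.low)

# Monkey-patch for demonstration only.
Uniform.log_prob = patched_log_prob

# validate_args=False because newer releases reject non-tensor inputs
# up front during argument validation.
d = Uniform(torch.tensor(0.0), torch.tensor(2.0), validate_args=False)
print(float(d.log_prob(1.0)))                 # log(1/2), from a float
print(float(d.log_prob(torch.tensor(1.0))))   # same result, from a tensor
```

With the comparisons anchored on the parameters, the same code path serves both floats and tensors, which is the consistency the issue asks for.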
@fmassa, there is some ambiguity with …
@rrkarim You can take a look at the pull request in which this was added: #5358.
@fmassa I would like to take up this issue. I'm working on it.
@vishwakftw Then we could just write all the checks explicitly in the instance methods. OK, I see, a performance gain could be the reason. Still, it's a pretty odd design: sizes can be checked with generic methods, while the per-distribution cases need a better design (maybe a quick fix is enough).
I think it wouldn't be appropriate to assume this.
Well, it would be too verbose and against the DRY principle. This is the actual discussion thread regarding this addition: #5248. cc: @fritzo @neerajprad
DRY holds up fine with checks in the instance methods; performance is a separate concern. Everything is pretty much discussed in that thread (including all the …
🐛 Bug
(Most?) torch.distributions' cdf and log_prob work for inputs that are either torch.Tensors or native Python floats. However, torch.distributions.uniform.Uniform.log_prob() fails for Python-float inputs.

To Reproduce
Steps to reproduce the behavior:
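The original snippet was lost in the page scrape; a plausible reconstruction of the repro, assuming the PyTorch 1.1 behavior described above (`validate_args=False` is passed so the snippet also runs on newer releases, where argument validation rejects non-tensor inputs before `log_prob` even runs):

```python
import torch
from torch.distributions import Normal, Uniform

# Normal.log_prob is plain arithmetic, so a native Python float works:
print(Normal(0.0, 1.0, validate_args=False).log_prob(0.5))

# Uniform.log_prob in v1.1 called value.ge(...) / value.lt(...) on the
# input, so a native float raised AttributeError there; on releases
# where this was fixed, the call succeeds and returns log(1/2).
try:
    print(Uniform(0.0, 2.0, validate_args=False).log_prob(0.5))
except AttributeError as err:
    print("AttributeError:", err)
```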
Expected behavior

Either one of the following:
- cdf, icdf, and log_prob fail consistently for native Python inputs across all distributions, or
- torch.distributions.Uniform.log_prob returns the correct log-probability when given native Python inputs.

Environment
Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce RTX 2080 Ti
GPU 3: GeForce RTX 2080 Ti
GPU 4: GeForce RTX 2080 Ti
GPU 5: GeForce RTX 2080 Ti
GPU 6: GeForce RTX 2080 Ti
GPU 7: GeForce RTX 2080 Ti
Nvidia driver version: 430.26
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.1.0
[pip] torchvision==0.3.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl_fft 1.0.12 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda10.0.130_cudnn7.5.1_0 pytorch
[conda] torchvision 0.3.0 py37_cu10.0.130_1 pytorch