
torch.equals bug #15900

Closed
PetrochukM opened this issue Jan 9, 2019 · 6 comments

Comments

@PetrochukM

PetrochukM commented Jan 9, 2019

🐛 Bug

torch.equal does not work as expected: two tensors that print identically compare as unequal.

To Reproduce

>>> import torch
>>> t = torch.tensor([0.3000, 0.3000, 0.0000, 0.0000])
>>> t1 = torch.tensor([0.3000, 0.3000, 0.0000, 0.0000])
>>> torch.equal(t, t1)
True
>>> t1 = torch.tensor([0.3000, 0.3000, 0.0000, 0.0000]) + 1 - 1
>>> torch.equal(t, t1)
False
>>> t1
tensor([0.3000, 0.3000, 0.0000, 0.0000])
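For context (not part of the original report), the same drift is visible in plain Python floats, which use the same IEEE-754 arithmetic: 0.3 has no exact binary representation, so adding and subtracting 1 changes the rounding even though the printed value looks unchanged.

```python
# Plain-Python sketch of the same effect: 0.3 is not exactly
# representable in binary, so the round trip through +1/-1 lands
# on a slightly different double.
a = 0.3
b = 0.3 + 1 - 1
print(a == b)    # False
print(repr(b))   # 0.30000000000000004
```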

Expected behavior

That equality still holds after arithmetic operations that cancel out.

Environment

Collecting environment information...
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: None

OS: Mac OSX 10.14.1
GCC version: Could not collect
CMake version: Could not collect

Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
@fmassa
Member

fmassa commented Jan 9, 2019

I believe this is a floating-point error; does this work for double tensors?

@PetrochukM
Author

PetrochukM commented Jan 10, 2019

No.

>>> import torch
>>> t = torch.tensor([0.3000, 0.3000, 0.0000, 0.0000], dtype=torch.float64)
>>> t1 = torch.tensor([0.3000, 0.3000, 0.0000, 0.0000], dtype=torch.float64)
>>> torch.equal(t, t1)
True
>>> t1 = t1 + 1.0 - 1.0
>>> torch.equal(t, t1)
False
>>> t1
tensor([0.3000, 0.3000, 0.0000, 0.0000], dtype=torch.float64)

It'd be hard to imagine this being something other than a floating point error though.

Is there any support for almost_equal?
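For what it's worth, torch.allclose provides this kind of tolerance-based comparison. A minimal pure-Python sketch of its elementwise test (assuming the numpy-style defaults rtol=1e-5, atol=1e-8):

```python
def allclose(xs, ys, rtol=1e-5, atol=1e-8):
    # Elementwise |x - y| <= atol + rtol * |y|, mirroring the
    # tolerance test used by torch.allclose / numpy.allclose.
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(xs, ys))

t = [0.3, 0.3, 0.0, 0.0]
t1 = [x + 1 - 1 for x in t]
print(t == t1)          # False: exact comparison fails
print(allclose(t, t1))  # True: within tolerance
```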

@fmassa
Member

fmassa commented Jan 10, 2019

I think there is something in torch.testing

@PetrochukM
Author

PetrochukM commented Jan 10, 2019

It's not publicly documented, is it? It'd be nice to have that public for writing tests; I'm using numpy.testing at the moment.

@eickenberg

Definitely a floating-point issue, since it works with [0.1250, 0.1250, 0.0000, 0.0000], which has a finite binary expansion (1/2**3), unlike 0.3.
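A quick check of that claim in plain Python (doubles):

```python
# 0.125 = 1/2**3 has an exact binary representation, so adding
# and subtracting 1 round-trips with no error.
print(0.125 + 1 - 1 == 0.125)   # True
```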

Maybe a robust workaround is something like taking the torch.norm of the difference and testing that it is under a threshold.
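A plain-Python sketch of that workaround (the tensor version would be something like torch.norm(t - t1) < eps; the threshold 1e-6 here is an arbitrary choice for illustration):

```python
import math

def nearly_equal(xs, ys, eps=1e-6):
    # Euclidean norm of the elementwise difference, tested
    # against a small threshold.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys))) < eps

t = [0.3, 0.3, 0.0, 0.0]
t1 = [x + 1 - 1 for x in t]
print(nearly_equal(t, t1))   # True
```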

@PetrochukM
Author

Thanks guys!
