

Throw warning if python optimise flags are enabled #77869

Open
vitrioil opened this issue May 19, 2022 · 2 comments
Labels
module: python frontend For issues relating to PyTorch's Python frontend triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@vitrioil
Contributor

🐛 Describe the bug

Currently PyTorch does not throw any obvious warning or error when the PYTHONOPTIMIZE (-O and -OO) flags are used. [#76619, #76034, #76659, #60953] This might imply that the behaviour is consistent whether the flags are enabled or disabled.

However, this is not true: assertions are currently used both for input checks and for raising error messages.

Examples (run first with python, then run again with python -O or python -OO):

import torch
m = torch.nn.Softmax2d()
input = torch.randn(2, 3, 12, 13, 15)
output = m(input)

Correct behaviour will throw: AssertionError: Softmax2d requires a 4D tensor as input
Incorrect behaviour with -O: Input will be silently accepted.

Similarly:

import torch.nn as nn
att = nn.MultiheadAttention(6, 5, kdim=2, vdim=2)

Correct behaviour: AssertionError: embed_dim must be divisible by num_heads
Incorrect behaviour with -O: Input will be silently accepted.

import torch
import torch.nn as nn

rnn = nn.RNNCell(10, 20)
input = torch.ones((6, 3, 10, 5))
hx = torch.randn(3, 20)
output = []
for i in range(6):
    hx = rnn(input[i], hx)

Correct behaviour with helpful error message: AssertionError: RNNCell: Expected input to be 1-D or 2-D but received 3-D tensor
Incorrect behaviour, with a less useful error message: RuntimeError: input has inconsistent input_size: got 3 expected 10
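The mechanism behind all three examples is the same: under -O / -OO the interpreter sets __debug__ to False and compiles assert statements out entirely, so any assert-based check silently disappears. A minimal sketch (nothing PyTorch-specific here, just standard Python behaviour):

```python
import sys

def check_input(ok: bool) -> None:
    # Skipped entirely under `python -O` / `python -OO`:
    assert ok, "validation failed"

# Under normal execution: optimize level 0, __debug__ is True.
# Under `python -O`: optimize level 1, __debug__ is False, and the
# assert above never runs, so check_input(False) would not raise.
print(f"optimize level: {sys.flags.optimize}, __debug__: {__debug__}")
```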

Using the -O flag is ultimately the user's responsibility, and the user should be aware of the potential problems it can cause with PyTorch or anything else. However, since it is known that PyTorch uses assertions to raise errors, the user could be informed of this to avoid confusion.

Someone using this flag who is unaware of this behaviour will potentially miss a few errors (leading to funny results?) or will not get meaningful error messages.

Maybe important errors that sit behind asserts could be accompanied by (or replaced with) an exception, or a warning could be shown to the user if they have this flag enabled and import torch.
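A minimal sketch of the warn-on-import idea. This is not how PyTorch is implemented; it is a hypothetical snippet for something like torch/__init__.py, using only the stdlib attribute sys.flags.optimize (which reports the -O / -OO level):

```python
import sys
import warnings

# Hypothetical: warn once at import time if Python was started with
# -O or -OO, since assert-based input checks are stripped in that mode.
if sys.flags.optimize > 0:
    warnings.warn(
        "Python optimizations (-O/-OO) are enabled: assert-based "
        "input validation is skipped in this mode.",
        RuntimeWarning,
        stacklevel=2,
    )
```

Under a normal interpreter (optimize level 0) this emits nothing, so existing users would be unaffected.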

Versions

PyTorch version: 1.12.0a0+git4d527cd
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27

Python version: 3.9.2 (default, Mar 26 2021, 21:58:27) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.3.0-46-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1050
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False

Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] torch==1.12.0a0+gitffd9608
[pip3] torchvision==0.13.0a0+970ba35

@albanD
Collaborator

albanD commented May 19, 2022

Hi,

Isn't that the whole point of the Python optimize flag: to remove all asserts?
So I think this is expected behavior, no?

@albanD albanD added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module module: python frontend For issues relating to PyTorch's Python frontend labels May 19, 2022
@vitrioil
Contributor Author

Hi,

It is expected, yes. What I meant was that someone using PyTorch might not be aware that a few errors are implemented as assertions and might see unintended behaviour.

Maybe this isn't that important, but I wanted to discuss whether it's worth surfacing this behaviour to the user.
