Throw warning if python optimise flags are enabled #77869
Labels
module: python frontend
For issues relating to PyTorch's Python frontend
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Currently PyTorch will not throw any obvious warning or error when the PYTHONOPTIMIZE (`-O` and `-OO`) flags are used. [#76619, #76034, #76659, #60953] This might suggest that behaviour is consistent whether or not the flags are enabled.
However, this is not true: assertions are currently used for input checks and for raising error messages, and the optimize flags strip `assert` statements entirely.
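The underlying mechanism can be demonstrated without torch at all; this plain-Python sketch runs the same snippet with and without `-O` in a subprocess:

```python
# Minimal demonstration (no torch needed): `assert`-based checks vanish
# when the interpreter runs with -O, because the optimizer compiles them out.
import subprocess
import sys

SNIPPET = "assert 2 + 2 == 5, 'this check should fail'"

# Normal run: the assert fires and the process exits non-zero.
normal = subprocess.run([sys.executable, "-c", SNIPPET])

# Optimized run: the assert is stripped, so the bad input passes silently.
optimized = subprocess.run([sys.executable, "-O", "-c", SNIPPET])

print(normal.returncode)     # non-zero: AssertionError raised
print(optimized.returncode)  # 0: check skipped entirely
```

This is exactly the failure mode in the PyTorch examples below: the same `assert`-guarded validation that raises under `python` is a no-op under `python -O`.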
Examples (run first with `python`, then run again with `python -O` or `python -OO`):

Correct behaviour will throw:
```
AssertionError: Softmax2d requires a 4D tensor as input
```
Incorrect behaviour with `-O`: the input is silently accepted.

Similarly, correct behaviour:
```
AssertionError: embed_dim must be divisible by num_heads
```
Incorrect behaviour with `-O`: the input is silently accepted.

Correct behaviour with a helpful error message:
```
AssertionError: RNNCell: Expected input to be 1-D or 2-D but received 3-D tensor
```
Incorrect behaviour with a less useful error message:
```
RuntimeError: input has inconsistent input_size: got 3 expected 10
```
Someone using these flags who is unaware of this behaviour will potentially miss errors entirely (leading to silently incorrect results?) or will get less meaningful error messages.
Important checks that sit behind asserts could be accompanied by, or replaced with, explicit exceptions; alternatively, a warning could be shown to the user at `import torch` time if one of these flags is enabled.
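Both suggested mitigations are straightforward; here is a hedged sketch (the function names and messages are illustrative, not actual PyTorch API):

```python
# Hypothetical sketch of the two mitigations proposed above.
import sys
import warnings

def check_input_dim(ndim: int) -> None:
    # An explicit exception survives -O/-OO, unlike an `assert` statement.
    if ndim != 4:
        raise ValueError(
            f"Softmax2d requires a 4D tensor as input, got {ndim}-D"
        )

def warn_if_optimized() -> None:
    # Could run once at import time: sys.flags.optimize is 1 under -O
    # and 2 under -OO (and __debug__ is False under either flag).
    if sys.flags.optimize > 0:
        warnings.warn(
            "Python was started with -O/-OO; assert-based input checks "
            "in this library are disabled.",
            RuntimeWarning,
        )
```

The exception-based check is the more robust fix, since a one-time import warning can still scroll past unnoticed in long logs.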
Versions
PyTorch version: 1.12.0a0+git4d527cd
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.2 (default, Mar 26 2021, 21:58:27) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.3.0-46-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1050
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] torch==1.12.0a0+gitffd9608
[pip3] torchvision==0.13.0a0+970ba35