Floating point exception and segfault for empty tensors to BatchNorm2d #29578
Labels
high priority
module: nn
Related to torch.nn
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Bug
Passing intermediate empty tensors to BatchNorm2d results in a floating point exception and a segfault.
To Reproduce
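The original repro snippet did not survive formatting; a minimal sketch that exercises the same code path (a zero-batch tensor through BatchNorm2d, assuming a standard torch install):

```python
import torch
import torch.nn as nn

# Hypothetical minimal repro: feed an empty (batch size 0) tensor
# through BatchNorm2d, as happens with dynamic batch sizes.
bn = nn.BatchNorm2d(3)
bn.eval()  # the crash reportedly still occurs in eval mode
x = torch.empty(0, 3, 4, 4)  # empty batch
y = bn(x)  # floating point exception / segfault on the affected build
print(y.shape)
```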
In the actual model, BatchNorm2d is first trained and then .eval() is called; the segfault still happens.
Expected behavior
No segfault. Ideally please do support empty tensors.
I hit these issues with empty tensors during exporting/tracing, and cannot use jit.script because I want to export to ONNX. Even though exceptions such as the floating point exception above are not great, at least we can try-catch and work around them.
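One possible guard until empty tensors are supported (a hypothetical SafeBatchNorm2d wrapper, not part of the library) is to bypass normalization when the batch is empty:

```python
import torch
import torch.nn as nn

class SafeBatchNorm2d(nn.BatchNorm2d):
    """Hypothetical workaround: skip normalization for empty inputs,
    returning the (empty) input unchanged so downstream shapes stay consistent."""
    def forward(self, x):
        if x.numel() == 0:
            return x  # nothing to normalize; avoid the crashing kernel
        return super().forward(x)

bn = SafeBatchNorm2d(3)
bn.eval()
out = bn(torch.empty(0, 3, 4, 4))
print(tuple(out.shape))  # (0, 3, 4, 4)
```

Note that the data-dependent branch is baked in by jit.trace, so this only helps when the traced example and deployment inputs fall on the same side of the check.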
Environment
PyTorch version: 1.4.0.dev20191023
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 418.40.04
cuDNN version: /usr/local/lib/libcudnn.so.5.1.10
Versions of relevant libraries:
[pip] iristorch==0.0.43
[pip] numpy==1.17.2
[pip] torch==1.4.0.dev20191023
[pip] torchvision==0.5.0a0+558beab
[conda] Could not collect
Additional context
The empty tensor is an intermediate tensor, with dynamic batch size. Related to issue #15343 and #12013
cc @ezyang @gchanan @zou3519 @jerryzh168