
softmax fail #49157

@pas-valkov

Description

🐛 Bug

Calling torch.softmax fails with torch 1.7.0 but works with previous versions.

Traceback (most recent call last):
  File "test.py", line 13, in <module>
    results.append(norm1.norm_text(text1))
  File "/home/pavel/projects/text-to-speech/modules/text_normalization/normalizer.py", line 120, in norm_text
    norm_parts.append(self._norm_string(part[start_point:]))
  File "/home/pavel/projects/text-to-speech/modules/text_normalization/normalizer.py", line 72, in _norm_string
    out = self.model(src, src2tgt)
  File "/home/pavel/projects/text-to-speech/modules/text_normalization/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/test_jit2.py", line 356, in forward
      _126 = torch.__is__(annotate(Optional[int], None), None)
      if _126:
        alphas = torch.softmax(scores0, dim, annotate(Optional[int], None))
                 ~~~~~~~~~~~~~ <--- HERE
      else:
        dtype = ops.prim.unchecked_unwrap_optional(annotate(Optional[int], None))

Traceback of TorchScript, original code (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1230, in forward
        dim = _get_softmax_dim('softmax', input.dim(), _stacklevel)
    if dtype is None:
        ret = input.softmax(dim)
              ~~~~~~~~~~~~~ <--- HERE
    else:
        ret = input.softmax(dim, dtype=dtype)
RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch. 
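For context (this example is mine, not from the original report): in eager mode the two branches shown in the functional.py excerpt above produce the same result when the explicit dtype matches the input's dtype, so the failure appears specific to how the scripted dtype=None branch is handled in 1.7.0.

```python
import torch

# The two branches from torch/nn/functional.py shown above,
# reproduced in eager mode on a plain float32 tensor.
x = torch.randn(3, 4)
a = x.softmax(-1)                       # dtype is None -> input.softmax(dim)
b = x.softmax(-1, dtype=torch.float32)  # explicit dtype -> input.softmax(dim, dtype=dtype)

# Both branches agree in eager mode; per the traceback, only the
# scripted dtype=None path hits the internal assert on 1.7.0.
assert torch.allclose(a, b)
```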

To Reproduce

Steps to reproduce the behavior:

  1. git clone https://github.com/snakers4/russian_stt_text_normalization.git && cd russian_stt_text_normalization
  2. In Python, run:
import torch
from normalizer import Normalizer

norm = Normalizer()
text = 'С 12.01.1943 г. площадь сельсовета — 1785,5 га. С 12.01.1943 г. площадь сельсовета — 1785,5 га.'
result = norm.norm_text(text)
print(result)
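The steps above depend on the external russian_stt_text_normalization repo. As a self-contained sketch (my addition, not confirmed to trigger the assert), the same F.softmax call with an implicit dtype=None can be exercised under TorchScript like this:

```python
import torch
import torch.nn.functional as F

@torch.jit.script
def attend(scores: torch.Tensor) -> torch.Tensor:
    # dtype is omitted, so the scripted code takes the
    # `if dtype is None: ret = input.softmax(dim)` branch
    # flagged in the traceback above.
    return F.softmax(scores, dim=-1)

alphas = attend(torch.randn(2, 5))
print(alphas)
```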

Expected behavior

The normalized text is printed without errors.

Environment

PyTorch version: 1.7.0
Is debug build: True
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Linux Mint 20 (x86_64)
GCC version: (Ubuntu 9.3.0-10ubuntu2) 9.3.0
Clang version: Could not collect
CMake version: version 3.18.2

Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX TITAN X
Nvidia driver version: 455.32.00
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] torch==1.7.0
[conda] Could not collect

Additional context

With torch 1.4.0 and 1.6.0 everything works as expected.

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @mruberry @gmagogsfm

Metadata

Assignees: no one assigned

Labels:

  - module: autograd (related to torch.autograd and the autograd engine in general)
  - module: nn (related to torch.nn)
  - module: viewing and reshaping
  - needs reproduction (actionable steps to reproduce are needed; someone else must confirm the repro)
  - oncall: jit (add this issue/PR to the JIT oncall triage queue)

Status: In progress
Milestone: none