
test_conformer_lstm.py bug #37

Closed
upskyy opened this issue Jun 21, 2021 · 0 comments
upskyy commented Jun 21, 2021

Environment info

  • Platform: Windows 10
  • Python version: 3.7
  • PyTorch version (GPU?): PyTorch version : 1.8.1, CUDA version : 10.2
  • Using GPU in script?: GeForce RTX 2080 Ti

Information

Model I am using (ListenAttendSpell, Transformer, Conformer ...): Conformer_lstm

The problem arises when using:

Testing started at 5:22 PM ...
C:\Users\cote\Anaconda3\envs\sc\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition with Anaconda plugin 2019.3.3\plugins\python-ce\helpers\pycharm\_jb_unittest_runner.py" --path C:/Users/cote/PycharmProjects/kospeech2/tests/test_conformer_lstm.py
Launching unittests with arguments python -m unittest C:/Users/cote/PycharmProjects/kospeech2/tests/test_conformer_lstm.py in C:\Users\cote\PycharmProjects\kospeech2\tests


Error
Traceback (most recent call last):
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 628, in run
    testMethod()
  File "C:\Users\cote\PycharmProjects\kospeech2\tests\test_conformer_lstm.py", line 56, in test_beam_search
    prediction = model(DUMMY_INPUTS, DUMMY_INPUT_LENGTHS)["predictions"]
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\conformer_lstm\model.py", line 109, in forward
    return super(ConformerLSTMModel, self).forward(inputs, input_lengths)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_encoder_decoder_model.py", line 125, in forward
    predictions = self.decoder(encoder_outputs, encoder_output_lengths)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\search\beam_search_lstm.py", line 78, in forward
    step_outputs, hidden_states, attn = self.forward_step(inputs, hidden_states, encoder_outputs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 140, in forward_step
    embedded = self.embedding(input_var)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\functional.py", line 1916, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
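This error typically means the decoder's token indices were created on the CPU while the embedding weights had already been moved to the GPU. A minimal sketch of the failure mode and the usual fix (the names `embedding` and `input_var` mirror the traceback but the code is illustrative, not OpenSpeech's actual implementation):

```python
import torch
import torch.nn as nn

# Sketch of the mismatch behind "Input, output and indices must be on the
# current device": nn.Embedding requires its index tensor to live on the
# same device as its weight matrix.
embedding = nn.Embedding(num_embeddings=10, embedding_dim=4)
input_var = torch.tensor([[1, 2, 3]])  # created on CPU by default

# The usual fix: move the indices to the weight's device before the call.
# This is a no-op on CPU and prevents the RuntimeError when the model is
# on CUDA.
input_var = input_var.to(embedding.weight.device)
embedded = embedding(input_var)  # shape: (1, 3, 4)
```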


Error
Traceback (most recent call last):
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 628, in run
    testMethod()
  File "C:\Users\cote\PycharmProjects\kospeech2\tests\test_conformer_lstm.py", line 37, in test_forward
    outputs = model(DUMMY_INPUTS, DUMMY_INPUT_LENGTHS)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\conformer_lstm\model.py", line 109, in forward
    return super(ConformerLSTMModel, self).forward(inputs, input_lengths)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_encoder_decoder_model.py", line 130, in forward
    teacher_forcing_ratio=0.0,
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 220, in forward
    attn=attn,
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 140, in forward_step
    embedded = self.embedding(input_var)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\functional.py", line 1916, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device


Error
Traceback (most recent call last):
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 628, in run
    testMethod()
  File "C:\Users\cote\PycharmProjects\kospeech2\tests\test_conformer_lstm.py", line 103, in test_test_step
    batch=(DUMMY_INPUTS, DUMMY_TARGETS, DUMMY_INPUT_LENGTHS, DUMMY_TARGET_LENGTHS), batch_idx=i
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\conformer_lstm\model.py", line 148, in test_step
    return super(ConformerLSTMModel, self).test_step(batch, batch_idx)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_encoder_decoder_model.py", line 225, in test_step
    teacher_forcing_ratio=0.0,
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 220, in forward
    attn=attn,
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 140, in forward_step
    embedded = self.embedding(input_var)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\functional.py", line 1916, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device


Error
Traceback (most recent call last):
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 628, in run
    testMethod()
  File "C:\Users\cote\PycharmProjects\kospeech2\tests\test_conformer_lstm.py", line 71, in test_training_step
    batch=(DUMMY_INPUTS, DUMMY_TARGETS, DUMMY_INPUT_LENGTHS, DUMMY_TARGET_LENGTHS), batch_idx=i
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\conformer_lstm\model.py", line 122, in training_step
    return super(ConformerLSTMModel, self).training_step(batch, batch_idx)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_encoder_decoder_model.py", line 177, in training_step
    target_lengths=target_lengths,
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_encoder_decoder_model.py", line 105, in collect_outputs
    "learning_rate": self.get_lr(),
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_model.py", line 215, in get_lr
    for g in self.optimizer.param_groups:
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 948, in __getattr__
    type(self).__name__, name))
AttributeError: 'ConformerLSTMModel' object has no attribute 'optimizer'
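The traceback suggests `self.optimizer` is only set when the model is run under a Lightning `Trainer`, so calling `training_step` directly in a bare unit test hits `nn.Module.__getattr__` and raises `AttributeError`. One possible workaround is a guarded lookup in `get_lr`; the class and default value below are hypothetical, not the project's actual API:

```python
import torch

class ModelSketch(torch.nn.Module):
    """Illustrative stand-in for a model whose optimizer is attached later."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(2, 2)

    def get_lr(self):
        # getattr with a default avoids nn.Module.__getattr__ raising
        # AttributeError when no optimizer has been configured yet
        # (e.g. when training_step is called outside a Trainer).
        optimizer = getattr(self, "optimizer", None)
        if optimizer is None:
            return 0.0  # hypothetical fallback value
        for g in optimizer.param_groups:
            return g["lr"]

model = ModelSketch()
lr_before = model.get_lr()  # 0.0: no optimizer attached yet

model.optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lr_after = model.get_lr()  # 0.1: read from the attached optimizer
```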


Error
Traceback (most recent call last):
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Users\cote\Anaconda3\Lib\unittest\case.py", line 628, in run
    testMethod()
  File "C:\Users\cote\PycharmProjects\kospeech2\tests\test_conformer_lstm.py", line 87, in test_validation_step
    batch=(DUMMY_INPUTS, DUMMY_TARGETS, DUMMY_INPUT_LENGTHS, DUMMY_TARGET_LENGTHS), batch_idx=i
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\conformer_lstm\model.py", line 135, in validation_step
    return super(ConformerLSTMModel, self).validation_step(batch, batch_idx)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\models\openspeech_encoder_decoder_model.py", line 197, in validation_step
    teacher_forcing_ratio=0.0,
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 220, in forward
    attn=attn,
  File "C:\Users\cote\PycharmProjects\kospeech2\openspeech\decoders\lstm_attention_decoder.py", line 140, in forward_step
    embedded = self.embedding(input_var)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\cote\Anaconda3\envs\sc\lib\site-packages\torch\nn\functional.py", line 1916, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device


Assertion failed


Ran 5 tests in 24.698s

FAILED (errors=5)

Process finished with exit code 1

@upskyy upskyy added BUG Something isn't working WIP ongoing labels Jun 21, 2021
@upskyy upskyy added this to In progress in Scrum Jun 21, 2021
@upskyy upskyy closed this as completed Jul 16, 2021
Scrum automation moved this from In progress to DONE Jul 16, 2021
@upskyy upskyy added DONE and removed WIP ongoing BUG Something isn't working labels Jul 16, 2021