While trying to fix warnings, I stumbled upon a new warning that only appears since PyTorch 1.8:

UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
The warning is triggered in the basic and transformer tests:
platalea/basic.py:39: in cost
speech_enc, image_enc = self.forward(item['audio'], item['audio_len'], item['image'])
platalea/basic.py:34: in forward
speech_enc = self.SpeechEncoder(audio, audio_len)
../../../sw/miniconda3/envs/platalea/lib/python3.8/site-packages/torch/nn/modules/module.py:914: in _call_impl
self._maybe_warn_non_full_backward_hook(input, result, grad_fn)
(Note: I added the forward function in SpeechImage as a test; this call to SpeechEncoder is the actual troublemaker, and it was previously in SpeechImage.cost().)
I frankly have no idea what a backward hook is, let alone a non-full one. Does anyone have a clue?
My best guess is that it is related to the encoder having multiple inputs and outputs, with some autograd connection missing somewhere. I searched the other models/experiments (asr, mtl) for hints in this direction, but couldn't really find anything.
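For context, here is my understanding of what a backward hook is: a function PyTorch calls during the backward pass with the gradients flowing out of and into a module. The warning says the old-style `register_backward_hook` may report incomplete `grad_input` when a module's forward pass is built from several autograd nodes (as a speech encoder with multiple layers would be), and recommends the new-style `register_full_backward_hook` instead. A minimal sketch, using a toy model rather than the actual platalea SpeechEncoder:

```python
import torch
import torch.nn as nn

# A module whose forward pass contains multiple autograd nodes (two linear
# layers plus a ReLU) -- the situation the PyTorch 1.8 warning is about.
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))

captured = {}

def hook(module, grad_input, grad_output):
    # grad_output: gradients w.r.t. the module's outputs
    # grad_input: gradients w.r.t. the module's inputs
    captured["grad_input"] = grad_input
    captured["grad_output"] = grad_output

# New-style "full" hook: guaranteed to see all grad_input tensors even
# when the forward consists of several autograd nodes. The deprecated
# model.register_backward_hook(hook) would trigger the UserWarning here.
model.register_full_backward_hook(hook)

x = torch.randn(5, 4, requires_grad=True)
model(x).sum().backward()

print(captured["grad_output"][0].shape)  # torch.Size([5, 2])
print(captured["grad_input"][0].shape)   # torch.Size([5, 4])
```

Note that the warning can also be triggered indirectly: if any library code (or our own) attaches an old-style hook to a multi-node module such as SpeechEncoder, PyTorch emits it at call time in `_call_impl`, which matches the traceback above.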