Thanks for the repository!

Is it currently possible to use batch sizes greater than 1? I hit an error because some dataset elements are ragged (different dimensionality). Is this expected behaviour?

Similarly, I run into issues when enabling CUDA. Does that functionality work on your end?
To enable batch sizes > 1 without error, add the extra filtering condition `l.size > 0` (where `l` is a label) in `make_dataset.py`. For some reason, in a couple of cases the label is an empty NumPy array instead of the expected 2-element array, which breaks the torch stacking of the batch mid-training.
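A minimal sketch of that filter, assuming the dataset is built as a list of `(features, label)` pairs with NumPy-array labels (the names and data here are hypothetical, not from the repo):

```python
import numpy as np

def keep_example(label):
    """Drop samples whose label is an empty array; these break
    torch batch collation (stacking) when batch_size > 1."""
    return label.size > 0

# Hypothetical filtering step inside make_dataset.py:
samples = [
    (np.zeros(4), np.array([0.5, 1.0])),  # valid 2-element label: kept
    (np.zeros(4), np.array([])),          # empty label: filtered out
]
samples = [(x, l) for (x, l) in samples if keep_example(l)]
```

With the empty-label examples removed, every label in a batch has the same shape, so the default collate can stack them.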
For CUDA, modify the `forward` function in `model.py` to move the maskers to the correct device on each call. There is probably a cleaner way to do this.
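A sketch of that workaround, with a hypothetical stand-in for the repo's model (the class and attribute names are assumptions, not the actual `model.py`):

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Hypothetical stand-in for the repo's model with a masker tensor."""

    def __init__(self, dim=4):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Plain tensor attribute: NOT moved by model.cuda()/model.to(device),
        # which only relocates parameters and registered buffers.
        self.masker = torch.ones(dim)

    def forward(self, x):
        # Workaround: move the masker to whatever device the input is on,
        # avoiding a CPU/CUDA device-mismatch error each forward pass.
        self.masker = self.masker.to(x.device)
        return self.linear(x) * self.masker
```

A cleaner fix is to register the masker as a buffer in `__init__` (`self.register_buffer("masker", torch.ones(dim))`), so `model.to(device)` moves it together with the parameters and no per-call transfer is needed.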