The `inputs.to(self.device)` calls in ConformerConvmodule and FeedForwardModule cause the network graph in TensorBoard to fork and look rather messy. Is there a special reason to write it that way? In most cases both the model and the tensor have already been sent to the device before the tensor is fed into the model, so no further `.to()` call should be needed.
In addition, it breaks things when running on a machine without a GPU. Even if you call `model.cpu()`, that doesn't change the `device` attribute on the module and its submodules, so when execution hits an `inputs.to(self.device)` it tries to move the tensor to cuda (the default value for `device`) and crashes.
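A minimal sketch of the failure mode described above, with hypothetical module names (not the actual classes from this repo): a module that freezes a `device` argument at construction keeps pointing at cuda even after `model.cpu()`, whereas deriving the device from the module's own parameters, or simply not moving the input at all, stays consistent with wherever the model actually lives.

```python
import torch
import torch.nn as nn

class FragileModule(nn.Module):
    """Mirrors the pattern under discussion: device frozen at __init__."""
    def __init__(self, device="cuda"):
        super().__init__()
        self.device = device          # model.cpu() will NOT update this
        self.linear = nn.Linear(4, 4)

    def forward(self, inputs):
        # On a CPU-only machine this .to("cuda") call raises an error.
        inputs = inputs.to(self.device)
        return self.linear(inputs)

class RobustModule(nn.Module):
    """Derives the device from its own parameters at call time."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, inputs):
        # Follows wherever model.to(...) / model.cpu() actually moved us;
        # dropping the .to() entirely (caller places the tensor) also works.
        inputs = inputs.to(self.linear.weight.device)
        return self.linear(inputs)

model = RobustModule().cpu()
out = model(torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 4])
```

Removing the `.to()` call altogether is arguably cleaner still, since it also avoids the extra node that forks the TensorBoard graph.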