I attempted to use torch.nn.Module instances inside a keras_core.Model without wrapping them in TorchModuleWrapper (assuming the wrapping is applied behind the scenes). However, when I pass a torch.cuda.FloatTensor to the Model, I get the following error:
RuntimeError: Exception encountered when calling Classifier.call().
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Arguments received by Classifier.call():
• inputs=torch.Tensor(shape=torch.Size([1, 1, 28, 28]), dtype=float32)
This error is not encountered when the trainable modules are explicitly wrapped in TorchModuleWrapper.
Notebook to reproduce the issue: https://colab.research.google.com/drive/1UO8uY86Ff-lNq5_7vKeLAb5sK99UFDWJ?usp=sharing
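For context, here is a minimal sketch of the two setups (the Classifier name mirrors the traceback above; the layer size and input shape are illustrative, and TorchModuleWrapper is assumed to be exposed as keras_core.layers.TorchModuleWrapper):

```python
import torch
import keras_core as keras

class Classifier(keras.Model):
    def __init__(self, wrap=True):
        super().__init__()
        module = torch.nn.Conv2d(1, 32, kernel_size=3)
        # Explicit wrapping works; relying on autowrapping (wrap=False)
        # reproduces the device-mismatch error above.
        self.conv = keras.layers.TorchModuleWrapper(module) if wrap else module

    def call(self, inputs):
        return self.conv(inputs)

x = torch.ones(1, 1, 28, 28).to("cuda")  # a torch.cuda.FloatTensor
Classifier(wrap=True)(x)   # runs
Classifier(wrap=False)(x)  # RuntimeError: Input type (torch.cuda.FloatTensor) and weight type ...
```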
Thanks for the report. It is surprising that there would be any difference between the two, because the autowrapping doesn't do anything besides creating the TorchModuleWrapper.
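To make that concrete, the autowrapping should amount to roughly the following (a hypothetical sketch, not the actual keras_core attribute-tracking code):

```python
import torch
import keras_core as keras

def maybe_wrap(value):
    # Sketch: when a raw torch.nn.Module is assigned as a model attribute,
    # wrap it in a TorchModuleWrapper; anything else passes through untouched.
    if isinstance(value, torch.nn.Module) and not isinstance(
        value, keras.layers.TorchModuleWrapper
    ):
        return keras.layers.TorchModuleWrapper(value)
    return value
```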
What happens if you place the module on device before setting it as a model attribute?
Hi @fchollet
If the modules are placed on the device while being set as model attributes, the trainable variables do not appear to be tracked.
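For reference, a sketch of that attempt and the tracking check (illustrative names and shapes, assuming the torch backend on a CUDA machine):

```python
import torch
import keras_core as keras

class Classifier(keras.Model):
    def __init__(self):
        super().__init__()
        # Module moved to the GPU at assignment time, without explicit wrapping.
        self.conv = torch.nn.Conv2d(1, 32, kernel_size=3).to("cuda")

    def call(self, inputs):
        return self.conv(inputs)

model = Classifier()
model(torch.ones(1, 1, 28, 28).to("cuda"))  # the forward pass now succeeds, but...
print(len(model.trainable_weights))         # ...reportedly 0: the conv parameters are not tracked
```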