Additional info: When I run my fitted model, I get the following results:
Thus, for some reason my tabular_model object is not utilizing the GPU for inference.
I am having the following problem after training my DANet model. When I reload the model after saving it with the `load_model()` method, my GPU shows almost no usage, so the loaded model is being run on the CPU. Moreover, unlike other PyTorch models, I cannot call `model.to(cuda_device)`, because the `to()` method is not defined on the wrapper object. Please advise how I can ensure that my loaded model will use the GPU. Thanks in advance.
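One hedged workaround, assuming the `TabularModel` wrapper stores its underlying PyTorch module in a `.model` attribute (as PyTorch Tabular does): move that inner module to the device directly with the standard `nn.Module.to()` call. The helper name `move_to_device` below is illustrative, not part of the library's API.

```python
import torch

def move_to_device(tabular_model, device=None):
    """Move the inner PyTorch module of a loaded wrapper to `device`.

    Assumes the wrapper exposes the network as `tabular_model.model`
    (true for pytorch_tabular's TabularModel); falls back to CPU when
    CUDA is unavailable.
    """
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    tabular_model.model.to(torch.device(device))
    return tabular_model
```

After calling this, you can confirm placement by inspecting a parameter, e.g. `next(tabular_model.model.parameters()).device`. Note that input tensors passed for inference must live on the same device as the model's parameters.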