When running a job that queries a Lightwood model, the job run ends with an error:
error in apply predictor step: [lightwood/home_rentals_model]: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
This happens because Lightwood models need to run on a GPU node. The proposed solution is to detect whether a job references any Lightwood model and, if it does, run the job on a GPU node.
Lightwood can run on both CPU and GPU. The issue here has to do with storing models on one node type and then trying to run them on another, so the fix is to improve Lightwood's model loading procedures. Relevant issue: #1129