GPUs defaulting to 0 in Timeseries forecasting #3863
My Windows machine (an Anaconda environment with AutoGluon installed) reports GPU=1, but when I run the forecasting model it defaults to GPU=0. Any idea what's going on? I'm out of ideas. The training log shows:
```
Fitting with arguments:
train_data contains missing values represented by NaN. They have been filled by carrying forward the last valid observation.
Provided dataset contains following columns:
AutoGluon will gauge predictive performance using evaluation metric: 'MASE'
```
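For anyone debugging the same symptom, here is a minimal sketch (not from the original post; it assumes AutoGluon's usual PyTorch backend) for confirming from inside the same Anaconda environment that the GPU is actually visible:

```python
# Sketch: confirm the GPU is visible to PyTorch, which AutoGluon's
# deep learning forecasting models (TFT, PatchTST, DeepAR) train on.
import torch

print(torch.cuda.is_available())   # True if a CUDA-capable GPU is visible
print(torch.cuda.device_count())   # should report 1 on this machine
if torch.cuda.is_available():
    # Name of the first GPU, e.g. "NVIDIA GeForce RTX 3080"
    print(torch.cuda.get_device_name(0))
```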
Replies: 1 comment 1 reply
Hi, this is a mistake in the debug logging. If you see `GPU Count: 1` in the system info printed at the start of AutoGluon training, then all models that support the GPU (i.e., the deep learning models such as TFT, PatchTST, and DeepAR) will use it. You can verify this by checking the GPU utilization during training.

You can safely ignore the debug message `Fitting SeasonalNaive with 'num_gpus': 0, 'num_cpus': 12`; it's logged by mistake and we will try to fix it soon.
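As a hedged illustration of the verification step mentioned above (not part of the original reply; it assumes an NVIDIA GPU with the standard `nvidia-smi` tool installed), GPU utilization can be polled while `fit()` runs in another process:

```python
# Sketch: poll live GPU utilization and memory use via nvidia-smi while an
# AutoGluon model such as TFT or DeepAR trains in another terminal.
# Nonzero utilization during the deep learning models' fit confirms GPU use.
import subprocess
import time

for _ in range(10):
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # e.g. "87 %, 2048 MiB" during training
    time.sleep(5)
```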