Low GPU occupancy when fitting DeepAREstimator #1431
-
When fitting DeepAREstimator models, I have the impression my GPU (GTX1080) is pretty under-utilized. For example, when running this gist, GPU occupancy sits somewhere between 10 and 15%. Could this be due to the particular setting in the gist (a training dataset made of a single large time series)? When profiling, I see fairly large GPU idle times. For example, would it be possible to allow multiple data-loading workers in gluon-ts (as is common in other deep learning frameworks such as TensorFlow)?
-
Hi @pbruneau, thank you for your questions. In many cases there is no particular gain from running DeepAR on a GPU (other than with large batch sizes and/or a large hidden cell size), because the computation of the RNN is inherently sequential and there is not much to parallelize. We have already moved to multi-threaded data loaders, but there is a limit to how much a GPU can speed up DeepAR (although there is certainly room for improvement).
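For reference, here is a minimal sketch of the knobs mentioned above (batch size, hidden cell size, data-loading workers), assuming the mxnet-based gluonts API from around the time of this discussion. The exact location of parameters such as `batch_size` and `num_workers` has shifted between `Trainer`, the estimator, and `train()` across versions, so treat this as illustrative rather than definitive:

```python
import mxnet as mx

from gluonts.dataset.common import ListDataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.trainer import Trainer

# Toy dataset: a single long hourly series, similar in spirit to the gist.
training_data = ListDataset(
    [{"start": "2021-01-01 00:00:00",
      "target": [float(i % 24) for i in range(1000)]}],
    freq="H",
)

# A larger hidden state and batch size give the GPU more work per kernel
# launch, which is the main lever for occupancy given that the RNN's
# per-time-step computation is inherently sequential.
estimator = DeepAREstimator(
    freq="H",
    prediction_length=24,
    num_layers=2,
    num_cells=200,          # larger hidden cell size (default is much smaller)
    trainer=Trainer(
        ctx=mx.gpu(),       # train on the GPU
        epochs=10,
        batch_size=256,     # larger batches amortize launch overhead
    ),
)

# `num_workers` enables the multi-threaded data loading mentioned above;
# this keyword is an assumption and may differ by gluonts version.
predictor = estimator.train(training_data, num_workers=4)
```

Even with these settings, profiling will likely still show idle gaps between time steps of the RNN; the settings mainly reduce data-loading stalls and make each step's matrix multiplies large enough to keep the GPU busy.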