I've been experimenting with torchdiffeq on Google Colab, but have found that running on CUDA is much slower than running on the CPU. Colab apparently runs on a K80, so I expected the GPU to be considerably faster.
I created a Colab notebook with a simple benchmark, using modified code from ode_demo.py. With that code I get an average of 1.72 seconds for a forward & backward pass on the GPU, compared to 0.32 seconds on the CPU. colab benchmark
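For reference, here is a minimal sketch of the kind of timing loop the benchmark runs. The ODEFunc and the tensor sizes below are stand-ins adapted from ode_demo.py, not the exact notebook code; the `torch.cuda.synchronize()` calls matter, since CUDA timings are unreliable without them:

```python
import time

import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    # Small MLP vector field, adapted from ode_demo.py (layer sizes are assumptions).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(), nn.Linear(50, 2))

    def forward(self, t, y):
        return self.net(y ** 3)


def benchmark(device, n_iters=10):
    func = ODEFunc().to(device)
    y0 = torch.randn(20, 2, device=device, requires_grad=True)
    t = torch.linspace(0.0, 25.0, 100, device=device)

    # Warm-up pass so one-time CUDA initialization isn't counted.
    odeint(func, y0, t).sum().backward()
    if device.type == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(n_iters):
        loss = odeint(func, y0, t).sum()  # forward: default dopri5 solver
        loss.backward()                   # backward through the solve
        if device.type == "cuda":
            # CUDA kernel launches are asynchronous; sync before reading the clock.
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters


print("cpu :", benchmark(torch.device("cpu")))
if torch.cuda.is_available():
    print("cuda:", benchmark(torch.device("cuda")))
```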
This is quite normal, since such small problems aren't compute-bound. A more thorough breakdown of the cost would be more meaningful, but the default solver executes a long sequence of small ops and Python-level control flow per step, which can be slow on the GPU for small or medium-sized problems.
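One way to see this (a sketch, not measured numbers from this thread): grow the per-step tensor work and the gap closes, since the fixed per-step launch overhead gets amortized. The network and batch sizes below are arbitrary assumptions:

```python
import time

import torch
import torch.nn as nn
from torchdiffeq import odeint


def time_solve(device, batch, dim=2, hidden=50, n_iters=5):
    net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                        nn.Linear(hidden, dim)).to(device)
    func = lambda t, y: net(y)  # a plain callable works for (non-adjoint) odeint
    y0 = torch.randn(batch, dim, device=device)
    t = torch.linspace(0.0, 1.0, 10, device=device)

    odeint(func, y0, t)  # warm-up so initialization isn't timed
    if device.type == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(n_iters):
        odeint(func, y0, t)
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for queued kernels before timing
    return (time.perf_counter() - start) / n_iters


# As the batch grows, per-step math dominates and the GPU should catch up.
for batch in (20, 2_000, 200_000):
    row = [f"batch={batch:>7d}", f"cpu {time_solve(torch.device('cpu'), batch):.4f}s"]
    if torch.cuda.is_available():
        row.append(f"cuda {time_solve(torch.device('cuda'), batch):.4f}s")
    print("  ".join(row))
```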