
Runs very slow on CUDA #15

Closed
MostlyHarmless420 opened this issue Jan 3, 2019 · 1 comment

Comments


MostlyHarmless420 commented Jan 3, 2019

I've been experimenting with torchdiffeq on Google Colab, but I've found that running on CUDA is much slower than running on the CPU. Colab apparently provides a K80, so I expected the GPU to be much faster.

I created a Colab notebook with a simple benchmark, using modified code from ode_demo.py. With that code I get an average of 1.72 seconds for a forward & backward pass on the GPU, compared to 0.32 seconds on the CPU. colab benchmark
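The linked notebook isn't shown here, but the timing methodology matters for a comparison like this. Below is a minimal sketch of a benchmark harness, assuming stdlib-only timing with a placeholder workload standing in for the real `odeint` forward & backward pass; the warm-up and synchronization notes are the parts that commonly skew GPU numbers.

```python
import time

def dummy_solve():
    # Placeholder workload. In the real benchmark this would be a
    # forward pass through torchdiffeq.odeint(func, y0, t) followed
    # by loss.backward() (hypothetical names, not shown in the issue).
    s = 0.0
    for i in range(10000):
        s += i * 1e-6
    return s

def avg_seconds(fn, warmup=2, reps=5):
    # Warm-up iterations exclude one-time costs (CUDA context
    # creation, memory-allocator warm-up) from the measurement.
    for _ in range(warmup):
        fn()
    # NOTE: on a real GPU, call torch.cuda.synchronize() before each
    # perf_counter() read -- CUDA kernels launch asynchronously, so
    # unsynchronized wall-clock readings can be misleading.
    start = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - start) / reps

print(f"avg: {avg_seconds(dummy_solve):.6f} s")
```

Even with correct timing, the GPU can legitimately lose on tiny problems, which is the point the reply below makes.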

rtqichen (Owner) commented Jan 3, 2019

This is quite normal, since such small problems aren't compute-bound. A more thorough breakdown of the cost would be more meaningful, but the default solver is a long sequence of small ops and control flow, which can be slow on a GPU for small or medium sized problems.
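To illustrate the structure behind that explanation: the real default solver (adaptive dopri5) is more complex, but even a toy fixed-step RK4 shows why an ODE solve is a chain of many small, strictly sequential operations. On a GPU each of these tiny ops is a separate kernel launch, and launch overhead dominates when the state is small; this is a pure-Python sketch, not torchdiffeq's implementation.

```python
import math

def rk4(f, y0, t0, t1, steps):
    """Toy fixed-step RK4 integrator for a scalar ODE dy/dt = f(t, y)."""
    h = (t1 - t0) / steps
    y, t = y0, t0
    evals = 0
    for _ in range(steps):
        # Each step is four dependent evaluations of f plus a handful
        # of small arithmetic ops -- on a GPU every one of these would
        # be its own kernel launch, and none of them can overlap.
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        evals += 4
    return y, evals

# Integrating dy/dt = -y from y(0) = 1 over [0, 1] with 100 steps
# already means 400 strictly sequential dynamics evaluations.
y, evals = rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(y, evals)
```

With a state of only a few dimensions, each of those ops is far too small to saturate a GPU, so the per-op overhead outweighs any parallel speedup; the CPU wins until the ODE state (or batch) is large.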

rtqichen closed this as completed Jan 3, 2019