
GPU Acceleration? #32

Closed
lim-xem opened this issue Sep 11, 2018 · 1 comment
lim-xem commented Sep 11, 2018

Hi!

Thanks for this, especially the sample code. I'm new here and this has been a great introduction to EEG and deep neural networks.

Does this code make use of the GPU, e.g. via CUDA/cuDNN? I have a mid-range Nvidia GPU in my laptop, and when running previous deep learning programmes with PyTorch in a Jupyter Notebook, I was able to speed up training by roughly 5× using CUDA.

It doesn't seem like this code does, but I'd just like to check with you because I don't have much experience! Thanks.

Best regards,
Michelle


lim-xem commented Sep 13, 2018

I've figured it out. If you install Theano 1.0 (not 0.8.2) and the CUDA drivers etc. following the Theano documentation, i.e. differently from what the README indicates, GPU acceleration is automatic!
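For anyone else landing here: with Theano 1.0, the Theano documentation describes enabling the GPU either per-run via the `THEANO_FLAGS` environment variable or persistently via a `~/.theanorc` file. A minimal sketch (the script name `train.py` is a placeholder for whatever you are running):

```shell
# Option 1: one-off, per run (train.py is a placeholder script name)
THEANO_FLAGS='device=cuda,floatX=float32' python train.py

# Option 2: persistent, via the Theano config file
cat > ~/.theanorc <<'EOF'
[global]
device = cuda
floatX = float32
EOF
```

Note that `device=cuda` is the Theano 1.0 (libgpuarray backend) spelling; older 0.x releases used `device=gpu`. On startup Theano prints a line like "Using cuDNN version ..." when the GPU backend is active, which is a quick way to confirm it worked.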

@lim-xem lim-xem closed this as completed Sep 13, 2018