OpenCL status #2936

Open
nouiz opened this Issue May 20, 2015 · 5 comments

nouiz (Member) commented May 20, 2015

This issue is to track the OpenCL status.

Current status: not usable by end users. We need someone to fix the crashes and add a few missing ops. We don't need OpenCL ourselves, so it is low on our priority list; we need someone from the community to step up and finish this.

Details:

  • The back-end supports it.
  • Too many ops have not been converted from CUDA to OpenCL for it to be usable.
  • Some crashes happen.

The list of ops that support OpenCL is in #1471, but do not rely on the check marks: read each line, and if "opencl" is written there, the op supports it; otherwise, it does not.

TODO: document how to do the conversion, so that people outside the core team (who don't need it themselves) can do it.

To use this partial version (DON'T DO THAT):
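The original instructions are elided above; as a minimal, hedged sketch, assuming the libgpuarray (gpuarray) back-end and its pygpu bindings are installed, device selection would look roughly like this. The device string convention `opencl<platform>:<device>` comes from libgpuarray; treat the exact flags as an assumption, not the issue's original text.

```python
# Hedged sketch, NOT the issue's elided instructions: selecting an OpenCL
# device through the libgpuarray (gpuarray) back-end. The device string
# "opencl<platform>:<device>" is libgpuarray's naming convention.
import os

# Theano reads THEANO_FLAGS at import time, so set it before importing theano.
os.environ["THEANO_FLAGS"] = "device=opencl0:0,floatX=float32"

import theano
import theano.tensor as T

x = T.vector("x")
f = theano.function([x], x * 2)  # expect missing ops and crashes (see above)
print(f([1.0, 2.0, 3.0]))
```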


mdda (Contributor) commented Jul 3, 2015

Back in November 2014, I was enthusiastic to work hard on the OpenCL functionality, but was told that I should wait for "Add a mechanism to use more than one (gpu) context in a single theano function" (#2182) to be resolved, because it was a pretty far-ranging change.

Since that PR still doesn't seem to have been resolved, what should I do? I had wanted to spend December (2014) fixing this full time (and had a couple of preparatory libgpuarray PRs accepted)... but obviously the time slipped by.


nouiz (Member) commented Jul 3, 2015

The multi-GPU work has advanced enough that working on the OpenCL back-end is now possible, even though the multi-GPU support isn't merged into Theano's master. Some details: libgpuarray now has a good interface for writing code that works under both CUDA and OpenCL, and libgpuarray itself supports multi-GPU. We had one PR that brought this to Theano, but it diverged too much from master. We have already taken chunks of that PR and merged them separately into Theano, but not the multi-GPU core.

So, for the most part, it is now possible for people to work on OpenCL.

Just two notes. 1) For the next 2-3 weeks we will be pretty busy, so we won't be able to reply quickly or give detailed help, but you can start/continue in the meantime. 2) A few other people have already started a little on this; check the Theano mailing list for OpenCL threads to find them. I think we need to coordinate more to prevent duplicate work.

To help with that, I think it would be useful to make a table where each row is an op and the columns are: old back-end (Python or C, telling which implementation exists), new back-end CUDA (Python or C), and new back-end OpenCL (Python or C).

This would give us the information we need. If someone starts working on one of the ops, they can just put their name in the corresponding box.

Do you want to make a Google Doc of this? You could make it publicly readable and give write access to the people working on it. The first list could be a conversion of the information in gh-1471.
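As an illustration only (the row and its entries are hypothetical placeholders, not data from gh-1471), the proposed tracking table might look like:

Op           | Old back-end | New back-end (CUDA) | New back-end (OpenCL) | Claimed by
(example op) | C            | C                   | missing               | (your name)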


viper7882 commented Jun 20, 2017

Hi @nouiz,

Hugh Perkins has created Coriander, which can run NVIDIA® CUDA™ code on OpenCL 1.2 devices. You might want to take a look and see whether it suits your need to connect your deep learning software to OpenCL 1.2 devices. Kindly credit him and his contribution if you plan to use his work.

mratsim referenced this issue Sep 25, 2017: OpenCL #69 (closed)


abitrolly commented Nov 16, 2017

> To help with that, I think it would be useful to make a table where each row is an op and the columns are: old back-end (Python or C, telling which implementation exists), new back-end CUDA (Python or C), and new back-end OpenCL (Python or C).

Did somebody come up with such a table?

nouiz (Member) commented Nov 16, 2017
