motive for DeepCL #68

Closed · NEELMCW opened this issue May 16, 2016 · 27 comments

@NEELMCW

NEELMCW commented May 16, 2016

Just curious to know what DeepCL is for? Does it stand as a counterpart to NVIDIA's cuDNN library?

@Dexdev08

For me, I was looking for an OpenCL deep learning library. How did you stumble upon the project?

@hughperkins
Owner

@NEELMCW It's a neural network framework, similar to Caffe, but targeted specifically at OpenCL.

@NEELMCW
Author

NEELMCW commented May 16, 2016

How different is DeepCL compared to cltorch, clnn, etc. of yours, and OpenCL-caffe from AMD?
Does it outperform all of these?
Is there an equivalent of cuDNN out there?
I was looking for OpenCL support for TensorFlow; that's how I bumped into this thread.

@hughperkins
Owner

> I was looking for OpenCL support for TensorFlow; that's how I bumped into this thread.

Ah. Well. TensorFlow is compilable with gpucc, per my understanding, and it has pluggable backends to compile as CUDA, OpenCL, etc. Currently I don't think there is a pluggable backend for OpenCL, but Google has one planned, per slides from November. It probably wouldn't be hard to write one.

> Is there an equivalent of cuDNN out there?

Well, right. Even if TensorFlow is built as OpenCL, it uses cuDNN as its convolution library, so one would need to slot in something else instead of that. What hardware(s) are you targeting?

> How different is DeepCL compared to cltorch, clnn, etc. of yours, and OpenCL-caffe from AMD?

Well...

* DeepCL has more stars than clTorch. I think people like the Python and command-line interfaces.
* clTorch has the entire Torch community at its disposal: lots of ready-built network models and so on.
* Caffe, ditto, but for Caffe rather than Torch.

@NEELMCW
Author

NEELMCW commented May 16, 2016

I am targeting AMD compute

@hughperkins
Owner

> I am targeting AMD compute

Ah. Much as it pains me to say it, probably the fastest convolutional backend for AMD right now is by Fabian Tschopp (@naibaf7), since he works almost exclusively with AMD hardware and works closely with the AMD guys. (I made a decision to buy a laptop with an NVIDIA card, which has certain advantages for porting, but obviously sucks for optimizing for AMD :-P)

You can probably get hold of his convolutional backend by raising an issue at:

https://github.com/BVLC/caffe/tree/opencl

@NEELMCW
Author

NEELMCW commented May 16, 2016

Agree with you :-).

Of late I have been working on porting a few of these frameworks, in particular Torch and Caffe, onto AMD's newly promoted HCC language over their new ROCm stack.

You may find my efforts at https://bitbucket.org/multicoreware/hcc_torch. It's still under development though.

@hughperkins
Owner

Yes, HCC does seem to be the future, I concur. I don't think I'll be switching my own projects onto HCC, since OpenCL has its place too. But for AMD hardware, which as far as I know is the main competitor for NVIDIA in the discrete GPU space, HCC will plausibly be the way forward.

@hughperkins
Owner

> https://bitbucket.org/multicoreware/hcc_torch

Whoa, cool :-)

@NEELMCW
Author

NEELMCW commented May 16, 2016

I am now looking at HCC-izing TensorFlow. I reckon there are a lot of dependencies, the likes of Eigen and cuDNN, that might get in the way. I am currently investigating the efforts in this direction. Any help from your side would be great.

@hughperkins
Owner

OK. Don't suppose... for hcc_torch, do you mind linking also to cltorch? I think we can agree that HCC is a great way forward, so there seems no need to hide alternative implementations :-)

@NEELMCW
Author

NEELMCW commented May 16, 2016

Sorry, I didn't quite get what you meant here by "linking also to cltorch"?

@hughperkins
Owner

In this bit:

"This repository hosts the HCC backend implementation project for torch7. To know what HCC is please refer here. Torch7 framework currently has a CUDA backend support in the form of cutorch and cunn packages. The goal of this project is to develop hctorch and hcnn packages that would functionally behave as HCC counterparts for existing cutorch and cunn packages. This project mainly targets the linux platform and makes use of the linux-based HCC compiler implementation hosted here. "

@hughperkins
Owner

I mean, you don't have to. It's all good :-)

@NEELMCW
Author

NEELMCW commented May 16, 2016

:-) OK, I shall add references to cltorch and clnn.

@hughperkins
Owner

OK. And you're waiting for...? If you're waiting for me to say that I will reference hctorch from cltorch, I already wrote it in; just waiting for your reference, then I'll commit and push :-D

# Related projects

OpenCL is enabled using the following two projects, which are installed implicitly by this distro:
* [cltorch](https://github.com/hughperkins/cltorch)
* [clnn](https://github.com/hughperkins/clnn)

An HCC implementation for Torch is in progress here:
* [hctorch](https://bitbucket.org/multicoreware/hcc_torch)

@hughperkins
Owner

(PS: hctorch is much easier to say than hcctorch, and I notice that one of your modules is called hctorch, hence I've put hctorch, but I can put hcc_torch if you prefer :-) )

@NEELMCW
Author

NEELMCW commented May 16, 2016

I have made a reference to cltorch and clnn... I agree with you that the names have got to change. Presently I am hosting both hctorch and hcnn, counterparts to cutorch and cunn, under a single repo, hcc_torch.

@NEELMCW
Author

NEELMCW commented May 16, 2016

Repo name is now changed to HcTorch
https://bitbucket.org/multicoreware/hctorch/overview

@NEELMCW
Author

NEELMCW commented May 17, 2016

Hugh,

The repo URL has changed. You may want to change it in the cltorch wiki here:

https://github.com/hughperkins/cltorch

@hughperkins
Owner

Ok updated :-)

@FuriouslyCurious

Hugh, sorry to tag a comment onto a closed thread, but I didn't know how to reach NEELMCW.

Neel, I noticed you are working on hctorch. Thank you for that; I am also looking forward to a strong alternative to NVIDIA's cuDNN from AMD.

I would like to request that you perform a benchmark and share how the library compares to cuDNN and fbnn. Can you please run these two benchmarks with hctorch and share the results?

  1. Soumith Chintala's benchmark: https://github.com/soumith/convnet-benchmarks
  2. Justin Johnson's benchmark: https://github.com/jcjohnson/cnn-benchmarks

Thank you!

@naibaf7

naibaf7 commented Jul 18, 2016

@campbx
I'm currently also working on an alternative to cuDNN, namely libDNN (https://github.com/naibaf7/libdnn), which is supposed to be autotuned vendor-independently at some point.
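
As a side note on what "autotuned" means in practice: instead of hard-coding work-group or tile sizes per GPU, the tuner times a set of candidate configurations on whatever device it finds and keeps the fastest. Below is a rough, generic sketch of that measure-and-select loop; it is illustrative only and not libDNN's actual API, and `run_candidate` is just a CPU placeholder for compiling and timing a real OpenCL kernel variant.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Placeholder for "run one kernel variant": every candidate touches the whole
// buffer (equal work) but with a different memory stride, so timings differ.
// A real tuner would compile and time an actual OpenCL convolution kernel here.
static float run_candidate(int stride, const std::vector<float>& data) {
    float acc = 0.0f;
    for (int offset = 0; offset < stride; ++offset)
        for (size_t i = static_cast<size_t>(offset); i < data.size(); i += static_cast<size_t>(stride))
            acc += data[i];
    return acc;
}

int main() {
    std::vector<float> data(1 << 22, 1.0f);                 // ~16 MB of input
    const std::vector<int> candidates = {1, 2, 4, 8, 16, 32, 64};

    int best = candidates.front();
    double best_ms = 1e30;
    for (int stride : candidates) {
        auto t0 = std::chrono::steady_clock::now();
        volatile float sink = run_candidate(stride, data);  // volatile keeps the work from being optimized out
        auto t1 = std::chrono::steady_clock::now();
        (void)sink;
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        printf("candidate stride=%2d -> %.3f ms\n", stride, ms);
        if (ms < best_ms) { best_ms = ms; best = stride; }  // keep the fastest configuration
    }
    printf("selected stride=%d (%.3f ms)\n", best, best_ms);
    return 0;
}
```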

@hughperkins
Owner

> I'm currently also working on an alternative to cuDNN, namely libDNN (https://github.com/naibaf7/libdnn), which is supposed to be autotuned vendor-independently at some point.

Hmmm, I should slot this into cltorch sooner rather than later really... I was going to wait until my winograd port was presentable, but that seems considerably harder than I thought, so maybe I'll add your library in now-ish.

@FuriouslyCurious

@naibaf7 Thanks Fabian, the libDNN repo looks amazing. It will make it much easier to buy the cheapest card on the market and run the same Torch codebase on it.

What are your thoughts on AMD's new effort to de-CUDA-ify code through the HIP compiler? Early benchmarks I tested on my machine have been really good.
https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP

@NEELMCW

@naibaf7

naibaf7 commented Jul 19, 2016

@campbx
While HIP is really good and helps developers target more hardware, it naturally can't fix performance issues or help to get close to cuDNN, since that needs device-specific optimizations.
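
To make that concrete, here is a minimal HIP sketch (the kernel, buffer names, and sizes are illustrative only, not taken from any of the projects above). HIP mirrors the CUDA runtime API almost call for call (hipMalloc ~ cudaMalloc, hipMemcpy ~ cudaMemcpy, and so on), so porting is largely a mechanical rename, but the kernel body itself, which is where the device-specific tuning behind cuDNN-level performance would have to happen, carries over unchanged.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// A plain elementwise add; cuDNN-level performance would need far more carefully
// tuned kernels than this, and HIP does not write those for you.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    // Allocate device buffers and copy the inputs over (CUDA-style API, hip-prefixed).
    float *da = nullptr, *db = nullptr, *dc = nullptr;
    hipMalloc(reinterpret_cast<void**>(&da), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&db), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&dc), n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch the kernel: one thread per element.
    const int block = 256;
    const int grid = (n + block - 1) / block;
    hipLaunchKernelGGL(vecAdd, dim3(grid), dim3(block), 0, 0, da, db, dc, n);
    hipDeviceSynchronize();

    // Copy the result back and spot-check it.
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.000000

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```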

@bhack

bhack commented Jul 19, 2016

@hughperkins You can always contribute Winograd to libdnn (if we can consider it upstream) later. ;)
