Move to an organization & renaming tiny-cnn #235
Comments
Sounds good. I like the name hornet, but tiny-* might be better in terms of visibility and saying what it is.
For consistency, tiny-dnn is okay, I think. However, hornet could reflect a rebirth of the library.
/cc @mtamburrano
I'll start pushing some names: .hdnn, lightdnn, lightnet, tinygraph, tinyflow, hornet
I would vote for tiny-dnn. "tiny-cnn" is already quite well known, and with "tiny-dnn" the familiarity of the name is not lost. It's a nice name.
So people, what's the decision?
tiny-dnn seems good for everyone. It's a nice name to me, and it has no namespace problem. Let's go with it.
Nice! @bhack, green light for the logo.
How about an actually tiny name: dnn?
@garybradski
OK, tiny-dnn. I had some questions about this module. We have some other deep net modules: dnn (with a not-very-helpful README.md: "module to run caffe deep nets"). Anyhow, I'm concerned that a new user won't know where to start. Maybe the READMEs have to be cross-linked, and, for my intent, most of the real work should go into tiny-dnn, which is the most complete now. I'm also concerned about exactly how tiny-dnn will work. Will it be dependent on tiny-cnn, or a completely standalone module? It will be much cleaner if it is a completely standalone module. Is it OK that tiny-cnn effectively becomes tiny-dnn? Of course, full credit can be given both in the module and on the contributors page.
I'm worried about it, too. I think the current dnn module (which lacks too many building blocks to be usable) should be replaced with a new dnn based on tiny-dnn. Re-organizing cnn_3dobj as an example of tiny-dnn seems good. /cc @wangyida, any ideas?
We'll just rename tiny-cnn to tiny-dnn; they're both the same. Basically, tiny-dnn is standalone, and we need some libraries to enable extra functionality (libdnn for GPU, NNPACK for x86 speedup, and Protocol Buffers for importing Caffe models).
I'm happy to hear that :) Of course you might want to add the two great GSoC students bridging OpenCV and tiny-dnn to this list.
/cc @ludv1x
@garybradski @nyanp cnn_3dobj is an interesting example of pose estimation together with category prediction; this could be included as an example rather than a standalone module. But the rendering process has a dependency on the VTK toolkit.
This is the commit log for the other GSoC project https://github.com/ludv1x/opencv_contrib/commits/dnn-dev. |
As stated in email, and beyond this scope, but:
@ludv1x I've seen you are contributing OpenCL kernels to the dnn module. @garybradski The current dnn module seems to have an extended core, so we'll have parallel efforts with the new "tiny" module. What's the plan?
@edgarriba @ludv1x is mentored by @Nerei. /cc @vpisarev |
@garybradski The cnn_3dobj module could be included in tiny-dnn because its core function is a triplet loss, as included in Caffe's triplet loss. I enclosed a trained model of less than 5 MB in the cnn_3dobj module; it could be used directly with the caffe_converter in tiny-cnn.
@edgarriba The contributed kernels are based on ones from the OpenCL-Caffe repo (which is distributed under the BSD licence), so you can use them freely. I don't see serious problems with the OCL kernels.
@ludv1x The plan is to port the kernels from the opencl branch. Notice that for convolution we are porting LibDNN, which is the standalone version of Greentea; @naibaf7 is giving support in this task. For general OpenCL functionality we decided to use CLCudaAPI, a header-only project with common interfaces for OpenCL and CUDA.
tinydnn |
I got distracted for a few months, and this project exploded and even changed its name... Meanwhile, I studied cuDNN, and now I'm catching up with this huge batch of updates (and updating my code as well)... I hope I can provide some help again soon :) |
@pansk Welcome to the party again! Check the new ops proposal for custom convolution, and the one for OpenCL (not working yet).
Is tensor_t still, as it seems, a std::vector<...>? This approach requires you to go back and forth to CPU memory between two layers, and it's quite inefficient (it will probably destroy any performance gain from the GPU). The first structure was supposed to move "automatically" between the CPU and the GPU when needed (e.g., when interleaving CPU and GPU layers, something that is probably still inefficient but should be allowed for prototyping new, complex layers), while the second was conceived for full-time residency on the GPU (for GPU backends) and was supposed to be downloaded/uploaded only for serialization/deserialization purposes (probably manually). I also see an interoperability issue between CLCudaAPI and cuDNN (and most other cu* libraries): the former wants to wrap the kernel, while the others don't provide kernels but are built on top of a CUDA stream. Please note that I didn't explore CLCudaAPI thoroughly, so I might be missing some details that make this last comment useless or wrong.
@pansk I would also recommend keeping one data structure for both weights and data. If coded correctly, the transfers should not be triggered often anyway, and it allows more flexibility. You could add a profiling method to these memory objects to find unnecessary transfers.
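The unified-structure idea above, including the suggested profiling hook, can be sketched roughly as follows. This is a hypothetical illustration, not tiny-dnn code: the "device" buffer is simulated with a second host vector, and real GPU residency would replace it with an OpenCL/CUDA buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a unified tensor that tracks where the freshest copy lives and
// only transfers when the other side actually asks for the data.
// "Device" memory is simulated with a second host vector for illustration.
class Tensor {
 public:
  explicit Tensor(std::size_t n) : host_(n), device_(n) {}

  // CPU access: pull from the device first if the device copy is newer.
  float* host_data() {
    if (on_device_) { host_ = device_; ++transfers_; on_device_ = false; }
    return host_.data();
  }

  // GPU access: push to the device first if the host copy is newer.
  float* device_data() {
    if (!on_device_) { device_ = host_; ++transfers_; on_device_ = true; }
    return device_.data();
  }

  // Profiling hook suggested above: count transfers to spot redundant copies.
  int transfer_count() const { return transfers_; }

 private:
  std::vector<float> host_, device_;
  bool on_device_ = false;  // which copy is authoritative
  int transfers_ = 0;
};
```

Note that consecutive accesses from the same side trigger no extra transfer, so chains of GPU layers stay resident on the device; a real implementation would also distinguish read-only from write access.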
@naibaf7
Yes, the approach I propose is just a proof of concept to get a minimal GPU pipeline working, and, as pointed out, proper memory-transfer handling is needed in order to take advantage of the GPU. Items on the TODO list are modeling tensors in classes and implementing a proper memory-transfer module. Since the GSoC is about to finish, it's probably time to rewrite the roadmap and assign tasks. /cc @nyanp @bhack
@naibaf7 About the data structure, you're correct: another solution is to use a single data structure for both. At the moment, data and weights are already held in two different structures (tensor_t and vec_t respectively). I quite liked the solution with two different structures, partly because of memory occupation but mostly because it makes "coding correctly" obvious. It's always possible to check with a profiler later, but I generally prefer making such bad conditions hard to code rather than tracking them down after they've been committed and used for a while... In the end, for most platforms I expect one of the structures to be a decorated version of the other, so there's no real code duplication. Anyway, I'm not at all strongly against a unified structure; I just have a mild preference for the other solution.
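The "decorated version" mentioned above could look roughly like this. Names and structure are hypothetical, and the device buffer is again simulated with a host vector; the point is that all transfers are explicit, so an accidental per-layer round trip is hard to write by mistake.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Plain host-side tensor (stand-in for something like vec_t/tensor_t).
struct HostTensor {
  std::vector<float> data;
  explicit HostTensor(std::size_t n) : data(n) {}
};

// Decorates a HostTensor with device residency; transfers happen only
// through explicit upload()/download() calls, e.g. around (de)serialization.
class DeviceTensor {
 public:
  explicit DeviceTensor(HostTensor& h) : host_(h), device_(h.data.size()) {}
  void upload()   { device_ = host_.data; }  // host -> device
  void download() { host_.data = device_; }  // device -> host
  float* device_data() { return device_.data(); }
 private:
  HostTensor& host_;
  std::vector<float> device_;  // simulated device memory
};
```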
@pansk @CNugteren
@pansk |
There is a queue/stream object constructor; see https://github.com/CNugteren/CLCudaAPI/blob/master/doc/api.md
Indeed, you can have multiple streams/queues, and you can enqueue kernels or memory copy operations (asynchronous or synchronous) to different streams. You can also wait for all tasks in a certain stream/queue to be completed. If any other features are needed, CLCudaAPI can be extended. However, note that it can only support features that are available in both CUDA and OpenCL.
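The queue model described above (in-order execution of enqueued work, plus a blocking "finish" like waiting on a stream) can be illustrated generically. This is not CLCudaAPI code, just a minimal stand-in built on a worker thread to show the semantics:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Generic illustration of a command queue: tasks enqueued to one queue run
// in order on a worker thread; Finish() blocks until the queue has drained,
// analogous to waiting for all tasks in a stream/queue to complete.
class CommandQueue {
 public:
  CommandQueue() : worker_([this] { Run(); }) {}
  ~CommandQueue() {
    { std::lock_guard<std::mutex> lk(m_); done_ = true; }
    cv_.notify_one();
    worker_.join();
  }
  void Enqueue(std::function<void()> task) {
    { std::lock_guard<std::mutex> lk(m_); tasks_.push(std::move(task)); }
    cv_.notify_one();
  }
  void Finish() {  // block until all enqueued tasks have completed
    std::unique_lock<std::mutex> lk(m_);
    idle_.wait(lk, [this] { return tasks_.empty() && !busy_; });
  }
 private:
  void Run() {
    for (;;) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
        if (done_ && tasks_.empty()) return;
        task = std::move(tasks_.front());
        tasks_.pop();
        busy_ = true;
      }
      task();  // run outside the lock, in enqueue order
      { std::lock_guard<std::mutex> lk(m_); busy_ = false; }
      idle_.notify_all();
    }
  }
  std::queue<std::function<void()>> tasks_;
  std::mutex m_;
  std::condition_variable cv_, idle_;
  bool done_ = false, busy_ = false;
  std::thread worker_;  // declared last so it starts after the state above
};
```

Multiple independent `CommandQueue` instances correspond to multiple streams: work in different queues may interleave, while work within one queue stays ordered.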
@edgarriba Memory concerns are two-fold.
@naibaf7, @bhack, @CNugteren Thank you, I'll have a deeper look at CLCudaAPI in the coming days.
I think we can close this. |
Exactly. |
We've decided to move tiny-cnn to an organization account to accelerate its development (discussed in #226).
Since it is clear that we are expanding the scope of tiny-cnn from convolutional networks to general networks, the project name tiny-cnn now seems a bit inaccurate. I want to change the project name to a more appropriate one (if we agree) at the time of transferring the repository.
In #226 we have these 3 proposals:
Whichever we take, the naming of the project doesn't affect the library API except for its namespace, and hyperlinks, forks, and pull requests will be correctly redirected to the new repository.
Please feel free to give me your feedback if you have suggestions for the naming! We want to decide the name and move to the new account by around 7/25.