Does or will nnabla support dynamically loading custom function? #10

Open
DuinoDu opened this issue Jun 28, 2017 · 2 comments

DuinoDu commented Jun 28, 2017

Hey, I am interested in this NN tool, and I wonder whether it will support custom functions. Also, is there any difference between nnabla and other NN tools such as MXNet or TensorFlow?
Many thanks.

@TakuyaNarihira
Contributor

Thanks for your question.

As a short answer, dynamically loading a custom function definition is not currently supported. I'll describe this in more detail below.

There are two different kinds of custom function to consider:

  1. A new function definition (e.g. adding SpatialPyramidPooling)
  2. A new implementation of an existing function (e.g. adding a CUDA implementation of SpatialPyramidPooling)

For 1, NNabla currently only provides a way to add a new function definition statically. We're still investigating how to achieve dynamic function definitions.

For 2, a specialized implementation of a base function (e.g. Convolution --> ConvolutionCuda) is dynamically registered, as is done in the CUDA extension of NNabla, and a user can call it by setting an appropriate context descriptor.
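
For concreteness, here is a minimal sketch of how such a dynamically registered implementation is selected through a context. It assumes the `get_extension_context` helper and the `cudnn` extension from later nnabla / nnabla-ext-cuda packages, so the exact import path may differ from what was available at the time of this thread:

```python
# Minimal sketch: selecting a dynamically registered backend via a context.
# Assumes nnabla and nnabla-ext-cuda are installed; get_extension_context
# is the helper from later releases and may not match older APIs.
import nnabla as nn
import nnabla.parametric_functions as PF
from nnabla.ext_utils import get_extension_context

# 'cudnn' resolves functions to the CUDA/cuDNN implementations registered
# by the extension (e.g. Convolution -> ConvolutionCudaCudnn); 'cpu' would
# fall back to the default backend with the same graph definition.
ctx = get_extension_context('cudnn', device_id='0')
nn.set_default_context(ctx)

x = nn.Variable((1, 3, 32, 32))
y = PF.convolution(x, 8, (3, 3))  # dispatched according to the context
```

Swapping the context string is all that is needed to move between backends; the graph definition itself does not change.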

There is actually an alternative way to define a custom function (case 1). We provide a way to write a custom function on the Python side, which is not documented yet (i.e. not officially supported so far). You can find it in a unit test, which demonstrates how to implement an operator that adds two tensors using NumPy arrays.
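
Since the unit test is not linked here, the following is only a rough sketch of what such a NumPy-based custom function can look like. It is written against the `PythonFunction` interface exposed by later nnabla releases, so the mechanism used in the test referenced above may differ:

```python
# Rough sketch of a NumPy-backed "add two tensors" custom function.
# Assumes the PythonFunction interface of later nnabla releases, where the
# *_impl methods receive nn.Variable inputs/outputs.
import numpy as np
import nnabla as nn
from nnabla.function import PythonFunction


class Add2Numpy(PythonFunction):
    def __init__(self, ctx):
        super(Add2Numpy, self).__init__(ctx)

    @property
    def name(self):
        return self.__class__.__name__

    def min_outputs(self):
        return 1

    def setup_impl(self, inputs, outputs):
        # The output shares the shape of the (equally shaped) inputs.
        outputs[0].reset_shape(inputs[0].shape, True)

    def forward_impl(self, inputs, outputs):
        # .d exposes a NumPy view of each Variable's data buffer.
        outputs[0].d[...] = inputs[0].d + inputs[1].d

    def backward_impl(self, inputs, outputs, propagate_down, accum):
        # d(x0 + x1) / dx_i = 1, so the output gradient passes through.
        dy = outputs[0].g
        for i in range(2):
            if not propagate_down[i]:
                continue
            if accum[i]:
                inputs[i].g[...] += dy
            else:
                inputs[i].g[...] = dy


# Hypothetical usage; the callable interface may differ by nnabla version.
x0 = nn.Variable.from_numpy_array(np.random.randn(2, 3).astype(np.float32))
x1 = nn.Variable.from_numpy_array(np.random.randn(2, 3).astype(np.float32))
y = Add2Numpy(nn.get_current_context())(x0, x1)
y.forward()
print(y.d)
```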

Although it is off-topic for this thread, I'll briefly answer the question about differences between NNabla and other libraries (please post questions like this to the user group hosted on Google Groups).
I believe the dynamic loading of specialized implementations is one difference from some other libraries. Another difference is that NNabla supports both the static and the dynamic paradigm of neural network programming, which lets users choose between the performance efficiency of static graphs and the flexibility of dynamic graphs, depending on their purpose. Also, NNabla's core, including the computation graph and its construction, is implemented in C++11. We'll soon make the C++ API available, which can be used for training and inference of NNs even in systems that prohibit running interpreted languages like Python.
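
As a small, hedged illustration of the two paradigms using the Python API (the function names below are the standard `nnabla.functions` ones; only the structure matters here):

```python
# Sketch: static vs. dynamic graph execution in nnabla's Python API.
import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable.from_numpy_array(np.random.randn(4, 8).astype(np.float32))

# Static paradigm: define the whole graph first, then execute it explicitly.
y_static = F.relu(F.mean(x * x))
y_static.forward()
print(y_static.d)

# Dynamic paradigm: with auto-forward enabled, each function executes as it
# is called, so ordinary Python control flow can shape the computation.
with nn.auto_forward():
    h = F.relu(x)
    if float(F.mean(h).d) > 0:  # the value is already available here
        h = h * 2.0
    print(F.sum(h).d)
```

The static form is what lets the runtime optimize and repeat execution cheaply, while the dynamic form trades some of that efficiency for flexibility, as described above.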

DuinoDu commented Jun 29, 2017

Many thanks for your answer!

TE-PoornimaBiradar pushed a commit that referenced this issue May 28, 2018
TE-andrewshin pushed a commit that referenced this issue Jan 23, 2019
Implement epsilon insensitive loss of cuda version.
MasatoIshii-sony pushed a commit that referenced this issue Nov 24, 2021
* change layer to calculate shap #3

* visualize shap images #3

* add comment #3

* delete comment out #3

* use variable for num_epochs

* change the word Grad-CAM to SHAP #3

* use variable for ratio_num #3

* delete garbage collection #3

* change data_iterator to dataset #3

* rename folder and delete unnecessary files #3

* delete unnecessary files and images #3

* delete unnecessary argument #3

* split into precise functions #3

* add copyright #3

* delete unnecessary DS_Store #3

* fix lint error #3

* change readme #3

* change readme for explainable AI #3

* change image #3

* fix readme for shap #3

* change readme for nnabla-example #3

* fix comment #3

* fix readme for shap #3

* delete DSstore #3

* fix license #3

* delete unnecessary spaces and cells #3

* change the repository where ipynb file is in #3

* change the repository to reference from ghelia to sony #3

* clear the output of the first cell #3

* change readme #3

* deal gloabl variable as an argument #3

* deal error message as an exception handling #3

* delete unnecessary error message #3

* (#3) delete readme and ipynb

* (#3) fix readme line break

* add 50 images and ipynb file #3

* fix comment #3

* fix layer index #3

Co-authored-by: ohmorimori <morio.ohki@ghelia.com>