This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

[Question] - How to extend graph functions? #4

Open
pedronahum opened this issue Nov 27, 2016 · 2 comments

Comments

@pedronahum

Dear Nervana team,

First and foremost, great work!

If someone would like to use ngraph for GPP rather than solely for DL, how could they add additional graph functions? Is there a document or example that describes the steps?

Thanks,

Pedro N.

@diyessi
Contributor

diyessi commented Nov 28, 2016

First, this is an area quite likely to change significantly.

Look at op_graph.py. For a tensor operation, you can either define a function that constructs a few existing ops, or define a new op. If you define a new op, you need to add something to the CPU transformer and the GPU transformer to tell them how to execute it; in the future, I expect you'll be able to do that without modifying existing functions/methods. For the GPU, you need to reshape your tensor dimensions from arbitrary shapes to whatever the kernels you are using expect, while for NumPy you just need tensor striding that NumPy can work with.
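
For the first route (composing existing ops), something like the sketch below is what I have in mind. This is only a sketch, and the specific names it uses (`import ngraph as ng`, `ng.make_axis`, `ng.placeholder`, `ng.exp`, `ng.log`) are assumptions about the Python frontend rather than a guaranteed API:

```python
import ngraph as ng

def softplus(x):
    # softplus(x) = log(1 + exp(x)), built entirely from existing ops,
    # so no transformer changes are needed and autodiff already knows
    # how to differentiate it.
    return ng.log(ng.exp(x) + 1.0)

# Hypothetical usage: apply the new function to a placeholder tensor.
N = ng.make_axis(length=8, name="N")
x = ng.placeholder(axes=[N])
y = softplus(x)
```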

If you need autodiff to work for your new op, there's a partial explanation in the docs about how autodiff works, and there are a lot of examples. Docs on this will be more extensive in the future.
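
To check that autodiff handles a composed function like the softplus above, you can ask for the derivative and run it through a transformer. Again, this is only a sketch; `ng.deriv`, `ngt.make_transformer`, and `transformer.computation` are assumptions about the current frontend:

```python
import numpy as np
import ngraph as ng
import ngraph.transformers as ngt

N = ng.make_axis(length=8, name="N")
x = ng.placeholder(axes=[N])
y = ng.log(ng.exp(x) + 1.0)   # softplus built from existing ops
dydx = ng.deriv(y, x)         # autodiff through the composed ops

transformer = ngt.make_transformer()
f = transformer.computation([y, dydx], x)
out, grad = f(np.linspace(-2.0, 2.0, 8))
# grad should match the analytic derivative, sigmoid(x) = 1 / (1 + exp(-x))
```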

@pedronahum
Author

Thank you! Greatly appreciate the insights.
