TVM and HLO/XLA #151
Any quick overview on the differences?
Comments
They are orthogonal.
What will be the role of Fabian's libdnn and the FAIR-sponsored NNPACK in this?
Both libdnn and NNPACK are different; they can maybe be used as black-box calls. (NNPACK is not FAIR-sponsored, it's just continued research/dev after FAIR.)
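To make the "black-box calls" idea concrete, here is a minimal sketch of wrapping a hand-written routine as an opaque external stage via TVM's packed-function mechanism. The function name `mylib.vec_add` is a made-up placeholder for something like an NNPACK or libdnn entry point, and the module paths follow the early `tvm.*` API (later releases moved them under `tvm.te`/`tvm.tir`):

```python
import numpy as np
import tvm  # early tvm.* tensor expression API

# Hypothetical packed function standing in for a hand-written library kernel.
@tvm.register_func("mylib.vec_add")
def _vec_add(a, b, c):
    c.copyfrom(a.asnumpy() + b.asnumpy())  # stand-in for the real library call

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
# The extern stage is a black box to TVM: it only knows the output shape
# and that the packed function fills the output buffer.
C = tvm.extern((n,), [A, B],
               lambda ins, outs: tvm.call_packed("mylib.vec_add",
                                                 ins[0], ins[1], outs[0]),
               name="C")

s = tvm.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")
```

TVM cannot optimize inside such a stage, but it can still schedule and fuse the code around it, which is why existing libraries can coexist with generated kernels.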
What is the goal here? Rewrite new kernels?
Write kernels in a new language that can be retargeted to multiple backends with great perf. See the matrix-multiply or persistent-rnn examples, maybe?
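As a minimal sketch of what that looks like in TVM's tensor expression language (module paths as in the early `tvm.*` releases; newer ones expose them under `tvm.te`), with the algorithm kept separate from the schedule that retargets it:

```python
import tvm  # early tvm.* tensor expression API

n, m, k = 1024, 1024, 1024
A = tvm.placeholder((n, k), name="A")
B = tvm.placeholder((k, m), name="B")
r = tvm.reduce_axis((0, k), name="r")

# The algorithm: one declarative compute rule, independent of any backend.
C = tvm.compute((n, m),
                lambda i, j: tvm.sum(A[i, r] * B[r, j], axis=r),
                name="C")

# The schedule: loop ordering, tiling, vectorization, thread binding, etc.
# live here, so the same rule can be lowered for CPUs or GPUs.
s = tvm.create_schedule(C.op)
fmatmul = tvm.build(s, [A, B, C], target="llvm", name="matmul")
```

Changing the target (e.g. to CUDA) reuses the same compute rule with a different schedule, which is the sense in which the kernels are retargetable.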
@soumith I thought that investing FAIR work hours in NNPACK was like sponsoring. But it is ok if you meant that it is not officially sponsored by FAIR.
Yes, we did not sponsor a grant and say: give us NNPACK.
Yes, ok.. so what I meant is that we would try to supersede libdnn and NNPACK at some point if we share these DSL kernels.
Yes, slowly and incrementally we can try to move the value into the TVM backend. It will happen over time. There's some systems research that needs to be done before we get there as well, so there's a little bit of uncertainty too.
Yes, of course. I was just talking about the "great design".
So are you trying to do what the TF team didn't want to do?
@soumith with "collectives" do you mean different frameworks (like the ones we represent) sharing kernel code?
Here is roughly what a deep learning system stack looks like nowadays:
1. Framework-level operator graphs (TensorFlow, PyTorch, MXNet, Caffe, ...).
2. Computation graph IRs (NNVM, HLO/XLA).
3. DSLs for kernel description and code generation (TVM, Halide).
4. Hand-optimized kernel libraries (cuDNN, NNPACK, libdnn).
5. Device-specific code (CUDA, OpenCL, LLVM, assembly).
Most libraries go with 1 -> 4. An easy but restrictive path for compilation and fusion is to go from 2 -> 4/5, by manually coding up fused kernels or by having rules that generate certain fused kernels. TVM sits at level 3, to make the jump from level 2 to level 5 easier and to give users more control. In terms of design philosophy, we want it to work together with the existing ecosystem. This includes interoperating with the graph-level IRs in 2 and still being able to call into the libraries in 4.
I think we can expect that all approaches in the stack will continue to exist. We just design layer 3 so that it can incrementally transition toward automation while still transparently benefiting from the things in 4.
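A small illustration of the extra control at level 3 (again a sketch against the early `tvm.*` API, with made-up stage names): two elementwise stages fused by a schedule directive instead of a hand-written fused kernel, which is the 2 -> 4/5 alternative described above:

```python
import tvm  # early tvm.* tensor expression API

n = 1024
A = tvm.placeholder((n,), name="A")
scaled = tvm.compute((n,), lambda i: A[i] * 2.0, name="scaled")
shifted = tvm.compute((n,), lambda i: scaled[i] + 1.0, name="shifted")

s = tvm.create_schedule(shifted.op)
# Compute `scaled` inside the loop over `shifted`: one fused loop,
# no intermediate buffer written out to memory.
s[scaled].compute_at(s[shifted], shifted.op.axis[0])
f = tvm.build(s, [A, shifted], target="llvm")
```

The fusion decision lives in the schedule, so it can be changed (or automated later) without rewriting the operator definitions.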
Can we put some of this info in a file so that we can close it?
Yes, let us have an FAQ file https://github.com/dmlc/tvm/blob/master/docs/faq.md |