
TVM and HLO/XLA #151

Closed
bhack opened this issue May 21, 2017 · 16 comments

Comments

bhack commented May 21, 2017

Any quick overview on the differences?

tqchen (Member) commented May 21, 2017

They are orthogonal.

  • XLA is more high level, like NNVM: the developers of XLA need to define codegen and loop-transformation rules (like writing a kernel) for each operator, describing how to generate its kernels, and the system stitches the kernels together for you.
  • TVM is one level below: it provides common low-level primitives for describing the computation as well as the loop transformations, and lets the user apply them. You can use these to implement something like XLA (by adding NNVM or another high-level graph description on top), or simply bypass the high-level description layer and use TVM directly in a framework (see the sketch below).
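
As a concrete illustration of "low-level primitives plus loop transformations", here is a minimal sketch in TVM's Python DSL: the compute description says what to calculate, and a separate schedule applies the loop transformations. The API names follow the early releases (tvm.compute, tvm.create_schedule); later versions expose the same constructs under tvm.te.

```python
import tvm

# Describe *what* to compute: C[i] = A[i] + B[i] over a 1-D tensor of length n.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute((n,), lambda i: A[i] + B[i], name="C")

# Describe *how* to compute it: loop transformations live in the schedule.
s = tvm.create_schedule(C.op)
xo, xi = s[C].split(C.op.axis[0], factor=64)  # split the loop into chunks of 64
s[C].vectorize(xi)                            # vectorize the inner loop on CPU

# Generate a kernel for a chosen backend; change the target to retarget the same schedule.
vector_add = tvm.build(s, [A, B, C], target="llvm", name="vector_add")
```

The same compute description can be paired with a different schedule (for example, binding the split loops to GPU threads) and built with a different target, which is the sense in which one kernel description can be retargeted to multiple backends.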

bhack (Author) commented May 22, 2017

What will be the role of Fabian's libdnn and FAIR-sponsored NNPACK in this?

soumith commented May 22, 2017

Both libdnn and NNPACK are different; they can maybe be used as blackbox calls. (NNPACK is not FAIR sponsored, it's just continued research/dev after FAIR.)

bhack (Author) commented May 22, 2017

What is the goal here? Rewrite new kernels?

soumith commented May 22, 2017

Write kernels in a new language that can be retargeted to multiple backends with great performance.
Folks can build languages or collectives for writing kernels on top of TVM.

soumith commented May 22, 2017

see the matrix-multiply or persistent-rnn examples, maybe?

bhack (Author) commented May 22, 2017

@soumith I thought that investing FAIR work hours in NNPACK was like sponsoring it. But it is ok if you meant that it is not officially sponsored by FAIR.

soumith commented May 22, 2017

yes, we did not sponsor a grant and say: give us NNPACK.

bhack (Author) commented May 22, 2017

Yes, ok.. so what I meant is that we would try to supersede libdnn and NNPACK at some point if we share these DSL kernels.

soumith commented May 22, 2017

Yes, slowly and incrementally we can try to move the value into the TVM backend. It will happen over time. There's some systems research that needs to be done before we get there as well, so there's a little bit of uncertainty too.

bhack (Author) commented May 22, 2017

Yes, of course. I was just talking about the "great design".

bhack (Author) commented May 22, 2017

So are you trying to do what the TF team didn't want to do?

edgarriba commented

@soumith By collectives do you mean different frameworks (like the ones we represent) sharing kernel code?

tqchen (Member) commented May 22, 2017

Here is what a deep learning system stack looks like nowadays.

  • 1. Operator-level graph description language
    • Name whatever DL frameworks you care about
  • 2. Tensor-primitive-level graph description language
    • NNVM, HLO, NGraph
    • It is close enough to the first one that you can also build graph optimization on the first layer and bypass this layer
  • 3. DSL for computation description and codegen
  • 4. Hardcoded optimized kernel libraries like NNPACK, cuDNN, libdnn
  • 5. Device-dependent libraries

Most libraries go with 1 -> 4. An easy but restrictive path for compilation and fusion is going from 2 -> 4/5, by manually coding up fused kernels or having rules to generate certain fused kernels. TVM sits at level 3, to make the jump from level 2 to level 5 easier and give the user more control.

In terms of design philosophy, we want TVM to work together with the existing ecosystem. This includes:

  • A friendly frontend that can be used directly for kernel generation
  • Giving the framework full control of memory allocation, graph execution, data layout, etc.
  • Generating DLPack-compatible kernels that every framework can take directly (see the sketch after this list)
  • Making use of blackbox calls like cuDNN when the user says so

I think we can expect all approaches in the stack to continue to exist. We are just designing a layer at 3 that can incrementally transition toward automation while still being able to transparently benefit from things in 4.
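
As a rough, hypothetical illustration of the DLPack bullet above, here is a sketch of a framework handing its own tensors to a TVM-generated kernel. It assumes PyTorch as the host framework and the vector_add kernel from the earlier sketch; torch.utils.dlpack.to_dlpack and tvm.nd.from_dlpack are taken as the conversion entry points, and the handoff is zero-copy, so the framework keeps control of memory and execution.

```python
import torch
import tvm
from torch.utils import dlpack

# The framework owns allocation and execution; tensors cross the boundary
# zero-copy via the DLPack protocol.
x = torch.rand(1024)
y = torch.rand(1024)
z = torch.empty(1024)

# Wrap the framework tensors as TVM NDArrays without copying.
tx = tvm.nd.from_dlpack(dlpack.to_dlpack(x))
ty = tvm.nd.from_dlpack(dlpack.to_dlpack(y))
tz = tvm.nd.from_dlpack(dlpack.to_dlpack(z))

# `vector_add` is the kernel built in the earlier sketch (hypothetical here).
vector_add(tx, ty, tz)
```

The point of the design is that TVM only supplies the kernel; allocation, data layout, and graph execution stay with the framework.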

bhack (Author) commented May 23, 2017

Can we put some of this info in a file so that we can close it?

tqchen (Member) commented May 23, 2017

Yes, let us have an FAQ file https://github.com/dmlc/tvm/blob/master/docs/faq.md

tqchen closed this as completed May 24, 2017