
MLIR or TVM target? #430

Closed
machineko opened this issue Apr 6, 2020 · 2 comments

@machineko

So I see a lot of work being done even for basic tensor operations on GPU, by writing custom CUDA and OpenCL code.
Why not target them via a compiler like MLIR or TVM?
Then we could target everything through one stack, and probably one tensor abstraction (CPU, GPU, and even "TPU" in the future), and make the package easier and faster to develop 📦
I would love to help with that, but I'm pretty new to low-level programming and just starting with TVM right now, so I'm just putting the idea out there and waiting for a response [in a few months I can probably help with coding too].

MLIR (see also the MLIR tutorial)
TVM -> https://github.com/apache/incubator-tvm

@mratsim mratsim added the Laser label Apr 7, 2020
@mratsim
Owner

mratsim commented Apr 7, 2020

I follow MLIR development closely and participated in the very first MLIR open design meetings back in July and August.

My current plan stems from this discussion: #347

  1. Implement a tensor / linear algebra compiler. It will have its own IR, probably inspired by Halide (which inspired TVM). The main issue with Halide's IR is modeling recurrences, but I think I have solved that.
  2. At the start, the compiler will have a Nim AST backend for CPU targets, i.e. the compiler will work at Nim compile time.
  3. Then I will add a JIT backend with LLVM IR target, i.e. the compiler will work at Nim runtime.
  4. Then MLIR could be added as a target once the tooling matures.
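The key idea inherited from Halide is to build the computation as an IR data structure first, and only later lower it through a backend (Nim AST at compile time, LLVM IR at runtime, eventually MLIR). None of this is Lux's actual API; the following is a minimal Python sketch of that backend-agnostic IR idea, with all names invented for illustration:

```python
# Hypothetical sketch: a tiny expression IR that is built once and can then
# be consumed by any backend. Here the "backend" is a plain interpreter; a
# real compiler would instead emit Nim AST or LLVM IR from the same tree.
from dataclasses import dataclass

@dataclass(frozen=True)
class Input:
    name: str

@dataclass(frozen=True)
class Add:
    lhs: object
    rhs: object

@dataclass(frozen=True)
class Mul:
    lhs: object
    rhs: object

def evaluate(node, env):
    """Walk the IR tree and compute its value for concrete inputs."""
    if isinstance(node, Input):
        return env[node.name]
    if isinstance(node, Add):
        return evaluate(node.lhs, env) + evaluate(node.rhs, env)
    if isinstance(node, Mul):
        return evaluate(node.lhs, env) * evaluate(node.rhs, env)
    raise TypeError(f"unknown node: {node!r}")

# algorithm: y = a*x + b, described once, independent of any backend
y = Add(Mul(Input("a"), Input("x")), Input("b"))
print(evaluate(y, {"a": 2, "x": 10, "b": 3}))  # 23
```

The separation matters because steps 2-4 of the plan only swap the consumer of the tree, never the tree itself.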

The compiler, codenamed Lux, has a proof of concept in the Laser repo at:

Beyond what TVM and MLIR propose, I also want the compiler to automatically compute gradients from the IR, similar to Gradient Halide, Julia's Tapenade/Cassette/Zygote, or Swift:

The reason is that this is what slowed my development the most, especially when implementing RNNs.
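Deriving gradients from the IR means differentiation is just another tree transformation, so no hand-written backward kernels are needed. A minimal Python sketch of the idea (forward-mode and scalar-only for brevity, where Gradient Halide and Zygote do reverse-mode over tensors; all names are invented, this is not Lux code):

```python
# Illustrative only: compute the derivative of an IR tree with respect to
# one variable by structural recursion on the tree, using the sum and
# product rules. A real AD pass would emit a gradient IR, not a number.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Add:
    lhs: object
    rhs: object

@dataclass(frozen=True)
class Mul:
    lhs: object
    rhs: object

def ev(node, env):
    """Evaluate the IR at a concrete point."""
    if isinstance(node, Var):
        return env[node.name]
    if isinstance(node, Add):
        return ev(node.lhs, env) + ev(node.rhs, env)
    return ev(node.lhs, env) * ev(node.rhs, env)  # Mul

def d(node, wrt, env):
    """Derivative of `node` w.r.t. `wrt` at `env`, read off the IR structure."""
    if isinstance(node, Var):
        return 1.0 if node.name == wrt else 0.0
    if isinstance(node, Add):          # sum rule
        return d(node.lhs, wrt, env) + d(node.rhs, wrt, env)
    return (d(node.lhs, wrt, env) * ev(node.rhs, env)   # product rule (Mul)
            + ev(node.lhs, env) * d(node.rhs, wrt, env))

# y = a*x + b  =>  dy/dx = a
y = Add(Mul(Var("a"), Var("x")), Var("b"))
print(d(y, "x", {"a": 2.0, "x": 10.0, "b": 3.0}))  # 2.0
```

For an RNN, whose unrolled graph is deep and repetitive, generating this mechanically instead of by hand is exactly the time sink being avoided.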

In terms of internals, I'm not set on reusing Halide's design.

  • I've explored a polyhedral approach, but implementing the ILP solver (Integer Linear Programming) and the code generation from a polytope (a multidimensional polyhedron) are non-trivial.

  • One very promising approach is DACE; the main bottleneck for an MVP is a good Nim graph library that works at both compile time and runtime.
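To make the polyhedral point concrete: in that model a loop nest's iteration domain is the set of integer points inside a polytope defined by affine inequalities, and scheduling/codegen reason about that set. A brute-force Python sketch of just the domain (the hard parts, ILP scheduling and polytope scanning codegen, are precisely what is skipped here):

```python
# Illustrative only: enumerate the integer points of a polytope A·x <= b.
# Real polyhedral compilers (isl, Polly) never enumerate; they manipulate
# the constraint systems symbolically. This just shows what the object is.
from itertools import product

def integer_points(A, b, bounds):
    """All integer vectors x within the box `bounds` satisfying A @ x <= b row-wise."""
    pts = []
    for x in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_i
               for row, b_i in zip(A, b)):
            pts.append(x)
    return pts

# Triangular iteration domain {(i, j) | 0 <= j <= i <= 2},
# i.e. the loop nest:  for i in 0..2:  for j in 0..i
A = [[-1,  0],   # -i     <= 0   (i >= 0)
     [ 0, -1],   #     -j <= 0   (j >= 0)
     [-1,  1],   #  j - i <= 0   (j <= i)
     [ 1,  0]]   #  i     <= 2
b = [0, 0, 0, 2]
print(integer_points(A, b, [(0, 2), (0, 2)]))
# [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
```

Generating efficient loop code back from such a set (rather than scanning it) is the non-trivial part mentioned above.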

Other concerns

One of my concerns with deferring to TVM, Halide, or DACE is installation/deployment complexity, and in particular, for TVM/Halide, the compilation time needed to create a pipeline.

Just having BLAS (#422) as a dependency causes issues for several people, so a pure Nim solution would be helpful.

MLIR is better since it now comes integrated into LLVM, but a pure Nim macro-based solution is still needed, in my opinion, for people who don't want to deal with installing LLVM on their machine. This would also help with packaging for conda/pip or npm, and with producing WebAssembly-enabled builds.

What's next

Lack of time.

One of the main drivers behind my work on Weave over the past 6 months was to have a robust, scalable, flexible, high-performance, low-overhead multithreading runtime for Nim that can be used by a linear algebra / deep learning compiler.

That was necessary work, but it's still 6 months not spent on the final compiler.

Currently I feel that Nim has a key advantage in addressing the growing needs of elliptic curve cryptography, especially given that it now requires 16+ core machines and, unlike for linear algebra, GPUs do not help. So I'm working on that at https://github.com/mratsim/constantine (see https://medium.com/loopring-protocol/zksnark-prover-optimizations-3e9a3e5578c0 and https://nakamoto.com/cambrian-explosion-of-crypto-proofs/).

@machineko
Author

Hey, thanks for this post and all the good work you do here!
I starred and started following the Laser project. If something changes and you need help in the next 3-4 months, you can ping me here and I'll help :)
