
Rose and JIT kernel compiler support #8330

Closed
develooper1994 opened this issue Jul 15, 2018 · 6 comments
@develooper1994

  1. http://rosecompiler.org/
    The ROSE compiler is a source-to-source translator. It takes C/C++ code as input and translates it to a device kernel. I tried some mathematical formulas and it works. I am also a user of the MATLAB CUDA compiler. ROSE can translate CUDA to OpenCL in some cases.
  2. JIT kernel compilation, inspired by Python's Numba and Accelerate and by OpenHMPP. I just want to use a pragma-like syntax to parallelize on the device. I am still learning Nim.
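The source-to-source idea in point 1 can be sketched in a few lines. The snippet below is a toy illustration only, not ROSE's actual API: a hypothetical helper (`make_opencl_kernel` is my own name) wraps a scalar expression into an OpenCL kernel string, which is the kind of output such a translator would emit for a simple elementwise C loop.

```python
# Toy sketch of source-to-source kernel generation, in the spirit of what
# ROSE does for C/C++ -> device kernels. The function name and interface
# are hypothetical, invented for illustration.

def make_opencl_kernel(name: str, expr: str) -> str:
    """Wrap a scalar expression over a[i] and b[i] into an OpenCL kernel."""
    return (
        f"__kernel void {name}(__global const float* a,\n"
        f"                     __global const float* b,\n"
        f"                     __global float* out) {{\n"
        f"    int i = get_global_id(0);\n"
        f"    out[i] = {expr};\n"
        f"}}\n"
    )

# An elementwise vector addition, as a real translator might derive it
# from `for (i...) out[i] = a[i] + b[i];`
print(make_opencl_kernel("vec_add", "a[i] + b[i]"))
```

A real translator would of course parse the input program rather than take the expression as a string; this only shows the shape of the generated kernel.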
@andreaferretti
Collaborator

What exactly is the issue? I honestly do not understand what you are asking for.

@develooper1994
Author

Auto-parallelization, compiling to an OpenCL, CUDA, or other device backend.

@mratsim
Collaborator

mratsim commented Jul 16, 2018

  1. You can try cudanim, which was developed for the High-Energy Physics framework QEX.

  2. Numba is slower than Arraymancer. We don't need a JIT when we have static compilation.

Regarding ROSE, is it used in production? The project seems much less maintained, structured, and optimized than Halide (used for computational photography, and by Facebook for deep learning, for example), and it does not support an ARM backend.

As with #8331, ROSE support should be done in a separate, independent repo.

@develooper1994
Author

I don't want to use any CUDA or OpenCL library. I just want to tell the compiler: "hey! Come here and transcompile my code to a device kernel as much as possible." The device can be a CPU, GPU, FPGA, tensor processor, DSP calculator, ...
There are lots of real-world uses of this, like C/C++ -> VHDL.

@andreaferretti
Collaborator

If I recall correctly, there was a discussion about having OpenCL as a compilation target for Nim (possibly in one of the summer of code proposals?), but it was never implemented. Even if this could ever work, many features of Nim (mainly about heap allocations, and hence much of the standard library) would not work in such a target.

But Nim could still be useful compared to raw C thanks to its metaprogramming capabilities.

If I recall correctly, there was some attempt to generate code for vertex and pixel shaders using macros - probably the same approach would work for CUDA or OpenCL kernels.
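The macro approach above can be sketched language-agnostically. In Nim this would be a compile-time macro over the function's AST; the Python snippet below uses the standard `ast` module as a stand-in (requires Python 3.9+ for `ast.unparse`), turning a single-expression pure function into a CUDA kernel string. The translation rules here are simplified assumptions, not any existing library's behavior.

```python
# Sketch of the metaprogramming approach: inspect a function's AST and
# emit an equivalent CUDA kernel string. Assumes a single-expression
# function whose parameters are all elementwise float arrays.
import ast
import re

def to_cuda_kernel(src: str) -> str:
    fn = ast.parse(src).body[0]            # the FunctionDef node
    params = [a.arg for a in fn.args.args]
    expr = ast.unparse(fn.body[0].value)   # body is a single `return <expr>`
    for p in params:                       # scalar reads -> indexed loads
        expr = re.sub(rf"\b{p}\b", f"{p}[i]", expr)
    sig = ", ".join(f"const float* {p}" for p in params)
    return (
        f"__global__ void {fn.name}({sig}, float* out) {{\n"
        f"    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
        f"    out[i] = {expr};\n"
        f"}}\n"
    )

print(to_cuda_kernel("def saxpy(a, b):\n    return 2.0 * a + b\n"))
```

A Nim macro would do the same AST walk at compile time, so the kernel string could be handed to the CUDA/OpenCL driver API with no runtime parsing cost.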

@develooper1994
Author

Yes, I am saying that OpenCL should be a compilation target for Nim.
Allocate buffers on the device instead of in RAM; pure functions, or functions identified by some analysis, compile to device kernels. OpenCL is much more portable than CUDA and VHDL.
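The "pure functions after some analysis" step is the interesting part: the compiler must decide which functions are kernel-eligible. Below is a toy version of such a purity check (the function name and the exact rules are my own assumptions): a function qualifies only if its body reads nothing but its own parameters and locals - no calls, no attribute access, no globals.

```python
# Toy "kernel eligibility" analysis of the kind such a compiler pass
# would need. Rules are deliberately conservative and illustrative only.
import ast

def is_kernel_eligible(src: str) -> bool:
    fn = ast.parse(src).body[0]
    allowed = {a.arg for a in fn.args.args}   # parameters are fine
    for node in ast.walk(fn):
        if isinstance(node, (ast.Call, ast.Attribute, ast.Global)):
            return False                      # side effects / outside state
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                allowed.add(node.id)          # locals are fine
            elif node.id not in allowed:
                return False                  # free variable: reject
    return True

print(is_kernel_eligible("def f(x, y):\n    return x * y\n"))  # True
print(is_kernel_eligible("def g(x):\n    return print(x)\n"))  # False
```

A production analysis would also have to reason about aliasing, loops, and allocation, which is exactly why (as noted above) heap-allocating Nim code could not target such a backend.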
