Translation from CUDA.jl to KernelAbstractions.jl is the only translation supported right now.
Sample usage:
julia> using Julana; main("--input fileinput1.jl --output fileoutput.jl --backend=(CUDA|ONEAPI|METAL|CPU|AMD) --recursive")
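To illustrate the kind of rewrite involved, here is a hand-written sketch (not actual Julana output) of a minimal CUDA.jl kernel and a KernelAbstractions.jl counterpart; the kernel names are illustrative:

```julia
# CUDA.jl version (input): each thread adds one element.
using CUDA

function vadd_cuda!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end
# launched as: @cuda threads=256 blocks=cld(length(c), 256) vadd_cuda!(c, a, b)

# KernelAbstractions.jl version (output): backend-agnostic.
using KernelAbstractions

@kernel function vadd_ka!(c, @Const(a), @Const(b))
    i = @index(Global)
    @inbounds c[i] = a[i] + b[i]
end
# launched as: vadd_ka!(backend, 256)(c, a, b; ndrange=length(c))
```

The KernelAbstractions version replaces explicit thread/block index arithmetic with `@index(Global)`, which is what makes the same kernel runnable on any of the supported backends.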
Working Julia Rodinia benchmarks:
- backprop
- bfs
- hotspot
- leukocyte
- lud
- nn
- nw
- particlefilter_double
- pathfinder
- streamcluster
Working ExaTronKernels benchmarks:
Before running Juliana on ExaTronKernels, remove the triple-quoted (`"""`) comment about CUDA.jl at the beginning of the file, since Juliana does not yet support multiline comments. Additionally, mark the `n` variable as `const` and remove `n` as an argument from all functions, because KernelAbstractions does not allow dynamic shared-memory allocation. All of these changes are reflected in my ExaTronKernels fork.
- dicf
- dicfs
- dcauchy
- dtrpcg0
- dprsrch
- daxpy
- dssyax
- dmid
- dgpstep
- dbreakpt
- dnrm2
- nrm2
- dcopy
- ddot
- dscal
- dtrqsol
- dspcg
- dgpnorm
- dtron
- driver_kernel -> requires changes to how functions called from inside kernels are handled.
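The reason `n` must be `const` (rather than a function argument) is that KernelAbstractions allocates shared memory statically with `@localmem`, whose size must be known when the kernel is compiled. A minimal sketch of the pattern, with illustrative names (not the actual ExaTronKernels code):

```julia
using KernelAbstractions

const n = 32  # shared-memory size must be a compile-time constant

# `n` is captured from global scope instead of being passed in,
# because @localmem cannot take a runtime size.
@kernel function tile_sum!(out, @Const(x))
    tile = @localmem Float64 (n,)   # static shared-memory allocation
    li = @index(Local)
    gi = @index(Global)
    @inbounds tile[li] = x[gi]
    @synchronize
    # first thread of each workgroup reduces its tile
    if li == 1
        s = zero(eltype(out))
        for j in 1:n
            s += tile[j]
        end
        @inbounds out[@index(Group)] = s
    end
end
```

With `n` a function argument, `@localmem Float64 (n,)` would be a dynamic allocation, which KernelAbstractions rejects; hoisting it to a `const` makes the size visible at kernel compile time.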