A tool for converting specific Julia GPU code written in CUDA.jl into abstract multi-backend code with KernelAbstractions.jl.


JULIANA (Julia Unification Layer for Intel, AMD, Nvidia and Apple)

Translation from CUDA.jl -> KernelAbstractions.jl is the only translation supported right now.

Sample usage:

> using Juliana; main("--input fileinput1.jl --output fileoutput.jl --backend=(CUDA|ONEAPI|METAL|CPU|AMD) --recursive")
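
To illustrate the kind of rewrite this translation performs, here is a hand-written sketch (not actual Juliana output) of a saxpy kernel in CUDA.jl next to a KernelAbstractions.jl equivalent, assuming y and x are device arrays and a is a scalar:

    # CUDA.jl: indexing and the launch macro are CUDA-specific.
    using CUDA

    function saxpy_cuda!(y, a, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] = a * x[i] + y[i]
        end
        return nothing
    end

    @cuda threads=256 blocks=cld(length(y), 256) saxpy_cuda!(y, a, x)

    # KernelAbstractions.jl: the same kernel, portable across backends.
    using KernelAbstractions

    @kernel function saxpy_ka!(y, a, @Const(x))
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    backend = get_backend(y)
    saxpy_ka!(backend, 256)(y, a, x, ndrange = length(y))
    KernelAbstractions.synchronize(backend)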

Working Julia Rodinia benchmarks:

  • backprop
  • bfs
  • hotspot
  • leukocyte
  • lud
  • nn
  • nw
  • particlefilter_double
  • pathfinder
  • streamcluster

Working ExaTronKernels benchmarks:

It's necessary to remove the """-quoted comment mentioning CUDA.jl at the beginning of the file, as Juliana doesn't support multiline comments yet. It's also necessary to mark the n variable as const and to remove n as an argument from all functions, since KernelAbstractions doesn't allow dynamic shared-memory allocation (sketched below). All these changes are reflected in my ExaTronKernels fork.
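
A minimal sketch of that shared-memory change (illustrative names, not code from ExaTronKernels):

    using CUDA

    # CUDA.jl: the shared-memory size n can be a runtime kernel argument,
    # backed by dynamic shared memory sized at launch (shmem = ...).
    function scale_cuda!(out, x, n)
        sm = CuDynamicSharedArray(Float64, n)
        i = threadIdx().x
        if i <= n
            sm[i] = 2.0 * x[i]
            out[i] = sm[i]
        end
        return nothing
    end

    using KernelAbstractions

    # KernelAbstractions.jl: @localmem needs a compile-time size, so n is
    # declared const and dropped from every function's argument list.
    const n = 32

    @kernel function scale_ka!(out, @Const(x))
        sm = @localmem Float64 (n,)
        i = @index(Local)
        if i <= n
            sm[i] = 2.0 * x[i]
            out[i] = sm[i]
        end
    end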

  • dicf
  • dicfs
  • dcauchy
  • dtrpcg0
  • dprsrch
  • daxpy
  • dssyax
  • dmid
  • dgpstep
  • dbreakpt
  • dnrm2
  • nrm2
  • dcopy
  • ddot
  • dscal
  • dtrqsol
  • dspcg
  • dgpnorm
  • dtron
  • driver_kernel -> requires changes to how functions called from inside kernels are handled (see the sketch below).
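
On that last point, a hedged guess at what such a change looks like (illustrative code, not taken from driver_kernel): in CUDA.jl a helper called from a kernel can read threadIdx() itself, whereas KernelAbstractions only exposes indices via @index inside the @kernel body, so the index has to be passed to helpers explicitly:

    using CUDA

    # CUDA.jl: a device helper can query the thread index on its own.
    @inline function set_one!(y)
        i = threadIdx().x
        @inbounds y[i] = 1.0f0
        return nothing
    end

    using KernelAbstractions

    # KernelAbstractions.jl: @index is only valid inside the @kernel body,
    # so the index is obtained there and handed to the helper.
    @inline function set_one!(y, i)
        @inbounds y[i] = 1.0f0
        return nothing
    end

    @kernel function ones_kernel!(y)
        i = @index(Global)
        set_one!(y, i)
    end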
