An open-source CUDA compiler that targets AMD GPUs, with more architectures planned. Written in 15,000 lines of C99. Zero LLVM dependency. Compiles .cu files straight to GFX11 machine code and spits out ELF .hsaco binaries that AMD GPUs can actually run.
This is what happens when you look at NVIDIA's walled garden and think "how hard can it be?" The answer is: quite hard, actually, but I did it anyway.
Takes CUDA C source code, the same .cu files you'd feed to nvcc, and compiles them to AMD RDNA 3 (gfx1100) binaries. No LLVM. No HIP translation layer. No "convert your CUDA to something else first." Just a lexer, a parser, an IR, and roughly 1,700 lines of hand-written instruction selection that would make a compiler textbook weep.
```
┌──────────────────────────────────────────────────────────────┐
│                      BarraCUDA Pipeline                      │
├──────────────────────────────────────────────────────────────┤
│  Source (.cu)                                                │
│      ↓                                                       │
│  Preprocessor → #include, #define, macros, conditionals      │
│      ↓                                                       │
│  Lexer → Tokens                                              │
│      ↓                                                       │
│  Parser (Recursive Descent) → AST                            │
│      ↓                                                       │
│  Semantic Analysis → Type checking, scope resolution         │
│      ↓                                                       │
│  BIR (BarraCUDA IR) → SSA form, typed instructions           │
│      ↓                                                       │
│  mem2reg → Promotes allocas to SSA registers                 │
│      ↓                                                       │
│  Instruction Selection → AMDGPU machine instructions         │
│      ↓                                                       │
│  Register Allocation → VGPR/SGPR assignment                  │
│      ↓                                                       │
│  Binary Encoding → GFX11 instruction words                   │
│      ↓                                                       │
│  ELF Emission → .hsaco ready for the GPU                     │
│      ↓                                                       │
│  Your kernel runs on ya silicon                              │
└──────────────────────────────────────────────────────────────┘
```
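The mem2reg stage is where the SSA promise gets kept. Here's a minimal sketch of the core idea, using hypothetical IR types rather than BarraCUDA's actual BIR structs, and handling only a single basic block (the real pass also has to place phi nodes where control flow merges):

```c
/* Store-to-load forwarding within one basic block: the simplest case
 * of mem2reg. Hypothetical IR types for illustration only -- the real
 * structures in bir.c / bir_mem2reg.c differ. */
typedef enum { OP_ALLOCA, OP_STORE, OP_LOAD, OP_ADD } Op;

typedef struct {
    Op  op;
    int dst;    /* value id defined by this instruction                 */
    int a, b;   /* ADD: value ids; STORE: a=slot, b=value; LOAD: a=slot */
} Inst;

enum { MAX_SLOTS = 256 };

static void mem2reg_block(Inst *insts, int n)
{
    int cur[MAX_SLOTS];                     /* last value stored per slot */
    for (int s = 0; s < MAX_SLOTS; s++) cur[s] = -1;

    for (int i = 0; i < n; i++) {
        Inst *I = &insts[i];
        if (I->op == OP_STORE) {
            cur[I->a] = I->b;               /* slot a now holds value b   */
        } else if (I->op == OP_LOAD && cur[I->a] >= 0) {
            /* Rewrite later uses of the loaded value to the stored one;
             * the load (and eventually the alloca) becomes dead code.   */
            int from = I->dst, to = cur[I->a];
            for (int j = i + 1; j < n; j++) {
                Inst *J = &insts[j];
                if (J->op == OP_ADD) {
                    if (J->a == from) J->a = to;
                    if (J->b == from) J->b = to;
                } else if (J->op == OP_STORE && J->b == from) {
                    J->b = to;
                }
            }
        }
    }
}
```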
Every single encoding has been validated against llvm-objdump with zero decode failures. I didn't use LLVM to compile, but I did use it to check my homework.
```
# It's C99. It builds with gcc. There are no dependencies.
make
# That's it. No cmake. No autoconf. No 47-step build process.
# If this doesn't work, your gcc is broken, not the Makefile.
```

- A C99 compiler (gcc, clang, whatever you've got)
- A will to live (optional but recommended)
- LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.
```
# Compile to AMD GPU binary
./barracuda --amdgpu-bin kernel.cu -o kernel.hsaco

# Dump the IR (for debugging or curiosity)
./barracuda --ir kernel.cu

# Just parse and dump the AST
./barracuda --ast kernel.cu

# Run semantic analysis
./barracuda --sema kernel.cu
```

The following CUDA features compile to working GFX11 machine code:
- `__global__`, `__device__`, `__host__` function qualifiers
- `threadIdx`, `blockIdx`, `blockDim`, `gridDim` builtins
- Structs, enums, typedefs, namespaces
- Pointers, arrays, pointer arithmetic
- All C control flow: `if/else`, `for`, `while`, `do-while`, `switch/case`, `goto`/label
- Short-circuit `&&` and `||`
- Ternary operator
- Templates (basic instantiation)
- Multiple return paths, `continue`, `break`
- `__shared__` memory (allocated from LDS, properly tracked)
- `__syncthreads()` → `s_barrier`
- Atomic operations: `atomicAdd`, `atomicSub`, `atomicMin`, `atomicMax`, `atomicExch`, `atomicCAS`, `atomicAnd`, `atomicOr`, `atomicXor`
- Warp intrinsics: `__shfl_sync`, `__shfl_up_sync`, `__shfl_down_sync`, `__shfl_xor_sync`
- Warp votes: `__ballot_sync`, `__any_sync`, `__all_sync`
- Vector types: `float2`, `float3`, `float4`, `int2`, `int3`, `int4` with `.x`/`.y`/`.z`/`.w` access
- Half precision: `__half`, `__float2half()`, `__half2float()`
- `__launch_bounds__` (parsed, propagated, enforces VGPR caps)
- Cooperative groups: `cooperative_groups::this_thread_block()` with `.sync()`, `.thread_rank()`, `.size()`
- Operator overloading
- Full C preprocessor: `#include`, `#define`/`#undef`, function-like macros, `#ifdef`/`#ifndef`/`#if`/`#elif`/`#else`/`#endif`, `#pragma`, `#error`, `-I`/`-D` flags
- Error recovery (reports multiple errors without hanging)
- Source location tracking in IR dumps
- Struct pass-by-value
```cuda
__global__ void vector_add(float *c, float *a, float *b, int n)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < n)
        c[idx] = a[idx] + b[idx];
}
```

```
$ ./barracuda --amdgpu-bin vector_add.cu -o vector_add.hsaco
wrote vector_add.hsaco (528 bytes code, 1 kernels)
```

No LLVM required :-)
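Vector add is table stakes. Here's an illustrative kernel (not one from the repo) that stays inside the supported feature list above: shared memory, `__syncthreads()`, the ternary operator, and an atomic, while steering around the known gaps (no compound assignment, no 2D shared arrays):

```cuda
/* Illustrative only: a block-level sum reduction built from supported
 * features. Launch with 256 threads per block. */
__global__ void block_reduce(float *out, float *in, int n)
{
    __shared__ float buf[256];              /* LDS, 1D as required */

    int tid = threadIdx.x;
    int idx = threadIdx.x + blockIdx.x * blockDim.x;

    buf[tid] = (idx < n) ? in[idx] : 0.0f;  /* ternary: supported  */
    __syncthreads();                        /* lowers to s_barrier */

    for (int stride = 128; stride > 0; stride = stride / 2) {
        if (tid < stride)
            buf[tid] = buf[tid] + buf[tid + stride];  /* no += yet */
        __syncthreads();
    }

    if (tid == 0)
        atomicAdd(out, buf[0]);             /* supported atomic    */
}
```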
| File | Lines | What It Does |
|---|---|---|
| `lexer.c` | 747 | Tokenises CUDA C source |
| `preproc.c` | 1,370 | C preprocessor (macros, includes, conditionals) |
| `parser.c` | 1,500 | Recursive descent parser → AST |
| `sema.c` | 1,725 | Type checking, scope resolution, overload resolution |
| `bir.c` + `bir_lower.c` | 3,032 | SSA intermediate representation + AST→BIR lowering |
| `bir_mem2reg.c` | 965 | Promotes stack allocas to SSA registers |
| `bir_print.c` | 579 | IR pretty printer with source location annotations |
| `amdgpu_isel.c` | 1,788 | Instruction selection: BIR → AMDGPU machine ops |
| `amdgpu_emit.c` | 1,735 | Register allocation + GFX11 binary encoding + ELF emission |
| `main.c` | 317 | CLI driver |
| **Total** | **15,117** | |
All data structures use pre-allocated fixed-size arrays. No malloc in hot paths. No recursion. Bounded loops everywhere. The kind of code that would make JPL's coding standards committee nod approvingly before going back to landing things on Mars.
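In that spirit, here's a sketch of the allocation pattern, with hypothetical names (the real pools live inside the individual passes):

```c
/* Fixed-allocation style as described above: a static array with a
 * hard cap and a bump index. Running out is a loud compile-time
 * failure, never a realloc. Names here are illustrative. */
#include <stdio.h>
#include <stdlib.h>

enum { MAX_INSTS = 65536 };

typedef struct { int op, dst, a, b; } Inst;   /* placeholder shape */

static Inst g_insts[MAX_INSTS];
static int  g_inst_count;

static Inst *inst_alloc(void)
{
    if (g_inst_count >= MAX_INSTS) {
        fprintf(stderr, "error: instruction pool exhausted (max %d)\n",
                MAX_INSTS);
        exit(1);            /* fail loudly at the cap, no fallback */
    }
    return &g_insts[g_inst_count++];
}
```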
Being honest about limitations is important. Here's what's missing:
- `unsigned` as a bare type specifier (use `unsigned int` or just `int`)
- `+=`, `-=`, `>>=` and friends (compound assignment; spell it out for now)
- `const` qualifier
- `__constant__` memory
- 2D array declarations in shared memory (`__shared__ float a[16][16]`; flatten to 1D)
- Integer literal suffixes (`0xFFu`, `1ULL`)
- Parameter reassignment in `__device__` functions (use local variables)
- Textures and surfaces
- Dynamic parallelism (device-side kernel launch)
- Multiple translation units
- Host code generation (only device code is compiled)
None of these are architectural blockers. They're all "haven't got round to it yet" items.
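Until then, the workarounds are mechanical. An illustrative snippet (not from the repo) that dodges each gap:

```cuda
/* Illustrative workarounds for the current gaps. */
__global__ void workarounds(float *out, float *in, int n)
{
    /* __shared__ float tile[16][16];  <- 2D not yet: flatten to 1D */
    __shared__ float tile[256];

    int x = threadIdx.x;
    int y = threadIdx.y;
    int i = y * 16 + x;                 /* manual 2D indexing        */

    if (i < n) {
        tile[i] = in[i];

        /* tile[i] += 1.0f;            <- compound assignment: no   */
        tile[i] = tile[i] + 1.0f;      /* spell it out              */

        /* unsigned mask = 0xFFu;      <- bare unsigned + suffix: no */
        unsigned int mask = 0xFF;      /* both gaps avoided          */

        out[i] = tile[i] * (float)mask;
    }
}
```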
14 test files, 35+ kernels, ~1,700 BIR instructions, ~27,000 bytes of machine code:
- `vector_add.cu` - The "hello world" of GPU computing
- `cuda_features.cu` - Atomics, warp ops, barriers, gotos, switch, short-circuit
- `test_tier12.cu` - Vectors, shared memory, operator overloading
- `notgpt.cu` - AI-generated CUDA with extremely sarcastic comments (tiled SGEMM, reductions, histograms, prefix scan, stencils, half precision, cooperative groups, and the "kitchen sink" kernel)
- `stress.cu` - N-body simulation, nested control flow, bit manipulation, struct pass-by-value, chained function calls
- `canonical.cu` - Canonical patterns from NVIDIA samples adapted for the parser
- `test_errors.cu` - Deliberate syntax errors to verify error recovery
- `test_launch_bounds.cu` - `__launch_bounds__` parsing and VGPR cap enforcement
- `test_coop_groups.cu` - Cooperative groups lowering
- Plus preprocessor tests, template tests, unsigned integer tests
Fix the known gaps: compound assignment operators, bare `unsigned`, integer literal suffixes, `const`, parameter reassignment. These are all small parser/lowerer changes. The goal is to compile real-world .cu files without modifications.
The generated code works but isn't winning any benchmarks. Priorities:
- Instruction scheduling (hide memory latency)
- Better register allocation (currently linear scan, consider graph colouring)
- Constant folding and dead code elimination
- Loop-invariant code motion
- Occupancy tuning based on register pressure
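Constant folding is the lowest-hanging fruit on that list. A sketch over hypothetical IR nodes (the real BIR instruction layout in bir.c differs):

```c
/* Fold integer adds and multiplies whose operands are both
 * compile-time constants. One linear pass suffices when instructions
 * are in SSA order, since operands always precede their uses --
 * earlier folds feed later ones within the same sweep. */
#include <stdint.h>

typedef enum { OP_CONST, OP_ADD, OP_MUL } Op;

typedef struct {
    Op      op;
    int64_t imm;        /* value, when op == OP_CONST  */
    int     a, b;       /* operand instruction indices */
} Inst;

static void fold_constants(Inst *insts, int n)
{
    for (int i = 0; i < n; i++) {
        Inst *I = &insts[i];
        if (I->op != OP_ADD && I->op != OP_MUL)
            continue;
        Inst *A = &insts[I->a], *B = &insts[I->b];
        if (A->op == OP_CONST && B->op == OP_CONST) {
            I->imm = (I->op == OP_ADD) ? A->imm + B->imm
                                       : A->imm * B->imm;
            I->op = OP_CONST;   /* operands go dead; that's DCE's job */
        }
    }
}
```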
The IR (BIR) is target-independent. The backend is cleanly separated. Adding a new target means writing a new isel + emit pair. Candidates:
- Tenstorrent - RISC-V based AI accelerators. Open ISA. Very different execution model (tile-based, not SIMT) but the IR maps well.
- Intel Arc - Xe architecture. Would give BarraCUDA coverage across all three major GPU vendors.
- RISC-V Vector Extension - For when GPUs are too mainstream and you want to run CUDA on a softcore.
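If the backend split ever needs formalising, the seam could be as small as this (a hypothetical interface, for illustration only; today the driver calls the amdgpu_isel.c / amdgpu_emit.c pair directly):

```c
#include <stddef.h>

typedef struct BirModule BirModule;   /* target-independent IR        */
typedef struct MachFunc  MachFunc;    /* target machine instructions  */

/* Each target supplies an isel + emit pair behind one table entry. */
typedef struct Target {
    const char *name;                             /* e.g. "gfx1100"    */
    MachFunc  *(*isel)(const BirModule *m);       /* BIR -> machine ops */
    size_t     (*emit)(const MachFunc *f,         /* regalloc + encode  */
                       unsigned char *buf, size_t cap);
} Target;
```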
If you're considering writing your own AMDGPU backend, here are the things that will ruin your afternoon:
- SOP1 prefix is `0xBE800000`, not what you'd expect from the docs
- SOPC prefix is `0xBF000000`
- VOP3 VDST is at bits `[7:0]`, not `[15:8]` like a sensible person would assume
- Null SADDR is `0x7C` for global memory, `0xFC` for scratch
- RDNA 3 is Wave32 by default, not Wave64 like GCN
- The ISA manual is 500 pages and contradicts itself at least twice
All 1,735 lines of amdgpu_emit.c are a testament to reading those pages so you don't have to.
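To make the first one concrete, here's roughly what packing a SOP1 word looks like. Field layout follows the GFX11 ISA manual; treat the opcode in the usage comment as something you look up, not something you trust from here:

```c
#include <stdint.h>

/* The 0xBE800000 prefix above is the 9-bit ENCODING field in bits
 * [31:23]; SDST sits in [22:16], OP in [15:8], SSRC0 in [7:0]. */
#define SOP1_PREFIX 0xBE800000u

static uint32_t encode_sop1(uint32_t op, uint32_t sdst, uint32_t ssrc0)
{
    return SOP1_PREFIX
         | ((sdst  & 0x7Fu) << 16)
         | ((op    & 0xFFu) <<  8)
         |  (ssrc0 & 0xFFu);
}

/* Usage: encode_sop1(op, 0, 1) for "s_mov_b32 s0, s1", with op taken
 * from the manual's SOP1 opcode table -- and cross-checked against
 * llvm-objdump, as above. */
```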
Found a bug? Want to discuss the finer points of AMDGPU instruction encoding? Need someone to commiserate with about the state of GPU computing?
Open an issue if there's anything you want to discuss. Or don't. I'm not your mum.
Based in New Zealand, where it's already tomorrow and the GPUs are just as confused as everywhere else.
Apache 2.0. Do whatever you want. If this compiler somehow ends up in production, I'd love to hear about it, mostly so I can update my LinkedIn with something more interesting than "wrote a CUDA compiler for fun".