alox

GPU-Accelerated, Distributed, Actor-Model Language

Goals:

  • Run code on both the CPU and the GPU
  • Run code across many machines
  • Use the actor model for concurrency

This is very much a work in progress; nothing works yet.

Roadmap:

  • Frontend
    • Lexer
    • Parser WIP
    • Start parsing imported modules immediately
  • Middle
    • AST Structure
    • Thread-safe IR Structure
    • Concurrent IR Symbol Resolution (see the Rust sketch after this list)
    • AST Expression -> IR Instruction conversion
    • Validation passes
  • Error messages
    • Parser error messages
    • Validation messages
  • Backend
  • Runtime
    • Schedulers
    • Cross-node communication
    • GC for actors?
  • Really dig into semantics
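
To make the "Thread-safe IR Structure" and "Concurrent IR Symbol Resolution" items a bit more concrete: the rough idea is an IR whose symbol table can be filled in and queried from several worker threads at once. Below is a minimal Rust sketch of that shape; it is purely illustrative (the names `IrModule`, `IrFunction`, `define`, and `resolve` are invented here), not actual alox compiler code.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// One lowered function; fields kept minimal for the sketch.
#[derive(Debug)]
struct IrFunction {
    name: String,
    instructions: Vec<String>, // stand-in for real IR instructions
}

// A module whose symbol table may be written by many threads at once.
#[derive(Default)]
struct IrModule {
    symbols: RwLock<HashMap<String, Arc<IrFunction>>>,
}

impl IrModule {
    // Register a lowered function so other workers can refer to it by name.
    fn define(&self, func: IrFunction) {
        self.symbols
            .write()
            .unwrap()
            .insert(func.name.clone(), Arc::new(func));
    }

    // Resolve a symbol; None means it has not been lowered yet.
    fn resolve(&self, name: &str) -> Option<Arc<IrFunction>> {
        self.symbols.read().unwrap().get(name).cloned()
    }
}

fn main() {
    let module = Arc::new(IrModule::default());

    // Pretend each worker thread is lowering one parsed module concurrently.
    let workers: Vec<_> = ["ping", "pong", "main"]
        .into_iter()
        .map(|name| {
            let module = Arc::clone(&module);
            thread::spawn(move || {
                module.define(IrFunction {
                    name: name.to_string(),
                    instructions: vec![format!("ret // body of {name}")],
                });
            })
        })
        .collect();

    for worker in workers {
        worker.join().unwrap();
    }

    // Every symbol defined by any worker is now visible.
    println!("resolved: {:?}", module.resolve("pong"));
}
```

With a structure like this, each parsed module can be lowered on its own thread, and a symbol defined by one worker becomes visible to the others as soon as it exists.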

Language Ideas

  • Compile-time code execution
  • Strong type system
    • Algebraic Data Types
    • Unique & Borrowed Types (see the Rust sketch after this list)
  • Automatic Versioning
    • Enforce public APIs
  • Clean syntax
  • Concurrent compiler pipeline
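
The type-system ideas above map closely onto features that already exist in languages such as Rust, so a short Rust snippet can show the flavour being aimed for: an algebraic data type, a function that only borrows a value, and one that takes unique ownership of it. This is an illustration of the concepts only, not alox syntax.

```rust
// An algebraic data type: a message is exactly one of these variants.
enum Message {
    Ping { n: i32 },
    Pong { n: i32 },
    Stop,
}

// Borrowed access: `describe` only reads the message, it never owns it.
fn describe(msg: &Message) -> String {
    match msg {
        Message::Ping { n } => format!("ping({n})"),
        Message::Pong { n } => format!("pong({n})"),
        Message::Stop => "stop".to_string(),
    }
}

// Unique (owned) access: `consume` takes the message and drops it.
fn consume(msg: Message) {
    println!("handled {}", describe(&msg));
}

fn main() {
    let msg = Message::Ping { n: 2 };
    println!("{}", describe(&msg)); // borrow: msg is still usable afterwards
    consume(msg);                   // move: msg can no longer be used here
}
```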

A sketch of what actor code might look like in alox itself: two actors bounce a value back and forth, kicked off from a Main actor.

```
actor A {
    behave ping(n: Int32, b: &B) {
        b.pong(n, &this)
    }
}

actor B {
    behave pong(n: Int32, a: &A) {
        // split n into its high and low nibbles
        let x = (n & 0xF0) >> 4
        let y = n & 0x0F
        let arr = [x, y]
        process(&mut arr)
        // recombine the processed nibbles and send the result back
        let z = (arr[0] << 4) | arr[1]
        a.ping(z, &this)
    }

    fun process(arr: &mut [Int32]) {
        for (x in arr) {
            x *= 17 + (2 * x)
        }
    }
}

actor Main {
    behave main() {
        let n = 2
        let a = new A()
        let b = new B()
        a.ping(n, &b)
    }
}
```
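
On the runtime side, one common way to implement behaviours like `ping` and `pong` is to give each actor a private mailbox that is drained one message at a time, so the actor's state never needs locking. The Rust sketch below shows that shape with the simplest possible scheduling, one OS thread per actor; the real alox schedulers and cross-node communication are still open roadmap items, and every name here (`Msg`, `spawn_pong_actor`) is invented for the sketch.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread::{self, JoinHandle};

// The messages this actor's mailbox accepts.
enum Msg {
    Ping { n: i32, reply_to: Sender<i32> },
    Stop,
}

// Spawn an actor: here, one thread that owns its state and drains one mailbox.
fn spawn_pong_actor() -> (Sender<Msg>, JoinHandle<()>) {
    let (tx, rx) = channel::<Msg>();
    let handle = thread::spawn(move || {
        let mut handled = 0; // private state: only this thread ever touches it
        for msg in rx {
            match msg {
                Msg::Ping { n, reply_to } => {
                    handled += 1;
                    // Mirror the nibble shuffle from the alox example above.
                    let x = (n & 0xF0) >> 4;
                    let y = n & 0x0F;
                    let _ = reply_to.send((x << 4) | y);
                }
                Msg::Stop => break,
            }
        }
        println!("actor handled {handled} message(s)");
    });
    (tx, handle)
}

fn main() {
    let (actor, handle) = spawn_pong_actor();
    let (reply_tx, reply_rx) = channel();

    actor.send(Msg::Ping { n: 0x2A, reply_to: reply_tx }).unwrap();
    println!("reply: 0x{:X}", reply_rx.recv().unwrap());

    actor.send(Msg::Stop).unwrap();
    handle.join().unwrap();
}
```

A real scheduler would multiplex many actor mailboxes over a small pool of worker threads, and, per the goals above, presumably also across GPU queues and remote nodes.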
