On-device machine learning for Swift. Autograd engine, reinforcement learning, and demo apps.
No Python. No server. No cloud. Everything trains directly on Apple devices.
| Repository | Description | Status |
|---|---|---|
| SwiftGrad | Autograd engine and neural network library. Reverse-mode autodiff in ~250 lines of Swift. Inspired by Karpathy's micrograd. | v0.2.0 |
| SwiftRL | On-device reinforcement learning. REINFORCE, DQN, environments (GridWorld, Snake, CartPole, Bandit), Adam optimizer. | v0.2.0 |
| SwiftRLDemos | macOS demo app showcasing RL training with live visualization. | In development |
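To make the autograd row concrete, here is a minimal reverse-mode autodiff sketch in the spirit of micrograd. This is illustrative only — the names (`Value`, `backward()`) are assumptions, not SwiftGrad's actual API:

```swift
// Minimal reverse-mode autodiff sketch (micrograd-style).
// NOTE: illustrative only — not SwiftGrad's real API.
final class Value {
    var data: Double
    var grad: Double = 0
    var backwardFn: () -> Void = {}
    var parents: [Value] = []

    init(_ data: Double) { self.data = data }

    static func + (lhs: Value, rhs: Value) -> Value {
        let out = Value(lhs.data + rhs.data)
        out.parents = [lhs, rhs]
        out.backwardFn = {            // d(out)/d(lhs) = d(out)/d(rhs) = 1
            lhs.grad += out.grad
            rhs.grad += out.grad
        }
        return out
    }

    static func * (lhs: Value, rhs: Value) -> Value {
        let out = Value(lhs.data * rhs.data)
        out.parents = [lhs, rhs]
        out.backwardFn = {            // product rule
            lhs.grad += rhs.data * out.grad
            rhs.grad += lhs.data * out.grad
        }
        return out
    }

    func backward() {
        // Topologically sort the graph, then propagate gradients backward.
        var topo: [Value] = []
        var visited = Set<ObjectIdentifier>()
        func build(_ v: Value) {
            guard visited.insert(ObjectIdentifier(v)).inserted else { return }
            v.parents.forEach(build)
            topo.append(v)
        }
        build(self)
        grad = 1
        for v in topo.reversed() { v.backwardFn() }
    }
}

let a = Value(2), b = Value(3)
let c = a * b + a        // c = 2*3 + 2 = 8
c.backward()             // a.grad = b + 1 = 4, b.grad = a = 2
```

The same chain-rule bookkeeping, generalized to tensors and layers, is what makes neural-network training possible on top of a small core.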
Your App (game, fitness, spatial computing)
```
        │
```
SwiftRL (RL algorithms, environments)
        │
SwiftGrad (autograd engine, neural networks)
```
SwiftGrad computes gradients. SwiftRL uses those gradients to train RL agents. Your app provides the environment.
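"Your app provides the environment" can be sketched as a protocol conformance. The protocol below is a hypothetical shape — its name and requirements are assumptions for illustration, not SwiftRL's documented API:

```swift
// Hypothetical environment protocol — illustrative, not SwiftRL's actual API.
protocol Environment {
    associatedtype Observation
    associatedtype Action
    mutating func reset() -> Observation
    mutating func step(_ action: Action) -> (observation: Observation,
                                             reward: Double,
                                             done: Bool)
}

// A one-step toy environment: action 0 earns reward 1, anything else earns 0.
struct CoinFlip: Environment {
    mutating func reset() -> [Double] { [0] }
    mutating func step(_ action: Int) -> (observation: [Double],
                                          reward: Double,
                                          done: Bool) {
        ([0], action == 0 ? 1.0 : 0.0, true)
    }
}

var env = CoinFlip()
_ = env.reset()
let result = env.step(0)   // reward 1.0, episode done
```

An agent only needs `reset`/`step` semantics, so a game loop, a fitness session, or a visionOS scene can each back an environment without touching the learning code.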
Swift has no native reinforcement learning library. The mainstream RL tools (Stable-Baselines3, CleanRL, RLlib, Unity ML-Agents) all require Python and cannot run on iOS, iPadOS, or visionOS.
SwiftAutograd fills that gap with pure Swift libraries that train on-device.
```swift
// Package.swift
dependencies: [
    .package(url: "https://github.com/SwiftAutograd/SwiftRL.git", from: "0.2.0")
]
```

```swift
import SwiftRL

var env = GridWorld(size: 6)
var agent = DQN(observationSize: 2, hiddenSizes: [16], actionCount: 4)
let rewards = agent.train(environment: &env, episodes: 500)
```

All repositories are MIT licensed.