माया - illusion, the rendered world. Take the gradient of it.
A differentiable physics substrate in Rust. Every simulator is a function θ → trajectory; maya is the machinery that lets you take ∂loss/∂θ through it - which is to say, the machinery that turns simulation into inference, fitting, and control.
Differentiable physics is the right layer for the next decade of physical-AI work - robot learning, design optimization, system identification, fluid control, soft-body manipulation. The existing options are PyTorch (slow tape, heavy runtime) or JAX (XLA, but not built for stateful simulation loops).
A Rust crate with a fast tape, native tensor type, and gradient-correct contact constraints would beat both for a class of problems where (a) the simulator is the bottleneck and (b) you want to embed it in a non-Python system. maya is that crate, built up from the autodiff substrate out.
| v | Layer | Status |
|---|---|---|
| 0.0 | Reverse-mode scalar autodiff via Wengert tape | shipped |
| 0.1 | Tensor<f64> + autodiff lifted to add / mul / matmul / sum | shipped |
| 0.2 | Scalar-tensor mul + differentiable particle dynamics (explicit + semi-implicit Euler) | shipped |
| 0.3 | RK4 for state-independent acceleration | shipped |
| 0.4 | RK4 with state-dependent force closure; harmonic-oscillator gradients | shipped |
| 0.5 | NumPy-style broadcasting with gradient summed over broadcast axes | shipped |
| 0.6 | 2D rigid body integrator (free flight, semi-implicit, differentiable) | shipped |
| 0.7 | ReLU + 1D penalty-method ground contact (differentiable bounce; sketch below) | shipped |
| 0.8 | 2D contact with damping; tensor slicing; LCP contact via smoothed barrier | next |
| 0.9 | Soft bodies via FEM; differentiable mesh | planned |
| 1.0 | WGSL kernels for the hot path; CPU↔GPU tape | planned |
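A taste of the v0.7 contact layer, using the tensor-tape API demonstrated in the examples below. Ground contact becomes a penalty force k·relu(−p) that switches on under penetration, which keeps the bounce differentiable almost everywhere. This is a sketch, not the shipped surface: the `relu()`, `add(&x)`, and `scalar_mul()` method spellings on tape variables are assumptions here.

```rust
use maya::{Tensor, TensorTape};

let t = TensorTape::new();
let mut p = t.var(Tensor::from_data(vec![1.0], vec![1])); // height above ground
let mut v = t.var(Tensor::from_data(vec![0.0], vec![1])); // dropped from rest
let p0_idx = p.idx();
let g = t.var(Tensor::from_data(vec![-9.81], vec![1]));

let (k, dt) = (1.0e4, 1.0e-3);
for _ in 0..1500 {
    // Penalty force: zero in free flight, k·(-p) once p dips below ground.
    let penalty = p.scalar_mul(-1.0).relu().scalar_mul(k); // relu() spelling assumed
    let accel = g.add(&penalty);
    v = v.add(&accel.scalar_mul(dt)); // semi-implicit: velocity first,
    p = p.add(&v.scalar_mul(dt));     // then position with the new velocity
}

// The gradient of the final height w.r.t. the initial height flows
// through the impact, because relu is differentiable away from zero.
let grads = t.grad(p.sum());
println!("∂p_T/∂p_0 = {}", grads[p0_idx].data[0]);
```

The stiffness k trades penetration depth against how stiff the ODE gets; the v0.8 smoothed barrier in the table is the more principled version of the same trick.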
```rust
use maya::Tape;

let t = Tape::new();
let x = t.var(1.3);

// f(x) = exp(sin(x²))
let y = (x * x).sin().exp();

let grads = t.grad(y);
println!("∂f/∂x = {}", grads[x.idx()]); // ≈ -0.8345
```

Reverse mode is O(ops) storage and one linear sweep backwards. The chain rule is the whole algorithm; the tape just bookkeeps it.
```rust
use maya::{Tensor, TensorTape};

let t = TensorTape::new();
let a = t.var(Tensor::from_data(vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0], vec![2, 3]));
let b = t.var(Tensor::from_data(vec![7.0, 8.0, 9.0, 10.0, 11.0, 12.0], vec![3, 2]));

let c = a.matmul(&b); // 2×2 result
let loss = c.sum();   // scalar

let grads = t.grad(loss);
// grads[a.idx()] is ∂loss/∂A = ∂loss/∂C · B^T, shape [2, 3]
// grads[b.idx()] is ∂loss/∂B = A^T · ∂loss/∂C, shape [3, 2]
```
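A worked check of those identities for the values above: loss = Σᵢⱼ Cᵢⱼ, so the upstream gradient ∂loss/∂C is the all-ones 2×2 matrix. Then ∂loss/∂A = 𝟙 · B^T has every row equal to the row sums of B, [15, 19, 23], and ∂loss/∂B = A^T · 𝟙 has every column equal to the column sums of A, [5, 7, 9]. The intuition: A[i][k] multiplies each entry of row k of B exactly once inside the sum, so its gradient is Σⱼ B[k][j].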
```rust
use maya::{semi_implicit_euler_step, Tensor, TensorTape};

let t = TensorTape::new();
let mut p = t.var(Tensor::from_data(vec![0.0], vec![1]));
let mut v = t.var(Tensor::from_data(vec![1.5], vec![1]));
let v0_idx = v.idx();
let a = t.var(Tensor::from_data(vec![-9.81], vec![1]));

// Roll out 10 steps under constant gravity.
for _ in 0..10 {
    let (np, nv) = semi_implicit_euler_step(&p, &v, &a, 0.05);
    p = np;
    v = nv;
}

// Take ∂(sum p_final)/∂v_0. Should equal N·dt = 0.5.
let loss = p.sum();
let grads = t.grad(loss);
assert!((grads[v0_idx].data[0] - 0.5).abs() < 1e-10);
```

The gradient flows through every recorded add and scalar_mul along the trajectory. This is the foundation for differentiable simulation: fit forces to observations, optimize control inputs to hit a target, learn material parameters from rollouts.
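To make "optimize control inputs to hit a target" concrete: rebuild the tape each iteration, differentiate the rollout, descend on the control. A sketch, not confirmed API - the elementwise `mul(&x)` spelling is an assumption (v0.1 lists mul as lifted but doesn't show the signature), and subtraction is spelled `add` + `scalar_mul(-1.0)` to stay inside ops the examples confirm.

```rust
use maya::{semi_implicit_euler_step, Tensor, TensorTape};

// Goal: pick v0 so the particle sits at p = 2.0 after 10 steps of dt = 0.05
// under gravity. Semi-implicit Euler gives p_N = N·dt·v0 + dt²·a·N(N+1)/2,
// so the optimum is v0* ≈ 6.698.
let (target, dt) = (2.0, 0.05);
let mut v0 = 0.0_f64;

for _ in 0..200 {
    let t = TensorTape::new();
    let mut p = t.var(Tensor::from_data(vec![0.0], vec![1]));
    let mut v = t.var(Tensor::from_data(vec![v0], vec![1]));
    let v0_idx = v.idx();
    let a = t.var(Tensor::from_data(vec![-9.81], vec![1]));
    let goal = t.var(Tensor::from_data(vec![target], vec![1]));

    for _ in 0..10 {
        let (np, nv) = semi_implicit_euler_step(&p, &v, &a, dt);
        p = np;
        v = nv;
    }

    // Squared miss distance: loss = (p_N - target)².
    let err = p.add(&goal.scalar_mul(-1.0));
    let loss = err.mul(&err).sum(); // mul(&x) spelling assumed

    let grads = t.grad(loss);
    v0 -= 0.5 * grads[v0_idx].data[0]; // plain gradient descent on the control
}

assert!((v0 - 6.698).abs() < 1e-2);
```

Rebuilding the tape per iteration is the tape-based idiom: the graph is the rollout, so each optimization step records a fresh one.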
The matmul gradient identity is the reason autodiff frameworks exist - hand-deriving it correctly through arbitrary compositions is a pain. v0.1 validates these gradients entry-by-entry against central differences in the test suite.
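What that check looks like in miniature - a generic second-order central-difference probe (a sketch, not maya's actual test helpers), applied to the matmul loss from the example above:

```rust
// Central differences: ∂f/∂xᵢ ≈ (f(x + h·eᵢ) − f(x − h·eᵢ)) / 2h, error O(h²).
fn central_diff(f: impl Fn(&[f64]) -> f64, x: &[f64], h: f64) -> Vec<f64> {
    (0..x.len())
        .map(|i| {
            let mut xp = x.to_vec();
            let mut xm = x.to_vec();
            xp[i] += h;
            xm[i] -= h;
            (f(&xp) - f(&xm)) / (2.0 * h)
        })
        .collect()
}

// loss(A) = sum(A·B) with B fixed and A flattened row-major.
let b = [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]];
let loss = |a: &[f64]| -> f64 {
    let mut s = 0.0;
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..3 {
                s += a[i * 3 + k] * b[k][j];
            }
        }
    }
    s
};

let g = central_diff(loss, &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 1e-5);
// Matches ∂loss/∂A = 𝟙·B^T entry-by-entry: [15, 19, 23, 15, 19, 23].
```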
- Griewank & Walther - Evaluating Derivatives (2008). The reference text on automatic differentiation.
- de Avila Belbute-Peres et al. - End-to-End Differentiable Physics for Learning and Control (NeurIPS 2018).
- Hu et al. - DiffTaichi: Differentiable Programming for Physical Simulation (ICLR 2020).
MIT.