A 3D Gaussian Splatting training engine for Apple Silicon, built entirely on Metal. No external dependencies beyond system frameworks.
The entire training pipeline (projection, sorting, rasterization, SSIM loss, backward pass, Adam optimizer, and densification) runs as fused Metal compute shaders.
The result is a self-contained engine that trains a full-resolution Mip-NeRF 360 scene in ~90 seconds and renders it at ~350 FPS on an M4 Max.
Python and Swift bindings are provided, as well as a standalone C++ CLI.
The original 3D Gaussian Splatting implementation is CUDA-only. Ports to other frameworks (gsplat, taichi-3dgs, etc.) still depend on PyTorch for autograd, optimizer state, and tensor management. This means ~2GB of framework overhead, Python GIL contention, and no straightforward path to native macOS/iOS integration.
## Layout

```
core/metal/msplat_metal.metal   ← Compute kernels
core/src/                       ← C++ training loop, dataset loaders, SSIM eval
core/include/                   ← MTensor (lightweight GPU tensor), Model, API headers
python/bindings.cpp             ← nanobind Python module
swift/Sources/Msplat/           ← Swift package (via C API bridge)
cli/msplat.cpp                  ← C++ CLI
```
## Training pipeline

Each training step dispatches all work into one Metal command encoder:

```
Forward:
  project_and_sh_forward    ← fused 3D→2D projection + spherical harmonics
  prefix_sum + scatter      ← gaussian→tile intersection mapping
  bitonic_sort_per_tile     ← tile-local depth sort + inline data packing
  nd_rasterize_forward      ← per-pixel alpha compositing (16x16 tiles)
  ssim_h_fwd + ssim_v_fwd   ← separable 11-tap SSIM + L1 loss

Backward:
  ssim_h_bwd + ssim_v_bwd   ← separable SSIM gradient
  rasterize_backward        ← per-pixel backward compositing
  project_and_sh_backward   ← fused projection + SH VJP
  fused_adam (×6 groups)    ← optimizer step (means, scales, quats, opacity, SH)
  accumulate_grad_stats     ← gradient norms for densification
```
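The `prefix_sum + scatter` stage can be sketched on the host side. This is a NumPy stand-in for the Metal kernels, and the tile coverage lists here are hypothetical — the real kernel derives them from each gaussian's projected screen-space extent:

```python
import numpy as np

# Hypothetical per-gaussian tile coverage, for illustration only.
tiles_per_gaussian = [[0, 1], [1], [0, 2], [2]]

counts = np.array([len(t) for t in tiles_per_gaussian])
offsets = np.concatenate(([0], np.cumsum(counts)))  # exclusive prefix sum

# Scatter: write (tile_id, gaussian_id) pairs into a flat intersection list,
# each gaussian writing at its precomputed offset (no atomics needed).
pairs = np.empty((offsets[-1], 2), dtype=np.int64)
for g, tiles in enumerate(tiles_per_gaussian):
    for i, t in enumerate(tiles):
        pairs[offsets[g] + i] = (t, g)

# Grouping by tile id makes each tile's gaussian list contiguous; the
# per-tile depth sort then runs inside each group.
pairs = pairs[np.argsort(pairs[:, 0], kind="stable")]
```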
## Design notes

**Tile-local bitonic sort** instead of a global radix sort. Each 16x16 tile independently sorts its gaussians (up to 2048) in threadgroup shared memory. The sort kernel also packs per-gaussian data (xy, opacity, conic, color) inline, eliminating a separate scatter dispatch.
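A serial sketch of the bitonic network each tile runs. In the kernel the inner loop over `i` is one thread per element, depths are the sort keys, and the packed per-gaussian payload moves with them; this host-side Python version is illustrative only:

```python
def bitonic_sort(keys):
    """In-place bitonic sort of a power-of-two-length list, mirroring the
    compare-exchange network a threadgroup executes in parallel."""
    n = len(keys)
    assert n & (n - 1) == 0, "bitonic networks need a power-of-two size"
    k = 2
    while k <= n:          # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:      # compare-exchange stride
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (keys[i] > keys[partner]) == ascending:
                        keys[i], keys[partner] = keys[partner], keys[i]
            j //= 2
        k *= 2
    return keys
```

Tiles with fewer gaussians than the next power of two would pad the remaining slots with sentinel (far-plane) depths so the network size stays fixed.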
**GPU-resident densification.** The split/clone/cull cycle never leaves the GPU. Classification, growth, and compaction are all compute kernels operating on device buffers. No CPU readback of gradient statistics or gaussian counts.
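The classify/grow/compact cycle follows the standard 3DGS heuristics (positional gradient norm plus a scale threshold). A simplified NumPy sketch — thresholds are illustrative, not msplat's actual defaults, and split positions are not resampled here as they are in the full recipe:

```python
import numpy as np

def densify_step(means, scales, opacities, grad_norms,
                 grad_thresh=0.0002, scale_thresh=0.01, min_opacity=0.005):
    """One classify/grow/compact cycle over flat gaussian arrays."""
    # Compact: drop nearly transparent gaussians (boolean-mask compaction
    # stands in for the GPU stream-compaction kernel).
    keep = opacities >= min_opacity
    means, scales, opacities, grad_norms = (
        a[keep] for a in (means, scales, opacities, grad_norms))

    hot = grad_norms > grad_thresh                      # under-reconstructed
    clone = hot & (scales.max(axis=1) <= scale_thresh)  # small → duplicate
    split = hot & (scales.max(axis=1) > scale_thresh)   # large → split + shrink

    new_means = np.concatenate([means, means[clone], means[split]])
    new_scales = np.concatenate([scales, scales[clone], scales[split] / 1.6])
    new_opacities = np.concatenate([opacities, opacities[clone],
                                    opacities[split]])
    new_scales[:len(means)][split] /= 1.6  # split also shrinks the original
    return new_means, new_scales, new_opacities
```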
**Fused kernels.** Projection and spherical harmonic evaluation share registers, avoiding a device-memory round trip for world-space positions. The backward pass recomputes the 3D covariance from scales/quaternions on the fly rather than storing it. The Adam optimizer updates all six parameter groups in fused dispatches.
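The update each `fused_adam` dispatch applies is plain Adam, run element-wise over a parameter group's buffer with the `m`/`v` moment state resident on device. A reference sketch (hyperparameters here are illustrative, not msplat's schedule):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a parameter buffer at step t (1-indexed)."""
    m = b1 * m + (1 - b1) * grad             # first-moment EMA
    v = b2 * v + (1 - b2) * grad * grad      # second-moment EMA
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```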
**Separable SSIM.** The 11x11 Gaussian-weighted SSIM window decomposes into two 1D passes (horizontal, then vertical), reducing per-pixel work from 121 to 22 multiply-adds. Forward and backward each take two kernels, using threadgroup shared memory for the intermediate statistics.
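The separability claim is easy to check: every SSIM statistic (local means, variances, covariance) is a Gaussian-weighted sum, and a 2D Gaussian kernel is the outer product of two 1D kernels, so an 11x11 blur equals a horizontal 11-tap pass followed by a vertical one. A NumPy sketch, using the standard SSIM window (radius 5, sigma 1.5):

```python
import numpy as np

def gaussian_kernel_1d(radius=5, sigma=1.5):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def blur_separable(img, k):
    """Horizontal then vertical 1D pass: 2*(2r+1) MACs per pixel."""
    pad = len(k) // 2
    h = np.apply_along_axis(
        lambda row: np.convolve(np.pad(row, pad, mode="edge"), k, "valid"),
        1, img)
    return np.apply_along_axis(
        lambda col: np.convolve(np.pad(col, pad, mode="edge"), k, "valid"),
        0, h)

def blur_full(img, k):
    """Direct 2D window: (2r+1)^2 MACs per pixel."""
    k2 = np.outer(k, k)
    pad = len(k) // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + len(k), j:j + len(k)] * k2).sum()
    return out
```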
**Depth-chunked rasterization.** For tiles with extreme gaussian counts, the forward pass splits into 512-gaussian chunks, with a merge kernel that reconstructs absolute transmittance. The backward pass uses precomputed prefix/suffix transmittance to avoid re-traversal.
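The chunking works because transmittance is multiplicative along the depth order: a chunk composited against a local transmittance of 1 can be rescaled by the absolute transmittance carried in from earlier chunks. A per-pixel scalar sketch (chunk size shrunk from 512 for illustration):

```python
def composite(colors, alphas):
    """Single-pass front-to-back alpha compositing for one pixel."""
    out, T = 0.0, 1.0        # T = transmittance accumulated so far
    for c, a in zip(colors, alphas):
        out += T * a * c
        T *= 1.0 - a
    return out, T

def composite_chunked(colors, alphas, chunk=3):
    """Same result, processed in depth-ordered chunks. Each chunk composites
    against a local transmittance of 1; the merge rescales its output by the
    absolute transmittance carried in and multiplies the transmittances."""
    out, T = 0.0, 1.0
    for s in range(0, len(colors), chunk):
        local_c, local_T = composite(colors[s:s + chunk], alphas[s:s + chunk])
        out += T * local_c
        T *= local_T
    return out, T
```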
## Python

```shell
pip install msplat
```

```python
import msplat

dataset = msplat.load_dataset("path/to/colmap/", eval_mode=True)
config = msplat.TrainingConfig(iterations=7000, num_downscales=0)
trainer = msplat.GaussianTrainer(dataset, config)

trainer.train(lambda s: print(f"step={s.iteration} splats={s.splat_count:,}"),
              callback_every=100)

trainer.export_ply("output.ply")
trainer.save_checkpoint("checkpoint.msplat")  # save/resume training

metrics = trainer.evaluate()
print(f"PSNR: {metrics['psnr']:.2f} SSIM: {metrics['ssim']:.3f}")

# Render from arbitrary viewpoints
pose = dataset.camera_pose(0)          # (4, 4) cam-to-world matrix
img = trainer.render_from_pose(pose)   # numpy (H, W, 3) float32
```

Supported dataset formats: COLMAP, Nerfstudio, Polycam.
Type stubs (_core.pyi) are included for IDE autocompletion.
## CLI

```shell
pip install "msplat[cli]"
msplat-train path/to/dataset -n 7000 --eval
```

Requires Xcode and CMake (`brew install cmake`).
## Swift

```swift
// Package.swift
dependencies: [
    .package(url: "https://github.com/rayanht/msplat.git", from: "1.1.0")
]
```

Build the XCFramework (one-time, from repo root):

```shell
./scripts/build-xcframework.sh
```

```swift
import Msplat

let dataset = GaussianDataset(path: "path/to/colmap/", downscaleFactor: 4.0)
let trainer = GaussianTrainer(dataset: dataset)
for _ in 0..<1000 {
    let stats = trainer.step()
    print("step=\(stats.iteration) splats=\(stats.splatCount)")
}
trainer.exportPly(to: "output.ply")

// Render from arbitrary viewpoints
let pose = dataset.cameraPose(at: 0)  // [Float] cam-to-world matrix
let img = trainer.renderFromPose(camToWorld: pose)
```

## C++ CLI

```shell
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
./build/msplat path/to/dataset -n 7000 --eval
```

## Building from source

```shell
git clone https://github.com/rayanht/msplat.git && cd msplat

# Python
pip install -e .

# C++ CLI + static lib
cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j

# Swift XCFramework
./scripts/build-xcframework.sh
cd swift && swift build
```

Requires macOS 14+, Apple Silicon. No external dependencies.
## Benchmarks

Mip-NeRF 360, M4 Max. msplat runs 7K iterations with no downscales:

```shell
msplat-train path/to/scene -n 7000 --num-downscales 0 --eval
```

| Scene | msplat PSNR | msplat SSIM | msplat wall time | gsplat PSNR | gsplat SSIM | gsplat wall time |
|---|---|---|---|---|---|---|
| bicycle | 23.21 | 0.605 | 82s | 23.71 | 0.668 | ~335s |
| counter | 27.44 | 0.881 | 91s | 27.14 | 0.878 | ~335s |
| garden | 25.76 | 0.786 | 107s | 26.30 | 0.833 | ~335s |
| room | 30.21 | 0.898 | 85s | 29.21 | 0.893 | ~335s |
For a full 30K-iteration run on `garden`:

```shell
msplat-train path/to/garden -n 30000 --num-downscales 0 --eval
```

| | msplat | gsplat |
|---|---|---|
| PSNR | 27.17 | 27.32 |
| SSIM | 0.854 | 0.865 |
| Gaussians | 3.55M | — |
| Wall time | 1039s | ~2149s |
gsplat numbers from docs.gsplat.studio (TITAN RTX). gsplat wall times are the reported average across all mipnerf360 scenes (per-scene times not published).
## License

Apache 2.0