# Exp minus Log is all you need

Reducing Deep Learning to a single continuous Sheffer primitive.
The full story, including detailed technical breakdowns and the Lean 4 walkthrough, is available on amund.blog: 👉 Exp minus Log is all you need for Deep Learning?
This repository contains the full "Zero-Sorry" formalization and empirical evidence for the EML framework.
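The README does not spell out the primitive itself, so here is a minimal sketch under one assumption: that the "Exp minus Log" primitive has the form `f(x, y) = exp(x) - log(y)`. The sketch shows why such a single operation is expressive, in the Sheffer sense, over the positive reals: subtraction, `exp`, and multiplication all fall out of compositions of it.

```python
import math

# ASSUMPTION: the primitive's exact form is not given in this README;
# f(x, y) = exp(x) - log(y) is used here purely for illustration.
def eml(x, y):
    return math.exp(x) - math.log(y)

# Subtraction falls out directly: eml(log a, exp b) = a - b (for a > 0).
def sub(a, b):
    return eml(math.log(a), math.exp(b))

# exp alone: log(1) = 0, so eml(z, 1) = exp(z).
def exp_only(z):
    return eml(z, 1.0)

# Multiplication of positives via log-space addition: a*b = exp(log a + log b).
def mul(a, b):
    return exp_only(math.log(a) + math.log(b))
```

This is the same trick that makes `NAND` a Sheffer primitive for Boolean logic, transplanted to a continuous operation.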
| Layer | Component | Verification Tool | Resource Path |
|---|---|---|---|
| Architecture | Full picoGPT (GPT-2) | 🧮 Lean 4 | lean/EmlNN/PicoGPT.lean |
| Evidence | EML-native Grokking | 🚀 Apple MLX | eml-mlx-grokking/main_eml.py |
| Stability | LayerNorm (Newton-Schulz) | 🧮 Lean 4 | lean/EmlNN/NormNewtonSchulz.lean |
| Numerics | FP32 Error Bounds | 🛡️ Gappa | proofs/gappa/ |
| Concurrency | KV-Cache Safety | ⏱️ TLA+ | proofs/tla+/PagedAttention.tla |
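The Lean files listed above are not reproduced in this README. For a flavor of the kind of exp/log lemma such a formalization builds on, here is a standard Mathlib example (this is Mathlib boilerplate, not code taken from `lean/EmlNN`):

```lean
import Mathlib.Analysis.SpecialFunctions.Log.Basic

-- Mathlib already knows exp and log are inverses on the positive reals;
-- identities like this are the raw material for an exp-minus-log calculus.
example (x : ℝ) (hx : 0 < x) : Real.exp (Real.log x) = x :=
  Real.exp_log hx
```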
Achieve 100% validation accuracy in under 60 seconds using the Sheffer primitive.
```bash
cd eml-mlx-grokking
python3 main_eml.py --epochs 150 --p 97 --train-fraction 0.5
```

Run real inference using official GPT-2 weights via the EML-native architecture.

```bash
python3 picoGPT_eml.py "Exp minus log is"
```

This repository contains the full source for the "one-op" series. Follow-up posts on Tropical SSMs, Neuromorphic EML hardware, and TurboQuant are included as draft plans.
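For context, `--p 97 --train-fraction 0.5` matches the classic grokking setup: learn a modular operation over Z/97Z from half of its operation table. Whether `main_eml.py` trains on modular addition or another operation is an assumption here; a minimal sketch of such a dataset split:

```python
import itertools
import random

def mod_add_split(p=97, train_fraction=0.5, seed=0):
    """All (a, b, (a + b) mod p) triples, shuffled and split.

    ASSUMPTION: modular addition is a stand-in for whatever operation
    main_eml.py actually trains on; the split logic is the standard one.
    """
    table = [(a, b, (a + b) % p) for a, b in itertools.product(range(p), repeat=2)]
    random.Random(seed).shuffle(table)
    cut = int(len(table) * train_fraction)
    return table[:cut], table[cut:]
```

With `p=97` this yields 9409 labeled pairs, 4704 of which are seen in training; "grokking" is the delayed jump of validation accuracy on the held-out 4705 long after the training set is memorized.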