A beginner course for learning machine learning as a translation problem:
plain English <-> algebra <-> Rust
The goal is not to memorize symbols. The goal is to learn how to read formulas as programs, and how to read Rust code as precise mathematical structure.
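One concrete instance of that translation, as a minimal sketch (this function is illustrative, not code from the course): the English sentence "multiply matching entries, then add everything up" is the algebra Σᵢ xᵢyᵢ, which is a short Rust function.

```rust
/// English: "multiply matching entries, then add everything up."
/// Algebra:  dot(x, y) = Σᵢ xᵢ · yᵢ
fn dot(x: &[f64], y: &[f64]) -> f64 {
    assert_eq!(x.len(), y.len(), "vectors must have the same length");
    x.iter().zip(y.iter()).map(|(a, b)| a * b).sum()
}

fn main() {
    // 1*3 + 2*4 = 11
    println!("{}", dot(&[1.0, 2.0], &[3.0, 4.0]));
}
```

Reading the loop body back into English recovers the original sentence, which is the whole point of the exercise.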
- Beginners with little or no machine learning background
- Rust learners who want a concrete reason to use vectors, structs, loops, and functions
- Self-paced learners who want short lessons and small practice steps
- Read 01 Foundations.
- Continue with 02 Vectors.
- Continue with 03 Neuron.
- Use Lessons index to see the full course map.
If you specifically want the current Transformer material after the fundamentals, jump to 07 Transformer.
The repo uses sequential folder numbers even though the curriculum starts at Module 0:
- Course Module 0 -> Repo folder `lessons/01-foundations`
- Course Module 1 -> Repo folder `lessons/02-vectors`
- Lesson 4: Rust Essentials for a Tiny Neuron
- Lesson 5: A Neuron as a Chain of Functions
- Neuron exercises
- Neuron solutions
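Lesson 5's title, "A Neuron as a Chain of Functions," can be sketched in a few lines (a hypothetical illustration of the lesson's idea, not the lesson's actual code): the neuron is a weighted sum composed with a squashing function.

```rust
/// English: "a neuron weighs its inputs, adds a bias, then squashes the result."
/// Algebra:  y = σ(w · x + b), with σ(z) = 1 / (1 + e^(−z))
fn sigmoid(z: f64) -> f64 {
    1.0 / (1.0 + (-z).exp())
}

fn neuron(w: &[f64], x: &[f64], b: f64) -> f64 {
    let weighted_sum: f64 = w.iter().zip(x).map(|(wi, xi)| wi * xi).sum();
    sigmoid(weighted_sum + b)
}

fn main() {
    // With zero weights and zero bias, the neuron outputs σ(0) = 0.5.
    println!("{}", neuron(&[0.0, 0.0], &[1.0, 2.0], 0.0));
}
```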
- Lesson 17: What Problem the Transformer Solves
- Lesson 18: Typed Rust Transformer with Expressive Errors
- Lesson 19: Transformer Encoder in Small Chunks
- Transformer exercises
- Transformer solutions
rust-ml/
├── lessons/ # canonical course content
├── references/ # transcripts and papers used as source material
├── code/ # runnable companion crates
├── book/ # future mdBook/site wrapper
└── README.md
`lessons/` is the source of truth for written teaching content. `code/` follows the lesson progression and now includes a real, tested `transformer` crate. `book/` is intentionally thin in this pass so the course content does not drift into two competing copies.
The course keeps the same translation goal everywhere:
plain English <-> algebra <-> Rust
Module 07 now applies that rule in two complementary ways:
- narrative lessons that explain the architecture and the implementation choices
- a chunked encoder lesson where every concept is written as
English -> Algebra -> Rust
That repetition is intentional. Repetition is how the translation dictionary becomes automatic.
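As a small example of the English -> Algebra -> Rust pattern the chunked encoder lesson uses (a dependency-free sketch with illustrative names, not the crate's actual API), here is one attention chunk written all three ways at once:

```rust
// English: "how much should this query attend to each key?"
// Algebra:  scoreᵢ = (q · kᵢ) / √d, then softmax over the scores.
fn attention_weights(q: &[f64], keys: &[Vec<f64>]) -> Vec<f64> {
    let d = q.len() as f64;
    let scores: Vec<f64> = keys
        .iter()
        .map(|k| q.iter().zip(k).map(|(a, b)| a * b).sum::<f64>() / d.sqrt())
        .collect();
    // Softmax: subtract the max for numerical stability, exponentiate, normalize.
    let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - max).exp()).collect();
    let total: f64 = exps.iter().sum();
    exps.iter().map(|e| e / total).collect()
}

fn main() {
    let q = vec![1.0, 0.0];
    let keys = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    // The query aligns with the first key, so the first weight is larger,
    // and the weights always sum to 1.
    println!("{:?}", attention_weights(&q, &keys));
}
```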
- Read the module README.
- Work through the lesson files in order.
- Do the module exercises without copying from the solutions first.
- Use the solution files to check reasoning, naming, and Rust syntax.
- Move to the next module only after you can explain each formula out loud in English.
The current runnable code artifact is the Transformer teaching crate:
`cargo test --manifest-path code/transformer/Cargo.toml`

That crate covers:
- dense vectors and matrices
- semantic model newtypes such as `TokenEmbedding`, `Query`, `Key`, and `Value`
- expressive `thiserror` diagnostics for shape mistakes
- standard self-attention and multi-head attention
- a simplified linear-attention comparison point
- positional encodings, layer norm, feed-forward layers, and an encoder block
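The newtype-plus-expressive-error combination can be sketched in miniature (the real crate derives its errors with `thiserror`; this sketch hand-implements `Display` so it stays dependency-free, and the names `Query`, `Key`, and `ShapeMismatch` are illustrative):

```rust
use std::fmt;

/// Newtypes so a query vector cannot be silently confused with a key vector.
#[derive(Debug, Clone)]
struct Query(Vec<f64>);
#[derive(Debug, Clone)]
struct Key(Vec<f64>);

/// A shape error that says exactly what went wrong, in plain English.
#[derive(Debug, PartialEq)]
struct ShapeMismatch {
    query_dim: usize,
    key_dim: usize,
}

impl fmt::Display for ShapeMismatch {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "query has dimension {} but key has dimension {}",
            self.query_dim, self.key_dim
        )
    }
}

impl std::error::Error for ShapeMismatch {}

/// Algebra: score(q, k) = q · k, defined only when dimensions agree.
fn score(q: &Query, k: &Key) -> Result<f64, ShapeMismatch> {
    if q.0.len() != k.0.len() {
        return Err(ShapeMismatch {
            query_dim: q.0.len(),
            key_dim: k.0.len(),
        });
    }
    Ok(q.0.iter().zip(&k.0).map(|(a, b)| a * b).sum())
}

fn main() {
    let q = Query(vec![1.0, 2.0]);
    println!("{:?}", score(&q, &Key(vec![3.0, 4.0]))); // Ok(11.0)
    println!("{}", score(&q, &Key(vec![1.0])).unwrap_err());
}
```

The payoff is that a shape mistake fails with a sentence a beginner can read, instead of a panic deep inside a matrix multiply.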
The repo now includes two GitHub Actions workflows for quality control:
- `CI` runs deterministic checks for lesson structure, local Markdown links, and authored-section contracts. `CI` also compile-checks Rust snippets embedded in lessons and runs `cargo fmt`, `cargo clippy`, and `cargo test` for the Transformer teaching crate.
- `Gemini Writing Review` reviews Markdown content on pull requests for English clarity, technical-teaching quality, structural discipline, and beginner friendliness.
The Gemini review is advisory, not a replacement for human judgment. It is designed to catch weak phrasing, excess cognitive load, mismatches between English and code, and places where the teaching flow violates common technical-writing or technical-instruction best practices.
To enable Gemini review in GitHub Actions, configure:
- repository secret `GEMINI_API_KEY`
- optional repository variable `GEMINI_MODEL` if you want a model other than the default `gemini-2.0-flash`
The workflow writes a review artifact named `gemini-writing-review` so the writing assessment can be read directly from the workflow run.
The repo keeps supporting source material in references/, including:
- a Transformer explainer transcript
- Bahdanau et al. (2014)
- Luong et al. (2015)
- Vaswani et al. (2017)
- Sebastian Raschka's LLMs From Scratch repository as an external inspiration source for attention, GPT, and educational sequencing