AwesomeOPD is an awesome list summarising open-source repositories and papers for training LLMs (and VLMs / agents / draft models) with On-Policy Distillation (OPD) and On-Policy Self-Distillation (OPSD):
🎯 OPD = C1 + C2. C1: the student samples its own trajectories y ~ π_student(·|x) during training. C2: a teacher provides per-token / sequence supervision on those student samples. Methods that only partially satisfy C1/C2 are flagged in the Strictness notes of each section.
🪞 OPSD = the special case where the teacher is the same model, conditioned on privileged context (verified trace / answer / "be concise" prefix / longer context) or an earlier checkpoint.
Each entry is annotated along four design axes: teacher source (external · same model with privileged context · earlier checkpoint · multi-teacher · discriminator), supervision signal (logits / top-k / sequence reward / verbal score / discriminator / verifier / feature), rollout consumption (all / selected / truncated / replaced / as PG samples), and pipeline slot (cold-start / mid / RL-replacement / inside-RL / inter-stage / compression / continual-anchor).
⚠️ Built by reading paper PDFs, project pages, and source code with LLM coding agents; manually reviewed, but errors are possible. PRs welcome.
Last updated: 2026-04-28
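For readers new to the area, here is a minimal, hedged sketch of what C1 + C2 look like in code (Hugging Face-style `generate`/forward calls; the names are illustrative and not taken from any repository below): the student samples its own rollout, the teacher scores those exact tokens, and the student minimises a per-token reverse KL toward the teacher.

```python
# Minimal sketch of one OPD step (illustrative; HF-Transformers-style API, not from any specific repo).
import torch

def opd_step(student, teacher, tokenizer, prompt, optimizer, max_new_tokens=256):
    # C1: sample a rollout from the *current* student policy
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        rollout = student.generate(prompt_ids, do_sample=True, max_new_tokens=max_new_tokens)

    # C2: the teacher scores the student's own tokens (next-token logits over the completion)
    with torch.no_grad():
        t_logits = teacher(rollout).logits[:, prompt_ids.shape[1] - 1 : -1]
    s_logits = student(rollout).logits[:, prompt_ids.shape[1] - 1 : -1]

    # per-token reverse KL  KL(student || teacher), averaged over completion positions
    s_logp = torch.log_softmax(s_logits, dim=-1)
    t_logp = torch.log_softmax(t_logits, dim=-1)
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```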
Taxonomy:
- Surveys, Foundations & Position Papers – meta-references and seed papers (GKD, MiniLLM, Thinking Machines blog, Tencent / THUNLP surveys)
- 🔬 White-Box – logit-based OPD on student rollouts with an external teacher
| Method | Loss / Objective | Data | Teacher | Granularity | Notes |
|---|---|---|---|---|---|
| Lightning OPD | Cached teacher log-probs over SFT rollouts (offline OPD) | Student (cached) | White-box | Token | Introduces "teacher consistency": the same teacher must be used for SFT and OPD, otherwise the gradient is biased. Eliminates the live teacher server. |
| GKD (Agarwal et al.) | Generalised JSD (FKL/RKL configurable) | Mixed (λ interpolates teacher/student data) | White-box | Token | The seminal paper that named OPD; introduced student-self-rollout supervision. |
**Strictness notes** (against the strict OPD definition – C1: student samples its own trajectories during training + C2: teacher provides supervision on those samples)
- **Lightning OPD** – ⚠️ partially satisfies C1: teacher log-probs are pre-computed once over SFT rollouts and reused during training; the student doesn't actively sample during the OPD step. The authors call this "offline OPD" explicitly. Listed in OPD because the data is past student-generated rollouts, not teacher-generated.
## 🔬 OPD with Larger External Teachers – White-Box
White-box methods use teacher logits / log-probabilities to supervise the student on student-generated rollouts. Each entry below has been verified to (a) train on student rollouts and (b) operate at the token level.
Methods that turned out to be RL-style on verification have been moved to OPD-RL Hybrids; off-policy / pure-loss-function / pretraining-side methods are excluded from this list.
Unifies KD as token-level reweighted likelihood; lightweight on-policy sampling preserves training efficiency.
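To make the loss column concrete, here is a hedged sketch of the token-level divergences these entries typically configure – forward KL, reverse KL, and a GKD-style generalised JSD against a β-mixture. Exact parameterisations vary between papers, so treat this as illustrative rather than any paper's reference implementation.

```python
# Hedged sketch of white-box token-level divergences (illustrative parameterisation).
import torch
import torch.nn.functional as F

def token_divergence(student_logits, teacher_logits, kind="gjsd", beta=0.5):
    """student_logits / teacher_logits: [batch, seq, vocab] next-token logits on the SAME student rollout."""
    q = F.log_softmax(student_logits, dim=-1)   # student log-probs
    p = F.log_softmax(teacher_logits, dim=-1)   # teacher log-probs
    if kind == "fkl":   # forward KL(teacher || student): mass-covering
        return (p.exp() * (p - q)).sum(-1).mean()
    if kind == "rkl":   # reverse KL(student || teacher): mode-seeking, the common OPD default
        return (q.exp() * (q - p)).sum(-1).mean()
    # generalised JSD: KL of each distribution against the mixture m = beta*teacher + (1-beta)*student
    log_beta = torch.log(torch.tensor(beta))
    log_1mb = torch.log(torch.tensor(1.0 - beta))
    m = torch.logaddexp(p + log_beta, q + log_1mb)
    return (beta * (p.exp() * (p - m)).sum(-1) + (1.0 - beta) * (q.exp() * (q - m)).sum(-1)).mean()
```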
## OPD with Black-Box / Outcome-Based Teachers
When the teacher is API-only (no logits), OPD uses scalar rewards, verbal scores, preferences, or adversarial discriminators – all evaluated on student rollouts. Entries that turned out to use static teacher data only (Lion, SuperCorrect, DAIL, SODA) are excluded from this list.
A trained discriminator distinguishes student outputs from teacher (e.g. GPT-5) responses; minimax game makes the discriminator co-evolve into an on-policy reward model. Qwen2.5-14B student becomes comparable to GPT-5-Chat on LMSYS.
| Method | Supervision signal | Data | Granularity | Domain | Notes |
|---|---|---|---|---|---|
| OVD | Verbal scores (0–9) on student trajectories | Student | Sequence | General | Replaces token-level logit matching with verbal scoring; +25.7% over baselines. |
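Since API-only teachers expose no logits, the supervision has to flow through the sampled sequence rather than the token distribution. A hedged sketch of one simple pattern (score-weighted likelihood on the student's own rollouts; `ask_teacher_for_score` is a placeholder callback, not a real API, and this is not claimed to be OVD's exact objective):

```python
# Hedged sketch of black-box, sequence-level OPD with a scoring teacher (illustrative only).
import torch

def black_box_opd_step(student, tokenizer, prompt, ask_teacher_for_score, optimizer, k=4):
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    rollouts, scores = [], []
    for _ in range(k):                                   # C1: student samples its own trajectories
        with torch.no_grad():
            out = student.generate(prompt_ids, do_sample=True, max_new_tokens=256)
        rollouts.append(out)
        completion_text = tokenizer.decode(out[0, prompt_ids.shape[1]:])
        scores.append(ask_teacher_for_score(prompt, completion_text))   # C2: e.g. a 0-9 verbal score

    # sequence-level supervision: weight each rollout's log-likelihood by its normalised score
    w = torch.tensor(scores, dtype=torch.float)
    w = (w - w.mean()) / (w.std() + 1e-6)
    loss = 0.0
    for out, wi in zip(rollouts, w):
        logits = student(out).logits[:, :-1]
        logp = torch.log_softmax(logits, dim=-1)
        tok_logp = logp.gather(-1, out[:, 1:].unsqueeze(-1)).squeeze(-1)
        loss = loss - wi * tok_logp[:, prompt_ids.shape[1] - 1:].mean()  # completion tokens only
    (loss / k).backward()
    optimizer.step()
    optimizer.zero_grad()
```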
## ♻️ Self-Distillation with Privileged Context – OPSD
Same model = teacher = student, but the teacher is conditioned on something the student doesn't see (verified trace, ground-truth answer, "be concise" prefix, longer context, document, …). The gap exists because of the conditioning, not the weights; a minimal sketch of this pattern follows the table below.
Several entries previously listed here turned out on verification to use static teacher data or a fixed self-rewritten dataset rather than student rollouts; those have been excluded. SPIN was reclassified to Iterative Self-Bootstrapping.
Same-model OPSD; matches GRPO with 1×8 rollouts and 1024 length vs. GRPO's 8×16 / 16k. The canonical OPSD paper. Built on TRL's GOLD trainer.
| Method | Teacher (privileged conditioning) | Loss / supervision | Granularity | Domain | Notes |
|---|---|---|---|---|---|
| CRISP / OPSDC | "Be concise" instruction prefix | Per-token RKL on student rollouts | Token | Reasoning compression | Compresses long-CoT without entropy collapse (unlike RL with a length penalty). |
| SDFT-Continual (idanshen) | Demo-conditioned same model | RKL on student rollouts vs. demo-conditioned teacher | Token | Continual learning | Self-distillation enables continual learning. |
| OPCD | In-context-knowledge-augmented same model | RKL on student rollouts | Token | Knowledge internalisation | Internalise context to stay faithful even after the context is removed. |
| OEL (Online Experiential Learning) | Same model with interactive game environment | RKL on student rollouts | Token | Game / planning | Self-distillation on interactive trajectories. |
| MTP Self-Distill | Multi-token-prediction same model | RKL on student rollouts | Token | General | Multi-Token Prediction via Self-Distillation. Author-stated on-policy. |
| Apple SSD | Same model w/ temperature/truncation sampling | Cross-entropy on its own samples | Sequence | Code generation | "Embarrassingly simple": sample, then SFT on those samples. Degenerate OPSD; "decoding-config" privilege. |
| GATES | Document-conditioned tutor (same model) | RKL gated by tutor consensus | Token (gated) | Document QA | Both tutor and student sample rollouts; on-policy student-rollout updates contribute "modest additional improvement" on top of off-policy distillation. Mixed. |
| OPSDL | Short-context same model | Point-wise RKL | Token | Long-context | On-Policy Self-Distillation for Long-Context LMs. |
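A hedged sketch of the mechanic shared by the entries above (illustrative, HF-style calls, not any specific paper's code): the student rolls out without the privilege, the same weights (a frozen copy, or the live model under `no_grad`) re-score that exact rollout with the privileged prefix prepended, and the student minimises per-token reverse KL toward that privileged-conditioned distribution.

```python
# Hedged sketch of on-policy self-distillation with privileged context (illustrative only).
import torch

def opsd_step(student, frozen_self, tokenizer, prompt, privileged_prefix, optimizer):
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():                                 # C1: student rollout, no privilege
        rollout = student.generate(prompt_ids, do_sample=True, max_new_tokens=256)
    completion = rollout[:, prompt_ids.shape[1]:]

    # C2: "teacher" = same weights, conditioned on privileged_prefix + prompt, scoring the SAME completion
    priv_ids = tokenizer(privileged_prefix + prompt, return_tensors="pt").input_ids
    teacher_input = torch.cat([priv_ids, completion], dim=1)
    with torch.no_grad():
        t_logits = frozen_self(teacher_input).logits[:, priv_ids.shape[1] - 1 : -1]
    s_logits = student(rollout).logits[:, prompt_ids.shape[1] - 1 : -1]

    s_logp = torch.log_softmax(s_logits, dim=-1)
    t_logp = torch.log_softmax(t_logits, dim=-1)
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()   # per-token reverse KL on the rollout
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```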
**Strictness notes**
- **Apple SSD** – ⚠️ C2 is degenerate: no teacher KL signal; pure self-generated SFT (sample with temperature/truncation, then SFT on those samples). Closer to STaR-style self-bootstrapping than to OPSD. Kept because the "teacher" is the same model with a different decoding config – privileged-context-by-decoding.
- **GATES** – ⚠️ The authors' own ablation says off-policy trajectory-level distillation drives the primary gains; on-policy student-rollout updates contribute only "modest additional improvement". Mixed; the OPSD leg is genuine but secondary.
## Iterative Self-Bootstrapping
Same model is the teacher, but as a frozen earlier checkpoint, not a privileged-context view. The teacher snapshot is frozen for one round, the student trains, then the snapshot rolls forward. Listed separately because the supervision is typically sequence-level / preference, not per-token logit-distillation.
- **SPIN** – ⚠️ C1 ✅ (student samples), but C2 fails the strict per-token logit form: supervision is a sequence-level DPO preference against the previous frozen checkpoint. More accurately "iterative on-policy DPO" than per-token OPD. Kept because the "teacher = previous self" pattern is what people search for in OPD lists.
- **rStar / rStar-Math / rStar2-Agent** – ⚠️ MCTS-filtered student samples + SFT; the "teacher signal" is a step-level PPM / discriminator score, not per-token logit KL. Iterative self-improvement, not classical OPD.
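A hedged sketch of the sequence-level loss this family typically uses (SPIN-style: the reference completion is preferred over the model's own rollout, with the previous frozen checkpoint as the reference policy). The names and the dict interface are illustrative, not any paper's code.

```python
# Hedged sketch of a SPIN-style iterative self-bootstrapping loss (sequence-level DPO form).
import torch
import torch.nn.functional as F

def spin_style_loss(cur_logp, prev_logp, beta=0.1):
    """cur_logp / prev_logp: dicts of summed sequence log-probs under the current model and the frozen
    previous checkpoint, for the reference completion ("ref") and the model's own sample ("own")."""
    chosen = cur_logp["ref"] - prev_logp["ref"]    # log-ratio on the preferred (reference) completion
    rejected = cur_logp["own"] - prev_logp["own"]  # log-ratio on the self-generated (dispreferred) rollout
    return -F.logsigmoid(beta * (chosen - rejected)).mean()
```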
## 🤖 OPD-RL Hybrids – Inside-RL OPD
Methods that fuse OPD with RLVR / GRPO / PPO / DPO: teacher logits become a dense reward-shaping term or a trust-region anchor inside an RL objective, or BoN / preference signals are used as the imitation target.
Newly added on verification: AlignDistil (RLHF-equivalent distillation), BOND / Faster WIND (sequence-level Best-of-N as target), KETCHUP (k-step RL-based KD), π³-KD / DDT (IRL-style), LUFFY (mixed-policy GRPO with off-policy traces), NPO / AutoNPO (mixed-policy GRPO with near-future self as teacher). Removed on verification: RLKD (only sequence-level structural reward), ExGRPO (pure RL, no teacher), REDI (offline R1 traces, no student rollouts).
Typical pattern: sample a student rollout, get tokenised feedback, re-evaluate it under a feedback-conditioned self-teacher, and distill the corrected next-token distribution back into the policy.
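Before the table, a hedged sketch of the most common fusion pattern: the teacher's per-token log-prob gap is added as a dense shaping term on top of a GRPO-style group-normalised advantage. This is generic PyTorch, not the recipe of any specific entry below.

```python
# Hedged sketch of OPD-as-dense-shaping inside a GRPO-style objective (illustrative only).
import torch

def shaped_token_advantages(group_rewards, student_logp_tok, teacher_logp_tok, kd_coef=0.1):
    """group_rewards: [G] outcome rewards for G rollouts of one prompt.
    student_logp_tok / teacher_logp_tok: [G, T] log-probs of the sampled tokens under each model."""
    adv = (group_rewards - group_rewards.mean()) / (group_rewards.std() + 1e-6)  # GRPO group-normalised
    adv = adv.unsqueeze(-1).expand_as(student_logp_tok)                          # broadcast to tokens
    kd_bonus = teacher_logp_tok - student_logp_tok.detach()                      # dense per-token teacher signal
    shaped = adv + kd_coef * kd_bonus
    # a standard policy-gradient loss would then be: -(shaped.detach() * student_logp_tok).mean()
    return shaped
```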
| Method | Objective / mechanism | Teacher / signal | Data | Granularity | Domain | Notes |
|---|---|---|---|---|---|---|
| OpenClaw-RL | GRPO + OPD | Judge model extracts hindsight hints; teacher token-logprob gap = directional advantage | Mixed | Token | Terminal / GUI / SWE / tool-call | Unifies binary RL and OPD in one trainer. |
| Open-AgentRL | GRPO-TCR | Multi-domain teachers | Student | Token | Reasoning / GUI / coding | Includes process-reward modelling via SandboxFusion. |
| AlignDistil | RLHF-equivalent KD | DPO-derived combination of DPO-model + ref-model logits | Student | Token | Alignment | Re-frames DPO as policy distillation. |
| LUFFY | Mixed-policy GRPO + policy shaping | Off-policy R1 traces inserted into student rollouts | Mixed | Token + sequence | Reasoning | "Learn to reason under off-policy guidance". On-policy student rollouts + off-policy teacher-trace mix. |
| NPO / AutoNPO | Mixed-policy GRPO | Verifier-filtered trajectories from a later checkpoint of the same training run | Mixed | Sequence | Reasoning (RLVR) | "Learn from your near-future self". Picks a teacher that is strong enough (higher Q than the current policy) yet close enough (low V vs. external teachers like R1), maximising the effective Q/V signal. AutoNPO adaptively schedules the interventions; preserves higher entropy than vanilla GRPO. |
| KEPO | Knowledge-enhanced PO | Knowledge-base teacher | Mixed | Sequence | Reasoning | Adds KB grounding to preference RL. |
| BOND | Best-of-N distillation | Same model's BoN target | Student (iterative) | Sequence | Alignment | Treats Best-of-N as the target distribution; iterative anchor; Jeffreys divergence. |
| Faster WIND | Win-rate dominance | Same-model BoN | Student (iterative) | Sequence | Alignment | Game-theoretic acceleration of BOND. |
| KETCHUP | k-step-return REINFORCE on KD | External teacher | Student | Sequence | General | RL-based KD with k-step Bellman returns. |
| π³-KD | AVRIL inverse-RL | Joint reward + policy distillation | Student | Token + sequence | General | IRL-flavoured experiential KD. |
| DDT | On-policy SFT theory | Theoretical | Student | Token | General | Distribution Discriminant Theory; foundations for on-policy SFT. |
| RLAD | PPO/GRPO ratio anchored to a teacher–old-policy mixture | External teacher (Qwen3-32B) | Student | Token | Reasoning | Trust-region likelihood ratio. |
| KDRL | Joint reverse KL + GRPO rule-based reward | External teacher (Skywork-OR1) | Student | Token + outcome | Reasoning | Unified KD + RL objective. |
| Self-Distilled RLVR (RLSD) | RLVR direction + teacher evidence-ratio modulates magnitude | Same model + privileged answer | Student | Token + outcome | Reasoning | Combines self-distillation magnitudes with RLVR directions. |
| HDPO | RL on most prompts; on "cliff" prompts generate privileged rollouts and self-distill | ? | ? | ? | ? | ? |
| Probing-to-Refine | "Explanatory probes" force logical articulation; GRPO + dialogue-structure reward | Self-probe | Student | Sequence | Reasoning | Reinforcement Distillation via Explanatory Inversion. |
**Strictness notes**
- **LUFFY** – ⚠️ Mixed-policy: half on-policy student rollouts (C1+C2 ✅) + half off-policy R1 traces inserted into GRPO (C1 ❌ on the off-policy half). Net is OPD-flavoured with off-policy import.
- **NPO / AutoNPO** – ⚠️ Same mixed-policy GRPO pattern as LUFFY, but the off-policy traces come from a near-future checkpoint of the same run instead of an external R1 teacher. The authors frame it as RLVR, not OPD; included here as an OPD variant because (a) the imported trajectories play the same "stronger-self teacher" role, and (b) the paper itself explicitly invites follow-up work to inject the near-future-self signal via on-policy distillation. Strict per-token logit KL (C2) is not the loss – supervision is verifier-filtered sequence-level trajectory mixing inside GRPO.
- **BOND, Faster WIND** – ⚠️ Iterative self-bootstrapping; teacher = the same model's BoN distribution. The loss is Jeffreys / win-rate dominance at the sequence level – no per-token logit supervision (C2 partially fails the strict form). More accurately "on-policy iterative alignment" than OPD.
- **KETCHUP** – ⚠️ Sequence-level RL-based KD with k-step Bellman returns; the paper itself self-describes as "RL-based KD". Closer to RL with a KD-anchor reward than to per-token OPD.
- **π³-KD** – ⚠️ Built on the AVRIL inverse-RL framework with joint reward modelling; closer to an IRL + OPD hybrid than to pure OPD.
- **DDT** – ⚠️ Theoretical foundations paper for "on-policy SFT" (Distribution Discriminant Theory); not a specific deployable algorithm. Kept for completeness.
- **KEPO, Open-AgentRL, Probing-to-Refine** – ⚠️ C1 ✅ (on-policy student rollouts), but whether C2 is a per-token KL component, sequence-level reward shaping, or preference optimisation is not fully resolved from the abstracts. Listed because the papers self-describe as OPD / on-policy distillation, but the exact form of C2 needs a full-paper read.
## 🧠 Reasoning OPD (by application)
Genuine OPD work on math / code / long-CoT reasoning. Off-policy SFT-distill from R1, pure RL methods (Skywork-OR1, SimpleRL-Zoo, Time-R1), and analysis-only papers are excluded from this list – each had no student-rollout-with-teacher-supervision component.
The reasoning-OPD canon already lives across OPSD (siyan-zhao/OPSD, CRISP), Iterative Self-Bootstrapping (rStar / rStar-Math), OPD-RL Hybrids (LUFFY, RLAD, KDRL, RLSD, HDPO, SD-Zero), and White-Box (REOPOLD, Fast OPD, Entropy-Aware OPD, TIP, SCOPE, PACED). This section only lists items not already covered above.
<details>
<summary>Click to view technical details</summary>

| Method | Loss / Objective | Data | Teacher | Granularity | Base / Benchmark | Notes |
|---|---|---|---|---|---|---|
| Rethinking OPD (THUNLP) | RKL with progressive top-K alignment + off-policy cold-start | Mixed | White-box (Qwen3-4B/1.7B teacher pairs) | Token | Math reasoning | Identifies teacher novelty and thinking-pattern compatibility as success conditions. |
| OPD for AV Motion Planning | GPT-Driver framework + GKD on student-generated trajectories | Student | White-box (LLM teacher) | Token | Driving | 5× model-size reduction. |

</details>
## 🖼️ Multimodal OPD (VLM, Video, Audio, Image)
Strict OPD work in non-text modalities. Many "R1"/"GRPO" multimodal models that bear the brand are pure RL (no teacher-distillation loss) and are excluded.
Genuine OPD where the student is an agent rolling out actions; teacher (or self) supervises those trajectories. Pure-RL agent works (WebRL, WebAgent-R1, InfiGUI-G1, GUI-R1) and off-policy SFT-on-teacher-trajectories (Nardien, AgentRefine, Chain-of-Agents, MapCoder-Lite, SAD, Structured-Web) are excluded.
Distillation of the draft model so it better mimics the verifier/target. The on-policy element here is over the drafter's own continuations as judged by the target. Listed separately because the goal is inference speedup, not student capability.
This section only lists drafters trained with the drafter's own rollouts. Off-policy drafter training (EAGLE-1/2, Medusa, Hydra, Kangaroo, ReDrafter, BiTA, SpecDec++, LayerSkip, FREE, AdaSPEC, POSS) and training-free system tricks (Ouroboros, Sequoia, TriForce, SwiftKV, SuffixDecoding) are excluded.
- **HASS, Falcon** – ⚠️ Partial on-policy: multi-step draft trajectory / glancing distillation uses drafter samples for a subset of the training signal. Listed because the on-policy leg drives the gains.
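A hedged sketch of the on-policy leg these drafters share (treating the drafter as a small standalone LM for clarity; real systems such as EAGLE also condition on target-model features and add feature-level losses, so this is illustrative only): the drafter proposes from its own distribution and is trained to match the target model on those proposed continuations.

```python
# Hedged sketch of on-policy drafter distillation for speculative decoding (illustrative only).
import torch

def drafter_opd_step(drafter, target, context_ids, optimizer, k=4):
    # the drafter proposes k tokens autoregressively from its OWN samples (the on-policy part)
    with torch.no_grad():
        seq = drafter.generate(context_ids, do_sample=True, max_new_tokens=k)

    with torch.no_grad():
        t_logp = torch.log_softmax(target(seq).logits[:, context_ids.shape[1] - 1 : -1], dim=-1)
    d_logp = torch.log_softmax(drafter(seq).logits[:, context_ids.shape[1] - 1 : -1], dim=-1)

    # match the target's next-token distribution on the drafter's own proposals (forward KL here)
    loss = (t_logp.exp() * (t_logp - d_logp)).sum(-1).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```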
<details>
<summary>Click to view technical details</summary>

| Method | Drafter type | On-/Off-policy | Loss | Notes |
|---|---|---|---|---|
| EAGLE-3 | Self-speculative (uses target features) | On-policy multi-step (TTT) | Smooth-L1 (feature) + CE (token) | "Training-Time Test" simulates draft rollouts during training. |
| HASS | Self-speculative | Partial on-policy (multi-step draft trajectory in training) | ? | ? |

</details>
## Training Frameworks with OPD Support

| Framework | Loss / divergence | OPD support | Backend | Distributed | Notes |
|---|---|---|---|---|---|
| ? | KL-controlled (off-policy default; integrates into GRPO) | One of many | PyTorch | Yes (async distributed) | `distill_loss_weight`. |
| NeMo-RL | FKL / RKL / mixed (configurable `kl_type`) | OPD documented | PyTorch | Yes (Ray + Megatron + vLLM) | Replaces the archived NeMo-Aligner. |
| SkyRL | Reverse KL + importance sampling | OPD added Nov 2025 (PR #585) | PyTorch | Yes (Ray + vLLM/SGLang) | Notion blog "On-Policy Distillation in SkyRL". |
| slime | Reverse KL, token-level | OPD as an additive penalty on any advantage estimator | PyTorch + Megatron | Yes (SGLang teacher mode) | Behind GLM-4.5/4.6/4.7. |
| KDFlow | FKL / RKL / JSD / AKL + skewed-KL/RKL variants | Yes – KD-first | PyTorch | Yes (Ray + SGLang teacher + FSDP2 student) | Decoupled backends; transmits teacher hidden states (zero-copy) and recomputes logits on the student to cut comm cost; 1.44–6.36× speedup over homogeneous-backend baselines. Native cross-tokenizer; VLM support (Qwen3-VL). Colocate mode shares GPUs via SGLang sleep/wakeup. |
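Several of the frameworks above generate rollouts asynchronously, so the sampled tokens come from a slightly stale student snapshot. A hedged sketch of one way to keep the reverse-KL estimate on-policy in that setting (a detached importance weight plus a score-function surrogate); this is generic PyTorch, not the API or exact estimator of any framework listed above.

```python
# Hedged sketch of importance-sampling-corrected reverse KL for stale (async) rollouts.
import torch

def is_corrected_rkl_loss(cur_logp_tok, old_logp_tok, teacher_logp_tok, clip=5.0):
    """All inputs: [B, T] log-probs of the sampled tokens under the current student,
    the stale behaviour student that produced the rollout, and the teacher."""
    ratio = (cur_logp_tok - old_logp_tok).exp().clamp(max=clip).detach()  # per-token IS weight
    kl_hat = (cur_logp_tok - teacher_logp_tok).detach()                   # MC estimate of KL(student || teacher)
    # score-function surrogate: its gradient approximates the true reverse-KL policy gradient
    return (ratio * kl_hat * cur_logp_tok).mean()
```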
Excluded (no native OPD support, or distillation pipeline is offline / fixed-corpus rather than student-rollout): axolotl, OpenRLHF, allenai/open-instruct, prime-rl, TextBrewer (pre-LLM era), open-r1 (off-policy SFT + GRPO), Modelopt, Tunix v0.1.6, DistillKit, easydistill.
**Strictness notes** – frameworks judged by whether they ship a recipe that satisfies C1+C2
- **LLaMA-Factory** – ⚠️ OPD only available via the TRL integration; no native OPD trainer. Listed for users who already use LLaMA-Factory and want to know it can host OPD.
## Industrial / Production Model Reports
Flagship model technical reports that publicly describe on-policy distillation in their post-training pipeline. Reports whose tech papers don't actually describe student-rollout distillation (Qwen2.5, Qwen2.5-Math, MiMo predecessor, DeepSeek-V3 / V3.2-Exp / R1, Phi-4, Hunyuan-Large / A13B, Kimi-K2 / K2.5, Yi-Lightning, DistilQwen) are excluded.
Reports ~10× cheaper than RL for equal performance. The canonical industrial OPD recipe. Inspired the Thinking Machines blog.
| Model | Where OPD is used | OPD description | Notes |
|---|---|---|---|
| Qwen3-Coder-Next | Distillation of multi-experts into an 80A3 student | Combined SFT + on-policy logit alignment | Production scaling of the Qwen3 recipe. |
| Gemma 2 | Post-training | "We also use on-policy distillation, where the student generates completions from the SFT prompts" – KL on student samples | Among the first production models to name OPD. |
| GLM-5 | Throughout post-training | "On-Policy Cross-Stage Distillation" – a final anti-forgetting refinement applied between stages | Generalises the Qwen3 recipe to "OPD as a stage glue". |
| GLM-4.5 / 4.6 | Multi-stage post-training | Expert iteration; SFT distillation merges experts into a hybrid generalist | Predecessors of GLM-5. |
| MiMo-V2-Flash | Post-training | Multi-Teacher On-Policy Distillation (MOPD) – "the student model samples from its own evolving distribution and receives token-level supervision from domain-specific teachers" | Multi-teacher OPD: domain specialists trained independently (SFT + GRPO per domain: math, code, agent, IF), then a unified student optimises reverse KL against the specialist set on its own rollouts. |
| DeepSeek-V4-Pro / V4-Flash | ? | ? | Full-vocabulary KL (not a token-level estimate) stabilises gradients when specialists disagree; first DeepSeek release where OPD replaces the RL consolidation stage from V3 / R1. V4-Pro 1.6T MoE; V4-Flash 284B. |
**Strictness notes**
- **GLM-4.5 / 4.6** – ⚠️ The tech report describes "expert iteration + RL" without explicit OPD wording. Kept as the predecessor of GLM-5, which does have explicit cross-stage OPD.
## Curator's Picks – where to start
Opinionated reading order for someone starting an OPD project today.
| # | Why it's the pick | Resource |
|---|---|---|
| 1 | Clearest one-page explanation of why OPD beats both SFT and RL on token efficiency. | Thinking Machines "On-Policy Distillation" blog post |
PRs are very welcome. When adding an entry, please attempt to fill the technical-details columns (loss / divergence, data source, teacher access, granularity). If you cannot determine these by reading the paper or repo, leave a `?` – that's still useful.