AdaDecode is a fast and accurate LLM decoding method built on one core idea: Adaptive Early Prediction + Parallel Token Processing:
- 🧩 No draft model needed — just a lightweight LM head (0.2% model size)!
- ✅ Predict tokens early using trained lightweight LM heads
- 🚀 Start decoding the next token before finishing the current one
- 🛡️ Final-layer verification ensures identical output to standard decoding
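The lightweight-LM-head idea can be sketched as follows. This is an illustrative toy, not the repo's code: the matrix shapes, function name, and the 0.9 threshold are all assumptions, and a real head would be a trained linear layer over transformer hidden states.

```python
import math
import random

def early_exit_head(hidden, w_head, threshold=0.9):
    """Sketch of a lightweight LM head: project an intermediate hidden
    state to vocabulary logits and signal an early exit when the top
    token is confident. `w_head` (hidden_dim x vocab_size) stands in
    for the small trained head; names and shapes are illustrative."""
    vocab = len(w_head[0])
    # logits = hidden @ w_head
    logits = [sum(h * w_head[i][j] for i, h in enumerate(hidden))
              for j in range(vocab)]
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    token = probs.index(max(probs))
    return token, probs[token] >= threshold

# toy example: 4-dim hidden state, 3-token vocabulary
random.seed(0)
w = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
h = [random.gauss(0, 1) for _ in range(4)]
tok, confident = early_exit_head(h, w, threshold=0.5)
```

Because the head is just a single projection (roughly 0.2% of the model's parameters), attaching one to an intermediate layer adds negligible memory compared to a separate drafter model.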

- Speculative decoding relies on an auxiliary drafter model, leading to increased memory usage and requiring the same tokenizer and vocabulary as the main model
- Layer skipping bypasses certain layers, which results in missing KV cache at those layers and can introduce discrepancies in future token predictions
- AdaDecode accelerates decoding by adaptively predicting future tokens early based on confidence (e.g., $t_2$ and $t_3$ are predicted from different intermediate layers), enabling earlier progression to subsequent tokens
- When future token steps require KV caches from the skipped layers (due to early predictions), these missing computations are executed in parallel with subsequent token processing (same-colored layers)
- A final verification step is employed to ensure output consistency with standard autoregressive decoding
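The consistency guarantee of the verification step can be illustrated with a toy sketch. Everything here is made up for illustration (the "model", layer count, and confidence function are not the paper's implementation, and the real method overlaps the skipped layers' KV-cache computation with subsequent tokens rather than running it inline):

```python
def toy_forward(token, num_layers):
    """Hypothetical stand-in for a transformer forward pass: returns
    one deterministic scalar 'hidden state' per layer."""
    h = float(token)
    states = []
    for layer in range(num_layers):
        h = (h * 1.5 + layer) % 7.0
        states.append(h)
    return states

def predict(h, vocab=5):
    """Toy LM head: map a scalar hidden state to a token id and a
    pseudo-confidence in [0, 1). Purely illustrative."""
    return int(h) % vocab, h % 1.0

def adadecode_step(token, num_layers=8, exit_layer=4, threshold=0.5):
    """One decoding step: predict early at `exit_layer` when confident,
    then verify against the final layer. The returned token always
    matches what standard final-layer decoding would produce."""
    states = toy_forward(token, num_layers)
    early_tok, conf = predict(states[exit_layer - 1])
    final_tok, _ = predict(states[-1])
    if conf >= threshold and early_tok == final_tok:
        return early_tok, True    # early prediction accepted
    return final_tok, False       # fall back to the final layer
```

The point of the sketch: whichever branch is taken, the emitted token agrees with the final layer, which is why AdaDecode's output is identical to standard autoregressive decoding.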

Create a conda environment and install all required packages.

```bash
conda create -n adadec python=3.10 -y
conda activate adadec
pip install -r requirements.txt
```
Use the following scripts to evaluate AdaDecode and compare it with standard autoregressive decoding.

```bash
bash run_vanilla.sh
bash run_AdaDecode.sh
```
If you have any questions related to the code or the paper, feel free to email Zhepei (zhepei.wei@virginia.edu). If you encounter any problems when using the code, or want to report a bug, feel free to open an issue! Please try to specify the problem with details so we can help you better and quicker!
This codebase is influenced by remarkable projects from the LLM community such as LayerSkip and Medusa.
Please cite our paper if you find the repo helpful in your work:
```bibtex
@inproceedings{wei2025adadecode,
  title={AdaDecode: Accelerating {LLM} Decoding with Adaptive Layer Parallelism},
  author={Zhepei Wei and Wei-Lin Chen and Xinyu Zhu and Yu Meng},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=VnO2GEpmlb}
}
```