This repository collects recent papers on token reduction (token pruning, merging, clustering, compression, adaptive thinking, etc.) for machine learning and generative AI, categorized by year and application scenario.
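As a quick orientation, the sketch below illustrates the simplest flavor of token reduction: attention-score-based pruning that keeps the top-k patch tokens ranked by the attention they receive from the [CLS] token. It is a generic, minimal example (the function name, tensor shapes, and keep ratio are illustrative assumptions), not the method of any specific paper in this list. Methods in the list differ mainly in how the importance score is defined (attention, similarity, learned predictors) and in whether pruned tokens are discarded, cached, or merged.

```python
import torch

def prune_tokens_by_cls_attention(tokens, attn, keep_ratio=0.5):
    """Generic attention-score-based token pruning (illustration only).

    tokens: (B, N, D) token embeddings; token 0 is assumed to be [CLS].
    attn:   (B, H, N, N) attention weights from the current layer.
    """
    B, N, D = tokens.shape
    # Score each patch token by the attention it receives from [CLS], averaged over heads.
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)                         # (B, N-1)
    k = max(1, int(keep_ratio * (N - 1)))
    keep_idx = cls_attn.topk(k, dim=-1).indices.sort(dim=-1).values  # keep original order
    patches = tokens[:, 1:]                                          # (B, N-1, D)
    kept = patches.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([tokens[:, :1], kept], dim=1)                   # (B, 1 + k, D)
```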
If you find any errors or missing papers, please don't hesitate to open an issue or pull request; we welcome contributions that advance this field.
If you find our work useful for your project, please consider citing our paper and starring this repo.
```bibtex
@article{kong2025token,
  title={Token Reduction Should Go Beyond Efficiency in Generative Models--From Vision, Language to Multimodality},
  author={Kong, Zhenglun and Li, Yize and Zeng, Fanhu and Xin, Lei and Messica, Shvat and Lin, Xue and Zhao, Pu and Kellis, Manolis and Tang, Hao and Zitnik, Marinka},
  journal={arXiv preprint arXiv:2505.18227},
  year={2025}
}
```
## News

- 2026/03/08: Added papers from CVPR 2026, ICLR 2026, AAAI 2026, WACV 2026, and ICASSP 2026.
- 2026/01/12: Added a new section, Agentic Systems.
- 2026/01/12: Updated the paper "Token Reduction Should Go Beyond Efficiency in Generative Models -- From Vision, Language to Multimodality" with Agent, Efficient Reasoning, VLA, and more reference works.
- 2025/05/25: Check out our newly released position paper "Token Reduction Should Go Beyond Efficiency in Generative Models -- From Vision, Language to Multimodality", which demonstrates how token reduction is leveraged for more than just efficiency gains and outlines key future directions.
- 2025 venues covered: NeurIPS 2025, CVPR 2025, ICLR 2025, ICML 2025, ICCV 2025, WACV 2025, AAAI 2025, ACL 2025, EMNLP 2025, COLING 2025, COLM 2025, NAACL 2025, ICASSP 2025, ACM MM 2025, ICME 2025.
A detailed list of papers organized by modality can be found in this Google Sheet, including a brief introduction of the task, token reduction type, contribution, and methodology for each paper.
- Vision
- Language
- Vision-Language (Action) Model
- Agentic Systems
- Hardware Co-design
- State Space Models
## Vision

- [CVPR'26] UTPTrack: Towards Simple and Unified Token Pruning for Visual Tracking [Paper] [Code]
- [ICLR'26] RegionE: Adaptive Region-Aware Generation for Efficient Image Editing [Paper] [Code]
- [ICASSP'26] TinyDrop: Tiny Model Guided Token Dropping for Vision Transformers [Paper]
- [AAAI'26] CompTrack: Information Bottleneck-Guided Low-Rank Dynamic Token Compression for Point Cloud Tracking [Paper]
- [NeurIPS'25] Frequency-Aware Token Reduction for Efficient Vision Transformer [Paper] [Code]
- [ICASSP'25] Cross-Layer Cache Aggregation for Token Reduction in Ultra-Fine-Grained Image Recognition [Paper] [Code]
- [ICME'25] Sparsedm: Toward sparse efficient diffusion models [Paper]
- [ICME'25] SPEECHPRUNE: Context-aware Token Pruning for Speech Information Retrieval [Paper]
- [ICCV'25] Keyframe-oriented Vision Token Pruning: Enhancing Efficiency of Large Vision Language Models on Long-Form Video Processing [Paper]
- [ICCV'25] AuroraLong: Bringing RNNs Back to Efficient Open-Ended Video Understanding [Paper]
- [ICCV'25] Representation Shift: Unifying Token Compression with FlashAttention [Paper] [Code]
- [CVPR'25] Faster Parameter-Efficient Tuning with Token Redundancy Reduction [Paper] [Code]
- [CVPR'25] AdaCM2: On Understanding Extremely Long-Term Video with Adaptive Cross-Modality Memory Reduction [Paper]
- [CVPR'25] Token Cropr: Faster ViTs for Quite a Few Tasks [Paper]
- [CVPR'25] Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration [Paper] [Code]
- [CVPR'25] MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization [Paper] [Code]
- [CVPR'25] Rethinking Token Reduction with Parameter-Efficient Fine-Tuning in ViT for Pixel-Level Tasks [Paper] [Code]
- [CVPR'25] CATANet: Efficient Content-Aware Token Aggregation for Lightweight Image Super-Resolution [Paper] [Code]
- [CVPR'25] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification [Paper] [Code]
- [ICLR'25] Accelerating Diffusion Transformers with Token-wise Feature Caching [Paper] [Code]
- [ICLR'25] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark [Paper] [Code]
- [ICLR'25] Mutual Effort for Efficiency: A Similarity-based Token Pruning for Vision Transformers in Self-Supervised Learning [Paper]
- [ICLR'25] Dynamic diffusion transformer [Paper] [Code]
- [WACV'25] Pruning One More Token is Enough: Leveraging Latency-Workload Non-Linearities for Vision Transformers on the Edge [Paper]
- [ICASSP'25] Pruning then reweighting: Towards data-efficient training of diffusion models [Paper] [Code]
- [AAAI'25] FreqTS: Frequency-Aware Token Selection for Accelerating Diffusion Models [Paper]
- [AAAI'25] Multimodal Promptable Token Merging for Diffusion Models [Paper]
- [AAAI'25] Training-free and hardware-friendly acceleration for diffusion models via similarity-based token pruning [Paper] [Code]
- [arXiv] Pretraining Frame Preservation in Autoregressive Video Memory Compression [Paper] [Code]
- [arXiv] Co-Me: Confidence Guided Token Merging for Visual Geometric Transformers [Paper] [Code]
- [arXiv] OminiControl2: Efficient Conditioning for Diffusion Transformers [Paper] [Code]
- [arXiv] Token Transforming: A Unified and Training-Free Token Compression Framework for Vision Transformer Acceleration [Paper] [Code]
- [arXiv] Pyramid Sparse Transformer: Efficient Multi-Scale Feature Fusion with Dynamic Token Selection [Paper] [Code]
- [arXiv] Cached Adaptive Token Merging: Dynamic Token Reduction and Redundant Computation Elimination in Diffusion Model [Paper] [Code]
- [arXiv] Layer-and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers [Paper] [Code]
- [arXiv] UniCP: A Unified Caching and Pruning Framework for Efficient Video Generation [Paper]
- [arXiv] CAT Pruning: Cluster-Aware Token Pruning For Text-to-Image Diffusion Models [Paper] [Code]
- [arXiv] Concise Reasoning, Big Gains: Pruning Long Reasoning Trace with Difficulty-Aware Prompting [Paper] [Code]
- [NeurIPS'24] Accelerating Transformers with Spectrum-Preserving Token Merging [Paper]
- [NeurIPS'24] Video Token Merging for Long Video Understanding [Paper]
- [NeurIPS'24] Don't Look Twice: Faster Video Transformers with Run-Length Tokenization [Paper] [Code]
- [NeurIPSW'24] M2M-TAG: Training-Free Many-to-Many Token Aggregation for Vision Transformer Acceleration [Paper] [Code]
- [ECCV'24] Agglomerative Token Clustering [Paper] [Code]
- [ECCV'24] Token Compensator: Altering Inference Cost of Vision Transformer without Re-Tuning [Paper] [Code]
- [ECCV'24] LookupViT: Compressing visual information to a limited number of tokens [Paper]
- [ECCV'24] PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation [Paper] [Code]
- [ECCV'24] Turbo: Informativity-driven acceleration plug-in for vision-language large models [Paper]
- [ECCV'24] Object-centric diffusion for efficient video editing [Paper]
- [ECCV'24] Leveraging temporal contextualization for video action recognition [Paper] [Code]
- [IJCAI'24] ToDo: token downsampling for efficient generation of high-resolution images [Paper]
- [CVPR'24] Attention-driven training-free efficiency enhancement of diffusion models [Paper]
- [CVPR'24] vid-TLDR: Training Free Token Merging for Light-weight Video Transformer [Paper] [Code]
- [CVPR'24] Vidtome: Video token merging for zero-shot video editing [Paper] [Code]
- [CVPR'24] Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers [Paper] [Code]
- [ICLR'24] Synergistic Patch Pruning for Vision Transformer: Unifying Intra- & Inter-Layer Patch Importance [Paper]
- [WACV'24] Token Fusion: Bridging the Gap Between Token Pruning and Token Merging [Paper]
- [WACV'24] Revisiting Token Pruning for Object Detection and Instance Segmentation [Paper] [Code]
- [arXiv] Token Pruning for Caching Better: 9 Times Acceleration on Stable Diffusion for Free [Paper]
- [arXiv] Vote&Mix: Plug-and-Play Token Reduction for Efficient Vision Transformer [Paper]
- [arXiv] Dynamic and Compressive Adaptation of Transformers From Images to Videos [Paper]
- [arXiv] Importance-based Token Merging for Diffusion Models [Paper]
- [arXiv] AsymRnR: Video Diffusion Transformers Acceleration with Asymmetric Reduction and Restoration [Paper] [Code]
- [arXiv] Token Caching for Diffusion Transformer Acceleration [Paper]
- [arXiv] FlexDiT: Dynamic Token Density Control for Diffusion Transformer [Paper] [Code]
- [arXiv] Principles of Visual Tokens for Efficient Video Understanding [Paper]
- [EMNLP'23] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding [Paper] [Code]
- [ICCV'23] Dynamic Token Pruning in Plain Vision Transformers for Semantic Segmentation [Paper] [Code]
- [ICCV'23] DiffRate: Differentiable Compression Rate for Efficient Vision Transformers [Paper] [Code]
- [ICCV'23] TORE: Token Reduction for Efficient Human Mesh Recovery with Transformer [Paper] [Code]
- [ICCV'23] Prune spatio-temporal tokens by semantic-aware temporal accumulation [Paper] [Code]
- [ICCV'23] Efficient Video Action Detection with Token Dropout and Context Refinement [Paper] [Code]
- [ICCV'23] Masked Diffusion Transformer is a Strong Image Synthesizer [Paper] [Code]
- [ICCV'23 Workshop] Which Tokens to Use? Investigating Token Reduction in Vision Transformers [Paper] [Code]
- [CVPR'23] Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers [Paper] [Code]
- [CVPRW'23] Token merging for fast stable diffusion [Paper] [Code]
- [ICLR'23] Token Merging: Your ViT But Faster [Paper] [Code]
- [IJCAI'23] Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention [Paper] [Code]
- [TIP] Efficient Vision Transformer via Token Merger [Paper]
- [arXiv] PPT: Token Pruning and Pooling for Efficient Vision Transformers [Paper] [Code]
- [ECCV'22] SPViT: Enabling Faster Vision Transformers via Latency-aware Soft Token Pruning [Paper] [Code]
- [ECCV'22] ATS: Adaptive Token Sampling For Efficient Vision Transformers [Paper] [Code]
- [ECCV'22] PPT: token-Pruned Pose Transformer for monocular and multi-view human pose estimation [Paper] [Code]
- [ECCV'22] Ts2-net: Token shift and selection transformer for text-video retrieval [Paper]
- [ECCV'22] Efficient video transformers with spatial-temporal token selection [Paper] [Code]
- [CVPR'22] Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space [Paper] [Code]
- [CVPR'22] Patch Slimming for Efficient Vision Transformers [Paper]
- [CVPR'22] A-ViT: Adaptive Tokens for Efficient Vision Transformer [Paper] [Code]
- [ICLR'22] EViT: Expediting Vision Transformers via Token Reorganizations [Paper] [Code]
- [AAAI'22] Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer [Paper] [Code]
- [NeurIPS'21] IA-RED2: Interpretability-Aware Redundancy Reduction for Vision Transformers [Paper]
- [NeurIPS'21] Tokenlearner: Adaptive space-time tokenization for videos [Paper] [Code]
- [NeurIPS'21] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification [Paper] [Code]
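Many entries in the Vision section above merge tokens rather than drop them. The sketch below is a heavily simplified bipartite matching step in the spirit of Token Merging (ToMe, ICLR'23): tokens are split into two alternating sets, each token in the first set is matched to its most similar token in the second, and the r most similar pairs are averaged. The alternating split, the plain unweighted average, and the omission of any [CLS] handling are simplifying assumptions (ToMe, for instance, tracks token sizes for weighted averaging), so treat this as an illustration rather than the exact algorithm.

```python
import torch
import torch.nn.functional as F

def merge_similar_tokens(tokens, r):
    """Simplified bipartite token merging (illustration only; assumes r <= N // 2).

    tokens: (B, N, D). Returns (B, N - r, D) after merging the r most similar pairs.
    """
    B, N, D = tokens.shape
    a, b = tokens[:, ::2], tokens[:, 1::2]                         # alternating split into sets A and B
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).transpose(-1, -2)
    best_sim, best_idx = sim.max(dim=-1)                           # best match in B for each A-token
    order = best_sim.argsort(dim=-1, descending=True)
    src_idx, keep_idx = order[:, :r], order[:, r:]                 # A-tokens to merge away / to keep
    dst_idx = best_idx.gather(1, src_idx)                          # their matched targets in B
    # Average each merged A-token into its matched B-token.
    b = b.scatter_reduce(1, dst_idx.unsqueeze(-1).expand(-1, -1, D),
                         a.gather(1, src_idx.unsqueeze(-1).expand(-1, -1, D)),
                         reduce="mean", include_self=True)
    a_kept = a.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([a_kept, b], dim=1)
```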
## Language

- [ICLR'26] DiffAdapt: Difficulty-Adaptive Reasoning for Token-Efficient LLM Inference [Paper]
- [ICLR'26] LightMem: Lightweight and Efficient Memory-Augmented Generation [Paper] [Code]
- [ICLR'26] Self-Aligned Reward: Towards Effective and Efficient Reasoners [Paper] [Code]
- [AAAI'26] Efficient Reasoning for Large Reasoning Language Models via Certainty-Guided Reflection Suppression [Paper]
- [ICASSP'26] Mask-GCG: Are All Tokens in Adversarial Suffixes Necessary for Jailbreak Attacks? [Paper]
- [arXiv] Neural Chain-of-Thought Search: Searching the Optimal Reasoning Path to Enhance Large Language Models [Paper] [Code]
- [arXiv] Do LLMs Encode Functional Importance of Reasoning Tokens? [Paper] [Code]
- [arXiv] Self-Distilled Reasoner: On-Policy Self-Distillation for Large Language Models [Paper]
- [arXiv] When Reasoning Meets Its Laws [Paper] [Code]
- [arXiv] Sparse-dLLM: Accelerating Diffusion LLMs with Dynamic Cache Eviction [Paper] [Code]
- [arXiv] PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning [Paper] [Code]
- [arXiv] Uni-cot: Towards Unified Chain-of-Thought Reasoning Across Text and Vision [Paper] [Code]
- [arXiv] ARM2: Adaptive Reasoning Model with Vision Understanding and Executable Code [Paper]
- [arXiv] AdaCoT: Pareto-Optimal Adaptive Chain-of-Thought Triggering via Reinforcement Learning [Paper]
- [arXiv] AdaptThink: LLM Can Learn When to Think [Paper] [Code]
- [arXiv] Qwen3 Technical Report [Paper] [Code]
- [COLM'25] SEAL: Steerable Reasoning Calibration of Large Language Models for Free [Paper] [Code]
- [EMNLP'25] Position IDs Matter: An Enhanced Position Layout for Efficient Context Compression in Large Language Models [Paper] [Code]
- [EMNLP'25] ThinkSwitcher: When to Think Hard, When to Think Fast [Paper]
- [EMNLP'25] TokenSkip: Controllable Chain-of-Thought Compression in LLMs [Paper] [Code]
- [EMNLP'25] TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection [Paper] [Code]
- [EMNLP'25] LightThinker: Thinking Step-by-Step Compression [Paper] [Code]
- [arXiv] MixReasoning: Switching Modes to Think [Paper]
- [NeurIPSW'25] Adaptive Dual Reasoner: Large Reasoning Models Can Think Efficiently by Hybrid Reasoning [Paper]
- [NeurIPSW'25] Chopping Trees: Semantic Similarity Based Dynamic Pruning for Tree-of-Thought Reasoning [Paper] [Code]
- [NeurIPSW'25] DTS: Enhancing Large Reasoning Models via Decoding Tree Sketching [Paper] [Code]
- [NeurIPS'25] VeriThinker: Learning to Verify Makes Reasoning Model Efficient [Paper] [Code]
- [NeurIPS'25] Learning to Focus: Causal Attention Distillation via Gradient-Guided Token Pruning [Paper] [Code]
- [NeurIPS'25] Multi-head Temporal Latent Attention [Paper] [Code]
- [NeurIPS'25] Training Language Models to Reason Efficiently [Paper] [Code]
- [NeurIPS'25] Flexible Realignment of Language Models [Paper] [Code]
- [NeurIPS'25] ARM: Adaptive Reasoning Model [Paper] [Code]
- [NeurIPS'25] Ada-R1: Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization [Paper] [Code]
- [NeurIPS'25] Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space [Paper] [Code]
- [NeurIPS'25] Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning [Paper] [Code]
- [NeurIPS'25] Does Thinking More always Help? Mirage of Test-Time Scaling in Reasoning Models [Paper]
- [NeurIPS'25] Thinkless: LLM Learns When to Think [Paper] [Code]
- [arXiv] Beyond Fixed: Training-Free Variable-Length Denoising for Diffusion Large Language Models [Paper] [Code]
- [arXiv] Optimizing Length Compression in Large Reasoning Models [Paper] [Code]
- [arXiv] DPad: Efficient Diffusion Language Models with Suffix Dropout [Paper] [Code]
- [arXiv] CompLLM: Compression for Long Context Q&A [Paper]
- [arXiv] Less is More: Improving LLM Reasoning with Minimal Test-Time Intervention [Paper] [Code]
- [arXiv] SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning [Paper]
- [arXiv] Can Pruning Improve Reasoning? Revisiting Long-CoT Compression with Capability in Mind for Better Reasoning [Paper]
- [arXiv] A*-Thought: Efficient Reasoning via Bidirectional Compression for Low-Resource Settings [Paper] [Code]
- [arXiv] Steering LLM Thinking with Budget Guidance [Paper] [Code]
- [arXiv] TL;DR: Too Long, Do Re-weighting for Efficient LLM Reasoning Compression [Paper] [Code]
- [arXiv] EPiC: Towards Lossless Speedup for Reasoning Training through Edge-Preserving CoT Condensation [Paper] [Code]
- [arXiv] ThinkPrune: Pruning Long Chain-of-Thought of LLMs via Reinforcement Learning [Paper] [Code]
- [ACL'25] CoT-Valve: Length-Compressible Chain-of-Thought Tuning [Paper] [Code]
- [ACL'25] Token-Budget-Aware LLM Reasoning [Paper] [Code]
- [ACL'25] Accurate KV Cache Quantization with Outlier Tokens Tracing [Paper] [Code]
- [ICLR'25] MrT5: Dynamic Token Merging for Efficient Byte-level Language Models [Paper] [Code]
- [KAIS] Dynamic token pruning for LLMs: leveraging task-specific attention and adaptive thresholds [Paper] [Code]
- [NeurIPS'24] Fast Best-of-N Decoding via Speculative Rejection [Paper] [Code]
- [EMNLP'24] Attention Score is not All You Need for Token Importance Indicator in KV Cache Reduction: Value Also Matters [Paper] [Code]
- [EMNLP'24] Memory-Efficient Fine-Tuning of Transformers via Token Selection [Paper] [Code]
- [arXiv] LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference [Paper]
- [ICLR'24] In-context autoencoder for context compression in a large language model [Paper] [Code]
- [EMNLP'24] Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning [Paper]
- [EMNLP'23] Optimizing Retrieval-augmented Reader Models via Token Elimination [Paper] [Code]
- [EMNLP'23] Context Compression for Auto-regressive Transformers with Sentinel Tokens [Paper] [Code]
- [EMNLP'23] Leap-of-Thought: Accelerating Transformers via Dynamic Token Routing [Paper] [Code]
- [EMNLP'23] TLM: Token-Level Masking for Transformers [Paper] [Code]
- [EMNLP'23] Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance? [Paper]
- [EMNLP'23] Adapting Language Models to Compress Contexts [Paper] [Code]
- [NeurIPS'23] Learning to Compress Prompts with Gist Tokens [Paper] [Code]
- [NeurIPS'23] Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers [Paper]
- [ACL'23] Efficient Transformers with Dynamic Token Pooling [Paper] [Code]
- [ACL'23] Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions [Paper]
- [ACL'23] LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models [Paper] [Code]
- [ACL'23] Revisiting Token Dropping Strategy in Efficient BERT Pretraining [Paper]
- [ACL'22] Pyramid-BERT: Reducing Complexity via Successive Core-set based Token Selection [Paper]
- [ACL'22] AdapLeR: Speeding up Inference by Adaptive Length Reduction [Paper] [Code]
- [KDD'22] Learned Token Pruning for Transformers [Paper] [Code]
- [EMNLP'22] Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models [Paper]
- [NeurIPS'21] Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning [Paper]
- [ACL'21] Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search [Paper] [Code]
- [NAACL'21] TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference [Paper] [Code]
- [ICML'20] PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination [Paper] [Code]
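On the language side, a recurring idea in the prompt- and context-compression papers above is to drop tokens that a small scoring language model finds unsurprising, i.e. low-information. The sketch below shows that criterion in its most basic form, assuming per-position next-token logits are already available; the function name and keep ratio are illustrative, and practical compressors (LLMLingua-style methods, for example) typically add budget control and structure-aware constraints on top.

```python
import torch

def compress_prompt_by_surprisal(token_ids, logits, keep_ratio=0.5):
    """Drop low-information prompt tokens, keeping the most surprising ones (illustration only).

    token_ids: (T,) prompt token ids (long dtype).
    logits:    (T, V) next-token logits from a small scoring LM run over the prompt,
               so logits[t] is the prediction for token_ids[t + 1].
    """
    log_probs = torch.log_softmax(logits[:-1], dim=-1)                          # (T-1, V)
    surprisal = -log_probs.gather(1, token_ids[1:].unsqueeze(-1)).squeeze(-1)   # (T-1,) negative log-likelihood per token
    k = max(1, int(keep_ratio * surprisal.numel()))
    keep = surprisal.topk(k).indices.sort().values + 1   # +1 maps back to positions in the original sequence
    return torch.cat([token_ids[:1], token_ids[keep]])   # always keep the first token
```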
## Vision-Language (Action) Model

- [CVPR'26] StreamingTOM: Streaming Token Compression for Efficient Video Understanding [Paper] [Code]
- [CVPR'26] Prune2Drive: A Plug-and-Play Framework for Accelerating Vision-Language Models in Autonomous Driving [Paper] [Code]
- [CVPR'26] FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection [Paper] [Code]
- [CVPR'26] ZOO-Prune: Training-Free Token Pruning via Zeroth-Order Gradient Estimation in Vision-Language Models [Paper] [Code]
- [CVPR'26] OTPrune: Distribution-Aligned Visual Token Pruning via Optimal Transport [Paper] [Code]
- [CVPR'26] Accelerating Streaming Video Large Language Models via Hierarchical Token Compression [Paper] [Code]
- [ICLR'26] HiDrop: Hierarchical Vision Token Reduction in MLLMs via Late Injection, Concave Pyramid Pruning, and Early Exit [Paper] [Code]
- [ICLR'26] FLoC: Facility Location-Based Efficient Visual Token Compression for Long Video Understanding [Paper]
- [ICLR'26] Prune Redundancy, Preserve Essence: Vision Token Compression in VLMs via Synergistic Importance-Diversity [Paper] [Code]
- [ICLR'26] AgilePruner: An Empirical Study of Attention and Diversity for Adaptive Visual Token Pruning in Large Vision-Language Models [Paper] [Code]
- [ICLR'26] FlashVID: Efficient Video Large Language Models via Training-free Tree-based Spatiotemporal Token Merging [Paper] [Code]
- [ICLR'26] VisionTrim: Unified Vision Token Compression for Training-Free MLLM Acceleration [Paper] [Code]
- [ICLR'26] PPE: Positional Preservation Embedding for Multimodal Large Language Models [Paper] [Code]
- [ICLR'26] MARC: Memory-Augmented RL Token Compression for Efficient Video Understanding [Paper] [Code]
- [ICASSP'26] PAR: Prompt-Aware Token Reduction Method for Efficient Large Multimodal Models [Paper]
- [ICASSP'26] Adaptive-VoCo: Complexity-Aware Visual Token Compression for Vision-Language Models [Paper]
- [AAAI'26] TinyChemVL: Advancing Chemical Vision-Language Models via Efficient Visual Token Reduction and Complex Reaction Tasks [Paper] [Code]
- [AAAI'26] Filter, Correlate, Compress: Training-Free Token Reduction for MLLM Acceleration [Paper] [Code]
- [AAAI'26] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models [Paper] [Code]
- [AAAI'26] FastDriveVLA: Efficient End-to-End Driving via Plug-and-Play Reconstruction-based Token Pruning [Paper]
- [AAAI'26] TabFlash: Efficient Table Understanding with Progressive Question Conditioning and Token Focusing [Paper] [Code]
- [WACV'26] Delta-LLaVA: Base-then-Specialize Alignment for Token-Efficient Vision-Language Models [Paper]
- [arXiv] LEO-VL: Efficient Scene Representation for Scalable 3D Vision-Language Learning [Paper] [Code]
- [arXiv] On the Adversarial Robustness of Large Vision-Language Models under Visual Token Compression [Paper]
- [arXiv] VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice [Paper] [Code]
- [arXiv] SparseOccVLA: Bridging Occupancy and Vision-Language Models via Sparse Queries for Unified 4D Scene Understanding and Planning [Paper] [Code]
- [arXiv] Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models [Paper]
- [arXiv] Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods [Paper] [Code]
- [ACL'25] Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem? [Paper]
- [TPAMI'25] MovieChat+: Question-Aware Sparse Memory for Long Video Question Answering [Paper] [Code]
- [ICML'25] CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models [Paper] [Code]
- [ACM MM'25] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos [Paper] [Code]
- [ACM MM'25] Short-LVLM: Compressing and Accelerating Large Vision-Language Models by Pruning Redundant Layers [Paper] [Code]
- [ACM MM'25] VISA: Group-wise Visual Token Selection and Aggregation via Graph Summarization for Efficient MLLMs Inference [Paper] [Code]
- [ACM MM'25] Mitigating Information Loss under High Pruning Rates for Efficient Large Vision Language Models [Paper] [Code]
- [EMNLP'25] CROP: Contextual Region-Oriented Visual Token Pruning [Paper] [Code]
- [EMNLP'25] AdaTP: Attention-Debiased Token Pruning for Video Large Language Models [Paper]
- [EMNLP'25] D-CoDe: Scaling Image-Pretrained VLMs to Video via Dynamic Compression and Question Decomposition [Paper] [Code]
- [EMNLP'25] Static or Dynamic: Towards Query-Adaptive Token Selection for Video Question Answering [Paper] [Code]
- [EMNLP'25] Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More [Paper] [Code]
- [EMNLP'25] LightVLM: Accelerating Large Multimodal Models with Pyramid Token Merging and KV Cache Compression [Paper]
- [EMNLP'25] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models [Paper] [Code]
- [ACL'25] PruneVid: Visual Token Pruning for Efficient Video Large Language Models [Paper] [Code]
- [NeurIPS'25] SCOPE: Saliency-Coverage Oriented Token Pruning for Efficient Multimodal LLMs [Paper] [Code]
- [NeurIPS'25] EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models [Paper] [Code]
- [NeurIPS'25] Don't Just Chase "Highlighted Tokens" in MLLMs: Revisiting Visual Holistic Context Retention [Paper] [Code]
- [NeurIPS'25] UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface [Paper] [Code]
- [NeurIPS'25] VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning [Paper] [Code]
- [NeurIPS'25] Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs [Paper] [Code]
- [NeurIPS'25] VQToken: Neural Discrete Token Representation Learning for Extreme Token Reduction in Video Large Language Models [Paper] [Code]
- [NeurIPS'25] Vision-centric Token Compression in Large Language Model [Paper]
- [NeurIPS'25] AutoPrune: Each Complexity Deserves a Pruning Policy [Paper] [Code]
- [NeurIPS'25] Recurrent Attention-based Token Selection for Efficient Streaming Video-LLMs [Paper] [Code]
- [NeurIPS'25] FlexSelect: Flexible Token Selection for Efficient Long Video Understanding [Paper] [Code]
- [NeurIPS'25] Balanced Token Pruning: Accelerating Vision Language Models Beyond Local Optimization [Paper] [Code]
- [NeurIPS'25] Why 1 + 1 < 1 in Visual Token Pruning: Beyond Naive Integration via Multi-Objective Balanced Covering [Paper]
- [NeurIPS'25] HoliTom: Holistic Token Merging for Fast Video Large Language Models [Paper] [Code]
- [NeurIPS'25] FastVID: Dynamic Density Pruning for Fast Video Large Language Models [Paper] [Code]
- [ICCV'25] FALCON: Resolving Visual Redundancy and Fragmentation in High-resolution Multimodal Large Language Models via Visual Registers [Paper] [Code]
- [ICCV'25] Oasis: One Image is All You Need for Multimodal Instruction Data Synthesis [Paper] [Code]
- [ICCV'25] Growing a Twig to Accelerate Large Vision-Language Models [Paper] [Code]
- [ICCV'25] Multi-Granular Spatio-Temporal Token Merging for Training-Free Acceleration of Video LLMs [Paper] [Code]
- [arXiv] Sparsity Forcing: Reinforcing Token Sparsity of MLLMs [Paper]
- [arXiv] ColaVLA: Leveraging Cognitive Latent Reasoning for Hierarchical Parallel Trajectory Planning in Autonomous Driving [Paper] [Code]
- [arXiv] Action-aware Dynamic Pruning for Efficient Vision-Language-Action Manipulation [Paper] [Code]
- [arXiv] The Better You Learn, The Smarter You Prune: Towards Efficient Vision-language-action Models via Differentiable Token Pruning [Paper] [Code]
- [arXiv] STORM: Token-Efficient Long Video Understanding for Multimodal LLMs [Paper] [Code]
- [arXiv] DeepSeek-OCR: Contexts Optical Compression [Paper] [Code]
- [arXiv] Glyph: Scaling Context Windows via Visual-Text Compression [Paper] [Code]
- [arXiv] VTCBench: Can Vision-Language Models Understand Long Context with Vision-Text Compression? [Paper] [Code]
- [arXiv] OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models [Paper] [Code]
- [arXiv] Adaptive Token Merging for Efficient Transformer Semantic Communication at the Edge [Paper]
- [arXiv] D2Pruner: Debiased Importance and Structural Diversity for MLLM Token Pruning [Paper] [Code]
- [arXiv] TransPrune: Token Transition Pruning for Efficient Large Vision-Language Model [Paper] [Code]
- [arXiv] A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models [Paper] [Code]
- [arXiv] VisionSelector: End-to-End Learnable Visual Token Compression for Efficient Multimodal LLMs [Paper] [Code]
- [arXiv] Can Visual Input Be Compressed? A Visual Token Compression Benchmark for Large Multimodal Models [Paper] [Code]
- [arXiv] HiPrune: Training-Free Visual Token Pruning via Hierarchical Attention in Vision-Language Models [Paper] [Code]
- [arXiv] To Sink or Not to Sink: Visual Information Pathways in Large Vision-Language Models [Paper] [Code]
- [arXiv] Fine-grained Token Allocation Via Operation Pruning for Efficient MLLMs [Paper] [Code]
- [arXiv] GreedyPrune: Retenting Critical Visual Token Set for Large Vision Language Models [Paper]
- [arXiv] Generic Token Compression in Multimodal Large Language Models from an Explainability Perspective [Paper]
- [arXiv] DynTok: Dynamic Compression of Visual Tokens for Efficient and Effective Video Understanding [Paper]
- [arXiv] SmolVLM: Redefining small and efficient multimodal models [Paper] [Code]
- [arXiv] Similarity-Aware Token Pruning: Your VLM but Faster [Paper] [Code]
- [arXiv] LFTR: Learning-Free Token Reduction for Multimodal Large Language Models [Paper]
- [ICCV'25] Dynamic-VLM: Simple Dynamic Visual Token Compression for VideoLLM [Paper]
- [ICML'25] Streamline Without Sacrifice - Squeeze out Computation Redundancy in LMM [Paper]
- [ICML'25] SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference [Paper] [Code]
- [ICML'25] LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding [Paper] [Code]
- [ICML'25] Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation [Paper] [Code]
- [CVPR'25] LION-FS: Fast & Slow Video-Language Thinker as Online Video Assistant [Paper] [Code]
- [CVPR'25] A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for accelerating Large VLMs [Paper] [Code]
- [CVPR'25] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction [Paper] [Code]
- [CVPR'25] Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models [Paper] [Code]
- [CVPR'25] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models [Paper]
- [CVPR'25] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models [Paper] [Code]
- [CVPR'25] SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding [Paper]
- [CVPR'25] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models [Paper]
- [CVPR'25] TopV: Compatible Token Pruning with Inference Time Optimization for Fast and Low-Memory Multimodal Vision Language Model [Paper]
- [CVPR'25] Accelerating Multimodal Large Language Models by Searching Optimal Vision Token Reduction [Paper]
- [CVPR'25] ATP-LLaVA: Adaptive Token Pruning for Large Vision Language Models [Paper] [Code]
- [CVPR'25] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models [Paper]
- [CVPR'25] VoCo-LLaMA: Towards Vision Compression with Large Language Models [Paper] [Code]
- [CVPR'25] VisionZip: Longer is Better but Not Necessary in Vision Language Models [Paper] [Code]
- [NAACL'25 Findings] LVPruning: An Effective yet Simple Language-Guided Vision Token Pruning Approach for Multi-modal Large Language Models [Paper]
- [ICLR'25] Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification [Paper] [Code]
- [ICLR'25] Towards Semantic Equivalence of Tokenization in Multimodal LLM [Paper] [Code]
- [ICLR'25] LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token [Paper] [Code]
- [ICLR'25] Matryoshka Multimodal Models [Paper] [Code]
- [ICLR'25] TempMe: Video Temporal Token Merging for Efficient Text-Video Retrieval [Paper] [Code]
- [ICLR'25] Inference Optimal VLMs Need Fewer Visual Tokens and More Parameters [Paper]
- [WACV'25] VLTP: Vision-Language Guided Token Pruning for Task-Oriented Segmentation [Paper] [Code]
- [WACV'25] Patch Ranking: Token Pruning as Ranking Prediction for Efficient CLIP [Paper]
- [AAAI'25] Boosting Multimodal Large Language Models with Visual Tokens Withdrawal for Rapid Inference [Paper] [Code]
- [AAAI'25] HiRED: Attention-Guided Token Dropping for Efficient Inference of High-Resolution Vision-Language Models in Resource-Constrained Environments [Paper] [Code]
- [AAAI'25] Fit and Prune: Fast and Training-free Visual Token Pruning for Multi-modal Large Language Models [Paper] [Code]
- [COLING'25] Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs [Paper] [Code]
- [arXiv] Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models [Paper] [Code]
- [arXiv] ZipR1: Reinforcing Token Sparsity in MLLMs [Paper]
- [arXiv] Fast-Slow Thinking for Large Vision-Language Model Reasoning [Paper] [Code]
- [arXiv] Dynamic Token Reduction during Generation for Vision Language Models [Paper]
- [arXiv] Compression with Global Guidance: Towards Training-free High-Resolution MLLMs Acceleration [Paper] [Code]
- [arXiv] FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models [Paper] [Code]
- [arXiv] VScan: Rethinking Visual Token Reduction for Efficient Large Vision-Language Models [Paper] [Code]
- [arXiv] ToDRE: Visual Token Pruning via Diversity and Task Awareness for Efficient Large Vision-Language Models [Paper]
- [EMNLP'24] TinyChart: Efficient Chart Understanding with Program-of-Thoughts Learning and Visual Token Merging [Paper] [Code]
- [NeurIPS'24] Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis [Paper] [Code]
- [ECCV'24] IVTP: Instruction-guided Visual Token Pruning for Large Vision-Language Models [Paper]
- [ECCV'24] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Acceleration for VLLM Inference [Paper] [Code]
- [ICML'24] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers [Paper] [Code]
- [ECCV'24] LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models [Paper] [Code]
- [ECCV'24] BRAVE: Broadening the visual encoding of vision-language models [Paper] [Code]
- [CVPR'24] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding [Paper] [Code]
- [CVPR'24] MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer [Paper] [Code]
- [CVPR'24] Honeybee: Locality-enhanced Projector for Multimodal LLM [Paper] [Code]
- [arXiv] ZipVL: Efficient Large Vision-Language Models with Dynamic Token Sparsification and KV Cache Compression [Paper]
- [arXiv] Rethinking Token Reduction in MLLMs: Towards a Unified Paradigm for Training-Free Acceleration [Paper] [Code]
- [OpenReview] LVP: Language-guide Visual Projector for Efficient Multimodal LLM [Paper]
- [arXiv] FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models [Paper] [Code]
- [arXiv] AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning [Paper] [Code]
- [arXiv] TokenPacker: Efficient Visual Projector for Multimodal LLM [Paper] [Code]
- [arXiv] mPLUG-DocOwl2: High-resolution Compressing for OCR-free Multi-page Document Understanding [Paper] [Code]
- [arXiv] Recoverable Compression: A Multimodal Vision Token Recovery Mechanism Guided by Text Information [Paper]
- [arXiv] Token-level Correlation-guided Compression for Efficient Multimodal Document Understanding [Paper] [Code]
- [arXiv] DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models [Paper] [Code]
- [arXiv] CATP: Cross-Attention Token Pruning for Accuracy Preserved Multimodal Model Inference [Paper]
- [arXiv] MobileVLM V2: Faster and Stronger Baseline for Vision Language Model [Paper] [Code]
- [arXiv] LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models [Paper] [Code]
- [arXiv] iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models [Paper] [Code]
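A large fraction of the vision-language entries above prune visual tokens using text-conditioned attention inside the LLM decoder. The sketch below shows that criterion in its simplest form: score each visual token by the average attention it receives from text tokens and keep the highest-scoring ones. The function name, tensor layout, and keep ratio are assumptions chosen for illustration, not the exact criterion of any paper listed here.

```python
import torch

def prune_visual_tokens_by_text_attention(vis_tokens, attn, keep_ratio=0.25):
    """Keep the visual tokens that receive the most attention from text tokens (illustration only).

    vis_tokens: (B, Nv, D) visual token states at some decoder layer.
    attn:       (B, H, Nt, Nv) attention from Nt text tokens to Nv visual tokens.
    """
    B, Nv, D = vis_tokens.shape
    score = attn.mean(dim=(1, 2))                                  # (B, Nv): average over heads and text queries
    k = max(1, int(keep_ratio * Nv))
    keep_idx = score.topk(k, dim=-1).indices.sort(dim=-1).values   # keep original spatial order
    return vis_tokens.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
```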
## Agentic Systems

- [ICLR'26] ACON: Optimizing Context Compression for Long-horizon LLM Agents [Paper] [Code]
- [arXiv] SWE-Pruner: Self-Adaptive Context Pruning for Coding Agents [Paper] [Code]
- [arXiv] AgentOCR: Reimagining Agent History via Optical Self-Compression [Paper]
- [arXiv] CARL: Critical Action Focused Reinforcement Learning for Multi-Step Agent [Paper]
- [arXiv] Latent Collaboration in Multi-Agent Systems [Paper] [Code]
- [arXiv] Scaling Graph Chain-of-Thought Reasoning: A Multi-Agent Framework with Efficient LLM Serving [Paper]
- [ACL'25] Efficient Pretraining Data Selection for Language Models via Multi-Actor Collaboration [Paper] [Code]
- [NAACL'25] S2-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency [Paper]
## State Space Models

- [arXiv] Training-free Token Reduction for Vision Mamba [Paper]
- [arXiv] Dynamic Vision Mamba [Paper] [Code]
- [EMNLP'24] Rethinking Token Reduction for State Space Models [Paper] [Code]
- [NeurIPS'24] Exploring Token Pruning in Vision State Space Models [Paper]
- [ECCV'24 Workshop] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion [Paper] [Code]
## Hardware Co-design

- [FCCM'24] Accelerating ViT Inference on FPGA through Static and Dynamic Pruning [Paper]
- [TCASI] BSViT: A Bit-Serial Vision Transformer Accelerator Exploiting Dynamic Patch and Weight Bit-Group Quantization [Paper]
- [ASPDAC'24] PRIMATE: Processing in Memory Acceleration for Dynamic Token-Pruning Transformers [Paper]
- [DATE'24] ViT-ToGo: Vision Transformer Accelerator with Grouped Token Pruning [Paper]
- [HPCA'23] HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers [Paper]
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning [Paper] [Code]
