MArSha1147/MetaCompress


[CVPR 2026] Rethinking Token Reduction for Large Vision-Language Models

Overview

This repository is the official implementation of our CVPR 2026 paper "Rethinking Token Reduction for Large Vision-Language Models". We propose a learning-based, prompt-agnostic token compression method tailored to Large Vision-Language Models (LVLMs) in multi-turn Visual Question Answering (MT-VQA) scenarios.

Status

⌛️ Code Release Update: The code is currently being organized and will be released as soon as possible.
