DToMA is a training-free method designed to enhance the efficiency and comprehension capabilities of Video Large Language Models (VideoLLMs) in long video understanding tasks. Inspired by human cognitive reasoning processes, DToMA dynamically manipulates visual tokens across three stages (shallow, intermediate, and deep), leading to significant computational savings without sacrificing performance.
- Training-free approach: no need to fine-tune models.
- Generalizes across architectures: works with various VideoLLM backbones.
- Three-stage reasoning optimization: tailored to mimic human cognition.
- Efficiency gains: up to 70% reduction in visual tokens with minimal or no loss in performance.
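To illustrate the general idea, here is a minimal sketch of staged visual-token pruning. Everything in it is an assumption for illustration: the `prune_tokens` helper, the per-stage keep ratios, and the use of generic importance scores (e.g., attention-derived) are hypothetical and are not DToMA's actual algorithm, which is described in the repository linked below.

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio):
    """Hypothetical helper: keep the highest-scoring fraction of visual
    tokens while preserving their temporal order."""
    n_keep = max(1, int(round(tokens.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(scores)[-n_keep:])  # indices back in order
    return tokens[keep], scores[keep]

# Assumed per-stage keep ratios, mimicking coarse-to-fine reasoning:
# prune gently in shallow layers, more aggressively in deep layers.
stage_keep = {"shallow": 0.75, "intermediate": 0.8, "deep": 0.5}

rng = np.random.default_rng(0)
tokens = rng.normal(size=(100, 8))  # 100 visual tokens, 8-dim features
scores = rng.random(100)            # stand-in importance scores

for stage, ratio in stage_keep.items():
    tokens, scores = prune_tokens(tokens, scores, ratio)

print(tokens.shape[0])  # 100 -> 75 -> 60 -> 30 tokens kept
```

Compounding the stage ratios (0.75 × 0.8 × 0.5 = 0.3) leaves 30 of 100 tokens, matching the ~70% reduction figure above.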
Details can be found at https://github.com/yuanrr/DToMA