VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding

Jiapeng Shi, Junke Wang, Zuyao You, Bo He, Zuxuan Wu✉

[📜 Paper] [📥 Model] [🤗 Dataset]

🔎 Overview

This paper presents VideoLoom, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To develop fine-grained spatial and temporal localization capabilities, we curate LoomData-8.7k, a human-centric video dataset with temporally grounded and spatially localized captions. Trained on this data, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation, and 48.3 R1@0.7 on Charades-STA for temporal grounding). In addition, we introduce LoomBench, a novel benchmark of temporal, spatial, and compositional video-question pairs that enables comprehensive evaluation of Video LLMs from diverse aspects. Together, these contributions offer a universal and effective suite for joint spatial-temporal video understanding, setting a new standard in multimodal intelligence.

Model

[Model architecture figure]

🔥 News

  • Jan. 13, 2026: Our paper and checkpoints are released.

📦 Model Zoo

We provide the following models:

| Model Name   | Base MLLM      | Checkpoints |
|--------------|----------------|-------------|
| VideoLoom-4B | InternVL2.5-4B | 🤗 link     |
| VideoLoom-8B | InternVL3-8B   | 🤗 link     |
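
Official inference code has not been released yet (see the todo list below), but since both checkpoints build on InternVL, a minimal loading sketch with Hugging Face `transformers` might look like the following. The repository ID, dtype, and the `trust_remote_code` entry point are assumptions here, not the confirmed API:

```python
# Hypothetical sketch: assumes the checkpoints follow the usual
# InternVL-style custom-code layout on the Hugging Face Hub.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "JPShi12/VideoLoom-8B"  # assumed repo ID; check the 🤗 link above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit an 8B model on one GPU
    trust_remote_code=True,
).eval()
```

Refer to the model cards linked above for the official usage once the evaluation code is released.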

✅ Todo List

  • Release our checkpoints
  • Release our evaluation code
  • Release LoomData
  • Release our training code
  • Release LoomBench

📜 Citation

If you find our work helpful, please consider giving a star ⭐ and a citation 📝.

@article{shi2026videoloom,
      title={VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding}, 
      author={Shi, Jiapeng and Wang, Junke and You, Zuyao and He, Bo and Wu, Zuxuan},
      journal={arXiv preprint arXiv:2601.07290},
      year={2026}
}

📧 Contact

Feel free to contact us if you have any questions or suggestions.

🤝 Acknowledgements

Our codebase builds on Sa2VA and TimeChat. Thanks for their wonderful projects.
