MovieSeq (ECCV'24)

[Figure: MovieSeq overview]

MovieSeq is a method that enhances Large Multimodal Models (LMMs) for video in-context learning by representing video context as interleaved multimodal sequences (e.g., character photos, human dialogues).

We provide a lightweight, practical implementation that can be easily integrated with existing LMMs (e.g., GPT-4o).
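
To illustrate the idea, here is a minimal sketch (not the repository's actual code; see example.ipynb for the real usage) of how an interleaved sequence of character photos, dialogue lines, and video frames can be packed into a single GPT-4o request via the openai client. All file paths and the encode_image/image_part helpers are hypothetical placeholders:

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def encode_image(path: str) -> str:
    # Base64-encode an image so it can be sent inline with the request.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def image_part(path: str) -> dict:
    # Wrap an image file as an OpenAI vision content part.
    return {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{encode_image(path)}"},
    }

# Interleave a character photo, dialogue, and sampled video frames,
# then end with the question about the clip.
content = [
    {"type": "text", "text": "Character: Alice"},
    image_part("alice.jpg"),
    {"type": "text", "text": 'Alice says: "Where were you last night?"'},
    image_part("frame_001.jpg"),
    image_part("frame_002.jpg"),
    {"type": "text", "text": "Question: How does Alice feel in this scene?"},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)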

Environments

conda create --name movieseq python=3.10
conda activate movieseq
conda install pytorch==2.0.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia

pip install git+https://github.com/m-bain/whisperx.git
pip install tqdm moviepy openai opencv-python
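
To sanity-check the installation, you can run a short transcription with WhisperX. This sketch follows WhisperX's documented API; audio.mp3 is a placeholder path:

import whisperx

device = "cuda"  # use "cpu" with compute_type="int8" if no GPU is available
model = whisperx.load_model("large-v2", device, compute_type="float16")
audio = whisperx.load_audio("audio.mp3")  # placeholder path
result = model.transcribe(audio, batch_size=16)
print(result["segments"])  # dialogue segments with start/end timestamps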

Guideline

Please refer to example.ipynb to learn how MovieSeq works. Have fun!
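
If you want to prepare your own clips first, the sketch below shows one possible preprocessing step using the moviepy/opencv dependencies installed above: it extracts the audio track (for WhisperX) and uniformly sampled frames (for the LMM). The clip path and the one-frame-per-second sampling rate are assumptions; the notebook may organize this differently:

import cv2
from moviepy.editor import VideoFileClip  # moviepy<2.0 import path;
                                          # for moviepy>=2.0 use: from moviepy import VideoFileClip

clip = VideoFileClip("movie_clip.mp4")        # placeholder path
clip.audio.write_audiofile("movie_clip.wav")  # audio track for transcription

# Sample one frame per second; moviepy returns RGB arrays, OpenCV writes BGR.
for t in range(0, int(clip.duration)):
    frame = clip.get_frame(t)
    cv2.imwrite(f"frame_{t:04d}.jpg", cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))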
