Can Language Models Laugh at YouTube Short-form Videos? [EMNLP 2023] [ArXiv]

ExFunTube Dataset   [Dataset Page]

To evaluate whether LLMs can understand humor in videos, we collect user-generated, short-form funny videos from YouTube. The resulting ExFunTube dataset consists of 10,136 videos, each annotated with the start and end timestamps of its funny moments and corresponding explanations.
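As a minimal sketch of how such annotations could be consumed, the snippet below parses one hypothetical record with timestamped funny moments and explanations; the field names (`video_id`, `moments`, `start`, `end`, `explanation`) are illustrative assumptions, not the dataset's actual schema.

```python
import json

# Hypothetical annotation record; the real ExFunTube schema may differ.
record = json.loads("""
{
  "video_id": "abc123",
  "moments": [
    {"start": 3.2, "end": 7.8,
     "explanation": "The dog mimics the owner's sneeze."}
  ]
}
""")

def funny_spans(rec):
    """Return (start, end) pairs of annotated funny moments."""
    return [(m["start"], m["end"]) for m in rec["moments"]]

print(funny_spans(record))  # [(3.2, 7.8)]
```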

How to make LLMs watch and explain funny videos?

Since black-box LLMs accept only text, we must convert videos into text form. To do this, we devise a zero-shot video-to-text prompting framework. We split each video into its visual and audio streams, and further decompose the audio into a transcript and sound tags. We extract information from these three components using SOTA models in a zero-shot manner, and then assemble the gathered text into prompts.
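The assembly step above can be sketched as a simple template that joins the three text components into one prompt. The function and template below are illustrative assumptions, not the paper's exact prompt format:

```python
def build_prompt(visual_captions, transcript, sound_tags):
    """Assemble an LLM prompt from the three video components.

    Inputs are plain strings produced by upstream zero-shot models
    (e.g. a visual captioner, an ASR system, and an audio tagger).
    The wording of the template is a hypothetical example.
    """
    parts = [
        "Visual description: " + "; ".join(visual_captions),
        "Transcript: " + transcript,
        "Sound tags: " + ", ".join(sound_tags),
        "Explain why this video is funny.",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    ["a man slips on ice", "a dog watches"],
    "Watch out for the--",
    ["thud", "laughter"],
)
print(prompt)
```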

ExFunTube

Usage

To run the filter pipeline:

```
$ cd pipeline
$ conda env create --file environment.yaml
$ pip install git+https://github.com/openai/CLIP.git
$ python run_pipeline.py --video_ids {video_id_file_name}
```

To run prompting:

```
$ cd prompting
$ conda env create --file environment.yaml
$ pip install git+https://github.com/openai/CLIP.git
$ python run_prompting.py
```

Citation

@inproceedings{ko2023can,
  title={Can Language Models Laugh at YouTube Short-form Videos?},
  author={Ko, Dayoon and Lee, Sangho and Kim, Gunhee},
  booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
  year={2023}
}

About

The source code for the ExFunTube dataset pipeline and prompting framework
