VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation

📃 Paper • 🖼 Dataset • 🤗 HF Repo • 🌐 Chinese Blog

VisionReward is a fine-grained, multi-dimensional reward model designed to capture human preferences in images and videos. By breaking down subjective judgments into interpretable dimensions with weighted scoring, it delivers precise and comprehensive evaluations. Excelling in video quality prediction, VisionReward sets a new benchmark by thoroughly analyzing dynamic video features.
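Concretely, the model answers a checklist of yes/no judgment questions about a sample and combines the answers into a weighted linear score. Below is a minimal sketch of that idea only; the questions, weights, and the +1/-1 answer mapping are hypothetical stand-ins, while the real checklist and weights ship in the VisionReward_*/ qa files and weight.json.

```python
# Minimal sketch of checklist-based weighted scoring (illustrative only).
# The questions and weights below are made up; the real checklist lives in
# VisionReward_*/..._qa.txt and the weights in VisionReward_*/weight.json.

WEIGHTS = {
    "Is the image sharp and free of blur?": 0.8,    # hypothetical dimension
    "Does the content match the prompt?": 1.2,      # hypothetical dimension
    "Are faces rendered without distortion?": 0.6,  # hypothetical dimension
}

def vision_reward_score(answers: dict) -> float:
    """Map each yes/no answer to +1/-1 (an assumption; check the repo's
    scoring code) and take the weighted sum over all dimensions."""
    return sum(w * (1.0 if answers[q] else -1.0) for q, w in WEIGHTS.items())

answers = {q: True for q in WEIGHTS}
answers["Are faces rendered without distortion?"] = False
print(vision_reward_score(answers))  # 0.8 + 1.2 - 0.6 = 1.4
```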

✨ Key Highlights:

  • New Reward Model & SOTA Performance: VisionReward, a fine-grained, multi-dimensional, interpretable reward model, achieves 64.0 (Tau) / 72.1 (Diff) on the Video Preference Test Set, surpassing VideoScore by 17.2% and setting a new state of the art!
  • Fine-Grained Multi-Dimensional Dataset: A rich, high-quality dataset with detailed annotations drives VisionReward's precise understanding of human preferences across images and videos.
  • Multi-Objective Preference Optimization (MPO): Achieves stable and controllable RLHF, enabling the generation model to consider and balance multiple dimensions of human preference simultaneously; see the sketch after this list.
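One way to read the MPO idea is as a dominance filter over multi-dimensional preference data: a pair is only used for preference optimization when one sample is at least as good on every dimension, so the training signal never mixes conflicting dimensions. The sketch below illustrates that filter under this assumption, with made-up per-dimension scores; it is not the repo's training code.

```python
# Hedged sketch of a dominance filter for multi-objective preference data.
# Assumption: MPO keeps a pair only when one sample is at least as good on
# every dimension (and strictly better on at least one). Scores are made up.

def dominates(a, b):
    """True if a is >= b on every dimension and > b on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pairs = [
    ([0.9, 0.8, 0.7], [0.5, 0.4, 0.6]),  # first sample dominates -> keep
    ([0.9, 0.2, 0.7], [0.5, 0.4, 0.6]),  # dimensions disagree    -> drop
]
clean = [(a, b) for a, b in pairs if dominates(a, b) or dominates(b, a)]
print(f"kept {len(clean)} of {len(pairs)} pairs")  # kept 1 of 2 pairs
```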

🚀 Release Information

✨ Models

| 📋 Model | 🧠 Base Model | 🤗 HF Link | 🤖 MS Link |
|---|---|---|---|
| VisionReward-Image | cogvlm2-llama3-chat-19B | 🤗 Huggingface | 🤖 ModelScope |
| VisionReward-Video | cogvlm2-video-llama3-chat | 🤗 Huggingface | 🤖 ModelScope |

🎨 Datasets

| 📋 Dataset | 📝 Annotation | 🤗 HF Link | 🤖 MS Link |
|---|---|---|---|
| VisionRewardDB-Image | 48K × 60 (dimensions) | 🤗 Huggingface | 🤖 ModelScope |
| VisionRewardDB-Video | 33K × 64 (dimensions) | 🤗 Huggingface | 🤖 ModelScope |

🔧 Quick Start

Set Up the Environment

Run the following commands to install dependencies:

```bash
pip install -r requirements.txt
```

Run VQA (Visual Question Answering)

Perform a checklist query using the commands below. Available image and video questions can be found in VisionReward_Image/VisionReward_image_qa.txt and VisionReward_Video/VisionReward_video_qa.txt, respectively.

```bash
# For Image QA
python inference-image.py --bf16 --question [[your_question]]
# Input: image_path + prompt + question
# Output: yes/no

# For Video QA
python inference-video.py --question [[your_question]]
# Input: video_path + prompt + question
# Output: yes/no
```
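To run the full checklist rather than a single question, one can loop over the question file and invoke the script once per question. The sketch below assumes each line of VisionReward_video_qa.txt is one question and that inference-video.py reads the video path and prompt from stdin; check the script's actual input handling before using it.

```python
# Hedged sketch: answer the whole video checklist for one sample.
# Assumptions: one question per line in the qa file, and inference-video.py
# reads video_path and prompt from stdin. Adjust to the script's real I/O.
import subprocess

video_path = "demo.mp4"                  # hypothetical sample
prompt = "a corgi running on the beach"  # hypothetical prompt

with open("VisionReward_Video/VisionReward_video_qa.txt") as f:
    questions = [line.strip() for line in f if line.strip()]

answers = []
for q in questions:
    out = subprocess.run(
        ["python", "inference-video.py", "--question", q],
        input=f"{video_path}\n{prompt}\n",
        capture_output=True, text=True,
    )
    answers.append("yes" in out.stdout.lower())

print(f"{sum(answers)}/{len(answers)} checklist items answered 'yes'")
```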

Scoring with VisionReward

Calculate scores for images/videos with the following commands. The corresponding weights are in VisionReward_Image/weight.json and VisionReward_Video/weight.json.

```bash
# Scoring an Image
python inference-image.py --bf16 --score
# Input: image_path + prompt
# Output: score

# Scoring a Video
python inference-video.py --score
# Input: video_path + prompt
# Output: score
```
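A common use of the --score mode is re-ranking a batch of candidates generated for the same prompt. The sketch below shells out to the CLI above for every image in a folder; it assumes the script reads the image path and prompt from stdin and prints the score on its last output line, so adapt the I/O to the actual script.

```python
# Hedged sketch: re-rank generated images for one prompt via --score.
# Assumptions: inference-image.py reads image_path and prompt from stdin
# and prints a numeric score as its last output line.
import pathlib
import subprocess

prompt = "a corgi running on the beach"  # hypothetical prompt
scores = {}
for img in sorted(pathlib.Path("samples").glob("*.png")):  # hypothetical dir
    out = subprocess.run(
        ["python", "inference-image.py", "--bf16", "--score"],
        input=f"{img}\n{prompt}\n",
        capture_output=True, text=True,
    )
    scores[img.name] = float(out.stdout.strip().splitlines()[-1])

best = max(scores, key=scores.get)
print(f"best candidate: {best} ({scores[best]:.3f})")
```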

Compare Two Videos

Directly compare the quality of two videos, leveraging the weights in VisionReward_Video/weight.json.

```bash
python inference-video.py --compare
# Input: video_path1 + video_path2 + prompt
# Output: better_video
```
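Pairwise comparison also makes it easy to estimate a win rate between two generators over a shared prompt set. The sketch below assumes --compare reads both video paths and the prompt from stdin and echoes the winning path; verify this against the script before relying on it. The file pairs are hypothetical.

```python
# Hedged sketch: win rate between two models via repeated --compare calls.
# Assumptions: the script reads video_path1, video_path2, and prompt from
# stdin and prints the better video's path. Pairs below are hypothetical.
import subprocess

pairs = [
    ("model_a/0001.mp4", "model_b/0001.mp4", "a corgi running on the beach"),
    ("model_a/0002.mp4", "model_b/0002.mp4", "timelapse of a blooming flower"),
]

wins_a = 0
for va, vb, prompt in pairs:
    out = subprocess.run(
        ["python", "inference-video.py", "--compare"],
        input=f"{va}\n{vb}\n{prompt}\n",
        capture_output=True, text=True,
    )
    wins_a += va in out.stdout

print(f"model A win rate: {wins_a / len(pairs):.0%}")
```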


📚 Citation

If you find VisionReward helpful, please cite us:

```bibtex
@misc{xu2024visionrewardfinegrainedmultidimensionalhuman,
      title={VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation},
      author={Jiazheng Xu and Yu Huang and Jiale Cheng and Yuanming Yang and Jiajun Xu and Yuan Wang and Wenbo Duan and Shen Yang and Qunlin Jin and Shurun Li and Jiayan Teng and Zhuoyi Yang and Wendi Zheng and Xiao Liu and Ming Ding and Xiaohan Zhang and Xiaotao Gu and Shiyu Huang and Minlie Huang and Jie Tang and Yuxiao Dong},
      year={2024},
      eprint={2412.21059},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.21059},
}
```
