This repository contains a collection of resources and papers on Text-to-Image evaluation.
- What You See is What You Read? Improving Text-Image Alignment Evaluation (Jul., 2023)
- Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation (Jul., 2023)
- Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Feedback (Jul., 2023)
- T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation (Jul., 2023)
- Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis (Jun., 2023)
- Visual Programming for Text-to-Image Generation and Evaluation (May, 2023)
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation (May, 2023)
- X-IQE: eXplainable Image Quality Evaluation for Text-to-Image Generation with Visual Large Language Models (May, 2023)
- Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation (May, 2023)
- ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation (Apr., 2023)
- Better Aligning Text-to-Image Models with Human Preference (Mar., 2023)
- TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering (Mar., 2023)
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Transformers (Feb., 2022)
IS (Inception Score)
- Summary: IS evaluates the quality and diversity of generated images.
- Implementation: PyTorch
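The score can be sketched in a few lines. This is a minimal illustration of the IS formula, IS = exp(E_x[KL(p(y|x) || p(y))]), operating on hypothetical classifier probabilities; a real implementation would first run the generated images through a pre-trained Inception network to obtain p(y|x).

```python
import math

def inception_score(probs):
    """Inception Score from per-image class probabilities p(y|x).

    probs: list of probability distributions (each sums to 1), standing in
    for the softmax outputs of a pre-trained Inception classifier.
    IS = exp( mean_x KL( p(y|x) || p(y) ) ).
    """
    n = len(probs)
    k = len(probs[0])
    # Marginal class distribution p(y), averaged over all images.
    marginal = [sum(p[j] for p in probs) / n for j in range(k)]
    # Mean KL divergence between each conditional and the marginal.
    kl = 0.0
    for p in probs:
        kl += sum(pj * math.log(pj / mj)
                  for pj, mj in zip(p, marginal) if pj > 0)
    return math.exp(kl / n)

# Confident, diverse predictions score high; uniform predictions score 1.
diverse = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
uniform = [[1 / 3, 1 / 3, 1 / 3]] * 3
print(inception_score(diverse) > inception_score(uniform))  # True
```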
FID (Fréchet Inception Distance)
- Summary: FID measures the quality of generated images by comparing the distribution of generated images to that of real images in the feature space of a pre-trained Inception network.
- Implementation: PyTorch
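The Fréchet distance between the two feature distributions has a closed form for Gaussians: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). The sketch below assumes diagonal covariances so the matrix square root becomes elementwise; real FID estimates full-covariance Gaussians over Inception features (typically 2048-dimensional).

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances.

    General FID: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)).
    With diagonal covariances each dimension contributes
    (m1 - m2)^2 + v1 + v2 - 2 * sqrt(v1 * v2).
    Inputs here are hypothetical feature statistics; a real pipeline
    computes them from Inception activations of real/generated images.
    """
    return sum(
        (m1 - m2) ** 2 + v1 + v2 - 2.0 * math.sqrt(v1 * v2)
        for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2)
    )

# Identical distributions give 0; a shifted mean gives a positive distance.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [3.0, 4.0], [1.0, 1.0]))  # 25.0
```

Lower FID is better: it drops to zero only when the two Gaussian fits coincide.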
CLIP Score
- Summary: CLIP Score measures the consistency between the input text and the generated image.
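At its core the score is a (rescaled) cosine similarity between CLIP's text and image embeddings. The sketch below takes hypothetical embedding vectors as input rather than calling the CLIP model itself; the rescaling weight w = 2.5 follows the convention from the CLIPScore paper.

```python
import math

def clip_score(text_emb, image_emb, w=2.5):
    """CLIPScore-style similarity: w * max(cos(text, image), 0).

    text_emb / image_emb stand in for outputs of CLIP's text and image
    encoders (hypothetical vectors here; a real pipeline would embed the
    caption and the generated image with a pre-trained CLIP model).
    """
    dot = sum(t * i for t, i in zip(text_emb, image_emb))
    norm_t = math.sqrt(sum(t * t for t in text_emb))
    norm_i = math.sqrt(sum(i * i for i in image_emb))
    return w * max(dot / (norm_t * norm_i), 0.0)

# A well-aligned text/image pair scores higher than a mismatched one.
aligned = clip_score([1.0, 0.0], [0.9, 0.1])
mismatched = clip_score([1.0, 0.0], [0.0, 1.0])
print(aligned > mismatched)  # True
```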
BLIP Score
- Summary: BLIP Score also measures the consistency between the input text and the generated image, similar to CLIP Score but computed with the BLIP vision-language model.
Feel free to Fork this repository and contribute via Pull Requests. If you have any questions, please open an Issue.