awesome-t2i-eval

Awesome License: MIT Made With Love

This repository contains a collection of resources and papers on Text-to-Image evaluation.

Table of Contents

Papers

Metrics

  • IS (Inception Score) arXiv

    • Summary: IS evaluates the quality and diversity of generated images using the class predictions of a pre-trained Inception network: realistic images yield confident (low-entropy) predictions, while a diverse set spreads its predictions across many classes. Higher is better; see the sketch after this list.
    • Implementation: PyTorch
  • FID (Fréchet Inception Distance) arXiv

    • Summary: FID measures the quality of generated images by comparing the distribution of generated images to that of real images in the feature space of a pre-trained Inception network. Lower is better; see the sketch after this list.
    • Implementation: PyTorch
  • CLIP Score arXiv

    • Summary: CLIP Score measures text-image alignment as the cosine similarity between CLIP embeddings of the prompt and the generated image. See the sketch after this list.
  • BLIP Score arXiv

    • Summary: BLIP Score measures text-image consistency like CLIP Score, but scores the pair with the BLIP vision-language model (e.g., its image-text matching head) instead of CLIP embeddings.
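
The sketches below are minimal, self-contained examples, not the official implementations linked above. This first one computes IS and FID with the torchmetrics package (assuming torchmetrics and torch-fidelity are installed); the random uint8 tensors are placeholders for batches of real and generated images.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

# Placeholder batches; replace with decoded real/generated images (uint8, NCHW).
real = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

# Inception Score: needs only the generated images.
inception = InceptionScore()
inception.update(fake)
is_mean, is_std = inception.compute()

# FID: compares Inception feature statistics of real vs. generated images.
fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)

print(f"IS: {is_mean:.2f} +/- {is_std:.2f}  FID: {fid.compute():.2f}")
```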
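
And a sketch of CLIP Score as the cosine similarity between CLIP text and image embeddings, using Hugging Face transformers; the model name, image path, and prompt here are illustrative.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated.png")  # hypothetical path to a generated image
prompt = "a photo of an astronaut riding a horse"

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])

# Cosine similarity of the two embeddings; reported scores are often scaled by 100.
score = torch.nn.functional.cosine_similarity(image_emb, text_emb).item()
print(f"CLIP Score: {100 * max(score, 0.0):.2f}")
```

BLIP Score can be computed analogously by swapping in a BLIP model for CLIP.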

Feel free to fork this repository and contribute via pull requests. If you have any questions, please open an issue.
