- @Iceclearwjy - Apple - Singapore - https://iceclear.github.io
IQA
An emotion extraction system for images that predicts the emotion a viewer will feel when looking at an image, representing it in a two-dimensional arousal-valence space.
👁️ 🖼️ 🔥PyTorch Toolbox for Image Quality Assessment, including PSNR, SSIM, LPIPS, FID, NIQE, NRQM(Ma), MUSIQ, TOPIQ, NIMA, DBCNN, BRISQUE, PI and more...
Source code for the CVPR'20 paper "Blindly Assess Image Quality in the Wild Guided by A Self-Adaptive Hyper Network"
Code for VCRNet: Visual Compensation Restoration Network for No-Reference Image Quality Assessment
[Unofficial] CVPR 2014: Convolutional neural networks for no-reference image quality assessment
Fast and differentiable MS-SSIM and SSIM for pytorch.
IQA: Deep Image Structure and Texture Similarity Metric
Official implementation for "CONVIQT: Contrastive Video Quality Estimator"
Code for the paper "Understanding Aesthetics with Language: A Photo Critique Dataset for Aesthetic Assessment"
[CVPRW oral 2022] MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
[TMLR 2023] Featured article (spotlight 🌟, top 0.01% of accepted papers). This study systematically examines the robustness of both traditional and learned perceptual similarity metrics.
LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation
[CVPR2023] Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective
A curated list of papers and resources for text-to-image evaluation.
[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
A one-stop library to standardize the inference and evaluation of all the conditional image generation models. [ICLR 2024]
A comprehensive collection of IQA papers
[NeurIPS2022] Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop
mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections. (EMNLP 2022)
[CVPR 2025] Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution
RichHF-18K dataset contains rich human feedback labels collected for our CVPR'24 paper: https://arxiv.org/pdf/2312.10240, along with the file names of the associated labeled images (no URLs or images included).
Q-Insight is open-sourced at https://github.com/bytedance/Q-Insight. This repository will not receive further updates.
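Several of the toolboxes above implement full-reference metrics such as PSNR and SSIM. As a minimal illustration of what the simplest of these metrics computes, here is a plain-Python sketch of PSNR; the function name and interface are my own for illustration and are not taken from any of the listed repositories:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    PSNR = 10 * log10(MAX^2 / MSE), where MSE is the mean squared error
    between the reference and test signals. Higher is better; identical
    inputs give infinity.
    """
    if len(ref) != len(test) or not ref:
        raise ValueError("inputs must be non-empty and equal length")
    # Mean squared error over all pixels
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # perfect reconstruction
    return 10 * math.log10(max_val ** 2 / mse)
```

In practice the repositories above compute this (and far richer learned metrics) on image tensors rather than flat lists, but the core quantity is the same.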