[TMM-2024] PyTorch implementation of "Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics".

Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics

IEEE Transactions on Multimedia (TMM)

Zhangkai Ni^1, Yue Liu^2, Keyan Ding^3, Wenhan Yang^4, Hanli Wang^1, Shiqi Wang^2

^1 Tongji University, ^2 City University of Hong Kong, ^3 Zhejiang University, ^4 Peng Cheng Laboratory

This repository provides the official PyTorch implementation for the paper “Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics”, IEEE Transactions on Multimedia (TMM). Paper

[Teaser figure]

About MDFS

Deep learning-based methods have significantly influenced the blind image quality assessment (BIQA) field; however, these methods often require training on large amounts of human rating data. In contrast, traditional knowledge-based methods are cost-effective to train but struggle to extract features aligned with human visual perception. To bridge these gaps, we propose integrating deep features from pre-trained visual models with a statistical analysis model into a Multi-scale Deep Feature Statistics (MDFS) model to achieve opinion-unaware BIQA (OU-BIQA), thereby eliminating the reliance on human rating data and significantly improving training efficiency. Specifically, we extract patch-wise multi-scale features from pre-trained vision models, which are subsequently fitted to a multivariate Gaussian (MVG) model. The final quality score is determined by quantifying the distance between the MVG model derived from the test image and the benchmark MVG model derived from a high-quality image set. A comprehensive series of experiments on various datasets shows that our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models. Furthermore, it shows improved generalizability across diverse target-specific BIQA tasks.
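The MVG-fitting and scoring steps described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: the helper names (`fit_mvg`, `mvg_distance`) are hypothetical, the feature extractor is replaced by random toy data, and the distance shown is a NIQE-style pooled-covariance Mahalanobis distance; see the paper for the exact metric and feature pipeline used by MDFS.

```python
import numpy as np

def fit_mvg(features):
    # Fit a multivariate Gaussian to patch-wise features.
    # features: (num_patches, feat_dim) array.
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def mvg_distance(mu1, cov1, mu2, cov2):
    # NIQE-style distance between two MVG models (illustrative form,
    # not necessarily the exact metric used in the MDFS paper).
    diff = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

# Toy example: stand-ins for benchmark (pristine) and test-image features.
rng = np.random.default_rng(0)
bench_feats = rng.normal(size=(500, 8))
test_feats = rng.normal(loc=0.5, scale=2.0, size=(500, 8))

mu_b, cov_b = fit_mvg(bench_feats)
mu_t, cov_t = fit_mvg(test_feats)
score = mvg_distance(mu_b, cov_b, mu_t, cov_t)  # larger = farther from pristine statistics
```

A larger distance from the benchmark MVG indicates lower predicted quality, which is what makes the approach opinion-unaware: no human ratings are needed, only statistics of a high-quality image set.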

Experimental Results

Quick Start

Requirements:

  • Python >= 3.6
  • PyTorch >= 1.0

Train:

  • Download the training data from here and place it in the data folder, then run the following command:
python train.py

Test:

  • Download the pre-trained model from here and place it in the same folder as test.py, then run the following command:
python test.py

Citation

If you find our work useful, please cite it as:

@article{ni2024opinion,
  title={Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics},
  author={Ni, Zhangkai and Liu, Yue and Ding, Keyan and Yang, Wenhan and Wang, Hanli and Wang, Shiqi},
  journal={IEEE Transactions on Multimedia},
  year={2024},
  publisher={IEEE}
}

Contact

Thanks for your attention! If you have any suggestions or questions, feel free to leave a message here or contact Dr. Zhangkai Ni (eezkni@gmail.com).

License

MIT License
