Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment

Abstract

Image super-resolution (SR) has been widely investigated in recent years. However, it is challenging to fairly estimate the performance of various SR methods, due to the lack of reliable and accurate criteria for perceptual quality. Existing SR image quality assessment (IQA) metrics usually concentrate on a specific kind of degradation without distinguishing visually sensitive areas, and therefore cannot adapt to the diverse degeneration situations of SR. In this paper, we focus on the textural and structural degradation of image SR, which plays a critical role in visual perception, and design a dual-stream network, dubbed TSNet, to jointly explore textural and structural information for quality prediction. By mimicking the human visual system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism that makes the visually sensitive areas more distinguishable and thus improves prediction accuracy. Feature normalization (F-Norm) is also developed to exploit the inherent spatial correlation of SR features and boost the representation capacity of the network. Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.

Code

Coming soon.
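Until the official code is released, the sketch below illustrates the dual-stream idea from the abstract in PyTorch: a texture stream and a structure stream, each with a simple spatial-attention block and a feature-normalization stand-in, fused into a single quality score. Every module name, channel width, and the particular normalization chosen here are assumptions for illustration only, not the authors' implementation.

# Unofficial sketch of the dual-stream design described in the abstract.
# Module names, channel widths, and the F-Norm stand-in are assumptions;
# the official TSNet code has not been released yet.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Reweights spatial positions so visually sensitive areas stand out."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)  # broadcast the (B, 1, H, W) attention map


class FeatureNorm(nn.Module):
    """Placeholder for F-Norm: learnable per-channel rescaling with a residual."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=True)

    def forward(self, x):
        return self.norm(x) + x  # residual keeps the original feature signal


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TSNetSketch(nn.Module):
    """Two parallel streams (texture / structure) fused into one quality score."""
    def __init__(self):
        super().__init__()
        self.texture_stream = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64),
            SpatialAttention(64), FeatureNorm(64),
        )
        self.structure_stream = nn.Sequential(
            conv_block(1, 32), conv_block(32, 64),   # e.g. fed with an edge map
            SpatialAttention(64), FeatureNorm(64),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),                        # predicted quality score
        )

    def forward(self, image, structure_map):
        t = self.texture_stream(image)
        s = self.structure_stream(structure_map)
        return self.head(torch.cat([t, s], dim=1))


if __name__ == "__main__":
    net = TSNetSketch()
    img = torch.rand(2, 3, 224, 224)      # SR image patch
    edges = torch.rand(2, 1, 224, 224)    # crude stand-in for a structure map
    print(net(img, edges).shape)          # torch.Size([2, 1])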

Citation

Y. Liu, Q. Jia, S. Wang, et al., "Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment," arXiv preprint arXiv:2205.13847, 2022.
