
HST

HST: Hierarchical Swin Transformer for Compressed Image Super-resolution

By Bingchen Li, Xin Li, et al.

Achieved fifth place in the AIM 2022 compressed image super-resolution challenge (low-quality track).

Accepted by the ECCV 2022 Workshops.



Abstract

Compressed image super-resolution has attracted great attention in recent years, where images are degraded by both compression artifacts and low resolution. Because of these complex hybrid distortions, it is hard to restore the distorted image through a simple combination of super-resolution and compression artifact removal. In this paper, we take a step forward and propose the Hierarchical Swin Transformer (HST) network to restore low-resolution compressed images, which jointly captures hierarchical feature representations and enhances the representation at each scale with a Swin transformer. Moreover, we find that pretraining on super-resolution (SR) tasks is vital for compressed image super-resolution. To explore the effects of different SR pretraining, we take commonly used SR tasks (e.g., bicubic and various real super-resolution simulations) as our pretraining tasks, and reveal that SR pretraining plays an irreplaceable role in compressed image super-resolution. With the cooperation of HST and pretraining, our method achieves fifth place in the AIM 2022 challenge on the low-quality compressed image super-resolution track, with a PSNR of 23.51 dB. Extensive experiments and ablation studies validate the effectiveness of our proposed methods.
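
The hierarchical design can be illustrated with a minimal sketch: extract shallow features, enhance them at several spatial scales, fuse the scales, and upsample to the target resolution. The PyTorch snippet below is only an illustrative assumption of that structure, not the released implementation; the class names, channel widths, and the residual convolution block standing in for the actual Swin transformer stages are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageBlock(nn.Module):
    """Stand-in for a Swin transformer stage (HST uses window-attention
    blocks as in SwinIR); a residual conv block keeps the sketch self-contained."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class HierarchicalSRSketch(nn.Module):
    """Minimal sketch of the hierarchical idea: enhance features at full,
    1/2 and 1/4 scales, fuse them, then upsample by the SR factor
    (x4 in the AIM 2022 track)."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.enhance = nn.ModuleList(StageBlock(channels) for _ in range(3))
        self.fuse = nn.Conv2d(channels * 3, channels, 1)
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        feat = self.head(x)
        outs = []
        for i, block in enumerate(self.enhance):
            s = 2 ** i
            f = F.avg_pool2d(feat, s) if s > 1 else feat   # downscale to this level
            f = block(f)                                   # enhance at this scale
            if s > 1:
                f = F.interpolate(f, size=feat.shape[-2:], mode="bilinear",
                                  align_corners=False)     # back to full resolution
            outs.append(f)
        fused = self.fuse(torch.cat(outs, dim=1))
        return self.tail(fused)

if __name__ == "__main__":
    lr = torch.randn(1, 3, 64, 64)            # a compressed low-resolution input
    sr = HierarchicalSRSketch()(lr)
    print(sr.shape)                           # torch.Size([1, 3, 256, 256])
```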

Usages

More details will be described progressively.

The checkpoints for HST are released:
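
Until detailed instructions are added, inference with a released checkpoint would presumably follow the usual PyTorch pattern sketched below. Every name in the snippet (the checkpoint filename, the wrapping key, and the model constructor) is a placeholder assumption, not the repository's documented interface.

```python
import torch

# Hypothetical inference sketch; filenames and keys below are placeholders.
state = torch.load("hst_x4.pth", map_location="cpu")   # downloaded checkpoint (placeholder name)
# Some released checkpoints wrap the weights under a key such as "params"; adjust as needed.
weights = state["params"] if isinstance(state, dict) and "params" in state else state

# model = HST(upscale=4)            # instantiate the HST network from this repository (placeholder)
# model.load_state_dict(weights)
# model.eval()
# lr = torch.rand(1, 3, 64, 64)     # compressed low-resolution input in [0, 1]
# with torch.no_grad():
#     sr = model(lr)                # expected shape for x4: (1, 3, 256, 256)
```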

Cite Us

Please cite us if this work is helpful to you.

@inproceedings{li2022hst,
   title={HST: Hierarchical Swin Transformer for Compressed Image Super-resolution},
   author={Li, Bingchen and Li, Xin and Lu, Yiting and Liu, Sen and Feng, Ruoyu and Chen, Zhibo},
   booktitle={Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
   year={2022}
}

The model is implemented based on the following works: MSGDN, SwinIR, and Swin Transformer.
