
Generalizing to Out-of-Sample Degradations via Model Reprogramming (IEEE TIP 2024)

Official implementation of ["Generalizing to Out-of-Sample Degradations via Model Reprogramming"], Runhua Jiang, YaHong Han. DOI: 10.1109/TIP.2024.3378181

We will update links once arXiv or journal versions are available.

Existing image restoration models are typically designed for specific tasks and struggle to generalize to out-of-sample degradations not encountered during training. While zero-shot methods can address this limitation by fine-tuning model parameters on testing samples, their effectiveness relies on predefined natural priors and physical models of specific degradations. However, determining which out-of-sample degradations will be faced in real-world scenarios is rarely practical. As a result, it is more desirable to train restoration models with inherent generalization ability. To this end, this work introduces the Out-of-Sample Restoration (OSR) task, which aims to develop restoration models capable of handling out-of-sample degradations. An intuitive solution is to pre-translate out-of-sample degradations into degradations already known to restoration models. However, translating them directly in the image space leads to complex image-to-image translation problems. To address this issue, we propose a model reprogramming framework that translates out-of-sample degradations via quantum mechanics and wave functions. Specifically, input images are decoupled into wave functions comprising amplitude and phase terms. Out-of-sample degradations are translated by adapting the phase term, while the image content is maintained and enhanced in the amplitude term. By taking these two terms as inputs, restoration models are able to handle out-of-sample degradations without fine-tuning. Through extensive experiments across multiple evaluation cases, we demonstrate the effectiveness and flexibility of the proposed framework.

We will release the MindSpore training code and the PyTorch code as soon as possible.

1. Introduction

The introduced out-of-sample restoration task aims to develop models capable of handling unknown degradations. It extends previous restoration research by paying more attention to cross-degradation generalization. Specifically, task-specific methods focus on establishing an image-to-image translation network for a particular kind of degradation, while task-agnostic methods combine various types of degraded images to train a single restoration network.

As the results above show, while the task-specific network Dehaze effectively eliminates haze degradation, it struggles to address out-of-sample degradations such as blur and rain. Furthermore, the task-agnostic network SR+dehaze, trained on datasets covering super-resolution and dehazing, fails to exhibit improved performance on rainy examples. Although zero-shot methods can address this issue by fine-tuning restoration models on testing samples, they require prior knowledge of degradation categories. In contrast, the OSR task aims to learn generalizable models from a limited set of training samples, making it distinct from and complementary to previous research.

2. Method

First of all, the input transform module is designed following quantum mechanics, where entities are represented by wave functions comprising both amplitude and phase components. The amplitude corresponds to a real-valued feature that represents the maximum intensity of the wave, while the phase term modulates intensity patterns by indicating the location of each point. This design decouples input images into continuous vectors of content and style, with the style representation aligned to the recognizable degradations of the reprogrammed model. Second, the reprogrammed model takes these two components as inputs, mapping the style representation and enhancing the content details. Since no existing methods study the problem of reprogramming restoration models, two kinds of restoration models, i.e., randomly initialized and specifically trained, are explored in subsequent experiments. Finally, after the two components are processed, the output transform function reassembles them into wave functions and remaps them to the original image space to yield clear outputs.
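
Below is a minimal PyTorch sketch of how such an input transform could look, assuming the common wave-function formulation z = amplitude * exp(i * phase) with separate convolutional branches for the two terms; the module name WaveInputTransform, the layer widths, and the cosine/sine recombination are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn


class WaveInputTransform(nn.Module):
    """Decouple an image into amplitude (content) and phase (style) terms."""

    def __init__(self, in_channels=3, feat_channels=32):
        super().__init__()
        # Amplitude branch: real-valued features carrying the image content.
        self.amplitude = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
        )
        # Phase branch: modulates intensity patterns; this is the term that is
        # adapted so that out-of-sample degradations resemble ones the
        # reprogrammed restoration model already recognizes.
        self.phase = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
        )

    def forward(self, x):
        amp = self.amplitude(x)
        pha = self.phase(x)
        # Recombine the two terms as the real and imaginary parts of a wave;
        # these are then fed to the reprogrammed restoration model.
        real = amp * torch.cos(pha)
        imag = amp * torch.sin(pha)
        return real, imag


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)       # dummy degraded input
    real, imag = WaveInputTransform()(x)
    print(real.shape, imag.shape)       # both torch.Size([1, 32, 64, 64])

In such a setup only the input and output transforms would be trained, while the reprogrammed restoration model stays fixed, which is what lets it handle out-of-sample degradations without fine-tuning.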

3. Usage

3.1 Data

The datasets used in the paper are available at the following links. Please download and place them according to the instructions for each task.

3.2 Dependencies

  • Python 3.8.15
  • PyTorch 1.9.0
  • MindSpore

3.3 Train

To Do.

3.4 Test

We first release the testing code, based on the MindSpore framework.

python test.py

Citation

If you find our code or paper useful, please consider citing:
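
A BibTeX entry assembled from the title, authors, venue, and DOI given above (volume and page numbers are omitted as they are not listed here):

@article{jiang2024generalizing,
  title   = {Generalizing to Out-of-Sample Degradations via Model Reprogramming},
  author  = {Jiang, Runhua and Han, Yahong},
  journal = {IEEE Transactions on Image Processing},
  year    = {2024},
  doi     = {10.1109/TIP.2024.3378181}
}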


Acknowledgments

The code is built on GridDehazeNet and MindSpore. Many wonderful studies are referenced in our experiments; we thank the authors very much for sharing their work.
