
About using your research for super-resolution #12

Closed
CR7forMadrid opened this issue Oct 20, 2022 · 2 comments

Comments

@CR7forMadrid

Hello, your innovative work is great and has contributed a lot to the field of image restoration. I work on super-resolution, where training also crops images into patches; for example, the DIV2K dataset is cropped to 192×192. However, the LR images in the test set are usually smaller than the cropped HR patches, which means the training patches are larger than the whole images I will use for testing. Is your work also effective in this case?
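For concreteness, this is the kind of paired crop I mean (an illustrative sketch, not code from any particular repository; at ×4 scale, a 192×192 HR patch corresponds to only a 48×48 LR input):

```python
import random

def paired_random_crop(lr, hr, hr_patch=192, scale=4):
    """Crop an aligned LR/HR training pair (illustrative helper).

    lr: (h, w, c) array; hr: (h * scale, w * scale, c) array.
    With hr_patch=192 at x4 scale, the LR crop is only 48x48, while an
    LR test image is often smaller than the 192x192 HR crop size.
    """
    lr_patch = hr_patch // scale
    h, w = lr.shape[:2]
    y = random.randint(0, h - lr_patch)
    x = random.randint(0, w - lr_patch)
    lr_crop = lr[y:y + lr_patch, x:x + lr_patch]
    hr_crop = hr[y * scale:(y + lr_patch) * scale,
                 x * scale:(x + lr_patch) * scale]
    return lr_crop, hr_crop
```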

@achusky (Collaborator) commented Oct 20, 2022

Thank you for your interest.

In my opinion, the answer is yes, provided your models:

  1. use global operators (e.g., global average pooling in channel attention), and
  2. are trained on cropped patches that are much smaller than the whole images used for inference.

In that case, models may face a train-test inconsistency issue, and we found our work to be effective across various tasks, including super-resolution. For example, our TLSC* (also known as TLC) improves the performance of NAFSSR by 0.05–0.12 dB on stereo image super-resolution tasks.
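To make this concrete, here is a minimal sketch of the kind of channel attention involved (PyTorch; the module and its `local_at_test` switch are illustrative assumptions, not code from our repository). Training pools statistics over a 192×192 patch, so pooling over the whole of a much larger test image shifts those statistics; pooling over a training-patch-sized window at test time, the general idea behind TLC, keeps them consistent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative).

    train_size: side length of the training patches (e.g., 192).
    local_at_test: if True, inference pools over a train_size window
    instead of the whole image, following the general idea of TLC.
    """
    def __init__(self, channels, reduction=16, train_size=192,
                 local_at_test=True):
        super().__init__()
        self.train_size = train_size
        self.local_at_test = local_at_test
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        if self.training or not self.local_at_test:
            # Global average pooling: the statistics depend on the full
            # input, so they shift when test images exceed the patch size.
            return x * self.fc(F.adaptive_avg_pool2d(x, 1))
        # Local statistics: pool over windows capped at the training
        # patch size, keeping train and test statistics consistent.
        h, w = x.shape[-2:]
        k = (min(h, self.train_size), min(w, self.train_size))
        stats = F.avg_pool2d(x, kernel_size=k, stride=1)
        attn = F.interpolate(self.fc(stats), size=(h, w), mode="nearest")
        return x * attn
```

Note that when the input is no larger than `train_size`, the local path reduces exactly to the global one, so behavior on training-sized patches is unchanged.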

Interestingly, for memory-efficient inference, some works (e.g., RCAN) crop images for inference, which also inadvertently alleviates the train-test inconsistency issue. We found that inference on cropped patches (instead of whole images) has a positive effect on performance (e.g., PSNR) on the Urban test set, which shows that train-test inconsistency hurts model performance. However, cropping may introduce other side effects (e.g., artifacts), as discussed in our paper. You can experiment to choose the inference method that suits your models and datasets.

*TLSC is the old name for our work.
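To illustrate the cropped-inference approach above: overlapping tiles with averaged overlaps are a common way to soften the seam artifacts just mentioned. A rough sketch follows (the `tiled_inference` helper is hypothetical, not from RCAN or our repository, and assumes a restoration model whose output matches the input resolution):

```python
import torch

@torch.no_grad()
def tiled_inference(model, img, tile=192, overlap=32):
    """Run `model` over overlapping tiles and average the overlaps.

    img: (1, C, H, W) tensor. Overlap-and-average softens, but does not
    fully remove, the seam artifacts that non-overlapping crops produce.
    """
    _, _, h, w = img.shape
    stride = tile - overlap
    ys = list(range(0, max(h - tile, 0) + 1, stride))
    xs = list(range(0, max(w - tile, 0) + 1, stride))
    # Ensure the last tiles reach the bottom/right borders exactly.
    if ys[-1] + tile < h:
        ys.append(h - tile)
    if xs[-1] + tile < w:
        xs.append(w - tile)
    out = torch.zeros_like(img)
    weight = torch.zeros_like(img)
    for y in ys:
        for x in xs:
            patch = img[..., y:y + tile, x:x + tile]
            out[..., y:y + tile, x:x + tile] += model(patch)
            weight[..., y:y + tile, x:x + tile] += 1
    return out / weight
```

For a super-resolution network, the output coordinates and accumulators would additionally need scaling by the upsampling factor.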

@CR7forMadrid (Author)

Thank you for your patient answer. It reminds me that some time ago, when I was modifying a small model, I added the crop operation directly into the network architecture, so training and testing used patches of the same size and the outputs were stitched together afterwards. The final image had obvious stitching traces, i.e., artifacts, which led to a lower PSNR, although the NIQE (lower is better) also decreased. Your work has pointed me in the right direction. Thanks, and have a nice life!
