YuchuanTian changed the title from "Problem with Urban100 & Manga109 replication" to "Problem with SwinIR Urban100 & Manga109 replication" on Jan 16, 2023.
May I ask which validation dataset you used when training exclusively on DIV2K for replicating the paper at a 4x scale factor, and how many epochs were used for training?
While replicating SwinIR classical SR x3 & x4 on the DIV2K dataset, I encountered a drop on the Urban100 & Manga109 test sets compared to the results reported in the paper:
All training is done with the original DIV2K dataset, without LMDB or patch preprocessing.
SR x3
Only the following configs in options/swinir/train_swinir_sr_classical.json are changed:
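Roughly, the edits are of this form (only the changed keys are shown, everything else stays at the defaults in the file; the dataroot paths below are placeholders for the local DIV2K and test-set folders, and H_size is the HR patch size, i.e. the 48-pixel LR patch times the scale):

```json
{
  "scale": 3,
  "datasets": {
    "train": {
      "dataroot_H": "trainsets/DIV2K_train_HR",
      "dataroot_L": null,
      "H_size": 144
    },
    "test": {
      "dataroot_H": "testsets/Set5/HR",
      "dataroot_L": "testsets/Set5/LR_bicubic/X3"
    }
  },
  "netG": {
    "upscale": 3
  }
}
```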
Test script parsers are:
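The call follows the stock main_test_swinir.py arguments; the model checkpoint and test-set folder paths are placeholders:

```bash
# paths below are placeholders for the trained checkpoint and local test sets
python main_test_swinir.py \
  --task classical_sr \
  --scale 3 \
  --training_patch_size 48 \
  --model_path superresolution/swinir_sr_classical_patch48_x3/models/500000_E.pth \
  --folder_lq testsets/Urban100/LR_bicubic/X3 \
  --folder_gt testsets/Urban100/HR
```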
The results are:
SR x4
Only the following configs in options/swinir/train_swinir_sr_classical.json are changed:
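Same shape as the x3 edits above, with only the scale-dependent keys differing (paths again placeholders, other keys at their defaults):

```json
{
  "scale": 4,
  "datasets": {
    "train": {
      "H_size": 192
    },
    "test": {
      "dataroot_L": "testsets/Set5/LR_bicubic/X4"
    }
  },
  "netG": {
    "upscale": 4
  }
}
```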
Test script parsers are:
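Again the stock main_test_swinir.py arguments, with only the scale and checkpoint path changed (placeholders):

```bash
# paths below are placeholders for the trained checkpoint and local test sets
python main_test_swinir.py \
  --task classical_sr \
  --scale 4 \
  --training_patch_size 48 \
  --model_path superresolution/swinir_sr_classical_patch48_x4/models/500000_E.pth \
  --folder_lq testsets/Urban100/LR_bicubic/X4 \
  --folder_gt testsets/Urban100/HR
```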
The results are:
This is even stranger given that I could successfully replicate the DF2K experiments with almost the same configs (but with training patch size 64).
What might be the problem? Thank you very much!
Also, thanks for open-sourcing this wonderful repo!