
downsample function and loss function normalization at training stage #16

HaoDot opened this issue Jan 7, 2022 · 1 comment
HaoDot commented Jan 7, 2022

Hi @chosj95, MIMO-UNet is fantastic work, thanks for sharing!
However, a couple of things are confusing me:

  • During training, the blurred inputs at the smaller scales are generated by nearest-neighbor interpolation, but the corresponding sharp supervision images at those scales are generated by bilinear interpolation rather than nearest-neighbor again. Intuitively, the same interpolation method should be used for both.
  • In the paper, Equations 7 and 8 include denominators that normalize the loss terms, but the loss code omits them. I suspect the model might perform better with that normalization in place.

I am eagerly waiting for your explanation. Thanks a lot!
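To make the first point concrete, here is a minimal NumPy sketch (not the repository's code, which presumably calls a framework resize such as `torch.nn.functional.interpolate`) showing that nearest-neighbor and bilinear downsampling of the same image yield different pyramids, so the blurred input and its sharp supervision would be misaligned in value:

```python
import numpy as np

def downsample_nearest(img, factor=2):
    # Nearest-neighbor downsampling: keep one source pixel per output pixel.
    return img[::factor, ::factor]

def downsample_bilinear(img, factor=2):
    # Bilinear downsampling by an integer factor reduces to averaging each
    # factor x factor block of source pixels.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16, dtype=np.float64).reshape(4, 4)
near = downsample_nearest(img)   # [[0, 2], [8, 10]]
bili = downsample_bilinear(img)  # [[2.5, 4.5], [10.5, 12.5]]
gap = np.abs(near - bili).max()  # 2.5: the two pyramids disagree
```

Every output pixel differs between the two methods here, which is exactly the mismatch the question raises.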
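On the second point, the denominator in Equations 7 and 8 amounts to averaging the per-pixel error instead of summing it; a hypothetical sketch with made-up tensors (the difference is only a constant rescaling of the loss and its gradient):

```python
import numpy as np

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.5, 2.0], [2.0, 4.0]])

abs_err = np.abs(pred - target)
# Without the denominator: sum of absolute differences.
l1_sum = abs_err.sum()    # 1.5
# With the denominator t_k = number of elements: the mean, as in Eq. 7.
l1_mean = abs_err.mean()  # 0.375 = 1.5 / 4
```

Since the normalized loss is just the unnormalized one divided by a constant, it changes only the effective learning rate, not which parameters minimize the loss.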


HaoDot commented Jan 7, 2022

Hi @chosj95, #6 has resolved my second question, so only the first one remains. Waiting for your reply, thanks!
