
About the input size in training and testing time #13

Closed
SuperCrystal opened this issue May 27, 2021 · 3 comments

Comments

@SuperCrystal

The paper says that during training the input size is set to 384 x 384 for images from all databases, while at test time the network runs inference at the original size. What if the test size is also 384 x 384? Would this affect the performance?

@zwx8981
Owner

zwx8981 commented May 30, 2021

@SuperCrystal Hi, sorry for the late response. Empirically, testing images at their original size usually delivers better performance than cropping a 384 x 384 patch in our experiments.

@SuperCrystal
Author

@zwx8981
Thanks a lot for your response! Another question: if std_modeling is False, then p = y_diff = y1 - y2 in the code you provided. When I use it this way, the loss simply does not converge. However, if a sigmoid function is applied first, it works correctly. I wonder whether you have also tested this. It is a really impressive work anyway :)
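A plausible explanation for the non-convergence: a fidelity-style loss between paired-comparison probabilities needs p to lie in [0, 1], but the raw difference y1 - y2 is unbounded, so the square roots inside the loss become undefined or meaningless. The sketch below illustrates this, assuming the fidelity loss form 1 - (sqrt(p*g) + sqrt((1-p)(1-g))); the function names and the exact loss used in the repository are assumptions, not taken from its code.

```python
import math

def fidelity_loss(p, g, eps=1e-8):
    # Fidelity loss between a predicted probability p and a binary
    # ground-truth preference g in {0, 1}. Assumes p is in [0, 1];
    # eps guards the square root at the boundaries.
    return 1.0 - (math.sqrt(p * g + eps)
                  + math.sqrt((1.0 - p) * (1.0 - g) + eps))

def prob_from_scores(y1, y2, use_sigmoid=True):
    # Map a pair of quality scores to the probability that image 1
    # is preferred over image 2 (hypothetical helper, not repo code).
    diff = y1 - y2
    if use_sigmoid:
        # Bounded in (0, 1): a valid probability for the loss above.
        return 1.0 / (1.0 + math.exp(-diff))
    # Raw difference: unbounded, so (1 - p) can go negative and the
    # square root in the loss is no longer well defined.
    return diff

# With sigmoid, the "probability" stays valid even for large score gaps.
p_ok = prob_from_scores(2.0, -1.0)            # in (0, 1)
# Without it, the same pair yields 3.0, outside [0, 1].
p_bad = prob_from_scores(2.0, -1.0, use_sigmoid=False)
```

With std_modeling enabled, the score difference is instead normalized by its standard deviation and passed through a Gaussian CDF, which serves the same bounding role as the sigmoid here.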

@xiongxiongtiao

xiongxiongtiao commented Jun 2, 2021 via email

@zwx8981 zwx8981 closed this as completed Jun 10, 2021
3 participants