
Over-fitting with data generation #42

Open
diff7 opened this issue Aug 11, 2019 · 4 comments
diff7 commented Aug 11, 2019

Hi,
thank you for sharing this code. It is rather a general question than an issue.

Don't you think you risk over-fitting if you randomly generate masks on the same images? Over subsequent iterations, the net will eventually have seen the whole of every image.

I am just curious how you dealt with this problem.


burhr2 commented May 11, 2020

Hi, it's a good question; it will be great to see other people's views.
But I am also thinking: if you use the same mask for each image during training, doesn't that also expose the network to overfitting and poor generalization, since it limits learning to only the masks that were used?

To your question: training with a different mask for each image in each iteration is a form of data augmentation, so the network generalizes better when tested on new images. I have used this option and my testing performance is around 0.95 SSIM.
train: 800+ images
val: 100+ images
test: 200 images

A comparison and a report on performance would help us understand it better.
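The random-mask-per-iteration idea described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the repository's actual data pipeline; the function name `random_rect_mask` and the rectangular-hole shape are assumptions (many inpainting setups use free-form strokes instead).

```python
import numpy as np

def random_rect_mask(h, w, max_frac=0.5, rng=None):
    """Return a binary hole mask (1 = hole) with one random rectangle.

    Drawing a fresh mask for each image on every iteration acts as
    data augmentation: the network rarely sees the same
    (image, hole) pair twice.
    """
    if rng is None:
        rng = np.random.default_rng()
    mh = rng.integers(1, int(h * max_frac) + 1)   # hole height
    mw = rng.integers(1, int(w * max_frac) + 1)   # hole width
    top = rng.integers(0, h - mh + 1)
    left = rng.integers(0, w - mw + 1)
    mask = np.zeros((h, w), dtype=np.float32)
    mask[top:top + mh, left:left + mw] = 1.0
    return mask

# Each call over the same image yields a different corruption:
img = np.ones((64, 64), dtype=np.float32)
corrupted = img * (1.0 - random_rect_mask(64, 64))
```

In a training loop this would be called inside the dataset's `__getitem__` (or the batch collation step), so the mask changes every epoch even though the image set is fixed.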


diff7 commented May 11, 2020

Recently I was dealing with another related problem. Now I am starting to think that one mask per image per epoch, plus some random augmentation, should be fine.

I can close the issue, or if you prefer we can wait and see other opinions; I am curious what people say.


burhr2 commented May 11, 2020

> Recently I was dealing with another related problem. Now I am starting to think that one mask per image per epoch, plus some random augmentation, should be fine.
>
> I can close the issue, or if you prefer we can wait and see other opinions; I am curious what people say.

Let's wait and see what others say


sfwyly commented Oct 2, 2020

Hi,
In my experiment, training with a random mask + a random image made convergence difficult. Maybe I did not train long enough, but I was considering whether one batch of images + one random mask per training step could strike a balance between fitting quality and training time.
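The batch-level compromise suggested here can be sketched as drawing one random mask per training step and sharing it across the whole batch. This is a minimal NumPy sketch under that assumption; `shared_mask_step` is a hypothetical helper, not code from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mask_step(batch, max_frac=0.5):
    """Apply one random rectangular mask, shared by all images in a batch.

    `batch` has shape (B, H, W). One mask per step is cheaper than a
    fresh mask per image, yet the mask still varies across steps, which
    may ease the convergence issue mentioned above.
    """
    b, h, w = batch.shape
    mh = rng.integers(1, int(h * max_frac) + 1)
    mw = rng.integers(1, int(w * max_frac) + 1)
    top = rng.integers(0, h - mh + 1)
    left = rng.integers(0, w - mw + 1)
    mask = np.zeros((h, w), dtype=np.float32)
    mask[top:top + mh, left:left + mw] = 1.0
    # Broadcast the (H, W) mask over the batch dimension.
    return batch * (1.0 - mask)[None, :, :], mask

batch = np.random.default_rng(1).random((8, 64, 64), dtype=np.float32)
masked, mask = shared_mask_step(batch)
```

Whether this actually balances fit quality against training time would need to be checked empirically, e.g. by comparing SSIM on a held-out set against the per-image-mask setup.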
