Initial performance very poor after training on P3M dataset. #202

yashsandansing opened this issue Oct 12, 2022 · 0 comments
I have tried training MODNet on the publicly available 30k-image dataset, and the performance was poor because the dataset was unclean. I have since switched to the P3M-10k dataset, which provides 10k images with good-quality segmentation. I am now fine-tuning the model using this code, training on top of the existing modnet_photographic_portrait_matting.ckpt with backbone_pretrained = True.
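
For reference, my training loop roughly follows the sketch below (simplified rather than my exact code; it assumes the repo's supervised_training_iter from src/trainer.py, and p3m_dataloader plus the checkpoint path are placeholders):

```python
import torch
from src.models.modnet import MODNet
from src.trainer import supervised_training_iter

# Build the model and wrap it in DataParallel, as in the repo's training example.
modnet = torch.nn.DataParallel(MODNet(backbone_pretrained=True)).cuda()
# Load the released portrait matting checkpoint (path is a placeholder).
modnet.load_state_dict(torch.load('pretrained/modnet_photographic_portrait_matting.ckpt'))

epochs = 40
optimizer = torch.optim.SGD(modnet.parameters(), lr=0.01, momentum=0.9)
lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[int(0.25 * epochs * i) for i in range(1, 5)], gamma=0.1)

for epoch in range(epochs):
    # p3m_dataloader is a placeholder for my P3M-10k DataLoader; each batch
    # yields (image, trimap, gt_matte) tensors after augmentation.
    for image, trimap, gt_matte in p3m_dataloader:
        semantic_loss, detail_loss, matte_loss = supervised_training_iter(
            modnet, optimizer, image.cuda(), trimap.cuda(), gt_matte.cuda())
    lr_scheduler.step()
```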

My losses, however, are drastically high. I have seen losses reported as low as this, but mine are:

Semantic Loss: 9.234
Detail Loss: 0.334
Matte Loss: 6.392

and they seem to hover around these values. Could you please check the code and see whether there is a mistake in my data preparation/augmentation, or whether my training code is wildly incorrect?
