Trying to reproduce, but extremely low performance #2

Closed
thanhnt-2658 opened this issue Oct 4, 2022 · 16 comments

Comments

@thanhnt-2658

I'm trying to reproduce the reported results. I use the default hyper-parameters in train_mms.py. I wonder if anything needs to be changed, since I got extremely low results.

Train log
2022-10-03 20:00:16,393 2022-10-03 20:00:16.393250 Epoch [099/100], total_loss : 2.3719
2022-10-03 20:00:16,394 Train loss: 2.371871218389394
2022-10-03 20:00:28,635 Validation dice coeff model 1: 0.38830975438087945
2022-10-03 20:00:28,636 Validation dice coeff model 1: 0.3108561814208858
2022-10-03 20:00:28,637 current best dice coef model 1 0.47562526039001296, model 2 0.3603631227240354
2022-10-03 20:00:28,637 current patience :101

Test log
2022-10-03 20:11:16,730 logs/kvasir/test/saved_images_1/
2022-10-03 20:11:39,722 Model 1 F1 score : 0.18228444612672604
2022-10-03 20:11:39,838 Model 1 MAE : 0.18325799916872645
2022-10-03 20:11:39,839 logs/kvasir/test/saved_images_2/
2022-10-03 20:12:06,201 Model 2 F1 score : 0.15014741534272816
2022-10-03 20:12:06,324 Model 2 MAE : 0.1558560876074035

@AngeLouCN
Owner

Hi, thank you for your interest. Could you please share more information about the training, such as which dataset and label ratio you used? By the way, I have checked the code and found that I uploaded the wrong loss.py file; I will upload the correct one soon.

@thanhnt-2658
Author

Thanks for your prompt response. I used the command python train_mms.py with no arguments, so everything is default; the same goes for python test.py. So the dataset is kvasir and the label ratio is 0.5.

@AngeLouCN
Owner

Hi, I have uploaded the correct one. You can try again and let me know what happens.

@thanhnt-2658
Author

By the way, I got an error while testing. In your test.py, I needed to change lines 31 and 32 from preUnet to preUnet() in order to avoid the error.
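
For reference, here is a minimal sketch of that change, assuming lines 31-32 simply construct the two models; everything except switching preUnet to preUnet() is an assumption about the surrounding code:

```python
# test.py, around lines 31-32 (sketch; the surrounding code is assumed)
model_1 = preUnet()  # was: model_1 = preUnet
model_2 = preUnet()  # was: model_2 = preUnet
```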

@AngeLouCN
Owner

Thanks, I will fix it.

@thanhnt-2658
Author

@AngeLouCN The results have not improved after I updated loss.py. Do you have any ideas?

@AngeLouCN
Owner

I have updated loss.py and fixed some errors. I tested it just now and it looks OK.
[attached screenshot: sample]

@thanhnguyentung95

The latest loss.py did not resolve the issue. I have attached the logs.

run.log

@thanhnguyentung95

The following log is suspicious:
2022-10-04 09:22:56,623 Split Percentage : 1 Labeled Data Ratio : 0.5
2022-10-04 09:22:56,693 train_loader_1 53 train_loader_2 45 unlabeled_train_loader 101 val_loader 118

Meanwhile, the total number of training images is 472.
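
A quick sanity check like the sketch below can confirm whether samples are being dropped before they reach the dataloaders; the directory layout is an assumption, not the repository's actual paths:

```python
import os

# Hypothetical paths; adjust to wherever train_mms.py expects the Kvasir data.
img_dir = 'data/kvasir/train/images'
mask_dir = 'data/kvasir/train/masks'

imgs = sorted(os.listdir(img_dir))
masks = sorted(os.listdir(mask_dir))
print(len(imgs), len(masks))  # expect 472 training images and 472 masks

# Verify that images and masks pair up by filename once both lists are sorted.
mismatched = [(i, m) for i, m in zip(imgs, masks)
              if os.path.splitext(i)[0] != os.path.splitext(m)[0]]
print('mismatched pairs:', len(mismatched))
```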

@AngeLouCN
Owner

Hi, I downloaded my repository and ran it. The dataloader on my PC is correct. (I use Windows.)
[attached screenshot]

@AngeLouCN
Owner

Did you do something to organize the datasets?

@thanhnguyentung95

thanhnguyentung95 commented Oct 5, 2022

I found that sorted() should be applied in image_loader() instead of in ObjDataset. May I open a PR for this?
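
A minimal sketch of what I mean; the real image_loader() and ObjDataset in the repository take different arguments, so this only illustrates moving the sorting into the loader:

```python
import os
from glob import glob

def image_loader(img_dir, mask_dir):
    # Sort both file lists here, so every image is matched with the correct
    # mask before the dataset is split into labeled/unlabeled subsets.
    img_paths = sorted(glob(os.path.join(img_dir, '*')))
    mask_paths = sorted(glob(os.path.join(mask_dir, '*')))
    assert len(img_paths) == len(mask_paths)
    return list(zip(img_paths, mask_paths))
```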

@AngeLouCN
Owner

I am not really sure what you mean. I can run this code without any errors. Maybe you can try modifying the dataloader so it loads all of those images; I think the performance will then reach over 90%.

@thanhnt-2658
Author

Yes, I got over 92% now.

@AngeLouCN
Owner

:) 👍

@EmarkZOU

EmarkZOU commented Oct 3, 2023

Yes, I got over 92% now.

Hello! I've encountered similar issues while reproducing the results. On one hand, a significant number of images are filtered out because their sizes do not match trainsize; on the other hand, the final training Dice score is quite low. I would like to ask how you handled these situations. Looking forward to your response, thank you!
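
In case it's useful, one workaround that avoids the filtering (not taken from this repository; the helper load_pair and the default trainsize=352 below are placeholders) is to resize each image/mask pair to trainsize inside the loader instead of discarding it:

```python
from PIL import Image

def load_pair(img_path, mask_path, trainsize=352):
    # Resize each pair to trainsize rather than discarding it,
    # so no training samples are dropped for having the wrong size.
    img = Image.open(img_path).convert('RGB').resize((trainsize, trainsize), Image.BILINEAR)
    mask = Image.open(mask_path).convert('L').resize((trainsize, trainsize), Image.NEAREST)
    return img, mask
```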
