The results on PASCAL VOC look strange #12
Comments
I also found something strange in the low-data-ratio split scenarios with the PASCAL VOC augmented dataset: my reproduction experiments cannot match the results reported in the paper. I hope the author releases the detailed parameter settings used in the experiments.
@qdd1234 The author sets it in the following lines; you can find it in train.py, lines 228 to 231.
I found the reason: the number of times each implementation traverses the labeled data differs. In PixelSSL, the labeled data is traversed for 40 epochs in every partition, but in this work it is traversed according to the total number of iterations. Therefore, the number of times the labeled data is seen differs across partitions. I don't know which is fairer / more appropriate...
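To make the difference concrete, here is a minimal sketch of the arithmetic. Under a fixed-iteration schedule, the number of epochs over the labeled set grows as the labeled split shrinks, whereas PixelSSL fixes the labeled epochs (40) for every partition. The `total_iters` and `batch_size` values below are hypothetical, chosen only for illustration; the split sizes are the usual PASCAL VOC train_aug (10582 images) partitions.

```python
def labeled_epochs_fixed_iters(total_iters, batch_size, num_labeled):
    """Epochs over the labeled set when the total iteration count is fixed.

    Each iteration consumes `batch_size` labeled samples, so the labeled
    set of size `num_labeled` is traversed (total_iters * batch_size /
    num_labeled) times in total.
    """
    return total_iters * batch_size / num_labeled

# Hypothetical training budget; real values depend on the repo's config.
total_iters, batch_size = 40000, 16

# Approximate 1/16, 1/8, 1/4, 1/2 splits of the 10582 train_aug images.
for num_labeled in (662, 1323, 2646, 5291):
    epochs = labeled_epochs_fixed_iters(total_iters, batch_size, num_labeled)
    print(f"{num_labeled:5d} labeled images -> {epochs:7.1f} labeled epochs")
```

With a fixed iteration budget, the smallest split sees the labeled data roughly eight times as often as the largest, while PixelSSL's fixed-40-epoch schedule keeps that count identical across partitions, which is the asymmetry described above.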
According to Table 6, the supervised baseline achieves good results across the different label ratios, yet none of the other methods seem to work as well as their papers report. It is strange that the results differ so much from other papers. For example, in PixelSSL there is a significant decrease in mIoU across the different label ratios.
PixelSSL