
the results on pascal voc look strange #12

Closed
CuberrChen opened this issue Dec 2, 2021 · 4 comments

Comments

@CuberrChen

According to Table 6, the supervised baseline achieves good results at every label ratio, yet none of the other methods seem to work as well as their papers claim.
It is strange that the results differ so much from other papers.
For example, in PixelSSL there is a significant drop in mIoU as the label ratio decreases.
PixelSSL

@YanFangCS

I also found something strange in the low label-ratio splits of the augmented PASCAL VOC dataset: my reproduction experiments cannot match the results reported in the paper. I hope the author releases the detailed parameter settings used in the experiments.

@qdd1234

qdd1234 commented Dec 10, 2021

Hi, did you find the code shown in the following picture? I can only find the code that prints this information, but I can't find where it is actually used.
[screenshots of the code in question]

@YanFangCS

@qdd1234 The author uses it in the following lines. You can find it in the file train.py from line 228 to line 231.

    for step in range(len(data_loader_unsup)):
        i_iter = epoch * len(data_loader_unsup) + step
        lr = lr_scheduler.get_lr()
        lr_scheduler.step()
        if acp or acm:
            # turn class_criterion into sampling weights: classes with a lower
            # value in class_criterion[0] get a larger weight
            conf = 1 - class_criterion[0]
            conf = conf[target_cat]
            conf = (conf**0.5).numpy()
            # softmax over the target classes -> sampling probabilities
            conf = np.exp(conf)/np.sum(np.exp(conf))
            query_cat = []
            # sample num_cat classes with replacement, then deduplicate
            for rc_idx in range(num_cat):
                query_cat.append(np.random.choice(target_cat, p=conf))
            query_cat = list(set(query_cat))
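
For reference, here is a minimal standalone sketch (mine, not from the repo) of what that block computes, assuming class_criterion[0] holds a per-class value in [0, 1] where higher means more confident: classes with lower values end up with larger probabilities when the query categories are sampled.

    import numpy as np

    # hypothetical per-class values (assumed: higher = more confident)
    class_confidence = np.array([0.9, 0.2, 0.6, 0.1, 0.8])
    target_cat = np.array([1, 2, 3, 4])  # classes present in the current target
    num_cat = 3                          # how many categories to query

    conf = 1 - class_confidence          # low-confidence classes get larger weight
    conf = conf[target_cat] ** 0.5
    conf = np.exp(conf) / np.sum(np.exp(conf))  # softmax -> sampling probabilities

    # sample with replacement, then deduplicate (mirrors the loop above)
    query_cat = [np.random.choice(target_cat, p=conf) for _ in range(num_cat)]
    query_cat = list(set(query_cat))
    print(query_cat)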

@CuberrChen
Author

I found the reason: the implementations differ in how many times they traverse the labeled data. In PixelSSL, the labeled data is traversed for 40 epochs in every partition, but in this work it is traversed according to the total number of iterations. Therefore, the number of times the labeled data is seen differs across partitions.

I don't know which is more fair / appropriate...
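
To make the difference concrete, here is a rough sketch (the numbers and variable names are mine, purely illustrative) of the two schedules:

    # Illustrative only: compares a fixed-epoch schedule (PixelSSL-style) with a
    # fixed-total-iteration schedule for the labeled loader.
    num_labeled = 662        # hypothetical size of one labeled partition
    batch_size = 8
    iters_per_epoch = num_labeled // batch_size

    # PixelSSL-style: 40 passes over the labeled set regardless of partition size,
    # so smaller partitions simply yield fewer labeled iterations in total.
    labeled_iters_fixed_epochs = 40 * iters_per_epoch

    # Iteration-based (as described above): a fixed iteration budget, so a smaller
    # labeled partition is cycled through many more times.
    total_iters = 40000      # hypothetical budget
    passes_over_labeled = total_iters / iters_per_epoch

    print(labeled_iters_fixed_epochs, passes_over_labeled)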
