
May be a wrong code? #33

Closed · zhijiejia opened this issue Mar 15, 2021 · 7 comments

Comments


zhijiejia commented Mar 15, 2021

In train.py, in the validate function:

[screenshot of the code in validate]

I find that len(subcls) == 5 and len(subcls[0]) == 20 (I set 5-shot, batch_size_val = 20, and split = 1). I can't see the purpose of subcls[0].cpu().numpy()[0]; I think that expression is wrong.

zhijiejia (Author)

In validate there are two nested for loops: one runs 10 times, and the other iterates over val_loader. What is the purpose of the outer 10 iterations? Also, in case my English was unclear, let me restate my question above: I printed subcls and found its shape is (5, 20), but subcls[0].cpu().numpy()[0] only retrieves the class of the first query-support set, while a mini-batch contains 20 query-support sets, and different query-support sets do not necessarily share the same class. So why take only subcls[0].cpu().numpy()[0]?
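For anyone else puzzled by the (5, 20) shape: here is a minimal sketch of how PyTorch's default batch collation produces it. The dict layout and fabricated data below are purely illustrative; only the subcls name and the 5-shot × 20-batch shape come from this thread.

```python
import torch
from torch.utils.data import default_collate  # PyTorch >= 1.11

# Each dataset sample carries one class label per support shot (5-shot),
# faked here with a distinct class per sample.
samples = [{"subcls": [torch.tensor(c)] * 5} for c in range(20)]

# The default collate function transposes list fields across the batch:
# the result is a list of length 5 (shots), each a tensor of shape [20]
# (batch_size_val), matching the (5, 20) shape observed above.
batch = default_collate(samples)
print(len(batch["subcls"]), batch["subcls"][0].shape)  # -> 5 torch.Size([20])
```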

zhijiejia (Author)

Please tell me whether my understanding is correct: I think the outer for loop should wrap directly around the code in my screenshot, and subcls[0].cpu().numpy()[0] should be changed to subcls[i].cpu().numpy()[i]. As for the outer loop running 10 times, I believe that is because the default batch_size_val is 10.

tianzhuotao (Collaborator) commented Mar 15, 2021

@zhijiejia

Because our code evaluates the prediction by comparing it against the original label without any resizing, and different ground-truth labels have different sizes, the validation batch size can only be set to 1. That is why we take only subcls[0].cpu().numpy()[0].

If you'd like to evaluate predictions against altered labels (for example, resizing all labels to a fixed size), you can modify the code accordingly to support evaluation with larger batches. However, this may lead to unfair comparisons with previous methods, because they may use different sizes when altering labels (such as 417). Normally, in semantic segmentation, the size of the ground-truth label should not be altered.
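To make this concrete, here is a minimal sketch (not the repository's actual code) of one validation step under batch_size_val == 1. The record_step helper and the meter names are hypothetical; only subcls and the idea of comparing against the unresized label come from the discussion above.

```python
import torch

# Hypothetical per-class accumulators; the real code keeps similar meters.
NUM_SPLIT_CLASSES = 5
class_intersection = [0.0] * NUM_SPLIT_CLASSES
class_union = [0.0] * NUM_SPLIT_CLASSES

def record_step(output, target, subcls, ignore_index=255):
    """One validation step, assuming batch_size_val == 1.

    subcls is a list of length k (k-shot): each element is a tensor of
    shape [batch_size]. With a batch of 1, all k support shots belong to
    the single query's episode class, so subcls[0].cpu().numpy()[0] is
    the only class this step evaluates.
    """
    episode_class = int(subcls[0].cpu().numpy()[0])

    # Binary foreground IoU against the original-size label (no resizing).
    pred = output.argmax(dim=1)              # [1, H, W]
    valid = target != ignore_index           # mask out ignored pixels
    inter = ((pred == 1) & (target == 1) & valid).sum().item()
    union = (((pred == 1) | (target == 1)) & valid).sum().item()

    class_intersection[episode_class] += inter
    class_union[episode_class] += union
```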

zhijiejia (Author)

Thanks, I misremembered the batch_size_val value. But why is there a for loop in the outer layer of validate?

[screenshot of the outer evaluation loop]

tianzhuotao (Collaborator) commented Mar 16, 2021

@zhijiejia

This loop is used for additional evaluation steps.

PASCAL only has 1,400+ images for validation, and for some splits only 300-400 of them contain the corresponding categories. However, evaluating with 300-400 steps yields unstable results, since the support samples are randomly selected for each query image.

Previous practice evaluates with 1,000 steps, but the results are still unstable across runs. Therefore, in our paper we instead recommend evaluating few-shot segmentation methods with 5,000 steps on PASCAL, which is approximately 10 times the number of images in each split (so each split is evaluated with roughly 10 different support samples per query); this helps minimize the performance variation.

So in our code, you can see that we stop the evaluation when the total step count exceeds the pre-defined test_num, or once 10 epochs have been evaluated.
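A minimal sketch of that stopping logic, with hypothetical function and variable names (the exact names in train.py may differ); test_num = 5000 and the 10-epoch cap are the values described above.

```python
def evaluate(model, val_loader, test_num=5000, max_epochs=10):
    """Run evaluation episodes until test_num steps or max_epochs passes."""
    step = 0
    for epoch in range(max_epochs):
        # Support samples are re-drawn on every pass over val_loader, so
        # repeated epochs evaluate each query with different support sets.
        for batch in val_loader:
            if step >= test_num:
                return
            # ... forward pass and per-class IoU accumulation here ...
            step += 1
```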

zhijiejia (Author)

Thanks for your careful answer, I get it now.

tianzhuotao (Collaborator)

@zhijiejia

If you have any further questions, feel free to drop me a message :)
