
about the size of prototype. #3

Closed
ShifuShen opened this issue Apr 13, 2021 · 2 comments

Comments

@ShifuShen

Thanks for your great work and for sharing the code. There is one point that confused me.

I saw you define `objective_vectors` in models/adaptation_modelv2.py with a size of 256:

```python
self.objective_vectors = torch.zeros([self.class_numbers, 256])
self.objective_vectors_num = torch.zeros([self.class_numbers])
```

but I found that the feature maps from layer4 have 2048 channels:

```python
self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4, BatchNorm=BatchNorm)
self.layer5 = self._make_pred_layer(Classifier_Module2, 2048, [6, 12, 18, 24], [6, 12, 18, 24], num_classes)
```

and in your code, you just use the output feature of layer4 as `out['feat']`:

```python
def forward(self, x, ssl=False, lbl=None):
    _, _, h, w = x.size()
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)
    if self.bn_clr:
        x = self.bn_pretrain(x)

    out = self.layer5(x, get_feat=True)
    # out = dict()
    # out['feat'] = x
    # x = self.layer5(x)

    # if not ssl:
    #     x = nn.functional.upsample(x, (h, w), mode='bilinear', align_corners=True)
    #     if lbl is not None:
    #         self.loss = self.CrossEntropy2d(x, lbl)
    # out['out'] = x
    return out
```

So which is the correct size of the prototype? And if it is 256, how is the 256-dimensional feature obtained?

@tudragon154203

Hi ShifuShen,

I am not on the author team, but I think I know the answer to your question.

The feature is extracted right before the last layer of Classifier_Module2 in deeplabv2.py. If you look at the forward function of Classifier_Module2, you can see that `out['feat']` is the input of the layer `nn.Conv2d(256, num_classes, kernel_size=1, padding=0, dilation=1, bias=False)`.

So the feature has 256 channels in the end.
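To make the shapes concrete, here is a minimal sketch of such a classifier head. The internals are illustrative stand-ins (the real Classifier_Module2 uses multiple dilated ASPP branches); only the final 1x1 conv matches the layer quoted above. The point is that the head fuses the 2048-channel layer4 output down to 256 channels, and `out['feat']` is taken just before the `256 -> num_classes` projection:

```python
import torch
import torch.nn as nn

class ClassifierHeadSketch(nn.Module):
    """Illustrative stand-in for Classifier_Module2 (not the repo's actual code).

    The 2048-channel layer4 output is reduced to 256 channels, and only the
    final 1x1 conv maps 256 -> num_classes. The 256-channel tensor is what
    gets stored as out['feat'] and averaged into the class prototypes.
    """
    def __init__(self, inplanes=2048, num_classes=19):
        super().__init__()
        # stand-in for the fused ASPP branches: 2048 -> 256 channels
        self.bottleneck = nn.Sequential(
            nn.Conv2d(inplanes, 256, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )
        # the layer quoted above: its *input* has 256 channels
        self.head = nn.Conv2d(256, num_classes, kernel_size=1, padding=0,
                              dilation=1, bias=False)

    def forward(self, x, get_feat=False):
        feat = self.bottleneck(x)   # (N, 256, H, W) <- prototype feature
        logits = self.head(feat)    # (N, num_classes, H, W)
        if get_feat:
            return {'feat': feat, 'out': logits}
        return logits

head = ClassifierHeadSketch()
out = head(torch.randn(1, 2048, 33, 33), get_feat=True)
print(out['feat'].shape[1])  # 256, matching torch.zeros([class_numbers, 256])
```

So the prototype dimension of 256 is consistent with a 2048-channel backbone: the reduction happens inside the classifier module, before the class projection.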

@ShifuShen
Author


Got it, thanks!
