Thanks for the great work!
I want to ask a question that confuses me. In the paper, Table 4 shows that the number of parameters of the entire model does not increase. However, I see that in the code the prototype tensor grows with class_num, as follows:
self.prototypes = nn.Parameter(torch.zeros(self.num_classes, self.num_prototype, in_channels), requires_grad=True)
If class_num is increased, the number of prototype parameters also increases. Did I get something wrong?
Hi @Sunting78, thanks for your interest. Table 4 actually reports the number of learnable parameters, while in our model self.prototypes is non-learnable. P.S.: although requires_grad is set to True in the code you refer to, it is changed to False immediately in the forward function, which is equivalent to setting requires_grad=False directly in the definition.
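For anyone else landing here, a minimal sketch of the point above (the module name and tensor sizes are hypothetical, not the repository's actual code): a tensor registered as an nn.Parameter but frozen with requires_grad=False still lives in the state dict, yet it is excluded from any learnable-parameter count.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
num_classes, num_prototype, in_channels = 19, 10, 720

class Head(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered as a parameter with requires_grad=True, as in the snippet above.
        self.prototypes = nn.Parameter(
            torch.zeros(num_classes, num_prototype, in_channels),
            requires_grad=True,
        )

    def forward(self, x):
        # Mirrors the behaviour described in the reply: gradients are switched off
        # in forward, so the optimizer never updates the prototypes.
        self.prototypes.requires_grad_(False)
        return x

head = Head()
_ = head(torch.zeros(1))
learnable = sum(p.numel() for p in head.parameters() if p.requires_grad)
total = sum(p.numel() for p in head.parameters())
print(learnable, total)  # 0 vs. num_classes * num_prototype * in_channels
```

So even if class_num grows, only the total tensor size grows; the learnable-parameter count reported in Table 4 stays the same.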