Question about get_det_bboxes function #45

Open

JKZhan opened this issue Dec 11, 2022 · 10 comments

@JKZhan commented Dec 11, 2022

Hi, I'm confused about why this needs to be computed as the final score:

seen_scores = torch.mm(scores, self.vec.t())
seen_scores = torch.mm(seen_scores, self.vec)
unseen_scores = torch.mm(scores, self.vec.t())
unseen_scores = torch.mm(unseen_scores, self.vec_unseen)

because during training the score is the semantic_score, which is not computed the same way as above. Does this have a special meaning, and why does this score work?

@zhengye1995 (Owner)

In training, the semantic_score is calculated here:

semantic_score = torch.mm(semantic_score, self.vec)

Therefore the test code you mentioned above is in line with training.
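
For concreteness, here is a minimal self-contained sketch of that training-time computation; the dimensions and random tensors are placeholders, not values from this repo:

import torch

N, d, s = 8, 300, 65                 # RoIs, word2vec dim, seen classes (placeholders)
semantic_feat = torch.randn(N, d)    # stand-in for the semantic branch feature
vec = torch.randn(d, s)              # stand-in for self.vec (seen word2vec matrix)

semantic_score = torch.mm(semantic_feat, vec)   # (N, d) @ (d, s) -> (N, s) logits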

@JKZhan (Author) commented Dec 12, 2022

I know how the semantic_score is calculated. My question is: why are the scores used in training the semantic_score directly, while at test time we first apply softmax to the semantic_score and then use the code above to compute the seen/unseen scores?

@zhengye1995 (Owner)

In training, the classification loss function is F.cross_entropy(input, target), which applies softmax internally. Therefore, the semantic_score is effectively also processed with softmax.
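
A quick way to verify the softmax-inside-cross-entropy point (shapes here are arbitrary):

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)           # e.g. semantic_score for 4 RoIs, 10 seen classes
target = torch.randint(0, 10, (4,))

# F.cross_entropy applies log_softmax internally, so these two losses match:
loss_a = F.cross_entropy(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), target)
assert torch.allclose(loss_a, loss_b)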

@JKZhan (Author) commented Dec 13, 2022

But training does not compute

seen_scores = torch.mm(scores, self.vec.t())
seen_scores = torch.mm(seen_scores, self.vec)

before calculating the loss.

@zhengye1995 (Owner)

I see your point. In inference, for seen classes, using

seen_scores = torch.mm(scores, self.vec.t())
seen_scores = torch.mm(seen_scores, self.vec)

or using the scores directly differs only numerically; the relative ranking does not change. You can remove these lines and the performance on seen classes will not change.

I added this process only for consistency with the unseen classes.
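
A quick empirical way to sanity-check this round-trip claim, using random stand-ins for the repo's tensors (the real self.vec is a fixed word2vec matrix, so the exact fraction may differ):

import torch

N, s, d = 100, 65, 300                 # boxes, seen classes, word2vec dim (placeholders)
scores = torch.softmax(torch.randn(N, s), dim=1)
vec = torch.randn(d, s)                # stand-in for self.vec

seen_scores = torch.mm(torch.mm(scores, vec.t()), vec)   # round trip through semantic space
same_top1 = (scores.argmax(dim=1) == seen_scores.argmax(dim=1)).float().mean()
print(f"fraction of boxes with unchanged top seen class: {same_top1.item():.2f}")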

@JKZhan (Author) commented Dec 13, 2022

But the scores belong to the seen classes, so why can they be used as unseen scores? I can't understand why, after computing

unseen_scores = torch.mm(unseen_scores, self.vec_unseen)

the results can be used as unseen scores.

@zhengye1995 (Owner)

For the unseen scores, the line unseen_scores = torch.mm(scores, self.vec.t()) projects the scores back into the semantic space:
e.g., scores (100, s) mm vec.t() (s, 300) -> (100, 300).
Then the line unseen_scores = torch.mm(unseen_scores, self.vec_unseen) computes the similarity between that semantic representation and the unseen word2vec embeddings to obtain the unseen scores.
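
To make the shapes concrete, here is a self-contained sketch of both projections; all dimensions and tensors are illustrative placeholders, not values from the repo:

import torch

N, s, u, d = 100, 65, 15, 300     # boxes, seen classes, unseen classes, word2vec dim
scores = torch.softmax(torch.randn(N, s), dim=1)   # post-softmax seen-class scores
vec = torch.randn(d, s)           # stand-in for self.vec (seen word2vec matrix)
vec_unseen = torch.randn(d, u)    # stand-in for self.vec_unseen

semantic = torch.mm(scores, vec.t())            # (N, s) @ (s, d) -> (N, d) semantic space
unseen_scores = torch.mm(semantic, vec_unseen)  # (N, d) @ (d, u) -> (N, u) unseen scores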

@JKZhan (Author) commented Dec 13, 2022

In your experience, could this part be used in training?

scores = F.softmax(semantic_score, dim=1)
seen_scores = torch.mm(scores, self.vec.t())
seen_scores = torch.mm(seen_scores, self.vec)

The loss would be F.cross_entropy(seen_scores, target), and the unseen inference part would become

scores = F.softmax(semantic_score, dim=1)
unseen_scores = torch.mm(scores, self.vec.t())
unseen_scores = torch.mm(unseen_scores, self.vec_unseen)
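
Put together as a runnable sketch (this is the commenter's untested proposal, with placeholder shapes; note that softmax would effectively be applied twice in the loss):

import torch
import torch.nn.functional as F

N, s, d = 8, 65, 300                  # RoIs, seen classes, word2vec dim (placeholders)
semantic_score = torch.randn(N, s)    # raw logits from the semantic head
vec = torch.randn(d, s)               # stand-in for self.vec
target = torch.randint(0, s, (N,))

scores = F.softmax(semantic_score, dim=1)
seen_scores = torch.mm(torch.mm(scores, vec.t()), vec)   # (N, s) round trip
loss = F.cross_entropy(seen_scores, target)   # softmax applied again inside the loss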

@zhengye1995 (Owner)

I have not tried this before; you can give it a try. Good luck.

@JKZhan (Author) commented Dec 14, 2022

Thanks for the reply.
