Interpreting the results of the score in evaluate_gpu.py #48

Closed
ksuhartono97 opened this issue Jul 16, 2020 · 2 comments

@ksuhartono97

Hi, I am trying to understand the range of the score that comes out of the evaluate function.

Putting the function here for reference:

```python
import torch

def evaluate(qf, ql, qc, gf, gl, gc):
    # qf: query feature vector; gf: matrix of gallery features (one row per image).
    # ql/qc and gl/gc (labels and camera ids) are used later in the elided code.
    query = qf.view(-1, 1)
    # dot product of every gallery feature with the query feature
    score = torch.mm(gf, query)
    score = score.squeeze(1).cpu()
    score = score.numpy()

    .....
```

I expected the scores to be in the range 0 to 1, but the results show values that can go negative and beyond 1. Is this expected behaviour? If so, what is the score range? I would like to normalize these values.

@layumi
Contributor

layumi commented Jul 16, 2020

Hi @ksuhartono97

For cosine similarity, each score is in [-1, 1].

For this repo, I normalize the two features and then concatenate them at
https://github.com/NVlabs/DG-Net/blob/master/reid_eval/test_2label.py#L137-L138

so the value of the score is in [-2, 2]. You may give that a try.
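
For concreteness, here is a minimal sketch of why the concatenated score lands in [-2, 2]. The feature dimension and the final [0, 1] rescaling are illustrative assumptions, not code from the repo:

```python
import torch
import torch.nn.functional as F

# Two feature blocks per image (standing in for the two embeddings the
# repo concatenates), each L2-normalized on its own. The 512-dim size
# is an arbitrary choice for illustration.
q1 = F.normalize(torch.randn(512), dim=0)
q2 = F.normalize(torch.randn(512), dim=0)
g1 = F.normalize(torch.randn(512), dim=0)
g2 = F.normalize(torch.randn(512), dim=0)

query = torch.cat([q1, q2])    # concatenated query feature
gallery = torch.cat([g1, g2])  # concatenated gallery feature

# The dot product splits into two cosine similarities, each in [-1, 1],
# so their sum lies in [-2, 2].
score = torch.dot(gallery, query).item()
assert -2.0 <= score <= 2.0

# If a [0, 1] value is needed, an affine rescaling works:
score_01 = (score + 2) / 4
```

Since the rescaling is monotonic, it leaves the ranking of gallery images unchanged; it only maps the scores onto [0, 1].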

@ksuhartono97
Author

I see, thanks!
