Question about fashionBERT Rank@K, AUC #18

Open
SeonbeomKim opened this issue Nov 10, 2020 · 0 comments

SeonbeomKim commented Nov 10, 2020

When calculating Rank@K, the documents are ranked by their sorted TIA scores. However, the code does not look right, because the raw logits are compared without softmax applied.

Since the softmax probability for "aligned" depends on both logits, ranking by the raw second logit and ranking by the softmax probability are not equivalent across examples, so I think the current code makes an unfair comparison of TIA scores.

So I think the code in read_batch_result_file.py should be changed like this:

# step 2: rank@K
for idx in range(len(text_prod_ids)):
    query_id = text_prod_ids[idx] if type == 'txt2img' else image_prod_ids[idx]
    doc_id   = image_prod_ids[idx] if type == 'txt2img' else text_prod_ids[idx]
    # dscore = predictions[idx, 1]                      # current code: raw logit
    dscore   = softmax(predictions[idx], axis=-1)[1]    # proposed: softmax probability
    dlabel   = labels[idx]
    doc      = Doc(id=doc_id, score=dscore, label=dlabel)
    if query_id in query_dict:
        query_dict[query_id].append(doc)
    else:
        docs = []
        docs.append(doc)
        query_dict[query_id] = docs
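
Just to make the point concrete, here is a small sketch (the logit values are made up, and I am assuming scipy.special.softmax, which matches the axis=-1 call above): ranking by the raw "aligned" logit and ranking by the softmax probability can disagree, because the probability also depends on the other logit.

import numpy as np
from scipy.special import softmax

# Hypothetical logits for two (query, doc) pairs: columns are [not aligned, aligned].
predictions = np.array([[ 4.0, 3.0],    # raw aligned logit 3.0, softmax prob ~0.27
                        [-2.0, 1.0]])   # raw aligned logit 1.0, softmax prob ~0.95

raw_scores  = predictions[:, 1]
prob_scores = softmax(predictions, axis=-1)[:, 1]

print(np.argsort(-raw_scores))    # [0 1] -> pair 0 ranked first by raw logit
print(np.argsort(-prob_scores))   # [1 0] -> pair 1 ranked first by probability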

The AUC computation has the same problem:

# step 3: AUC
for idx in range(len(text_prod_ids)):
    # y_preds.append(predictions[idx, 1])                     # current code: raw logit
    y_preds.append(softmax(predictions[idx], axis=-1)[1])     # proposed: softmax probability
    y_trues.append(labels[idx])
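
A quick check for AUC (again with made-up logits and labels, assuming sklearn.metrics.roc_auc_score): because the softmax probability is not a monotone function of the second logit alone, AUC computed on raw logits and on softmax probabilities can differ.

import numpy as np
from scipy.special import softmax
from sklearn.metrics import roc_auc_score

# Hypothetical logits ([not aligned, aligned]) and labels, just for illustration.
predictions = np.array([[ 4.0, 3.0],
                        [-2.0, 1.0],
                        [ 0.5, 0.0]])
labels = np.array([0, 1, 0])

auc_raw  = roc_auc_score(labels, predictions[:, 1])                     # 0.5
auc_prob = roc_auc_score(labels, softmax(predictions, axis=-1)[:, 1])   # 1.0
print(auc_raw, auc_prob)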

Is that right?

Thank you

On Nov 11, 2020, SeonbeomKim changed the title from "Question about fashionBERT rank@k" to "Question about fashionBERT Rank@K, AUC".