This repository has been archived by the owner on Apr 13, 2023. It is now read-only.

Questions about Accuracy Assessment #27

Open
yiyi-today opened this issue Feb 28, 2022 · 0 comments

I am very interested in your paper, but I have some questions about the accuracy assessment. During training you use "prfs(labels.data.cpu().numpy().flatten(), cd_preds.data.cpu().numpy().flatten(), average='binary', pos_label=1)" for evaluation, while in eval you use "tn, fp, fn, tp = confusion_matrix(labels.data.cpu().numpy().flatten(), cd_preds.data.cpu().numpy().flatten()).ravel()". I found that the accuracy reported by the two methods differs by up to 10%. Why is the difference so large, and which method is more reliable for accuracy assessment?
Looking forward to your answer, thank you!
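For context, here is a minimal sketch (with made-up toy arrays standing in for the flattened `labels` and `cd_preds` tensors, not the repository's actual data) of what the two calls measure. Note that `prfs` with `average='binary'` reports precision/recall/F1 for the change class only, while the confusion-matrix counts are typically used to derive overall accuracy (tp+tn)/total, so the two numbers are not the same metric and can legitimately differ:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

# Toy flattened binary change-detection masks (hypothetical example data,
# standing in for labels.data.cpu().numpy().flatten() and
# cd_preds.data.cpu().numpy().flatten())
labels = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
preds  = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])

# Method 1 (train): precision/recall/F1 for the positive (change) class
p, r, f1, _ = precision_recall_fscore_support(
    labels, preds, average='binary', pos_label=1)

# Method 2 (eval): raw confusion-matrix counts; passing labels=[0, 1]
# guarantees ravel() yields four values even if a batch happens to
# contain only one class (otherwise the unpack silently breaks)
tn, fp, fn, tp = confusion_matrix(labels, preds, labels=[0, 1]).ravel()
precision_cm = tp / (tp + fp)          # same as p above
recall_cm    = tp / (tp + fn)          # same as r above
accuracy_cm  = (tp + tn) / (tn + fp + fn + tp)  # a different metric
```

On imbalanced change-detection masks (mostly "no change" pixels), overall accuracy is dominated by true negatives and will usually be much higher than the positive-class F1, which could account for a gap of the size described. Averaging per-batch scores (as in the training loop) versus accumulating tn/fp/fn/tp over the whole epoch before dividing can also shift the result.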
