about label/score normalization #7
Comments
Also, I find that the target network contains 5 fc layers in the code, while the paper claims 4 fc layers.
Actually, we didn't do this normalization in the cross-database evaluation. This is because the final criterion, SRCC, measures only the rank correlation between two vectors and is therefore independent of the score scale.
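As a small illustration of this point: SRCC is the Pearson correlation of the ranks, so any positive linear rescaling of the scores (such as min-max normalization) leaves it unchanged. A minimal pure-Python sketch, with made-up score values:

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors.

    Assumes no tied values, which keeps the ranking step simple.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, idx in enumerate(order):
            r[idx] = pos + 1.0
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

preds  = [3.1, 1.4, 4.8, 2.2]               # hypothetical model outputs
mos    = [55.0, 35.0, 90.0, 20.0]           # hypothetical raw scores on a 0-100 scale
scaled = [(m - 20.0) / 70.0 for m in mos]   # same scores min-max rescaled to [0, 1]

print(spearman(preds, mos))     # -> 0.8
print(spearman(preds, scaled))  # -> 0.8, identical: rescaling preserves ranks
```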
Thanks for your kind reminder; there is indeed a small difference in the number of fc layers between the paper and our code. However, using 4 or 5 fc layers doesn't seem to affect model performance much, probably because the target net has already learned a sufficient quality representation in its earlier layers. You can also change the number of fc layers yourself to see whether the performance changes accordingly.
Thanks for your reply. |
Our pleasure ; ) |
Hey, thanks for your great work.
I viewed the code and found that there's no label normalization, e.g. normalizing scores to the range [0, 1]. It's fine not to normalize when training and testing on the same dataset, or on datasets with similar score ranges.
In the paper, however, Table 3 lists three datasets (LIVEC, BID, and KonIQ) that have different score ranges. Is it reasonable to use raw scores there, or did you normalize them?
Looking forward to your reply.
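For context, the normalization asked about here is typically a simple min-max rescaling of each dataset's MOS values to [0, 1]. A minimal sketch (the 0-100 range below is an assumed example, not taken from any of these datasets):

```python
def minmax_normalize(scores, lo=None, hi=None):
    """Rescale raw subjective scores to [0, 1].

    lo/hi default to the observed range of the given scores; pass the
    dataset's nominal range explicitly if you want a fixed mapping.
    """
    lo = min(scores) if lo is None else lo
    hi = max(scores) if hi is None else hi
    return [(s - lo) / (hi - lo) for s in scores]

# Hypothetical raw scores on an assumed 0-100 scale
raw_mos = [0.0, 25.0, 50.0, 100.0]
print(minmax_normalize(raw_mos, lo=0.0, hi=100.0))  # -> [0.0, 0.25, 0.5, 1.0]
```

Applying this per dataset puts differently ranged labels on a common scale, though (as the maintainers note above) it does not change rank-based metrics such as SRCC.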