
About the 300-W evaluation and dataset #6

Closed
mariolew opened this Issue Nov 28, 2016 · 4 comments


mariolew commented Nov 28, 2016

Hi, trigeorgis!
Thanks for sharing your code.
With the provided code, the pre-trained model gets an AUC of 41.04, which is much lower than the 45.32 reported in the paper.

Also, I can't find the right website to download the 300-W test set. http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/ only provides the full set.

Thanks.


trigeorgis (Owner) commented Nov 28, 2016

Hi Mario,

There are two reasons for it:

  • It seems there was an error in our evaluation code which caused the AUC for *all* methods to be a bit higher. Thanks for letting me know; I will put an erratum on my website to let people know.
  • We used a different step size for calculating the AUC (0.005) in order to match the density of the available CED curves we had from the competition results (see the sketch after this comment).

You can download the 300-W test set from here:
http://ibug.doc.ic.ac.uk/resources/300-W_IMAVIS/
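
For reference, here is a minimal sketch of how an AUC figure can be computed from per-image normalised errors. The `ced_auc` helper, the 0.08 cut-off and the trapezoidal integration are illustrative assumptions, not the project's actual evaluation code; only the two step sizes (0.005 and 0.0001) come from this thread.

```python
import numpy as np

def ced_auc(norm_errors, threshold=0.08, step=0.005):
    """Area under the cumulative error distribution (CED) curve, cut off at
    `threshold` and scaled so that a perfect method scores 100.

    norm_errors: per-image point-to-point errors, already normalised
                 (e.g. by the inter-ocular distance).
    threshold:   error value at which the CED curve is cut off
                 (0.08 is assumed here for illustration).
    step:        sampling step along the error axis; 0.005 matches the density
                 of the competition CED curves, 0.0001 is what the released
                 evaluation code uses.
    """
    errors = np.asarray(norm_errors, dtype=np.float64)
    xs = np.arange(0.0, threshold + step, step)
    # Fraction of images whose error is at most each sampled threshold.
    ced = np.array([np.mean(errors <= x) for x in xs])
    # Trapezoidal integration of the CED curve, expressed as a percentage.
    return np.trapz(ced, xs) / threshold * 100.0
```

With trapezoidal integration and a reasonably dense CED curve, the step size mainly affects numerical accuracy rather than systematically raising or lowering the AUC.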


mariolew commented Nov 28, 2016

Hi, trigeorgis,

Thanks for your quick reply.
As for the first reason, did you mean that the AUC reported in the paper is higher because of the error in the evaluation code?
And as for the second reason, the provided code uses a step size of 0.0001, which is much smaller than 0.005. I would expect the smaller step size to give a better result, but it doesn't, so I don't think the step size is a reason for the lower result.

Thanks again.
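
As a quick illustration of that point, running the hypothetical `ced_auc` sketch above on synthetic errors gives nearly identical values for the two step sizes:

```python
import numpy as np

# Synthetic normalised errors, for illustration only.
rng = np.random.default_rng(0)
errors = rng.uniform(0.01, 0.12, size=600)

print(ced_auc(errors, step=0.005))   # coarse sampling
print(ced_auc(errors, step=0.0001))  # fine sampling; differs only marginally
```

So the step size alone would not be expected to account for a gap as large as 41.04 vs 45.32.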


trigeorgis (Owner) commented Nov 28, 2016

Yeap, there was an error in the library we used to calculate the AUC.

trigeorgis closed this Nov 28, 2016


mariolew commented Nov 28, 2016

Okay, thanks for your reply.
