Why are there 15 MPIIGaze trained models? #11

Closed
nicola-scarano opened this issue Sep 8, 2022 · 4 comments

Comments

@nicola-scarano

First of all, thanks Ahmed for the work. Here is the question:

Why are there 15 MPIIGaze trained models and not just one as with the Gaze360 dataset? And in this case, how should inference be performed?

@ziminMIAO

Hello, I have the same question as you. Have you solved it?

@Ahmednull
Owner

Because we use k-fold cross-validation on the MPIIFaceGaze dataset with k = 15.
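For concreteness, here is a minimal sketch of the split this implies, assuming the 15 MPIIFaceGaze subjects are labelled p00 through p14 (the subject IDs are only illustrative):

```python
# Leave-one-subject-out split with k = 15: one model per held-out subject.
subjects = [f"p{i:02d}" for i in range(15)]  # p00 ... p14 (illustrative IDs)

for test_subject in subjects:
    train_subjects = [s for s in subjects if s != test_subject]
    # Train one model on `train_subjects`, then evaluate it on `test_subject`.
    # -> 15 trained checkpoints in total, one per fold.
```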

@hefei12

hefei12 commented Nov 24, 2023

First of all, thanks Ahmed for the work. There is a question here:

Why are there 15 models trained on MPIIGaze, rather than just one as with the Gaze360 dataset? In this case, how should inference be performed?

Hello, can you guide me on how to perform inference?

@Ahmednull
Owner

We use 15-fold cross-validation to evaluate L2CS-Net on the MPIIFaceGaze dataset. For each fold, the model is trained on the data of 14 of the 15 subjects, and the remaining subject is held out as the test set. This procedure is repeated so that each subject is used as the test subject exactly once, ensuring that the evaluation comprehensively covers the variability between different individuals. That is why there are 15 models.
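A rough sketch of how that per-fold evaluation could be aggregated is below. It assumes each checkpoint's output has already been post-processed into continuous (pitch, yaw) angles in radians; `build_model`, `load_subject_loader`, and `checkpoint_paths` are placeholders for the repo's own model constructor, per-subject dataloader, and saved fold weights, not the actual API.

```python
import torch
import torch.nn.functional as F

def angular_error_deg(pred, gt):
    """Mean 3D angular error in degrees between predicted and ground-truth
    gaze, both given as (N, 2) tensors of (pitch, yaw) in radians."""
    def to_vec(py):
        pitch, yaw = py[:, 0], py[:, 1]
        # Common pitch/yaw -> 3D gaze direction conversion.
        return torch.stack([-torch.cos(pitch) * torch.sin(yaw),
                            -torch.sin(pitch),
                            -torch.cos(pitch) * torch.cos(yaw)], dim=1)
    cos_sim = F.cosine_similarity(to_vec(pred), to_vec(gt), dim=1)
    return torch.rad2deg(torch.acos(cos_sim.clamp(-1.0, 1.0))).mean().item()

def evaluate_all_folds(build_model, load_subject_loader, checkpoint_paths):
    """checkpoint_paths: the 15 saved fold models, where fold i was trained
    with subject i held out; load_subject_loader(i) yields that subject's data."""
    per_fold = []
    for fold_idx, ckpt in enumerate(checkpoint_paths):
        model = build_model()                                   # placeholder constructor
        model.load_state_dict(torch.load(ckpt, map_location="cpu"))
        model.eval()
        preds, gts = [], []
        with torch.no_grad():
            for images, gaze_gt in load_subject_loader(fold_idx):  # held-out subject only
                preds.append(model(images))                     # assumed (N, 2) pitch/yaw output
                gts.append(gaze_gt)
        per_fold.append(angular_error_deg(torch.cat(preds), torch.cat(gts)))
    # The reported benchmark number is the mean error over all 15 held-out subjects.
    return sum(per_fold) / len(per_fold)
```

This mirrors the leave-one-subject-out procedure described above; the per-fold errors can also be inspected individually to see the variability between subjects.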
