2D/3D: Market scene demo. Distinguish between 2D/3D pupil-gaze-mapping and 2D/3D data representation. #47

Closed
gabrielDiaz-performlab opened this issue May 25, 2018 · 4 comments


@gabrielDiaz-performlab

From what I understand, the 2D and 3D market scene demos differ both in the pupil-to-gaze mapping algorithm and in the way the gaze data is represented. More specifically...


In the 2D market scene demo:

  1. The 2D pupil-to-gaze mapping algorithm is used. However, this algorithm appears to use only the centroid of the detected pupil, and calibration quality degrades quickly with small shifts of the headset on the face. I do not recommend that anyone use this method for research.

  2. The eye-in-head vectors are represented by a series of points mapped to a 2D plane at a fixed distance (see the sketch below).
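
For illustration, here is a minimal sketch of what "points mapped to a 2D plane at a fixed distance" means geometrically. This is not the demo's actual code; the pinhole model, the field of view, and the normalized gaze coordinates are assumptions.

```python
import numpy as np

def gaze_point_on_plane(norm_pos, plane_distance, fov_deg=(90.0, 90.0)):
    """Place a normalized (0..1) 2D gaze point on a plane at a fixed
    distance in front of the head/scene camera (illustrative pinhole model)."""
    half_fov = np.radians(np.asarray(fov_deg)) / 2.0
    # Convert normalized viewport coordinates to angles around the optical axis.
    angles = (np.asarray(norm_pos) - 0.5) * 2.0 * half_fov
    # Intersect the resulting direction with the plane z = plane_distance.
    x = plane_distance * np.tan(angles[0])
    y = plane_distance * np.tan(angles[1])
    return np.array([x, y, plane_distance])

# Example: a gaze point slightly right of center, with the plane fixed at 2 m.
print(gaze_point_on_plane((0.6, 0.5), plane_distance=2.0))
```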


In the 3D market scene demo:

  1. The 3D pupil-to-gaze mapping algorithm is used. From my experience with the mobile tracker, this algorithm can work very well. Unfortunately, it does not fare well in this demo, and I do not know why. Poor eye images?

  2. The eye-in-head vectors are represented by a single point placed along the cyclopean gaze vector, at the depth of gaze estimated from the convergence of the left and right gaze vectors (see the sketch below).
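
To make the vergence idea concrete, here is a minimal sketch of estimating a single gaze point from the convergence of the two gaze rays, taken as the midpoint of their closest approach. This only illustrates the geometry, not the demo's implementation; the eye positions, units, and 64 mm interocular distance are assumptions.

```python
import numpy as np

def vergence_point(o_l, d_l, o_r, d_r):
    """Midpoint of closest approach between the left and right gaze rays.
    o_* are eye centers, d_* are eye-in-head gaze directions."""
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays (nearly) parallel: no finite vergence
        return None
    t_l = (b * e - c * d) / denom  # parameter along the left ray
    t_r = (a * e - b * d) / denom  # parameter along the right ray
    p_l, p_r = o_l + t_l * d_l, o_r + t_r * d_r
    return (p_l + p_r) / 2.0       # single "cyclopean" gaze point

# Example: eyes 64 mm apart, both looking at a point 1 m straight ahead.
target = np.array([0.0, 0.0, 1.0])
o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
print(vergence_point(o_l, target - o_l, o_r, target - o_r))  # ~ [0, 0, 1]
```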


There are two issues here.

One is that folks are never quite sure what is implied by the 2D vs. 3D marketplace demo, because the issues of mapping algorithm and data representation are conflated. Is it called 2D simply because the gaze direction is represented as discs on a 2D plane? On both Discord and in personal communication, I have been asked the question "What algorithm is used in the 2D marketplace demo?" (or some variant of it) several times.

The second issue is that, for the 3D gaze mapping algorithm to succeed, it must meet a HIGHER criterion than the 2D mapping: it must also be sufficiently accurate to recover the depth of gaze. I have never seen this done with a conventional eye tracker, and I believe the estimated depth of gaze is extremely inaccurate beyond 3 m even with a very high quality eye tracker.
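
A back-of-the-envelope sketch of why depth from vergence degrades so quickly with viewing distance (the 64 mm interocular distance and the ±0.5° vergence error are assumed values, not measurements from the demo):

```python
import numpy as np

IPD = 0.064  # assumed interocular distance in meters

def vergence_angle(depth):
    """Vergence angle (radians) for symmetric fixation at a given depth."""
    return 2.0 * np.arctan(IPD / (2.0 * depth))

def depth_from_vergence(angle):
    """Invert the relation: estimated depth for a measured vergence angle."""
    return IPD / (2.0 * np.tan(angle / 2.0))

# How much does a +/- 0.5 deg vergence error move the depth estimate?
err = np.radians(0.5)
for true_depth in (1.0, 3.0, 5.0):
    a = vergence_angle(true_depth)
    lo, hi = depth_from_vergence(a + err), depth_from_vergence(a - err)
    print(f"true {true_depth:.1f} m -> estimate between {lo:.2f} m and {hi:.2f} m")
```

With these assumed numbers, a ±0.5° error barely matters at 1 m (roughly 0.9–1.2 m) but already spans about 2.1–5.1 m at a true distance of 3 m, and about 3–16 m at 5 m.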

I strongly suggest that you use the same method of data representation for both the 2D and 3D trackers: the discs on the screen. You may give the option to try to recover depth of gaze, but please do not make it the default representation.

I also **strongly** recommend an explicit explanation of what differs between the 2D and 3D marketplace demos.

Thanks!

- gD
@mkassner
Member

mkassner commented Jul 3, 2018

Yes, this is a very good idea and it's our plan to do this soon!

@mkassner
Member

This is still on our radar. Just FYI.

@gabrielDiaz-performlab
Author

gabrielDiaz-performlab commented Oct 29, 2018 via email

@fx-lange
Contributor

fx-lange commented Sep 6, 2019

This can be closed, as we now focus entirely on 3D gaze mapping and have dropped support for 2D in hmd-eyes v1.0.

@fx-lange fx-lange closed this as completed Sep 6, 2019