From what I understand, the 2D and 3D market scene demos differ both in the pupil-to-gaze mapping algorithm they use and in the way the gaze data is represented. More specifically...
In the 2D market scene demo:
the 2D pupil-to-gaze mapping algorithm is used. However, this algorithm seems to use only the centroid of the detected pupil, and calibration quality is quickly degraded by small shifts of the helmet on the face. I do not recommend that anyone use this method for research.
The eye-in-head vectors are represented by a series of points mapped to a 2D plane at a fixed distance.
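If it helps to make that representation concrete, here is a rough sketch of what I mean, assuming head-fixed coordinates with +z pointing out of the head and an arbitrary 1 m plane depth (the names and values here are mine, not the demo's):

```python
import numpy as np

def gaze_to_disc(direction, plane_depth=1.0):
    """Intersect an eye-in-head gaze direction with a head-fixed plane
    at a fixed depth, giving the (x, y) position of the gaze disc.
    The coordinate convention and plane_depth are assumptions."""
    d = np.asarray(direction, dtype=float)
    if d[2] <= 0:                  # gaze does not point toward the plane
        return None
    t = plane_depth / d[2]         # scale the ray so it reaches z = plane_depth
    return (t * d)[:2]             # disc position on the plane
```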
In the 3D market scene demo:
the 3D pupil-to-gaze mapping algorithm is used. From my experience with the mobile tracker, this algorithm can work very well. Unfortunately, it does not fare well in this demo, and I do not know why. Poor eye images?
The eye-in-head vectors are represented by a single point placed along the cyclopean gaze vector, at the depth of gaze estimated from the convergence of the left and right gaze vectors.
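As I understand it, that single point amounts to triangulating the two gaze rays: something like the midpoint of their closest approach. A rough sketch of the geometry, assuming known eyeball centers in head coordinates (all names here are mine, not from the demo source):

```python
import numpy as np

def vergence_gaze_point(o_l, d_l, o_r, d_r):
    """Midpoint of closest approach between the left and right gaze rays
    (origins o_l, o_r; unit directions d_l, d_r), used as the single
    3D gaze estimate."""
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # near-parallel rays: vergence ~ 0, depth is undefined
        return None
    t_l = (b * e - c * d) / denom  # standard two-ray closest-approach solution
    t_r = (a * e - b * d) / denom
    return 0.5 * ((o_l + t_l * d_l) + (o_r + t_r * d_r))

# Example: eyes 63 mm apart, both fixating a point 2 m straight ahead
o_l, o_r = np.array([-0.0315, 0.0, 0.0]), np.array([0.0315, 0.0, 0.0])
target = np.array([0.0, 0.0, 2.0])
d_l = (target - o_l) / np.linalg.norm(target - o_l)
d_r = (target - o_r) / np.linalg.norm(target - o_r)
print(vergence_gaze_point(o_l, d_l, o_r, d_r))   # ~ [0, 0, 2]
```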
There are two issues here.
One is that folks are never quite sure what is implied by the 2D vs. 3D marketplace demo labels, because the issues of mapping algorithm and data representation are conflated. Is it called 2D simply because the gaze direction is represented as discs on a 2D plane? On both Discord and in personal communication, I have been asked "What algorithm is used in the 2D marketplace demo?" (or some variant of the question) several times.
The second issue is that, for the 3D gaze mapping algorithm to succeed, it must meet a HIGHER criterion than the 2D mapping: that is, it must also be sufficiently accurate for the recovery of depth of gaze. I have never seen this done with a conventional eye tracker, and I believe the estimated depth of gaze is extremely inaccurate beyond 3 m even with a very high-quality eye tracker.
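To put numbers on that: with interpupillary distance IPD and fixation depth z, the vergence angle is roughly IPD / z, so a fixed angular error in the gaze vectors produces a depth error that grows roughly as z² / IPD. A back-of-the-envelope check (the IPD and noise values are assumptions, not measurements from this demo):

```python
import numpy as np

IPD = 0.063                 # interpupillary distance in meters (assumed typical value)
noise = np.radians(0.5)     # plausible angular error for an HMD eye tracker (assumed)

for z in [0.5, 1.0, 2.0, 3.0, 5.0]:
    vergence = 2 * np.arctan(IPD / (2 * z))   # vergence angle when fixating at depth z
    dz = (z ** 2 / IPD) * noise               # first-order depth error: dz ~ (z^2 / IPD) * dtheta
    print(f"z = {z:3.1f} m   vergence = {np.degrees(vergence):4.2f} deg   depth error ~ +/-{dz:4.2f} m")
```

At 3 m the vergence angle is only about 1.2 degrees, so a half-degree error in the gaze vectors already corresponds to more than a meter of depth uncertainty.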
I strongly suggest that you use the same method of data representation for both the 2D and 3D demos: the discs on the screen. You may offer the option to try to recover depth of gaze, but please do not make it the default representation.
I also **strongly** recommend an explicit explanation of what differs between the 2D and 3D marketplace demos.
Thanks!
gD