
Thought: tag elements near point-of-gaze w/ colors, IDs, and lines – project to large elements #71

Closed
JeffKang opened this issue Jul 31, 2015 · 1 comment

@JeffKang

I had a thought about target selection, which I posted in the Eye Tribe forums last year.

http://theeyetribe.com/forum/viewtopic.php?f=8&t=189&sid=8838daf228a29ef23adac83538fd1ca1

I was wondering if anyone could tell me how difficult this would be to create, as I’ve never seen it done anywhere.

When dealing with elements not designed for eye tracking, zooming and snapping might not consistently produce good results, as the size and arrangement of elements can vary greatly.

To help, an additional process could involve projecting larger, touch-friendly, Windows 8 Metro-like versions of non-touch/non-eye-tracking elements.
After an activation (which could be triggered by keyboard, EEG, or eye-tracker dwell), a program would scan for and detect the elements of an application (links, menus, menu items, drop-down list items, tabs, etc.) that are near the point-of-gaze.
Then, the program would project larger versions of those elements, while still somewhat preserving the layout.
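For the scanning step, here is a rough sketch of what "detect the elements near the point-of-gaze" could look like on Windows, using pywinauto’s UI Automation backend. The gaze coordinates, the 200 px radius, and the set of control types are all assumptions for illustration, not anything an existing tool does:

```python
import win32gui                      # pywin32, to find the foreground window
from pywinauto import Desktop        # pywinauto with the "uia" backend

# Control types treated as actionable -- an assumption for this sketch.
ACTIONABLE = {"Button", "Hyperlink", "MenuItem", "TabItem", "ListItem", "ComboBox"}

def elements_near_gaze(gaze_x, gaze_y, radius=200):
    """Return actionable UI elements within `radius` px of the gaze point."""
    hwnd = win32gui.GetForegroundWindow()
    window = Desktop(backend="uia").window(handle=hwnd)
    hits = []
    for ctrl in window.descendants():
        if ctrl.element_info.control_type not in ACTIONABLE:
            continue
        centre = ctrl.rectangle().mid_point()
        dist = ((centre.x - gaze_x) ** 2 + (centre.y - gaze_y) ** 2) ** 0.5
        if dist <= radius:
            hits.append((dist, ctrl))
    hits.sort(key=lambda pair: pair[0])   # nearest to the gaze point first
    return [ctrl for _, ctrl in hits]
```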


Here’s a screenshot with a mock-up of the larger, colored elements that project from the smaller elements: http://i.imgur.com/3erfG6K.png
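To "somewhat preserve the layout", one simple rule (purely an assumption about how the projection could work) is to scale each element’s offset from the gaze point and its size by the same factor, so relative positions are kept while everything grows and fans out:

```python
SCALE = 2.5  # assumed magnification factor

def project_rect(rect, gaze_x, gaze_y, scale=SCALE):
    """Map a small element's (left, top, right, bottom) box to its large copy."""
    left, top, right, bottom = rect
    cx, cy = (left + right) / 2, (top + bottom) / 2
    w, h = (right - left) * scale, (bottom - top) * scale
    # Push the centre away from the gaze point by the same factor, so
    # elements that didn't overlap before still don't overlap after.
    new_cx = gaze_x + (cx - gaze_x) * scale
    new_cy = gaze_y + (cy - gaze_y) * scale
    return (new_cx - w / 2, new_cy - h / 2, new_cx + w / 2, new_cy + h / 2)

# A 20x10 px link centred 50 px right of the gaze point becomes a
# 50x25 px button centred 125 px to the right:
print(project_rect((140, 95, 160, 105), gaze_x=100, gaze_y=100))
# -> (200.0, 87.5, 250.0, 112.5)
```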


It might operate on demand like the Vimium Chrome extension (keyboard-only web browsing without the mouse): Vimium overlays IDs on elements when they are needed for keyboard activation, and then the IDs disappear.
Similarly, you would bring out the alternate large-button elements, make a selection, and then continue in the regular, non-touch/non-eye-tracking interface.
The speed of this two-step selection process depends on how quickly a person can locate the projected larger elements.
In addition to IDs, the smaller elements to be selected could be tagged with colors.
Each small element would share a color with its corresponding large element.
With the matching colors, you might not even need the IDs.

(E.g., here’s a timestamped video that shows Vimium’s letters: http://youtu.be/t67Sn0RGK54?t=23s.
Here’s a picture of the letters: http://i.imgur.com/YxRok5K.jpeg
https://github.com/philc/vimium).
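As a sketch of the ID-plus-color tagging, here is one way the hint labels could be generated, Vimium-style: home-row characters, with a uniform label length so that no label is a prefix of another. The palette and alphabet are assumptions for illustration:

```python
from itertools import islice, product

HINT_CHARS = "asdfghjkl"                        # home-row keys, as Vimium uses
PALETTE = ["#e6194b", "#3cb44b", "#ffe119",     # assumed set of distinct colors
           "#4363d8", "#f58231", "#911eb4"]

def make_hints(n):
    """Return n (label, color) pairs; labels have uniform length,
    so no label is a prefix of another."""
    length = 1
    while len(HINT_CHARS) ** length < n:        # shortest length yielding n labels
        length += 1
    labels = ("".join(c) for c in product(HINT_CHARS, repeat=length))
    return [(label, PALETTE[i % len(PALETTE)])
            for i, label in enumerate(islice(labels, n))]

# E.g. five nearby elements: the small element and its projected large
# copy would both be drawn with the same label and color.
for label, color in make_hints(5):
    print(label, color)
```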


The interface for controlling Windows for Tobii’s $2000(?) PCEye tracker still uses magnification:
http://youtu.be/6n38nQQOt8U?t=4m22s.

That makes me think that finding and projecting new and actionable elements is very difficult to achieve.

A reply that I got from a Tobii rep in their forums:

May 11, 2014 at 10:52
Anders [Tobii]

Jeff, you’re quite right that small interface elements are difficult to select using eye tracking.
We’re currently prototyping a solution based on a two-step workflow.
It will be interesting to see how it works out for example on crowded web pages.


I really believe that if something like this were possible, even an average, non-disabled person could benefit from using it with keyboard activation.
A two-step pop-out process might seem slower, but with the ability to instantly move the cursor by eye before a selection, and no need to reach for and move a mouse, the mouse-less process may be faster in many instances.

@JeffKang JeffKang changed the title Tag elements near point-of-gaze w/ colors, IDs, and lines – project to large elements Thought: tag elements near point-of-gaze w/ colors, IDs, and lines – project to large elements Jul 31, 2015
@JuliusSweetland (Member)

Adding to the big list of potential tasks and closing this for the moment.
