The Journey of a Touch Point

This section describes how a touch pointer that originates at the user's finger makes its way to a gesture.

First, a touch pointer arrives at an Input Source. It doesn't matter which touch-enabled device produced it; it might be a mouse or a script generating fake pointers. All input sources have a CoordinatesRemapper property which may contain an instance of ICoordinatesRemapper. If it does, all touch pointers go through it before reaching TouchManager.

Remappers are used when an input device isn't aligned with the screen correctly, and its data needs to be rotated or scaled to match the screen.
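
As a minimal sketch, assuming ICoordinatesRemapper exposes a single Remap method that takes and returns screen coordinates (as in recent TouchScript versions), a remapper for a sensor mounted mirrored relative to the screen might look like this; the class name and flip logic are made up for illustration:

```csharp
using UnityEngine;
using TouchScript.InputSources;

// A minimal sketch of a remapper, assuming ICoordinatesRemapper exposes a
// single Remap(Vector2) method; verify against your TouchScript version.
public class HorizontalFlipRemapper : ICoordinatesRemapper
{
    public Vector2 Remap(Vector2 input)
    {
        // Mirror the x coordinate around the screen's vertical center line.
        return new Vector2(Screen.width - input.x, input.y);
    }
}
```

The remapper would then be assigned to an input source's CoordinatesRemapper property, for example from another component's Start() method.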

Since many pointer events can theoretically arrive between frames, TouchManager keeps them in buffers until the next Update. During that Update all buffered messages are processed and the following events are dispatched, in this order (see the subscription sketch after the list):

  1. FrameStarted
  2. PointersAdded
  3. PointersPressed
  4. PointersUpdated
  5. PointersReleased
  6. PointersRemoved
  7. PointersCancelled
  8. FrameFinished
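
As a rough illustration of where these events can be observed, the component below subscribes to a few of them. It assumes the TouchManager.Instance facade and EventHandler-based events carrying PointerEventArgs, which is how recent TouchScript versions expose them; the class and handler names are made up for this sketch.

```csharp
using System;
using UnityEngine;
using TouchScript;

// Logs the frame lifecycle and batched presses. Assumes TouchManager.Instance
// and PointerEventArgs from TouchScript; verify against your version.
public class FrameEventLogger : MonoBehaviour
{
    private void OnEnable()
    {
        TouchManager.Instance.FrameStarted += OnFrameStarted;
        TouchManager.Instance.PointersPressed += OnPointersPressed;
        TouchManager.Instance.FrameFinished += OnFrameFinished;
    }

    private void OnDisable()
    {
        TouchManager.Instance.FrameStarted -= OnFrameStarted;
        TouchManager.Instance.PointersPressed -= OnPointersPressed;
        TouchManager.Instance.FrameFinished -= OnFrameFinished;
    }

    private void OnFrameStarted(object sender, EventArgs e)
    {
        Debug.Log("Frame started");
    }

    private void OnPointersPressed(object sender, PointerEventArgs e)
    {
        // All pointers pressed between frames arrive batched in one event.
        Debug.Log(e.Pointers.Count + " pointer(s) pressed this frame");
    }

    private void OnFrameFinished(object sender, EventArgs e)
    {
        Debug.Log("Frame finished");
    }
}
```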

Before dispatching PointersPressed, TouchManager goes through all TouchLayer instances in the scene to determine whether one of them wants to take the pressed pointers (note that several pointers may be pressed during a single frame).

The most used layer type is StandardLayer. It checks whether a ray originating from the camera's position hits any collider or UI element in the scene. If it does, the system checks whether any HitTest instances are attached to the target object. These components can intercept successful raycasts and modify them; for example, Untouchable makes it impossible to touch an object.
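
As a hypothetical illustration of the idea, a custom hit test could reject pointers conditionally. The exact virtual method and its result type have varied between TouchScript versions, so treat the override signature below as an assumption to verify against the HitTest source in your version:

```csharp
using TouchScript.Hit;
using TouchScript.Pointers;

// Hypothetical hit test: the object is touchable only while a flag is set.
// The IsHit override signature is an assumption based on TouchScript 9's
// HitTest base class and may differ in other versions.
public class TouchableWhileFlagged : HitTest
{
    public bool AcceptTouches = true;

    public override ObjectHitResult IsHit(IPointer pointer, HitData hit)
    {
        // Miss rejects the raycast for this object, the way Untouchable does
        // unconditionally; Hit accepts it and the pointer proceeds as usual.
        return AcceptTouches ? ObjectHitResult.Hit : ObjectHitResult.Miss;
    }
}
```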

Note: before version 9.0 there was a CameraLayer, which is now merged into StandardLayer.

The layer sets the current target of the pointer, which is accessible via the Pointer.GetPressData() method.
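
For instance, a handler for PointersPressed could read that press data back. This sketch assumes GetPressData() returns a HitData whose Target property holds the transform that was hit, and that pointers expose an Id; these names are worth verifying against your TouchScript version.

```csharp
using UnityEngine;
using TouchScript;

// Logs which transform each pressed pointer landed on, using the hit data
// stored by the layer at press time.
public class PressTargetLogger : MonoBehaviour
{
    private void OnEnable()
    {
        TouchManager.Instance.PointersPressed += OnPointersPressed;
    }

    private void OnDisable()
    {
        TouchManager.Instance.PointersPressed -= OnPointersPressed;
    }

    private void OnPointersPressed(object sender, PointerEventArgs e)
    {
        foreach (var pointer in e.Pointers)
        {
            var hit = pointer.GetPressData(); // filled in by the layer
            if (hit.Target != null)
                Debug.Log("Pointer " + pointer.Id + " pressed " + hit.Target.name);
        }
    }
}
```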

When the target is determined, the touch pointer goes to GestureManager with the PointersPressed event. GestureManager checks whether any gesture on the target or in its transform hierarchy is interested in this pointer. This process is a bit tricky.

For example, consider an interface of nested boxes, where box E sits inside box C, which in turn sits inside box A. When the user touches box E, the system looks for all gestures on the boxes from the root object down to the target which are able to receive touch input (in this case boxes E, C and A). If there's no active gesture in the hierarchy containing the target, all these gestures receive the touch pointer until one of them changes its state to Recognized or Began.

Now let's assume that some gesture on box E received this touch pointer and began. It now owns the pointer, and all other gestures in the hierarchy which are not friendly to it and which return true from CanBePreventedByGesture(gesture) will be forced to fail and reset.
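
Since CanBePreventedByGesture(gesture) decides whether a gesture yields, a gesture can opt out of being preempted by overriding it. A minimal sketch, assuming the method is a public virtual on Gesture as the text above suggests:

```csharp
using TouchScript.Gestures;

// A tap gesture that refuses to be forced to fail by other recognized
// gestures in the hierarchy (it can still fail on its own).
public class StubbornTapGesture : TapGesture
{
    public override bool CanBePreventedByGesture(Gesture gesture)
    {
        // Returning false means no other gesture can preempt this one.
        return false;
    }
}
```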

Next, the user touches box C, the parent container of box E. The system once again gathers all gestures in the hierarchy of the new target and checks them against the gestures that are already active. In this case the gesture on box E is active, and it prevents the gestures on box C and its parents from beginning.

To make gestures in a hierarchy work together, we need to add one to another's Friendly Gestures list in the Inspector or via the AddFriendlyGesture(gesture) method. Friendly gestures can share owned touch pointers.
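
A minimal sketch of wiring up friendship at runtime; the two gesture references are placeholders for components on a parent and a child object, such as a TransformGesture on box C and a TapGesture on box E:

```csharp
using UnityEngine;
using TouchScript.Gestures;

// Makes two gestures friendly at runtime so both can recognize
// simultaneously over shared pointers.
public class MakeGesturesFriendly : MonoBehaviour
{
    public Gesture First;  // e.g. a gesture on a parent container
    public Gesture Second; // e.g. a gesture on a nested child

    private void Start()
    {
        // The relationship is mutual, so one call should be enough.
        First.AddFriendlyGesture(Second);
    }
}
```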

When PointersUpdated, PointersReleased or PointersCancelled events occur, the process is much simpler, since only the gestures which own the touch pointers in the event are notified.

So, that's how a touch pointer goes all the way from an input device to gestures.
