Main Ideas Behind TouchScript

Valentin Simonov edited this page Jul 30, 2017 · 6 revisions

TouchScript was developed by Valentin Simonov at Interactive Lab to:

  • Provide a reliable way to code multi-touch interfaces on various devices,
  • Unify different input methods,
  • Handle complex interactions between gestures,
  • Work the same way on large touch surfaces and small mobile phones.

Multi-touch interfaces on various devices

If you check Interactive Lab's projects you will see that they are very different. Looking only at the multi-touch apps, they may run on big multi-touch tables, huge multi-touch walls or (comparatively) small iPads. These devices use different touch input technologies and different APIs: tables might use Windows 8 native touch, walls might use TUIO, and iPads use the iOS native touch API. The first two aren't even supported by Unity out of the box.

That's why we needed a way to abstract away from devices and input sources, and to support input methods which are not built into Unity. TouchScript supports many input sources, which can even be used simultaneously. This approach makes it very easy to "feed" your app with data from any source, which it will interpret as touches.

You don't even need to use any of the gesture recognition code if you only want to get touch events. TouchScript can be used simply as a way to combine input sources.
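The idea of merging several input sources into one touch stream can be sketched as follows. This is a minimal, language-agnostic illustration in Python; all names (`InputSource`, `TouchManager`, `poll`, the fake sources) are hypothetical and are not TouchScript's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class TouchPoint:
    id: int
    x: float
    y: float


class InputSource(ABC):
    """Anything that can produce touch points: native touch, TUIO, mouse..."""

    @abstractmethod
    def poll(self) -> list[TouchPoint]:
        ...


class TouchManager:
    """Merges points from all registered sources into one stream."""

    def __init__(self):
        self.sources: list[InputSource] = []

    def add_source(self, source: InputSource):
        self.sources.append(source)

    def update(self) -> list[TouchPoint]:
        points = []
        for source in self.sources:
            points.extend(source.poll())
        return points


# Two fake sources standing in for, say, a TUIO tracker and a native touch API:
class FakeTuioSource(InputSource):
    def poll(self):
        return [TouchPoint(id=1, x=0.2, y=0.8)]


class FakeNativeSource(InputSource):
    def poll(self):
        return [TouchPoint(id=2, x=0.5, y=0.5)]


manager = TouchManager()
manager.add_source(FakeTuioSource())
manager.add_source(FakeNativeSource())
print(len(manager.update()))  # both sources feed the same stream
```

The point of the design is that the rest of the app only ever sees `TouchPoint`s and never cares where they came from.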

Gesture interactions

There's nothing hard about writing code that detects a tap or a zoom gesture. You can find hundreds of short gists on the Net which do just that.

But when you try to make gestures work simultaneously within an object hierarchy, you are going to have a hard time.

How do you make pan and scale gestures work together? Should a panel move if you drag a button on it? How do you make a button listen for both tap and double tap? Gesture interaction makes code over 9000 times more complicated. But Apple has already implemented a system which handles this and works really well. TouchScript is largely inspired by iOS GestureRecognizers.

This is especially relevant to large touch surfaces because many people can interact with them simultaneously.
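At the core of an iOS-style recognizer system is a policy question: may these two gestures recognize at the same time, or is one of them exclusive? The toy sketch below illustrates only that decision; the class and method names (`Gesture`, `can_recognize_with`, `friendly_with`) are made up for illustration and are not TouchScript's API.

```python
class Gesture:
    """A gesture that is exclusive by default but can declare 'friends'."""

    def __init__(self, name, friendly_with=()):
        self.name = name
        self.friendly_with = set(friendly_with)

    def can_recognize_with(self, other):
        # By default gestures are exclusive; "friendly" pairs may
        # recognize simultaneously (e.g. pan + scale on one object).
        return other.name in self.friendly_with


pan = Gesture("pan", friendly_with={"scale"})
scale = Gesture("scale", friendly_with={"pan"})
tap = Gesture("tap")

print(pan.can_recognize_with(scale))  # True: pan and scale cooperate
print(pan.can_recognize_with(tap))    # False: tap stays exclusive
```

Real systems layer more on top of this (hierarchy traversal, gesture states, delegation), but the pairwise "may we run together?" check is the piece that lets a panel pan and scale at the same time without also firing every tap underneath it.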

If you want to know more about how gesture recognition works in TouchScript (especially if you want to contribute to the project), please read the dedicated sections of this wiki.

Small and large touch devices

Touch interfaces must take into account how big the target touch surface is. Interfaces on an iPad and on a 23" touch monitor must be different, not to mention a 6-meter-wide multi-touch wall vs. an iPhone.

This essentially comes down to several key attributes, the most important of which are:

  • DPI,
  • Point clusters.

Your smartphone most likely has more than 120 pixels per centimeter, while your TV may have as few as 16. If you design an interface in pixels and try to use it both on a smartphone and on a TV, it obviously won't work.

TouchScript works in centimeters and provides ways to set (or detect) the current DPI, so a 2 cm swipe will always be a 2 cm swipe, no matter whether you are testing your app on an iPad or on a 100" TV.
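The conversion behind this is simple arithmetic: pixels = cm × DPI / 2.54. The sketch below shows the kind of math a DPI-aware framework does internally; the DPI figures are illustrative examples, not values taken from TouchScript.

```python
CM_PER_INCH = 2.54


def cm_to_pixels(cm: float, dpi: float) -> float:
    """Physical distance in cm -> pixel distance on a screen of given DPI."""
    return cm * dpi / CM_PER_INCH


# The same 2 cm swipe covers very different pixel distances:
phone = cm_to_pixels(2.0, 326)  # Retina-class phone, ~128 px per cm
tv = cm_to_pixels(2.0, 40)      # large TV, ~16 px per cm
print(round(phone), round(tv))  # ~257 px vs ~31 px
```

A pixel-based swipe threshold tuned for the phone would be nearly impossible to trigger on the TV, which is exactly why thresholds are expressed in centimeters instead.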

Point Clusters

While interaction with a smartphone takes two fingers at most, people tend to work with large touch surfaces differently. Even when a single person works with a large touch surface, they tend to tap UI elements with several fingers instead of one, and to perform zoom or rotation gestures with both hands.

In these situations, working with individual points may lead to jerky movement or totally wrong results. The system must think in point clusters instead of individual points, which is what TouchScript has done from the beginning. Most gestures work with clusters but have simpler versions for cases where clustering is not required.
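The clustering idea can be sketched in a few lines: group touch points that are close together and treat each cluster's centroid as a single logical pointer. The greedy distance-threshold approach below is my own minimal illustration, not TouchScript's actual algorithm.

```python
import math


def cluster(points, max_dist=2.0):
    """Greedily group (x, y) points whose distance to a cluster's
    centroid is within max_dist; return one centroid per cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            cx = sum(q[0] for q in c) / len(c)
            cy = sum(q[1] for q in c) / len(c)
            if math.hypot(p[0] - cx, p[1] - cy) <= max_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [(sum(q[0] for q in c) / len(c), sum(q[1] for q in c) / len(c))
            for c in clusters]


# Three fingers of one hand plus one finger of the other hand:
touches = [(0.0, 0.0), (0.5, 0.5), (1.0, 0.0), (10.0, 10.0)]
print(cluster(touches))  # two centroids instead of four raw points
```

A gesture fed with the two centroids sees a stable two-pointer zoom, whereas a gesture fed with the four raw points would jitter every time one finger of a hand lifts or lands.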
