Vega - tangible recognition for infrared overlays
This wiki is still in a set-up phase! In the meantime you might want to read this document instead of the wiki: Thesis
All of the code is licensed under GPLv3.
This project was created by Thomas Becker as a student at the Fachhochschule Düsseldorf, University of Applied Sciences, during his time as a Technical Student at CERN.
The recent introduction of multi-touch sensitive displays has brought with it the challenge of recognizing tangibles on these kinds of screens.
There are several widespread and/or sophisticated solutions that fulfil this need, but they all have flaws. Printed patterns, for example, can be recognized quite well by displays with integrated optical sensors, but those systems are either time-consuming to calibrate or do not work properly under extensive illumination.
One popular system is an overlay frame that is placed on a normal display of the corresponding size. The frame creates a grid of beams from infrared light-emitting diodes. Disruptions of this grid are detected, and messages with the touch positions are sent via USB to a connected computer. (Note: this is a simplified description of what actually happens.)
This system is quite robust against ambient light and also fast to calibrate. Unfortunately, it was not designed with the recognition of tangibles in mind, and printed patterns cannot be resolved. This software is an attempt to create fiducials that are recognized by an infrared multi-touch frame as fingers. These false fingers are checked by software against known patterns. Once a known pattern (= fiducial) has been recognized, its position and orientation are sent along with the finger positions to the interactive software.
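To illustrate the idea, here is a minimal sketch of how a set of reported touch points could be checked against known fiducial patterns. All names are hypothetical and the matching strategy (comparing sorted pairwise distances, which is invariant under rotation and translation) is an assumption for illustration, not the actual Vega implementation.

```python
# Hypothetical sketch: match a set of touch points ("false fingers")
# against stored fiducial patterns. The sorted list of pairwise
# distances is invariant under rotation and translation, so it can
# serve as a simple pattern signature.
from itertools import combinations
from math import dist


def signature(points):
    """Rotation/translation-invariant signature: sorted pairwise distances."""
    return sorted(dist(a, b) for a, b in combinations(points, 2))


def match_fiducial(touches, known_patterns, tol=2.0):
    """Return the name of the first known pattern matching the touches."""
    sig = signature(touches)
    for name, pattern in known_patterns.items():
        ref = signature(pattern)
        if len(ref) == len(sig) and all(
            abs(a - b) <= tol for a, b in zip(sig, ref)
        ):
            return name
    return None
```

A translated or rotated copy of a stored constellation yields the same distance signature and is therefore matched to the same fiducial.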
Status quo in tangible recognition
There are several approaches to recognizing tangibles on multi-touch displays. They often share some of these flaws:
- Difficult to build
- Only working if the ambient light is set up in a specific way
- Need frequent and/or extensive calibration
Most of them, however, recognize fiducials quite reliably.
The Reactable-like approach
One of the most widely used techniques for fiducial or marker recognition is optical tracking of printed markers with an infrared camera. The basic setup is a surface with back projection. An infrared camera is positioned on the same side as the projector (the back) and records an infrared image of the projection surface. With this technique it is possible to recognize touches and printed markers on the “display surface”, which can then be identified by pattern-recognition techniques. [Kaltenbrunner, M. & Bencina, R. (2007)] This system works quite well, but needs to be recalibrated whenever the projector or the camera is displaced. Furthermore, the system has the height of a table due to the projector and camera underneath. Another issue can be the cooling of the projector, as it generates heat.
Capacitive multi-touch screens
Capacitive multi-touch screens measure the capacitance of a transparent, conducting layer on top of an insulator such as glass. The position is obtained by different techniques, e.g. measuring the capacitance from the four corners of the display. [Wikipedia Touchscreen] Tangible recognition with capacitive multi-touch screens is robust and accurate. Unfortunately, capacitive screens are only available in small (< 20 inch) dimensions, whereas a screen of at least 55 inches is usually needed when multiple people want to interact with it. These screens are therefore not suitable.
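The four-corner measurement mentioned above can be sketched as a simple interpolation: the closer the touch is to a corner, the more current that corner electrode draws. The linear model and all names below are assumptions for illustration, not a specific controller's algorithm.

```python
# Illustrative sketch of four-corner position estimation on a surface
# capacitive screen: the touch position is interpolated from the
# current drawn through each corner electrode (tl = top-left, etc.).
def touch_position(i_tl, i_tr, i_bl, i_br, width, height):
    """Estimate the touch coordinates from the four corner currents."""
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total * width    # more current on the right -> larger x
    y = (i_bl + i_br) / total * height   # more current at the bottom -> larger y
    return x, y
```

Equal currents at all four corners place the touch at the centre of the screen; any imbalance shifts the estimate toward the corners drawing more current.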
Microsoft PixelSense (formerly Surface)
This system is an LCD display in which each pixel has RGB values plus a sensor to detect infrared light. The infrared light is emitted together with the normal backlight from behind the LCD. Once an object is placed on the surface of the display, it reflects the infrared light back to the sensors in the LCD. This makes it possible to see the surface like a black-and-white scanner. [Microsoft PixelSense] At the time of writing, the Microsoft PixelSense system is only available in a 40 inch version. Recent tests at the UAS Düsseldorf also showed that using this table can be difficult, for example in a studio environment, where the sensors are “blinded” by the ambient light.
Overlays for displays and projection surfaces can be used to extend a stock display or TV with touchscreen capabilities. Due to their frame-like design they can easily be affixed to a screen without losing image quality; in this way even 3D TVs can be retrofitted to be touch-sensitive. The technique behind this system is a horizontal and vertical grid of infrared rays, created by infrared LEDs and received by infrared-sensitive sensors. Disrupting the grid creates a signal with the position of the interference. The system only needs a single calibration to set up its position relative to the screen, which is done with four finger touches. Unfortunately, this system detects touches in a layer above and parallel to the screen, thus rendering it unable to detect, for example, printed tangibles.
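How such a frame turns blocked beams into coordinates can be sketched as follows. This is a deliberately simplified model under assumed names: real frames interpolate between beams and report positions via USB HID, but the principle of averaging the interrupted beam indices is the same.

```python
# Sketch of how an IR overlay frame could derive a touch coordinate:
# the indices of the interrupted vertical beams give x, those of the
# interrupted horizontal beams give y (simplified; real controllers
# interpolate and track multiple touches).
def beams_to_touch(blocked_cols, blocked_rows, pitch_mm=5.0):
    """Return the touch centre in mm from lists of blocked beam indices."""
    if not blocked_cols or not blocked_rows:
        return None  # no touch: both axes must be interrupted
    x = sum(blocked_cols) / len(blocked_cols) * pitch_mm
    y = sum(blocked_rows) / len(blocked_rows) * pitch_mm
    return x, y
```

A fingertip typically interrupts two or three adjacent beams per axis; averaging their indices yields a centre position with sub-beam-pitch resolution.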
False fingers - another approach
The basic idea is to use tangibles that glide on “false fingers” on the display surface. In that manner the frame can detect those “false fingers” and provide the information on their positions to the receiving software.
The software can then check whether the constellation of points (false fingers) is stored in a database; if so, the object is recognized.
Once it is recognized, the position and orientation of the marker can be calculated.
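A minimal sketch of that last step, under an assumed convention: the marker's position is taken as the centroid of its false fingers, and its orientation as the angle from the centroid to a distinguished point (here, the point farthest from the centroid). The actual Vega software may define the reference point differently.

```python
# Sketch (assumed convention): position = centroid of the false
# fingers; orientation = angle from the centroid to the point
# farthest away from it.
from math import atan2, degrees, hypot


def pose(points):
    """Return ((cx, cy), angle_in_degrees) for a matched constellation."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # Distinguished point: the false finger farthest from the centroid.
    tip = max(points, key=lambda p: hypot(p[0] - cx, p[1] - cy))
    angle = degrees(atan2(tip[1] - cy, tip[0] - cx))
    return (cx, cy), angle
```

This only yields a unique orientation if the pattern is asymmetric enough that the farthest point is unambiguous, which is one reason fiducial constellations are usually designed with irregular spacing.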
The advantages of this system are:
- The tangibles can be placed directly on a screen
- No projector or camera is needed
- Only a one-time calibration is needed
The maximum calibration error is bounded by how far the frame can shift relative to the display; with a professional fixation this can be reduced to less than 1 mm.