Surface computing
Surface computing is the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects. Instead of a keyboard and mouse, the user interacts with a surface. Typically the surface is a touch-sensitive screen, though other surface types like non-flat three-dimensional objects have been implemented as well. It has been said that this more closely replicates the familiar hands-on experience of everyday object manipulation.
Early work in this area was done at the University of Toronto, Alias Research, and MIT. Surface work has included customized solutions from vendors such as GestureTek and Applied Minds (for Northrop Grumman). Major computer vendor platforms are in various stages of release: the iTable by PQLabs, Linux MPX, the Ideum MT-50, the interactive bar by spinTOUCH, and Microsoft PixelSense.
Surface types
Surface computing employs two broad categories of surface types, flat and non-flat. The distinction is made not only due to the physical dimensions of the surfaces, but also the methods of interaction.
Flat
Flat surface types refer to two-dimensional surfaces such as tabletops. This is the most common form of surface computing in the commercial space, as seen in products like Microsoft's PixelSense and the iTable. These commercial products use a multi-touch LCD screen as a display, but other implementations use projectors. Part of the appeal of two-dimensional surface computing is the ease and reliability of interaction. Since the advent of tablet computing, a set of intuitive gestural interactions has been developed to complement two-dimensional surfaces. However, the two-dimensional plane limits the range of interactions a user is able to perform. Furthermore, interactions are only detected when making direct contact with the surface. In order to afford the user a wider range of interaction, research has been done to augment the interaction schemes for two-dimensional surfaces. This research involves using the space above the screen as another dimension for interaction, so, for example, the height of a user's hands above the surface becomes a meaningful distinction for interaction. This particular system would qualify as a hybrid that uses a flat surface, but a three-dimensional space for interaction.
Non-flat
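The idea of treating hand height above a flat surface as an extra interaction dimension can be sketched as a simple classifier. This is an illustrative assumption, not the scheme of any specific product: the mode names and millimeter thresholds below are hypothetical.

```python
def interaction_mode(height_mm: float) -> str:
    """Map the tracked height of a hand above a flat surface to an
    interaction mode (hypothetical thresholds for illustration)."""
    if height_mm <= 0:
        return "touch"    # direct contact: ordinary touch input
    elif height_mm < 50:
        return "hover"    # just above the surface: e.g. preview/highlight
    elif height_mm < 300:
        return "gesture"  # mid-air: freehand gestural input
    else:
        return "idle"     # hand too far away: ignore

# A hand resting on the surface is a touch; one 10 cm up is a gesture.
print(interaction_mode(0), interaction_mode(100))
```

A real system would smooth the tracked height over time before classifying it, so that sensor noise near a threshold does not cause the mode to flicker.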
While most work with surface computing has been done with flat surfaces, non-flat surfaces have become of interest to researchers. The eventual goal of surface computing itself is tied to the notion of ubiquitous computing, "where everyday surfaces in our environment are made interactive". These everyday surfaces are often non-flat, so researchers have begun exploring curved and three-dimensional modes, including spherical, cylindrical, and parabolic surfaces. Adding a third dimension to surface computing presents both benefits and challenges. One benefit is an extra dimension of interaction: unlike flat surfaces, three-dimensional surfaces allow for a sense of depth and are thus classified as "depth-aware" surfaces, allowing for more diverse gestural interactions. However, one of the main challenges is designing intuitive gestural actions to facilitate interaction with these non-flat surfaces. Furthermore, three-dimensional shapes such as spheres and cylinders require viewing from all angles, also known as omnidirectional displays. Designing compelling views from every angle is a difficult task, as is designing applications that make sense for these display types.
Technological components
Display
Displays for surface computing can range from LCD and projection screens to physical object surfaces. Alternatively, an augmented reality headset may be used to display images on real-world objects. Displays can be divided into single-viewpoint and multi-viewpoint displays. Single-viewpoint displays include any flat screen or surface where viewing is typically done from one angle. A multi-viewpoint display includes any three-dimensional object surface, like a sphere or cylinder, that allows viewing from any angle.
Projectors
If a projection screen or a physical object surface is being used, a projector is needed to superimpose the image on the display. A wide range of projectors are used, including DLP, LCD, and LED. Front and rear projection techniques are also utilized. The advantage of a projector is that it can project onto any arbitrary surface. However, with front projection the user will cast shadows onto the display itself, making it harder to identify high detail.
Infrared cameras
Infrared or thermographic cameras are used to facilitate gestural detection. Unlike ordinary digital cameras, infrared cameras operate independently of visible light, relying instead on the heat signature of an object. This is beneficial because it allows for gesture detection in all lighting conditions. However, cameras are subject to occlusion by other objects, which may result in a loss of gesture tracking. Infrared cameras are most common in three-dimensional implementations.
Interaction methods
Various methods of interaction exist in surface computing. The most common is touch-based interaction, which includes single- and multi-touch input. Other methods exist as well, such as freehand 3D interactions that depth-aware cameras can sense.
• Two-dimensional: Traditional surface types are typically two-dimensional and only require two-dimensional touch interactions. Depending on the system, multi-touch gestures, such as pinch to zoom, are supported. These gestures allow the user to manipulate what they see on the surface by physically touching it and moving their fingers across the surface. For sufficiently large surfaces, multi-touch gestures can extend to both hands, and even to multiple sets of hands in multi-user applications.
• Three-dimensional: Using depth-aware cameras, it is possible to detect three-dimensional gestures. Such gestures allow the user to act in three dimensions of space without having to come into contact with the surface itself. DepthTouch, for example, uses a depth-sensing camera, a projector, a desktop computer, and a vertical screen for the user to interact with. Instead of physically touching the screen, the user manipulates the objects displayed on it by making freehand gestures in mid-air; the depth-aware camera detects the user's gestures, and the computer processes them to reflect the user's actions on the display.
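The pinch-to-zoom gesture mentioned above reduces to simple geometry: the zoom factor is the ratio of the current distance between the two touch points to their distance when the gesture began. A minimal sketch, assuming touch points arrive as (x, y) pixel coordinates (the function names here are illustrative, not from any particular SDK):

```python
import math

def spread(p, q):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_touches, current_touches):
    """Zoom factor implied by a two-finger pinch: ratio of the current
    finger spread to the spread when the gesture started.
    > 1 means zoom in (fingers moved apart), < 1 means zoom out."""
    return spread(*current_touches) / spread(*start_touches)

# Fingers start 100 px apart and spread to 200 px: zoom in by 2x.
scale = pinch_scale([(0, 0), (100, 0)], [(0, 0), (200, 0)])
print(scale)  # 2.0
```

A full gesture recognizer would also track the midpoint of the two touches (to zoom about the pinch center) and the angle between them (to support rotation), but the scale computation is the core of the gesture.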