Lab:Multitouch Device

From CS294-84 Spring 2013


You will build your own input device and write a small application that demonstrates its capabilities.

You will submit code, a demo video, and a short written description.

In the first two weeks, focus on input only - we'll add projection output later.

Device: Multitouch Trackpad

Build a computer-vision-based multitouch trackpad (i.e., a device with capabilities similar to the Apple Magic Trackpad, but with camera input instead of capacitive sensing). This is an indirect input device (output occurs on a screen somewhere else). Your trackpad should be able to sense multiple simultaneous touch points.

Then write a software application that makes use of this touch data: at a minimum, create an image viewer (similar to the iPhone photo viewer) that implements common gestures (swipe, pinch, drag) for navigating, zooming, and panning images.

Screen capture of a sample implementation: File:ImageNavDemo1.mp4

Hardware: Sensing

You will sense touches on a surface using the Rear Diffused Illumination approach, in which a partly transparent surface is illuminated from behind with infrared (IR) light, and an infrared camera senses light reflected from fingers on the surface.

Basic touch sensing can be accomplished without IR, using a standard webcam to look for shadows in the visible light spectrum. However, such an approach has an important limitation: when building a direct-input interactive surface (projection on the same surface used for input), the projected image would interfere with visible light sensing. A common strategy is therefore to shift sensing into the infrared spectrum by using IR illumination and an IR-pass / visible-light-cut filter on the camera. An early and straightforward implementation of a Rear Diffused Illumination system is covered in this 2-page paper:

Matsushita, N. and Rekimoto, J. 1997. HoloWall: designing a finger, hand, body, and object sensitive wall. In Proceedings of UIST 1997, p. 209-210. web site

In an IR setup, you need an IR illuminant and an IR camera. We will provide both to you. For reference: to sense only IR light, the camera must block light in the visible spectrum and pass light in the IR spectrum. Industrial automation cameras often come with a choice of filter, but most affordable consumer webcams have built-in IR-cut filters. When using such cameras, you will need to remove the IR-cut filter and replace it with a visible-light-cut filter. A very popular consumer-grade camera for this purpose is the Sony PlayStation 3 Eye. We will make cameras available to you that have already been modified for infrared sensing; DIY instructions are in the video tutorial "PS3 Eye Camera: Removing IR Blocking Filter, Installing Visible Blocking Filter." Modified PS3 Eye cameras like the ones we are providing are also available for sale from PeauProductions.

You will also need an infrared illuminant. An affordable choice is the type of IR illuminator sold for home security systems (search eBay for "IR illuminator"). These devices usually have a light sensor that turns the LEDs off during the day; to override this behavior, cover the light sensor with a piece of tape. Finally, make sure the LEDs emit light at the same wavelength your IR-pass filter passes. Two common choices are 850 nm and 940 nm.

We are providing the following hardware to each student team:

  • One Sony PlayStation 3 Eye Camera, modified to sense light only in the infrared spectrum in a band centered around 850nm.
  • One infrared illuminant with 850nm IR LEDs, and a 12V power adapter for it. You still need to modify the illuminant to bypass the light sensor as described above.
  • One set of M12 lenses with different focal lengths for the camera.

Hardware: Frame

You will have to build a frame or enclosure for your device. I suggest using a surface size of approximately 8x10 inches or smaller, to keep your project manageable and mobile.

To get inspiration, you may want to look at tutorials on building multitouch trackpads and tables. A cardboard box with a cut-out top may be a good prototype to get started; however, to save yourself a lot of calibration trouble, a more rigid setup made out of wood or acrylic is preferable.

It is especially important to find a good way to mount the camera in a fixed position in the frame. One way to achieve this is to attach a 1/4"-20 hex nut to the camera base (with epoxy or super glue). This is the standard tripod thread size for consumer cameras, so you can then use standard tripods, clamps, and bolts.

You will also need to build a transparent surface top with a diffuser. A thin sheet of acrylic with a layer of tracing paper or vellum works well enough. You can buy acrylic sheets cut to custom sizes from TAP Plastics, and vellum from any art supply store.

Software: Tracking touches

First you will need an appropriate camera driver for your platform. Note that the supplied Sony PS3 Eye camera requires a third-party driver on most operating systems.

Finger tracking consists of identifying touch points in the camera image and tracking them across frames. You may use an existing application or write your own tracking algorithm. Popular existing packages include Community Core Vision (CCV) and reacTIVision.

These packages can export touch data in one or more common protocols such as TUIO. You can receive messages in this format in Processing or any number of other languages to write your multi-touch-aware application.
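As a sketch of what receiving TUIO data looks like outside of Processing: TUIO messages travel over OSC (UDP port 3333 by default), and the 2Dcur cursor profile delivers "set" messages carrying a session id and normalized (x, y) coordinates. The example below assumes the third-party python-osc package (`pip install python-osc`); the parsing helper is a hypothetical name, not part of any TUIO library.

```python
# Sketch: receive TUIO 2Dcur cursor messages via OSC in Python.
# Assumes the python-osc package is installed (pip install python-osc).

def parse_2dcur(*args):
    """Extract (session_id, x, y) from a /tuio/2Dcur 'set' message, else None.

    TUIO 2Dcur payloads start with a command string: "alive", "set", or "fseq".
    Only "set" messages carry cursor coordinates (normalized to 0..1).
    """
    if args and args[0] == "set":
        return (int(args[1]), float(args[2]), float(args[3]))
    return None

def listen(host="127.0.0.1", port=3333):
    """Block and print every touch point a TUIO tracker sends us."""
    from pythonosc import dispatcher, osc_server
    disp = dispatcher.Dispatcher()
    # The first handler argument is the OSC address; the rest is the payload.
    disp.map("/tuio/2Dcur", lambda addr, *a: print(parse_2dcur(*a)))
    server = osc_server.ThreadingOSCUDPServer((host, port), disp)
    server.serve_forever()
```

Call `listen()` while your tracker is running to see a stream of `(session_id, x, y)` tuples; each session id identifies one finger across frames.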

If you want to implement your own tracking, I suggest using OpenCV, a powerful computer vision toolkit. Processing does NOT directly support OpenCV. However, openFrameworks, a C++ application framework that is very similar to Processing, includes an OpenCV wrapper. There are also Python bindings for OpenCV.

Software: Application Layer

Write an application that receives touch data and uses it to control browsing of multiple images. At a minimum, implement the following features:

  • Panning: dragging with a single finger should pan the current image
  • Zooming: pinch with two fingers to zoom in/out
  • Navigation: flicking left/right with a single finger should switch to the previous/next image
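The math behind these gestures is small. Panning translates the image by the finger's frame-to-frame displacement, and pinch-zooming scales it by the ratio of the current to the initial distance between the two fingers. A minimal sketch (the function names are my own, not from any framework):

```python
import math

def pan_delta(prev, cur):
    """Single-finger drag: the image offset moves by the finger's displacement."""
    return (cur[0] - prev[0], cur[1] - prev[1])

def pinch_scale(prev_a, prev_b, cur_a, cur_b):
    """Two-finger pinch: the zoom factor is the ratio of finger distances."""
    d0 = math.dist(prev_a, prev_b)
    d1 = math.dist(cur_a, cur_b)
    return d1 / d0 if d0 > 0 else 1.0

print(pan_delta((5, 5), (8, 9)))                       # (3, 4)
print(pinch_scale((0, 0), (10, 0), (0, 0), (20, 0)))   # 2.0 (fingers spread apart)
```

A flick for navigation is usually detected at finger-up: if the contact's horizontal velocity exceeds some threshold, switch images instead of panning. To keep zooming centered between the fingers, also pan by the displacement of the two fingers' midpoint.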

Features you may implement for extra credit:

  • Inertia: implement pseudo-physics so image objects keep moving after you release them
  • Annotation: enable users to draw on images. Switch between drawing and navigating/panning modes with a long hold of a single contact.
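For the inertia feature, a common approach is to record the finger's velocity at release, then on each animation frame advance the image by that velocity and decay it with a friction factor until it drops below a stop threshold. One possible sketch, with made-up constants you would tune by feel:

```python
def step_inertia(pos, vel, friction=0.95, dt=1/60, stop_speed=2.0):
    """One animation step: advance position by velocity, decay velocity.

    pos, vel are (x, y) tuples in pixels and pixels/second; friction is the
    per-frame velocity retention; motion snaps to a stop below stop_speed.
    """
    new_pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    new_vel = (vel[0] * friction, vel[1] * friction)
    if (new_vel[0] ** 2 + new_vel[1] ** 2) ** 0.5 < stop_speed:
        new_vel = (0.0, 0.0)
    return new_pos, new_vel

# Release the image with a rightward velocity of 600 px/s; it glides to a stop.
pos, vel = (0.0, 0.0), (600.0, 0.0)
while vel != (0.0, 0.0):
    pos, vel = step_inertia(pos, vel)
print(round(pos[0], 1))  # total glide distance is bounded by vel*dt/(1-friction)
```

Because the decay is geometric, the glide distance is capped at `vel[0] * dt / (1 - friction)` (200 px for these constants), so images never fly off arbitrarily far.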

Submission Instructions

(the usual)