HW 2 - Input Device

Due: September 24

Overview

You will build your own input device and write a small application that demonstrates its capabilities.

You will submit code, a demo video, and a short written description.


Device: Multitouch Trackpad

Build a computer-vision-based multitouch trackpad (i.e., a device with capabilities similar to the Apple Magic Trackpad, but with camera input instead of capacitive sensing). This is an indirect input device (output occurs on a screen somewhere else). Your trackpad should be able to sense multiple simultaneous touch points.


Then write a software application that makes use of this touch data: at a minimum, create an image viewer (similar to the iPhone photo viewer) that implements common gestures (swipe, pinch, drag) for navigating, zooming, and panning images.

Screen capture of a sample implementation: File:ImageNavDemo1.mp4


Hardware: Sensing

You will sense touches on a surface using the Rear Diffused Illumination approach, in which a partly transparent surface is illuminated from behind with infrared (IR) light, and an infrared camera senses light reflected from fingers on the surface.

Basic touch sensing can be accomplished without IR, using a standard webcam and looking for shadows in the visible light spectrum. However, this approach has an important limitation: when building a direct-input interactive surface (projection on the same surface used for input), the projected image would interfere with visible-light sensing. A common strategy is therefore to shift sensing into the infrared spectrum by using IR illumination and an IR-pass / visible-light-cut filter on the camera. An early and straightforward implementation of a Rear Diffused Illumination system is covered in this 2-page paper:

Matsushita, N. and Rekimoto, J. 1997. HoloWall: designing a finger, hand, body, and object sensitive wall. In Proceedings of UIST 1997, pp. 209-210.

In an IR setup, you need an IR illuminant and an IR camera; we will provide both to you. For reference: to sense only IR light, the camera must block light in the visible spectrum and pass light in the IR spectrum. Industrial automation cameras often come with a choice of filter, but most affordable consumer webcams have built-in IR-cut filters. When using such a camera, you need to remove the IR-cut filter and replace it with a visible-light-cut filter. A very popular consumer-grade camera for this purpose is the Sony PlayStation 3 Eye. We will make cameras available to you that have already been modified for infrared sensing. DIY instructions from NUIGroup.com: Video Tutorial - PS3 Eye Camera: Removing IR Blocking Filter, Installing Visible Blocking Filter. Modified PS3 Eye cameras like the ones we are providing are also available for sale from PeauProductions.

You will also need an infrared illuminant. An affordable choice is an IR illuminator sold for home security systems (search eBay for "IR illuminator"). These devices usually have a light sensor that turns the LEDs off during the day; to override this behavior, cover the light sensor with a piece of tape. Finally, make sure the LEDs emit light at the same wavelength your camera's IR-pass filter passes. Two common choices are 850 nm and 940 nm.


We are providing the following hardware to each student team:

  • One Sony PlayStation 3 Eye Camera, modified to sense light only in the infrared spectrum in a band centered around 850nm.
  • One infrared illuminant with 850nm IR LEDs, and a 12V power adapter for it. You still need to modify the illuminant to bypass the light sensor as described above.
  • One set of M12 lenses with different focal lengths for the camera.


Hardware: Frame

You will have to build a frame or enclosure for your device. I suggest using a surface size of approximately 8x10 inches or smaller, to keep your project manageable and mobile.

For inspiration, you may want to look at tutorials on Instructables.com on building multitouch trackpads and tables. A cardboard box with a cut-out top may be a good prototype to get started; however, to save yourself a lot of calibration trouble, a more rigid setup made out of wood or aluminum profile is preferable.

It is especially important to find a good way to mount the camera in a fixed position in the frame. One way to achieve this is to attach a 1/4"-20 hex nut to the camera base (with epoxy or super glue). This is the standard tripod thread size for consumer cameras, so you can then use standard tripods, clamps, and bolts.

You will also need to build a transparent surface top with a diffuser. A thin sheet of acrylic with a layer of tracing paper or vellum works well enough. You can buy acrylic sheets in custom sizes from TAP Plastics, and vellum from any art supply store.

Software: Tracking touches

First, you will need a camera driver for your platform that supports the supplied Sony PS3 Eye camera.

Finger tracking consists of identifying touch points in the camera image and tracking touches across frames. You may use an existing tracking application or write your own tracking algorithm.

Existing tracking packages can export touch data in one or more common protocols such as TUIO. You can receive messages in this format in Processing or any number of other languages to write your multi-touch-aware application.
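
For reference, here is a minimal Python sketch of a TUIO cursor receiver. It assumes the third-party python-osc package (not part of the assignment materials) and a tracker that sends the standard TUIO 1.1 /tuio/2Dcur profile to UDP port 3333, the protocol's usual default; all it does is maintain a dictionary of live touch positions.

 # Minimal TUIO 1.1 cursor receiver (sketch; assumes the python-osc package).
 from pythonosc.dispatcher import Dispatcher
 from pythonosc.osc_server import BlockingOSCUDPServer

 touches = {}  # session id -> (x, y), normalized to [0, 1]

 def on_2dcur(address, *args):
     if args[0] == "set":
         # /tuio/2Dcur set s x y X Y m : position plus velocity/acceleration
         session_id, x, y = args[1], args[2], args[3]
         touches[session_id] = (x, y)
     elif args[0] == "alive":
         # Any session id not listed as alive has been lifted off the surface.
         alive = set(args[1:])
         for sid in list(touches):
             if sid not in alive:
                 del touches[sid]

 dispatcher = Dispatcher()
 dispatcher.map("/tuio/2Dcur", on_2dcur)
 BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher).serve_forever()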

To implement your own tracking, I suggest using OpenCV, a powerful computer vision toolkit. Processing does NOT directly support OpenCV. However, openFrameworks, a C++ application framework that is very similar to Processing, includes an OpenCV wrapper. There are also Python bindings for OpenCV.
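
If you go the OpenCV route, the core per-frame pipeline is short. Below is one possible sketch using the Python bindings (the cv2 module, OpenCV 4.x API); it assumes the modified camera appears as an ordinary capture device and that you grab a reference frame of the empty surface at startup. It only finds per-frame touch centroids; matching touches across frames (for example, nearest-neighbor matching on position) is left to you.

 # Per-frame touch detection sketch (OpenCV 4.x Python bindings).
 import cv2

 cap = cv2.VideoCapture(0)                 # the modified camera, if recognized
 _, background = cap.read()                # empty-surface reference frame
 background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

 while True:
     ok, frame = cap.read()
     if not ok:
         break
     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
     # In rear DI, fingers reflect IR and appear brighter than the background.
     diff = cv2.subtract(gray, background)
     diff = cv2.GaussianBlur(diff, (11, 11), 0)
     _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # tune per setup
     contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
     touches = []
     for c in contours:
         if cv2.contourArea(c) < 50:       # reject specks; tune for finger size
             continue
         m = cv2.moments(c)
         touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
     # `touches` now holds per-frame (x, y) centroids in pixel coordinates.
     cv2.imshow("touch mask", mask)
     if cv2.waitKey(1) == 27:              # Esc quits
         break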

Software: Application Layer

Write an application that receives touch data and uses it to control browsing of multiple images. At a minimum, implement the following features:

  • Panning: dragging with a single finger should pan the current image
  • Zooming: pinching with two fingers should zoom in/out (see the sketch after this list)
  • Navigation: flicking left/right with a single finger should switch to the previous/next image
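
For the pinch gesture, a common approach is to scale the image by the ratio of the current to the previous distance between the two touch points and to pan by the displacement of their midpoint. A minimal Python sketch of that arithmetic (the function name and parameters are illustrative, not from any library):

 import math

 def pinch_update(p0, p1, q0, q1, scale, offset):
     """Update zoom/pan as two touches move from (p0, p1) to (q0, q1).

     scale is the current zoom factor, offset the current pan (x, y);
     all points are (x, y) tuples in screen coordinates.
     """
     d_before = math.dist(p0, p1)
     d_after = math.dist(q0, q1)
     if d_before > 0:
         scale *= d_after / d_before       # zoom by the change in finger spread
     mid_before = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
     mid_after = ((q0[0] + q1[0]) / 2, (q0[1] + q1[1]) / 2)
     offset = (offset[0] + mid_after[0] - mid_before[0],   # pan with the midpoint
               offset[1] + mid_after[1] - mid_before[1])
     return scale, offset

Applying this every frame (with p0/p1 as the previous frame's touch positions and q0/q1 as the current ones) gives smooth simultaneous zooming and panning; to zoom about the fingers rather than the image origin, you would additionally adjust the offset by the midpoint's position relative to the image.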

Features you may implement for extra credit:

  • Inertia: implement pseudo-physics so image objects keep moving after you release them (see the sketch after this list)
  • Annotation: enable users to draw on images. Switch between drawing and navigating/panning through a long hold of a single contact.
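
For the inertia feature, one simple model is to keep the velocity the image had at release and decay it each frame until it falls below a threshold. A Python sketch, with constants you would tune for your setup:

 FRICTION = 0.92        # per-frame velocity decay; closer to 1.0 glides longer
 MIN_SPEED = 0.5        # pixels per frame below which the image stops

 def inertia_step(position, velocity):
     """Advance a released image by one frame of pseudo-physics."""
     x, y = position
     vx, vy = velocity
     x, y = x + vx, y + vy
     vx, vy = vx * FRICTION, vy * FRICTION
     if (vx * vx + vy * vy) ** 0.5 < MIN_SPEED:
         vx = vy = 0.0
     return (x, y), (vx, vy)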



Submission Instructions

Create a Wiki Page for this assignment

Begin by creating a new wiki page for this assignment. Go to your user page that you created when you made your account. You can get to it by typing the following URL into your browser:

http://hci.berkeley.edu/cs260-fall10/index.php/User:FirstName_LastName

Replace FirstName and LastName with your real first and last names. This will take you to the page you created for yourself when you created your wiki account. If you have trouble accessing this page, please check that you created your wiki account properly.

Edit your user page to add a link to a new wiki page for this assignment. The wiki syntax should look like this:

[[Homework2-FirstNameLastName|Homework 2]]

Again replace FirstName and LastName with your name. Look at my user page for an example. Then click on the link and enter the information about your assignment. You should upload the files described below and describe any extra functionality you implemented and want us to review.

Upload Project

  • Your submitted project must include both the full source code as well as the executable of the working application.
  • Create a zip file of your project tree. Rename the zip file to firstname-lastname-hw2.zip (e.g., bjoern-hartmann-hw2.zip)
  • Upload the zip file to the Homework2-FirstNameLastName page you just created:
    • Create a new file link like this: [[File:firstname-lastname-hw2.zip]]
    • Save the page, then click on the File link you just created to upload the zip file.

Create & Upload Live Video

  • What your video should contain:
    • Since you built a new hardware device, this video should show live footage of you operating the device. Both the output screen and the device should be visible in the shot. Narrate your video. If the screen is overexposed, turn down the screen brightness and add room lighting. If you still have trouble getting legible video shots, you may also upload a screen recording in addition to the live video.
    • Be CONCISE. Your video shouldn't be longer than two minutes.
    • Be prepared to do multiple takes; plan and/or write out a script first.
  • Your file should be in WMV, MOV, or OGV format, and no larger than 10MB.
    • Rename the file to firstname-lastname-hw2.mov (or wmv/ogv; e.g., bjoern-hartmann-hw2.mov)
  • Upload the file to the Homework2-FirstNameLastName page you just created:
    • Create a new file link like this: [[File:firstname-lastname-hw2.mov]]
    • Save the page, then click on the File link you just created to upload the mov file.

Describe your implementation

  • On the Homework2-FirstNameLastName page you just created, write one to two paragraphs:
    • what platform, language, and tools you used (especially: what libraries you used and what you wrote yourself)
    • how you constructed and calibrated your device -- upload at least one still image of your device
    • what you learned from this assignment

Add Link to Your Finished Assignment

Once you are finished editing the page, add a link to it at the bottom of this page with your full name as the link text. The wiki syntax will look like this: *[[Homework2-FirstNameLastName|FirstName LastName]]. Hit the edit button for the last section to see how I created the link for my name.


Links to Finished Assignments

Add your submission below this line.