Sensing and Actuation

From CS260 Fall 2011

Bjoern's Slides

Media:cs260-06-sensing.pdf

Extra Materials

Discussant's Materials

Reading Responses

Valkyrie Savage - 9/14/2011 9:42:20

Main idea:

Here comes that word again: “natural.” The idea behind these designs is to create interfaces that demand very little of the user's attention by harnessing the power of touch: computers already use the auditory and visual channels for output, so using either of those channels for input as well would mean overload.

Response:

The Haptic Feedback paper made me a bit sad when it discussed how they were just getting into the swing of their research as their company was shuddering and dying beneath them. They presented a huge array of different prototypes and ideas for novel interactions based on the sense of touch, which made the paper a bit hard to follow, I guess. They didn’t seem to have conducted formal trials with any of their devices (they mentioned that they had been making use of some of these devices over the course of about 12 months with “regulars, novices, enthusiasts, and skeptics”), which I suppose is alright considering their circumstances.

As to the problem that they were trying to solve, they came right out and said that we’re getting sensory overload by having visually-demanding input devices as well as visually-stimulating output devices. I am inclined to agree that this is a problem; we discussed touch-typing in class last week, and I recently watched my boyfriend (who is a software engineer by trade) as he learned to touch type. Just removing himself visually from his inputting tasks improved his speed as well as his coding; he had more time to dwell on what he was doing once his attention was no longer taken up by thinking about how he was doing it. Also, the addition of haptic feedback creates a richer experience for the user: users like touching things, and when things touch back, it feels more “real.” I was amused by their stories of users who seemed to feel the “favorites” subconsciously, stopping on them without really knowing why.

Reading the Skinput paper felt like a glimpse into the future. Portable, human-mounted displays? Data entry on one’s arm? Awesome! As a girl who’s played a lot of video games and dreamed about these things for some time, I’m pretty excited. I enjoyed in particular their discussion of the piezoelectric devices attuned to differing frequencies (that being something with which I am rather unfamiliar). I thought their experimental design was good (they even went so far as to cite a clinical article about obesity during their discussion of BMIs and males vs. females), and I would be excited to read their future work. I wonder how difficult it would be to extend such a system to work on other parts of the body? I realize that most other parts of the body are made up of significantly more fat and less structure, with the exception of perhaps the feet, which might make a fairly awkward interface anyway (though they are quite suitable for driving). I was poking through the articles they cited, and it sounds like many people have capitalized on the idea of differing sound conduction from rubbing fingers, snapping, flicking, etc. I wonder why this isn’t a product yet? I suppose that, again as we discussed in class, the lag from academic research to corporate research to product is a long one...

In short, I thought these papers were exciting. I find things more engaging when I feel physically connected to them, and I can only assume that other people feel similarly. This is, for instance, a key part of the rapid prototyping idea: we like to touch stuff.


Galen Panger - 9/15/2011 23:26:13

There are a lot of great ideas, and good values, in "Haptic Techniques for Media Control." Their concept is simple—textured, force-feedback one-dimensional input knobs and sliders. And their values are solid—achieving modeless interactions that help restore some of the physicality of traditional media control tools like vinyl. I like that a lot of the HCI research we've read so far has as its mission to "restore" some of the good feelings we get from traditional real-world browsing, editing, steering, etc. tools. And I think this piece hits on two key components of what made those traditional, physical tools so compelling. They were modeless (what you get is what you expect), and they had force feedback (feedback you can feel!).

I trash talked the "direct manipulation" piece, but it wasn't because I didn't care about that good feeling that comes from great interfaces, by the way. I just resist unhelpful formalizations.

What more can I say about "Haptic Techniques"? I like their idea of "orthogonal force illusion," where you feel like you're going up or down a hill based on the resistance or lack thereof of the knob. "Haptic landmarks" are similar, and they get extended by concepts like "foreshadowing" and "alphabet browsing." Preview buttons I thought were great, too. Could be fun to integrate a "preview" effect into enter keys or mouse buttons so we can see what's going to happen before we execute something.
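To make the force illusion concrete: a haptic knob can render hills and landmarks by treating them as a potential field and outputting its negative gradient as motor torque. Here is a minimal Python sketch of that idea; the Gaussian-well model, constants, and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of 1-D haptic rendering: "landmarks" are potential wells that pull
# the knob toward favorite positions; a constant-slope "hill" resists motion
# in one direction and assists it in the other (the orthogonal force illusion).
# All constants here are invented for illustration.

LANDMARKS = np.array([0.5, 1.8, 3.0])   # knob angles (rad) of "favorites"
DETENT_WIDTH = 0.15                     # how wide each landmark's well feels
DETENT_DEPTH = 0.02                     # how strongly a landmark pulls

def landmark_torque(theta: float) -> float:
    """Torque pulling the knob toward nearby haptic landmarks (-dU/dtheta)."""
    torque = 0.0
    for lm in LANDMARKS:
        d = theta - lm
        # Gaussian potential well: U(d) = -DEPTH * exp(-(d/WIDTH)^2)
        torque += -2.0 * DETENT_DEPTH * d / DETENT_WIDTH**2 * np.exp(-(d / DETENT_WIDTH) ** 2)
    return torque

def hill_torque(slope: float = 0.01) -> float:
    """Constant 'uphill' resistance; reverse the sign to feel downhill."""
    return -slope

# A real device would evaluate this inside a ~1 kHz servo loop.
for theta in np.linspace(0.0, 3.5, 8):
    print(f"theta={theta:.2f} rad -> torque={landmark_torque(theta) + hill_torque():+.5f}")
```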

As for Skinput. I think it's interesting research, but they over-sold it. Their interaction model promises "roughly two square meters of external surface area," and due to proprioception, "eyes-free" operation. "Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area," they say. Mhmm. But proprioception isn't that high-resolution, it turns out. And that two-square-meter surface area is not laid out neatly and, honestly, a lot of it is hard to reach! (Unless people tap on our backs to operate our interfaces for us.)

So they hyped it too much, which is a turn-off. But there are some great things about the article. First, their related work section is sublime. I love learning about concepts that attempt to better utilize the body for computer input. Heart rate, skin resistance, brain sensing (which, interestingly enough, they say requires a lot of concentration), electromyography and, of course, bone conduction microphones and headphones. All fascinating.

Their own research on skin and bone conduction I think is great, too. I don't think that we should focus on tapping and interpreting all parts of the body; even having to tap one arm with the other seems cumbersome. But tapping, flicking, snapping, etc. behaviors that we can do with the fingers of one hand I do think are really promising. I could see this technology commercialized into Nike+ armbands that, upon the snap or flick of a finger, advance a song or call up your "power song." It could be a lot easier than messing with the iPod's controls.


Hong Wu - 9/18/2011 12:08:33

Main Idea:

“Skinput” demonstrates a wearable armband sensor that detects and localizes finger taps on the forearm and hand. “Haptic Techniques” describes a set of haptic techniques for manipulating digital media.

Interpretation:

“Skinput” detects finger taps by examining the acoustic pulses they produce. The sensor is non-invasive and easily removable. The disadvantage is that the design is tricky: the prototype cannot achieve very good accuracy (the highest is 88.8%), and it may not sense correctly when people run or exercise.

“Skinput” may become very popular once it gets smaller and more accurate. It could serve as an accessory for patients or animals to collect data.

“Haptic Techniques” is an early paper on haptic technique. Touch-based interaction has been widely used in today's products. Though the techniques described in the paper seem somewhat naïve now, the concepts and perspective were amazing for the paper's time.


Laura Devendorf - 9/18/2011 12:08:50

The readings for this week focused on novel sensing and feedback techniques such as haptics and using the skin as an input surface.

Haptic Techniques for Media Controls presents very interesting ideas and a number of devices that integrate haptics into the control of various types of media. The paper provides a survey of devices and techniques as well as a review of the benefits and setbacks of those approaches. The paper pushed for personal customization of haptic devices in order to support the adoption of such tools. Having such customized tools might pose a challenge to a consistent interaction style between various devices. The success of the haptic devices is also coupled closely to the visual feedback and metaphors used to simplify interaction. Most of the visual metaphors used in the paper were very effective; however, I found the inner and outer wheel metaphor to be a bit confusing. What struck me about the demo video was how many of the ideas had been implemented by Apple, particularly aspects of the FishEye in the earlier implementations of the iPod. What I found interesting is how they replaced the haptic feedback with audio feedback in the form of "tick" sounds.

Skinput outlines a method for detecting and classifying audio frequencies as they propagate through the body. Their method is able to precisely classify a number of regions on the body based on the audio signals created by pressing those points or combinations of those points. The paper provided evidence that Body Mass Index and bone density don't significantly reduce the effectiveness of such a system, suggesting that this sort of input could work for a large majority of the population. One of the limitations of this sort of input system is that it doesn't provide continuous manipulation; each press is a discrete event. The paper also describes at length that the benefit of the system is that you don't have to carry anything around. Yet their implementations use an armband and projector. While I would assume that the technology might reduce the bulk of the armband, I can't imagine a projector being so easily reduced in size.


Viraj Kulkarni - 9/18/2011 20:58:10

The paper on 'Skinput' presents a novel method of input which treats the user's skin as an input surface. Input is generated from taps in predetermined areas of the body. Along with presenting this technique, the paper also discusses the user study conducted to evaluate the technique. 'Haptic techniques for media control' asserts the usefulness of touch-based techniques for manipulating media and introduces a few such haptic techniques.

The authors provide a background on the model of wave transmission that takes place through the body once it is tapped, and build Skinput on this model. They present several methods of input based on the part of the body (fingers, forearm, etc.) to be used and the number of tap zones offered. Although this is a really interesting concept, I do not think it is very useful. They present 10 locations on the forearm that can be tapped. Personally, I would rather carry a controller with 10 buttons than walk around wearing an armband sensor that detects taps on 10 locations on my forearm! I feel the number of inputs that can be generated by this technique is too low. There are much better alternatives available: I can carry a controller or even use my cellphone to do the same. A cellphone can generate much more complex inputs than this technique can, and carrying a cellphone around is a lot easier than wearing an armband!

'Haptic techniques for media control' argues that we are used to manipulating objects in the real world by touch, and the tactile feedback we receive is an important part of the 'feeling' of manipulating objects. Computer interfaces like keyboards and mice do not offer tactile feedback and, while using these, the user feels distant or disconnected from the activity. The paper argues for using haptic techniques for such interactions and also introduces a few such techniques.


Amanda Ren - 9/18/2011 21:03:18

The Snibbe paper describes a set of techniques used to haptically control digital media like video, audio, voicemail, and computer graphics.

They based their techniques on the following design principles: a continuous controller is better than a button triggering a discrete action; touch is media-independent, so it reduces noise; use dynamic systems for control; and modeless interactions can be achieved by a consistent physical behavior. One device they had that I thought would be useful was the Alphabet Browser. Most people now have large media collections, and with the combined haptic knob and audio feedback, searching for a given title would require less effort. I thought that devices like the haptic clutch, which involve pressing on a force sensor, would not be as useful; it seems like a lot of effort to stop a video or to shuttle between frames. The paper concluded that the amount of haptic feedback to include in a complete system still isn't clear, but the authors do believe that their techniques are both feasible and cost-effective. One of their biggest challenges, power requirements, will be less of an obstacle in the future.

The Harrison paper describes Skinput, a technology that allows the skin to be used as an input surface by analyzing the vibrations that result from finger taps on the body.

This paper is important because it brings up the interesting idea of actually using our skin as an input area, both because of the large external surface area it provides and because people are inconvenienced by carrying around larger computational devices. Other always-available mobile input systems have had flaws such as being computationally expensive or prone to errors. The researchers were able to make an armband prototype that performed well in recognizing finger taps. I think it's important to do further accuracy testing while the user is in motion, because people are constantly multitasking and on the move. It would also be interesting to see if age and sex actually matter, or if only BMI affects accuracy. I think the identification of finger-tap type is definitely beneficial; different types of taps can activate different options.


Yun Jin - 9/18/2011 21:34:46

Skinput: Appropriating the Body as an Input Surface

This paper presents a new technology named Skinput, which allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor, and it assesses the capability, accuracy and limitations of the technique through an experiment with twenty participants. Appropriating the human body as an input device has several advantages. First, we get roughly two square meters of external surface area, which addresses the problem of limited interaction space and the diminished usability and functionality that comes with it. Second, Skinput is always accessible to our hands: the approach provides an always-available, naturally portable, on-body finger input system, which makes the input surface more convenient for users. Third, proprioception allows us to accurately interact with our bodies in an eyes-free manner; we can use the body as an input surface without visual assistance.

In the design of Skinput, two sensor packages are placed on the arm to pick up the bio-acoustic signals produced by taps. The twenty-user experiment exposed some limits of this prototype. First, the fingers share a similar skeletal and muscular structure, which reduces acoustic variation and makes differentiating among them difficult; acoustic information must also cross as many as five joints to reach the forearm, which further dampens the signal. Second, variations in body composition, such as the prevalence of fatty tissues and the density of bones, tend to dampen or facilitate the transmission of acoustic energy in the body, and such variations affect sensing accuracy. Finally, acoustically-driven input techniques are often sensitive to environmental noise, so noise also affects sensing accuracy. Despite these limitations, this new method can be applied and explored further in HCI.


Haptic Techniques for Media Control

This paper describes several haptic techniques for media control, covering video, audio, voicemail and computer graphics. It introduces the general principles behind the intuitive physical metaphors, documents the hardware prototypes developed to explore them, and then describes the haptic metaphors and behaviors themselves. To construct modeless dynamic systems with the immediacy of real-world physical controls, the paper explored three categories: haptic navigation and control, haptic annotation, and functional integration. Each has significant value for manipulating media: with tagged handles, for instance, people could adapt easily to a multiple-behavior model, and functional integration can be combined with simplification. All of them make control more convenient for users. However, there are some limitations to these techniques for haptically manipulating digital media. First, the prototypes have not been implemented on general-purpose platforms, and errors are common. Second, not all of the techniques have been put into practice; the Slider, for example, exists only as conceptual sketches showing a device in the side of a cell phone or remote control. Third, forces and textures cannot simply be combined without interference, capture and blocking effects. Finally, these techniques require powered devices, which remains a barrier to opening the door to portable devices.


Steve Rubin - 9/18/2011 22:17:32

These two papers, "Skinput," and "Haptic Techniques for Media Control" focused on sensing and actuation. Skinput presented a method for acoustically sensing touches on distinct parts of the human body, and the second paper presented a series of devices for haptically interacting with digital media (i.e., adding a new sensation of touch and feel to traditional input devices).

Our bodies are compelling input devices because they are always available and have a large surface area. "Skinput" acknowledges this, and uses the acoustic properties of our bodies to its advantage. Because of this fact, I think that Skinput is a great idea. However, there are serious problems with the system as represented in this paper. First, the armband is, regardless of what they may claim, invasive. This will probably get fixed with further engineering work, though. Second, the presented techniques do not lend themselves to multitouch. We would have to do the training on every individual combination of touches, as the acoustics would be different for multitouches. It's possible that they could mathematically determine the multiple touches from the physical properties of waves--this is not mentioned in the paper (the authors have a paper in UIST this year ("Omnitouch") that may solve this problem, but it might not be skin-based input). The final, and biggest, problem is that each user must provide the machine learning algorithm with training examples before it will work for them. It is unclear whether this is a one-time thing, or something that must be repeated for every slight repositioning of the armband.

The paper on haptic techniques was interesting mostly because it addresses a type of input device that seems horribly under-represented. There is no reason that more devices shouldn't take advantage of our body's internal feedback loop: not only can our bodies provide touches, but we can feel things that make us adjust our touches. The ubiquity of touch-screens has really taken us away from the idea of haptically interacting with our digital world. I would guess that these technologies haven't taken off because, as the paper suggests, these systems are most useful when hand-tuned. This is totally contrary to, for example, the iPad, where every application uses the same multitouch display to do all of its input. From a business perspective, it is easier to sell a consumer on one device that can do everything (jack of all trades, master of none, to be cliched) rather than on a highly specific hand-tuned input mechanism.


Jason Toy - 9/18/2011 22:37:01

Skinput: Appropriating the Body as an Input Surface

Skinput is a technology that allows finger taps on the body to be used as an input system. The paper describes both the design of a sensor to acquire finger taps and a system to resolve the location of finger taps on the body.

The paper presents a new system of using acoustic information from taps to allow for on-the-move interactions, in contrast to previous systems based on instrumented fabric or color-marker tracking. Pros of this system over others include less obtrusiveness and relatively high input accuracy. The ideas of this paper relate to Microsoft's LightSpace depth cameras, as both consider using the arm or body as an input device and, through projection, an output device. A similar system was shown in Wednesday's lecture, in a TED talk where someone projected data onto a piece of paper that could be manipulated by hand. For real-world systems, this could be useful as a universal input interface for mobile devices such as cell phones or MP3 players. You could punch in someone's phone number or change the song you were listening to, through Bluetooth for example, and leave your devices in the bag. Future research could be done on minimizing the size and obtrusiveness of both the armband and the pico-projection system to allow for the portable, use-on-the-move vision of devices that the paper describes.

The experimentation used to acquire accuracy results had both strengths and weaknesses. The experimenters did a good job adhering to experimentation standards by using randomization, reporting results, etc. I like the fact that they did not start by concluding that one method of control was the best, and instead tested various possibilities and compared them. However, for a paper where the proposed system is a mobile input device, it is a weakness that most of the testing was done with subjects sitting down at a table. Testing of users on the move was relegated to a side experiment of people on treadmills. In addition, I think the experimenters should have done a better job of dissecting the results. In all the experiments, a measure of accuracy and a standard deviation were reported. But if the results are binary, it is hard to know how to improve the system. For example, were the subjects making mistakes because of human error, or was the system registering taps incorrectly? Knowing what was going wrong in the experiments could lead to different avenues of research to improve the device.

Haptic Techniques for Media Control

This paper describes the use of haptic, or touch, techniques for controlling digital media. It goes on to describe various haptic devices created by the experimenters and observations of their use.

The paper presents a new approach to user control regarding digital media. The argument is that DJs and film editors see benefits in physical media editing tools in terms of accuracy and response. These benefits could be applied to digital media as well. Future design could be influenced by the success some of the various devices had. The integration of several of these ideas discussed into a general all-purpose device is one possibility. Research possibilities include delving further into what made these devices feel intuitive to users or the differences between the benefits of haptic tools in traditional media and in digital media.

The paper does a good job enumerating the design principles that the experimenters followed when building the tools. This gives the reader a roadmap into why they built the tools they created. Another thing I liked was the exploration of the interactions of haptic cues with other cues: Sticky Channels used haptic cues with visual ones, while the Alphabet Browser took an entirely different approach with audio feedback. Human interaction isn't based only on touch, and in real-life systems, both the visual and audio aspects could affect a user's experience. A weakness of the paper's premise, however, is the specialized design of tools for users. While it makes sense to create special physical tools for the sound and video aspects of the movie industry or for DJs, it might not make as much sense for normal computer users. For example, a game controller like the Rock-n-Scroll would require that video games be built around this input device. Another problem is that an average user might have different types of media (videos, audio, etc.) and would not be inclined to use a different input device for each. What audience this paper is aimed at, specialized professional media editors or the average user, is unclear. A second weakness is the lack of experimental results detailed: "Audio feedback alone provided some utility, but haptic annotation seemed to improve user's speed, accuracy and confidence of navigation as well as their aesthetic appreciation." What were the actual effects of haptic devices on users' performance? Was the improvement with this device minimal, or does it deserve future research?


Ali Sinan Koksal - 9/19/2011 0:29:43

The Skinput paper presents a new technology for using the human skin surface as an input method, evaluating it on a number of scenarios involving tapping regions on the arm and hand. The system uses wave propagation through the skin, as well as wave propagation through the bones, to sense taps. Adjustable sensors can be responsive to specific ranges of wave frequency, which helps greatly in input recognition accuracy.
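One way to picture those frequency-tuned sensors is as a bank of band-pass channels, each reporting how much of a tap's energy falls in its range. Below is a small sketch of that idea in software; the sample rate, channel bands, and synthetic tap are my own assumptions, not the paper's hardware values.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 5500                                            # sample rate in Hz (assumed)
BANDS = [(20, 40), (40, 80), (80, 160), (160, 320)]  # channel bands in Hz (assumed)

def channel_energies(signal: np.ndarray) -> list:
    """Band-pass the tap waveform per channel and return per-channel energy."""
    energies = []
    for lo, hi in BANDS:
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        energies.append(float(np.sum(lfilter(b, a, signal) ** 2)))
    return energies

# Fake "tap": a decaying burst with most of its energy near 60 Hz,
# so the 40-80 Hz channel should dominate the printed energies.
t = np.arange(0, 0.2, 1 / FS)
tap = np.exp(-30 * t) * np.sin(2 * np.pi * 60 * t)
print(channel_energies(tap))
```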

This approach is very interesting in the familiarity of the input surface. "Proprioception", our sense of our own body in space, lets us perform relatively accurate actions without even looking at the zone to be tapped. Furthermore, a lot of the points of contact can be familiarly described (e.g. "wrist", "pinky finger"). The speed at which the input taps can be processed in Skinput allows for an interactive experience, which is another strong point. The technique also avoids having to consider a great deal of noise signal, which makes the input computationally easy to process.

However, taking advantage of the "large surface" that is the forearm, as the authors point to, will most likely require a projection on it, as was demonstrated in the paper. I am not very convinced of the non-intrusiveness of a projector fixed to my arm for this purpose, and do not think this would be easily adopted.

Overall, the richness of the interface provided by the technique (sensing the nature of the touched material, different gestures as taps and flicks) seems a promising approach, and a step towards achieving a harmonious fit of technology into our everyday life, as championed by ubiquitous computing.

The second paper, by Snibbe et al., explores a number of tactile techniques for enhancing interaction with digital media. Haptic devices allow expressive ways of interaction by measuring applied force and giving haptic feedback in order to replace discrete modes (typically manipulated using buttons) by continuous interaction.

This paper relates to Direct Manipulation Interfaces in that it aims to reduce the cognitive effort that must be spent by eliminating the mental task of remembering in which mode the interaction is. The authors create model-world metaphors that are directly manipulated, bearing similarity to former devices of interaction such as knobs and sliders.

We see that this worked well for the particular example of the alphabet browser: the ease of use of early iPod devices is largely made possible by this brilliant technique. Another technique I found quite compelling was the preview button, which, as the authors pointed out, could be used for TV picture-in-picture.

Among the things I found less convincing was the tagged handle. Its incorporation of discrete controls seemed to hinder its intuitiveness. Also, systems that provide active feedback, such as the wheels that "eliminate friction", seem a bit "dangerous", as the authors mention too. One final remark: would we really want to "maximize" the vocabulary of information transfer? This might conflict with the goal of keeping interfaces intuitive.


Peggy Chi - 9/19/2011 1:40:50

How would people manipulate digital objects using their body? What kind of physical objects and sensing techniques would be required? In 2001, Snibbe et al. presented several examples of using haptics for media control with various physical metaphors, including wheels, knobs, and sliders. The Skinput system in 2010 demonstrated an on-body, mobile input technique using acoustic transmission in the human body.

It might not be easy to take a leap from traditional input devices such as mice (with two buttons and x-y position sensing) and keyboards (lots of buttons) to 3D objects like joysticks or knobs that take our motion as control and go beyond flat 2D surfaces with discrete input. This also involves the issues we've been discussing in the past weeks: what is natural? To what degree do we need to "learn"? The devices as physical metaphors show how users can control digital content using concepts similar to those in the physical world that we are familiar with. There is one recent example I like that uses a car-driving metaphor to control video fast-forwarding [1]. It also brings up one question: for those who do not know how to drive (though their system did not really apply a physical pedal), would this interaction be "intuitive" enough for them to overcome the unfamiliarity?

To push this to an extreme, Skinput showed an interesting technique that directly takes the human body as input. First of all, I'm quite interested in their choice of acoustic sensing, which seems not too common recently. I tried to list the advantages and disadvantages of such a method - Pros: not affected by non-acoustic signals such as displays, projectors, speed, etc. Cons: noisy analog input. This reminds me of a system that detected the material being eaten (in the mouth) using a microphone attached to the user's ear [2]. Perhaps acoustic sensing works particularly well in the human body...?

Second, although I got the basic idea of identifying features in wave data using machine learning, I have to admit that I do not fully understand the methodology. I'm especially not clear about its limitations: It senses the signals of taps, but would larger-area impacts such as claps or handshakes also work? Does it support two or more taps at the same time? (This is probably not supported, but I don't find any discussion in the paper, or I'm just missing the info.) If the system keeps sensing, would unintentional actions trigger misjudgments (false alarms)?

Reference: [1] Cheng, K., Luo, S., & Chen, B. (2009). SmartPlayer: User-Centric Video Fast-Forwarding. CHI 2009. [2] Amft, O., Stäger, M., Lukowicz, P., & Tröster, G. (2005). Analysis of Chewing Sounds for Dietary Monitoring. UbiComp 2005. doi:10.1007/11551201_4


Donghyuk Jung - 9/19/2011 3:17:27

Skinput: Appropriating the Body as an Input Surface

In this paper, researchers presented Skinput, which uses the skin as an input surface. It captures mechanical vibrations that propagate through the human body using a novel array of sensors worn as an armband. Their approach was very novel, but I think this type of input mechanism has restricted uses. Skinput can provide an always-available input device because it uses the human body itself as the interaction tool. For instance, joggers can handle their portable music player without looking at the device. However, Skinput might need calibration procedures in order to collect user-specific data before use. Not every user has the same body composition; there is no identical person in the world. In addition, people can always make unrecognized movements that trigger false input. Likewise, the surrounding environment could disturb the sensor's accuracy by adding noise.

Haptic Techniques for Media Control

This paper describes several approaches to using virtual, haptically displayed dynamic systems to mediate a user’s control of various sorts of media. For example, users can haptically manipulate digital media such as video, audio, voicemail and computer graphics, utilizing virtual mediating dynamic models based on intuitive physical metaphors. By focusing on continuous interaction through an actuated device rather than discrete button and key presses, researchers have created a simple yet powerful set of tools that leverage physical intuition and reduce the complexity of interacting with media.


Suryaveer Singh Lodha - 9/19/2011 4:11:35

Skinput - Skinput utilizes the natural acoustic conduction properties of the human body to provide a mobile input system. A wearable bio-acoustic sensing array is built into an armband in order to detect and localize finger taps on the forearm and hand. The experiment's results show that there is only limited acoustic continuity. The paper suggests that if one is designing eyes-free on-body interfaces, the locations participants can tap accurately must be carefully considered. The experiments suggest that high BMI is correlated with decreased accuracy. In both walking trials, the system never produced a false-positive input. In the jogging trials, the system had four false-positive input events (two per participant) over six minutes of continuous jogging.

Haptic techniques for media control - The authors show that continuous interaction through a haptically actuated device, rather than discrete button and key presses, can produce simple yet powerful tools that leverage physical intuition. People adapted easily to a multiple-behavior model. Compliantly mounted haptic displays reduce the impact of changes in texture and feature size. Some of Rock-n-Scroll's applications suffered from the rock axis absorbing subtle haptic signals and reducing its controllability. I felt that the prototype of a broadband universal remote (the tagged-handle wheel) providing a consistent tactile interaction across disparate media was pretty interesting. Another interesting interaction was preview buttons (pressure sensors installed on the surface of normal pushbuttons), which allow the user to preview a button's action before committing.
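A preview button of the kind described above could be as simple as thresholding a pressure reading into three states. The sketch below is a guess at the logic, with invented thresholds; the paper's actual implementation may differ.

```python
# Hypothetical preview-button logic: light pressure on the sensor triggers a
# non-committal preview (e.g. picture-in-picture), a full press commits.
PREVIEW_THRESHOLD = 0.2   # normalized pressure that starts the preview (assumed)
COMMIT_THRESHOLD = 0.9    # pressure of a deliberate full press (assumed)

def button_state(pressure: float) -> str:
    if pressure >= COMMIT_THRESHOLD:
        return "commit"    # execute the action
    if pressure >= PREVIEW_THRESHOLD:
        return "preview"   # show the action's result without committing
    return "idle"

for p in (0.05, 0.4, 0.95):
    print(f"pressure={p:.2f} -> {button_state(p)}")
```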


Derrick Coetzee - 9/19/2011 4:14:45

Skinput, a recent 2010 work out of Microsoft Research, uses a combination of low-frequency vibration sensors and machine learning to accurately identify which of a number of discrete locations on the surface of the skin was tapped.

Other than demonstrating high accuracy of the primary task (distinguishing taps on specific discrete locations on the skin), the work identified several other classification tasks that could be done with high accuracy and open doors to new interfaces, such as identifying what material was tapped by the finger.

One of the main limitations of the work is that although the feature space is clearly defined, the system provides little insight into which features are most salient, and operates in a discrete space: simple questions like "how far apart must two spots be in order to be distinguished" or "how many discrete spots can be distinguished with > 80% accuracy" or even "how much training data is needed to achieve usable accuracy" cannot be answered in such a complex framework. A more predictable system that uses a physical model to construct a probability distribution over skin locations would be superior.
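A toy version of the probabilistic alternative suggested here: let a crude physical model (say, exponential amplitude decay with distance from the sensor) predict the signal for each candidate spot, then apply Bayes' rule with a uniform prior to turn one measurement into a distribution over locations. Every constant below is invented for illustration.

```python
import numpy as np

CANDIDATES_CM = np.array([2.0, 6.0, 10.0, 14.0, 18.0])  # tap spots along the forearm
DECAY = 0.12     # assumed amplitude attenuation per cm
NOISE_SD = 0.05  # assumed measurement noise (std. dev.)

def location_posterior(measured_amplitude: float) -> np.ndarray:
    """P(location | amplitude) under the toy decay model, uniform prior."""
    expected = np.exp(-DECAY * CANDIDATES_CM)  # model's predicted amplitudes
    likelihood = np.exp(-0.5 * ((measured_amplitude - expected) / NOISE_SD) ** 2)
    return likelihood / likelihood.sum()

# Peaks at the spot whose predicted amplitude best explains the measurement,
# and its spread directly answers "how far apart must two spots be?"
print(location_posterior(0.35))
```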

Other limitations were in the evaluation: the main trial used a within-subjects mechanism, so that familiarity with the interface accumulated over the course of the trial (although the order of tests for each subject was randomized). The in-motion experiment had very few subjects and few trials, leading to misleading figures such as 100% without clearly specified error bounds. The applications involving projection were only evaluated in an ad hoc manner.

"Haptic Techniques for Media Control", from 2001, summarized much of the haptic (touch-based) interaction methods designed during the last year of Interval Research's existence. Whereas the Skinput paper focused on a specific technique in great detail complete with formal user studies, this work summarized at a high level an enormous range of interaction techniques designed by many researchers, including a virtual notched "wheel" inside a physical one, an actual frictionless wheel, sliders and brakes, preview buttons, and other mechanisms. Ad hoc preliminary trials showed how the designs had been refined over time. An unfortunate limitation was that this left little room for background material, leaving those unfamiliar with the haptic interaction literature bewildered by jargon such as "F/T", "Hall-effect", and "FSR Pads".

Both works were able to provide a degree of eyes-free manipulation: for example the slider control could be slid to a particular well-known position, as on a radio dial, using only haptic and/or audio feedback, while Skinput was able to exploit proprioception to tap specific body locations relatively accurately without seeing them (particularly thumb-to-finger of same hand).


Apoorva Sachdev - 9/19/2011 5:41:46

Reading Responses for September 19th, 2011

The two readings for this week were about Skinput and haptic interfaces. Authors Chris Harrison and Desney Tan describe how one can sense acoustic transmission through the human body and hence use the skin as an input surface, while authors Scott Snibbe and Karon MacLean describe various kinds of haptic interfaces that can be used to control media applications and describe the implementations of these interfaces.

Skinput was a very interesting approach to sensing input on your skin. By using a novel input technique and an array of highly tuned, small, cantilevered piezo films, accurate measurements can be obtained that allow you to resolve the locations of finger taps. I like this paper because they describe in detail how the problems related to sensing were overcome and the reasons for the particular mechanical design, and they also provide a lot of statistics on the user study, which really helps in gauging how good an interface this kind of system would be. This reminded me a lot of the SixthSense interface, where one could project a clock on one's wrist or display a numpad on one's fingers, and it would be great to see how the two systems compare in terms of accuracy and cost. The authors initially spoke about how users interact with their skin in an eyes-free manner, since we are very aware of our own body in the three-dimensional sense, so it would be interesting to see if we could design applications/use cases that take advantage of this in the Skinput interface.

The paper on haptic techniques describes the various ways in which we can use actuated knobs and sliders to enable continuous interactions when controlling media, rather than discrete buttons. Some of the applications they described, I felt, didn't really require the haptic interface and would just make it more confusing for the user. For instance, I would rather have a potentiometer-controlled knob for controlling volume than a fancy haptic-enabled knob, because you get feedback in terms of sound output anyway and don't necessarily require that external feedback on your hand. One application that I really liked was the Video Carousel and the circle of channels. The ability to have your favorite channels be sticky would be a fun interaction to use. I feel like in some sense "the feeling of directness" that the haptic interface tries to provide might be lost in the complexity of the use of the knob. It took me quite a while to understand some of the interfaces they were describing, and a lot of it only became clear after I watched the video demos. So I am not entirely sure how intuitive or feasible this approach would be. Also, there is a fine line between just enjoying the tactile feeling of an interaction and its actual use, so a lot more user study would have to be done to judge whether this approach is practical and useful.


Cheng Lu - 9/19/2011 8:09:53

The first paper, “Skinput: Appropriating the Body as an Input Surface”, presented an approach to appropriating the human body as an input surface. In particular, the authors resolved the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the human body. They collect these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally portable, on-body finger input system. Results from the related experiments show that their system performs very well for a series of gestures, even when the body is in motion. Additionally, this paper presented initial results demonstrating other potential uses of this approach, including single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects.

The second paper, “Haptic Techniques for Media Control”, introduces a set of techniques for haptically manipulating digital media such as video, audio, voicemail and computer graphics. The control systems are implemented on a collection of single-axis actuated displays, equipped with orthogonal force sensing to enhance their expressive potential. For haptic manipulation, the authors mainly have the user interact with a physical dynamic system whose components' movements are determined by physical laws. The dynamic systems were constructed as physical task metaphors, rather than as literal representations of the media. There are several design principles for this kind of system: discrete vs. continuous control, information delivery through touch, dynamic systems for control, modeless operation, and application and interface communication. The authors describe several devices and their design details: orthogonal force sensing, multi-axis force sensing, passive haptic display, absolute positioning, discrete and continuous control, and Rock-n-Scroll. After presenting the devices, the paper further discusses controlling media via these haptic interfaces, including haptic navigation and control and haptic annotation. The paper is quite intuitive about haptic control, and it provides us a brand-new approach to HCI.


Hanzhong (Ayden) Ye - 9/19/2011 8:18:43

Reading Response for: Skinput: Appropriating the Body as an Input Surface. Harrison, C., Tan, D. Morris, D. In Proceedings of CHI 2010. Haptic Control for Media Navigation. Scott S. Snibbe, Karon E. MacLean, Rob Shaw, Jayne Roderick, William L. Verplank, and Mark Scheeff. In Proceedings of UIST 2001.

This Monday's reading materials are very interesting in that both articles discuss unusual means of input, not commonly seen in our daily life. The first article discusses a complete model of a new interaction device, while the second introduces a particular design and proposes several creative designs for haptic control.

The Skinput system is based on several previous works but also contributes creativity in several ways. The article discusses specifically the underlying scientific theory of acoustic wave transfer and how it helps to differentiate different input stimulations. In my point of view, however, the experimental method used in the design and the implementation of the system is more remarkable. The article also shows data from experiments with real participants, and comments on and evaluates its accuracy under different circumstances.

The second paper introduces a set of techniques for haptically manipulating digital media such as video, audio and graphics. Using a metaphor between a heavy spinning wheel and a video sequence, the research establishes a connection between a virtual data stream and a real-world entity. By controlling the clutch to spin or brake the wheel, one becomes able to manipulate the virtual controlled object in a more haptic way, and gains much more feedback from the object than with traditional interaction methods.

In short, these two papers aroused my strong interest in a more natural way of human-computer interaction. It has become my deep understanding that the interaction between human and computer can always go beyond our current imagination and step into a new level of richer feedback and more natural interaction.

-Ayden (Sep 19, 2011)


Allie - 9/19/2011 8:19:49

The Haptic Control for Media Navigation paper introduces haptic approaches to help render/mediate the user's control of media systems, such as video/audio/voicemail/computer graphics. Some 50 researchers gathered for 12 months to study users' interactions with various hand-crafted haptic technologies. Haptic technologies mimic familiar human acts in order to communicate a given task. Due to their low cost, buttons dominate media control tasks. The various types of haptic technologies vary in their functionality and development: the alphabet browser consists of a haptic knob with an auditory display to browse audio collections, and haptic annotation represents frames of video or temporal audio intervals by physical markings of content.

Haptic dynamic systems have no physical analog, and engineering prototypes pay little attention to appearance, a drawback in UI design. In some cases, textures worked better than forces for emphasis and annotation. The paper marks an important foray into the study of haptic devices in relation to HCI, and while not all of the technologies introduced have development potential, some, such as the clutch and fisheye, may be integrated into our lives.

The Skinput paper delves into a form of bio-acoustic transmission of signals using the human skin as an input surface. The user wears a sensor armband, which analyzes the mechanical vibrations through the body as the fingers/arm are tapped. Proprioception describes how we perceive our bodies in 3-D space; transverse and longitudinal waves picked up by the sensors move along the arm's surface and in/out of the bone through soft tissue, and the system computes 186 features to be passed to a Support Vector Machine classifier. The Skinput technology garners 86.6% accuracy, but the accuracy decreases for those who have BMIs in the 50th percentile and above. The menus projected onto the arm, which one can drop down and navigate/scroll through, are very much like the Kinect technologies we have explored in class.
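For readers unfamiliar with that pipeline, here is a minimal sketch of the classification stage: each segmented tap becomes a fixed-length feature vector that an SVM maps to a location. Only the 186-feature count and the use of an SVM come from the paper; the random "features", kernel choice, and calibration counts below are stand-in assumptions.

```python
import numpy as np
from sklearn.svm import SVC

N_FEATURES = 186   # per-tap feature count reported in the paper
LOCATIONS = ["thumb", "index", "middle", "ring", "pinky"]

# Simulated per-user calibration: 20 example taps per location. In the real
# system these vectors would come from the armband's sensor channels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20 * len(LOCATIONS), N_FEATURES))
y_train = np.repeat(LOCATIONS, 20)

clf = SVC(kernel="linear")  # the paper uses an SVM; the kernel here is assumed
clf.fit(X_train, y_train)

# At run time, a newly detected tap is featurized the same way and classified.
new_tap = rng.normal(size=(1, N_FEATURES))
print(clf.predict(new_tap))  # e.g. ['ring']
```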

Since the input system is always available, meaning it does not require users to carry or pick up a device, visual assistance is not necessary in order to perform tasks, so the blind could easily participate, though not without decreases in performance. Gesture performance is quite efficient, even when the body is in motion. The performance of Skinput varies depending on whether the fingers, the whole arm, or the forearm is used in experimentation. The technology reminds me quite a bit of DigitalDesk, though rather than bio-acoustics, DigitalDesk employs a table surface to perform various tasks. The motivation is pretty interesting and well executed.


Vinson Chuong - 9/19/2011 8:37:37

Harrison, Tan, and Morris' Skinput paper describes an input device which allows skin to conveniently be used as an interactive surface and illustrates several proof-of-concept applications. Snibbe, MacLean, Shaw, Roderick, Verplank, and Scheeff's Haptic Techniques for Media Control describes several augmentations to input devices for media control (knobs, sliders, buttons) which provide haptic feedback and make them more direct.

As in the previous papers on depth and IR cameras which allow ordinary surfaces to be used as multitouch surfaces with minimal equipment, Skinput enables similar interactions on a user's arms through small armbands. Skin-based input devices seem to be more convenient than traditional touch surfaces as users are not required to find or carry around external equipment and can locate specific positions on their bodies without visual confirmation with decent accuracy. Coupled with a projector, Skinput seems to be very effective for simple and discrete interactions: managing a media player, inputting a phone number or selecting a contact to call, etc. However, skin-based input devices seem to be limited to just those interactions that can be implemented with button presses. Touch gestures seem unlikely to be possible. Perhaps glasses-mounted depth cameras can be used in this context as well.

Haptic Techniques for Media Control describes several ways that interfaces (for media control in particular) involving knobs, sliders, and buttons can be made more direct via haptic feedback, taking inspiration from how similar tasks are performed without computers. Traditionally, output from a computer has consisted of mainly visual and sometimes auditory feedback, which fails to take advantage of the other senses. I've heard of many attempts to change this: touch surfaces that change shape, specialized interfaces that mimic non-digital interfaces, etc. The concepts in this paper seem to only be applicable to designing specialized interfaces for completing very specific tasks (as opposed to designing general interfaces or input devices). The authors of this paper mentioned augmenting a mouse; I wonder what kind of useful interactions that would facilitate. What I feel deserved more discussion was how the different types of haptic feedback presented compare to visual or auditory feedback. What kind of problems are these specific input devices solving? How are the solutions that existing input devices offer inadequate?


Yin-Chia Yeh - 9/19/2011 8:46:41

The two papers today are about how to make the best use of the human body in human-computer interaction. The Media Control paper aims to provide a more continuous feel by applying physical metaphors; its idea relates to the direct manipulation model we read last week. The Skinput paper, on the other hand, proposes a new always-available user interface, our arm, using a bio-acoustic sensing array and an SVM classifier.

There are two points I like in the Media Control paper: continuous control and modeless control. Continuous control gives the user finer control than normal input devices, and it is important in media editing because we don't want to create abrupt media content. Modeless control is also a good idea; I always get lost looking at the buttons and modes on a DSLR. Among the applications proposed by the authors, I like the alphabet browser the most; it seems a good solution for drivers. The preview button is also a good idea and has been implemented on some TVs as picture-in-picture mode. On the other hand, the final functional prototype controller seems too complicated to use; I think it somewhat violates the design principle of modeless operation. I also wonder how to utilize the haptic clutch when editing a long video clip. I think I would want to use the mouse to get to where I want to edit, then use the fine control provided by the haptic clutch.

The Skinput paper introduces a new interaction technique that detects the position the user tapped on their arm. I like this paper more than the Media Control paper because Skinput uses readily available technology and the classification results look very promising. The applications with pico projectors (especially the Tetris one) look cool, but I would like to see some applications without the projector, since it is not as available as the bio-acoustic sensor. For example, I'd like to see whether they can control an MP3 player with single-handed gestures while jogging. The other question I have in mind is why they didn't test people with BMI lower than 20. Since the ten tested positions should be closer together on a thin arm, I would like to know if it performs as well as on normal-weight people.


Rohan Nagesh - 9/19/2011 8:59:02

The first reading discusses the development of an armband sensor to utilize our skin as input to applications, hence the name "Skinput." The second paper discusses creating haptic metaphors to provide tactile feedback for media control tasks.

I enjoyed reading through "Skinput" as I do believe there is potential for the skin to be used as an input surface. The paper describes some advantages. First and foremost, the skin is a large surface everybody has. Second, we have a great sense of where on our bodies we are tapping or touching our fingers. Lastly, it is truly an "anywhere, anytime" medium.

The paper mentions some inconsistencies and inaccuracies in the results with higher-BMI individuals, which I think can be overcome with enhanced learning techniques. However, the fundamental issue I have with "Skinput" is that I found the range of example interfaces and interactions a bit limiting. One cannot produce fine-detail motions and behaviors through skin as an input, and the example interactions the paper provided focused on menu navigation and number entry, tasks which I have no problem performing on my smartphone. Additionally, I think it's quite unnatural to have text projected onto one's hands and arms, and I wonder whether this technique will break down in outdoor or on-the-go settings such as jogging.

With regards to the second paper, I agree that direct manipulation is quite powerful in most situations. I also enjoyed reading through the paper's design principles, including discrete vs. continuous control, information delivery through touch, modeless operation, etc. I wholeheartedly agree with all of these design principles and view them as sort of idealistic gold standards.

My main beef with this paper was that I found the metaphors they were creating not to be very metaphorical, in that they didn't naturally connect to the actual function I wanted to accomplish. Additionally, you could imagine that for certain tasks numerous such haptic devices are needed, and I would find it confusing as a user to keep all the devices and their functions straight. I particularly liked, however, the suggestion at the end of the paper to consolidate the behaviors and design principles of these haptic devices into a smart mouse or touchpad, or some other sort of multi-purpose device that is more scalable.


Manas Mittal - 9/19/2011 9:04:41

- Per-user / per-sensor training based on location. Can you use multiple sensors on the body instead?
- Very shaky ML algorithm (but they generally are). I would like to understand how they came up with these algorithms (was it just cross-validation?).
- Research question: Can you bring the user into the loop when training (i.e., by visualizing training data, can the user mark what works and what doesn't) and thereby decrease the number of training 'shots'?

The Haptic Techniques for Media Control paper is interesting. I would have liked the authors to spend time in the gaming world and compare the standard inputs (joystick, gamepad) with the more game-specific haptic affordances. These represent real-world scenarios where it is easier to conduct a longitudinal study.


Sally Ahn - 9/19/2011 9:06:29

Skinput: Appropriating the Body as an Input Surface. Harrison, C., Tan, D. Morris, D. In Proceedings of CHI 2010. Haptic Control for Media Navigation. Scott S. Snibbe, Karon E. MacLean, Rob Shaw, Jayne Roderick, William L. Verplank, and Mark Scheeff. In Proceedings of UIST 2001.

These two papers explore new interaction methods for data input and manipulation. Harrison et al. introduce Skinput, which uses acoustic transmission to detect taps on the user's skin. Snibbe et al. describe how continuous interaction achieved through physical knobs and sliders can provide haptic feedback that enhances the interaction experience.

Snibbe et al. motivate their problem by observing that popular computer input methods (at least at the time of this paper's publication) take the form of a mouse and keyboard, which lose "distinctive physical sensations" by discretizing inputs into distinct mouse clicks and keyboard strikes. By now, the scroll wheel has become a basic feature of the mouse, and from sheer experience, I agree that enabling haptic feedback for scrolling successfully captures the continuous nature of the scrolling function with physical feedback. In that sense, the paper makes an important contribution by conducting detailed user experiments to explore effective ways to create haptic feedback.

Harrison et al. explore an altogether new input method that is still in its prototype form. The idea of transforming the human skin into an input surface is appealing for the benefits the authors cite: mobility, constant accessibility, proprioception. I think proprioception is a key distinguishing factor from other input methods. The authors seem optimistic about their results, but I question the reliability of input devices that average less than 90% accuracy. An important question to ask here is whether this gap can really be filled with better engineering, or whether the acoustic sensing technology itself has an upper bound on the level of accuracy that can be attained. I am also wary of projecting the input interface onto the skin, which the authors describe as an example scenario. There is an important difference between stickers and projected buttons: a projected interface can easily be shifted, and when the tap points are separated by as little as 2 cm on the forearm, this could result in completely wrong interpretations of the user's input. How they would ensure that the projected interface remains stationary seems like an important question for this technology.


Shiry Ginosar - 9/19/2011 9:11:11

These two papers discuss sensing and actuation from different directions. One is concerned with duplicating the physical sensations of using physical systems when manipulating digital media, while the other aims to use the user's skin as an input interface.

The idea behind haptic techniques for input interfaces is a compelling one. With the move towards mobile touch-screen devices, our current interfaces increasingly rely on visual ability and attention, without the tactile feedback that physical controls provide. While these interfaces sometimes require devoted attention from regular users, they are practically unusable for visually impaired individuals. Explorations such as the ones presented in this paper will hopefully inspire the integration of tactile feedback mechanisms into commodity devices, despite the authors' concern about power requirements in PDAs.

However, when used as additional external physical controls as described in the first part of the paper, these extra pieces of hardware seem cumbersome in today's mobile environment. Perhaps when ubicomp brings computing back into the physical world, we can use everyday physical objects for control, without needing to develop special purpose ones.

On another note, the authors' argument seems to be weakened by the fact that they did not rely on perceptual studies when designing their haptic language.

The Skinput paper suffers from a similar issue: while the authors postulate about the relationship of bone density, BMI, joint structure, etc. to acoustic properties, their assumptions were based on high-level pilot data collection rather than on prior literature.

Nevertheless, the paper describes an interesting novel approach to input interfaces, albeit one that suffers from several drawbacks. First, the need to wear a sensor is a limiting one. Second, as the authors mentioned, the approach suffers from background noise such as whole-body movement (walking) and even casual tapping of fingers to a song playing in the background. While the paper describes preliminary tests in this area, the low accuracy was explained as being caused by poor training data. I find it interesting that the authors chose not to perform the test again in this case.