Input I: Models and Techniques

From CS260Wiki

Bjoern's Slides

File:Cs260-slides-04-input1.pdf

Extra Materials

The class slides on models were based on the following article:

Other projects mentioned in class:

Discussant's Slides and Materials

Reading Responses

Airi Lampinen - 9/11/2010 17:56:22

Card et al. discuss input devices by means of a morphological analysis of the related design space. They point out the variety of input devices for human-computer communication and attempt to systematize these devices in a design space that they characterize by finding methods to generate and test design points. Their goal is to provide a method for bringing order to knowledge about input devices.

The authors begin with a discussion of previous systematization attempts, which they summarize under three lines of development: toolkits, taxonomies, and performance studies. They then go on to propose a fourth line of development, a morphological design space analysis, which can be used to integrate the results of this previous work. The article considers input devices from the points of view of expressiveness and effectiveness. These aspects are further explained by discussing the footprint and bandwidth of different input devices.

Perhaps the most interesting part of the article is the set of calculations the authors use to reason about the design space. They show how such calculations can explain, for instance, why the mouse is a more effective device than the headmouse, and where in the design space a device more effective than the mouse is likely to be found. The article explains how to determine approximate boundaries for evaluating different input devices.
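The style of back-of-the-envelope calculation described here can be sketched roughly as follows. The bandwidth coefficients below are illustrative placeholders standing in for the muscle-group figures the paper draws on, not values taken from the article itself:

```python
import math

def movement_time(a, b, distance, width):
    """Fitts' law: MT = a + b * log2(D/W + 1), in seconds."""
    return a + b * math.log2(distance / width + 1)

# Illustrative bandwidth coefficients (s/bit) for different muscle groups;
# real values would come from the empirical studies the paper cites.
DEVICES = {
    "mouse (arm/hand)": 0.10,
    "headmouse (neck)": 0.20,
}

# A nominal pointing task: move 120 mm to a 6 mm target.
for name, b in DEVICES.items():
    mt = movement_time(0.0, b, distance=120, width=6)
    print(f"{name}: {mt:.2f} s")
```

Even with invented constants, the shape of the argument survives: the device driven by the slower muscle group loses on every task, which is the kind of conclusion the authors reach without running a study.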

To convince the reader of the advantages of their proposed method, the authors conclude with an anecdote about a colleague who was too stubborn to believe that calculations can lighten the burden of testing out potential solutions, and who paid for his stubbornness by not being able to move his neck for three days. It is claimed that, even with a highly approximate style of calculation, rapid, simple analysis based on simple assumptions can be useful for gaining insight into the design space.

Hinckley et al. discuss in their paper "Sensing Techniques for Mobile Interaction" their work towards providing context-sensitive interfaces that are responsive to the user and the environment. The text describes studies on a prototype-enriched mobile device to which the authors have added a two-axis linear accelerometer (tilt sensor), capacitive touch sensors, and an infrared proximity range sensor. Special characteristics and challenges related to mobile interaction design are discussed, the sensing of the context of interaction being the core issue.

While the authors acknowledge that the idea of creating smarter interfaces by giving computers sensory apparatus to perceive the world is not new, they point out a scarcity of examples of interactive sensing techniques. To address this issue, they explore some aspects of the design space as well as design and implementation issues. Finally, they report on user studies and usability problems that have been discovered in the process.

The article points out the difficulties of integrating interactive sensing techniques into a coherent system. The authors emphasize the essential role of sensor fusion, that is, the aggregation of data from multiple sensors, in order to expand capabilities and support additions. What I most liked about the paper was the discussion of hybrid designs - designs that combine sensing techniques with other solutions. However, the authors do not offer clear guidelines as to how to determine which types of problems are best solved via sensing techniques and which would benefit from other techniques. I don't doubt their observation that "careful design and tasteful selection of features will always be necessary", but I found it a somewhat unsatisfactory conclusion.


Kurtis Heimerl - 9/11/2010 23:17:43

A Morphological Analysis of the Design Space of Input Devices: Ah, where to start. Another period piece from the era of researchers trying their damnedest to avoid doing user studies. This paper formalized the notion of an "input device" as a 6-tuple, and then created a way of connecting these input devices to each other.

I'm actually quite torn on this paper. Its value is entirely in prediction, and they did a terrible job of demonstrating that. They compared a completely abandoned interface to the mouse, and found the mouse to be superior. Fantastic finding there. That was the primary analysis. However, there were inklings of actual predictions in terms of multitouch and gestures. There definitely seemed to be a lean towards more recent Apple products, utilizing fingers and the fact that we have so many of them. If I bought that argument, this would become a powerful paper.

On a related note, is the keyboard really just 90 Zs?

A couple of other features and discussions were just broken. As an example, regarding space, they argued that you need more mouse space for a bigger monitor. This is false, as people can just pick up the mouse and simulate a larger desk. The touchscreen, of course, doesn't allow that. This distinction was not captured, as far as I saw. It was dense though, so I very likely missed things. Likewise, fingers are fat and actually bad at precise locations.

Anyhow, as I said, I'm torn. Maybe there's something here, but given an input device, I have no idea how I would go about classifying it. I think there were features of devices that were missed and so this is an incomplete telling of the story, even aside from the arguable value of the story itself.

Sensing techniques for mobile interaction. I actually don't have much to say here, I loved this paper. This is the HCI that I've done: build it and poke at it. I'm really surprised at the lack of evaluation; I'll submit to UIST more now that I know what you can get away with.

Anyhow, this is very similar to my Metamouse work, at least in spirit. We built it, and then piled hacks onto the system in order to map more directly onto users' expectations. My question with this work is roughly the same as my question with my own work: one of longevity. I don't believe that initial impressions (of those 7-8 subjects) are really a meaningful benchmark for anything aside from learning curve. Too often things are lost in the transition to expert user, and small frustrations become game-breakers. The authors discuss this briefly, to their credit. So many of their ideas seem great (why can my phone unlock without me holding it?) but have not hit market. Why?


Brandon Liu - 9/12/2010 14:06:35

“A morphological analysis of the design space of input devices”

This paper describes a framework for understanding the space of possible input devices. It uses the analog radio as an example of an object with several different input devices. The specific devices, such as knobs and scales, define different categories in the framework.

The paper’s argument is strong, since the description of the framework is clear (the axes of the space make intuitive sense) and also complete (the author describes other work that complements the work, for example, the space of display/output devices). Furthermore, the paper validates the framework by using it to analyze a head mouse. It makes a clear practical contribution to HCI by offering a model as an alternative to expensive user studies.

Something I would have liked to see in the paper was an analysis of scaling down, instead of just scaling up. The authors describe how the desk space needs of a tablet increase as the size of the display increases. A complementary, but different question would be how the devices change as the size of the display is scaled to the size of a watch face. There are a few obvious other areas for more elaboration, such as 3D input devices.

I thought this was an excellent paper since it stuck to one point and actually proved it. The graphs used were especially informative. One idea I had after reading the paper: what if a traditional mouse and a 'finger mouse' were combined (a la the Magic Mouse, but where the finger movements actually point)?


“Sensing Techniques for Mobile Interaction”

The paper describes a series of user studies done on sensor additions to mobile devices. Since these studies were done back in 2000, I as a reader + smartphone addict can easily point out a lot of holes.

Some cheap shots: Android devices turn off the screen when the phone is held close to the face. This is done via a sensor above the screen that detects proximity. Unfortunately, if you're left handed like I am, the way you hold the phone triggers the sensor, and the screen turns itself off while you hold it away from your face. It may be a trivial example, but the authors' response would probably be 'Add some other kind of sensor that detects the hand'. The orientation of the sensor means you can't simply have a left handed/right handed setting.

The general problem is similar to the problem in last time’s reading about smart homes that infer. A system that cannot infer perfectly (i.e. all systems) is of limited use, since it costs more mentally to fix a wrong inference than just tell the system something in the first place. Thus, all the variations inherent in mobile device use, such as what hand is used, or whether the user is wearing gloves, become immensely relevant to the design and cost of the device.

The paper also doesn’t make any mention of privacy issues with inferring when to record voices. Some countries require that smart phones make an audible noise when they take a photo. For similar reasons, phones should let the user know explicitly when their voice is being recorded, or better yet, only have the user explicitly enable voice recording via a button.

What the paper really needs is a more in-depth discussion of false positives and false negatives and their associated costs (which are usually unequal). In a system that infers when I want to record my voice, the cost of a false positive (recording when I don't want to) is extremely high, while the cost of a false negative (not recording when I want it to) is just frustrating. Designing a system would revolve around minimizing these expensive false positives. If that results in more false negatives, then it may reach the point where the inference is useless. The authors could have described designing systems to optimize around these trade-offs, and not just attention.
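The asymmetric-cost argument above can be made concrete with a tiny expected-cost calculation; the probabilities, rates, and cost numbers below are invented for illustration, not drawn from the paper:

```python
def expected_cost(p_trigger, tpr, fpr, cost_fp, cost_fn):
    """Expected per-event cost of a sensed inference with unequal error costs.

    p_trigger: probability the user actually wants the action
    tpr/fpr:   true/false positive rates of the sensor inference
    """
    p_fn = p_trigger * (1 - tpr)      # user wanted it, it didn't happen
    p_fp = (1 - p_trigger) * fpr      # user didn't want it, it happened anyway
    return p_fn * cost_fn + p_fp * cost_fp

# Voice recording: a false positive (unwanted recording) is assumed to
# cost 10x what a false negative (a missed recording) costs.
cautious = expected_cost(0.1, tpr=0.70, fpr=0.01, cost_fp=10.0, cost_fn=1.0)
eager    = expected_cost(0.1, tpr=0.95, fpr=0.20, cost_fp=10.0, cost_fn=1.0)
print(cautious, eager)
```

Under these made-up numbers the cautious tuning wins by a wide margin despite missing far more real triggers, which is exactly the kind of design trade-off the review is asking the authors to analyze.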

Other comments: The paper kept discussing how devices could infer whether or not the user is walking. This seemed odd to me since walking while operating a device is a serious safety concern. That concern likely was not obvious to the authors.


Thejo Kote - 9/12/2010 15:11:28

A morphological analysis of the design space of input devices:

In the paper, Card and co-authors present a design space for input devices with a goal of systematizing knowledge about them. They argue that creating a design space will allow identification of empty spaces that will point to possible new directions of research in the area of input device design. It is an addition to earlier efforts at systematizing input device design in the form of toolkits, taxonomies and performance studies.

To test points in the design space, the authors follow Mackinlay's work and focus on the "effectiveness" of an input device. In particular, they focus on footprint and bandwidth among other desirable qualities of an input device when evaluating it quantitatively. Of particular interest to me was the discussion of the motor abilities of the different muscle groups of the human body and the bandwidth each provides when using input devices. It is a particularly important aspect to keep in mind when designing new input devices.

Though the contribution of the design space and systematic representation of input devices seems to provide a good way to identify new opportunities for the design of input devices, I think it is similar to GOMS, in the sense that the model is far too complicated for regular use by designers. Of course, it may be the generally inaccessible nature of the language in the paper that leads me to that conclusion.


Sensing techniques for mobile interaction:

This paper studies the impact of introducing sensors on the interaction with mobile devices. The authors modify a PIM device by adding proximity range, touch and tilt sensors. They then use the sensor data to augment interaction with the device and perform usability tests to determine if the new interaction techniques which use the sensors are an improvement.

The sensors are used in the following applications: voice memo recording, portrait/landscape detection in an application, tilt scrolling, and power management. While individual techniques had been studied earlier, this paper is an attempt to study the role of sensors in a holistic manner. The study focuses on interaction techniques which use multiple sensors at the same time. The paper also discusses the implementation challenges involved in using sensors to improve interaction (like the introduction of false positives and negatives in each case). It was also interesting to note how advancement in technology has made obsolete the need for workarounds like contrast compensation on screens when tilted.

All of the sensors mentioned in the paper are standard components on smart phones today, and many of the interaction techniques they describe are available to end users. But the current paradigm is that individual applications can access sensor data and behave accordingly. The information available from the sensors is not deeply embedded into the user experience. For example, the study describes augmented voice memo recording. That works at a level above individual applications on the mobile device, which means that the OS knows which application to launch based on the interaction. That level of integration with sensor and context-sensitive data is yet to arrive.

Though the authors' contribution has a lot of face validity given its incorporation into currently popular technology, I think they were a little lazy in conducting the usability tests.


Luke Segars - 9/12/2010 15:33:03

Sensing Techniques for Mobile Interaction

This paper describes a series of techniques incorporating sensors into mobile devices in an attempt to make these devices context-aware. The authors of this paper and those mentioned in the related work seem to have been publishing on this topic for a mere two years, suggesting that the idea of integrating sensors into hand-held devices was very new around the turn of the century. The techniques described in this paper are particularly focused on determining a user's intentions based on the way (measured by tilt, touch, and proximity) that they are holding their mobile device. The authors are attempting to identify common behavioral metrics for users who are attempting to record voice memos, turn on the device, and perform other simple actions that typically require a substantial focus of attention.

The presence of sensors in mobile electronics is a concept that seems to be exciting to both the research and commercial worlds today. Hinckley et al. describe a number of techniques that could be used for contextualizing our tools, including the landscape/portrait orientation switcher that many smart phones are already using. They keenly observe that the typical use case for next-generation technology may not require several hours of dedication, but perhaps only seconds or a couple of minutes to complete a particular task. We are already seeing this trend coming true with portable electronics, and one significant way to decrease the time it takes a user to perform an action is to decrease the time it takes to start it. If a device could be provided with enough sensors and supporting software, then the authors and I both believe it may be possible, perhaps to a limited degree, to "sense" a person's intentions based on certain detectable characteristics.

The sensors that the authors chose to experiment with are applicable to inferring a large number of human actions. For example, the authors show that a tilt sensor can be used to determine how the user is holding their device, but it can also be used to predict when the user might be running. The context prediction system, though it seems to produce a number of false positives, seems like a fitting model of representation if the context identification could be improved. If a similar but more developed model of this type were implemented, it would abstract particular sensor measures away from the application developer, instead exposing properties about the user such as whether they are sitting, walking, running, holding the device, and so on. Applications could then be automatically triggered to respond to a particular set of events, similarly to the triggers that the authors described, while the operating system or "context daemon" could improve its recognition of these situations transparently to both developers and users.
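The "context daemon" abstraction described here might look something like the following sketch. The sensor fields, thresholds, and state names are invented for illustration; a real daemon would calibrate them (and smooth over time) rather than use fixed cutoffs:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    tilt_x: float          # degrees
    tilt_y: float          # degrees
    touched: bool          # capacitive touch sensor on the casing
    proximity_cm: float    # infrared proximity range
    tilt_variance: float   # recent tilt variance, a crude motion measure

def infer_context(f: SensorFrame) -> str:
    """Map raw sensor readings to a high-level user state.

    Applications would subscribe to these states instead of reading
    raw sensor values; thresholds here are purely illustrative.
    """
    if not f.touched:
        return "idle"              # nobody is holding the device
    if f.tilt_variance > 50.0:
        return "walking"           # large, noisy tilt changes
    if f.proximity_cm < 8.0:
        return "held to face"      # e.g. speaking a voice memo
    return "holding"

print(infer_context(SensorFrame(10, 5, True, 30, 2.0)))  # "holding"
```

An application could then register for transitions (say, "holding" to "held to face") rather than interpreting accelerometer traces itself, which is the separation of concerns this paragraph argues for.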

The paper presents a generally weak evaluation of the techniques in its user studies. The experiments rely on a very small sample size to determine the effectiveness of these tools. Their preliminary results are not only somewhat inconclusive but may also depend on the types of people chosen for the experiment. If everyone involved is a spatial person (i.e. someone who imagines problems visually), then they may be less apt to like this interface due to its lack of visual output. None of the techniques mentioned here, with the exception of the landscape/portrait rotation mode, have been implemented in a widely available commercial product yet, and it's not obvious whether they'd even be desirable from a user's perspective, emphasizing the importance of the user evaluations.

There's also no discussion about what happens to the users (1/7 in this case) that don't like the sensed behaviors or think it would “drive them nuts.” Is it possible to turn the sensing off or should this replace the standard interface? A deeper examination into the significance of these major design decisions is certainly warranted to determine whether these particular applications are something that people would appreciate.


Dan Lynch - 9/12/2010 16:09:59

    • A Morphological Analysis of the Design Space of Input Devices

This paper provides methods for stratifying and classifying input devices and their usefulness in human-computer interaction. The authors use various models and principles to derive the value that an input device can provide, and vocabularies to describe them. A language of input devices is described in detail, with two primary ideas: a primitive movement vocabulary and a set of composition operators.

The footprint, the space that a device takes up, is discussed, which is very important. Consider a mouse and a mouse pad. I particularly don't use mouse pads anymore because they take up too much space. Bandwidth is also discussed, and notions of precision and device bandwidth are mentioned to characterize this notion.

This is extremely important because it will allow us to stratify new ideas and designs for computer input. This is how we build the world around us. Take for example the CAD designer. If we can optimize his/her interface by creating a new input device, we would greatly enhance the world around us. But how will we know if this device is optimized? That is where this paper helps: it shows how to characterize such devices.

    • Sensing Techniques for Mobile Interaction

This article brings up issues in mobile interaction, and additionally introduces sensors and the functions that they can provide for a mobile device. The article also discusses how these functions can complete tasks for humans, and users' reactions to these interactions.

One very important idea is using the sensors to infer the behavior of the user. For example, the tilt sensor can be used to detect whether a user is walking, holding the device, or looking at the display. The tilt sensor can also be used in other user interface improvements: detecting landscape or portrait mode can prove useful for displaying information in a more meaningful manner. Tilt scrolling was also mentioned, which uses the tilt sensor instead of a scroll bar, a much more tactile operation than a software one.
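The landscape/portrait detection mentioned above can be sketched from two-axis tilt readings. The 45-degree decision boundary and the hysteresis band (which keeps the display from flickering when held near the boundary) are illustrative choices, not the paper's implementation:

```python
import math

def orientation(tilt_lr_deg, tilt_fb_deg, hysteresis=5.0, current="portrait"):
    """Pick display orientation from a two-axis tilt sensor.

    tilt_lr_deg: left/right tilt; tilt_fb_deg: forward/back tilt.
    The hysteresis band means the display only flips once the tilt
    moves decisively past 45 degrees from its current orientation.
    """
    angle = math.degrees(math.atan2(abs(tilt_lr_deg), abs(tilt_fb_deg)))
    if current == "portrait" and angle > 45 + hysteresis:
        return "landscape"
    if current == "landscape" and angle < 45 - hysteresis:
        return "portrait"
    return current

print(orientation(80, 10))  # strongly sideways: flips to "landscape"
print(orientation(10, 80))  # mostly upright: stays "portrait"
```

Feeding the previous result back in as `current` is what gives the hysteresis effect, and it is one simple way to address the flicker and "put-down" complaints users raised in the study.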

This article is important because mobile devices today are ubiquitous. At first, mobile devices (and many still do) had awkward interfaces for input. In particular, text input is one of the hardest problems to solve for mobile devices. Sensing techniques provide a way to transduce the user’s ideas into electrical signals that can be understood by the device, thus interacting with the world.


Shaon Barman - 9/12/2010 16:47:07

A morphological analysis of the design space of input devices

In this paper, the authors try to analyze different computer-interface devices in order to categorize them in a design space, which mainly consists of manipulation operators (knobs, switches, etc.) and combination operators which tie together multiple operators. They then show that a variety of current input devices can be concisely represented in the model.

Overall, I felt the model the paper proposed did not provide much insight into the design process. While the model was clearly able to represent different input devices, it did not clearly show how two input devices are related. In the second part of the paper, they talk about "testing points in the design space." But with the proposed design space, analyzing one input device does not seem to give insight into another "similar" input device. In addition, they claim it can be used to generate ideas for new input devices but do not provide any hints about this process.

I did like the definitions of expressiveness and effectiveness for evaluating an interface device. They provide many quantifiable aspects that can be tested across all input devices in order to analyze the strengths and weaknesses of a device. The footprint was a simple example of this analysis. But I thought that the bandwidth example had a few flaws. First, the Fitts' law constants were taken from different experiments, each of which had different tasks. Also, some of the choices of hardest and easiest tasks seem a little ad hoc. But overall, the work gives a foundation on which performance can be quantified. It would be interesting to see if the finger constants predicted how well touch-screens work.

Sensing Techniques for Mobile Interaction

This paper explores how adding more sensing devices, such as proximity, touch and tilt sensors, can enhance the user experience on a portable device like a smart phone.

The paper explores different ways to use sensors. The main contribution of this paper is the use of novel interfaces to interact with a computer. By breaking away from the traditional mouse and keyboard, the designers are able to create a more intuitive experience for the user. One aspect I liked was the use of a middle interface layer instead of the raw sensor data. This provides a uniform view to the phone and was used to explore how conflicting gestures interact on the same inputs. They also test their ideas through a small user study, which provides invaluable insight. And once they collect enough data, they create a model which can predict how future users will act.

It's hard to find faults in this paper since many of the ideas are widely implemented in smart phones today. The tilt-scrolling function is one that was not adopted, but that is because touch-screens made such inputs obsolete. With the increase in processing power in mobile devices, it seems like even more sensory data could be processed, such as location and inter-device information, to provide a more productive experience.


Charlie Hsu - 9/12/2010 17:27:23

Analysis of the Design Space of Input Devices

This paper attempted to analyze the design space of input devices by using a set of physical properties and their deltas (position, rotation, force, torque) to describe input device taxonomy. Defining a design space, according to the authors, may help to highlight interesting areas of the design space for input device research. The authors then described methods of testing points in the design space under two metrics: footprint (physical area consumed by a device) and bandwidth (pointing speed).

The decomposition of input device into manipulation operator, input domain, state, resolution function, output domain, and other properties is a logical and sound way of systematically analyzing input devices. By looking at Figure 4, and seeing a design space of the typical physical properties populated with input device examples, it is easy to see how input device designers benefit from this design space visualization. Though the authors provide examples of how to test the effectiveness of points in the design space in a separate section, the visualization of the design space itself may lend hints to which areas of the design space would be best for certain input goals (ex: how well does the rotary Etch-a-Sketch work for actual drawing? what sort of output domains do volume/radio station selection knobs have in their rotary inputs, and why does a rotary input work there?).
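The six-part decomposition listed above lends itself to a simple record type. The sketch below is one illustrative reading of the paper's scheme; the concrete field values for the mouse (movement range, gain, footprint) are invented for the example, not taken from the article:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class InputDevice:
    """An input device as a six-part record, following the paper's decomposition."""
    manipulation_operator: str                # e.g. relative linear position in x
    input_domain: Tuple[float, float]         # range the operator can take
    state: dict                               # current device state
    resolution_fn: Callable[[float], float]   # maps input domain to output domain
    output_domain: Tuple[float, float]        # e.g. screen coordinates
    works: dict                               # other properties, e.g. footprint

# A mouse's x-axis, with made-up numbers for illustration.
mouse_x = InputDevice(
    manipulation_operator="dPx (relative linear position, x)",
    input_domain=(-50.0, 50.0),               # mm of movement per sample
    state={"position": 0.0},
    resolution_fn=lambda dx: 2.0 * dx,        # a constant control-display gain of 2
    output_domain=(0.0, 1024.0),
    works={"footprint_cm2": 75},
)
print(mouse_x.resolution_fn(10.0))  # 20.0
```

Composition operators (merge, layout, connect) would then combine such records, e.g. merging `mouse_x` with a corresponding `mouse_y` to form the full 2D device.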

However, I felt the paper as a whole became increasingly narrow and focused as it went on. There are many other physical properties that could form the basis for interesting input device design spaces: audio, camera, and keyboard/text-based input would all provide very interesting design spaces to analyze today. What input devices have been explored in these spaces so far, and have they worked or not? The section on testing points in the design space became even more specific, focusing solely on pointing devices. There were not very many general guidelines on testing points in the design space, and the paper simply listed the authors' methods for testing the metrics of footprint and bandwidth for pointing devices. Although many of the metrics they described might have fairly straightforward methods of testing (cost, user preference, time to grasp device), what about metrics for things besides pointing devices? It should be emphasized that testing input devices for effectiveness requires task analysis and strong investigation into what exactly is desired from the input device.


Sensing Techniques for Mobile Interaction

This paper described the findings of a team at Microsoft Research that integrated a set of sensors into a mobile device and implemented new functionalities with them. These features included recording voice memos via sensed triggers, switching between portrait and landscape mode via orientation sensing, tilt scrolling, and automatic powering up of a device based on motion sensing. The paper also described a set of important design considerations for mobile devices, such as the different context of interaction from a desktop computer, and the multitasking often accompanying the use of mobile devices.

It was particularly interesting to think about many of the functionalities explored in this paper and how they have been integrated today. Dynamic portrait and landscape mode has been implemented in many places already, and is a useful tool for users when certain parts of a small screen need to be emphasized. However, the "put-down problem" described in the paper illustrates the need for some sort of foolproof, non-sensed method of altering the display; users in the paper echoed these same thoughts. I have personally also experienced the inability to get the sensor on my iPhone to orient the display correctly, and I am unaware of a manual way to do so. I have seen tilt scrolling implemented in video games, but the issue of precision described by a user in the paper resonates strongly with me. I feel Apple's implementation of swipe-scrolling on the iPhone is a much more tactile, noise-free method.

I feel the issue of false positives and negatives that the paper described is also important, from personal experience. I suspect the authors' concern is legitimate: test users finding features 'cool', while disregarding the potential for false positives and negatives, may have inflated the ratings. False positives and negatives can be excruciatingly detrimental to user experience, forcing the user to redo work. "Allowing direct control when necessary" is a good way to limit the damage, but perhaps some functions should not be left to interpretation of sensed data at all.

One feature that was implemented that I was skeptical about for that very reason of false positives and negatives was power management. Imagine accidentally turning off the phone during an important task. I feel that relatively rare but impactful tasks such as powering on/off a device should have a strong, foolproof control sequence: the iPhone, for example, requires the user to press a button and hold for a short period of time, then confirm with a swipe across the screen. However, the analysis of how to determine whether to power on in the paper might be sound for other tasks, such as powering the display on or sleeping it. The iPhone, for example, recognizes when you pick up the phone to answer a call and turns the display off as the phone rises.


Richard Shin - 9/12/2010 18:15:57

A Morphological Analysis of the Design Space of Input Devices

In this paper, the authors propose a set of attributes by which to analyze the large number of input devices which had been proposed by the time that the paper was written, in 1991. The authors call it a "morphological design space analysis"; they formally defined an abstract input device as a 6-tuple, encapsulating properties such as how the device is manipulated (position and force in its linear and rotary forms) and how it is measured (in absolute and relative terms). They project existing input devices into this "design space". Then they attempt to systematically measure the effectiveness of various types of input devices, by how much footprint they take and how much bandwidth they provide. Using these two metrics, it is possible to predict the performance of hypothetical input devices by plotting them on the design space and then calculating their performance characteristics.

In reading this paper, I greatly appreciated the very systematic and methodical view that the authors took in analyzing input devices. In particular, the methods proposed to measure the effectiveness of input devices seemed particularly novel. A part of it predicts how difficult certain specified tasks (like selecting a word or a character) would be, much like Fitts' Law, but while taking into account the physical characteristics of input devices such as which muscles they exercise (since, for example, people can control their fingers more precisely than they can control their neck or arms). By standardizing the specific tasks for measurement, the 'precision' of a device can be calculated, and different devices can be compared using this metric. In a sense, this paper seemed quite similar to the 'pointing and pondering' paper we read recently, in that it proposes a framework for evaluating things, except that here they consider input devices rather than the carrying out of tasks. It seemed that the techniques presented in this paper would enable faster research and development of input devices, but unfortunately there didn't really seem to be new devices available today that the authors of this paper had evaluated in some fashion.

While this paper attempts to be very systematic, and it proposes a formal system for classifying and evaluating input devices, it didn't seem as methodological as I would have liked. Of course, in order to be able to model a wide variety of input devices, the definition of the design space cannot be too constraining; the authors noted that, despite their efforts, voice could not be plotted in their design space, for example. Nevertheless, it didn't seem to me that extrapolations of the input device measurements could be easily made by the definition of the design space. For instance, the authors note that the headmouse is unsuitable, compared to the mouse, for the 3D selection task they describe by showing a diagram of the headmouse's location in the design space, but it seemed quite unclear how this conclusion could be made solely by this diagram.

Sensing Techniques for Mobile Interaction

This paper describes the use of various sensors for improving interaction with mobile devices (specifically, in this paper, personal information managers, but the same techniques could be extended to any other mobile computing device). The authors note that mobile devices present new HCI challenges, since people use mobile devices in a wide variety of settings (unlike a desktop computer, for example, which is always used at a desk) and the building blocks of interaction with these devices might differ from those of the better-studied desktop computers. In order to address these challenges, the paper proposes the use of a plethora of additional sensors (specifically, a proximity sensor, touch sensors surrounding the screen bezel and the back of the device, and a tilt sensor) to better determine the user's environment and support gestures more natural for mobile devices, and implements them in specific applications (voice memo recording, changing display orientation, tilt scrolling).

The idea that mobile devices need to be aware of their environment seems perhaps the most important point made in this paper. By adapting the device to its surroundings, from the parts that are specifically controlled by the user (e.g., whether the device is in a bag or in the user's hand) to those that simply reflect the state that the user is in (such as the location of the user or whether the user is walking), the device can better support the user rather than expecting the user to conform to its model of the user's state in order to effectively use the device. The user can communicate intentions to the device with less cognitive load compared to traditional interaction methods. This idea is quite similar to the notion of "background interaction" from ubiquitous computing that we studied, as the authors themselves note. In fact, some of the applications of sensors explored in this paper have now been widely adopted, such as changing screen orientation with tilt sensors. Future research could explore the use of a greater variety of sensors for better sensing the environment, as well as additional concrete applications for these sensors.

It seemed to me, however, that a significant limitation of this scheme would be that the device is essentially guessing about what the user wants through these sensors, like trying to determine whether someone is hungry by observing their demeanor rather than verbally asking about it. Given the limitations of available sensors, the device could end up performing unintended actions that are detrimental to the user experience, and throughout the descriptions of the various actions that had been implemented, false positives and negatives are mentioned consistently. With automatic screen rotation, for example, I have often been frustrated when my phone rotates when I hadn't intended it to, or doesn't rotate even when I try to hold it in a very specific way to trigger rotation. Given the limitations of sensing, minimizing the effects of unintended or unsensed actions and determining which actions are best suited for these methods seem like important areas for future work.
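The rotation frustration described above is essentially a thresholding problem. One common way to reduce such false positives, sketched below as a minimal Python illustration (the angle thresholds and dwell count are invented placeholder values, not anything from the paper), is to combine hysteresis with a dwell time, so that a brief or borderline tilt never flips the display:

```python
PORTRAIT, LANDSCAPE = "portrait", "landscape"

class OrientationFilter:
    """Debounced tilt-to-orientation mapping: a different angle is needed to
    enter landscape than to leave it (hysteresis), and the candidate state
    must persist for several consecutive samples (dwell) before switching."""

    def __init__(self, enter_deg=60, exit_deg=30, dwell_samples=5):
        self.enter_deg = enter_deg            # roll needed to enter landscape
        self.exit_deg = exit_deg              # roll needed to return to portrait
        self.dwell_samples = dwell_samples    # consecutive agreeing samples required
        self.state = PORTRAIT
        self._count = 0

    def update(self, roll_deg):
        """Feed one tilt-sensor sample; returns the (possibly unchanged) state."""
        if self.state == PORTRAIT:
            candidate = LANDSCAPE if abs(roll_deg) > self.enter_deg else PORTRAIT
        else:
            candidate = PORTRAIT if abs(roll_deg) < self.exit_deg else LANDSCAPE
        if candidate != self.state:
            self._count += 1
            if self._count >= self.dwell_samples:
                self.state = candidate
                self._count = 0
        else:
            self._count = 0
        return self.state
```

With these (assumed) parameters, a momentary wobble past 60 degrees does nothing, which is exactly the trade-off the response raises: the same filtering that suppresses accidental rotation also makes deliberate rotation feel slightly less responsive.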


Aditi Muralidharan - 9/12/2010 18:46:57

Card, Mackinlay, and Robertson present an abstract way of organizing input devices into groups based on the physical properties of a device to which they are sensitive, and the physical feedback that the device gives in return. Their diagrams showing where different devices lie in comparison to each other aren't that easy to understand, but they make more sense than the incomprehensible setup they use to derive the simple position/force and angle/torque x, y, z dimensions.

One of their motivations for doing this is that such an organization would help suggest new input devices. Nevertheless, I am skeptical of how productive it would be to create devices that match different input/output points in this space without at least some external principles as guidance.

By contrast, Hinckley et al. propose concrete input ideas: tilt, proximity, and touch sensors in mobile devices, which seem so obviously good that it seems strange there were once mobile phones *without* these capabilities. Capacitive touch sensors, especially, go toward harnessing the high bandwidth of finger gestures mentioned in the first paper.

They conduct user studies using the tilt sensors to automatically start and stop a voice recording application, to switch between portrait and landscape display modes, to tilt-scroll, and to automatically turn the device on, and report success.

Input devices seem to be moving in the direction of removing everything but the awareness of a goal from between the user and the device. The user shouldn't have to think about how to accomplish his or her goal, but be able to just do it.


Luke Segars - 9/12/2010 18:51:37

A Morphological Analysis of the Design Space of Input Devices

The paper attempts the daunting task of creating a meaningful classification for the plethora of input devices that were available at the time (1991). The idea of design space analysis has apparently been used in a number of other fields before and is intended to give users the ability to quickly estimate the potential usefulness of a particular idea based on where it falls in the space.

One of the biggest challenges I have encountered in the field of human-computer interaction is the fact that it tends to blend the boundary between science and art. Science, obviously a process that benefits from quantification, works in absolute terms that can usually be reduced to formulas. Art and design, on the other hand, are much harder to reduce to simple rules and possibility spaces.

Unfortunately, it seems to me that this is an attempt to structure something beyond its limits. The very premise of design is that it isn't confined to a single possibility space (except those of physics) but instead improves as previous design spaces are shattered and new possibilities emerge. In “Sensing Techniques for Mobile Interaction,” the authors describe how it may be possible to remove many physical elements from interfaces through the use of sensors and behavior prediction. This is a possibility that could easily have been left out of a 1990s-era design space but is becoming increasingly important today.

The authors show signs of struggle throughout their paper as they attempt to quantify input devices. For example, they define the precision of a particular device as “the [Index of Difficulty] that requires the same amount of time as the easiest hard task of the mouse.” When they later try to put this definition into practice, they have to assume that a particular task for a particular 3D input device is equivalent because it is difficult, if not impossible, to compare the precision of two tasks in this way. They also dedicate a reasonable amount of thought to the idea of muscular bandwidth, which is one of the factors that determines where a technology will fall in the design space. However, the authors give very little background on where particular muscle groups lie in this regard despite its use as a major classifier in the proposed space.

Ultimately I was disappointed by this attempt. I think that it is a particularly difficult goal to accomplish, and we are probably still not at a point today where the flood of incoming technologies is ready to be classified cleanly. New and previously unimagined input technologies are, after all, coming out every couple of years. Even if the authors had identified a satisfactory design space, they spent very little time discussing the benefits that establishing the design space would bring. I suspect that this goal may have to be shelved and given time until we move closer to the optimal range of technology to support our natural human strengths.


Drew Fisher - 9/12/2010 18:54:27

A Morphological Analysis of the Design Space of Input Devices, 1992

This paper takes a mathematical approach to the quality of physical input device designs to compare them in terms of precision and bandwidth, as well as to determine their relative appropriateness for particular targeting applications.

The concept of the paper - observing facts about the input device, the operator, and Fitts' law, creating a model to predict factors of the interaction like bandwidth, and then experimentally verifying results - seems sound. While the graphs could be slightly less confusing, I agree with their general conclusions.

The model described in Table 1 seems a bit odd, including force and torque yet missing acceleration, which would allow an input device to sense gravity and motion. Perhaps these could be modeled as part of the "state" of a device with respect to position (taking derivatives), but since the physical properties also list movement, I suspect this to be unlikely. Perhaps the concept of throwing around electronics had not yet become widespread in '92?

The discussion on desk footprint seemed a little off-topic, and not really relevant to the discussion. Looking back with 18 years of hindsight, I can't help but think that a flat-panel monitor instead of a CRT would save far more deskspace than avoiding a particular input device.

This paper is valuable because it uses both math and experimental data to show that the mouse is an excellent pointing device. This is certainly upheld by the continued presence of the mouse in computing today.


Sensing Techniques for Mobile Interaction, 2000:

This paper discusses using sensors like a proximity sensor, touch sensor, and accelerometers to enable intelligent and automatic interactions with a PIM device, making use less effortful for the user.

Whereas the paper discusses a PIM device in particular, I can see its applications to cell phones and other mobile devices. Since this paper's publication, proximity sensors have been employed by many cell phone manufacturers to determine if a phone is being held up to the user's ear or not, particularly to disable touchscreens. Although the iPhone and iPad may have brought accelerometer-based autorotation to the masses, it appears that it was this paper that first proved the value of the mechanism.

Whereas the paper primarily discusses the sensing concepts in general, I wonder how much of the system's success or failure is based on the timing behavior of the devices - getting a boundary condition off by 100msec could be the difference between feeling responsive for the user or not, and getting a false positive or not. It'd be neat to see data on what timescales users find best.

I also liked how the experimenters provided user data, although I find a pool of seven people somewhat small for making any general conclusions.

This paper is important because it laid foundations for devices predicting desired interactions based on combinations of sensor input, rather than forcing the user to manually specify everything.


Linsey Hansen - 9/12/2010 18:55:18

Sensing Techniques for Mobile Interaction

In Sensing Techniques for Mobile Interaction, the authors discuss interaction with mobile devices and what new sensing techniques could be used to further improve the user's experience. They introduce several mostly new interactive sensing methods for mobile devices (some had been similarly implemented on existing devices at the time) which include voice memo detection, portrait/landscape detection mode, tilt scrolling and power management.

This paper is significant because it pretty much mapped out the next steps for improving interaction with mobile phones and similar devices. The authors share the idea of adding sensors to small mobile devices, thus allowing the device to have a better idea of its current situation and better adapt to suit the needs of the user. They also bring to the reader's attention that mobile devices are much different from desktop computers, in that both their environment and their actual usage are different, since mobile devices are used in a variety of places on the go and are often only used in short bursts; interactions with these devices should accommodate these differences.

For what was probably the first time, this paper introduces the idea of having a personal information manager (or PIM as they called it) actually interact with the surrounding environment. The authors accomplish this with the use of three sensors: touch, proximity, and tilt sensors. Using these sensors, common tasks that the user once had to do manually can then be automated to some degree by the device, such as using a combination of tilt, touch, and proximity to allow the user to quickly record a voice memo (as opposed to going through numerous menus) or automatically shifting the screen orientation based on the tilt sensor input.

I feel like most of what is described in this paper can be related to the iPhone and its many functions. It was one of the first heavily commercialized products to implement these sensor-based interaction techniques, and the success the iPhone has since seen should speak for how successful these techniques are with consumers (though much of that success is also due to how shiny and fancy iPhones are). However, even the iPhone does not implement everything described. If I could have a phone with touch enabled on the entire device, so that I could do different touch-based interactions on the rest of the phone while not on the screen, that would be nice (simple tasks could then be accomplished directly, instead of needing to go into an app). Having an easy way to make voice memos to myself would also be nice, since I actually don't use the voice memo application just because I am lazy and find the process cumbersome. The power management method described would also be great, because having my phone go to sleep while I am holding it is annoying.

A Morphological Analysis of the Design Space of Input Devices

In their paper, Card, MacKinlay, and Robertson discuss the design space, and a new method for development called morphological design space analysis. This method relates different input device designs to points in a parametrically described design space, which should allow developers to both generate the design space and test the respective designs.

The authors split their new technique into two parts, the first being generating the design space, and the second testing points in the design space. In order to create the design space, one must first model the interaction, then use that model to model the language of input device interaction, which involves creating a primitive movement vocabulary and a set of composition operators. The design space is then created from all combinations of primitive movement vocabularies and composition operators. After modeling the design space, the authors discuss how to test points within the space, where testing is mostly a matter of comparing a device's footprint and bandwidth.

This paper is significant because it presents a better way to systematically analyze different regions of the design space. At the time this paper was written, most design spaces were represented in ways that did not allow developers to fully analyze the different aspects of their design space. Toolkits helped with implementation but did not help much with design choices; taxonomies provided a way to classify different devices, but these classifications were either ad hoc or covered only continuous devices; finally, performance studies were able to create abstractions and study techniques, but many did not fully agree. The authors wrote this paper hoping to remedy these existing issues.

While this paper is not necessarily about technology, it does use the example of the mouse vs. the headmouse to demonstrate why its development method is useful. While going over bandwidth and discussing how the wrist has better bandwidth than the neck, the authors also noted that the fingers have better bandwidth than the wrist, thus hinting at an interaction technique perhaps superior to the mouse. Given that this paper was written almost 20 years ago and the mouse is still the primary device for pointing-related computer input, it is interesting that there are now touch pads, which seem to make more use of the fingers than the wrist (especially the new Apple touch pad and Wacom touch devices). These new devices definitely show more promise for simple tasks such as scrolling and browsing the internet, though they are still kind of clunky in some areas; only time will tell if they can ever beat out the actual mouse.


Pablo Paredes - 9/12/2010 18:55:31

Summary for Card, S., Mackinlay, J. and Robertson, G. - A Morphological Analysis of the Design Space of Input Devices-

The morphological approach presented in this paper allows us to describe an input device in measurable terms, as described by the primitive language, and using composition operators that make it possible to map the input device in terms of its capabilities, as well as the integration between its components and with other devices, evaluated via metrics of merit that integrate the skills of the device and the user.

The taxonomy classification based on metrics of linear and rotational displacement versus force, movement, and position values (i.e., morphological values) allows for a clear view of the types of devices and their applications. It also opens the possibility of experimenting with combinations of the elements to define new input interfaces.

Complementarily, in the morphological view it is simple to observe (via circles and connection lines) the integration of the elements of the devices in terms of the union of their components, the layout plane where they are integrated, and the potential connections with additional devices.

In terms of merit metrics, two key elements are considered. Beyond expressiveness, measured as an input device's ability to map user desires, effectiveness is a merit element that must be evaluated carefully so that, beyond performance, nonperformance metrics (aesthetic, ergonomic, etc.) are also considered. However, some basic performance metrics, such as pointing speed, are of key importance: a gap between the speed at which a human performs a task and the device's ability to actually accomplish it can generate stress in the human, and therefore poor interaction with the machine.

The overall notion of including a metric of precision by decomposing it in terms of the Fitts' law index of difficulty (bits) and the human information-processing rate (ms/bit) presents a rich view of what one can expect to achieve with a given device used for a specific task via a specific muscle group, and yields a good opportunity for improvement and innovation in input devices.
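The decomposition described above can be sketched in a few lines. This is a hypothetical illustration only: the ms/bit slopes below are invented placeholder values, not the paper's measured ones, but the structure of the calculation (index of difficulty times a muscle-group processing rate) matches the argument being summarized:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits (Shannon formulation)."""
    return math.log2(distance / width + 1)

# Illustrative, assumed processing rates per muscle group (ms per bit);
# the paper derives real values experimentally.
SLOPE_MS_PER_BIT = {"fingers": 50, "wrist": 100, "neck": 250}

def predicted_time_ms(distance, width, muscle_group):
    """Predicted movement time: ID (bits) x processing rate (ms/bit)."""
    return index_of_difficulty(distance, width) * SLOPE_MS_PER_BIT[muscle_group]
```

Under these assumed slopes, the same pointing task driven by the neck (a headmouse) takes several times longer than one driven by the wrist (a mouse), which is the shape of the comparison the paper makes.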

Overall the paper indeed shows a great advance in exploring the morphology of input devices as one of the key syntactic elements of HCI, viewed as an artificial language to better integrate humans and machines... I believe the authors failed to mention at least the notion that there are other cognitive and contextual elements, such as emotions, workspace, etc., which can affect the precision of device use... For example, a hypothesis could be made that a mouse is not the most adequate device to use in situations of high external stress, as hand movement could be seriously altered.

Summary for Hinckley, K., Pierce, J., Sinclair, M, Hovitz, E. - Sensing Techniques for Mobile Interaction -

The paper explores the opportunities that mobility brings to design, as more information about the user's context during his/her daily routine becomes available. It describes the use of interacting sensors as a tool to bridge a complex gap between a mobile user (one who moves across contexts) and the device. One key element of success in the design of the device is to provide it with the ability to support minimal (less disruptive) interactions, as the user is also involved in real-life routines that demand their attention. A new paradigm of interaction is defined by using sensors to enable a background/foreground plane, where more readings and actions are incorporated in the background, as opposed to current GUIs.

For some actions, where the user has little control or no explicit activation action is performed, auditory feedback is crucial to make sure the user stays engaged with the task being performed and knows when it has been completed.

Another key element needed to accurately sense context is to define a sensor information architecture that incorporates the interactions between sensors... This architecture should allow parallel access to the information state and application functions.

Another key element of mobile interface design is the detection of stable states of usability... Issues such as moving the device to change its position due to space needs, showing the content to other users, and movement due to environmental reasons (car, train, etc.) must be incorporated to maintain a balance between the responsiveness of the automated tasks and the avoidance of noisy usage patterns. Also, the notion of parsimony has to be maintained by making sure that the last state of the device is preserved when it is left to rest.

Overall, the need for a cost-benefit balanced approach toward design and power consumption is an element already being pointed out in light of this seminal work in mobile interface design. Good design, understanding of user needs, understanding of user interaction performance metrics, and overall a calm and easy interaction should be the goal, above any new cool feature that could prove irritating over time. Additionally, the option for the user to control the device is important.

The notion of interactivity with similar devices, as well as with distributed environmental sensors, is not mentioned or explored. This notion should be incorporated into the analysis at early stages to avoid further I/O issues. Additionally, the notions of security and privacy are not included... Having a device turn on just by picking it up sounds nice, but having control over who accesses its functionality is important and must be included in the workflow analysis for mobile interactivity.


Siamak Faridani - 9/12/2010 18:56:17

Paper I: A morphological analysis of the design space of input devices

In this paper Card et al. provide a framework to critically analyze and study input devices. They parametrize each input device using parameters like its physical properties. Their framework places each input device in a design space, and the resulting design space can be used to analyze the effectiveness and expressiveness of the device. In addition to analysis, there are a number of design benefits to using the morphological design space; for example, one can also visualize the taxonomy of input devices.

They start with a thorough study of the related literature and build their model and framework on top of the existing research. They categorize the development of human-machine interfaces into three lines of development. Of the three categories, "Toolkits" is not covered as much as "Taxonomies" and "Performance Studies". They critically analyze the Foley-Wallace-Chan scheme, point out its defects, and try to build their design space so that it does not possess similar problems.

I was more excited about their "Performance studies" section. It was interesting to see references to the second paper that we read (research at Xerox PARC that showed that the mouse is the optimal input device). It truly demonstrates that new science is built on older research. After reviewing the related work they move on to set up their design space. The design space, as they describe it, is the abstraction of device performance and taxonomy, in addition to a number of different parameters, in one graph. They break down the job of an input device into translating a combination of moves into a desired function with desired parameters. In order to describe these movements they set up the "Primitive Movement Vocabulary", which featurizes each input device as a six-tuple. On this vocabulary we can define composition operators; a mouse, for example, can be viewed as a merge composition of two one-dimensional sensors. The design space allows us to look at other factors quickly and easily: factors like desk footprint, pointing speed, precision, errors, time to learn, cost, etc. can be shown directly on design graphs and used for analysis or decision making (for example, the footprint can be literally translated into the size of the dot associated with each device). Their discussion of the bandwidth of human muscle groups was one of the most interesting sections of the paper, and later, in the discussion section, they report a funny experiment with one of their lab mates to prove to him that the headmouse is not a successful device for text editing.
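The six-tuple and the merge composition mentioned above can be sketched as a small data structure. The field names and toy resolution functions below are my own reading of the paper's ⟨M, In, S, R, Out, W⟩ tuple, not its actual notation:

```python
from dataclasses import dataclass, field
from typing import Callable, Tuple

@dataclass
class InputDevice:
    """Rough sketch of the paper's six-tuple <M, In, S, R, Out, W>."""
    manipulation: str                       # M: the physical property sensed
    input_domain: Tuple[float, float]       # In: range of physical values
    state: float                            # S: current device state
    resolution: Callable[[float], float]    # R: maps In to Out
    output_domain: Tuple[float, float]      # Out: range of logical values
    properties: dict = field(default_factory=dict)  # W: other device properties

def merge(a: InputDevice, b: InputDevice) -> tuple:
    """Merge composition: the cross product of two devices' domains,
    e.g. two orthogonal 1-D movement sensors composing into a 2-D mouse."""
    return (a, b)

# A 2-D mouse as the merge of two 1-D sensors, as the response describes:
x_sensor = InputDevice("move dx", (0.0, 100.0), 0.0, lambda v: v, (0.0, 100.0))
y_sensor = InputDevice("move dy", (0.0, 100.0), 0.0, lambda v: v, (0.0, 100.0))
mouse = merge(x_sensor, y_sensor)
```

The point of the representation is that other composition operators (layout composition, connect composition) can be defined over the same vocabulary, so a whole device space falls out of a handful of primitives.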

I found the paper a good read, and I believe their design space can be used to look critically at different input devices; however, I am not sure it can be used for designing new input devices. The authors do not comment on how an analysis of the design space can be translated into improvements to current designs, and while I certainly understand the value of the analysis for rejecting bad devices, I do not see how we can use it to iterate on good devices and make them even better.

Paper II: Sensing Techniques for Mobile Interaction

This UIST paper is a report on developing new interaction techniques based on sensor feedback in mobile devices. The authors look at the way we use mobile devices and try to build interaction techniques that blend into our lifestyles. For example, they augment a device with a 2-axis linear accelerometer and use it to sense when a person is walking. I used to own an HTC cell phone with the Windows Mobile 5 operating system when the iPhone came out, and I am surprised to see many of these techniques used in the iPhone while the paper is by people at Microsoft Research. As far as I remember, features like portrait/landscape display based on the tilt sensor were not made available in the 5th version of Windows Mobile. Perhaps this is another example of how Apple successfully uses HCI research that is paid for by other research centers.

The authors build a relatively cheap prototype by augmenting existing devices with sensors. Using the tilt sensor they characterize the usage of the device (looking at the display, holding it at the side, walking, etc.) and use this to conduct a usability study with 7 participants. Throughout the paper they make it clear that their usability test is not rigorous (they call it "informal") and they mostly use the results to improve their designs. (In their informal experiment section they in fact use their 7 participants to do statistical hypothesis testing and use the resulting low p-values to support their hypotheses. Their approach reminds me of the paper "The Magic Number 5: Is It Enough for Web Testing?", also by MSR people, presented at CHI03.)

The paper provides a look into designing an interaction technique, prototyping and testing it, and reporting the results. It is less structured than the bubble cursor paper, in which we had one interaction technique, a number of clear hypotheses, and rigorous user experiments with statistical analysis to support them. As with every design report, I would like to know more about the cases where they failed; they report a couple of cases but could include a lot more. They could also have isolated each technique separately and studied it more thoroughly.


David Wong - 9/12/2010 18:57:40

1) The "A Morphological Analysis of the Design Space of Input Devices" paper discussed a new way to systematize the design space of input devices. It illustrated the morphological analysis of the design space using the example of a mouse and a head mouse. The "Sensing Techniques for Mobile Interaction" paper discussed several experiments exploring the possibilities of adding cheap sensors onto mobile devices to enhance the user experience. It concluded with the notion that the addition of cheap sensors can play an important role in the future of mobile devices, but careful planning and integration of traditional mobile phone interaction techniques will be needed.

2) The "A Morphological Analysis of the Design Space of Input Devices" paper is similar to other papers from PARC, as it investigates another way to systematically describe a problem space. The approach taken in the paper is quite novel, as far as I'm concerned, and offers an interesting new way of thinking about input devices. It allows researchers to systematically prove whether one input device is better than another, and, more interestingly, allows researchers to systematically theorize about new input devices that don't exist.

The "Sensing Techniques for Mobile Interaction" paper contributed to the HCI literature an initial attempt at researching the possibilities of mobile devices. Many of the areas explored in the paper have already become commonplace in current mobile technology. As such, the paper was a significant contribution, as it correctly predicted, and possibly influenced, research in voice recognition and cell phone orientation.

3) I thought that the argument in the "A Morphological Analysis of the Design Space of Input Devices" paper was sound. Their methods were viable, which was also confirmed by the challenge from their colleague. The only aspect of their paper that seemed like a stretch was the ability to theorize new input devices given the design space. There seemed to be too many variables to effectively use the design space to theorize about new devices. It felt more like overcomplicating the brainstorming concept and adding overhead to using common sense. Nevertheless, it does seem like an interesting way to start the brainstorming process.

The experiments in the "Sensing Techniques for Mobile Interaction" seemed valid. The problem was well motivated as mobile phones around that time ('00) were becoming more powerful and capable of having sensors embedded in them. The paper also covered some of the shortcomings of the experiments.


Matthew Can - 9/12/2010 19:00:32

A Morphological Analysis of the Design Space of Input Devices

In this paper, the authors systematize input devices by organizing them into a design space based on device parameters. The authors show how this can help improve the analysis of input devices.

The main contribution of this paper is that it builds on previous work on systematizing input devices. The authors develop a morphological design space analysis that incorporates the results of previous work. This design space is generated by characterizing devices in terms of several parameters. In particular, I liked the concept of representing input devices as primitive movements that can be combined with composition operators because it can describe a rich set of input devices.

The authors show how tests can be used to characterize regions of the design space in terms of expressiveness and effectiveness. For example, an analysis of the footprint of various input devices reveals which regions in the design space have a large footprint and which have a small one. In addition, the authors analyze the bandwidth of input devices, plotting both device bandwidth and task bandwidth in the design space to illustrate which devices are appropriate for a pointing or viewing task. I am skeptical of the benefits of analyzing devices in terms of this design space. It seems to me that the bandwidth analysis requires no notion of a morphological design space, nor does it add depth to the analysis to frame it in terms of this space.

According to the authors, one benefit of analyzing the design space is that it can provide engineers with suggestions on which kinds of devices or device parameters are suitable for applications they are developing. This may be true, but it does not explain what additional power this method of analysis provides over the previous methods. This is something I wish the authors had addressed in detail.


Sensing Techniques for Mobile Interaction

This paper explores how various kinds of sensors can improve users' interaction with mobile devices by exploiting context. The researchers test usability and user reaction for four applications in which mobile device input is augmented with proximity, touch, and tilt sensors.

One interesting aspect of the paper is that it considers ways in which information from multiple sensor types may be used in tandem. The authors give the example of the voice memo application that used only the tilt sensor suffering from false positives. With multiple sensors, they were able to create a unique gesture that did not suffer from the same problem.
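The multi-sensor gesture described above can be sketched as a simple conjunction of sensor conditions. This is a hypothetical illustration, not the authors' implementation; the sensor names and thresholds are my own assumptions.

```python
# Sketch of a multi-sensor "record voice memo" gesture: tilt alone causes
# false positives, so the gesture fires only when touch, proximity, and
# tilt readings all agree. Thresholds are illustrative assumptions.

def detect_voice_memo_gesture(holding, proximity_cm, tilt_deg):
    """Fire only when all three readings are consistent with the
    device being held up to the user's face."""
    held_to_face = proximity_cm < 8      # proximity sensor: device near the head
    upright = 30 <= tilt_deg <= 80       # tilt sensor: phone angled as if speaking
    return bool(holding and held_to_face and upright)

# Tilt alone would misfire here (device resting on a slope), but the
# combined gesture does not:
assert detect_voice_memo_gesture(holding=False, proximity_cm=50, tilt_deg=45) is False
assert detect_voice_memo_gesture(holding=True, proximity_cm=5, tilt_deg=45) is True
```

The conjunction is what eliminates the false positives: each individual sensor can still misread, but all three rarely misread at once.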

I also like the focus the authors place on the context in which mobile device interaction takes place, namely that the form of the interaction can adapt to the context. Many applications are built for use in a single context, but what is interesting is how sensor data can be used to make application interaction more flexible for the user. For example, a mobile phone that can sense that I am in a loud environment may switch to vibrate mode to notify me of phone calls.
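The context-adaptive notification idea above can be made concrete with a tiny sketch. This is my own hypothetical example, not something from the paper; the decibel threshold is an assumption.

```python
# Hypothetical context-adaptive ringer: switch to vibrate when the
# ambient noise level is high. The 75 dB threshold is an assumption.

def ringer_mode(ambient_db):
    """Pick a notification mode from the sensed ambient noise level."""
    return "vibrate" if ambient_db > 75 else "ring"

assert ringer_mode(85) == "vibrate"   # loud environment: a ring would be missed
assert ringer_mode(40) == "ring"      # quiet environment: ring normally
```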


Arpad Kovacs - 9/12/2010 19:02:41

The Card, Mackinlay, and Robertson paper presents "morphological design space analysis", a new approach to identifying and categorizing input devices which creates a parametric representation of the central ideas of input devices and describes individual designs as points in the design space. They define a formal language of human-computer interaction (characterized by physical actions known as manipulation operators, input and output domain sets, the current state of the device, a resolution function mapping the input domain set to the output domain set, and a set of general-purpose device properties) in order to determine the space of sentences that can be formed by composing primitive input moves. They then describe metrics for evaluating the expressiveness and effectiveness of devices, namely desk footprint, bandwidth (pointing speed and pointing precision), error rate, learning curve, startup time, user preference, and retail cost.

I found the main contribution of the paper to be the taxonomy of a wide assortment of input devices shown in figure 4, which effectively abstracts most input devices into the basic physical motions used to manipulate them (linear and rotary motion, with absolute and relative qualifiers). This system works quite well for characterizing physical input devices, although it may be a stretch to map motions such as squeeze and shake to their analogues of n-dimensional sliders and knobs; perhaps this is why the authors hedge their claims with the "general purpose set of device properties", which seems like a catch-all fallback. It is quite interesting to look at the high-concentration areas in the table: the linear x-y positional sector seems especially saturated, probably because this problem space can be solved using relatively simple, existing technology (e.g., optical encoders). What I find even more intriguing are the gaps in the table: force- and torque-based devices seem underrepresented, while input devices that sense changes in force and changes in torque are completely absent, although this may be changing with the integration of servo motors into many devices, e.g., the force-feedback steering wheels and brakes in newer vehicle simulators.

My main criticism of the paper is that it mostly ignores the entire spectrum of non-physical input, which seems to be the fastest-growing area of HCI research today. The authors do acknowledge that they cannot categorize voice input, yet there are a number of other possible future input devices that do not fit this taxonomy, for example thought (e.g., nerve-impulse/electrode), heart rate, muscle tension, heat, and light input. The authors' approach seems to reinforce the computer's view of the human as a hand with an eyeball and an ear, capable only of sight, touch, and hearing, and it completely ignores two of the five senses (taste and smell).


The Hinckley, Pierce, Sinclair, and Horvitz paper explores how adding sensors (such as accelerometers, capacitive touch, and infrared) to mobile devices reduces the cognitive attention required for interactions in challenging, high-variance environments, and how better integration with natural gestures can minimize disruption to the user's daily routine. Most of the paper presents new interaction modes (touch/shake, voice memos, portrait/landscape mode, power management), then discusses implementation details and testing results.

In the 10 years since this paper was published, most of the input devices the authors anticipated (digital compass, tilt sensor, proximity sensor) have come standard with the latest smartphones (e.g., the iPhone or most Android devices), while a few they did not anticipate (GPS, camera) provide for even greater integration and automation of routine tasks. I think the most valuable contribution of this paper lies in its proof-of-concept implementation and affirmative results, which probably encouraged additional research and the eventual commercialization of these technologies. Given the high standards of today's devices, we take most of these interactions for granted, but they must have been pioneering, cutting-edge novelties in 2000.

I find it fascinating that although the authors were researchers at Microsoft, it was Apple's and Nokia's operating systems, rather than Windows Mobile, that took the initiative in pushing these technologies toward wide consumer adoption. It seems that hardware/software integration is crucial to the successful deployment of these technologies, and although Microsoft owned the software side 10 years ago, it did not have the foresight to create its own reference hardware designs to push the envelope, as Google has done with its Nexus One.


Kenzan Boo - 9/13/2010 01:21:52

Sensing Techniques for Mobile Interaction – Hinckley


I found this article the more interesting of the two. Although most of the ideas it describes have been commercially available for a while now, they were rather prophetic for their time. The article argues that we should incorporate "natural" gestures that users make without additional cognitive effort when attempting to do something; key examples were picking up the mobile device and holding it up to the user's face. Many of the features described have already been implemented in devices like the iPhone and other smartphones today, but some, like pick-up detection, which would be very convenient, have not. This may be due in large part to power costs on a mobile device, which are a much bigger concern than the convenience of not having to press a button to turn on the device. Another idea, the tilt scrolling described in the article, is brilliant in principle: it frees up usable screen space and allows quicker, more direct control of common actions like scrolling. However, in actual use on my iPhone, the requirement of holding the device steady and level makes it much more difficult to use than just having a scroll bar.
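The "holding steady" problem with tilt scrolling can be mitigated with a dead zone, which the following sketch illustrates. This is my own hypothetical illustration, not an implementation from the article; the threshold and gain values are assumptions.

```python
# Minimal sketch of tilt-to-scroll with a dead zone: without one, the
# user must hold the device perfectly level to stop scrolling.
# DEAD_ZONE_DEG and GAIN are assumed values, not from the article.

DEAD_ZONE_DEG = 5.0   # tilts smaller than this do not scroll
GAIN = 2.0            # scroll lines per degree beyond the dead zone

def scroll_rate(tilt_deg):
    """Map a tilt angle to a signed scroll rate; zero inside the dead zone."""
    if abs(tilt_deg) <= DEAD_ZONE_DEG:
        return 0.0
    sign = 1.0 if tilt_deg > 0 else -1.0
    return sign * (abs(tilt_deg) - DEAD_ZONE_DEG) * GAIN

assert scroll_rate(3.0) == 0.0     # slight hand tremor: no scrolling
assert scroll_rate(10.0) == 10.0   # (10 - 5) * 2 lines per unit time
```

The dead zone trades responsiveness for stability: small involuntary tilts are ignored, so the user no longer has to hold the device perfectly level.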


Morphological Analysis of the Design Space of Input Devices – Card


The article is primarily about stratifying and classifying input devices and their effectiveness. It provides models for classification such as bandwidth (how fast interactions can be made), accuracy, and footprint. It also describes the composition of operators, specifically merge, layout, and connect composition, which describe how the different inputs of one device, like a mouse, fit together; e.g., merge integrates the x-axis and y-axis inputs to create one input in both x and y.
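The merge composition can be sketched in a few lines. This is an illustrative reading of Card et al.'s operator, with names of my own choosing, not code from the paper.

```python
# Sketch of the "merge" composition from Card et al.: two primitive
# one-dimensional relative movements (a mouse's x and y encoder streams)
# are merged into a single two-dimensional input stream.

def merge(dx_stream, dy_stream):
    """Merge two 1-D relative input streams into one 2-D stream,
    as a mouse merges its x and y encoders."""
    return list(zip(dx_stream, dy_stream))

deltas = merge([1, 0, -2], [0, 3, 1])
# deltas == [(1, 0), (0, 3), (-2, 1)]
```

Layout and connect composition are different in kind: layout places devices side by side on a panel, and connect pipes one device's output domain into another's input domain.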


They also go into detail on how accurate an input can be. I found this particularly interesting: tools like multitouch surfaces are excellent for quick, direct input with an accurate mapping in the user's head, but they are very limited in accuracy. Such inputs have high bandwidth, about as fast as the user can move his or her hands, but their accuracy compared to another input like a pen is hugely decreased due to people's thick fingers.


The article later focuses on the comparison of the headmouse (i.e., the neck) and the actual mouse, comparing how much input bandwidth can be attained accurately from each. While I agree with the measurements, other problems also come into play when using something like the neck for input: it interferes with the visual feedback from the eyes to the user's cognitive processor, since moving the head also moves the gaze. If the brain must constantly compensate for that shift, the user will be slowed down. This goes back to the actual usability of the input, as with the tilt scroller.