A Tour of Sensing Techniques

From CS294-84 Spring 2013

Reading Responses

Valkyrie Arline Savage, PhD - 1/26/2013 10:48:42

The readings this week explored what it means to perform sensing in various non-WIMP, non-GUI, non-dedicated scenarios. Saponas et al. remarked that "computing in mobile scenarios is often peripheral to the act of operating in the real world", which seems to me like a good way to frame the readings as a whole.

The first reading considers what exactly changes about the interaction flow between the user and the machine in a non-dedicated scenario. How do we address the machine, ensure that it is attending to us, and tell it what to do? How do we avoid accidentally giving it commands? Instead of "how can the user tell the system what to do", the question becomes what are the "joint accomplishments of user and system necessary to complete the interaction". This reading was obviously written before Kinect, and in fact Kinect has done several things right in terms of showing a user what the system is thinking and that it is attending. It's left up to individual game developers, but many show skeletons of how they understand a user to be moving in-game, and there is a distinct area in which interactions can be performed (the system warns you automatically when you have gone, or are about to go, out of range). However, there is still the question they raise about abstract actions like "copy" and "save". What do these actions even mean for a physical interface? Storing the most recent state of an interface is implied by leaving it set that way. "Copying" an interface could be possible in the near future as things like 3D printers get better and better and we are able to copy more objects and components in a very literal way.

The second reading discussed multiple possible future interfaces for always-available use. The authors shared similar concerns to the first paper's, and went through how various interfaces did or did not measure up. They seemed very excited about visual output devices, and much less excited about audio output devices; somehow, though, the two were described in the same way! Their concerns about audio rather than visual output seemed to stem from its distracting qualities, while visual attention, they claim, is much easier to toggle. I don't know that I believe them; in fact, I think that audio feedback is a powerful and heavily underutilized tool for communicating interface state and actions.

Tovi Grossman's implanted UIs paper is very... interesting. I think the world is not yet ready to hear that we can embed interaction components subcutaneously, whether they work well (which they seem to) or not. I personally think it would be really cool to have embedded devices, both for the reasons they give related to personal feedback and just for convenience, but there are still a lot of challenges ahead, the most important of which (I think) is power. A pacemaker doesn't have to perform wireless communication, so a surgically implanted battery that lasts 6-10 years is OK. These devices, in order to be useful, would probably need to draw power from body heat or blood flow or something similar, which is a whole other can of worms to open.


Arie Meir - 1/28/2013 17:38:54

The paper by Bellotti borrows a model of HHI from cognitive sciences and proposes a set of similar criteria to be used as canonical dimensions to systematically evaluate new HCI modalities for always-available (ubiquitous) mobile interaction.

As a person who sometimes gets too excited about the technical aspects of a system, I think that the 5A-space (Address, Attention, Action, Alignment, and Accident) can be thought of as a useful inspection framework in which novel devices might be evaluated to assess their utility. A clearly defined framework with concrete HCI examples can be a useful intellectual tool. The paper made me think of an entomologist who carefully classifies his insects into a nature-prescribed taxonomy in order to somehow reason about his object of study and attempt to generalize his findings.

The Morris et al. paper is a ~50-page smorgasbord of always-available input and output devices from different modalities, focusing on the dichotomy of bandwidth and SNR vs. the cognitive burden required to interact with the technology. The authors examine multiple interactive devices/approaches in the context of the sensory requirements imposed by the device (manual operation, requiring visual/audio attention).

Among the challenges of the field presented by the authors are ambiguity handling, sensor fusion, and cognitive interference, which can be loosely mapped onto the framework suggested by Bellotti: a system that handles ambiguous situations well scores higher on the Address and Attention rubrics, while effective sensor fusion and a design that keeps cognitive interference in mind can facilitate Action and Alignment between the machine and its operator.

While I enjoyed this comprehensive review of always-available devices, I think the paper could benefit from visualizing the different technologies/devices across the dichotomy axes suggested by the authors. This could enable interesting comparisons between various classes of devices.


The last paper, by Holz et al., presents a prototype of an implantable user interface device, tested on a cadaver arm as well as on an artificial-skin arm. The authors focus mostly on the technical issues of the device, namely power, input/output, and communication. While this work was the most technically detailed, and as intriguing as I find the idea of implantable devices, there are clearly some issues that need to be addressed. First are the "bio" aspects of the implantation process. The technology as described is a user interface device that doesn't offer any medical benefits, and although the FDA approved an implantable RFID chip in 2004, that chip was 11x2 mm, which meant it could be injected with a syringe. The authors don't address miniaturization in this work, although it seems important for the progress and acceptance of the technology. Can this technology be miniaturized to the point that it can be injected? What would the limitations of such a miniaturized technology be? Second, power-related aspects are also addressed rather cursorily. Since the device is likely to be battery- (or supercapacitor-) operated, a discussion of the power budget of such a device and the technologies available to provide that budget would be in order. Third, security and safety also raise several issues: what if someone can "sniff" your "skin-clicks" as you type your password, or imagine someone hacks into your implantable UI gadget and makes it generate heat, mounting a physical attack on the "wearer".
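The missing power-budget discussion can at least be sketched with a back-of-envelope estimate. Every figure below is an illustrative assumption on my part (an implant-scale cell, guessed sleep and active currents), not a number from the paper:

```python
# Back-of-envelope power budget for a hypothetical implanted UI device.
# Every figure below is an illustrative assumption, not data from the paper.

CAPACITY_MAH = 40.0        # assumed implant-scale rechargeable cell
SLEEP_MA = 0.01            # assumed deep-sleep current draw
ACTIVE_MA = 15.0           # assumed draw while sensing + Bluetooth are active
ACTIVE_MIN_PER_DAY = 30.0  # assumed daily interaction time

def battery_life_days(capacity_mah, sleep_ma, active_ma, active_min_per_day):
    """Estimate runtime in days from a simple two-state duty cycle."""
    active_h = active_min_per_day / 60.0
    daily_mah = active_ma * active_h + sleep_ma * (24.0 - active_h)
    return capacity_mah / daily_mah

days = battery_life_days(CAPACITY_MAH, SLEEP_MA, ACTIVE_MA, ACTIVE_MIN_PER_DAY)
print(f"~{days:.1f} days between charges")  # roughly 5 days with these guesses
```

Even under these optimistic guesses the device needs charging about weekly, which is why harvesting (or the paper's charging-mat idea) matters so much.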

Overall, the concept was cool, if somewhat Frankenstein-like, and it made me think about the miniaturization of mobile interfaces: how small can we go and still do something useful? Can a device the size of a grain of rice (for ease of implantation) be designed, manufactured, and powered by readily available means (e.g., piezoelectric actuation or inductive charging)? I found that the technical dimensions of power, I/O, and communication, coupled with the 5A framework suggested by Bellotti, provide a good mixture of technical and social dimensions to keep in mind when thinking about and designing new interactive devices.


Ben Zhang, PhD - 1/28/2013 23:33:13

  • Five Questions (Bellotti)

The classic GUI has established traditions and paradigms of design, while emerging UbiComp lacks serious consideration of design-process methodology.

How interactions should be designed in the new ubiquitous computing era is not clear; ideas from HHI might inspire the development of HCI when we consider communication between human and machine rather than commands from human to machine. The five questions raised in this paper provide a general framework for deliberation during design.

In this paper, abundant comparisons between traditional GUIs and new interaction mechanisms are provided and elucidated in a table. The corresponding challenges and risks of failure are also discussed. Several recent projects which leverage novel interactions are summarized as supporting examples. They all have parts of their designs that fit nicely within this framework, but some lack detailed consideration of certain aspects and tend to be error-prone (such as failing to consider accidents and design "undo" mechanisms).

A nice paper for framing the considerations in designing novel systems, with its emphasis on communication.

- Clearly structured with a main table summarizing the main points.

- The difference between "Attending" and "Alignment" may not be that huge, in the sense that both need to provide timely feedback to the other side of the "communication".

- Over-emphasis on "communication" might introduce too much overhead because of misunderstandings from the other side. Efficiency should also be taken into account in the design of novel interactions. (I have never seen anything better than a keyboard for interaction... I prefer Emacs to most GUIs. Though this requires typing training early on, it increases efficiency by several orders of magnitude in some cases.)

- I think this is a nice paper as a starting point for this class, since the methodology of analyzing the effectiveness of interactions can be applied later.


  • Always-available IO (Morris)

This is a survey of the sensors and input systems that enable always-available computing. With a sizable body of research into novel input modalities, it becomes very difficult to stay updated on these new devices. After a certain amount of time, a survey is necessary to summarize the state-of-the-art technology and point out its pros and cons.

This survey limits its scope to "always-available mobile micro-interactions". It gives a relatively complete view of various technologies for input: IMUs, touch, computer vision, mouth-based input, brain-computer interfaces, and muscle-computer interfaces. Bundled with the input, the system has to come with appropriate feedback mechanisms for interaction; haptic feedback, audio feedback, glasses, and other displays, together with corresponding examples, are then provided. A complete view of state-of-the-art technology enables us to analyze the development trend and identify the challenges beneath the appealing future. Addressing, multi-sensor fusion, generic gesture design, and cognitive interference are several major challenges standing before large-scale deployment of these always-available systems.

- An interesting paper which lists so many great research projects in this scope. Coupled with figures, it's easier for readers to follow (you can get a sense of those projects without googling).

- For each project, this paper not only talks about the main ideas, but also provides some insight into that specific enabling technology.

- The paper is structured in a less systematic way, though probably due to the variety within this topic.

- My personal feeling about reading this paper can be summarized using a quote from the paper itself: "impressed by the scope and depth of existing work". It's great to know about these projects, but I have too little time to go in depth now; I hope this course will cover most of them.

  • Implanted User Interface (Holz)

This is an investigation of the feasibility of implanted user interfaces, covering input, output, communication, and power supply. The increasing trend of miniaturization seems to enable the vision of implanted devices. To function as a user interface, such devices must be able to provide input/output/communication capabilities. Power is always a problem in mobile/ubiquitous contexts. As a proof of concept, this paper deals with these fundamental properties through carefully designed experimentation to quantify their characteristics.

This research surgically implanted devices into a specimen arm and conducted a series of experiments on the feasibility of the input and output components, the Bluetooth communication capability, and the possibility of a powering mat. The preliminary results suggest a promising future for the entire system. Besides, with the involvement of an anatomy professor and other assistants with biology backgrounds, the approach in this paper is rather practical.

- Explicit focus on the topics being investigated: input, output, communication, power.

- Practical experiments and systematic evaluations. A baseline is always provided to show readers the performance when implanted.

- As the paper itself discusses, some conclusions come too quickly. With a single participant in some experiments, the results become less convincing; extensive work might be needed before generalizing.

- It is fairly cool to have such implanted devices working in an invisible way while providing many unexpected services. Such visions, previously seen mostly in movies, now come to be evaluated in real-world experiments. I really like such practical work to measure and analyze the feasibility of these astounding ideas.


elliot nahman - 1/29/2013 1:05:21

Implanted User Interfaces

This paper is an investigation of the challenges posed by implanting devices beneath the skin. The authors list four challenges they examined: getting input, producing output, communication, and power. The authors find that traditional user interfaces do work when implanted beneath the skin.

This paper expands on the current literature by exploring the user interface question. Most implants that are used or have been proposed are “passive” in that they have no mechanism for the user to interact with them. This study suggests that traditional HCI methods are still valid in tiny implanted devices, so future products created for implantation can rely on these traditional interface methodologies. For those who wish to create tiny, implanted devices, this paper does seem to be an important first step.

I find it interesting that the authors define implanted devices as necessarily being implanted beneath the skin. One of the examples they cite a few times, hearing aids, does not fall under this definition. They also cite several benefits of implanted devices over wearable devices, such as staying out of the way of activities and avoiding social stigma. I suppose they are approaching this from a medical perspective, whereas I think more about commercial, voluntary devices. As such, my gut reactions to those two comments are TSA and Google Glass, respectively. It seems like there might be some activities where you want to be able to remove a device, such as going through security at an airport. There is also the concern of accidentally pushing buttons as you engage in other activities. The social stigma issue also depends on what the device is for; trendy devices like Google Glass are meant to be visible and shown off, not hidden. Meanwhile, if someone were just tapping on their arm, wouldn't that be deemed peculiar and lead to the social stigma of being thought insane? The authors' qualitative study does allude to this. Also, a major flaw I see is that implanting a device into one's forearm precludes you from using that hand to activate the device. With my phone, I routinely use both hands, and that flexibility is quite valuable; with implantation, you lose it. As put in Emerging Input Technologies, you cannot have your hands and use them too…


Making Sense of Sensing Systems: Five Questions for Designers and Researchers

This paper explores some interaction questions that have by and large been ignored in the field because traditional GUIs set up conventions which address them. Newer types of interfaces and modes of interaction lack conventions for addressing these questions and require reexamining how to solve them.

This paper deals more with abstractly examining the larger problems than with examining specific interaction mechanisms. I find their concept of focusing on the communication aspect quite interesting, and I like their reframing of Norman's stages of execution. Address, Attention, Action, Alignment, and Accident seem like reasonable distillations and abstractions of Norman's stages and of how individuals communicate. I also think they do a good job pointing out how the traditional GUI addresses these issues and what challenges lie ahead for alternate systems. These points are a useful reference to consider when designing a new device.

In the end, it seems like these problems exist not just because there is no precedent, but because implicit in ubiquitous computing and tangible interfaces is the notion of lowering the bar for computer interaction. The GUI solutions are not perfect, but they are conventions, and everyone has had to adapt to them. With new alternatives, people are less willing to adapt to the devices and more expectant that the devices will do things “right”.

Emerging Input Technologies for Always-Available Mobile Interaction

Basically: turn mobile devices into physical computing to make interaction easier and require less visual and manual distraction while the user interacts with the world. The authors cite the problem with mobile as being that it requires visual and manual focus; users enmeshed in their environment can spend about 6 seconds on a device before refocusing on their surroundings. So this paper is a survey of various input and sensing options.

I am really not sure what to say about this one… It is a very thorough survey of the available types of sensors for interaction, including ones that are currently impossible or impractical. They specify a focus on sensors that the user carries around rather than sensors situated in the environment. This makes great sense given the amount of infrastructure that would need to be built and questions of who would pay for it. As a survey paper, its contribution is in laying out the toolbox of possibilities for designers.

The discussion on ambiguity at the end makes me think back on the Implanted Interfaces reading, which I did first and probably should have done last. Depending on the activities the individual is engaged in, implanted sensors could be quite prone to segmentation ambiguity; someone could easily bump into the individual in a crowd and actuate the sensor. Overall, this paper seems quite extensive, and although it does have a particular bias, it is one that matches my own, so its assessments feel thorough and reasonable.


David Burnett - 1/29/2013 2:00:44

"Making Sense of Sensing Systems"

For most interactive computer systems today, how the system will react to user input is well-known. As systems integrate into more aspects of life, however, the standard input/output terminal format will be replaced with more pervasive controls. These controls, and their responses, must be carefully designed to provide the same level of interactive clarity as today's monolithic systems. This paper uses the current GUI-based paradigms to expose the underlying interactivity questions they answer, and re-direct those questions in new directions.

The framework the paper systematically describes is an effective approach for correctly designing our own research, as opposed to solving frustrating usage problems ad hoc. I expect the fundamental description of interactivity problems to be similarly useful in the design of any interactive system.

The comparisons between HHI and HCI provide an intuitive analogue that can be easily understood by designers, especially when performing context-aware "frame" sensing. As an unintended consequence, these comparisons may lead to quality "HCHI" systems -- systems that capture nonverbal cues humans use when communicating with other humans to later retransmit those cues to a human on the other side of a computer-mediated communication system. Given the age of this paper such systems may already have been built, but given the predictably low quality of every videoconferencing system to date this likely has not yet happened.

In refuting Norman's cognitive interaction, the author claims humans and computers are not equal partners in a dialog. Assuming "dialog" here means one cycle through Norman's seven stages of action, I believe the two parties are on roughly equal footing, because the computer was designed by a human with the intention of enabling this dialog; interacting with a computer is analogous to a time-delayed interaction with a human in another process-based capacity, such as buying an item from a store. Oftentimes, if one tries to think about how they would organize such a system, the correct method of manipulation can be more easily discovered.

In addition, the paper claims "a trend in HCI towards sensing systems that dispense with well-known interaction genres," but among consumer-grade interactivity I haven't observed this. Buttons and RFID tag readers still light up, Xboxes beep and display on-screen dialogs, and Siri prints back what it thought you said. Anyone designing an interactive system knows these standard genres and must follow them, or else create a frustrating and unusable system. It may be that some research systems miss one of the five major pieces presented in the paper, but publicly-released products are usually very hesitant to break the interactivity mold for fear of slow uptake.


"Emerging Input Technologies for Always-Available Mobile Interaction"

This paper describes ways of implementing the first and last steps from the first paper, by identifying technologies capable of indicating system attentiveness and annunciating system responses. Instead of everyday elective-use technologies, this paper describes only devices that are always available for input or output, rather than needing to be physically focused on for use. By focusing only on these types of devices, the authors make a detailed inventory of ways technology can be more useful to users without adding a proportional amount of strain.

The survey also described the details of modalities of which I wasn't previously aware. Where I knew that speech recognition was a difficult method of input in the wrong environment, it has nonetheless proven itself successful (e.g., "Xbox, next episode") and I believed it to be a viable candidate for always-available input. However, after learning about the cognitive load necessary to engage in conversation, speech-based input is now less attractive as a ubiquitous input.

The authors imply that always-available input and output devices will ease the mental burden of focusing on a mobile device and other real-world stimuli by reducing the amount of time engagement with the device requires to begin useful input or output. However, I have found friction between the user and the device is sometimes critical to reduce the mental burden of distractions, so framing the application of always-available inputs and outputs beyond the authors' task-juggling anecdotes is critical.

As a result of the always-available definition excluding devices that must be initially opened, activated, or otherwise enabled for input, any of the methods described in this paper will be subject to heavy false-positive analysis. Further, simple access control is present on nearly every mobile device, so any attempt to implement the goals set out in this paper may face significant hurdles: filtering accidental input from authorized users, and completely filtering out unauthorized users.


"Implanted User Interfaces"

Taking the idea of always-available one extreme step further, the last paper this week discusses implanted methods of interacting with technology. Primarily engineering challenges are described, including the act of sensing input and powering the implanted device, leaving exactly how to intuitively integrate such a device into an interactive system a subject for a later time.

The paper starts from a very wide field of input and output possibilities to introduce to a new environment and mode of operation. By weighing the pros and cons of each I/O technology as it relates to the implanted domain, it offers a new perspective on interactivity. It also exposed readers to the beginnings of how implantable devices could be used day-to-day by exploring which devices were most appropriate for unattended interaction.

My main criticism of a device such as this one is an amplified version of my criticism of the previous paper: namely, such cognitively loading distractions are now impossible to escape instead of simply difficult. Before the age of mobile communication, one writer described his ideal work environment as including "the telephone buried several miles away", and this sentiment likely resonates with a large segment of the population. The advantages of an implanted device are clear, but mitigating the disadvantages remains a significant task.

Secondly, the implanted devices were inappropriate for the application. Off-the-shelf electronics at the scale under examination are simply gigantic, and modalities such as speakers serve no useful purpose if implanted. Given the lack of attempted bio-compatibility or bio-interoperability, the physical phenomena studied simply reduce to whether a given type of energy is transmissible through human skin, and could have been settled with minimal lab tests.


YU-HSIANG CHEN - 1/29/2013 3:49:03

Making Sense of Sensing Systems: Five Questions for Designers and Researchers

This paper argues that in traditional UIs, such as desktop GUIs and remote controls, a lot of conventions are already established, and therefore designers only need to follow them without thinking too much. However, innovative UIs such as tangible interfaces and ubiquitous computing lack those pre-packaged answers and have to examine these design questions constantly.

One difference between the innovative interaction techniques and the GUI is that a GUI seldom initiates an interaction. The approach of modifying Norman's theory of action to fit a communication model is compelling. When working on the Smart Cup design last semester, much of the future work we discussed related to the five questions brought up in this paper. Therefore, I agree with the authors that designers have to deal carefully with these questions when working on innovative interaction techniques.

The paper is good at bringing up various examples of how other works tackle these challenges. It also explains some of the reasons that innovative interaction techniques encounter these challenges. Below are a few more that could also be discussed:

1. The power-saving mode of sensing systems. Sensors and ubiquitous computing devices often rely on battery power and sleep modes to ensure long-lasting power. Such techniques could mean the device is still sleeping when a user tries to interact with it.

2. How to deal with multiple inputs and outputs. For example, the Fitbit has an LCD screen on the device, it syncs with a smartphone app, and the user can also log on to the website to see their status. If there is a feedback or notification the system wishes to send to the user, which output should it go to?
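The tension behind the first point can be made concrete with a toy duty-cycling calculation: the longer a device sleeps between wake-ups, the longer its battery lasts, but the longer a user may wait for it to respond. All numbers here are assumptions for the sake of the example, not from any of the papers:

```python
# Toy duty-cycling trade-off: battery life vs. worst-case response latency.
# All figures (currents, capacity, wake window) are illustrative assumptions.

def duty_cycle_tradeoff(wake_interval_s, awake_s=0.005,
                        active_ma=10.0, sleep_ma=0.005, capacity_mah=100.0):
    """Return (battery life in days, worst-case response latency in seconds)."""
    duty = awake_s / wake_interval_s
    avg_ma = active_ma * duty + sleep_ma * (1.0 - duty)
    days = capacity_mah / avg_ma / 24.0
    return days, wake_interval_s  # worst case: input arrives just after a wake

for interval in (0.1, 1.0, 10.0):
    days, latency = duty_cycle_tradeoff(interval)
    print(f"wake every {interval:>4}s -> ~{days:5.0f} days, <= {latency}s latency")
```

Stretching the wake interval from 0.1 s to 10 s buys orders of magnitude more battery life, but a 10-second worst-case latency is exactly the "still sleeping when the user tries to interact" failure described above.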



Emerging Input Technologies for Always-Available Mobile Interaction

This study aims to create always-available interaction that allows users to transition efficiently between mobile computing and real-world tasks. It surveys and characterizes various sensors, determines whether they could enable this feature, and then discusses the challenges faced.

A few approaches, such as BCI and electrical sensing, are new to me and pretty amazing. And the challenges discussed are all put together well.

However, I believe that some of the inputs mentioned still require more attention than the others, and it can be questioned whether they are really always-available. These inputs include Harrison et al.'s Skinput and Abracadabra.

In addition, we also have to take the task content into consideration. If we only had to choose one of four options, as in Abracadabra, we could do it quite quickly on a mobile device as well. If we were to reply to an SMS using muscle sensing, it would demand no less of the user's attention than doing it on a smartphone.



Implanted User Interfaces

This paper examines the possibilities of using implanted interactive devices. It discusses the challenges of implanted UIs, tests that they work reasonably well under the skin, and simulates implants via artificial skin to perform tasks.

This paper quickly reminds me of wearable devices such as the Fitbit Flex and Jawbone UP, which are both wristbands designed to be worn all day and provide minimal interaction, but track a substantial amount of data about the user.

It is an interesting idea to implant devices into the human body not for medical use but for daily tasks. The use of a brightness sensor and a capacitive sensor for inputting hover actions seems quite novel to me. Testing the I/O under the skin to validate that the devices actually work when implanted was impressive work.

There are a couple of challenges that they might encounter:

1. The disadvantages of wearable devices that they mention are gradually being overcome by new designs. Both the Fitbit Flex and Jawbone UP are water resistant, allowing users to wear them in showers and swimming pools, so the devices can be worn all day. The Fitbit One is designed to be worn in various places, giving users flexibility and discreetness.

2. Comfort was not taken into consideration in this research, but it is a huge decision factor. Since the devices were only tested on a cadaveric specimen, there was no way to learn whether a user would feel pain, or something like 'it feels unpleasant when I sleep and rest my arm against the bed'.

3. The research assumed that the user wasn't wearing any clothing over their forearm, but in reality that's only possible in the summer. I would imagine many of the components would lose accuracy under thick sleeves.

4. Updates and replacement. For implanted devices, the cost of replacing the device is much higher than for a wearable one. The technology is improving rapidly, and it's likely that within a year or two the original model will become outdated.

The question for us to think about would be: given the risks, and the relatively minor disadvantages of wearable devices, what could be the applications and incentives for users to choose implanted devices?



Joey Greenspun - 1/29/2013 8:54:55

Making Sense of Sensing Systems:

This article focuses on using human-human interactions as a template to develop new human-computer interaction schemas. Various social science studies and techniques are evaluated to determine how best to develop novel interactions between humans and computers. From the start, this article touches on very interesting and novel ideas in the realm of human-computer interaction. The vast majority of these interactions are performed in the typical modality of a user interfacing with a GUI via a mouse and keyboard. The paper immediately touches on some fundamental assumptions that we, as users, never need to consider. We intuitively know when the machine is ready for us to interact with it, we know how to send commands and fix those commands when we send incorrect signals, and we know when the machine is busy. All of these states are very well defined because of a set of pre-packaged GUI standards that have become commonplace. This article argues that these conventions do not exist outside the standard mouse-keyboard-GUI modality. In the area of Ubiquitous Computing, these new gesture interactions need to be developed and refined until they become as intuitive and commonplace as a dropdown menu that can be clicked on. This article did a great job laying out the issues and challenges that exist in moving forward with novel human-computer interactions. Leaning on Norman’s seven stages of execution, the authors laid out a set of five basic challenges that need to be addressed and solved to facilitate communication between a user and a computer. Each challenge was clearly laid out and explained, and valid solutions were proposed. The real-world examples of devices and systems that functionally addressed and solved each challenge were especially effective in aiding understanding of the challenge itself. Additionally, explaining how the analog in the standard GUI world addressed each issue helped clarify the crux of the issue as well.
There were, however, some issues that were glossed over a fair amount. Namely, the possibility that devices could start recognizing gestures not meant to bring about action, and begin performing tasks in response, is not only problematic but dangerous as well. All in all, though, the article’s goal is to make researchers and developers think about these issues and challenges from a different viewpoint, and it is quite effective in doing so.


Implanted User Interfaces:

This paper investigates the idea of surgically implanting user interfaces under the skin. It focuses on solving the challenges of providing input to the device, obtaining output, communicating, and remaining powered. The idea of implanting devices in the body is not novel; we see it in many medical devices such as pacemakers and hearing aids. However, the idea of a user truly interfacing with these devices in an HCI sense is something we have never seen before. The article points out many benefits to having the devices we know and love implanted into our bodies: there is no possibility of ever being without such a device; it cannot be lost or forgotten. Such a device also brings up interesting engineering challenges. If the device itself is under the skin, how can we as users interact with it? What sort of feedback functionality is most intuitive? The researchers conducted two studies to start investigating these issues. The first tested a variety of sensors implanted into a cadaver. The devices were directly wired to a computer, which seemed poorly designed; obviously this is not how it would work in a real-world setting, but the goal of this experiment was to test the sensing modality itself, so communication was probably not a key testing parameter. The researchers did perform a Bluetooth communication test between two of the devices to determine how quickly information could be sent. The second experiment was a 3-in/3-out device, embedded in a silicone “skin” and worn on top of a user’s arm. The researchers tested the functionality and ease of use of this device, as well as the social impact of a person interfacing with his/her arm. I thought that was a very interesting facet to test. The researchers claimed that more trials would need to be performed to fully study the social impact of such a task; however, it was nonetheless interesting that this was a real experimental interest.
Small aside: I thought that the idea of doing energy harvesting in the body to aid in powering these devices is certainly one of the coolest things I’ve ever read.


Emerging Input Technologies for Always-Available Mobile Interactions:

This article focuses on the idea of creating a realm wherein the user can seamlessly transition between interacting with a mobile computing platform and performing everyday real-world tasks. The ideas of creating gestures, fusing different sensors, and interfacing cognitively are all discussed in detail, with a large focus on surveying the state of the art in novel input schemas. This article shows big leaps in the realm of HCI in that it wants to remove the physical interaction with the computer and replace it with a more streamlined and intuitive interaction approach. A variety of sensors are touched upon, ranging from inertial sensors, which have the capability to know where they are and what they are doing in space, to touch sensing such as keyboards, to computer vision and environmentally situated cameras, to mouth (tongue) and speech-based inputs, and finally to brain and muscle sensing inputs. These are wildly new and different from the standard mouse-and-keyboard functionality we are all so used to. This article does a good job of giving a broad overview of many different sensing and input devices in its first section. However, the really interesting parts of the article lie in the challenges and opportunities section. Here we see the real problem solving at work. How can we take all of these new and interesting devices and systems and start writing a gesture library for them? How can we deal with ambiguity in the system? How can we interface many different kinds of sensors and make them work together to more reliably interface with the user? The motivation for this article is very well supported: the tech industry is always looking to improve how we use the technology around us, and yet our interactions with technology have remained largely unchanged.
This paper is essentially a review of what is out there and a call to arms for researchers and developers to challenge themselves and develop new ways to interact with our devices that are completely embedded into our everyday lives.



Hallvard Traetteberg - 1/29/2013 9:01:13

Bellotti et al. "Making sense of sensing systems: five questions for designers and researchers"

The paper takes a deeper look at UI design issues, based on Norman's seven stages of execution and research on human-human interaction from sociology. The argument is that interacting with newer ubiquitous interfaces, where the interface blends into everyday things and activities, is more similar to this kind of interaction than the use of a desktop system. Hence, we cannot just modify GUI interaction techniques to work with new modalities; we must learn from the way we communicate with each other.

The authors formulate five questions about interactions, for which there are well-established solutions for desktop GUIs, that do not have obvious answers for ubiquitous interfaces. The paper shows how some existing systems solve some of the problems, but their solutions cannot necessarily be generalized.

The paper's contribution is 1) asking new questions, or rather reformulating old questions in a way that is more relevant to newer interface modalities, and 2) using results from a different field (sociology) to help inspire new answers.

The paper is more about what makes sense, rather than what's possible. The work reminds me of how speech act theory was helpful in the design of CSCW/groupware systems.

Dan Morris, T. Scott Saponas, and Desney Tan. "Emerging Input Technologies for Always-Available Mobile Interaction"

The paper reviews newer input (and output) techniques that the authors consider relevant for what they call always-available interfaces, i.e. interfaces that are ready to be used anytime and anywhere, with minimal overhead and errors.

As a review, its contribution lies more in the discussion of the relevance of each existing technique than in identifying new ones. The review may be more useful to newcomers to the field than to established researchers.

The review seems very thorough. However, I miss a better characterization and analysis of what it means for an interface to be "always available". The paper seems to imply this is a general characteristic with general solutions, but I would guess it is very context-dependent, and so will the solutions be. The paper is more about what is possible than what makes sense.

Holz et al. "Implanted User Interfaces"

The paper describes experiments in implanting existing physical user interface elements into/under the skin, to empirically test how they can be used for body-based interaction.

The methods used seem sound, but I'm not sure the results are valid for actual use or relevant for UI design. I've heard CHI criticized for accepting anything as long as the methods are valid, without considering whether it's worthwhile. The paper is about what is possible rather than what makes sense.