Future Interactions

From CS160 Spring 2014


ACM XRDS: Crossroads - The Future of Interaction, Volume 16, Issue 4, Summer 2010, pages 21-34:

Interactive Surfaces and Tangibles
Interfaces on the Go

Reading Responses

Andrew Fang - 4/19/2014 19:13:04

A lot of these new technologies focus on removing the limitation that the device itself is the only means of input. Whether we’re talking about electromyography or using the surrounding environment, the technologies described in these two articles all look for ways to use bigger surfaces as input for the device. For example, we have Scratch Input, which uses the sounds of scratches as input and thereby maximizes use of the environment. We have PlayAnywhere, which turns any surface into a touch screen. And we have electromyography and skin input, which use the body’s natural signals to recognize gestures and skin touches. The main advantage of these technologies is that computer input can scale past the limits of the physical size of the device. This could eventually mean that future devices become even smaller, composed only of the processor, sensors, and memory — all display and input would be outsourced to one of these technologies. The skin-as-touchscreen approach probably would not be useful for apps that require both hands, seeing as one arm serves as the input surface. Scratch input probably would not be useful in public places, due to the noise in the environment. Others of these technologies would probably not be suited for on-the-go use, seeing as they require separate equipment.

I found this Ted Talk that John Underkoffler gave on the future of user interface design, and one of the elements I particularly liked was his demo of 3-dimensional picture manipulation. Here is the link; it starts at 6:37: http://www.ted.com/talks/john_underkoffler_drive_3d_data_with_a_gesture#t-376153. This reminds me of Tony Stark’s fictionalized interface, where he manipulates diagrams and drawings by grabbing, throwing, and touching hologram images of whatever he is working on. Because the world around us is 3D, it makes the most sense to manipulate objects in 3 dimensions. The interface Underkoffler showed off allowed for this, albeit with special gloves and gestures. He navigated through a field of photos and sorted and categorized them by grabbing and moving the elements around.

Ziran Shang - 4/20/2014 1:39:38

The first method of interaction is tangible interaction. The article about tangible user interfaces discusses tabletop interfaces. The main advantage of tabletop interfaces is that they can support physical objects of different sizes and shapes, which users can move around. These objects may be able to represent things better than gestures could. Tabletop interfaces also make it easier to share control. A problem with tabletop interfaces is that they are not portable, so they do not suit the needs of many users.

The second article discusses micro interactions. In particular, it talks about ways to use the human body as an interaction platform. The main advantage of this sort of technology is being able to do certain things without carrying around an additional device. However, such a device would not be suitable for multiple users because the sensors require careful calibration. Also, input can often be noisy, so such devices would be difficult to use accurately in many situations.

Another interface technology is Leap Motion (www.leapmotion.com). I think this is interesting because it allows users to make use of 3D space to manipulate objects on the computer. Although it is not quite as realistic as having physical objects on a tabletop, it is still far better than the 2D actions of a touch screen. Also, the device itself is small, making it very portable.

Myra Haqqi - 4/20/2014 12:17:55

The “Interactive Surfaces and Tangibles” section explains the advent of tangible interaction, which essentially extends the concept of direct manipulation. For example, many gestures, including tap, slide, swipe, and shake, are used to allow users to manipulate interfaces with their hands.

The main advantage of these new technologies is that tangible interactions allow users to manipulate objects on an interface to perform tasks directly. It is very intuitive to press an object and drag it over to the desired location. This gives the user optimal control over the interface, and also provides a good way to represent the interface to the user. Another advantage of using gestures to manipulate an interface is that there are a myriad of possible interactions one can perform. The many combinations of gesture types allow for several ways to capture user input, and certain patterns of gestures make many features possible to implement.

Another new technology is tabletop interaction, which allows for multiple points of input. This is especially advantageous when multiple people seek to register input on a system simultaneously. Multi-touch gestures are beneficial in that they allow users to perform actions with their hands in a way they are familiar with, and they also give designers many options for how users can interact with an interface. For example, the two-finger pinch-to-zoom gesture is widespread and nearly universal. This is helpful in that users familiar with this gesture can use it to perform the same action on many different kinds of interfaces.
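
The pinch-to-zoom gesture mentioned above reduces to a simple computation: the zoom factor is the ratio of the current distance between the two touch points to their starting distance. A minimal sketch of that idea (the function names and the touch-point representation are illustrative, not from the article):

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points given as (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_touches, current_touches):
    """Scale factor for a two-finger pinch gesture.

    A result > 1 means the fingers spread apart (zoom in);
    a result < 1 means they moved together (zoom out).
    """
    d0 = distance(*start_touches)
    d1 = distance(*current_touches)
    if d0 == 0:
        return 1.0  # degenerate case: both fingers started at the same point
    return d1 / d0
```

A real gesture recognizer would also debounce touch noise and track the pinch midpoint as the zoom anchor, but the scale factor itself is just this ratio.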

Another advantage of tabletop interactions is that multiple people can perform tasks together at the same time. Due to the multi-touch registered input, there are many ways for people to collaborate and accomplish tasks at the same time as each other. For example, this allows for shared control over applications.

One set of tasks these technologies are not suitable for is work on very tiny screens. For example, a small wrist-watch screen is not well suited to multi-touch gestures. One reason is that a person wearing a watch on one wrist can only perform gestures with the other hand, which limits how many gestures can be performed. Furthermore, something with a very small screen, such as a tiny wrist-watch, is too small for users with large fingers to perform meaningful multi-touch gestures.

In the article entitled “Interfaces on the Go,” devices that let people perform actions on existing surfaces, such as Scratch Input, are beneficial in that people can creatively take advantage of the objects already around them to perform actions. This would be useful in a musical context as well.

However, a task this is not suitable for is when users are “on the go” and need to perform actions in different environments. It is inconvenient for users to carry the device everywhere and set it up before using its features, which is problematic for users who need to perform tasks while moving between places.

Micro-interactions are very convenient in this regard, and allow users to complete tasks very quickly and easily. With the advent of mobile devices, users can easily perform tasks with their cell phones at any time and in any place.

Muscle-computer interaction and bio-acoustic sensing are new technologies that are propitious for tasks that can be completed by simply moving or touching one’s skin. Using the body for interfaces is very advantageous because the body is reliable, consistent, and readily available for interaction. Also, users are very familiar with their bodies and will therefore be able to perform the necessary interactions more easily.

However, tasks they are not suitable for include situations where users must touch other things in the environment without intending to trigger any input capture. It is difficult for these interfaces to differentiate between touches the user actually intended and the mundane, unrelated touches users normally make.

Another interesting interface technology that I found online is the EPOC neuroheadset, which captures input via users’ thoughts, feelings, and expressions.

Link: http://www.emotiv.com/apps/epoc/299/

I find it interesting that users will be able to interact with an interface using their minds. It is a form of mind control: the user thinks something and the device performs that task just by reading the user’s thoughts. This has the potential to allow for many creative interactions, as one’s thoughts can dictate myriad different tasks. It would also be very advantageous for people who are paralyzed and unable to interact with other interfaces in a meaningful way (e.g., if they cannot speak or move). This interface would let them perform tasks with their minds when no other form of interaction is available.

I find it promising because there are several useful functions this interface can serve. For example, companies that seek honest user feedback could measure users’ satisfaction and thoughts regarding certain products or ideas by using this device to analyze their thoughts. Furthermore, it allows for many fun and creative ways to perform tasks, such as playing games while controlling the characters with your thoughts.

Jeffrey Butterfield - 4/20/2014 16:51:43

The main advantage of technologies that employ tangible user interfaces is that real space and contexts are emphasized when data manipulation and other computational tasks are performed. This is an advantage because it preserves the benefits of direct manipulation: the articulatory distance of the gulf of execution is reduced by the "directness" of the interface controls. The article's explanation of interactive surfaces shows how such interfaces can facilitate collaboration and multi-user interactions in a pragmatic way. While tasks like collaborative work on a table fit well with these tangible user interfaces, such interfaces are not always appropriate. For tasks that require a certain level of precision (like programming or data entry), more traditional interfaces might still be more effective than those discussed in the article.

As for the portable interfaces described in the second article, the main advantage of these new innovations is clearly convenience. Before the age of mobile computing, individuals had access to the facilities of a computer only at their home desktop system. While early laptop users saw part of this advantage in being able to take their computers on business trips and to friends' houses, the true beauty of mobile computing was realized with the advent of the smartphone. Since then, new devices (some of them deemed "wearables") have offered enhanced experiences and convenience for today's users. Portable projectors and armbands that can read muscle movements are still in the research stages of development, but their benefits could lie in both their extreme portability and their ease of use. Because portable devices often need to be small to be practical, their computational power is far more limited than that of traditional desktops and laptops. This means computationally intensive tasks are not appropriate applications for these devices.


I find this patent by Sony interesting both because it is so outlandish and because of the interactions the device might enable should it actually be adopted by the public. Though wigs are not commonly worn in public on a daily basis, the item could potentially become fashionable first as an absolute novelty and later as a utilitarian computing device in subsequent, refined models. The most promising part of the smartwig is its position on the human body. Being so close to the face, ears, and brain, the wig has great potential to take readings from, say, brainwaves if something like an EEG were installed in it. Audio and haptic output could allow the device to report meaningful feedback to its wearer.

Shana Hu - 4/20/2014 17:53:51

Tangible user interfaces are interesting because they offer a direct line between input and output. Current technologies are often touch screens at best, and although this interface is often easy to manipulate (much of which is due to years of learned intuition users have acquired through similar products), in actuality touchscreen devices offer an unnatural interaction that finds no metaphor in the real world. Tangible interfaces bridge this gap by taking full advantage of the human form to produce a more diverse array of possible interactions. Whereas touchscreen devices are often limited to tapping, tangible user interfaces move into the 3-dimensional realm of squeezing, twisting, spinning, and other actions we are familiar with from everyday life. The exciting possibility of reducing the gulfs of execution and evaluation through tangible user interaction will potentially lead to innovative interfaces that make use of humans' full range of motion. Similarly, microinteractions aim to increase efficiency through easier-to-use gestures, which is beneficial when users do not have the luxury of a comfortable, stable setting. Being on the go often limits the efficacy of interactions with modern technologies, an issue that can be ameliorated by utilizing more microinteractions and similar concepts.

While tangible user interfaces provide great potential for more natural human-computer interaction, they may be problematic for cases when there is a lot of data to sift through. In that case, users may just want a simpler way to sort, filter, and categorize, undercutting a need to physically manipulate the data by hand.


I thought this interface created by the MIT Media Lab was interesting because it connects users across physical distances, creating an instant and tangible interaction versus the impersonal, flat interaction of screens.

Nahush Bhanage - 4/20/2014 20:56:59

The given articles discuss some really interesting technologies we might use in the future to interact with computing. One of the most important advantages of these technologies is that they make the interaction between the user and the interface extremely intuitive and direct, by enabling physical manipulation of interface objects. A tabletop interface is a great example of such an interface, as discussed in the "Interactive Surfaces and Tangibles" article. Active Desk showcases a feeling of directness by allowing the user to draw directly on the surface as if s/he is drawing on a canvas. Tabletops have affordances that facilitate a collaborative environment, encouraging interaction between users while working together on the system. These interfaces are also a great way of creating a shared representation of a particular problem at hand. As discussed in the article, these interfaces have multi-touch capabilities that extend far beyond the commonplace "two-finger pinch-zoom". These new technologies fuse the control of digital data and operations with physical objects, enabling users to handle data directly with their hands. If an interface can handle multi-touch, there is a large number of possible gestures and gesture combinations - such interfaces also have a huge potential for facilitating multi-user collaboration. The "Interfaces on the Go" article discusses interfaces based on micro-interactions that significantly reduce user interaction time. One example could be double-tapping the phone in your pocket to switch to another track in your playlist. Such micro-interactions are extremely useful while interacting with systems on the go.
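
The pocket double-tap micro-interaction above can be sketched as a simple threshold detector over a stream of accelerometer magnitudes. This is only an illustrative sketch of the idea, not the article's actual algorithm; the threshold and the timing window are made-up values that would need tuning on real sensor data:

```python
def detect_double_taps(samples, threshold=2.5, min_gap=3, max_gap=25):
    """Flag double taps in a stream of accelerometer magnitudes.

    samples: acceleration magnitudes (in g) at a fixed sample rate.
    A 'tap' is any sample exceeding `threshold`; a double tap is two
    taps separated by between min_gap and max_gap samples.
    Returns the indices of the second tap of each detected double tap.
    """
    taps = [i for i, a in enumerate(samples) if a > threshold]
    doubles = []
    last = None
    for i in taps:
        if last is not None and min_gap <= i - last <= max_gap:
            doubles.append(i)
            last = None  # consume both taps so a triple tap isn't two doubles
        else:
            last = i
    return doubles
```

The `min_gap` lower bound is what separates an intentional double tap from the jostling of ordinary walking, which is exactly the false-trigger problem these on-the-go interfaces have to solve.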

Though these new technologies have a number of advantages, they may not be suitable for certain tasks. In most circumstances, in order to make interactions as direct and intuitive as possible, tangible user interfaces are tailored to specific types of applications. If the interface objects were made abstract, the system would lose tangible interaction and the objects would no longer directly correspond to the underlying digital data. As a result, it is hard to make a tangible user interface that is generic in functionality, and hence such interfaces cannot be used for a wide variety of applications. Another disadvantage could be the overhead of ensuring that the "Midas touch" problem (unintentional triggering of interactions) doesn't occur.

I think one of the most exciting interface technologies currently under research is the brain-computer interface (http://en.wikipedia.org/wiki/Brain–computer_interface), which involves transmitting signals generated by the brain directly to an external object. I recently came across an article that discussed a research experiment conducted at the University of Washington (link: http://www.extremetech.com/extreme/165081-first-human-brain-to-brain-interface-allows-remote-control-over-the-internet-telepathy-coming-soon) - they demonstrated a "system that allows one researcher to remotely control the hand of another researcher, across the internet, merely by thinking about moving his hand". The fact that we can control something (a system, or even another human as described above) with our thoughts alone is incredible! I am extremely fascinated by the enormous potential that brain-computer interfacing might possess.

Tien Chang - 4/21/2014 8:14:55

One of the main advantages of these new technologies is that they give users more control in their interactions with technology. "Interactive Surfaces and Tangibles" describes numerous instances of bots that could hover over surfaces to detect motion rather than touch. "Interfaces on the Go" particularly goes in depth about every surface becoming a touch point for an interface. Another advantage is augmented reality, where it may become easy for users to speak as a hologram. This creates a new level of communication, with new consequences. While hologram technology may be helpful if used with good intentions, it could be exploited as well. If we are able to record holograms, they may be used to fabricate alibis for murderers, as a means to communicate with nefarious groups, or as another form of pornography.

The following article provides numerous inventive ways to play games with new interfaces. http://www.technologyreview.com/news/522231/new-interfaces-inspire-inventive-computer-games/ One game I found interesting in the article was Private Eye. Here, gamers do the opposite of what interfaces seemingly allow us to do - instead of increasing our ability to act, it limits our physical ability. This game casts users as a wheelchair-bound detective who must derail a murderer's plot to kill. While this head-motion-detecting technology is fun in a game, it is also extremely important for those who are truly wheelchair-bound. It will help those who are physically limited if the technology expands into other realms, while also helping those with full mobility understand the perspective of those who are wheelchair-bound.

Vinit Nayak - 4/20/2014 22:57:01

The new interactions focus mostly on having more direct interaction with interfaces in a way that we (the users) feel is natural and comfortable. This is done by eliminating intermediary hardware such as the mouse or keyboard and having our hands directly interact with the object we are working with. A main advantage could be the use of these technologies from an early age with children in school. The naturalness will make it easier for children to become acquainted with the technology, allowing them to use tools and be exposed to new educational software that they might previously have had more trouble using.

Another interesting interface technology emerging is the concept of wearables: http://www.theverge.com/2014/3/18/5522226/google-reveals-android-wear-an-operating-system-designed-for

This tech is being explored by industry giants such as Google and Samsung and allows greater connectivity among all of our mobile devices. It is interesting because it shows a trend in technology: how it is physically getting closer to our lives, starting from desktop computers to mobile laptops and phones and now wearable devices. Soon the tech will be underneath our skin and maybe even in our brains. Its promise to make technology more unobtrusive is very appealing; if executed well, it could actually foster more human interaction than virtual interaction.

Bryan Sieber - 4/20/2014 23:00:41

The first article in the reading spoke of the potential of direct manipulation and also the downfalls that this form of interaction encompasses. The second article spoke of interaction methods that work without a device, using your body as an interaction point. Both of these new technologies have unique advantages, but the main benefit lies in direct manipulation, allowing users to use actions that seem more intuitive or are more easily learned. With both of these forms of interaction, it is possible that simple tasks become more complex, which could widen the gulf of execution for users and increase time spent on certain tasks.

One article I read recently was about the overuse of interfaces (article: http://www.cooper.com/journal/2012/08/the-best-interface-is-no-interface). We have created many, many different forms of interfaces, some of which have become more complex and less helpful. Sometimes we think “slapping an interface” on an object makes it more intuitive and better for the user experience. Unfortunately, this isn’t always the case. Sometimes no interface is the best interface, and it is this concept that I find interesting. How can someone create a non-UI way to accomplish a task with ease and make the UX better? The non-UI idea that I found extremely interesting and awesome was AutoTab by Square. A user sets up Square and allows AutoTab to work with a shop; when the user comes within range of the shop, the owner is notified and the user's profile is displayed along with their last purchased item. This allows for a more customized, faster, and more unique experience for each user. I feel like we sometimes try to over-interface everything, when we actually need to find ways for tasks to be completed with more ease, without the extra use of an interface.

Michelle Nguyen - 4/21/2014 2:35:45

The first technologies that the article presents are interactive surfaces and tangibles. One main advantage of tangibles is that they allow a user to interact with the computer through familiar actions that they are accustomed to in everyday life. For instance, the article presents the example of an interface where a user can physically move buildings and see the shadows change as a result. The action of picking up and placing an object is a big part of our everyday life and is simple to understand, in contrast to having to click and drag a building on a computer screen. As a result, tangible user interfaces are easier to learn; people can perform an action and see immediate feedback on its effect. Another huge advantage is that this technology supports multi-user collaboration, which will be very useful in classroom and professional meeting settings. However, tangibles are not suitable for many of the tasks that direct manipulation interfaces are not suitable for. One such case is when a user needs precision. Take, for example, resizing a picture to a certain size. It is easier and much quicker for the user to enter the height and width they want, rather than pulling, dragging, or pinching until they get the correct size. This task will be difficult because users are imprecise. Another task that tangibles are not suitable for is anything that requires manipulation by the computer. Returning to the building-and-shadows interface from the article, it would be impossible for the computer to move a physical block--the user must be the one to perform all the actions. An option like "Place buildings in optimal location", which would be possible in many programs, would be impossible in this interface. The next technologies the article discusses are interfaces on the go.
The main advantage of this new technology is that it does not require the user to carry any other belongings to use the interface. All it requires is their body, so it does not matter where the user is or what they are doing. Another advantage is that users are very in tune with their bodies. Therefore, it will be easier for them to tap their arm when necessary, even if they can't see it. In contrast, it takes a little more time to silence a phone by pressing a button the user can't see; they can't instinctively reach for it, and must feel around for it. This technology is not suitable for tasks that take a long time. For instance, imagine using a keyboard projected on your arm to write a long essay. It would be better to simply bring a laptop and use the physical keyboard there. These tasks also become difficult when a user doesn't have their hands free. A user can operate a phone with one hand, using the thumb of the hand holding the phone. However, in the case of bio-acoustic sensing, if a person is holding something in the hand they need to use to interact with the screen projected on their arm, they can't use their other hand to interact with it.

The blogpost at http://www.hongkiat.com/blog/next-gen-user-interface/ speaks of an interesting user interface that I had not heard about yet. It is called the Sensor Network User Interface (SNUI), and it is a fluid UI that changes according to the other SNUI devices in close proximity. I think it is interesting because while many of today's technologies allow people to communicate with others across the world, this interface requires people to be close to each other. The article presents the UI mostly through a gaming lens, which is interesting since many of the games today are played online and let people interact with others all over the world. Many people argue that today's new technology limits people's social abilities (i.e., choosing to text instead of calling or meeting in person), but this UI is a new technology that does the opposite and brings people together.

Sijia Li - 4/21/2014 4:14:25

1. What do you think are the main advantages of these new technologies?

"Interactive Surfaces and Tangibles" has the following advantages:

  • (1) More intuitive and more natural to use; in the examples of both the Active Desk (Page 23) and the Reactable (Page 24), users are able to perform tasks in the most intuitive and natural way; the user can use both hands to operate the interface. There is no mouse or pointer involved.
  • (2) Great for multiple users to use at the same time; "Multi-user collaboration"(Page 25, Interactive Surface and Tangibles).
  • (3) Interactive Surface and Tangibles "break the WIMP— window, icon, menu, pointing device—limitations" (Page 21, Interactive Surface and Tangibles).
  • (4) It promotes "group sharing of ideas" and "foments discussions" (Page 27, Interactive Surface and Tangibles).

"Interfaces on the Go" has the following advantages:

  • (1) Super portable and adaptable: in the example of a pico-projector (shown on Page 33), the user does not have to carry a screen any more since the projector can just project what a screen would show onto any surface, e.g. the user's hand or a wall.
  • (2) Built-in (use of the human body): the interface does not require the user's full attention since the system is built onto the user's body; for example, the muscle-computer interface shown on Page 31 allows the user to carry on other normal tasks while the interface records muscle contractions to "detect finger gestures" (Page 32, Interfaces on the Go).
  • (3) Hands-free: ideal for on-the-go applications.

2. What tasks are they not suitable for?

Interactive Surfaces and Tangibles may not be suitable for:

  • (1) It may not be suitable for mobile tasks, since it is almost impossible to carry an Active Desk or a Reactable around with the user.
  • (2) It may also not be suitable for some simple tasks which involve only one user, since both the Active Desk and the Reactable are mainly designed for more complex group or team tasks.
  • (3) Moreover, disabled users who have trouble using their hands may not be able to use Interactive Surfaces and Tangibles. Sound-based systems may be more suitable.

Interfaces on the Go may not be suitable for:

  • (1) They may not be suitable for tasks that require accuracy, for example, driving.
  • (2) They may not be suitable for some sport-like tasks, since the user may need to wear the whole system on his or her body, which may be very inconvenient for tasks that require a lot of big motions.

3. Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it.


I found this interface really cool. All the other (good) interfaces we have learned about in this class are mostly based on the idea of "direct manipulation", which usually means direct touch. This interface also embraces direct manipulation; however, it realizes the idea without direct touch. There is no touch involved! Through this interface, the user is able to perform various tasks in 3D, instead of the 2D interaction implemented on most direct-touch devices like touch screens.

There is one part that caught my attention. Our group is working on "Cook buddy", an application which helps users cook. In our application, users use voice commands to perform tasks. They can also use Leap Motion to perform tasks like "next page" or "previous step".

Emily Reinhold - 4/21/2014 9:31:22

Tangible interfaces naturally extend direct manipulation interfaces to provide an experience that is familiar to users by supporting interactions that users perform in real life. The articles mention two main areas of interest for providing more tangible interfaces: multi-touch gesture support and muscle-based gesture recognition. Both of these technologies are commercially in use today, but they have much room to expand.

Multi-touch gesture recognition has many applications for devices on which many users want to interact with presented data or objects. One of the main advantages of being able to sense multiple points of contact and recognize specific motions as one cohesive gesture is that more features of the device can be accessed in a shorter amount of time. For example, the most common multi-touch gesture supported now is pinch-to-zoom. Since users are accustomed to using two fingers in a pinching motion to zoom in or out on a screen, this gesture is much faster than trying to find the setting on one's device to enlarge the text on the screen. Thus, after users learned this gesture, the gulf of execution decreased. Other multi-touch gestures can act in the same way (reducing the gulf of execution) by providing alternative ways to perform customary actions. Multi-touch gestures have one downside, in that there are no clues on the screen as to which gesture corresponds to which action, burdening the user with having to recall the gestures. Multi-touch gestures are also not suitable for performing tasks on mobile devices with a single hand: users need to hold their device in one hand, usually leaving only the thumb available for performing gestures.

Bio-acoustics and muscle-based gesture recognition are beginning to see more commercial use. They provide the significant advantage that people are intimately familiar with their bodies, so performing gestures with the body alone may feel more comfortable. Further, since the hand does not need to be kept steady touching the device, they allow users to perform complex gestures in jarring environments. Muscle-based gesture recognition and bio-acoustics are currently not suitable for performing detailed tasks like entering text. Perhaps in the future everyone will learn sign language, and muscle-based gesture recognition will be used to interpret the muscle movements and translate them into text.

One technology I find very intriguing is here:


Essentially, Tactus Technology is developing a screen overlay that provides tactile buttons for entering text when the keyboard is in focus on a mobile device, but leaves the screen smooth/flat otherwise. I think this idea is promising because, as the article mentions, typing on mobile device keyboards is currently a widely inaccurate, laughable task. Typos are inevitable when there is no feedback that your fat finger pressed the intended tiny soft-button. With tactile feedback, users can tell when they make mistakes. With Tactus Technology's solution, you don't lose any real estate on the mobile device to a physical keyboard, but you get the added benefit of tactile feedback while typing!

Anju Thomas - 4/21/2014 9:47:42

What do you think are the main advantages of these new technologies? And what tasks are they not suitable for? 

The Interactive Surfaces and Tangibles technology mentioned in the reading offers several main advantages. This interface allows users to use the surfaces they already have instead of carrying or buying a new device. This also means there is no device to lose. It allows the possibility of collaborating more easily and effectively. For instance, members of a group can use the multi-touch interface at the same time, allowing a broader, expanded input interface and letting users learn how their actions might affect others around them.

Another advantage of the tangible interface is the ability for users to directly interact with software. Through physical touch the user is transported into a more realistic world where they can directly interact with software in a physical and more natural form. Direct touch can also possibly allow users to interact with the interface without looking. The user is able to get tangible feedback on their actions, once again providing a more natural and intuitive form of interaction with the objects.

The interface might not be very suitable for disabled persons without hands, who will not be able to physically interact with it. The interface might need to be bigger and take up more space, possibly making it less portable than current devices such as laptops or phones. This might not be an effective way of interacting, especially when on the go. In today's world, many seem to be constantly on the go, from the car to walking around campus, to jogging, biking… In these cases, you can often see people using their phones or even laptops. However, a tangible user interface such as the tabletop surface might not be as efficient at letting the user interact anywhere, anytime, but instead might require the user to return to the fixed location of the surface.

Another technology mentioned by the article is interfaces on the go. These allow users to use different parts of their body, something they already have, to interact with an interface. The user is once again able to interact without any other physical device, eliminating the need to buy or carry one around. The hands-free capability of this technology seems especially appealing, as there is more user freedom and flexibility. The "whack gestures" mentioned in the reading seem especially effective in allowing users to interact with their phone more quickly and simply, making for a more effective form of interaction. This approach seems to have fewer disadvantages; however, possible disadvantages include the need to learn a new piece of technology, such as how projector phones work, as it is something new.

Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it.

Another technology that I found to be very interesting is the brain-computer interface, in which users do not even need to use their body parts to interact with the device. Link: http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface . Another link: http://www.hongkiat.com/blog/next-gen-user-interface/ The software automatically carries out instructions from the user's thought process. This new technology opens up a wide range of possibilities for interaction. Its main advantages include a hands-free form of interaction requiring almost no physical effort, making it much faster and simpler than other forms of interfaces.

However, possible disadvantages of this form of technology include more errors, as the user might think of something unintentionally, causing unintended reactions by the interface. Another is the lack of a tangible user interface, prompting the user to find another way to assess the output of their thoughts.

Rico Ardisyah - 4/21/2014 10:12:43

One main advantage of the new technologies discussed in "Interactive Surfaces and Tangibles" is that they allow co-located teams to collaborate interactively, and they also allow for more shared control and richer interactions. Hence, the UIs that we see in sci-fi movies may be achievable. This technology is not suitable for programs that require privacy. From the article "Interfaces on the Go", we learn about a technology called Skinput. It allows the skin to be used as a finger-input surface. The main advantage of this technology is that the user does not need to interact with the device directly. It is also very natural, as it maps to how mobile phones are used today: one hand holds the phone and the other hand controls it. However, Skinput cannot fully replace a device, especially when we need to use hardware such as a camera.

http://petitinvention.wordpress.com/2008/02/10/future-of-internet-search-mobile-version/ This technology seems promising since you can find information about a building or article by pointing the camera at the object. It also minimizes semantic and articulatory distance.

Ryan Yu - 4/21/2014 10:44:18

Essentially, the primary point about these new technologies (interactive surfaces & interfaces on the go) is that they have the potential to simplify and automate our lives to levels that are still unparalleled. However, judging from the progress of these technologies in the articles, researchers have not yet advanced these technologies to the appropriate levels to be able to integrate them into our everyday lives.

Imagine a world where virtually everything operates using a multi-touch interface. Imagine walking into a cafe where, in order to place an order, you walk up to a counter, select a menu item from a touchscreen that is built into the counter, press your credit card on a scanner built into the counter, and your drink automatically pops out on a conveyor belt. Imagine schools where every desk has a multi-touch surface and computer built in. Imagine, in regards to interfaces on the go, an arm strap that can detect your every movement, and sense whether you are in danger, whether you are excited, or remember where you are walking or where you need to go. These are the technologies that the two articles talk about, except in their most developed forms. Having these technologies would no doubt simplify and automate our lives even more than they already are, but this is not, from everyone's perspective, necessarily a good thing.

There are, in this sense, many things that these new interfaces and technologies would not be good for. For instance, the article tells of a pen that "can store handwriting, markings, pen movements, and can identify on which document the markings were made and exactly where they were. This can be applied to 3D objects as well; for instance, if you draw a doorway on some physical model of a building, then the doorway will be added to the virtual model of the same building." While this sounds cool, imagine the privacy implications this could have. You would then have this pen, an everyday object, that could be capable of tracking a person's movements twenty-four hours a day. This could carry a plethora of negative consequences. Furthermore, using these tools within education could actually act as a detriment for students who are trying to learn material, as they could be greatly distracted by the innovativeness of the technology in front of them, and could be fixated on the technology rather than the educational material.

One cool new piece of interface technology I found was Private Eye, which can be found here:


The description for the technology reads:

"In this striking detective game for the Oculus Rift virtual-reality headset, you play as a wheelchair-bound detective spying on a building through his binoculars. Clearly influenced by Alfred Hitchcock’s Rear Window, Private Eye re-creates the sense of being a largely helpless voyeur with style.

Your only interaction with the game world is via head movements. By surveying the scene and catching important details you must work to solve the mysteries of the neighborhood, from finding a lost football for a group of children to uncovering the local Mafia’s plans.

The ultimate goal, however, is to catch a murderer who you know will kill at 10 p.m. It is a wonderful and inspiring exploration of the power of the new hardware."

Although I had heard of alternate reality games (ARGs) before, in my mind, Private Eye brings this to a whole new level. It actually immerses you completely in the alternate reality, and places you in a game-situation where your actions in the alternate reality directly (appear to) affect the consequences of those around you, something that I found extremely interesting. This carries implications to current "ARG" technologies that are receiving widespread coverage, such as Google Glass. Perhaps one day, a majority of people in the world will carry these ARG technologies to enhance their everyday lives.

Lauren Speers - 4/21/2014 11:06:11

“Interactive Surfaces and Tangibles” presents the concept of tangible interaction, which combines control and representation in a physical embodiment of data that users manipulate. The main tangible interaction technology presented by the article is table-shaped interface. These interfaces support multi-touch interaction as well as manipulation of data by directly manipulating objects. Some interfaces, especially those designed like round tables, work well for sharing control as well as for sharing data because of the equal positioning of the users. In general, these tangible interaction interfaces are best suited for projects where the data corresponds to objects in the real world, such as the city planning interface presented in the article. However, as the article correctly points out, these interfaces are not suited to support tasks like word processing or web browsing that are typically performed on desktop computers by one person and whose data does not have a direct correlation to objects in the real world.

“Interfaces on the Go” presents an interface that relies on physiological computing to allow users to interact with their mobile phones while on the go by performing gestures or tapping on their forearm. As the article explains, users can perform more actions while on the go with this interface than with their phone’s typical interface because of the increased UI size and because of people’s ability to manipulate their own body while looking elsewhere. The interface works well for navigating apps and does not require any voice input that may not be socially acceptable. However, it may not work well if the user is traveling with a group of people and constantly gesturing while talking or interacting with his travel companions.

OLED displays, thin and flexible LED displays, would allow for the screens of touchscreen devices to be flexible. As a result, a tablet could be foldable, and therefore even more portable, or a device could be wearable because the screen could bend to fit the body’s contours. From a UI perspective, these bendable screens are interesting because they have additional affordances, like bending and twisting. These affordances would allow users to rely on less visual input while interacting with devices, an issue physiological interfaces are attempting to address, and change the size of the device depending on how crowded or spacious an environment is. The technology is promising because its flexibility creates the potential for allowing one device to present the UI advantages for a wide variety of device types.


Andrew Dorsett - 4/21/2014 12:45:27

From what I read in the articles, the two technologies they were talking about were tangible UI (TUI), with an emphasis on (multi-)touch, and utilizing all resources on a device, such as the microphone, gyroscope, wifi antenna, etc. When I think of the advantages of TUI, I think of users being able to easily map physical actions to corresponding digital actions. I can use my finger to move a volume slider when listening to music. This is something I would do in real life and is easy to pick up and remember. Compare that to something like typing "increase volume by 3" into a command prompt. From what I've seen, the main problem with TUIs is that they work great for certain tasks but are worse for others. For example, multi-touch is great for responding to actions like touch, drag, swipe, etc., but is mediocre for typing. A physical keyboard is much better designed for that, but is terrible for opening an application on your desktop. From what I've seen, interactions that utilize a device's components in the background to complement another interaction tend to be the most successful, such as using wifi along with GPS when finding a location on a map, since GPS can be difficult in cities, where wifi is abundant. Where it fails is when people have to switch between actions, such as when the article talked about using the mic to recognize movements on surfaces other than the screen. When the user doesn't have a table, they have to go back to using the screen. This is like jumping between speaking two languages depending on whether you're standing or at a table. It complicates things and leads to more mental work for the user.


A new technology that I'm most looking forward to is wearables. Not so much because I want to wear devices, but because I see them as a gateway to the next step in mobile tech. Wearables use your phone as a hub for processing and connectivity. Mobile phones are becoming more and more powerful. I could see a future where most things are just screens and interactive devices like mice and keyboards. Imagine sitting down at a monitor, keyboard, and mouse and having them all connect up with your phone. The tower is essentially replaced by your phone. Now imagine you're having breakfast, reading the news on a 10" screen that connects to your phone. A few minutes later you leave for work, pull out a 7" screen from your work bag that is also connected to your phone, and continue reading the news from where you left off. This is the direction I currently see technology heading, and it would help resolve some of the biggest issues with TUIs. For actions that are best for a touch screen, I can use a touch screen. For actions that are best for a camera, I can use a camera. They all connect to one device.

Aayush Dawra - 4/21/2014 11:36:25

One of the main advantages of these interactive interface technologies is that the interface is more closely tied to the notion of 'direct manipulation' and seems like an extension of it, apart from feeling very intuitive and natural to the user. Another key advantage that works in favor of these interactive interfaces is that they transcend the WIMP (Windows, Icons, Menus and Pointing device) principle and therefore break out of the mould of regular interfaces, making the interface easier to use. As far as interfaces on the go are concerned, their key advantage lies in the fact that not much user attention is required, since the user's focus is bound to be divided when he/she is on the go. Another important advantage of such on-the-go interfaces is that users are more prone to utilizing different parts of their body for interaction, for instance voice control, since their hands are generally limited while on the go.

Looking at the disadvantages for interactive interfaces, the main problem is the fact that these devices are hard to lug around while being on the go. Another disadvantage is that the time required to set up and get comfortable with the new interface is relatively higher, perhaps because it violates the WIMP principle, and therefore the overhead for performing simple tasks is slightly unjustified. As far as interfaces on the go are concerned, they also mirror the limitation of bogging the user down while moving. Another limitation for on the go interfaces is that they are relatively limited in their utility as compared to other interfaces, since they are targeting the user's divided attention as opposed to regular interfaces, that largely vie for the user's undivided attention.

Interface Technology chosen: Google Glass (http://www.google.com/glass/start/)

Google's Glass adds a whole new dimension to the way we interact with computers today. From checking our email to updating our Facebook status, nearly all mobile tasks can be performed on the Google Glass interface.

The thing that I find interesting about Glass is how it provides a truly non-obtrusive interface by latching on to the user's regular glasses, something most interfaces strive to provide but fall short of in some respect or another.

It is promising because the possibilities of the applications that can use this interface as a platform are endless. For instance, taking pictures on the go is a piece of cake with Google Glass, as opposed to pulling out your phone/camera and physically taking a picture.

Brenton Dano - 4/21/2014 12:09:12

Out of all the technologies introduced in the article, I thought using the arm as a surface for a projector and a touch screen was pretty revolutionary. Obviously, the technology isn't there yet, but if it gets there and you don't have to strap an annoying projector to your bicep, I think this is a pretty cool use case. The main advantage of this technology is that your arm is always with you and you can't lose it! No more people losing their iPhones and stressing out about it, because they can't lose their arms. Also, you can start using your arm as an interface a lot more quickly than reaching into your pocket to get your phone out. I think augmenting the human body into a wearable computer that is not too inconvenient is going to be the future of "mobile devices."

The arm is probably not suitable for heavy artistic work that you might do on a computer with a stylus in Photoshop, for example, but it could totally replace the mobile phone. Some of the other technologies talked about in the article, like the circular tables that you can manipulate in group settings, are cute, but I can't see them being adopted for widespread use.


The Xbox Kinect is a recent technology that allows the user to control games with their body and voice. I think it is promising because as long as you have the Kinect attached to your TV, you don't run the risk of losing your controllers, and for multiple users you don't have to have multiple game controllers; you just need one Kinect. The only problem I see is that if you are playing a tennis game, for example, there is no weight of the racket in your hand, so you might feel weird swinging around your empty hand, which is why I think the Wii is probably more natural for these types of games… I love Wii Tennis! :)

Gregory Quan - 4/21/2014 12:48:15

The main advantage of interactive surfaces is that they allow for collaboration between multiple users and allow users to interact with tangible objects, such as blocks that represent buildings, which are much closer to the real world representation of a building than pixels on a flat screen. Tangible interfaces are probably not well suited for typing, since most people are used to the tactile feedback of keyboards. Also, there can be issues with the users’ hands and fingers occluding the display surface.

The muscle-computer interfaces seem interesting, and are useful for quick interactions while on the go so that the user does not have to take out his or her cell phone to perform quick tasks. However, projecting images onto the human body is much less practical for reading or viewing content than the cell phone screen.

One interesting interface technology is g-speak: http://www.ted.com/talks/john_underkoffler_drive_3d_data_with_a_gesture It seems promising because it makes it easy to visualize data and see patterns that would not otherwise be obvious. It also responds to interactions that are richer than simple point-and-click gestures, such as the user moving his arm or twisting his hand. It seems like it would be hard to learn what all the available gestures are and how to use them, but this interface is very novel and interesting nonetheless.

Shaina Krevat - 4/21/2014 12:54:04

The main advantage of the new technologies is that they can be experimented with and iterated, and eventually they will probably be ingrained into our culture the same way iPhones, laptops, and GPS have been, even though at the time of invention it would have seemed strange how dependent we would become on them. Interactive surfaces are currently being proposed for interactivity between groups, but even in movies (like Iron Man) it is shown that they could possibly be used as an easier way to design, like a giant art tablet. Wearable interactive devices/bio-acoustic sensing user interfaces could be used for things like silencing a phone with a movement instead of getting the phone out and turning it off, but they could possibly also be used to, say, change air conditioning settings based on body heat, or to allow patients with limited movement abilities to still communicate with a computer.

Obviously, the interactive surface user interfaces would not be good for mobile use, as it would be difficult to take an entire table somewhere. The muscle and bio-acoustic interfaces would be great on the go, but there would be limited space for projection, and they would involve a lot of memorized movement-based commands (not that this would be difficult, as movement commands like swiping left or right on a touch screen have been learned quickly).


The Virtuix Omni video game system (which I first saw on Shark Tank) uses the user's movement as commands for the interface. When the user runs, the first-person-POV video game character runs as well, so that in a way the user gets to experience the game itself. While it has been argued that some gamers wouldn't want to have to move in order to control the video game, and obviously there will be problems, such as ducking being semi-blocked by the bar around the waist, this is arguably a step towards the holodeck/suite that all Star Trek fans are waiting for.

Christina Guo - 4/21/2014 13:03:43


An interesting interface technology is brain-computer interaction. I find this interesting because it is completely hands-free, and it adds a lot of possibilities for people who are disabled and without the use of their hands. Although it seems like we are far away from this type of technology, there is still a lot of interesting research being done in this sphere, such as the work mentioned in the article like BrainGate.

The main advantages of these new technologies are the ease of learning how to interact with new devices as well as a greater ability to support multiple users. Because the gestures are designed to mirror real-world interactions, it will be more natural and quicker for the user to figure out how to interact with the device. Similarly, multi-user interactions support more collaboration technologies. However, they may not be useful for tasks that require greater precision, since using the body as an interface, or using fingers, is not always very accurate and precise.

Munim Ali - 4/21/2014 13:05:13

The main advantage of these new technologies, specifically the tangible user interface, is the direct mapping between user actions and system tasks. The interactive tabletop, discussed in the first article, also allows for multi-user collaboration.

These new technologies, however, are not suitable for things like video games (which require a high amount of precision) and word processing (Keyboards are still the way to go).

An interesting new interface technology is the Oculus Rift virtual reality headset: http://www.oculusvr.com/rift/

The Rift takes immersive gaming to the next level by providing extremely low-latency head tracking, which means that there will be no noticeable lag between real-world actions and in-game actions. The Rift also provides a 3D experience by utilizing our natural stereoscopic vision. It presents a different image to each eye, which the human brain merges, giving a 3D effect.
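The stereoscopic idea described above amounts to rendering the scene from two virtual cameras, each offset from the head position by half the distance between the eyes. A minimal sketch under stated assumptions (the function name and the 64 mm default interpupillary distance are illustrative choices, not the Rift SDK's actual API):

```python
def eye_camera_positions(head_pos, ipd=0.064):
    """Return (left, right) eye camera positions by offsetting the
    head position half the interpupillary distance (IPD) along x.
    head_pos is an (x, y, z) tuple in meters; 0.064 m is used here
    as a rough average IPD (an assumption for this sketch)."""
    x, y, z = head_pos
    half = ipd / 2.0
    left = (x - half, y, z)
    right = (x + half, y, z)
    return left, right

# A head at 1.7 m height yields two cameras 64 mm apart.
left, right = eye_camera_positions((0.0, 1.7, 0.0))
```

The renderer would then draw the scene once per camera and send each image to the corresponding half of the display; the brain fuses the two views into depth.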

Seth Anderson - 4/21/2014 13:13:52

Some of the biggest advantages of these new technologies are their ability to capitalize on "whacking" gestures, making it so people can quickly interact without having to stop and take out their phone. The ability to control something with only finger gestures or quick movements could rapidly decrease the gulf of execution.

Mouseless: http://www.cio.com/article/693187/The_Future_of_Human_Computer_Interfaces?page=5#slideshow

This interface technology, in which a user can cup their hand on a table and have their hand effectively become a mouse via infrared tracking, is promising because it eliminates the need to clutter a desk with a mouse. It also allows the user to use the mouse from anywhere on the table at any time, and because the hand becomes the mouse, rather than going through the intermediate tool of the mouse we use today, interaction is far more direct. This could be paired with a laser-projected keyboard to free up plenty of desk space.

Andrea Campos - 4/21/2014 13:18:33

The main advantages of these new technologies include greater possibilities for collaboration and sharing, as well as more ease of use of technology while on the go. New devices like multi-touch tabletop devices would allow many people to more easily work at once on projects. Also, the more intuitive "naturalness" of being able to manipulate data physically may make certain technologies more accessible and easier to learn. The enhanced mobile computing features would allow us to do more while traveling and moving, and in a more reliable way that doesn't distract us as much from the environment as it does now. However, these sorts of technologies are not suitable for things like text input, which may still be faster and more accurate on a traditional keyboard, or for fine-grained tasks that require a lot of precision and control.

Another interface that is interesting and that's been gaining a lot of attention lately is virtual reality. I think it may have a lot of promising educational and artistic uses, and not only in the ways it has been used up to this point, such as learning to fly planes, but also for learning about different time periods, people, and perspectives by allowing people to experience these different settings. Perhaps it even has computing promise in that, more than just manipulating data on a surface as with the tabletop devices, one would be able to virtually manipulate data, realistically and in a very 3D fashion. More than just planning an urban community on a tabletop, one would be able to use virtual reality to experience the planned community firsthand: walk through it, and decide if it would be a truly livable place.


Jay Kong - 4/21/2014 14:09:22

Interactive surfaces allow for intuitive affordances. In movies, we often see futuristic control rooms containing interfaces that are manipulable through gestures in the air. Interactive surfaces are a step towards that. A user can easily pick an object up and move it around as he/she pleases; the gulf of execution is very small. Interactive surfaces also allow for multi-user collaboration, as the interface can easily be shared, unlike a keyboard and mouse. Different users can work on the same thing at once without needing to take turns.

Interfaces on the go allow for portable computing as well as bio-based computing. With these interfaces, users can compute anywhere they want, without having to take out an apparatus. These interfaces also allow for biological input, meaning that a user's body can easily become part of the computing cycle.

These interfaces, however, are not suitable for traditional keyboard heavy tasks such as word processing and programming. Those tasks require an input device that can quickly input characters. These interfaces might also not be suitable for very precise movements.


Leap Motion seems interesting because it takes the interactive surface a level further by adding an extra dimension to it. Users can now use motion (meaning from a distance) to control an interface. This is promising in the sense that we can now potentially create Kinect- or Wii-like interfaces for the public. A potential use case would be an interactive billboard that can be motion-controlled. Compared to an interactive surface, a motion-controlled surface will be a lot more hygienic and pleasant to use, because no one is required to physically touch the surface.
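At its core, a motion-controlled billboard like the one imagined above needs to map a hand position reported by the sensor into an on-screen cursor position. A hypothetical sketch (the sensor range, function name, and screen size are assumptions for illustration, not Leap Motion's actual API):

```python
def hand_to_cursor(hand_x, hand_y,
                   sensor_range=(-200.0, 200.0),
                   screen=(1920, 1080)):
    """Map a hand position in assumed sensor millimeters to pixel
    coordinates on a screen, clamping positions outside the range
    so the cursor sticks to the screen edge."""
    lo, hi = sensor_range
    w, h = screen

    def norm(v):
        # Normalize to [0, 1], clamping out-of-range values.
        return min(max((v - lo) / (hi - lo), 0.0), 1.0)

    return (round(norm(hand_x) * (w - 1)), round(norm(hand_y) * (h - 1)))
```

A real driver would also smooth the position over time (e.g. with an exponential moving average), since raw hand tracking jitters too much for comfortable pointing.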

Jeffrey DeFond - 4/21/2014 14:31:21

I think that, of the two presented, the interfaces "on the go" (i.e., on-body, in-body, reacting-to-body controls) will become more and more ubiquitous as mankind goes down the road toward the singularity, when we will totally merge with our tech. The tangibles will certainly also play a role, but I believe that more and more our tech will become part of us and become a more literal augmentation of our own biological systems. The muscle interfaces and bio-acoustics of today will lead to the BCIs of the near future.

https://www.olimex.com/Products/EEG/OpenEEG/EEG-SMT/open-source-hardware These are admittedly fairly rudimentary devices now, and a big problem with them is that it is often hard for people to consciously control their EEG signal. However, there is a wide range of applications that can use this signal.

Kevin Johnson - 4/21/2014 14:35:23

I think the fundamental goal of a user interface is to translate thoughts into action. The closer we can get to enabling direct, intuitive thought control - with all of the complexities and confusion that defining "intuitive" brings - the better. These new technologies take many approaches, but their common goal is to lower the distance between thoughts and actions. This can be seen in the attempts to reduce the time required to initiate and complete a task, and in the direct muscle interaction. I'm really, really excited for muscle-computer interfaces; when the Myo band (https://www.thalmic.com/en/myo/) comes out and works through its initial bugs, I'll be one of the first to preach its virtues to everyone around me (and, time permitting, do some development to make it more useful).

However, these new interaction technologies are only suitable for people who are willing to learn new methods for interaction. We have not yet reached the point where we can consistently design these interfaces to interact "intuitively" so that people don't need to be trained to use them. Our methods for interacting with data and information are still distinct from the actions we take in our everyday lives, and it's unclear how to address that gap. Midair touch may take us farther in that direction (http://www.itnews.com.au/News/359694,researchers-create-mid-air-tactile-feedback-for-screens.aspx), but that remains completely impractical for consumer use, and is likely to stay that way for many years to come.

Sang Ho Lee - 4/21/2014 14:48:36

The biggest advantage of interactive surfaces and tangibles is that they take human-computer interaction farther into "direct manipulation" territory. While we are still mainly in the WIMP phase of direct manipulation, interactive surfaces and tangibles afford users an even more direct form of manipulation of the computing environment, which may be more "intuitive", especially as multi-touch gestures become increasingly mainstream with the spread of smart devices. Large interactive surfaces, which the article mainly deals with, afford better collaboration, as the input method can be shared and communal between multiple users. On the other hand, interactive surfaces and tangibles are not suitable for complex tasks that have many parts or actions. Without the combinatorial power of more traditional input methods such as the keyboard and mouse, it becomes increasingly difficult for users to remember the large number of unique gestures or physical actions that a complex task may require, such as multi-spreadsheet management across an entire company.

The main advantage of physiological computing is that we will always be physically bound to our bodies. Because of this, if technology is well integrated into our physiology, physiological computing could truly be more intuitive and accessible than any other form of computing. However, in this main advantage lies the downfall of this technology. The article states that the current hardware is about 88-90 percent accurate for sensing physiological states such as gripping of our hands. This is simply not accurate enough for commercial integration. Our physiologies are unique and dynamic; therefore, there are potential margins of error in correctly sensing our physiological states. In addition, each person's physiology is different, and the hardware and software must be calibrated for each person. Quite simply, even assuming that the hardware will be made efficient in the future, the largest barrier to the adoption of this technology is not the development of the hardware, but the economy and viability of individual user calibration.


Virtual reality technology such as the Oculus VR goggles and motion detection technologies such as the Kinect are promising interface technologies. They are extremely immersive and engaging, filling your entire visual and kinesthetic senses, and they require no additional tools other than your body. Because virtual reality is a direct metaphor for physical reality, users may be able to experience new virtual environments in the exact same way they would a physical environment. Although not suitable for data processing tasks, VR interface technology is at the very least promising for entertainment purposes.

Dalton Stout - 4/21/2014 14:48:42

The benefits of the new technologies introduced in the articles can be seen with even greater clarity than when the articles were written four years ago. The author mentioned in 'Interactive Surfaces and Tangibles' that at the time only basic gestures were being utilized by multi-touch surfaces. Today, more and more developers (especially in the gaming sector) are coming out with new and creative uses for multi-touch. One of the main advantages of this type of interaction is how natural it feels. To swipe, to pinch, to poke are all basic actions that we already do every day, so the gulf of execution for these interfaces is fairly low. The Active Desk and the Reactable are further examples of this advantage. Artists with no previous computer experience could use the Active Desk to create a digital painting. Similarly, the Reactable offers a unique music-making experience that users can immediately experiment with, or in time eventually master. Unfortunately, these types of large-surface/direct-manipulation interfaces are not well suited for menial or mobile tasks such as getting directions or making private phone calls.

This is where the "Interfaces on the Go" article comes in. It discusses the possibilities of mobile tech, such as wearables and even on-body projectors. The main advantage of these technologies is that they are constantly available for use; in some cases they are ON us. On the other hand, they are not well suited for large-scale tasks that require a lot of management and a specialized machine, such as composing music.

The new interface I would like to talk about is the Android Wear device: http://www.theverge.com/2014/3/18/5522226/google-reveals-android-wear-an-operating-system-designed-for This device is made by Google and is meant to replace (and add to) the functionality of a watch. This small, circular touch screen that is constantly available on your wrist offers an incredible number of interface possibilities. For one, it reduces the physical time needed to get information. You no longer need to take your phone out of your pocket to check the time, check the weather, see who is calling you, read a text message, etc. I imagine in the future Android Wear will also be capable of performing web chats and audio calls. The discreet circular design is inconspicuous since watches are already accepted as wearables.

Everardo Barriga - 4/21/2014 14:49:14

I think the main advantage of the tabletop interface is probably the ability to collaborate with people in a way that seems productive and natural. I think there is a lot of potential there, considering most meetings happen at tabletops, usually with people having their laptops out. It would be much more advantageous to gear people towards interacting with each other while simultaneously engaging in their work in an efficient manner. I don't really see the value in a tabletop with the ability to put physical objects on it. I would rather have the objects understand data about themselves and where they are in relation to other objects; I don't think the table is necessary.

I think an advantage of wearable technology is one that they mentioned in the article, which is the fact that it will be much more efficient to carry out actions. Simply taking out your phone uses up some of your time, and when you're on the go that time is valuable. I think one of the disadvantages is the fact that things will be much more visible, and social interactions amongst people will tend to shift. If everybody is looking at their arm, that might make things rather awkward; it will also be hard to keep private data invisible to others without having to disguise or hide your entire arm.

I think an interesting interface is the Sensus phone case, which allows you to interact with your phone in almost every way besides the screen. This interface capitalizes on the physical real estate of the phone and tries to make use of it. The article states, "This mobile scenario is particularly challenging because of the stringent physical and cognitive constraints of interacting on-the-go." I think this case relieves some of the stress of interacting on the go, because you no longer even have to touch the screen to, say, scroll; interactions with the device now happen by simply holding it. https://sensusxp.com/

Will Tang - 4/21/2014 15:03:44

Some of the new technologies for computer interaction are interesting in their closeness to everyday physical interactions. One of the biggest advantages of these direct manipulation interfaces is the support of multi-touch. Typical computer interfaces have a mouse and keyboard as inputs, but the user's influence on the interface is limited to where the mouse is pointing. Moving the mouse and selecting a small target may also take more time than some find comfortable. Direct manipulation interfaces that support multi-touch allow users to manipulate multiple objects/areas at any time, and these interactions may be much faster, as selections are made with fingers and hands rather than a cursor. In addition, multi-touch interfaces may support multiple users, which is a tremendous advantage over traditional computer interfaces in the context of group collaboration or teaching. Sharing control may not be completely necessary for sharing all data, but for data that is generated on the fly, shared controls are much more suitable. As for on-the-go interfaces, the obvious advantage is that a user is not confined to a device that may weigh too much or take up too much space. The bio-acoustic skin-input interface mentioned in the article is useful in that it eliminates the physical handheld device a user must hold to retrieve data, and instead leaves the user's hands relatively free. Input happens through sensors that detect touches or taps on the skin of the palm or forearm, which may speed up interaction compared to selecting objects on a phone or computer. Despite these advantages, on-the-go interfaces may be limited by the type and amount of data they can represent. The skin interface is probably not useful for watching videos, and the surface area of the display is limited to the surface area of a person's body.
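The bio-acoustic idea comes down to this: a tap at each skin location rings the tissue differently, so the system extracts a few acoustic features per tap and classifies them. The sketch below is a deliberately toy version, with damped sinusoids standing in for real skin transients and a nearest-centroid classifier; the feature set and signal parameters are my own simplifications, not the actual published pipeline:

```python
import numpy as np

def tap_features(wave, fs):
    """Crude per-tap features: total energy and dominant frequency (Hz)."""
    spectrum = np.abs(np.fft.rfft(wave))
    freqs = np.fft.rfftfreq(len(wave), d=1.0 / fs)
    return np.array([np.sum(wave ** 2), freqs[np.argmax(spectrum)]])

def nearest_centroid(train, labels, sample):
    """Classify `sample` by the label of the closest mean feature vector."""
    best_label, best_dist = None, float("inf")
    for label in set(labels):
        centroid = np.mean([f for f, l in zip(train, labels) if l == label], axis=0)
        d = np.linalg.norm(sample - centroid)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

fs = 4000
t = np.arange(256) / fs

def tap(freq, amp):
    # Damped oscillation standing in for a skin transient at one location.
    return amp * np.sin(2 * np.pi * freq * t) * np.exp(-40 * t)

# Pretend wrist taps ring low and fingertip taps ring high (invented numbers).
train = [tap_features(tap(f, a), fs)
         for f, a in [(150, 1.0), (160, 1.1), (400, 0.5), (390, 0.6)]]
labels = ["wrist", "wrist", "finger", "finger"]

result = nearest_centroid(train, labels, tap_features(tap(155, 1.0), fs))
print(result)
```

Real systems use many more features and per-user training data, which is exactly why their reported accuracy still trails conventional input devices.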

One example of an interesting interface is the Manson M1D1 guitar and other similar Manson variants. The guitar has an x-y controller attached below the bridge that can be connected to anything the user may want to control. The guitar is used by Matt Bellamy of the band Muse, who typically connects the controller to a Korg Kaoss pad used to apply filters and other effects to the guitar. This could somewhat be considered an interface on the go, as it's attached to an instrument and can be brought on stage without any additional baggage. The interface is also versatile, as it can be connected to many electronic devices, and it provides sweet onstage effects without the need for a dedicated pedal controller.

https://www.youtube.com/watch?v=po1ojCMBjPs http://youtu.be/P69dBo6ZuuQ?t=3m8s

Armando Mota - 4/21/2014 15:04:49

Interactive surfaces

Advantages:
- Allows for greater ease of collaboration
- Allows for greater equity in collaboration and emphasizes action instead of sharing of info
- Allows users to use their hands to physically manipulate information and pass it directly to others
- Easy to see; also easy to show to and look at others while communicating

Disadvantages:
- Not mobile; only usable where the projector and all other equipment is located
- Might not be as well presented or easy to use as different orientations (such as architect tilt)
- Not as easily seen by a large group of people as a vertically oriented surface (like a wall)

Not suitable for:
- Working with or showing to large groups of people
- Individual work (this is debatable, but the article said that people preferred the architect-tilted interface for individual work)
- Remote interaction of users; no ability to let users work both remotely and co-located if they want to

Using the human body as an interface/measuring signals from the body

Advantages:
- We take the human body everywhere we go
- With a small enough form factor, it is less noticeable and awkward in public
- Allows for a large range of differentiable inputs
- Relatively robust among different physical form factors/body types

Disadvantages:
- There are more available input types from voice commands and possibly touch gestures
- It will not be acceptably accurate, at least in the near future
- Requires external equipment to be worn and a processor to analyze the signals (in addition to your normal phone, which is also a processor of sorts)
- It is not clear to others that you are engaged in something while using it; this might make for odd situations
- Takes a fair amount of training and calibration at the moment

Not suitable for:
- Tasks that require close to 100% accuracy
- Tasks that do not allow external equipment to be worn, or that are cumbersome with external equipment
- Quick sharing among people without training/calibration
- Continuous actions (might get tiring)

Finger Gesture Tracking - http://www.technologyreview.com/news/507956/leaping-into-the-gesture-control-era/ While this technology shares some similarity with the gesture-tracking technologies above, this one uses gestures performed in 3-D space, as opposed to on a surface (technically the skin sensor would be recognizing you performing movements in 3-D space as well, however it’s not actually measuring the movements, it’s measuring your body’s electrical signals that correspond with those movements). This technology uses multiple cameras to track your finger and hand movements to within a hundredth of a millimeter, and allows direct manipulation of events on screen in games, multimedia applications, and just about anything you’d want it to. Screw intuitive - this is leaning towards total transparency with the added bonus of defining actions that the real movements in the real world don’t currently do. You could, for example, use cameras to film you swinging a golf club, and simulate a virtual golf game. In this sense, you would be doing the exact same thing you would in reality (just without the feedback feeling of hitting the ball). You could also tie a swinging motion, or perhaps a swinging motion in addition to another motion or in a certain mode, to some other action, like maybe throwing a digital file into the trash, or calling a contact, or going to your favorite golf website. We have a very large amount of variability with what we can map to gestures.
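Before any of that mapping variability can be exploited, the tracker's 3-D path has to be collapsed into a named gesture that an action can be bound to. Here is a bare-bones sketch of that step; the axis-to-name table is an invented example, and real gesture recognizers use far richer temporal models:

```python
import numpy as np

def classify_stroke(path):
    """Label a tracked 3-D hand path as a gesture along its dominant axis.

    `path` is a sequence of (x, y, z) positions over time; the gesture is
    named after the largest component of the start-to-end displacement.
    """
    d = np.asarray(path[-1], dtype=float) - np.asarray(path[0], dtype=float)
    axis = int(np.argmax(np.abs(d)))
    names = [("swipe-left", "swipe-right"),   # x axis
             ("swipe-down", "swipe-up"),      # y axis
             ("pull", "push")]                # z axis (toward/away from screen)
    return names[axis][d[axis] > 0]

# A rightward sweep with a little vertical wobble.
path = [(0.0, 0.10, 0.00), (0.1, 0.12, 0.00), (0.3, 0.10, 0.01)]
label = classify_stroke(path)
print(label)
```

Once a stroke has a name, binding it to "trash the file" or "open your favorite golf website" is just a dictionary lookup from gesture name to action.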

Sergio Macias - 4/21/2014 15:17:11

The main advantage of these technologies is that they are just as mobile as other new technologies, such as Google Glass, yet much more discreet. Instead of saying your commands aloud, you can merely tense a muscle or perform some gestures on your forearm. The problem I see with the forearm application is that it removes the use of the arm for other things if the application takes a while, and having to hold your arm up for a bit could be strenuous. On the other hand, one could simply put their arm down while waiting for an application to process or load, and perhaps get some kind of notification (a ding or vibration) from the armband to signal that it is complete. This would allow you to start a long process, continue working, and then jump into the application once it's done. One of my personal interests at the moment in developing user interface technologies is the Oculus Rift. It's promising in that no one yet has been able to really nail down virtual reality and make you feel as if you were really there. Having seen countless interviews and demos, I feel that Oculus is the real deal and has a ton of potential for growth. Not only that, but virtual reality! I've been waiting on virtual reality since I found out what virtual reality was. http://www.oculusvr.com/

Nicholas Dueber - 4/21/2014 15:17:52

The main advantage of these new technologies is the intuitiveness of the interaction. This way of interacting represents a shift in how we use technology. Touch interfaces and motion detection give devices an unparalleled method of direct manipulation. The user can simply look at a projection of a screen on their hand and then type in the number they want to call. Moreover, interactive surface interfaces (interfaces built into tabletops in casinos and restaurants) give users a streamlined ability to order. There is no longer a need for a waiter to come up and ask if you are ready to order; the interaction is built into the table you eat on. This is just one advantage. Another example of this technology bringing us to new heights can be found in speech recognition. The computer can simply respond to voice commands. This is more manageable for a variety of reasons; for instance, it reduces the age required to interact with said technologies. However, the drawback is clear: the tasks you are able to execute have to be simple enough that a computer can parse your command and execute it in a manner desirable to the user.

Oculus Rift is an emerging technology that has captured the video gaming community in recent months. It is hailed as the most advanced virtual reality game system to come to the mass market. This console relies on the person's head movement as well as a game controller user interface. If a person leans forward, they can either peek around a corner or begin to walk. https://games.yahoo.com/blogs/plugged-in/oculus-rift-lets-dying-woman-final-walk-outside-194210990.html In this article, the gaming system gave an elderly woman who was close to death an opportunity to walk outside again. The game system simulated walking outside and looking around. This method of interacting is promising because it gives people the opportunity and ability to interact with the web in a 3-dimensional space.

Peter Wysinski - 4/21/2014 15:23:31

Large multi-touch surfaces allow for real-time sharing of on-the-fly-generated data between users. However, in practice, one user tends to be dominant in the interaction, and equal input from all users is not possible. Another method of interaction presented by the article is an armband augmented with a pico-projector, which permits interactive elements to be projected onto the user's arm. Such a technology is a shift towards wearable, always-on computers that give users access to information on the fly. While wearable technology does present numerous benefits, it is not without its drawbacks. Building a system that can use signals transmitted through the body and that works across multiple users with minimal calibration presents a technical challenge. Furthermore, the stigma of an always-on computer attached to oneself presents a social challenge, and the user is never fully sure of who has access to the data generated by the device. An interesting form of interface technology is Leap Motion (https://www.leapmotion.com/), which allows users to control their computer with hand gestures. It makes games more immersive and permits interaction without the need for additional hardware. Furthermore, it enables a user to 'pick up' or 'rotate' something in 3D on their screen. Once 3D display technology becomes prominent, interaction devices such as Leap Motion will become crucial for freely manipulating elements.

Emily Sheng - 4/21/2014 15:32:04

It looks like the main advantage of these new technologies is that they allow more natural gestures and forms of interaction between the user and computing. In other words, these new forms of interaction are trying to mimic actions that users are already very familiar with in their everyday lives. The researchers involved with these future interaction techniques seem to be very focused on narrowing the gulfs of execution and evaluation in user interaction. Also, now that touch is starting to be a very large part of computing interaction, the concept of multi-touch has become more prominent; the advantage of this is that most people have fingers and can thus interact with devices with little to no prior training or additional hardware purchases. Some of the more detailed applications may not be suitable for general touch interaction, just because small details may require a more accurate kind of interaction (whereas with touch, the fat finger problem may be an issue). Researchers of human-computer interfaces have started working on ways to develop more feedback for touch screens:


What I find really interesting is that they can make the screen feel "slippery" and "sticky", and that the technology for these concepts is the same as that of phone vibrations, but just at a higher frequency. Feedback is definitely one of the most important aspects of user interfaces, so I think the work being done in this area will definitely be applicable.

Jimmy Bao - 4/21/2014 15:32:54

I think the interface where you can display the interface on the human body is pretty cool. The article has a point: we always do have our body with us. This was one concern I had while I was reading about the tabletop projecting interface; specifically, if I didn't have a table (or didn't have a table big enough for my needs), how would I be able to use such a thing? I think these projection interfaces are neat, but I don't know how practical they really are. For example, I would find it kind of weird to use one out in public. Can you imagine seeing someone on the bus typing on their arm? That'd be just kind of weird, until, I guess, it becomes socially acceptable.

I always found this tabletop keyboard projection to be kind of neat: https://www.youtube.com/watch?v=QHhL26Esn7w

It's promising because it could be pretty portable and I could carry it with me. I also *think* it could potentially do less damage to my fingers if I'm constantly typing. I can type very lightly, as opposed to the traditional keyboard where I have to exert a certain amount of pressure to get that tactile feedback to know I pressed the key.

Erik Bartlett - 4/21/2014 15:39:04

I think multi-touch interfaces have proved themselves very useful in our current world. Touch screens are ubiquitous in the current day and age, and the companies making them have been very good about following standards set by predecessors, allowing for easy use of each of the multi-touch interfaces a user interacts with. Multi-touch has all the benefits of a direct manipulation interface, and it uses the metaphors of picking up objects, moving them, and sliding them very well. The actions that we can do with these interfaces are also becoming more and more robust, giving us more power and control without having to have multiple tools (i.e., mouse, keyboard, etc.). Touch interfaces only really seem to fall short in the area of use and input on the go. Because of the way your body moves while you're walking or driving, it is difficult to manipulate a touch screen without the kind of reference point a physical keyboard provides.

The benefit of muscle-computer interfaces comes in the number of ways you can interact with the computer. There are so many different muscles that fire during a given motion that they can be used to uniquely map onto many actions; the number of actions that could be represented would be huge. The main problem I see is that multiple muscles fire every time you make a movement, so actions would have to be mapped from groups of muscles firing, as opposed to a single muscle. This could also hinder accuracy on the go. Because so many muscles are used to stabilize the body, whether sitting or standing, the inputs could fire more than wanted and cause invalid operations to occur.
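That group-of-muscles point can be made concrete: instead of thresholding one channel, you treat the per-channel activation levels as a pattern and match the whole pattern against stored templates. A small sketch under invented numbers (the channel count, template values, and gesture names are all hypothetical):

```python
import numpy as np

def activation_pattern(channels):
    """Per-channel RMS of a window of raw EMG samples, shape (n_channels, n_samples)."""
    return np.sqrt(np.mean(np.asarray(channels) ** 2, axis=1))

def match_gesture(pattern, templates):
    """Pick the template whose activation pattern points the same way (cosine similarity)."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(templates, key=lambda name: cos(pattern, templates[name]))

# Hypothetical 3-channel templates: a fist lights up all flexors, a point mostly one.
templates = {"fist": np.array([0.9, 0.8, 0.7]),
             "point": np.array([0.9, 0.1, 0.1])}

# Simulated window where all three channels are strongly active.
rng = np.random.default_rng(1)
window = rng.standard_normal((3, 200)) * np.array([[0.85], [0.75], [0.80]])

guess = match_gesture(activation_pattern(window), templates)
print(guess)
```

Matching whole patterns rather than single channels is also what makes the stabilizer-muscle problem tractable: background co-activation shifts every channel a little, but it rarely mimics a stored gesture's full shape.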

http://www.oculusvr.com/ - Oculus Rift

Oculus Rift is probably the coolest thing ever. Being able to be in a virtual reality has almost endless possibilities. The one everyone jumps to right away is video games: shooters or RPGs would be completely different, allowing users to control their movements and character with their own body. But beyond video games, it could allow students to interact with things they otherwise could not: they could explore a volcano, practice martial arts, interact with a teacher, or hang out with a friend across the world. There are just so many applications and opportunities that get me really excited.

Steven Pham - 4/21/2014 15:41:39

Having a display projected anywhere might be tiring if you have to do a long task, since your arm has to be bent to see the projection. The advantage, though, is that you can have a screen interface literally anywhere you want. The tabletop seems useful for indoor places, but it's like a giant touchable TV. Projection keyboard: http://www.brookstone.com/laser-projection-virtual-keyboard . This seems useful if you have a table and don't want to carry a keyboard case. You can type anywhere you want as long as there is a flat surface.

Daphne Hsu - 4/21/2014 15:44:54

The main advantages of these new technologies are that users can directly manipulate interfaces, and that the interaction is more realistic and intuitive. It is also easier to interact with other users. Multi-touch systems aren't useful on the go if the user has to do other things like driving or riding a bike, since their hands are busy.

http://www.google.com/glass/start/ I think Google Glass is really interesting because voice commands are as important as touch to activate different features. The user experiences a hands-free environment where Glass "listens" to them. Glass is trying to change the way users interact with their environment, and thus how they interact with friends (i.e. texting, sharing photos/ videos).

Emon Motamedi - 4/21/2014 15:48:57

After reading about potential ways we will interact with computing in the future, a number of advantages make themselves apparent. Surrounding the usage of surfaces and their enhanced technology, it appears that there could soon be alternatives to direct touch, as surfaces are becoming able to detect motion hovering above them. This will increase the functionality behind interactions with these surfaces, as it will open the door to more than just multi-touch. A second advantage comes from the extension of the "Internet of Things," as it applies digital information to the objects we use in our everyday lives. This development has huge benefits for teams that are co-located, as it allows for shared controls and collaboration that is much more interactive.

The extension of the body as a surface for interaction presents both advantages for certain tasks as well as disadvantages for less suitable tasks. By utilizing the body, our devices are consistently at our fingertips, literally, and could potentially be interacted with more naturally because it is our own body. However, tasks that require more privacy would be a little more difficult to accomplish given that our bodies are on display for the world. Secondly, the issue of the "Midas Touch" looms, where our every day interactions may accidentally trigger an action on the technology.

http://www.spritzinc.com An interface that I find incredibly interesting is Spritz. Spritz changes the interface we use to read: it presents one word at a time at a certain words-per-minute rate and centers your eye on a predefined point to maximize your ability to read and retain the information. This has huge implications for the amount of information we are able to consume, as Spritz drastically increases the speed at which we can get through material. However, Spritz is not ideal for all forms of literature. Reading that requires a lot of interpretation or processing time would likely be difficult to consume using Spritz, as the interface moves too rapidly. News headlines, however, would be perfect for the service.
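The mechanics behind this kind of rapid serial visual presentation are simple to sketch: split the text into words, show each one for 60/wpm seconds, and anchor the eye on a fixed pivot letter. The pivot rule below is my own rough stand-in for Spritz's actual "optimal recognition point" calculation, which is proprietary:

```python
def rsvp_schedule(text, wpm=300):
    """Return one (word, pivot_index, seconds) tuple per word, RSVP-style.

    The pivot approximates an "optimal recognition point": slightly left of
    center, where the reader's eye stays anchored for every word.
    """
    seconds = 60.0 / wpm  # uniform display time per word
    out = []
    for word in text.split():
        pivot = min(len(word) - 1, max(0, (len(word) - 1) // 3))
        out.append((word, pivot, seconds))
    return out

schedule = rsvp_schedule("Reading one word at a time", wpm=300)
for word, pivot, dur in schedule:
    print(f"{word:<8} pivot={pivot} {dur:.2f}s")
```

At 300 wpm each word gets 0.2 seconds, which makes it obvious why dense, interpretation-heavy text suffers: there is no slack in the schedule for re-reading or pausing.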

Allison Leong - 4/21/2014 15:56:36

The main advantage of these new technologies is an expanded repertoire of ways that a user can interact with a device. Ever since the mass distribution of the Apple iPhone, the two-finger pinch-to-zoom gesture has become commonplace as an intuitive way to interact with a device, but gestures like this are still very basic and limited. New technologies integrate alternative methods of sensing, including sound (mounting a microphone on a surface), which broaden the number of ways a user can interact with an object or device. Technologies such as a device that senses touch on the back rather than the front (to prevent occlusion of the surface), or a device that senses the user's shadow, offer additional new and improved ways to interact with devices. Bio-acoustic sensing offers yet another way for users to provide input. Some tasks that these technologies are not yet suitable for are multi-user interactions; with current technology, it is still difficult to distinguish between different users' touches on the same device. Using microphone input to detect touch is also unsuitable for devices used on the go, since environmental factors create noise that can interfere with the user's interaction.

http://www.emotiv.com/apps/epoc/299/ This is a link to a recent brain-machine interface gadget that records neural signals and translates them into the thoughts, expressions, and feelings of the user. This is very interesting because with current interfaces, users have to translate their thoughts into gestures that they then make to interact with an interface. With headsets like this, that translation step can be eliminated if the headset can directly detect what it is that the user wants to do.

Prashan Dharmasena - 4/21/2014 15:59:49

The main advantage of these new technologies is being able to multitask. For example, the tabletop designs are very good for expressing ideas or bringing up information while in a meeting with other people. The "on the go" interfaces are obviously designed with the idea of being used while running/driving/etc. While these interfaces are optimized for usability in situations with multiple people, or situations where you can only give the device half of your attention, they fall short when you are using the device to sit down and do work. These interfaces will never be as fast as a keyboard and mouse for standard work such as word processing.

http://www.emotiv.com/apps/epoc/299/ I think brain-computer interfaces have a lot of promise and have the potential to completely remove the articulatory distance of the gulf of execution. Once you get past the actual language of the interface in the semantic distance, you need to just think of the action you wish to perform and the interface will do it. While this may not be the best interface for some tasks, as it probably requires quite a bit of concentration, the idea is pretty interesting.

Sol Han - 4/21/2014 16:03:21

Interactive surfaces and tangibles can make users feel more involved with the product they are interacting with. They can play with the user's senses more directly (sight, touch, sound, etc.) and perform more effectively than more traditional UIs because they can incorporate physical space into their design. Many interactive surfaces, including tabletop surfaces, support multi-touch gestures, which opens up new forms of input (one common example that is used today being pinch-and-zoom) as well as opening possibilities for multi-user collaboration. Interactive tabletop surfaces are not as portable or as private, however, making them unsuitable for tasks that require those factors.

Interfaces on-the-go are an excellent way of seamlessly incorporating UIs onto one's physical body. Muscle-computer interfaces, for example, can read muscular gestures to quickly react to the user's actions. This makes such interfaces suitable for everyday activities that need to be accessed on-the-fly, such as calling someone or doing a quick Wikipedia search. They have some limitations, however; they are probably not as suitable for complex tasks that require more refined control over the product (e.g., a drawing/painting app may require more precision, which may be clunkier to implement for on-the-go interfaces).


This article discusses wearable UIs. For example, one might "wear" a mini-computer in the form of a watch. A modern example that fits this idea is Google Glass. This sort of UI technology is interesting because it combines form with fashion. For example, it would be neat to have shirts with designs/colors that can be changed on the go. Another example would involve shoes with sensors that could detect features of the ground, such as slope, and provide appropriate feedback to the wearer. These features could enhance everyday experience for the user in a natural way.

Diana Lu - 4/21/2014 16:05:20

The main advantages of these new technologies is the ability for users to interact directly with the interface. For example, with the table top surfaces, users can utilize direct manipulation and interact with the technologies in a more intuitive and natural way, eliminating the need for users to "learn" how to properly interact with the tech. These technologies also enhance collaboration, because a tabletop surface allows for multiple users to use and see the same interface.

However, there are drawbacks to this kind of interface. For example, if the user is trying to do something computationally significant, like coding or writing a research paper, it would be inefficient to use a touch surface, since typing is much faster with a keyboard and mouse. Touch interfaces are not as efficient for users in this sense.

An interesting technology that I looked at is Google's new smart contact lenses, which are not close to being usable by the masses but offer the possibility of a completely new interaction using just eye movements. While currently all that is being considered is the ability to use the eye to take pictures, this could potentially be expanded to other interactions.


Daniel Haas - 4/21/2014 16:13:11

These technologies are advantageous because they allow users to interact with digital, non-tangible information using the same physical world that they are accustomed to interacting tangibly with. Like direct manipulation interfaces, this allows for a more natural, direct interaction for users. Also, these technologies are helpful in the context of collaboration--users in the same physical space can find it difficult to collaborate over a small mobile phone, and taking advantage of tabletops, walls, or other available space can increase the variety and quality of collaborative tasks.

However, these interfaces imply a strong metaphor between interacting with the physical world and non-tangible computation, which isn't always appropriate. Physical interaction with the world takes time, and is inappropriate for tasks that involve high degrees of repetition. Simple scripting might be more appropriate in such a scenario.

One promising new interface, under development at the University of Washington, is SqueezeBlock (http://www.cs.washington.edu/node/3856/, http://www.cs.washington.edu/sites/default/files/hci/papers/tmpPo5TLY.pdf), which uses haptic feedback to convey information without requiring visual interaction. This is promising because it enables interaction in settings where visual attention to devices is either inappropriate or impossible, such as while driving or in a meeting.

Opal Kale - 4/21/2014 16:22:11

These new technologies give users the ability to control the interface better. For example, look at the "super pen" that will come out some time in the future, as mentioned in "Interactive Surfaces and Tangibles." This pen would be optimal for drawing, rotating, searching, etc., but not suitable for typing (potentially), and will eventually become as natural to us as the keyboard and the mouse have. Here is the link to another interesting interface technology online:


This is going to be interesting because it talks about microchips implanted into people's brains, enabling all sorts of useful treatments for conditions ranging from paralysis to blindness. The science is called "brain-computer interface," or BCI.

Kaleong Kong - 4/21/2014 16:26:34

I think the main advantage is that they explore more possibilities for humans to communicate with computers in different ways. Muscle-computer interfaces, for example, may allow users to control future robots (robot arms) directly with their natural body motions instead of a control panel with buttons and sticks. Different kinds of interfaces have their own limitations; for example, pen input is not well suited to writing documents, since typing is faster than handwriting. I think augmented reality is an interesting interface technology. https://www.youtube.com/watch?v=frrZbq2LpwI It gives us a new way to interact with computers. You don't need to type to get the information you want. All you need to do is point your camera at something you are interested in, and the device will show you the information it knows about that image.

Justin Chan - 4/21/2014 16:28:49

From the articles, it is clear that for most people, the way forward is touch. I'm honestly not surprised; touch is one of our innate senses, and technology is almost good enough these days for us to fully utilize it in interfaces (contrast that with external devices such as the keyboard and mouse). The main advantage of these new technologies is the precision with which we can navigate these interfaces. As I said before, touch is one of our innate senses, and being able to "directly" manipulate these interfaces gives us a level of granularity that wasn't possible with a mouse or a keyboard. For example, drawing a straight line with a mouse is pretty difficult. Doing it with your fingers is a lot easier. Touch is also naturally more intuitive, which lessens the learning curve that most people have with a new technology. This will be most effective with seniors, for whom the learning curve with respect to technology tends to be a bit steeper (consider how much easier it is for your grandma to learn how to use an iPad as opposed to a laptop...). In terms of disadvantages, precision is often gained at the expense of efficiency, and this was touched upon in the reading about direct manipulation. Human actions usually take "more effort" -- if I want to type out an insanely long text, my actual keyboard is probably a better candidate than my fat thumbs. My view on this issue is that touch interfaces are good for simple, easy tasks -- more complicated tasks not so much.


This gaze interface lets you control your computer with your eyes, as it tracks your gaze and responds accordingly. I definitely see the use of it, especially when it comes to phones. When I'm at Wing Stop, the last thing I want to do is to touch my phone with my greasy fingers. This interface would definitely come in handy. Secondly, it could prove useful for people with disabilities because it just requires the eyes to control the computer.

Charles Park - 4/21/2014 16:35:02

New technologies allow for more direct manipulation of the objects presented to users, letting them use actions that are more native to the physical world. This allows users to disregard memorizing counterintuitive actions and makes the learning process of an interface much easier. Another advantage is that they allow for much more interactivity, which enables collaboration and shared controls.

inFORM, a dynamic shape display (http://www.fastcodesign.com/3021522/innovation-by-design/mit-invents-a-shapeshifting-display-you-can-reach-through-and-touch), is a very interesting communication medium for the future. A future model of this technology could allow for immense advancement in collaboration, and perhaps something like advanced surgical operations even when the surgeon is miles away. It also seems like something that could become a great day-to-day item for everyone in the future.

Matthew Deng - 4/21/2014 16:39:02

Mentioned in the article are two new technologies: interactive surfaces and tangibles, and interfaces on the go. Interactive surfaces and tangibles are beneficial because of how strongly they implement direct manipulation. In addition, they are known for their multi-touch capabilities, allowing for two-finger or pinching gestures as well as multiple users collaborating and sharing control. However, tangibles are not very suitable for certain tasks: for example, any task that might require hovering cannot be done on a tablet that doesn't afford hovering. The other technology mentioned in the reading is interfaces on the go. This is a great advance in technology as people are getting busier and busier, as shown by a continuously growing demand for faster, smaller, more interactive phones. Creating muscle-computer interfaces that use bio-acoustic sensing, however, is on a whole new level. Using the body as an interactive surface opens so many new doors, as you will always have your body available to use, and it will feel more naturally comfortable than using an external device. Unfortunately, this new technology might not be extremely comfortable, so certain tasks that are meant to be done in a comfortable environment, such as watching television, might not be so applicable for this form of technology.

http://www.technologyreview.com/news/526641/shape-shifting-touch-screen-buttons-head-to-market/ A new interface technology in the process of being made is a tablet touch screen that physically blisters out into a physical keyboard at the touch of a control. When not being used, the screen feels smooth at the surface. Although its usage is still limited, this possibility can lead to many new interface possibilities, such as a touchscreen that can pop out based on what is on the screen, making games more interactive or videos 3-dimensional.

Seyedshahin Ashrafzadeh - 4/21/2014 16:42:38

One of the technologies in the process of development is tabletop screens. This technology enables collaboration between users. With tracking of control objects on the table and projection techniques, this technology enables different interactions. The position of the controlling objects (whether they are turned or not) lets the user interact with the objects in many new ways. Also, with the projector, we can project digital information onto everyday physical objects. Therefore, the advantages of this technology are being able to add digital information to physical objects and to have many new interactions based on the way we place the objects. One of the downsides of this tangible interaction is that we are not able to type very fast. This technology will have some sort of hand-gesture and handwriting recognition, but we know that we type faster than we can write. The angle of view might also be a problem. For example, for a collaboration interface, this technology might have a little trouble providing the same angle of view for everyone (everyone has to be on the same side of the table to view and collaborate at the same time). Another disadvantage of this technology is occlusion. Another interesting and rising technology is muscle-computer interfaces. One of the advantages of this type of technology is the direct interface with human muscle activity in order to infer body gestures. Speech recognition software and computer vision interfaces require the user to make motions or sounds which have to be sensed externally; this cannot be easily concealed from other people. However, muscle-computer interfaces provide this opportunity. With a band of sensors placed on the upper forearm, the sensors are able to figure out finger gestures on surfaces and in free space. However, this technology is more for on-the-go tasks. It is not very good for heavy interactions like typing.
Another interface that I found and thought was very interesting is MIT's shape-shifting table surface. The link is http://deadstate.org/watch-researchers-at-mit-develop-a-bizarre-shape-shifting-table-surface-that-will-blow-your-mind/ This tangible interface is called inFORM. It can capture objects in 3-D with a camera, process that information on a computer, and change the height of solid rods on a tabletop surface to interact with that physical object. Through this technology, users are able to manipulate physical objects from afar. They can lift objects and turn them even when they are not near the objects. It can show 3-D models of different objects (like cars). The video shows different applications of the interface, and one of the things that I thought was very interesting was its ability to show different equations (like z = x·y).

Andrew Chen - 4/21/2014 16:50:36

The main advantage of these new technologies is that with touch-based interfaces, there is a lot more variability than in an interface with, say, mouse clicks or key presses. For instance, touch interfaces introduce the possibility of multi-touch gestures as well as touches of varying weight, duration, etc. Because of these variables, touch-based interfaces introduce a lot more opportunity to gather interesting and potentially useful data in terms of control mappings. More specifically, this article discusses the potential of table-top interfaces. One of the potential advantages of a table-top interface is the physical affordance of the table: a user may place physical objects on top of the table, and the table may measure different metrics from that physical object. Another potential is the possibility of collaboration, or even control sharing. However, certain tasks may not be ideal for a touch interface. For example, text input still seems cumbersome on touch interfaces; physical keyboards or speech recognition seem much better suited. Also, the table-top interface seems best suited for collaborative tasks. For individual tasks, it would seem a little too much; a table-top device is most likely larger than a personal touch-screen device.

Another interesting interface technology is augmented reality (AR). The most notable example of this is the Google Glass. Combining AR with wearable technology is intriguing, because it provides users the ability to learn about objects by simply looking at them. In addition, the user no longer has to whip out his phone to search up information regarding something they see; he can simply direct his vision towards the desired object, and the interface should provide him with corresponding information. Furthermore and perhaps more importantly, AR’s potential goes beyond mere recreational use. If we combine AR with wearable technology, we can imagine a wearable AR device that has other sensors (temperature, sound, etc.) besides visual ones. Imagine, then, a fireman who is inside a burning building trying to save people trapped inside. With a wearable AR device, the fireman can simply scan the room with his eyes, and the device will automatically provide him with potentially life-saving information.

Cheng Sima - 4/21/2014 16:56:31

The articles present interesting future interface technologies and new possible interactions. The article "Interactive Surfaces and Tangibles" presents "tangible user interfaces," specifically illustrating several "interactive tabletops." These technologies have several advantages. They combine control and representation on the physical device. They also allow for tasks that require a high degree of collaboration and shared control. The input can be more diverse, such as placing other objects on the screen, moving things around, and rotating things. The input is no longer limited to our own fingers and hand gestures. In addition, they minimize training because they are a form of very direct manipulation and mimic tasks in real life. For example, the urban planning tabletop (URP) detects changes in the position of physical objects on the table and can analyze different urban layouts in real time. However, these technologies are not suitable for highly private tasks that should not allow multiple control and collaboration at once. Also, they are not suitable for small-screen devices such as the mobile phone, because the physical limitation of the screen restricts complex multi-user interactions and collaborations on the physical device.

The second article, "Interfaces on the Go," presents possibilities of micro-interactions, specifically introducing physiological sensing for interactions. The article talks about both muscle-computer interfaces and bio-acoustic sensing. The main advantage of these is that our body is always with us: consistent, reliable, and always available. These technologies are not suitable for tasks that are more abstract or complex. For example, coding will not be effective on these technologies, and typing a lengthy document will also not be a suitable task.

Another interesting technology I found is Leap Motion (https://www.leapmotion.com/product). This device connects to a computer via USB and recognizes objects within an arm’s length distance. This is promising because it not only senses hand and finger movements, but you can also pick up a pen or other objects to use as input.

Justin MacMillin - 4/21/2014 17:00:20

Pen-based computing is effective because it imitates an action that users are already used to: writing with pen and paper. It appeals to people because it is an intuitive way to interact with a computer, especially for those who are not as comfortable with computers as others. Pen technology also makes it easy to switch between different colors, brush sizes, etc. Multi-touch interactions also enable users to interact with devices in intuitive ways. Concepts like pinch-to-zoom make it easy for users to both interact with programs and figure out their capabilities.

http://www.usatoday.com/story/tech/personal/2014/04/21/amazon-phone-potential-features/7960825/ This article details some possible capabilities of a new phone by Amazon. The main thing I want to point out is a 3D screen. I think this way of interacting with devices will be the new way of interacting with technology in general, not just phones. I am interested to see how they are planning to implement this without wearing 3D glasses.

Sangeetha Alagappan - 4/21/2014 17:00:16

The reading explores a variety of promising interfaces that could be ubiquitous in the future. The interfaces focused on are Tabletop interfaces like Reactable, etc. and Muscle Computer Interfaces as well as interfaces that utilise bio-acoustic sensing. The main advantages of these technologies are their interactivity and capacity for “direct manipulation”. A new user is bound to find these interfaces less intimidating than a computer interface as these interfaces (especially the Muscle Computer Interaction interfaces) tend to be intuitive - utilising normal human gestures. The Tabletop interface is a very promising idea that is bound to become popular, seeing as it affords collaboration and is inherently social. Tabletop interfaces would be advantageous for teamwork - users could collaborate and jointly manipulate their work on the same platform. Using the tabletop interface is more natural and direct as it affords direct manipulation and exploration, than using a traditional computer interface. The Muscle Computer Interfaces use micro-interactions in a way that allows the user to use his/her own body as an interface (which is an astounding idea!). It’s advantageous because it is portable, easy to use, mobile and has so far produced test results of reasonably high accuracy. Such an interface would revolutionise the way we interact with technology - making technology personal and wearable to be used as an interactive window in any environment. Skinput and other such interfaces that use bio-acoustic sensing are also interesting ideas as they are interfaces that will always be readily available. Interfaces that use the body’s modalities are not only more intuitive to users but also cut production costs in the long run.

However, all these technologies come with a number of disadvantages. They require a great deal of training and might not handle noisy signals well. The system must provide enough affordances that the user can learn the new system well. The Tabletop interface requires users to crouch or stand over a tabletop which may not be applicable to a number of situations - for example, during presentations, the tabletop interface would not be useful. Sometimes when working on a project, team members prefer to work on separate copies of the project and use version control, instead of working on the same screen. Muscle Computer Interfaces (MCI) might not be useful to people with disabilities. It might also be difficult to use Skinput and similar interfaces that depend on acoustics for people in noisy environments. A MCI is not useful in document editing or any significant task like data processing. In all interfaces, the challenge of handling the “Midas Touch” is always difficult due to the wide area available for manipulation and the challenge of integrating the interface with the user’s other modalities and devices is a problem worth solving.

One interesting interface technology is the brain-computer interface. One such device is the Emotiv Epoc: https://www.emotiv.com/epoc/ It is a fascinating technology that essentially "reads your mind". While it is now only capable of reading rudimentary actions like "turn off and on" or "pick up a box on screen", it has immense potential if developed further. I find it interesting because it utilises our minds to produce output - it requires no external effort except clear thought. This has incredible applications - especially in helping cope with disability. It can also enhance the video game experience considerably. We could essentially control our technology with our minds - and that is a very powerful idea.

Ian Birnam - 4/21/2014 17:00:27

The main advantages of these new technologies are that they're bringing in people who otherwise would not be interested in these technologies. There is a surge right now in physically interactive technology (like the Reactable), and within that, a big push in wearables. These technologies get people to become engaged in computers in physically interactive ways, and make programming seem more interesting than just sitting and typing up code. From these technologies, the art aspect of CS becomes much more apparent.

Some tasks they aren't suitable for are creating products where the physical is what you're trying to avoid. For example, Twitter or Facebook are social websites that allow people to interact with each other without physical contact or communication. Physical implementations of these products would be counterintuitive to what they're trying to achieve.

getpebble.com I think the Pebble watch is very interesting because it's a smartwatch, but it's much more minimal than other smartwatches on the market. There is no touchscreen, and the interface is only buttons. It can control your phone's music, and you can get push notifications on it. There are also apps you can download for it. What I like about it, though, is that it's a wearable that doesn't distract too much from everything else. You can get notifications from it, or control your music on the go or from a distance, but you're not completely engrossed in your watch all the time. It's more like gaining additional features that can ease your daily life, without creating something new to be consumed by.

Conan Cai - 4/21/2014 17:00:57

Tangible user interfaces offer a good compromise between the physical and virtual worlds. Whereas direct manipulation has a user moving things virtually on screen, tangible user interfaces give users something physical to manipulate. Just as direct manipulation is intuitive to grasp because it mimics the physical world, tangible user interfaces will be even more intuitive. This easy-to-use style of interface is well suited to educational tools, for both toddlers and elderly people who may be using a computer for the first time. For example, building blocks linked to a computer could easily be used to teach a toddler basic math. However, I think that tangible user interfaces serve very specific tasks. The interface is not suited to general computing, but rather to very specialized tasks where this physical interaction is actually useful (education, presentation, collaboration).

The second interface, wearables and sensors directly on the human body, is great for mobile use. There is no place more convenient, when on the go, than to look at your wrist to see notifications or quick updates. I think the main advantage of this interface is its portability and easy access to your computer (mobile phone). However, again, as a general computing device it seems impractical. These interfaces are suited to quick interactions that provide notifications.

http://www.oculusvr.com/ I think that the Oculus VR goggles are interesting because there are so many sci-fi movies and themes about virtual reality. While these may not be on the level of Tron and The Matrix, they represent a big step towards making such a world a reality. Movies and video games will be completely immersive. The maturation of virtual reality will open up a new multibillion dollar entertainment industry because everyone will want to be "in" the action. Virtual reality lets a person experience anything they want and that's why it's so promising.

Luke Song - 4/21/2014 17:01:15

The two articles contrast in the potential applications their described interfaces will be used for; the first shows how group interaction is possible with multi-touch and more complicated gestures on large touchscreens, while the second shows innovations in personal computing. Each is suitable for its own application and not the other's. Both technologies increase the complexity and capabilities of computer interaction; the experienced user will be able to perform more tasks more quickly using more complicated interfaces. But I think that drawing art will remain dominated by the pen and stylus.


The above link is a story about artificial sight that I found really intriguing. Basically, a camera sends information to a computer, which sends information to a person's nervous tissue. Today, there already exist brain-computer interfaces designed to aid paraplegics. Perhaps tomorrow we will have the full technological capacity to both see things beyond the range of our senses and interact with them as well. That way, I'd never have to go outside ever again.

Haley Rowland - 4/21/2014 17:01:31

I think the main advantage of tabletop interfaces is the potential for collaboration with others. The large flat surface affords a larger capacity for people to work simultaneously and interact in a physical way with software objects. These technologies do not afford work in a mobile setting because of their stationary nature. The main advantage of physiological interfaces is the use of humans' implicit motor knowledge. Because our proprioceptive capacity allows us to have knowledge about our movements without needing to focus our attention on them, humans can interact with physiological interfaces while attending to other things or while on the go. This technology seems well suited to smaller, less complex tasks (like the interactions you would expect on a mobile device), so it is limited in its capacity to tackle more intensive tasks. I am fascinated by MIT's inFORM because it allows for tangible interaction with physical objects remotely. While the technology is still in its beginning phases, I could see it having a large impact on society. Consider, for example, a doctor who is able to examine or perform a procedure on a patient without being physically with them. http://tangible.media.mit.edu/project/inform/

Brian Yin - 4/21/2014 17:02:34

The first article deals with tangible user interfaces, which are intended to provide a wider range of interactions beyond simply multi-touch and also foster collaboration and interactivity among multiple users. I think the main advantage of this technology is that it provides alternatives to traditional WIMP interfaces, allowing for many different and innovative forms of input and of displaying information. Such interfaces may not be as useful for tasks performed by a single person that do not require gestures beyond what is currently provided (possibly typing?).

The second interface explores using the human body as an interface to provide inputs. This technology is definitely good for users in crowded areas (e.g., subways, trains) or who are preoccupied (e.g., driving, walking) and unable to easily interact with a physical device. However, such an interface would only be useful if the hardware that provides detection and display is not obtrusive. The muscle interface only provides a means of inputting instructions, without a way to display the output. Thus, this interface is ill-suited for information display. However, this could be mitigated by combining it with the arm projector.

http://appleinsider.com/articles/14/04/08/microsoft-hopes-to-counter-apple-with-ai-driven-invisible-user-interfaces-on-future-devices This article talks briefly about Microsoft's attempts at creating a user interface that requires as little user interaction as possible by anticipating what the user wants or needs. I find this idea interesting and useful, but it seems impossible to always display information that will be useful to the user.

Doug Cook - 4/21/2014 17:04:30

I think the main advantages of these technologies are their mobility and speed. Tangible user interfaces are by definition freed from typical GUI confines, which, as the article implies, opens the door to interfacing with all sorts of physical objects. Mobility comes from the opportunity to use things other than mobile phones (or devices that rely on screens) as interfaces. The reading also revealed that a fair amount of time is spent by phone users just retrieving their phone and preparing it for their task (about 5 seconds), which could be reduced in the absence of the prototypal GUI model. This aspect of tangible and future interfaces demonstrates speed and convenience for certain tasks. Of course, there are still many tasks best left to traditional computers such as laptops. Word processing, for instance, would be inherently unproductive in the absence of visual feedback. Photo and video editing falls into a similar category – removing the screen removes much of the data itself.

One other interface technology I’ve recently heard about is a set of gloves that allows musicians to control their production software by using gestures. Artist Imogen Heap is currently using it and she’s demonstrated how applicable it is for creative professionals in her performances. The technology relies on bend sensors and combines with existing products like the Xbox Kinect to detect gestures, movements, and spatial properties of a performance. This article describes it and shows some demonstration videos:


Robin Sylvan - 4/21/2014 17:05:42

The first new technology we read about that is making its way into our modern world is the tabletop interface. These interfaces all have multi-touch functionality and are designed for many people to work on a table at once. The main advantage of having a tabletop is the potential for new methods of collaboration in a multi-user scheme. Lots of previous real-time, work-related collaboration using technology was generally sharing between people in different places, i.e., shared documents and video conferencing. Tabletop technologies are able to aid users working together to solve a project in the same-time, same-space dimension. One limitation is that a large surface is required for these tabletop interfaces. Also, it is presently not clear what kinds of collaborative work will necessarily be improved on a tabletop surface over writing on a shared whiteboard or screenshare. For example, working on a collaborative writing project on a tabletop interface may be difficult, as users on one end of a circular table may not be able to read the text upside down. Another technology we saw was muscle-computer interfaces. By sensing electrical impulses sent to muscles, an interface is able to tell what sort of action a person is performing with high probability. A similar interface is bio-acoustic sensing, which listens for the acoustic energy from different taps on the body. These interfaces give us the ability to interact with various devices through our actual bodies, and enable new methods of interaction. One example was an interface using a projection on various parts of the body that a user is able to interact with, limiting the necessity for large mobile screens and enabling interaction with one's own body. This could potentially make interactions faster and seemingly more natural. A major disadvantage is that these interactions aren't perfect – they required a lot of calibration with each user, and even then they were generally only around 80-90% accurate.
This could be very frustrating to work with when attempting to take many steps to complete a task. Some of the new technologies I'm really excited about exploring in the future are brain-machine interfaces. A link to some cool toys is located here:


Thee interfaces measure brain waves to trigger various interactions. The mindflex game has you guide a ping pong ball through an obstacle course through changing your brainwaves. Growing up with a sci-fi background, I've always wanted to be able to control devices with my mind, and the idea of an interface powered by thought interests me a lot. It will be interesting to see if brainwave-based control will become common as part of our everyday computer interaction.

Alexander Chen - 4/21/2014 17:07:54

Interactive Surfaces are paving the way for collaborative work, with interfaces designed to accept input from more than one user. By moving away from the WIMP design, with only one cursor and menu adapted for a single input, users can all be manipulating data and information at the same time, rather than huddling behind a single user, who is in charge of data entry. A horizontal surface allows for many people to be in close proximity of the information and to be able to directly manipulate it. This improves the same time - same place workflow.

Many of these designs rely on a horizontal work surface, which requires the users to be almost right above (or below) the plane of information. This does not scale to large audiences, such as a lower division computer science lecture. To maintain the relatively perpendicular relationship between the user's eyes and the plane the information is presented on, large lecture halls rely on large projection screens for the audience to digest information. They are unable to reach the screen and interact with the objects presented.

Another new interface presented in the article was the use of a body appendage to provide user input. One of these was based upon the detection of motor movement through electrical signals. Because our brain sends commands to our muscle cells to contract through electrical impulses, researchers were able to create a contraption that attached to the user's forearm near the elbow. The band wrapped around the forearm had small electrical contacts that detected when the individual fingers were flexed. The novelty of this input method lies in the absence of an additional physical structure that the user would normally interact with (keyboard, touch screen, mouse). This eliminates the need for a signifier of controls for repetitive tasks, such as playing/pausing and seeking in a song.

This technology would make it easy for users to control the music playback, volume, wifi, ring settings, etc. These tasks take a very limited amount of input and if the user could specify accurately which parameter they wanted to change (perhaps through a mode), they could then simply wave their fingers in a predefined pattern to make the change.

This technology would work horribly for applications that take in a multitude of arbitrary inputs, such as panning in a maps application. There are a large number of parameters to the traditional pan input, such as finger speed, direction, start and stop coordinates, etc. However, some input that might appear difficult to could be interpreted through AI for a astonishing positive experience. For example, typing might be possible. By tracking which finger has firing motor neurons and the sequence, we can, with a little machine learning and user typing data, probably determine the words that the user inters to input.

https://www.youtube.com/watch?v=s-U1s0GZGvw Here is an example of gesture tracking by a single camera using OpenCV. It does a pretty good job of tracking the hand's orientation and the fingers that are displayed. This would be nice for users who prefer sign language to typing or haven't learned how to type. With the cost of cameras dropping to dollars per sensor, it is possible to have an array of CCDs to capture gestures more accurately. These gestures could be used for macros or shortcuts (such as loading user preferences)

Max Dougherty - 4/21/2014 17:08:55

One of the greatest advantages of these new technologies is their ability to conform to natural human actions and schemas. This helps eliminate a period of training required to learn other input formats (penmanship, typing ability). These interfaces represent bottlenecks in a user interface. With only a single source of input, they discourage collaboration. New interfaces like the Reactable provide a multi-point interaction system which allows for a greater number of simultaneous participants, and the direct manipulation actively engages the user to reduce the gulfs of execution and evaluation. Unfortunately, currently these new interfaces do not facilitate a number of important tasks. A prime example of which is the ability to enter long form text and programming. Text to speech recognition has come far, but still suffers from a number of errors and lacks the precision necessary for syntactic interpretation. Google glass represents the first realistic attempt at a wearable computer. This "always-on" nature of glass allows the user to interact without switching context (to look at your phone). Furthermore, the optimal position allows the application to interact with the environment, reacting to position and motion. This allows it to augment a surrounding by providing additional information quick access to the web. http://www.google.com/glass/start/

Eric Hong - 4/21/2014 17:10:21

Tangible Interaction: Advantages - allows familiar actions, such as writing with a pen, as control input - lessens learning curve Disadvantages - not suitable for fast data input (writing usually slower than typing after practice)

Sharing Control: Advantages - increased user interaction - more collaboration in that everyone has direct manipulation Disadvantages - who has priority in conflicting actions? - syncing and timing issues

Bio-Acoustic Sensing: Advantages - no need for additional objects just for control - saves space (no laptop or even phones) Disadvantages - might be harder to see than on digital screen - need skin to be showing (no sleeves)

Other Interesting Interface - Mental Images to Screen Link - http://www.bubblews.com/news/532559-new-technology-tested-to-039think039-images-to-machines Advantages - fast image production (thinking faster than drawing) - can be used in search engines combined with image similarity ranking Disadvantages - ethical issues about mind reading/privacy

Chirag Mahapatra - 4/21/2014 17:16:16

As interfaces keep developing over the years they will be more natural to human actions. We see this from the trends in previous years. Initially, we had punch cards as an interface which were very un-intuitive and cumbersome to use. Then we moved to direct manipulation systems which were loosely connected to real world objects. Nowadays, we have touch screen systems which are even more intuitive. The key in all the interfaces is that the training time for a user to learn these interfaces becomes progressively lower. Another key advantage is that interfaces have become more collaborative over the years. This has primarily been via the advent of the Internet. Today we can edit documents simultaneously with large teams. As they improve they will be able to interact better with other natural aspects like speech, gestures, etc.

An aspect which hasn't improved with these interfaces is the ability to code. People still prefer coding on desktop like machines with keyboards compared to touch screens. People with disabilities also have difficulties in using these interfaces.

Something I am fascinated about is brain computer faces. Link: http://bits.blogs.nytimes.com/2013/08/04/disruptions-rather-than-time-computers-might-become-panacea-to-hurt/?_php=true&_type=blogs&_r=0. A world where our thoughts can be read and translated into action will be really amazing. This will provide access to technology to even those who have physical difficulty in moving arms.

Tristan Jones - 4/21/2014 17:16:41

One major advantage of the future interfaces is their increased correspondence to the real world. Currently, people interact with their computer through a keyboard and mouse and receive feedback through a glowing rectangular screen. The interfaces discussed show *physical* direct manipulation interfaces that seem fairly intuitive to use. They also provide social feature that allow users to work with others across the globe. However, as a programmer, one problem I find with these interfaces is that it is more difficult to sense the underlying data structures that formulate the program. This makes it harder to understand exactly how what I am using works and how to resolve errors or write programmatic macros that speed up common user actions.

For the next part, I'm going to talk about the "what UI will look like in 2019 video by microsoft research". Here is a linky: https://www.youtube.com/watch?v=8uXu2XHLzdM

One of the core features of this video is how people can connect $DEVICE_A with $DEVICE_B and everything just seems to "work". There's no "Error: printor driver failed to install" or some unhelpful BS that the user has to debug and fix. The integration of all the devices seems to be the core concept of what's going on. Furthermore, the computer seems to "just know" what a user wants to do. There's one point in the video where the user taps the keyring on the bottom left of the screen and the computer automatically pushes his data to the nearby cloud computer. How a user figures out what this keyring does I don't see any signifiers that represent this affordance but it's nice to have a computer to automatically do what I want it to do. One issue with this demo though is the extreme reliance on Touch and Visual cues. I mean, there are more body parts than just the eyes and fingers. Although this is a great start, I wish we could be a bit more creative than solely visual and tactile interfaces. Perhaps direct brain interface modules are the true future of human computer interaction.

Insuk Lee - 4/21/2014 17:20:17

These new technologies remove the abstractions in the interface. Instead of using the mouse or the joystick to point and click at what we want, we are using our own fingers and hands and gestures to convey our intentions. In this way, things have become more hands-on, and there are a more variety of ways to assign actions to things. There are some drawbacks, as things such as PlayAnyWhere from Microsoft requires a projector(light), requiring the room to be dark and also is not portable. The muscle-computer interfaces, on the other hand, requires too much equipment, requiring yourself to be hooked up to all these sensors everytime you want to interact with your computer or device. Until we come up with easy-to-harness, easy-to-carry hardware that can make this experience better, these interfaces are not up for adoption just yet.

One popular and interesting interface technology surfacing these days is, by far, the google glass - "http://www.google.com/glass/start/" Interacting with voice and eye movement and swiping motion on the side of the glass, this is a piece of technology that has shown us a preview of future interactions. We will be able to interact with our devices without much thought because it will allow us to interact with it as if we were talking to a friend or another person. We will tell our devices what we want through various means such as sound, gesture, eye movement, muscle contractions, and neural impulses - all in the order of feasibility. We are in the middle of exciting times and a lot are to be observed.

Christopher Schechter - 4/21/2014 17:21:22

Interactive tables are an interesting technology because they draw so many parallels to real, physical objects. The article makes a good point in that the table can be used to both show virtual objects on its surface as well as interact with physical objects placed on top of it. This ability lends itself well to affording easy access and direct manipulation to many objects on the table's surface--it would be very good, for example, for playing a game of chess virtually, or laying out a scrapbook page.

The wearable devices are great because of their convenience factor. They're perfect for any simple task that is repeated often, such as a phone or texting as they exemplified in the article. In fact, they would probably occupy much of the same use cases as mobile phones do today. But they will have similar limitations to mobile phones as well, notably the small "screen size" and limited input feedback. Additionally, projecting light onto the body could potentially make it hard to see sometimes. These would limit the devices to fairly simple tasks where not much input or output occurs.

Here's an interface technology I've been interested in recently, the Omni: http://www.virtuix.com/ As someone who plays video games, I think it's an awesome idea to add a very active element to games that would normally entail sitting down and being sedentary for the entire duration of your playtime. And this is about as pure "direct manipulation" as you can get--running in real life translates to running in-game, simple as that. While it can be great for games, I'm also very excited to see if it could be used effectively in applications outside gaming.

Meghana Seshadri - 4/21/2014 17:26:47

Both articles elaborate upon a variety of technologies that have been built on the basis of advancing ways in which people interact with computing. From interactive tabletops to interfaces built right on our own very skin, technology is becoming more creative and handling more possible types of interaction that stretch much farther than WIMP and GUI based interfaces, which are built more for single-user interactions. There are a variety of advantages that come packaged in alongside these new technologies. Using the human body as an interactive platform can have a variety of advantages, such as the body is always consistent, reliable and available as an interface. Users will always be familiar with it, and can quickly interact with it, such as when making finger gestures or tapping our arms. A variety of technology that focuses on multi-user collaboration enhance social interaction and collaboration. The technology that provides group members with equal access and direct interaction with digital information displayed on an interactive table surface allows for a more collaborative decision-making process and equal participation amongst all members. However, while these technologies bring about a variety of new concepts and ways to achieve interaction, they also are just not yet suitable for some types of interactions. For example, plenty of researchers are coming up with technologies that use the capabilities of the environment around the user, hence making it a more creative use of the surfaces around them. The only problem with this, is that this kind of interface, it would be impractical for when the user is on the go. This can be further projected towards mobile applications in general, because of the physical and cognitive constraints of interacting on-the-go. A disadvantage that comes from using the body as a interactive platform is that processing external noises will interfere with the technology and thus affect the results.

Touché, a tool being researched by Disney Research in Pittsburgh, is a new sort of sensing technology that can detect not only touch events, but also more complex hand and body based configurations during touch interaction. One of the key examples of this that they research is the different ways a hand interacts with a doorknob. From one finger touches to a full grasp, the Touché technology can detect each of these differences through a range of frequencies. Some real world applications of this technology include hand motion/gesture based commands to control music playing on a mobile device. One doesn’t have to waste time by physically taking out their phone from their pocket, which takes about 4 seconds according to the research of Daniel Ashbrook of Georgia Institute of Technology (Crossroads – The Future of Interaction, page 30). Instead, all they would have to do is make certain gestures with their hands, such as touching two fingers on the palm of the opposite hand in order to indicate a volume increase. This technology seems very interesting in its ways of exploring a new range of possible interactions a user can have, and has the potential to minimize and enhance touch interactions such as with touchscreens as well as with the human body, which according to page 33 in the Interfaces article in the ACM magazine, is always a “consistent, reliable and always-available surface.”


Stephanie Ku - 4/21/2014 17:28:18

The main advantages of new technologies is the "hands on feel" that they have, which means that people will be more comfortable using them. We do not have to deal with any intermediary devices -- using our hands gives the best feel and hence best "connection" to the product. For example, it is probably the most intuitive to increase the volume of a song by a "knob-turning" motion or by using your finger on a slider a la the iPhone. This interface technique gives you the most precision, something that will come in handy if you don't want your eardrums to be blasted. Touch interface is not suitable for tedious tasks. My fingers are good for playing Flappy Bird, essay-writing not so much.


Siri is an example of a voice interface. I like it a lot because I can use it while driving to find directions or read my messages to me. It makes driving a lot safer because my eyes can be focused on the road. In general, it cuts out a lot of work I have to do -- I tell Siri to do it and she does it for me.

Christopher Echanique - 4/21/2014 17:28:30

The use of tangible user interfaces presents an advantage for users in that it allows for direct manipulation of the user interface. This helps to reduce the semantic and articulatory distances of the gulf of execution since the user will not have to rely as much on the mapping of interactive tools (such as a mouse) to the interface. Tangible interfaces have limitations in that they remove the computer from the interactive input tool (such as a tabletop). In this case it would not be suitable to use these types of interfaces when access to these input tools are not available.

The article from the link below introduces a new type of touch based interface that provides texted when the user swipes. I found this interesting because it provides additional feedback for the user as screens become sticky or slippery according to where the user is dragging their finger. These textures can be used as signifiers where the user can afford to swipe on the interface.


Patrick Lin - 4/21/2014 17:28:36

Interactive tabletops are good for displaying large amounts of information, especially naturally flat layouts like maps or graphs. They offer direct multitouch support or even use of objects to interact with data, and are similar to existing consumer level tablets, only larger, allowing for easier in-person collaboration. There could be potential ergonomic issues depending on how many people the surface is designed to support, and using it as a normal desk could obscure parts of the screen. The device is obviously not suited for portable work either, so people should weigh whether investing in the new technology is worth the increased interaction versus a traditional display screen or projector.

Micro interactions described in the second article are an interesting alternative to direct touch and speech while still reducing time to use and providing new features for mobile devices. The ability to use the human body as a platform benefits users who are frequently on the go by providing additional interactive surfaces without carrying extra equipment like laptops or monitors, but may not accommodate certain tasks like reading books or playing games requiring complex controls, which would be difficult on the skin. Carrying a mini projector around could also be troublesome, as would any false positives generated from doing certain gestures.

http://computer.howstuffworks.com/humans-interface-with-computers2.htm Computer interaction requiring no physical expression or movement is possibly the most advanced form in the field. Having the ability to type words, play games, or navigate the internet by merely thinking is something I’d expect psychics in science fiction to do, but could be a possibility at some point in the future. Current studies limit users to an EEG tethered to a computer, but the discussion of implanting electrodes directly into a person’s brain makes it seem like we all could be cyborgs someday, capable of calculating anything by merely thinking it with the assistance of a mentally linked computer.

Andrew Lee - 4/21/2014 17:29:30

I think the main advantages of these new technologies will be for collaboration and a faster/more intuitive way to get things done.

Hao-Wei Lin - 4/21/2014 17:31:48

One main advantage of TI is that it allows more bodily movement. It is quite important for ergonomics in a society where most people now sit in front of a computer only moving their fingers for a long duration of time. A advantage for Interfaces on-the-go is that we can have more mobility with daily electronic devices, which can add to conveniency. However, they might not be suitable for more privacy-related activity such as checking emails, and reading confidential documents since, in general, the contexts where these technologies are used are public.

An interesting interface I found is the Cleveland Museum of Art: http://www.fastcodesign.com/1671845/5-lessons-in-ui-design-from-a-breakthrough-museum. The use of TI here is excellent because it is meant to be used in the public, and it makes the process of browsing a gallery a lot more engaging and worth while for both the young and the adults.

Zack Mayeda - 4/21/2014 17:34:21

I think a downside of gesture and voice interaction is that the user must learn or be introduced to commands as opposed to seeing the available commands on the screen for touch or mouse input. A positive of these interactions is that they may allow faster input than by other interactions. They may also allow multiple input sources, by voice and touch for example.

I think the following article about eye movement interaction is interesting. I think eye tracking presents many helpful possibilities related to bringing pertinent information into view. For example, highlighting the screen that the user is looking at in a multi display setup is a useful application of this interaction.


Aman Sufi - 4/21/2014 17:39:41

These new technologies explore novel ways of interaction which have either been outside of the realm of imagination or too difficult to implement in the past. They allow new ways of more accessible, intuitive, and direct manipulation tailored to certain tasks, for example the use of a Surface table to plan a city which is accessible even to amateurs by moving around buildings on the surface of the table and seeing the resulting real-time analysis such as shadows generated throughout the day to aid in the design process. Such tasks used to only be accomplishable through complicated windowed software using several context menus, tools, and functions and hours of effort to achieve the same thing. In essence, the table surface works best as a collaborative tool and design medium where direct manipulation of physical objects can easily be represented, but it probably would not be any more helpful in abstract tasks such as browsing the web as it is today as compared to the experience on regular smartphones or tablets. In fact, the large display would probably be cumbersome.

The context-aware nature of technologies such as muscle sensing are useful in performing natural and spontaneous commands which would have required taking out your phone in the past, such as answering a phone call, or gesturing to take a pic at the perfect moment. However, it has limited potential in displaying complex feedback to the user and would be cumbersome in multi-layered tasks on its own.

A technology that interested me was the Google Talking Shoe. (http://www.theguardian.com/technology/shortcuts/2013/mar/13/google-talking-shoe) It’s a good example of when technology gets carried away and is being used in an effort to create added value using existing technologies rather than innovating on an entirely novel concept. The user is given motivation, encouragement, and reminders by the shoe through context-aware audible commentary on the user’s actions, such as saying ‘Are you a statue? Let’s do this already!’ if the user remains sitting for too long. This device is interesting, and it is rare to see a talking shoe in today’s world, but I could see it achieving even more potential if footwork based gestures could be incorporated to extend its functionality beyond just an overpriced fitness coach with a gentleman’s accent.

lisa li - 4/21/2014 17:41:56

The main advantages of these new technologies are providing users with new ways to interact with their devices. Personally I think these innovations can greatly increase user engagement and improve user experience. Also, just like the example about urban planning describe in the article, I can easily think of other ways of application in fields such as education, construction, design, where a lot of visualization and demonstration is required.

They are not suitable for tasks that does not require lot of visualization, demonstration and collaboration, such as mass computing. They are also not suitable for tasks that cannot be replaced by physical experiments, such as chemistry experiments (you will need to actually perform the chemical reactions to see the results).

An interesting interface technology: Brain Computer Interface (BCI) http://computer.howstuffworks.com/brain-computer-interface.htm BCI means directly controlling the device with nothing but thoughts. I think this technology can be widely used in industries such as gaming and entertainment (like virtual reality headsets).

Namkyu Chang - 4/21/2014 17:44:56

The main advantages of these new technologies is that it provides a new, more comprehensive and less-limited ways for users to communicate to the machine compared to the traditional keyboard and mouse. That being said, there are some tasks that they are not suitable for. To be more precise, take into account something that traditional UI/computing interaction does "well." Currently, I have no problem typing a response to the readings. Some alternatives, such as the multi-touch pen, or even the muscle-computer interactions, may be better for other tasks (such as computing on-the-go as suggested by the article's title), but sitting down to work is better done on traditional set-ups.

An interesting interface technology I found was SixthSense (http://www.pranavmistry.com/projects/sixthsense/) being developed at MIT. It's a wearable gestural interface that doesn't require additional hardware (such as keyboards) to communicate with the computer. Instead, it uses a camera and a projector to turn any surface (literally any surface) into a multitouch interface. Finger gestures are translated by the camera into actual commands that the machine could do (e.g. pretend you're taking a picture with a camera with your hands, and the machine takes a picture). If possible, it will get rid of all the clutter that traditional interfaces bring and provide a new way to interact with computers.

Romi Phadte - 4/21/2014 18:54:43

The main advantages of these new technologies are the ability to deal with information that is intuitively tangible. This is seen in iPhone games when manipulating small objects or blocks. The advantage of these interactions are also evident in data that that can be made intuitive with touch such as music in the reactable.

The tasks that these are unsuitable for are for long typing pieces. They also may not be useful for input and manipulation of conventional spreadsheets.Whenever you are dealing with large amounts of pure mathematical or text data, physical interactions are usually not appropriate.

Two technologies that I find interesting are leap motion (https://www.leapmotion.com/), and Myo (https://www.thalmic.com/en/myo/). Leapmotion is primarily promising since it is the first time a commercial company is able to create a sensor to detect finger locations at such a high precision. Sensors in the past have been fairly low quality or have got precision to the centimeter. Leap is submillimeter. It would be cool to see how the technology progresses. Myo uses electrical pulses from your muscles to determine the users gestures. This is not intrusive for the user since its on the upper forearm. This has a possibility to fundamentally change how we interact with our everyday devices.

Sol Park - 4/21/2014 20:09:37

The main advantage of these new technologies is that they do not include “training.” The new technologies support natural interactions (the user is already familiar with them in practice). If the user knows how to use a pen, he can quickly figure out how to use a pen input device without training. Similarly, if the user is familiar with working on a surface like a desk, he can quickly figure out how to use a “smart” surface without training. This is different from keyboard and mouse input, which require training. Another advantage of these new technologies is that they facilitate collaboration. It’s very easy for group of people to follow along what’s happening on a surface, since you can view it from all different sides. In addition, it is easy for multiple people to interact with a surface, since you can have multiple simultaneous inputs without conflicts. A keyboard and mouse input, or even a tablet device are designed to handle only one input at a time. These technologies are suitable for drawing, which is almost exclusively direct manipulation.

http://blog.laptopmag.com/use-smart-scroll-pause-galaxy-s4 The Eye Scroll technology let you scroll down the article you are reading on your Smartphone, with only your eyes. I found it interesting and promising since this can be useful such as reading an article on the bus. In fact, it could be useful whenever the user is in mobile since user can just manipulate the screen with your eyes.

Steven Wu - 4/23/2014 9:18:14

There is a strong push towards multi-user and multi-touch table surfaces as the future of interaction from the first portion of the article. Tabletop technologies have gained traction to welcome collaborative work. With tabletop technologies, it is required to rely on direct manipulation, which directly comes from the users' physical activity on the surface but also with the physical interaction with the collaborators. As for the bio-acoustic technology, the main advantage is the portability and working on many surfaces. With the projection of the interface located directly on the palm of your hand, there's less of that sentimental value you have damaged your smartphone since this is something that is not tangible but merely a virtual experience.

However the tabletop experience does not seem suitable when providing confidential information. Say your inputting your login credentials on the table and every one of the collaborators can view your information after logging in since it is in plain and visible sight. It can also be mentioned that these tables are not portable. Like multi-player video games that are more enjoyable experiences playing with your friends in the same room, the tabletop collaborative experience falls short when there is only one user. It may even become a chore navigation across a large surface, running from one side of the table to another. As for the bio-acoustic technology, one issue I immediately considered was how the projection technology works on people of different skin tones. I remember reading previously that the Microsoft Kinect had a difficult time registering players with darker skin tones.

The media player is loading...

The link I have included is the Microsoft Illumiroom, where the gaming experience is far more immersive than the constraints of the television monitor. The Illumiroom takes advantage of every shelf and corners of the wall behind the television set to project more of the landscape of the game a user is playing.

What I find promising about this is that is similar to the bio-acoustic technology, the projection projects to the entire wall and simulates the experience that the player is actually seeing everything that can be viewed in the game. Driving a race kart on a snow level? Your walls start drizzling down snowflakes. The idea of directly mapping out the room regardless of the lighting and the different dimensions you may have make this interaction promising.