- 1 Readings
- 2 Reading Responses
- 2.1 Lauren Fratamico - 4/14/2013 15:03:21
- 2.2 Colin Chang - 4/14/2013 18:15:54
- 2.3 Jeffery Butler - 4/20/2013 13:59:53
- 2.4 Jeffery Butler - 4/20/2013 14:05:45
- 2.5 Joyce Liu - 4/20/2013 22:02:28
- 2.6 Shujing Zhang - 4/21/2013 15:52:11
- 2.7 Soo Hyoung (Eric) Cheong - 4/21/2013 17:57:44
- 2.8 Cory Chen - 4/21/2013 19:51:25
- 2.9 Ryan Rho - 4/21/2013 21:52:59
- 2.10 Alice Huynh - 4/21/2013 21:53:25
- 2.11 Soyeon Kim (Summer) - 4/21/2013 21:57:43
- 2.12 Annie (Eun Sun) Shin - 4/21/2013 22:48:08
- 2.13 Elizabeth Hartoog - 4/21/2013 22:51:47
- 2.14 Yuliang Guan - 4/22/2013 0:19:00
- 2.15 Brent Batas - 4/22/2013 0:52:44
- 2.16 Elise McCallum - 4/22/2013 1:27:04
- 2.17 Eric Wishart - 4/22/2013 1:58:41
- 2.18 Brian Chang - 4/22/2013 2:01:16
- 2.19 Cong Chen - 4/22/2013 2:35:19
- 2.20 Mukul Murthy - 4/22/2013 3:04:22
- 2.21 Lishan Zhang - 4/22/2013 3:22:56
- 2.22 Zhaochen "JJ" Liu - 4/22/2013 3:23:05
- 2.23 Scott Stewart - 4/22/2013 6:52:08
- 2.24 Winston Hsu - 4/22/2013 7:17:00
- 2.25 Bryan Pine - 4/22/2013 9:24:23
- 2.26 Tiffany Jianto - 4/22/2013 10:31:24
- 2.27 Tenzin Nyima - 4/22/2013 10:59:19
- 2.28 Arvind Ramesh - 4/22/2013 11:21:58
- 2.29 Moshe Leon - 4/22/2013 11:40:22
- 2.30 Thomas Yun - 4/22/2013 12:04:34
- 2.31 Timothy Ko - 4/22/2013 12:09:23
- 2.32 Kate Gorman - 4/22/2013 12:24:51
- 2.33 Samir Makhani - 4/22/2013 12:28:30
- 2.34 Erika Delk - 4/22/2013 12:37:34
- 2.35 Timothy Wu - 4/22/2013 12:40:37
- 2.36 Sumer Joshi - 4/22/2013 12:46:42
- 2.37 Tiffany Lee - 4/22/2013 12:47:06
- 2.38 Claire Tuna - 4/22/2013 12:48:14
- 2.39 Raymond Lin - 4/22/2013 12:49:48
- 2.40 Monica To - 4/22/2013 12:51:46
- 2.41 Jin Ryu - 4/22/2013 12:52:44
- 2.42 Glenn Sugden - 4/22/2013 13:07:04
- 2.43 Andrew Gealy - 4/22/2013 13:07:23
- 2.44 Alexander Javad - 4/22/2013 13:14:21
- 2.45 Kayvan Najafzadeh - 4/22/2013 13:20:10
- 2.46 yunrui zhang - 4/22/2013 13:22:21
- 2.47 Matthew Chang - 4/22/2013 13:27:49
- 2.48 Weishu Xu - 4/22/2013 13:33:33
- 2.49 Alvin Yuan - 4/22/2013 13:33:51
- 2.50 Derek Lau - 4/22/2013 13:45:13
- 2.51 Oulun Zhao - 4/22/2013 13:46:57
- 2.52 Ben Goldberg - 4/22/2013 13:47:07
- 2.53 Ben Dong - 4/22/2013 13:51:59
- 2.54 Avneesh Kohli - 4/22/2013 13:54:05
- 2.55 Dennis Li - 4/22/2013 13:57:22
- 2.56 Alysha Jivani - 4/22/2013 14:02:16
- 2.57 Aarthi Ravi - 4/22/2013 14:02:58
- 2.58 Edward Shi - 4/22/2013 14:03:18
- 2.59 Sihyun Park - 4/22/2013 14:06:08
- 2.60 John Sloan - 4/22/2013 14:09:57
- 2.61 Linda Cai - 4/22/2013 14:12:06
- 2.62 Brett Johnson - 4/22/2013 14:14:27
- 2.63 Lemuel Daniel Wu - 4/22/2013 14:15:04
- 2.64 Minhaj khan - 4/22/2013 14:16:36
- 2.65 Juntao Mao - 4/22/2013 14:18:39
- 2.66 David Seeto - 4/22/2013 14:19:39
- 2.67 Haotian Wang - 4/22/2013 14:22:17
- 2.68 Sangyoon Park - 4/22/2013 14:25:00
- 2.69 Christine Loh - 4/22/2013 14:27:18
- 2.70 Nadine Salter - 4/22/2013 14:32:56
- 2.71 Tananun Songdechakraiwut - 4/22/2013 14:38:22
- 2.72 Brian Wong - 4/22/2013 14:39:08
- 2.73 Eric Xiao - 4/22/2013 14:44:47
- 2.74 Achal Dave - 4/22/2013 14:47:18
- 2.75 Zeeshan Javed - 4/22/2013 15:44:24
- 2.76 Ben Dong - 4/22/2013 21:09:48
- 2.77 Kevin Liang - 4/22/2013 23:21:48
ACM XRDS: Crossroads - The Future of Interaction, Volume 16, Issue 4, Summer 2010, pages 21-34:
- Interactive Surfaces and Tangibles
- Interfaces on the Go
Lauren Fratamico - 4/14/2013 15:03:21
I think these new technologies will prove to be very promising in the future. The main advantage is that we will be able to control more things than we can currently imagine, easily. With the former article, "Interactive Surfaces and Tangibles", there seems to be quite a bit of promise in the ability to hover over surfaces and have that motion detected (with shadow or otherwise) to enable a mechanism other than direct touch. Soon we will be able to do so much more with multi-touch than we do now (mostly two-finger zoom stuff). The addition of digital information to everyday objects will also allow us to interact in ways we do not now. These tasks are good for collaboration and augmented reality. We will be able to create realistic environments (maybe for military training or education) in labs that will allow people to communicate and act as if they (in the example of military training) were out in the field. This would result in better trained soldiers while risking fewer lives.
In the latter article, "Interfaces on the Go", we read about the use of the body as another medium for interaction. Our body is something we have with us everywhere and could be used as a surface on which to perform tasks. There are many drawbacks to this technology though, particularly the "Midas touch" - other things in the world interfering with our touching, e.g. touching a doorknob could inadvertently trigger our body touch screen. This sort of technology could be useful for communicating in ways we do not now. For example, intentional body twitches could be used to type something. Maybe it could be combined with Google Glass to search for things in an unspoken way. The body is great for "on the go" uses since it is something we always carry with us (though cell phones are also something we carry with us).
The article I found discussed a device that would allow you to place your mobile device on top of, or in front of, anything and then interact with the thing you placed it over (http://petitinvention.wordpress.com/2008/02/10/future-of-internet-search-mobile-version/). For example, you could hold it up to buildings and identify them, or up to an apple and have it look up nutritional information, or up to some text, highlight the words you want to learn more about, and have it define them for you. I find this interesting because there are often things I want to learn more about in the world. Technology like this would make discovering more about our world more seamless. It would be great as an education tool both for kids and adults.
Colin Chang - 4/14/2013 18:15:54
You've read two articles on possible future ways we will interact with computing. What do you think are the main advantages of these new technologies? And what tasks are they not suitable for? Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it.
The first article spoke about the possibilities and limitations of direct manipulation, with special respect to its growing dominance in mobile computing. The second article also wrote about mobile computing (in a sense), but instead focused on computing without a GUI device independent of your body. Both articles address the growth of personal computing, and the main advantages of their suggestions are that they accommodate society's expectations of personal computing while also addressing the current limitations of input on a mobile device. The first article, for instance, discusses tabletop (as in, touch on tables) interactions to afford and allow for 'full hand, bimanual and multi-user interactions' (28). The second article addresses a mobile device's physical limitations by extending one's own body as a screen ready to receive input. These technologies are not suited for non-direct manipulation (unless there's some creative application I'm not thinking of).
I find BCIs pretty intriguing (Wikipedia: http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface) (an article: http://www.extremetech.com/extreme/149879-brown-university-creates-first-wireless-implanted-brain-computer-interface). One problem I have with many interfaces is translating my intentions into readable actions (navigating the gulf of execution, if you will). Unfortunately, such a problem exists for any interface besides extremely direct manipulation. I'd be pretty excited about being able to type out my thoughts without using my body.
Jeffery Butler - 4/20/2013 13:59:53
The main advantage of these new technologies is that, in contrast to the old direct-manipulation technology of pointers and windows, the new interfaces are tangible. A tangible interface has the benefit of combining control and representation in a single physical device. For example, users of some advanced interfaces now have the ability to grab devices with their hands, increasing the tangibility of a particular set of data. In addition, new interfaces can support collaborative interaction: more than one user at a time can work with an interface, which can in turn generate a more collaborative effect toward the users' common goals. Users can also better exercise creativity and social action with interactive tools.
Tasks that these new technologies are not suited for are those in which a user doesn't need to collaborate with another user, or in which large gesticulations aren't the most effective means of communicating a message. For example, I am fairly confident that the most efficient way to write a Word document will always be with a keyboard. WPM on a keyboard can be up to five times as fast as a person writing with a stylus. The only medium of communication that's faster than a keyboard is speech, but that wouldn't be considered writing.
Jeffery Butler - 4/20/2013 14:05:45
I didn't finish the response; here is the last bit:
I find the phone projector interface promising. With this new interface, users can interact with a device without having to actually touch it, so you can touch any particular plane and communicate with the device. When this technology becomes more sophisticated, it could be faster than pulling out your phone and unlocking it to perform simple tasks such as texting or emailing. Also, I think this new technology really makes the data on the phone more tangible to the user, because instead of touching the phone's surface to communicate with it, you can touch any surface that the phone is projecting onto.
Joyce Liu - 4/20/2013 22:02:28
The main advantages of these technologies are that they allow more interactive collaboration for co-located teams and more shared controls. Additionally, new technologies allow for richer interactions, so the interactions we have with our devices could potentially be more like how we interact with items in real life. If a task requires more privacy, or calls for one person to maintain control, this type of new technology would not be suitable. The new technology "Skinput" allows the skin to be used as a finger input surface. This raises the question of what else we can begin to use as input surfaces. What's cool about Skinput, and its main advantage, is that you don't need to pull any gadget out of your pocket—your body is the interface, which ends up saving much time. Skinput would not be suitable for tasks that require a lot of typing, because it doesn't make sense to type out long messages on your skin. Skinput could be great for tasks in which the user simply needs to select or push a "button," but if a task requires the user to input a lot of data, it would not be ideal.
MaKey Makey (http://www.youtube.com/watch?v=rfQqh7iCcOU) came out last year, and it's a pretty interesting interface technology because you can essentially make anything into an input key. You simply connect alligator clips to your desired input item, and the item becomes like a button, which I find pretty interesting. What I find promising is that it seems pretty easy to use, and as the video states, it does not require programming, so it allows much more accessibility for the general public. The fact that this technology is easy for the general public to use is pretty promising and interesting.
Shujing Zhang - 4/21/2013 15:52:11
The main advantages of these interactive interface technologies are that people can directly manipulate the objects presented to them, using actions that loosely correspond to the physical world. In this way, it is easier for people to learn and use the interface. Another advantage is that the tangible interface combines control and representation in a single physical device, which is very useful in multi-user collaboration.
Although interactive interfaces are intuitive for processing small amounts of data, they are not suitable for loading large documents. Timeout limits are quickly reached, and the command line is often more efficient in these cases. Also, they are not good at heavy data processing such as audio and image processing or sophisticated machine learning algorithms. Efficiency for those tasks drops significantly because they do not require many interactive actions.
One of the most interesting technologies I found came up during my interview with Oracle. They are developing mobile applications that support and integrate with back-end Oracle applications. These mobile applications will contain the latest technologies to provide the highest standard of UI design, graphical innovation and a solid, robust user experience. People will access huge databases just through their tablets or smartphones. The link is as follows:
I find this promising because people rely on data all the time, and in this information age it is important to be able to access huge databases even when you don't have a computer with you. That way you still have the resources to do the work in an emergency.
Soo Hyoung (Eric) Cheong - 4/21/2013 17:57:44
- 1 For "Interactive Surfaces and Tangibles," the inputs by the user are more physical, in that the user can directly manipulate the input. For example, with "Active Desk" you write directly on the surface to draw on a software palette, and with "The Reactable," a change in the location of the blocks causes the software to respond immediately. Therefore, the interaction between the user and the software seems more realistic, and more intuitive in terms of usage. These favor multi-dimensional interaction between the user and the software, so technologies for simple interactions may not find such an interface suitable.
- 2 For "Interfaces on the Go," the main advantage of these types of interfaces is that they promote, as the article would put it, micro-interactions. The interaction reduces unnecessarily spent time and very conveniently gives the desired outputs. These, however, may not be suitable when the recommended surface of interaction is not readily available or not consistent: an interface is not useful if it functions properly only under certain conditions.
http://www.technologyreview.com/view/512086/samsungs-eye-scroll-hints-at-post-interactive-interfaces/ This eye-scroll feature would provide even more instant interaction between the user and the software. Since eye movements are fairly instinctive, interaction with the interface will be just as quick and instant. I think it is interesting that technology is getting more and more integrated into our lives. It made me ponder what the upper limit might be on the interactions a user can have with a user interface. This article shows promise in that we are moving toward "interaction-less" interfaces that take current user interface interactions to the next level.
Cory Chen - 4/21/2013 19:51:25
The new interfaces have several advantages. The main advantages of the tabletop/surface interface are the ability to interact with the interface alongside other users and the ability to use more fingers and both hands (which opens up more possibilities for gestures). The tabletop interface can also potentially sense what items are on it based on their physical qualities. The main advantages of the muscle-sensing interface are being able to instantly access the interface, always having the interface within reach, and being able to use the interface without looking.
The tabletop interface does not seem to be very suitable for single-user work due to its form factor, and it is definitely not suitable for tasks that need to be done while out and about (checking maps on the way to a destination, for instance). I can see most tasks being possible on a tabletop interface, but it wouldn't be the most ideal form factor in many cases (ex: typing an essay). The muscle-sensing interface doesn't seem to be very suitable for tasks where you need to enter a lot of information, such as typing out a message. The user would only be able to interact with one hand, since the other arm needs to be held in place for the interface to work. The muscle interface would also not be suitable for multi-user collaboration, since having other people touch you is not desirable.
A new interface technology that I found online is this "looking glass" idea: http://petitinvention.wordpress.com/2008/02/10/future-of-internet-search-mobile-version/ . Basically, it's a device that has a camera and displays what you put it over. It can recognize what's in its view and then give you information about it, such as car model, insect name, building address, font name, word definitions, etc. This device is really interesting, and the idea can probably be transferred to other mobile devices such as our phones and tablets that have cameras. The promising part is that it would give us a level of interaction and information about the world around us that isn't even imaginable at this point in time. By giving people easy access to information about their immediate surroundings, they're able to get a greater understanding of the things they interact with. I can imagine pointing the device towards an apple and finding its nutrition information, or pointing it at a painting and seeing who its artist is.
Ryan Rho - 4/21/2013 21:52:59
What do you think are the main advantages of these new technologies? And what tasks are they not suitable for?
One of the main advantages of a multi-touch screen is that the learning curve is low. Since users can interact with their devices directly, applications can easily map everyday objects into their user interfaces, which effectively maximizes affordances. However, it is not suitable for tasks requiring accurate movements.
In order to resolve this problem, some devices also support interaction with a stylus pen. However, this brings another disadvantage: an additional input device is needed, which makes the interaction less direct than fingers. In addition, the pen gives users the impression that most interaction is about writing rather than touching, which limits some affordances.
Since it's a direct input, it is intuitive, so even toddlers can easily learn how to use the device. Toddlers can easily manipulate iPads, which also shows that the learning curve is low. However, since most touch-based devices tend to remove physical keyboards, it is hard for users to type. So any writing-heavy activities, such as word processing and programming, are unsuitable.
Other new technologies like the Reactable are great in that they use real objects to interact with devices. Rather than pointing and clicking an object on a screen with a mouse, a user can use an object as a controller and put it into a place on the screen. However, its learning curve is relatively high compared to a multi-touch device. In addition, it requires more space, since control objects must be put on top of it. So everyday tasks such as texting and social media may not be suitable for this technology.
Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it.
Although a similar technology is introduced in the article, I would like to introduce a technology that has recently gone viral. It is called MYO, an armband device that tracks your muscles in order to trigger commands. In addition, it has an accelerometer inside, so the user can move his or her arm to indicate direction.
Although there have been interactive devices like Kinect, one of their disadvantages is that they are slow and costly because they have to process computer vision to recognize commands. Tracking muscle movements through an armband is relatively cheaper and faster at triggering a command, so it's good enough to play a video game.
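The muscle-sensing idea above can be sketched in code. To be clear, MYO's actual SDK and signal processing are not described in the article, so every name, number, and template below is hypothetical: the sketch just illustrates the general approach of windowing an EMG stream, extracting a simple amplitude feature per electrode, and matching against calibrated gesture templates.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG channel window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def features(channels):
    """One RMS feature per electrode channel."""
    return [rms(ch) for ch in channels]

def classify(channels, templates):
    """Nearest-centroid match of the feature vector against
    per-gesture templates recorded during calibration."""
    f = features(channels)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(f, templates[name]))
    return min(templates, key=dist)

# Hypothetical calibration data: mean RMS per channel for two gestures.
templates = {"fist": [0.8, 0.7], "spread": [0.2, 0.3]}
# Two electrode channels, three raw samples each (made-up values).
sample = [[0.75, -0.8, 0.82], [0.7, -0.65, 0.7]]
print(classify(sample, templates))  # → fist
```

A real system would use many more channels, overlapping windows, and a trained classifier rather than nearest-centroid matching, but the pipeline shape is the same.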
Alice Huynh - 4/21/2013 21:53:25
Interactive surfaces have the main advantage of "direct manipulation". This is really good for the gulf of execution because interactive surfaces mirror the real-world actions of moving things onto a digital surface. The learning curve for interacting with an interactive surface may be very gentle if it is designed to act just like real-world actions. One task that interactive surfaces are not suitable for is portability. So far, all the interactive surfaces that people have built are very large and made for interaction between multiple people. If someone wanted to use an interactive surface as a personal device to take with them anywhere and everywhere, these interactive surfaces would not do a very good job.
Another disadvantage of interactive surfaces is if the surface is very large it might be hard to do normal tasks of moving an object from one side of the surface to the other. Computer desktop screens or track pads are small and thus a user can easily move from one side of the surface to the other side without too much effort. With a larger surface, for instance a surface the size of a large table, a user may have to get up and walk to the other side of the surface to do such a simple task.
As shown in Figure 2 with the Active Desk, artists are used to interacting on the actual surface of paper or an easel. With this, artists benefit from not having to learn anything new, while the digital screen saves their work in one place so they don't have to keep carrying more and more pieces of old work.
The idea of a muscle-computer interface is very interesting, but it can produce a lot of false input. It takes just a flex of the muscle to trigger any kind of interaction with the interface, which could very easily be unintentional as well as intentional.
http://research.microsoft.com/en-us/um/redmond/groups/cue/MuCI/ Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces
The idea of using a human's forearm as a computer has been shown in many futuristic movies, and it's another way of bringing a handheld computer with you. I imagine this would be just as useful as "Google Goggles," but the practicality of this new technology is unknown. Humans would have to wear a hefty bracelet, or even have a chip embedded in their arm.
Soyeon Kim (Summer) - 4/21/2013 21:57:43
You've read two articles on possible future ways we will interact with computing. What do you think are the main advantages of these new technologies? - I think the main advantage of these new technologies is that they unleash a set of new possibilities for what we can do with them. In particular, multi-touch techniques and gestures allow us to combine inputs for collective purposes (i.e. playing interactive games), and they can also improve efficiency while using laptops (multi-touch on a Mac trackpad for zoom-in and zoom-out).
And what tasks are they not suitable for? - In some situations, like driving a car or operating a device inside a pocket, a multi-touch system is not suitable, since it requires extra attention and extra motion inputs.
Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it. - http://www.bbc.co.uk/news/technology-20970928 The world's first dynamic tactile touchscreen technology allows a flat screen to turn itself lumpy! This is very interesting because the biggest issue with touch-screen interfaces is the lack of tactile feedback, which is where the inaccuracy and inefficiency of the interface come from. This new technology will be very beneficial to everyone who uses a touch screen for input. It will be especially appreciated by visually impaired people.
Annie (Eun Sun) Shin - 4/21/2013 22:48:08
Multi-touch tabletops allow users to interact with computing in futuristic ways that appear in movies. I believe such new technologies will become more common in years to come, and the transition will be smooth since consumers are already familiar with multi-touch devices such as tablets and iPads. These tabletops will help users multi-task, carry out various tasks more efficiently, and get additional information quickly--especially because the table can add "digital information to everyday physical objects." However, these tabletops are not suitable for person-to-person interactions. Although users will be able to communicate with others through the tabletop's multi-user functionality, as users become more dependent on such technologies, they will interact less with others in person. Dinner table conversations may die as people focus more on what they are doing on their tabletops. The new technology that uses our arm or skin as a surface freaks me out (for religious reasons), and I believe it will have negative health consequences (on the skin). The technology would, however, reduce hardware significantly. Google Glass (http://www.google.com/glass/start/) is an interesting interface technology that has been garnering attention. It allows users to interact with the world in unique ways that do not require our hands to be busy (an advantage Google Glass has over multi-touch devices). Natural voice commands to "glass" create the illusion of a personal assistant that does whatever users command it to do (such as "record" or "take a picture"). The product seems promising because users will be able to send messages more quickly (without typing), video chat with people, and so on. I am personally concerned about the safety of users. If a user's mind is divided as the user focuses on what appears on the glasses, then the user will be unaware of his/her surroundings (such as cars).
Elizabeth Hartoog - 4/21/2013 22:51:47
Many of the new technologies embrace new forms of input that usually involve touch screens in some way, such as touch screens with physical affordances that allow us to view data and images in a new way, or the new muscle sensing that will allow us to use our bodies as an interface.
These kinds of techniques are wonderful ways to rethink our everyday activities, such as organizing calendar events or making a phone call to someone. The multi-touch surface with physical affordances even seems highly reasonable as a new way for engineers to produce designs.
However, what struck me as most important was that none of these physical interface research designs seem to have a dramatic effect on the need for a keyboard for large amounts of writing/data input. While a multi-touch screen is a great way to do visual thinking, it is not a great mode for writing or massive data input. The only real development is speech recognition, which would allow a user to speak instead of using a keyboard, but the fact of the matter is that I can type significantly faster than I can speak, and people around me don't get to hear what I'm typing (so no intrusion on privacy).
So the technology that I picked out was made by two MIT students. The user wears brightly colored gloves and a webcam, and the software can track the user's hand movements in 3D. I found this super cool because it can allow a user to manipulate objects in 3D without the requirement of a 3D interface. Users just use their hands as they normally would, which means that, as far as hardware goes, this technology is not restricted. The gloves themselves would be cheap and easy to produce, and the software could be distributed to any currently operating computer. I think this kind of technology is promising because it integrates human movement with 3D computer interactions without requiring the user to spend extra money on specialized hardware, something I think many users would be unwilling to do on untested/not-yet-popular technology.
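The colored-glove tracker described above works, roughly, by finding the glove's distinctive colors in each webcam frame and matching the color pattern against a database of hand poses. As a much simpler illustration of just the color-segmentation step, here is a toy sketch in pure Python; the frame, the glove color, and the tolerance are all made up, and a real system would operate on camera images via a vision library.

```python
def glove_centroid(frame, target, tol=30):
    """Find the centroid of pixels whose RGB color is within `tol`
    of the glove's target color; a stand-in for real segmentation."""
    tr, tg, tb = target
    hits = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                hits.append((x, y))
    if not hits:
        return None  # glove not visible in this frame
    n = len(hits)
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)

# Toy 3x3 frame: a bright-green "glove" patch in the lower right.
G = (0, 255, 0)   # glove color
K = (10, 10, 10)  # background
frame = [[K, K, K],
         [K, G, G],
         [K, G, G]]
print(glove_centroid(frame, G))  # → (1.5, 1.5)
```

Tracking this centroid frame-to-frame gives a 2D hand position; the MIT system goes much further, recovering full 3D hand pose from the arrangement of glove colors, but the segmentation idea is the same.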
Yuliang Guan - 4/22/2013 0:19:00
(1) The two articles talked about tangible user interfaces and future user interfaces. Tangible user interfaces combine control and representation in a single physical device operated by tapping, sliding, swiping, or shaking. The authors mentioned several new technologies, such as the active desk, muscle-based computing, Siftables, the Reactable, and TurTan. These new technologies are great innovations. They combine hand and eye well, so users can feel and control the interfaces more directly. Using the human body as the interaction platform is great because we can assume a consistent, reliable, and always-available surface. Furthermore, compared to our current technologies, these are more interesting and simpler for users to use (though the system must provide enough affordances that the user can learn the new system). Users will feel they are really "communicating with" the interfaces.
(2) However, tangible user interfaces also have their disadvantages. First of all, they are not suitable for very complex tasks (using tangible user interfaces for such tasks would require lots of gadgets). Meanwhile, new technologies may cost more to develop and produce; therefore, they are not suitable for some low-cost interface designs.
(3) Link: http://www.core77.com/blog/technology/slap_widgets_virtual_controls_you_can_touch_13128.asp SLAP widgets are a recent technology. They are made with real plastic and silicone objects that are used in conjunction with a multi-touch table. It is an interesting way of computing, since it allows users to control interface values through physical push buttons, sliders, knobs, keypads and keyboards. SLAP widgets are contextually appropriate virtual controls that users can touch. Users can manipulate movable parts, and the display responds to user input in real time. I believe such virtual controls will be widely used in the near future, and I especially hope our current keyboards can be replaced by SLAP keyboards.
Brent Batas - 4/22/2013 0:52:44
An advantage of many of these new technologies is that they eliminate “training.” Whether it is gestures, pen input, or interactive surfaces, these new technologies support natural interactions (natural in the sense that the user is already familiar with them in practice). If the user knows how to use a pen, he can quickly figure out how to use a pen input device. Similarly, if the user is familiar with working on a surface (like a desk or table), he can quickly figure out how to use a “smart” surface. This is different from keyboard and mouse input, which require training. Another advantage of these technologies is that they facilitate collaboration. It’s very easy for several people to follow along what’s happening on a surface, since you can view it from all different sides. In addition, it is easy for multiple people to interact with a surface, since you can have multiple simultaneous inputs without conflicts. This is in contrast to a keyboard and mouse input, or even a tablet device, which are designed to handle only one input at a time. These technologies are especially suitable for drawing, which is almost exclusively direct manipulation. It’s much, much more natural to draw using a pen that draws directly on the screen, rather than an external mouse that is off to the side.
However, these new technologies are not suitable for typing papers or longer paragraphs. It is much faster and less painful to type on a computer, rather than using a pen, and it is much less mental effort to type sentences out than try to form them in your head and speak them or have the computer otherwise try to figure out what you want to say. These new technologies are also not suitable for something like programming, where precision is a top priority, and expressions are very abstract. Natural gestures and interactions are great for working with concrete expressions like a piece of art, but not so good when things are abstract.
An interesting interface technology I found online is Google Glass. What’s interesting is that it is something that you wear, so once you put it on, you don’t even have to worry about it. Since it is something you look through, it can “know” what you are looking at. This means you can just tell it to “take a picture” and it’ll know what you want to take a picture of. Similarly, it can display information like directions/navigation right in front of you, so you don’t have to take your eyes off the road (or whatever you’re doing at the time). This is great because it means you don’t have to alternate between looking at something in real life, and then looking at a small screen, back and forth. What is especially promising is that it takes tasks that users already do—like take a picture, look up directions, or ask a question—and just does them better.
Elise McCallum - 4/22/2013 1:27:04
I think the main advantages of these new technologies are that they are more natural and mimic human movement more so than the standard mouse and keyboard interfaces. Not only does this make technology more accessible (as there is less of a steep learning curve because the movements are natural), but it further allows for a greater range of motion than the physical limitations of a mouse and keyboard. A keyboard is fixed to a certain number of modes and a fixed range of motion (i.e. however one can move their fingers across the keyboard), and the mouse is also limited in the way it can be interacted with. Taking advantage of the full range of the finger muscles and movement provides a significant advantage in the number of movements that can be processed. There is, of course, a critical number of motions that could be processed before it becomes too hard for the user to distinguish them and for the sensor to remain sensitive, which presents a challenge. Looking at some of the other interfaces presented, we can also see the advantage of being able to mimic direct manipulation of an interface not only by finger gestures but also by whole-hand motions (grasping) or potentially full-body motions (as technology progresses). An additional advantage is that multiple input events can be processed at a given time, so one need not remember the sequential order in which they pressed buttons. Collaboration can also be made easier with these technologies, as multiple people can be working on the same device at one time and a more cohesive picture is painted. Technologies like the interactive tabletop exhibit how people can be in the same physical space and immediately see how their work is affecting the work of others. Interfaces like the second one presented can also potentially be used by people who lack normal hand control (e.g. people without hands, those with Parkinson's, and other neurological issues).
However, these technologies are not optimized for tasks such as text entry, as pointing to individual letters on a small screen is not nearly as easy as typing, and the accuracy is lower as well. An additional limitation is that one must consider how these devices act with each other and ensure that they do not conflict in such a way as to render the other unable to be used. For example, if you had sensors hooked up to your arms while trying to use a mobile device, the actions of your muscle groups interacting with the sensors could actually change what you're trying to do with your finger on the mobile device.
Another interesting interface technology I find to be very promising is that of neural interfaces, specifically neural prosthetics. Similar in a way to the second interface in the article, neural prosthetics are part of an up-and-coming field of technology that will allow people who have lost limbs, or lost feeling in or control of their limbs, to once again use them, not with strictly muscle memory but with neural stimulators that provide therapy and allow the disabled to once again gain control of their bodies. What I find interesting about it is the fact that we have learned enough about the brain to be able to analyze the specific signals that control very isolated muscle groups, and furthermore control their levels of stimulation to force action in either a prosthetic device or a real human limb. I find it promising because it presents a great advance in human technology. While it doesn't necessarily focus on the idea of spreading technology to make it more accessible to those who cannot afford it, it expands the accessibility of technology to those who would otherwise suffer a less fulfilling life without it. The potential for this field is immense, as such neural stimulation analysis could be brought into interfaces and, at some point far in the future, allow for interfaces to be completely controlled by brain signals with no physical input needed. While such an idea is far off, the technologies that have been developed surrounding neurons and isolating critical pathways are making great strides toward such possible interfaces.
See link for more info: http://www.ninds.nih.gov/research/npp/
Eric Wishart - 4/22/2013 1:58:41
These technologies provide interesting ways for people to interact with devices, and to link up physical objects in the digital space. Both of these new technologies make interacting with a range of devices more convenient and exciting. Until muscle sensing becomes much more fine-tuned, it will be difficult to enter text with this input device. Even the large touch screens suffer from text entry problems because you do not have feedback from the input device, and your hands obscure the only confirmation you have that they are in the correct spot.
I find being able to look at something as a way to select it very cool. This is very promising for quadriplegics.
Brian Chang - 4/22/2013 2:01:16
Some of these new technologies are what people need and are quite useful. Surface and Tangibles talks about multi-user multitouch, which is not really seen anywhere in the mainstream right now but can be quite useful for collaboration and competition (video games). The on-the-go interfaces are nice too, but they would be hard to use for things that need clear images, and they could also be annoying for games.
Google Glass! http://www.google.com/glass/start/ It's interesting because it's on your face and it sees what you see. The interaction is made simple, and you can simply talk to it. There is also a button on the side for additional features.
Cong Chen - 4/22/2013 2:35:19
I think the main advantage of these new technologies is that the gulf between user thought and action disappears; it seems to me the goal of tangible interaction and the like is that what the user manipulates is exactly how they represent it in their minds, and thus the gulf distances no longer apply. This is a great advantage because we are always trying to minimize these gulfs. However, these tangible and direct manipulation technologies may not be suitable for tasks like huge data manipulation or admin system management. For tasks that are more complicated than what we humans are used to, directly tangible interaction is less suitable; for these big problems, it would be more beneficial to have tools or representations that simplify things so that they are more manageable for a human.
One interesting interface technology is Leap Motion. https://www.leapmotion.com/
I find this interface very interesting because it seems to be super sensitive and able to handle the most general gestures made by the user. Users can use pencils or fingers to directly manipulate a computer or the like just by moving their hands close to the screen. There is no direct touching involved. I think it is very promising because it really captures the ability for users to "tangibly" interact with the computer.
Mukul Murthy - 4/22/2013 3:04:22
The main advantages of these new technologies are that they are more natural and allow many different ways to interact with our computers. Currently, interactions with a desktop computer are mainly with keyboard and mouse, but the time may soon come when keyboard and mouse are used infrequently. The articles discuss new technologies that allow users to now collaborate on many different tasks. The problem of collaborating on documents - either remotely or all together - was largely solved by collaborative text editors, such as Google Docs. However, the articles discuss even more ways to collaborate, on things such as music. The task of writing music on the computer was difficult with keyboard and mouse, but with new technologies it seems easier. The task of drawing electronic art was challenging even for skilled mouse users, but touch screens have made that a lot simpler. So the new interface technologies open up new tasks and improve those that were difficult with keyboard and mouse.
However, not all tasks can be performed with the new interfaces. For example, any task that requires a lot of text entry - such as coding, or writing an essay - is difficult to do with non-keyboard (or voice) technologies. Tasks like coding are probably safe from these new interface technologies, at least for a while. Additionally, some of the interface technologies have limitations and usage restrictions. Interfaces that can interact with items sitting on top of them must be used horizontally and flat. Additionally, many of these interface technologies are still expensive, and so they will be used for very specific purposes, but none of them are as flexible as the current desktop with a mouse and keyboard.
One interesting interface technology I have heard about is flexible screens: http://ces.cnet.com/8301-34435_1-57563058/eyes-on-samsungs-youm-flexible-display-tech-at-ces-2013/ The basic idea is that screens - especially on smaller, more portable devices - can change their shape. This would be very useful for portability (rolling it up to save space), and it would also help maximize all space on the screen as well as create some interesting usage patterns. Currently, Samsung has screens that bend at the sides, but cannot be freely bent by the user. This is still helpful because it allows the device to display information that can be seen from even more angles.
Lishan Zhang - 4/22/2013 3:22:56
The main advantage of Tangible Interaction is that it supports physical items on the interface, so users have more freedom and can perform more complex work. There is more affordance bridging the real world and the interface. It is not suitable when users need to move around with their device; they will not take those physical items with them.
The main advantage of mobile micro-interactions is having a consistent, reliable, always-available interface we are already deeply familiar with: our own body. It can also work in harsh circumstances, so users can perform more tasks that were previously limited by the size of the device. It is not suitable for difficult tasks because the system is not really easy to learn and understand.
I will choose Google Glass because it is a revolutionary interface technology that will probably change the way people interact with devices. The main advantage of Google Glass is that it allows users to leave their hands free for other things while still getting information as from a mobile device. And the interface input changes from touch to multiple modes like speech or eye tracking, which are more convenient for users.
Zhaochen "JJ" Liu - 4/22/2013 3:23:05
What do you think are the main advantages of these new technologies?
Interactive Surface and Tangibles
- Break the limitations of WIMP: Windows, Icons, Menus and Pointing Devices
- More natural interfaces, extending and deepening the concept of ‘direct manipulation’
- Multiple users can use them at the same time, since they are not limited to single-user interactive applications
- More social and sharing
Interfaces on the Go
- Optimize the use of applications on the go because it is hands-free
- Do not require much focus of attention when the devices are built on your body
- Utilize different parts of the body
- Get accurate and fast data from your body, since some of your intentions are expressed without deliberate effort
And what tasks are they not suitable for?
Interactive Surface and Tangibles
- When you move around (playing soccer, going shopping), you cannot move the extra device with you
- When you travel to a mountain (hiking), it’s hard to carry the whole system with you
- Not good for simple tasks (looking up the weather, checking the time) because it requires some knowledge and time to set up
Interfaces on the Go
- Critical tasks (driving a car), because the technology might not be that accurate yet, and some unintentional signals may affect the path of the task
- Sports, because they usually require that players travel light (not carrying too much extra stuff)
Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it.
The Eye Scroll technology lets you scroll through the article you are reading on your smartphone with only your eyes. For example, when you are reading an article on the bus, one of your hands may be holding the handle while your other hand is holding the phone. Then, you can just manipulate the screen with your eyes. It is also interesting when you are watching a video on your phone: the moment you look away, the video will pause. It will resume when you look back at the screen again.
I find it very promising because eye tracking can finally be used in a general consumer product. This technology has existed for more than a decade, but it is not widely used. Many people try to manipulate their phones with both hands while driving, which causes accidents and deaths. This new interaction method, and perhaps future improvements on it, will help solve that problem.
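The pause-when-you-look-away behavior described above can be sketched as a small control loop. Everything here is invented for illustration: a stand-in `Player` class and boolean gaze readings in place of a real eye tracker's output.

```python
# Toy sketch of gaze-driven playback: pause when gaze leaves the screen,
# resume when it returns. A real eye tracker would supply the gaze stream.

class Player:
    """Stand-in for a media player with pause/resume."""
    def __init__(self):
        self.playing = True

    def pause(self):
        self.playing = False

    def resume(self):
        self.playing = True

def apply_gaze(player, gaze_on_screen):
    """Toggle playback from a stream of per-frame on-screen gaze readings."""
    for on_screen in gaze_on_screen:
        if on_screen and not player.playing:
            player.resume()
        elif not on_screen and player.playing:
            player.pause()
    return player.playing
```

A production version would debounce the gaze signal (e.g. require the gaze to be away for a few hundred milliseconds) so the video doesn't pause on every blink.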
Scott Stewart - 4/22/2013 6:52:08
The main advances mentioned in the articles are collaboration and portability. I think that the collaboration mentioned in the first article is the biggest advantage offered by future technologies. Many offices follow a standard practice of holding meetings to determine goals, and then individuals go accomplish their specific goals. Being able to collaborate in real time would improve efficiency and lead to better results. The main advantage of the technologies in the second article is bringing the experience of a large device to a small device. The main disadvantage to current portable devices is that they have to be small for portability, so the area for input is small, making things like text entry difficult. By expanding the area for input, users would be able to do many more tasks that require a higher level of detail than is currently available. These future devices could also increase the area for output, making it more enjoyable to consume information, since nearly every activity on a computer, from watching videos to reading a spreadsheet, is easier on a larger screen. The technologies that were described would not be very suitable for large amounts of text input, which is a large amount of computer use. While there are professional tasks that would be aided by these new technologies, many common work tasks are designed to be accomplished by an individual with a keyboard, and a traditional interface would continue to be the most efficient interface for these users. An interesting article I found was: http://www.extremetech.com/extreme/153919-omnidirectional-treadmill-oculus-rift-at-home-virtual-reality-finally-arrives Virtual reality is something that has been envisioned for a long time, but is becoming much more promising as technology such as the Kinect and Wii become popular. The videogame interface has remained largely unchanged since videogames have existed. 
They have become more portable (from arcade games to handhelds), but the typical controls, such as a joystick and some extra buttons, are due for an update. The style of gameplay changes dramatically as the interface changes, which is what makes virtual reality so exciting.
Winston Hsu - 4/22/2013 7:17:00
I think the main advantage of these new technologies is that they allow for easier ways for visualized information to be manipulated physically. Additionally, I think that the advantage of the mobile-centric technologies is that they allow more opportunities for computing to be used in everyday life. However, I think that some more traditional uses of computers are not served as well by these new interfaces. Typing and text input, for example, are still served better by a keyboard. Google Glass, http://www.google.com/glass/start/, is a new interface technology that I find interesting because it takes augmented reality to a whole new level. Users can gain information by simply looking, without the need to touch or hold a device.
Bryan Pine - 4/22/2013 9:24:23
The first article discussed tabletop-based "tangible user interfaces" that allow users to directly manipulate a screen on the table with touch input and even by moving physical objects that the tabletop interface can track. This has two main advantages over traditional pointer / window based interface. First, the ability to manipulate physical 3D objects opens a lot of interaction possibilities that aren't available on a 2D screen. It allows for richer metaphors and more complete direct manipulation, which have the same benefits as the 2D versions of these concepts (aiding user understanding, providing comfort with a new system, etc.), but better. For example, if you wanted to mix music on a tabletop computer, you could actually use a record player shaped object and physically move it. In general, many automated tasks could be made to look and feel almost exactly like their non-automated counterparts. The second major benefit is the enhanced support for collaborative group work. Tabletop systems are made to support multiple users, which prevents one group member from dominating the system through control of the input device. Despite these benefits, tabletop systems aren't useful for everything. One difficult task would be the group editing of a text document, or some other data for which orientation was important. Either the users sit around a flat tabletop, which means some users see the document upside down at any given time, or the users huddle on one side of a slanted tabletop, which means that those closest to the device have more control than those farther away. There are definitely potential workarounds, but this is still a problem for which a traditional pointer-based system would be more natural.
The second article talks about micro-interactions for mobile computing that make use of the specifics of the human body. For example, the article suggests "whack gestures", in which a user can do something simple like ignoring a phone call by just whacking the phone without taking it out of their pocket. This is important and useful because users on the go usually only have 4-6 seconds to spare for their mobile device before they have to refocus on the outside world. Considering that it takes 4 seconds on average to get the phone out of your pocket, interactions that can save this step would be useful. Interactions like this would be great for quick, repetitive, simple tasks like ignoring phone calls. However, when cutting use time that much, the user probably won't even be able to take the phone out of the pocket, let alone see the screen, so the possible interactions are limited to a small set and allow for very little customizability of response. In other words, it is great for "ignore that call, I'm busy right now", but bad for "ignore that call unless it is my mom, in which case answer it".
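The "whack gesture" above can be sketched as a simple threshold test on accelerometer magnitude. This is a toy illustration with made-up threshold values, not the actual implementation from the article; a real detector would be tuned against recorded sensor data and would reject false positives like drops.

```python
# Toy whack detector: a "whack" reads as a brief acceleration spike well
# above the resting (gravity-only) magnitude of ~9.8 m/s^2.

GRAVITY = 9.8           # m/s^2, resting accelerometer magnitude
WHACK_THRESHOLD = 25.0  # spike magnitude counted as a whack (assumed value)

def detect_whack(accel_magnitudes):
    """Return True if any sample in the window spikes past the threshold."""
    return any(a > WHACK_THRESHOLD for a in accel_magnitudes)
```

This also illustrates the limitation noted above: a single threshold can encode "ignore the call," but not anything conditional like "ignore unless it's my mom," because the input channel carries only one bit of intent.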
http://www.google.com/glass/start/ This interface is the Google Glass, which is basically a computer in glasses form. It responds to voice, location, eye movement, and other input, and displays semitransparent images in front of your eyes. I think that the image display concept is very promising, especially for things like directions that work best when you are able to orient yourself in the same line as the image. I think the hands-free interaction is also very useful because it cuts down drastically on access time (the same problem that the second article was trying to address). However, the downside of voice commands is a lack of privacy, which will have to be dealt with somehow. It also seems difficult to turn a feature off when you don't want it, which means you could potentially be distracted at important times. I think these will definitely be banned in cars as well, which somewhat limits the usefulness of the directions feature. Overall though, this is a giant step in the right direction, making computing easier, faster, and more seamless.
Tiffany Jianto - 4/22/2013 10:31:24
Some interesting future or new ways we interact with computing include tabletops, nanotouch, muscle-computer interactions, and projectors which can allow data to be seen on skin. The tabletops can provide multi-touch and tangible object interactions so that multiple people can interact with a table’s surface directly, and some tabletops even allow real physical objects to be placed on the surface. The main advantages of a tabletop are that multiple users can work together, all with their own control, on a large display, and multiple events can happen at one time; however, tabletops seem more useful for a standing person, especially tabletops which cannot be tilted and must be horizontal. It could be inconvenient for people to have to remain standing while they work on the tabletop. Furthermore, there is a larger surface area, so it may be harder to reach some other parts of the screen, or it may require more work and movement. The nanotouch uses the back of a mobile device so that the fingers don’t interfere with the display on the front. While it is helpful that the fingers don’t block the screen, this seems like it would be problematic for accuracy on the screen, especially for typing, since the user cannot see the fingers. Muscle-computer interactions use muscle sensing to map gestures to an action, and some even have a projector which allows data to be seen on the skin. While the muscle-computer interactions are very handy, they seem to require extensive setup, and there may not be many functions for them (aside from playing, pausing, adjusting the volume of music, and turning lights on and off). Furthermore, it is necessary to put on an armband in order to recognize the gestures, which may be inconvenient. Finally, the projector can also be a hassle, since people may wear long sleeves or jewelry which interferes with the display.
Reading these articles reminded me of JARVIS from Iron Man. JARVIS is Iron Man’s home artificial intelligence system which assists him in every way possible at home, from turning lights on and off to helping him build intricate machinery. While technology is not so advanced yet, I think an artificial intelligence system which recognizes and responds to voice commands is very useful and seems to be in the works. In the first link, there is the example of Watson, which is extremely fast and pretty accurate at Jeopardy, demonstrating accurate voice recognition and speech. In the second link, the idea of a voice-controlled assistant is described. While the functionality is very basic, I think it paves the way for a more advanced system. While there are only basic models and prototypes out, a voice-activated home system seems very useful and interesting.
Tenzin Nyima - 4/22/2013 10:59:19
Since its invention, the human experience with the computer has improved tremendously, and it will continue to improve. But every major idea in human interaction with computers has seen many challenges, and this will hold true for future changes as well. In the reading, we read about a few of the exciting future possibilities for interacting with computing, and their pros and cons. For example, the idea of micro-interactions: you whack your phone without taking it out of your pocket and make it silent. You are in the middle of a very important conversation with your friends and all of a sudden the phone rings. Now, instead of taking your phone out of your pocket and then silencing it, all you need to do is give your phone a whack. But this kind of interaction has many limitations. For example, you can’t write a text message by simply giving a whack to your phone.
Another exciting new technology is Skinput, a novel input technique that allows the skin to be used as a finger input surface. According to the reading, when a finger taps the skin, several distinct forms of acoustic energy are produced and transmitted through the body.
One great advantage of this kind of interaction with computing is that the arm is an attractive area to “steal” for input, as it provides considerable surface area for interaction, including a contiguous and flat area for projection. Moreover, very similar to today’s mobile phones, which we hold in one hand and operate with the fingers of the other, it will feel very natural to use the arm and type or dial numbers with the fingers of the other hand. One big advantage of Skinput technology is that we no longer need a physical device. But such technologies have downsides as well. Besides the challenges in making this sort of technology work well, it seems almost impossible to replace all the features that we get from technologies such as mobile phones. How about the cameras on our phones? Can we implement the camera feature in Skinput technology? New technologies such as Skinput are not suitable to replace all the great features of mobile phones.
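The tap-location step in a Skinput-style system can be sketched as a classification problem: each tap location produces a characteristic acoustic feature vector, and a new tap is matched to the nearest known location. The feature values and the nearest-centroid approach here are simplifications for illustration; the real system trained classifiers over many bio-acoustic features.

```python
# Toy nearest-centroid classifier for tap location. Each training entry
# maps a location name to a representative acoustic feature vector.

import math

def nearest_location(tap_features, training):
    """Return the location whose feature vector is closest to the tap's."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda loc: dist(tap_features, training[loc]))
```

In practice the system would first calibrate per user (collecting taps at each location to build the training vectors), since body composition changes how the acoustic energy propagates.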
“Google Nose” was actually Google’s April Fools’ joke. So Google Nose is fake, but such technology does exist. Founded in 1997, Cyrano Sciences, Inc. commercialized a simple, accurate, non-invasive tool that enabled machines to smell. Developing an electronic nose can make a lot of things easier. There are certainly many things that might not work well with sensing smells, but there are also many that do. Forgot to label your medicines and now wondering what they really are? Just use your electronic nose and it tells you right away. Wondering what the name is of that beautiful plant you saw somewhere? Just use your electronic nose and get the name.
Arvind Ramesh - 4/22/2013 11:21:58
I think one big advantage of these technologies is the increased ability of people to work together on a project. This is especially true with the multi-touch interfaces that are coming out today, which offer many advantages when it comes to group work. People will no longer be defined by a "role" when working on a team, but can all work together with full control of and access to all the information.
Another advantage of new UI technologies is the unprecedented ability we will have to interact with our environment. Once input methods move to muscle-sensing and acoustic forms, we will be able to essentially make our bodies a living computer, gaining the ability to look at almost anything and gather information on it. Also, we will retain the advantages of mobile computing, in that we can access the internet anywhere, while overcoming its biggest limitation: the small screen.
The biggest limitation of these new technologies is that they will probably never be able to match the typical mouse-keyboard combination in tasks such as word processing, gaming, and programming. These are just a few examples, but there will always be applications in which the keyboard-mouse combination will reign supreme for decades to come. That said, the future of UI technologies is very promising for our generation.
Moshe Leon - 4/22/2013 11:40:22
What do you think are the main advantages of these (future ways we will interact with computing) new technologies? And what tasks are they not suitable for? Multi-user, multi-touch table interfaces are the foreseen future according to the first part of the article. The movement away from single-user, or single-user-dominated, interfaces has been a challenge for various reasons. One block was the shift itself, and the difficulty of thinking outside the box and developing things outside the general flow and scope of today’s society. It is great for collaborators on projects that need a big surface to project multiple ideas from multiple users. It is really marking the end of the whiteboard era; however, the drawback is that it is still expensive, and the surface is horizontal. Also, the ability to not have a single lead is still being worked on and has not been perfected yet. Also, the ability to differentiate between users is still a challenge, but it has been solved in at least one platform. The other technology being discussed is the move from the large, old computers of the past to today’s mobile computers, which give us all the ability to perform tasks that past generations couldn’t even imagine. Micro-computers are embedded within almost anything today, and that enables the creation of many new technologies. Bio-acoustic sensing, muscle sensing, and body tapping are only some of the new technologies discussed in the second article, and I find them quite amazing, although they are still quite limited. These are like the beginning stages of building a larger, bigger idea that takes time to develop and perfect. These ideas are still raw, and it is natural to have many of them until a new generation develops what it really needs out of the combination of all these new technologies. Body tapping has the most challenges, some of them being occlusion and the fact that the interface used is unnatural and is fit to be used on a phone.
Find at least one other interesting interface technology online, paste a link and tell us what you find interesting and promising about it. --> http://www.google.com/glass/start/ If I had the money at the moment, I would buy the Google Glass. I find it fascinating, and would like to tap into the possible things it has to offer. I think it is a smart move, from the mobile phone to such a device, and whether or not the answer is in glasses, the important thing is the willingness and the motivation to move away from the overall flow and think outside the box. A hands-free interface is the ultimate thing as far as I am concerned, and I can’t wait to try the Glass!
Thomas Yun - 4/22/2013 12:04:34
The main advantages from having interfaces with direct manipulation are ease of use and possibly higher accuracy. It also eliminates the need for devices such as the keyboard and mouse. An obvious disadvantage to it is if there is something that requires mouse movement to be tracked, a direct manipulation interface would not be able to do that as its location registers when it is touched. For interfaces on the go, an advantage comes from the fact that you're using your body to interact which is somewhat similar to direct manipulation. However, things such as nanotouch make it hard to accurately touch things if accuracy is needed.
Having your most-used tasks right in front of you would make things a lot easier.
Timothy Ko - 4/22/2013 12:09:23
Tabletop technologies have the main advantage of having more potential for collaborative work. Since tabletop devices rely on direct manipulation, users will be actively moving objects and doing other activities physically all over the interface. That means they will also be physically interacting with other users, which could increase motivation for collaboration. The main advantage of the bio-acoustic technology is that, as the article discussed, it represents the ultimate goal in portability. You carry your body with you no matter what, so it would be convenient to have interfaces directly projected onto your skin. This also removes the need for peripheral devices like watches or cell phones, which seem clunky and unnecessary by comparison.
Tabletops, however, don’t seem to be very suitable for single-user activities. For example, you don’t want to do online banking or display private information on a giant tabletop screen. They also aren’t portable at all. Also, the focus on collaborative work limits the uses of tabletop interfaces, which poses a possible problem of not being able to reach a wide enough audience. For the bio-acoustic technology, there is the challenge of being able to accommodate the many different shapes, sizes, and skin colors of human beings. How is a skinny person going to see a lot of information on their arms, for example? Your arms are also, relative to a desktop screen, small and unable to display a lot of information.
One interesting interface technology I found online is called “Siftables.” These are cookie-sized tiles with screens on them that can interact with other Siftables by physically detecting them. What I find promising about this technology is that, like tabletop interfaces, it takes advantage of physical manipulation, making it easier for people to learn to use. However, unlike tabletop interfaces, their size makes them very portable and allows them to be easily moved from one location to another. The website I used is listed below.
Kate Gorman - 4/22/2013 12:24:51
The main advantages of these new technologies are the emphasis on tangibility and bodily interaction, as well as the embedding of these systems in real spaces and contexts. These technologies allow for multi-faceted interactions that involve many items and many touchpoints, and can allow for more creativity and flexibility through direct manipulation, which is closer to a real physical interaction for users.
These technologies are not suitable for text-heavy input tasks, and should be geared towards highly visual tasks where precision is not necessarily required.
This is an interesting wearable gesture-based interface: http://www.pranavmistry.com/projects/sixthsense/ . It is a step beyond Google Glass: it can project onto objects and read gestures. Being able to dial a phone number on your hand or control a display midair would mobilize computing even further.
Samir Makhani - 4/22/2013 12:28:30
As the article states, computers "pervade almost all aspects of our lives," so the number of possible ways we will interact with computing in the future will greatly increase, especially because there is such a massive market for it now. I think the main advantage of these new advancements is that they make our lives easier and help us remain well connected with the rest of the world. As mobile computing takes over, touch-based interaction has greatly increased, and 2D pointing devices and keyboards are being used less with the rise of tablets such as the Nexus and iPad. Because the market for computing is so large, more money is being put into research on simpler interfaces with simpler interaction. These advancements will make interaction with computing far more intuitive, which in turn will make our lives easier, instead of our having to worry about long processes to get our input recognized by a particular interface (like punch cards earlier in the 20th century). Clearly, a lot of these new technologies are not suitable for all use cases. For instance, no matter how fancy and ergonomically friendly a keyboard gets, it has no use case on a mobile device with touch sensing. Similarly, a touch-based device will be a lot less useful than a keyboard for typing a 40-page paper.
One interface technology that I find very interesting is Leap Motion (https://www.leapmotion.com), a gesture-based controller launching in less than a month. The product improves how people interact with their current devices. You can do pretty much anything on your computer by waving, pinching, sliding, or making almost any gesture with your hand. I strongly believe this product will change the way we humans interact with our devices. The sensing technology behind the Leap Motion is "sort of like a Super Kinect," says Leap Motion CEO Michael Buckwald.
Erika Delk - 4/22/2013 12:37:34
The main advantages of these new technologies are that they take advantage of direct manipulation and make using user interfaces even more similar to interacting with the outside world, by leveraging motions the user is familiar with. However, these forms of technology would be less useful for tasks that require high accuracy (as it seems there is much more room for error with them) or tasks that must be accomplished very quickly. Here: http://www.youtube.com/watch?v=JelhR2iPuw0 is a link to a YouTube video about tactile touchscreens, touchscreens where the screen deforms to display a 3-D surface, such as keys for a keyboard. I think it's interesting because it solves one of the main problems of touchscreen keyboards, namely the lack of tactile feedback. Also, from a pure engineering standpoint, it is fascinating that they have developed a surface that can deform relatively quickly and is still durable.
Timothy Wu - 4/22/2013 12:40:37
For the first technology, I think the main advantages are that you can manipulate 3-D objects on the surface of the table and have it register commands or input and that you can have simultaneous collaboration. The information contained in the spatial arrangement of the objects allows you to create things like schematic drawings with connections to the different objects, use it to synthesize music in a creative new way, and to critique urban layouts. For multi-user collaboration, the table-top surface allows you to share control between two or more users collaborating on the same device because there is no longer one person controlling the mouse and keyboard. Now the two users can simultaneously control the table-top surface, allowing for potential parallelism. This technology is not suited for mobile situations because it is a fairly large apparatus. Furthermore, it probably would not be useful for a single person to use on their own. Ideally, there would be multiple users using it at a time so that the users could reap the full benefits of the technology medium.
The main advantage of the second technology is that the device is always on the person and decreases context-switch time. Since the user is wearing the device and it detects and projects onto the user's body, the user does not need to pull a mobile device out of their pocket and context switch to focus on it. Micro-interactions could streamline the interaction, for instance by allowing you to silence your phone or check your email with a gesture. The device does not seem suited for more complex tasks. There are only so many things a user can do with a physical action that does not require them to look at a screen. Furthermore, the projector on a person's arm would only allow the user to use one hand to interact with the device; it is easy to imagine how difficult it would be to use a keyboard with only one hand.
Here is a link to Leap Motion, a computer vision interface product: https://www.leapmotion.com/product I find it interesting because it seems extremely responsive to a wide variety of physical gestures and is a new way to interact with one's computer. Being able to use touch gestures without actually touching any surfaces would enable you to interact with your device from afar, which could allow you to utilize the space in your workspace more economically. Also, motion gestures would allow you to give input that you wouldn't be able to give with a traditional mouse and keyboard, because you are now able to use physical 3-D space as a means of encoding new information. Lastly, I think the device is promising because of its aesthetic appeal and plug-and-play attributes, which make it appealing to average users.
Sumer Joshi - 4/22/2013 12:46:42
The main advantage of these new technologies is that they give people an added lens for how to do things and apply their talents. I really think that some of these new innovations depend on how technological people want to be, and on how we can "think outside the box," no matter how cliche that term might be. Another interesting interface that I found is Headspin, which is 3D teleconferencing. I find this interesting because teleconferencing has not been in 3D before unless you use a webcam, but the way it is built to generate a 3D image seems like it could be in production in the future.
Tiffany Lee - 4/22/2013 12:47:06
The first article talked about interactive surfaces and tangibles. The main advantage of these new technologies is that they allow many more possible actions from users, many of which are intuitive, especially the technologies that interact with actual objects. Furthermore, tabletop technologies often afford group work and, when done right, let everyone control the data on the system so that everyone can contribute equally. This equal contribution doesn't really happen with many of the technologies that exist now, such as whiteboards and personal computers. The main disadvantages of these technologies are that they are not portable and that occlusion happens.
The second article talked about mobile technologies on the go, specifically muscle-computer interfaces and bio-acoustic sensing. The main advantage of muscle-computer interfaces is that they make computing very easy and discreet. However, a disadvantage is that you have to wear an armband, which might be uncomfortable or unattractive in some people's opinions. Also, since people use their muscles for a variety of different actions in real life, these everyday actions might accidentally be picked up as signals for computing. The main advantage of bio-acoustic sensing is that users would not have to bring a phone or other object with them; they can just use their body. Of course right now, based on the pictures in the article, users would have to wear a projector somewhere on their body to see the screen.
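The core idea behind bio-acoustic sensing is that a tap at each spot on the arm produces a distinct acoustic signature, and a classifier maps new taps to the nearest known signature. A toy sketch of that classification step, with invented feature vectors and location names (a real system like Skinput extracts features from sensor waveforms and uses a trained machine-learning model):

```python
# Toy nearest-centroid classifier for tap locations on the arm.
# CENTROIDS holds hypothetical per-location average feature vectors
# (e.g., energy in a few frequency bands); these numbers are invented.
import math

CENTROIDS = {
    "wrist":   [0.9, 0.2, 0.1],
    "forearm": [0.4, 0.7, 0.2],
    "elbow":   [0.1, 0.3, 0.8],
}

def classify_tap(features):
    """Return the location whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda loc: dist(CENTROIDS[loc], features))

print(classify_tap([0.85, 0.25, 0.15]))  # closest to the "wrist" centroid
```

The false-positive worry above shows up here too: an everyday arm movement that happens to produce a feature vector near one of the centroids would be misread as a deliberate tap.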
http://phys.org/news/2013-04-mind-controlled-devices-reveal-future-possibilities.html The above link is about mind-controlled devices. Recently Bin He, Ph.D. and his team at the University of Minnesota were able to make improvements to non-invasive mind-control devices. These devices if improved to the point where they have high accuracy in reading minds, could greatly decrease the gulf of execution in technologies. The application of this kind of technology could also be very wide, from video games to artificial limbs.
Claire Tuna - 4/22/2013 12:48:14
main advantages of new technologies?
One advantage of the new technologies is moving away from the WIMP architecture and creating a richer set of input devices that can utilize our bodies and hands more fully than the mouse. Another advantage of “ubiquitous” devices and table-top type devices is flexibility, i.e., we wouldn’t need to lug them around with us. A promising possibility is that rather than transport the laptop and put it on the table wherever you happen to be, in the future, you could carry around some projecting device that turns whatever table is around into your computer. Another advantage of these technologies is that they could help with collaborative work. Examples like the Reactable show how control can be shared between more than one user on one device. Such control cannot be shared in the WIMP architecture, because only one person can use the mouse or type at once. The technology also holds promise for using 3D objects on top of a surface, as mentioned with the architectural example. This could create a richer visual experience for 3d modeling than a flat surface.
what are they not suited for? These devices are not suited for tasks such as single user word processing. Manipulating objects directly does not make sense in the context of word processing (unless maybe you were designing a layout or something where changing text size/position was necessary). It does not seem as though any of the technologies would improve upon the efficiency of keyboard typing.
1 interesting interface tech online: http://vimeo.com/omekinteractive/graspces2013 The Omek Grasp combines direct manipulation with gesture recognition. This has the mental advantages of direct manipulation-- well mapped metaphors, small gaps of interpretation, but with the possibility/flexibility to work from a distance. I think it’s exciting that in the future, people could be sitting comfortably on couches, away from the screen, but still be able to manipulate the screen’s objects. I think such far range gesture recognition technologies could help a lot with neck/back/general body problems that come along with long term computer use.
Raymond Lin - 4/22/2013 12:49:48
I believe that the main advantages of these new technologies are primarily convenience and ease of use. However, I believe the ease of use comes with a cost, that is, learning the new generation of interfaces. While the first iteration of these new interfaces will have some learning curve, it will definitely ease as more similar interfaces arrive. I think one of the more interesting technologies to come out is Google Glass. What makes me interested in it is primarily the fact that it looks like technology coming straight from a movie. But really, the interface seems to integrate with people's natural tendencies. What I mean by that is, although we're now accustomed to holding cameras and talking through cellphones, at base these are all extensions of human actions (i.e., seeing and speaking). Google Glass seems to play off these kinds of actions and intends to be a more natural part of a person's life. http://www.google.com/glass/start/what-it-does/
Monica To - 4/22/2013 12:51:46
Many of the new user interfaces described in the two articles are trying to push the bounds of human interaction with computers beyond the conventional WIMP desktop interface. The main advantage of these new technologies is that they try to make human interaction with computing devices more closely aligned with natural physical human interaction. In other words, they are interfaces that give users' hands more freedom to physically manipulate the interface, and that make the experience more shared and collaborative through multi-user multi-touch. The first article reasons that tangible interfaces "can be seen as an extension and a deepening of the concept of 'direct manipulation'..." Although these more pervasive user interfaces and ways of interacting with computing devices may improve collaboration in meetings or may allow users to use their own bodies as part of interacting with a device, they are not suitable for all tasks. One task that I can think of is programming, or developing code. A single-user task like programming requires a lot of typing and also requires working in solitude. As cool as being able to type numbers on your forearm may sound, it doesn't seem like the most optimal way to develop, for example, our CS160 Android app. For this task, the conventional WIMP interface currently appears to be the most optimal choice for software engineers. One interesting interface technology not mentioned in the two articles is Google Glass (http://www.google.com/glass/start/what-it-does/). Google Glass is a headset that displays computing information in a user's field of vision through the pair of glasses. A user interacts with the device by wearing it and using voice commands (google something, take photos or video recordings, etc.), and feedback is then displayed in the user's view.
This technology is interesting because it is completely hands-free and it is a device that a user can virtually use the entire day with little effort. Instead of lugging around a laptop computer, a user could wear the Google Glass and go on with their daily life. I think this is promising because it gives the user a lot more freedom and convenience because a user does not need to put much effort in using it and it offers a hands-free way of interaction. Although it probably cannot replace the WIMP devices, it definitely seems like a promising device that could replace other devices like the cell-phone or conventional point-and-shoot camera (for photos and video-recording).
Jin Ryu - 4/22/2013 12:52:44
1. Tangible user interface: - There are a few advantages to a new user interface with various types of surfaces:
- Possible portability - No mouse or additional forms of input are required if all it needs is touch. A person does not have to carry more than the device itself and their own hands.
- Multi-touch and multi-dimensional - Incorporating more than a single touch focus allows the interface to have a more diverse array of commands. Also, multi-touch expands the interface to be used by multiple users since it is just additional touch points.
- Direct, intuitive interaction - Commands based on motion are more natural and expand the possible forms of input to several that are less sequential (pinching, spinning, etcetera), instead of clicking, dragging, and pressing buttons in order.
- Use of a person's physical memory through more distinguishable muscle actions - Different types of movement may be more memorable to the user than remembering certain buttons to press in order.
- When this interface may not be suitable:
- People with limb disabilities (especially their arms) may not benefit from this new interface.
- If the interface is particularly small so as to be portable, anything requiring a big screen would not be ideal, such as watching movies in high quality or examining a lot of information at once.
- For some tabletops, they may not be usable when the user is actively focused on something else, like running, and cannot spare much attention to look at the device.
- If the tangible interface is split into many parts, it may not be ideal to use in a chaotic environment where things can get lost.
2. Biological interfaces (on the go): - There are a few advantages to having an interface that uses the environment and the body:
- High portability and adaptability: an interface like this makes use of the environment, which makes it very portable and very lightweight. The user doesn't need to carry a screen because they can use anything around them (a wall or their own body).
- Use of human body: humans are knowledgeable and aware of their own bodies so interactions can be made extremely easy if they are simple and organic. They may even be able to use the device without looking, depending on how natural the commands are.
- People with missing fingers may be able to use their device if the biological sensors can still read inputs from higher parts of their arms.
- Some applications for which this interface may not be suitable are:
- Accuracy-dependent or delicate tasks: the environment is noisy, and input must be very careful. Some commonplace motions could result in faulty or accidental input, especially when reading body signals that could come from everyday movements as well as deliberate use of the interface.
- Adaptability and generality: multiple functions or heavy computing power may not be supported, especially if the interface focuses on being portable. In a small device, space and power are limited.
3. Other interesting interface: Leap - It is a user interface that utilizes motion capture and brings the possibilities of input into 3-dimensional space instead of the usual 2-D. People can use their fingers, hands, or their entire bodies by moving them up, down, front, and back anywhere in space. They also do not need to touch the screen or be in contact with any device, since motion is captured through a sensor. To interact with the interface, they need to put themselves within the boundaries of the sensor and perform actions that are known to the interface. The interface needs a screen and a sensing device, so portability depends on whether there is an area to place the sensor and a flat surface for a monitor/screen. Url: https://www.leapmotion.com/
Glenn Sugden - 4/22/2013 13:07:04
The biggest advantage, by far, is the ability to collaborate interactively with others within the same space, such as on a tabletop system. As current technologies are designed for one user, the ability to interactively share and manipulate data is only now beginning to emerge. Sharing (and updating, modifying, etc.) resources for a software-based product, such as source code, images, and sounds, is being handled in somewhat-realtime by technologies like SVN, CVS, and Git. But this is still a client/server model, and not a truly multiuser interactive workflow. Closer would be something like Google Docs, where multiple users can work on the same document at the same time, but there are still issues with data "jumping around" when users make big changes (like deletions) above the place where you are editing, resulting in a large shift in focus and subsequently slowing down your productivity. However, all of these technologies are still designed around a single user's perspective and tools, not with multiuser interaction built in from the beginning. The tabletops/surfaces described in the article were much closer to immediate, multiuser collaboration, but will require a fundamental shift in users' workflows before they are readily adopted.
Personally, I am fascinated by the possibilities of brain/computer interfaces. The idea that we will be able to manipulate data with our thoughts instead of mechanically with our hands (or voice) opens up a ton of new areas of UI exploration: disabled individuals, gaming interfaces where you can issue "global orders" while still concentrating on the local task at hand, biofeedback software that provides visual and auditory clues according to your state of mind (stressed, blissed, relaxed, etc.), devices that can change their state according to what you are thinking at the moment (e.g. turn on the coffee maker, turn off the stove, dim the lights). The possibilities are enormous, and it's a fascinating research area that holds a lot of potential.
Andrew Gealy - 4/22/2013 13:07:23
Tangible computing devices like Microsoft Surface seem to be strongest for novel tasks involving multiple users. In order to see their real advantages we need to look beyond traditional interfaces, as they do not translate particularly well. This is their strength and weakness, as many tasks we are accustomed to have been developed extensively with a classical WIMP interface in mind. For example, text entry is still going to be fastest with a keyboard, which also offers rapid interactions for the experienced user with program hotkeys. Anything involving text entry is probably not going to be particularly suited for these sorts of tangible interfaces, and many times (at least with current software) the ability to support multiple users isn't really an advantage.
The idea of a conversational, voice-based UI isn't particularly novel (Siri is basically that), it just seems as though the technology hasn't yet reached the usability threshold needed to actually be effective. I use Siri occasionally, but only when I'm unable to type. I always have to think very hard about what I'm going to say and try to speak clearly. Once users are comfortable speaking casually as they might to a friend (when the technology is sufficiently advanced to handle that), I can see this kind of UI becoming immediately much more powerful.
Alexander Javad - 4/22/2013 13:14:21
The main advantage of tangible user interfaces that allow for the direct manipulation of objects on a computer is that it is allegedly easier for a user to learn these interfaces due to their "natural" way of use. What I mean by that is, to move a data object from one location to another, a user would touch that item with their finger and drag it across the computer's screen to the desired location by moving their arm and releasing their finger. This is "natural" because it is analogous to moving a physical object, even though we are manipulating objects on a computer. Another main advantage of TUIs is that they can improve the "richness" of human-computer interactions. An example of this would be the now-famous "pinch-zoom" popularized by Apple. And with tangible, physical objects to manipulate, the possibility for new interactions has grown much more.
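The pinch-zoom interaction mentioned above has a simple underlying rule: the ratio of the current distance between two fingers to their distance when they first landed gives a zoom factor. A minimal sketch, using a hypothetical touch-point representation (real frameworks like iOS or Android deliver touch points through their own event APIs):

```python
# Derive a zoom factor from two touch points: ratio of the current
# finger separation to the separation when the gesture began.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def zoom_factor(start_touches, current_touches):
    """Each argument is a pair of (x, y) tuples.
    Returns >1 for spreading (zoom in), <1 for pinching (zoom out)."""
    d0 = distance(*start_touches)
    d1 = distance(*current_touches)
    return d1 / d0 if d0 else 1.0

# Fingers start 100 px apart and spread to 150 px -> 1.5x zoom.
print(zoom_factor([(0, 0), (100, 0)], [(0, 0), (150, 0)]))  # 1.5
```

Part of why the gesture feels "natural" is visible in the math: the on-screen scale tracks the physical finger motion continuously, so the gap between action and effect is tiny.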
However, with all the promising capabilities that TUIs present, the current challenge in making these capabilities available for public use is that there are "intrinsic limitations of cell phones and computer screens which are specifically designed for single user interaction. Additionally, in the case of phones, their small size limits full hand or bimanual interaction. On the personal computers side, developers are struggling for adapting multi-touch to the rigid structure of WIMP, which prevents the apparition of most of the features of multi-touch." (ACM XRDS: Crossroads - The Future of Interaction Volume 16 Issue 4, Summer 2010, Pg. 28)
I'm choosing the Nintendo Wii as the cool user interface I'd like to discuss. It is more of a tangible user interface because, in a game like Wii bowling, to send your bowling ball down the lane you merely press a button and then physically "roll" the ball (move the controller as if you were rolling a bowling ball). The "command" you wish to execute nearly mimics what you would do in reality. This "natural motion" is the main exciting feature of tangible user interfaces. In another game, such as Zelda, to swing your sword you swing the Wii remote! That's awesome. It makes you feel like you are more a part of the game. It's a lot more fun, as well as "intuitive," in the sense that pressing a few buttons to swing a sword doesn't make as much sense as just swinging the remote. Or if you're boxing, you punch with the remote! See the link below...
Kayvan Najafzadeh - 4/22/2013 13:20:10
To me the most interesting technology was the ReacTable. I believe this technology was not meant to be an improvement over existing technologies; it is more like a new musical instrument that gives the player powerful tools to create music while entertaining viewers with the graphical effects on the table. A new technology that I found on the internet was Leap Motion. While this technology is really exciting and powerful, I cannot see it replacing our existing technologies; it will have its own uses. https://www.leapmotion.com/
yunrui zhang - 4/22/2013 13:22:21
The main advantage of "Interactive Interfaces and Tangibles" is that they eliminate the physical confinement of a user interface, such as size, screen material, etc. They free users from holding or operating a device, and allow them to choose their own surface of interaction. But there are still tasks they are not suitable for, such as heavy text editing, photo editing, and video playback that requires high definition, since the technology does not yet support retina-quality display and a physical keyboard still beats a virtual one. The main advantage of the "Interfaces" technologies is mobility. That is, the devices themselves can be made tiny for carrying around, yet we do not lose any control or computing power for this convenience. But there are still tasks they are not suitable for, such as watching movies, text editing, and anything that requires intensive input and a high-definition display surface.
An interesting interface technology I found online is: http://petitinvention.wordpress.com/2008/02/10/future-of-internet-search-mobile-version/ It is a pictorial web-search technology: when you look at something through the device, for example a building, a park, or a car, a web-search result will be generated based on what you see. I find it interesting and promising because nowadays people still go through the trouble of inputting text on their mobile devices, and text input is not a strong point of mobile devices. However, almost all mobile devices now have built-in cameras, and it is far more convenient to take a picture on a mobile device than to input text. This technology uses that strength to sidestep the weakness, which is a very promising idea.
Matthew Chang - 4/22/2013 13:27:49
The transition to these direct-manipulation interfaces makes it easier to collaborate in person, as well as making interaction with the device more intuitive. At the same time, though, this type of interface is not suitable for bulk entry of information; it is meant for manipulating data or consuming content. It is arguably slower and more prone to error when attempting data entry through these touch interfaces.
The human-body-based interfaces are interesting because of their creativity. As pointed out in the second article, the concept has some guarantees, such as always having a projection surface, and familiarity with our own bodies allows us to interact with the system with relative ease while jogging and such. The problem is that it inherently limits itself to situations where the user's arm is bare or their sleeve is rolled up, which severely limits where it can be used. Another aspect is that because the arm is being used, often at an unnatural angle, there is the possibility of fatigue, which restricts this interface to quick navigation. As mentioned at the beginning of the article, this sort of interface is optimized for quick reference and works well only in short bursts.
An interesting interface technology that has recently made news on many tech blogs is LeapMotion: http://www.engadget.com/2013/04/22/google-earth-leap-motion/. Their product senses the position and orientation of a hand above the sensor, allowing for interaction through tilting of the hand. In the link is an intuitive example using Google Earth and a simple paradigm that has been seen in products like the Segway: tip your hand forward and it moves forward. While very much a novelty interface, this can further developments in interacting with virtual components using 3D space. As pointed out in the first article of the reading, there has been a push to leverage physical interactions, and this product has the potential to explore that concept.
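The "tip your hand forward and it moves forward" paradigm boils down to mapping hand tilt angles (which a sensor like the Leap Motion can report) to a pan velocity, usually with a dead zone so a roughly level hand keeps the view still. A sketch of that mapping, with invented threshold and gain values:

```python
# Map hand tilt (degrees) to a 2-D pan velocity, with a dead zone.
# Threshold and gain are illustrative, not from any real product.
DEAD_ZONE_DEG = 10.0   # tilts smaller than this are ignored
GAIN = 2.0             # velocity units per degree of tilt past the dead zone

def pan_velocity(pitch_deg, roll_deg):
    """Return (vx, vy): roll pans horizontally, pitch pans vertically."""
    def axis(angle):
        if abs(angle) < DEAD_ZONE_DEG:
            return 0.0
        # subtract the dead zone so velocity ramps up smoothly from zero
        return GAIN * (angle - DEAD_ZONE_DEG * (1 if angle > 0 else -1))
    return axis(roll_deg), axis(pitch_deg)

print(pan_velocity(5.0, 0.0))     # nearly level hand -> (0.0, 0.0)
print(pan_velocity(30.0, -20.0))  # tilted hand -> (-20.0, 40.0)
```

The dead zone is what keeps such an interface from feeling twitchy: nobody holds their hand perfectly level, so small unintentional tilts must map to zero motion.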
Weishu Xu - 4/22/2013 13:33:33
I think the main advantage of these new technologies is the fact that they make computing easier and more intuitive for people to engage with, without setting time aside to "compute." Through technologies that interact with motion and sound, it will be possible for individuals to engage with software anywhere, anytime. However, I still feel that they are currently limited in their ability to register long inputs such as words and thoughts. While typing may not be the most natural or efficient way to input on the go, it is the most precise and relevant way to register complex thoughts until speech recognition improves.
This is a new technology that may eventually allow individuals to compute with only thinking and without the need to physically input words and expressions. It could be in the (semi-distant?) future of computing and allow individuals to quickly process and input on the go.
Alvin Yuan - 4/22/2013 13:33:51
The tabletop interface provides a great way to interact with objects encoding data on the table. It also can be effective for co-located collaboration since those situations usually already involve people around a table. The tabletop doesn't seem particularly effective for any mobile tasks, since the interface cannot be brought along with a person easily. Such a large visual surface also typically is poorly suited for tasks requiring privacy.
On the other hand, the interfaces making use of the arm seem well suited for tasks on-the-go, being highly portable. These interfaces don't seem suited for co-located collaboration though, as interacting with someone else's arm can often be a social intrusion. Tasks requiring high-precision also may have trouble here, as sensing on the arm is relatively inaccurate and can suffer from false positives. Two-handed tasks also get incredibly complicated if not infeasible when one arm is required for sensing.
An interesting interface technology I heard from Prof. Pister is here: http://robotics.eecs.berkeley.edu/~pister/SmartDust/ (brief description under Applications, Virtual keyboard). Basically, by having mini-devices attached to our fingertips, we can effectively type on a keyboard without having any keyboard actually present or visible. This sounds promising because it targets the major obstacles regarding keyboards today in a very clean fashion. It is very portable, doesn't suffer from limited screen size or occlusion, and takes advantage of experience with QWERTY keyboards. Because it doesn't rely on a visual, it also frees up screen space compared to soft keyboards used today.
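The fingertip-device idea above amounts to detecting a tap from each finger's sensor and decoding the sequence of taps into keystrokes. A deliberately simplified sketch of that decoding step: the threshold, event format, and one-key-per-finger mapping below are all invented for illustration (real QWERTY decoding would also need hand position, since each finger covers several keys):

```python
# Toy decoder for a fingertip-sensor virtual keyboard: a tap is an
# acceleration spike above a threshold, decoded via a finger-to-key map.
TAP_THRESHOLD = 2.5  # in g's; spikes above this count as a tap (invented)

# Hypothetical one-key-per-finger mapping, home row only.
HOME_ROW = {
    ("left", "pinky"): "a", ("left", "ring"): "s",
    ("left", "middle"): "d", ("left", "index"): "f",
    ("right", "index"): "j", ("right", "middle"): "k",
    ("right", "ring"): "l", ("right", "pinky"): ";",
}

def decode(events):
    """events: list of (hand, finger, peak_accel) tuples -> typed text."""
    out = []
    for hand, finger, accel in events:
        if accel >= TAP_THRESHOLD and (hand, finger) in HOME_ROW:
            out.append(HOME_ROW[(hand, finger)])
    return "".join(out)

taps = [("left", "pinky", 3.1), ("right", "middle", 1.0),  # 2nd spike too weak
        ("left", "middle", 2.8), ("right", "ring", 3.4)]
print(decode(taps))  # "adl"
```

Even this toy version shows why the idea preserves QWERTY experience: the decoding reuses the finger-to-key assignments touch typists already know, with no physical or on-screen keyboard involved.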
Derek Lau - 4/22/2013 13:45:13
One of the main advantages of these technologies is a narrowed gulf of execution. These new technologies aim to emulate "natural" movement, as seen in tangible user interfaces: direct manipulation becomes even more direct with navigation by gesture, compared to GUIs and peripheral input devices. In addition, much of this new technology couples output and input more tightly, displaying information on the same surface that expects the input; this is most evident in the Skinput device but also present in tablets. The coupling reduces the complexity of accessing the output information, since the device is forced to consolidate the two together.
An example of a task for which these new technologies are not suitable is enterprise or professional work. Productivity decreases without a dedicated input device such as a keyboard, because typing on non-tactile surfaces makes it hard to type quickly and efficiently. Another task that would prove difficult for these new interfaces is anything requiring high-velocity movements or rapid, successive input from multiple input devices, such as a fast-paced first-person shooter game.
Intel has released a new interface idea: http://www.technologyreview.com/news/509941/intels-new-interface-idea-is-a-mash-up-of-all-the-others/ The interesting aspect of this interface idea is that it doesn't seek to replace any existing interface technology, but rather to augment and improve the capability of existing interfaces. By creating harmony between multiple forms of input, the pros of each method can be meshed together to create an experience that is flexible and caters to a wide variety of applications and users alike.
Oulun Zhao - 4/22/2013 13:46:57
After reading these two articles, I feel the overall advantages of these new technologies are tangible interaction and increased mobility. Users can interact with the interface in a more tangible way, and some input devices can be simplified and carried with the user easily. These technologies can be very suitable for gaming; however, they might not be suitable for programming, text editing, or tasks that require high precision.
http://now.msn.com/the-leap-interface-technology-lets-users-point-at-the-screen-like-real-life-minority-report-scenario I found it interesting because it can distinguish all ten fingers, and it recognizes the motion of pulling a trigger, which is quite amazing. It is also a very promising technology because I believe it can bring a lot of possibilities to the art and game industries.
Ben Goldberg - 4/22/2013 13:47:07
The main advantage of these new technologies is that they offer many new opportunities to bridge the gulf of execution and allow direct manipulation of items. Using your hands and gestures makes interacting with objects a lot more intuitive in many cases. I would argue that one task these new technologies aren't good for is programming. There is an example of people programming Logo with the interactive table, but I believe this would be hard to translate to other languages such as Java.
Another technology not talked about is the Xbox Kinect:
The Kinect receives input from a camera that measures how far users are from the camera and uses machine learning algorithms to identify players. I find this technology interesting because it works with preexisting technology (TVs and computer monitors) and allows input with just your body. There are no special devices you need in order to use it, so there's nothing to potentially lose or misplace.
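To make the "machine learning on camera input" idea concrete, here is a minimal, purely illustrative sketch: a nearest-centroid classifier that maps a sensed body feature (here, a made-up two-number summary of joint positions) to a pose label. This is not Microsoft's actual pipeline (the Kinect classifies depth pixels with learned decision forests); the feature definition and centroid values below are invented for illustration.

```python
import math

# Hypothetical "training" centroids: mean (hand_height - head_height,
# hand_x - hip_x) features for two poses, in meters.  Invented values.
POSE_CENTROIDS = {
    "hands_raised": (0.4, 0.0),
    "hands_down":   (-0.6, 0.0),
}

def classify_pose(feature):
    """Return the pose label whose centroid is nearest to the sensed feature."""
    def dist(centroid):
        return math.hypot(feature[0] - centroid[0], feature[1] - centroid[1])
    return min(POSE_CENTROIDS, key=lambda label: dist(POSE_CENTROIDS[label]))
```

The point is only that the sensor hands the software geometry, and a learned model turns that geometry into a discrete command, with no held controller involved.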
Ben Dong - 4/22/2013 13:51:59
The main advantage of these technologies is the increased level of control and the more intuitive mapping of actions that these interfaces provide. The surface computing interface allows for much more direct manipulation and interaction with applications and data, while using the human body is also a novel way of increasing interactivity. However, there are some tasks they are not as well suited for, such as video games (where the level of precision offered can't match WIMP interfaces). Keyboard- and text-intensive tasks are also not very suitable for these interface technologies.
One other promising interface technology can be seen in Google Glass (google.com/glass). It has tremendous potential in truly making interfaces more mobile and immediately accessible. The four seconds required to pull out a cell phone (as mentioned in the reading) would essentially disappear, leaving much more time for the user to accomplish what they want. The fact that it can also function as a heads-up display makes it amazing at conveying information, such as real-time turn-by-turn directions. It really has the potential to revolutionize mobile computing.
Avneesh Kohli - 4/22/2013 13:54:05
The first article talked a lot about surface table technology, and I think its main advantage is clearly its use in collaborative environments. By diverging from the single-user input model that came from WIMP, it allows multiple collocated users to share and control information together, in a device that uses some of the direct manipulation techniques that have become commercially popular today. The drawbacks are that it's clearly not portable, and that it will be a difficult adjustment for users, challenging the single-focus paradigm they have become accustomed to through their use of computers over the last several decades.

Bio-acoustic sensing holds phenomenal promise in allowing us to essentially have an input "device" wherever we go, giving us even more flexibility than mobile phones already allow. Given our familiarity with our own bodies, this wouldn't be an input device you would really need to learn, and in some cases you could probably operate it without even looking. The remaining challenges are obviously the accuracy of such input, but also the speed at which you could enter input compared to dedicated devices like keyboards.

One interface technology that's captured my interest is Leap Motion, which will be released very soon (www.leapmotion.com) and which for the first time brings us the ability to directly manipulate an interface without directly touching it. It's the kind of interaction technique that's really only been seen in movies. I think its advantage is clearly moving the input space from the 1 or 2 dimensions of mice and surfaces into full 3D. I can imagine that with the extra dimension, you'll be able to be far more expressive with your input than with other devices. Accuracy, and support in current OSes/apps, is obviously a huge obstacle to overcome, though.
Dennis Li - 4/22/2013 13:57:22
The interactive surfaces technology is an expansion of a technology we are already familiar with from mobile displays. By allowing touch, we are given a sense of being able to directly and physically manipulate the images displayed, a comfortable feeling for us humans. The advantages of this technology are vast. It allows people to create and share large displays of images and ideas in an intuitive and natural manner. We are not limited by the bounds of keyboards and mice and are able to really express ideas. These technologies are not very suitable for long-distance interaction, however, because of the solely visual display.
The method of using other surfaces as input devices overcomes the small input surface of mobile phones. This ability is only available, however, when the user is not moving, which vastly limits when it can be used. The method of reading your muscles as an input device allows the user to interact with a mobile device without having to actually take it out or drop what he is currently holding. The limitation of this method, however, is that if he is occupied and unable to actually access his phone, he will not be able to read what is displayed on it.
I think this real-world touch recognition technology is promising because it essentially has the potential to make anything and everything an input device. Using just the webcam, we will no longer be limited to the keys on the keyboard and the mouse for input.
Alysha Jivani - 4/22/2013 14:02:16
The first article discusses tangible user interfaces, namely tabletops, and the new interactivity that they can provide (beyond the multi-touch that we have on trackpads or on tablet devices). I am most interested in the affordances that tabletops provide for team collaboration and interaction and the fact that a tabletop might make control more equally distributed and accessible for the individuals who are working together. Additionally, the idea of allowing interaction and additional data by using objects on the tabletop is interesting because it could provide a more kinesthetic experience and better tactile feedback. I would really like to see how these can be implemented for classroom activities and educational purposes. Classrooms that use inquiry-based learning methods often use “manipulatives” (objects that aid in the activity) and I think incorporating tabletops would enhance a child’s learning experience (especially in group activities) even further. This probably would also have a huge impact on the educational paradigm for children with learning disabilities and/or developmental disorders. However, I think there are some challenges like portability and usability on the go when it comes to tabletops that might make it hard for them to become mainstream. Additionally, even if you’re working at a circular tabletop, you still have to deal with the potential of having objects appear “upside-down” to participants standing across from one another, which may or may not be an issue (most likely an issue in an educational setting).
In the second article, there was a discussion about the potentials of muscle-computer interfaces and bio-acoustic sensing. I think that they’re great in that they harness qualities inherent to the human body, but I think this also presents an issue because of physical/biological variability across humans. These devices would probably have to be calibrated each time to account for differences in human bodies and gestures of individuals. Also, the user has to buy/wear an additional piece of hardware, which is probably uncomfortable to wear at all times. I think one of the main advantages of muscle-computer interfaces is that it allows for interactions with a flat surface and in 3D space. It also eliminates some types of interference that we might encounter with technology that’s reliant on computer vision/cameras or speech/auditory recognition. I think a main disadvantage of this (other than the issues of calibration and accuracy) is that it doesn't seem to have obvious potential for collaboration or interaction with other people wearing these devices, i.e. it’s focused more on the personal computing experience.
https://www.leapmotion.com/ Leap Motion is becoming increasingly well-known; it’s a device that allows 3D gesture-based interaction. I think it’s very interesting since it combines the natural ease and kinesthetic interaction capabilities of gestures in real life. Again, I’m mainly interested in the benefits this could provide as a learning tool for students and in the classroom setting. Later down the line, I think it would be interesting if it could incorporate more of a collaborative aspect (i.e. allowing for multiple users), thus making it accessible for group activities in which every member can participate. It also seems like it allows for the use of objects, so perhaps that could be another added layer to the data embedded in the interaction.
Aarthi Ravi - 4/22/2013 14:02:58
Main advantages of these new technologies:
- Allows direct manipulation of objects
- Portable and convenient
- Interactions are very close to the mental model, thus making it very intuitive
Tasks they are not suitable for:
- Computationally expensive tasks which require a lot of power, as long-lasting miniature batteries are not available yet
- May not be applicable in certain lighting conditions; for example, Kinect's infrared technologies don't work outdoors
Interesting interface technology: http://www.designboom.com/technology/myo-wearable-gesture-controlled-arm-band/ The MYO band uses muscle activity to control devices, navigate through applications, and control user interfaces through gestures signaled by muscle activity. The MYO band is light and could be worn like a watch. If this technology were integrated into a watch, it would have a promising future, as the user needn't carry any extra device and need not fear forgetting or losing the band, since it is always worn.
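As a rough sketch of how "muscle activity as input" could work: an armband samples electrical activity from forearm muscles, and the software watches for windows where the signal's energy crosses an activation threshold, treating each crossing as a gesture event. The window size, threshold, and single-channel signal below are all invented for illustration and are not MYO's actual parameters or algorithm.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of muscle-signal samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def detect_clench(samples, window_size=4, threshold=0.5):
    """Return start indices of windows whose RMS energy exceeds the
    activation threshold, i.e. candidate gesture events."""
    events = []
    for i in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[i:i + window_size]) > threshold:
            events.append(i)
    return events
```

Real systems would classify multi-channel patterns rather than threshold one channel, but the basic shape (continuous noisy signal in, discrete gesture events out) is the same.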
Edward Shi - 4/22/2013 14:03:18
I believe that tabletop interfaces will be popularized in the future. The ability for multiple users to interact with a single system at the same time is an invaluable tool. As we saw in this class, one challenge of brainstorming is letting ideas flow easily without one person dominating and stifling others' creativity. We would obviously achieve a greater quantity of ideas if everybody were able to interact at the same time. Even whiteboards and markers on posters lend themselves to one main user while others sit on the sidelines waiting to input information, or rely on the marker holder to write down their ideas. These technologies could also be useful for any group activity, such as group meetings. There would be limitations, as this may not be suitable for tasks that need to be mobile or that have a single user. There would be no need for the big interface if it must be mobile, as users would not want to carry anything big; size and mobility tend to be inversely proportional.

I feel that bio-acoustic technology could be extremely useful in the future as well. This way we do not have to constantly take out phones or other devices. However, it may have difficulty with tasks that require constant movement, where we cannot focus on specific tasks. It may not be useful if we have to move our arms a lot: we would not be able to use an interface it projects, or the movement may be too erratic for the interface to filter out. Also, specifically for Skinput, it may not be suitable for tasks that require a big display, as it is limited to the surface area of our skin, and it would depend on how thin or big someone's arm is. Without the capability to hold more detail, it will be limited to simple, non-detailed tasks.
http://www.technologyreview.com/article/413991/building-blocks-of-a-new-interface/ Siftables is looking to create an interface based on actions we perform on an everyday basis. It targets children, who use the blocks to make words. I think this may be good for children, as building blocks are familiar tools from childhood. It also coincides well with the idea that big problems or concepts are made of simple building blocks; they could be an invaluable learning tool for parsing large problems into specific blocks, or in this case Siftables.
Sihyun Park - 4/22/2013 14:06:08
The main advantage of the new technologies addressed in the articles is that they suit the current trend of mobile computing. People nowadays carry substantially powerful devices in their pockets, and touch-screen and wearable interfaces suit the tasks users perform on the go perfectly. Most handheld devices have touch-screens that allow users to directly manipulate the interface with their fingers, without any additional input devices. Users often execute tasks involving more than one item; for example, a user might have a product with a QR code and scan the code with a smartphone camera. In such cases, a screen that scans the item when it is placed on the screen (e.g., the Microsoft example) is useful. The article also addresses wearable interfaces, like the muscle-computer interface and bio-acoustic sensing. These interfaces allow the device to become less bulky (though modern smartphones are already quite portable) and to truly become a part of people's lifestyles.

However, these technologies are not suitable for professional and intricate tasks, such as web design, coding, and writing. When designing for the web, for example, a designer often uses the web inspector to select a div, preview a stylesheet, and write stylesheets/scripts on the fly. This involves "hover," an input state that does not exist on touchscreen devices. The lack of a hover state makes web inspectors very difficult to use, along with the difficulty of writing stylesheets due to increased error rates from touch-screen keyboards. Coding and writing are other tasks made difficult by touchscreen technologies, and they are nearly impossible to execute with wearable devices. As such, these new technologies are suitable for light tasks, but for professional tasks, existing technologies serve better. An interesting interface technology that might solve a problem addressed above is Samsung's Airview on its Galaxy S4/Galaxy Note 2.
(http://www.youtube.com/watch?v=VRzUzRD9Y8k) I pointed out above that the lack of a hover state makes many tasks on touchscreens very difficult, such as using the web inspector. Also, some techniques prominently used on websites, such as showing a popover on hover, are impossible on touchscreen devices for the same reason. Samsung's Airview eliminates this problem by achieving the hover state even on a touchscreen device. For example, when a user places a finger slightly above the screen, it executes "hover" behavior, such as showing a popover preview. Right now the feature is quite error-prone, but when improved, it might open up an entirely new input model for touchscreen devices: from just tap or no tap, to tap, no tap, and hover (an intermediary).
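The "tap, no tap, and hover" intermediary described above can be modeled as a simple three-state input mapping, in the spirit of Buxton's three-state model of graphical input. The proximity thresholds here are made-up values for illustration, not Samsung's actual sensing parameters.

```python
def input_state(finger_height_mm):
    """Map sensed finger height above the screen to an input state.
    Thresholds are illustrative, not real device parameters."""
    if finger_height_mm <= 0:
        return "touch"         # finger contacts the screen: a tap
    elif finger_height_mm <= 15:
        return "hover"         # finger floats near the screen: Airview-style preview
    else:
        return "out_of_range"  # no input
```

The design point is that adding one intermediate state recovers the preview/tooltip interactions that desktop mouse interfaces get from hover.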
John Sloan - 4/22/2013 14:09:57
The main advantages of these new technologies are that they strive to make the technology available in a more direct and convenient way, such that it is simply there all the time. For example, the muscle-movement-sensing armband means you do not even need to pull out a phone, and the interaction is practically beyond direct manipulation because it uses your already-known muscle movements. This is especially helpful in situations where you may not be able to see or your environment is moving, like on a bus. People know their bodies so well that they can still use these technologies with ease in difficult situations. Also, you take your body everywhere, so it's always available.

The shift to bio-acoustic input definitely has its advantages in ease of use and direct manipulation, but there are also tasks it is not suitable for. For example, using voice commands cannot be private, nor can making gestures with your hands in public. As well, the bio-acoustic input is usually a very noisy signal, and it is often difficult to interpret the data meaningfully. This can lead to less detailed functionality, since it is not yet as precise. Also, what if the user is doing normal activities that accidentally activate the system or perform an unwanted task? And how can you use the system while driving, or in some other situation where your body is preoccupied?

Another new interface that I found promising is FluidPaint. It allows a touch interface to be used with wet paintbrushes to emulate the experience of real-world painting. It is really cool because the electronic paint is created using the touch input and by sensing the water patterns that follow after the brush is lifted. This is a great example of bringing direct manipulation to art.
Linda Cai - 4/22/2013 14:12:06
I think that these new technologies can be much more intuitive than the WIMP paradigm, and hence easier for users, especially new users, to use. These interfaces should be able to make doing many everyday tasks faster than before. However, there are still a number of tasks that they are not suitable for, such as intensive word processing or coding, in which typing is still the best option.
Google Glass seems interesting in that it no longer really requires hands to operate, thus allowing the user to use it while doing other tasks. Further, since less area is needed, users can use it in more crowded areas, whereas using e.g. a smartphone requires a bunch of space in front of the user.
Brett Johnson - 4/22/2013 14:14:27
The muscle-computer interface discussed in the second article is advantageous in that it removes the time and effort of pulling a phone out of a pocket. However, I think that because the interface, at least the one implemented and shown, projects images onto the arm, this interaction would actually be inferior to a regular smartphone. The visual distortion of an image projected onto something as irregular as the human arm is never going to be as clear as the IPS displays most current phones use, and, as the authors state, the touches will probably never be as accurate.
The touch tables described in the other article are interesting because they facilitate multiple users collaborating on one task, like a classic whiteboard brainstorm, with the advantage of being digital. One thing that this technology is not well suited for, at least from what I read, is working when everyone is not collocated. It would be interesting to see this technology work for many different users located in different places, interacting on their own tablet or other multitouch surface.
One interesting technology that I found is the head mounted display for medical teams: http://hci.stanford.edu/publications/2013/medical-displays.pdf . I think this is promising because they took into account the fact that doctors and medical staff must make quick decisions and don't really have time to refer to external displays at a fixed location.
Lemuel Daniel Wu - 4/22/2013 14:15:04
The main advantage of these new technologies is that they do not require as many human gestures or as much hardware: the touchpad on one's arm removes the need to hold a cell phone when trying to look something up or contact somebody, and mutually reactive chips/pieces like the Reactable's avoid having to input one piece of information at a time.
These interactive devices and surfaces, however, most likely aren't as suitable for apps that do not require user interaction (Reactables are most useful for interaction-heavy functionality). They also don't seem as useful for information that one would want to carry along and use in crowded areas, like a subway. Security would also be an issue.
Another kind of interactive interface technology can be found at flutter.io, which allows human hand/arm gestures to control what happens on the computer. These gestures are read by the computer's webcam, which sends signals to specific programs to change their behavior. This is innovative because it can reduce the time people spend on keyboards, which often leads to carpal tunnel syndrome. It can also reduce dependence on trackpads and the mouse, and allow freer interaction with the computer.
Minhaj khan - 4/22/2013 14:16:36
I found the ReacTable interface for musical composition very interesting. It's a creative perspective on music creation and could lead to higher creativity, given its novelty and unique approach to interface interaction. One task it could be very useful for is social collaboration, where multiple users gather around the ReacTable and move pieces and compositions around. Aside from its unique use case, though, it doesn't seem like the best interface for general computing tasks, lacking traditional input and pointing capabilities and being bulky as well.
http://m.technologyreview.com/news/509941/intels-new-interface-idea-is-a-mash-up-of-all-the-others/ This technology by Intel goes beyond the capabilities of Microsoft Kinect. Intel wishes to pioneer the landscape of perceptual computing, in which the camera interface is another dimension of interaction on top of the existing touch screen, mouse, and keyboard interfaces present in laptops. Given its ability to detect user motion and visual hand gestures, I find this can potentially be a great addition to existing interfaces. Although it's a bit harder to find use cases that are very immersive and dominate user interaction, the aim of this technology is to be an add-on rather than the primary interactive interface, which allows for a lot of creativity in human-computer interaction by enhancing the combination of possible interaction gestures.
Juntao Mao - 4/22/2013 14:18:39
In my opinion, the main advantage of the tangible user interface, especially the tabletop technology, is that it embeds technology into everyday physical objects, not just phones and computers. This integration emphasizes the physical embodiment of data, allowing even more direct manipulation than GUIs and thus decreasing the gulfs of execution and evaluation. With a tabletop TUI, users can also interact with objects other than the tabletop itself, creating a rich physical experience not limited to the traditional concept of choosing and clicking only the objects in the UI. Depending on the physical object involved, this technology may not be suitable for tasks that require repeated actions at large scale. For example, if a tabletop UI task requires the user to point from the left end to the right end of the table, and the user has to perform this task a lot, it would tire the user out.

The main advantage of interfaces on the go, like the reading's examples of the human body as the interaction platform, is that there is always "a consistent, reliable, and always-available surface." With muscle-computer interfaces, the user may even control things using fine gestures that they would normally use in the physical world. This also tightly integrates technology into everyday physical interaction. A task the muscle-computer interface would specifically not be good for is one that requires the user to make several unintuitive gestures (ones not corresponding to everyday-life gestures).

Another interesting interface is the brain-computer interface. It may be categorized as an interface on the go, but I think it is quite interestingly different from the examples mentioned. http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface http://www.cnn.com/2009/TECH/12/30/brain.controlled.computers/index.html I found the prospect of this interface succeeding very exciting in itself. It goes beyond integrating technology into everyday objects, to a level where it is really fully integrated with us. A lot of current efforts in this field focus on using BCI to help the sick and disabled, and they are achieving great results, which shows the potential of this technology.
David Seeto - 4/22/2013 14:19:39
One of the main advantages of these new technologies is that they shift away from the WIMP construct and focus on other forms of input. This choice, whether on interactive surfaces or on our own biological body surfaces, changes the way people interact with technology. In the first case, the relationship between data and users changes; data has become more tangible, can be physically and directly manipulated, and, most importantly in my opinion, can now be interacted with by multiple users. This sense of cooperation is central to CSCW, something that single-user WIMP interfaces were lacking or insufficient in. In addition, the form factor of computers is being revolutionized through this new technology. No longer are users restricted to carrying around a rectangular box to do their computing. It can now be mobile and always around, in the case of physiological computing, or can take advantage of other objects that might make sense for computational work, such as tables.
Nonetheless, at least for the two articles in the reading, the field of personal computing might not want to adopt such changes. Surfing Facebook, for example, may not warrant the use of a tabletop surface or a physiological surface. In the end, tasks that involve a lot of text input, or that require the user to feel a tangible input device such as a keyboard or a mouse, may make such new technologies unsuitable. Moreover, perhaps all the user wants is a screen, not a surface with information projected on it. This can be the case when accessing personal data or in modern computer gaming, for example.
Typical, I know, but pretty interesting: Google Glass is a HUD projected onto a pair of glasses, allowing the user to interact with a computer that is just there; the user simply lives life as they normally would and lets Glass intervene when its functions are needed. This is interesting because it is one step closer to computers becoming ubiquitous in every second and minute of our lives. What is promising is honestly the fact that Google is working on it. They have the resources to promote such new technologies, slingshotting them into the mainstream. If successful, HUDs can become more popular.
Haotian Wang - 4/22/2013 14:22:17
I think the main disadvantage of tabletop technologies is how non-mobile they are. A tabletop is stuck in one place, since its size is part of the reason it can support multiple users. However, this size is also detrimental: computers are becoming smaller and more mobile, so these big tabletop computers almost resemble the old multi-terminal mainframes that were built and stuck at office locations. If a lot of people get together in the same location anyway, such as an office, this kind of technology would be suitable. However, it would not be suitable for anything that needs to be transported.
The main disadvantage of mobile technologies that use novel forms of user input is that, right now, they all seem to require too much power to run. The arm-band prototypes with projectors seem like they would not last more than an hour on today's batteries, which makes their utility as actual mobile input units very limited. Until better batteries are developed, it seems these new input technologies cannot be easily deployed. This kind of input is very suitable for mobile use except for the battery issue, but it is not suitable for collaborative work or any kind of complex work, since the displays of these novel devices are very limited.
http://kenhinckley.wordpress.com/category/input-devices/ The article I found is about GroupTogether, a project that uses overhead motion-detection cameras to deduce the positions of multiple tablet users and facilitate data sharing. I feel this article is promising because it combines mobile, to-go technologies with stationary, stuck-in-room technologies. The ceiling cameras (stuck in the room) can deduce user motions, so that when a user tilts his tablet (mobile), data is shared with other tablet users next to him. But this data can still be taken on the go and transported. I feel this is a novel way of supporting multi-user interaction using a Lego-building approach, such that each of the pieces (tablets) can be used as an independent computing device, but can be made part of a single task- and data-sharing computing "room" wherever the overhead cameras are present.
Sangyoon Park - 4/22/2013 14:25:00
Both articles briefly explain the history of user interface technologies and examine the newer (tangible interaction) technologies that are still under research. One uses tabletops, a projector, and a motion sensor (to detect user motions); the other replaces the tabletop with the human body (i.e., an arm). All these technologies share the advantage of being more tangible and more convenient. However, they are weak at tasks that require extremely detailed control: since they use projectors and motion sensors to determine user input, they are not suitable for pixel-by-pixel work.
One good article I found is about a UI for directly touching target objects. Link: http://www.fujitsu.com/global/news/pr/archives/month/2013/20130403-01.html This is technically similar to the tangible interaction introduced in the article. The main difference is that it uses the target objects themselves as the medium where the human interaction happens (it does not require a tabletop or a human arm to serve as the display). This technology lets users touch directly what they see as an object, not a screen that displays what they want to see, so users don't need to hold or find another object (such as a tabletop or their own arm) to use it.
Christine Loh - 4/22/2013 14:27:18
The main advantage of these new interface technologies is that they give us more to work with in terms of a single screen and the functions that can go with it. For example, multi-touch will be extremely useful, with more than one input going into the system at the same time. However, they could be unsuitable for tasks that depend on using one input at a time, or for tablets that don't have enough processing power to handle many inputs.
I think the following link about Facebook Home is interesting; it shows that even with a lot of user testing and design experience, Facebook engineers/designers came up with an app that still may not necessarily be the preferred choice of users around the world: http://techcrunch.com/2013/04/21/facebook-home-hits-500k-in-five-days-pales-in-comparison-to-instagrams-android-shift/.
Nadine Salter - 4/22/2013 14:32:56
These new technologies have a number of advantages: the futuristic "wow!" factor can lead to renewed interest in, and attention to, developing software that's different from the current lineup; enhanced collaborative technologies are useful in bringing more powerful replacements to current systems like whiteboards (which, as any Soda dweller can attest, routinely lack decent markers, and require additional manual effort to save or share information from); and wearable computing (notably Google Glass) allows for the expansion of computers into more and more areas of life.
They are, of course, not a panacea for all computer problems—e.g., wearable computers cannot currently compete with full-sized keyboards in terms of speed and accuracy of text input, and tabletop computers are neither portable nor affordable. However, the Leap Motion (https://www.leapmotion.com/) is quite promising: I'm struck by the fact that it's affordable ($80 to preorder), works with many platforms, and has an open developer toolkit, meaning that it will be widely available and is well-positioned to become integrated with a number of current popular software packages.
Tananun Songdechakraiwut - 4/22/2013 14:38:22
Help facilitate collaboration - Particularly in the case of sharing control, where people share on-the-fly-generated data in the form of image collections, this will be useful in many real-time collaboration situations. Another new technology, micro-interactions, will help reduce the time spent completing interactions by expanding the set of quick tasks. An advantage of using skin as a screen surface is that skin is reliable and consistent, and we always take it wherever we go. We are also familiar with our own bodies, which lets us interact even in extreme situations, and do it quickly.
Since these technologies require the use of sensors, the signal might contain noise that reduces accuracy. Also, bio-acoustic sensing requires a great amount of training and thus is not yet practical.
Interesting Interface Technology
It is interesting to see an interface that lets your brain (thoughts?) do things. I believe that people's capabilities are limited by physical factors. The things we imagine can be tremendous, but in most cases they remain just dreams because we are limited. Imagine being able to make something real from pure thought (by using somebody else! or dead bodies!!). That could turn many impossibilities into possibilities, and this is the very first step!!!
Brian Wong - 4/22/2013 14:39:08
Tangible interactions are great for movement-based actions. The pinch-to-zoom example they use is a good case: in the real world there is a movement for compressing an object and one for expanding it, which correspond to decreasing and increasing the distance between fingers, respectively. The translation of thinking of letters and words into physically representing them, however, is much faster by typing than by physically writing on a tablet surface.
The main advantage of "Interactions on the Go" is that these inputs allow a user to change their normal processes less. While walking to a restaurant, for example, a user may continue to walk and speak "directions" at the same time. On-the-go interactions, however, lose a lot of what tangible interactions bring, because tangible interactions generally require a physically large input space, which on-the-go interactions do not afford at this point.
This technology is called the Leap, and it is only a tiny box placed in front of your laptop. It recognizes gestures in 3D space, somewhat like a Kinect. This technology is promising because 1) it's portable and tiny, and 2) it requires no other "tangible" object to be used effectively, meaning we could possibly see it integrated into the bottom of a tablet or as a wristband attachment in the future.
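The kind of gesture recognition a device like this performs can be illustrated with a toy example: deciding whether a tracked hand is pinching, given 3D fingertip positions. This is not the actual Leap SDK API; the threshold and function names are assumptions for illustration only.

```python
import math

PINCH_DISTANCE_MM = 20  # hypothetical: fingertips closer than this count as a pinch

def distance(a, b):
    """Euclidean distance between two 3D points (millimeters)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_pinch(thumb_tip, index_tip):
    """A frame counts as a pinch when thumb and index fingertips nearly touch."""
    return distance(thumb_tip, index_tip) < PINCH_DISTANCE_MM

print(is_pinch((0, 100, 0), (5, 105, 0)))   # fingertips ~7 mm apart -> True
print(is_pinch((0, 100, 0), (60, 100, 0)))  # fingertips 60 mm apart -> False
```

A real tracker runs a check like this on every frame and also debounces across frames, so that momentary tracking noise doesn't toggle the gesture on and off.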
Eric Xiao - 4/22/2013 14:44:47
The main advantage of these new technologies is that they use things and capabilities we already have in new and different ways. I really love the interaction and interface using the human skin. It isn't suitable, however, for any task that needs two hands or a large surface area. These interfaces are meant for fast, quick interactions rather than slow and methodical ones.
This is really cool, because it allows us to interact with computer applications device-free at a precision that has been unmatched. Soon, interfaces will match those found in the movies, where we can pull forms from one part of the screen to the other by grabbing them and pulling them aside with our hands.
Achal Dave - 4/22/2013 14:47:18
Tangible interfaces' key advantage is in the direct mapping between the user and the system--the idea of using your hands directly to manipulate an interface is intuitive and does not require any mental overhead. However, this same direct mapping is an issue when we require some input like text, which is not easy to do without some sort of intermediary (keyboard).
Biosensing interfaces allow the interface to be passive, likewise removing the user's mental overhead of working with an interface. However, given current technology, this can be an issue: such interfaces can produce many false positives, and may fail to detect some impulses at all.
One interface that I think is very interesting is voice (http://www.pnas.org/content/92/22/10031.full.pdf). I believe that in many ways, voice recognition technology can allow for a very good understanding of users' intents, without causing too many false positives (e.g. via modes or activation words "Xbox play music").
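The activation-word idea mentioned above ("Xbox play music") can be sketched as a simple gate in front of the recognizer's transcript: commands are only acted on when the utterance starts with the wake word, which cuts down on false positives from background speech. The wake word and function name here are hypothetical, and a real system would work on audio, not text.

```python
ACTIVATION_WORD = "xbox"  # hypothetical wake word, as in the "Xbox play music" example

def parse_command(transcript):
    """Return the command only when the utterance begins with the activation word;
    everything else is treated as background speech and ignored."""
    words = transcript.lower().split()
    if words and words[0] == ACTIVATION_WORD:
        return " ".join(words[1:])
    return None

print(parse_command("Xbox play music"))         # "play music"
print(parse_command("let's play music later"))  # None -- no wake word, so ignored
```

The gate is deliberately dumb: pushing all the ambiguity-resolution into one cheap prefix check is what keeps an always-listening interface from triggering on ordinary conversation.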
Zeeshan Javed - 4/22/2013 15:44:24
The main advantage of new technologies, in my perspective, is that the ability to learn changes as technology does. Implementing interactivity may help children learn how to read or do simple mathematics. These technologies may be unsuitable when they are projector-based, high-cost, and not practical to use.
This app is extremely interesting and promising as it allows simple user based actions to be implemented in music based applications.
Ben Dong - 4/22/2013 21:09:48
The main advantage of these technologies is the increased level of control and the more intuitive mapping of actions that these interfaces provide. The surface computing interface allows much more direct manipulation of and interaction with applications and data, while using the human body is also a novel way of increasing interactivity. However, there are some tasks they are not as well suited for, such as video games (where the level of precision offered can't match WIMP interfaces). Keyboard- and text-intensive tasks are also not very suitable for these interface technologies.
One other promising interface technology can be seen in Google Glass (google.com/glass). It has tremendous potential for truly making interfaces more mobile and immediately accessible. The 4 seconds required to pull out a cell phone (as mentioned in the reading) would essentially disappear, leaving much more time for the user to accomplish what they want. The fact that it can also function as a heads-up display makes it amazing at conveying information, such as real-time turn-by-turn directions. It really has the potential to revolutionize mobile computing.
Kevin Liang - 4/22/2013 23:21:48
The wrist technology is great because it requires very little movement and energy. You can simply make some gestures and it will know what you are trying to do. The disadvantage is that it does not seem as intuitive to use. The learning curve is steep, meaning that non-tech-savvy people will have a hard time adjusting; it may even be impossible for them. The other one is the tabletop technology. This technology is great because it doesn't really require any physical devices. For example, the keyboard projected onto the tabletop is amazing because it does not require an actual keyboard. But the disadvantage is that it will change the way people type. People are used to the tactile feedback of typing on a real keyboard. This is also not the same as touch-screen keyboards, because people usually type on a touch-screen keyboard with one finger per hand.
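The projected-keyboard idea boils down to mapping a detected touch point on the table to whichever projected key rectangle it falls inside. A minimal sketch, assuming a fixed grid layout (the key size, layout, and function name are all hypothetical):

```python
# Hypothetical layout: each projected key is a 40x40 mm rectangle on the table surface.
KEY_WIDTH, KEY_HEIGHT = 40, 40
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y):
    """Map an (x, y) touch on the projected keyboard (mm from the top-left corner)
    to a character, or None if the touch lands outside every key."""
    row = int(y // KEY_HEIGHT)
    col = int(x // KEY_WIDTH)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

print(key_at(10, 10))   # top-left key -> "q"
print(key_at(90, 50))   # third key of the middle row -> "d"
print(key_at(500, 10))  # off the right edge -> None
```

Because there is no tactile feedback, real projected keyboards compensate in software, e.g. by highlighting the detected key or playing a click sound, which is exactly the feedback gap this post points out.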
http://www.oculusvr.com/ The Oculus Rift is a new technology that is emerging. It allows for virtual-reality gaming: you put on a headset and it feels like you are actually in the game. This is a great start, but it would be even better if you did not have to wear a headset. Gaming is a way to escape reality, and the Rift makes the game feel like a new reality.