Ubiquitous Computing

From CS260Wiki

Bjoern's Slides


Extra Materials

Discussant's Slides and Materials

File:UbiquitousComputing CharlieHsu.pdf

Class Activity Handouts: File:UbiquitousComputing CharlieHsu handout.pdf

Reading Responses

Charlie Hsu - 9/5/2010 12:45:36

The Computer for the 21st Century

This reading described Mark Weiser's vision of 'ubiquitous computing', where computers as a form of information technology become as ubiquitous and commonplace as writing. Weiser explored computing from a perspective different from the typical 'personal computer' view and instead imagined a computing environment with many computers per person, of varying sizes according to their purposes and strongly networked in a location-aware environment. Weiser gives an example of three different sizes of computers ('tabs, pads, and boards'), addresses the technology necessary for ubiquitous computing (hardware, software, and network capabilities), and discusses the privacy issues raised by ubiquitous computing.

Certainly, we can see parts of Weiser's concepts of ubiquitous computing being implemented today. 20 years later, the hardware capabilities Weiser described have been achieved. The network is strong enough too; wireless networks are becoming more and more easily accessible, especially through mobile devices. Ubiquitous computing has created real applications in the world as well; for example, networked, embedded computers in home systems allow users to control their thermostats, home security, and more from their mobile devices. I have personally experienced a work environment where the idea of mounting iPads on conference room doors to display schedules and allowing users to swipe an ID card to instantly book a room was proposed, and the limitation was not in software or network, but only in hardware cost.

However, we are still far from the sort of cooperation between devices and 'invisibility' of computing that Weiser describes. Even with the advancements in mobile devices, the user's attention is often still focused on a "single box", a personal computer of sorts, whether it be a laptop or a cell phone. I feel that ubiquitous computing certainly has its applications in the work environment (offices, libraries), but I question its utility in home and personal life. Many of Weiser's examples with "Sal" are simply computers used for location tracking, which immediately raises issues of privacy.

I was not entirely satisfied with Weiser's treatment of the security and privacy issues, though. I feel that Weiser seemed to limit his view of ubiquitous computing to location-based tagging and office utility, and his treatment of privacy issues mirrors that. I also felt his thoughts on the software requirements for ubiquitous computing were lacking. I think these two go hand in hand: with software engineered to protect users from revealing too much, engineered to carefully limit ubiquitous computers to only necessary and carefully chosen tasks, and engineered to guard against human error (losing an ID tab, for example), privacy concerns in ubiquitous computing can be addressed much better.

At Home with Ubiquitous Computing: Seven Challenges

In this reading, Edwards and Grinter explore the possible challenges of bringing ubiquitous computing to a domestic home environment. Technologies for the home address different needs than typical work environments. Home technology also raises more social and ethical issues than technology in the office. Edwards and Grinter describe seven challenges that designers for ubiquitous home computing face.

In my opinion, the paper does a comprehensive job of describing the design considerations needed for bringing ubiquitous computing to the home. The problem is an important one: the potential for quality-of-life improvements is dramatic, and computing in domestic affairs is becoming more and more prevalent. Home ubiquitous computing is indeed a growing market: embedded home systems have taken advantage of growing mobile capabilities to offer remote control over systems. However, today's domestic systems are still relatively independent of one another, and the challenge of 'impromptu interoperability' remains unsolved. We also see these systems being implemented in parts, as additions to already existing homes, as the first challenge describes.

Edwards and Grinter imply that a great deal of research and planning must be done to successfully implement ubiquitous computing in the home. Many research proposals were outlined in the paper, and more came to mind as I was reading. Research needs to be done on solid conceptual and mental models that account for the gradual introduction of ubiquitous computing into homes (the "accidentally" smart home). Research could be done on how to develop a home management system via the utility model, where information flows from the network and the client is relatively static. Perhaps research could be done on a central hub that all home devices are required to communicate with, instead of attempting to have every device communicate with every other.
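The central-hub idea above could be sketched as a tiny publish/subscribe broker: each device only needs to speak to the hub, which routes messages by topic, so no device has to understand every other device's protocol. This is purely an illustrative sketch with invented names, not anything proposed in the paper:

```python
# Hypothetical sketch of a home hub: devices register interest in topics,
# and the hub fans events out to them. All class and topic names are
# invented for illustration.

from collections import defaultdict

class HomeHub:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """A device registers interest in a topic (e.g. 'temperature')."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """A device reports an event; the hub delivers it to subscribers."""
        for callback in self._subscribers[topic]:
            callback(message)

hub = HomeHub()
readings = []
hub.subscribe("temperature", readings.append)  # e.g. a thermostat display
hub.publish("temperature", {"room": "kitchen", "celsius": 21.5})
```

With this shape, adding an N+1th device means one new registration with the hub rather than N new pairwise protocols, which is exactly the interoperability cost the hub model tries to avoid.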

I was particularly interested in some of the concerns raised by the fifth challenge, the social implications of aware home technologies. Edwards and Grinter state that the combination of the washing machine, hot water heaters, irons, indoor bathrooms, etc. changed society's expectations of people and may have actually increased the amount of unpaid work done in the home by women. They conclude that designers need to be aware of the possible broader social effects of their work, keeping in mind how labor-saving devices may actually be labor-changing, and how technologies influence parenting. There are many more possible social implications of ubiquitous home computing that come to mind: the increased danger of user error caused by automation in the home, the impact on domestic service jobs that may be replaced by ubiquitous computing, and more.

Kurtis Heimerl - 9/5/2010 17:31:37

The computer for the 21st century

This paper gives a fantastical vision of the future, comparable to low-quality science fiction. It’s a few engineers attempting to predict the future, arguing for “ubiquitous computing”, a vision of the world where computers are everywhere and serve many functions. These computers are so commonplace as to be below notice; they’re in our clocks, our post-its, and everything else.

I’d argue this paper contributes highly to our view of HCI, but not in a positive manner. It is so full of whimsy, so completely divorced from reality, that only a select few could possibly take it seriously. This shows, to me, that HCI needs a much firmer grasp of economics than it currently has. The enemy of such visions is not competing technologies, but “good enough” solutions. The best example I have from the paper is the “digital book spines”: the notion that, for some reason, these should be screens. This is data that doesn’t change often, and there are real, tangible costs and benefits to using static text like paper or billboards. The question is not “could this be done digitally”, but “should this be done digitally?” I can imagine a world where computers do everything, as these authors have, but I think the realistic vision is one where computers do what computers do well.

This sort of research continues unabated. The “digital thread” project Bjoern highlighted at the beginning of the semester is a classic example, at least to me. Now, a world where people could change the entirety of their outfits with little effort seems like a big win to me, and maybe this is progress towards that end. As a standalone technology though, the tradeoffs seem to not favor the technology. And this is the key point, every decision is a tradeoff and we need to analyze them in that context, not alone.

Lastly, they guessed that people would continue to read newspapers. That’s hilarious in retrospect; they got that whole morning routine about as wrong as they could. However, it’s better than not guessing at all, I suppose.

At Home with Ubiquitous Computing: Seven Challenges

In this paper, they detail seven challenges that need to be resolved for “home-based” ubiquitous computing to really take off. These are: incremental upgrades, interoperability (given the previous), administration (of key interest to me as well), designing for domestic use (this is dumb, designing for your field is the research, not the key challenge), social aspects (also dumb), reliability, and lastly inference.

I think this work is phenomenal, and a perfect counterpoint to the previous reading. They even used some of the exact same terms, arguing that we should look at trade-offs in the space rather than rote gains. Some of the challenges are weak, for example designing for domestic use and social aspects. These are generic HCI problems, not in any way focused on ubicomp or home computing. I’m not saying they’re not relevant, but they’re a key challenge in a much wider way.

Some of the challenges were great and deep. Inference, for example, is going to be a giant mess, and there still hasn’t been much progress so far as I know. One bad inference means a conference is recorded when it shouldn’t be (or is not when it should be), breaking some regulation or recording evil deeds that people want to keep private. When the costs of these failures are so large, we’ll only use inference in areas where the cost is low or remedies are simple (turning lights on/off). This is a critical piece of the ubicomp story, and it’s one I haven’t seen fleshed out well.
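The cost argument above can be phrased as a simple expected-cost rule: act on an inference only when the expected gain outweighs the expected cost of being wrong. The sketch below is my own illustrative framing with made-up numbers, not anything from the paper:

```python
# Illustrative expected-cost rule for acting on an uncertain inference.
# All probabilities, benefits, and costs here are invented examples.

def should_act(p_correct, benefit, cost_of_error):
    """Act automatically only if the expected gain from a correct
    inference outweighs the expected cost of a wrong one."""
    return p_correct * benefit > (1 - p_correct) * cost_of_error

# Turning lights on/off: a wrong guess is cheap to fix, so even a
# mediocre inference is worth acting on.
assert should_act(p_correct=0.7, benefit=1.0, cost_of_error=1.0)

# Recording a confidential meeting: one mistake is very expensive,
# so even a 95%-confident inference should not trigger an action.
assert not should_act(p_correct=0.95, benefit=1.0, cost_of_error=100.0)
```

The asymmetry in the second case is the point: high accuracy alone is not enough when the downside of a single error is large.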

Administration is a key issue in my current primary research topic. We hope to provide villagers with a full-featured GSM basestation that will likely require basic administration. Obviously there won’t be a lot of training or knowledge expected. I’m interested in potential overlap in topics, does ubicomp have much to say that could help someone focused in areas such as mine? I guess we’ll see, as it’s a big area for this class.

Airi Lampinen - 9/6/2010 9:11:27

In "The Computer for the 21st Century" Mark Weiser from PARC presents his vision of a world of ubiquitous computing, that is, a situation where computers are so strongly woven into the fabric of everyday life that they are indistinguishable from it and, hence, practically "disappear". The vision builds on three types of gadgets: tabs, pads and boards, ranging in envisioned size from an inch via a foot to a yard.

Weiser describes a world where there are computers of all sizes everywhere, but where they are so embedded in everyday activities that no one really pays attention to them anymore. In his reality of ubiquitous computing, the idea of "a personal computer" becomes obsolete, as computers no longer have any individualized identity or importance. This depersonalization of technology is yet to happen - we have now seen the (i)pad arriving to the mass market, but the device is far from being a "scrap computer"… The future, at least the 20 years during which Weiser stated the vision could be realized, did not bring quite what the author expected. At the same time, it seems that some of the things that Weiser envisioned are still on their way to the mainstream: there has been buzz around location-awareness for a long time, but I believe the big breakthrough remains to be seen.

While Weiser was to some degree right about how computer access will penetrate all groups in society, he did not foresee the tremendous importance that mobile phones would come to have in making this happen. Today, for more and more people, the first contact with the internet happens via a mobile phone. In other regards too, in my opinion, mobile phones have so far been the biggest step towards ubiquitous computing - quite a different one from what was expected in the original vision, but a remarkable step nevertheless.

Perhaps the most troubling feature of the paper for me was how Weiser treats the question of privacy. He mainly considers problems of information security, criminal threats, and the like. However, it seems obvious that his vision of ubiquitous computing includes many aspects that call for a consideration of the social implications of introducing such technology. Also, Weiser seems to treat private and public as a dichotomy and thus fails to see that the two are relational and situational concepts.

All in all, the work at PARC that Weiser describes is an excellent example of setting out to foresee what the future could be like and then trying to turn the most compelling scenario into a reality. This is what future(s) researchers are, quite bravely, doing in all walks of research. I also appreciate Weiser's insight into how powerful it is to make something mundane. He seems to get the idea, so often lost on technologists, that the real revolution is not in making something possible in the first place, but in making it so easy that it is truly accessible to the masses.

The second article, Edwards et al.'s "At Home with Ubiquitous Computing: Seven Challenges", dates from a decade later. The paper considers smart homes and seven challenges related to them. By smart homes, the authors mean "domestic environments in which we are surrounded by interconnected technologies that are, more or less, responsive to our presence and actions."

The article focuses on challenges that the authors feel must be tackled in order for the smart home concept to become viable as a reality. The authors consider issues spanning the technical, social, and pragmatic domains, such as interoperability, accidental and unintended connections, ways to make houses packed with technology work without the need for a system administrator, the effects of technology on social practices and domestic life, and the applicability to domestic use of inventions developed with the office context in mind.

Reliability is also pointed out. This issue is crucial, as the more we learn to rely on technological smartness in our surroundings, the more dependent on it we will become. What will happen when systems fail? Furthermore, the authors discuss the problems of technologies making inferences about human action, inevitably in the presence of ambiguity. While Edwards et al. are part of the movement that is working to make ubiquitous computing happen, they paint a far more problematic picture than Weiser did in his early vision paper.

These two articles brought to my mind Adam Greenfield's insightful book Everyware that considers the potential and problems of ubiquitous computing in many interesting details.

Siamak Faridani - 9/7/2010 15:28:26

Article 1: The computer for the 21st century

This paper highlights the most crucial components of an integrated ubiquitous computing system, reporting on what has been done at Xerox PARC and what the author foresees appearing in the near future. Similar to the two papers that we read on Aug 31st, this paper also makes a number of predictions about future technology. I was surprised that many parts came to life after about 20 years, and I would like to see what else in the paper has the potential to appear in our daily lives.

The author starts by pointing out that technology should be invisible to the user. The computing framework that he proposes is different from the classical view of a computer (a terminal, keyboard, mouse, and a screen for doing word processing or spreadsheets). Throughout the paper the author emphasizes that machines should slowly come into human lives and adapt to them, not vice versa. He also emphasizes that ubiquitous computing is different from multimedia computing: even though these tiny machines will have multimedia capabilities, they are designed to communicate information and make the huge amount of information easily accessible to the user, without swamping them in massive amounts of data. I really liked the author's idea that information is presented to the user mostly subliminally, not as directly as in desktop computing. He uses the example of candy wrappers and mentions that even though they are covered in writing, they do not require constant attention. Later in the paper he returns to this concept and emphasizes that ubiquitous computing should not require the constant attention of the user; it should be an ambient intelligence.

The article was written in 1991, and I was surprised to see ideas in this paper that did not appear in commercial products for nearly 20 years. The "computer scratch-pad" idea is similar to what we know as tablets/iPads. In Figure 4 the operator is even using her finger to work with the UI, something that may not have been well known until the iPad came out. The article proposes three forms of devices: tabs, pads, and boards. Of the three types, two are present in our everyday lives (iPods, cellphones, interactive watches, and, in larger form factors, tablets, iPads, eReaders). I am wondering if anything in the "boards" category has appeared in the consumer market. The closest that we have seen is Jeff Han's devices (like http://www.perceptivepixel.com/), which are used in the military, but I am not aware of similar devices on the market for everyday use.

The idea of a micro-kernel was also interesting to me. I imagine this was only 2 years before Java appeared as a mobile programming language. Micro-kernels might have influenced tools like TinyOS, and even the JVM and Microsoft CLR; they all, in some sense, appeared in order to provide a unified computing platform across different hardware. Customizable kernels and operating systems like Android now provide a more high-level abstraction to programmers.

But what is missing in this article? I think the author has missed two important elements. 1) Smart materials: not every piece of data should be communicated via bits, bytes, and electric signals. Your beer can may change color when it gets cold enough for consumption. Nature has been using this form of information visualization for many years (our skin is bright when it is clean and looks different when it needs a shower; raw meat changes color when it is cooked and ready to eat). I think the author assumed that everything other than electric signals is hard to work with, so he totally misses the chance to point out other forms of computing and communicating.

2) Form factors: he assumes that devices will become bigger and bigger, while the trend has in fact been the reverse. Our devices are becoming smaller and smarter (MEMS are becoming a reality), so perhaps we are going in the reverse direction from what was predicted in the paper!

Article 2: At home with ubiquitous computing: Seven Challenges

The second article was written ten years after the first was published, and much progress had happened since 1991. In the first article the author provides a detailed description of the wireless communication method he envisions, while in this article the authors assume that the details of wireless protocols are abstracted away and invisible to the user. They point out seven challenges that need to be addressed before we can have seamless ubiquitous computing in our houses. They seem to present the challenges in the following fashion: challenges that can be overcome easily, with advancements in technology, come first; the challenges that come later introduce more profound problems, which cannot be addressed unless we gain a deep understanding of human and social behavior.

The authors start by addressing why it is important to overcome these challenges before we can develop a smart house: a house is where we spend most of our time, our safe place and our place to relax. To me it seems that the first challenge has largely been solved by advancements in technology. For example, the NPR radio problem does not arise anymore, since we now have proper authentication and validation procedures for Bluetooth. On the other hand, we would like to keep our home appliances "hackable" too: we would like people to be able to browse the Internet on their toaster's display if they choose to. The open source movement has helped us overcome this difficulty as well; if your toaster is accidentally talking to your washing machine, you can probably download a patch for its OS from the Internet and apply it in a matter of minutes.

I believe platforms like Apple/iPhone have been very successful in addressing challenge two (impromptu interoperability). I do not use Apple products, but I have observed how easy it is for people to sync their iPhones with their Macs, then plug them into their sound systems, or use products like Sonos (www.sonos.com) to move their music with them from one room to another. For me it is always a challenge even to download my photos onto my Linux/Lenovo system. I strongly believe that challenge 3 (no system administrator) can be addressed if we get these elements right: with a proper UI, and with solutions to the former challenges, we might be able to empower everyone to perform system administration tasks without training them and without them even noticing. If securing a home wireless network were as easy as making a playlist on an iPod, everyone would be able to do it.

I personally do not understand why the authors present the fourth challenge as a challenge. Engineers build tools, and people find new ways to use those tools. Sometimes we wish we could easily break backward compatibility on these tools (for example, we may wish to use URLs as our telephone numbers, but the telephone infrastructure makes us stay compatible with a 100-year-old technology); again, as we move towards more software-based solutions, we might be able to update the software package on our landline phones and add new functionalities.

Challenges 5 and 7 might be the most important ones and the hardest to address. How can we prevent ambiguity in intelligent systems? How can we predict the social impact of a smart home? Would it make us more lonely in our houses? Will we prefer to interact with our toasters and iPods more than with our friends? Perhaps we can test and see. Or, alternatively, we may want to let the technology evolve naturally and observe how it changes our behavior!

One point that I believe the authors have missed is the flexibility of technology in this era. While we cannot change the tires on our cars while the car is running, we can perform system upgrades on our software systems even while they are under heavy workload. Unlike mechanical systems, software systems are easy to update. This fluidity has allowed us to perform more iterations on our software in shorter times. Operating systems used to have long release schedules, but we now update our kernels from the Internet pretty frequently. This solves many of the problems addressed in this paper (i.e. reliability, accidental smartness, designing for domestic use, etc.). And one last question: why isn’t affordability among these items?

matthew k. chan - 9/7/2010 15:37:03

At Home with Ubiquitous Computing: Seven Challenges

In Edwards and Grinter's analysis of 7 challenges regarding ubiquitous computing in the smart home, the authors point out the growing pervasiveness of computers and how they will eventually fill our homes, resulting in smart homes that improve our lives.

In short, the 7 Challenges are:

1. The "Accidentally" Smart Home
2. Impromptu Interoperability
3. No Systems Administrator
4. Designing for Domestic Use
5. Social Implications of Aware Home Technologies
6. Reliability
7. Inference in the Presence of Ambiguity

Their findings are very worthwhile and confront the practical challenges and hiccups designers and engineers will face when building the smart home. With the first, the authors highlight the "accidental" smart home with an example where a neighbor's wireless device accidentally interrupts/intercepts the main user's home devices/radios/etc. This example serves to highlight that smart homes won't be built from the ground up, but will involve existing homes and the slow accumulation and embedding of devices. Furthermore, how will users be able to debug when a wireless accident happens, and even control their own devices when bugs are present?

Next is the fact that existing devices are made by different vendors, at different times, and under different constraints and design choices. How will all these devices work together without software explicitly figuring things out?

Regarding systems administrators, the authors yet again highlight that home users aren't going to be as technically savvy as engineers, and they explore how home users will take care of network and security administration.

In snowballing the last 4 problems, the authors explore the domestic use of smart homes by taking a trip down memory lane to when the telephone was first introduced and how expectations for it changed over time; even electricity had the same effect, and televisions in homes were more accommodating. More recently, teenagers use text messaging as a quiet form of communication. This leads to the social implications of a smart home, which can produce unexpected and unforeseen results, as exhibited by mobile phones and washing machines. However, with all these devices, we are confronted with the issue of reliability: the devices in the smart home must be as reliable as televisions, microwaves, etc., since consumers seldom expect appliances to crash. Finally, how smart must a smart home be? The authors explore limited inferences, similar to an oracle, to "discern what functions of the smart home are possible…"

This paper relates to today's technologies because we are getting closer to the smart home filled with many devices. As awesome and spectacular as ubiquitous computing is, the paper's only relation to my own work is the use of cell phones to detect depression, which goes far beyond the setting of just a smart home.

The Computer for the 21st Century This paper is important because, similar to the smart home, we have a smart(er) office setting. The author, Mark Weiser, explores the potential of the 21st century computer by claiming that only once computers "vanish" into the background will we be freed to use them without thinking. A majority of the paper explores some of the research at the Xerox center in Palo Alto such as the ID tag and tabs, pads, and a live board for group collaboration, where one room/office will have hundreds of computers. What makes this paper relevant to today's technology is that we're either at that point or beyond: we have iPads, the ID tags that use RFID technology instead, auxiliary storage devices the size of matchboxes, and more. Still, there are some items that don't hold true such as the live board (we now use dry-erase markers and boards).

More interestingly, in the example of Sal waking up in the morning and circling her pen on a newspaper, the newspaper somehow sends the circled quote to her computer or her work office.

Again, the relation of this paper to my work is nonexistent because I've never been in an office-like setting myself. Blind spots are still everywhere, but maybe the paper is projecting into the year 2020 instead of 2010. The only significant blind spot I noticed is the use of mobile devices and social networks (similar to Sal looking up Mary).

Matthew Can - 9/7/2010 16:20:42

The Computer for the 21st Century

In this paper, Weiser describes the vision of ubiquitous computing, the concept of computer technology being integrated seamlessly into everyday life. The focus is on PARC’s research into ubiquitous computing and its technological requirements. This forward-thinking paper presents a new paradigm for computing, one that has broad implications for HCI. For example, if ubiquitous computing is something to strive for, then HCI researchers are challenged with designing applications that can operate freely among various form factors. Interestingly, this is something that is beginning to emerge with mobile computing today. With Android, it’s possible to push information from desktop browsers to mobile phones for uninterrupted consumption on the go.

One aspect of the article I liked was the emphasis on the network. This is important because the article describes computing devices such as “pads” as having no individual identity. This will only be realized when people stop thinking about their ownership of devices and instead think about their ownership of applications and data that move freely among devices. The advent of cloud computing and storage appears to be one step in this direction. In addition, the author does a good job addressing the technology required to make ubiquitous computing possible.

What I find more interesting, and perhaps a bigger hurdle to ubiquitous computing, are the social and ethical issues that arise. Does my employer have a right to know my location during working hours? Who owns all of the data moving across the ubiquitous network? Who even owns the network when devices are coming and going all the time? On the social side, the author does not address the issue of social adoption of ubiquitous computing. For example, if all the technology were in place today to make ubiquitous computing possible, would people even sign up? Perhaps the adoption has to be gradual. It’s important to think about which social conventions must be broken and which new ones must be forged before ubiquitous computing can become reality.

At Home with Ubiquitous Computing: Seven Challenges

Edwards and Grinter raise seven challenges to realizing the vision of the smart home, in which the home environment has been augmented with ubiquitous computing. In doing so, they show how studies of the home environment can help inform and guide the way we design technology for the home. I thought this paper did a great job of addressing some of the social and ethical issues of ubiquitous computing, issues not addressed in Weiser’s paper. In particular, I enjoyed the section on “Designing for Domestic Use” because it highlights the fact that sometimes technology is adopted in ways unforeseen by the vendor or even the customer that purchases the product. This is an important point because the design of the smart home will be influenced more by the social interactions in the home than by the vision of computer scientists.

The section on “Inference in the Presence of Ambiguity” was particularly interesting because it’s a challenge that had not occurred to me. Certainly there will be a trade-off between the benefits of using a “smart” system and the costs of incorrect inference. Where the accuracy is high and the cost of error low, it makes sense for the system to infer human intent and take action. For example, if Bill turns on the television at 7pm, it might turn on to the sports channel because that is usually what Bill likes to watch at that hour. Where the accuracy is lower, the system should instead provide suggestions so the user can complete the action. If the system is not very sure which channel Bill intends to watch, but it is confident it is one of three channels, then a list of those channels will open up, allowing Bill to select the one he desires.

One aspect of the paper I am critical of is the claimed need to provide users with a mental model of how the system performs inference. Models of actual system behavior can be quite unwieldy, and it seems impractical to teach those models to users. At the same time, watered-down models can leave out details that explain quirky system behavior. I think the best approach is not to worry about providing the user with a mental model for diagnosing errors. Instead, the system should provide a way for the user to recover from system errors and a way to shape the intelligence of the system by interacting with it over time.
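The tiered behavior described above (act when confident, suggest a short list when less so, otherwise defer to the user) might look roughly like the sketch below. The thresholds and channel names are invented for illustration; this is not from the paper:

```python
# A sketch of confidence-tiered inference for the TV-channel example.
# Thresholds are arbitrary assumptions chosen for illustration.

def infer_channel(channel_probs, act_threshold=0.8, suggest_threshold=0.2):
    """channel_probs: dict mapping channel name -> estimated probability.
    Returns one of: ("tune", channel), ("suggest", [channels]), ("ask", None)."""
    ranked = sorted(channel_probs.items(), key=lambda kv: kv[1], reverse=True)
    best_channel, best_p = ranked[0]
    if best_p >= act_threshold:
        return ("tune", best_channel)            # confident: act automatically
    suggestions = [ch for ch, p in ranked if p >= suggest_threshold][:3]
    if suggestions:
        return ("suggest", suggestions)          # unsure: offer a short list
    return ("ask", None)                         # no idea: let the user drive

print(infer_channel({"sports": 0.85, "news": 0.10, "movies": 0.05}))
# -> ('tune', 'sports')
```

The design choice here mirrors the trade-off in the paper: the cost of a wrong automatic action is bounded by raising the action threshold, while the suggestion tier keeps the system useful even when it cannot commit to a single inference.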

Luke Segars - 9/7/2010 16:49:03

The Computer for the 21st Century

The paper establishes the dream of "ubiquitous computing," a somewhat recently established field of computing that allows cheap, connected technology to interface with reality in a useful and almost invisible way. The author discusses how current computing technology has a lot of room for improvement, what sorts of technology might be expected from a ubiquitous computing world, and even imagines what a day in such a world would be like.

The paper discusses the importance of technology in daily life (even in the early 90's) but points out that today's computing interfaces require users to dedicate themselves to the task of using a computer and be fully aware that they are doing something different from the rest of their day. The author draws a fantastic analogy between the dream of ubiquitous computing and the social evolution of writing: writing, once considered a novel act that could only be undertaken by experts, is now so inseparable from everyday life that it's hard to imagine the two detached. We often read without realizing we are even reading -- this is rarely, if ever, the case with computing today.

The author makes one fantastically insightful statement that stands out among the others. He says, "the arcane aura that surrounds personal computers is not just a 'user interface' problem." This statement describes the scope of the field of HCI more completely than I could in a paragraph. Ubiquitous computing technology cannot be achieved by iterative improvements to the 'user interface' of existing software. Similarly, the field of HCI has exploded outside of user interface analysis and into the study of how technologies like ubiquitous computing can come to exist. There is a user interface component to this problem, but there is also a complex layer of system design, psychological research, networking, and a swarm of other topics that make the technology possible.

The idea of ubiquitous computing is a particularly interesting one; it is distant enough to seem like science fiction, but realistic enough to imagine how society and communication might change if it were to exist. Product miniaturization as well as new display technologies have already diminished much of the physical challenge behind this technology; there is, still, a lot of dreaming and creating to do to make the network of systems come to life. Weiser mentions the desktop window in his overview, presenting it as a somewhat accurate (but very much detached) attempt at matching the functionality of an office desk. Computing, however, has evolved so that the goal is no longer to emulate a desk -- we want it to follow us wherever we go. Desktops, windows, and information locked to a particular system are unacceptable limitations of this outdated vision.

We are already well on our way to developing many of the technologies that Weiser discusses in his paper. Interestingly, however, there is one topic that is perhaps further from realization than the others and that will seriously limit the capabilities of ubiquitous technology: location awareness. The author continuously references how ubiquitous technology will be "aware" of its location in some sense. This, among all of the issues discussed by the author, is a technical feat that is both central and intensely challenging. Taking into account the tremendous variety of environments a user may move in and out of in a day means that the location technology needs to be flexible, accurate, and powerful enough to make this information usable to the rest of the system. I wish the author would explain how this technology could potentially (hypothesizing or guessing, of course) be created; would it require an entirely new infrastructure of radio transmitters? More accurate GPS systems? Digital blueprints of common destinations? Many of the potential benefits of ubiquitous computing are location-based -- this is a topic that has seen little improvement since Weiser's day. I suspect that this may be one of the limiting technologies that keep the dream of ubiquitous computing out of mainstream society for some time.

Overall, the idea of ubiquitous computing is incredibly exciting. I think it is something that we are going to accomplish in the reasonably near future, but it requires us to think about computing in an entirely new way. This, perhaps, will be the hardest part for both the designers and the users. We can no longer imagine computers as static objects sitting on desks; this is already untrue thanks to smart phones, laptops, and similar technology. The incredible potential of these devices is hard to imagine, but the problems presented by shattering the computing paradigm of yesterday will certainly make for worthwhile challenges as we move deeper into the 21st century.

Linsey Hansen - 9/7/2010 16:53:06

The Computer for the 21st Century

Weiser opens his article by stating that the best technology is that which is able to become fully integrated into everyday life. Therefore, if computers are to go from being state-of-the-art to invisible parts of a user's natural environment, the technology must adapt, and the remainder of the article discusses what niches computers will most likely be able to fit into via the tab, pad, and board.

To begin, this article is significant because the processes it describes can be applied to any new technology or interface, not just computers. Given a brand new discovery, it may have very few uses initially; over time, however, people can better analyze the technology and either find a new niche for it, or use it to replace a more primitive method. When thinking about this article in terms of computers, however, it is significant in that it was able to predict the path computers would take to become further integrated with society.

Almost 20 years later, I would say that computers are definitely a lot more integrated into society than they were at the time this article was written, though they are still nowhere near invisible. Seeing people typing away on their personal laptops is commonplace, and many people treat their laptops as accessories. Part of this improved integration may also be because the prototype devices described by Weiser have been implemented today. For tabs we have smart phones, mp3 players, and electronic IDs -- I mostly say mp3 players because I know many people who use them to store projects and personal data. Then for pads we have iPads and various new tablets coming out, and while they are still new, these devices are already being integrated as clipboards, business tools, and public kiosk windows. I am honestly not aware of any mainstream board devices. The only thing these components lack is a standard way to wirelessly communicate with each other (I am sure that some can, but there is no standard) or to transfer data by presence alone.

I do feel that there are some blind spots, or at least points that I do not agree with, mostly because I feel like he did not predict how big social networking on computers would become. In his example of a world with better integrated computers, he does acknowledge video conferencing and using computers to look up information on other people. However, Weiser's claim that virtual reality is a simulation of the world that draws the user's attention away from the real world, instead of merely enhancing the world and allowing the machine's presence to fade, is not true at all in many situations. I understand that trying to make people adapt to computers instead of making computers adapt to people is the opposite of his view, but if one were to look at, say, some sort of virtual reality chat room (or VR Facebook), I feel like computers are being used to offer an extension (or, in the case of an online game, an elaboration) of the real world instead of just a static map. Similarly, while websites such as Facebook and Twitter do not exactly allow face-to-face conversation, and have many human-interaction barriers, I feel they are sufficient to show users that there are “people at other ends of their computer links.” On a random note, I feel that a computer voice quietly asking me if I want coffee when I first wake up is creepy.

At Home with Ubiquitous Computing: Seven Challenges

In their article, Edwards and Grinter discuss the possibility of creating smart homes in the future. They use seven challenges to explain both what will be required of future technology in order for a smart home to exist, and how designing these technologies should be approached based on what was experienced when designing current technology.

Being somewhat newer, this paper is still pretty relevant to technology today, especially considering that we are nowhere near having functional smart homes yet. While there are definitely a lot more remote-control homes, where you can control things such as lighting, fireplaces, audio, temperature, ovens, and refrigerators with a single computer system in the home, there are still not many technologies of this sort that function completely based on sensors (except those found in Bill Gates's house).

I especially like the parts where the authors relate a smart house to a computer (though I suppose that a smart house will be a computer controlling many other computers), because even though computers have come a long way since this article was written, they still have problems -- and while your computer always shutting down is just a mild annoyance, your house shutting down (or perhaps everything turning on at inconvenient times) would be a much larger problem. Also, assuming that a smart house would be like a computer with an operating system that runs appliances instead of applications, there could be major compatibility issues, or at least annoying drivers to keep track of, unless everyone, including the people making the original smart house OS, got together and decided on specific formats. And since smart houses would probably need some sort of wireless or Bluetooth connectivity, there is also the possibility of a smart house getting a virus or being hijacked -- instead of simple cat burglars there would be full-on house burglars -- which implies there would need to be some sort of master power switch, though placing that switch somewhere convenient for residents and inconvenient for thieves is a tricky problem all its own.

Regardless of how recent this article is, however, I do feel that there are some blind spots, especially with the first challenge, that of the “'Accidentally' Smart House.” For starters, I feel the stereo situation presented is rather irrelevant, since with most Bluetooth devices you need to choose what they connect to during setup, and in most cases there is some sort of interface showing what is connected to your audio, making that situation unlikely. On just about any computer, one can easily check where the current audio output is being sent, and while this could be a problem if two people named their speakers the exact same thing, that strikes me as a simple user error. Then, as far as the “range of connectivity” without physical wires goes, again, I feel there would probably be some sort of visual interface, either on one of the individual devices or in the main house, showing what is connected to what, making it a lot less complicated -- and while the author does address this, he makes it seem hard, even though I am pretty sure there are already many operating systems that do this with computer devices.

Aroad Kovacs - 9/7/2010 17:09:05

The Weiser article explores the concept of embodied virtuality, a world where computers become so ubiquitous and integral to peoples' everyday lives that they effectively become invisible to our consciousness. Rather than the "personal computer" paradigm, where an individual carries around a laptop which becomes the locus of attention, the active badges, tabs, and boards envisioned by the author have no individualized identity or importance and, like scrap paper, can be grabbed and used anywhere.

Since the time of this article, technology advancements have removed all of the technical obstacles to the wide-scale adoption of ubiquitous computing: wireless communication (GSM/CDMA, WiMax, 802.11), processors (x86 Atom, ARM), and displays (LCD, LED, E-ink) have fallen in cost so much that they are appearing in almost every conceivable consumer durable goods category, from your car to your camera. Likewise, you could view the combination of your Cal Student ID, Calnet/Google/OpenID login, USB stick, and credit card as the manifestation of an "active badge" that lets you open doors or access your data from any internet-connected platform that can run a web browser (public internet kiosk, cellphone, eReader, laptop, etc). But even assuming that the security issues are resolved via cryptography, the question yet to be answered is user buy-in: Are people willing to trust and share these ubiquitous computers and cloud-computing services so much that they become transparent and freely available, or are our notions of private property and individualism too strong for us to give up our "personal computers" and cellphones? After all, even today we don't have "ubiquitous textbooks" or "ubiquitous notepads", so why should it be any different for computing devices?

Edwards and Grinter's paper seeks to identify 7 challenges that impede the acceptance of "smart home" technology, which can be grouped into two general categories: 1) Technical: It is difficult to design reliable systems for homeowners who lack technical expertise and may utilize the technology with different intentions and goals than the inventors designed the system for. Existing homes and heterogeneous systems that are upgraded without a holistic design approach (which emphasizes affordances and seamless integration) and possess deficient infrastructure (which would make interconnectivity/interoperability explicit and simplify debugging) may exhibit unpredictable behavior, reduced functionality, and lack of interoperability. 2) Social: Bringing smart technology into the home has deep social implications that cannot be anticipated, such as changing relationships, responsibilities, and expectations between family members. In addition, some households have routines that they wish to preserve in the face of disruptive technology.

It seems that there are two paths for overcoming these challenges, as exemplified by the development of consumer products in the personal computing/electronics, and the automobile spheres. The personal computer and electronics industries have progressed primarily through disruptive innovation, and rapid/designed obsolescence. In an attempt to grab market share, firms released revolutionary new technologies such as spreadsheet/word processing applications (eg VisiCalc), wireless connectivity (802.11 WiFi), and now 3D displays, often with little regard to backwards compatibility or interoperability (just look at the wide array of incompatible/vendor-specific file-formats as an example). The ad-hoc de-facto standards that developed in the aftermath of format wars (Betamax vs VHS, Blu-ray vs HD-DVD) were slowly refined into formal specifications by committees of cooperating companies, to the point that after a few product generations, most standards-compliant devices can now work together with relatively few glitches ("plug and pray"). Clearly, the authors' challenges of the "Accidentally Smart Home" and "Impromptu Interoperability" are railing against this system where people are left to fend for themselves against the dizzying selection of incompatible, heterogeneous products and standards, or find an "expert" that can provide guidance. The social challenges also appear to criticize the tech industry's apathy towards how consumers use their products; only recently have companies such as Apple shown that they care about usability and design for a specific purpose, while competitors were happy to just cram as many ICs as possible into a generic box, and market their product as a laundry-list mishmash of features.

However, in their overview of the challenges facing the smart home, the authors seem to have overlooked the alternative model of development followed by the automobile industry. Due to regulation and safety/liability concerns, the pace of innovation has been slow (no autopilot or flying cars yet...), with at most one new feature per year. However, automobiles' systems and interfaces are quite standardized (steering wheel, ABS brakes, airbags, GPS navigation, etc), and consumers demand vehicles to be extremely reliable (the failure rate of modern consumer electronics is several orders of magnitude higher than the defect rate in vehicles). The "No System Administrator" requirement holds, since manufacturers do not assume that drivers understand the inner workings of their vehicles (compare this to how Windows users need to learn how to configure and update antivirus/firewall software, and must respond to scary UAC prompts when performing ordinary tasks). I think that this incremental, appliance-style model of development is more applicable to the "smart home", since by building simpler but well-specified and reliable systems in the first place, homeowners who lack technical expertise can avoid debugging excessive complexity and interoperability issues. Additionally, "smart-home" consultants/mechanics who upgrade entire houses to modern specifications can add the holistic design and infrastructure that the authors emphasize.

Thejo Kote - 9/7/2010 17:12:34

The Computer for the 21st Century:

This paper introduces the phrase "ubiquitous computing" and defines it as a model where computing devices are so thoroughly embedded in daily life that they are not distinguishable as separate information processing entities. The author describes the concept of tabs, pads and boards as envisioned at Xerox PARC and the role they play in enabling the idea of ubiquitous computing.

The author's main argument is that unlike mature technologies such as electricity and written language, computers have not faded into the background and been taken for granted (at least as of 1991, when the paper was written). It is a forward-looking paper that extrapolates the state of the art in technology at the time and tries to imagine how computing may become ubiquitous in the future.

The author correctly identifies some of the key advances in technology required to achieve that vision. They include more robust wireless networking, better quality displays and more storage. In the two decades since the paper was written, most of these advances have been achieved. But the paper is also a good example of why it is pretty hard to predict the future. The author's key idea of computing revolving around tabs, pads and boards has not come to be, and based on current trends, it seems unlikely that HCI is proceeding in that direction. Of course, the core ideas -- identifying individuals and personalizing the environment, making devices more location aware -- and the value they add to HCI remain important.

At Home with Ubiquitous Computing - Seven Challenges:

This paper examines the challenges facing ubiquitous computing in the home. The authors argue that designers of technologies for the home need to consider the technical, social and pragmatic issues involved.

The challenges they describe apply to any technology that aspires to be "usable", but the paper targets those which will end up being used in the home. For example, the challenges of the gradual accretion of technology over time, the desirability of increasing reliability, and the need to reduce interoperability and administration issues all apply to technologies outside the home too. As with the PC industry, as components become commoditized and the market matures, computing in the home will meet these challenges. The introduction of standards like Zigbee in the time since the paper was written is enabling that.

The more interesting aspect to me, which I think the authors rightly highlight as a challenge, is the social implications of these technologies. Generally, the design of technology for the home does not take this into account. Even the examples cited in the paper are studies conducted after technologies had established themselves in the home. While it may not be easy to predict what the social impacts of a technology are going to be, consciously thinking about them during the design process will definitely be beneficial.

Brandon Liu - 9/7/2010 17:37:40

Weiser, “The Computer for the 21st Century”

Weiser has a clear agenda in discussing his vision of ‘ubiquitous computing’. An implicit assumption in the paper is that Moore’s law will result in computer technology being cheap and disposable - hence ‘ubiquitous’. He sees two possible and opposite directions for the future of our interaction with computers. One is the computer as being personalized to the individual and providing a virtual reality. This is the direction that the author argues against.

Instead, Weiser advocates ‘embodied virtuality’. He uses some appeals to philosophy which I feel are out of place and undermine the credibility of the paper. He bases this position on the opinion that real-world interactions with objects will always surpass any constructed world in quality. He also supports this position by drawing an analogy to mechanical power in mills. What was once centralized is now distributed among many effectively invisible motors. Whether or not information and mechanical power have any similarities is a good discussion question.

One crucial part of the author’s vision is that computer displays will be trivial commodities. This is where his predictions break down. Since the time the article was written, processing power and storage space have benefitted from Moore’s law, but computer display technology has not seen the same exponential improvements. As a result, displays are the bottleneck in our interactions with computers. If we’re going to create a portable display (like the author talks about regarding ‘tabs’), we might as well put storage and processing on it, since the cost of the device is dominated by the display, both in terms of manufacturing cost and battery power.

The author describes a future ‘slice of life’ in the world of embodied virtuality. (The subject of the narrative gets coffee exactly three times in one day.) Two of the situations described -- reading a physical newspaper and ‘virtually highlighting’ a quote, and videoconferencing in a meeting with information about the attendees -- have already been realized. For example, I can send a news article to myself via an email link on the site, and I can use Google Calendar events to track attendance at a meeting. These are all achievable in satisfactory ways using ordinary display technology. The author should have elaborated more on why his specific examples are desirable. I feel that he is prematurely celebrating the ‘decline of the computer addict’ without explaining why his specific interaction modes are better than plain displays.

Finally, the author references cryptography as a solution to privacy concerns, although the relevant concerns are from social engineering and usability problems, not lack of sufficiently advanced technology.

Edwards, “At Home with Ubiquitous Computing: Seven Challenges”

Like the Weiser article, the authors use a historical precedent to make a prediction about the future. Telephones, another interactive technology, are used for social purposes in the home, are ubiquitous, and are at least as reliable as ‘dumb’ appliances such as microwaves and washing machines. This reliability and independence (from an administrator) are what distinguish home-ready technology from industrial technology. The authors posit that a technology like the telephone succeeds in the home because specific engineering challenges were overcome, one result of which is that the bulk of the complexity lives in the network.

The author discusses ‘Islands of functionality’ as A Bad Thing in the context of smart homes. There isn’t really any discussion of why such islands may be beneficial. For example, software systems have islands of functionality for the exact purpose of making problems easier to track. So really, what the authors are getting at is (a) that systems that provide strong abstractions (that are highly integrated) are difficult to get right, and (b) that only highly integrated systems are of use in homes, where there is no expert to fix problems.

I felt that this paper was answering the wrong questions. The part I found most problematic was Issue 7: Inference in the presence of ambiguity. He describes a scenario where a number of people are gathered together in a meeting room. The issue he brings up is that it is difficult for a system to infer that a meeting is taking place. As a reader, and as someone interested in HCI, I wanted the authors to say why a system that infers whether or not people are having a meeting is useful at all. I’d much rather simply tell a system of the state of the world than have it be wrong part of the time.

The deeper HCI topic that Issue 7 is getting at is that ‘smart’ systems are only useful if they either infer nothing or infer everything perfectly, all the time. This is a principle of engineering topics in general, and also has as a consequence issues 1, 2, 3 and 6. I found all of the authors’ points relevant, but wish that he had taken more time to justify why several of the issues are uniquely relevant to smart homes, and not engineering problems in general.

Dan Lynch - 9/7/2010 17:58:34


The ubiquitous computing article was interesting since it was written quite a few years back, and is predicting what the world will be like once computers have been pushed into the background, quietly benefiting the human race one task at a time.

An interesting device they discuss is the "pad", which is described as being a cross between a sheet of paper and a laptop. Would the iPad be something like this? It seems as though they had a different idea of what its use and value would be. First of all, they said pads are intended to be "scrap computers" just as we have scratch paper, and would not have any individualized identity. I am not sure if I can agree with this, or what type of economy would support such a claim.

The paper continues to discuss future and emerging technologies, and then goes onto describe the day in the life of a woman named Sal, and how the little devices in her life help her out.

I would have to say that some of the predictions in this article are spot on, and others not so much. We have, in my opinion, become completely immersed in the world of ubiquitous computing ever since we got Facebook apps on our cell phones. These days I can't walk down the street without seeing somebody wearing a set of headphones connected to an iPod, or a person seemingly talking to themselves (into their Bluetooth headset), or some other engagement with a technological device.

These devices have in some sense gone into the background; however, I still think that they are also in the foreground. It's possible that I misunderstood what Mark Weiser means when he says we will have reached a state of ubiquitous computing only when these technologies disappear. An example that happened to me may clear things up. In no way would I say the internet is in the "background". People use it every day, it's talked about, etc. Last week Comcast decided that the house I live in had exceeded its bandwidth cap (I live with 26 people in a residential zone), and we lost our internet. It was made apparent just how much we need the internet... it's like water these days! In the end, maybe it was in the background, since it took losing it to realize just how important it was.

I also found the social benefits referenced very interesting. Sal was able to look up the name of someone she did not know, and find out information about her: Facebook.


At Home with Ubiquitous Computing: Seven Challenges

This more recent article also focuses on the concept of ubiquitous computing, in particular the challenges associated with its application and implementation. The challenges include the accidentally smart home, impromptu interoperability, no systems administrator, designing for domestic use, social implications, reliability, and inference.

I would like to discuss a subset of these challenges. The first is impromptu interoperability. It seems as though there is, to this day, no protocol for communication between arbitrary devices. I think this poses an interesting question: should there be? I think sometimes it is best to know the capabilities of a device and its usefulness. For example, I don't think a toaster needs to talk to a digital camera; a printer, however, could be useful for displaying error messages.
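One way to picture the capability question raised above is matchmaking: devices only connect when one of them offers something the other needs. The sketch below is purely illustrative -- the capability names and device records are invented, and no real discovery protocol (UPnP, Zigbee, etc.) is shown.

```python
# A hedged sketch of capability-based matchmaking between home devices:
# a toaster and a camera have nothing to say to each other, but a
# printer can render both the camera's photos and the toaster's errors.

def compatible(producer, consumer):
    """Two devices can usefully connect if some capability one offers
    matches some capability the other needs."""
    return bool(set(producer["offers"]) & set(consumer["needs"]))

camera  = {"name": "camera",  "offers": {"image"},         "needs": set()}
toaster = {"name": "toaster", "offers": {"error-message"}, "needs": set()}
printer = {"name": "printer", "offers": set(),
           "needs": {"image", "error-message"}}

print(compatible(camera, toaster))   # False: the camera offers nothing the toaster needs
print(compatible(toaster, printer))  # True: the printer can display error messages
print(compatible(camera, printer))   # True: the printer can print images
```

A protocol built this way would let devices advertise what they offer and need, so that only sensible connections are even proposed to the user.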

Another challenge discussed was reliability. The authors mention that this is something embedded systems designers have already addressed, since recalling their products would be devastating. It seems to me that we should approach the smart home with an embedded systems mindset. Micro-controllers and simple code should be used to eliminate error. Bloated operating systems will only cause trouble when it comes to heating a house or turning on a stove -- I wouldn't do it personally.

In the end, I think the smart home is well on its way. With the iPhone and iPad starting the touch screen revolution, it's a matter of time before apps come out that control your home (I think this has already happened, actually). I think that a protocol should be developed for some devices, but things should be categorized based on usefulness and capability -- for example, images, video, audio, appliances, etc. In addition, I think analog circuits and micro-controllers should be the brains of these devices, leaving bloated OS's as user interfaces to these programmable systems.

Krishna - 9/7/2010 18:22:41

The Computer for the 21st Century

To summarize, the author provides us his thoughts on what it means for computers to be ubiquitous. He gives us examples of how computers can be designed and deployed to act in ubiquitous ways, and he warns us of challenges in designing and deploying such computers.

There are primarily two arguments in this paper. The author states that a technology can be considered successful only when it becomes indistinguishable from daily activities; based on this, he argues that computer technology is not remotely as successful as some of the older technologies. He gives much thought to the term “indistinguishable” -- I would argue that the entire paper is devoted to explaining this term with examples. In short, he explains it as a technology disappearing from our lives in such a way that we interface with it only at a subconscious level -- that is, without much thought, using fewer cognitive resources. The question is “how to go about this?”

The author says that to achieve this, technologies should be to a certain extent self-aware; they should know their purpose under different contexts and should be capable of deriving those contexts from their location, scale, and other such external inputs (pp. 5). Throughout the paper, he illustrates these ideas with various examples of how such technologies can be developed and deployed using a suite of self-aware, context-aware devices. An important observation is his implicit emphasis on direct manipulation and interaction in all his examples.

From the examples, we can understand his general formula. First, identify a set of routine tasks given a context -- a context could be collaborating on an office project, a team meeting, making breakfast, driving to the office, etc. Then, develop generic low-cost computing devices (tabs, pads, etc.) that can be dynamically loaded with specific software that assists in performing each of these tasks. Finally, enable collaboration among these devices to solve the larger task(s).
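The general formula read out of Weiser above can be sketched as generic, identity-free devices whose behavior comes entirely from whatever software the current context loads onto them. The contexts and task handlers here are invented examples, not anything from the paper.

```python
# A minimal sketch of context-loaded "scrap computers": a generic device
# has no fixed role; entering a context loads the task software for it.
# The context names and task behaviors are hypothetical illustrations.

TASKS_BY_CONTEXT = {
    "meeting":   lambda device: f"{device} shows the shared whiteboard",
    "breakfast": lambda device: f"{device} displays the morning news",
    "driving":   lambda device: f"{device} shows the traffic report",
}

class GenericDevice:
    """A device with no individualized identity: its behavior comes from
    whatever software the current context loads onto it."""
    def __init__(self, kind):
        self.kind = kind   # e.g. "tab", "pad", "board"
        self.task = None

    def enter_context(self, context):
        # Dynamically "load" the software appropriate to this context.
        self.task = TASKS_BY_CONTEXT[context]

    def run(self):
        return self.task(self.kind) if self.task else f"{self.kind} is idle"

pad = GenericDevice("pad")
print(pad.run())              # → pad is idle
pad.enter_context("meeting")
print(pad.run())              # → pad shows the shared whiteboard
```

The point of the sketch is that any pad grabbed off a table behaves identically once it enters the same context, which is what makes the devices interchangeable like scrap paper.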

Many of the challenges he mentions in the final sections of the paper can be related directly to these seemingly simple steps. For example, though the costs of processors and screens will come down and thus enable the proliferation of such devices, there is a danger of devices reaching computational power limits under space constraints, limiting what is possible with them; writing optimal frameworks for running dynamically reconfigurable software is not trivial - most of the time the frameworks are not generic enough; finally, collaboration is not trivial at all and would require the design of specialized communication protocols and vocabularies, and there is a dangerous trade-off between openly collaborative systems and security and privacy.

In conclusion, a wonderful read, asking HCI researchers to be creative and develop solutions that enhance our experience of interacting with the world and enable us to go beyond what we are biologically capable of, while at the same time reminding us of the design and engineering challenges.

At Home with Ubiquitous Computing: Seven Challenges

The paper tries to answer two questions: what does it mean for homes to be “smart”, and what are the challenges and issues facing their adoption and use? We see strong connections between the authors’ concept of a smart home as a “domestic environment surrounded by interconnected technology that is aware of our presence and actions” and the ubiquitous, self-aware technology described by Mark Weiser in “The Computer for the 21st Century”. This makes the second question highly relevant, as it is quite apparent that despite the jaw-dropping use cases of such technologies, they haven’t really caught on as predicted.

The authors mention a set of seven important challenges facing the adoption of such smart technologies. They mention that most existing homes and homeowners are not suited to such technological advances and may require a complete architectural and psychological refit. Though this makes sense on the surface, it is not clear how a disruptive technology would go ignored for such reasons, as this seems to be more of a design challenge - designers and engineers will work towards developing technologies that fit existing homes and users’ psychological expectations. The authors acknowledge this but do not give a convincing argument that it is beyond a design challenge (pp. 258).

An important challenge mentioned by the authors is that of interoperability. This is extremely relevant given the current complex, competitive consumer electronics space. It is highly unlikely that various vendors will agree on one standard, and as the authors mention, existing solutions that circumvent this problem using alternate architectures that enable user-defined vocabularies have never been generic enough or reliably tested beyond lab conditions (pp. 260).

I do not completely agree with the authors' treatment of system administration as a challenge. This again is a design challenge; even otherwise, I see the emergence of new service businesses offering solutions to this problem. In India, most local neighborhoods have small-scale computer service shops that offer in-house, on-demand computer administration at extremely low prices. Also, given increasing technology literacy levels, I don't see this as a major problem in the future.

I am not sure whether the social implications of technology are relevant to the primary argument of the paper. These are general issues for any disruptive technology. Also, as someone who believes in social constructivism, I expect technologies to be manipulated by users despite the efforts of designers. It would have been relevant had the authors emphasized the need to create flexible, adaptable technology and argued for that as a design challenge.

Finally, the authors’ concerns about the reliability and uncertainty of these systems are well justified. It is unlikely that users will subconsciously start using and trusting systems that are unreliable, behave unpredictably, or take actions they don’t understand. As the authors conclude, most of their concerns overlap, and any research on these issues should take a holistic approach.

An important concern the authors have missed is the potential for such smart homes to create economic and digital divides. It would be interesting to think about the social implications of smart homes from this perspective. Given that smart homes have the capability to let us go beyond what we are capable of now - spending less time on household chores, handling large amounts of information and hence making creative inferences, etc. - will they provide their householders with definite advantages and opportunities, similar to the effect Internet accessibility has had relative to poor nations?

Drew Fisher - 9/7/2010 18:26:30

At Home with Ubiquitous Computing: Seven Challenges

This paper serves to discuss previously unmentioned problems that will need to be overcome to provide quality ubiquitous computing in homes. In particular, it asks questions that may need to be answered by changes in development ideology and approaches to the problems at hand.

I particularly appreciated the authors' look at the social implications of such ubiquitous computing, and how they recognized that a truly profound technology would likely affect how people interact with each other. Further, they noted that neither users nor vendors anticipated the social aspect of the telephone - in the same way, we may not be able to predict the impact of such technology. I can't help but think of Pandora's box - we have no idea what will happen when our systems truly function seamlessly. An interesting question to investigate, to be sure.

I also liked the authors' attention to what I think of as "system misbehavior" - when the system tries to be helpful, but gets the user's intentions wrong. This is a difficult problem, and solutions tend to fall on a spectrum ranging from the system forcing the user to specify exactly what he/she wants explicitly (too dumb and tedious to become ubiquitous) to the system guessing everything the user wants done and getting it wrong some percentage of the time, in which case the "intelligent automation" is no longer improving the user experience. Since it is unlikely that systems will ever be perfect at inference, they need to limit themselves to the tasks that they can solve. The only thing users hate more than a dumb system that takes no action is a smart one that takes the wrong action, which they then have to correct. The authors recognize that ubiquitous computing doesn't (and can't) solve every problem.
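The spectrum described above can be sketched in a few lines of code. This is a minimal illustration, not anything from either paper: the action names and the 0.9 threshold are made-up assumptions, chosen only to show the idea of acting autonomously when the inferred intent is near-certain and deferring to the user otherwise.

```python
# Sketch of confidence-gated automation: act autonomously only when the
# inferred intent clears a threshold; otherwise confirm with, or defer to,
# the user. All names and the thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Inference:
    action: str        # what the system thinks the user wants
    confidence: float  # estimated probability the guess is right, in [0, 1]

CONFIDENCE_THRESHOLD = 0.9  # below this, guessing wrong costs more than asking

def decide(inference: Inference) -> str:
    """Return 'act', 'ask', or 'ignore' for a single inferred intent."""
    if inference.confidence >= CONFIDENCE_THRESHOLD:
        return "act"      # high confidence: automate silently
    elif inference.confidence >= 0.5:
        return "ask"      # plausible guess: confirm with the user
    else:
        return "ignore"   # likely wrong: taking action would be "misbehavior"

print(decide(Inference("dim_lights_for_movie", 0.95)))  # act
print(decide(Inference("preheat_oven", 0.6)))           # ask
print(decide(Inference("unlock_front_door", 0.2)))      # ignore
```

The design choice the authors hint at is exactly the middle branch: a system that cannot infer perfectly should shrink the set of actions it takes silently, not pretend to certainty it does not have.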

I liked the strength of their arguments, with each concern supported by hard evidence of similar failures in the past, and the implications of what a true solution would have to provide.

I wish the authors had offered a new model to solve their mentioned problem of avoiding systems administration. Some intelligent way for end users to provide the home constraints to satisfy would probably go a long way to enabling the users to troubleshoot and maintain their systems.

The Computer for the 21st Century:

The main idea of this paper is that for computing to be a truly profound technology, we should focus on making it invisible in our everyday usage, rather than focusing on the use of computers for specialized tasks. Only when we no longer have to think about using a computer do we gain the full benefit of it as an intellectual tool.

This paper suggests that the approach of "virtual reality" is fundamentally flawed: it goes in the opposite direction of what we need to become more productive. It explores what the authors would consider an attempt at the correct approach - "embodied virtuality" - in making computers absolutely ubiquitous.

I liked how the authors describe what in effect is a ubiquitous computing model based on writing utensils. In the place of sticky notes, they have "tabs", "pads" replace paper pads, and "boards" implement whiteboards with bonus remote collaborative functionality. The unconscious use of technology to improve communication between workers seems to be their ultimate goal, and by digitizing paper, they make it much easier for all records to be shared between workers. But that's still only part of the problem.

One concern I had about the system was the lack of video to aid collaboration. One of the key problems with collaboration tools is that people communicate primarily with tone of voice and body language rather than with words. As a result, a system that does not provide adequate telepresence is still inferior to meeting in person. Color television had existed for decades at the time of this writing; it seems like quite an oversight that seeing people's faces to improve communication is not part of this incredible ubiquitous computing vision.

The authors also seem not to have grasped the incredible quantity of personal information involved in a ubiquitous computing system and the difficulty of providing adequate privacy controls. To be fair, this paper was written well before the rise of the huge social networks (most notably Facebook). The authors seem to have left security and privacy in ubiquitous computing as an exercise for the reader, but the problems of securing data and, even more importantly, managing the relevant controls on that data are much more difficult than was foreseen at the time.

Bryan Trinh - 9/7/2010 18:33:31

At Home With Ubiquitous Computing: Seven Challenges attempts to set constraints on the design space of ubiquitous computing. In particular W. Keith Edwards and Rebecca E. Grinter define ubiquitous computing in the context of the family home. The principle academic question they wish to answer is "What technical, social, and pragmatic challenges must be overcome before computing can be ubiquitous in the home?"

Edwards and Grinter enumerated seven qualitative metrics by which ubiquitous computing devices should be measured. Ten years of history have shown that at least a few of these concerns are of serious importance. What comes to mind first is of course the smartphone. Companies have made computing ubiquitous by exploiting the fact that people tend to carry cell phones with them everywhere. So instead of transforming the environment that we live in, we first start by transforming ourselves. Of the seven challenges, the two most applicable to the mobile smartphone space are "No Systems Administrator" and "Social Implications of Aware Home Technologies."

Mobile smartphones are interesting because I think companies have designed the system so well that we have all become systems administrators. It's easy to be a systems administrator, and if there are any problems we have a very knowledgeable consultant named Google. Those who have smartphones have all ventured onto a marketplace and installed a new application on their phone. With respect to the social changes that occur because of technology, this, I think, is the biggest one. The integration of technology into our daily lives will be seamless because of good design.

From reading this paper, you get the sense that, just ten years ago, when a computer scientist said "ubiquitous computing", his computer scientist friend would think "home". Ubiquity meant sprinkling our homes with computing technologies. It was the logical line of thought: the home is the environment that surrounds people, and more importantly, the home could be properly outfitted with technology by the dweller. Today's efforts, however, seem more focused on the smartphone and the network.

I think that after mobile devices and network technologies are explored more fully, companies will begin to integrate computing devices into other parts of our environment, and the other challenges outlined in this paper will become more important.

A Morphological Analysis of the Design Space of Input Devices attempts to define a design space in which to explore input devices. Card, Mackinlay, and Robertson also define a taxonomy to categorize existing as well as future input devices. By creating a design space that is complete in its description, they hope to adeptly explore and measure possible input devices.

From the morphological design space, designers can more easily understand the range of possibilities and, to a first approximation, calculate the potential success of an input device. For instance, in the closing remarks, the authors note that a possible input device that could beat the mouse in performance would make use of more than just one finger. They came to this conclusion by examining one of the figures of merit, bandwidth - the rate at which an input device can transmit information.
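The bandwidth figure of merit can be made concrete with Fitts's law: a pointing task has an index of difficulty measured in bits, and a device's throughput is that difficulty divided by the movement time. The sketch below uses the Shannon formulation of Fitts's law; the distance, target width, and timing numbers are made-up illustrative values, not figures from the paper.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Device bandwidth in bits per second for one pointing task."""
    return index_of_difficulty(distance, width) / movement_time_s

# Illustrative numbers: move 240 px to a 30 px wide target in 0.9 s.
print(round(index_of_difficulty(240, 30), 2))  # log2(9) ≈ 3.17 bits
print(round(throughput(240, 30, 0.9), 2))      # ≈ 3.52 bits/s
```

Comparing devices on the same set of tasks by this measure is what lets one say, to a first approximation, that a multi-finger device could "beat the mouse": more independent channels can raise the achievable bits per second.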

The finger input device is precisely what many companies are exploring today. Wacom and Apple are both creating products to compete with the mouse that use intuitive hand gestures on a touch pad. Unfortunately, though, they are not fully utilizing the available bandwidth, since they do not track all fingers in the interface language. I suspect this is a business compromise rather than a technical limit, since there is a lot of research on multitouch input devices.

I think an interesting avenue to explore would be a combined hand-and-pen input device that would make drawing applications feel more connected. When first using a Wacom board, many users complain that drawing feels very disconnected because rotating the physical plane does not rotate the digital plane. With a combination of finger and pen input methods, a system could be built to rotate the digital space with the fingers while simultaneously drawing with the pen.

David Wong - 9/7/2010 18:46:34

1. The "The Computer for the 21st Century" paper talked about the coming advent of ubiquitous, rather embodied virtuality, of computing. Just as literacy and writing have come to be natural and "invisible" in everyday society, so too will computers become ever-present in our lives. Our world will soon have hundereds of computers enriching our everyday tasks and these computers will allow us to conduct these tasks in our own natural environment, rather than within the computer's environment. The "At Home with Ubiquitous Computing" paper highlights seven interconnected obstacles to the adoption of the smart home. The prevalent, and possibly all-encompassing obstacle stated in the paper is: "How smart should the home be?". The layers of detail in an answer to this question will address the six other obstacles towards the smart home.

2. The "The Computer for the 21st Century" paper gives a new perspective on the future of HCI and inspires future computer design to become more human-centric. Since the time the paper was published, some of its ideas have already been implemented, such as badges that indicate who you are that can open doors, greet you, etc. In comparing the adoption of computers to the adoption of literacy, the paper makes a good point that computers should progress to become invisible within our everyday lives.

The "At Home with Ubiquitous Computing" paper brings up many valid issues that must be addressed when designing products for the smart home. The paper inspires new research directions. For instance, there must be research done on how to have many different products communicate effectively given that these products will most likely be adopted incrementally at different times. I believe this paper is effective in illustrating how complex the problem of the smart home is and how that affects HCI research in this field and ubitquitous computing in general.

3. I liked many of the points made in the "The Computer for the 21st Century" paper. For instance, I agree that HCI should progress to a point where computers become less and less "visible". Also, I would love for the use of a computer to be more social and "as refreshing as taking a walk in the woods" (11). This is a great direction for HCI, but I don't think the scenario the PARC researchers painted in the paper fits the description. Granted, the paper was written almost 20 years ago and it would have been very difficult for them to predict, but many of the devices in their paper (tabs, pads, and boards) seem to apply technology to old concepts, such as using paper, a desk, and boards. For instance, the desk was at one point a new, unnatural addition to the human environment. So who is to say that desks are now the model to which computers should be fitted? Perhaps computers and HCI can expand human interaction onto a new, innovative platform.

I thought that all the obstacles pointed out in the "At Home with Ubiquitous Computing" paper were valid and offered an insightful and well-thought-out perspective on smart homes. The problem is well motivated, as many companies now are looking at the idea of the networked smart home. For instance, Cisco, a business-grade networking and information systems company, has been looking into this area for many years already. With Cisco being one of many companies trying to penetrate this market, the "accidentally" smart home, impromptu interoperability, and designing-for-domestic-use obstacles all come into play. The other obstacles stated in the paper are also very relevant to the adoption of products for the smart home.

Richard Shin - 9/7/2010 18:52:20

=== The Computer for the 21st Century ===

This paper presents a gushing vision for the future of computing, based on extrapolation from ubiquitous computing technology being developed at PARC at the same time, and by analogy with other technologies which have become ubiquitous in our daily lives (writing and electric motors were two examples presented). The author argues that, instead of people computing with the combination of a monitor and an expensive desktop personal computer, or even laptops untethered from desks, people will compute mainly with a large number of cheap computing devices (called ‘tabs’ and ‘pads’ in the paper) and existing objects which would be smarter (alarm clocks and whiteboards, for example), where all these mini-computers would wirelessly communicate with each other.

Overall, this paper seems valuable to the field in presenting a cohesive, unified vision for where human-computer interaction should go. The paper is scant on details about how the described tabs and pads actually work, or what the user interfaces involving them will be, instead describing in broad terms what possibilities would be enabled by having computing embodied everywhere. In a sense, ubiquitous computing could be considered direct-manipulation interfaces taken to a logical extreme; instead of having user-interface elements which behave like tangible objects, why not have actual tangible objects with which to compute? Unfortunately, while the vision is grand, and is now implementable in many ways, the prophecies in this paper don’t seem to have quite come true yet; computing remains largely concentrated in single devices, rather than many, although those single devices have shrunk in size and become more ubiquitously available to us (from desktops to laptops, tablets, and smartphones).

The paper is self-confident about the vision it presents and the future that it predicts, but it doesn’t seem to really consider the possible negative implications of ubiquitous computing. There have been attempts to deploy, for example, an equivalent of the ‘active badge’ which can communicate the identity of the wearer to devices near it, to track attendance in schools; but they have met much opposition due to concerns about privacy of the wearers. Efforts to embed RFID tags in consumer products (to replace bar codes) and passports (to reduce counterfeiting) have similarly been criticized for enabling remote tracking of those who carry the tags. The paper mentions the possibility of privacy concerns but then simply goes on to say that cryptography could solve these problems.

At Home with Ubiquitous Computing: Seven Challenges

This paper collects and presents some obstacles to the acceptance of ubiquitous computing in the home. At a time when networked, wireless devices were starting to become popular on the market, and people were increasingly aware of the possibilities of ubiquitous computing, the authors present a somewhat contrarian view (especially to the other paper assigned for this meeting) as they feel several (to be exact, seven) issues need to be kept in the minds of ubiquitous computing researchers for this new paradigm to really become successful and accepted.

After reading the previous, rather optimistic paper, I found this discussion of the pitfalls of ubiquitous computing refreshing. Partly, the paper discusses some ways in which ubiquitous computing needs to be easier to use for the intended audience, such as reducing the amount of manual administration needed (to be more in line with traditional home appliances like refrigerators and washing machines, rather than desktop computers) and adapting the technology for home use. However, I thought that the paper’s discussion of the ways in which ubiquitous computing, in a sense, can be too easy to use, rather than too hard, was the most insightful. For example, the authors mention the hypothetical example of Bluetooth speakers automatically pairing themselves with the nearest audio source, in that case in a neighbor’s house, rather than requiring the user to manually connect the speakers to the source. By analogy with previous home appliances, the authors note that while ubiquitous computing may, on the surface, reduce the amount of labor required, changes in societal expectations due to this new technology could instead have the opposite effect and end up disrupting the home environment. As an example, while mobile phones allow their owners to conveniently contact others as well as be contacted, they also make it much more normal for their owners to be contacted and to contact others in what would previously have been considered unusual situations or times, increasing their social obligations.

While the authors point out a comprehensive set of challenges in this paper, they don’t provide any tangible way to measure the ways in which these challenges have been met, or any specific rationale or empirical data to support that these are the most important challenges that ubiquitous computing faces in the home. While the challenges presented certainly cannot be trivially quantified, it would have been helpful to have some measure for them beyond unrelated anecdotes or hypotheses. Analogies and extrapolations from previous technology are provided to help support the validity of these challenges, but it is difficult to know if the concerns that applied then are still valid. Overall, however, the claims in this paper seem sound, and many of the challenges in this paper seem to be still-unsolved problems.

Pablo Paredes - 9/7/2010 18:54:53

Summary for Edwards, K. Grinter, R. - At Home with Ubiquitous Computing: Seven Challenges -

The overall paper presents the notion that ubiquitous computing can be observed as part of daily life inside a home... The overall need is for reliable, productive and calm technologies... The challenges presented could help define the risks of actually deploying these technologies, and the risks of doing it incorrectly, without understanding the home's social dynamics and ergonomics. The authors present a view of the need for an ethnographic study followed by a technical design, which in turn should begin with analysis of the UI and later incorporate the system support behind it.

The notion of an accidentally smart home describes the situation in which many people live, where the technology incorporated into their home settings, especially when it is wireless, i.e. without an explicit connectivity model, demands an ever-increasing learning curve, which can draw attention away from other family interactions. Adding to this situation, the notion of an administrator-free system seems more a "perfect-world" design principle, but one that will never be achieved, as there is always a need for some level of administration, whether in a utility-based model or any other more device-centric model... This time overhead once again intrudes on the fundamental interactions inside the home, and will quickly be judged improper if the user is not getting the key value of home, which is a sense of shelter, family interaction and peace.

Another interesting aspect is the notion of balance between the core and the edges of a networked design. In a setting where the implications and evolution of a device cannot be fully predicted, the design must take a flexible and dynamic approach, one that can adapt to the needs of the device... As an example, symmetric communication protocols based on the older phone systems do not allow for this flexibility, while the opposite is true of data-centric network design... Both the devices and the network should be flexible in their computing and connectivity to accommodate evolving models of usability.

Finally, the inference approach in light of ambiguity proposes that functions should be grouped based on the level of inference attached. I believe this notion should also incorporate a definition of the end user as a dynamic entity who undergoes several psychological/emotional as well as physical changes... A priority analysis of the potential system states requires exhaustive analysis of the different states the human model takes; additionally, there is a potentially circular problem: the system is trying to infer the state of the user, but the user is aware of the system, which in turn alters their cognitive state... As a whole, ambiguity in the semantics and syntax of the informational state of the system presents a complex challenge for the designer, approaching an almost open-ended problem that rests on various assumptions of prior knowledge... After all, the home and its basic utensils have a history of thousands of years of evolution, while computational systems have merely decades of existence... Again, an evolutionary approach might be needed to incorporate new knowledge... Additionally, comparative studies using the Internet could be incorporated to acquire information from settings that face higher-stress situations, for example, homes that demand stricter use of devices, such as homes with mental health patients, children, older adults, etc.

I wish the paper had a better assessment of the actual return on investment (ROI) of adding technology to homes... The simple notions of simplicity and calmness cannot be assessed only from a technology design perspective; first comes the question of what the dynamics and needs of a home member are. For example, many people do not want to make decisions when at home; they want to be led or to be left alone... This could be accomplished with technology, but could in some cases also be achieved by consciously eliminating technology. So, discussing where and when NO technology actually makes more sense than developing technology would have been valuable.

Summary for Weiser, M. - The computer for the 21st Century -

The notion of 21st-century computers comes from a view of computers being less the center of attention and more a background, invisible tool that enables humans to cope with information overload, and that allows us to move from a computer-centric approach to a social, human-centric approach.

The recognition of a notion of egocentrism is made clearly evident, as personal computers are not social computers... If we locate ourselves in our own age, where social networks play an important role, we must still recognize that we are not "networked" individuals but social individuals, who exist mainly in the plane of our inner social circles, where computing has yet to make a larger impact in evolving these relationships, especially to support the integration of individuals across different levels of abilities, intelligences, cultures, creeds, etc.

The challenge then requires computers to vanish into the background, which in turn requires a complete notion of design based on location and size. The author references a system that incorporates different devices and gives as an example pervasive pads that do not have a "personal" attachment, but are treated more as information ports that recognize and support whichever user is using them at a given moment. Beyond the UI design challenge of making information available, the system underlying this new device topology should be able to breathe (grow and shrink as needed).

Additionally, the notion of privacy in these new, highly distributed systems poses a complex problem, especially in more informal settings, where we expect to be able to be "ourselves" and occasionally do things that may be criticized... The notion of a virtual identity, which is often altered, now interferes with our "real" identity, leaving us less space to be creative, as there are tons of eyes watching... The paradox is that the deployment of technology to free us and enhance us could actually be a coercive factor. So the notion of security to overcome the privacy issues arises, but I believe it should not be treated only as a secure-design issue, but also as a perception issue. The user must feel the system is secure, not only know that there are guarantees embedded.

Although the author describes the notion of different levels of connectivity ranges, I believe he should have analyzed the issue of the carrier as a "man-in-the-middle", which is unavoidably part of the equation. If analyzed as a business model, this could have raised some questions related to net neutrality and other regulatory issues that are actually going to affect the path of ubiquitous technology evolution.

Luke Segars - 9/7/2010 18:56:28

At Home with Ubiquitous Computing: Seven Challenges

This paper summarizes modern advances in “smart home” technology and some of the major challenges that are left to overcome before they can become a reality. Smart home technology is a class of ubiquitous computing that is exploring the possibility of deeply integrating technology into our living spaces in the form of adaptive controls and systems that handle many of the menial tasks required to keep a home running smoothly.

The paper wisely includes a number of challenges that are directly targeted at the person who will be using the technology: an average person without specialized training. One of the challenges focuses on the fact that we cannot expect users to become systems administrators; this is already somewhat expected (and very troublesome) for a number of today’s technology users with devices like wireless routers and network configuration. Continuing to expand the number of services offered by a home would naturally seem to inflate the technical expertise required to manage the services; this is a problem that has to be addressed before technologists can expect any sort of widespread adoption from the masses.

Several of the challenges actually build on this point, asking whether adding "smart" features to a home will actually reduce the amount of work done by its inhabitants or merely shift it away from the task itself and onto maintenance of the systems performing the task in their place. The authors give the example of the washing machine as a preexisting appliance that doesn't necessarily reduce the amount of work but shifts the burden of work onto a single individual. This is an interesting point; in my personal experience, I can think of several occasions where I have spent more time setting up a tool or application than I would have spent performing the task without it (especially in technology).

Certain technologies, like interactive furniture, are already beginning to change the way that people are viewing systems that they once considered to be static. These systems haven't made it out into mainstream markets yet so it is hard to predict whether general users will accept them as "useful." The desire for users to adopt technologies will likely be another barrier that must be crossed -- many people may not "see the need" for deeply incorporating technology with their home.

I found the challenges posed by this paper to be both realistic and all-encompassing. It is interesting that the majority of the challenges the authors mention are actually important social ones rather than technical ones. The social problems may often be the more difficult ones to address, given the "expert" attitude that a large amount of modern technology has. Similarly to The Computer for the 21st Century, this paper describes problems that require us to reevaluate the way we view technology today. Setup and management of software tools in the home has to become something that anyone can do, which is a tremendous challenge in itself. Nevertheless, if the appropriate social and technical hurdles can be cleared, it seems that the “smart home” has the potential to open an entirely new market for ubiquitous technology.

Aditi Muralidharan - 9/7/2010 18:58:39

The two papers examine two sides of the coin of ubiquitous computing. The 1991 Scientific American article, "The computer for the 21st century", reads like an attempt to get laypeople excited about the possibilities of ubiquitous computing. It talks extravagantly about intelligent window displays, alarm clocks, and coffee machines, not to mention tabs, pads, and boards. The other article, "At home with ubiquitous computing...", is a scholarly paper that takes a somewhat more level-headed point of view, setting out the technical, social, and practical challenges that the manufacturers and designers of ubiquitous computing devices have to face.

Both articles made accurate predictions. Devices like the iPad and RFID tags could be seen as modern incarnations of the "pads" and "smart badges" in the 1991 Scientific American article. And on the other side, the scholarly article makes sound arguments about the importance of solving the problems of open standards, interoperability, privacy, and reliability - problems that have not been overcome ten years later.

In its optimistic, free-wheeling way, the Scientific American article makes a valuable contribution by predicting specific new ways in which ubiquitous computing could be put to use in our lives: it gives researchers concrete goals to work towards. The UbiComp article frames the issues surrounding ubiquitous computing but does not offer any concrete solutions. It identifies the problems that must be solved in order for ubiquitous computing to work.

Aaron Hong - 9/7/2010 18:59:09

The paper "At Home with Ubiquitous Computing: Seven Challenges" by Edwards, et al. is about the challenges of making the smart home a reality. It poses seven challenges: (1) the accidentally smart home, (2) impromptu interoperability, (3) no systems administrator, (4) designing for domestic use, (5) social implications, (6) reliability, and finally (7) inference in the presence of ambiguity. The paper describes its discussion as "based in the technical, social, and pragmatic domains."

One of the most interesting points is the first one: that smart homes will not be designed from the top down. There will be no master architect who creates all the devices specifically for the purpose of having a smart home. A home will get smarter incrementally and with disparate parts. I see that as one of the biggest challenges to the smart home, for practicality and legacy reasons. Although future homes may be designed with these features, there are many old homes out there that are 50-100 years old. Homes have a slow turnover rate, unlike cars, which are replaced much more frequently. So these practical considerations are, as the paper says, one of the biggest obstacles.

In the paper "The Computer for the 21st Century," Mark Weiser of Xerox PARC discusses different ways to make computers more ubiquitous in our society. The most ironic thing is that he says, "My colleagues and I at the Xerox Palo Alto Research Center think that the idea of a "personal" computer itself is misplaced and that the vision of laptop machines, dynabooks and "knowledge navigators" is only a transitional step toward achieving the real potential of information technology." That has not held true, though: although laptops were predicted to be a transitional step, they have become more entrenched than Weiser may have imagined. It is true that there is probably more to be innovated, the computer scratchpad being kind of like the iPad. There is more to be done.

Thomas Schluchter - 9/7/2010 19:00:35

    • At Home with Ubiquitous Computing

The paper addresses the challenges of ubiquitous computing technology by looking at some of the obstacles to integrating it into people's living environments.

I find the perspective taken by the authors very valuable: Shifting the focus from ubicomp as an abstract technological fantasy to a puzzle piece in an already complicated social world. This highlights that there are design problems that go beyond the technical aspects, and that might be outside the designers' control after all.

The conceptual problems raised in the paper on Direct Manipulation Interfaces are augmented here: There is the feedback loop between user and system in which the user needs to construct a model of the system to successfully interact with it. In the context of a "Smart Home", the system's output might be much more subtle than that of a DMI. Also, by attempting to sense the user's needs (or even his/her intentions), the system proactively intervenes in the conceptual space. These circumstances potentially widen the gulfs of execution and evaluation, making ubiquitous systems difficult to learn.

It is also fascinating to think about the consequences of delegating responsibility to ubiquitous technology. By making inferences and 'acting' on these inferences in ways that are materially consequential (regulating temperature, locking doors, allowing access to certain areas), Smart Home systems shift initiative and decision making away from the occupant-user. Even if the user can control the system and reverse its decisions, (s)he reacts rather than taking initiative. It would be interesting to know how people react to long-term exposure to these conditions: Do they perceive a stubborn piece of technology merely as an annoyance, do they eventually give in and adapt, or do they develop anxieties?

    • The computer for the 21st century

The article explains the vision of ubiquitous computing at Xerox PARC in the early nineties. The central argument is that only a technology that fades into the background and enhances the world unobtrusively is truly revolutionary. Ubiquitous computing has, in the author's mind, the potential to do just that.

One of the most striking things about this article is the vision that *the* computer of the 21st century is no longer *a* computer but a mass of (literally and figuratively) thin clients. The real power of ubicomp lies in the network that connects the devices and makes them intelligent. There is a clear awareness of the potential problems in terms of security and privacy, and we see the same discussion surrounding cloud computing today. Unfortunately, Weiser's reaction to the problem he articulates is solely technological: he places his trust in the development of more powerful security mechanisms.

The scenario that illustrates the potential of ubicomp technology made me think about the practical implications of having computing power everywhere: it also means having devices everywhere that can potentially be interacted with. Already, anecdotal evidence suggests that there is a constant influx of signals from the devices that surround us that negatively affects our ability to focus on tasks. The "activation" of the world, as Weiser calls it, might supercharge this problem.

In a way, he tries to resolve the issue by claiming that the informational magnitude of a walk in the woods is higher than that of using standard computing technology, yet we don't experience it as such. It remains unclear what the concrete design implications of this insight are for ubiquitous technology. As the first article outlined, it will be infinitely more difficult to achieve the interweaving of real world and technology than it appears to be in this article.

Anand Kulkarni - 9/7/2010 20:54:45

I wish to pass on this week's commentary.

kenzan boo - 9/7/2010 21:39:42

Computer for the 21st Century – Weiser

The article describes how the most profound technologies are the ones that disappear into our daily activities. These are technologies like writing, so essential and unnoticeable that they are ubiquitous. He contrasts this with the computer, which is a device we have to actively go to in order to use. On the opposite end of ubiquitous technology is virtual reality, where the user has to escape from his or her own real world and put on glasses to enter a separate world, completely isolated from their own. For computers to become a fundamental technology, they need to disappear into our surroundings; such a change is one of our own psyche, not the technology. I agree with many of Weiser's observations: computers need to integrate themselves into daily human interaction so that they become native to use rather than something foreign that users have to go into, separate from their world. Some of Weiser's hopes are becoming more and more of a reality, with TV screens almost everywhere displaying information, giving updates and news on what is going on in the world at offices and other high-traffic areas. Even our own Cory Hall has a display of computer screens showing news and updates. They have "disappeared" into our environment and become part of the wall of Cory Hall. Students walk by and glance at the screen on the way to class without having to exit their current environment and enter another virtual world. The screens become virtual bulletin boards.

At Home with Ubiquitous Computing: Seven Challenges – Edwards and Grinter

The article details and provides examples of the main challenges to integrating computers into a smart home. The seven are: lack of infrastructure and initial setup (having to add on without the whole house being built to support it); impromptu connections and protocols; lack of sysadmins; designing for non-technical, domestic use; social implications and resistance; reliability; and what to do in case of ambiguity. The two I found most profound, and that I believe will be the hardest issues to overcome, are social implications and reliability. I remember the old Disney movie Smart House exemplifying the horror of what happens to a family when the house becomes sentient and tries to remove its owners. This is one worst-case scenario, but it is a prevalent thought among many who oppose technology; it stems from many other fears of robots going rogue, and it is a critical part of social resistance to something like a smart home. In this view, we are relinquishing too much of our independence to a computer. The other part is reliability. If the house crashes like a computer does, the users could very well die if they become trapped inside, or if the computer controls a critical component like gas used for cooking and, on error, burns down the house. These errors may be one in a million, but the cost of that one accident is human lives. And when that reaches the public media, the public's trust in the system will be lost.

Shaon Barman - 9/7/2010 23:59:30

The Computer for the 21st Century

Weiser predicts how computers will be used in the future as they become ubiquitous in day-to-day life. He predicts that in 20 years, almost all objects will have a computer of some sort built into them.

What struck me about this paper is how relevant it is with computing today. Even though it was written more than 20 years ago, Weiser brings up many of the issues with smart phones today, such as privacy and location based services. And his final thought that ubiquitous computing would reduce information overload is slowly being realized: in today's world cell phones and the internet allow people to quickly find very localized, detailed information in seconds.

In this paper, he mainly talks about tabs, pads and boards and how they will be used in day to day life. While this paradigm has not been realized, cell phones and the cloud have made computing ubiquitous. Computers are also ubiquitous in many household objects, such as your car, but most of these "computers" are not connected to the internet and do not directly communicate with people. Weiser also points to software and networks as bottlenecks for the spread of computing, while it seems like currently, most limitations are power and hardware based.

Overall, Weiser's general predictions about computing are accurate, but he misses several key elements. Most devices aren't as "connected" as he envisioned; I feel this lack of connectedness is due to the complexity of communication between different devices. And while he sees low-power devices as a requirement for pervasive computers, he does not identify power as a major problem.

At Home with Ubiquitous Computing: Seven Challenges

Edwards and Grinter discuss the difficulties of integrating technology into people's daily lives at home.

The list they go through is quite comprehensive and tackles both technical and social issues. Some of the issues they address, such as the "accidentally smart home," are predictable and can be addressed preemptively in order to create a good experience for the user. Other problems, such as how people will adapt and change due to technology, are more difficult to predict. Social networks and cell phones are good examples of this. With this infrastructure in place, it seems that people's views on what is considered "private" data are changing, with more people opting to give out more information, such as their location, in order to get new services.

Overall, even though computers are becoming more pervasive, technology in the home is not advancing at the same pace. Besides advances in TV and the internet, much of the technology is the same as it was 10 years ago. There does not exist a network of sensors that can react to a person's wishes and automatically change the home environment. I feel much of this is due to the issues brought up in the paper. The home is a mashup of technologies, brought together without a single plan. Getting unrelated technologies to work together is quite difficult, especially when the average person must fix all the quirks. In order to see more technology in the home, companies must cooperate to create technologies that work together and are also reliable.