UI Toolkits

From CS260Wiki

Lecture Slides

File:Cs260-slides-20-toolkits.pdf

Extra Materials

Discussant's Slides and Materials

File:UI ToolKits PPT.zip

Reading Responses

Charlie Hsu - 11/9/2010 10:54:50

Past, Present, and Future of User Interface Software Tools

This paper analyzed the history of user interface toolkits. The authors examined past trends and toolkits for their successes and failures, looked into the current state of the art and made predictions on future technological trends, recommending approaches to UI toolkit design based on all of their analysis. Some themes concerning the success of UI toolkits that the paper cited were: well-defined problems to address, low thresholds and high ceilings for toolkit use, leading designers down the 'path of least resistance' to good interfaces, creating predictable tools, and being able to attack a moving target, keeping pace with new interface technology.

I thought many of the themes of good UI toolkits were relatively obvious and not particularly insightful. Observing that high-ceiling, low-threshold UI toolkits are optimal is a no-brainer, and it applies to any sort of design/user-creation tool. Tools that address clearly defined problems and act predictably also seemed like catch-all concerns for any sort of HCI design; they come down to the well-known necessities of task analysis and user-centered design, and the tradeoff between automated/predictive interface agents and direct manipulation.

However, I found much of the historical context enlightening. Even though the paper was written in 1999, many of its predictions were sound and are still playing out today in the way the paper anticipated. Concrete examples are the continued march of Moore's Law, the increasing ubiquity of computing in mobile devices, and the current fights over mobile application development platforms. Recognition-based user interfaces have sprung to the forefront of UI technology; touch-based interfaces and other rich input media are becoming ever more present in the mainstream (and are even a main theme of our class).

As more of a side note, I also personally found the "Further Issues for Future Tools" section much more enlightening than the themes. The problems it addressed seemed to open up new avenues for continued HCI research (usability for users of different ages, designer-created UI library elements, minimizing cognitive load, etc.). I particularly liked the opaque vs. transparent component discussion… might I add that I personally am enamored with the semi-transparent Mac OS X Terminal window?


Reflective Physical Prototyping

This paper introduced d.tools, a UI toolkit focused on being low-threshold and high-ceiling for designing functional physical prototypes. d.tools offered a statechart-based visual design environment aimed at providing a low threshold for early prototyping, extendable with Java code via the d.tools API. d.tools also offered hardware extensibility at multiple levels, and integrated test and analysis of designed prototypes.

The automated video analysis portion of the paper is extremely relevant to my own research project for the class. The authors cite one of our basic research problems to address: "Manual video annotation and analysis of usability tests is enormously time consuming. Even though video recording of user sessions is common in design studios, resource limits often preclude later analysis." d.tools addresses the problem by linking timestamps from the video to the statechart; we intend to do so by linking timestamps from the video to researcher notes taken during video capture. d.tools has the advantage of an integrated design and test process, allowing actions to easily be linked to the specific UI element in the design window. This integration also allows strong automation of video annotation, by color-coding interaction time with each UI element and creating a rich, information-packed video timeline. Unfortunately, we are unable to do this in the scope of our research project, since we do not have this sort of integrated environment. However, we have already taken the idea of using symbols to quickly mark positive and negative sections for later review. The d.tools video analysis also mirrors our ideas; annotations will be highlighted as video plays (video-to-statechart), video can be indexed through annotations (statechart-to-video), and there is search capability for text-based annotation.
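A minimal sketch of the timestamp-linked annotation idea our project shares with d.tools (all names here are hypothetical illustrations, not the d.tools API or our actual implementation):

```python
import bisect

class AnnotatedSession:
    """Links researcher notes to video timestamps for later indexing."""

    def __init__(self):
        self.annotations = []  # (seconds_into_video, marker, note), kept sorted

    def note(self, t, text, marker="."):
        # '+' / '-' markers quickly flag positive/negative moments during capture
        bisect.insort(self.annotations, (t, marker, text))

    def active_at(self, t):
        """Annotation to highlight while the video plays (video-to-notes)."""
        i = bisect.bisect_right(self.annotations, (t, chr(0x10FFFF), "")) - 1
        return self.annotations[i] if i >= 0 else None

    def seek(self, query):
        """Index the video through annotations (notes-to-video): text search."""
        return [(t, m, s) for (t, m, s) in self.annotations if query in s]
```

The same structure supports both directions d.tools demonstrates: highlighting the current annotation during playback, and jumping the video to an annotation found by search.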

I felt d.tools adequately addressed some of the themes introduced in the first paper in this week's reading. The statechart-based design process targeted a low threshold, and the extensible Java code and API raised the ceiling further. The moving-target problem was addressed by hardware extensibility and Wizard of Oz support. d.tools specifically targeted functional physical prototypes, which addresses a concrete, specific area. However, the design space of physical prototypes is so large that guiding designers down the 'path of least resistance' to good prototypes is difficult to imagine; I was unable to find evidence of such guidance in d.tools.


Linsey Hansen - 11/9/2010 15:28:30

Reflective Physical Prototyping through Design, Test, and Analysis


In this paper, the authors introduce a toolkit, d.tools, which is meant to assist designers with early-stage prototyping for ubiquitous computing. They ran three different studies in order to determine the effectiveness of, and improve upon, the toolkit.

In the beginning of the paper, the authors bring up the concept of reflective practice, which basically says that people can often learn more (or at least additional information) about a potential design by working it through and creating it, as opposed to just thinking about it. While this is probably not something I have ever doubted, it isn't necessarily something I think about when I create a prototype. I generally think of prototypes as being more for improving other people's understanding than the designer's, but as the paper states, building a prototype can improve the designer's understanding as well.

One thing d.tools implements in order to provide a low threshold for early-stage prototyping is letting people prototype with flow charts. I think this is pretty neat, since most people make flow charts anyway before building the actual prototype, and might then use the flowchart to guide early prototyping interaction. d.tools, however, combines creating a flowchart and creating a prototype, so that the flowchart becomes, and helps define, the prototype. This is great because updating one automatically updates the other, and it makes it much easier to keep track of the two across various iterations.
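The flowchart-is-the-prototype idea can be sketched as a tiny state machine whose diagram data (nodes and arrows) is also its executable behavior. This is a toy illustration under my own assumptions, not the d.tools implementation:

```python
class FlowPrototype:
    """A toy flowchart whose nodes and arrows are directly executable:
    editing the chart *is* editing the prototype's behavior."""

    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (state, event) -> next state

    def connect(self, src, event, dst):
        # drawing an arrow in the chart adds a behavior to the prototype
        self.transitions[(src, event)] = dst

    def fire(self, event):
        # a hardware input (button press, sensor reading) drives the chart
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical music-player prototype built entirely from the chart.
player = FlowPrototype("home")
player.connect("home", "play_pressed", "playing")
player.connect("playing", "pause_pressed", "paused")
player.connect("paused", "play_pressed", "playing")
```

Because there is only one data structure, there is nothing to keep in sync between the diagram and the prototype across iterations.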


Past, Present, and Future of UI Software Tools

As the title suggests, the authors in this paper mostly discuss the effectiveness of UI software tools. They begin by discussing past successes and failures, then go on to predict what might be required of future tools based on past/present themes and where technology is headed.

One section I found interesting was the one that discussed "Promising Approaches that Have Not Caught On." While I do see why these didn't work with the desktop paradigm, I wonder whether implementing some sort of "new" system design (perhaps via ubiquitous computing) would make some of these approaches a legitimate way to handle things. Concepts like the "user interface management system" are definitely interesting ideas; they just don't work with current desktops. That doesn't mean, however, that someone cannot create some new device for which the concept would then be valid.

The other part I found interesting was the predictions. Since this was written in 2000 rather than the early 90s, I suppose it isn't as impressive, but the authors were still pretty dead on with their predictions of what was to come. I suppose that at the time the article was written, research on these things was already in progress.



Aditi Muralidharan - 11/9/2010 16:14:52

In "Past, present, and future of user interface software", Myers et al. give an overview of the software tools UI designers have used in the past and where such tools might go in the future. They cover a few themes: the influence that UI tools have over the interfaces produced with them (the path of least resistance), the ease of learning such tools, the complexity and predictability of the interfaces that can be produced with them, the variety of things that can be done using them, and whether they became obsolete quickly.

The authors then give examples from the past of good and bad UI software with reference to these themes. They claim that UIs are about to break out of the "desktop box" because commodity computers, ubiquitous computing, and speech- and image-recognition technologies will create the need for very different kinds of interfaces. As a result, new UI toolkits will need to be developed and will have to extend into the realm of rapid device prototyping.

The second paper, six years later, gives an example of exactly such next-generation UI software. d.tools, a rapid prototyping interface for physical electronic devices, makes it easier for designers of physical devices with embedded sensors and controls (as opposed to the desktop software products of the past) to build early physical prototypes that "look like" and "work like" their designs. This allows them to try their designs out in the physical world; they are no longer limited to paper prototyping. This design-by-doing lets them discover aspects of their ideas, and constraints on their designs, that they would not have been able to otherwise.

I don't design physical devices, so the second paper is less relevant to me than the first. The first paper puts into words all the arguments for using a toolkit for a component rather than building one yourself.


Richard Shin - 11/9/2010 17:10:54

Past, Present and Future of User Interface Software Tools

This paper describes the importance of user interface tools, summarizes past developments in them along with what made them successful (or unsuccessful), and presents what the authors consider promising and future prospective approaches to user interface tools. The authors first identify themes for evaluating user interface tools and explain the reasons these systems have been successful or unsuccessful through them.

Overall, this paper seemed mostly to summarize other work, rather than to contribute any significant new ideas or systems on its own. Nevertheless, I felt that the paper provided a useful overview of existing systems that concisely describes which approaches have found success, providing guidance for building future systems. The themes for evaluating tools in particular seem helpful to consider when creating a new user interface tool.

While this paper focuses on user interface tools, it seemed to view too many aspects of HCI through this singular lens. In particular, I wasn't convinced that user interface tools were the right viewpoint from which to consider, for example, ubiquitous computing; that arose mostly out of the greater availability of computation for embedding into many devices, and the most important concern remains ensuring that computers are actually ubiquitous. It's also unclear that tools would really help with solving the new user interface problems that arise with new form factors and computing methods; new paradigms for how to construct user interfaces seem more pertinent than building tools to somehow automatically accommodate the new platforms.

Reflective Physical Prototyping through Integrated Design, Test, and Analysis

This paper introduces "d.tools", a system which enables iterative design in the prototyping of computer user interfaces and electronic appliances. The authors identify iteration as a core concern in designing interfaces, where building imitations of the final product helps designers evaluate and evolve the design, compared to simply reasoning about it. d.tools offers three main components: a visual programming tool for designing the interactions of the device, a library of high-level, interoperable hardware components for prototyping the devices, and tools for recording and analyzing test interactions with prototypes.

By unifying most aspects of the reflective design process, this paper contributes an interesting and useful system that enables quick prototyping of electronic devices. Given what we have learned from previous papers about the challenges in design, supporting this iterative workflow lets designers solve closer and closer approximations of the final problem, which seems particularly helpful when the final problem is intractable. In particular, the test-session analysis tools appeared highly useful for debugging problems, even more so than software-only debugging tools; the types of information recorded don't seem particularly novel, but the idea of showing them all in conjunction does.

While the paper addresses the topic of hardware extensibility, I fear that the system might be insufficiently or too inconveniently extensible, and end up lacking some critical feature for building a desired prototype. It seemed that the authors largely considered a fixed set of hardware and software tools, and I wish that they had placed greater weight on ensuring that the overall system is easily extensible so that it can be more useful to those wishing to build a wide variety of prototypes.


Luke Segars - 11/9/2010 17:26:47

Reflective Physical Prototyping through Integrated Design, Test, and Analysis

This paper describes a system called d.tools that enables designers to create fast iterative prototypes. The d.tools interface emphasizes visual design over programmatic design and allows for the simple modeling (and remodeling) of possible user actions through a drag-and-drop graphical interface. The authors emphasize that design is something that is often best achieved through iteration and user-centered thinking; both of these aspects are emphasized significantly more strongly in d.tools than they are in traditional development environments.

The mixed-view paradigm (using visual drag-and-drop as well as text-based programming) seems like an effective approach to both user-centric and rapid development. Additionally, one of the biggest advantages from a design perspective is the ability to get immediate feedback on the real-world device connected to the PC. Overall, this tool seems like an effective and logical way to experiment with iterative prototypes. I'm having trouble finding downsides to the authors' approach, and it looks like a number of significant features were added to address users' concerns.

The evaluation stage of the paper suggested that the authors considered a number of use cases d.tools might be applied to. The evaluations seemed well thought out and covered a large breadth of potential users. One group that was missing, and may not be considered part of the intended audience, is novice users. The subjects interviewed in the evaluations were all familiar with interaction design and how user interfaces work, but it is unclear how someone without this knowledge would benefit from d.tools. I would argue that the usefulness of this tool for teaching new designers (particularly non-technical ones) should be investigated, because it seems to hold a lot of potential. I suspect that they may see even larger relative improvements in performance (time and quality) than experts, due to the ease of experimentation and a paradigm that correlates strongly with how they already think about interaction. d.tools seems to show strong potential as an educational tool for teaching design, superior to a programming-driven model or even Wizard of Oz prototyping.


Siamak Faridani - 11/9/2010 17:33:26

Past, Present and Future of User Interface Software Tools

In this paper, the authors look at past HCI research, hoping to find trends that can guide us in predicting the future of user interfaces.

Almost all of our interfaces are built upon a rich body of research conducted from the 70s to the 90s. As a result, almost all of our desktop applications use similar interaction techniques, and skill in working with one UI translates into skill in working with other kinds of packages.

A number of UI elements that are now part of every operating system or UI design tool were still in the research phase in the 70s; after the Macintosh appeared, many of these elements became part of the Mac and other systems. Window managers and component systems are two examples.

The authors also point out a number of research themes that have not yet appeared in final products. For example, we still do not use tools based on formal languages.

By using historical patterns, the authors try to predict the future of HCI. I am surprised that they totally overlooked online/web-based UIs. Perhaps they consider web-based UIs an extension of desktop UIs.

It is interesting to see the idea behind the second paper foreshadowed in the section "Tools to rapidly prototype devices, not just software." For the last couple of years we have seen an increase in this category: MakerBot kits and Arduino parts, for example, are used to design new hardware, and tools like Microsoft Kinect are examples of recognition-based user interfaces.

I found Figure 1 confusing and inaccurate. I am also not sure whether 3D has appeared in UIs at all (there are Compiz effects in Linux, but I am not sure those count as a 3D UI).

The second article has a fascinating idea. The authors combine existing programming models in d.tools (flow programming of the kind used in tools like LabVIEW and Simulink), and Arduino-style components allow tight integration of hardware and software. The tool provides quick prototyping and allows researchers and HCI people to iterate on their ideas quickly.

The video on the website was also very helpful in conveying the idea. I am interested to know what the next step is going to be.


Matthew Can - 11/9/2010 18:17:44

Past, Present and Future of User Interface Software Tools

This paper begins with an overview of the history of user interface software tools. The authors describe the factors that contributed to the success or failure of various systems. Then they provide their predictions for the future of UI tools based on the computing paradigms they see emerging.

In evaluating successful UI tools, the authors came across several themes common to these systems. For example, the best tools have predictable behavior. An ideal UI tool will have a low threshold and a high ceiling, although the authors say that current (1999) successful systems are either low-threshold and low-ceiling or high-threshold and high-ceiling. For me, the path of least resistance is a particularly important theme. The idea there is that successful UI tools direct the developer toward making the right decisions. I think this principle can help explain why iOS apps tend to have better UI design than Android apps; it has less to do with the skills of iOS developers and more to do with Apple’s UI tools guiding the developers to do the right thing.

This paper’s contribution to HCI is the set of guidelines for building successful UI tools, distilled from the analysis of past tools. A further contribution is the discussion of user interfaces for future computing paradigms, along with suggestions for designing the next generation of UI tools. However, it seems hard to measure this kind of contribution because it is fundamentally the authors’ prediction (albeit an educated prediction). In hindsight, we can say that the suggestions were useful because many of the predictions played out (or alternatively, the predictions came true because the suggestions were followed). Recognition-based, in particular gesture-based, techniques have taken off commercially.

I liked that this paper put an emphasis on developing UI tools for non-programmers. Interactive graphical tools, for example, are accessible to people like graphic designers because they allow users to control some aspects of UI development through direct manipulation of interface objects. Hypertext is another good example; designers can create web pages with a simple markup language. In both examples, the common thread is a low threshold. The high threshold problem is also what plagues end-user programming. The authors suggest that end-user programming facilities be included as part of UI toolkits rather than being incorporated at the application level. This sounds interesting, but I don’t see how the programming model can be both sufficiently general (for cross-application uniformity) and customizable (for application-specific needs).


Reflective Physical Prototyping

This paper presents d.tools, a toolkit for physical prototyping. The system is motivated by the theory of reflective design, the notion of working the design through rather than thinking it through. Furthermore, d.tools is designed for rapid iterative prototyping.

d.tools uses a statechart-based visual design tool to author the hardware interface. This tool has a low threshold, so it is easy for visual designers to learn. The d.tools architecture provides a software interface to the hardware, hiding the complex hardware details from the designer. However, the d.tools architecture is also extensible, allowing skilled users to extend the library to additional hardware components. d.tools also facilitates the testing of physical prototypes by recording video of a user’s interaction with the prototype, along with a log of the interaction events.

While the statechart approach is a good way to keep the threshold low for non-programmers, it seems that it cannot successfully scale to complex physical prototypes with many components, states, and transitions. It would be too difficult to program a complex device with the visual tools because of the visual clutter. d.tools does provide the parallel statechart mechanism to address this problem. But, this only abstracts away parallel functionality. A potential solution might be hierarchical statecharts, so that one statechart can be condensed into a single component in another statechart.
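The hierarchical-statechart suggestion could look something like the following sketch, where a composite state condenses a whole sub-chart into one node of the parent chart. This is my own hypothetical illustration, not part of d.tools:

```python
class State:
    """One node in a statechart; edges map input events to target states."""

    def __init__(self, name):
        self.name = name
        self.edges = {}

    def on(self, event, target):
        self.edges[event] = target
        return self

class CompositeState(State):
    """Condenses an entire sub-statechart into a single node of the parent
    chart, hiding its internal states until the composite is entered."""

    def __init__(self, name, entry):
        super().__init__(name)
        self.entry = entry  # start state of the nested chart

class Statechart:
    def __init__(self, start):
        self.stack = [start]  # active states, innermost last
        self._descend(start)

    def _descend(self, state):
        # entering a composite also enters its nested chart's start state
        while isinstance(state, CompositeState):
            state = state.entry
            self.stack.append(state)

    def fire(self, event):
        # the innermost active state gets the first chance to handle the event
        for depth in range(len(self.stack) - 1, -1, -1):
            state = self.stack[depth]
            if event in state.edges:
                self.stack = self.stack[:depth] + [state.edges[event]]
                self._descend(self.stack[-1])
                return self.stack[-1].name
        return self.stack[-1].name  # unhandled event: state unchanged

# Toy music player: 'playing' condenses a two-state track sub-chart.
track1, track2, off = State("track1"), State("track2"), State("off")
track1.on("next", track2)
playing = CompositeState("playing", track1)
off.on("power", playing)
playing.on("power", off)  # one outer arrow instead of one per inner state
```

The clutter reduction comes from the outer chart: "power" needs only one arrow on the composite rather than one from every inner state.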

Something I really liked about d.tools is that it allows the designer to program the prototype by demonstration. This is a truly reflective process, unlike the traditional trial-and-error process of setting parameter values.


Thejo Kote - 11/9/2010 18:50:22

Past, present and Future of user interface software tools:

This paper focuses on tools used to build user interfaces. Myers and co-authors survey the history of these tools and suggest themes that could be helpful in the design of the next generation of user interfaces. Their motivation for this survey is their belief that, at the time the paper was written, the world was on the cusp of an explosion of new types of devices and UI paradigms, which would need new tools for their creation. They point out that, for the most part, user interfaces have been a bit-mapped display for output with a mouse and keyboard for input. Their prediction that devices of other form factors, mobile or otherwise, would soon be widely used is slowly coming true.

The themes they identify when evaluating past tools include how difficult the tools are to learn, the parts of the UI a tool addresses, the path down which the tools lead designers, predictability, and how well the tools kept up with the constantly moving target of UI paradigms (until those stabilized). They also provide an overview of the tools that worked (window managers, event and component systems, etc.) and those that didn't (UIMS formal-language-based tools, model-based automatic techniques).

In the final section the authors address the changing nature of computing and how it impacts UI design tools. This section is a treasure trove of ideas for research topics. I realised after reading it that a number of papers we have read in class, published in the last decade, build on ideas discussed here. One area that I feel did not get a lot of attention was the design challenges for computing devices that are not at the cutting edge. With the immense popularity of mobile phones around the world, the only computing device billions of people will use for a long time is a cheap cellphone. I think there is a lot of room for the development of tools that target applications in that area.


Reflective physical prototyping through integrated design, test and analysis:

In this paper, Hartmann and co-authors present d.tools, a tool that enables iterative prototyping of hardware devices. The motivation for this work is the very important role that iteration plays in the design process. d.tools attempts to let designers of hardware UIs iterate at a much faster rate, which then, hopefully, leads to better designs. This is the essence of the reflective practice the work focuses on: that designers should be able to work through designs, not just think through them.

d.tools makes three main contributions: interaction techniques and architectural features that help with prototyping, an extensible architecture for physical interfaces, and tools for the integration of design, test, and analysis of hardware UIs. The paper presents how each aspect of d.tools works, along with an evaluation of the hypothesis that it enables faster prototyping while allowing users to focus on the design of the information appliance.

Iterative design with hardware is always a challenge, and I think d.tools is an innovative contribution to the domain. It is an implementation of the observation made by Myers and co-authors about the need for tools that enable rapid iteration on hardware user interfaces, given the increasingly important role that portable and ubiquitous computing devices are playing.


Matthew Chan - 11/9/2010 18:52:57

Reflective Physical Prototyping ... d.tools

In this paper, the authors introduce d.tools as a new way to rapidly prototype physical interactions. This paper is important because it addresses the difficulties that designers and developers often encounter when making prototypes, especially when they lose time to implementation tinkering rather than design. Moreover, the d.tools suite enables the user to quickly design, test, and analyze (and iterate through this three-part routine). The most intriguing and fascinating part of the paper is how it lowers the threshold for putting hardware together. For their GPS navigation example, designers/developers can set the limits for the accelerometer, easily make a statechart, and access a variety of hardware components. The methodology was to test it on master's students and at design firms, and the results were striking, such as participants spending more time on design thinking. This was shown in the range of different projects and interfaces, such as a color-mixing interface, the augmented clothes rack, etc.

I'll admit that this paper was a little hard to follow, because it was all text describing the system, with a lot for me to keep track of. Luckily there was a demo video online that shows d.tools in action (and Bjoern with dark hair). The paper might relate to my field of work soon, because many areas in HCI and CS need some form of prototyping, whether as a proof of concept, for designing an interface for the first time, or for iterating to improve on an idea.

One of the great things the authors explored was the limitations. Simply put, d.tools did not deal with Flash, Phidgets, or other third-party applications. Blind spots were thoroughly explored, such as the lack of software simulation, but in 2010 a possible new blind spot is dealing with capacitive multi-touch devices; the hardware might be harder to support and be a limitation in itself.


Past, Present, and Future of User Interface Software Tools

One of the neatest things about this paper is that it is a comprehensive overview of HCI and user interfaces over the last four decades. The authors thoroughly explore the successes and failures (such as the concept of a moving target). The authors believe that the tools designers/developers use are a great limiting factor on innovative designs; however, the homogeneity of toolkits provides familiarity and speed when developing software for various operating systems. One important highlight is the trend that UI tools have either a low threshold and a low ceiling or a high threshold and a high ceiling. Why not a low threshold and a high ceiling?

After listing many successful examples (and the failures), the authors explore present-day ubiquitous devices and how the future will need to change its assumptions (such as having the person's full attention, both hands, a mouse and a keyboard, etc.). Moreover, the authors use the WWW as the best example of a low threshold and high ceiling, because it was easy for anyone to author webpages. The authors also note that the WWW and computers were shifting from computation to communication. Items such as the PDA and other wallet-size devices were mentioned, along with how their UIs would have to change because of, for example, screen size.

This paper is very important and relates to everyone in the field of HCI. New devices are coming out much more quickly now; PDAs don't even exist anymore. It's now essentially smartphones, tablets, and laptops. Some blind spots of the paper are that the authors might not have anticipated how quickly PDAs and the stylus would become obsolete, or the rise of multi-touch technologies, social networking/crowdsourcing on the web, etc.


David Wong - 11/9/2010 18:54:37

1) The "Future of HCI" paper discussed the history of UI tools that have been developed, what made them work, a few designs that didn't work well and what made them fail, and finally the tools of the future. The "d.tools" paper discussed a prototyping system for physical devices that included a state-diagram-driven UI with programming capability, plus a testing mode that logged and recorded activity.

2) The "Future of HCI" paper was written in 1999. The paper accurately predicted several research directions, such as using context information (e.g., location) and low-threshold tools (e.g., touch interfaces). One interesting development is that much of the UI innovation in the mobile arena has been conducted by corporations, namely Apple, rather than research organizations, which both contrasts and parallels past developments in HCI. Research drove the development of GUIs, and the Macintosh was the first commercial product to feature a GUI. However, Apple is spearheading its own research and incorporating it into devices like the iPod and iPad. The paper also predicts the rise of user programming within user interfaces. This has not yet come to fruition, and I don't know whether it's an active area of research in HCI.

The "d.tools" paper relates to the first paper in the sense that it offers the kind of effective, low-threshold, high-fidelity physical-device prototyping tool that the "Future of HCI" paper predicted. I think the d.tools system is a good step in the right direction for physical prototyping tools. It also comes with a test mode that I think is quite novel. With the built-in logging and video-indexing tools, testing a prototype is quite easy. This relates to the benefits of indexing video, as stated in Goldman's paper.

3) The "Future of HCI" paper gives a nice summary of the development of HCI. While it is quite short and very brief, its argument is validated by the accuracy of its predictions. While the predictions weren't that insightful, the paper had foresight into the next areas of HCI research.

The "d.tools" paper showed that its tool was relatively successful with designers through an informal user study. As a proof of concept, I think this is sufficient to illustrate its added value. The concept of the system is strong, and with further development it could really make physical prototyping easier.


Shaon Barman - 11/9/2010 18:58:19

Reflective Physical Prototyping through Integrated Design, Test, and Analysis

A new system to quickly prototype and evaluate new interfaces is developed. The evaluation consists of letting graduate students use the tool for a class project.

Developing new interfaces is a time-consuming task, especially when it involves integrating hardware components with different interfaces, and the hurdle is especially high if the user is not a programmer. On the other hand, creating high-fidelity prototypes can provide valuable insights that are unavailable from other forms of prototyping. One aspect of the tool I liked was the graphical user interface, which reminds me of the LabVIEW programming environment used to prototype and develop control systems. I also feel this tool could have been built on top of the LabVIEW environment, which would extend its range of potential users, although LabVIEW's high cost would itself be a barrier.

Having done some robotics, I know that integrating sensors and motors can be quite difficult. While this tool allows known I/O to be integrated, unknown I/O seems difficult to integrate, since it requires new drivers to be written. Overcoming this hurdle would greatly improve the power of the tool.

Past, Present and Future of User Interface Software Tools

The paper discusses how user interface prototyping has evolved in the past and considers future directions.

The part I liked was the discussion of the failures of past UI research. Common to these failures were systems that were difficult for programmers to use and whose results were hard to predict. This theme will recur in the future with the invention of new user interfaces that "predict" what people want. The paper is also relevant to my own research in programming languages. Most languages are equivalent in their power, but some provide lower "resistance" than others. This may explain why some scripting languages are more popular than, say, C or Java, even though they do not scale as well.

One part that this paper did not predict well was how user interfaces would be adapted to new devices. One example is mobile phones: instead of generalizing one UI across multiple devices (projector, phone, laptop, etc.), each interface is designed separately and specialized for its viewing screen. Going forward, it seems that more of the application logic will be pushed to the cloud or the browser, with the user interface code specialized for the device. This model allows greater specialization while reusing most of the code.


Drew Fisher - 11/9/2010 18:59:37

Future of HCI:

The notes on differing input modalities struck home for me particularly hard, since that's one of the major things holding Adobe Flash back. It's too tied to the mouse metaphor to really support any other interaction. That said, I'm not sure about the value of a multimodal UI representation language - it seems that anything flexible enough to support speech, biofeedback, and pointing will inherently lose the advantages conferred by developing for a known target platform. The reason Apple does so well is that the UI of their devices is extraordinarily inflexible but consistent. Does enabling multimodal interaction mean confusing users?

Speech-controlled interfaces have not progressed as much as the authors suggested they might. Perhaps this is because even when people communicate verbally with each other, very little of the information is actually exchanged as speech. It may be that the "effective bit rate" of an audio channel is far inferior to that of other input devices. Mix that with the unreliability of voice recognition and you have a losing technique.

Regarding "requiring the user's full attention" - I think our devices of late have taken a huge step backwards in this area. Touchscreens offer next to no tactile feedback, so now phones must be operated with both hands and eyes, rather than simply the former. I wish there were more tools and software for designing interfaces intended to be used either (1) blindly, or (2) with no regard to system state. Whenever I sit down in front of a vim session that I left, the first thing I do is hit Escape a bunch of times to get the system into a known state. It'd be neat if we could produce systems that you could trust to be ready to interact in a certain mode.

The authors did nail the conclusion though: user interface design is more flexible thanks to ubiquitous computing - precisely where we have the opportunity to abandon the desktop metaphor.


Reflective Physical Prototyping through Integrated Design, Test, and Analysis

Based on the study results, d.tools appears to have succeeded in its goal - allowing designers to make working functional prototypes and think about interactions, without having to deal with the painful details of system implementation.

d.tools reminds me very much of Processing, VB macros, and other forms of programming targeted at non-programmers. The lower barrier to entry allows it to aid a much wider audience. The extensibility of d.tools provides a relatively high ceiling on what can be created with the system.

One limitation with d.tools is that the hardware supported must be rather simple. High-performance components of the system will need to be mocked up on the computer, as will most drawing functionality. This may limit the types of techniques that d.tools allows designers to explore. However, I think this is just a limitation of the current implementation of d.tools, rather than a systematic design flaw.


Arpad Kovacs - 11/9/2010 19:07:08

This paper describes d.tools, which facilitates the iterative-design approach to building prototypes by enabling early-stage prototypes to be extended into high-fidelity prototypes in both hardware and software. The d.tools IDE combines a visual state chart that represents physical devices, and a text code editor, as well as a set of library components that abstract away the mechatronics interface. However, the most powerful part of d.tools is the flexible architecture, which allows the hardware-to-PC interface, intra-hardware communication, and circuit-level hardware to be extended beyond the limitations of the built-in libraries. Finally, d.tools provides an integrated design, test, and analysis suite to help designers more efficiently review data from usability studies with the help of computers.
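The statechart model at the heart of the IDE can be sketched in plain code. The following is a hypothetical illustration of the idea, not d.tools' actual API; the `DevicePrototype` class, the event names, and the music-player example are all invented for the sketch:

```python
# Hypothetical sketch of a statechart-driven device prototype,
# in the spirit of d.tools (not its actual API).

class DevicePrototype:
    """A device UI as a table mapping (state, input event) -> next state."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}
        self.log = []                   # event log, like d.tools records for test analysis

    def handle(self, event):
        # Log the event, then follow the transition; unknown events are ignored.
        self.log.append((self.state, event))
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# An invented two-button music player: play/pause toggles, menu returns home.
player = DevicePrototype("home", {
    ("home", "play_button"): "playing",
    ("playing", "play_button"): "paused",
    ("paused", "play_button"): "playing",
    ("playing", "menu_button"): "home",
})

player.handle("play_button")   # home -> playing
player.handle("play_button")   # playing -> paused
```

Because all interface behavior lives in the transition table, a designer can edit it without touching driver code, and the accumulated `log` is exactly the kind of record the test-and-analysis mode replays against video.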

The most valuable contribution of this paper is combining prototyping software as well as physical hardware into an integrated design environment that can effectively be iterated upon and extended without limit. This allows designers (who may have little programming experience) to create an overall vision of the project, and then pass on the components to programmers for implementation. Thus the prototype can evolve into the final product, rather than merely being a byproduct of the process. Another advantage of this technique is that the level of fidelity can be increased as needed by adding code, while other low-fidelity prototyping techniques cannot seamlessly integrate the physical devices into the initial prototype.

The d.tools system appears to be an excellent way to start iterating at an earlier phase in the prototype design, and to develop a working product faster than with traditional tools. In my experience, many managers do not devote enough time to low-fidelity prototyping and usability testing, since "we are going to have to rewrite the entire project from scratch anyway" once they reach a final design. Fortunately, d.tools removes that lame excuse, and allows further iterations even after a high-fidelity prototype has been made. A unique possibility for this integrated environment is the ability to write unit-test code for the interface. This is generally not possible with other, non-integrated environments, and could be very useful for catching usability regressions during the prototyping phase.
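Such an interface unit test might look like the following sketch. It is hypothetical - d.tools does not ship a testing API - and the transition table simply stands in for whatever behavior description the prototyping tool could export:

```python
import unittest

# Invented transition table for a two-button music-player prototype.
TRANSITIONS = {
    ("home", "play_button"): "playing",
    ("playing", "play_button"): "paused",
    ("paused", "play_button"): "playing",
    ("playing", "menu_button"): "home",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

class PlayerInterfaceTest(unittest.TestCase):
    def test_play_button_toggles(self):
        self.assertEqual(step("home", "play_button"), "playing")
        self.assertEqual(step("playing", "play_button"), "paused")
        self.assertEqual(step("paused", "play_button"), "playing")

    def test_menu_returns_home_from_playing(self):
        # A usability invariant: one press of menu gets back to the home screen.
        self.assertEqual(step("playing", "menu_button"), "home")
```

Run with `python -m unittest`. A regression introduced while iterating on the statechart - say, a redesign that accidentally makes the home screen unreachable - would fail the second test immediately, which is exactly the kind of usability regression the integrated environment could catch.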


The Future of HCI paper begins by claiming that although the homogeneity of interfaces in the 80's and 90's has had positive benefits (users can transfer their skills between platforms, and tool builders can bring concepts and toolkits to a high level of refinement), the time of stagnation should come to an end, and we should begin a new era of ubiquitous computing and diversity in interfaces. The key to this revolution will be more effective tools, which should ideally have the following properties:

  • Low threshold (easy to learn), but high ceiling (powerful)
  • Encourage good implementation
  • Automatic activities should be deterministic and predictable
  • Quick understanding of the tasks, and adaptability to change

The paper then proceeds with a historical survey of tools that satisfied the above goals and thus became successful, e.g. window managers, UI toolkits, event-driven scripting languages, interface builders, and object-oriented programming. The author then contrasts these with paradigms that have not caught on, such as high-level abstractions of the UI, formal language-based tools, constraint solvers, and model-based/automatic approaches.

I found the most valuable part of the paper to be the future prospects section. Although I was familiar with some of the trends mentioned, such as the commoditization of computers due to Moore's law and ubiquitous computing, some of the insights were quite interesting. For example, it is interesting how computers are becoming communicating and coordinating devices, rather than merely being used for computation, and how future tools may be used to prototype devices (actually this is slowly becoming possible via FPGAs and d.tools). 3D technology and voice/writing/other recognition are also quite popular today; however, I had not previously considered how these would require new selection and interaction techniques and break event-driven programming. Finally, end-user customization and scripting appears to have the noble goal of giving users more power, although I think that recently users have embraced simplicity over configurability (see the popularity of the iPad and other appliance-style media consumption devices).


Luke Segars - 11/9/2010 19:11:50

Past, Present, and Future of User Interface Software Tools

Myers et al describe their vision of the future of machine interfaces with a set of predictions and trends in 1999. It's interesting to look back on their predictions, read their rationale, and see which ones have played out and which still haven't arrived. I was impressed overall with the "future prospects and visions" that they pointed out, such as the emergence of ubiquitous devices, but some of the problems they identified have turned out to be less significant than they first imagined. All in all, they provided interesting perspectives on a decade of progress, but some of the justifications for their predictions seem somewhat shallow and unfounded. Or maybe that's just the guy from the future talking.

The two prospects that I found most interesting (although neither has arrived in full force yet) were the predicted emergence of cloud computing and the availability of tools for coordinating multiple distributed communicating devices. These two ideas are strongly tied to each other and, in fact, depend on each other for their success. While we haven't reached a point where either paradigm could be called "widespread," it's interesting to think about the progress we've made towards ubiquitous computing since 1999. Cell phones, perhaps the most obvious new technology, mean that most people now carry significant processing power, capable of running a variety of applications, in their pocket at all times. Miniaturization has continued to proceed at unbelievable rates, and the power available in cell phones today wasn't available anywhere in 1999. A huge array of sensors has also emerged, making interaction with these new devices a totally different ball game.

Ah, interactions. One of the points I think Myers et al missed most significantly was the predicted disappearance of the event-based action paradigm. Given ten more years of progress, I'd argue that it actually looks unlikely that the event-based paradigm is going anywhere. I do, however, agree that our concept of an "event" will most certainly have to be enlarged. Nevertheless, we as humans must use some sort of behavior or action to indicate our desire for a machine to perform an operation for us; even if we don't consciously want the operation performed (e.g. recording our velocity on every step of a morning jog), we still unconsciously want our machines to be responsive to the overall goal of recording our fitness indicators. That being said, we also shouldn't assume (modern systems do not) that events will be the only form of requesting an operation -- cron jobs and similar mechanisms already show that each action need not (and perhaps should not) be consciously requested every time it is performed.

I really like reading people's attempts to predict the future. All in all, given the amazing surge of progress in the field of computing over the last 10 years, it's interesting and enlightening to hear what the scholars of the time thought would most readily change. I suspect that we're still on the cusp of innovation in these fields now -- give it another 10 years and newer technologies may cast an entirely different light on Myers, Hudson, and Pausch's predictions and their importance to the future of human-computer interfaces.