UI Toolkits

From CS260 Fall 2011

Bjoern's Slides

Extra Materials

Discussant's Materials

Discussion Slideset

Reading Responses

Valkyrie Savage - 10/31/2011 11:04:00

Main idea:

As computers move above and beyond the paper metaphor they have been built upon for so long (whether in dimensionality or sheer size), we will have to change the ways in which we interact with them. At its core, this also means changing the ways in which developers interact with them.

Reactions:

The Myers et al. paper was an interesting discourse on the state of things (10 years ago) and the dream for things. The tone was hopeful and excited, which made it great to read! One of the parts that I found particularly interesting and apropos (considering the remarks Björn made in class about using a Kinect and the parabolic mirror trick) was the discussion of how to make 3D interactions now that we can no longer use a paper metaphor. This also sort of fit into the second paper, which discussed what to do when we have interactions with datasets so large that paper isn’t a reasonable metaphor, either. It’s going to be something that must be overcome at all levels: as discussed in the papers, developers will need to wrap their heads around what it could mean to use all that space or all those dimensions and what’s a computationally effective way to do it, but it will also potentially be a challenge to find the right metaphor for users to understand how to interact with it.

One thing in the Myers paper that made me sad was some of the historical discussion it involved. I wish we had more of an emphasis on that in general CS courses... (I do appreciate things like the seminal papers set that we read in this course!)

The Pietriga et al. paper was exciting in the sense that those sorts of graphics will almost assuredly become standard on desktops and even cell phones within the next 10 years. Supercomputing and gorgeously detailed graphics are the current hotness, for sure. Making it easier for people to switch between enormous screens and desktop ones is a noble cause. I’m just not certain that the change in rendering size and hardware is the only difference between the two; as I mentioned, the paper metaphor seems to break down at such huge resolutions, and I feel like there should be... something more to scaling up?


Laura Devendorf - 11/13/2011 14:36:10

Past, Present and Future suggests themes for future UI development through an analysis of the successes and failures of past systems. The paper describing jBricks discusses a toolkit that facilitates the development of post-WIMP interactions for multi-screen displays driven by clustered computers.

I agree with some of the benefits of UI toolkits presented in this paper, but I have always been frustrated by them. While they increase uniformity across systems, they don't necessarily support good design. I appreciated the discussion of less successful approaches as a way of understanding what doesn't work. However, the primary reason the UIMS model failed is the same reason I am bothered by some of the most successful models, like window managers and toolkits. Regarding the statement, "3D interfaces are probably in worse shape than 2D interfaces were, because 2D interfaces were able to adopt many paper conventions, for example, in desktop publishing," I would be interested in the authors' perspective now that tangible computing is a viable area of research.

jBricks provides an interesting take on developing large visual applications that run on clustered computers. While the technology seems solid and the applications useful, it seems as though their development only benefits the small portion of people who use this specific setup. I am also curious whether the system is really as easy to use as claimed, because the descriptions make it sound as though it involves the coordination of many moving parts.


Amanda Ren - 11/13/2011 21:20:55

The Pietriga paper introduces a system that makes it easier to develop interactive applications on cluster-driven displays.

This paper is important because it takes into consideration the move to using arrays of LCD panels to display information or allow for collaborative work. By using their system, developers can more easily create interactive applications for use with these displays. I like how they developed their system to easily handle a range of input devices, given the variety of devices people can use. I thought their experiment made a convincing case for their goal of using jBricks for rapid prototyping of new interactions and visualizations on cluster-driven displays. It allowed the two developers to work on graphics and interaction techniques separately. It also showed that going through multiple iterations would be simple, as would switching between running the application on personal computers and on the actual wall display.

The Myers paper covers the successes and failures of past and present UI tools and looks ahead to their future.

The paper is important because it talks about both what worked and what didn't work in past tools. It provided the themes that helped determine the success of these tools. The authors pointed out ideas that initially seemed to have potential and explained why they ultimately proved unsuccessful. The paper also makes a pretty accurate prediction of the "future": in today's technologies, we see the dominance of recognition-based interfaces on our mobile phones and tablets. I thought it was interesting (and a common issue now) that past tools assumed they had the user's full attention, whereas this will have to change for future tools.


Hong Wu - 11/13/2011 22:02:32

Main idea:

The two papers discussed principles of and experience with designing user interfaces (UIs).

Details:

“Past, Present and Future of User Interface Software Tools” reviewed the successes and failures of past user interface tools. The paper proposed several themes for evaluating tools. It also mentioned that designing for different devices would be a challenge, since the specifics of those devices are unknown or highly variable.

“Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks” proposed a framework for rapidly developing prototypes on multi-display walls. The framework is based on two separate components: one handles input and the other handles graphics. The two components communicate over the network using the OSC protocol. This decomposition makes the framework more flexible.
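
To make that decomposition concrete, here is a minimal sketch of the idea, assuming a plain UDP transport between the two components. This is illustration only, not the actual jBricks/jBIS API: real OSC messages are binary-encoded and would normally be produced by an OSC library, and the address pattern and port below are invented.

    // Illustration only -- not jBricks/jBIS code. The input component forwards a
    // normalized event toward the graphics component as a UDP datagram carrying an
    // OSC-style address pattern. A real setup would use an OSC library to encode
    // the binary message format.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class InputForwarder {
        public static void main(String[] args) throws Exception {
            // Hypothetical address pattern describing a normalized cursor position.
            String message = "/wall/input/cursor 0.42 0.87";
            byte[] payload = message.getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                // The graphics component would listen on this (invented) port.
                InetAddress renderer = InetAddress.getByName("127.0.0.1");
                socket.send(new DatagramPacket(payload, payload.length, renderer, 3737));
            }
        }
    }

Keeping the two sides behind a small message protocol like this is what lets either component be swapped out or run on a different machine, which is presumably where the flexibility the paper claims comes from.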

User interface affects the efficiency and the functionality of software. For example, Windows systems often break down and need to be reinstalled; Lenovo has a nice feature where the user can click one button and Windows goes back to its initial factory state. Programmers also like to stick to the platforms they are familiar with. For me, though Eclipse and Processing are both designed for Java, I'm still reluctant to switch from Eclipse to Processing for quick initial prototyping. There is some threshold and overhead to adopting new software. Some software's strategy is therefore to design a UI similar to that of the existing or dominant software.


Viraj Kulkarni - 11/13/2011 22:42:56

'Past, Present and Future of User Interface Software Tools' summarizes the history of UI design tools. Although the paper does not offer much original work, the historical perspective it provides on UI design tools and UI design in general is very helpful. I found the section on model-based and automatic techniques for UI generation quite interesting. The authors point out that such techniques never picked up because they had a high threshold but low ceiling: a high threshold because programmers were required to learn a new language for specifying the UI, and a low ceiling because the kinds of UIs you could design using such techniques were limited. However, I feel that UI design has, of late, been shifting towards this approach. I feel such automatic UI generation would pick up for mobile applications.

jBricks focuses on providing a toolkit that would enable developers to rapidly prototype applications for cluster-driven displays by hiding the complexity entailed by distributed rendering. The architecture is composed of two loosely coupled modules for managing the display and handling the inputs. These modules communicate using OSC. One thing the paper lacks is a proper explanation of how data can be transferred between the input devices and the computers driving the displays. Another thing that could have helped is a scenario or a simple example application developed using the jBricks framework.


Ali Sinan Koksal - 11/14/2011 1:12:56

This week's first paper, by Myers et al., is both a survey of existing user interface design tools and a visionary work on how these tools should evolve to support emerging needs, such as those related to ubiquitous computing, from the point of view of 1999. The success of UI tools is measured in terms of learning/usability thresholds and ceilings (what can ultimately be expressed using the tool), the paths of least resistance that guide users of these tools to design interfaces in a particular way, and whether the targeted tasks have stayed relevant for long enough.

The visions presented in the paper seem highly relevant to me: automating the retargeting of applications to multiple platforms, in a world where ubiquitous computing is 'already there', has great value. Being able to prototype new devices rapidly has also proven to be important and has been addressed in HCI research, as we have already studied. The implications of cloud computing for UI software tools are also quite important, and current research aims to provide hardware-level security support for application-level privacy in cloud applications.

jBricks is a very recent example in the class of toolkits for UI development. This framework aims to abstract away low-level concerns in rendering, visualizing and interacting with two-dimensional graphics in wall displays, composed of multiple monitors. This work provides a useful way of rapidly prototyping such applications by hiding the complexity of e.g. distributed rendering for multiple displays. Thinking in terms of the themes presented in the first paper, this work can be considered successful in having a low threshold (by hiding the distributed rendering, for instance), as well as addressing a well-defined part of the user interface.


Cheng Lu - 11/14/2011 3:07:58

The first paper, “Past, Present and Future of User Interface Software Tools”, considers cases of both success and failure in past user interface tools. From these cases it extracts a set of themes which can serve as lessons for future work. Using these themes, past tools can be characterized by what aspects of the user interface they addressed, their threshold and ceiling, what path of least resistance they offer, how predictable they are to use, and whether they addressed a target that became irrelevant. The paper argues that the lessons of these past themes are particularly important now, because increasingly rapid technological changes are likely to significantly change user interfaces. The next millennium will open with an increasing diversity of user interfaces on an increasing diversity of computerized devices. These devices include hand-held personal digital assistants (PDAs), cell phones, pagers, computerized pens, computerized notepads, and various kinds of desk- and wall-size computers, as well as devices in everyday objects (such as mounted on refrigerators, or even embedded in truck tires). The increased connectivity of computers, initially evidenced by the World-Wide Web but spreading also with technologies such as personal-area networks, will also have a profound effect on the user interface to computers. Another important force will be recognition-based user interfaces, especially speech and camera-based vision systems. Other changes they see are an increasing need for 3D and for end-user customization, programming, and scripting. All of these changes will require significant support from the underlying user interface software tools.

The second paper, “Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks”, presents jBricks, a Java toolkit that integrates a high-quality 2D graphics rendering engine and a versatile input configuration module into a coherent framework, enabling the exploratory prototyping of interaction techniques and rapid development of post-WIMP applications running on cluster-driven interactive visualization platforms. Research on cluster-driven wall displays has mostly focused on techniques for parallel rendering of complex 3D models. There has been comparatively little research effort dedicated to other types of graphics and to the software engineering issues that arise when prototyping novel interaction techniques or developing full-featured applications for such displays.


Derrick Coetzee - 11/14/2011 3:24:21

Today's readings described user interface (UI) toolkits. The first work surveyed the state of the art and future predictions as of 1999, while the second presented a single novel UI toolkit in detail.

"Past, Present and Future of User Interface Software Tools," a 1999 work by Myers et al of Carnegie Mellon, surveys the attributes and successes of software for producing user interfaces and predicts how the will have to change for the (then impending) mobile space.

I had encountered many of the industrial tools mentioned in the article but was not until now familiar with their research predecessors like Sassafras, Trillium, etc. - it's interesting to discover that these ideas weren't invented by industry.

I found it confusing that Python and Perl were described both as "research languages" and as vehicles for rapid prototyping of UIs. Perl was created by a software developer in industry, and neither includes GUI support out of the box. The authors seem to have overgeneralized their understanding of Tcl/Tk to other scripting languages.

Of the authors' many predictions concerning future UIs, some did not bear out. "Recognition-based" input devices have supplanted the keyboard, mouse, and touchscreen only in very limited domains. Integrated use of multiple devices (simultaneously or otherwise) was ultimately dealt with using continuous Internet availability and remote services rather than syncing. 3D interfaces have failed to achieve any commercial significance (we are still waiting for "a breakthrough […] in our understanding of what kinds of applications 3D will be useful for"). End-user programming remains important, but little progress has been made on it.

From a language design perspective I was intrigued by Figure 1, which roughly shows the difficulty of doing increasingly sophisticated tasks using various tools. The idea of a "low threshold and high ceiling" is intuitive, but this paints a more nuanced picture, that a good tool has no "vertical walls" where a programmer is forced to learn a lot just to do a little more. Scenarios like this are familiar to me: a beginner asks "how do I do X using tool Y?" and I answer "you can't - well you could but you'd have to learn Z and it'd be a lot of work."

"Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks" by Pietriga et al is, in contrast, a very new work (2011) that describes a novel UI toolkit for a specific system, ultra high resolution cluster-driven wall displays.

The work is focused more on providing a pragmatic platform for researching new UIs than on providing new types of interfaces itself, and it addresses the technical challenge of providing a smooth animated interface at ultra-high resolution in a distributed setting. It enables any application based on the ZVTM framework to be rapidly ported to a wall display with minimal changes. The ability to test and debug applications on an ordinary desktop is invaluable, since demand for expensive wall display systems can be high, and the system has already been used to perform useful HCI experiments.

On the negative side, although the system provides access to a variety of sophisticated input devices used in conjunction with wall displays, it remains difficult to simulate these accurately enough to discover problems on the development platform, where the devices are unavailable due to cost.


Sally Ahn - 11/14/2011 6:32:51

Myers et al. provide a review of past research in UI toolkits, analyze its successes and failures, and motivate pushing current (year 1999, in the context of the paper) research beyond the homogeneity of the mouse-and-keyboard interface designed for desktop machines. The paper by Pietriga et al. is a recent paper that describes jBricks, one specific Java toolkit for developing applications for cluster-based wall displays.

The paper by Myers et al. was interesting because the ideas and concerns covered within its wide scope of UI toolkits were written more than a decade ago, and yet many still seem applicable to today's research. One concern the authors had was that the event-based paradigm would not map well to input modalities involving gestures and speech. The new iPhone 4S embodies both of these modalities, and I wonder how its toolkit handles the concerns the authors mention. The paper also discusses the "stagnation" of interface design brought on by the homogeneity of window-based systems and screen, keyboard, and mouse platforms. While we seem to be breaking away from the latter with the growing popularity of smartphones and tablets, we also seem to be repeating history as touchscreens become the default interface for most mobile devices. If we replace the authors' references to pagers and PalmPilots with smartphones and tablets, we return to the issues of growing differences in the sizes and characteristics of displays and input modalities described by the authors.

Pietriga et al. provide an in-depth description of their jBricks system. Specifically, it deals with managing user inputs and rendering to many displays on separate servers. By extending existing frameworks and creating a toolkit for graphics development and prototyping, jBricks addresses some of the points mentioned in the previous paper by Myers et al. (e.g., the diversity in display and input modalities). Overall, the two papers we read for this week presented an interesting contrast on UI toolkits, both in terms of when they were written and the scope of the topic covered.

Suryaveer Singh Lodha - 11/14/2011 7:20:42

Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks:

The paper deals with an important aspect of rapid prototyping of new interaction and visualization techniques for state-of-the-art cluster-based wall displays. The authors have developed "jBricks", which allows them to prototype and also run controlled experiments for their evaluation. As cluster-based video wall technology will be useful to a lot of applications in the future, it is essential to research various interaction techniques. The objectives for jBricks are quite useful: i) hiding the complexity entailed by distributed rendering, ii) promoting ease of learning and ease of use, and iii) enabling code reuse: visualization components initially developed for desktop computers should run on cluster-driven wall displays with minimal changes to the original application code. As the paper mentions, future work will focus on outputting higher-definition visuals.


Past, Present, and Future of User Interface Software Tools:

I felt that the draft was perhaps a bit too long and could have been more concise. What I did like, however, was that the authors categorized the changes related to UI design very well. I also believe that as modes of input for new computing devices change (e.g., touch), the UI needs to be redesigned. For example, entering text on touchscreen phones (without a keypad) is a tedious task. Also, the concept of replaceable user interfaces for the same application on different devices, with possible customization, seems a step in the right direction, based on the papers we read last week on customizing UIs to boost performance. All in all, this is an interesting area of research which gives us a chance to think about UIs from scratch all over again and make the experience better than the current one, while at the same time keeping it simple.


Allie - 11/14/2011 8:20:32

In "Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks", Pietriga et al discuss jBricks, a Java toolkit that integrates a high-quality 2-D graphics rendering engine and separate input configuration module into a coherent framework that hides low-level details from the developer. This eases the development, testing, and debugging of interactive visualization applications. ZVTM, which supports most Java2D drawing primitives and offers higher-level abstractions; and Java2D's OpenGL pipline. jBricks Input Server (jBIS) was a system developed to be 1) generic 2) extensible and 3) adaptable.

In "Past, Present, and Future of User Interface Software Tools", Myers, Hudson, and Paush assert that tools help reduce the amount of code programmers need to produce when creating a user interface. They established 5 criterias for evaluating such tools 1) the parts of the user interface that are addressed 2) threshold and ceiling 3) path of least resistance 4) predictability 5) moving targets. Additional future trends include: 1) skill and dexterity of users 2) non-overlapping layout or rectangular and opaque interactive components 3) using fixed libraries of interactive components 4) interactive setting 5) requiring the user's full attention 6) support for evaluation 7) creating usable interfaces. The researchers lament that UIMS, or User Interface Management Systems have not caught on, falling victim to the moving target problem.

Both papers focus on HCI tools that will enhance the user experience. The first paper focuses on a specific tool, whereas the second is a survey of various technologies. The criteria used are quite different, as the second paper is more general. I think both papers are quite optimistic, as the field of HCI keeps improving on its existing technologies across various platforms.


Shiry Ginosar - 11/14/2011 8:32:21

The direction of user interfaces implemented on consumer devices follows user interface research and the interface tooling provided by this research. This is the main thesis of Myers et al. in their review paper on the past, present and future of user interface software tools. One issue the paper raises is the concentration, over the past 20 years, on standard windows-like interfaces, and the fact that this paradigm does not fit newly emerging displays such as cellphones and large wall displays. This is because UI widgets designed for the desktop screen size, like scroll-down menus, no longer work for very small or very large screens. The authors also mention the importance of rapid prototyping of user interfaces and the success of the tools that allow for this rapid development process.

jBricks addresses some of these issues by presenting a rapid prototyping tool for driving and interacting with arrays of LCD screens. The authors explain the importance of awareness of the underlying hardware, but provide a system that abstracts away the distributed nature of the underlying cluster in order to offer a simple programming environment. It is interesting to note that only now, over a decade after Myers et al.'s review paper, is it finally time to address the issue of driving many LCD screens with clusters of computers.


Donghyuk Jung - 11/14/2011 8:44:20

Past, Present, and Future of User Interface Software Tools

In this paper, the authors presented some cases of both success and failure in past user interface tools, and they extracted a set of themes as references for future work. In general, this paper explained well how these tools have developed so far, but our computing environment rapidly changed from desktop computing to ubiquitous computing during the last decade. In this sense, the paper needs to be updated to fill the time gap. However, I think some of the themes they identified are a good model for determining whether tools are successful or not. In particular, I think the 'Threshold and Ceiling' concept is the most important element, as it can measure the efficiency of future user interface software tools.

Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks

In this paper, the authors presented jBricks, a Java toolkit for the development of post-WIMP applications executed on cluster-driven wall displays, which extends and integrates a high-quality 2D graphics rendering engine and a versatile input management module into a coherent framework hiding low-level details from the developer. They mentioned that the goal of their framework is to ease the development, testing, and debugging of interactive visualization applications. Generally, this paper showed the jBricks framework architecture in detail, but I think it failed to demonstrate how users take advantage of the tools. Although this tool is designed for advanced programmers who have some experience with wall displays, the authors just mentioned what they did. I think they should explain how programmers felt while using this tool and how their performance improved.


Vinson Chuong - 11/14/2011 8:47:27

In "Past, Present and Future of User Interface Software Tools", Myers, Hudson, and Pausch survey successful and unsuccessful user interface tools, extracting a list of themes on which they can be evaluated and using those themes to discuss future tools. In "Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks", Pietriga, Huot, Nancel, and Primet present an abstraction where different types input devices are normalized to present a consistent programming interface.

By examining the factors that determined the success or failure of various user interface tools (from 1999 and earlier), Myers, Hudson, and Pausch arrive at a list of themes for evaluating them: the parts of a UI that are addressed, threshold and ceiling, path of least resistance, predictability, and moving targets.

For threshold and ceiling, they give the goal of designing tools with "gentle slopes", where implementation difficulty should increase slowly and continuously with the sophistication of the UI design. This is an ideal which seems to optimize three aspects at the same time: simplicity, expressiveness, and power. In practice, a tool can optimize at most two aspects while trading off on the last. Usually, what we see is that power is kept constant, while simplicity is prioritized for some set of simple tasks and expressiveness is prioritized for the more complex tasks. Many of today's interface tools (WPF, iOS SDK, etc.) manifest this as a GUI for doing simple widget compositions and a declarative language for expressing more complex designs. I believe that the goal of a "gentle slope" is a little deceptive and oversimplifies the trade-offs when building a tool.

An interesting point brought up about the "path of least resistance" is how tools can lead "implementers towards doing the right things, and away from doing the wrong things" and how they can "have a profound effect on how we think about problems". A good example is the "iOS Human Interface Guidelines", which state a list of conventions that every app should follow. Xcode and Interface Builder enforce these conventions by restricting what developers can do and not providing framework support for designs which deviate from them. As a result, from my experience, the iOS SDK is easy to use for most designs (after buying into their conventions) and I don't find myself ever having to drop down into low levels.

Above, we saw the power of "convention over configuration". jBricks takes a different approach to curbing design complexity. They show a successful abstraction for standardizing over input devices. Different actions in different input devices can all map to the same outcome in any given application. The main drawback, however, is that those outcomes have to be specified in advance. In other words, the expressiveness of an input device with many degrees of freedom is essentially limited by the input device with the fewest degrees of freedom. Of course, they have shown examples where multiple input devices are used at the same time to achieve the needed degrees of freedom. All in all, while jBricks is successful in providing a consistent programming interface to various input devices, it might not contribute much in terms of interaction quality or taking full advantage of the expressiveness of each input device.
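
A hypothetical sketch of that normalization idea (all names invented; this is not jBricks code): events from different devices are translated into a small, pre-declared set of abstract actions, which captures both the convenience and the limitation described above, since every outcome has to be enumerated up front.

    // Hypothetical sketch of input normalization; not the jBricks API.
    import java.util.HashMap;
    import java.util.Map;

    public class InputNormalizer {
        enum AbstractAction { PAN, ZOOM, SELECT }

        // Each (device, gesture) pair is bound in advance to one abstract action,
        // so the application only ever reacts to PAN, ZOOM, or SELECT.
        private static final Map<String, AbstractAction> BINDINGS = new HashMap<>();
        static {
            BINDINGS.put("mouse:drag", AbstractAction.PAN);
            BINDINGS.put("gyro-mouse:drag", AbstractAction.PAN);
            BINDINGS.put("touch:pinch", AbstractAction.ZOOM);
            BINDINGS.put("laser-pointer:click", AbstractAction.SELECT);
        }

        static AbstractAction normalize(String device, String gesture) {
            return BINDINGS.get(device + ":" + gesture);
        }

        public static void main(String[] args) {
            // A drag on a gyroscopic mouse and on a desktop mouse resolve to the
            // same application-level outcome.
            System.out.println(normalize("gyro-mouse", "drag")); // PAN
            System.out.println(normalize("mouse", "drag"));      // PAN
        }
    }

Under a mapping like this, extra degrees of freedom on a richer device are simply dropped unless the application declares an action that uses them, which is exactly the ceiling the response points out.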


Apoorva Sachdev - 11/14/2011 8:51:09

CS 260 Reading Responses:

Today’s first reading, ‘Past, Present and Future of User Interface Software Tools’, gave a broad overview of the current state of UI tools, how they have changed over the years, and, in the authors’ view, what lies ahead in the future. The second paper we read was about jBricks, a framework that enables rapid development of post-WIMP applications for cluster-based wall displays equipped with advanced input devices and modalities.

I liked the first paper because the authors presented in detail how our current interface design is rooted in research performed in the 80’s and how we had reached a level of stagnation in terms of creating new, very different interaction techniques. However, I think this is now changing: with advancements in technology, we are now able to create devices that have multiple input capabilities and that are moving further away from keyboard-mouse interaction. Although it does seem that we are getting caught within the touchscreen framework (as was evident in the Microsoft Vision video). The authors presented a good analysis of which tools seem to succeed and why some never caught on, describing the tradeoff between having low learning thresholds and high ceilings. Personally, I usually engage with two kinds of tools: one for fast prototyping, to create many rough versions of the same thing, and then, once I have chosen a design, a more complex piece of software to generate the final design, so I can iron out the intricacies and have more control over my design. Another example is the use of LaTeX; I really like how powerful it is, but sometimes I wish it also had an interactive component, so one could easily move things around in the preview.

The second paper we read, on jBricks, was also interesting. Abstracting away the distributed cluster for the developer would be really helpful. It seems like this system provides a lot of ease of use and, as described in the scenario, allows people to split the task and go back and forth easily when they need to make changes. Last week, while we were reading the proposals, there was a similar project presented which talked about “Intel’s new super chips which can support rendering multiple screens at the same time”; I wonder how improvements in computing power would affect these kinds of systems (when one computer would be good enough for rendering multiple screens and a distributed cluster would be unnecessary). But having the ability to remotely control a large screen with just one laptop and get a complete view, at least for now, seems very resourceful. I am not completely sure how the interaction loop takes place, whether each input is routed back through the one laptop being used to control multiple screens, and whether that would be a bottleneck in the future.


Yin-Chia Yeh - 11/14/2011 8:54:26

Today’s two papers are about UI toolkits. The UI software tools paper reviews successful and unsuccessful UI tools from the past and predicts that there will be a revolution in UI tools because of the coming ubiquitous computing era. The paper also identifies tools that will be important in the future. The jBricks paper presents a Java toolkit that integrates high-quality 2D graphics and various input methods into a framework. It enables rapid prototyping on cluster-driven wall displays.

The accuracy of the predictions of the future in the UI software tools paper is really impressive. How to design GUIs across various devices has become an important research topic these days. I also like its themes for evaluating UI tools, because they mention some factors I had never thought of before, such as predictability and moving targets. On the other hand, there is one prediction that has not come to pass. The authors mention that event languages might not work well in recognition-based interfaces. However, I think event languages are still used heavily in today's recognition-based interfaces. Actually, it's hard for me to imagine how a UI could work without an event language.

In the jBricks paper, my favorite feature is that it enables prototyping away from the platform. Working in front of the ultra-high-resolution display can be very tiring, and it is very uneconomical if each developer needs their own ultra-high-resolution display to work. Dividing the framework into two modules, graphics and input handling, also makes sense: graphics maps to the view and input handling to the controller in the Model-View-Controller architecture. The only thing I am not quite sure about is why 2D graphics is harder to deal with on an ultra-high-resolution display. The authors mention that pixel streaming is not efficient, but I don't understand why 3D graphics doesn't require pixel streaming.


Rohan Nagesh - 11/14/2011 9:01:31

This week's readings provided insight into how user interface software tools will change/have changed to accommodate new input modalities and devices for end users. The first reading "Past, Present, and Future of User Interface Software Tools" gives a brief overview of the styles of UI programming in the past, the stagnation that's occurred lately, and the future potential of UI tools. The second reading "Rapid Development of User Interfaces on Cluster-Driven Wall Displays With jBricks" presents a 2011 view on things and a novel Java toolkit that abstracts away much of the complexity in distributed rendering.

I found the first paper particularly interesting keeping in mind that the year it was published was 1999. Many of the trends these authors were predicting in terms of reduced hardware costs, ubiquitous computing, computers becoming a commodity, etc. all happened and were huge drivers of innovation for us the past decade. Indeed, each of these trends has had tremendous impact on the way UI's are designed. As the paper mentioned, you have to start by analyzing the user interfaces of the future before you can build the toolkits to support the development of these futuristic user interfaces.

The trend I thought they completely nailed was the "End user programming" trend. Most definitely, the most significant trend I have observed over the past decade is the degree of professionalization in software and the Internet. The authors' hypothesis that the tools will be more about providing context on the user and application state as opposed to events is spot on.

I found the second paper to be an interesting application and culmination of many of the trends the first paper forecasted. What I liked most about the application was the fact that the authors understood the 3 most important success metrics. First, hide the complexity of the distributed rendering. Second, have a low "threshold" for use as the first paper describes (high functionality, easy to use). Lastly, allow for code reuse. Very compelling application with much commercial potential in my opinion.