Multiple Displays


arie - 2/25/2013 23:42:49

The Codex paper presents a multiple-display unit that detects its state by measuring the angle between the two individual displays. The idea of using different displays to divide the cognitive space (e.g. public vs. private) is intriguing and can be used not only for collaboration but also as a feedback mechanism (for example in a bank when you have no idea what the clerk is typing on his terminal). I thought that the authors have presented a rich compilation of possible applications as what with a comprehensive-looking user-experience study.
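One way to picture this state detection is as a simple mapping from the sensed hinge angle to a discrete posture. A rough sketch in Python; the posture names and angle thresholds are my guesses, not the paper's actual values:

    # Sketch of Codex-style posture detection from the hinge angle.
    # Posture names and angle thresholds are illustrative assumptions.
    def classify_posture(hinge_angle_deg: float, landscape: bool) -> str:
        if hinge_angle_deg < 30:
            return "closed"                   # screens folded shut
        if hinge_angle_deg < 170:
            return "laptop" if landscape else "book"
        if hinge_angle_deg <= 190:
            return "flat"                     # screens roughly coplanar
        # Past ~190 degrees the screens face away from each other,
        # e.g. for face-to-face collaboration across a table.
        return "back-to-back"

    print(classify_posture(120, landscape=False))  # -> "book"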

The Facet paper extends the idea of multiple displays into the wearable domain. Personally, I thought it would be a bit too much to make sense of all the different tiny screens. The fragmentation might be overwhelming, which might be the reason the authors presented no data from any user-experience investigation.

The final paper, "Going beyond the display" was the most interesting in technical terms. Basically the authors have leveraged a special type projection screen material in order to integrate a display with a rear projection mechanism and camera sensing. This creates several dimensions of visual presentation which can be used to deliver information in orthogonal directions. The ability to smooth the transition between multi -touch displays and vision-based sensing is a nice example to full-immersion - the user interacts with the system using multiple modalities in what seems to be a seamless fashion.

The organization of the paper was somewhat unorthodox, and the paper didn't make a really compelling argument for the work; rather, it presented a technology and a list of possible applications. There was no user-experience study performed, which really makes one wonder how this technology was perceived by the target audience.


Ben Zhang, PhD - 2/26/2013 0:25:38

  • Codex

Codex forms a book-like shape, and its two screens can be configured in multiple ways for different usage contexts. Through a user study, the authors determined the preferable size and some desired functionality for such a form factor. Their prototype relies on off-the-shelf tablets, and they showed the primary use with a note-taking application. Rich interaction between the two screens is discussed, and the detachability enables some other collaborative uses.

This design is reasonably well motivated in the sense that user studies do reveal the need for "dual" displays for cross-referencing in many daily tasks. However, whether the "dual" should come from physical partitioning or from the window manager is debatable. A single large display with virtual partitioning has more flexibility to adjust, and some interactions might be more intuitive on one screen than on two. But the detachability of this device makes it great for collaborative tasks; their evaluation also shows that most users marked it as a "must have" feature.

In my daily work, I am used to having two separate windows for reference and writing, respectively, and the lack of this has been a major drawback I have found in using tablets. The context switching is so frequent that even well-optimized gestures (like sliding from the edge on iOS) are not pleasant. Therefore, I like the many interesting aspects this work demonstrates. Also, given the book-like form factor, many interactions (such as note-taking) seem more natural.

  • SecondLight

The innovative use of a special material to achieve a dual display mode enables many new interaction schemes. The intrinsic difference between this and multiple screens is the natural overlay. The system preserves almost all the features of traditional rear-projection HCI research, while the "second light" can be used to enhance interaction (such as tracking markers and displaying content accordingly). The hardware section discusses how they use PSCT-LC and related implementation details, such as the projector setup and camera configuration. FTIR is also employed for input detection and tracking.
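To make the dual mode concrete, the time multiplexing can be pictured as a loop that alternates the diffuser state in sync with the two projectors and the IR cameras. A rough simulated sketch; the real system does this switching in hardware at projector frame rates:

    # Simulated sketch of SecondLight-style time multiplexing; the actual
    # PSCT-LC shutter is driven in hardware, synchronized with the
    # projectors and cameras.
    import time

    PHASE = 1.0 / 120.0  # two phases per 60 Hz display frame

    def frame_loop(frames=3):
        for f in range(frames):
            # Diffuse phase: the surface acts as a rear-projection screen
            # and the camera senses FTIR touch blobs at the surface.
            print(f"frame {f}: DIFFUSE -> project on surface, sense FTIR")
            time.sleep(PHASE)
            # Clear phase: the second projector shines through the glass
            # onto objects above it, and a camera images beyond the surface.
            print(f"frame {f}: CLEAR   -> project through, image beyond")
            time.sleep(PHASE)

    frame_loop()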

A direct intuition after reading this paper is that the dual mode enhances both the interaction (or the sensing capability) and the overlaid display. I personally like the tangible layering-effect demo a lot.


  • Facet

Facet is a multi-display wrist-worn system. Its unique form factor opens up many interesting user-interaction questions. The motivating application, in which Karen uses it to manage her life, seems intriguing. Different displays take charge of different applications, while they can still be employed together for a single task if needed (the weather-forecast example looks great). The small displays can be combined into a larger display for continuous reading, which saves the user the context-switching overhead of other screen-constrained devices. Being worn on the wrist greatly increases its accessibility for daily use.

A large portion of this paper discusses the Face Manager, by which different reasonable gestures are mapped to interaction actions. Though at first glance it seems complicated, with various gestures controlling the collaborative views of neighboring screens, users should get used to them with some training. Many of these interaction schemes are designed to compensate for the limited screen size.

Their prototype uses off-the-shelf Android-based devices, so each display is actually a separate machine. Presumably, integrating them might be a good way to better manage the screens and also reduce the overall size (especially the thickness, which might be large). But even if it were made smaller, such a wrist-worn device might still block some people's use of a keyboard. Besides, I am not quite convinced by the use of so many displays: there is little discussion of the number of displays that users find appropriate. Though the authors claim the screens on the back side of the wrist might be used to show content to others, the need for this functionality might be rare. What's more, one simple extension might be reconfigurable multiple displays, each serving as a building block to form arbitrary shapes. Beyond the interaction-scheme studies, non-trivial technical challenges exist in orchestrating these devices.

Overall, this paper designs a unique form-factor device that combines the benefits of multiple displays with wrist-wearable characteristics.


elliot nahman - 2/26/2013 0:36:17

Codex: a dual-tablet system tied together in one case. The authors categorize use into discrete types of interaction based on the orientation and angle between the two screens. Some use cases are individual and some collaborative. They focus, however, on the use cases in which the screens are symmetric.

They constructed a prototype using sensors to detect the tablets’ orientation and position and had it automatically switch to screen layouts optimized for specific use cases based on their categorization scheme.

It seems like their systems attempts to reconstruct a lot of the affordances of paper while adding some additional features. For example, when collaborating, the dual screen device helps compensate for view angle between participants.

The cynical part of me wants to say that all they did was double the price of a tablet. In a way, one could always argue that more screen real estate is better, and yes, having two screens helps users divide tasks more readily. Their innovation was really just tying the two screens together successfully. This of course comes with the disadvantage of not being as small or light as a single tablet, while still being more portable than a laptop and recreating some of the feel of paper. The familiarity with paper makes me wonder how much of their reported success comes from the fact that it fits easily into existing paper paradigms rather than being a more innovative approach which users have to adjust to.

Switchable diffuser: the authors created a table system employing switchable glass, which allows one image to display on the glass while a second image passes through the glass for display on objects above or on the surface. They coupled this with a camera to capture either FTIR or diffuse IR.
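For reference, FTIR sensing of this kind usually comes down to finding bright blobs in the IR camera image where fingertips frustrate the total internal reflection. A generic OpenCV sketch, not the authors' code; the threshold and blob size would need tuning for the actual rig:

    # Generic FTIR touch detection: threshold the IR frame, then report
    # the centroids of sufficiently large bright blobs as touch points.
    import cv2

    def detect_touches(ir_frame, thresh=200, min_area=30):
        _, binary = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                m = cv2.moments(c)
                touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return touches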

In this paper, I don't feel they make much of a case for why this technology is interesting. They mention how they are able to achieve this without the use of computer vision, but where is the advantage? Why is using computer vision to modify the projected image a worse way of doing it? Also, in the Codex paper, the authors did a good job explaining situations where having dual screens was a benefit. Here, the authors do not make their case. When it comes to a large surface, such as the one they are creating, what is the benefit of overlaying images rather than just having them in separate regions of the table?

Facet: a wrist system comprised of multiple screens which work in tandem. A standard array of screens is shown by default. The user can use touch inputs to expand a single screen onto multiple screens for more detail.

Like the previous paper, not much attempt was made to really explain why their device is useful. What is the advantage of having a bracelet made of multiple screens over a mobile device? It seems like they are more likely to get damaged through normal daily activities than a mobile phone. I personally find a regular wristwatch cumbersome when doing things like typing; I can only imagine how cumbersome this device would be. Also, I question privacy issues if screens on the back of your wrist are displaying information which anyone can then view.


Sean Chen - 2/26/2013 3:44:09

Codex

Codex is a dual-screen tablet. The user can use it to multitask as well as to do collaborative work. The device has sensors to detect its posture and then automatically switches the screen orientation and interactions correspondingly.

The paper did a nice job introducing the different postures and how Codex would act in each situation. I also like the detent hinge, which lets the tablets be put at any desired orientation and angle.

This paper presents many clever details designed to improve the usability and experience of using dual-screen tablets. But it seems like some of the capabilities could be achieved by a single-screen device with a little tweak to the UI. The paper doesn't discuss which critical features can't be replicated by a single screen.

This paper tries to explain, if people have to perform such tasks on a tablet, why a dual screen can outperform a single screen, and what the best designs for performing those tasks are. But my question would be: would people really want to perform those tasks on tablets?

The collaboration mode seems a little limited. There should be more possible interactions beyond synchronous/asynchronous screens; otherwise, apps such as Google Docs could easily replace this device.


SecondLight

This research presents a surface technology which can be toggled between a diffuse state and a clear state. The former state allows rear projection to display images on the surface, whereas the latter allows the device to project another image above the surface without interfering with the first.

This technology and its applications really stunned me. It does, in a sense, bring together the best of two worlds. Although one doesn't necessarily need a second display to see multiple layers (as in Google Maps), the tangible lenses make the system easier to control and allow multiple users to interact with it simultaneously.

The system can track mobile surfaces and even accept input from them. These can be used as control panels without taking space from the primary display. In addition, the mobile-surface controls can be dynamically changed according to context. The ability to provide feedback by projecting onto the interacting objects also brings a lot of potential interactions to the system.

Like some of the tabletop research we read, it demonstrated its capabilities with a few scenarios. But perhaps due to the nature of the work, no solid use cases were evaluated. And as Grudin mentioned, very few applications are designed for multiple displays; without killer apps, most people won't register the benefit of having such technology. One last thing: I could not find the cost of such surface materials. The cost could also be an important factor.


Facet

Facet is a wrist-worn device consisting of multiple small displays. Each display segment can be removed from the bracelet. The system has an inter-device communication framework which allows the segments to interact with each other, for example by expanding one application onto adjacent segments.

This device reminds me of the 6th-generation iPod nano, which can be worn like a watch. The benefit of such a design is that it allows always-available input. Facet further expands the possible usages and interactions. The virtual segment is clever, but I wonder: why not just make the content scrollable so that the user need not keep rotating the device?
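The virtual-segment idea can be pictured as a ring of logical faces that is larger than the ring of physical screens, re-mapped as the bracelet rotates. A toy sketch; the app names and ring arithmetic are mine, not the paper's:

    # Toy model of Facet-style virtual segments: 8 logical faces shown on
    # 6 physical screens, re-mapped as the user rotates the bracelet.
    N_PHYSICAL = 6
    APPS = ["clock", "email", "calendar", "weather",
            "music", "map", "fitness", "notes"]

    def visible_apps(rotation_steps):
        """App shown on each physical segment after rotating the bracelet
        by rotation_steps segment-widths."""
        return [APPS[(i + rotation_steps) % len(APPS)]
                for i in range(N_PHYSICAL)]

    print(visible_apps(0))  # initial mapping
    print(visible_apps(2))  # two steps later: new faces scroll into view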

This paper doesn't have clear use cases or evaluations of how well a user can really perform tasks on these small yet wearable displays. My doubt is how efficiently emails can be displayed on such small screens. Even though the device has six segments, a user can at best view three of them at the same time. In addition, I don't see the use of showing content on the bottom segments as well as the two facing outward. Is it possible to have only three segments and then virtually mimic the 'rotate' action so that the content of those three segments changes correspondingly? That way, we could save hardware cost and reduce the discomfort the bottom segments may cause when one's arm rests on them.


David Burnett - 2/26/2013 4:53:41

“Codex: A Dual Screen Tablet Computer”

This project explores the usability concepts involved with a tablet computer with two screens instead of one. The screens can detect orientation and can be detached from one another, which can trigger actions.

The study is well motivated by the amount of content in the paper discussing the ways we might interact with such a device, and by the paper models used to judge form factor before fabrication. In addition, where some device designers would build a novel hardware platform and leave it in the hands of developers to determine how it should be used, a significant amount of effort went into creating sample dual-tablet applications and demonstration uses. The versatility of the hardware itself is an achievement, to say nothing of the applications built for it.

The dual-screen concept itself is extremely natural; our information systems have long had two panels in books and this work capitalizes on many computer users' preferences for two screens. Regardless of screen size, the appeal of two physically separate panels free from window management seems universal.

Though the authors cover nearly every possible orientation and use of a flexible two-display handheld device, it may be that they offer too much capability at once. Focusing on too many use cases has potentially diluted their core notebook-like interactivity concept; uses like disconnection, presentation, and collaboration might be best left to separate studies while this one focused on all the possible uses the book configuration could have.

The authors' evaluation of success leaves much to be desired, as most of the results "data" consists of qualitative, market-research-style feedback. Possible methods of quantitative evaluation, beyond numerically scoring user responses, could include speed of task completion, ability to keep focus while on a task requiring reference, and ability to perceive complex document organization in a more familiar manner.


"Going Beyond the Display: A Surface Technology with an Electronically Switchable Diffuser"

"SecondLight," as the paper describes it, includes the ability to pulse the opaqueness of a screen in time with two projection systems fast enough that one projector can shine through the screen when in transparent mode while the second uses the diffuse mode as a projection surface.

In a surprising twist from MS Research's typically infrared-themed work, this pair of projectors offers a creative, novel display system using simple but seldom-employed technology. The two layers combine a traditional computing surface with a real-world one, without a typical projection screen. It's the first multi-layered display I've seen, and I wonder if they have plans to make sandwiches of the material to create display systems with true depth of focus. Thankfully, the paper makes no mention of 3D applications.

The screen is paired with MSR's existing IR tracking techniques to make a capable tactile display surface. A static pair of screens allowing a "reveal" capability, such as their suggested wireframe or constellation demos, is fun but lacks immersion; adding the Surface-style widget and fingertip tracking makes a two-layer display into which creative energy is worth investing.

Creative energy is what this project could use slightly more of, unfortunately. The previous paper created several legitimate demonstration applications for its hardware, but this paper suffers from the same issue as the original Microsoft Surface: a lack of compelling, application-specific use cases. The original Surface was simply taken to be a very large computer screen you could touch instead of click, and I fear this projector system may share the same fate. The paper describes several types of interactions the device supports but goes into little detail about the kinds of activities the device enhances.

The paper makes only brief mention of viewing angle, which appears to be carefully chosen to avoid blinding the user. The surface is clearly illuminated by the projector responsible for transmission during the transparent period, and therefore looking straight down through the surface would mean looking directly into one of the projectors. The projector is angled to avoid this, but at the cost of carefully calibrated preprocessing to ensure square projection. The authors' insinuation that facial recognition could blank out the light on a user's face implies they don't have much familiarity with image-processing-based feedback systems.


"Facet: A multi-segment wrist worn system"

Facet is a wrist system that deals with the limited screen real estate of a watch by using six of them. These six screens, arranged around the wrist, are touch-sensitive and able to communicate with each other.

This project extends the concepts of multiple desktop monitors and virtual desktops to a new form factor. The hardware is mostly off the shelf, which enables software-oriented HCI researchers familiar with multiple-monitor interface concepts to easily prototype concepts on hardware themselves. In other words, someone interested in this concept could try it out for themselves, which is rare in HCI hardware development.

It also solves one of the fundamental frustrations with advanced watch functions: programming them. By mapping specific tasks, actions, or roles to physical hardware tokens, the user is able to reconfigure the device without the accidental mode switches or complicated multi-step button sequences that often lead users to ignore secondary watch functions.

Each face of Facet depends on knowing the current wrist pose, which may pose problems when using the device in motion or when needing to refer to it quickly. Waiting for accelerometers to settle to reorient a smartphone or tablet is already a snag, and waiting for the same change on a reference device would be many times more frustrating, especially since it may eliminate simple, low-effort glances.
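In principle, that settling lag could be reduced by fusing a gyroscope with the accelerometer instead of waiting for the gravity reading to stabilize; this is purely speculative, since Facet's segments carry only accelerometers and magnetometers. A textbook complementary filter as a sketch:

    # Speculative sketch: blend fast gyro integration (responsive but
    # drifting) with slow accelerometer tilt (absolute but noisy).
    def complementary_filter(prev_angle, gyro_rate, accel_angle,
                             dt, alpha=0.98):
        return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle

    # e.g. at 100 Hz: angle = complementary_filter(angle, gyro, acc, 0.01)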

Lastly, six faces may be an extreme number of additional windows. The number six is effectively at the top end of simultaneous human memory, and the three faces not facing the user at any given time will easily have their content and/or order forgotten. I can imagine many instances where the user must spin the device and hunt around for the intended panel.


Joey Greenspun - 2/26/2013 8:28:06

Codex: A Dual Screen Tablet Computer

This paper presents a unique two-screen system capable of determining the relative position of both screens and changing functionality accordingly.

One very interesting design aspect this group employed was a very preliminary human-trial step: they made paper and foam prototypes of varying sizes to determine how they should go about making their product. Form factor is basically everything they are innovating; the technology already exists, and they are just putting it together in a novel way. They made three different sizes and found that users enjoyed the smaller two much more than the larger one. This fit nicely into the market space they were looking to fill: the goal was to create something that could be brought into a meeting room and was highly portable.

Something they don’t do a great job of is motivating why two screens are better than one. They have one point wherein they mention that when partitioning one screen into two separate sections, users would “need to remain disciplined enough to avoid resizing windows or placing other windows across the virtual boundary.” However, this doesn’t really seem like an actual problem to me; it can all be handled in software. This is really their only point. They continue to state that it would be interesting to explore how physically separate screens are useful…but shouldn’t they have done that already? This is the salient feature of their product.

Going Beyond the Display

These researchers present a novel rear-projection technology that maintains all the benefits of a rear-projection system while adding the ability to interact with the space above the surface of the screen. They accomplish this by rapidly toggling the screen between diffuse and transparent states, so that projection and sensing can be performed at the surface and then above the surface in quick succession. Each is only on for half the time, but the human eye cannot see this and thus perceives both as on all the time.

At first I thought an incredible amount of machine learning and tracking would be needed to project through the screen onto objects above it; however, one of their functionalities requires no additional processing at all. An interesting example is an augmented image of a car being constantly projected through the screen, so that when you place a piece of paper or transparent film above the car, you see a more detailed schematic view of the car as opposed to the outside only. This cuts down on the processing they need to do and makes the system versatile in a very unique way. They go on to mention that they could track and monitor the positions of various objects to dynamically determine what is projected on them. They give the example of a piece of glass that can act as a magnifying glass, so that when it is moved close to the surface, it enlarges the image or displays new information. I think this is really where a lot of interesting applications can be developed.

The researchers go on to mention that they could do hand tracking using this setup, which I thought was a very strong argument for why their technology is relevant. Many hand-tracking systems have completely decoupled imaging and display platforms: you are typically using your hands in one place and seeing things happen on a screen elsewhere. This setup provides a system in which it is all done in the same space, although they state that determining depth would be difficult without additional cameras. One problem I had when thinking about this system was that if the user peered over the display, he or she would be blinded by the through projection, so I'm glad they mention in their future work that they would like to track eyes and faces so they can 'project black' at a user's eyes.

Facet: A Multi-Segment Wrist Worn System

Here is presented a device that has six completely separate, but coupled, screens that are fit into a watch/bracelet form factor. The platform has magnetometers and accelerometers in each segment of the device, which are used to determine a variety of things.

At the start of this paper, the researchers mention that there has been no research on multi-display, watch-sized wearable systems, which begs the question of why that is. Is it by design, or have researchers simply not picked up on the hidden gem yet? My initial reaction is that this is overkill and a bit cumbersome. I don’t need to have my calendar, email, time, and weather all visible around my wrist at the same time. The beauty of a watch is that at a single glance you can get all the information it has to offer. We want to interact with devices such as this as unobtrusively as possible, and having to put three fingers on three separate screens is definitely not unobtrusive. Additionally, having screens completely blocked from our vision is frustrating and inefficient.

These researchers have developed a fairly rich set of gestures they can recognize and use to change what is displayed where on their setup. They also solve the issue of determining where each screen is in an interesting way, though with questionable robustness: they determine the orientation of each screen in relation to the other screens, and all screens that share an axis are said to be in the device. This seems a little hacked together to me; there should be a way to determine which devices are coupled via some sort of real communication link (see the sketch at the end of this post for how I picture the axis heuristic).

Additionally, I don’t see a huge benefit in being able to detach the various screens. The researchers are adamant that this is a great aspect of their design, yet admit they could remove a lot of hardware redundancy if the screens had to stay in place. I would have liked to see them better defend why it’s so important to be able to remove the screens. Something that isn’t mentioned at all is stress testing or durability: if this slips off your wrist and hits the ground, is it going to shatter? Will a screen just pop out? And along the same lines, is there any way to adjust the watch so that it is tighter or looser on your wrist?
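Here is how I picture the shared-axis heuristic: check whether the rotation axes sensed by the segments are (anti-)parallel within some tolerance. The vector math and tolerance below are my guesses at the idea, not the authors' implementation:

    # Group segments whose sensed rotation axes agree with segment 0's,
    # treating parallel and anti-parallel axes as a match.
    import numpy as np

    def same_bracelet(axes, tol_deg=15.0):
        ref = axes[0] / np.linalg.norm(axes[0])
        group = []
        for i, a in enumerate(axes):
            a = a / np.linalg.norm(a)
            ang = np.degrees(np.arccos(np.clip(abs(ref @ a), 0.0, 1.0)))
            if ang <= tol_deg:
                group.append(i)
        return group

    # Two segments share the wrist's axis; a stray device does not.
    print(same_bracelet([np.array([1.0, 0.0, 0.0]),
                         np.array([0.99, 0.05, 0.0]),
                         np.array([0.0, 1.0, 0.0])]))  # -> [0, 1]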


Valkyrie Savage - 2/26/2013 10:08:18

Many new interaction techniques become possible when we introduce multiple displays. The papers this week explored the use of two small displays, a large display and a small display, and six tiny displays.

The Codex, somewhat pretentiously named after the Memex of Bush’s dreams, uses two medium-sized displays with input capabilities inside a book-like form factor. The screens can also be detached. The most interesting part of the paper is the “postures” that they offer as users move the book-like holder into different configurations (different angles, portrait vs. landscape). A lot of the language in this paper surrounding that interaction sounded rather social-science-fluffy: “a nuanced gradation of private vs. public interactions” and “space with a dedicated purpose.” But the idea was sound. The authors spent a significant amount of time considering problems like how users can collaborate effectively (and what happens when the need to stop screen sharing arises), how to preview links, and how to support primary and secondary tasks.

The Microsoft paper, detailing a display which has both diffuse and clear states, sounded quite technically awesome. They essentially used privacy glass, two projectors, two cameras, and a whole host of LEDs to create a tabletop interaction surface that goes beyond the tabletop: projecting information onto diffuse objects above the surface, as well as looking through the surface to determine which touch points belong to which users, and even potentially who those users are! I wish they had done more evaluation of applications with this thing, because I can see that the ability to actually disambiguate users, rather than just probabilistically disambiguate them, would really improve the user experience in collaborative situations. They had a lot of other suggestions for directions to explore with their new tabletop, including “projecting black” over detected eyes so as not to blind users during tabletop use. The ability to project on diffuse surfaces and actually realize Ishii’s long-ago dream of a magic lens is terrific, and I’m sort of surprised this isn’t a product yet (although their implementation was quite complicated... and tabletops just aren’t that popular...).

The final paper, Facet, explored the use of six tiny screens worn as a wristband. These offer some interesting possibilities for public vs. private interactions (as in the Codex) based on the orientation of the screens with respect to the primary user/wearer, and I thought their implementation of the virtual screens was very clever. There’s nothing really to distinguish the screens from each other anyway, so why not make them infinite? It’s kind of a shame that the implementation they offered used touch screens which only supported one touch point, but it also seems that they did a good job of overcoming this obstacle with multi-screen multi-touch gestures. Their usage scenario up front gave a good motivation for the paper, too. The thing I was most disappointed about was the lack of evaluation. I know it’s a UIST paper, but slap the thing on some people’s wrists, why don’t ya, and see if they like it, or if it’s too heavy, or if the interactions you constructed are unnatural.