Design I: Methods

From CS260Wiki

Lecture Slides


Extra Materials

Discussant's Slides and Materials

Reading Responses

Luke Segars - 10/17/2010 14:51:31

Getting the Right Design and the Design Right

This paper explores the idea of doing usability testing with multiple designs simultaneously. The paper states several reasons why this might be (and usually is) a superior approach to testing a single product's usability with an untrained user, including users' greater willingness to leave negative feedback and to compare the strengths and weaknesses of a particular design.

It was particularly interesting to see how people's comments and criticisms changed when they had designs to compare against each other. Since the vast majority of test subjects will not be trained designers, it may be hard for them to determine whether they "like" or "dislike" a particular feature in isolation. With that in mind, it is unsurprising that users become more critical when given a point of comparison. Building off of this discovery, the experimenters also noticed a decrease in "superficial" (generally vague or non-helpful) comments, meaning that the quality and overall usefulness of each comment likely increase on average. It would be interesting to see how the trial time changed as multiple designs were introduced. I suspect that adding additional designs would not drastically increase a user's trial time; instead, users would likely be able to compare all designs after gaining a firm understanding of one of them. If this is the case, then multiple design alternatives become an even more valuable method for research involving many subjects.

Designing a user-centered HCI experiment is amazingly difficult, and more variables seem to show up every time I think about designing one. After hearing the idea stated by another person, it is logical to expect that someone could perform a better design analysis if they had samples of multiple designs available to compare. Nevertheless, this isn't something that had occurred to me, and it adds another dynamic to project design. The idea of presenting multiple designs doesn't seem to have many downsides (aside from the need to develop the competing designs in the first place) and seems like a best practice that the HCI and design community would do well to adopt.

Krishna - 10/17/2010 15:11:14

Dilemmas in general theory of planning

The arguments make sense if we relate human interface design to policy design. As the authors argue, policy design for a pluralistic society is a problem with no clear goal, owing to the subjectivity that goes with equity, and with no sense of correctness and thus no sense of optimal solutions. From an HCI perspective, then, we might be tempted to ask questions such as "does it make sense to talk about an optimal design which works equally well across users?"

I disagree, however, and consider the arguments of this paper irrelevant to HCI. Great solutions are disruptive; they are not created to please everyone, in which case the issues mentioned in the paper would have been relevant. Rather, they tend to make users adapt to them, and failure to adapt is a significant loss to the user, not to the design. In this sense, (great) design solutions can be thought of as coming out of an autocratic society where the designer rules, however right or wrong he may be. I would argue that to create such disruptive designs, designers should have a goal, a sense of correctness, and a sense of optimality. They may be wrong, but in the case of disruptive designs they are amazingly right.

Testing Many Is Better Than One

The authors report that participants are reluctant to be critical when evaluating a single design: they are not open enough or forthcoming, for various social and psychological reasons. The authors argue that design evaluations should therefore include a number of alternate design solutions. Their hypothesis is that when given alternate design solutions to evaluate, participants will tend to be more critical and feel less pressure to be positive about the individual designs than when they see only one. They also hypothesize that participants evaluating alternate design solutions will be exposed to a broader base of design ideas and thus tend to provide more suggestions, prompting the authors to consider evaluation of multiple designs as a participatory design process. This is not immediately intuitive to me, as design critique is much more than providing suggestive comments.

They test these hypotheses by asking participants to evaluate three design solutions for a house climate control system. The solutions had the same functionality and varied only in style; for example, one solution used a circular dial instead of a table of drop-down lists. Participants were assigned either to the single-design evaluation condition or to the condition where they evaluated all three. They were asked to score each of the designs in addition to providing textual response statements; these statements were later classified as comments or suggestions, with comments categorized as positive or negative and suggestions as substantial (new or borrowed) or superficial.

Their results show that the average score given by participants to these designs was higher when the designs were shown individually; the results also show a higher number of positive comments in the individual-evaluation case. They also found that while the number of substantial suggestions remained the same, the number of superficial suggestions decreased from the single- to the multiple-design case.

The experiments clearly suggest that participants are relatively more objective when offered multiple design solutions to compare and evaluate. However, unlike what the authors had hoped for, participants did not provide enough substantial comments. Though this rules out using the evaluation technique for participatory design, the authors argue that the superficial comments, rich in objectivity and critique, should be considered valuable feedback for the design process.

Thejo Kote - 10/17/2010 15:36:13

Dilemmas in a general theory of planning:

In this paper, the authors argue that in the domain of social policy, a professional's job has become harder because of diversity and pluralism. They call problems of this nature "wicked" problems as compared to the "tame" problems of the natural sciences where problem formulation and identification of solutions is much easier.

The authors argue that it is a challenge even to formulate goals and to define the problem in the case of "wicked" problems. It is often the case that to define the problem, one has to determine what an acceptable solution is, because the solution determines the problem statement. Thus the classical "systems approach" to problem resolution doesn't apply. Also, they argue that it is hard to determine when a solution has been achieved. Their nature is such that nobody can say with certainty that a solution has been reached, and even if they can, the solution can only be classified as good or bad depending on the viewpoint of the evaluator. Wicked problems also have no tests to determine whether a solution is the right one. In general, they are unique and don't provide the benefit of hindsight to learn from.

The authors paint a very gloomy picture and it's hard to find any major holes in it. My main takeaway was that - and the reason I assume we were made to read this paper - in the social sciences (and design, in our case), there are never easy solutions to problems, if we are able to successfully formulate the problem in the first place. We have to learn to embrace the ambiguity and find workable solutions without expecting a convincing, universally acceptable solution.

Getting the Right Design and the Design Right: Testing Many Is Better Than One:

In this paper, Tohidi and co-authors study the impact of testing single versus multiple versions of a design with users. They test their hypotheses that when shown multiple designs, participants will rate them lower, make fewer positive comments and provide more suggestions for improvement. The authors conduct an experiment to test the hypotheses and share the results.

They learn that their hypotheses about lower ratings and fewer positive comments are accurate, but not the one about generating useful suggestions. They also conclude from their experiment that user tests are only good for identifying problems, not for finding solutions to them.

The reason testing a single design is not ideal is that test subjects don't want to seem negative by criticising it. They feel that negative feedback reflects poorly on the designer, which they want to avoid. On the other hand, testing multiple designs suggests that the designer is not yet committed to any one of them and is still in the process of finding the best one, and so is more open to feedback.

While this is an interesting study, I don't think they considered an important aspect of the test. What if it was made very clear to the test subjects that the people conducting the test are not the designers, or if it was conducted without any human intervention? Does the difference between single and multiple design tests still hold?

Charlie Hsu - 10/17/2010 16:25:53

Dilemmas in a General Theory of Planning

This paper discusses the difficulty of using traditional scientific methods in addressing "wicked problems." The paper is motivated by the professionalization of social services in the 1960s: policemen, civil engineers, social workers, etc. and their tendency to be publicly attacked for inadequate execution of their jobs. This is because their problems are usually "planning problems", societal problems that are different from the scientific and "tame" problems that previous professionals were used to dealing with. The paper describes some characteristics of these "wicked problems," and the different approaches needed to tackle them in relation to "tame" scientific problems.

Though the paper does not directly deal with human-computer interaction, it is clear that HCI problems strongly tilt towards the "wicked problem" classification rather than the "tame," scientific, easily verifiable problems that the paper contrasts with wicked planning problems. I will quickly hit some of the characteristics mentioned in the paper that apply to HCI problems. There is indeed no definitive formulation of the problem of design; what exactly defines good design, usability, richness of human-computer interaction? If we could define these things, the problem itself would be solved, one of the main characteristics of wicked problems. There is no stopping rule in HCI: never will we reach a point where we cannot make forward progress and continue to iterate. Design solutions are not concretely "true or false", but subjective, "good or bad", depending on judgments from any number of interested parties. Nor does HCI have an enumerable set of potential solutions: though attempts have been made to define the design space of certain areas (see the paper on morphological analysis of the input device design space), the overall set of solutions to HCI problems is an innumerable and ever-expanding one.

However, there were some characteristics of wicked problems noted in the paper that clearly did not apply to HCI. We still use scientific methods to test hypotheses in HCI research. Though we may not receive a concrete answer, statistics and hypothesis testing can offer some degree of certainty about the effectiveness of a new design. Furthermore, HCI solutions do not have quite the same risk associated with the planning problems described in the paper; prototyping a new input device and prototyping a new highway certainly have different costs. Is this simply due to the scale of the problem? I argue that it is not: HCI trial-and-error experimentation can be done in a closed, controlled environment, and does not force consequences out into the real world. Furthermore, the iterative design process and use of lo-fi prototyping in HCI research dramatically lowers cost of trial-and-error testing, and may not be as feasible in tackling social planning problems.

Getting the Right Design and the Design Right

This paper presented a study comparing two different usability testing techniques: one where users were presented with a single design prototype to evaluate, and one where users were presented with multiple design prototypes to evaluate and compare between. Results from the study showed that users were more likely to criticize designs when given multiple designs to consider, compared to when they were given only one. Users also were less likely to comment positively on designs when seen in a group rather than singularly. Finally, though presenting multiple designs to users allowed them to focus on the critical aspects of the design more closely, as shown by a decrease in the number of superficial suggestions received, presenting multiple designs did not affect the number of substantial suggestions as compared to when users were presented with a single design.

Many of the insights in this paper brought back memories of user testing in CS160 and inspired new ways I might have improved on those user tests. Our initial paper and lo-fi prototypes were all presented singularly to the user. There was indeed no chance for us to "get the right design," since we could only iterate on the design we had already chosen from the start. Using the low cost of paper prototypes, we should have mocked up a set of prototypes to give ourselves a head start by starting with "the right design," instead of trying to force the design we had chosen from the start to become the "right design." This is a classic example of investing more work on the planning stages of the design to reap huge benefits and avoid roadblocks later on in the iterative design process.

All three of the hypothesis analyses in the paper also matched the experiences we had during user testing with our single design. Users were often reluctant to criticize, and quick to praise. Furthermore, suggestions were often superficial in the later stages of prototyping. Even with lo-fi and paper prototypes, where the user was able to focus more clearly on critical design flaws instead of superficial ones, the suggestions offered to us were often not related to the actual interface design; they were instead "feature suggestions," coming from the users' field expertise in music editing and dance practice, not from conjured creativity in user interface design. This matches what the authors concluded about design and creativity skills: they are specialized, and user testing should be more about detecting errors than soliciting design ideas.

Aaron Hong - 10/17/2010 16:31:30

In "Getting the Right Design and the Design Right" by Tohidi et al., the authors talk about how standard usability testing techniques are inadequate and how we need to test multiple alternatives at the same time. They even mention that designing in parallel with multiple teams is not ideal either, as industry usually has one team working on multiple designs. They found that there were more negative comments, more constructive comments, and less tendency to try to impress the test conductors. Generally, I felt that testing multiple designs at the same time is a good idea: if we get stuck on one, we can never consider designs more orthogonal to our current one.

In "Dilemmas in a General Theory of Planning," Rittel and Webber talk about social policy and how applying regular science to it will fail because social problems are not neatly defined. They are what the authors call "wicked" problems: they do not have an enumerable set of potential solutions, and there is no immediate and ultimate test of a solution. Given these criteria, I agree that these are important, yet difficult, problems. Knowing this gives us an idea of how to approach and think about them, without resorting to quasi-science where there is none.

Kurtis Heimerl - 10/17/2010 16:46:36

Dilemmas in a General Theory of Planning

This paper attempts to explain why the policy sciences have generally failed.

This is a terrible paper. As someone who sits in the Development field, I've read many works where social scientists try to explain why their job is so hard and why results are rare. I feel like I have a good understanding of that. This set of authors does not. They completely misrepresent the "hard" science world, saying that it works on "benign" problems. Is "how did the universe come into existence" a benign problem? No, it's not. Not by these authors' definition, at least.

What scientists are able to do is break problems into smaller parts and agree on those. Policy organizers can do that as well. I'm pretty sure we're good at building and designing overpasses now, as an example. We may not know how to "reduce crime," but we do know how to "immunize people."

The key point is that scientists are better at finding low hanging fruit. As referenced in the early part of the paper, no one complained during the early years of policy discussions. Only when they had to start tackling large (not "wicked") problems did the problems appear.

Getting the Right Design and the Design Right: Testing Many Is Better Than One

In this paper, they argue for presenting multiple designs when conducting user studies. This is to produce the "right design" instead of merely "designing right": providing the correct basic interface rather than spending great deals of time iterating on the wrong one and making it usable.

I agree with the sentiment, though I doubt the results somewhat. Why three designs? Why just N=6? Fundamentally, this seems correct for all of the reasons they gave; giving users the task of selecting one from many is a huge shift from asking users to critique just one. I just wish the analysis had been more thorough.

There's not much to write here, I suppose. There's a concern about the axes that you present to the user; they will have a harder time thinking "outside the box" of the three designs you show them. This may have been covered, but I didn't see it. This is more problematic when you design across cultures, as the designs you present may not be localized, regardless of how many you give. A better methodology would be participatory design, having the users design the system themselves.

David Wong - 10/17/2010 17:05:53

I choose to opt out of this week's reading due to a conflicting deadline.

Drew Fisher - 10/17/2010 17:06:54

Dilemmas in a General Theory of Planning

Designers face "Wicked Problems" that have no clear formulation, hypotheses that can't be directly tested, and value functions that cannot be formulated.

This paper calls for attention to the fact that existing solutions to large-scale societal problems - Observe-Plan-Act and best practices - are ultimately ineffective. Further, we lack the theory to even discuss such systems.

The situation presented in this paper is rather dismal. It seems to apply both to problems of design and to problems of information representation; even thick description is not good enough to properly understand and solve these problems. Nothing that we currently have is adequate. It would be interesting to see what solutions people propose to this wickedest of problems, since the paper offers no insight in that regard.

Getting the Right Design and the Design Right: Testing Many is Better Than One

User studies have many difficulties, and one of them is getting the users to be honest. Testers tend to not like to criticize someone's hard work. This paper suggests that one way to get less-inflated feedback from users is to present multiple designs.

Nonetheless, while testers produced more negative and less positive feedback when testing multiple designs, they were still reluctant to offer suggestions; it is likely that they self-censored as a result of lack of experience in the design realm. Design skill is still a specialized skill.

The upshot of this paper is that we should consider working with multiple designs throughout the design process, rather than simply dropping to a single design upon which is iterated, and by doing so, obtain more valuable criticism.

Aditi Muralidharan - 10/17/2010 17:09:23

In "Dilemmas in a General Theory of Planning", Rittel and Webber introduced the "wicked problem", for which they later became well-known in their fields. These are problems that are difficult to define, difficult or impossible to identify solutions for, and are interconnected with other wicked problems, so that there are unintended consequences everywhere. Wikipedia has a clear [summary] of this concept and where it went.

This article was illuminating for me because it clearly spelled out our changing attitudes toward what a planning or design professional's job should be: from optimizing a clearly spelled-out problem to find the most efficient solution, to choosing and identifying the problem in the first place, and then proposing a solution not necessarily governed by efficiency. This concept feels relevant to me as a researcher applying computational methods to the humanities: before I can even identify a set computational problem, there is a minefield of humanities politics, stakeholders, biases, and perspectives to navigate and make explicit. Applying a bite-size computational tool to humanistic analysis in a way that satisfies both humanists and computer scientists often leads to considering criteria other than efficiency, so it was illuminating to see a perspective on why the problems suddenly seemed so thorny to me as a scientist, and so unlike the problems I "grew up" with when I was doing my undergraduate degree in physics.

The second reading "Getting the right design..." doesn't seem related to the first paper, and the concept the authors present is not new to me - it was impressed upon us in i213 (User Interface Design) last semester. We did not read this actual paper for the class, so it was nice to finally read it. In that class, the importance of presenting multiple designs and giving an "unfinished" look for getting good user feedback was stressed. We had to make sure our paper prototypes and video prototypes were not too polished looking, and that we came up with at least 3 different designs for our user studies and for all of our design assignments in that course. From my experience, at least, I can say that the conclusions drawn in this paper are accurate and very useful in practice.

Bryan Trinh - 10/17/2010 17:51:59

Dilemmas in A General Theory of Planning

Wicked problems are those for which there is no definite answer, no clear solution. Rittel and Webber provide the context in which wicked problems exist and a frame of mind in which to view these problems. Design, like politics and other disciplines that solve problems of human behavior, has an initial step of knowing what the root of the problem is. It requires a deep review of priorities and purposes, an ongoing gauge of values and commitments.

More than anything else, this paper establishes the correct frame of mind for solving problems holistically; unfortunately, it does not provide the accompanying frameworks to solve wicked problems. The language of the paper seems to glorify the existence of this planner figure, who needs to think in these complex, overarching ways; it seemed more like a self-proclamation of importance than anything else.

If we take what the authors said about the science and engineering professions to be true in their time, it is certainly not true today. Engineers and scientists are just as exposed to the pressing issues of the world and properly equipped to uncover them, as well as to solve them in pragmatic ways.

Getting the Right Design and the Design Right

Like the last paper, this one focuses on problem definition. The authors show that by providing a user testing group with many possible solutions, as opposed to just one, participants are much more likely to identify problems in a design. Furthermore, they also found that users were only able to identify the problems, not provide the solutions.

This paper is a call to change the way designers initiate the design process. By borrowing some of the methods used by other design disciplines, the authors were able to show that iterating early in the design process leads to a better solution. This iteration process is not provided by multiple people or teams; it is provided by one team. I think this is one of the most important concepts addressed in this paper with regard to designing with a team. Each team member has to abandon the championing of individual ideas and keep an open mind.

It also addresses some issues with user testing. Like with all other wicked problems, human subjectivity can make user testing a very nuanced practice. The planning of this process is of the utmost importance when trying to capture good data.

Siamak Faridani - 10/17/2010 18:33:39

In this article, Horst Rittel and Melvin Webber, both urban planners at Berkeley, define the "Wicked Problem". Unlike "tame" problems, these problems are difficult to solve. They are not well defined, and their requirements are either not well known or contradictory. The authors start with the fact that some questions (mostly in science and engineering) have a unique answer; they might be hard to solve, but the answer is well justified, and its validity is verifiable.

Rittel and Webber point out that planning problems are wicked problems and highlight 10 characteristics of these problems:

1. There is no definitive formulation of a wicked problem.
2. They have no stopping rule.
3. The solutions are good or bad, not true or false.
4. Solutions to wicked problems are not easily verifiable.
5. The solution to a wicked problem is a one-shot operation, since most of the steps cannot be undone.
6. One cannot enumerate all possible solutions to a wicked problem.
7. Wicked problems are unique.
8. A wicked problem can be considered a symptom of another problem.
9. Wicked problems can be explained in different ways.
10. The planner faces a great deal of liability.

In the second paper, Tohidi et al. start with the hypothesis that during a user study, if you give your participants one UI, they feel obliged to impress you and will be less critical of your work (my personal user tests with my grad student friends typically show a completely different effect, though). They state a number of hypotheses (H1, H2 and H3) and design an experiment that supports them. The article has received 30 citations.

One criticism I have of their paper is that even though they have shown that the number of comments for each UI decreases as the number of alternatives increases, it is not guaranteed that they are receiving more constructive comments. Increasing the number of alternatives may simply confuse participants and make it harder for them to keep track of the pros and cons of each UI. It may also cause problems for the researcher, making it harder to keep track of the controlled variables.

Matthew Can - 10/17/2010 18:39:55

Dilemmas in a General Theory of Planning

Rittel and Webber argue that there is no scientific basis for dealing with problems of planning because those problems are “wicked”, whereas the problems dealt with in science are “tame”. They provide ten properties that are indicative of planning problems and explain why those properties make planning problems wicked.

The primary reason that planning-type problems are difficult to deal with (and cannot be subjected to a scientific framework) is that they are problems of social policy. Such problems are not even well defined (one can only understand the problem with a preconceived idea for solving it), nor do they have a solution in the traditional sense. These are the first two properties of wicked problems, as laid out in the paper.

What I liked about this paper was its emphasis on the increasingly heterogeneous social context as a source of problems for planners. The plurality of social groups makes it difficult to plan for the social welfare of everyone. The authors conclude that the notion of a “social product” does not make sense because there is no way to objectively measure the welfare of a diverse society. Even if professionals are entrusted to make decisions on planning-type problems, the outcomes will not be better given the nature of the problems. The conclusion is that a theory of how to approach planning problems cannot exist.

In this paper, the authors talk about design planning in the context of city planning. This raises the question of whether their conclusions are relevant to the design process in HCI, where the role of social policy is less significant (or at least less salient). I would agree that design problems in HCI have some properties of wicked problems. For example, in the design of interfaces, there is no stopping rule. A true solution does not exist. Rather, the designer keeps iterating toward a better design until he runs out of resources. Along those lines, the solutions in HCI design are good-or-bad in nature, not true-or-false.

Although HCI design is similar to planning problems in some ways, it is not in others. A solution in HCI design is not a “one-shot operation” like a solution in city planning. There is an opportunity to learn by trial-and-error with usability testing, prototyping, etc. On another note, while I do agree that every problem in HCI design is unique in some way, solutions can be applied to similar problems. There are best practices in UI design that are widely applicable. Finally, the notion that “the planner has no right to be wrong” really depends on the context of the HCI design. Certainly it is not true in research. But even in commercial applications, the consequences of a failed design are not so severe if the design can quickly be rolled back. A Facebook UI designer and a freeway planner do not face the same liability.

Getting the Right Design and the Design Right

This paper explores the benefit of usability testing three interface designs (functionally equivalent, yet different designs) versus a single design. The key finding is that users give a lower rating to a design and are less willing to praise the design when it is presented in a group of three as opposed to by itself. However, presenting users with three designs does not increase the number of suggestions they provide for how to improve a design.

Compared to other HCI papers we have read, I thought this one argued its point rigorously. In particular, I liked the detailed description of the method the authors employed. Most important, they thoroughly described the user tasks, the questionnaire, and the interview. This lends a lot of credibility to the study. As for results, the paper soundly validates its conclusions with tests of statistical significance. One result I found interesting is that although presenting a user with three prototypes makes the user provide fewer positive comments, it does not necessarily make the user provide more negative comments.

For me, the most relevant insight gained from this paper is that usability testing with multiple designs can help lead to “getting the right design”, not just to “get the design right”. Working with only one design is a process of iterative improvement, not a process of searching through the design space for the best solution.

Arpad Kovacs - 10/17/2010 18:42:11

The "Getting the Right Design and the Design Right" paper shows how presenting multiple alternatives, rather than just a single design, encourages users to provide more useful criticism by providing a basis for comparisons, and less pressure to praise a particular design. The investigation consisted of creating low-fidelity paper prototypes of 3 styles (circular, tabular, and linear) of a house climate control system, and then running a between-subjects experiment that showed users either only one of the designs, or 3 at a time, and asked the users to perform a task, then rate their experiences of using the system in a survey and an interview.

I think that the main contribution of this paper is that when a design is tested in isolation, the results may prove to be more optimistic than they should be, since users do not have a frame of reference, and will thus be less willing to point out flaws in the design and instead provide false praise. In contrast, when multiple designs are shown at once, users will not hesitate to identify the most deficient design. In particular, the number of negative comments for a bad design increases significantly (11.5->70 for the linear design) when the alternatives are shown; perhaps this is because users can clearly see that there exist better solutions, and become more confident that the flaw is with the design rather than their own abilities.

I thought that it was amusing that the researchers tried to analyze subjective user feedback by using a categorization-tree, and then proceeded to run statistical analysis on the quantity of responses in each category. This approach seems to be a good way to identify bad designs (by counting the number of negative comments), however it is not clear to me what the utility of distinguishing a new substantial comment vs a borrowed substantial comment is. I think it would have been more interesting to actually implement each user group's suggestions and then compare the resulting designs. I expect that the 3-designs-at-a-time user group would end up taking the best elements of each design and combining them into a composite, while the groups that evaluated only 1 design in isolation would branch off in more novel directions. The ultimate question is which approach results in the most usable final design; however this remains unanswered by the paper.
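
The kind of statistical analysis described here can be illustrated with a small sketch. Assuming invented comment counts (these numbers are hypothetical, not the paper's data, and the function name `chi_square` is mine), a chi-square test of independence on a condition-by-comment-type table is one standard way to check whether comment frequencies differ between the one-design and three-designs conditions:

```python
# Hypothetical counts: rows = condition (single design, three designs),
# columns = (positive comments, negative comments). Not the paper's data.
table = [[40, 12], [25, 70]]

def chi_square(table):
    """Pearson chi-square statistic for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

stat = chi_square(table)
# For a 2x2 table (1 degree of freedom), stat > 3.84 rejects
# independence at the 0.05 level.
```

With these made-up counts the statistic is large, which is the shape of result the paper reports: comment categories are not independent of how many designs were shown.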

The "Dilemmas in a General Theory of Planning" opens with how the public's confidence in professionals is being shaken, and then attempts to justify the unsatisfactory results achieved by professionals by describing the process of planning and attempting to solve difficult problems. The paper claims that central planning, as implemented by professionals, is divided into the 3 parts of goal formation, problem definition, and equity issues. Goal formation is concerned with asking what systems do, and what should they do; it consists of exploration to find latent desired outcomes. Problem definition, which was once all about efficiency and efficacy, is now more concerned with identifying observed conditions vs desired conditions, and then prescribing the actions that will bridge this gap. Finally, the chapter delves into the realm of "wicked planning problems", which are difficult to solve because they are unique yet not clearly defined nor bounded, cannot be solved through trial and error, and do not have a clear terminal state.

I am not sure what the contribution of this paper to HCI is. What I can extract from it is that life is much easier when we deal with deterministic, observable, and well-defined problems in the pure sciences and math, rather than messy engineering and social-science issues that are nondeterministic, stochastic, and often interminable. The chapter also gives a rough outline to the steps of solving such "wicked problems", but I think that it is much more straightforward to adopt the 5 steps of the scientific method, and run them in an infinite loop until we get results that approximate an end goal that we iteratively refine according to new knowledge/priorities.

Linsey Hansen - 10/17/2010 18:49:19

Dilemmas in a General Theory of Planning

In Dilemmas in a General Theory of Planning, the authors discuss how many modern professionals are now subject to popular attacks, even though their professions are far more refined than they once were. The authors attribute this to the rise of “wicked problems,” or problems with no definite solution, where defining the problem can be the same as solving it. Because there are no definite answers, there can be a variety of solutions, ranging from good to bad depending on those they affect, but there is never a perfect answer, or really even a way of knowing how close you are to one.

While this article mostly addresses planning by professionals, the notion of “wicked problems” can be related to pretty much any sort of user interface design that has been or will ever be created. A user interface is something with no definite answer: if you want to make something such as a painting program, deciding what sort of tools to include, where the buttons should go, and how the interaction should take place has an infinite number of answers ranging from terrible to really good. The challenge is to find the design that will please the largest user base, which also includes defining a user base who would want to use the interface in a specific manner. There will never be a solution that pleases everyone, and many users will still nit-pick small details, believing that they could have done a much better job (and in some cases this is true, since changing one little thing would be useful to a very large majority of users).

One thing that I do not fully agree with in the article is its claim that all wicked problem solutions are “one-shot” operations that do not allow for trial and error. I believe that creating UIs allows for some trial and error through prototyping and user testing. Of course it is impossible to make something perfect the first time, but there are definitely ways for planners to test their solutions without suffering too terribly or spending too much.

Getting the Right Design and the Design Right: Testing Many is Better than One

In this paper, the authors describe the results of an experiment to test user feedback when the user is given one vs. multiple designs. Their results showed that users tend to be more critical when presented with multiple ideas, though they are not more creative.

This is incredibly useful when coming up with design ideas because, as this study showed, if you were to present a user with just one design, chances are that the user will be less critical of it, since they do not wish to displease the tester. If an experimenter were to present a user with multiple designs, though, the user would feel less bad about hurting the experimenter's feelings, because multiple designs imply less attachment on the designer's part. While this might not exactly help with getting the user to suggest something completely new, it would definitely help with deciding what the user does and does not like about a given design (the paper suggests that users would pick “favorite” parts of each design compared to the others presented). Based on this, even if a designer were to have one primary idea that was completely new and fresh, they could just slap together a few more generic designs that complete the same task, then show these to users to get a more honest opinion of the newer idea.

Anand Kulkarni - 10/17/2010 18:52:34


The authors argue that social planning problems are difficult to approach scientifically and quantitatively.

The authors present a distinction between the "tame" problems that are typically addressed by science and the "wicked" problems tackled in social contexts, and argue that the latter are essentially difficult to address with the techniques of science. I like that the authors identify how pervasive attempts to remedy the latter class of problems with techniques from the former are. I particularly liked the authors' contention that reducing a social problem to a mathematical or formal model is equivalent to attempting to solve it in a particular way. However, many of the qualities the authors conclude make social planning problems unsolvable are in fact the very challenges that have been studied in the sciences -- ESPECIALLY in artificial intelligence, where the difficulty of these real-world problems (and of AI planning) becomes a central challenge -- in the past forty years! It would be fantastic to see a response to this paper written in light of modern techniques, especially by a roboticist.

The argument is structured as a series of numbered, well-reasoned points of discussion. These are strong points, made stronger by the fact that they are typically drawn in direct analogy from mathematical language; i.e., that "there is no stopping rule" for social problems. I like the argument that erroneous scientific attempts in social planning have real-world consequences that cannot be ignored; this is something that casual attempts to remedy social dilemmas with scientific methodology often discount. I also appreciate the argument that planners have no implicit right to be incorrect, as they do in science. Again, I do feel quite strongly that the science of the past 40 years, particularly in robotics, is attempting to quantify and come to terms with precisely these arguments, and I would have liked to see the authors consider how possible advances in scientific technique and theory would generate new approaches to their points of view.

Right design

The authors examine the practice of usability and design testing by showing a single design and a set of 3 designs to a user, and find that users are less likely to critique a single design, and also that user testing is not an effective way to get suggestions for improvements in a design.

I like the fundamental contribution here, which is an examination of the practice of usability testing with a direct conclusion that users should not be relied upon to produce useful suggestions for how to improve a design. I also appreciate the other core contribution, which is that users should always be presented with multiple designs in order to get honest feedback. This is a fundamental activity in HCI research, and it is critical that this practice, and any psychological biases involved in evaluation methodologies, be well understood. I like the fact that the authors identified several possible factors explaining user performance. I wish they had better considered the psychological literature in attempting to explain the phenomenon, rather than largely the design literature.

The evaluation tracked various statements made by the users during their design critiques, assigned each to a category (positive or negative, suggestion or not, substantial or not), and assessed whether these counts differed across conditions. I like this evaluation technique; the fact that the authors have done it means that subsequent user design tests won't need to reinvent this breakdown. However, it would have been interesting to see more than 3 -- for example, what about presenting 2 choices? Or n choices? Can they generalize? I also like that the authors argue that their technique suggests specific conclusions about how to get the right design. I wish the authors had made their case more strongly; their results seem to have a strong impact on the entire foundations of design evaluation as it has been practiced in the past, and they don't seem to argue the importance of this point.

Shaon Barman - 10/17/2010 18:52:47

Getting the Right Design and the Design Right: Testing Many Is Better Than One

In this paper, the effects of choice on the design of a user study are explored. In particular, the authors explored how presenting multiple prototypes in a user study affected the quality and number of responses provided by the participants. Additionally, they explored the effects on idea generation.

I liked this paper's methodology for isolating the effects of multiple prototypes. Most of the results are not too surprising, but designing a rigorous experiment helps to verify this intuition. For most experiments dealing with people, the design of the experiment can drastically affect the results. Showing multiple prototypes signals that there is room for change in the current design, and that criticism will be accepted. It can also push participants to think beyond the designs presented to them, and look at why exactly a particular feature is good or bad. It was also interesting to see that having multiple designs did not get users to give more suggestions. The use of prototyping and user studies is important to reduce wasted time and to find valuable ideas. But it seems like most people do not understand why they like or dislike a particular idea. When designing an experiment, it's important to design it so the user does not have to analyze themselves; instead, the data can be extracted in a hidden manner.

One thing I would have liked changed is the distribution of users. All of the users were college students. It would have been interesting to compare how different groups, such as professional designers or random participants found in a mall, were affected by these changes. College students, especially those in engineering disciplines, tend to be more critical, and this could have affected the results.

Dilemmas in a General Theory of Planning

The authors discuss why "wicked problems" (those usually found in social sciences) are much more difficult than tame problems, and how those difficulties are inherent to the problem space.

This paper was a bit confusing. The first section deals with a "revolt" against professionals which I have not seen or experienced. The second part deals with planning and the goals of all of this planning. The authors conclude that one of the challenges facing Americans is defining problems. In particular, for most social problems, defining the problem is as difficult as solving the problem. They state that engineering problems, in contrast, are tame problems because they can be defined, run multiple times, etc. While at a very superficial level this might be correct, this view misses some of the complexities of engineering problems. First, many engineering problems occur only once, and their solutions can be quite costly and time-consuming, so the solution needs to be well thought out and backed with experiments. Engineering problems differ from most social problems in that there is a background of information which can be used, mostly consisting of solutions to similar problems, or equations. Also, defining engineering problems can be just as difficult. For example, when debugging code, is a fix something which causes the program to continue running, or something which solves the underlying bugs? The decisions engineers make seem to have many of the qualities of "wicked" problems. Most of the points about wicked problems are not applicable to basic science, but can be used to describe engineering problems. The one aspect of "wicked" problems which I found is not shared by scientific problems is the human factor, and how society is constantly changing in unpredictable ways.

Thomas Schluchter - 10/17/2010 18:59:43

Getting the Design Right...

The paper presents a study on the difference in response in usability studies when multiple designs are presented vs. a single design being tested. The most significant finding is an increase of negative comments on the least favored design when subjects see alternatives.

Interesting twist on usability engineering. It emphasizes that all phases of an iterative design process are, in essence, exploratory. Because the potential solution space is unknown before the design process begins, covering a breadth of concepts in testing is more likely to generate valid results.

This article adds to the literature that is concerned with the psychology of testing things, a field that needs further development. Other studies have shown for example that different levels of fidelity in prototypes affect responses as well as people's understanding of the general application space that they are confronted with. Given that testing is such an important part of the design process, there needs to be more actionable research on best practices. Even when the truism "A little testing is better than none at all" holds, improving the effectiveness of tests could probably boost the productivity of design teams dramatically.

Dilemmas in a General Theory of Planning

The authors develop a framework for understanding why problem-solving in policy and in planning for public works is inherently more difficult than in science and engineering. They introduce a class of problems that they call 'wicked', which are at once inaccessible to first-principles-based analytical solving strategies and also don't allow for synthetic experimentation in order to solve them.

The text reads like a report from the dawn of ages of design as it developed self-consciousness. Much of what the authors say about planning is directly applicable to design as a planning process, if possibly on a different scale. Designers of human-facing computer systems encounter messy environments that cannot be adequately controlled. They need to find an entry point into the concept generation process by analyzing these messy environments without a guarantee that the complexity they've observed is adequately captured in their models. Consequently, so many designs fail (at least initially) at addressing the perceived issues.

The real value of this paper is that it makes us aware of the fact that designers don't even have a guarantee that those issues are the right issues to tackle. Precisely because it is so difficult to even describe a problem in the social space, the success of design processes is hard to predict. Prototyping can be a very valuable epistemic activity in the design space because it embeds an early-stage attempt to solve a problem in the messy environment that it's geared towards. Seeing how it fares there is very instructive as to whether the design project is asking the right questions at all.

This is the point where design and planning processes differ in scale, as I mentioned. As the authors point out, "prototyping" a freeway is not feasible, whereas varying a word processing interface is. If we take this lesson seriously, we must conclude that Design (big-D design as a discipline) would be well advised to reflect not only on its practices but also on its aspirations. In recent years, the attitude that Design will save the world has been spreading like a disease. It is the same fantasy that the technocratic systems analysts had in the 60s: it is not the methodology that is the problem; the problem is the problem.

Brandon Liu - 10/17/2010 18:59:52

‘Dilemmas in a General Theory of Planning’

This is a great paper that characterizes ‘wicked problems’. The authors draw from city planning to describe ten features of problems that have been approached with ‘scientific’ methods which fail to yield any solutions.

The relevance of this paper to HCI is obvious, since HCI as a discipline sits alongside other computer science topics that tackle tame problems. The degree to which each of the aspects applies differs. HCI problems are generally of the form ‘what is most usable for humans?’, while sometimes incorporating some aspects of efficiency (see KLM, Fitts's law, etc.). The most applicable aspect is that the solutions are ‘good or bad’ rather than true or false.

In general, HCI gets around problems 5, 6, and 7, since HCI and design can usually rely on trial-and-error testing, and the set of potential solutions is limited by the nature of human-computer interfaces (visual displays, mechanical input devices). Also, problem strategies are often transferable between goals and devices.

The allusions to ‘public welfare’ in this paper, and the ambiguity of the term, reminded me of the discussion of whether A/B testing results in optimal design solutions. While A/B tests can optimize for concrete metrics such as conversions or click-through rates, the ultimate ‘goal’ of a design is often more complex than that metric.
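
As a concrete illustration of the 'concrete metrics' point above, here is a minimal sketch (hypothetical traffic numbers, stdlib only; the function name `two_proportion_z` is my own) of the pooled two-proportion z-test commonly used to decide whether an A/B difference in conversion rate is statistically meaningful:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled z statistic comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: 10% vs 13% conversion over 1000 visitors each.
# |z| > 1.96 corresponds to significance at the 0.05 level (two-sided).
z = two_proportion_z(100, 1000, 130, 1000)
```

The catch, as noted above, is that a significant z only tells you which variant wins on that one metric; it says nothing about whether the metric itself captures the design's real goal.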

In the context of design tools, the design of a tool is vulnerable to much more than the design of a web site, since we rarely have any goal as specific as a number. We could create text editors for writers that optimize for the # of words written, and probably get useful data as well as an explanation using cognitive effects, but these would be dodging around the central ill-defined problem of what design/creativity is.

‘Getting the Right Design and the Design Right’

The paper hypothesizes that having users evaluate multiple low-fi prototypes at once influences their subjective feedback. The authors describe an experiment where the responses (comments, suggestions) of participants were recorded and coded as positive/negative, substantial/superficial, or novel/unoriginal.

The experiment found that showing multiple prototypes resulted in the same amount of positive comments, but more negative comments, and did not result in more suggestions. While reading the paper, I intuitively thought that seeing multiple designs would give the participant a standard of comparison for generating more suggestions, so this result was surprising to me.

The main takeaway from the paper was the use of multiple prototypes to reduce the ‘positive feedback’ bias where participants feel obligated to praise a design.

Overall, this was an interesting paper since it showed how subjective responses were affected systematically by an experiment’s design. An interesting sub-point touched upon was how the ‘overall rating’ of a user interface is correlated with the individual questions. Although the paper found that the individual results were correlated with the overall rating, I would be interested to see if an experimental design could affect or break this correlation.

Richard Shin - 10/17/2010 19:01:00

Dilemmas in a General Theory of Planning

This paper describes the difficulties of planning in the social sciences, compared to other fields. The authors describe how such problems are "wicked", in that there exists no definitive formulation of them (and, in fact, finding a solution from a formulation of these problems is quite easy); it is impossible to tell when they have been solved; assessing solutions to the problem is difficult; it is impossible to reversibly try possible solutions; possible solutions are not enumerable; problems are unique; all such problems are caused by other problems; causes for the problems can be described in many ways; and the planner has no right to be wrong.

This paper seemed a good description of the difficulties faced when planning solutions to problems in society. I generally sympathized with many of the anecdotes given throughout the paper to support its arguments. The applicability of "wicked problems" to HCI, however, seems a bit tenuous, in that HCI does not attempt to solve policy problems. It's much easier to define what a problem in HCI exactly is, for example ("created an interface to accomplish [...]"), measure solutions, experiment with various designs, and even enumerate possible solutions. I suppose lessons from solving more difficult problems might still be applicable, though.

In some parts, it seemed to me that the paper was being too gloomy about the difficulty of "wicked problems". For example, the authors lament that every implemented solution to a problem would have long-term consequences. Nevertheless, in many cases, I think it should be possible to draw sufficient conclusions from testing with only a small, random subset, or running simulations about what the effects of a solution might be.

Getting the Right Design and the Design Right: Testing Many Is Better Than One

As compactly encapsulated in the title, this paper argues that, when evaluating designs in user testing, testing many designs at once leads to better feedback than testing only one design. According to the paper's results, when testing one design, the participants in a user study tend to respond more positively about the design and refrain from providing negative, even if constructive, comments about the design. To test their hypothesis, the authors had study participants evaluate three paper-prototype interfaces for a climate control system, where some participants only saw one interface and others saw all three. The authors recorded any comments that the participants made, whether formally or while using the system, categorized the feedback, and compared the type of feedback received in each situation.

This paper tells those of us who need to create a user interface design to always try different alternatives and then reduce the choices to one, especially if the designs are undergoing a user study, although I imagine that having multiple interfaces available would have similar effects on our self-criticism of them. Of course, the paper doesn't tell us exactly how to design an interface, but rather just a pitfall to avoid when doing so; I imagine that the conclusions drawn could improve the quality of new systems, in a very indirect way. Also, perhaps the overall question of what factors influence the quality of user studies could be further investigated.

Overall, it seemed to me that the paper had a strong basis for what it was arguing. I can't think of a particular way to improve the paper; the experiment seemed fairly exhaustive, and the analysis thorough.

Pablo Paredes - 10/17/2010 19:17:26

Summary for Rittel, H. and Webber, M. – Dilemmas in a General Theory of Planning

The overall planning dilemma is based on the realization that professionalism has brought benefits with its practice, but also unfulfilled expectations. There is a conflict between goal formulation, problem definition, and equity issues, which deepens the dilemma of professionals and their need for planning to provide a path to some accruable notion of progress.

The authors define “wicked” problems as those that do not have a simple answer and whose decision process does not necessarily follow a “scientific” logic. The authors argue that setting a goal to solve a problem, i.e. defining what is “expected” to happen, is indeed the main problem. Finding the problem is the problem (this is the wicked situation).

A traditional planning approach (whose very existence the authors question) is utopian to say the least. The authors characterize wicked problems through a series of propositions that explain their complexity: 1) no definitive formulation exists, 2) there is no stopping rule, 3) solutions are not true or false, but good or bad, 4) there is no test for a solution, 5) there is no trial-and-error (only one chance), 6) there is no enumerable set of solutions or operations, 7) every problem is essentially unique, 8) every problem is a symptom of another problem, 9) the explanation chosen for a discrepancy determines the nature of the problem (therefore numerous explanations exist), 10) the planner has no right to be wrong.

I can relate very much to this paper, as I have been working for the past 2 years developing new markets for wireless broadband in an unregulated environment. The problem (not having enough internet penetration) is of a wicked nature: it can be approached from several angles, has different “solutions”, its evaluation can be seen as positive or not over time (depending on the agenda of the agent involved), it has great variability depending on the political agenda, there are several interested parties but no clear leadership or experience (as it is a new technology), there is no room for errors, and implementation is complex and demands very high compromises and resources. I can testify that the level of political involvement in this type of problem is enormous, and even gathering a small ecosystem of interested parties to agree on a single approach is not easy, nor enough. Therefore, planning for these complex systems requires much more than syllogisms and black boxes; it demands involvement and the ability to take specific actions that provide tangible progress, and to correct the course of planned actions as necessary over time without losing the final vision. Planning in this context is more a profession and a life experience than a skill. Although not mentioned in the paper, the role of leadership in planning and carrying forward projects to confront (wicked and non-wicked) problems is fundamental to both problem formulation and project execution.

Summary for Tohidi, M, Buxton, W., Baecker, R., Sellen, A. – Getting the Right Design and the Design Right: Testing Many is Better Than One.

The paper overall describes the importance of approaching design as a family of solutions responding to a family of problems to be solved. Having several solutions is healthy, as it allows the designers to be less personally attached to their creations and to let the solutions compete among themselves. This is also well received by the users, who feel free to argue openly against or in favor of a solution, and can even request the complete dismissal of a solution that is not well received. This approach is clearly the way industrial designers work, and the HCI community needs to learn from that perspective. Due to resource limitations and the complexity of research, low-cost usability testing techniques can help. However, it is necessary to keep in mind that context needs to be incorporated in the overall design process, and therefore the need to prototype at higher fidelity cannot be overruled.

An additional interesting conclusion from the analysis is the realization that feedback from users allows the designers to identify the problem set better, rather than singling out a solution. I believe the notion of problem definition is the centerpiece of the overall approach of getting the right design (rather than the design right), because only by receiving good information about a family of solutions can the researcher/designer go back and revise whether the problem definition was adequate. Without the possibility of eliminating one solution in front of another, the question of whether the problem was clearly defined may never be asked, and the Abilene Paradox that could have formed inside the design group (even when expert judges are incorporated) is never broken. After all, the only true judges of a design are its users, provided that they have many solutions to which they can give frank and merciless feedback.

In relation to both papers, I believe that an author who is doing a great job of revealing the human tendency to find solutions to the wrong problems is USC professor Ian Mitroff. In his book Dirty Rotten Strategies, he describes how organizations define complex and elaborate plans to find (complex and costly) solutions to the “wrong” problems by not facing the real issues up front, thereby entering the unavoidable GIGO (Garbage In - Garbage Out) cycle and waiting for a paper rejection, or strong criticism if the paper is published.

Kenzan boo - 10/17/2010 19:37:29

Getting the Right Design and the Design Right: Testing Many Is Better Than One

In this article the authors suggest that in a user study, providing 3 alternatives rather than just 1 design elicits more constructive criticism from the users. They ran a between-subjects study where different groups saw either one of three prototypes, or all three prototypes, for a home climate control system. The experiment was evaluated on the ease-of-use/design ratings of the prototypes, the users' willingness to criticize, and the number and type of suggestions for change from the users. The tests show that the average score for each design was higher when seen individually than in a set of three, implying that users are biased to give a better rating to a lone design. The number of criticisms also increased, which leads to more constructive feedback. However, contrary to the original prediction, there was no significant difference in the number of suggestions for change. It did lead to more suggestions to wipe the slate clean and start over when a bad design was placed in front of them.
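
The "average score higher when seen individually" comparison is exactly the kind of result a simple two-sample test can check. A minimal sketch, using Welch's t statistic and invented 7-point ratings rather than the paper's actual data (the function name `welch_t` is my own):

```python
import math

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples (unequal variances)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)  # sample variance
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical 7-point ratings of the same design under two conditions:
solo  = [6, 6, 5, 7, 6, 5]   # rated in isolation
group = [4, 5, 3, 4, 5, 4]   # rated alongside two alternatives

t = welch_t(solo, group)  # a large positive t favors the "solo" condition
```

A between-subjects design like the one described here needs an independent-samples test of this kind, since different participants produced the two sets of ratings.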

Dilemmas in a General Theory of Planning

The article is on the general theory of planning. I fully agree with the experiment on getting the right design: with three or more alternatives, users would be much more likely to suggest starting anew or totally eliminating one. If there was only one, the user would feel obligated to continue working with it, since that is what they are being studied on. When I was being interviewed about a user interface by a startup, they only provided one interface for me to look at. I would say when something was bad, but I would give them suggestions on how to improve based on other examples I've seen online; in essence, I was leading myself to other alternatives in order to make suggestions. With web site organization this is easy, but with a home climate system it may be difficult for users to come up with alternative examples on their own, so providing them alternatives would hugely help.

Dan Lynch - 10/17/2010 20:28:59

Getting the Right Design and the Design Right

This article presents an interesting perspective on usability testing and user interface development in general. The fact is, people are reluctant to give you negative feedback when you present them with only one interface; showing more than one frees them to criticize.

There were three hypotheses: 1) designs are rated lower when users are presented with more designs, 2) users exposed to more designs feel less pressure to be positive, and 3) users exposed to more designs will provide more suggestions.

After testing and collecting data, the group found support for these hypotheses at a very low probability under the null hypotheses. Across the board, multiple designs yielded more comments, whether positive or negative.

This is an important topic for many reasons. The first is ensuring that when developing a product you have the optimal user interface, without blowing your budget. Paper prototyping is the technique discussed in the introduction. It is a key topic, as it is the cheapest way to get feedback during the early stages of the design process: consider having coded 10,000 lines and then doing a usability test only to find that no one can use your system. Use low-fidelity prototypes first!

Dilemmas in a General Theory of Planning

This paper begins with the idea that no social problem can be codified, and that any scientific attempt to solve a "social problem" is bound to fail. One of the first topics covered is goal-finding, which is supposed to be a pillar upholding the general theory of planning. This concept of goal-finding is also said to be difficult to manage or control — in the authors' words, obstinate. Problem definition is another major topic, which the authors likewise argue is intractable.

To summarize, the authors define each topic and then characterize it in such a way that, when the idea is applied to some social aspect or human context, the idea itself becomes uncharacterizable, or intractable. Hence, using this method to solve social problems is impossible.

This is important because, as engineers and computer scientists, we need to characterize and solve problems day in and day out. Thus, understanding the nature of problem characterization itself plays an important role in developing optimal solutions.

Airi Lampinen - 10/17/2010 20:36:32

Rittel and Webber's article "Dilemmas in a General Theory of Planning" felt initially like a somewhat puzzling reading for an HCI course. However, as they go deeper into discussing wicked problems, the connections to the issues engineers, designers, and social scientists struggle with in HCI become more evident. The article discusses the challenges of policy planning in societies where there is a plurality of politics and objectives and where it thus becomes impossible (or, to be more optimistic, a hard and compromise-requiring task) to pursue unitary aims.

The authors review ten characteristics of wicked problems: there are no definitive formulations of wicked problems, there are no stopping rules in solving them, solutions are not true-or-false but good-or-bad, there are no immediate or ultimate tests of solutions, and so on. Maybe the most interesting characteristic mentioned, from my point of view, is that, as Yoda would say, "there is no try": trial and error is not an option, since every attempted solution has real and wide-reaching consequences.

The paper does a good job of outlining what wicked problems are like, how they differ from tame problems (the ones that have traditionally been the target of science), and why it is problematic to mistake wicked problems for tame ones and try to treat them with ill-matched methods. A systemic approach is stressed: wicked problems are wicked exactly because they are not straightforwardly defined but tightly interwoven with a larger context through complex interconnections.

However, I couldn't help being slightly disappointed with the conclusions: "We have neither a theory that can locate societal goodness, nor one that might dispel wickedness, nor one that might resolve the problems of equity that rising pluralism is provoking. We are inclined to think that these theoretic dilemmas may be the most wicked conditions that confront us." Of course, it is not in the nature of wicked problems that one could propose clear solutions, but I would still have hoped for some guidelines on how to proceed when investigating wicked problems.

The second reading, Tohidi, Buxton, Baecker & Sellen's "Getting the Right Design and the Design Right: Testing Many Is Better Than One", was a much simpler read. The authors describe a study of the effects of offering one vs. multiple prototypes to users when conducting usability tests. They were curious both about how to "get the right design", that is, finding the most appropriate approach to a design problem, and about how to "get the design right", that is, refining the selected solution and solving its problems.

The paper shows that users criticized designs more when they had multiple solutions to talk about, and that this setting also produced some comments rejecting specific designs outright. Another finding was that users were better at identifying problems than at proposing solutions to them. Overall, the comparative setting was deemed superior. The authors claim that it allows users to criticize more freely, as they do not need to feel that they are in conflict with the designers' views. I think an added benefit is that working longer with multiple designs also keeps the design and research team itself more open to change and alternatives — something that is already supposed to be part of the basic practice of most designers, but that is not always leveraged in the field of HCI.