Evaluation Methods and Techniques

From CS260 Fall 2011

Bjoern's Slides

media:cs260-11-eval1.pdf

Extra Materials

A self-study course:

Some useful books:

Discussant's Materials

The presentation by Suryaveer: File:Evaluation Methods and Techniques.pdf

Reading Responses

Steve Rubin - 10/8/2011 17:35:17

The first paper presented 49 heuristics for creating hypotheses, and the second gave an overview of considerations to take into account while designing and conducting experiments. Between the two papers, we get a vertical process that can guide us from creating a hypothesis to designing a study to appropriately assess it. Both of these papers were targeted at studies in psychology, but, fortunately, most of the techniques apply to studies in HCI.

McGuire's "Creative Hypothesis Generating in Psychology" is a bit conceited in tone--he is, after all, explaining how to be creative--but still useful. His heuristics will probably be useful to me in the future if I'm trying to get out of a research slump and I'm not sure what I should be looking for in my project. While his methods would undoubtedly produce some creative ideas, this paper makes no differentiation between creative ideas and plausible (dare I say, good) ideas. We would all benefit from thinking outside the box, but in the case of HCI, we should make sure to check ourselves at every step along the way. Otherwise, we may end up spending a long time working on projects that, while creative, don't make much sense ("LightSpace" comes to mind). Don't get me wrong: I'm all for creativity, but we need to make sure research is, in fact, research.

The second paper was a solid reference, but many of the points seemed over-explained and obvious. My high school statistics class covered many of the paper's points. The most important point in this paper is that researchers should try to make observations in as many ways as possible so that any conclusions made are not strongly tied to the strengths and weaknesses of a particular research strategy. This is easy to say, but in practice this is a lofty goal.


Laura Devendorf - 10/9/2011 10:03:28

McGuire's paper discusses and classifies 49 strategies to aid researchers in generating hypotheses and claims that each of the techniques can be learned in order to increase effectiveness. McGrath's paper discusses a number of methods, evaluations, and considerations to keep in mind when collecting data to support a research question.

The author notes that he does not have conclusive evidence that this behavior can be learned or that it could actually help generate creative hypotheses, and for this reason I'm not fully sold on the claims made in the paper. While it presents a number of interesting techniques, I would have liked more depth on a few examples rather than a quick explanation of all 49. In Ramesh Raskar's talk a few weeks ago, he presented a similar slide that likened research ideas to mathematical variables. He said, if you have an idea X and you're looking for what to do next, look at !X (not X, the opposite claim), X -> Y (X mapped into a different domain), X^d (X taken to another dimension), X+Y (the combination of X and another idea Y), and so on. These terse suggestions get at what the article explains more elaborately. It almost suggests that you could construct a grammar for research hypotheses and just fill in the blanks with hot topics. Would the random approach be just as effective?

I found McGrath's paper helpful and will most likely refer back to it in the future for its varied techniques. The moral of the paper seems to be that your method matters, nothing's perfect, and no research exists in a vacuum. While the point seems somewhat obvious, I think this is good to keep in mind, as it stresses the importance of reviewing other relevant papers in the field in order to assess and support your own idea. I have often felt that background research in HCI has been a way to review existing ideas to ensure that your idea is new. Perhaps this is why he makes this statement: "Such differences in interpretation are also frequent in the study of human-computer interaction. It often seems that the protagonists for particular systems find more virtues and fewer limitations in those systems than do other researchers." If HCI research is focused on innovation and novelty, I can see why it is often the case that the researcher maintains that their technique is superior. If HCI were solely focused on building a standardized body of knowledge about how people interact with computers, maybe this wouldn't be the case.


Hong Wu - 10/9/2011 21:10:17

Main Idea:

“Methodology Matters” talked about research methods and the limitations we may face when using them. “Creative Hypothesis” proposed five different ways to generate hypotheses.

Details:

“Methodology Matters” divides methods into four categories. However, all methods have limitations, and results largely depend on the methods we use. Thus, any result has its own limitations. Moreover, it is impossible to fulfill all desirable features in one set of results, so there must be tradeoffs and dilemmas. Each set of results must be interpreted in relation to other evidence on the same questions.

“Creative Hypothesis” argued that proposing hypotheses, rather than testing them, is the key to research. The author proposed five categories, including observation, simple and complex analysis, interpretation, and collecting new data. A hypothesis can be proposed by an outsider or a professional in the field, depending on its category. The paper provides a way to think about how to do research and generate new ideas.

Both papers focus on the research itself. However, they neglect the subjects who do the research, which is people. I feel motivation is very important in research. People can show different efficiency even when they have the same ability and the same facilities. Another issue is the relationship between research and the real world. Research must have a purpose. This is critically important when we think about proposing hypotheses.


Valkyrie Savage - 10/9/2011 22:34:36

Main idea:

These papers worked together to convey that research is hard, but there are strategies for getting it right. One can mentally stretch to light on a creative hypothesis. Many types of experiments can be combined to get good data.

Reactions:

Aside from the fact that I was a bit unpleasantly surprised by its length, I mostly liked the creative hypothesis paper. The ways in which things were introduced seemed reasonable (though I can't say I enjoyed his rote repetition of "Due to space constraints..."; as the paper was already 31 pages I can't imagine that someone was complaining to make it shorter), though sadly, due to space constraints, not many examples were given. As the author mentioned, several methods of genesis shared a common spirit and ought to've been smashed into one. I enjoyed that he didn't seem afraid of being a little wacky, particularly as he alluded to the spectrum of altered states of consciousness. I feel it is important to acknowledge that there isn't one right way of doing things, and that in fact testing out other options might lead to better work.

The second paper, dealing with methodology, was not especially interesting, nor did I feel like I gained any net new information from reading it: it seemed more like a re-hash of knowledge I've gained over the years, particularly during the psychology classes I took as an undergraduate. The application of methodological information to HCI is obvious, but mostly what I gained from this reading was that this information was once more at the forefront of my mind.

As to taking the two readings together, they make a nice cycle. We can create ideas and test them by combining the two. I'm pleased that both authors acknowledge that research is hard, and that it is simply by keeping at it and keeping a fresh mind to new possibilities that we can continue to learn.

I vaguely wish we had read these papers earlier. The methodological one especially would have re-surfaced potential criticisms of experimental papers that might have led to better discussions and reading responses earlier in the term. Then again, the hypothesis generation paper will lead us nicely into our final projects.


Viraj Kulkarni - 10/9/2011 22:44:17

The first paper, 'CREATIVE HYPOTHESIS GENERATING IN PSYCHOLOGY: Some Useful Heuristics', is about developing a hypothesis. The author argues that although there has been plenty of study in the field of testing hypotheses, there has been almost no study in the field of generating hypotheses. This, he believes, is not because its importance is not recognized but rather because it is difficult to teach or describe this process to others. To overcome this is the chief objective of the paper. What follows in the paper is a long list of techniques which you can use to generate hypotheses. Some of the techniques are pretty obvious and straightforward while others come across as clever and well thought out.

The second paper, 'METHODOLOGY MATTERS: DOING RESEARCH IN THE BEHAVIORAL and SOCIAL SCIENCES', is about strategies and methods of doing effective research. It speaks about what research is and how it should be carried out and, more importantly, it also speaks about pitfalls and limitations of using a particular method to conduct research. Each method has its own inherent flaws and limitations. While conducting research, it is important to choose the strategy in a way that leads to the correct combination and balance of the strengths and flaws of the methods employed. The paper lists and classifies several research strategies according to metrics such as how obtrusive or abstract the method is. It also talks about combining research strategies so as to conduct research in the best possible way.

Amanda Ren - 10/9/2011 23:52:35

In the McGuire paper, the author talks about his heuristics for generating creative hypotheses.

This paper is important because, as it points out, most papers go into more detail about designing experiments to prove hypotheses than about hypothesis generation. I feel that the heuristic of "extrapolating from similar problems already solved" under the category focusing on natural occurrences is important because it brings the creativity of other fields to your own (as mentioned in class with crowdsourcing and economics). A lot of the other heuristics also mention using what is already there, like taking the converse of a common hypothesis. This ties in with the whole category of reinterpreting past research. Some of the heuristics offered by the paper are fairly trivial - like the suggested training of showing people graphs and having them point out outliers. The paper is good in that the author points out potential flaws with a heuristic, such as the one about applying a current enthusiasm, which may encourage the researcher to follow current trends.

The McGrath paper covers the methodological domain of the research process - the instruments, techniques, and procedures used to gather and analyze information.

This paper is important because it describes different methods used by researchers. The paper mentions the four quadrants of research studies for gathering information - field, experimental, respondent, and theoretical. A main point stressed is that by using a combination of methods, the flaws of individual methods can be balanced by the strengths of others. You can also use multiple methods to further validate your hypothesis. Although the paper is targeted towards the social sciences, it can be relevant to the HCI field. The paper also discusses strategies of obtaining and allocating a sample group for our methods, and validating our results, which are partly determined by the method used.


Derrick Coetzee - 10/10/2011 1:07:26

Today's readings focused on conducting research in the social sciences, the first work on generating hypotheses for testing and the second an overview of concepts in social research and tactics for conducting experiments.

"Creative Hypothesis Generation," a 1997 work by McGuire of Yale, gives advice on how to formulate scientific hypotheses for testing via a list of 49 teachable heuristics. Examples focus on psychology, but many of the ideas apply to science in general - and I found it interesting to consider examples of how the heuristics may be applied in computer science while reading. Even techniques that on the surface seem unrelatable, like evolutionary functionalism, may have some analogy in computer science, e.g. in speculating that certain architecture features became dominant due to their reception by the market.

Case studies are an example of a valuable technique that among computer science disciplines is primarily limited to software engineering, but it would be instructive to consider how to apply it to other subfields: for example, in the area of programming languages, one may investigate how a language like C++ developed over time by studying the debates and rationales of its committee and the motivations of stakeholders.

Although the work endeavored to organize the various heuristics, the conclusion notes the substantial overlap between heuristics and how some cover larger areas than others. Yet the areas don't seem to have enough in common structurally for the proposed "grid" organization. A more fundamental treatment, where various techniques are built up out of simpler axiomatic ones, could be instructive. I would also like to see applicability of the heuristics discussed (what areas might this apply to? how hard is it to learn and apply? how commonly used is it in practice?)

"Methodology Matters", a 1995 work by McGrath, took a much broader view of social research, giving an overview of concepts such as content/ideas/procedures, research strategies (field observation, experiment, theoretical analysis), and gave tactics for empirical studies in particular. It emphasizes in particular tradeoffs between methods.

The discussion of random allocation procedures raises the interesting point that, when many independent factors are considered, it is highly likely that some factors will be imbalanced between test groups, even if the groups are large. If relevant, the outcome may be affected. This is part of why I'm reluctant to embrace McGuire's heuristic of looking for "serendipitous" correlations in experimental results, since without a follow-up study one may draw improper conclusions from a correlation that occurred by mere chance.
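This point about random allocation can be sketched with a small simulation (a hypothetical illustration; the function name, parameters, and threshold are my own, not from either paper): draw two equally sized groups of participants, each with some number of independent binary covariates, and count how often at least one covariate ends up noticeably imbalanced between the groups.

```python
import random

def imbalance_rate(n_factors, group_size=50, threshold=0.2, trials=500, seed=0):
    """Estimate the probability that random assignment leaves at least one
    binary covariate imbalanced between two groups, i.e. the groups'
    proportions for that covariate differ by at least `threshold`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        imbalanced = False
        for _ in range(n_factors):
            # Each participant has the covariate with probability 0.5;
            # the two groups are drawn independently, mimicking random
            # allocation of a pool of participants.
            a = sum(rng.random() < 0.5 for _ in range(group_size)) / group_size
            b = sum(rng.random() < 0.5 for _ in range(group_size)) / group_size
            if abs(a - b) >= threshold:
                imbalanced = True
                break
        hits += imbalanced
    return hits / trials
```

With 50 participants per group, a single binary factor rarely differs by 0.2 or more between groups, but across 20 independent factors some imbalance of that size becomes much more likely - which is why a "serendipitous" correlation in one study deserves a follow-up before any conclusion is drawn.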

The mechanism of trace measures is interesting because of how widely accessible it is becoming in the digital age: for example, public online communities leave extensive records of all their actions that can be analyzed in detail. The issue of "dross" is less daunting in the presence of techniques like data mining. But the problem of a loose connection to the variables under study often remains; for example inferring user motivation from forum posts would be complex even for a human observer.


Alex Chung - 10/10/2011 1:29:32

Creative Hypothesis Generating In Psychology: Some Useful Heuristics

Summary: Generating hypotheses and theories need not be an art. This paper introduces some methodical strategies and categorizes them so that readers can pick what is applicable to their needs.

Positive: Instead of dwelling on topics that are specific to psychology, McGuire covers a wide range of topics and gives examples from many different areas of study. I found the method of observing behavior for insight apt for human-computer interaction study because HCI designers must understand users’ motives and natural behavior. Furthermore, a collection of diverse views and experiences can lead to a more thought-out hypothesis.

Positive: McGuire has listed many different approaches, but I particularly like the ideas of “juxtaposing opposite problems to reciprocal solutions” and “extrapolating from similar problems already solved”. Both strategies involve borrowing ideas from one discipline and applying them to another - arbitrage. HCI is a multi-disciplinary field with clients from all sorts of industries. Like social computing, HCI can learn and borrow many theories and methods from established disciplines instead of re-inventing the wheel.

Negative: The author mentioned “technical determinism” when discussing Max Planck’s uncommon method of working from the conclusion to the theory that supports it. I, on the other hand, believe that technology is driven by its users and shaped by society.

Negative: Is nature so simple and straightforward that simpler is better? Or perhaps a simple theory is popular because it is easier for people to consume. A more general explanation covers more ground, but it depends on the goal and situation.

Negative: The author did not provide enough strategies on narrowing down the number of hypotheses. Often, I come up with a list of ideas but do not know which direction to focus my effort.

Methodology Matters: Doing Research in the Behavioral and Social Sciences

Summary: The processes of research are very similar across all disciplines. This paper discusses various study designs, comparison techniques, validation methods, experiment designs, data collection, and data analysis. The author also goes over the strength and weaknesses of each method.

Positive: The strategy circumplex in Figure 2 is an excellent summary of research processes, and I agree with the author that there is a trade-off when choosing to emphasize one set of questions over the others. However, it is not a zero-sum game: researchers with enough resources can revisit another quadrant of strategies at a different time. In terms of HCI research, the lesson is to narrow the scope of each experiment in order to factor out random variables and to focus on one set of questions. Otherwise, someone could come out of the experiment without answering any question.

Positive: There is no perfect method for each study. All methods are valuable but all have weaknesses or limitations. This is probably why our HW assignments covered many different topics and areas. Therefore, the best-case scenario is being able to combine different methods in a limited number of experiments and the methods complement each other’s weaknesses. I can see some parallel to biological experiments where scientists spend more time designing the experiment than performing it.

Negative: How applicable are social science research methods in the area of HCI? Aside from the flight simulation example, there aren’t enough examples about HCI research. The intellectual framework is not novel, and the generalization of research questions down to baserates, correlations, and differences seems trivial.

Negative: While I understand that the truth is close to the consensus, the explanation of the random sampling method is confusing. How should the research method change when the number of participants is small or large?


Yin-Chia Yeh - 10/10/2011 1:39:14

The two papers today are somewhat relevant to Bjoern’s slide of the design process but instead focus on research. The creative hypothesis paper aims at the early stage - generating many ideas - and the methodology matters paper aims at the later stage - narrowing down and consolidating. The creative hypothesis paper is about strategies to come up with new creative hypotheses. It is amazing that the author can come up with 49 heuristics for generating new hypotheses. This reminds me of an article about how to come up with new research ideas by Ramesh Raskar at MIT. I think both articles’ goal is to teach people how to come up with new ideas creatively. While some people might think that creativity can't be taught, the author argues that it can, and he has training materials for it. I am curious whether he managed to show that creativity can be taught, or whether only creativity in a certain area of expertise (psychology in this case) can be taught, or to what degree creativity can be taught before the rest is talent.

The methodology matters paper introduces methodologies in social and behavioral science. The paper argues that any single research method has its advantages and flaws. Researchers should combine different methods so that they strengthen and verify the credibility of each other. However, it is impossible to maximize all possible features of research in one study, so researchers should carefully make tradeoffs. Unlike the creative hypothesis paper, which aims to help people find ideas, this paper aims to help people narrow down and consolidate ideas. I think it is also very important, since sometimes we have too many ideas but end up trying things without focus, which is not productive at all.

Shiry Ginosar - 10/10/2011 1:43:35

These two papers aim to teach the novice social scientist a lesson or two in conducting research. While McGrath discusses social research at large, McGuire provides a list of methods that may be used by a researcher in order to generate hypotheses.

I found these readings to be interesting, albeit wordy, for a couple of different reasons. First, the attention to detail and attempt to frame discussions of "soft science" research always leaves me awestruck and inspired. Coming from an engineering background, I mostly tend to focus on the computationally significant or easily proven parts of my research, assuming that common sense and minimal familiarity with social science methodologies will do the rest. Rarely do I take the time to properly contemplate the "softer sides" of my research beyond what is obvious. This is partly due to personal comfort zone and a tendency to shy away from wordiness. And yet, I am always impressed by those who have the patience to delve into the abstract and try to make order in it.

Second, it was especially interesting to read these papers right after Kuhn's discussion of scientific revolutions. This week's papers would definitely be categorized by Kuhn as guides to conducting cumulative research. This is most evident in McGrath's statement that "It is only by accumulating evidence, over studies done so that they involve different - complementary - methodological strengths and weaknesses, that we can begin to consider the evidence as credible, as probably true, as a body of empirically-based knowledge." What better way is there to define cumulative research than that?


Donghyuk Jung - 10/10/2011 2:54:12

  • Creative Hypothesis Generating in Psychology: Some Useful Heuristics

In this paper, the author presents a comprehensive analysis of heuristics for generating hypotheses. He points out the imbalance in research methodology courses: while researchers focus almost entirely on hypothesis-testing issues such as ‘measurement’, ‘experimental design’, ‘manipulating and controlling variables’, and ‘statistical analysis’, they neglect hypothesis-generating issues that are at least as important.

The author describes a total of 49 hypothesis-generating techniques, organized into five different categories, each with sub-categories. The first category includes nine observational heuristics that simply require sensitivity to provocative natural occurrences. Categories II and III call for going beyond observational sensitivity by also requiring conceptual analysis, either by direct inference, such as accounting for the contrary of a banal hypothesis (Category II), or by more complicated mediated inference, such as using a thought-diversifying structure (Category III). The final two categories, IV and V, go beyond a priori conceptual analysis by requiring some wrestling with empirical data, either by retrospectively examining past studies, such as by decomposing a complex obtained relation into multiple simpler components (Category IV), or by prospectively reanalyzing old data or collecting new data, such as by content-analyzing participants’ open-ended responses to obtain new insights (Category V).

Even though the author made a great attempt to list techniques for creative hypothesis generating, he did not explain everything to readers point by point. I think the 49 heuristics can be a good guide for hypothesis generation in psychology, but they are too complex for his training procedures to transfer to other fields.

  • Methodology Matters: Doing Research in The Behavioral and Social Sciences

The author presents some of the tools (domains and levels of concepts in behavioral and social science research) used by researchers in the social and behavioral sciences. In particular, the article points out some of the inherent limits and strengths of various features of the research process: “The meaning of that knowledge, and the confidence we can have in it, both are contingent on the methods by which it was obtained. All methods used to gather and to analyze evidence offer both opportunities not available with other methods, and limitations inherent in the use of those particular methods.” The major point of this article is that all methods have both strengths and limitations. Therefore, it is impossible to maximize all advantages with any single method (e.g., self-reports, observations, trace measures, archival records).


Hanzhong (Ayden) Ye - 10/10/2011 4:56:24

McGuire’s paper provides a long list of heuristics which can be used to generate creative hypotheses in psychology. Although I am not at all familiar with concepts in psychology, I found this list of methods useful in fields other than psychology as well. The organization of the methods shows an increasing demand for conceptual analysis and for reference to previous work. Although some of the methods listed overlap with each other, there are still a lot of creative proposals given. However, one weakness of this paper is that most of the methods discussed are given only superficial descriptions, rather than sound definitions and clear explanations of how they work.

The second paper addresses methodology used in research in the behavioral and social sciences. It first divides the research process into three different domains: the substantive domain, the conceptual domain, and the methodological domain. Delving deep into the features of the methodological domain, it draws some conclusions. It is an intrinsic property of methods that they both enable and limit evidence, and it is useful to know that although all methods have weaknesses and limitations, we can still offset those shortcomings by using multiple methods. It is also very useful to know that most research strategies can be classified into a quadrant field consisting of experimental strategies, respondent strategies, field strategies, and theoretical strategies. In addition, there are strengths and weaknesses to different measures as well, which should be taken into consideration when one is chosen.


Suryaveer Singh Lodha - 10/10/2011 5:57:15

Methodology Matters:

The author discusses in detail the merits and demerits of different types of experiment setups, methodologies of measurement, choosing and allocating participants for an experiment, and validation of results. There is a common thread which runs throughout the chapter, which emphasizes the fact that every method has its own limitations. Since HCI is a behavioral science, it's important to keep in mind the points from the paper while designing experiments. By achieving similar results from two different methods/approaches one can validate the result and be more confident about the conclusions. On the contrary, conflicting results might put the scientist in a tricky spot and raise questions/concerns about the methodology. In both cases the information/learning is useful and definitely helps in arriving at a better decision.

Creative Hypothesis Generation: The author does a very good job at structuring and categorizing the hypothesis-generating techniques. The reading was very qualitative in nature and exposed me to finer nuances of creative thinking, moreover to the question - how do I start creative thinking? The categories are very well explained with detailed examples. One classification which stood out for me was: content-analyzing participants' open-ended responses to obtain new insights. I like the idea of open-ended questions as a way to get access to the thoughts and point of view of the participant, but at the same time I think it would be tricky to categorize such responses from various participants. Also, I'm not too sure if such approaches are scalable! Overall, I think the paper helped me learn how to think about problems and structure better hypotheses.


Ali Sinan Koksal - 10/10/2011 8:13:05

McGuire points to the lack of focus on generating hypotheses in psychology research, while so much effort has been spent on testing them. He proceeds to present forty-nine heuristics for generating hypotheses, grouped into five increasingly complex categories.

This work reminds me of a couple of our past readings. Similarly to generating design models, conceiving an extensive framework of heuristics for devising hypotheses can help structure research directions. Furthermore, in our last reading, Kuhn described the role of paradigms in research during periods of "normal science". This listing of heuristics can be seen as part of those periods, in which researchers learn about these heuristics during their education and are expected to base their research on them. There might therefore be a risk of not going in directions that can truly help progress, if we strictly follow the products of currently established paradigms.

In the second paper, Joseph McGrath describes the three constituents of research as content, concepts, and methods, and gives a detailed account of methodology and its effect on research. The key idea is that each methodology has its own limits, and to go beyond these limits, multiple methods should be combined, and the results obtained using different methods should converge. None of the methodologies in the four quadrants described by McGrath allows one to maximize generalizability, precision, and realism at the same time.

I think that this paper provides important insight for work that involves running user studies in HCI. Researchers should not abide by a single method for gathering results, as these results may be strongly influenced by the method that is used. Techniques for combining the use of different methods for assessing the validity of concepts are valuable tools.


Rohan Nagesh - 10/10/2011 8:23:32

The first paper, "Creative Hypothesis Generating in Psychology" discusses heuristics to generate hypotheses and the author's intent to educate readers that generating hypotheses also needs to be studied in addition to testing hypotheses. The second paper "Methodology Matters: Doing Research in the Behavioral and Social Sciences" discusses the various methods employed by researchers to manipulate and measure variables.

I found it interesting that the first author so adamantly believes that more of an emphasis is placed in educating how to test hypotheses as opposed to how to generate hypotheses. While I absolutely agree that a good balance of both is needed, I feel that from my own educational experience there has been more of an emphasis on generating hypotheses--brainstorming, being creative, and documenting ideas. I have done very little testing over my years.

That being said, if I take the author's premise as given, I do agree with the elements of his framework. Although it's too long to remember all the details, the framework provides a good set of tools for researchers to bank on when they're stuck generating hypotheses. I did, however, feel like many of the 49 heuristics were quite matter-of-fact.

With regards to the second paper, I absolutely agree with the author that methodology matters in research. The ultimate message the author gets across in my opinion is that research is a continual process and a challenge to the community to accumulate a body of evidence to support findings. I liked the author's depiction of 4 quadrants of strategies--field, experimental, respondent, and theoretical. I think this is a great framework and a great way to structure any methodology analysis in research.


Allie - 10/10/2011 8:31:17

In "Creative Hypothesis Generation," McGuire introduces 49 research heuristics. Some require no special training or formal analysis, relying solely on observation of natural experience. For example, B3 attempts to obtain insight into human situations by recalling how and why one behaved on past occasions in similar situations. Category D tries to obtain purposeful and programmatic observations in order to gain insights. A9 is where psychologists leave their observation equipment on in public places in order to observe variables of interest in their natural environment. Category E covers thought experiments that require mental manipulation of the relational component of the hypothesis.

McGuire introduces the idea of reversing one's attention when thinking tends to revolve around a conventional pattern due to habituation, limited knowledge, emotional blocks, or cognitive styles that make it difficult to generate a wide range of ideas when the hypothesis is initially approached. To that end, G20 deliberately reverses one's accustomed way of thinking. For example, if one habitually explains behavior by causes such as peer pressure, mass media, physiological defects, or lack of purpose, this heuristic deliberately upends those assumed causes.

Other heuristics include H26, where the hypothetico-deductive method generates a set of axioms covering the domain of concern in order to derive new, unforeseen theorems. J32, by contrast, embraces an existing theory and continues to derive testable implications from it.

The latter part of the paper covers heuristics that require professional training and background. This encompasses subcategories K and L, e.g., L39, where the researcher integrates a heterogeneous set of studies and interprets them creatively so that the whole set is more meaningful than the sum of the individual studies.

In "Methodology Matters: Doing Research in the Behavioral and Social Sciences," McGrath defines research as involving:

a) some content that is of interest

b) some ideas that give meaning to the content

c) some techniques or procedures by means of which those ideas and contents can be studied

In exploring research, he defines three domains:

a) the substantive domain, from which we draw contents that seem worthy of our study and attention: phenomena and patterns of phenomena

b) the conceptual domain, from which we draw ideas that seem likely to give meaning to our results. Relations in the conceptual domain refer to any of a variety of ways in which two or more elements can be connected.

c) the methodological domain, from which we draw techniques that seem useful in conducting that research. Modes of treatment are the different ways in which a researcher can deal with a particular feature of the human systems being studied, including techniques for manipulating some feature of a research situation.

McGrath contends all research methods are bounded opportunities to gain knowledge about some set of phenomena in some substantive domain.

a) methods enable but also limit evidence

b) all methods are valuable, but all have weaknesses or limitations

c) you can offset the different weaknesses of various methods by using multiple methods

d) you can choose such multiple methods so that they have patterned diversity, so the strengths of some methods offset the weaknesses of others

He asserts that the fundamental principle in behavioral and social science is that credible empirical knowledge requires consistency or convergence of evidence across studies based on different methods. Research evidence can be judged on three criteria:

a. generalizability of the evidence over the population of actors

b. precision of measurement of the behaviors that are being studied

c. realism of the situation or context within which the evidence is gathered, in relation to the contexts to which you want your evidence to apply

To that end, although you would like to maximize criteria a, b, and c simultaneously, you cannot do so.

He goes on to cover 4 quadrants:

Quadrant I: the field strategies (field study/field experiment). The researcher sets out to make direct observations of "natural," ongoing systems, while intruding on and disturbing those systems as little as possible.

Quadrant II: the experimental strategies. The investigator deliberately makes up a situation or behavior-setting context, defines the rules for its operation, then induces some individuals or groups to enter the system and engage in the behaviors called for by its rules and circumstances.

Quadrant III: the respondent strategies. The investigator tries to obtain evidence that will permit him to estimate the distribution of some variables, and/or some relationships among them, within a specified population.

Quadrant IV: the theoretical strategies. Formal theory is a strategy that does not involve the gathering of any empirical observations; computer simulation is an example. Like experimental simulation in Quadrant II, it attempts to model some particular real-world system, but no new behavior transpires during a run of the simulation; rather, the simulation shows the logical status of predictions from the theory the researcher built into the model.

Depending on the researcher's discipline, he may interpret results according to biases in his knowledge domain.

As for randomization and true experiments: since a random allocation procedure does not guarantee an equal distribution of the potential extraneous factors among the conditions being compared, reasoning from a true experiment involves inductive rather than deductive logic; it yields probability, not certainty.
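McGrath's caveat about randomization lends itself to a quick demonstration. The sketch below is my own illustration in Python, not from the paper; the function name and the choice of a binary "prior experience" trait are assumptions made for the example. It simulates random allocation of subjects to two conditions and measures how unevenly an extraneous factor can end up split between them.

```python
import random

def assign_and_measure_imbalance(n_subjects=20, seed=None):
    """Randomly allocate subjects to two conditions and report how
    unevenly an extraneous binary factor (e.g., prior experience)
    ends up distributed between them."""
    rng = random.Random(seed)
    # Each subject carries the extraneous trait with probability 0.5.
    traits = [rng.random() < 0.5 for _ in range(n_subjects)]
    # Random allocation: shuffle subject indices, first half -> condition A.
    order = list(range(n_subjects))
    rng.shuffle(order)
    group_a = order[:n_subjects // 2]
    group_b = order[n_subjects // 2:]
    rate_a = sum(traits[i] for i in group_a) / len(group_a)
    rate_b = sum(traits[i] for i in group_b) / len(group_b)
    return abs(rate_a - rate_b)

# Repeat the "experiment" many times; random allocation is often imbalanced.
imbalances = [assign_and_measure_imbalance(seed=s) for s in range(1000)]
print("worst-case imbalance:", max(imbalances))
print("runs with >20% imbalance:", sum(i > 0.2 for i in imbalances))
```

Running this over many simulated experiments typically turns up allocations in which the extraneous trait is noticeably unbalanced between conditions, which is exactly why inference from a true experiment is probabilistic rather than certain.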

I liked the McGrath paper more than the McGuire paper; McGuire's was all-encompassing in its research heuristics, while McGrath went more in-depth on the methods he introduced.


Vinson Chuong - 10/10/2011 8:46:59

McGuire's "Creative Hypothesis Generating In Psychology" offers a set of what he calls heuristics for directing research by generating hypotheses and questions about observed phenomena. McGrath's "Methodology Matters" offers a framework for choosing and comparing various methods for evaluating such hypotheses and questions.

McGrath breaks down the process of "doing research" into three domains: drawing "contents that seem worthy of our study and attention", drawing "ideas that seem likely to give meaning to our results", and drawing "techniques that seem useful in conducting that research". McGuire's paper focuses on the first domain, while McGrath's paper focuses on the third one (and gives a brief discussion of the second).

McGuire details and categorizes 49 common ways--heuristics--in which psychologists direct their research by generating hypotheses to study. He contends that these heuristics can be reliably taught and should be, observing that typically, most of the focus is given to teaching methods for testing hypotheses. His categorization is an important step towards fully describing the processes and relations within and between the three domains.

McGrath offers a detailed model for choosing, comparing, and classifying different methods for evaluating hypotheses. In his model, methods are compared along the axes of generalizability, precision, and realism. Those methods are classified into experimental, field, theoretical, and respondent strategies (and further into subcategories). Within this model, he discusses the strengths and weaknesses of the different types of strategies and contends that the validity of the results drawn with the strategies is heavily dependent on those strengths and weaknesses. Hence, his model provides a way of finding a strategy or combination of strategies to produce the types of results desired.

After reading both of these papers, I realized how general these heuristics and strategies are in that they can be applied in many different areas. For example, consider design formulation and iteration. One observes a problem needing to be solved (category I in McGuire's paper), formulates a design that addresses it, and evaluates that design (with user testing or by applying a formal model of human performance). I wonder how the various permutations of McGuire's heuristics and McGrath's strategies can combine to change the design process--or any other seemingly related process.


Manas Mittal - 10/10/2011 8:49:07

The paper "Creative Hypothesis Generation in Psychology" describes a set of heuristics to help generate new ideas and hypotheses. The attempt is to put a framework around how to come up with new questions and ideas. The paper is a somewhat straightforward attempt to document already-known mechanisms that we use for hypothesis generation. The challenge, I think, lies in the creative application of these (and other) ideas and heuristics. I'd like to add two other heuristics to the list: think cross-culturally (how would this be done in African, Chinese, or Indian settings?), and don't think in dumbbells--we often think in pros and cons, but we should consider non-dumbbell alternatives too. This reminds me of the (Stanford) d.school way of idea generation: go broad, then focus. That too is a tool for creative problem-solving, and I've used it informally on a number of occasions.


Paper 2 (McGrath) was more interesting; it formalizes social science research as three domains: substantive (content/what), conceptual (ideas/why), and methodological (techniques/how). The paper focuses on the methodological domain (how). I like the description of research methods as "bounded opportunities to gain knowledge."

In research papers, I often find a gap between what the authors claim their study says and what I really think it says. For example, without a longitudinal (long-term) study of a technique in HCI, it's hard to say anything meaningful about its usefulness. Take the case of our bubble cursor: the cursor will likely appear more or less useful to users at first, but their beliefs will change over time, and yet research papers make broad claims based on short one-hour studies. Another example is the changed scrolling direction in the new Apple OS: initial experiments are almost always negative, but long-term user experiments might be more promising. Generalizability, precision, and realism are stated as the three axes along which to maximize a study's results. Different techniques and mechanisms offer different levels of G, P, and R, and using multiple complementary techniques is essential to span the space.


Apoorva Sachdev - 10/10/2011 8:53:29

Reading Response 10/10/11

This week’s readings focused on research methods and described some of the pitfalls encountered while formulating research hypotheses and conducting research. The article by William J. McGuire describes how to create a hypothesis and think outside the box by keeping in mind the 49 heuristics he describes. Joseph E. McGrath, on the other hand, concentrates on the methodologies used to conduct research and on how each method has its own flaws and should be complemented with another method.

I found the “Creative hypothesis generating in psychology” paper very interesting, though I wish the author had given more examples of how to teach students the particular skills/heuristics. He provides pointers that can train or force you to think outside the box. I liked the example he gave about heroin: although fairly obvious, I feel researchers sometimes get stuck on a certain idea and so enclosed in their own perspective that they forget to look at the issue from various viewpoints. This paper reminds me of IDEO method cards (http://www.ideo.com/), which have a similar purpose of helping people brainstorm outside the box. The only issue was that some of the heuristics seemed too close to each other, and I wasn’t sure they really warranted being counted separately.

The second paper focused on research methods and highlighted the fact that no single research method is flawless; to ensure reliability, one must use more than one method so that their flaws offset each other. He stresses that the three main criteria on which research evidence should excel are generalizability, precision, and realism, and that there always exists a trade-off between them. He compares field research (observing) and experimental research (set-up) against questionnaires and theoretical research. I thought the author provided a pretty thorough analysis of how the results from each study should be interpreted to see whether the original issue behind the study is even being resolved. Overall, it was a good article to read.


Peggy Chi - 10/10/2011 8:56:08

HCI is sometimes not considered "science" by certain groups of people because of its evaluation methodology: unlike other research disciplines, whose results can be validated by clear procedures, HCI research involves humans in the loop, which makes experiments and studies difficult to conduct. The good news is that the papers we read this week bring insights from psychology and the behavioral and social sciences. McGuire introduces a long list of heuristics that support hypothesis generation beyond hypothesis testing; McGrath gives an overview of the research process and research evidence.

From these papers I learned a lot about methodologies and concepts such as experimental control, field studies, etc. However, I'm still not too clear on how to choose among the different heuristics or strategies, or how exactly to evaluate. I'd love to know more about how current technology could support these studies, such as video coding, online questionnaires, and crowdsourcing.


Jason Toy - 10/10/2011 9:02:10

Creative Hypothesis Generation

In this paper, William McGuire provides a list of 49 heuristics to help generate hypotheses, arguing that there is too strong a focus on hypothesis-testing.

"Creative Hypothesis Generation" provides a new framework for thinking about and generating hypotheses. The paper provides a formal way of analyzing the data of an experiment, which could be useful for many of the authors of the experimental papers we have read, such as the papercraft or Design Galleries systems. This could reduce the time spent exploring avenues of research or hypotheses that would not prove fruitful. But at the same time, it narrows the experimenter's view to be more restricted to his or her paradigm, as described in "The Structure of Scientific Revolutions." For example, heuristics B3 and B4, which depend on the experimenter's own beliefs, might be greatly tied to what he or she has been taught and what is generally accepted by members of his or her scientific community.

McGuire's problem is well motivated: the hypotheses of an experiment have a large effect on what is tested and what results an experimenter can get, and having a framework to think about this would be beneficial for scientists. However, his paper has several weaknesses. Its structure lays out a number of heuristics, enumerating one per sub-category and then grouping the rest into an "other" category. All of the heuristics in a sub-category are related, yet the main idea common to them is lost, because a reader might focus on the first idea of each subgroup and disregard the "other" ideas, which feel crammed in. Each sub-group of heuristics should have been organized around a general idea, with variations as subpoints, allowing the main ideas to stand out. In addition, while it is plausible to infer McGuire's argument for his paper, he does not make that argument in the paper itself: what the weakness of focusing only on hypothesis testing is, and how his heuristics can help, is left to the reader to imagine. In what cases is it better to use one heuristic over another, or multiple heuristics in conjunction?

Methodology Matters: Doing Research in the behavioral and social sciences

"Methodology Matters" is about the concepts and techniques to go about doing research and generating evidence. The paper goes on to discuss the subjects, ideas, and techniques of experiments and the strengths and weaknesses of specific scientific techniques.

This paper provides a new framework for choosing research methods to collect data. Arguing that each method has its flaws--for example, experimentation, which can create a narrow and artificial problem--it proposes that multiple methods be used to offset the flaws and compound the strengths of the individual techniques. This theory touches on an idea from "Dilemmas in a General Theory of Planning," in that experimentation can create narrow problems that are far from the real world, defined just so that scientists are able to solve them. It also relates to "The Structure of Scientific Revolutions," in that validation is greatly tied to the paradigm of a field of science: in order to validate, you must do so by the methods accepted by your peers.

The paper does a good job analyzing the state of experimentation. I especially like the part on randomization, where the author describes how what you have is not a random sample but a sample that you have generated randomly. This acknowledges the inherent difficulty of creating an experiment that accurately represents the population, a problem we only know how to solve by generating enough samples and letting statistics take over. What I did not like about the paper was that, given the three desirable features an experimenter would like to maximize--precision, generalizability, and realism--it did not attempt to analyze their use cases. Are there cases where we would want one at the expense of the others, and when is it worth it? Maybe it is impossible or irrelevant to have realism, say, in the first stages of a drug test, because it is impossible to test in the real world outside the laboratory.