Usability Testing

From CS160 Spring 2014


Readings

Ziran Shang - 3/8/2014 0:15:20

It is not always the case that the best experiments manipulate only the independent variables and hold all other variables constant. Very tightly controlled experiments are likely to suffer from threats to external validity, meaning that the results do not generalize well. Whenever a variable is turned into a control variable, the applicable results of the experiment become limited to the particular value at which that variable was fixed. This can bias the results: if one controlled factor is fixed at an extreme value while the other factors are balanced, that single skewed variable can hurt the accuracy of the whole experiment.


Ian Birnam - 3/8/2014 17:31:48

This claim is not necessarily true in many cases. If you wanted to run an experiment for a very focused user group, or for a specific use case, then the method the claim describes would work, since everything would pertain to a narrow scope. However, if you wanted to make more general claims based on your experiment, this would not be the best approach.

As the reading discussed, you need to introduce some form of randomness; otherwise the experiment is too contrived. You wouldn't be able to give a clear answer to the Air Force general's question, because your results would rely on too many things being held constant.


Tien Chang - 3/8/2014 22:52:27

The best experiments are not always those where only the independent variables are manipulated and all other variables are controlled. While this may create a stronger correlation between the independent variable and the dependent variable, the results are less generally applicable given the unique set of circumstances. Variables such as natural disasters and genetic and environmental conditions cannot be manipulated. Also, it is impossible to force "cooperative attitudes, attentional states, metabolic rates, and many other situational factors" on human subjects, as indicated on page 28. In such cases, this process of controlling variables would be detrimental in an experiment.

There are also numerous threats to an experiment in which only the independent variable is manipulated and all other variables are held constant, such as history, maturation, selection of participants in groups, differential mortality, testing, statistical regression, and interactions with selection. All of these could affect an experiment, no matter how carefully the other variables were held constant.


Gregory Quan - 3/9/2014 17:04:25

The purpose of manipulating only the independent variables and holding all other variables constant in an experiment is to try to ensure that changes to the dependent variable were caused by the independent variable. By controlling as many variables as possible, we can make sure that those variables do not influence the outcome of the experiment. For example, in the reaction time experiment discussed in the reading, variables such as ambient light, the age and gender of participants, and the time of day of the experiment can all be controlled. If these variables were not controlled, they might affect the experimental results. For example, if all of the test subjects happened to be between the ages of 18 and 21, they might have faster reaction times than older populations.

Controlling as many variables as possible is not always desirable because it makes the results less generalizable. For example, if we controlled the reaction time experiment for gender and only tested males, we would not be able to say anything about the reaction time of females because we did not test any females. We can try to have the best of both worlds by randomly selecting test subjects, and if our sample is large enough, we can usually assume it is representative of the entire population.
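
As a rough illustration of the random selection Gregory describes, here is a minimal Python sketch; the participant pool and its demographics are hypothetical, not taken from the reading.

    import random

    random.seed(0)
    # Hypothetical pool of potential participants: (age, gender) pairs.
    population = [(random.randint(18, 70), random.choice(["M", "F"]))
                  for _ in range(10000)]

    # Randomly sample 100 test subjects instead of hand-picking 18-21-year-olds.
    sample = random.sample(population, k=100)

    mean_age = sum(age for age, _ in sample) / len(sample)
    share_female = sum(1 for _, gender in sample if gender == "F") / len(sample)
    print(f"sample mean age: {mean_age:.1f}, share female: {share_female:.2f}")

With a large enough random sample, the sample statistics should track the population's, which is what lets results such as reaction-time measurements generalize beyond one narrow demographic.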


Shana Hu - 3/9/2014 12:51:42

If you manipulate a few independent variables but hold all other variables constant, you end up with an incredibly specific experiment that cannot be generalized across broader contexts. For instance, Martin uses the example of answering General Nosedive with an experiment on a 19-year-old college sophomore with an IQ of 115 in an air-conditioned, 10-foot-by-15-foot room with no distractions and a warning signal. It quickly becomes obvious that holding all other variables constant, or controlling them, is unrealistic and not very useful. Most experiments are looking for results that explain broad situations, so although holding variables constant can answer a particularly specific query, you generally do not want to control all of your variables.


Jay Kong - 3/9/2014 19:12:29

Experiments where only independent variables are manipulated are not necessarily always the best experiments. Controlling all other variables in an experiment creates a unique set of circumstances. In the case where all variables are controlled while the independent variable is manipulated, the resulting causal relationship is said to lack external validity. This means that the relationship is not very generalizable, potentially making the experiment not very useful in a real-world scenario. For example, suppose experimenters test their product on 19-year-old male college sophomores and it proves successful. They cannot then generalize their product as a universal success, because it was tested under a very unique set of circumstances.


Myra Haqqi - 3/9/2014 19:42:09

The claim that the best experiments are those where only the independent variables are manipulated and all other variables are held constant is false, because there are situations in which holding every variable other than the independent variables constant destroys the ability to generalize the experimental findings. For example, suppose there is an experiment in which participants eat different foods to see how the general world population would react to the tastes of the foods. If the experiment uses only participants who are 12-year-old males from China, thereby holding age, gender, country of residence, and all other conditions constant, and only manipulates the independent variables (such as the types of food the participants must eat), then the findings will apply only to 12-year-old males from China and not to all people in the world. Thus, holding all variables constant is detrimental to producing accurate results that can be generalized to broad situations rather than to one specific set of conditions.

No, this is not always the case, because if all other variables are held constant, then the experiment is conducted under a unique set of circumstances. The problem arises when we lose the ability to generalize our findings to general situations, because the experiment was conducted under such controlled conditions that the results apply only to those specific circumstances. An experiment with all variables held constant other than the independent variable has low external validity, meaning it is not very justifiable to use the experimental findings to draw conclusions about cause, since the findings cannot be generalized across broad circumstances.

This process would be detrimental in an experiment if one endeavors to apply the results of the experiment to all general cases. If one wants to generalize the results of an experiment but controls all variables except the independent variable, then the results will not be applicable to general scenarios and will instead apply only to the specific conditions used during the experiment.

This process would also be detrimental in an experiment when a combination of multiple variables influences some factor, but each variable on its own does not cause a major effect. When there is such a combination of causes, an experiment that only considers single variables one at a time will miss it.

In order to solve this problem, one can allow certain variables to vary randomly through random assignment. Then, if circumstances are randomly selected with an equal probability of being selected, external validity will be ensured, allowing one to generalize the results of an experiment. For example, in continuation of my example above about food preferences, in order to be able to generalize the results of some experiment to the world population, researchers should randomly select members to engage as participants in the research. Instead of choosing people from the same group, researchers will randomly assign people in a manner that is representative of the entire world population, and also randomly assign them to try different foods in order to test their reactions to different tastes.
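
A minimal sketch of the random assignment Myra describes, assuming a hypothetical list of recruited participants and made-up food conditions:

    import random

    random.seed(1)
    # Hypothetical participants recruited from many demographics.
    participants = [f"participant_{i}" for i in range(60)]
    foods = ["dish_A", "dish_B", "dish_C"]  # levels of the independent variable

    random.shuffle(participants)  # random assignment avoids systematic bias

    # Deal the shuffled participants round-robin into the three food conditions.
    groups = {food: participants[i::len(foods)] for i, food in enumerate(foods)}

    for food, group in groups.items():
        print(food, "has", len(group), "participants")

Because assignment is random rather than based on age, gender, or nationality, differences between the groups' reactions can more plausibly be attributed to the food itself, and the findings are not tied to one narrow demographic.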


Zack Mayeda - 3/9/2014 22:30:02

The core idea behind this claim is valid, but it should not be followed with extreme precision. It is true that when running an experiment one wants few variables to change significantly other than the independent variables being purposefully changed. The key is determining the degree to which control variables should remain constant. If a control variable changing would significantly alter the subject's response to the independent variable change, then it should remain as constant as possible. However, this is not the case for all control variables. It would actually be detrimental to an experiment if all possible control variables remained extremely constant, because the experiment results would be relevant only to a specific set of subjects. If the claim were followed precisely, then even things like exact age, hometown, hair color, and car model would be constant between subjects. This could mean that the results of the experiment aren't generalizable to a greater set of subjects, essentially rendering the results useless in practice.


Jimmy Bao - 3/9/2014 22:47:20

The best experiments are ones where only the independent variables are manipulated and all other variables are held constant, because the experimenters need to be able to determine that the measurements of the dependent variables are truly caused by changes to the independent variables. If other variables are manipulated as well, it would be very difficult to determine which variables, under what circumstances, affect the dependent variables. However, the process of manipulating only the independent variables is detrimental in an experiment if the goal is to find the relationship and correlation between independent and dependent variables (as mentioned above) without other types of variables playing a role in the results.

It is not always the case that the best experiments are only the ones where independent variables are manipulated. For simplicity, consider an experiment where the conditions favor some participants more than others; since participants are chosen at random, this could easily happen. Although it would alter the results of the experiment in some way, if you already know a participant does better under some test conditions, you may want to manipulate those conditions so that you get the most level playing field possible. Sure, this could introduce confounding variables and other unwanted issues, but it is just an example to show that the "best experiments" are sometimes those where other variables are manipulated as well; it depends on the type of experiment.


Andrea Campos - 3/9/2014 23:14:10

This claim is made based on the argument that the more control (constant) variables you have in place, the better the experiment, because it will be easier to see that any change in the dependent variables is due to the changes in the independent variables. While this sounds appealing, it is not always so clear that the independent variables are the ones causing the change, even if one has many control variables. Sometimes changes in the dependent variable are due to factors one isn't accounting for, such as confounding variables--variables that may be changing as the independent variables are manipulated. In this case, it may be harder to tell which variables are truly causing the change. Additionally, having many control variables in place could be detrimental because they make the results of the experiment less generalizable. The results would become reflective of what would occur under the exact conditions of the experimental setting with all its constants, which are unlikely to be replicated in the real world. Thus, any conclusions drawn from the experiment may be inaccurate and unhelpful at best, and if the experiment has life-or-death implications in the real world (like the pilot example in the reading), potentially disastrous.


Bryan Sieber - 3/9/2014 23:22:57

When doing an experiment you want to be able to see how changing a specific item or feature affects an individual in a certain way. There are control variables, independent variables, dependent variables, random variables, and confounding variables. The independent variable is manipulated by the experimenter, the dependent variable is what is being studied, and the control variables are held constant. If all variables other than the independent variables are held constant, then the results of the experiment would seem legitimate to most scientists. However, it is seemingly impossible to hold every control variable constant; for example, attitude, attentional state, metabolic rate, and a multitude of situational factors cannot be controlled. Perhaps someone ate before the experiment and someone else did not, which may affect their attention or the pace at which they proceed with the experiment. So while a setup in which the independent variables are the only source of change is seemingly impossible, if done appropriately it can yield a realistic evaluation of whatever the experiment is about. However, say you are experimenting with an app to evaluate a stylistic or design choice (i.e., a button on the left side versus the right side). If only those two button positions are tested, you may find that one side is more effective at getting users to click, but you may miss the possibility of a completely different design where a button is unnecessary and generates even more use. Narrowing the scope down is sometimes good for figuring out small-scale choices, but it can trap you into a choice that still may not be the optimal one.
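
A hedged sketch of the left-versus-right button comparison Bryan mentions; the click rates and session counts below are invented purely for illustration:

    import random

    random.seed(2)

    def simulate_session(placement):
        """Hypothetical user session: returns True if the button was tapped."""
        rate = 0.32 if placement == "left" else 0.41  # made-up true click rates
        return random.random() < rate

    # Each simulated user is randomly shown the button on the left or the right.
    sessions = [(placement, simulate_session(placement))
                for placement in (random.choice(["left", "right"]) for _ in range(2000))]

    for placement in ("left", "right"):
        clicks = [clicked for p, clicked in sessions if p == placement]
        print(placement, f"click-through: {sum(clicks) / len(clicks):.2%} (n={len(clicks)})")

As Bryan points out, a comparison like this only speaks to the two placements actually tested; it says nothing about an untested design that drops the button entirely.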


Christopher Echanique - 3/9/2014 23:39:33

If all other variables are not held constant, it is difficult to conclude causation based on a correlation between two variables. By only manipulating independent variables and keeping all other variables constant, the experimenter can have more confidence in the causal relationship between the independent and dependent variables. However, this can be detrimental in experiments that control too many variables, as it poses a threat to external validity. When an experiment controls all variables while manipulating the independent variable, the relationship established by the experiment holds only for that one very specific case. It is important to limit the number of controlled variables to increase the external validity of the experiment, so that the results generalize to other applicable situations.


Nicholas Dueber - 3/9/2014 23:58:18

The idea that the best experiments are those where only the independent variables are manipulated and all other variables are held constant is often true when there is statistical evidence to be gained. An independent variable is a variable that can be manipulated by the experimenter. This allows the experimenter to measure some dependent variable through the behavior of the system. This method of experimenting is good when you can exercise control and can easily determine the dependent variables. This process would be detrimental to an experiment when you are testing the usability of an application for a wide audience. Not all audiences will want to use an application in the same way, and hence you cannot isolate variables to make them independent. It will also fail when you are trying to make improvements, because the method mandates that you reduce the dependent variables to as few as possible. This leaves little room to discover the true faults of an application and can mislead conclusions through confounding variables.


Zhiyuan Xu - 3/10/2014 0:04:23

Without an independent variable, one is not able to draw empirical conclusions about a behavior in relation to a changing event or object. Even though causation may not be explained through the experiment, it is possible to see a direct correlation between the changing independent variable and the dependent variable. However, this process may be detrimental in an experiment when there is a confounding variable, where such a variable changes when the independent variable changes. In this way, the confounding variable may be the variable related to the dependent variable, not the independent variable. This is detrimental to the experiment as incorrect correlation relationships will be drawn between the independent and dependent variables.
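
A small, purely illustrative simulation of the situation Zhiyuan describes, with all numbers made up: the dependent variable is driven entirely by a confound that changes in lockstep with the independent variable, yet a naive comparison still credits the independent variable.

    import random

    random.seed(3)
    trials = []
    for independent in (0, 1):      # e.g., low vs. high light intensity
        confound = independent      # the confound shifts whenever the manipulation does
        for _ in range(500):
            # The dependent variable depends only on the confound, plus noise.
            dependent = 300 - 40 * confound + random.gauss(0, 20)
            trials.append((independent, dependent))

    for level in (0, 1):
        values = [y for x, y in trials if x == level]
        print(f"independent = {level}: mean dependent = {sum(values) / len(values):.1f}")
    # The group means differ, but attributing the difference to the independent
    # variable would be wrong: the confound produced the entire effect.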


Sergio Macias - 3/10/2014 0:12:44

While I would usually disagree with overarching statements, I personally feel that the best experiments are the ones where only the independent variables are manipulated and the other variables are held constant, or at least within a relatively small range. I feel this way because the independent variable is what one’s hypothesis is based on, and one does not care about the other variables which come into play when testing the independent variable. The reason they are the “best” is that they are able to pinpoint the effect of the variable without having other variables distort the results. This is what you need, because otherwise one can argue that it was not your independent variable which had the most effect but instead another variable, which you did not test for or take into account. To see specific cause-and-effect relationships between certain things and be certain that only your independent variable affected the results, this is the best method.

At the same time, it is not the best in the sense that experiments in which you control every variable except the independent variable are not generalizable. The reason is that to confidently connect your results, which you gathered from a strict experiment, to a real-life case, you first have to make sure that the case and all its variables match up with the variables in your experiment – otherwise your results have no or low external validity with respect to the case in question. So to answer the question of whether this is always the case – no, definitely not. The choice between a loose or strict experiment must be made with respect to the kind of results you want – do you want generalizable results or very concrete, specific results? Based on the hypothesis, the choice might already be made for you.

Even though I have already touched on the last question, I will answer it concretely here. The process in which you hold all variables constant other than the independent variable would be detrimental to an experiment in which you wish to apply the results to a very general case. If you conduct a strict experiment, you will have results which are only true for a very specific case and cannot be transferred to a more general one.


Ryan Yu - 3/10/2014 0:24:02

This is not always the case -- as the article mentions, "we really do not wish to control all the variables in an experiment or else we would create a unique set of circumstances." If, for instance, we could control every single variable while manipulating *just* the independent variable, then the relationship and results produced by the experiment would hold only in one situation -- when all variables were set exactly "at the levels established for control."

A good rule of thumb that the article mentions is that the "more highly controlled the experiment, the less generally applicable the results." In this sense, if the experiment you are conducting has many, many different interacting variables, then you want to try and distinguish the variable that you are most interested in looking at, and get a good distribution of values for your other variables; only then can you generalize your experiment's results and make an overarching statement about the trends/tendencies of the variable that you actually want to examine.

In this regard, the article talks about random variables, in which the experimenter lets the other variables in his/her experiment vary randomly, in order to achieve this normal/even distribution. Only then can a representative sample of the experimental domain be achieved.

A good example of all of this is, for instance, measuring the tendency of individuals at concerts to do various drugs. Obviously, there are many different variables in play here -- one could ask about the type/genre of concert, the demographics of the people attending, the venue location, and more. Say that the experimenter wanted to fix the aforementioned variables and not let them vary at all -- for instance, the genre/type of concert could be restricted to Grateful Dead concerts, the demographics of people measured could be restricted to ages 18-26, and the venue location could be restricted to the Bay Area. Well, with these restrictions, the results of the experiment would *only* really be applicable within the scope of the restrictions -- more specifically, the experiment would probably show a much larger proportion of concert attendees doing drugs than is actually the case across all concerts. In this sense, the process of only manipulating independent variables and keeping all other variables constant would be detrimental to the experiment's results.


Diana Lu - 3/10/2014 0:57:26

Sangeetha Alagappan - 3/10/2014 1:17:44

An independent variable is the variable whose effect is being studied in an experiment (light intensity in the example in Martin’s writing), and it needs to have at least two levels (low and high in the example) for the experiment to be of any value. Manipulating the independent variable while holding the other variables constant is common practice, especially in scientific experiments like understanding the effect of varying light intensity on plant growth, in order to isolate the effect being studied from other interfering variables (like the amount of water supplied to the plant or the outside temperature). While this is a good practice and gives reliable results on most occasions, holding all other variables constant while changing the independent variable can prove detrimental to the experiment.

Martin explains confounding variables as extraneous variables that are statistically related to the independent variable and can therefore corrupt data by influencing the dependent variable, posing a threat to internal validity. Keeping with the example of plant growth under varying light intensity, a number of confounding variables can be identified. Take temperature: temperature is bound to be correlated with light intensity; naturally, a rise in temperature is expected with an increase in light intensity. However, temperature also has an effect on plant growth (too high a temperature might cause leaves to wilt), and it is quite difficult to ascertain whether an observed effect on plant growth is due to increased light intensity or to the rise in temperature. Experimentally, it is difficult to isolate which variable is causing the change in the dependent variable. Therefore, even though the independent variable is changed while other variables are held constant, this procedure is not sufficient to tell whether the independent variable is the sole cause of a change in the dependent variable, and thus it is not the best experimental process. Since plant growth is a natural process, it is very difficult to perform this experiment in isolation from confounding variables.

A way around this problem of confounding variables would be to extend Martin’s suggestion of careful selection, striking a balance between internal validity (precision) and external validity (generality) through randomized experimentation. In randomized experimentation, we would keep the same control variables constant while manipulating the independent variable to study its effect on the dependent variable at a variety of levels of the confounding variable. If the number of samples is large enough, we can tell whether, across all samples, the independent variable has had an effect. While this is not a perfect solution to the problem, this process might produce data better representative of the conditions and more reliable experimental results.


Nahush Bhanage - 3/10/2014 1:34:16

It is often claimed that the best experiments are those where only the independent variables are manipulated and all other variables are held constant. On the face of it, this statement sounds logical - in order to study the effects of independent variables on the dependent ones, it makes sense to control all other variables in the experiment. This would ensure that changes in the dependent variables are due only to changes in the independent variables and that no external factors are involved. However, strictly following this process in an experiment could prove detrimental. Categorizing all variables (other than independent and dependent ones) as control variables may not always work, for the following reasons:

1) The degree of control in an experiment is inversely proportional to the generality of its results. It is often not a good idea to control all other variables while manipulating the independent variables, because that would create a unique set of circumstances. In that scenario, the results might hold true only when all the variables have the exact values that were established for control. It is impossible to generalize the results in such cases, since they may not be applicable even if the value of a single variable changes. For instance, consider a product that is supposed to be used in harsh weather conditions. Conducting an experiment on such a product in an extremely controlled temperature and pressure environment would be largely useless, since the variables that are controlled in the experiment are not controllable in the real world.

2) In most cases, it is impossible to control all the variables in the experiment. This especially holds true in case of human participants. Factors related to human psychology (such as attentiveness, cooperative attitude, interest etc.) could be relevant in the context of the experiment. These factors are far beyond the experimenters' control.


Peter Wysinski - 3/10/2014 2:49:36

Experiments have results that are easy to understand when only the independent variables are manipulated and all other variables are held constant. Since, by definition, independent variables are unassociated with one another, we know exactly what changes over the course of the experiment. If we are performing a study to see what apps are popular within a certain age group of users, the users' age would be an independent variable, as it is unaffected by the interests that they have. While manipulating only independent variables and keeping the other variables constant leads to consistent results, it is not representative of the typical nature of events. It precludes us from being able to generalize the results of an experiment to other situations; as such, we want to add some variability to our variables so that we can extrapolate results from our findings. Random variables increase the external validity of an experiment and allow it to be generalized to other settings and people. Ergo, it is important to introduce some entropy when performing experiments.


Sijia Li - 3/10/2014 3:55:26

This is NOT always the case. In short, the "best" experiments in which all other variables (except the independent variables) are held constant have LOW external validity. They are only considered "the best" in a very restricted sense. In other words, they do not generalize to any other situations. "The more highly controlled the experiment, the less generally applicable the results" (page 28).

On one hand, it is certainly necessary to keep some other variables constant ("controlled") while the independent variables are manipulated. Those variables that are held constant are called "control variables" (page 27); they do not vary, since they are constant and under control. The reason why it is necessary to keep some other variables constant is that, as an experimenter, you will want to be sure that you have indeed achieved complete command of the control variables in your experiment.

However, on the other hand, not all variables should be made constant or assigned as control variables. There are two main reasons. First, it is "impossible to control all the variables" (page 28). Not only is it impossible to control many genetic and environmental conditions, but it is impossible to force cooperative attitudes, attentional states, metabolic rates, and many other situational factors on our human participants. Second, and most importantly, we really do not wish to control all the variables, because if we control all variables while manipulating the independent variable, the relationship established by the experiment will hold in only one case (when all variables are set at exactly the levels established for control). In other words, we won't be able to generalize the experimental result to any other situation.


Anju Thomas - 3/10/2014 10:20:10

Discuss the claim that the best experiments are those where only the independent variables are manipulated and all other variables held constant. Is this always the case? When would this process be detrimental in an experiment?

Often when an experiment is conducted and independent variables are manipulated, not keeping other variables constant can cause a miscorrelation in the results. For instance, when testing whether a certain drink could lead to diabetes, it is reasonable to keep other variables constant. Otherwise, the results could claim that drinking Pepsi leads to diabetes, when in reality the cause of the disease is related to the type of sample chosen for the experiment. If the sample group over-represented older females with a genetic history of diabetes who happen to drink Pepsi at times, it might be easy to conclude that Pepsi leads to diabetes. However, this causal relationship could be entirely false. Variables such as gender, age, and genetic history should be kept constant, with only the type of drink as the independent variable, to test which drink could lead to diabetes compared to others. This helps avoid false assumptions or false conclusions attributing the result of an experiment to the independent variables.

However, though it might seem reasonable to hold variables constant, it might not be the right choice when performing experiments. The claim that the best experiments are those where only the independent variables are manipulated and all other variables are held constant can help us come up with precise experiments. However, holding every other variable constant is implausible, and often impossible. There will be environmental conditions that cannot be controlled, such as weather. Other examples include political, social, or personal circumstances, such as the amount of sleep a person receives, the situations each individual has previously faced, or even one’s social status or background.

Even if we were able to control all other variables in the experiment, it would create a less generalizable result. When something happens only under a specific set of measurements, it cannot necessarily be expected to happen with uncontrolled variables, making it less usable for other conditions. For instance, suppose we wanted to know the average time taken by someone to learn how to bike, and our experiment was a well-controlled one guaranteed to be correct only if the learner is a 21-year-old female who is 5 ft 6 in tall. Then the result of the experiment could not be used for people who do not fulfill these requirements. Our experiment would be useful for testing precise learning rates for 21-year-old females of average height, but would otherwise be useless. Thus, the greater the level of control exerted on experimental variables, the less generalizable or scalable the experiment becomes.

Due to their precise restrictions, well-controlled experiments thus affect external validity, which describes how well a relation can be generalized across people, events, and times. External validity is most at stake when the results of a small, non-randomized sample are applied to a larger group. This may lead to misconceptions, creating incorrect causal relations between the results and the objects being tested. Though external validity can be assured for some well-controlled experiments, it cannot always be guaranteed.

Since keeping all control variables the same might not be possible, an alternative solution used to combat the issue is using random variables. Unlike well-controlled experiments, experiments with random variables allow some things to vary as long as they are not biased. For instance, making sure that the sample used for testing whether Pepsi is related to diabetes is diverse, drawn from wide ranges of genders, age groups, and histories, might lead to a more reliable and generalizable result. This prevents biased results while making the findings more usable in common circumstances. It allows external validity and makes the result more generalizable rather than narrowing down the sample. Thus, in conclusion, though controlled variables can be useful for testing causal relations in specific scenarios, the results may not scale to a larger group, affecting their usability and reliability.


Lauren Speers - 3/10/2014 11:11:24

Experiments where only the independent variables are manipulated and all other variables are held constant allow researchers to declare true causal conclusions; any variation in the observed variables can only be attributed to variation in the independent variables. This feature can be useful. However, this type of experiment is impossible to conduct because there are some variables, like the weather or the subjects’ attitudes, which can influence the experiment results, but cannot be controlled. In addition, controlling more variables will likely require more money and more time.

Perhaps most importantly, a perfectly controlled experiment, where there is only variation in the independent variables, lends itself to very specific conclusions and loses external validity. Despite the effort expended to achieve the causal statements, the conclusions are likely to be useless except in a small set of rare, specific cases. Consider the example from the reading where the researchers want to determine a relationship between light intensity and reaction time. If only the light intensity (independent variable) is manipulated and all other variables are kept constant (participant age, gender, hair color, etc.), then the results of the experiment can only be applied to people who fit the age, gender, and hair color of the participants. It is unlikely that one would encounter this extremely selective group in the real world. It is, therefore, unlikely that anyone would be able to apply this experiment's findings.


Steven - 3/10/2014 10:39:01

This is not always the case, because you're simulating a set result when the other variables are held constant. You can't generalize from such an experimental setting. A detrimental example would be testing whether someone can reach a basketball net by jumping. If you restrict the participants to people taller than 6'3", then most likely all of the participants can reach the net, when in reality most people can't.


Jeffrey DeFond - 3/10/2014 11:19:33

I've spent a few years as a research assistant at a cognitive neuroscience lab, and when it comes to getting good data from a subject, there can only be a few factors that are not controlled for. Neuroscience studies involving fMRI do not use left-handed subjects (unless the study is on that particular subject) because of the roughly 3 percent chance that they are the sort of left-handers who have reversed hemispheres. The human mind is vastly complex, and if studies don't control most things and only change a few independent variables, they run the risk of extraneous factors contaminating the effects being tested for. I often think about it the same way I think about modularity in my code: if you test for too much (hardcode too much), then you don't learn anything that isn't overly specific.


Seyedshahin Ashrafzadeh - 3/10/2014 11:46:57

First of all, it is important to notice that it is impossible to control all the variables (other than the independent variables) and hold them constant. For example, we are not able to control many genetic and environmental conditions. Also, we can't force cooperative attitudes, attentional states, and metabolic rates. Secondly, if we control all the variables and manipulate only the independent variable in an experiment, we have created a unique set of circumstances. The relationship and result that we get from the experiment would then hold in only one case. As a result, we would not be able to generalize the experimental results to other situations. The more highly an experiment is controlled, the less generally applicable the results are. We call this generalizability of an experimental finding external validity: how well a causal relationship can be generalized across people, settings, and times. For example, to draw conclusions about a pilot's reaction time based on the intensity of a light, it would be very biased if we experimented on a 19-year-old college sophomore with an IQ of 115, sitting in an air-conditioned room with no distracting sound. Since we don't want to control all the variables, one possibility is to let some of them vary randomly (random variables). This helps make sure that our experiment is unbiased. However, the experimenter should eliminate or minimize confounding variables, because they distort the relationship between the independent and dependent variables. Confounding variables create low internal validity. It would be hard to conclude that an independent variable caused a change in the dependent variable if confounding variables exist in our experiment.


Aayush Dawra - 3/10/2014 11:50:48

Although the experimental method involving the manipulation of the independent variable while keeping all the other variables constant allows for a very effective way of drawing causal relationships, it is not always the case that this experimental method is the best one. Several factors can affect the internal validity of the conclusions. For instance, most experiments require a laboratory, but ironically the subject's awareness of being in a laboratory may affect his behavior while being tested, leading to inaccurate derivations of causal relationships; this is known as testing. A more subtle but more realistic shortcoming of the experimental method is the lack of control over all possible variables while carrying out the experiment, which is a massive limitation for experimental design. Yet another limitation discussed in the reading is history, the occurrence of an uncontrolled event during the experiment, which can cause the experiment to have low internal validity. Consider, for instance, the occurrence of a natural disaster between the initial and final measurements of an experiment whose goal is to measure how a natural disaster mentally affects people. The occurrence of the natural disaster would severely affect the internal validity of the experiment, and the method of manipulating the independent variable while keeping all other variables constant would be highly detrimental for drawing a causal relationship from an experiment whose internal validity is compromised.

Apart from all the described threats to internal validity, the experimental method may also be plagued by a lack of external validity, the notion that an experiment's conclusions cannot be generalized beyond the experimental setting to the world, which severely limits the utility of the experiment in the first place. In fact, it is interesting to note here that since the threats to internal validity are largely minimized by having enough control over the situation to ensure that no extraneous variables are influencing the results, the requirement that the results of an experiment be generalizable, and therefore have external validity, evidently creates a tension between internal and external validity. Experimenters often have to trade off between internal and external validity and maintain a fine balance between the two, depending on the scope of the experiment.

Keeping all these shortcomings in mind, it is evident that the experimental method described, although a largely effective method, may not be the best method to adopt in all scenarios and may prove to be detrimental to the conclusions of the experiment when the threats to internal or external validity of the experiment are dominant.


Emon Motamedi - 3/10/2014 12:18:54

The claim that the best experiments are those where only the independent variables are manipulated and all other variables are held constant exists because, by holding all other variables (the control variables) constant, the experimenter knows that any change in the dependent variable is due to a change made to the independent variable.

However, this is not always the case. This process could be detrimental to an experiment in several settings. One is when the experimenter feels he can fully and completely control every control variable. Oftentimes, there are variables that are impossible to control completely, such as those affected by genetic and environmental forces and those affected by situational factors within the participants of the experiment. By not acknowledging the reality that certain variables are uncontrollable, the experimenter may be mistaken in thinking that changes in the dependent variable are being caused solely by changes in the independent variable.

Secondly, and more importantly, perfectly controlling all the control variables produces results for the relationship between the independent and dependent variables that hold true only at the levels given to the control variables in the experiment. This does not allow the experimenter to generalize the results to other situations, making highly controlled experiments detrimental to generally applicable results. In order to achieve this general applicability, experimenters must utilize random variables.


Steven Wu - 3/10/2014 12:31:08

When experiments are conducted, the independent variables are usually the only variables that are manipulated. But this constraint only goes so far. An independent variable needs at least two levels or states; without at least two levels, you are not really running an experiment. Additionally, there are control variables that maintain the consistency of the experiment; these are usually left unaltered to control the experiment. The dependent variables are the ones observed, and they depend on the independent variables. One of the greatest difficulties is making sure that the independent variable is the only one that is manipulated in a study.

If we review the anecdote from the reading about General Nosedive, we can see that if a study is conducted with its independent variables in a very precise environment, it is going to be difficult to reproduce those exact states and atmospheres to generate the same results in the next iteration of the experiment. Instead, it is better to draw generalized results from the experiment and not control all of the variables. Another issue is the possibility that another variable becomes a confounding variable, which is extremely detrimental to the experiment. Consider the example of the differently labeled glass bottles of Coca-Cola and Pepsi, labeled Q and M respectively. Although the Coca-Cola experimenters originally hypothesized that there was only one independent variable in the experiment (the different soda in the bottle), the labeling of each bottle also had an influence on people's choice. In effect, there were two independent variables: the contents of the glasses as well as the letter labeled on each glass. Therefore, the Coca-Cola experimenters failed to test what their original hypothesis had articulated. To handle this more appropriately, they needed to cover their tracks and conduct another experiment to test whether users had chosen the M-lettered bottles simply because the Q-lettered bottles were too obscure for their tastes, something that would be terrible PR for a company as large as Coca-Cola.


Michelle Nguyen - 3/10/2014 12:51:11

The article claims that all other variables besides the independent variables should be held constant because variable differences may lead to confounding variables. However, this is not always the case, because the experimenter would not always be able to generalize from their results. For instance, say an experimenter wants to test a UI for an app they made for plumbers. They want to observe how the plumber uses the app while plumbing, and create a situation that is the exact same for each of the plumbers they experiment with, except a small change in their UI (which is the independent variable). They would not be able to generalize their results to other situations, say when a plumber must rush to do his job to meet his next appointment. The pressure may make them use the UI differently than if they were not pressed for time, and thus the results from the experiment may not hold for other situations. Since the app will be used in all possible situations, the experimenter needs to be able to generalize to all the possible differences. The differences could be the situation, environment, or even who the plumber is as a person. By allowing flexibility of other variables, the experimenter will be able to see how useful their UI feature is as a whole despite these differences. After all, the target group for an app should be specific, but not as specific as a highly controlled experiment calls for. Thus, the process is detrimental when the experimenter needs results to see an overall trend. In this case, one general experiment is better than many, very controlled experiments because the results are more accurate to how it would be in real life.


Shaina Krevat - 3/10/2014 12:55:30

The scientific method, as taught in school and used for experimentation, relies on the idea that only one variable in an experiment is ever being changed, so that whatever other changes are observed in the experiment can be attributed to the independent variable. While this works for proving a theory (such as figuring out exactly which food source makes bacteria grow), it doesn’t account for experimenting when different cases need to be considered. For example, if one is conducting tests on users’ responses to an application, it doesn’t make sense to have all variables but the independent variable controlled. Putting a user in a sterile environment with a phone and an application won’t account for how users will respond on airplanes or trains, while driving, while eating, etc. Whenever use is involved, more complicated experiments need to be done to determine how the facts observed in the experiment will translate to the real world.


Max Dougherty - 3/10/2014 13:02:36

To state that “the best experiments are those where only the independent variables are manipulated and all other variables held constant” constrains the result of the experiment. If truly all other variables are constant, any conclusion will carry the caveat that a number of other conditions must hold true. By allowing some of your control variables to act as random variables, a test can draw conclusions for a more general condition. This does, however, run the risk of having a random variable affect the result: if a random variable acts as a confounding variable, the resulting dependent variables may be influenced in an unobserved manner. The safe, broad-stroke approach is to hold all other non-independent variables constant. But this overzealous approach can make finding a testable population difficult and the results possibly inconclusive. A truly detrimental result can occur if one of the variables held constant has a large impact on the dependent variable and high variability in the relevant population. The distribution of the dependent variable could then greatly differ from a nearly identical experiment in which the disputed variable was left random.


Juan Pablo Hurtado - 3/10/2014 13:46:42

If you want to test very specific things, it could be better to hold all the other variables constant. But it could be detrimental in a more complex experiment where some variables depend on each other. For example, if you set too many control variables, you could be biasing the experiment, because some outcomes might depend on the relationship between different variables.


Charles Park - 3/10/2014 13:29:15

I think it is important that in an experiment only the independent variables are manipulated, especially since if the dependent variables are manipulated, it cannot be said with certainty that the relationship between the independent and dependent variables is genuine. Assuming the researcher can manage to find proper control variables and understand the possible impact of random variables, manipulating solely the independent variables seems to be the best way to observe a specific relation between the independent and dependent variables. This may not always be the case, however, since certain circumstances might cause the dependent variable to change without any manipulation by the researcher.

It can become detrimental in an experiment when there is a change in the dependent variable due to the threats to internal validity mentioned in the reading: history, maturation, selection, mortality, testing, and statistical regression. In an experiment where the independent variables are the only ones being manipulated, if the dependent variables change for other reasons, or the independent variables undergo circumstances that distort the relation between the dependent and independent variables, the experiment suffers.


Emily Sheng - 3/10/2014 13:53:12

Although an experiment where only the independent variables are manipulated and all other variables are held constant may produce very precise results (for we know the exact correlation between independent variable and dependent variable), these experiments are also very difficult to generalize. Because all other conditions are controlled, the conclusion can only be applied to situations where all those conditions hold. Thus, it is more desirable to have some amount of randomization, so that whatever conclusion is drawn may be applied more generally to similar situations. For example, maybe an experiment involves observing a pilot's reaction time in a flight simulation where only the independent variable is manipulated. No matter what the conclusion is, we can only generalize it to pilots of a certain height, handedness, age, average amount of sleep, etc., if we control all these variables. General applicability is thus a reason why we might not want to hold all other variables constant in an experiment.


Dalton Stout - 3/10/2014 13:58:22

This view seems to be an exaggerated misunderstanding of the function of control variables. Control variables are held constant during an experiment so they do not become confounding variables. An example of proper use of a control variable is holding the temperature constant during an experiment that tests participant reaction time at different levels of light. The reason temperature should be held constant is that a warm, comfortable room compared to a brisk, cold room may affect the dependent variable (reaction time) and become a confounding variable. That being said, not everything is meant to be a control variable. If one attempts to control each and every facet of the experiment, the participants and conditions become over-specified, leading to a lack of generality. This is called lacking external validity. For example, imagine I performed the reaction time test described above, but I used only Hispanic, male, 6'0", college-educated businessmen as my participant pool. I would find very accurate data for my specific participants, but I would not be able to say much about anyone else's average reaction time (like an overweight, white, 5'0" female). This would be detrimental to the experiment.
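
A sketch of the design Dalton describes, using hypothetical numbers: temperature is pinned as a control variable, light level is the manipulated independent variable, and participant traits are left to vary rather than being over-specified.

    import random

    random.seed(4)
    TEMPERATURE_C = 21  # control variable: the room is held at this value for every trial

    def run_trial(light_level, participant):
        """Hypothetical reaction-time model in milliseconds (illustration only)."""
        base = random.gauss(220 + 0.5 * participant["age"], 25)  # individual variation
        return base + (40 if light_level == "low" else 0)        # effect of the manipulation

    # Participants are recruited with varying ages instead of one narrow profile.
    participants = [{"id": i, "age": random.randint(18, 65)} for i in range(40)]

    print(f"room temperature held at {TEMPERATURE_C} C for all trials")
    results = {"low": [], "high": []}
    for person in participants:
        light = random.choice(["low", "high"])  # independent variable, randomly assigned
        results[light].append(run_trial(light, person))

    for light, times in results.items():
        print(f"{light} light: mean reaction time {sum(times) / len(times):.0f} ms (n={len(times)})")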


Emily Reinhold - 3/10/2014 14:03:17

If the goal of an experiment is to find a very clear, direct causal relationship between an independent variable and a dependent variable given *highly specific conditions*, then a good approach to experimenting would be to have a single independent variable that is manipulated, and to put forth great effort to ensure that all other variables are control variables (held constant). This approach by definition will produce a statistically significant causal relationship between the independent variable and the dependent variable, because any variation in the dependent variable must be caused by the manipulation of the independent variable if all other variables are held constant. Changes in the dependent variable must arise from a change in *some* variable; since the only variable that changes is the independent variable, it is obvious that the change in the dependent variable is directly caused by changes to the independent variable.

With that said, the goal of experiments is often to infer a causal relationship between the independent variable and the dependent variable for *general* conditions. If the experimenter wishes to extend the causal relationship found between the independent and dependent variable to a wide variety of circumstances, it is much more useful to allow other variables to vary (not be held constant). The more variables that are treated as control variables, the less generalizable the results are (the less external validity the causal relationship has).

For example, suppose an experimenter seeks to determine how the color of flashcards affects a subject's ability to recall the definition of words on the flashcards. In this case, the experimenter would choose the color of the flashcard as the independent variable, and the subject's ability to recall the definitions as the dependent variable. In an attempt to keep all other variables constant, the experimenter might only choose subjects who are 18-year-old senior girls in high school enrolled in AP Biology. That is, the following variables were held constant: age, grade in school, gender, coursework. This experiment might provide a causal relationship like "subjects were able to recall the definitions of words on pink flashcards the most easily". This result could be useful to 18-year-old senior girls in high school who are in AP Biology, but the relationship is essentially meaningless for any subjects outside of that category! Since the experimenter sought to determine a *general* relationship between color of flashcard and ability to recall definitions, this particular choice of subject group has rendered the experiment useless.

Thus, when an experimenter's goal is to provide results that are widely generalizable, having a single independent variable, while all other variables are controlled is a terrible approach.


Seth Anderson - 3/10/2014 14:24:38

The claim that the best experiments are those where the independent variables change and all other variables are held constant is generally true, as it allows for more generalization of results than holding everything constant. However, there are a few rare cases for which other methods would be more beneficial. For example, if the case one is researching is extremely specific, manipulating independent variables whose variations have no possibility of occurring in real-world conditions would be detrimental to an experiment, as it would be a massive waste of time to test them.


Andrew Dorsett - 3/10/2014 14:26:13

It's not always the case that the best experiments are those where all variables except the independent variables are held constant. The reading gave a good example with the response time of fighter pilots. If you do an experiment with certain parameters, then you can only say that your study is valid when those parameters are present. For example, say we wanted to do a study about what work space CS students work best in. We sample a bunch of Berkeley students. The independent variable would be the work space, and one control variable would be the fact that we're using Berkeley CS students. We find out that CS students work best outdoors. Can we conclude that all CS students work best outside, or only Berkeley CS students? Weather is a variable that affects the outcome, and in different areas the variables would be different. So we can't conclude that all CS students work best outdoors.


Gavin Chu - 3/10/2014 14:41:16

It's important to only manipulate the independent variables because experimenters want to compare results and find some kind of dissociation. If other variables are not held constant, comparison will be difficult because the experimenter can't determine whether a dependent variable changed due to the independent variables or the other variables. However, keeping all other variables constant is pretty much impossible; for example, variables such as genetics and environment cannot be controlled. Although singling out the independent variable is useful, the reading also mentions that being too specific about the controlled experiment will cause the result to lose generality. Usually some randomization is good, but the overall experiment should still be well controlled. There might also be a problem with confounding variables, so how the experimenter controls the independent variables is also very important.


Daniel Haas - 3/10/2014 14:41:21

Today's reading certainly didn't make the above claim, and in fact brought up several arguments against it. First, it is impossible to have "all other variables held constant" simply because there are countless factors that could potentially confound an outcome. This makes it impractical to attempt to design experiments where only the independent variable varies. Second, designing experiments in this way can be damaging to external validity. That is, unless all of the other variables are already constant in real life, trying to generalize results from an experiment in which they were held constant to real-life scenarios will not be valid. Finally (and not mentioned by the article), in my experience, exhaustively listing and controlling for many potential variables in an experiment can induce a false sense of confidence in the comprehensiveness of the control. This is a bit of a psychological argument: if you've spent a lot of time thinking about and controlling for variables, it's easy to assume that you successfully covered everything and be blindsided by an obvious confounding variable you simply overlooked.

However, it's certainly worth mentioning that experiments that attempt to control for all variables except the independent variable are likely to have high internal validity. That is, if the experiment reveals that the independent variables influence the dependent variables, that influence is likely to be real and not a result of a hidden correlation with a confounding variable, even if it cannot be generalized to scenarios where the other variables do change.


Andrew Chen - 3/10/2014 14:44:42

The best experiments are not necessarily the ones where only the independent variables are manipulated, and all other variables are controlled. In fact, it is impossible to control all variables. For instance, in a psychological experiment, there are genetic, environmental, and societal influences that cannot be controlled. Furthermore, even if controlling all variables were possible, doing so would be detrimental to an experiment, as it would create a specific set of values for the controlled variables to which the result of the experiment would be associated. This means that the experiment can no longer be generalized to make a statement or assertion about a certain population. As a matter of fact, it is better for experiments to include some levels of randomness so that any samples or other variables are not biased. Of course, the designer of the experiment must be wary of confounding variables when looking for causes of change, but this is merely an unavoidable side-effect of allowing randomness, and thus the possibility of generalization.


Liliana (Yuki) Chavez - 3/10/2014 15:25:33

Selecting your independent variables and controlling for other variables is usually a lengthy preparation process that is generally worth the effort for getting precise data. If, however, you are running short on time, and accuracy is less important than a general idea of a conclusion, then running an experiment that lacks this careful preparation is beneficial to the project, as it will save time for other important endeavors higher on the priority list.


Sang Ho Lee - 3/10/2014 15:26:22

In an ideal world, the best experiments would be those in which only the independent variables are manipulated and all other variables are held constant. However, it is impossible to hold all other variables constant in the real world. If it were possible, one could gain a profound amount of information about the effects of a single variable on a system. Therefore, assuming that this is in fact possible for any relatively complex system would be detrimental to any experimental procedure. For example, one must take into account confounding variables. Assuming that only independent variables are manipulated would be foolish, for in the case of confounding variables, the manipulation of the independent variables results in the change of other variables, even those assumed to remain constant. There is clearly a more complex layer of interplay between variables than a simple dichotomy of controlled and independent variables. Therefore, confounding variables decrease internal validity and the experimenter's confidence in conclusions about the effects of the independent variables. Furthermore, besides confounding variables, things such as selection bias, overlooking the effects of statistical regression, maturation, and even the act of testing distort results and affect the entire experimental system, such that even if the experimenter believes that all other variables are held constant, there is no way to be confident about the sole effects of the independent variables.


Derrick Mar - 3/10/2014 15:26:38

As discussed in the reading by David Martin, experiments where only the independent variable is manipulated increase the "internal validity" of the experiment. In other words, we have more confidence that the change in the independent variable is responsible for the variation in the dependent one. However, it should be noted that there are several problems with making all other variables constant, because there are essentially thousands of variables to control. Some mentioned in the reading are history, confounding variables (a big one in my opinion that sometimes cannot be controlled), random variables, and mortality (people leaving an experiment over time). But beyond the jargon, the claim that these types of experiments are the best seems very rational, because science is all about starting with doubt and only accepting hypotheses with evidence. Ultimately, what gives science such a strong basis is its ability to demand validity from the very beginning.

However, by trying to control all the variables in an experiment, you decrease the external validity of your results. In short, what this means is that your experiment cannot be generalized as well to the real world. I love the example given in the reading where a general asks the experimenter if the results can help him find out how loud the alarm should be to quicken response time. Then the experimenter says "only if this, this, and this is true", which is obviously not the case in real life.

In short, what this implies is that the more controlled things are, the less the results will apply to the real world, where not all variables are constant. I think that's actually probably one of the biggest ideas in the reading: that controlling for more variables decreases the ability to generalize your results. Obviously, we often do experiments to help understand and therefore improve things in the real world. So being able to have some random variables that are applicable to real life is often a good thing and, the majority of the time, less expensive.


Everardo Barriga - 3/10/2014 15:27:41

This would be considered a completely controlled experiment, and while you do achieve more precision in your results, you miss out on generality. Your results are confined to exist only within the constant variables you chose. For example, if you were doing a test on whether wearing headphones while running makes you run faster and you were somehow able to keep all variables constant, then you would only be able to make the claim for those variables you chose. If you chose only one specific person to run, you wouldn't be able to generalize it to all people. Or if you chose to only run your experiment on a treadmill, you wouldn't be testing running outside, and you would also be missing out on all of the variables randomized within constraints: for example, you could be running on the same trail, but you don't get to decide what animals show up, or whether the dirt is particularly soft on any given day.


Conan Cai - 3/10/2014 15:28:39

I believe that for the general case, holding variables constant while manipulating others minimizes the sources from which an outcome results. It is a good idea to try and keep things as constant as possible so that during testing, a result can be attributed directly. However, there is a point at which too much control can actually be detrimental. Having too much control leads to scenarios that are too specific. These specific scenarios might not happen in real life, or their occurrence is rare. In the real world, it is unlikely that only a single variable will be affecting the outcome of an action. Oftentimes it will be multiple factors that cannot be predicted that will influence an outcome. Having too much control neglects any number of other possible influences, so results derived from an experiment that is too controlling may not necessarily be valid when applied to the real world. However, using random variables with constraints allows there to be randomness in the scenarios proposed by the experiment so that there is more external validity. The constraints serve to control variables to a degree so that a result can be attributed to specific variables. In this case, with random variables and constraints, I believe that the general idea of manipulating independent variables while limiting all other variables (putting constraints on the random variables) leads to the "best" experiments.


Erik Bartlett - 3/10/2014 15:28:59

In a laboratory setting, where the experimenter is attempting to come up with a one-to-one mapping of cause and effect, the control of variables is much preferred (and even required) for the experiment to be successful; but this scenario is not always the best for testing. Given a user interface or product, if the experiment were to control everything but variations on the interface, the test would not tell the experimenter much. If a product is going to be used in multiple scenarios (on mobile, while seated, etc.) then the tests must incorporate all of these things, which is impossible. It also must take into account the random things that happen in day-to-day interactions and adapt or still be usable. By limiting the variables acting on the user, the tester is unable to get a good picture of how useful the product is in a real-life context, because that is what is applicable. The sterile development environment would lead to a disconnected and possibly terrible product.


Vinit Nayak - 3/10/2014 15:31:49

The validity of the claim in question depends on the nature of the experiment. When only one independent variable is allowed to change, it is easier to see directly what caused the changes by measuring the dependent variables at the end of the experiment. This can prove to be slow if there are a range of values the independent variable can take and all of them need to be tested, since a separate experiment will need to be conducted to test each value (a lot of work, even if they are run concurrently assuming the resources are available). On the other hand, if the experiment is being done with a target value for dependent variables and the experimenter simply needs to know what input will provide the desired output, the conductor of the experiment can change multiple independent variables to get the result he/she needs.
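
As a rough illustration of this one-value-per-run idea (the brightness levels and the pinned settings below are hypothetical, not taken from the reading), each tested level of the single independent variable becomes its own run while everything else stays fixed:

 # Sketch: one-factor-at-a-time testing with hypothetical values.
 brightness_levels = [0.25, 0.5, 0.75, 1.0]   # the single independent variable
 
 controls = {            # everything else held constant across runs
     "ambient_light": "dim",
     "handedness": "right",
     "time_of_day": "morning",
 }
 
 # Each level needs its own run; k levels -> k separate conditions.
 conditions = [{"brightness": b, **controls} for b in brightness_levels]
 
 for i, cond in enumerate(conditions, start=1):
     print(f"run {i}: {cond}")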

The claim becomes invalid if one of the goals of the experiment is to be as natural as possible, because the phenomena being examined rarely result from just one variable changing. In the real world, many dependent variables change based on a combination of multiple independent variables, and the expected outcome might not be produced by changing only one. For example, if one wants to check what weather conditions are required to produce rain, it is difficult to pinpoint only one factor. If one suspects the single independent variable is cloud cover, one could simply have a cloudy or cold day without rain.


Andrew Lee - 3/10/2014 15:35:05

For the most part, I agree with this claim. It helps keep the space of potential explanations relatively small for the phenomenon the experiment is investigating. However, as the reading discusses, it can be carried too far. If the experimental parameters are too controlled, then the results may be less applicable (since the results are only shown to hold in the experimental conditions laid out) and thus less useful outside of the study.


Sol Han - 3/10/2014 15:44:30

Only manipulating the independent variables while holding all other variables constant is not always the best way of carrying out experiments. This is because confounding variables can complicate the results and therefore affect the internal validity of the experiment. For example, one can imagine an experiment in which one is trying to determine whether PC owners have higher life satisfaction than Mac owners. The experiment shows that Mac owners are happier. However, owning a Mac increases the likelihood that the owner uses the Safari browser, and perhaps the Safari browser is the actual cause of the higher satisfaction rating. Here, the Internet browser acts as a confounding variable. Therefore, even when everything but the independent variable is held constant, we cannot necessarily draw valid conclusions from the results. One may conduct additional experiments to remedy this, such as by studying the effects of these confounding variables.
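
A tiny simulation can make the Mac/Safari confound concrete. All of the numbers below are made up for illustration: satisfaction depends only on the browser, yet the naive platform comparison still shows a gap, because browser choice changes systematically with platform.

 import random
 
 random.seed(0)
 
 def simulate_owner(platform):
     # Browser choice is confounded with platform: Mac owners are far
     # more likely to use Safari (hypothetical probabilities).
     uses_safari = random.random() < (0.8 if platform == "mac" else 0.1)
     # Satisfaction depends only on the browser, not on the platform.
     return 7.0 + (1.5 if uses_safari else 0.0) + random.gauss(0, 1)
 
 mac = [simulate_owner("mac") for _ in range(1000)]
 pc = [simulate_owner("pc") for _ in range(1000)]
 
 print("mean satisfaction, Mac owners:", sum(mac) / len(mac))
 print("mean satisfaction, PC owners :", sum(pc) / len(pc))
 # The gap appears even though platform has no direct effect at all.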


Armando Mota - 3/10/2014 15:52:30

The claim that the best experiments are those where only the independent variables are manipulated and all other variables are held constant has some truth to it; however, it depends on what result you are seeking and what you are measuring. In theory, a "perfect" experiment is like this - everything being controlled for is the only way you get to a pure finding, or find that the dependent variable's changes are truly caused by your manipulation of the independent variable and nothing else. In practice, however, this is often not possible, nor is it always preferable. There are many situations in which, by strictly controlling all variables, you create a situation which is unlikely to be found anywhere else in actual reality - as the paper explains, this reduces the experiment's external validity. Creating a unique set of circumstances doesn't tell us much about what would happen in real circumstances, and thus we make exceptions in control for greater generalizability and value. Take, for example, a hypothetical study that Nascar is funding in order to see how the harsh conditions inside the cockpit of race cars alter decision making (including decision making with regard to safety). The experimenter places the drivers in order on the track and has them ride around at full speed while giving certain drivers instructions at random times to make certain decisions like "pass the driver next to you or stay where you are". Unfortunately, because everything is so tightly controlled, the drivers aren't actually in the environment they would be in during a race, and thus the results don't help in finding an answer to the initial research question. In order to fix it, the experimenter would have to measure drivers during actual race time; however, there would be no way to control anything in that situation.


Robin Sylvan - 3/10/2014 16:08:31

It is important for studies to pick an independent variable and try to control other variables in order to keep their validity intact. At the same time, trying to micromanage every other variable in the experiment can lead it to lose value in other contexts. Keeping every other variable the same in a study would lead to a unique environment that wouldn't be replicated in the real world. Let's take the example of trying to discover how fast a new user is able to navigate through different first-user flows for an app we're designing. If we wanted to keep every variable in the experiment the same, we might pick twenty 21-year-old males with an IQ of 110 who all own iPhones and have lots of experience with similar applications. That level of detail would create an environment not representative of the real world and our actual users. There are variables in experiments we'd like to have randomized in order to find the flow that connects most effectively with our user base. While it is important for experiments to keep some control variables and account for confounding variables, it's generally a good idea to have some randomized variables to make sure the study is valid in more than one setting.


Doug Cook - 3/10/2014 16:15:19

The claim that only experiments that allow the independent variables to change are the best is too general. It supposes that the experimenter has the ability to actually control all other variables, and that they actually desire to do so. In his chapter on psychology experiments, Martin notes that this is almost never the case in practice. Real-world experiments are subject to the presence of confounding variables – a circumstance that changes systematically with the independent variables. Martin proceeds to discuss how such variables constitute "threats to internal validity" of the experiment, though there may also be cases where those threats aren't so serious.

Attempting to control every variable other than the independent ones may even be detrimental if the success of the experiment rests on participants being influenced by the environment. If the experimenter was interviewing people in a public place about interactions there, having some "uncontrolled" or random interference from the environment may actually stimulate the interviewee to provide more thorough answers. Martin also acknowledges the utility of random variables in his chapter, embracing them when studies need to be generalized by other fields. There are certainly other examples where having too much control over the variables can shift the results – especially if the experimenter unknowingly sways them in a certain direction. Even so, only manipulating the independent variables is a tried and tested method for producing robust data and should be kept in mind when designing experiments.


Tristan Jones - 3/10/2014 16:18:04

This is not a good claim. Independent variables must be manipulated one at a time in order to isolate the effect of a single variable. When two variables are manipulated at once, it can be very difficult to isolate the effects on a third variable that depends on the two independent ones. Therefore, it is much better to vary one independent variable at a time and see the effects from there.

However, this is not always the case. A perfect example of this is Parrondo's paradox. Only discovered very recently*, it is a proof that two strategies that are each losing on their own can, when combined, produce a net winning effect. This means that we cannot isolate each independent variable at a time and generate some kind of linear/nonlinear combination of them. This means experimental testing, and combining results from other experiments, will always be difficult even if we only manipulate independent variables.


  • 1996, which is very recent from a mathematical standpoint


Brenton Dano - 3/10/2014 16:19:33

If all other variables remain constant, sometimes the experiment cannot be generalized to real-world situations. In the reading, the example they used was General Nosedive, who wanted to see what brightness to use for his low-altitude warning button. If all other variables are held constant, then the information gathered from the study might only be applicable to a narrow group of individuals with distinct characteristics. This could essentially make the experiment useless if it only applies to male college sophomores who are in fraternities or something of that nature. Obviously, having some control variables is important for experiments, but they need not be so rigid that the results from the experiment cannot be extended to real-world situations. Therefore, it's not always the case that the best experiments are those where only the independent variables are manipulated and all other variables are held constant. It should be noted that it's also sometimes important to have variables randomized with constraints, which are not constant but vary randomly within certain limits: for instance, when scheduling sessions of a study on violent video games (morning or night), you would want an equal number of morning and night sessions, but occurring in random chunks.
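
A minimal sketch of that "randomized with constraints" scheduling idea (the session counts are hypothetical): the order of sessions is random, but the constraint of an equal number of morning and night sessions is preserved.

 import random
 
 random.seed(42)
 
 # Constraint: equal numbers of morning and night sessions (hypothetical total of 12).
 sessions = ["morning"] * 6 + ["night"] * 6
 
 # Randomize the order in which the sessions occur...
 random.shuffle(sessions)
 
 # ...while the constraint still holds.
 print(sessions)
 print("morning:", sessions.count("morning"), "night:", sessions.count("night"))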


Haley Rowland - 3/10/2014 16:22:45

There are certainly circumstances in which an experiment should keep all variables constant except the independent variables. For example, if an experimenter is testing the effects of some medical drug, you want to be sure that the result you see is indeed from the drug of interest and not from some other drug or circumstance of the subject. However, when doing more exploratory experiments, keeping all variables constant is not necessarily the best course of action. Allowing variation may lead to discoveries in areas which were not the direct focus of study. Take, for example, the discovery of penicillin: Dr. Fleming decided to examine a contaminated petri dish, rather than ignoring the circumstance that didn't fit with his experimental protocol, and that decision led to his discovery of a life-saving antibiotic. Additionally, holding all variables constant except those which are independent doesn't let you generalize the results to a larger population or more lenient circumstances.


Daphne Hsu - 3/10/2014 16:33:38

I think that the claim is somewhat false, because in the real world, variables differ so much. These experiments would not fully represent what would happen in real life, and you would only have results of a few variables. At the same time, we may be able to pinpoint characteristics of certain variables by isolating them in this manner. This process could be detrimental when there are many independent variables, and one constant variable. It would be hard to draw conclusions from this experiment about the independent variables, since there are so many possible combinations of them that it would be hard to tell what was happening as each variable changed.


Diana Lu - 3/10/2014 16:35:35

The claim that the best experiments are those where only the independent variables are manipulated and all others held constant is not always true. For example, even if only the independent variables are manipulated, it is impossible to guarantee that the outcome will be unaffected by any other variables. Also, an experiment in which all independent and dependent factors could be completely controlled would not have a high level of external validity, which refers to the validity of generalized inferences. This process would be detrimental in an experiment when there is a high number of confounding variables.


Anthony Sutardja - 3/10/2014 16:39:30

The best experiments cannot possibly hold all other variables constant; there are simply too many variables to be controlled. The reading brings up the point that many variables, like human emotion, metabolism, etc, can affect the results of an experiment. It is not possible to control every minute detail. That being said, the reading suggests that we should sample randomly and allow for these random variables. Even if these factors vary across individuals, the experiment should yield good results if it treats subject selection randomly as well.

That being said, experimenting while allowing for random variables allows the experimenters to generalize their results to the wider population of people. However, including random variables has its potential detrimental effects as well. Because all variables and processes are assumed to be random, true errors in the experiment could be accounted for in the experimental results as well, which could lead to findings that are more generalized than they should be.


Cory McDowell - 3/10/2014 16:40:45

There is valid logic supporting the claim that the best experiments are those where only the independent variables are manipulated and all the other variables held constant. When conducting an experiment, you want things to be as similar as possible. For example, the author discussed, for the reaction time experiment, that they wanted the lighting to be consistent, all participants to be right handed, and the temperature be constant. Because we are measuring reaction time, we want all other variables to be controlled, if possible. If this is the case, then our results will solely highlight people’s reaction times, and we can eliminate some other cause affecting our study.

However, this process can be detrimental, because it is not relevant to the real world. In the real world there are countless random variables, and countless external forces affecting a single person’s day-to-day life. So, by only manipulating independent variables, you will obtain a result to a very specific scenario, which could be detrimental if said results cannot be more broadly applied.


Will Tang - 3/10/2014 16:43:17

While it is true that holding all variables constant aside from the manipulated independent variables yields controlled and informative results, it is not always the case that the test environment should be completely controlled. Human computer interaction undeniably varies from person to person, and sometimes it is more appropriate to provide the testers with an environment comparable to the real world, complete with all its randomness and inconsistency. If a UI designer wants to examine reaction times in a completely controlled environment where every user experiences the same temperature, amount of light, etc., then controlling variables makes sense. If a UI designer wants to obtain a more holistic perspective of the user interactions, then it would likely be wise to include many of the random variables associated with the real world. Ultimately, if an experiment is too controlled, its results may not tell the designers anything useful. One example would be the Stanford Prison Experiment. Many people found the results very intriguing and telling, but the fact remains that the tester pool was not in any way comprehensive, with most of the testers being classified as psychologically stable, middle class white men. In addition, many of the controlled variables were not necessarily standard across all prisons, creating a very specific environment that didn't necessarily provide any general information.


Stephanie Ku - 3/10/2014 16:48:51

The claim that the best experiments are those where only the independent variables are manipulated and all the other variables held constant is not always the case. While the independent variables are the ones being manipulated, only the control variables should really be held constant, whilst other variables, such as random variables, will vary and are not necessarily held constant. As mentioned in the reading, we might not always wish to control all the variables in the experiment because this would create a unique set of circumstances. This highly controlled experiment will be less generally applicable in its results. The decision to increase control increases the precision of the result, i.e. increases internal validity, but decreases its generality, i.e. external validity. This process could be detrimental when we want to test the reaction time of people pressing the 'Enter' button on a webpage. If we control every aspect, such as the age of the tester, gender, etc., then the results will only be true for that specific case, and will not generalize to a larger population. This will then render the results almost useless, in that they do not serve the purpose of the experiment.


Opal Kale - 3/10/2014 16:49:51

The claim that the best experiments are those where only the independent variables are manipulated and all other variables are held constant is false, as mentioned in "Doing Psychology Experiments" by David Martin. You would not wish to control all variables because if you did, you'd create a unique set of circumstances. Furthermore, if you controlled all the variables while only manipulating the independent variable, the relationship established by the experiment would only hold true in one very specific case. Thus, you could not generalize the experiment because it would be very circumstantial to whatever you had set the control variables to be. The less general the experiment is, the less applicable the results. This process would be detrimental if, for example, you did a highly controlled laboratory experiment when you wish to generalize to real-world settings where perhaps it is noisy, hot, and crowded, and the workers are tired and unmotivated but have a lot of practice. In addition, this process would be detrimental if too many high-intensity, very controlled trials occurred early in the sequence.


Justin MacMillin - 3/10/2014 16:55:26

It is not always the case that the best experiments are those where only the independent variables are manipulated and all other variables are held constant. Yes, it is true that experiments where only the independent variables are manipulated can be effective and support hypotheses. However, if experiments become too specific, they no longer apply to the general case. The author states that, as a general rule, the more specific a case gets, such as by holding more variables constant, the less it applies to the general case. It is important to allow random variables to affect the outcome of the results as long as those random variables are duly noted and will not drastically affect the outcome. It is impossible to avoid random variables when studying something about the world. In any case, there will always be external factors that affect any situation. With this in mind, these imperfections should be taken into account when performing a study because they make the situation closest to real-life situations. With too many things held constant, the experiment becomes less and less applicable to real-life interpretation. The reading discusses the importance of random variables, which are factors in a study that the researchers know will affect the results. They allow these variables to remain because they want their study to be generalized as much as is appropriate.


Insuk Lee - 3/10/2014 17:02:17

Experiments where only the independent variables are manipulated and all other variables held constant are the de facto standard for any kind of scientific experiment and have been followed for a long time. As long as there are no confounding variables, this method will yield the most accurate results. However, there are often cases, i.e. when there are confounding variables, when the results will not align, due to the fact that these auxiliary or hidden variables affect other control variables in a way that is not consistent across different runs of the experiment, or produce outcomes that we had not expected. This is indeed detrimental to the experiment, since we are exposed to something unexpected, and therefore we need to make sure that the variables are strictly scrutinized and enforced. For example, when we carry out an experiment on several users to test the usability of an app, one user might rate the app well when all others rate it as poor, because that user has seen or experienced similar interface designs in the past and recognizes with more fluidity how to go about using the application.


Kevin Johnson - 3/10/2014 17:02:28

If every single variable other than the independent variable were held constant, it would be difficult or impossible to generalize the results. Since only one very precise situation was tested, there would be little reason to believe that the results would hold in situations different from the control situation devised for the experiment. If the variables held constant are identical to the situation in the intended use case, as may be the case when evaluating highly controlled processes such as manufacturing, that may not be a concern. If the results are intended to hold for a broad spectrum of people, such as if you are attempting to generalize results to describe humanity in general, having an extremely controlled experiment would be detrimental.

Incidentally, this reading provides an admirably clear and concise explanation of the scientific process to my layman's eye, and I'm impressed.


Luke Song - 3/10/2014 17:05:57

It is not always true that constant variables make for good experiments. Controlling all the other variables can make the experiment apply to only very specific settings and reduce the generality of the experiment. The experiment must be general enough to be applicable, yet specific enough to show trends. Using random variables, experimenters are able to allow variables to vary and still get useful results through statistics.


Brian Yin - 3/10/2014 17:08:18

Experiments where only independent variables are manipulated and all other variables are held constant theoretically have the advantage of giving the experimenter complete control over the experiment and allowing them to find the effects of each independent variable. While this control is desirable, there are some problems with this approach. First of all, it is unlikely one will be able to design an experiment in which all variables except for the independent variables are constant, as there are an infinite number of variables. Second, even if one could hold all variables except for the independent variables constant, the results may not be very useful, as they cannot be generalized to situations where the control conditions are not met.

This process would be fairly detrimental in cases where there are many confounding variables. These variables make it difficult to effectively control the experiment. In cases where one is successful in doing so, then the data is probably not generalizable.


Kaleong Kong - 3/10/2014 17:10:06

No, this is not always the case. If we want to know how the independent variables interact with other variables, we are not able to learn that when all other variables are held constant.


Hao-Wei Lin - 3/10/2014 17:15:15

This is not always the case; here are some reasons:

1. If one ignores how the random variables are handled, the experimental results may not be accurate. For instance, suppose a psychological study is conducted to test the correlation between lack of sleep and visual attention. If the experimenter tests all the students who lack sleep in the morning and all the students who get a healthy amount of sleep in the afternoon, the results may be biased, because the experimental results may reflect the time of day (morning or afternoon) instead of the amount of sleep.

2. If the experimenter doesn't consider the possibility of confounding variables in his experiment, then in an experiment that has the test subjects respond to, say, three different light intensities while reaction time is measured, the results can be due to both the intensity of the light and the amount of practice the test subjects have. There is no way to avoid this conundrum if confounding variables are not considered (see the sketch after this list for one common remedy).

3. If the experimenter doesn't take into consideration the effect of "history", he/she might misinterpret the test results. History is usually something impossible or hard to control, but it should definitely be taken into consideration. History refers to things that might have taken place between experimental tests. For example, if an experimenter is testing the effect of creative teaching on student performance over three years, he might ignore the fact that during each testing period the students might have become smarter or more responsive not because of the creative teaching, but because of changes in class or course requirements administered by the school.
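
As a small sketch of the remedy hinted at in point 2 (the intensity labels and subject names are made up), one common approach is to counterbalance: give each subject the light intensities in an independently shuffled order, so practice effects are not systematically tied to any one intensity.

 import random
 
 random.seed(1)
 
 intensities = ["low", "medium", "high"]   # hypothetical levels
 subjects = ["s1", "s2", "s3", "s4", "s5", "s6"]
 
 # Each subject sees the same three intensities, but in an independently
 # shuffled order, so practice is spread across intensities rather than
 # always favoring whichever one happens to come last.
 for subject in subjects:
     order = random.sample(intensities, k=len(intensities))
     print(subject, order)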


Prashan Dharmasena - 3/10/2014 17:18:39

Experiments where only independent variables are manipulated are good when you can ensure that the other variables stay constant. So, they are very good for scientific experiments. But when you try and apply this to design and contextual inquiry, you will find that there are some variables that you cannot ensure are the same. For example, every user has a memory of previous interfaces they've used, there is no way to make sure that this variable is constant across all users. Even the gulf of evaluation and gulf of execution will be different for each user. For things like these, it is better to just use the process of contextual inquiry and observe them.


Allison Leong - 3/10/2014 17:18:41

Good experimental design involves manipulating the independent variables and holding other variables constant, but this is not always possible. Given the variability between participants in an experiment, there will always be qualities that cannot be held constant between subjects. Luckily, there are ways to handle these differences between participants. Strategies include random assignment, where subjects are randomly assigned to one level of the independent variable or another. With random assignment, the idea is that variability between subjects will be randomly (and therefore evenly) distributed amongst the experimental groups. Since perfect experimental design is very difficult to attain, confounding variables may also exist which change systematically with the independent variables. If confounding variables are not detected and eliminated, then the validity of experimental results will be compromised, even if the designer believes that only the independent variables are manipulated and all other variables are held constant.
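
A minimal sketch of the random-assignment strategy described above (the participant IDs and group names are made up): shuffle the participant pool, then deal it evenly into the conditions so individual differences are spread across groups by chance.

 import random
 
 random.seed(7)
 
 participants = [f"p{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants
 groups = {"condition_a": [], "condition_b": []}
 
 # Shuffle once, then alternate assignment so group sizes stay balanced
 # while the composition of each group is left to chance.
 random.shuffle(participants)
 for i, person in enumerate(participants):
     key = "condition_a" if i % 2 == 0 else "condition_b"
     groups[key].append(person)
 
 for name, members in groups.items():
     print(name, members)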


Albert Luo - 3/10/2014 17:20:18

Manipulating only the independent variables while holding the other variables constant can be useful when we are trying to determine correlation or cause and effect, because it limits confounding factors. It can also be useful for testing out new features for a product, so that we can determine for sure which feature is most agreeable to users. This process can be detrimental if our experiment is seeking to obtain basic information about whether the app can be used or not, or even if it is appropriate for its intended task. To that end, manipulating only independent variables severely limits the feedback we can get, when the idea is to get an overall picture.


Sol Park - 3/10/2014 17:21:13

The independent variable is independent of the participant's behavior. I do not think it is always the case that the best experiments are those where only the independent variables are manipulated and all other variables held constant. People could interpret the result from the experiment differently; everyone could be right or everyone could be wrong. The reading explicitly says that people may unintentionally confound the experiment with a variable that changes systematically with the independent variable. Hence, it is not always the case that the best experiments are those where only the independent variables are manipulated and all other variables held constant.


Patrick Lin - 3/10/2014 17:23:18

Holding as many variables as possible static is not necessarily the best form of experimentation because it threatens the validity of an experiment. People conduct tests so that they are able to draw useful conclusions from them, but strictly controlling variables may produce results that are far too specific to the testing parameters. Strict manipulation is useful if the testers have an extremely well-defined objective, such as finding soda preferences in 18 year old males in high school that consume a specific brand 3 times a week during lunch, but most experiments have more general goals (e.g. soda preferences of high schoolers) and thus require looser enforcement of variables. Internal validity is also threatened by strict manipulation, as selection (especially self-selection) is a major factor that could compromise conclusions. Allowing self-selection in a soda experiment, for example, could result in only people who frequently drink soda and already have an established preference for a specific brand. This is why many experiments require increased, not decreased, randomization of variables.


Meghana Seshadri - 3/10/2014 17:25:03

Independent variables are deliberately manipulated so we can observe the behavior of something under various circumstances. However, other circumstances that occur within the experiment must also be accounted for in some way. This is why there are control variables, so that we can hold other circumstances to one level as we examine the independent variables. While control variables are very important to the experimental method, making every variable other than the independent variable in your experiment a control variable is, first off, impossible, as well as detrimental to the external validity of your experiment.

It is impossible to control that many variables, especially those related to genetic, environmental, or human-influenced conditions. Furthermore, by having all other variables be control variables, you are invariably experimenting only on a unique set of conditions or circumstances. Manipulating only the independent variable means that creating a generalization based upon the results of the experiment would be pointless, as the experiment was already catering only to a specific case. Hence, the more control variables in an experiment, the less generally applicable the results become. For example, say a librarian is experimenting with the library's closing hours so that they cater to the greatest number of students, depending on the time periods when most students arrive at the library. With many control variables, the results of the experiment could look like: "A plausible answer might be reached if the student is an Engineering undergraduate who lives on the north side of campus and doesn't like traveling far late at night." But this doesn't make sense, as the librarian is looking to cater to as many students as possible.

External validity is the degree to which generalizations can be drawn from an experiment. It decreases, and is threatened, as more control variables are added to the experiment. Threats to the external validity of an experiment take away from an important basis of the experiment in the first place, which is to be able to draw useful, generalized conclusions. A solution to not having to control all circumstances in an experiment is using random variables and allowing that variation to occur.


Chirag Mahapatra - 3/10/2014 17:25:26

No, this is not the case, because if all other variables are held constant then we would get the result for only that specific case. We will not be able to generalize the result to all cases. Hence, while the independent variables should be manipulated, all other variables should be randomized. This helps improve the external validity of the experiment. In some cases it might be fine to randomize within constraints as well. For example, if we are trying to gauge the attentiveness of students at different lecture paces, and we run all our experiments on a single day in a specific room, then we will only be able to comment on attentiveness under those parameters and will not be able to generalize to the larger case.


Cheng Sima - 3/10/2014 17:25:53

Some people claim that the best experiments are those where only the independent variables are manipulated and all other variables held constant. I believe that this is not always the case because a very strictly controlled experiment will decrease external validity. Admittedly, it will increase internal validity by ensuring that the causal relationship is due to the independent variable. However, we do experiments in order to generalize causal relationships in the real world. We know that the external world is not as strictly controlled, and such a controlled experiment will not bring about meaningful results to be generalized.

Therefore, this process will be detrimental when the results of the experiment will be directly transferred and used in the real world. For example, if we are testing a drug to be used by people in their daily lives, we cannot strictly control the experiment to lab conditions, since we cannot control the external real-life conditions where the drug will actually be used.


Romi Phadte - 3/10/2014 17:25:59

This isn't always the case. A great example of a situation where this would be detrimental is in user interface design. Changing variables in an experiment implies that you know which variables should be changed. However, when designing a user interface, it is difficult to know which colors, features, or designs are wrong. Thus it might be a good idea to go ahead and get feedback and then change a variety of variables, many of which you may not have considered before. Additionally, it may not be viable to change only one variable if there are features that have to go together or if there are simply too many things to test. Thus, a multilayered A/B test might be necessary across a huge number of users. This is pretty common among the Facebook engineering team. In design, it isn't important to establish a direct correlation between variables, but just to know that the design has been improved. Often the best way to determine improvements is to try changing everything a little. By limiting the experiment to only one single variable, you limit the ways the user interface can expand and evolve. Thus it is necessary to keep an open mind, make sure that any variable within an interface is editable, and not get too attached to any one design.
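
A rough sketch of what a multilayered assignment could look like (the layer names and variants are hypothetical, and this is not a description of any particular company's system): each user is hashed independently per layer, so several interface variables can be tested at once on the same user base.

 import hashlib
 
 # Hypothetical layers, each testing a different interface variable.
 LAYERS = {
     "button_color": ["blue", "green"],
     "font_size": ["small", "large"],
     "onboarding_flow": ["short", "long"],
 }
 
 def assign(user_id, layer, variants):
     # Hash the user id together with the layer name so assignments in
     # different layers are independent of one another.
     digest = hashlib.sha256(f"{layer}:{user_id}".encode()).hexdigest()
     return variants[int(digest, 16) % len(variants)]
 
 for user in ["user42", "user43"]:
     picks = {layer: assign(user, layer, variants) for layer, variants in LAYERS.items()}
     print(user, picks)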


Christina Guo - 3/10/2014 17:26:54

The best experiments are not always the ones where only the independent variables are manipulated. Controlling all of the other variables may cause problems such as an inability to generalize the results of the experiment. One solution to this would be to use randomization across those other variables, which would make it statistically likely to have a well-distributed set of attributes for any particular variable. This improves the external validity of the experiment and allows us to generalize our results to a greater range of situations. An example of this would be a large pool of subjects assigned to different groups based on something completely random such as a coin toss. In this case, test subjects with certain characteristics are not more likely to be in any one group. However, it is still important to thoroughly analyze the effect of random variables on the dependent variables we are looking to learn about, since they may have unintended effects. For this reason, sometimes the experimenter would want to use randomization within a certain set of constraints. For example, we would not want to assume that volunteer test subjects coming in at different times of the day would completely randomize the attributes of the entire pool.

Nevertheless, there are still benefits to controlling many of the variables in the experiment to prevent compromising the internal validity, and casting doubt on whether the outcomes were caused by the specific independent variable we are looking at, or another varying variable, such as what happened with the Coca Cola experiment with cups marked M and Q.


Alexander Chen - 3/10/2014 17:29:05

In most cases, experiments where only the independent variables are changed and the other variables are held constant are the most representative of how an input affects an output. In essence, the experiments have carefully reduced the amount of randomness that could cause the output to differ.

This also reduces the number of rows needed in a table to keep track of the input variables. For example, imagine if the independent variable, control variables, and confounding variables were all boolean values. Each time we can fix a control or confounding variable, we can reduce the number of input rows by a factor of two. The scenario mentioned, with only the independent variables being changed, can show a much clearer correlation between the independent variables and the output.
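
A tiny illustration of that combinatorial point (the variable names are made up): with n boolean variables there are 2^n input combinations to cover, and fixing any one of them as a constant halves that count.

 from itertools import product
 
 variables = ["brightness_high", "right_handed", "morning_session"]
 
 # All combinations when every variable is free to vary: 2 ** 3 = 8 rows.
 free = list(product([False, True], repeat=len(variables)))
 print("rows when all vary:", len(free))
 
 # Fixing one variable (say, always morning sessions) halves the table.
 fixed = [combo for combo in free if combo[variables.index("morning_session")]]
 print("rows with one variable held constant:", len(fixed))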

Of course, there are ways to perform variable elimination, but these operations might not yield the accuracy that a fixed control and confounding variable experiment might yield.

However, the most representative experiments may not always have the non-independent variables held constant. In the physical world, sometimes confounding variables are linked to some of the input independent variables that we want to modify.

For example, we might want to try to determine the effect of a user's physical location on the way they interact with their cell phone. We might take walking on a sidewalk and riding a bus as input independent variables. We might notice that bus riders tend to listen to music and keep their phones in their pockets while strollers are texting on their mobile devices. One might complain that confounding variables, like the amount of people around the subject, the noise level, etc, were not maintained at the same level for the two types of experiments.

However, we must understand that many of these confounding variables are a byproduct of an independent variable. Riding a bus means that you will likely have people in close proximity, the noise level is high, and you don't have to be wary of your surroundings. Strolling on the sidewalk gives you more space, but you must be aware of where you are walking.

If the experimenters purposely restrict the control variables to that of the "bus ride" scenario, then the setting might not be representative of walking while using a mobile device.

So, the important takeaway is that experimenters must find a good balance of controlling their control and confounding variables in conjunction with their input independent variables to achieve a generalizable conclusion. These are more relevant to the problems that designers and managers need to solve.


lisa li - 3/10/2014 17:30:26

No. First, in the real world setting, it's impossible to design a perfect experiment where only the independent variables are manipulated and all other variables held constant. Also, controlled variables are important so the researcher can take full command of some experiment conditions.


Cheng-yu Hong - 3/10/2014 17:47:39

At first glance, it might seem that only manipulating the independent variables and keeping all other variables constant is a good idea. However, this is not always the case, as usually the more highly controlled the experiment is, the less generally applicable the results are. You will only be able to relate to a highly controlled experiment and draw conclusions from the results if all the controlled conditions match, which is rarely the case in real-world scenarios. Additionally, holding all variables besides the independent variables constant would be detrimental to the sample size. For example, an experiment where the age, gender, ethnicity, visual acuity, and IQ are held constant across subjects will have difficulty finding subjects that match the criteria. For most cases, it will be better to relax the selection constraints and draw from a much larger sample size.


Andrew Fang - 3/10/2014 20:14:43

This is not always the case. Sometimes, there may be other variables that interact with the independent variable a large percentage of the time in the real world. If we hold these variables constant (especially if that constant is only seen a small fraction of the time in real scenarios), we lose that bit of data and we lose the ability to generalize our findings. With too many control variables, our experiment becomes too different from the real world, and although we may find a correlation and deem something to be effective, it may very well be that that is not the case. Instead, what we find in our experiments is efficacy: how well something works in a controlled, experimental environment. If we want to test effectiveness, that is, how well something works in real life, our tests would not show this. This process would be detrimental especially if we hold the variables at an unreasonable constant, at values that we are unlikely to see in our population.