Vittana.org

16 Advantages and Disadvantages of Experimental Research

How do you make sure that a new product, theory, or idea has validity? There are multiple ways to test them, with one of the most common being the use of experimental research. When one variable is held under complete control, the other variables can be manipulated to determine the value or validity of what has been proposed.

Then, through a process of monitoring and administration, the true effects of what is being studied can be determined. This creates an accurate outcome, so that conclusions can be drawn about the final value potential. It is an efficient process, but one that can also be easily manipulated to meet specific metrics if oversight is not properly performed.

Here are the advantages and disadvantages of experimental research to consider.

What Are the Advantages of Experimental Research?

1. It provides researchers with a high level of control. By being able to isolate specific variables, it becomes possible to determine if a potential outcome is viable. Each variable can be controlled on its own or in different combinations to study what possible outcomes are available for a product, theory, or idea as well. This provides a tremendous advantage in the ability to find accurate results.

2. There is no limit to the subject matter or industry involved. Experimental research is not limited to a specific industry or type of idea. It can be used in a wide variety of situations. Teachers might use experimental research to determine if a new method of teaching or a new curriculum is better than an older system. Pharmaceutical companies use experimental research to determine the viability of a new product.

3. Experimental research provides conclusions that are specific. Because experimental research provides such a high level of control, it can produce results that are specific and relevant with consistency. It is possible to determine success or failure, making it possible to understand the validity of a product, theory, or idea in a much shorter amount of time compared to other verification methods. You know the outcome of the research because you bring the variable to its conclusion.

4. The results of experimental research can be duplicated. Experimental research is a straightforward, basic form of research that allows for its duplication when the same variables are controlled by others. This helps to promote the validity of a concept for products, ideas, and theories. It allows anyone to check and verify published results, which often allows for better results to be achieved, because following the exact same steps can produce the exact same results.

5. Natural settings can be replicated with faster speeds. When conducting research within a laboratory environment, it becomes possible to replicate conditions that could otherwise take a long time to occur, so that the variables can be tested appropriately. This also gives researchers greater control over the extraneous variables which may exist, limiting the unpredictability of nature as each variable is carefully studied.

6. Experimental research allows cause and effect to be determined. The manipulation of variables allows researchers to look at the various cause-and-effect relationships that a product, theory, or idea can produce. It is a process which allows researchers to dig deeper into what is possible, showing how the various variable relationships can provide specific benefits. In return, a greater understanding of the specifics within the research can be gained, even if an understanding of why that relationship is present isn’t provided to the researcher.

7. It can be combined with other research methods. This allows experimental research to be able to provide the scientific rigor that may be needed for the results to stand on their own. It provides the possibility of determining what may be best for a specific demographic or population while also offering a better transference than anecdotal research can typically provide.

What Are the Disadvantages of Experimental Research?

1. Results are highly subjective due to the possibility of human error. Because experimental research requires specific levels of variable control, it is at high risk of experiencing human error at some point during the research. Any error, whether systematic or random, can affect the other variables, which would eliminate the validity of the experiment and the research being conducted.

2. Experimental research can create situations that are not realistic. The variables of a product, theory, or idea are under such tight controls that the data being produced can be corrupted or inaccurate, yet still seem authentic. This can work against the researcher in two ways. First, the variables can be controlled in such a way that the data skews toward a favorable or desired result. Second, the data can be corrupted to seem positive, but because the real-life environment is so different from the controlled environment, the positive results may never be achieved outside of the experimental research.

3. It is a time-consuming process. For it to be done properly, experimental research must isolate each variable and conduct testing on it. Then combinations of variables must also be considered. This process can be lengthy and require a large amount of financial and personnel resources. Those costs may never be offset by consumer sales if the product or idea never makes it to market. If what is being tested is a theory, it can lead to a false sense of validity that may change how others approach their own research.

4. There may be ethical or practical problems with variable control. It might seem like a good idea to test new pharmaceuticals on animals before humans to see if they will work, but what happens if the animal dies because of the experimental research? Or what about human trials that fail and cause injury or death? Experimental research might be effective, but sometimes the approach has ethical or practical complications that cannot be ignored. Sometimes there are variables that cannot be manipulated as they should be in order for results to be obtained.

5. Experimental research does not provide an actual explanation. Experimental research is an opportunity to answer a Yes or No question. It will either show you that it will work or it will not work as intended. One could argue that partial results could be achieved, but that would still fit into the “No” category because the desired results were not fully achieved. The answer is nice to have, but there is no explanation as to how you got to that answer. Experimental research is unable to answer the question of “Why” when looking at outcomes.

6. Extraneous variables cannot always be controlled. Although laboratory settings can control extraneous variables, natural environments present certain challenges. Some studies need to be completed in a natural setting to be accurate. It may not always be possible to control the extraneous variables because of the unpredictability of Mother Nature. Even if the variables are controlled, the outcome may ensure internal validity, but at the expense of external validity. Applying the results to the general population can be quite challenging in either scenario.

7. Participants can be influenced by their current situation. Human error isn’t just confined to the researchers. Participants in an experimental research study can also be influenced by extraneous variables. There could be something in the environment, such as an allergy, that creates a distraction. In a conversation with a researcher, there may be a physical attraction that changes the responses of the participant. Even internal triggers, such as a fear of enclosed spaces, could influence the results that are obtained. It is also very common for participants to “go along” with what they think a researcher wants to see instead of providing an honest response.

8. Manipulating variables isn’t necessarily an objective process. For research to be effective, it must be objective. Being able to manipulate variables reduces that objectivity. Although there are benefits to observing the consequences of such manipulation, those benefits may not provide realistic results that can be used in the future. A sample is reflective only of itself, and the results may not translate to the general population.

9. Human responses in experimental research can be difficult to measure. There are many pressures that can be placed on people, from political to personal, and everything in-between. Different life experiences can cause people to react to the same situation in different ways. Not only does this mean that groups may not be comparable in experimental research, but it also makes it difficult to measure the human responses that are obtained or observed.

The advantages and disadvantages of experimental research show that it is a useful system to use, but it must be tightly controlled in order to be beneficial. It produces results that can be replicated, but it can also be easily influenced by internal or external influences that may alter the outcomes being achieved. By taking these key points into account, it will become possible to see if this research process is appropriate for your next product, theory, or idea.

Perspect Med Educ. 2019 Aug;8(4).

Limited by our limitations

Paula T. Ross

Medical School, University of Michigan, Ann Arbor, MI USA

Nikki L. Bibler Zaidi

Study limitations represent weaknesses within a research design that may influence outcomes and conclusions of the research. Researchers have an obligation to the academic community to present complete and honest limitations of a presented study. Too often, authors use generic descriptions to describe study limitations. Including redundant or irrelevant limitations is an ineffective use of the already limited word count. A meaningful presentation of study limitations should describe the potential limitation, explain the implication of the limitation, provide possible alternative approaches, and describe steps taken to mitigate the limitation. This includes placing research findings within their proper context to ensure readers do not overemphasize or minimize findings. A more complete presentation will enrich the readers’ understanding of the study’s limitations and support future investigation.

Introduction

Regardless of the format scholarship assumes, from qualitative research to clinical trials, all studies have limitations. Limitations represent weaknesses within the study that may influence outcomes and conclusions of the research. The goal of presenting limitations is to provide meaningful information to the reader; however, too often, limitations in medical education articles are overlooked or reduced to simplistic and minimally relevant themes (e.g., single institution study, use of self-reported data, or small sample size) [ 1 ]. This issue is prominent in other fields of inquiry in medicine as well. For example, despite the clinical implications, medical studies often fail to discuss how limitations could have affected the study findings and interpretations [ 2 ]. Further, observational research often fails to remind readers of the fundamental limitation inherent in the study design, which is the inability to attribute causation [ 3 ]. By reporting generic limitations or omitting them altogether, researchers miss opportunities to fully communicate the relevance of their work, illustrate how their work advances a larger field under study, and suggest potential areas for further investigation.

Goals of presenting limitations

Medical education scholarship should provide empirical evidence that deepens our knowledge and understanding of education [ 4 , 5 ], informs educational practice and process [ 6 , 7 ], and serves as a forum for educating other researchers [ 8 ]. Providing study limitations is indeed an important part of this scholarly process. Without them, research consumers are hard-pressed to fully grasp the potential exclusion areas or other biases that may affect the results and conclusions provided [ 9 ]. Study limitations should leave the reader thinking about opportunities to engage in prospective improvements [ 9 – 11 ] by presenting gaps in the current research and extant literature, thereby cultivating other researchers’ curiosity and interest in expanding the line of scholarly inquiry [ 9 ].

Presenting study limitations is also an ethical element of scientific inquiry [ 12 ]. It ensures transparency of both the research and the researchers [ 10 , 13 , 14 ], as well as provides transferability [ 15 ] and reproducibility of methods. Presenting limitations also supports proper interpretation and validity of the findings [ 16 ]. A study’s limitations should place research findings within their proper context to ensure readers are fully able to discern the credibility of a study’s conclusion, and can generalize findings appropriately [ 16 ].

Why some authors may fail to present limitations

As Price and Murnan [ 8 ] note, there may be overriding reasons why researchers do not sufficiently report the limitations of their study. For example, authors may not fully understand the importance and implications of their study’s limitations or assume that not discussing them may increase the likelihood of publication. Word limits imposed by journals may also prevent authors from providing thorough descriptions of their study’s limitations [ 17 ]. Still another possible reason for excluding limitations is a diffusion of responsibility in which some authors may incorrectly assume that the journal editor is responsible for identifying limitations. Regardless of reason or intent, researchers have an obligation to the academic community to present complete and honest study limitations.

A guide to presenting limitations

The presentation of limitations should describe the potential limitations, explain the implication of the limitations, provide possible alternative approaches, and describe steps taken to mitigate the limitations. Too often, authors only list the potential limitations, without including these other important elements.

Describe the limitations

When describing limitations authors should identify the limitation type to clearly introduce the limitation and specify the origin of the limitation. This helps to ensure readers are able to interpret and generalize findings appropriately. Here we outline various limitation types that can occur at different stages of the research process.

Study design

Some study limitations originate from conscious choices made by the researcher (also known as delimitations) to narrow the scope of the study [ 1 , 8 , 18 ]. For example, the researcher may have designed the study for a particular age group, sex, race, ethnicity, geographically defined region, or some other attribute that would limit to whom the findings can be generalized. Such delimitations involve conscious exclusionary and inclusionary decisions made during the development of the study plan, which may represent a systematic bias intentionally introduced into the study design or instrument by the researcher [ 8 ]. The clear description and delineation of delimitations and limitations will assist editors and reviewers in understanding any methodological issues.

Data collection

Study limitations can also be introduced during data collection. An unintentional consequence of human subjects research is the potential of the researcher to influence how participants respond to their questions. Even when appropriate methods for sampling have been employed, some studies remain limited by the use of data collected only from participants who decided to enrol in the study (self-selection bias) [ 11 , 19 ]. In some cases, participants may provide biased input by responding to questions they believe are favourable to the researcher rather than their authentic response (social desirability bias) [ 20 – 22 ]. Participants may influence the data collected by changing their behaviour when they are knowingly being observed (Hawthorne effect) [ 23 ]. Researchers—in their role as an observer—may also bias the data they collect by allowing a first impression of a single characteristic of the participant to influence their impression of other characteristics, either unfavourably (horns effect) or favourably (halo effect) [ 24 ].

Data analysis

Study limitations may arise as a consequence of the type of statistical analysis performed. Some studies may not follow the basic tenets of inferential statistical analyses when they use convenience sampling (i.e. non-probability sampling) rather than employing probability sampling from a target population [ 19 ]. Another limitation that can arise during statistical analyses occurs when studies employ unplanned post-hoc data analyses that were not specified before the initial analysis [ 25 ]. Unplanned post-hoc analysis may lead to statistical relationships that suggest associations but are no more than coincidental findings [ 23 ]. Therefore, when unplanned post-hoc analyses are conducted, this should be clearly stated to allow the reader to make proper interpretation and conclusions—especially when only a subset of the original sample is investigated [ 23 ].
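
The risk described above can be illustrated with a short simulation (a hedged sketch, not from the article): when many unplanned comparisons are run on data containing no real effect, some will cross a conventional significance cutoff by chance alone.

```python
import random
import statistics

# Illustrative sketch: two groups receive NO real treatment difference
# (every outcome is pure noise), yet screening 20 outcomes at the
# conventional |t| > 2 cutoff can still flag some "significant"
# relationships -- the coincidental findings the authors warn about.

random.seed(42)  # reproducible illustration

def two_sample_t(a, b):
    """Welch-style t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / na + vb / nb) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

n_outcomes = 20
flagged = 0
for _ in range(n_outcomes):
    treatment = [random.gauss(0, 1) for _ in range(30)]
    control = [random.gauss(0, 1) for _ in range(30)]
    if abs(two_sample_t(treatment, control)) > 2.0:  # roughly p < .05
        flagged += 1

print(f"{flagged} of {n_outcomes} null comparisons flagged as 'significant'")
```

Because the expected false-positive count grows with the number of unplanned tests, pre-specifying analyses (or correcting for multiple comparisons) is the usual safeguard.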

Study results

The limitations of any research study will be rooted in the validity of its results—specifically threats to internal or external validity [ 8 ]. Internal validity refers to reliability or accuracy of the study results [ 26 ], while external validity pertains to the generalizability of results from the study’s sample to the larger, target population [ 8 ].

Examples of threats to internal validity include: effects of events external to the study (history), changes in participants due to time instead of the studied effect (maturation), systematic reduction in participants related to a feature of the study (attrition), changes in participant responses due to repeatedly measuring participants (testing effect), modifications to the instrument (instrumentality) and selecting participants based on extreme scores that will regress towards the mean in repeat tests (regression to the mean) [ 27 ].

Threats to external validity include factors that might inhibit generalizability of results from the study’s sample to the larger, target population [ 8 , 27 ]. External validity is challenged when results from a study cannot be generalized to its larger population or to similar populations in terms of the context, setting, participants and time [ 18 ]. Therefore, limitations should be made transparent in the results to inform research consumers of any known or potentially hidden biases that may have affected the study and prevent generalization beyond the study parameters.

Explain the implication(s) of each limitation

Authors should include the potential impact of the limitations (e.g., likelihood, magnitude) [ 13 ] as well as address specific validity implications of the results and subsequent conclusions [ 16 , 28 ]. For example, self-reported data may lead to inaccuracies (e.g. due to social desirability bias) which threatens internal validity [ 19 ]. Even a researcher’s inappropriate attribution to a characteristic or outcome (e.g., stereotyping) can overemphasize (either positively or negatively) unrelated characteristics or outcomes (halo or horns effect) and impact the internal validity [ 24 ]. Participants’ awareness that they are part of a research study can also influence outcomes (Hawthorne effect) and limit external validity of findings [ 23 ]. External validity may also be threatened should the respondents’ propensity for participation be correlated with the substantive topic of study, as data will be biased and not represent the population of interest (self-selection bias) [ 29 ]. Having this explanation helps readers interpret the results and generalize the applicability of the results for their own setting.

Provide potential alternative approaches and explanations

Often, researchers use other studies’ limitations as the first step in formulating new research questions and shaping the next phase of research. Therefore, it is important for readers to understand why potential alternative approaches (e.g. approaches taken by others exploring similar topics) were not taken. In addition to alternative approaches, authors can also present alternative explanations for their own study’s findings [ 13 ]. This information is valuable coming from the researcher because of the direct, relevant experience and insight gained as they conducted the study. The presentation of alternative approaches represents a major contribution to the scholarly community.

Describe steps taken to minimize each limitation

No research design is perfect and free from explicit and implicit biases; however, various methods can be employed to minimize the impact of study limitations. Some suggested steps to mitigate or minimize the limitations mentioned above include using neutral questions, randomized response techniques, forced-choice items, or self-administered questionnaires to reduce respondents’ discomfort when answering sensitive questions (social desirability bias) [ 21 ]; using unobtrusive data collection measures (e.g., use of secondary data) that do not require the researcher to be present (Hawthorne effect) [ 11 , 30 ]; using standardized rubrics and objective assessment forms with clearly defined scoring instructions to minimize researcher bias, or making rater adjustments to assessment scores to account for rater tendencies (halo or horns effect) [ 24 ]; or using existing data or control groups (self-selection bias) [ 11 , 30 ]. When appropriate, researchers should provide sufficient evidence that demonstrates the steps taken to mitigate limitations as part of their study design [ 13 ].

In conclusion, authors may be limiting the impact of their research by neglecting limitations or providing abbreviated and generic ones. We present several examples of limitations to consider; however, this should not be considered an exhaustive list, nor should these examples be added to the growing list of generic and overused limitations. Instead, careful thought should go into presenting limitations after research has concluded and the major findings have been described. Limitations help focus the reader on key findings; therefore, it is important to only address the most salient limitations of the study [ 17 , 28 ] related to the specific research problem, not general limitations of most studies [ 1 ]. It is important not to minimize the limitations of study design or results. Rather, results, including their limitations, must help readers draw connections between current research and the extant literature.

The quality and rigor of our research is largely defined by our limitations [ 31 ]. In fact, one of the top reasons reviewers report recommending acceptance of medical education research manuscripts involves limitations—specifically how the study’s interpretation accounts for its limitations [ 32 ]. Therefore, it is not only best for authors to acknowledge their study’s limitations rather than to have them identified by an editor or reviewer, but proper framing and presentation of limitations can actually increase the likelihood of acceptance. Perhaps, these issues could be ameliorated if academic and research organizations adopted policies and/or expectations to guide authors in proper description of limitations.

Chapter 10 Experimental Research

Experimental research, often considered to be the “gold standard” in research designs, is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effect of extraneous variables.

Experimental research is best suited for explanatory research (rather than for descriptive or exploratory research), where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments, conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalizability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments, conducted in field settings such as a real organization, are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic Concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli called a treatment (the treatment group) while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favorably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a medical condition such as dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group); the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.
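
The dementia-drug example above can be sketched in a few lines of Python. All numbers here are invented for illustration: one sample is randomly divided into two treatment groups (high and low dosage) and one control group (placebo), and mean improvement is compared.

```python
import random
import statistics

# Hypothetical sketch of the drug example: effect sizes are invented
# purely to illustrate comparing treatment groups against a control.

random.seed(11)

patients = list(range(120))
random.shuffle(patients)  # random assignment of the sample
high_dose, low_dose, placebo = (patients[:40], patients[40:80], patients[80:])

def improvement(mean_effect):
    """Simulated symptom improvement for one group of 40 subjects."""
    return [random.gauss(mean_effect, 4) for _ in range(40)]

high_scores    = improvement(12)  # assumed strong drug response
low_scores     = improvement(8)   # assumed weaker response
placebo_scores = improvement(3)   # assumed placebo response only

for name, scores in [("high", high_scores), ("low", low_scores),
                     ("placebo", placebo_scores)]:
    print(f"{name:>7}: mean improvement {statistics.mean(scores):.1f}")
```

If the experimental groups improve significantly more than the placebo group, the drug may be judged effective, and the high/low comparison speaks to dosage.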

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the “cause” in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and assures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is the process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalizability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
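
The distinction between the two procedures can be made concrete with a short sketch (illustrative names; a simple list stands in for the sampling frame):

```python
import random

# Sketch of random *selection* (drawing a sample from a sampling
# frame, which supports external validity) followed by random
# *assignment* (splitting that sample into treatment and control
# groups, which supports internal validity).

random.seed(7)

population = [f"subject_{i:03d}" for i in range(1000)]  # sampling frame

# Random selection: each unit has a positive chance of being drawn.
sample = random.sample(population, 60)

# Random assignment: the selected sample is split into equivalent groups.
shuffled = sample[:]
random.shuffle(shuffled)
treatment_group = shuffled[:30]
control_group = shuffled[30:]

print(len(treatment_group), len(control_group))
```

A survey study might stop after the selection step; a true experiment requires the assignment step as well.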

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

  • History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.
  • Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.
  • Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.
  • Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.
  • Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.
  • Regression threat , also called regression to the mean, refers to the statistical tendency of a group’s overall performance on a measure during a posttest to regress toward the mean of that measure rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
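
The regression threat in particular lends itself to a quick simulation (a sketch with invented numbers): subjects selected for extreme pretest scores will, on average, score closer to the group mean on the posttest, because part of their extreme pretest score was noise.

```python
import random

# Illustrative sketch: each observed score = true ability + noise.
# Selecting the top decile on the *pretest* picks up subjects whose
# noise happened to be positive, so their posttest mean regresses
# back toward the overall mean of 100.

random.seed(1)

true_ability = [random.gauss(100, 10) for _ in range(2000)]
pretest  = [t + random.gauss(0, 10) for t in true_ability]
posttest = [t + random.gauss(0, 10) for t in true_ability]

# Select the top decile on the pretest (an extreme, non-random group).
cutoff = sorted(pretest)[-200]
top = [i for i, p in enumerate(pretest) if p >= cutoff]

pre_mean  = sum(pretest[i]  for i in top) / len(top)
post_mean = sum(posttest[i] for i in top) / len(top)

print(f"top-decile pretest mean:  {pre_mean:.1f}")
print(f"same subjects, posttest: {post_mean:.1f}")  # lower, closer to 100
```

No treatment was applied between the two tests, yet the selected group's mean drops; an uncontrolled study could mistake this purely statistical shift for a treatment effect.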

Two-Group Experimental Designs

The simplest true experimental designs are two-group designs involving one treatment group and one control group, and are ideally suited for testing the effects of a single independent variable that can be manipulated as a treatment. The two basic two-group designs are the pretest-posttest control group design and the posttest-only control group design, while variations may include covariance designs. These designs are often depicted using a standardized design notation, where R represents random assignment of subjects to groups, X represents the treatment administered to the treatment group, and O represents pretest or posttest observations of the dependent variable (with different subscripts to distinguish between pretest and posttest observations of treatment and control groups).

Pretest-posttest control group design. In this design, subjects are randomly assigned to treatment and control groups and given an initial (pretest) measurement of the dependent variables of interest; the treatment group is then administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.


Figure 10.1. Pretest-posttest control group design

The effect E of the experimental treatment in the pretest-posttest design is measured as the difference in the posttest and pretest scores between the treatment and control groups:

E = (O2 – O1) – (O4 – O3)

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement (especially if the pretest introduces unusual topics or content).
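The computation of E is straightforward. A minimal simulated sketch (all scores, the natural gain, and the effect size are invented for illustration):

```python
import random
import statistics

random.seed(1)

# Simulated pretest (O1, O3) and posttest (O2, O4) scores; the treatment
# adds a true effect of 8 points on top of a natural gain of about 5.
n = 100
treat_pre = [random.gauss(60, 10) for _ in range(n)]   # O1
ctrl_pre = [random.gauss(60, 10) for _ in range(n)]    # O3
treat_post = [x + random.gauss(5, 3) + 8.0 for x in treat_pre]  # O2
ctrl_post = [x + random.gauss(5, 3) for x in ctrl_pre]          # O4

m = statistics.mean
# E = (O2 - O1) - (O4 - O3): the treatment group's gain minus the
# control group's gain.
E = (m(treat_post) - m(treat_pre)) - (m(ctrl_post) - m(ctrl_pre))
print(round(E, 1))  # recovers roughly the simulated effect of 8
```

Subtracting the control group's gain removes the natural improvement (maturation, testing) that both groups share, leaving only the treatment effect.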

Posttest-only control group design. This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.


Figure 10.2. Posttest only control group design.

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

E = (O1 – O2)

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
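As a sketch, both the effect and the two-group ANOVA F statistic can be computed directly (simulated data, invented for illustration; for two groups the F test is equivalent to a t-test, with F = t²):

```python
import random
import statistics

random.seed(7)

# Simulated posttest scores only (no pretest in this design).
treat = [random.gauss(75, 8) for _ in range(100)]  # O1: treatment group
ctrl = [random.gauss(68, 8) for _ in range(100)]   # O2: control group

# Treatment effect: the difference in posttest means.
E = statistics.mean(treat) - statistics.mean(ctrl)

# One-way ANOVA F statistic for two groups: between-group variance
# divided by within-group variance (df_between = 1 for two groups).
grand = statistics.mean(treat + ctrl)
mt, mc = statistics.mean(treat), statistics.mean(ctrl)
ss_between = len(treat) * (mt - grand) ** 2 + len(ctrl) * (mc - grand) ** 2
ss_within = (sum((x - mt) ** 2 for x in treat)
             + sum((x - mc) ** 2 for x in ctrl))
F = ss_between / (ss_within / (len(treat) + len(ctrl) - 2))
print(round(E, 1), round(F, 1))
```

A large F means the difference between group means is large relative to the spread within each group, so it is unlikely to be noise.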

Covariance designs. Sometimes, measures of dependent variables may be influenced by extraneous variables called covariates. Covariates are those variables that are not of central interest to an experimental study, but should nevertheless be controlled in an experimental design in order to eliminate their potential effect on the dependent variable and thereby allow for a more accurate detection of the effects of the independent variables of interest. The experimental designs discussed earlier did not control for such covariates. A covariance design (also called a concomitant variable design) is a special type of pretest-posttest control group design where the pretest measure is essentially a measurement of the covariates of interest rather than of the dependent variables. The design notation is shown in Figure 10.3, where C represents the covariates:


Figure 10.3. Covariance design

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups as:

E = (O1 – O2)
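In practice this adjustment is an analysis of covariance (ANCOVA): the posttest comparison is corrected for the covariate using the pooled within-group regression slope. A minimal simulated sketch (variable names, data, and effect sizes are all invented for illustration):

```python
import random
import statistics

random.seed(3)

# Simulated covariate C (say, prior GPA) strongly influences the posttest;
# subjects are randomly assigned, and the treatment adds 5 points.
def make_group(n, effect):
    cov = [random.gauss(3.0, 0.4) for _ in range(n)]
    post = [10 * c + random.gauss(0, 2) + effect for c in cov]
    return cov, post

cov_t, post_t = make_group(80, effect=5.0)  # treatment group
cov_c, post_c = make_group(80, effect=0.0)  # control group

def sums(cov, post):
    mc, mp = statistics.mean(cov), statistics.mean(post)
    sxy = sum((c - mc) * (p - mp) for c, p in zip(cov, post))
    sxx = sum((c - mc) ** 2 for c in cov)
    return sxy, sxx

sxy_t, sxx_t = sums(cov_t, post_t)
sxy_c, sxx_c = sums(cov_c, post_c)
b = (sxy_t + sxy_c) / (sxx_t + sxx_c)  # pooled within-group slope

# Adjusted effect: the posttest mean difference, corrected for any chance
# difference in covariate means between the groups.
E_adj = (statistics.mean(post_t) - statistics.mean(post_c)
         - b * (statistics.mean(cov_t) - statistics.mean(cov_c)))
print(round(E_adj, 1))  # recovers roughly the simulated effect of 5
```

Removing the covariate's contribution shrinks the error variance, which is what makes the detection of the treatment effect more accurate.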

Factorial Designs

Two-group designs manipulate only a single factor. Factorial designs extend this to two or more independent variables (called factors), each with two or more levels. For instance, a 2 x 2 factorial design may cross two types of instruction with two levels of instructional time (say, 1.5 and 3 hours/week) to examine their effects on learning outcomes, as illustrated in Figure 10.4.

Figure 10.4. 2 x 2 factorial design

Factorial designs can also be depicted using a design notation, such as that shown on the right panel of Figure 10.4. R represents random assignment of subjects to treatment groups, X represents the treatment groups themselves (the subscripts of X represent the level of each factor), and O represents observations of the dependent variable. Notice that the 2 x 2 factorial design will have four treatment groups, corresponding to the four combinations of the two levels of each factor. Correspondingly, the 2 x 3 design will have six treatment groups, and the 2 x 2 x 2 design will have eight treatment groups. As a rule of thumb, each cell in a factorial design should have a minimum sample size of 20 (this estimate is derived from Cohen’s power calculations based on medium effect sizes). So a 2 x 2 x 2 factorial design requires a minimum total sample size of 160 subjects, with at least 20 subjects in each cell. As you can see, the cost of data collection can increase substantially with more levels or factors in your factorial design. Sometimes, due to resource constraints, some cells in such factorial designs may not receive any treatment at all; these are called incomplete factorial designs. Such incomplete designs hurt our ability to draw inferences about the incomplete factors.
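The sample-size rule of thumb above is easy to encode. A small helper (illustrative, not from the source) that returns the number of cells and the minimum total sample size:

```python
# Rule of thumb: a factorial design needs about 20 subjects per cell,
# where the number of cells is the product of the factor levels.
def min_sample_size(levels, per_cell=20):
    cells = 1
    for k in levels:
        cells *= k
    return cells, cells * per_cell

print(min_sample_size([2, 2]))      # (4, 80)
print(min_sample_size([2, 3]))      # (6, 120)
print(min_sample_size([2, 2, 2]))   # (8, 160)
```

Each added factor multiplies the cell count, which is why data-collection costs grow so quickly with design complexity.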

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline) from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for 3 hours/week of instructional time than for 1.5 hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that when interaction effects are significant, they dominate and render main effects irrelevant; it is not meaningful to interpret main effects in their presence.
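Main and interaction effects can be read off the cell means. A sketch with hypothetical 2 x 2 cell means (all numbers invented for illustration):

```python
# Hypothetical cell means of learning outcomes for a 2 x 2 design
# (instructional type x instructional time in hours/week).
means = {
    ("online", 1.5): 60, ("online", 3.0): 70,
    ("in-class", 1.5): 65, ("in-class", 3.0): 85,
}

# Main effect of instructional time: average outcome at 3.0 vs 1.5
# hours/week, averaged across instructional types.
time_effect = ((means[("online", 3.0)] + means[("in-class", 3.0)]) / 2
               - (means[("online", 1.5)] + means[("in-class", 1.5)]) / 2)

# Interaction: the effect of extra time differs by instructional type
# (a 20-point gain for in-class vs a 10-point gain for online).
interaction = ((means[("in-class", 3.0)] - means[("in-class", 1.5)])
               - (means[("online", 3.0)] - means[("online", 1.5)]))

print(time_effect, interaction)  # 15.0 10
```

Because the interaction is nonzero here, reporting the 15-point main effect of time alone would be misleading: the benefit of extra time depends on the instructional type.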

Hybrid Experimental Designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomized block design, the Solomon four-group design, and the switched replication design.

Randomized block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between a treatment group (receiving the same treatment) and a control group (see Figure 10.5). The purpose of this design is to reduce the “noise” or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.


Figure 10.5. Randomized blocks design.
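Assignment in a randomized block design can be sketched as randomizing separately within each block (block and subject labels are illustrative):

```python
import random

random.seed(5)

# Two homogeneous blocks; the same experiment is replicated within each.
blocks = {
    "students": [f"S{i}" for i in range(10)],
    "professionals": [f"P{i}" for i in range(10)],
}

assignment = {}
for block, subjects in blocks.items():
    shuffled = subjects[:]
    random.shuffle(shuffled)           # randomize within the block only
    half = len(shuffled) // 2
    assignment[block] = {
        "treatment": shuffled[:half],  # every block gets the same treatment
        "control": shuffled[half:],
    }

# Each block contributes its own treatment/control comparison, so the
# variance between blocks does not mask the treatment effect.
for block, groups in assignment.items():
    print(block, {name: len(members) for name, members in groups.items()})
```

Because treatment and control subjects come from the same block, any comparison within a block is free of between-block differences.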

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of the posttest-only and pretest-posttest control group designs, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs but not in posttest-only designs. The design notation is shown in Figure 10.6.


Figure 10.6. Solomon four-group design

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organizational contexts where organizational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.


Figure 10.7. Switched replication design.

Quasi-Experimental Designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organization is used as the treatment group, while another section of the same class or a different organization in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having had a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to a variety of selection-related threats:

  • Selection-maturation threat: the treatment and control groups maturing at different rates.
  • Selection-history threat: the treatment and control groups being differentially impacted by extraneous or historical events.
  • Selection-regression threat: the treatment and control groups regressing toward the mean between pretest and posttest at different rates.
  • Selection-instrumentation threat: the treatment and control groups responding differently to the measurement.
  • Selection-testing threat: the treatment and control groups responding differently to the pretest.
  • Selection-mortality threat: the treatment and control groups demonstrating differential dropout rates.

Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Many true experimental designs can be converted to quasi-experimental designs by omitting random assignment. For instance, the quasi-experimental version of the pretest-posttest control group design is called the nonequivalent groups design (NEGD), as shown in Figure 10.8, with random assignment R replaced by non-equivalent (non-random) assignment N. Likewise, the quasi-experimental version of the switched replication design is called the non-equivalent switched replication design (see Figure 10.9).


Figure 10.8. NEGD design.


Figure 10.9. Non-equivalent switched replication design.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression-discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cutoff score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol, while those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardized test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program. The design notation can be represented as follows, where C represents the cutoff score:


Figure 10.10. RD design.

Because of the use of a cutoff score, it is possible that the observed results may be a function of the cutoff score rather than the treatment, which introduces a new threat to internal validity. However, using the cutoff score also ensures that limited or costly resources are distributed to the people who need them the most rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
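A small simulation (all numbers invented for illustration) shows both points: raw group means are not comparable because assignment is based on the pretest, while the discontinuity in gains near the cutoff recovers the simulated treatment effect:

```python
import random

random.seed(11)

CUTOFF = 40  # students scoring below this on the preprogram test are treated

students = [{"pre": random.gauss(50, 15)} for _ in range(500)]
for s in students:
    s["treated"] = s["pre"] < CUTOFF
    # Posttest: everyone gains a little; the program adds a simulated boost.
    s["post"] = s["pre"] + 3 + (8 if s["treated"] else 0) + random.gauss(0, 4)

mean = lambda xs: sum(xs) / len(xs)
t = [s for s in students if s["treated"]]
c = [s for s in students if not s["treated"]]

# Raw posttest means are NOT comparable: the treated group was selected
# precisely because it scored lower.
m_t_post = mean([s["post"] for s in t])
m_c_post = mean([s["post"] for s in c])

# Evidence of a treatment effect is instead the discontinuity in the
# pre-to-post gains just on either side of the cutoff.
gains_below = [s["post"] - s["pre"] for s in t if CUTOFF - 5 <= s["pre"] < CUTOFF]
gains_above = [s["post"] - s["pre"] for s in c if CUTOFF <= s["pre"] < CUTOFF + 5]
jump = mean(gains_below) - mean(gains_above)
print(round(m_t_post, 1), round(m_c_post, 1), round(jump, 1))
```

Even though the treated group ends up with a lower posttest mean (it started lower), the jump in gains at the cutoff isolates the program's effect.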

Proxy pretest design. This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.


Figure 10.11. Proxy pretest design.

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.


Figure 10.12. Separate pretest-posttest samples design.

Nonequivalent dependent variable (NEDV) design. This is a single-group pre-post quasi-experimental design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other measure is not. For instance, if you are designing a new calculus curriculum for high school students, this curriculum is likely to influence students’ posttest calculus scores but not algebra scores. However, the posttest algebra scores may still vary due to extraneous factors such as history or maturation. Hence, the pre-post algebra scores can be used as a control measure, while that of pre-post calculus can be treated as the treatment measure. The design notation, shown in Figure 10.13, indicates the single group by a single N, followed by pretest O1 and posttest O2 for calculus and algebra for the same group of students. This design is weak in internal validity, but its advantage lies in not having to use a separate control group.

An interesting variation of the NEDV design is a pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine whether the theoretical prediction is matched in actual observations. This pattern-matching technique, based on the degree of correspondence between theoretical and observed patterns, is a powerful way of alleviating internal validity concerns in the original NEDV design.


Figure 10.13. NEDV design.

Perils of Experimental Research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, many experimental studies use inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artifact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’être of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to verify the adequacy of such tasks (by debriefing subjects after they perform the assigned task), conduct pilot tests (repeatedly, if necessary), and, if in doubt, use tasks that are simpler and more familiar to the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

  • Social Science Research: Principles, Methods, and Practices. Authored by: Anol Bhattacherjee. Provided by: University of South Florida. Located at: http://scholarcommons.usf.edu/oa_textbooks/3/. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

Experimental and Quasi-Experimental Research

Guide Title: Experimental and Quasi-Experimental Research
Guide ID: 64

You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, and the effect, Y, that X alone creates. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one caused the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.

Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning examples. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants -- one that is treated with a fertilizer named MegaGro, another group treated with a fertilizer named Plant!, and yet another that is not treated with fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects in the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight and whether they are blooming. This involves distributing these plants so that each plant in one group exactly matches characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution. Differences between groups will average out and become more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
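A quick simulation (all numbers invented for illustration) shows why randomization works: after a random split of an adequately sized sample, the groups' pre-existing characteristics are nearly identical before any treatment is applied:

```python
import random
import statistics

random.seed(9)

# 200 subjects with some pre-existing characteristic (e.g., prior skill).
skill = [random.gauss(100, 15) for _ in range(200)]

# Random assignment: shuffle the subject indices, then split in half.
idx = list(range(200))
random.shuffle(idx)
treat, ctrl = idx[:100], idx[100:]

m_t = statistics.mean(skill[i] for i in treat)
m_c = statistics.mean(skill[i] for i in ctrl)

# Chance differences average out with adequately sized groups, so the
# two groups are comparable at the outset.
print(round(m_t, 1), round(m_c, 1))
```

Unlike matching, this equates the groups (on average) on every variable at once, including the ones the experimenter never thought to measure.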

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might mess up the experiment and prevent displaying the causal relationship; and
  • to have larger groups with a carefully sorted constituency; preferably randomized, in order to keep accidental differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability, increase in creativity, or increase in reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies to compare subjective data, such as rating data, testing, surveying, and content analysis.

Rating essentially is developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Co-Variance) tests to measure differences between control and experimental groups, as well as different correlations between groups.

Since we're mentioning the subject of statistics, note that experimental or quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. It can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure of statistical analysis in experimental research.
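One simple way to estimate that probability is a permutation test: repeatedly relabel subjects at random and count how often chance alone produces a difference as large as the observed one (all scores below are invented for illustration):

```python
import random

random.seed(2)

# Invented posttest scores for small treatment and control groups.
treat = [78, 82, 75, 90, 85, 88, 79, 84]
ctrl = [72, 75, 70, 80, 74, 78, 71, 76]

observed = sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

# Permutation test: shuffle the pooled scores many times and count how
# often a random split produces a difference at least as large.
pooled = treat + ctrl
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:8]) / 8 - sum(pooled[8:]) / 8
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(observed, p_value)  # a small p-value: unlikely to be due to chance
```

A small p-value does not prove causation outright; it only says the observed difference would rarely arise from random relabeling alone.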

Example: Causality

Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by getting a plant to go with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, using no fertilizer at all on it, to compare how much change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that you use the same kind of plant; that both groups are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers, and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one receives more shade than the other and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods might lead to a solution. You then choose the method you want to test and formulate a hypothesis predicting the outcome of the test.

For example, you may want to improve student essays, but you do not believe that teacher feedback is enough. You hypothesize that possible methods for writing improvement include peer workshopping or reading more example essays. Favoring the former, you design an experiment to determine whether peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.
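The selection step described above can be sketched with Python's stdlib `random` module standing in for whatever randomization procedure a real study would use. The section names and roster size below are hypothetical.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical roster: three teachers, each teaching two sections.
sections = {
    "Teacher A": ["A1", "A2"],
    "Teacher B": ["B1", "B2"],
    "Teacher C": ["C1", "C2"],
}

# Randomly decide, per teacher, which section receives peer
# workshopping (treatment) and which receives teacher comments
# only (control).
assignment = {}
for teacher, (s1, s2) in sections.items():
    treated = random.choice([s1, s2])
    control = s2 if treated == s1 else s1
    assignment[teacher] = {"treatment": treated, "control": control}

# Draw a random sample of students from the pooled roster of all
# six classes (here, 60 hypothetical students).
students = [f"student-{i:02d}" for i in range(60)]
sample = random.sample(students, k=10)
```

Assigning each teacher both a treatment and a control section is itself a control: differences in teaching style are present on both sides of the comparison.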

Analyzing the data

The fourth step is to collect and analyze the data. This is not solely a matter of collecting the papers, reading them, and declaring your methods a success; you must show how successful they were. You must devise a scale by which to evaluate the data you receive, and therefore you must decide which indicators will, and will not, be important.

Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. These papers usually have the following format, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit your paper.

  • Abstract: Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction: Set the context of the experiment.
  • Review of Literature: Provide a review of the literature in the specific area of study to show what work has been done. The review should lead directly to the author's purpose for the study.
  • Statement of Purpose: Present the problem to be studied.
  • Participants: Describe in detail the participants involved in the study (e.g., how many). Provide as much information as possible.
  • Materials and Procedures: Clearly describe materials and procedures. Provide enough information that the experiment can be replicated, but not so much that it becomes unreadable. Include how participants were chosen, the tasks assigned to them, how the tasks were conducted, how data were evaluated, etc.
  • Results: Present the data in an organized fashion. If the data are quantifiable, analyze them through statistical means. Avoid interpretation at this point.
  • Discussion: After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and keep the interpretation as objective as possible. Hypothesizing is possible here.
  • Limitations: Discuss factors that affect the results. Here you can speculate how much generalization, or more likely transferability, is possible based on the results. This section is especially important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. Discuss the variables you could not control.
  • Conclusion: Synthesize all of the above sections.
  • References: Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting causal relationships in human settings.

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, simply because of the researcher's apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism has to do with rigor. Rigor is established through close, proper attention to randomizing groups, time spent on a study, and questioning techniques. This allows more effective application of standards of quantitative research to qualitative research.

Often, teachers cannot wait for piles of experimentation data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

Situations in English studies that might encourage use of experimental methods.

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze results and determine if this particular variable alone causes increased participation.

Transferability: Applying Results

Experimentation and quasi-experimentation allow researchers to generate transferable results, results whose acceptance depends upon experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading the results of experiments with a critical eye, ultimately decide whether and how results will be implemented. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These new results will strengthen the original study or discredit its findings.

Concerns English Scholars Express about Experiments

Researchers should carefully consider whether a particular method is feasible in humanities studies, and whether it will yield the desired information. Some researchers recommend addressing pertinent issues by combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue which can be explored through experimentation and a look at causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through scientific method is free of human inconsistencies. But since scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue may be compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit the ability to be reflective. An ethical researcher thinks critically about results and reports them after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample may not be representative of a population, because the researcher does not have an opportunity to ensure a representative sample. For example, subjects could be limited to one location, limited in number, studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of determining the individual effect of each variable more precisely. It also becomes easier to detect interactions between variables.

Even so, experiments may produce artificial results. It can be argued that variables are manipulated so the experiment measures what researchers want to examine; therefore, the results are merely contrived products with no bearing on material reality. Artificial results are difficult to apply in practical situations, making generalization from the results of a controlled study questionable. Experimental research essentially decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. Results may also be difficult to replicate.

Groups in an experiment may also not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class which meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from that in the study. Long-term studies are expensive and hard to reproduce. And although the same hypotheses are often tested by different researchers, various factors complicate attempts to compare or synthesize the results. It is nearly impossible to be as rigorous as the natural-sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.
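The first problem, small samples failing to cancel out extraneous variables, can be illustrated by simulation. The sketch below is purely hypothetical: it invents a confounding variable (hours of prior writing practice), randomly splits simulated students into two groups, and shows that the imbalance in the confounder shrinks as group size grows.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def confounder_gap(n):
    """Randomly split 2*n simulated students into two groups and return
    the gap between the group means of a confounding variable (here,
    hours of prior writing practice, drawn uniformly from 0-20)."""
    hours = [random.uniform(0, 20) for _ in range(2 * n)]
    random.shuffle(hours)
    group_a, group_b = hours[:n], hours[n:]
    return abs(sum(group_a) / n - sum(group_b) / n)

def average_gap(n, trials=500):
    """Average the imbalance over many random splits of size n."""
    return sum(confounder_gap(n) for _ in range(trials)) / trials

small_gap, large_gap = average_gap(5), average_gap(200)
# With only 5 students per group, the random split typically leaves a
# sizable imbalance in the confounder; with 200 per group it nearly
# cancels out.
```

Randomization balances extraneous variables in expectation at any sample size, but only larger groups make it likely that they actually balance out in any single experiment.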

When a human population is involved, experimental research raises the question of whether behavior can be predicted or studied with validity, and human response can be difficult to measure. Human behavior depends on individual responses, and rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see whether this behavior results in fewer cavities; we are relying on previous experimentation and transferring it to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Qualitative methods such as case study, ethnography, observational research, and interviews can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We also have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally, who screamed "I love writing!" ten times before she wrote her essay and produced a quality paper, so that all the other faculty members conclude from this anecdote that every other student should employ the same technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. Readers may not be aware of these biases and should approach experimental results with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

Ethical Concerns

Experimental research may be manipulated on both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research to naive readers encounter ethical concerns. While creating an experiment, they may let certain objectives and intended uses of the results drive and skew it: looking for specific results, they may ask questions and examine data that support only the desired conclusions, ignoring conflicting findings. Similarly, researchers seeking support for a particular plan may look only at findings which support that goal, dismissing conflicting research.

Editors and journals, too, do not publish only trouble-free material. And as readers of experiments, members of the press might report selected and isolated parts of a study to the public, in effect transferring to the general population findings the researcher never intended to generalize. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high blood pressure by reducing cholesterol. But that bit of information was taken out of context: the actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers, and readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project that produces no significant results, it may be tempting to manipulate the data to show significance in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that gain validity by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise if researchers do not report all results, or otherwise alter them. This phenomenon is counterbalanced, however, by the fact that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity, but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. Experimental researchers who hope to make an impact on the community of professionals in their field must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web site presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. It includes a description of the study, the rationale for conducting it, and its results and implications.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28) The Times Educational Supplement . 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

This paper argues that the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluations, guided self-evaluation (using prepared guidelines,) and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational Research: an Introduction . (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation, with an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Unpublished typescript. Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminancies of rationality. Curriculum Inquiry, 26 , 181-92.

Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. Daniels examines Eisenberg's ideas on indeterminacy, methods, and evidence, what he is against, and what we should make of what he says.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danzinger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.

Dudley-Marling and Rhodes address some problems they met in their experimental approach to a study of reading comprehension. The article discusses the limitations of experimental research and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.

Edgington explores ways in which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.

Eisenberg's response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy undermines such methods and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.

Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point by trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.

Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. Floden places high value on teachers' judgment and on the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14 , 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with the control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.

The aims of classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27, 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education, 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., & Tierney, J. P. (1993, October). The fallibility of comparison groups. Evaluation Review, 556-571.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher & C. Selfe (Eds.), Critical Perspectives on Computers and Composition Instruction (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G., Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16, 261-278.

Hillocks conducted a study using three treatments (observational or data-collecting activities prior to writing, the use or absence of revision, and either brief or lengthy teacher comments) to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing. Newton, MA: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research. Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: Enhancing the preparation of teachers? Teaching Education, 8, 123-131.

Researchers looked at one teacher candidate in a class whose members designed their own research projects, each tied to a question they wanted answered about teaching. The goal of the study was to see if preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of Sisyphus? Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: Selected resources. Teaching Education, 8, 155-160.

An annotated bibliography on educational research, including milestones of thought, seminal works, successful outcomes, and immediate practical applications.

Lauer, J. M., & Asher, J. W. (1988). Composition research: Empirical designs. New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanities. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49, 1-19.

Contextual importance has been largely ignored by traditional research approaches in the social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly, 235-260.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25, 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are having the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs. San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10(4), 257-266.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., et al. (1992). The impact of cooperative writing. In J. R. Hayes et al. (Eds.), Reading empirical research studies: The rhetoric of research (pp. 371-384). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy. Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation. London: Aspen Systems Corporation.

The lack of research in written expression is addressed, and an application of the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: Becoming qualitative researchers and reflective practitioners. Teaching Education, 8, 109-119.

An education professor recounts his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals, but was ultimately rewarded with excitement toward research and a recognized connection to practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26.

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22, 19-21.

In his paper, Reiser begins by stating the importance of research in advancing the field of education and points out that graduate students in instructional design often lack the skills needed to conduct research. The paper then outlines the practicum in the Instructional Systems Program at Florida State University, which includes: 1) planning and conducting an experimental research study; 2) writing a manuscript describing the study; and 3) giving an oral presentation of the research findings.

Report on education research. (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This independent bi-weekly newsletter on research in education and learning has been published since September 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presupposition that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4, 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article, but suggests that the concept of "scientific" should not be regarded in absolute terms and recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two methods of classroom instruction chosen by the teacher is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21(5), 5-8.

The controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33(3), 4-11.

Recapitulates the main features of an ongoing debate between advocates for using the vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate between traditional research methodology and qualitative methods and vocabularies. Definitely worth a read for graduate students.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal, 251-255.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research Designs. Newbury Park, CA: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78, 356-363.

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both. (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14.

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (1969, March). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math, 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics. Bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Citation Information

Luann Barnes, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, and Mike Palmquist. (1994-2024). Experimental and Quasi-Experimental Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.

Copyright Information

Copyright © 1994-2024 Colorado State University and/or this site's authors, developers, and contributors. Some material displayed on this site is used with permission.

Experimental Method In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into control and experimental groups.

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective. The researcher's views and opinions should not affect a study's results. This is good as it makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram's experiment on obedience and Loftus and Palmer's car crash study.

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling's hospital study on obedience.

  • Strength : behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable, as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard's attachment research (1989) compared the long-term development of children who had been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.
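The fictional policy-change example above can be sketched in a few lines of code. Everything here is invented for illustration (the scores, cohort sizes, and effect size are all made up); the point is only that the researcher observes a difference between naturally occurring groups rather than assigning anyone to a condition.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the made-up data is reproducible

# Hypothetical achievement scores (0-100) for two naturally occurring cohorts:
# students taught before and after a funding increase. The grouping variable
# is not manipulated by the researcher; it simply occurred in the real world.
before_policy = [random.gauss(68, 10) for _ in range(200)]
after_policy = [random.gauss(73, 10) for _ in range(200)]

# The researcher can only observe the difference, not assign students to
# cohorts, so extraneous variables (other changes over time) stay uncontrolled.
observed_difference = mean(after_policy) - mean(before_policy)
print(f"Observed difference in mean achievement: {observed_difference:.1f} points")
```

Because nothing here is randomly allocated, any observed difference could also reflect other events that happened alongside the policy change, which is exactly the loss of control that distinguishes a natural experiment from a lab experiment.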

  • Strength : behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter's body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes). It is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
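As a minimal sketch of that principle, the helper below (a hypothetical function, not from any named study) shuffles the participant pool and deals it round-robin into the named conditions, so every participant has an equal chance of landing in each group:

```python
import random

def randomly_allocate(participants, conditions, seed=None):
    """Shuffle the pool, then deal participants round-robin into conditions."""
    rng = random.Random(seed)  # optional seed makes the allocation reproducible
    pool = list(participants)
    rng.shuffle(pool)
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# Twenty participant IDs split into a control and an experimental condition.
groups = randomly_allocate(range(1, 21), ["control", "experimental"], seed=7)
print({name: len(members) for name, members in groups.items()})
# → {'control': 10, 'experimental': 10}
```

Dealing round-robin rather than cutting the shuffled list in half keeps group sizes as equal as possible even when the pool does not divide evenly among the conditions.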

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.

17 Advantages and Disadvantages of Experimental Research Method in Psychology

There are numerous research methods used to determine if theories, ideas, or even products have validity in a market or community. One of the most common options utilized today is experimental research. Its popularity is due to the fact that it becomes possible to take complete control over a single variable while conducting the research efforts. This process makes it possible to manipulate the other variables involved to determine the validity of an idea or the value of what is being proposed.

Outcomes through experimental research come through a process of administration and monitoring. This structure makes it possible for researchers to determine the genuine impact of what is under observation. It is a process which creates outcomes with a high degree of accuracy in almost any field.

The conclusion can then offer a final value potential to consider, making it possible to know if a continued pursuit of the information is profitable in some way.

The pros and cons of experimental research show that this process is highly efficient, creating data points for evaluation with speed and regularity. It is also an option that can be manipulated easily when researchers want their work to draw specific conclusions.

List of the Pros of Experimental Research

1. Experimental research offers the highest levels of control. The procedures involved with experimental research make it possible to isolate specific variables within virtually any topic. This advantage makes it possible to determine if outcomes are viable. Variables are controllable on their own or in combination with others to determine what can happen when each scenario is brought to a conclusion. It is a benefit which applies to ideas, theories, and products, offering a significant advantage when accurate results or metrics are necessary for progress.

2. Experimental research is useful in every industry and subject. Since experimental research offers higher levels of control than other methods which are available, it offers results which provide higher levels of relevance and specificity. The outcomes that are possible come with superior consistency as well. It is useful in a variety of situations which can help everyone involved to see the value of their work before they must implement a series of events.

3. Experimental research replicates natural settings with significant speed benefits. This form of research makes it possible to replicate specific environmental settings within the controls of a laboratory setting. This structure makes it possible for the experiments to replicate variables that would require a significant time investment otherwise. It is a process which gives the researchers involved an opportunity to seize significant control over the extraneous variables which may occur, creating limits on the unpredictability of elements that are unknown or unexpected when driving toward results.

4. Experimental research offers results which can occur repetitively. The reason that experimental research is such an effective tool is that it produces a specific set of results from documented steps that anyone can follow. Researchers can duplicate the variables used during the work, then control the variables in the same way to create an exact outcome that duplicates the first one. This process makes it possible to validate scientific discoveries, understand the effectiveness of a program, or provide evidence that products address consumer pain points in beneficial ways.
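The claim above, that a fully documented procedure yields a duplicable outcome, can be illustrated with a toy simulation. The function and numbers below are invented for illustration: when every input is fixed, including the random seed, an independent replication returns exactly the same result.

```python
import random

def run_trial(seed):
    """A fully documented toy procedure: same inputs and same seed
    produce the same simulated outcome on every replication."""
    rng = random.Random(seed)
    scores = [rng.gauss(50, 5) for _ in range(100)]  # simulated measurements
    return round(sum(scores) / len(scores), 3)

first_run = run_trial(seed=2024)
replication = run_trial(seed=2024)
assert first_run == replication  # the replication duplicates the original result
```

Real experiments replicate for the same reason: the documented steps pin down every controllable variable, so repeating the steps reproduces the outcome within the limits of uncontrolled variation.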

5. Experimental research offers conclusions which are specific. Thanks to the high levels of control which are available through experimental research, the results which occur through this process are usually relevant and specific. Researchers can determine failure, success, or some other specific outcome because of the data points which become available from their work. That is why it is easier to take an idea of any type to the next level with the information that becomes available through this process. There is always a need to bring an outcome to its natural conclusion during variable manipulation to collect the desired data.

6. Experimental research works with other methods too. You can use experimental research with other methods to ensure that the data received from this process is as accurate as possible. The results that researchers obtain must be able to stand on their own for verification to have findings which are valid. This combination of factors makes it possible to become ultra-specific with the information being received through these studies while offering new ideas to other research formats simultaneously.

7. Experimental research allows for the determination of cause-and-effect. Because researchers can manipulate variables when performing experimental research, it becomes possible to look for the different cause-and-effect relationships which may exist when pursuing a new thought. This process allows the parties involved to dig deeply into the possibilities which are present, demonstrating whatever specific benefits are possible when outcomes are reached. It is a structure which seeks to understand the specific details of each situation as a way to create results.

List of the Cons of Experimental Research

1. Experimental research suffers from the potential of human errors. Experimental research requires those involved to maintain specific levels of variable control to create meaningful results. This process comes with a high risk of experiencing an error at some stage of the process when compared to other options that may be available. When this issue goes unnoticed as the results become transferable, the data it creates will reflect a misunderstanding of the issue under observation. It is a disadvantage which could eliminate the value of any information that develops from this process.

2. Experimental research is a time-consuming process to endure. Experimental research must isolate each possible variable when a subject matter is being studied. Then it must conduct testing on each element under consideration until a resolution becomes possible, which then requires data collection to occur. This process must continue to repeat itself for any findings to be valid from the effort. Then combinations of variables must go through evaluation in the same manner. It is a field of research that sometimes costs more than the potential benefits or profits that are achievable when a favorable outcome is eventually reached.

3. Experimental research creates unrealistic situations that still receive validity. The controls which are necessary when performing experimental research increase the risks of the data becoming inaccurate or corrupted over time. It will still seem authentic to the researchers involved because they may not see that a variable represents an unrealistic situation. The variables can also skew in a specific direction through the efforts of the researchers involved. The research environment can be extremely different from real-life circumstances, which can invalidate the value of the findings.

4. Experimental research struggles to measure human responses. People experience stress in countless ways during an average day. Personal drama, political arguments, and workplace deadlines can all influence the data researchers collect when measuring human response tendencies. What happens inside a controlled situation is not always what happens in real-life scenarios. That is why this method is not always the correct choice in group or individual settings where a human response requires measurement.

5. Experimental research does not always create an objective view. Research must be objective to provide effective results. When researchers can manipulate variables however they choose, the risk increases that personal bias, unconscious or otherwise, will influence the results. People can shift their focus because they become uncomfortable, are excited by the event, or want to manipulate the results to suit a personal agenda. Data samples then reflect only that one group instead of offering information about an entire demographic.

6. Experimental research can experience influences from real-time events. The issue of human error in experimental research often involves the researchers conducting the work, but it can also affect the people being studied. Numerous outside variables can impact responses or outcomes without the researchers' knowledge. External triggers, such as the environment, political stress, or physical attraction, can alter a person's regular perspective without it being apparent. Internal triggers, such as claustrophobia or social interactions, can alter responses as well. It is challenging to know whether the data collected through this process is honest.

7. Experimental research cannot always control all of the variables. Although experimental research attempts to control every possible variable or combination, laboratory settings cannot achieve this level of control in every circumstance. If data must be collected in a natural setting, the risk of inaccurate information rises. Some research efforts emphasize one set of variables over another because of a perceived level of importance. That is why it becomes virtually impossible in some situations to apply the obtained results to the overall population. Groups are not always comparable, even though this process provides greater transferability than other methods of research.

8. Experimental research does not always seek to find explanations. The goal of experimental research is to answer the questions people have when evaluating specific data points, with little concern for why specific outcomes occur. Yet between the black-and-white extremes of "it works" and "it does not," there are many shades of gray where additional information waits to be discovered. This method ignores that information, settling for whatever answers are found at the extremes instead.

9. Experimental research does not make exceptions for ethical or moral violations. One of the most significant disadvantages of experimental research is that manipulating some variables can create ethical or moral violations. Some variables cannot be manipulated in ways that are safe for people, the environment, or society as a whole. When researchers encounter this situation, they must either transfer their work to another method, continue on to produce incomplete results, fabricate results, or set their personal convictions aside to work on the variable anyway.

10. Experimental research may offer results that apply to only one situation. Although one of the advantages of experimental research is that it allows others to duplicate a study and obtain the same results, this is not always the case. Some results found with this method may apply only to one specific situation. If the process is used to determine highly detailed data points that require unique circumstances to obtain, then future researchers may find that replicating the results is challenging.

These experimental research pros and cons describe a process that can help determine the validity of an idea in any industry. The only way to achieve the advantages is to place tight controls over the process and then reduce any potential for bias to appear within the system. This makes it possible to determine whether a new idea of any type offers current or future value.


Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans . Revised on June 21, 2023.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results; doing so minimizes several types of research bias, particularly sampling bias , survivorship bias , and attrition bias . If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
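The link between study size and statistical power can be made concrete with a small simulation (a hypothetical Monte Carlo sketch, not part of the original guide; the function name and parameter values are my own). It draws a control and a treated group with a fixed effect size, applies a simple two-sided z-test with known spread, and counts how often the effect is detected:

```python
import math
import random

def estimated_power(n, effect=0.5, sigma=1.0, z_crit=1.96, trials=2000, seed=42):
    """Estimate the power of a two-sided z-test (known sigma) by simulation."""
    rng = random.Random(seed)
    se = sigma * math.sqrt(2.0 / n)   # standard error of the difference in means
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, sigma) for _ in range(n)]
        treated = [rng.gauss(effect, sigma) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        if abs(diff / se) > z_crit:   # reject H0 at alpha = 0.05
            hits += 1
    return hits / trials

# The same medium-sized effect is detected far more reliably with more subjects:
print(estimated_power(10))    # ≈ 0.2
print(estimated_power(100))   # ≈ 0.9
```

With ten subjects per group the medium effect is usually missed; with a hundred it is almost always found, which is exactly the confidence the text refers to.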

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
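The two randomization schemes above can be sketched in a few lines of Python (an illustrative sketch only; the function names and the age-group blocking variable are assumptions, not from the original guide):

```python
import random
from collections import defaultdict

def completely_randomized(subjects, treatments, seed=0):
    """Shuffle all subjects, then deal them round-robin into treatment groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    return {t: pool[i::len(treatments)] for i, t in enumerate(treatments)}

def randomized_block(subjects, block_of, treatments, seed=0):
    """Group subjects by a shared characteristic, then randomize within each block."""
    blocks = defaultdict(list)
    for s in subjects:
        blocks[block_of(s)].append(s)
    assignment = {t: [] for t in treatments}
    for i, (_, members) in enumerate(sorted(blocks.items())):
        within = completely_randomized(members, treatments, seed=seed + i)
        for t in treatments:
            assignment[t].extend(within[t])
    return assignment

# 20 subjects blocked by a shared characteristic (age group), three treatment levels:
subjects = [("s%02d" % i, "young" if i < 10 else "old") for i in range(20)]
groups = randomized_block(subjects, block_of=lambda s: s[1],
                          treatments=["control", "low", "high"], seed=1)
```

The block design guarantees that each treatment group contains a similar mix of young and old subjects, which is the whole point of stratifying before randomizing.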

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
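A minimal sketch of counterbalancing (illustrative only; it uses simple order reversal rather than a full Latin square) alternates subjects between a treatment order and its reverse, so that each treatment appears equally often early and late across the sample:

```python
def counterbalanced_orders(subject_ids, treatments):
    """Assign alternating forward/reversed treatment orders to subjects."""
    forward = list(treatments)
    backward = forward[::-1]
    return {sid: (forward if i % 2 == 0 else backward)
            for i, sid in enumerate(subject_ids)}

orders = counterbalanced_orders(["p1", "p2", "p3", "p4"], ["A", "B", "C"])
# p1 and p3 receive A, B, C; p2 and p4 receive C, B, A
```

With an even number of subjects, any systematic order effect (fatigue, practice) is spread evenly over the treatments instead of being confounded with one of them.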


Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Cite this Scribbr article


Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/methodology/experimental-design/


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


From their school days onward, students perform scientific experiments whose results demonstrate and prove the laws and theorems of science. These experiments rest on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.

Table of Contents

What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, used to measure the differences of the second set. The best example of experimental research methods is quantitative research .

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation of the research study. An effective research design also helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when a group, or multiple groups, are kept under observation after factors of cause and effect have been applied. This design helps researchers understand whether further investigation of the observed groups is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design. However, the difference between the two lies in the assignment of the control group. In this research design, an independent variable is manipulated, but the participants are not randomly assigned to groups. This type of research design is used in field settings where random assignment is either irrelevant or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design lacks basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to obtain valid and sustainable evidence, so incorrect statistical analysis can undermine the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has some type of limitations . You should anticipate and incorporate those limitations into your conclusion, as well as the basic research design. Include a statement in your manuscript about any perceived limitations, and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
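The random split described above can be sketched as follows (a hypothetical illustration; the function and sample names are my own, not from the article):

```python
import random

def split_in_half(samples, seed=7):
    """Randomly shuffle the samples, then split them into two equal groups."""
    rng = random.Random(seed)
    pool = list(samples)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]

# Half the plant samples photosynthesize in sunlight, half stay in the dark box:
sunlight_group, dark_box_group = split_in_half(["plant_%d" % i for i in range(10)])
```

Shuffling before splitting ensures that any pre-existing differences among the plants are spread randomly across the two conditions rather than being confounded with the sunlight treatment.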

Experimental research is often the final stage of the research process and is considered to provide conclusive and specific results. But it is not suited to every research question. It requires considerable resources, time, and money, and is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured in the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental.

The differences between an experimental and a quasi-experimental design are:

  • The assignment of the control group in quasi-experimental research is non-random, unlike in a true experimental design, where it is random.
  • Experimental research always has a control group; a quasi-experimental study may not.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.


FutureofWorking.com

8 Advantages and Disadvantages of Experimental Research

Experimental research has become an important part of human life. Babies conduct their own rudimentary experiments (such as putting objects in their mouth) to learn about the world around them, while older children and teens conduct experiments at school to learn more science. Ancient scientists used experimental research to prove their hypotheses correct; Galileo Galilei and Antoine Lavoisier, for instance, did various experiments to uncover key concepts in physics and chemistry, respectively. The same goes for modern experts, who utilize this scientific method to see if new drugs are effective, discover treatments for illnesses, and create new electronic gadgets (among others).

Experimental research clearly has its advantages, but is it really a perfect way to verify and validate scientific concepts? Many people point out that it has several disadvantages and might even be harmful to subjects in some cases. To learn more about these, let’s take a look into the pros and cons of this type of procedure.

List of Advantages of Experimental Research

1. It gives researchers a high level of control. When people conduct experimental research, they can manipulate the variables so they can create a setting that lets them observe the phenomena they want. They can remove or control other factors that may affect the overall results, which means they can narrow their focus and concentrate solely on two or three variables.

In the pharmaceutical industry, for example, scientists conduct studies in which they give a new kind of drug to one group of subjects and a placebo to another group. They then give the same food to all subjects and even house them in the same area to ensure that they won't be exposed to other factors that may affect how the drugs work. At the end of the study, the researchers analyze the results to see how the new drug affects the subjects and to identify its side effects and adverse reactions.

2. It allows researchers to utilize many variations. As mentioned above, researchers have almost full control when they conduct experimental research studies. This lets them manipulate variables and use as many (or as few) variations as they want to create an environment where they can test their hypotheses — without destroying the validity of the research design. In the example above, the researchers can opt to add a third group of subjects (in addition to the new drug group and the placebo group), who would be given a well-known and widely available drug that has been used by many people for years. This way, they can compare how the new drug performs compared to the placebo drug as well as the widely used drug.

3. It can lead to excellent results. The very nature of experimental research allows researchers to easily understand the relationships between the variables, the subjects, and the environment and identify the causes and effects in whatever phenomena they’re studying. Experimental studies can also be easily replicated, which means the researchers themselves or other scientists can repeat their studies to confirm the results or test other variables.

4. It can be used in different fields. Experimental research is usually utilized in the medical and pharmaceutical industries to assess the effects of various treatments and drugs. It’s also used in other fields like chemistry, biology, physics, engineering, electronics, agriculture, social science, and even economics.

List of Disadvantages of Experimental Research

1. It can lead to artificial situations. In many scenarios, experimental researchers manipulate variables in an attempt to replicate real-world scenarios to understand the function of drugs, gadgets, treatments, and other new discoveries. This works most of the time, but there are cases when researchers over-manipulate their variables and end up creating an artificial environment that's vastly different from the real world. The researchers can also skew the study to fit whatever outcome they want (intentionally or unintentionally) and compromise the results of the research.

2. It can take a lot of time and money. Experimental research can be costly and time-consuming, especially if the researchers have to conduct numerous studies to test each variable. If the studies are supported by the government, they can consume millions or even billions of taxpayers’ dollars, which could otherwise have been spent on other community projects such as education, housing, and healthcare. If the studies are privately funded, they can be a huge burden on the companies involved, which in turn pass the costs on to their customers. As a result, consumers have to spend a large amount if they want to access these new treatments, gadgets, and other innovations.

3. It can be affected by errors. Just like any kind of research, experimental research isn’t always perfect. There might be blunders in the research design or in the methodology as well as random mistakes that can’t be controlled or predicted, which can seriously affect the outcome of the study and require the researchers to start all over again.

There might also be human errors; for instance, the researchers may allow their personal biases to affect the study. If they’re conducting a double-blind study (in which neither the researchers nor the subjects know which group is the control), the researchers might become aware of which subjects belong to the control group, destroying the validity of the research. The subjects may also make mistakes. There have been cases (particularly in social experiments) in which the subjects give answers that they think the researchers want to hear instead of truthfully saying what’s on their minds.

4. It might not be feasible in some situations. There are times when the variables simply can’t be manipulated or when the researchers would need an impossibly large amount of money to conduct the study. There are also cases when the study would infringe on the subjects’ human rights and/or give rise to ethical issues. In these scenarios, it’s better to choose another kind of research design (such as review, meta-analysis, descriptive, or correlational research) instead of insisting on the experimental research method.

Experimental research has become an important part of the history of the world and has led to numerous discoveries that have made people’s lives better, longer, and more comfortable. However, it can’t be denied that it also has its disadvantages, so it’s up to scientists and researchers to find a balance between the benefits it provides and the drawbacks it presents.

USC Libraries Research Guides

Organizing Your Social Sciences Research Paper

Limitations of the Study

The limitations of the study are those characteristics of design or methodology that impacted or influenced the interpretation of the findings from your research. Study limitations are the constraints on your ability to generalize from the results, to describe applications to practice, or to draw on the utility of the findings. They may result from the ways in which you initially chose to design the study, from the method used to establish internal and external validity, or from unanticipated challenges that emerged during the study.

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Theofanidis, Dimitrios and Antigoni Fountouki. "Limitations and Delimitations in the Research Process." Perioperative Nursing 7 (September-December 2018): 155-163.

Importance of...

Always acknowledge a study's limitations. It is far better that you identify and acknowledge your study’s limitations than to have them pointed out by your professor and have your grade lowered because you appeared to have ignored them or didn't realize they existed.

Keep in mind that acknowledgment of a study's limitations is an opportunity to make suggestions for further research. If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study.

Acknowledgment of a study's limitations also provides you with opportunities to demonstrate that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only to discover new knowledge but also to confront assumptions and explore what we don't know.

Claiming limitations is a subjective process because you must evaluate the impact of those limitations. Don't just list key weaknesses and the magnitude of a study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, the limitations of your study may have impacted the results and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. ultimately matter and, if so, to what extent?

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations. However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in the introduction of your paper.

Here are examples of limitations related to methodology and the research process that you may need to describe, along with a discussion of how they possibly impacted your results. Note that descriptions of limitations should be stated in the past tense because they were discovered after you completed your research.

Possible Methodological Limitations

  • Sample size -- the number of units of analysis you use in your study is dictated by the type of research problem you are investigating. If your sample size is too small, it will be difficult to find significant relationships in the data, as statistical tests normally require a sufficiently large sample to be representative of the population to which results will be generalized or transferred. Note that sample size is generally less relevant in qualitative research if it is explained in the context of the research problem.
  • Lack of available and/or reliable data -- a lack of data, or of reliable data, will likely require you to limit the scope of your analysis or the size of your sample, or it can be a significant obstacle in finding a trend or a meaningful relationship. You need to not only describe these limitations but provide cogent reasons why you believe data is missing or is unreliable. However, don’t just throw up your hands in frustration; use this as an opportunity to describe a need for future research based on designing a different method for gathering data.
  • Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, though, consult with a librarian! In cases when a librarian has confirmed that there is little or no prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design]. Note again that discovering a limitation can serve as an important opportunity to identify new gaps in the literature and to describe the need for further research.
  • Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need for future researchers to revise the specific method for gathering data.
  • Self-reported data -- whether you are relying on pre-existing data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you have to take the accuracy of what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data can contain several potential sources of bias that you should be alert to and note as limitations. These biases become apparent if they are incongruent with data from other sources. They are: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency, but attributing negative events and outcomes to external forces]; and (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested by other data].
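The sample-size point above can be made concrete with a quick power calculation. The sketch below is not from the guide; the effect sizes and the 80% power target are illustrative assumptions. It uses the standard normal-approximation formula for comparing two group means, and shows how quickly the required sample grows as the effect to be detected shrinks:

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a difference
    in means of `delta` between two groups with common standard
    deviation `sigma`, for a two-sided test at significance `alpha`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value of the test
    z_beta = z.inv_cdf(power)           # quantile giving desired power
    n = 2 * ((z_alpha + z_beta) ** 2) * (sigma / delta) ** 2
    return math.ceil(n)

print(two_sample_n(delta=0.5, sigma=1.0))  # moderate effect -> 63 per group
print(two_sample_n(delta=0.2, sigma=1.0))  # small effect -> 393 per group
```

Since the required sample scales with (sigma/delta) squared, halving the detectable effect roughly quadruples the sample needed, which is why underpowered studies so often fail to find significant relationships.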

Possible Limitations of the Researcher

  • Access -- if your study depends on having access to people, organizations, data, or documents and, for whatever reason, access is denied or limited in some way, the reasons for this need to be described. Also, include an explanation of why being denied or limited access did not prevent you from following through on your study.
  • Longitudinal effects -- unlike your professor, who can literally devote years [even a lifetime] to studying a single topic, the time available to investigate a research problem and to measure change or stability over time is constrained by the due date of your assignment. Be sure to choose a research problem that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure whether you can complete your research within the confines of the assignment's due date, talk to your professor.
  • Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, event, or thing is viewed or shown in a consistently inaccurate way. Bias is usually negative, though one can have a positive bias as well, especially if that bias reflects your reliance on research that only supports your hypothesis. When proofreading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the manner in which you have ordered events, people, or places, and how you have chosen to represent a person, place, or thing, to name a phenomenon, or to use words with a positive or negative connotation. NOTE: If you detect bias in prior research, it must be acknowledged and you should explain what measures were taken to avoid perpetuating that bias. For example, if a previous study only used boys to examine how music education supports effective math skills, describe how your research expands the study to include girls.
  • Fluency in a language -- if your research focuses, for example, on measuring the perceived value of after-school tutoring among Mexican-American ESL [English as a Second Language] students and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic or to speak with these students in their primary language. This deficiency should be acknowledged.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Senunyeme, Emmanuel K. Business Research Methods. Powerpoint Presentation. Regent University of Science and Technology; ter Riet, Gerben et al. “All That Glitters Isn't Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLOS One 8 (November 2013): 1-6.

Structure and Writing Style

Information about the limitations of your study is generally placed either at the beginning of the discussion section of your paper, so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section.

If you determine that your study is seriously flawed due to important limitations, such as an inability to acquire critical data, consider reframing it as an exploratory study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in a new study.

But, do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic . If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to revise your study.

When discussing the limitations of your research, be sure to:

  • Describe each limitation in detailed but concise terms;
  • Explain why each limitation exists;
  • Provide the reasons why each limitation could not be overcome using the method(s) chosen to acquire or gather the data [cite to other studies that had similar problems when possible];
  • Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and,
  • If appropriate, describe how these limitations could point to the need for further research.

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't interview a group of people that you later wish you had]. If this is the case, don't panic. Acknowledge it, and explain how applying a different or more robust methodology might address the research problem more effectively in a future study. An underlying goal of scholarly research is not only to show what works, but to demonstrate what doesn't work or what needs further clarification.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Ioannidis, John P.A. "Limitations are not Properly Acknowledged in the Scientific Literature." Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh. Writing the Empirical Social Science Research Paper: A Guide for the Perplexed. January 24, 2012. Academia.edu; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com; What Is an Academic Paper? Institute for Writing Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

Writing Tip

Don't Inflate the Importance of Your Findings!

After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings could be perceived by your readers as an attempt to hide its flaws or encourage a biased interpretation of the results. A small measure of humility goes a long way!

Another Writing Tip

Negative Results are Not a Limitation!

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated. Or, perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may very well be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Lewis, George H. and Jonathan F. Lewis. “The Dog in the Night-Time: Negative Evidence in Social Research.” The British Journal of Sociology 31 (December 1980): 544-558.

Yet Another Writing Tip

Sample Size Limitations in Qualitative Research

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgment about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used.

Boddy, Clive Roland. "Sample Size for Qualitative Research." Qualitative Market Research: An International Journal 19 (2016): 426-432; Huberman, A. Michael and Matthew B. Miles. "Data Management and Analysis Methods." In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444; Blaikie, Norman. "Confounding Issues Related to Determining Sample Size in Qualitative Research." International Journal of Social Research Methodology 21 (2018): 635-641; Oppong, Steward Harrison. "The Problem of Sampling in qualitative Research." Asian Journal of Management Sciences and Education 2 (2013): 202-210.

Last Updated: Apr 11, 2024. URL: https://libguides.usc.edu/writingguide

Scientific Research and Methodology : An introduction to quantitative research and statistics

9 Study Design Limitations

So far, you have learnt to ask a RQ and design studies. In this chapter, you will learn to identify:

  • limitations to internal validity.
  • limitations to external validity.
  • limitations to ecological validity.


9.1 Introduction

The type of study and the study design determine how the results of the study should be interpreted. Ideally, a study would be perfectly externally and internally valid; in practice this is very difficult to achieve. Practically every study has limitations. The results of a study should be interpreted in light of these limitations. Limitations are not necessarily problems.

Limitations generally can be discussed through three components:

  • Internal validity (Sect. 3.8 ): Discuss any limitations to internal validity due to the study design (such as identifying possible confounding variables). This is related to the effectiveness of the study within the sample (Sect. 9.2 ).
  • External validity (Sect. 3.9 ): Discuss how well the sample represents the intended population. This is related to the generalisability of the study to the intended population (Sect. 9.3 ).
  • Ecological validity : Discuss how well the study methods, materials and context approximate the real situation being studied. This is related to the practicality of the results to real life (Sect. 9.4 ).

All these issues should be addressed when considering the study limitations.

Almost every study has limitations. Identifying potential limitations, and discussing the likely impact they have on the interpretation of the study results, is important and ethical.

Example 9.1 Delarue et al. ( 2019 ) discuss studies where subjects rate the taste of new food products. They note that taste-testing studies should (p. 78):

... allow generalizing the conclusions obtained with a consumer sample [...] to the general targeted population [i.e., external validity]... tests should be reliable in terms of accuracy and replicability [i.e., internal validity].

However, even with good internal and external validity, these studies often result in a 'high rate of failures of new launched products'. That is, the studies do not replicate the real world, and so lack ecological validity .

9.2 Limitations: internal validity

Internal validity refers to the extent to which a cause-and-effect relationship can be established in a study, eliminating other possible explanations (Sect. 3.8 ). A discussion of the limitations of internal validity should cover, as appropriate: possible confounding variables; the impact of the Hawthorne, observer, placebo and carry-over effects; the impact of any other design decisions.

If any of these issues are likely to compromise internal validity, the implications on the interpretation of the results should be discussed. For example, if the participants were not blinded, this should be clearly stated, and the conclusion should indicate that the individuals in the study may have behaved differently than usual (the Hawthorne effect).


Example 9.2 (Study limitations) A study ( Axmann et al. 2020 ) randomly allocated Ugandan farmers to receive, or not receive, hybrid maize seeds to improve internal validity. One potential threat to internal validity was that farmers receiving the hybrid seeds could share their seeds with their neighbours.

Hence, the researchers contacted the \(75\) farmers allocated to receive the hybrid seeds; none of the contacted farmers reported selling or giving seeds to other farmers. This extra step increased the internal validity of the study.

Maximizing internal validity in observational studies is more difficult than in experimental studies (e.g., random allocation is not possible). The internal validity of experimental studies involving people is often compromised because people must be informed that they are participating in a study.


Example 9.3 (Internal validity) In a study of the hand-hygiene practices of paramedics ( Barr et al. 2017 ), self-reported hand-hygiene practices were very different from what was reported by peers. That is, how people self-report their behaviours may not align with how they actually behave, which influences the internal validity of the study.

A study evaluated the use of a new therapy on elderly men, and listed some limitations of the study:

... the researcher was not blinded and had prior knowledge of the research aims, disease status, and intervention. As such, these could all have influenced data recording [...] The potential of reporting bias and observer bias could be reduced by implementing blinding in future studies. --- Kabata-PiĆŒuch et al. ( 2021 ) , p. 10

9.3 Limitations: external validity


External validity refers to the ability to generalise the findings made from the sample to the entire intended population (Sect. 3.9). For a study to be externally valid, it must first be internally valid: if the study is not effective in the sample studied (i.e., not internally valid), the results may not apply to the intended population either.

External validity refers to how well the sample is likely to represent the intended population in the RQ.

If the intended population is Alaskans, then the study is externally valid if the sample is representative of Alaskans. The results do not have to apply to people in the rest of the United States (though this can be commented on, too).

External validity depends on how the sample was obtained. Results from random samples (Sects.  5.4 to  5.8 ) are likely to generalise to the population and be externally valid. (The analyses in this book assume all samples are simple random samples .) Furthermore, results from approximately representative samples (Sect.  5.9 ) may generalise to the population and be externally valid if those in the study are not obviously different than those not in the study.
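The difference between a simple random sample and a convenience sample can be sketched in a few lines. The sampling frame and sample sizes below are invented for illustration:

```python
import random

random.seed(42)  # reproducible for the example

# Hypothetical sampling frame: 10,000 numbered members of the
# intended population.
population = list(range(10_000))

# Simple random sample: every member has an equal chance of selection,
# so the results are likely to generalise (external validity).
srs = random.sample(population, k=200)

# Convenience sample: the first 200 members encountered. Easy to
# collect, but it systematically excludes most of the population.
convenience = population[:200]

print(len(srs))          # 200
print(max(convenience))  # 199: never reaches most of the frame
```

The convenience sample covers only one corner of the frame, which mirrors the Auckland-only recruitment issue in the example that follows.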

Example 9.4 (External validity) A New Zealand study ( Gammon et al. 2012 ) identified (for well-documented reasons) a population of interest: 'women of South Asian origin living in New Zealand' (p. 21). The women in the sample were 'women of South Asian origin [...] recruited using a convenience sample method throughout Auckland' (p. 21).

The results may not generalise to the intended population (all women of South Asian origin living in New Zealand) because all the women in the sample came from Auckland, and the sample was not a random sample.

Example 9.5 (Using biochar) A study of growing ginger using biochar ( Farrar et al. 2018 ) used one farm at Mt Mellum, Australia. The results may only generalise to growing ginger at Mt Mellum, but since ginger is usually grown in similar types of climates and soils, the results may apply to other ginger farms also.

9.4 Limitations: ecological validity

The likely practicality of the study results in the real world should also be discussed. This is called ecological validity .


Definition 9.1 (Ecological validity) A study is ecologically valid if the study methods, materials and context closely approximate the real situation of interest.

Studies don't need to be ecologically valid to be useful; much can be learnt under special conditions, as long as the potential limitations are understood when applying the results to the real world. The ecological validity of experimental studies may be compromised because the experimental conditions are sometimes artificially controlled (for good reason).


Example 9.6 (Ecological validity) Consider a study to determine the proportion of people that buy coffee in a reusable cup. People could be asked about their behaviour. This study may not be ecologically valid, as how people act may not align with how they say they will act.

An alternative study could watch people buy coffees at various coffee shops, and record what people do in practice. This second study is more likely to be ecologically valid, as real-world behaviour is observed.

A study observed the effect of using high-mounted rear brake lights ( Kahane and Hertz 1998 ) , which are now commonplace. The American study showed that such lights reduced rear-end collisions by about \(50\) %. However, after making these lights mandatory, rear-end collisions reduced by only \(5\) %. Why?

9.5 Study types and limitations

Experimental studies, in general, have higher internal validity than observational studies, since more of the study design is under the control of the researchers; for example, random allocation of treatments is possible to minimise confounding.
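Random allocation of treatments, the key tool for internal validity mentioned above, can be sketched as follows. The function name and group labels are invented for the example:

```python
import random

def randomise(subjects, groups=("treatment", "control"), seed=None):
    """Randomly allocate subjects to groups; on average, confounding
    variables end up balanced across the groups."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)          # random order removes any systematic pattern
    allocation = {g: [] for g in groups}
    for i, subject in enumerate(shuffled):
        # deal subjects out in turn, giving equal-sized groups
        allocation[groups[i % len(groups)]].append(subject)
    return allocation

alloc = randomise(range(20), seed=1)
print({g: len(members) for g, members in alloc.items()})
# equal group sizes: 10 subjects in each group
```

Because the allocation is random rather than chosen by the researcher (or the subjects), any confounding variable is, on average, spread evenly across the groups.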

Only well-conducted experimental studies can show cause-and-effect relationships.

However, experimental studies may suffer from poor ecological validity; for instance, laboratory experiments are often conducted under controlled temperature and humidity. Many experiments also require that people be told they are participating in a study (due to ethics), and so internal validity may be compromised (the Hawthorne effect).

Example 9.7 (Retrofitting) In a study of retro-fitting houses with energy-saving devices, Giandomenico, Papineau, and Rivers ( 2022 ) found large discrepancies in savings for observational studies ( \(12.2\) %) and experimental studies ( \(6.2\) %). The authors say that 'this finding reinforces the importance of using study designs with high internal validity to evaluate program savings' (p. 692).

9.6 Chapter summary

The limitations in a study need to be identified, and may be related to:

  • internal validity (effectiveness): how well the study is conducted within the sample, isolating the relationship of interest.
  • external validity (generalisability): how well the sample results are likely to apply to the intended population.
  • ecological validity (practicality): how well the results may apply to the real-world situation.
  • the type of study.

9.7 Quick review questions

Are the following statements true or false ?

  • When interpreting the results of studies, the steps taken to maximize internal validity should be evaluated.
  • If studies are not externally valid, then they are not useful.
  • When interpreting the results of studies, the steps taken to maximize external validity do not need to be evaluated.
  • When interpreting the results of studies, ecological validity is about the impact of the study on the environment.

9.8 Exercises

Selected answers are available in App.  E .

Exercise 9.1 A research study examined how people can save energy through lighting choices ( Gentile 2022 ) . The study states (p. 9) that the results 'are limited to the specific study and cannot be easily projected to other similar settings'.

What type of validity is being discussed here?

Exercise 9.2 Fill the blanks with the correct word: internal , external or ecological .

When interpreting the results of studies, we consider the practicality (internal / external / ecological validity), the generalizability (internal / external / ecological validity) and the effectiveness (internal / external / ecological validity).

Exercise 9.3 A student project at the university where I work posed the RQ:

Among university students on-campus, is the percentage of word retention higher in male students than female students?

When discussing external validity , the students stated:

We cannot say whether or not the general public have better or worse word retention compared to the students that we will be studying.

Why is the statement not relevant in a discussion of external validity?

Exercise 9.4 Researchers conducted an experimental study ( Yeh et al. 2018 ) to 'determine if using a parachute prevents death or major traumatic injury when jumping from an aircraft'.

The researchers randomised 23 volunteers into one of two groups: wearing a parachute, or wearing an empty backpack. The response variable was a measurement of death or major traumatic injury upon landing. From the study, death or major injury was the same in both groups (0% for each group). However, the study used 'small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps' (p. 1).

Comment on the internal, external and ecological validity.

Exercise 9.5 A study examined how well hospital patients sleep at night ( Delaney et al. 2018 ) . The researchers state that 'convenience sampling was used to recruit patients' (p. 2). Later, the researchers state (p. 7):

while most healthy individuals sleep primarily or exclusively at night, it is important to consider that patients requiring hospitalization will likely require some daytime nap periods. This study looks at sleep only in the night-time period 22:00--07:00 h, without the context of daytime sleep considered.

Discuss these issues using the language introduced in this chapter.

Exercise 9.6 A study ( Botelho et al. 2019 ) examined the food choices made when subjects were asked to shop for ingredients to make a last-minute meal. Half were told to prepare a 'healthy meal', and the other half told just to prepare a 'meal'. The authors stated (p. 436):

Another limitation is that results report findings from a simulated purchase. As participants did not have to pay for their selection, actual choices could be different. Participants may also have not behaved in their usual manner since they were taking part in a research study, a situation known as the Hawthorne effect.

What type of limitation is being discussed?


14.2 True experiments

Learning objectives.

Learners will be able to…

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

A true experiment, often considered to be the “gold standard” in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables (as treatments) are manipulated by the researcher, subjects are randomly assigned (i.e., random assignment) to different treatment levels, and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its ability to increase internal validity and help establish causality through treatment manipulation, while controlling for the effects of extraneous variables. As such, true experiments are best suited for explanatory research questions.

In true experimental design, research subjects are assigned to either an experimental group, which receives the treatment or intervention being investigated, or a control group, which does not.  Control groups may receive no treatment at all, the standard treatment (which is called “treatment as usual” or TAU), or a treatment that entails some type of contact or interaction without the characteristics of the intervention being investigated.  For example, the control group may participate in a support group while the experimental group is receiving a new group-based therapeutic intervention consisting of education and cognitive behavioral group therapy.

After determining the nature of the experimental and control groups, the next decision a researcher must make is when they need to collect data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle data collection another way? Below, we’ll discuss three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often difficult and can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and sometimes unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The participants in the experimental group will receive CBT, while the participants in the control group will receive a series of videos about social anxiety.

Classical experiments (pretest posttest control group design)

The elements of a classical experiment are (1) random assignment of participants into an experimental and control group, (2) a pretest to assess the outcome(s) of interest for each group, (3) delivery of an intervention/treatment to the experimental group, and (4) a posttest to both groups to assess potential change in the outcome(s).

When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the components of the experiment. Table 14.2 starts us off by laying out what the abbreviations mean.

Figure 14.1 depicts a classical experiment using our example of assessing the intervention of CBT for social anxiety. In the figure, RA denotes random assignment to the experimental group A and RB denotes random assignment to the control group B. O1 (observation 1) denotes the pretest, Xe denotes the experimental intervention, and O2 (observation 2) denotes the posttest.

[Figure 14.1: diagram of the classical experimental design for the CBT example]

The more general, or universal, notation for classical experimental design is shown in Figure 14.2.

[Figure 14.2: general notation for the classical experimental design]

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way (Figure 14.3), with Xi denoting treatment as usual:

[Figure 14.3: classical design with treatment as usual (Xi) delivered to the control group]

Hopefully, these diagrams provide you with a visualization of how this type of experiment establishes temporality, a key component of a causal relationship. By administering the pretest, researchers can assess whether the change in the outcome occurred after the intervention. Assuming there is a change in the scores between the pretest and posttest, we would be able to say that yes, the change did occur after the intervention.
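The pretest/posttest logic above can be sketched as a small simulation. This is a hypothetical illustration, not the chapter's method: the group sizes, anxiety-score scale, and treatment effect are all assumed values, and mean change in each group stands in for the O1/O2 comparison.

```python
import random

random.seed(1)  # reproducible simulated data

# Randomly assign 20 hypothetical participants to two groups of 10.
participants = list(range(20))
random.shuffle(participants)
experimental, control = participants[:10], participants[10:]

# O1: simulated pretest anxiety scores (lower = better) for everyone.
pretest = {p: random.gauss(60, 5) for p in participants}

# Xe: an assumed treatment effect applied only to the experimental group.
treatment_effect = -12
posttest = {p: pretest[p] + treatment_effect + random.gauss(0, 3)
            for p in experimental}                       # O2, experimental
posttest.update({p: pretest[p] + random.gauss(0, 3)
                 for p in control})                      # O2, control

def mean_change(group):
    """Average posttest-minus-pretest change for a group."""
    return sum(posttest[p] - pretest[p] for p in group) / len(group)

print(f"experimental mean change: {mean_change(experimental):+.1f}")
print(f"control mean change:      {mean_change(control):+.1f}")
```

Because both groups are measured before and after, the change in the experimental group can be compared against the change in the control group, which is what lets the design speak to temporality.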

Posttest only control group design

Posttest only control group design involves only giving participants a posttest, just like it sounds. But why would you use this design instead of using a pretest posttest design? One reason could be to avoid potential testing effects that can happen when research participants take a pretest.

In research, the testing effect threatens internal validity when the pretest changes the way the participants respond on the posttest or subsequent assessments (Flannelly, Flannelly, & Jankowski, 2018). [1] A common example occurs when testing interventions for cognitive impairment in older adults. By taking a cognitive assessment during the pretest, participants get exposed to the items on the assessment and get to “practice” taking it (see, for example, Cooley et al., 2015). [2] They may perform better the second time they take it because they have learned how to take the test, not because there have been changes in cognition. This specific type of testing effect is called the practice effect. [3]

The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome. Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the posttest, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is. To mitigate the influence of testing effects, posttest only control group designs do not administer a pretest to participants. Figure 14.4 depicts this.

[Figure 14.4: posttest only control group design]

A drawback to the posttest only control group design is that without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. The posttest only control group design relies on the random assignment to groups to create groups that are equivalent at baseline because, without a pretest, researchers cannot assess whether the groups are equivalent before the intervention. Researchers must balance this consideration with the benefits of this type of design.
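The posttest only comparison can be sketched as follows. This is a hypothetical simulation: the group sizes, score scale, and assumed treatment effect are illustrative, and the point is simply that the analysis compares group means at a single time point, relying on random assignment for baseline equivalence.

```python
import random

random.seed(2)  # reproducible simulated data

# Randomly assign 30 hypothetical participants to two groups of 15.
ids = list(range(30))
random.shuffle(ids)
experimental, control = ids[:15], ids[15:]

def posttest_score(treated):
    """Simulated posttest score; no pretest is ever taken in this design."""
    effect = -10 if treated else 0      # assumed treatment effect (lower = better)
    return random.gauss(55, 6) + effect

exp_scores = [posttest_score(True) for _ in experimental]
ctl_scores = [posttest_score(False) for _ in control]

# The whole analysis is a single between-group comparison at posttest.
diff = sum(exp_scores) / len(exp_scores) - sum(ctl_scores) / len(ctl_scores)
print(f"posttest difference (experimental - control): {diff:+.1f}")
```

Note that nothing in this sketch can confirm the groups were equivalent before treatment; that guarantee rests entirely on the random assignment step.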

Solomon four group design

One way we can possibly measure how much the testing effect threatens internal validity is with the Solomon four group design. Basically, as part of this experiment, there are two experimental groups and two control groups. The first pair of experimental/control groups receives both a pretest and a posttest. The other pair receives only a posttest (Figure 14.5). In addition to addressing testing effects, this design also addresses the problems of establishing time order and equivalent groups in posttest only control group designs.

[Figure 14.5: Solomon four group design]

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our posttest measures, and groups C and D would take only our posttest measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
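The four-group comparison can be sketched as a simulation. This is a hypothetical illustration with assumed effect sizes, not data from the text: groups A and B are pretested, groups C and D are not, and comparing the two untreated groups (B vs. D) isolates the testing effect on its own.

```python
import random

random.seed(3)  # reproducible simulated data

def scores(n, pretested, treated):
    """Simulated posttest scores under assumed testing and treatment effects."""
    testing_effect = 5 if pretested else 0    # assumed practice/testing effect
    treatment = -10 if treated else 0         # assumed treatment effect
    return [random.gauss(50, 5) + testing_effect + treatment for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

A = scores(50, pretested=True,  treated=True)    # pretest + treatment + posttest
B = scores(50, pretested=True,  treated=False)   # pretest + posttest
C = scores(50, pretested=False, treated=True)    # treatment + posttest only
D = scores(50, pretested=False, treated=False)   # posttest only

print(f"estimated testing effect (B - D):         {mean(B) - mean(D):+.1f}")
print(f"treatment effect free of pretest (C - D): {mean(C) - mean(D):+.1f}")
```

If B and D differ noticeably, the pretest itself is moving scores, and the treatment effect is better read from the unpretested pair (C vs. D).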

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

Key Takeaways

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest posttest research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Posttest only research design involves only one point of measurement—after the intervention or treatment. It is a useful design to minimize the effect of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a posttest, while the other receives only a posttest. This can help uncover the influence of testing effects.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a researcher?
  • What hypothesis(es) would you test using this true experiment?

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

Imagine you are interested in studying child welfare practice. You are interested in learning more about community-based programs aimed to prevent child maltreatment and to prevent out-of-home placement for children.

  • Think about a true experiment you might conduct for this research project. Which design would be best for this research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated) for you to carry out your true experimental design in the real world as a researcher?
  • Flannelly, K. J., Flannelly, L. T., & Jankowski, K. R. B. (2018). Threats to the internal validity of experimental and quasi-experimental research in healthcare. Journal of Health Care Chaplaincy, 24(3), 107-130. https://doi.org/10.1080/08854726.2017.1421019
  • Cooley, S. A., Heaps, J. M., Bolzenius, J. D., Salminen, L. E., Baker, L. M., Scott, S. E., & Paul, R. H. (2015). Longitudinal change in performance on the Montreal Cognitive Assessment in older adults. The Clinical Neuropsychologist, 29(6), 824-835. https://doi.org/10.1080/13854046.2015.1087596
  • Duff, K., Beglinger, L. J., Schultz, S. K., Moser, D. J., McCaffrey, R. J., Haase, R. F., Westervelt, H. J., Langbehn, D. R., Paulsen, J. S., & Huntington's Study Group (2007). Practice effects in the prediction of long-term cognitive outcome in three patient samples: A novel prognostic index. Archives of Clinical Neuropsychology, 22(1), 15-24. https://doi.org/10.1016/j.acn.2006.08.013

An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed

Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.

the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief

A demonstration that a change occurred after an intervention. An important criterion for establishing causality.

an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment

The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself

improvements in cognitive assessments due to exposure to the instrument

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


learntopoint.com

Experimental Research: A Comprehensive Guide (2024)

 Experimental research is a scientific method used to test a hypothesis or study the cause-and-effect relationship between variables. In this type of research, the researcher manipulates one variable, called the independent variable, to observe the effect on another variable, called the dependent variable. Experimental research aims to establish a causal relationship between variables and eliminate alternative explanations for the observed results.

Experimental Research

Experimental research is widely used in various fields, including psychology, medicine, engineering, and social sciences. It is considered the most rigorous research method because it allows the researcher to control the variables and minimize the influence of extraneous factors. However, experimental research can be time-consuming, expensive, and sometimes unethical, especially when human subjects are involved. Therefore, researchers must carefully design their experiments and follow ethical guidelines to ensure the safety and well-being of their participants.

Overall, experimental research is crucial in advancing scientific knowledge and understanding the world. By following a systematic and rigorous approach, researchers can test their hypotheses and draw valid conclusions about the cause-and-effect relationships between variables. However, it is important to recognize experimental research’s limitations and challenges and use it in conjunction with other research methods to gain a more comprehensive understanding of complex phenomena.

Understanding Experimental Research

Experimental research is a scientific approach that involves manipulating one or more independent variables to observe the effect on a dependent variable. It is a research method widely used in various fields, including psychology, education, and medicine.

The primary goal of experimental research is to establish a cause-and-effect relationship between the independent and dependent variables. This is done by controlling all other variables that may affect the study’s outcome. By manipulating the independent variable, researchers can determine whether it significantly affects the dependent variable.

To conduct experimental research, researchers must first identify a research problem that can be addressed through experimentation. They must then develop a specific hypothesis that can be tested by manipulating the independent variable. The hypothesis should be testable and falsifiable, meaning that it can be proven false if the study’s results do not support it.

Once the hypothesis has been developed, researchers must design the experiment, including the selection of participants, the manipulation of the independent variable, and the measurement of the dependent variable. Researchers must also determine the appropriate research design, such as a pretest-posttest design or a between-subjects design.

During the experiment, researchers must carefully control all other variables that may affect the study’s outcome. This is done to ensure that any observed effects can be attributed to the manipulation of the independent variable and not to other factors.

After the experiment, researchers must analyze the data and draw conclusions based on the results. They must also consider the study’s limitations and the implications of the findings for future research.

Overall, experimental research is a powerful tool for investigating cause-and-effect relationships in various fields. By carefully controlling all variables except the independent variable, researchers can determine whether it significantly affects the dependent variable. However, it is important to recognize the limitations of experimental research and to consider alternative research methods when appropriate.

Types of Experimental Research

Experimental research is a type of research that involves manipulating one or more variables to observe the effect on another variable. Based on how data are collected and how participants are assigned to groups, experimental research designs fall into three primary types: True Experimental Research, Quasi-Experimental Research, and Pre-Experimental Research.

True Experimental Research

True experimental research is the most rigorous type of experimental research design. In this type of design, the researcher randomly assigns participants to either a control or experimental group. The control group receives no treatment, while the experimental group receives the treatment being studied. The researcher then measures the effect of the treatment by comparing the outcomes of the two groups. True experimental research is the gold standard of experimental research designs because it allows the researcher to establish a cause-and-effect relationship between the independent and dependent variables.

Quasi-Experimental Research

Quasi-experimental research is a type of research design that does not involve randomly assigning participants to groups. Instead, the researcher uses an existing group of participants and compares the outcomes of two or more groups. Quasi-experimental research is less rigorous than true experimental research because it does not allow the researcher to establish a cause-and-effect relationship between the independent and dependent variables. However, it is still useful in situations where true experimental research is not possible or ethical.

Pre-Experimental Research

Pre-experimental research design is the most basic type of experimental research design. In this type of design, the researcher observes one or more groups after the intervention has been applied, without random assignment and often without a comparison group. Pre-experimental research design is useful when the researcher is interested in studying the effect of a particular intervention but cannot use random assignment of participants to groups. It is less rigorous than true experimental and quasi-experimental research because it does not allow the researcher to establish a cause-and-effect relationship between the independent and dependent variables.

In conclusion, experimental research is a powerful method that allows researchers to establish cause-and-effect relationships between variables. True experimental research is the most rigorous type of experimental research design, while quasi-experimental and pre-experimental research are less rigorous but still useful in certain situations.

Components of Experimental Research

Experimental research is a scientific method of investigation that involves manipulating one or more variables to observe the effect on another variable. The following are the key components of experimental research:

Independent Variables

An independent variable is a variable that the researcher manipulates to determine its effect on the dependent variable. For example, in a study to determine the effect of caffeine on memory, caffeine is the independent variable.

Dependent Variables

A dependent variable is a variable that is being measured in the study. It is the variable that is affected by the independent variable. In the example above, memory is the dependent variable.

Control Group

A control group is a group of participants who are not exposed to the independent variable. The control group is used to compare the experimental group’s results to determine if the independent variable had an effect on the dependent variable.

Hypothesis

A hypothesis is a statement that predicts the relationship between the independent and dependent variables. It is a tentative explanation for the observed phenomenon. The hypothesis should be testable and falsifiable.

Experimental research is a powerful tool for investigating cause-and-effect relationships between variables. It allows researchers to manipulate and control extraneous variables to determine the effect on the dependent variable. Using a control group, researchers can determine if the results are due to the independent variable or other factors.

In summary, experimental research involves manipulating one or more variables to observe the effect on another variable. The key components of experimental research include independent variables, dependent variables, control group, and hypothesis. These components are essential for designing a valid and reliable experiment.
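The components above can be mapped onto the caffeine-and-memory example as a small data structure. All names and values here are illustrative assumptions, not from the original text.

```python
# Hypothetical sketch mapping the caffeine-and-memory example onto the
# four components: hypothesis, independent variable, dependent variable,
# and control vs. experimental groups (all values are illustrative).
experiment = {
    "hypothesis": "Caffeine improves recall on a word-list memory test",
    "independent_variable": "caffeine dose (manipulated by the researcher)",
    "dependent_variable": "number of words recalled (measured outcome)",
    "groups": {
        "control": {"caffeine_mg": 0},         # not exposed to the IV
        "experimental": {"caffeine_mg": 200},  # exposed to the IV
    },
}

for name, group in experiment["groups"].items():
    print(f"{name}: {group['caffeine_mg']} mg caffeine")
```

Laying out a design this explicitly makes it easy to check that each component is defined before data collection begins.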

Conducting Experimental Research

Experimental research is a scientific method that involves manipulating one or more variables to observe the effects on another variable. It is a powerful tool that allows researchers to establish cause-and-effect relationships between variables. Here are the key steps involved in conducting experimental research:

Data Collection

Data collection is a crucial step in experimental research. Researchers must ensure that data are collected accurately and reliably. Common data collection methods include surveys, interviews, and observations. The choice of method depends on the research question and the type of data required.

Random Assignment

Random assignment is a method used to randomly assign participants to different groups in an experiment. This ensures that each participant has an equal chance of being assigned to any of the groups, reducing the likelihood of bias. Random assignment is essential in experimental research because it helps control for individual differences between participants.
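A simple way to see why random assignment controls for individual differences is to check a baseline trait after assignment. This is a hypothetical sketch with simulated ages; the sample size and age range are assumptions for illustration.

```python
import random

random.seed(4)  # reproducible simulated data

# Hypothetical sketch: with random assignment, every participant has the
# same chance of ending up in either group, so a baseline trait such as
# age tends to balance out across groups (ages below are simulated).
ages = [random.randint(18, 65) for _ in range(200)]
random.shuffle(ages)                       # the random assignment step
group_a, group_b = ages[:100], ages[100:]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean age, group A: {mean(group_a):.1f}")
print(f"mean age, group B: {mean(group_b):.1f}")
```

The two group means come out close even though no one deliberately matched the groups on age; the same balancing happens, in expectation, for traits the researcher never measured.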

Research Design

The research design must be carefully planned and executed to ensure the results are valid and reliable. The design should include an experimental group and a control group, each with the same set of variables except for the one being manipulated. The experimental group receives the experimental treatments, while the control group does not. The research design must also include measures to control for extraneous variables that could affect the results.

Experimental research can be conducted using different research designs, including pre-experimental, quasi-experimental, and true experimental designs. The choice of research design depends on the research question and the level of control required.

In summary, conducting experimental research involves:

  • Collecting data accurately.
  • Randomly assigning participants to groups.
  • Carefully planning and executing the research design.

By following these steps, researchers can establish cause-and-effect relationships between variables and make valid conclusions.

Advantages and Disadvantages of Experimental Research

Advantages

Experimental research offers several advantages, making it a popular research method in various fields. One of the most significant advantages of experimental research is that it provides researchers with a high level of control. By isolating specific variables, it becomes possible to determine if a potential outcome is viable. Each variable can be controlled independently or in different combinations to study what possible outcomes are available for a product, theory, or idea.

Another advantage of experimental research is that it allows researchers to establish a cause-and-effect relationship between variables. This is because experimental research involves manipulating one or more variables to see how they affect the outcome. By controlling all other variables, researchers can determine if a change in one variable causes a change in the outcome.

Experimental research also allows researchers to replicate their findings. By following the same experimental design, other researchers can carry out the same study to see if they obtain similar results. This makes it possible to verify the validity of experimental findings.

Disadvantages

Despite its advantages, experimental research also has some disadvantages that researchers must consider. One of the main disadvantages of experimental research is that it may not apply to real-world situations. This is because experimental research typically takes place in a controlled environment, which may not represent real-world conditions. As a result, experimental research findings may not be generalizable to the real world [4].

Another disadvantage of experimental research is that it may sometimes be unethical. This is because experimental research often involves manipulating variables, which may negatively affect participants. Researchers must ensure that the benefits of the research outweigh the potential risks to participants [5].

Finally, experimental research can be time-consuming and expensive. This is because experimental research requires careful planning and execution to ensure the results are valid and reliable. Additionally, experimental research often requires specialized equipment and facilities, which can be costly [6].

In summary, experimental research provides researchers with a high level of control, allows them to establish cause-and-effect relationships, and enables them to replicate their findings. However, it may not apply to real-world situations, may be unethical sometimes, and can be time-consuming and expensive. Researchers must carefully consider the advantages and disadvantages of experimental research before deciding to use it as a research method.

Applications of Experimental Research

Experimental research is a scientific approach used to evaluate cause-and-effect relationships between variables. It is a rigorous research design used to test hypotheses and establish causal relationships. Experimental research is used in various fields, including education and social sciences.

In Education

Experimental research is an important tool in education research. Researchers use experimental designs to evaluate the effectiveness of teaching methods, interventions, and programs. For example, a group of students can be randomly assigned to different teaching methods, and the outcomes can be compared to evaluate the effectiveness of each method.

Experimental research can also be used to evaluate the effectiveness of educational programs. For instance, a program designed to improve reading skills can be evaluated using an experimental design. A group of students can be assigned to the program while another group is not, and the outcomes can be compared to determine the program’s effectiveness.

In Social Sciences

Experimental research is also widely used in social sciences. Researchers use experimental designs to evaluate the effectiveness of interventions, policies, and programs. For example, experiments are carried out to evaluate the effectiveness of anti-poverty programs, health interventions, and public policies.

Experimental research is also used to evaluate the impact of social interventions. For instance, an experimental design can evaluate a social intervention aimed at reducing prejudice. A group of participants can be randomly assigned to the intervention, while another group is not, and the outcomes can be compared to determine the effectiveness of the intervention.

Overall, experimental research is a powerful tool for evaluating cause-and-effect relationships between variables. It is used in various fields to evaluate the effectiveness of interventions, programs, and policies. Using experimental designs, researchers can establish causal relationships and make evidence-based decisions.

Validity and Reliability in Experimental Research

Experimental research is a scientific method used to establish cause-and-effect relationships between variables. To achieve accurate and meaningful results, experimental research must have high levels of validity and reliability.

Validity

Validity refers to the extent to which an experiment measures what it is intended to measure. In experimental research, several types of validity must be considered:

  • Internal Validity:  Internal validity refers to the extent to which the experimental results are due to the manipulation of the independent variable and not due to other factors. Internal validity can be threatened by factors such as selection bias, maturation of participants, and history.
  • External Validity:  External validity refers to the extent to which the experimental results can be generalized to other populations and settings. External validity can be threatened by factors such as the use of non-representative samples and the artificiality of the experimental setting.
  • Construct Validity:  Construct validity refers to the extent to which the experimental results accurately measure the theoretical construct being studied. Construct validity can be threatened by factors such as poor operationalization of variables and inadequate measurement tools.

To ensure high levels of validity, experimental researchers must carefully design their studies, use appropriate measurement tools, and control for potential confounding variables.

Reliability

Reliability refers to the consistency and stability of the experimental results over time and across different observers or measurement tools. In experimental research, several types of reliability must be considered:

  • Test-Retest Reliability:  Test-retest reliability refers to the extent to which the same results are obtained when the experiment is repeated later. Test-retest reliability can be threatened by factors such as participant fatigue and practice effects.
  • Inter-Rater Reliability:  Inter-rater reliability refers to the extent to which different observers or raters obtain the same results. Differences in observer interpretations and biases can threaten inter-rater reliability.
  ‱ Internal Consistency Reliability:  Internal consistency reliability refers to the extent to which the different items or measures within an experiment are consistent with one another. Poorly constructed measurement tools can threaten internal consistency reliability.

To ensure high levels of reliability, experimental researchers must use standardized procedures, train observers or raters, and use multiple measures to assess the same constructs.

Overall, high levels of validity and reliability are essential for experimental research to produce accurate and meaningful results. By carefully considering and addressing these factors, experimental researchers can increase the credibility and impact of their research.

Statistical Analysis in Experimental Research

Experimental research is a systematic approach to understanding cause-and-effect relationships between variables. It is an essential tool for scientists, businesses, and policymakers who want to test hypotheses and make informed decisions. Statistical analysis is a crucial component of experimental research, as it helps researchers extract meaningful insights from the collected data.

Quantitative Data

Quantitative data is numerical data that can be measured and analyzed statistically. It is often collected through surveys, experiments, or other objective methods. Quantitative data can be analyzed using statistical methods such as regression analysis, hypothesis testing, and ANOVA (analysis of variance).
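
To make the ANOVA idea concrete, the following sketch computes a one-way ANOVA F-statistic from scratch, using hypothetical scores for three groups (Python standard library only; in practice a statistics package would also report the p-value):

```python
from statistics import mean

# Hypothetical posttest scores for three randomly assigned groups.
groups = [
    [78, 82, 75, 80, 85],   # teaching method A
    [88, 90, 84, 91, 87],   # teaching method B
    [70, 72, 68, 75, 71],   # control
]

grand = mean(score for g in groups for score in g)
k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# F = (SS_between / df_between) / (SS_within / df_within);
# a large F suggests group means differ more than chance would predict.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.2f}")
```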

Statistical Analysis

Statistical analysis is the process of examining quantitative data to identify patterns, relationships, and trends. It involves using statistical methods to summarize and interpret the data, and it can be used to test hypotheses, make predictions, and identify important variables.

In experimental research, statistical analysis is used to determine whether the results of an experiment are statistically significant. Statistical significance indicates how unlikely the observed results would be if chance alone were at work. If the results are statistically significant, they are unlikely to be due to chance and more plausibly reflect a real effect of the treatment.
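
As an illustration of how a test statistic is converted into a significance judgment, the sketch below computes a two-sided p-value for a z statistic using Python's standard library; the group means and standard error are invented for the example:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z test statistic."""
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Hypothetical example: treatment mean 5 points above control,
# with a standard error of 2 points -> z = 2.5.
z = (85.0 - 80.0) / 2.0
p = two_sided_p(z)
print(f"z = {z:.2f}, p = {p:.4f}")   # p ≈ 0.0124, significant at α = 0.05
```

With small samples a t distribution (rather than the normal approximation) would be used, but the logic of comparing p to a significance threshold is the same.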

Statistical analysis is also used to identify important variables in an experiment. Variables are factors that can influence the outcome of an experiment. By identifying important variables, researchers can better understand the underlying mechanisms of an experiment and make more informed decisions.

In conclusion, statistical analysis is a critical component of experimental research. It helps researchers extract meaningful insights from quantitative data and make informed decisions. Using statistical methods to analyze data, researchers can identify patterns, relationships, and trends and determine whether the results of an experiment are statistically significant.

Experimental Research: Qualitative or Quantitative?

Experimental research can be both qualitative and quantitative, but it is more commonly associated with quantitative research. Quantitative experimental research involves testing theories and hypotheses by measuring and analyzing numerical data, which is typically summarized with statistics and graphs. This type of research is often used in fields such as psychology, medicine, and engineering, where researchers aim to identify cause-and-effect relationships between variables.

In contrast, qualitative experimental research focuses on exploring and understanding the meaning behind human experiences and behaviors. This type of research involves collecting and analyzing non-numerical data, such as interviews, observations, and case studies. Qualitative experimental research is often used in fields such as anthropology, sociology, and education, where researchers aim to gain a deeper understanding of human behavior and culture.

It is important to note that experimental research can also combine quantitative and qualitative approaches. For example, a researcher may use a quantitative experimental design to test a theory or hypothesis and collect qualitative data through interviews or observations to gain a deeper understanding of the participants’ experiences and perspectives.

Overall, the choice between qualitative and quantitative experimental research depends on the research question and the type of data that needs to be collected and analyzed. Researchers should carefully consider the strengths and limitations of each approach before deciding which one to use.


10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effect of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments, conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli called a treatment (the treatment group), while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case there may be more than one treatment group. For example, to test the effects of a new drug intended to treat a medical condition such as dementia, a sample of dementia patients may be randomly divided into three groups: the first receiving a high dosage of the drug, the second a low dosage, and the third a placebo such as a sugar pill (the control group). Here, the first two groups are experimental groups and the third is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine whether the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
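
The distinction can be illustrated in a few lines of Python: random selection draws a sample from the population, while random assignment splits that sample into treatment and control groups (the population size, sample size, and seed here are arbitrary):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = [f"person_{i}" for i in range(1000)]

# Random selection: draw a sample from the population.
# This bears on external validity (generalisability).
sample = random.sample(population, 20)

# Random assignment: split the selected sample into treatment and
# control groups. This bears on internal validity (group equivalence).
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:10], shuffled[10:]

print(len(treatment), len(control))
```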

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
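
Regression to the mean is easy to demonstrate by simulation. In the Python sketch below (all parameters are invented for illustration), subjects selected for high pretest scores score lower, on average, on a posttest even though nothing about them has changed and no treatment was given:

```python
import random
from statistics import mean

random.seed(1)

# Each observed score = stable true ability + random measurement noise.
def observe(true_score: float) -> float:
    return true_score + random.gauss(0, 10)

true_scores = [random.gauss(70, 5) for _ in range(10_000)]
pretest = [observe(t) for t in true_scores]

# Select the top 1,000 pretest scorers (a non-random, extreme group).
cutoff = sorted(pretest)[-1000]
selected = [t for t, p in zip(true_scores, pretest) if p >= cutoff]
posttest = [observe(t) for t in selected]

pre_mean = mean(p for p in pretest if p >= cutoff)
post_mean = mean(posttest)
print(f"pretest mean of top scorers:  {pre_mean:.1f}")
print(f"posttest mean of same group:  {post_mean:.1f}")  # drifts back toward 70
```

The group's posttest mean falls back toward the population mean purely because its high pretest scores were partly lucky noise, which is exactly why non-random, extreme-scoring samples are vulnerable to this threat.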

Two-group experimental designs


Pretest-posttest control group design. In this design, subjects are randomly assigned to treatment and control groups and given an initial (pretest) measurement of the dependent variables of interest. The treatment group is then administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.

Figure 10.1: Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.

Posttest-only control group design. This design is a simpler version of the pretest-posttest design in which pretest measurements are omitted. The design notation is shown in Figure 10.2.

Figure 10.2: Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
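
A minimal sketch of the effect estimate \(E = (O_{1} - O_{2})\), using invented posttest scores for the two groups:

```python
from statistics import mean

# Hypothetical posttest scores in a posttest-only control group design.
treatment_posttest = [84, 88, 79, 91, 85, 87]   # O1
control_posttest   = [76, 81, 74, 79, 77, 80]   # O2

# Treatment effect E = O1 - O2, i.e. the difference in group means.
effect = mean(treatment_posttest) - mean(control_posttest)
print(f"estimated treatment effect E = {effect:.2f}")
```

An ANOVA (or equivalently a two-sample t-test) would then judge whether this difference is statistically significant.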

Covariance designs

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups:

\[E = (O_{1} - O_{2})\,.\]

Due to the presence of covariates, the appropriate statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to pretest-posttest control group designs.

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor, and each subdivision of a factor is called a level. Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

For example, a 2 × 2 factorial design might cross two types of instruction with two levels of instructional time (one and a half versus three hours per week), producing four groups whose learning outcomes are then compared.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that when interaction effects are present, they dominate: it is not meaningful to interpret main effects if interaction effects are significant.
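
A quick numerical sketch of main and interaction effects, computed from hypothetical cell means (the factor level names here are illustrative, not from a real study):

```python
from statistics import mean

# Hypothetical mean learning outcomes for a 2 x 2 factorial design:
# factor A = instructional type, factor B = instructional time.
cell_means = {
    ("traditional", "1.5h"): 70.0,
    ("traditional", "3h"):   74.0,
    ("online",      "1.5h"): 72.0,
    ("online",      "3h"):   84.0,
}

# Main effect of instructional type: online minus traditional,
# averaged over the levels of instructional time.
main_type = (
    mean([cell_means[("online", "1.5h")], cell_means[("online", "3h")]])
    - mean([cell_means[("traditional", "1.5h")], cell_means[("traditional", "3h")]])
)

# Interaction: the effect of type at 3h minus the effect of type at 1.5h.
# A nonzero value means one factor's effect depends on the other's level.
interaction = (
    (cell_means[("online", "3h")] - cell_means[("traditional", "3h")])
    - (cell_means[("online", "1.5h")] - cell_means[("traditional", "1.5h")])
)

print(f"main effect of instructional type: {main_type:.1f}")
print(f"interaction (type x time):         {interaction:.1f}")
```

In a real analysis these effects would be tested for significance with a factorial ANOVA rather than read directly off the cell means.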

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Figure 10.5: Randomised blocks design

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of the posttest-only and pretest-posttest control group designs, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Figure 10.6: Solomon four-group design

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Figure 10.7: Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having had a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats, such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing threat (the treatment and control groups responding differently to the pretest), and selection-mortality threat (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.


In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol, while those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.

Proxy pretest design. This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Figure 10.11: Proxy pretest design

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and then measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the change in any specific customer’s satisfaction score before and after the implementation; you can only compare average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.

Figure 10.12: Separate pretest-posttest samples design

An interesting variation of the non-equivalent dependent variables (NEDV) design is the pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine whether the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are uninterpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’ĂȘtre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to assess the adequacy of such tasks (by debriefing subjects after they perform the assigned task), conduct pilot tests (repeatedly, if necessary), and, if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Observational Studies: Uses and Limitations

Aaron S. Hess & Alaa Abd-Elsayed

In: Pain, pp. 123–125. First published online: 11 May 2019.

Observational epidemiologic studies are a type of nonexperimental research in which exposure is not controlled by the investigator. Observational studies are by far the most common form of clinical research because of their relatively low complexity, cost, and ethical constraints compared to randomized trials or other forms of clinical experimentation. Bias, confounding, and issues with validity are more common in observational studies. Observational studies can be retrospective or, in some cases, prospective. Common forms of observational studies in clinical research include cross-sectional studies, ecologic studies, case-control studies, and cohort studies.



Author information

Authors and affiliations: Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA — Aaron S. Hess & Alaa Abd-Elsayed

Corresponding author: Aaron S. Hess

Editor: Alaa Abd-Elsayed, Department of Anesthesiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Hess, A.S., Abd-Elsayed, A. (2019). Observational Studies: Uses and Limitations. In: Abd-Elsayed, A. (ed.) Pain. Springer, Cham. https://doi.org/10.1007/978-3-319-99124-5_31

Published: 11 May 2019

Print ISBN: 978-3-319-99123-8

Online ISBN: 978-3-319-99124-5


How to present limitations in research

Last updated: 30 January 2024

Limitations don’t invalidate or diminish your results, but it’s best to acknowledge them. This will enable you to address any questions your study failed to answer because of them.

In this guide, learn how to recognize, present, and overcome limitations in research.

  • What is a research limitation?

Research limitations are weaknesses in your research design or execution that may have impacted outcomes and conclusions. Uncovering limitations doesn’t necessarily indicate poor research design—it just means you encountered challenges you couldn’t have anticipated that limited your research efforts.

Does basic research have limitations?

Basic research aims to provide more information about your research topic. It requires the same standard research methodology and data collection efforts as any other research type, and it can also have limitations.

  • Common research limitations

Researchers encounter common limitations when embarking on a study. Limitations can occur in relation to the methods you apply or the research process you design. They could also be connected to you as the researcher.

Methodology limitations

Not having access to data or reliable information can impact the methods used to facilitate your research. A lack of data or reliability may limit the parameters of your study area and the extent of your exploration.

Your sample size may also be affected because you won't have clear direction on how big or small it should be, or who or what to include. A sample with too few participants won't adequately represent the population or groups needed to draw meaningful conclusions.

Research process limitations

The study’s design can impose constraints on the process. For example, as you’re conducting the research, issues may arise that don’t conform to the data collection methodology you developed. You may not realize until well into the process that you should have incorporated more specific questions or comprehensive experiments to generate the data you need to have confidence in your results.

Constraints on resources can also have an impact. Being limited on participants or participation incentives may limit your sample sizes. Insufficient tools, equipment, and materials to conduct a thorough study may also be a factor.

Common researcher limitations

Here are some of the common researcher limitations you may encounter:

Time: some research areas require multi-year longitudinal approaches, but you might not be able to dedicate that much time. Imagine you want to measure how much memory a person loses as they age. This may involve conducting multiple tests on a sample of participants over 20–30 years, which may be impossible.

Bias: researchers can consciously or unconsciously apply bias to their research. Biases can contribute to relying on research sources and methodologies that will only support your beliefs about the research you’re embarking on. You might also omit relevant issues or participants from the scope of your study because of your biases.

Limited access to data: you may need to pay to access specific databases or journals that would be helpful to your research process. You might also need to gain information from certain people or organizations but have limited access to them. These cases require readjusting your process and explaining why your findings are still reliable.

  • Why is it important to identify limitations?

Identifying limitations adds credibility to research and provides a deeper understanding of how you arrived at your conclusions.

Constraints may have prevented you from collecting specific data or information you hoped would prove or disprove your hypothesis or provide a more comprehensive understanding of your research topic.

However, identifying the limitations contributing to your conclusions can inspire further research efforts that help gather more substantial information and data.

  • Where to put limitations in a research paper

A research paper is broken up into different sections that appear in the following order:

Introduction

Methodology

Results

Discussion

Conclusion

The discussion portion of your paper explores your findings and puts them in the context of the overall research. Either place research limitations at the beginning of the discussion section before the analysis of your findings or at the end of the section to indicate that further research needs to be pursued.
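As a sketch of where this lands in a typical LaTeX manuscript, the limitations discussion can sit as a subsection at the end of the Discussion. The section names and example text below are illustrative assumptions, not a prescribed template:

```latex
\section{Discussion}
% Interpret the main findings in context first.

\subsection{Limitations and Future Directions}
% One paragraph per limitation: name it, appraise its impact
% on the findings, describe any mitigation steps taken, and
% suggest how future work could avoid it.
Participants were recruited from a single site, which may limit
the generalizability of these findings; future multi-site studies
could address this.

\section{Conclusion}
```

Placing the subsection last lets it lead naturally into the suggestions for further research.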

What not to include in the limitations section

Evidence that doesn’t support your hypothesis is not a limitation, so you shouldn’t include it in the limitations section. Likewise, don’t just list limitations and their degree of severity without further explanation.

  • How to present limitations

You’ll want to present the limitations of your study in a way that doesn’t diminish the validity of your research or leave the reader wondering whether your results and conclusions have been compromised.

Include only the limitations that directly relate to and impact how you addressed your research questions. Following a specific format enables the reader to develop an understanding of the weaknesses within the context of your findings without doubting the quality and integrity of your research.

Identify the limitations specific to your study

You don’t have to identify every possible limitation that might have occurred during your research process. Only identify those that may have influenced the quality of your findings and your ability to answer your research question.

Explain study limitations in detail

This explanation should be the most significant portion of your limitations section.

Link each limitation with an interpretation and appraisal of its impact on the study. You’ll have to evaluate and explain whether, and how, any errors, methodological choices, or validity issues influenced the study’s outcome.

Propose a direction for future studies and present alternatives

In this section, suggest how researchers can avoid the pitfalls you experienced during your research process.

If an issue with methodology was a limitation, propose alternate methods that may help with a smoother and more conclusive research project. Discuss the pros and cons of your alternate recommendation.

Describe steps taken to minimize each limitation

You probably took steps to address or mitigate limitations as you noticed them over the course of your research project. Describe these steps in the limitations section.

  • Limitation example

“Approaches like stem cell transplantation and vaccination in AD [Alzheimer’s disease] work on a cellular or molecular level in the laboratory. However, translation into clinical settings will remain a challenge for the next decade.”

The authors are saying that even though these methods showed promise in helping people with memory loss when conducted in the lab (in other words, using animal studies), more studies are needed. These may be controlled clinical trials, for example. 

However, the short life span of stem cells outside the lab and the vaccination’s severe inflammatory side effects are limitations. Researchers won’t be able to conduct clinical trials until these issues are overcome.

  • How to overcome limitations in research

You’ve already started on the road to overcoming limitations in research by acknowledging that they exist. However, you need to ensure readers don’t mistake weaknesses for errors within your research design.

To do this, you’ll need to justify and explain your rationale for the methods, research design, and analysis tools you chose and how you noticed they may have presented limitations.

Your readers need to know that even when limitations presented themselves, you followed best practices and the ethical standards of your field. You didn’t violate any rules and regulations during your research process.

You’ll also want to reinforce the validity of your conclusions and results with multiple sources, methods, and perspectives. This prevents readers from assuming your findings were derived from a single or biased source.

  • Learning and improving starts with limitations in research

Dealing with limitations with transparency and integrity helps identify areas for future improvements and developments. It’s a learning process, providing valuable insights into how you can improve methodologies, expand sample sizes, or explore alternate approaches to further support the validity of your findings.


