
5 Quasi-Experimental Design Examples


Quasi-experimental design refers to a type of experimental design that uses pre-existing groups of people rather than random groups.

Because the groups of research participants already exist, they cannot be randomly assigned to a cohort. This makes it difficult to infer a causal relationship between the treatment and the observed/criterion variable.

Quasi-experimental designs are generally considered inferior to true experimental designs.

Limitations of Quasi-Experimental Design

Since participants cannot be randomly assigned to the grouping variable (male/female; high education/low education), the internal validity of the study is questionable.

Extraneous variables may exist that explain the results. For example, with quasi-experimental studies involving gender, there are numerous cultural and biological variables that distinguish males and females other than gender alone.

Each one of those variables may be able to explain the results without the need to refer to gender.


Quasi-Experimental Design Examples

1. Smartboard Apps and Math

A school has decided to supplement their math resources with smartboard applications. The math teachers research the apps available and then choose two apps for each grade level. Before deciding on which apps to purchase, the school contacts the seller and asks for permission to demo/test the apps before purchasing the licenses.

The study involves having different teachers use the apps with their classes. Since there are two math teachers at each grade level, each teacher will use one of the apps in their classroom for three months. At the end of three months, all students will take the same math exams. Then the school can simply compare which app improved the students’ math scores the most.

The reason this is called a quasi-experiment is because the school did not randomly assign students to one app or the other. The students were already in pre-existing groups/classes.

Although it was impractical to randomly assign students to one app or the other, the lack of random assignment makes the results difficult to interpret.

For instance, if students in teacher A’s class did better than the students in teacher B’s class, then can we really say the difference was due to the app? There may be other differences between the two teachers that account for the results. This poses a serious threat to the study’s internal validity.

2. Leadership Training

There is reason to believe that teaching entrepreneurs modern leadership techniques will improve their performance and shorten how long it takes for them to reach profitability. Team members will feel better appreciated and work harder, which should translate to increased productivity and innovation.

This hypothetical study took place in a mid-sized city in a third-world country. The researchers marketed the training throughout the city and received interest from 5 start-ups in the tech sector and 5 in the textile industry. The leaders of each company then attended six weeks of workshops on employee motivation, leadership styles, and effective team management.

At the end of one year, the researchers returned. They conducted a standard assessment of each start-up’s growth trajectory and administered various surveys to employees.

The results indicated that tech start-ups were further along in their growth paths than textile start-ups. The data also showed that tech work teams reported greater job satisfaction and company loyalty than textile work teams.

Although the results appear straightforward, because the researchers used a quasi-experimental design, they cannot say that the training caused the results.

The two groups may differ in ways that could explain the results. For instance, perhaps there is less growth potential in the textile industry in that city, or perhaps tech leaders are more progressive and willing to accept new leadership strategies.

3. Parenting Styles and Academic Performance   

Psychologists are very interested in factors that affect children’s academic performance. Since parenting styles affect a variety of children’s social and emotional profiles, it stands to reason that it may affect academic performance as well. The four parenting styles under study are: authoritarian, authoritative, permissive, and neglectful/uninvolved.

To examine this possible relationship, researchers assessed the parenting style of 120 families with third graders in a large metropolitan city. Trained raters made two-hour home visits to conduct observations of parent/child interactions. That data was later compared with the children’s grades.

The results revealed that children raised in authoritative households had the highest grades of all the groups.

However, because the researchers were not able to randomly assign children to one of the four parenting styles, the internal validity is called into question.

There may be other explanations for the results other than parenting style. For instance, maybe parents that practice authoritative parenting also come from a higher SES demographic than the other parents.

Because they have higher income and education levels, they may put more emphasis on their child’s academic performance. Or, because they have greater financial resources, their children may attend STEM camps and other academically oriented co-curricular and extracurricular classes.

4. Government Reforms and Economic Impact

Government policies can have a tremendous impact on economic development. Making it easier for small businesses to open and easing access to bank loans are examples of policies that can have immediate results. So, a third-world country decides to test policy reforms in two mid-sized cities. One city receives reforms directed at small businesses, while the other receives reforms directed at luring foreign investment.

The government was careful to choose two cities that were similar in terms of size and population demographics.

Over the next five years, economic growth data were collected at the end of each fiscal year. The measures consisted of housing sales, local GDP, and unemployment rates.

At the end of five years, the results indicated that small business reforms had a much larger impact on economic growth than foreign investment. The city that received small business reforms saw an increase in housing sales and GDP, and a drop in unemployment. The other city saw stagnant sales and GDP, and a slight increase in unemployment.

On the surface, it appears that small business reform is the better way to go. However, a more careful analysis revealed that the economic improvement observed in the one city was actually the result of two multinational real estate firms entering the market. The two firms specialize in converting dilapidated warehouses into shopping centers and residential properties.

5. Gender and Meditation

Meditation can help relieve stress and reduce symptoms of depression and anxiety. It is a simple, easy-to-use technique that just about anyone can try. However, are the benefits real, or do people merely believe it helps? To find out, a team of counselors designed a study to put it to the test.

Since they believe that women are more likely to benefit than men, they recruit both males and females to be in their study.

Both groups were trained in meditation by a licensed professional. The training took place over three weekends. Participants were instructed to practice at home at least four times a week for the next three months and to keep a journal entry each time they meditated.

At the end of the three months, physical and psychological health data were collected on all participants. For physical health, participants’ blood pressure was measured. For psychological health, participants filled out a happiness scale, and the emotional tone of their diaries was examined.

The results showed that meditation worked better for women than men. Women had lower blood pressure, scored higher on the happiness scale, and wrote more positive statements in their diaries.

Unfortunately, the researchers noticed that the men apparently did not practice meditation as much as they should have. They had very few journal entries, and in post-study interviews a vast majority of the men admitted that they had practiced meditation only about half the time.

The lack of practice is an extraneous variable. Perhaps if the men had adhered to the study instructions, their scores on the physical and psychological measures would have been as high as, or higher than, the women’s.

The quasi-experiment is used when researchers want to study the effects of a variable/treatment on different groups of people. Groups can be defined based on gender, parenting style, SES demographics, or any number of other variables.

The problem is that when interpreting the results, even clear differences between the groups cannot be attributed to the treatment.

The groups may differ in ways other than the grouping variables. For example, the leadership training in the study above might have improved the textile start-ups’ performance had the techniques actually been applied. Similarly, the men may have benefited from meditation as much as the women had they practiced as instructed.

Baumrind, D. (1991). Parenting styles and adolescent development. In R. M. Lerner, A. C. Peterson, & J. Brooks-Gunn (Eds.), Encyclopedia of Adolescence (pp. 746–758). New York: Garland Publishing, Inc.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings. Boston, MA: Houghton Mifflin.

Maciejewski, M. L. (2020). Quasi-experimental design. Biostatistics & Epidemiology, 4(1), 38–47. https://doi.org/10.1080/24709360.2018.1477468

Thyer, B. (2012). Quasi-experimental research designs. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195387384.001.0001


Over the past couple of decades, teacher effectiveness has become a major focus to improve students’ mathematics learning. Teacher professional development (PD), in particular, has been at the center of efforts aimed at improving teaching practice and the mathematics learning of students. However, empirical evidence for the effectiveness of PD for improving student achievement is mixed and there is limited research-based knowledge about the features of effective PD not only in mathematics but also in other subject areas. In this quasi-experimental study, I examined the effect of a Math and Science Partnership (MSP) PD on student achievement trajectories. Results of hierarchical growth models for this study revealed that content-focused (Algebra1 and Geometry), ongoing PD was effective for improving student achievement (relative to a matched comparison group) in Algebra1 (both for high and low performing students) and in Geometry (for low performing students only). There was no effect of PD on students’ achievement in Algebra2, which was not the focus of the MSP-PD. By demonstrating an effect of PD on student achievement, this study contributes to our growing knowledge base about features of PD programs that appear to contribute to their effectiveness. Moreover, it provides a case study showing how the research design might contribute in important ways to the ability to detect an effect of PD -if one exists- on student achievement. For example, given the data I had from the district, I was able to examine student growth within all Algebra 1, Geometry and Algebra 2 courses, while matching classrooms on aggregate student characteristics and school contexts. This allowed me to eliminate the potential confound of curriculum and to utilize longitudinal models to examine PD effects on students’ growth (relative to a comparison sample) for matched classrooms. Findings of this study have implications for educational practitioners and policymakers in their efforts to design and support effective PD programs in mathematics, and these features likely transfer to the design of PD in all subject areas. Moreover, for educational researchers this study suggests potential strategies for demonstrating robust research-based evidence for the effectiveness of PD on student learning.


7.3 Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design, then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.

Pretest-Posttest Design

In a pretest-posttest design, the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.
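To make the comparison concrete, here is a minimal sketch in Python of how the pretest and posttest scores from such a study might be analyzed. The attitude scale, sample size, and effect size are all invented for illustration, not taken from any real study.

```python
# Minimal pretest-posttest comparison on simulated data (all numbers invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 60

# Hypothetical anti-drug attitude scores (higher = more negative toward drugs)
pretest = rng.normal(loc=50, scale=10, size=n_students)
posttest = pretest + rng.normal(loc=4, scale=8, size=n_students)  # assumed improvement

# Paired t-test: did scores change from pretest to posttest?
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"mean change = {np.mean(posttest - pretest):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")

# Even a significant change cannot be attributed to the program alone:
# history, maturation, and regression to the mean remain uncontrolled.
```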

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history. Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation. Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean. This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission. This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
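A quick simulation can make regression to the mean visible. The sketch below (Python; every parameter is an assumption chosen for illustration) models test scores as a stable "true ability" plus measurement noise, selects the lowest scorers on a first test, and shows that their average rises on a retest even though no training occurred.

```python
# Simulating regression to the mean: extreme scorers on a noisy test tend to
# score closer to the mean on a retest, even with no intervention at all.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_ability = rng.normal(loc=100, scale=10, size=n)   # latent ability (assumed model)
test1 = true_ability + rng.normal(scale=10, size=n)    # noisy first measurement
test2 = true_ability + rng.normal(scale=10, size=n)    # noisy retest, no treatment

selected = test1 < np.percentile(test1, 10)            # "chosen for special training"
print(f"selected group, test 1 mean: {test1[selected].mean():.1f}")
print(f"selected group, test 2 mean: {test2[selected].mean():.1f}")  # noticeably higher
```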

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:

http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.

Figure: Hans Eysenck. In a classic 1952 article, researcher Hans Eysenck pointed out the shortcomings of the simple pretest-posttest design for evaluating the effectiveness of psychotherapy. (Image: Wikimedia Commons, CC BY-SA 3.0.)

Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Figure 7.5 A Hypothetical Interrupted Time-Series Design

The top panel shows data that suggest that the treatment caused a reduction in absences. The bottom panel shows data that suggest that it did not.
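For readers who want to see how such data might be analyzed, here is a rough segmented-regression sketch in Python on made-up weekly absence counts mirroring the attendance example. The counts, the week of the interruption, and the model terms are all assumptions for illustration.

```python
# Segmented regression on an invented weekly absence series.
# The 'after' term estimates the immediate level change when the instructor
# begins taking attendance at week 8; 'weeks_since' allows a new post-trend.
import pandas as pd
import statsmodels.formula.api as smf

absences = [8, 9, 7, 8, 9, 8, 9,     # weeks 1-7, before the treatment
            3, 4, 3, 2, 4, 3, 3]     # weeks 8-14, after the treatment
df = pd.DataFrame({"week": range(1, 15), "absences": absences})
df["after"] = (df["week"] >= 8).astype(int)
df["weeks_since"] = (df["week"] - 8).clip(lower=0)

model = smf.ols("absences ~ week + after + weeks_since", data=df).fit()
print(model.params)   # 'after' is the estimated drop at the interruption
```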

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.
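As a rough sketch of how the two schools' data might be analyzed, the Python snippet below compares pretest-to-posttest change between a treatment school and a comparison school. The scores, sample sizes, and effects are simulated and purely illustrative.

```python
# Nonequivalent-groups pretest-posttest sketch with simulated data.
# School A receives the antidrug program, school B does not (numbers invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre_a = rng.normal(50, 10, 80)
post_a = pre_a + rng.normal(6, 8, 80)   # assumed program effect plus shared history
pre_b = rng.normal(52, 10, 80)
post_b = pre_b + rng.normal(2, 8, 80)   # comparison school changes only via history

change_a = post_a - pre_a
change_b = post_b - pre_b

# Comparing the *changes*, rather than the posttest means alone, differences
# out history and maturation effects that are shared by the two schools.
t_stat, p_value = stats.ttest_ind(change_a, change_b)
print(f"mean change A = {change_a.mean():.2f}, B = {change_b.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```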

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest designs, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two college professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Discussion: Imagine that a group of obese children is recruited for a study in which their weight is measured, then they participate for 3 months in a program that encourages them to be more active, and finally their weight is measured again. Explain how each of the following might affect the results:

  • regression to the mean
  • spontaneous remission

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings. Boston, MA: Houghton Mifflin.

Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16, 319–324.

Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66, 139–146.

Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy. Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Quasi-experimental Research: What It Is, Types & Examples


Much like an actual experiment, quasi-experimental research tries to demonstrate a cause-and-effect link between a dependent and an independent variable. A quasi-experiment, on the other hand, does not depend on random assignment, unlike an actual experiment. The subjects are sorted into groups based on non-random variables.

What is Quasi-Experimental Research?

“Quasi” means “resembling.” In quasi-experimental research, the independent variable is manipulated, but individuals are not randomly allocated to conditions or orders of conditions. As a result, quasi-experimental research is research that appears to be experimental but is not.

Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research avoids the directionality problem. However, because individuals are not assigned to conditions at random, there are likely to be additional differences across conditions.

As a result, in terms of internal validity, quasi-experiments fall somewhere between correlational research and true experiments.

The key component of a true experiment is randomly allocated groups. This means that each person has an equal chance of being assigned to the experimental group or the control group, and therefore an equal chance of receiving (or not receiving) the manipulation.

Simply put, a quasi-experiment is not a true experiment, because it lacks randomly allocated groups. Why is random allocation so crucial, given that it is the only distinction between quasi-experimental and true experimental research?

Let’s use an example to illustrate the point. Suppose we want to discover how a new psychological therapy affects depressed patients. In a true experiment, you would randomly split the psych ward in half, with one half receiving the new psychotherapy and the other half receiving the standard depression treatment.

The physicians would then compare the outcomes of the new treatment with those of the standard treatment to see whether it is more effective. Doctors, however, are unlikely to agree to this true experiment if they believe it is unethical to give one group a promising new treatment while withholding it from the other.

A quasi-experimental study is useful in this case. Instead of allocating patients at random, you identify pre-existing groups of psychotherapists in the hospital. Some counselors will be eager to try the new therapy, while others will prefer to stick to the standard approach.

These pre-existing groups can be used to compare the symptom development of individuals who received the novel therapy with that of individuals who received the normal course of treatment, even though the groups were not chosen at random.

If substantial pre-existing differences between the groups can be ruled out or accounted for, you can be reasonably confident that any differences in outcomes are attributable to the treatment rather than to extraneous variables.

As mentioned before, quasi-experimental research entails manipulating an independent variable without randomly assigning people to conditions or orders of conditions. Non-equivalent group designs, pretest-posttest designs, and regression discontinuity designs are a few of the essential types.

What are quasi-experimental research designs?

Quasi-experimental research designs resemble true experimental designs but do not give the researcher full control over the independent variable(s) in the way that true experimental designs do.

In a quasi-experimental design, the researcher manipulates or observes an independent variable, but the participants are not assigned to groups at random. Instead, people are placed into groups based on characteristics they already have, such as their age, gender, or prior exposure to a certain stimulus.

Because the assignments are not random, it is harder to draw conclusions about cause and effect than in a real experiment. However, quasi-experimental designs are still useful when randomization is not possible or ethical.

The true experimental design may be impossible to accomplish or just too expensive, especially for researchers with few resources. Quasi-experimental designs enable you to investigate an issue by utilizing data that has already been paid for or gathered by others (often the government). 

Because they are typically conducted with real-world data and interventions, quasi-experiments tend to have higher external validity than most true experiments; and because they allow some control over confounding variables, they have higher internal validity than other non-experimental research, although less than true experiments.

Is quasi-experimental research quantitative or qualitative?

Quasi-experimental research is a quantitative research method. It involves numerical data collection and statistical analysis. Quasi-experimental research compares groups with different circumstances or treatments to find cause-and-effect links. 

It draws statistical conclusions from quantitative data. Qualitative data can enhance quasi-experimental research by revealing participants’ experiences and opinions, but quantitative data is the method’s foundation.

Quasi-experimental research types

There are many different sorts of quasi-experimental designs. Three of the most popular varieties are described below: non-equivalent groups designs, regression discontinuity designs, and natural experiments.

Design of Non-Equivalent Groups

Discontinuity in Regression

Natural Experiments

In one natural experiment, the organizers of a program could not afford to pay everyone who qualified, so they used a random lottery to distribute slots. Researchers were able to investigate the program’s impact by using enrolled people as a treatment group and those who were qualified but did not win the lottery as a comparison group.




Quasi-Experimental Research Design – Types, Methods


Quasi-Experimental Design

Quasi-experimental design is a research method that seeks to evaluate the causal relationships between variables, but without the full control over the independent variable(s) that is available in a true experimental design.

In a quasi-experimental design, the researcher uses an existing group of participants that is not randomly assigned to the experimental and control groups. Instead, the groups are selected based on pre-existing characteristics or conditions, such as age, gender, or the presence of a certain medical condition.

Types of Quasi-Experimental Design

There are several types of quasi-experimental designs that researchers use to study causal relationships between variables. Here are some of the most common types:

Non-Equivalent Control Group Design

This design involves selecting two groups of participants that are similar in every way except for the independent variable(s) that the researcher is testing. One group receives the treatment or intervention being studied, while the other group does not. The two groups are then compared to see if there are any significant differences in the outcomes.

Interrupted Time-Series Design

This design involves collecting data on the dependent variable(s) over a period of time, both before and after an intervention or event. The researcher can then determine whether there was a significant change in the dependent variable(s) following the intervention or event.

Pretest-Posttest Design

This design involves measuring the dependent variable(s) before and after an intervention or event, but without a control group. This design can be useful for determining whether the intervention or event had an effect, but it does not allow for control over other factors that may have influenced the outcomes.

Regression Discontinuity Design

This design involves selecting participants based on a specific cutoff point on a continuous variable, such as a test score. Participants on either side of the cutoff point are then compared to determine whether the intervention or event had an effect.

Natural Experiments

This design involves studying the effects of an intervention or event that occurs naturally, without the researcher’s intervention. For example, a researcher might study the effects of a new law or policy that affects certain groups of people. This design is useful when true experiments are not feasible or ethical.

Data Analysis Methods

Here are some data analysis methods that are commonly used in quasi-experimental designs:

Descriptive Statistics

This method involves summarizing the data collected during a study using measures such as mean, median, mode, range, and standard deviation. Descriptive statistics can help researchers identify trends or patterns in the data, and can also be useful for identifying outliers or anomalies.
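As a small illustration, the Python snippet below computes these summary measures for a hypothetical outcome variable; the scores are invented.

```python
# Descriptive summary of a hypothetical outcome measure (values invented).
import pandas as pd

scores = pd.Series([72, 85, 78, 90, 66, 85, 74, 81, 95, 69])

print("mean:  ", scores.mean())
print("median:", scores.median())
print("mode:  ", scores.mode().tolist())     # may contain several values
print("range: ", scores.max() - scores.min())
print("std:   ", scores.std())               # sample standard deviation (ddof=1)
```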

Inferential Statistics

This method involves using statistical tests to determine whether the results of a study are statistically significant. Inferential statistics can help researchers make generalizations about a population based on the sample data collected during the study. Common statistical tests used in quasi-experimental designs include t-tests, ANOVA, and regression analysis.
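The snippet below sketches two of the tests mentioned, an independent-samples t-test and a one-way ANOVA, on simulated group data; the group means, spreads, and sizes are assumptions made purely for illustration.

```python
# Common inferential tests on simulated group outcomes (numbers invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treatment = rng.normal(75, 10, 40)
control = rng.normal(70, 10, 40)

# Independent-samples t-test comparing two nonequivalent groups
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# One-way ANOVA when there are more than two groups
third_group = rng.normal(72, 10, 40)
f_stat, p_anova = stats.f_oneway(treatment, control, third_group)
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f}")
```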

Propensity Score Matching

This method is used to reduce bias in quasi-experimental designs by matching participants in the intervention group with participants in the control group who have similar characteristics. This can help to reduce the impact of confounding variables that may affect the study’s results.
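Here is a simplified sketch of the idea in Python: a logistic regression estimates each participant's propensity to be in the intervention group from observed covariates, and each treated unit is then paired with the control unit that has the closest score. The covariates, the model, and the one-to-one nearest-neighbor matching are illustrative assumptions; a real analysis would also check covariate balance after matching.

```python
# Rough propensity score matching sketch on simulated covariates (data invented).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 500
age = rng.normal(40, 12, n)
income = rng.normal(50, 15, n)
X = np.column_stack([age, income])

# Treatment uptake depends on the covariates, so the groups are nonequivalent
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 40) + 0.02 * (income - 50))))
treated = (rng.random(n) < p_treat).astype(int)

# 1. Estimate propensity scores from the covariates
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to the control unit with the nearest propensity score
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated == 1].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]
print(f"{treated.sum()} treated units matched to "
      f"{len(set(matched_controls))} distinct control units")
```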

Difference-in-differences Analysis

This method is used to compare the difference in outcomes between two groups over time. Researchers can use this method to determine whether a particular intervention has had an impact on the target population over time.
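A common way to implement this is an OLS regression with a group-by-time interaction term; the coefficient on the interaction is the difference-in-differences estimate. The sketch below uses statsmodels on simulated data, and every number in it is an assumption chosen for illustration.

```python
# Difference-in-differences sketch with a group-by-period interaction (data invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "treated": np.repeat([0, 1], n // 2),   # which group (not randomly assigned)
    "post": np.tile([0, 1], n // 2),        # before vs. after the intervention
})
df["outcome"] = (
    50
    + 3 * df["treated"]                     # pre-existing group difference
    + 2 * df["post"]                        # time trend shared by both groups
    + 5 * df["treated"] * df["post"]        # assumed treatment effect
    + rng.normal(0, 4, n)
)

model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])         # difference-in-differences estimate (~5)
```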

Interrupted Time Series Analysis

This method is used to examine the impact of an intervention or treatment over time by comparing data collected before and after the intervention or treatment. This method can help researchers determine whether an intervention had a significant impact on the target population.

Regression Discontinuity Analysis

This method is used to compare the outcomes of participants who fall on either side of a predetermined cutoff point. This method can help researchers determine whether an intervention had a significant impact on the target population.
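The sketch below illustrates the idea in Python: units below an assumed cutoff on a running variable receive the intervention, and a local linear regression within a bandwidth around the cutoff estimates the jump in outcomes at the threshold. The cutoff, bandwidth, and data are all invented.

```python
# Regression discontinuity sketch: estimate the jump in outcomes at a cutoff.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 1000
score = rng.uniform(0, 100, n)             # running variable (e.g., a placement test)
treated = (score < 40).astype(int)         # students below the cutoff get tutoring
outcome = 30 + 0.5 * score + 8 * treated + rng.normal(0, 5, n)

df = pd.DataFrame({"score": score, "treated": treated, "outcome": outcome})
df["centered"] = df["score"] - 40          # center the running variable at the cutoff

window = df[df["centered"].abs() < 10]     # local window around the cutoff
model = smf.ols("outcome ~ centered * treated", data=window).fit()
print(model.params["treated"])             # estimated discontinuity (~8)
```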

Steps in Quasi-Experimental Design

Here are the general steps involved in conducting a quasi-experimental design:

  • Identify the research question: Determine the research question and the variables that will be investigated.
  • Choose the design: Choose the appropriate quasi-experimental design to address the research question. Examples include the pretest-posttest design, non-equivalent control group design, regression discontinuity design, and interrupted time series design.
  • Select the participants: Select the participants who will be included in the study. Participants should be selected based on specific criteria relevant to the research question.
  • Measure the variables: Measure the variables that are relevant to the research question. This may involve using surveys, questionnaires, tests, or other measures.
  • Implement the intervention or treatment: Implement the intervention or treatment to the participants in the intervention group. This may involve training, education, counseling, or other interventions.
  • Collect data: Collect data on the dependent variable(s) before and after the intervention. Data collection may also include collecting data on other variables that may impact the dependent variable(s).
  • Analyze the data: Analyze the data collected to determine whether the intervention had a significant impact on the dependent variable(s).
  • Draw conclusions: Draw conclusions about the relationship between the independent and dependent variables. If the results suggest a causal relationship, then appropriate recommendations may be made based on the findings.

Quasi-Experimental Design Examples

Here are some real-world examples of quasi-experimental designs:

  • Evaluating the impact of a new teaching method: In this study, a group of students are taught using a new teaching method, while another group is taught using the traditional method. The test scores of both groups are compared before and after the intervention to determine whether the new teaching method had a significant impact on student performance.
  • Assessing the effectiveness of a public health campaign: In this study, a public health campaign is launched to promote healthy eating habits among a targeted population. The behavior of the population is compared before and after the campaign to determine whether the intervention had a significant impact on the target behavior.
  • Examining the impact of a new medication: In this study, a group of patients is given a new medication, while another group is given a placebo. The outcomes of both groups are compared to determine whether the new medication had a significant impact on the targeted health condition.
  • Evaluating the effectiveness of a job training program: In this study, a group of unemployed individuals is enrolled in a job training program, while another group is not enrolled in any program. The employment rates of both groups are compared before and after the intervention to determine whether the training program had a significant impact on the employment rates of the participants.
  • Assessing the impact of a new policy: In this study, a new policy is implemented in a particular area, while another area does not have the new policy. The outcomes of both areas are compared before and after the intervention to determine whether the new policy had a significant impact on the targeted behavior or outcome.

Applications of Quasi-Experimental Design

Here are some applications of quasi-experimental design:

  • Educational research: Quasi-experimental designs are used to evaluate the effectiveness of educational interventions, such as new teaching methods, technology-based learning, or educational policies.
  • Health research: Quasi-experimental designs are used to evaluate the effectiveness of health interventions, such as new medications, public health campaigns, or health policies.
  • Social science research: Quasi-experimental designs are used to investigate the impact of social interventions, such as job training programs, welfare policies, or criminal justice programs.
  • Business research: Quasi-experimental designs are used to evaluate the impact of business interventions, such as marketing campaigns, new products, or pricing strategies.
  • Environmental research: Quasi-experimental designs are used to evaluate the impact of environmental interventions, such as conservation programs, pollution control policies, or renewable energy initiatives.

When to use Quasi-Experimental Design

Here are some situations where quasi-experimental designs may be appropriate:

  • When the research question involves investigating the effectiveness of an intervention, policy, or program: In situations where it is not feasible or ethical to randomly assign participants to intervention and control groups, quasi-experimental designs can be used to evaluate the impact of the intervention on the targeted outcome.
  • When the sample size is small: In situations where the sample size is small, it may be difficult to randomly assign participants to intervention and control groups. Quasi-experimental designs can be used to investigate the impact of an intervention without requiring a large sample size.
  • When the research question involves investigating a naturally occurring event: In some situations, researchers may be interested in investigating the impact of a naturally occurring event, such as a natural disaster or a major policy change. Quasi-experimental designs can be used to evaluate the impact of the event on the targeted outcome.
  • When the research question involves investigating a long-term intervention: In situations where the intervention or program is long-term, it may be difficult to randomly assign participants to intervention and control groups for the entire duration of the intervention. Quasi-experimental designs can be used to evaluate the impact of the intervention over time.
  • When the research question involves investigating the impact of a variable that cannot be manipulated: In some situations, it may not be possible or ethical to manipulate a variable of interest. Quasi-experimental designs can be used to investigate the relationship between the variable and the targeted outcome.

Purpose of Quasi-Experimental Design

The purpose of quasi-experimental design is to investigate the causal relationship between two or more variables when it is not feasible or ethical to conduct a randomized controlled trial (RCT). Quasi-experimental designs attempt to emulate the randomized controlled trial by approximating the comparison between intervention and control groups as closely as possible.

The key purpose of quasi-experimental design is to evaluate the impact of an intervention, policy, or program on a targeted outcome while controlling for potential confounding factors that may affect the outcome. Quasi-experimental designs aim to answer questions such as: Did the intervention cause the change in the outcome? Would the outcome have changed without the intervention? And was the intervention effective in achieving its intended goals?

Quasi-experimental designs are useful in situations where randomized controlled trials are not feasible or ethical. They provide researchers with an alternative method to evaluate the effectiveness of interventions, policies, and programs in real-life settings. Quasi-experimental designs can also help inform policy and practice by providing valuable insights into the causal relationships between variables.

Overall, the purpose of quasi-experimental design is to provide a rigorous method for evaluating the impact of interventions, policies, and programs while controlling for potential confounding factors that may affect the outcome.

Advantages of Quasi-Experimental Design

Quasi-experimental designs have several advantages over other research designs, such as:

  • Greater external validity: Quasi-experimental designs are more likely to have greater external validity than laboratory experiments because they are conducted in naturalistic settings. This means that the results are more likely to generalize to real-world situations.
  • Ethical considerations: Quasi-experimental designs often involve naturally occurring events, such as natural disasters or policy changes. This means that researchers do not need to manipulate variables, which can raise ethical concerns.
  • More practical: Quasi-experimental designs are often more practical than experimental designs because they are less expensive and easier to conduct. They can also be used to evaluate programs or policies that have already been implemented, which can save time and resources.
  • No random assignment: Quasi-experimental designs do not require random assignment, which can be difficult or impossible in some cases, such as when studying the effects of a natural disaster. This means that researchers can still make causal inferences, although they must use statistical techniques to control for potential confounding variables.
  • Greater generalizability: Quasi-experimental designs are often more generalizable than experimental designs because they include a wider range of participants and conditions. This can make the results more applicable to different populations and settings.

Limitations of Quasi-Experimental Design

There are several limitations associated with quasi-experimental designs, which include:

  • Lack of Randomization: Quasi-experimental designs do not involve randomization of participants into groups, which means that the groups being studied may differ in important ways that could affect the outcome of the study. This can lead to problems with internal validity and limit the ability to make causal inferences.
  • Selection Bias: Quasi-experimental designs may suffer from selection bias because participants are not randomly assigned to groups. Participants may self-select into groups or be assigned based on pre-existing characteristics, which may introduce bias into the study.
  • History and Maturation: Quasi-experimental designs are susceptible to history and maturation effects, where the passage of time or other events may influence the outcome of the study.
  • Lack of Control: Quasi-experimental designs may lack control over extraneous variables that could influence the outcome of the study. This can limit the ability to draw causal inferences from the study.
  • Limited Generalizability: Quasi-experimental designs may have limited generalizability because the results may only apply to the specific population and context being studied.


  • J Am Med Inform Assoc
  • v.13(1); Jan-Feb 2006

The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics


Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes.

Quasi-experimental studies encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized controlled trial. Examples of quasi-experimental studies follow. As one example of a quasi-experimental study, a hospital introduces a new order-entry system and wishes to study the impact of this intervention on the number of medication-related adverse events before and after the intervention. As another example, an informatics technology group is introducing a pharmacy order-entry system aimed at decreasing pharmacy costs. The intervention is implemented and pharmacy costs before and after the intervention are measured.

In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical informatics as well as in other medical disciplines. However, little is written about these study designs in the medical literature or in traditional epidemiology textbooks. 1 , 2 , 3 In contrast, the social sciences literature is replete with examples of ways to implement and improve quasi-experimental studies. 4 , 5 , 6

In this paper, we review the different pretest-posttest quasi-experimental study designs, their nomenclature, and the relative hierarchy of these designs with respect to their ability to establish causal associations between an intervention and an outcome. The example of a pharmacy order-entry system aimed at decreasing pharmacy costs will be used throughout this article to illustrate the different quasi-experimental designs. We discuss limitations of quasi-experimental designs and offer methods to improve them. We also perform a systematic review of four years of publications from two informatics journals to determine the number of quasi-experimental studies, classify these studies into their application domains, determine whether the potential limitations of quasi-experimental studies were acknowledged by the authors, and place these studies into the above-mentioned relative hierarchy.

The authors reviewed articles and book chapters on the design of quasi-experimental studies. 4 , 5 , 6 , 7 , 8 , 9 , 10 Most of the reviewed articles referenced two textbooks that were then reviewed in depth. 4 , 6

Key advantages and disadvantages of quasi-experimental studies, as they pertain to the study of medical informatics, were identified. The potential methodological flaws of quasi-experimental medical informatics studies, which have the potential to introduce bias, were also identified. In addition, a summary table outlining a relative hierarchy and nomenclature of quasi-experimental study designs is described. In general, the higher the design is in the hierarchy, the greater the internal validity that the study traditionally possesses because the evidence of the potential causation between the intervention and the outcome is strengthened. 4

We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domains employed were based on categorization used by Yearbooks of Medical Informatics 1992–2005 and were similar to the categories of application domains employed by Annual Symposiums of the American Medical Informatics Association. 11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of lack of randomization, the potential for regression to the mean, the presence of temporal confounders and the mention of another design that would have more internal validity.

All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.

Results and Discussion

What Is a Quasi-experiment?

Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.

Using this basic definition, it is evident that many published studies in medical informatics utilize the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, in medical informatics, researchers often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty of randomizing by location (e.g., by wards), and (4) small available sample size. Each of these reasons is discussed below.

Ethical considerations typically will not allow random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In the area of medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, thus not allowing researchers sufficient time to plan a randomized trial.

For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. So while this randomization is technically possible, it is underused and thus compromises the eventual strength of concluding that an informatics intervention resulted in an outcome. For example, randomly allowing only half of medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.

Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital or portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. When a design using randomized locations is employed successfully, the locations may be different in other respects (confounding variables), and this further complicates the analysis and interpretation.

In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because on average it tends to evenly distribute both known and unknown confounding variables between the intervention and control group. However, when the sample size is small, randomization may not adequately accomplish this balance. Thus, alternative design and analytical methods are often used in place of randomization when only small sample sizes are available.

What Are the Threats to Establishing Causality When Using Quasi-experimental Designs in Medical Informatics?

The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.

Shadish et al. 4 describe nine threats to internal validity, which are outlined in ▶ . Internal validity is defined as the degree to which observed changes in outcomes can be correctly inferred to be caused by an exposure or an intervention. In quasi-experimental studies of medical informatics, we believe that the methodological principles that most often result in alternative explanations for the apparent causal effect include (a) difficulty in measuring or controlling for important confounding variables, particularly unmeasured confounding variables, which can be viewed as a subset of the selection threat in ▶ ; and (b) results being explained by the statistical principle of regression to the mean. Each of these two principles is discussed in turn.

[Table: Threats to Internal Validity; adapted from Shadish et al. 4 (table contents not reproduced).]

An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable. For example, in a study aiming to demonstrate that the introduction of a pharmacy order-entry system led to lower pharmacy costs, there are a number of important potential confounding variables (e.g., severity of illness of the patients, knowledge and experience of the software users, other changes in hospital policy) that may have differed in the preintervention and postintervention time periods ( ▶ ). In a multivariable regression, the first confounding variable could be addressed with severity of illness measures, but the second confounding variable would be difficult if not nearly impossible to measure and control. In addition, potential confounding variables that are unmeasured or immeasurable cannot be controlled for in nonrandomized quasi-experimental study designs and can only be properly controlled by the randomization process in randomized controlled trials.

[Figure: Example of confounding. To get the true effect of the intervention of interest, we need to control for the confounding variable.]
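To make the idea of statistical adjustment concrete, here is a minimal sketch (not taken from the paper) of a multivariable regression in Python, using simulated data in which illness severity confounds a before/after comparison of pharmacy costs; all variable names and numbers are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: 'period' marks pre (0) vs. post (1) intervention admissions.
# Sicker patients (higher severity) are more likely to appear in the post period
# and also generate higher pharmacy costs, so severity confounds the comparison.
severity = rng.normal(loc=5, scale=2, size=n)
period = rng.binomial(1, 1 / (1 + np.exp(-(severity - 5))), size=n)
costs = 1000 + 150 * severity - 80 * period + rng.normal(0, 100, n)

df = pd.DataFrame({"costs": costs, "period": period, "severity": severity})

# Unadjusted comparison mixes the intervention effect with the severity effect.
unadjusted = smf.ols("costs ~ period", data=df).fit()

# Adjusting for the measured confounder recovers an estimate closer to the true -80.
adjusted = smf.ols("costs ~ period + severity", data=df).fit()

print(unadjusted.params["period"], adjusted.params["period"])
```

Note that this only works for confounders that were actually measured; as the paragraph above points out, unmeasured or immeasurable confounders cannot be handled this way.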

Another important threat to establishing causality is regression to the mean. 12 , 13 , 14 This widespread statistical phenomenon can result in wrongly concluding that an effect is due to the intervention when in reality it is due to chance. The phenomenon was first described in 1886 by Francis Galton who measured the adult height of children and their parents. He noted that when the average height of the parents was greater than the mean of the population, the children tended to be shorter than their parents, and conversely, when the average height of the parents was shorter than the population mean, the children tended to be taller than their parents.

In medical informatics, what often triggers the development and implementation of an intervention is a rise in the rate above the mean or norm. For example, increasing pharmacy costs and adverse events may prompt hospital informatics personnel to design and implement pharmacy order-entry systems. If this rise in costs or adverse events is really just an extreme observation that is still within the normal range of the hospital's pharmaceutical costs (i.e., the mean pharmaceutical cost for the hospital has not shifted), then the statistical principle of regression to the mean predicts that these elevated rates will tend to decline even without intervention. However, often informatics personnel and hospital administrators cannot wait passively for this decline to occur. Therefore, hospital personnel often implement one or more interventions, and if a decline in the rate occurs, they may mistakenly conclude that the decline is causally related to the intervention. In fact, an alternative explanation for the finding could be regression to the mean.
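A small simulation, purely illustrative and not part of the original article, can show the phenomenon: if months with unusually high pharmacy costs are selected for attention, the following months tend to drift back toward the long-run mean even when nothing is changed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly pharmacy costs: a stable mean with random month-to-month noise.
true_mean, noise_sd, n_months = 100_000, 10_000, 10_000
month1 = rng.normal(true_mean, noise_sd, n_months)
month2 = rng.normal(true_mean, noise_sd, n_months)  # no intervention at all

# Look only at months whose costs were alarmingly high (top 5%).
alarm = month1 > np.percentile(month1, 95)

print(f"Mean cost in alarming months:     {month1[alarm].mean():,.0f}")
print(f"Mean cost in the following month: {month2[alarm].mean():,.0f}")
# The follow-up months fall back toward 100,000 even though nothing was done,
# which is exactly the pattern an uncontrolled pre-post study could misread
# as an intervention effect.
```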

What Are the Different Quasi-experimental Study Designs?

In the social sciences literature, quasi-experimental studies are divided into four study design groups 4 , 6 :

  • Quasi-experimental designs without control groups
  • Quasi-experimental designs that use control groups but no pretest
  • Quasi-experimental designs that use control groups and pretests
  • Interrupted time-series designs

There is a relative hierarchy within these categories of study designs, with category D studies being sounder than categories C, B, or A in terms of establishing causality. Thus, if feasible from a design and implementation point of view, investigators should aim to design studies that fall into the higher-rated categories. Shadish et al. 4 discuss 17 possible designs, with seven designs falling into category A, three designs in category B, six designs in category C, and one major design in category D. In our review, we determined that most medical informatics quasi-experiments could be characterized by 11 of the 17 designs, with six study designs in category A, one in category B, three in category C, and one in category D, because the other study designs were not used or feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in ▶ .

[Table: Relative Hierarchy of Quasi-experimental Designs. O = observational measurement; X = intervention under study; time moves from left to right.]

The nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy that exists in the evidence-based literature that assigns a hierarchy to randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in ▶ is not absolute in that in some cases, it may be infeasible to perform a higher level study. For example, there may be instances where an A6 design established stronger causality than a B1 design. 15 , 16 , 17

Quasi-experimental Designs without Control Groups

Design A1: X O1

Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs that are discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced since it may be difficult to have pretest measurements due to time, technical, or cost constraints.

Design A2: O1 X O2

This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the “control” period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.

Design A3: O1 O2 X O3

The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence that can be used to refute the phenomenon of regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 < O2 and O1), if one had two preintervention measurements of pharmacy costs (O1 and O2) and they were both elevated, this would suggest that there was a decreased likelihood that O3 is lower due to confounding and regression to the mean. Similarly, extending this study design by increasing the number of measurements postintervention could also help to provide evidence against confounding and regression to the mean as alternate explanations for observed associations.

Design A4: (O1a, O1b) X (O2a, O2b)

This design involves the inclusion of a nonequivalent dependent variable ( b ) in addition to the primary dependent variable ( a ). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, then the data are more convincing than if just pharmacy costs were measured.

The Removed-Treatment Design

Design A5: O1 X O2 O3 [X removed] O4

This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measure (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome in the presence of the intervention and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after pharmacy order-entry system introduction (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3 and closer to O1). In addition, there are often ethical issues in this design in terms of removing an intervention that may be providing benefit.

The Repeated-Treatment Design

Design A6: O1 X O2 [X removed] O3 X O4

The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As for design A5, the assumption must be made that the effect of the intervention is transient, which is most often applicable to medical informatics interventions. Because in this design, subjects may serve as their own controls, this may yield greater statistical efficiency with fewer numbers of subjects.

Quasi-experimental Designs That Use a Control Group but No Pretest

Design B1:
Intervention group: X O1
Control group: O2

An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps prevent certain threats to validity including the ability to statistically adjust for confounding variables. Because in this study design, the two groups may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU after the intervention. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU. Also, the absence of pretest measurements comparing the SICU to the MICU makes it difficult to know whether differences in O1 and O2 are due to the intervention or due to other differences in the two units (confounding variables).

Quasi-experimental Designs That Use Control Groups and Pretests

The reader should note that with all the studies in this category, the intervention is not randomized; the control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that the more similar the intervention and control groups are at pretest, the smaller the likelihood that important confounding variables differ between the two groups.

Design C1:
Intervention group: O1a X O2a
Control group: O1b O2b

The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it suggests that it is less likely that there are differences in the important confounding variables between the two units. If MICU postintervention costs (O2a) are less than preintervention MICU costs (O1a), but SICU costs (O1b) and (O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.

Design C2:
Intervention group: O1a O2a X O3a
Control group: O1b O2b O3b

In this design, the pretests are administered at two different times. The main advantage of this design is that it controls for potentially different time-varying confounding effects in the intervention group and the comparison group. In our example, measuring points O1 and O2 would allow for the assessment of time-dependent changes in pharmacy costs, e.g., due to differences in experience of residents, preintervention between the intervention and control group, and whether these changes were similar or different.

Design C3:
Group 1: O1a X O2a O3a
Group 2: O1b O2b X O3b

With this study design, the researcher administers an intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility in two different settings. This study design is not limited to two groups; in fact, the study results have greater validity if the intervention effect is replicated in different groups at multiple times. In the example of a pharmacy order-entry system, one could implement or intervene in the MICU and then at a later time, intervene in the SICU. This latter design is often very applicable to medical informatics where new technology and new software is often introduced or made available gradually.

Interrupted Time-Series Designs

Design D1: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, statistically, there is a more robust analytic capability, and there is the ability to detect changes in the slope or intercept as a result of the intervention in addition to a change in the mean values. 18 A change in intercept could represent an immediate effect while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs preintervention and O6 through O10 monthly pharmacy costs post the introduction of the pharmacy order-entry system. Interrupted time-series designs also can be further strengthened by incorporating many of the design features previously mentioned in other categories (such as removal of the treatment, inclusion of a nondependent outcome variable, or the addition of a control group).
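As a rough illustration of how such data might be analyzed, the following sketch (simulated data and hypothetical variable names, not taken from the paper) fits a segmented regression in which one term captures the immediate level change and another the change in slope after the intervention.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Ten equally spaced observations: five pre-intervention, five post-intervention.
months = np.arange(1, 11)
post = (months > 5).astype(int)              # 0 before, 1 after the intervention
months_since = np.clip(months - 5, 0, None)  # time elapsed since the intervention

# Simulated monthly pharmacy costs with an immediate drop and a steeper post slope.
costs = 100 + 2 * months - 10 * post - 3 * months_since + rng.normal(0, 1.5, 10)
df = pd.DataFrame({"costs": costs, "months": months,
                   "post": post, "months_since": months_since})

# Segmented regression: 'post' captures the level (intercept) change,
# 'months_since' captures the change in slope after the intervention.
model = smf.ols("costs ~ months + post + months_since", data=df).fit()
print(model.params)
```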

Systematic Review Results

The results of the systematic review are in ▶ . In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five studies were of category B, two studies were of category C, and no studies were of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had data collected that could have been analyzed as an interrupted time-series analysis. Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design. In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.

[Table: Systematic Review of Four Years of Quasi-designs in JAMIA and IJMI. JAMIA = Journal of the American Medical Informatics Association; IJMI = International Journal of Medical Informatics.]

In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design is a higher order study design than other studies in category A. The counterbalanced design is sometimes referred to as a Latin-square arrangement. In this design, all subjects receive all the different interventions but the order of intervention assignment is not random. 19 This design can only be used when the intervention is compared against some existing standard, for example, if a new PDA-based order entry system is to be compared to a computer terminal–based order entry system. In this design, all subjects receive the new PDA-based order entry system and the old computer terminal-based order entry system. The counterbalanced design is a within-participants design, where the order of the intervention is varied (e.g., one group is given software A followed by software B and another group is given software B followed by software A). The counterbalanced design is typically used when the available sample size is small, thus preventing the use of randomization. This design also allows investigators to study the potential effect of ordering of the informatics intervention.

Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs being more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.


Dr. Harris was supported by NIH grants K23 AI01752-01A1 and R01 AI60859-01A1. Dr. Perencevich was supported by a VA Health Services Research and Development Service (HSR&D) Research Career Development Award (RCD-02026-1). Dr. Finkelstein was supported by NIH grant RO1 HL71690.


What Is a Quasi-Experimental Study: A Detailed Guide on Quasi Experiment & Examples


A quasi-experimental design is very similar to an experimental design, but unlike the latter, it lacks the full control of the experimental method. In a quasi-experimental design, the researcher does not randomly assign participants to groups, as would be done in a true experimental design. Instead, participants are assigned to groups based on existing characteristics, such as age, gender, or medical condition.

You’re in the right place if you need to know what a quasi-experimental design is. We got you when it comes to everything academic! For example, we know that quasi-experimental designs create non-random groups. Surprised, since it is still an experiment? Here’s another fact: such a design has slightly lower internal validity , but we can still use it. You’ll have to read further to understand why, so carry on, because we have lots of exciting things prepared!

What Is a Quasi Experimental Design: Definition

Quasi-experimental design definition is relatively simple, trust us. To make it even easier to remember, here’s a brief list of things you should know about today’s subject:

  • It establishes cause and effect relationships between variables (independent and dependent, to be exact).
  • It is still called an experiment, but researchers assign the treatment to pre-existing, non-random groups.
  • It is usually used to test newly created treatments.
  • Lastly, such studies usually have lower internal validity in research , because groups or subjects are selected and assigned non-randomly.

But your results can still be considered credible. So make sure to consider this design for your study!

Advantages of Quasi Experimental Design

Believe it or not, quasi-experimental design has lots of advantages. And, of course, here’s a quick list for you to consider:

  • They have greater external validity . Here’s no wonder, as such studies are primarily conducted in real-world settings, while laboratory studies are overcontrolled and further removed from the real world.
  • They also have higher internal validity than non-experimental studies, because the researcher has more control over confounding variables.

P.S. A confounding variable is a third variable that influences your study but is not controlled or chosen by the researcher.

Types of Quasi Experimental Design

You have an excellent selection when it comes to the types of quasi-experimental design . So we’re here to walk you through several of them. Choose wisely:

  • Natural experiments are the ones where the researcher does not control the assignment. Instead, nature or outside events produce random or random-like group assignment. This type, however, is rarely considered a true experiment.
  • Regression discontinuity assigns treatment using a cutoff, or threshold, chosen by the researcher. One group falls just above the threshold and the other just below it, so the two groups are very similar, yet only one of them receives the treatment.
  • Nonequivalent groups design involves two very similar groups of people. Your compared groups might be of the same gender, class, ethnicity, and so on. The only thing that differentiates them is the treatment: one group receives it, the other does not.

When to Use a Quasi Experimental Study

A quasi-experimental research design can be used for several reasons, and it is up to you to choose which type and design fit your purpose best. Oftentimes, researchers prefer this type for ethical reasons: we cannot always test a hypothesis on actual people, and stopping treatment for some individuals while letting others continue would not be ethical, even for a study. Practical reasons matter as well, since carrying out a true experiment is not always possible; maybe there are too many subjects, or other obstacles get in the way. Everything you need is covered in the following sections, so check them out!

Practical Reasons for Quasi Experimental Research

Quasi-experimental design is a vital substitute when researchers cannot practically complete a true experiment, and several things can stop scientists from going full-on experimental mode. Researchers may have no funds to complete their study, so they can use already collected data instead. If their research question matters, relevant data probably already exist; the government is usually the one that collects information about certain groups. That data might not always be enough, but not everyone can spend the time and funds on original data collection.

Ethical Reasons for Quasi Experimental Research Design

Quasi-experimental design can also be chosen due to certain ethical issues. In most instances, it is highly unethical to withhold treatment from individuals, especially on a non-random basis, so a true experiment cannot be used to determine causal relationships. Here’s where our handy quasi-experiment comes in: you can use it if you’re unsure of the ethical repercussions or if you know that there are some.

Quasi Experimental Design Example

We also want to give you an example of a quasi-experimental design. Out of all our options we've proposed, including regression discontinuity, we'll talk about nonequivalent groups design.

Example of a nonequivalent groups design: You think that adding an extra class of English will help students receive higher grades. You need two groups to test your hypothesis. For example, your first group of eighth-graders will have that class, while your second one won't. You document their grades and check whether anything has changed. At the end of your experiment, you'll have enough data to conclude whether your newly implemented class had any impact on the children's grades.
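If you like seeing things in code, here's a tiny sketch of how you might compare the two classes afterwards. The numbers are made up, and the test shown (Welch's t-test) is just one reasonable choice, not the only one.

```python
# A rough sketch (made-up numbers!) of how you could compare the two classes.
import numpy as np
from scipy import stats

# End-of-term grade improvements for each eighth-grader (post minus pre).
extra_class = np.array([5, 7, 3, 6, 8, 4, 5, 6, 7, 5])    # class with the extra English lesson
regular_class = np.array([2, 3, 1, 4, 2, 3, 2, 1, 3, 2])  # class without it

# Welch's t-test: are the mean improvements different between the two groups?
t_stat, p_value = stats.ttest_ind(extra_class, regular_class, equal_var=False)
print(f"mean difference = {extra_class.mean() - regular_class.mean():.1f}, p = {p_value:.4f}")
# Remember: because the classes weren't randomly assigned, a low p-value still
# doesn't rule out other differences between the groups (teacher, schedule, etc.).
```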

Quasi-Experimental Design: Key Takeaways

So there you have it! You now know the essentials of quasi-experimental design and are ready to take on the scholarly world, but let’s do a little recap. We are still examining the relationship between dependent and independent variables , just as we would in any standard experiment. Yet there are several differences to consider. Most importantly, quasi-experimental design is practical and ethical, so it can be used when other methods fail.


Frequently Asked Questions About Quasi Experimental Research Design

1. What are the differences between true experiments and quasi-experiments?

Quasi-experiments and true experiments are quite different, especially when it comes to the circumstances of their use. True experiments rely on random assignment: the researcher uses randomization to form the groups. Quasi-experiments, in contrast, do not use random assignment at all, which often makes them more ethical and practical.

2. Does quasi-experimental have a control group?

A quasi-experiment may or may not have a control group. When one is included, it is a non-randomized comparison group rather than a randomly assigned control group, and that is the main difference from true experiments. The majority of such studies are carried out in natural environments, and these relaxed requirements allow researchers to be more ethical and practical in their studies.

3. Is quasi-experimental design quantitative or qualitative?

Quasi-experimental designs are usually quantitative, since they measure the effect of an intervention on an outcome, although they can be combined with qualitative methods. It is worth remembering, however, that researchers and scholars have no full control over the environment and that group assignment is not random.

4. What is the strongest quasi experimental design?

There are several types you can choose from, but the strongest quasi-experimental designs are generally those that come closest to a randomized experiment, such as regression discontinuity or designs with pretests and nonequivalent control groups. These allow researchers to obtain more accurate data, measure the required variables, and better rule out alternative explanations. That is why such designs are preferred.


Experimental vs Quasi-Experimental Design: Which to Choose?

The sections below summarize the similarities and differences between an experimental and a quasi-experimental study design.

What is a quasi-experimental design?

A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.

Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.

Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.

(for more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example ) .

Examples of quasi-experimental designs include:

  • One-Group Posttest Only Design
  • Static-Group Comparison Design
  • One-Group Pretest-Posttest Design
  • Separate-Sample Pretest-Posttest Design

What is an experimental design?

An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:

  • A treatment group: where participants receive the new intervention which effect we want to study.
  • A control or comparison group: where participants do not receive any intervention at all (or receive some standard intervention).

Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to equalize the 2 groups, and therefore, any observed difference in the study outcome afterwards will only be attributed to the intervention – i.e. it removes confounding.

(for more information, I recommend my other article: Purpose and Limitations of Random Assignment ).
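For illustration only, here is a minimal sketch of how simple 1:1 randomization could be done in code; the function and participant labels are hypothetical, not part of the article.

```python
import random

def randomize(participants, seed=2024):
    """Randomly split participants into treatment and control groups (1:1)."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)          # every participant has the same chance of each group
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize([f"participant_{i}" for i in range(1, 21)])
print(len(treatment), len(control))
```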

Examples of experimental designs include:

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Solomon Four-Group Design
  • Matched Pairs Design
  • Randomized Block Design

When to choose an experimental design over a quasi-experimental design?

Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.

Another problem with quasi-experiments is the natural progression of the disease or condition under study: when studying the effect of an intervention over time, one should account for natural changes, because these can be mistaken for changes in the outcome caused by the intervention. Having a well-chosen control group helps deal with this issue.

So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?

This is what we’re going to discuss next.

When to choose a quasi-experimental design over a true experiment?

The issue with randomness is that it cannot be always achievable.

So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:

  • If being in one group is believed to be harmful for the participants , either because the intervention is harmful (ex. randomizing people to smoking), or because the intervention has questionable efficacy, or, on the contrary, because it is believed to be so beneficial that it would be unethical to put people in the control group (ex. randomizing people to receive an operation).
  • In cases where interventions act on a group of people in a given location , it becomes difficult to adequately randomize subjects (ex. an intervention that reduces pollution in a given area).
  • When working with small sample sizes , as randomized controlled trials require a large sample size to account for heterogeneity among subjects (i.e. to evenly distribute confounding variables between the intervention and control groups).

Further reading

  • Statistical Software Popularity in 40,582 Research Papers
  • Checking the Popularity of 125 Statistical Tests and Models
  • Objectives of Epidemiology (With Examples)
  • 12 Famous Epidemiologists and Why
  • Open access
  • Published: 26 May 2024

Effect of standardized patient simulation-based pedagogics embedded with lecture in enhancing mental status evaluation cognition among nursing students in Tanzania: A longitudinal quasi-experimental study

  • Violeth E. Singano 1 ,
  • Walter C. Millanzi 1 &
  • Fabiola Moshi 1  

BMC Medical Education, volume 24, Article number: 577 (2024)


Nurses around the world are expected to demonstrate competence in performing mental status evaluation (MSE). However, among nursing students there is a gap between what is taught in class and what is practiced on patients with mental illness during MSE performance. It is believed that proper pedagogics may enhance this competence. A longitudinal controlled quasi-experimental study design was used to evaluate the effect of standardized patient simulation-based pedagogics embedded with lectures on enhancing mental status evaluation cognition among nursing students in Tanzania.

A longitudinal controlled quasi-experimental study with a pre- and post-test design studied 311 nursing students in the Tanga and Dodoma regions. The Standardized Patient Simulation-Based Pedagogy (SPSP) package was administered to the intervention group. Both groups underwent baseline and post-test assessments using an interviewer-administered structured questionnaire, benchmarked from previous studies, as the primary data collection tool. The effectiveness of the intervention was assessed using both descriptive and inferential statistics, specifically a difference-in-differences linear mixed model and the t-test, carried out in IBM Statistical Package for Social Sciences (SPSS) software, version 25.
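The analysis itself was run in SPSS; purely as an illustration of the difference-in-differences idea, the sketch below fits an analogous mixed model in Python on simulated data. The group sizes echo the study, but all scores, means, and effect sizes are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

def simulate_arm(n, pre_mean, post_mean, group):
    """Hypothetical pre/post MSE cognition scores (0-28 scale) for one study arm."""
    rows = []
    for i in range(n):
        student_effect = rng.normal(0, 2)  # stable student-level ability
        rows.append({"student": f"{group}_{i}", "group": group, "time": "pre",
                     "score": rng.normal(pre_mean + student_effect, 3)})
        rows.append({"student": f"{group}_{i}", "group": group, "time": "post",
                     "score": rng.normal(post_mean + student_effect, 3)})
    return pd.DataFrame(rows)

# Group sizes follow the study (109 intervention, 202 control); means are invented.
data = pd.concat([simulate_arm(109, 14, 22, "intervention"),
                  simulate_arm(202, 14, 16, "control")], ignore_index=True)

# Difference-in-differences with a random intercept per student: the
# group-by-time interaction estimates the extra post-test gain in the
# intervention arm beyond the change seen in the control arm.
model = smf.mixedlm(
    "score ~ C(group, Treatment('control')) * C(time, Treatment('pre'))",
    data, groups=data["student"],
).fit()
print(model.summary())
```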

The participants’ mean age was 21 years ± 2.69, and 68.81% of the students were female. Following the training, students in the intervention group demonstrated a significant increase in MSE cognition at post-test, with an overall mean score of M ± SD = 22.15 ± 4.42 (p < 0.0001), against M ± SD = 16.52 ± 6.30 for the control group.

A significant difference exists in the levels of cognition among nursing students exposed to Mental Status Evaluation (MSE) materials through Standardized Patient Simulation-Based Pedagogy (SPSP) embedded with lectures. When MSE materials are delivered through SPSP along with lectures, the results are significantly superior to those of lecture pedagogy alone.


Introduction

The most prominent and dominant strategy used to diagnose a mental health problem in a clinical setting is the mental status evaluation (MSE) [ 1 ]. The diagnosis is based on the chief signs and symptoms, and treatment is agreed upon accordingly. The MSE consists of information gathered by the psychiatrist, clinician, and nurse from direct inquiries and passive observation during the interview to determine the patient’s actual mental state. Evaluating the range of mental functions and behaviors at a particular moment gives crucial information for diagnosis and for determining the disease’s severity, trajectory, and responsiveness to treatment. Countries such as the U.S. practice mental status evaluation as a diagnostic tool for the diagnosis of mental illness, and the rest of the world uses similar cataloging from the American Psychiatric Association [ 2 ]. Nursing students are expected to demonstrate competence in performing mental status evaluation. However, there is a gap between what nursing students are taught in class and what they practice on patients with mental illness during MSE performance. Classroom and clinical pedagogies, such as lectures, role play, and demonstrations, are implemented to facilitate MSE competencies among nursing students [ 3 ]. Scholars have reported that conventional pedagogics such as lectures, demonstrations, and portfolios are dominantly used in facilitating MSE learning among nursing students [ 4 ]. The predominant use of conventional pedagogy has been linked to anxiety, frustration, stress, and fear in nursing students when they encounter mentally ill patients during their clinical rotation [ 5 , 6 ].

Educators and health workers argue that these abilities are inadequate to provide evidence-based mental health nursing care and may thus lead to prolonged hospital stays, remissions, drug resistance, and long-term adverse drug effects in mentally ill patients [ 7 ]. A study conducted on the practices of nursing students throughout clinical teaching in mental health hospitals stated that there is a mismatch between theory and practice, insufficient instruction approaches, and an absence of nurses and coaching staff to appropriately facilitate MSE learning for nursing students [ 8 ]. A study by [ 9 ] on designing instruction to teach MSE reported that nursing students taught MSE using conventional clinical pedagogics demonstrated inability to diagnose patient conditions, plan patient care, prevent injury to patients and others, and provide specific management. Moreover, findings from [ 10 ] on the discrepancy between what occurs internally and externally in student mental health nursing showed a significant mismatch between theoretical mental health content knowledge and practical skills when nursing students are developed using conventional clinical pedagogy.

International and national organizations respond to Sustainable Development Goal (SDG) number four, target number four, by emphasizing that training institutions and teaching hospitals should adopt and implement innovative pedagogics in facilitating MSE learning for learners [ 11 ]. The incorporation of standardized patient simulation-based pedagogy (SPSP), as suggested by other scholars [ 12 , 13 ], appears to demonstrate academic potential, such as enhancing learners’ cognition and empowering them with self-efficacy when performing MSE. Simulation offers a chance to make cases more challenging without endangering clients, families, or students, as nursing students in clinical practice are frequently tasked with working with amicable and amenable clients and families [ 14 ]. The standardized patient (SP) comes to life in front of the learners in a state-of-the-art lab. Students can practice their diagnosis and develop therapeutic clinical expertise in the laboratories, which are offered in a friendly environment [ 15 ].

Similarly, a pilot study using a mixed method was done in baccalaureate nursing education in the US to examine the use of SPSP compared with the traditional hours used for learning mental health; nursing students who received SPSP showed a 25% increase in confidence and cognition about mental health compared to traditional hours [ 16 ]. Good MSE cognition among nursing students may ultimately lead to timely and appropriate diagnosis and, thus, positive mental health outcomes for patients with mental illness. While the adoption and implementation of SPSP are popular in other countries, published scholarly works about it in clinical nursing education for MSE cognition among nursing students in Tanzania are scarce. It may be time to invest in research about the effect of SPSP embedded with lecture on enhancing MSE cognition among nursing students in this country.

Method and materials

The methodology of this study complied with national and international research ethics. Moreover, the study was conducted in accordance with the University of Dodoma’s institutional postgraduate guidelines and standards.

The purpose of the current research was to evaluate the impact of standardized patient simulation-based pedagogy (SPSP) linked with lectures on mental status evaluation cognition among nursing students in Tanzania. To accomplish this, a longitudinal quasi-experimental study design was implemented.

Study population

The target population was made up of students enrolled in diploma nursing programs in the regions of Tanga and Dodoma. The study involved 311 diploma nursing students (aged between 16 and 32 years). The reason for selecting middle-college nursing students is that they constitute a large share of the future nursing workforce, which is expected to deliver mental health nursing services in peripheral communities. The skills provided to these nursing students are expected to be beneficial because they are intended to reach a large population, much of which lives in remote areas with inadequate mental health services, and to support the timely diagnosis of mental illness disorders.

Sampling procedure and technique

A purposive sampling technique was used to sample nursing schools from two regions: 5 nursing schools from Dodoma in the Central zone and 2 nursing schools from Tanga in the Northern zone. A total of 311 nursing students were sampled, with 109 in the intervention group and 202 in the control group. A proportional calculation was then done to get the required number of participants in each nursing school, and simple random sampling was used to select the required number of participants in each class. Nursing students who, after being told the purpose and benefit of this study, were willing to participate and signed the written informed consent form were included in the study.

Proportional allocation for the intervention group

For each school, the number of participants was calculated as nh = (n / N) × Nh, where n is the total sample size for the group (intervention or control), N is the total number of students across the schools in that group, and Nh is the total number of students in each school.

The two nursing schools from Tanga were Tanga College of Health and Allied Sciences (TACOHAS), college A, with 97 students, and Korogwe Nursing Training Center (KNTC), college B, with 71 students, so N = 97 + 71 = 168.

nA = (109/168) × 97 ≈ 63, so the number of participants in college A was 63.

nB = (109/168) × 71 ≈ 46, so the number of participants in college B was 46 (making a total sample size of 109 for the intervention group).

Proportional allocation for the control group in the Dodoma region

The five colleges offering diplomas in nursing in Dodoma were DECCA College of Health and Allied Sciences (DECCA COHAS) with 60 nursing students, Dodoma Institute of Health and Allied Sciences (DIHAS) with 81, Saint John’s University with 45, Mvumi Institute of Health and Allied Sciences (MIHAS) with 19, and Kondoa School of Nursing with 47, so N = 252.

nC = (217/252) × 60 ≈ 52

nD = (217/252) × 81 ≈ 70

nE = (217/252) × 45 ≈ 39

nF = (217/252) × 19 ≈ 16

nG = (217/252) × 47 ≈ 40

This makes a total sample size of 217 for the control group.

The required number of participants in each school was obtained through this proportional calculation. To select participants, a simple random sampling method was employed by listing the names of the students on pieces of paper; participants were then chosen at every 10th number on the list until the required number was reached, as Fig. 1 illustrates. To prevent contamination, the intervention and control groups were assigned to different regions, and participants were not informed about the other study sites. Additionally, the research assistant was kept unaware of whether participants were part of the control or intervention group.

[Figure 1: Study design flow diagram. Source: Study plan (2022)]
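As an illustration of the allocation and selection steps described above, the following sketch (hypothetical helper functions, not the authors' code) reproduces the proportional formula and the every-10th-name selection rule.

```python
def proportional_allocation(total_needed, school_sizes):
    """nh = (n / N) * Nh, rounded to the nearest whole student."""
    N = sum(school_sizes.values())
    return {school: round(total_needed / N * Nh) for school, Nh in school_sizes.items()}

# Intervention group: 109 participants shared between the two Tanga colleges.
print(proportional_allocation(109, {"TACOHAS (A)": 97, "KNTC (B)": 71}))
# -> {'TACOHAS (A)': 63, 'KNTC (B)': 46}

def every_kth(student_names, needed, k=10):
    """Pick every k-th name on the class list, wrapping around until enough are chosen."""
    chosen, i = [], k - 1
    while len(chosen) < needed:
        chosen.append(student_names[i % len(student_names)])
        i += k
    return chosen
```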

Sample size estimation

The sample size n for this study was determined using WinPepi software version 11.65 [ 17 ]. Findings from a study on simulation-based learning in psychiatry for undergraduates at the University of Zimbabwe Medical School [ 18 ] showed a pre-session mean score of 15.90 and a post-session mean score of 20.05. With an effect size of 2, a 95% confidence interval (5% significance level), a study power of 80%, and an A:B sample size ratio of 1:2, the required sample size, as shown in Fig. 2, was n = 326 participants (109 in group A and 217 in group B). This program has been used by different scholars and reported to have statistical validity and reliability in studies [ 19 , 20 , 21 ].

Fig. 2 WinPepi program for sample size calculation. Source: Study plan (2022)
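For readers without access to WinPepi, a broadly comparable two-group comparison-of-means calculation can be run in Python with statsmodels. The sketch below is a hedged illustration rather than a reproduction of the study's calculation: it uses Cohen's d as the effect-size convention (the study reports an "effect size of 2" under WinPepi's own parameterization), and the value d ≈ 0.33 is an assumption chosen only to show that a 1:2 allocation with 80% power and a 5% significance level yields group sizes of roughly the reported magnitude.

```python
# Illustrative two-group sample-size calculation (not a reproduction of the WinPepi output).
# The effect size below is an assumed Cohen's d, not the value used in the study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_group_a = analysis.solve_power(
    effect_size=0.33,        # assumed standardized effect size (Cohen's d)
    alpha=0.05,              # 5% significance level
    power=0.80,              # 80% power
    ratio=2.0,               # group B (control) twice the size of group A (intervention)
    alternative="two-sided",
)
print(f"Group A: {n_group_a:.0f} participants, Group B: {2 * n_group_a:.0f} participants")
```

With these assumed inputs the calculation gives roughly 109 participants in group A and about twice that in group B, which is of the same order as the 109:217 split reported above, but the exact correspondence depends on the assumed d.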

Data collection procedure

After obtaining the necessary permissions, an available classroom was designated for the study. The Principal Investigator then introduced the study's objectives to the participants. Once informed consent was obtained, the students were seated in separate chairs to prevent any potential copying or sharing of responses. Data were collected using an interviewer-administered structured questionnaire delivered by the trained researcher trainers. The Principal Investigator was present to provide clarification when necessary. Once completed, the questionnaires were collected by the trained researcher trainers and securely stored in a locked cupboard by the Principal Investigator.

Data collection tool

This study employed a standardized structured questionnaire benchmarked from previous studies [ 22 ], with 33 items modified following a literature review. The adopted cognition questionnaire has been assessed for dependability using a test-retest approach (alpha reliability = 0.770, test-retest reliability = 0.880). The questionnaire used for data collection consisted of two parts: Part A collected the demographic characteristics of the study participants, and Part B assessed participants' MSE cognition (28 items).

Nursing professionals were given the first draft of the instrument and asked to respond to the open-ended questions, propose any changes they believed should be made, and suggest any additional items. Items with a relevancy score of less than 0.7 were removed, and wording adjustments were made according to the experts' suggestions. For face and construct validity, a preliminary draft was examined by a second nursing expert from a nursing faculty. Second-year nursing students (n = 33) provided comments on the tool's usability. After feedback from the experts and the nursing students, 5 cognition questions that lacked face or content validity were removed, leaving a total of 28 cognition questions.

Reliability

To verify the tool’s capabilities for producing the expected results, a pilot study of 10% of the sample size was conducted. The statistical program for the Social Solution (SPSS) software version 25 was used to scale the results from the pilot study. The overall Cronbach’s alpha of cognition was 0.736. As recommended by previous scholars, a Cronbach’s Alpha (α) of ≥ 0.7 was considered a significantly reliable tool for the actual field data collection.

Variable measurement

A structured questionnaire benchmarked from previous studies was used to measure the variable pre- and post-intervention to test cognition. Cognition of MSE was measured using multiple-choice and open-ended questions at the baseline assessment and at one week post-intervention. The test had 28 questions assessing MSE cognition across three domains: 2 questions on the concept of MSE, 4 questions on the content of MSE, and 22 questions on MSE implementation. Each item was scored 0 for a wrong response and 1 for a correct response, and the total MSE cognition score was computed as the sum of all items. Cognition was classified as "adequate cognition" for participants who scored 50% and above and as "inadequate cognition" for participants who scored below 50%. Each domain of MSE cognition was also scored separately out of the total assigned to that domain, and the mean difference between the two groups was measured using a paired t-test.
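As a hedged illustration of the scoring rule just described, the sketch below sums the 0/1 item scores and applies the 50% cut-off; the variable names and responses are hypothetical.

```python
# Illustrative scoring of the 28-item MSE cognition test (names and data are hypothetical).
import pandas as pd

MAX_SCORE = 28              # 2 concept items + 4 content items + 22 implementation items
CUT_OFF = 0.5 * MAX_SCORE   # "adequate cognition" means scoring 50% and above

# responses: rows = students, columns = items, each coded 1 (correct) or 0 (wrong)
responses = pd.DataFrame({
    "q1": [1, 0, 1], "q2": [1, 1, 0],  # ... remaining items omitted for brevity
})

total_score = responses.sum(axis=1)
cognition = total_score.apply(lambda s: "adequate" if s >= CUT_OFF else "inadequate")
print(pd.DataFrame({"score": total_score, "cognition": cognition}))
```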

The SPSP intervention

Table  1 outlines the intervention training. The intervention took 4 weeks and covered both MSE theory and practice. Topics in the MSE materials included the definition of MSE and the steps in performing MSE to identify a client with mental illness. Two sessions were conducted each week, lasting 120 minutes each; they were facilitated during the morning hours, as negotiated with the principals of the respective colleges, and covered the theoretical and practical components respectively. Both English and Swahili were used alternately, at the convenience of the research trainers and participants. The intervention group learned MSE using SPSP embedded with lectures, whereas the control group learned the same MSE materials using lecture and real-patient pedagogies. The rationale for choosing these two approaches was to assess the impact of the intervention on two groups: those who were exposed to the MSE materials via SPSP embedded with lectures and then went to the skills laboratory to interview an SP trained and coached to portray the signs and symptoms of mental illness, and those who were exposed through MSE lectures and actual patients in general medical wards without symptoms of mental illness. Upon completion of the data analysis, the participants exposed to the MSE materials through SPs were compared with those exposed to actual patients who did not exhibit any symptoms of mental illness. Before the intervention, participants in both groups were matched on sociodemographic characteristics such as age, sex, education level, entry qualification, and marital status to ensure their similarity. Pre-tests were then administered to participants to establish their baseline MSE cognition.

The MSE intervention focused on the areas where nursing students struggled with questioning technique, namely assessing and determining whether the patient exhibited the characteristics of hallucination, illusion, delusion, derealization, depersonalization, and insight, terms that are commonly used. Particular consideration was given to how these questions were asked, to help nursing students understand that what the patient demonstrated or explained reflected the question asked, and that failing to probe precisely what the patient was experiencing could lead to the wrong MSE conclusion; the SP was trained to answer the questions asked in a way that reflected the reality of what the patient was suffering from.

Recruitment and training of SP and research trainers

Training of SPs

Professional actors who were knowledgeable about mental health, worked at a mental health facility, had a family relative with a mental illness, or had encountered a person with a mental health problem, and who were willing to help students learn and able to retain the scenario script, were recruited as SPs. The principles of the Association of Standardized Patient Educators standards of best practice [ 23 ] were applied to ensure a safe working environment for the SPs and to guide their training in role portrayal and in giving feedback to students during debriefing. The agreed-upon format, primary goals, duties, materials, and structure of the mental health scenario were covered during a weekly 2-hour training class. This class included instruction on scenario reading, guidance in verbal interaction techniques, input on the scenario, debriefing strategies, and discussions on how to reduce learner anxiety during the simulation experiences.

Before the rehearsal, each SP was provided with a scenario that outlined the signs and symptoms of a mentally ill patient. This scenario encompassed the various domains of MSE, with specific questions and answers to which the SP was required to respond in each domain. Emphasis was placed on the domains that nursing students commonly encountered difficulties with during mental status evaluation and clinical practice. For instance, they were trained on how to assess mood and affect, illusions and hallucinations, depersonalization and derealization, orientation, memory, intelligence, insight, and judgment. However, not all SPs were required to portray all domains of symptoms. This is because it’s uncommon for one patient to exhibit all possible symptoms simultaneously. Additionally, having all the symptoms portrayed by the SPs might lead to an exaggeration of the true symptoms of a real patient.

The SPs were thoroughly rehearsed using scenario scripts, and the research team, mental health experts, and nurse tutors who specialize in teaching mental health subjects reviewed their performances. The portrayal of the client’s character was observed, and the experts addressed any areas that required clarification or correction. Out of the four SPs who were willing to participate in this study, two were able to effectively portray the signs and symptoms of a mentally ill patient and were selected for the actual fieldwork implementation.

Implementation of MSE materials in an SPSP

Nursing students assigned to the intervention group (typical education plus SPSP) first completed both pre-tests before receiving the intervention. On the first day of training, the researcher trainers taught the MSE lecture content, focusing on the definition of MSE, the steps in performing MSE, and how to perform MSE to identify a patient with a mental illness. The nursing students were introduced to the simulation on the following day and informed that it would take place in a skills laboratory. Students were invited to the prepared skills laboratory and seated in a semi-circle for easy visualization of the simulation; the SP and the nursing student acting as the nurse were seated at the center, and the researcher trainer was present to provide any assistance needed by students during the simulation. Intervention students participated in two-hour simulation sessions, with a break in between to prevent fatigue, and each group consisted of 5 to 8 students. A pre-briefing on the scenario was conducted by the researcher trainer to make sure students understood the whole simulation process, and SPSP orientation was included in each simulation. Thereafter, each nursing student was provided with a checklist of the MSE categories to follow up on what had been assessed during the simulation. The SP was brought to the skills laboratory by his relatives, dressed in a dirty, loose jogging tracksuit, with messy hair and a history of abnormal behavior characterized by abusive language, over-talkativeness, threatening his mother and others, reduced sleep at night, grandiose delusions, persecutory delusions, and hearing unknown voices. For each simulation, one nursing student was chosen from the class to play the nurse role, performing MSE on the patient with abnormal behavior using the techniques and procedures learned during the lectures, and another was designated as an observer. The positions of nurse and observer were available to all students. Each simulation lasted 15 minutes, followed by a 10-minute structured debriefing.

Evaluation of MSE materials in an SPSP

The three-part debriefing paradigm of defusing, discovering, and deepening [ 24 ] served as the framework for the debriefing sessions. Immediately following each simulation exercise, the trained researcher trainer offered the SP organized one-on-one debriefing time to examine psychological issues arising from role acting and how students' emotional states influenced their conduct and communication. To encourage cooperative learning, the SP and the nursing student observers discussed what they had noticed about communication and evaluation methods, and students playing the nurse role were encouraged to speak about their experiences. The SP provided feedback through formative and summative methods involving face-to-face engagement, and the trained researcher trainer commented on the students' responses.

Data analysis

The IBM Statistical Package for the Social Sciences (SPSS) software, version 25, was used to analyze the data. Frequency distribution tables were used for data cleaning to ensure that all data were recorded accurately: labels were applied, values for the open-ended questions were checked and re-assigned, noise was checked, and erroneous spellings of nominal responses were corrected. The baseline and end-line data were then combined for the calculation of the key outcome. Descriptive and inferential analyses were conducted based on the study's goals. A descriptive analysis was performed to examine participant characteristics and to calculate the frequencies and percentages of participants in the two groups; bar charts, mean values, and tabular data were included in the descriptive evaluation. The pre-test and post-test mean scores for the intervention and control groups were compared using the independent-samples t-test. To evaluate the effect of the SPSP embedded with lectures on MSE cognition among nursing students from baseline to end line, the inferential analysis used a difference-in-differences (DID) approach within a linear mixed model. A 95% confidence interval with a 5% (≤ 0.05) significance level was used to reject the null hypothesis. The models accounted for repeated measurements of the outcome, and the groups were treated as fixed effects.
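As an illustration of the between-group comparison described above, an independent-samples t-test could be run as in the sketch below; the file and column names are hypothetical, and Welch's variant is used here as a conservative default rather than because the study specified it.

```python
# Illustrative independent-samples t-test comparing post-test cognition between groups.
# The data file and column names are hypothetical, not from the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("cognition_scores.csv")  # hypothetical file with columns 'group' and 'posttest'

intervention = df.loc[df["group"] == "intervention", "posttest"]
control = df.loc[df["group"] == "control", "posttest"]

t_stat, p_value = stats.ttest_ind(intervention, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```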

Difference–in–difference (DID) analysis for inferential analysis

Difference-in-differences (DID) analysis enables comparison of changes in outcomes over time between intervention and control groups while reducing the influence of confounding variables. The DID design measures the change in outcomes between two time points (pre and post) for the intervention and control groups and then subtracts one change from the other. In this study, the impact of the intervention on the change in cognition scores was evaluated using DID analysis within a linear mixed model. The model accounted for repeated measurements of the outcome variables, and the interventions were treated as fixed effects. The general fixed-effect DID mixed model is presented in the following formula:

Yit = β0 + β1·Timet + β2·Treatmenti + β3·(Timet × Treatmenti) + εit

Here, Yit is the outcome for participant i at time t. Time is a dummy variable for the period, coded 1 when the outcome was measured at the end line and 0 for the baseline evaluation. Treatment is a dummy variable indicating the intervention group. The combined term Time × Treatment captures the interaction between time and the intervention, and εit is the error term for participant i's outcome measurement at time t. The intercept β0 represents the mean outcome value for the group receiving the intervention at the baseline measurement, β1 is the change in the intervention group's mean outcome between baseline and end line, and β2 represents the difference in the mean outcome variable between the groups. The estimate and inference of the difference-in-differences between the two groups are provided by the coefficient of the Time × Treatment interaction.
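Under this specification, the DID estimate is the coefficient on the Time × Treatment interaction. The sketch below shows one way such a fixed-effect DID linear mixed model could be fitted in Python with statsmodels; the long-format data frame and column names are hypothetical, and the study itself used SPSS, so this is only an illustration of the model form.

```python
# Illustrative DID specification with a random intercept per student to handle repeated measures.
# Long-format data and column names are hypothetical; the study used SPSS version 25.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: student_id, group (0 = control, 1 = intervention),
# time (0 = baseline, 1 = end line), cognition (total score out of 28)
data = pd.read_csv("cognition_long.csv")

model = smf.mixedlm(
    "cognition ~ time * group",   # coefficient on time:group is the difference-in-differences
    data=data,
    groups=data["student_id"],    # repeated measurements handled via a random intercept
)
result = model.fit()
print(result.summary())
```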

Sociodemographic characteristics of nursing students

Table 2 presents the distribution of demographic characteristics of nursing students in the intervention and control groups at baseline, showing their similarity. Among the participants (n = 311) who indicated their age, 34.86% (n = 38) in the intervention group and 54.46% (n = 110) in the control group were aged between 21 and 32 years, with a comparable age distribution between groups (p = 0.0510). Of those who indicated their gender, 57.80% (n = 63) in the intervention group and 68.81% (n = 139) in the control group were female (p = 0.0521). Single participants outnumbered married participants in both groups, at 95.41% (n = 104) for the intervention group and 97.45% (n = 191) for the control group (p = 0.3373). Form four was the most common education entry level, at 76.15% (n = 83) in the intervention group and 59.90% (n = 121) in the control group (p = 0.0540). Among all participants, 99.07% (n = 107) in the intervention group and 84.69% (n = 166) in the control group showed interest in nursing (p = 0.0601).

The effect of standardized patient simulation-based pedagogics embedded with lecture on MSE cognition among nursing students in Tanzania

As shown in Table  3 below, the pre-test cognition score for the concepts of MSE was M ± SD = 0.87 ± 0.84 in the intervention group and M ± SD = 0.81 ± 0.79 in the control group (p = 0.5341); the post-test results were M ± SD = 1.33 ± 0.73 for the intervention group and M ± SD = 1.17 ± 0.76 for the control group (p = 0.0785), indicating a change in MSE concept scores in both groups. For MSE content, the baseline score was M ± SD = 2.63 ± 1.06 for the intervention group and M ± SD = 2.42 ± 0.95 for the control group (p = 0.0700), while the end-line score was M ± SD = 3.39 ± 0.80 for the intervention group and M ± SD = 2.85 ± 0.98 for the control group (p < 0.0001). For implementation of MSE, at baseline the intervention group scored M ± SD = 11.4 ± 3.00 and the control group scored M ± SD = 9.85 ± 3.96 (p = 0.0076); at the post-test, the intervention group scored M ± SD = 17.42 ± 3.90 and the control group scored M ± SD = 13.29 ± 4.58 (p < 0.0001).

The overall pre-test cognition score was M ± SD = 13.05 ± 4.63 for the intervention group and M ± SD = 12.11 ± 5.21 for the control group (p = 0.1189), while the post-test cognition score was M ± SD = 22.15 ± 4.42 for the intervention group and M ± SD = 16.52 ± 6.30 for the control group (p < 0.0001), a significant difference between the groups at post-test. The substantial mean changes between the pre-test and post-test scores in all categories (concepts, content, implementation, and overall cognition) for the intervention group indicate that the intervention had a significant effect on the cognition of nursing students in general. Some progress was also seen in the control group, but overall the intervention group showed greater improvement.

Findings of nursing student’s cognition mean score between baseline and end line ( n  = 311)

The findings show that mean cognition increased from baseline to end line in both the intervention and control groups. As shown in Fig.  3 , the mean cognition score of nursing students rose to M ± SD = 22.15 ± 4.42 in the intervention group, whereas in the control group it rose to M ± SD = 16.52 ± 6.30. This implies that the change in mean cognition score from baseline to end line was higher in the intervention group than in the control group.

Fig. 3 Nursing students' cognition mean scores between baseline and end line. Source: Field data (2023)

DID analysis for MSE cognition among nursing students in Tanzania

The fitted model results are presented in Table  4 . The findings indicate that there was a significant improvement in cognition from the baseline to the end line, as indicated by a p-value of < 0.0001. The coefficient for the Difference-in-Differences (D-I-D) analysis, comparing the intervention group to the control arm, was 4.6950. This suggests that the change in cognition from baseline to end line was significantly higher in the intervention group compared to the control group.

The study’s results establish a strong correlation between the impact of Standardized Patient Simulation-Based Pedagogy embedded with lecture (SPSP) and the cognition scores of nursing students. In the final analysis, nursing students exposed to standardized patient-based simulation materials displayed a significantly higher level of cognition regarding Mental Status Evaluation (MSE) when compared to the control group. This outcome aligns with a study conducted at university of Queens Canada on the impact of SPSP in psychiatric nursing on mental health education, which demonstrated a significant cognition improvement [ 25 ].Additionally, nursing students who interacted with standardized patients (SP) during interviews were able to relate what they had learned from the designed teaching pedagogy during the simulation, the simulation’s method of delivery provided nursing students with ample time to interview the SP. The study done by [ 26 ] on the use of SPS to train new nurses supporting this findings that new nursing students cognition improved higher compared to control group whose not exposed to the SPS.

During the simulation, in cases where clarification was required or certain behaviors were not well understood by the learners, students could request the SP to repeat the behavior; the skill of the trained researcher in delivering the content also contributed to the increase in nursing students' cognition. This aligns with the findings of a study conducted in Australia to explore the effect of SPSP on mental health education, which reported that students taught with SPSP scored higher, felt safer, and experienced reduced anxiety levels during examinations [ 27 , 28 ]. Given the challenge of exposing nursing students directly to real patients without prior practice in a skills laboratory, exposing nursing students to SPS produced a significantly higher level of cognition; this change reflects the fact that students were able to control the learning environment during the simulation. A similar study in baccalaureate nursing education in the US examined the use of SPSP compared with traditional hours dedicated to learning mental health and found that student nurses who received SPSP demonstrated a 25% increase in confidence and cognition about mental health compared to traditional instructional hours [ 16 ]. Observing others successfully perform Mental Status Evaluation (MSE) with Standardized Patients (SPs) and receiving encouraging feedback from colleagues and facilitators also played a pivotal role in boosting nursing students' cognition. This aligns with a study comparing SPs versus mannequins in mental health simulation, which posits that cognition is influenced by positive simulation modalities, guidance through observational learning, approval, and inspiration [ 29 ]. The training program encouraged students to focus on acquiring the necessary knowledge, and the briefing provided during simulation on how MSE should be conducted contributed to building students' MSE cognition.

This outcome also aligns with a study by [ 30 ] on the use of SPSP in psychiatric nursing, which demonstrated a substantial improvement in nursing students' understanding compared with traditional teaching methods; specifically, that study reported an 80% increase in cognition acquisition when SPSP was used rather than conventional approaches. These findings are likewise consistent with a study by [ 31 ], which implemented various active teaching methods during simulation to enhance nursing students' knowledge. Additionally, the way in which SPs were trained to accurately portray patients' signs and symptoms was instrumental in this process. Furthermore, the design of the SP teaching materials fostered collaboration among nursing students, encouraging each student to actively participate in classroom activities; this collaborative approach played a vital role in enhancing their MSE cognition. These findings are consistent with the work of several scholars, such as [ 32 ] and [ 33 ], who have emphasized the significant contribution of peer-to-peer education to boosting nursing students' cognition.

The findings of this study suggest that using SPSP embedded with lectures will help increase MSE cognition among nursing students in Tanzania. This is because there is no skills laboratory in which nursing students can practice before encountering a real patient, and practicum sites where nursing students can practice mental health services, especially MSE, are few, so nursing students are required to travel far from their institutions to practice. This is contrary to the Tanzanian curriculum, which states that nursing students should practice in the skills laboratory before going to clinical placements. Using standardized patients to teach mental status evaluation is a useful pedagogical method that increases nursing students' cognition, whereas using real patients is difficult because it may inconvenience both the patient and the learner. MSE is challenging to assess because it cannot be assessed directly like a physical disease, and nursing students require technique in performing MSE to elicit the real symptoms from the patient.

Strengths of the study

To improve the performance of nursing students, the study addressed pedagogical deficiencies in clinical mental health nursing education on mental status evaluation, with the aim of better managing and diagnosing people with mental illness promptly. The study also used a control group and a sufficient sample, increasing the validity of the results and the power of the study to detect the effect of SPSP.

Suggestions for further studies

Future researchers should extend this training to nursing students at higher learning institutions. Future studies should also address the limitations of the study's design and expand on topics that were not fully explored here; the study's shortcomings carry several implications that another study might take into account.

Limitations of the study

Generalizing the study findings to all nursing students in Tanzania is difficult because the calculated sample size was 326 while only 311 participants were willing to take part during actual data collection, even though this represents a response rate of 95%. The results also cannot be assumed to apply to all Tanzanian nursing students, because the study participants were students from mid-level colleges pursuing diplomas in nursing in the Dodoma and Tanga regions; university students, who also learn MSE, are expected to deliver MSE services in the community, and likewise lack MSE simulation in a skills-based environment, were excluded. Consequently, the results must be examined and interpreted with care. The study employed purposive sampling, which cannot guarantee that the selected participants represent the population of nursing students in Tanzania. In addition, the study did not examine the separate effect of the lecture component embedded in the SPSP training materials or how much it contributed to the outcome of interest.

Data availability

The datasets that are used or analyzed in the current study are available from the corresponding author on reasonable request via [email protected] or [email protected].

Abbreviations

MSE: Mental Status Evaluation

SP: Standardized patient

SPSS: Statistical Package for the Social Sciences

SPSP: Standardized patient Simulation Pedagogics

UDOM: University of Dodoma

US: United States

WHO: World Health Organization

Rocha Neto HG, Estellita-Lins CE, Lessa JLM, Cavalcanti MT. Mental State Examination and Its Procedures—Narrative Review of Brazilian Descriptive Psychopathology. Front Psychiatry [Internet]. 2019;10. https://www.frontiersin.org/article/ https://doi.org/10.3389/fpsyt.2019.00077/full .

Ma F. Diagnostic and Statistical Manual of Mental Disorders-5 (DSM-5). In: Encyclopedia of Gerontology and Population Aging [Internet]. Cham: Springer International Publishing; 2021. pp. 1414–25. https://link.springer.com/ https://doi.org/10.1007/978-3-030-22009-9_419 .

García-Mayor S, Quemada-González C, León-Campos Á, Kaknani-Uttumchandani S, Gutiérrez-Rodríguez L, del Mar, Carmona-Segovia A et al. Nursing students’ perceptions on the use of clinical simulation in psychiatric and mental health nursing by means of objective structured clinical examination (OSCE). Nurse Educ Today. 2021;100.

Thomas SP. Thoughts about Teaching Psychiatric-Mental Health Nursing. Issues Ment Health Nurs [Internet]. 2019;40(11):931–931. https://www.tandfonline.com/doi/full/10.1080/01612840.2019.1653729 .

Abraham SP, Cramer C, Palleschi H. Walking on Eggshells: Addressing Nursing Students’ Fear of the Psychiatric Clinical Setting. J Psychosoc Nurs Ment Health Serv [Internet]. 2018;56(9):5–8. https://journals.healio.com/doi/ https://doi.org/10.3928/02793695-20180322-01 .

Wedgeworth ML, Ford CD, Tice JR. I’m scared: Journaling Uncovers Student Perceptions Prior to a Psychiatric Clinical Rotation. J Am Psychiatr Nurses Assoc [Internet]. 2020;26(2):189–95. http://journals.sagepub.com/doi/ https://doi.org/10.1177/1078390319844002 .

Moges S, Belete T, Mekonen T, Menberu M. Lifetime relapse and its associated factors among people with schizophrenia spectrum disorders who are on follow up at Comprehensive Specialized Hospitals in Amhara region, Ethiopia: a cross-sectional study. Int J Ment Health Syst [Internet]. 2021;15(1):42. https://ijmhs.biomedcentral.com/articles/ https://doi.org/10.1186/s13033-021-00464-0 .

Roy K, Nagalla M, Riba MB. Education in Psychiatry for Medical Specialists. In 2019. pp. 119–40. http://link.springer.com/ https://doi.org/10.1007/978-981-10-2350-7_8 .

Lenouvel E, Chivu C, Mattson J, Young JQ, Klöppel S, Pinilla S. Instructional Design Strategies for Teaching the Mental Status Examination and Psychiatric Interview: a Scoping Review. Acad Psychiatry [Internet]. 2022; https://link.springer.com/ https://doi.org/10.1007/s40596-022-01617-0 .

Marszalek MA, Faksvåg H, Frøystadvåg TH, Ness O, Veseth M. A mismatch between what is happening on the inside and going on, on the outside: a qualitative study of therapists’ perspectives on student mental health. Int J Ment Health Syst [Internet]. 2021;15(1):87. https://ijmhs.biomedcentral.com/articles/ https://doi.org/10.1186/s13033-021-00508-5 .

Silva M, De, Roland J. Mental Health Sustainable Dev. 2014;1–32.

Johnson KV, Scott AL, Franks L. Impact of standardized patients on first semester nursing students Self-Confidence, satisfaction, and communication in a simulated clinical case. SAGE Open Nurs. 2020;6(June).

Witt MA, McGaughan K, Smaldone A. Standardized Patient Simulation Experiences Improves Mental Health Assessment and Communication. Clin Simul Nurs [Internet]. 2018;23:16–20. https://doi.org/10.1016/j.ecns.2018.08.002 .

Oudshoorn A, Sinclair B. Using Unfolding Simulations to Teach Mental Health Concepts in Undergraduate Nursing Education. Clin Simul Nurs [Internet]. 2015;11(9):396–401. https://linkinghub.elsevier.com/retrieve/pii/S187613991500050X .

Edward K, Hercelinskyj J, Warelow P, Munro I. Simulation to Practice: Developing Nursing Skills in Mental Health–An Australian Perspective. Int Electron J Health Educ [Internet]. 2007;10(February 2014):60–4. http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ794196&login.asp&site=ehost-live%5Cnhttp://www.aahperd.org/iejhe/template.cfm?template=currentIssue.cfm#volume10

Soccio DA. Effectiveness of Mental Health Simulation in Replacing Traditional Clinical Hours in Baccalaureate Nursing Education. J Psychosoc Nurs Ment Health Serv [Internet]. 2017;55(11):36–43. https://journals.healio.com/doi/ https://doi.org/10.3928/02793695-20170905-03 .

Abramson JH. WINPEPI updated: Computer programs for epidemiologists, and their teaching potential. Epidemiol Perspect Innov [Internet]. 2011;8(1):1. http://www.epi-perspectives.com/content/8/1/1 .

Piette A, Service NH, Muchirahondo F, Mangezi W, Cowan FM. ‘ Simulation-based learning in psychiatry for undergraduates at the University of Zimbabwe medical school. ’ 2015;(March).

Ganz JB, Earles-Vollrath TL, Heath AK, Parker RI, Rispoli MJ, Duran JB. A meta-analysis of single case research studies on aided augmentative and alternative communication systems with individuals with autism spectrum disorders. J Autism Dev Disord. 2012;42(1):60–74.


Millanzi WC, Kibusi SM. Exploring the effect of problem based facilitatory teaching approach on motivation to learn: a quasi-experimental study of nursing students in Tanzania. BMC Nurs [Internet]. 2021;20(1):3. https://bmcnurs.biomedcentral.com/articles/ https://doi.org/10.1186/s12912-020-00509-8 .

Parker RI, Vannest KJ, Davis JL. Effect size in single-case research: a review of nine nonoverlap techniques. Behav Modif. 2011;35(4):303–22.

Gabriel A, Violato C. The development of a knowledge test of depression and its treatment for patients suffering from non-psychotic depression: a psychometric assessment. BMC Psychiatry [Internet]. 2009;9(1):56. https://bmcpsychiatry.biomedcentral.com/articles/ https://doi.org/10.1186/1471-244X-9-56 .

Lewis KL, Bohnert CA, Gammon WL, Hölzer H, Lyman L, Smith C et al. The Association of Standardized Patient Educators (ASPE) Standards of Best Practice (SOBP). Adv Simul [Internet]. 2017;2(1):10. http://advancesinsimulation.biomedcentral.com/articles/ https://doi.org/10.1186/s41077-017-0043-4 .

Zigmont JJ, Kappus LJ, Sudikoff SN. The 3D Model of Debriefing: Defusing, Discovering, and Deepening. Semin Perinatol [Internet]. 2011;35(2):52–8. https://linkinghub.elsevier.com/retrieve/pii/S0146000511000048 .

Rabie A, Hakami A. Impact of standardised patient Simulation Training on clinical competence, knowledge, and attitudes in Mental. Health Nurs Educ. 2023;15(9).

Liu Y, Qie D, Wang M, Li Y, Guo D, Chen X, et al. Application of role reversal and standardized patient simulation (SPS) in the training of new nurses. BMC Med Educ. 2023;23(1):1–6.

Alexander L, Sheen J, Rinehart N, Hay M, Boyd L. Mental Health Simulation With Student Nurses: A Qualitative Review. Clin Simul Nurs [Internet]. 2018;14:8–14. https://linkinghub.elsevier.com/retrieve/pii/S1876139917301664 .

Skinner D, Kendall H, Skinner HM, Campbell C. Mental Health Simulation: Effects on Students’ Anxiety and Examination Scores. Clin Simul Nurs [Internet]. 2019;35:33–7. https://linkinghub.elsevier.com/retrieve/pii/S1876139919300222 .

Luebbert R, Perez A, Andrews A, Webster-Cooley T. Standardized Patients Versus Mannequins in Mental Health Simulation. J Am Psychiatr Nurses Assoc [Internet]. 2023;29(4):283–9. http://journals.sagepub.com/doi/10.1177/10783903231183322 .

Conway KA, Scoloveno RL. The Use of Standardized Patients as an Educational Strategy in Baccalaureate Psychiatric Nursing Simulation: A Mixed Method Pilot Study. J Am Psychiatr Nurses Assoc [Internet]. 2022;107839032211010. http://journals.sagepub.com/doi/ https://doi.org/10.1177/10783903221101049 .

Horntvedt M-ET, Nordsteien A, Fermann T, Severinsson E. Strategies for teaching evidence-based practice in nursing education: a thematic literature review. BMC Med Educ [Internet]. 2018;18(1):172. https://bmcmededuc.biomedcentral.com/articles/ https://doi.org/10.1186/s12909-018-1278-z .

Kamali M, Hasanvand S, Kordestani-Moghadam P, Ebrahimzadeh F, Amini M. Impact of dyadic practice on the clinical self-efficacy and empathy of nursing students. BMC Nurs [Internet]. 2023;22(1):8. https://bmcnurs.biomedcentral.com/articles/ https://doi.org/10.1186/s12912-022-01171-y .

Riley J, Mandi DG, Bamouni J, Yaméogo RA, Naïbé DT, Kaboré E et al. No Title. Dasgupta K, editor. PLoS One [Internet]. 2021;4(1):e0205326. https://bmcresnotes.biomedcentral.com/articles/ https://doi.org/10.1186/s13104-018-3275-z .


Acknowledgements

Dr. W.C. Millanzi (PhD) and Dr. F. Moshi (PhD) supervised this study. The study was conducted in adherence to international and national guidelines and the University of Dodoma postgraduate guidelines.

No source of funding was received for this study.

Author information

Authors and Affiliations

Department of Nursing Management and Education, The University of Dodoma, Dodoma, Tanzania

Violeth E. Singano, Walter C. Millanzi & Fabiola Moshi


Contributions

V.E.S.: conceptualization, data collection, data analysis, and writing the manuscript. W.C.M.: conceptualization, supervision, data interpretation, drafting and reviewing the manuscript. F.M.: conceptualization, supervision, data interpretation, drafting and reviewing the manuscript. All authors approved the manuscript.

Corresponding author

Correspondence to Violeth E. Singano .

Ethics declarations

Ethical approval and consent to participate in the study.

All protocol processes were followed: ethical clearance was obtained from the UDOM Institutional Research Review Ethics Committee (IRREC) under research proposal ethical clearance number MA.84/261/61/37, with research permit numbers MA.84/261/02/35 for the Dodoma Region and MA.84/261/02/36 for the Tanga Region, Tanzania. Written informed consent was obtained from the participants; respondents took part in the study after being informed about and understanding all information concerning the research process. Confidentiality was assured by ensuring that the names of the participants and their training institutions did not appear on the data collection instruments or in the data collected for research purposes. Respondents' privacy was safeguarded by providing them with separate, unoccupied rooms. The Principal Investigator maintained a high level of focus throughout the investigation. Data were meticulously managed in a designated key folder accessible exclusively to the Principal Investigator and were not shared externally without the express authorization of both the Principal Investigator and UDOM. Students who chose to discontinue their participation were permitted to do so after providing a reason to the Principal Investigator. Additionally, the respective authorities in the sampled study settings were available to manage unforeseen events such as student fainting, asthma attacks, or collapses, which the researcher might not have been able to address adequately.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Singano, V.E., Millanzi, W.C. & Moshi, F. Effect of standardized patient simulation-based pedagogics embedded with lecture in enhancing mental status evaluation cognition among nursing students in Tanzania: A longitudinal quasi-experimental study. BMC Med Educ 24 , 577 (2024). https://doi.org/10.1186/s12909-024-05562-4


Received : 16 October 2023

Accepted : 16 May 2024

Published : 26 May 2024

DOI : https://doi.org/10.1186/s12909-024-05562-4


Keywords: Simulation pedagogics

BMC Medical Education

ISSN: 1472-6920
