
Submitted for publication January 8, 2018. Accepted for publication November 29, 2018.

John T. Ratelle, Adam P. Sawatsky, Thomas J. Beckman; Quantitative Research Methods in Medical Education. Anesthesiology 2019; 131:23–35. doi: https://doi.org/10.1097/ALN.0000000000002727


There has been a dramatic growth of scholarly articles in medical education in recent years. Evaluating medical education research requires specific orientation to issues related to format and content. Our goal is to review the quantitative aspects of research in medical education so that clinicians may understand these articles with respect to framing the study, recognizing methodologic issues, and utilizing instruments for evaluating the quality of medical education research. This review can be used both as a tool when appraising medical education research articles and as a primer for clinicians interested in pursuing scholarship in medical education.

Image: J. P. Rathmell and Terri Navarette.


There has been an explosion of research in the field of medical education. A search of PubMed demonstrates that more than 40,000 articles have been indexed under the medical subject heading “Medical Education” since 2010, which is more than the total number of articles indexed under this heading in the 1980s and 1990s combined. Keeping up to date requires that practicing clinicians have the skills to interpret and appraise the quality of research articles, especially when serving as editors, reviewers, and consumers of the literature.

While medical education shares many characteristics with other biomedical fields, substantial particularities exist. We recognize that practicing clinicians may not be familiar with the nuances of education research and how to assess its quality. Therefore, our purpose is to provide a review of quantitative research methodologies in medical education. Specifically, we describe a structure that can be used when conducting or evaluating medical education research articles.

Clarifying the research purpose is an essential first step when reading or conducting scholarship in medical education. 1   Medical education research can serve a variety of purposes, from advancing the science of learning to improving the outcomes of medical trainees and the patients they care for. However, a well-designed study has limited value if it addresses vague, redundant, or unimportant medical education research questions.

  • What is the research topic and why is it important? What is unknown about the research topic? Why is further research necessary?
  • What is the conceptual framework being used to approach the study?
  • What is the statement of study intent?
  • What are the research methodology and study design? Are they appropriate for the study objective(s)?
  • Which threats to internal validity are most relevant for the study?
  • What is the outcome and how was it measured?
  • Can the results be trusted? What is the validity and reliability of the measurements?
  • How were research subjects selected? Is the research sample representative of the target population?
  • Was the data analysis appropriate for the study design and type of data?
  • What is the effect size? Do the results have educational significance?

Fortunately, there are steps to ensure that the purpose of a research study is clear and logical. Table 1   2–5   outlines these steps, which will be described in detail in the following sections. We describe these elements not as a simple “checklist,” but as an advanced organizer that can be used to understand a medical education research study. These steps can also be used by clinician educators who are new to the field of education research and who wish to conduct scholarship in medical education.

Steps in Clarifying the Purpose of a Research Study in Medical Education


Literature Review and Problem Statement

A literature review is the first step in clarifying the purpose of a medical education research article. 2 , 5 , 6   When conducting scholarship in medical education, a literature review helps researchers develop an understanding of their topic of interest. This understanding includes both existing knowledge about the topic as well as key gaps in the literature, which aids the researcher in refining their study question. Additionally, a literature review helps researchers identify conceptual frameworks that have been used to approach the research topic. 2  

When reading scholarship in medical education, a successful literature review provides background information so that even someone unfamiliar with the research topic can understand the rationale for the study. Located in the introduction of the manuscript, the literature review guides the reader through what is already known in a manner that highlights the importance of the research topic. The literature review should also identify key gaps in the literature so the reader can understand the need for further research. This gap description includes an explicit problem statement that summarizes the important issues and provides a reason for the study. 2 , 4   The following is one example of a problem statement:

“Identifying gaps in the competency of anesthesia residents in time for intervention is critical to patient safety and an effective learning system… [However], few available instruments relate to complex behavioral performance or provide descriptors…that could inform subsequent feedback, individualized teaching, remediation, and curriculum revision.” 7  

This problem statement articulates the research topic (identifying resident performance gaps), why it is important (to intervene for the sake of learning and patient safety), and current gaps in the literature (few tools are available to assess resident performance). The researchers have now underscored why further research is needed and have helped readers anticipate the overarching goals of their study (to develop an instrument to measure anesthesiology resident performance). 4  

The Conceptual Framework

Following the literature review and articulation of the problem statement, the next step in clarifying the research purpose is to select a conceptual framework that can be applied to the research topic. Conceptual frameworks are “ways of thinking about a problem or a study, or ways of representing how complex things work.” 3   Just as clinical trials are informed by basic science research in the laboratory, conceptual frameworks often serve as the “basic science” that informs scholarship in medical education. At a fundamental level, conceptual frameworks provide a structured approach to solving the problem identified in the problem statement.

Conceptual frameworks may take the form of theories, principles, or models that help to explain the research problem by identifying its essential variables or elements. Alternatively, conceptual frameworks may represent evidence-based best practices that researchers can apply to an issue identified in the problem statement. 3   Importantly, there is no single best conceptual framework for a particular research topic, although the choice of a conceptual framework is often informed by the literature review and knowing which conceptual frameworks have been used in similar research. 8   For further information on selecting a conceptual framework for research in medical education, we direct readers to the work of Bordage 3   and Irby et al. 9  

To illustrate how different conceptual frameworks can be applied to a research problem, suppose you encounter a study to reduce the frequency of communication errors among anesthesiology residents during day-to-night handoff. Table 2 10 , 11   identifies two different conceptual frameworks researchers might use to approach the task. The first framework, cognitive load theory, has been proposed as a conceptual framework to identify potential variables that may lead to handoff errors. 12   Specifically, cognitive load theory identifies the three factors that affect short-term memory and thus may lead to communication errors:

Conceptual Frameworks to Address the Issue of Handoff Errors in the Intensive Care Unit


Intrinsic load: Inherent complexity or difficulty of the information the resident is trying to learn ( e.g. , complex patients).

Extraneous load: Distractions or demands on short-term memory that are not related to the information the resident is trying to learn ( e.g. , background noise, interruptions).

Germane load: Effort or mental strategies used by the resident to organize and understand the information he/she is trying to learn ( e.g. , teach back, note taking).

Using cognitive load theory as a conceptual framework, researchers may design an intervention to reduce extraneous load and help the resident remember the overnight to-do’s. An example might be dedicated, pager-free handoff times where distractions are minimized.

The second framework identified in table 2 , the I-PASS (Illness severity, Patient summary, Action list, Situational awareness and contingency planning, and Synthesis by receiver) handoff mnemonic, 11   is an evidence-based best practice that, when incorporated as part of a handoff bundle, has been shown to reduce handoff errors on pediatric wards. 13   Researchers choosing this conceptual framework may adapt some or all of the I-PASS elements for resident handoffs in the intensive care unit.

Note that both of the conceptual frameworks outlined above provide researchers with a structured approach to addressing the issue of handoff errors; one is not necessarily better than the other. Indeed, it is possible for researchers to use both frameworks when designing their study. Ultimately, we provide this example to demonstrate the necessity of selecting conceptual frameworks to clarify the research purpose. 3 , 8   Readers should look for conceptual frameworks in the introduction section and should be wary of their omission, as commonly seen in less well-developed medical education research articles. 14  

Statement of Study Intent

After reviewing the literature, articulating the problem statement, and selecting a conceptual framework to address the research topic, the final step in clarifying the research purpose is the statement of study intent. The statement of study intent is arguably the most important element of framing the study because it makes the research purpose explicit. 2   Consider the following example:

“This study aimed to test the hypothesis that the introduction of the BASIC Examination was associated with an accelerated knowledge acquisition during residency training, as measured by increments in annual ITE scores.” 15  

This statement of study intent succinctly identifies several key study elements including the population (anesthesiology residents), the intervention/independent variable (introduction of the BASIC Examination), the outcome/dependent variable (knowledge acquisition, as measured by In-Training Examination [ITE] scores), and the hypothesized relationship between the independent and dependent variables (the authors hypothesize a positive correlation between the BASIC Examination and the speed of knowledge acquisition). 6 , 14  

The statement of study intent will sometimes manifest as a research objective, rather than hypothesis or question. In such instances there may not be explicit independent and dependent variables, but the study population and research aim should be clearly identified. The following is an example:

“In this report, we present the results of 3 [years] of course data with respect to the practice improvements proposed by participating anesthesiologists and their success in implementing those plans. Specifically, our primary aim is to assess the frequency and type of improvements that were completed and any factors that influence completion.” 16  

The statement of study intent is the logical culmination of the literature review, problem statement, and conceptual framework, and is a transition point between the Introduction and Methods sections of a medical education research report. Nonetheless, a systematic review of experimental research in medical education demonstrated that statements of study intent are absent in the majority of articles. 14   When reading a medical education research article where the statement of study intent is absent, it may be necessary to infer the research aim by gathering information from the Introduction and Methods sections. In these cases, it can be useful to identify the following key elements 6 , 14 , 17   :

Population of interest/type of learner ( e.g. , pain medicine fellow or anesthesiology residents)

Independent/predictor variable ( e.g. , educational intervention or characteristic of the learners)

Dependent/outcome variable ( e.g. , intubation skills or knowledge of anesthetic agents)

Relationship between the variables ( e.g. , “improve” or “mitigate”)

Occasionally, it may be difficult to differentiate the independent study variable from the dependent study variable. 17   For example, consider a study aiming to measure the relationship between burnout and personal debt among anesthesiology residents. Do the researchers believe burnout might lead to high personal debt, or that high personal debt may lead to burnout? This “chicken or egg” conundrum reinforces the importance of the conceptual framework which, if present, should serve as an explanation or rationale for the predicted relationship between study variables.

Research methodology is the “…design or plan that shapes the methods to be used in a study.” 1   Essentially, methodology is the general strategy for answering a research question, whereas methods are the specific steps and techniques that are used to collect data and implement the strategy. Our objective here is to provide an overview of quantitative methodologies ( i.e. , approaches) in medical education research.

The choice of research methodology is made by balancing the approach that best answers the research question against the feasibility of completing the study. There is no perfect methodology because each has its own potential caveats, flaws and/or sources of bias. Before delving into an overview of the methodologies, it is important to highlight common sources of bias in education research. We use the term internal validity to describe the degree to which the findings of a research study represent “the truth,” as opposed to some alternative hypothesis or variables. 18   Table 3   18–20   provides a list of common threats to internal validity in medical education research, along with tactics to mitigate these threats.

Threats to Internal Validity and Strategies to Mitigate Their Effects


Experimental Research

The fundamental tenet of experimental research is the manipulation of an independent or experimental variable to measure its effect on a dependent or outcome variable.

True Experiment

True experimental study designs minimize threats to internal validity by randomizing study subjects to experimental and control groups. Through ensuring that differences between groups are—beyond the intervention/variable of interest—purely due to chance, researchers reduce the internal validity threats related to subject characteristics, time-related maturation, and regression to the mean. 18 , 19  
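The mechanics of random assignment can be sketched in a few lines of Python. This is a minimal illustration, not a clinical-trial randomization protocol; the resident names and group sizes are invented:

```python
import random

def randomize_groups(subjects, seed=42):
    """Randomly split subjects into experimental and control groups.

    With random assignment, any baseline difference between the two
    groups is due to chance alone, not systematic bias.
    """
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical cohort of 20 residents (names are placeholders).
residents = [f"resident_{i:02d}" for i in range(20)]
experimental, control = randomize_groups(residents)
```

Each resident lands in exactly one group; rerunning with a different seed produces a different, equally valid split, which is precisely why chance, rather than the researcher, determines group membership.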

Quasi-experiment

There are many instances in medical education where randomization may not be feasible or ethical. For instance, researchers wanting to test the effect of a new curriculum among medical students may not be able to randomize learners due to competing curricular obligations and schedules. In these cases, researchers may be forced to assign subjects to experimental and control groups based upon some other criterion beyond randomization, such as different classrooms or different sections of the same course. This process, called quasi-randomization, does not inherently lead to internal validity threats, as long as research investigators are mindful of measuring and controlling for extraneous variables between study groups. 19  

Single-group Methodologies

Not all experimental study designs compare two or more separate groups. A common experimental study design in medical education research is the single-group pretest–posttest design, which compares a group of learners before and after the implementation of an intervention. 21   In essence, a single-group pre–post design compares an experimental group ( i.e. , postintervention) to a “no-intervention” control group ( i.e. , preintervention). 19   This study design is problematic for several reasons. Consider the following hypothetical example: A research article reports the effects of a year-long intubation curriculum for first-year anesthesiology residents. All residents participate in monthly, half-day workshops over the course of an academic year. The article reports a positive effect on residents’ skills as demonstrated by a significant improvement in intubation success rates at the end of the year when compared to the beginning.

This study does little to advance the science of learning among anesthesiology residents. While this hypothetical report demonstrates an improvement in residents’ intubation success before versus after the intervention, it does not tell why the workshop worked, how it compares to other educational interventions, or how it fits in to the broader picture of anesthesia training.

Single-group pre–post study designs open themselves to a myriad of threats to internal validity. 20   In our hypothetical example, the improvement in residents’ intubation skills may have been due to other educational experience(s) ( i.e. , implementation threat) and/or improvement in manual dexterity that occurred naturally with time ( i.e. , maturation threat), rather than the airway curriculum. Consequently, single-group pre–post studies should be interpreted with caution. 18  

Repeated testing, before and after the intervention, is one strategy that can be used to reduce some of the inherent limitations of the single-group study design. Repeated pretesting can mitigate the effect of regression toward the mean, a statistical phenomenon whereby low pretest scores tend to move closer to the mean on subsequent testing (regardless of intervention). 20   Likewise, repeated posttesting at multiple time intervals can provide potentially useful information about the short- and long-term effects of an intervention ( e.g. , the “durability” of the gain in knowledge, skill, or attitude).
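Regression toward the mean is easy to demonstrate with a small simulation. In the hypothetical cohort below, every simulated resident has identical true skill, no intervention occurs between tests, and all numbers are invented:

```python
import random

random.seed(1)

TRUE_SKILL = 70.0  # every simulated resident has the same underlying skill

def noisy_test():
    """One test administration: true skill plus random measurement error."""
    return TRUE_SKILL + random.gauss(0, 10)

# Pretest and posttest scores for 5,000 simulated residents.
cohort = [(noisy_test(), noisy_test()) for _ in range(5000)]

# Select the "low performers" on the pretest and examine their posttest mean.
low_performers = [(t1, t2) for t1, t2 in cohort if t1 < 60]
pre_mean = sum(t1 for t1, _ in low_performers) / len(low_performers)
post_mean = sum(t2 for _, t2 in low_performers) / len(low_performers)
```

The low group's posttest mean drifts back toward 70 purely by chance; a single-group pre–post study that enrolled only low pretest scorers could easily mistake this drift for an intervention effect.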

Observational Research

Unlike experimental studies, observational research does not involve manipulation of any variables. These studies often involve measuring associations, developing psychometric instruments, or conducting surveys.

Association Research

Association research seeks to identify relationships between two or more variables within a group or groups (correlational research), or similarities/differences between two or more existing groups (causal–comparative research). For example, correlational research might seek to measure the relationship between burnout and educational debt among anesthesiology residents, while causal–comparative research may seek to measure differences in educational debt and/or burnout between anesthesiology and surgery residents. Notably, association research may identify relationships between variables, but does not necessarily support a causal relationship between them.

Psychometric and Survey Research

Psychometric instruments measure a psychologic or cognitive construct such as knowledge, satisfaction, beliefs, and symptoms. Surveys are one type of psychometric instrument, but many other types exist, such as evaluations of direct observation, written examinations, or screening tools. 22   Psychometric instruments are ubiquitous in medical education research and can be used to describe a trait within a study population ( e.g. , rates of depression among medical students) or to measure associations between study variables ( e.g. , association between depression and board scores among medical students).

Psychometric and survey research studies are prone to the internal validity threats listed in table 3 , particularly those relating to mortality, location, and instrumentation. 18   Additionally, readers must ensure that the instrument scores can be trusted to truly represent the construct being measured. For example, suppose you encounter a research article demonstrating a positive association between attending physician teaching effectiveness, as measured by a survey of medical students, and the frequency with which the attending physician provides coffee and doughnuts on rounds. Can we be confident that this survey administered to medical students is truly measuring teaching effectiveness? Or is it simply measuring the attending physician’s “likability”? Issues related to measurement and the trustworthiness of data are described in detail in the following section on measurement and the related issues of validity and reliability.

Measurement refers to “the assigning of numbers to individuals in a systematic way as a means of representing properties of the individuals.” 23   Research data can only be trusted insofar as we trust the measurement used to obtain the data. Measurement is of particular importance in medical education research because many of the constructs being measured ( e.g. , knowledge, skill, attitudes) are abstract and subject to measurement error. 24   This section highlights two specific issues related to the trustworthiness of data: the validity and reliability of measurements.

Validity regarding the scores of a measurement instrument “refers to the degree to which evidence and theory support the interpretations of the [instrument’s results] for the proposed use of the [instrument].” 25   In essence, do we believe the results obtained from a measurement really represent what we were trying to measure? Note that validity evidence for the scores of a measurement instrument is separate from the internal validity of a research study. Several frameworks for validity evidence exist. Table 4 2 , 22 , 26   represents the most commonly used framework, developed by Messick, 27   which identifies sources of validity evidence—to support the target construct—from five main categories: content, response process, internal structure, relations to other variables, and consequences.

Sources of Validity Evidence for Measurement Instruments


Reliability

Reliability refers to the consistency of scores for a measurement instrument. 22 , 25 , 28   For an instrument to be reliable, we would anticipate that two individuals rating the same object of measurement in a specific context would provide the same scores. 25   Further, if the scores for an instrument are reliable between raters of the same object of measurement, then we can extrapolate that any difference in scores between two objects represents a true difference across the sample, and is not due to random variation in measurement. 29   Reliability can be demonstrated through a variety of methods such as internal consistency ( e.g. , Cronbach’s alpha), temporal stability ( e.g. , test–retest reliability), interrater agreement ( e.g. , intraclass correlation coefficient), and generalizability theory (generalizability coefficient). 22 , 29  
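As a concrete illustration of one of these methods, internal consistency can be estimated directly from raw item scores using the standard Cronbach's alpha formula. The three items and five sets of ratings below are invented:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of instrument items.

    items: one list of scores per item, all rating the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return k / (k - 1) * (1 - sum(pvar(item) for item in items) / pvar(totals))

# Three hypothetical instrument items rated for five respondents.
item1 = [4, 5, 3, 5, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 3, 4, 2]
alpha = cronbach_alpha([item1, item2, item3])
```

Because the three items rise and fall together across respondents, alpha here is high (about 0.94); values above roughly 0.7 to 0.8 are conventionally read as acceptable internal consistency, though the appropriate threshold depends on the instrument's purpose.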

Example of a Validity and Reliability Argument

This section provides an illustration of validity and reliability in medical education. We use the signaling questions outlined in table 4 to make a validity and reliability argument for the Harvard Assessment of Anesthesia Resident Performance (HARP) instrument. 7   The HARP was developed by Blum et al. to measure the performance of anesthesia trainees that is required to provide safe anesthetic care to patients. According to the authors, the HARP is designed to be used “…as part of a multiscenario, simulation-based assessment” of resident performance. 7  

Content Validity: Does the Instrument’s Content Represent the Construct Being Measured?

To demonstrate content validity, instrument developers should describe the construct being measured and how the instrument was developed, and justify their approach. 25   The HARP is intended to measure resident performance in the critical domains required to provide safe anesthetic care. As such, investigators note that the HARP items were created through a two-step process. First, the instrument’s developers interviewed anesthesiologists with experience in resident education to identify the key traits needed for successful completion of anesthesia residency training. Second, the authors used a modified Delphi process to synthesize the responses into five key behaviors: (1) formulate a clear anesthetic plan, (2) modify the plan under changing conditions, (3) communicate effectively, (4) identify performance improvement opportunities, and (5) recognize one’s limits. 7 , 30  

Response Process Validity: Are Raters Interpreting the Instrument Items as Intended?

In the case of the HARP, the developers included a scoring rubric with behavioral anchors to ensure that faculty raters could clearly identify how resident performance in each domain should be scored. 7  

Internal Structure Validity: Do Instrument Items Measuring Similar Constructs Yield Homogenous Results? Do Instrument Items Measuring Different Constructs Yield Heterogeneous Results?

Item-correlation for the HARP demonstrated a high degree of correlation between some items ( e.g. , formulating a plan and modifying the plan under changing conditions) and a lower degree of correlation between other items ( e.g. , formulating a plan and identifying performance improvement opportunities). 30   This finding is expected since the items within the HARP are designed to assess separate performance domains, and we would expect residents’ functioning to vary across domains.

Relationship to Other Variables’ Validity: Do Instrument Scores Correlate with Other Measures of Similar or Different Constructs as Expected?

As it applies to the HARP, one would expect that the performance of anesthesia residents will improve over the course of training. Indeed, HARP scores were found to be generally higher among third-year residents compared to first-year residents. 30  

Consequence Validity: Are Instrument Results Being Used as Intended? Are There Unintended or Negative Uses of the Instrument Results?

While investigators did not intentionally seek out consequence validity evidence for the HARP, unanticipated consequences of HARP scores were identified by the authors as follows:

“Data indicated that CA-3s had a lower percentage of worrisome scores (rating 2 or lower) than CA-1s… However, it is concerning that any CA-3s had any worrisome scores…low performance of some CA-3 residents, albeit in the simulated environment, suggests opportunities for training improvement.” 30  

That is, using the HARP to measure the performance of CA-3 anesthesia residents had the unintended consequence of identifying the need for improvement in resident training.

Reliability: Are the Instrument’s Scores Reproducible and Consistent between Raters?

The HARP was applied by two raters for every resident in the study across seven different simulation scenarios. The investigators conducted a generalizability study of HARP scores to estimate the variance in assessment scores that was due to the resident, the rater, and the scenario. They found little variance was due to the rater ( i.e. , scores were consistent between raters), indicating a high level of reliability. 7  

Sampling refers to the selection of research subjects ( i.e. , the sample) from a larger group of eligible individuals ( i.e. , the population). 31   Effective sampling leads to the inclusion of research subjects who represent the larger population of interest. Alternatively, ineffective sampling may lead to the selection of research subjects who are significantly different from the target population. Imagine that researchers want to explore the relationship between burnout and educational debt among pain medicine specialists. The researchers distribute a survey to 1,000 pain medicine specialists (the population), but only 300 individuals complete the survey (the sample). This result is problematic because the characteristics of those individuals who completed the survey and the entire population of pain medicine specialists may be fundamentally different. It is possible that the 300 study subjects may be experiencing more burnout and/or debt, and thus, were more motivated to complete the survey. Alternatively, the 700 nonresponders might have been too busy to respond and even more burned out than the 300 responders, which would suggest that the true relationship was even stronger than the study findings indicated.

When evaluating a medical education research article, it is important to identify the sampling technique the researchers employed, how it might have influenced the results, and whether the results apply to the target population. 24  

Sampling Techniques

Sampling techniques generally fall into two categories: probability- or nonprobability-based. Probability-based sampling ensures that each individual within the target population has an equal opportunity of being selected as a research subject. Most commonly, this is done through random sampling, which should lead to a sample of research subjects that is similar to the target population. If significant differences between sample and population exist, those differences should be due to random chance, rather than systematic bias. The difference between data from a random sample and that from the population is referred to as sampling error. 24  
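The distinction between sampling error and sampling bias can be made concrete with a simulation. In the sketch below the simulated population of burnout scores is invented, and every individual has an equal chance of selection, so any gap between sample and population means is chance alone:

```python
import random

random.seed(0)

# Simulated population of 10,000 burnout scores (numbers invented).
population = [random.gauss(50, 10) for _ in range(10_000)]
population_mean = sum(population) / len(population)

# Probability-based sampling: random.sample gives every individual
# an equal chance of being selected as a research subject.
sample = random.sample(population, 300)
sample_mean = sum(sample) / len(sample)

sampling_error = sample_mean - population_mean  # due to chance, not bias
```

Drawing repeated samples makes sample_mean bounce around population_mean; the standard error of the mean (here roughly 10/√300, or about 0.6) quantifies that chance variation, whereas a biased selection procedure would shift the sample mean systematically.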

Nonprobability-based sampling involves selecting research participants such that inclusion of some individuals may be more likely than the inclusion of others. 31   Convenience sampling is one such example and involves selection of research subjects based upon ease or opportuneness. Convenience sampling is common in medical education research, but, as outlined in the example at the beginning of this section, it can lead to sampling bias. 24   When evaluating an article that uses nonprobability-based sampling, it is important to look for participation/response rate. In general, a participation rate of less than 75% should be viewed with skepticism. 21   Additionally, it is important to determine whether characteristics of participants and nonparticipants were reported and if significant differences between the two groups exist.

Interpreting medical education research requires a basic understanding of common ways in which quantitative data are analyzed and displayed. In this section, we highlight two broad topics that are of particular importance when evaluating research articles.

The Nature of the Measurement Variable

Measurement variables in quantitative research generally fall into three categories: nominal, ordinal, or interval. 24   Nominal variables (sometimes called categorical variables) involve data that can be placed into discrete categories without a specific order or structure. Examples include sex (male or female) and professional degree (M.D., D.O., M.B.B.S., etc .), where there is no clear hierarchical order to the categories. Ordinal variables can be ranked according to some criterion, but the spacing between categories may not be equal. Examples of ordinal variables may include measurements of satisfaction (satisfied vs . unsatisfied), agreement (disagree vs . agree), and educational experience (medical student, resident, fellow). As it applies to educational experience, it is noteworthy that even though education can be quantified in years, the spacing between years ( i.e. , educational “growth”) remains unequal. For instance, the difference in performance between second- and third-year medical students is dramatically different from the difference between third- and fourth-year medical students. Interval variables can also be ranked according to some criterion, but, unlike ordinal variables, the spacing between variable categories is equal. Examples of interval variables include test scores and salary. However, the conceptual boundaries between these measurement variables are not always clear, as in the case where ordinal scales can be assumed to have the properties of an interval scale, so long as the data’s distribution is not substantially skewed. 32  

Understanding the nature of the measurement variable is important when evaluating how the data are analyzed and reported. Medical education research commonly uses measurement instruments with items rated on Likert-type scales, whereby the respondent is asked to indicate their level of agreement with a given statement. The response is often translated into a corresponding number ( e.g. , 1 = strongly disagree, 3 = neutral, 5 = strongly agree). Notably, scores from Likert-type scales are often not normally distributed ( i.e. , they are skewed toward one end of the scale), indicating that the spacing between scores is unequal and the variable is ordinal in nature. In these cases, it is recommended to report results as frequencies or medians, rather than means and SDs. 33  

Consider an article evaluating medical students’ satisfaction with a new curriculum. Researchers measure satisfaction using a Likert-type scale (1 = very unsatisfied, 2 = unsatisfied, 3 = neutral, 4 = satisfied, 5 = very satisfied). A total of 20 medical students evaluate the curriculum, 10 of whom rate their satisfaction as “satisfied,” and 10 of whom rate it as “very satisfied.” In this case, it does not make much sense to report an average score of 4.5; it makes more sense to report results in terms of frequency ( e.g. , half of the students were “very satisfied” with the curriculum, and half were “satisfied”).
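The curriculum example above can be worked through in a few lines of Python. The point is that frequencies convey the ordinal responses faithfully, while the mean of 4.5 corresponds to no response option any student actually chose:

```python
from collections import Counter
from statistics import mean

# Hypothetical ratings reproducing the curriculum example:
# 1 = very unsatisfied ... 5 = very satisfied
ratings = [4] * 10 + [5] * 10
labels = {1: "very unsatisfied", 2: "unsatisfied", 3: "neutral",
          4: "satisfied", 5: "very satisfied"}

# The mean (4.5) corresponds to no option any student chose
print(f"mean = {mean(ratings):.1f}  (misleading for ordinal data)")

# Frequencies describe the ordinal result faithfully
counts = Counter(ratings)
for score in sorted(counts):
    n = counts[score]
    print(f"{labels[score]}: {n}/{len(ratings)} ({n / len(ratings):.0%})")
```

This prints "satisfied: 10/20 (50%)" and "very satisfied: 10/20 (50%)", which is exactly the frequency-based summary recommended in the text.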

Effect Size and CIs

In medical education, as in other research disciplines, it is common to report statistically significant results ( i.e. , small P values) in order to increase the likelihood of publication. 34 , 35   However, a significant P value does not in itself represent the educational impact of the study results. A statement like “Intervention x was associated with a significant improvement in learners’ intubation skill compared to education intervention y ( P < 0.05)” tells us that a between-group difference this large would have arisen by chance less than 5% of the time if there were truly no difference between interventions x and y. Yet that does not mean that the study intervention necessarily caused the results, nor does it indicate whether the between-group difference is educationally significant. Therefore, readers should consider looking beyond the P value to the effect size and/or CI when interpreting the study results. 36 , 37  
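A small simulation makes the distinction between statistical and educational significance concrete. In this sketch (hypothetical exam scores; all numbers are illustrative), a very large sample turns an educationally trivial 0.8-point difference into a highly "significant" result:

```python
import math
import random

random.seed(1)

# Two simulated cohorts of exam scores (0-100) whose true means differ by
# only 0.8 points: educationally trivial, yet with n = 20,000 per group
# the difference is easily "statistically significant."
n = 20_000
group_x = [random.gauss(75.0, 10.0) for _ in range(n)]
group_y = [random.gauss(75.8, 10.0) for _ in range(n)]

mean_x, mean_y = sum(group_x) / n, sum(group_y) / n
var_x = sum((s - mean_x) ** 2 for s in group_x) / (n - 1)
var_y = sum((s - mean_y) ** 2 for s in group_y) / (n - 1)

# Large-sample z statistic (z > 3.29 corresponds to P < 0.001)
se = math.sqrt(var_x / n + var_y / n)
z = (mean_y - mean_x) / se
# Standardized effect size (Cohen's d): tiny despite the tiny P value
d = (mean_y - mean_x) / math.sqrt((var_x + var_y) / 2)
print(f"z = {z:.1f} (P < 0.001), yet Cohen's d = {d:.2f}: "
      "statistically significant but educationally trivial")
```

The same logic runs in reverse for small studies: an educationally meaningful effect can fail to reach statistical significance simply for lack of power.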

Effect size is “the magnitude of the difference between two groups,” which helps to quantify the educational significance of the research results. 37   Common measures of effect size include Cohen’s d (the standardized difference between two means), the risk ratio (which compares binary outcomes between two groups), and Pearson’s r correlation (the linear relationship between two continuous variables). 37   CIs represent “a range of values around a sample mean or proportion” and are a measure of precision. 31   While effect size and CI give more useful information than statistical significance alone, they are commonly omitted from medical education research articles. 35   In such instances, readers should be wary of overinterpreting a P value in isolation. For further information on effect size and CI, we direct readers to the work of Sullivan and Feinn 37   and Hulley et al. 31  
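As a worked illustration, the sketch below computes Cohen's d and a normal-approximation 95% CI for a difference in means, using only the Python standard library. The checklist scores are hypothetical, and the 1.96 multiplier is the large-sample normal approximation rather than a t-based interval:

```python
import math
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d (standardized mean difference) plus a normal-approximation
    95% CI for the raw difference in means."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    s1, s2 = stdev(group1), stdev(group2)
    # Pooled SD across both groups
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    diff = m1 - m2
    se = pooled_sd * math.sqrt(1 / n1 + 1 / n2)
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    return d, diff, ci

# Hypothetical intubation checklist scores (0-100) for interventions x and y
x_scores = [78, 82, 75, 88, 80, 79, 84, 77, 81, 83]
y_scores = [74, 76, 71, 80, 73, 75, 78, 72, 70, 77]
d, diff, (lo, hi) = cohens_d(x_scores, y_scores)
print(f"d = {d:.2f}; mean difference = {diff:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```

Reporting the effect size and CI this way tells the reader both how large the difference is and how precisely it was estimated, which a bare P value cannot.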

In this final section, we identify instruments that can be used to evaluate the quality of quantitative medical education research articles. To this point, we have focused on framing the study and research methodologies and identifying potential pitfalls to consider when appraising a specific article. This is important because how a study is framed and the choice of methodology require some subjective interpretation. Fortunately, there are several instruments available for evaluating medical education research methods and providing a structured approach to the evaluation process.

The Medical Education Research Study Quality Instrument (MERSQI) 21   and the Newcastle Ottawa Scale-Education (NOS-E) 38   are two commonly used instruments, both of which have an extensive body of validity evidence to support the interpretation of their scores. Table 5 21 , 39   provides more detail regarding the MERSQI, which includes evaluation of study design, sampling, data type, validity, data analysis, and outcomes. We have found that applying the MERSQI to manuscripts, articles, and protocols has intrinsic educational value, because this practice familiarizes MERSQI users with fundamental principles of medical education research. One aspect of the MERSQI that deserves special mention is the section on evaluating outcomes based on Kirkpatrick’s widely recognized hierarchy of reaction, learning, behavior, and results ( table 5 ; fig. ). 40   Validity evidence for MERSQI scores includes operational definitions to improve response process; excellent reliability and internal consistency; high correlation with other measures of study quality, likelihood of publication, and citation rate; and an association between MERSQI score and the likelihood of study funding. 21 , 41   Additionally, consequence validity for MERSQI scores has been demonstrated by their utility for identifying and disseminating high-quality research in medical education. 42  

Fig. Kirkpatrick’s hierarchy of outcomes as applied to education research. Reaction = Level 1, Learning = Level 2, Behavior = Level 3, Results = Level 4. Outcomes become more meaningful, yet more difficult to achieve, when progressing from Level 1 through Level 4. Adapted with permission from Beckman and Cook, 2007. 2  


The Medical Education Research Study Quality Instrument for Evaluating the Quality of Medical Education Research


The NOS-E is a newer tool for evaluating the quality of medical education research. It was developed as a modification of the Newcastle-Ottawa Scale 43   for appraising the quality of nonrandomized studies. The NOS-E includes items focusing on the representativeness of the experimental group, selection and compatibility of the control group, missing data/study retention, and blinding of outcome assessors. 38 , 39   Additional validity evidence for NOS-E scores includes operational definitions to improve response process, excellent reliability and internal consistency, and correlation with other measures of study quality. 39   Notably, the complete NOS-E, along with its scoring rubric, can be found in the article by Cook and Reed. 39  

A recent comparison of the MERSQI and NOS-E found acceptable interrater reliability and good correlation between the two instruments. 39   However, notable differences exist between the MERSQI and NOS-E. Specifically, the MERSQI may be applied to a broad range of study designs, including experimental and cross-sectional research. Additionally, the MERSQI addresses issues related to measurement validity and data analysis, and places emphasis on educational outcomes. The NOS-E, on the other hand, focuses specifically on experimental study designs, and on issues related to sampling techniques and outcome assessment. 39   Ultimately, the MERSQI and NOS-E are complementary tools that may be used together when evaluating the quality of medical education research.

Conclusions

This article provides an overview of quantitative research in medical education, underscores the main components of education research, and provides a general framework for evaluating research quality. We highlighted the importance of framing a study with respect to purpose, conceptual framework, and statement of study intent. We reviewed the most common research methodologies, along with threats to the validity of a study and its measurement instruments. Finally, we identified two complementary instruments, the MERSQI and NOS-E, for evaluating the quality of a medical education research study.

Bordage G: Conceptual frameworks to illuminate and magnify. Med Educ 2009; 43:312–9.

Cook DA, Beckman TJ: Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med 2006; 119:166.e7–166.e16.

Fraenkel JR, Wallen NE, Hyun HH: How to Design and Evaluate Research in Education, 9th edition. New York, McGraw-Hill Education, 2015.

Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB: Designing Clinical Research, 4th edition. Philadelphia, Lippincott Williams & Wilkins, 2011.

Irby BJ, Brown G, Lara-Alecio R, Jackson S: The Handbook of Educational Theories. Charlotte, NC, Information Age Publishing, Inc., 2015.

American Educational Research Association, American Psychological Association: Standards for Educational and Psychological Testing, 2014.

Swanwick T: Understanding Medical Education: Evidence, Theory and Practice, 2nd edition. Wiley-Blackwell, 2013.

Sullivan GM, Artino AR Jr: Analyzing and interpreting data from Likert-type scales. J Grad Med Educ 2013; 5:541–2.

Sullivan GM, Feinn R: Using effect size—or why the P value is not enough. J Grad Med Educ 2012; 4:279–82.

Tavakol M, Sandars J: Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part II. Med Teach 2014; 36:838–48.

Support was provided solely from institutional and/or departmental sources.

The authors declare no competing interests.


How Common Are Experimental Designs in Medical Education? Findings from a Bibliometric Analysis of Recent Dissertations and Theses

Royal, Kenneth D; Rinaldo, Jason CB 1

Department of Clinical Sciences, North Carolina State University, Raleigh, NC, USA

1 Office of Assessment, Rawls College of Business, Texas Tech University, Lubbock, TX, USA

Address for correspondence: Dr. Kenneth D. Royal, Department of Clinical Sciences, North Carolina State University, 1060 William Moore Dr. Raleigh, NC 27607, USA E-mail: [email protected]

This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.

Background: 

There has been a recent influx of researchers in the field of medical education coming from medical and health science backgrounds. Researchers from health fields often do not realize that studies involving experimental designs are relatively rare throughout educational research. Experts in education research note that experimental designs are largely incompatible with educational studies due to various contextual, legal, and ethical issues.

Purpose: 

We sought to investigate the frequency with which experimental designs have been utilized in recent medical education dissertations and theses.

Methods: 

A bibliometric analysis of dissertations and theses completed in the field of medical education between 2011 and 2016.

Results: 

Fewer than 10% of doctoral dissertations and master's theses involved some type of experimental design. Only 6.12% of all dissertation and master's projects involved randomized experiments.

Conclusions: 

Randomized experiments occur only slightly more frequently in medical education than in other educational fields.

INTRODUCTION

In recent years, there has been a significant influx in the number of individuals conducting research in the field of medical education. The vast majority of these researchers come from medical and health science backgrounds and have little to no formal training in education research. These researchers tend to be accustomed to experimental designs, as these research designs are frequently used in drug trials and health-related research. This design is considered to be the most rigorous, least biased, and of the highest quality.[ 1 ] Naturally, many of these individuals have called for an increased use of experimental designs and randomized controlled trials in medical education studies. Unfortunately, educational studies often do not allow for the same randomization as can be found in drug or treatment conditions.

Concomitantly, numerous medical education researchers have questioned the quality of medical education research due to what they perceive to be a lack of rigorous research designs. Many of these individuals immediately dismiss any work that does not involve an experimental design. These hegemonistic views about research designs typically are perceived as insulting by researchers with formal training in the field of education. Such attacks on nonexperimental designs are reminiscent of the “Paradigm Wars” in which researchers in the social and behavioral sciences debated the superiority of quantitative versus qualitative methods for decades before eventually agreeing that each type of inquiry has its own merits and appropriateness for use.

Unquestionably, there is a significant difference between research in the medical and health sciences and research in the education sciences.[ 2 ] For instance, what is entirely appropriate in medicine may be entirely inappropriate, unethical, impractical, and/or illegal in education (and vice versa).[ 2 ] Further, the nature of the variables studied between the sciences and the methods used to handle those challenges often are unique to one's discipline. In fact, experimental designs largely are incompatible with most educational contexts. To illustrate this point, the National Science Foundation's 2015–2016 Survey of Earned Doctorates reported there are 40 subdisciplines within the field of education.[ 3 ] In a review of all dissertations completed across these many education subdisciplines, research has noted that <1% involved randomized experiments.[ 4 ]

Interestingly, the disconnect between medical/health science researchers and education researchers whose training typically is rooted in the social and behavioral sciences appears to come down to a single, but significant, difference in perspective. That is, medical and health science researchers typically begin their research with an experimental design in mind and then craft a research question that can be answered within an experimental context. Education and other social and behavioral researchers, on the other hand, begin with a particular research question in mind and then identify an appropriate research design to best answer the question(s). These fundamental differences in research norms, based on one's disciplinary training, tend to create confusion about what constitutes “good” research in interdisciplinary fields such as medical education, where the biomedical and social/behavioral sciences converge.

Any ongoing confusion about what constitutes “good” medical education research is harmful for the medical education community and needs resolution. For example, many medical education researchers with formal training in education research have expressed frustration with reviewers and editors who insinuate that any study that does not utilize some form of experimental design is “weak,” “flawed,” or otherwise incapable of yielding valid results.[ 2 ] On the other hand, many researchers with little or no formal training in education research are frustrated by article submissions that do not meet their expectations for a high-quality research design.[ 2 ] Thus, some resolution, or dialog toward resolution, is necessary.

To that end, the purpose of this study was to investigate the research designs utilized in recent dissertations and theses in the subject area of medical education. The rationale is that dissertations and theses provide unique insights about research in the field of medical education. For example, dissertations and theses may originate from a variety of academic disciplines, tend to be rather exhaustive works often exceeding a hundred pages, typically involve rigorous research designs, usually are led by a committee chair/mentor who is a content expert in the subject area, and almost always are evaluated by a team of 3–5 (or more) faculty experts, many of whom also are subject matter experts and specialists in research methodology.

METHODS

A bibliometric analysis was conducted using ProQuest Dissertations/Theses, the largest repository for dissertations and theses in the world with >3.8 million works. A search was conducted using the keyword “medical education” in the subject field. This resulted in a total of 147 dissertations and theses. Search parameters included the most recent 5 years, with a specific date range of August 1, 2011 to August 1, 2016. Each dissertation and thesis was reviewed to determine whether an experimental or quasi-experimental design was employed.

RESULTS

Of the 147 dissertations and theses identified, 14 (9.52%) utilized either an experimental or quasi-experimental design. Of these 14 studies, 11 (78.57%) were dissertations and 3 (21.43%) were theses. With respect to experimental design type, 9 (64.29%) involved randomized experiments, and 5 (35.71%) were quasi-experimental in nature. Of the 9 randomized experimental studies, 7 (77.78%) were dissertations, and 2 (22.22%) were theses.

With respect to degree type, of the 14 experimental or quasi-experimental designs used, 6 (42.9%) were Doctorates in Education (EdD), 4 (28.6%) were Doctorates of Philosophy (PhD), 1 (7.14%) was a Doctorate of Public Health, and 3 (21.43%) were Masters of Science degrees. Finally, with respect to the colleges granting these degrees, 7 (50.00%) were from colleges of education, whereas 7 (50.00%) came from other colleges (e.g., medicine, public health, engineering, and arts and sciences). A full breakdown of results is presented in Table 1 .


DISCUSSION

Overall, 9.52% of doctoral dissertations and master's theses in the subject area of medical education over the 5-year period involved some type of experimental design. Only 6.12% of these projects involved randomized experiments, a rate only slightly higher than that of dissertations and theses in the field of education generally.[ 4 ] These results support the notion that research in medical education has far more in common with the methodological conventions of education and the social and behavioral sciences than those of the medical and health sciences.

Of course, some may argue dissertations and theses are subpar in quality when compared to peer-reviewed, published literature. Although there certainly will be some variability in the quality of graduate projects (much like there is variability in quality of peer-reviewed papers), there also is much reason to believe these graduate projects may actually be of superior quality to many peer-reviewed papers published in the medical education literature (e.g., incredibly rigorous requirements, expert faculty oversight and mentoring, and subject matter expert evaluators). Thus, it would be a mistake to dismiss the quality of dissertations and theses because they have yet to undergo peer review by an academic journal, especially given many of the long-standing concerns relating to academic peer review (e.g., reviewer shortage and questionable review quality).[ 19 , 20 ] In fact, many fine peer-reviewed publications stem directly from a researcher's graduate project.

CONCLUSIONS

Medical education research greatly benefits from interdisciplinary perspectives, which includes methodological practice and research design. Certainly, researchers are encouraged to pursue experimental designs when ethical, legal, and operational constraints permit. However, results of this study further illustrate that conducting randomized experiments in education is an ambitious, but generally unattainable goal. Thus, it should serve as a reminder that it is important to reassess assumptions about research designs when conducting research in the field of medical education.

Financial support and sponsorship

Conflicts of interest

Both authors are editors for Education in the Health Professions. Thus, peer-review was initiated and performed by an independent editor associated with the journal.

Keywords: Bibliometrics; dissertations and theses; education research; experiments; medical education; research design; research quality


Realist methods in medical education research: what are they and what can they contribute?

Affiliation.

  • 1 Centre for Primary Care and Public Health, Blizard Institute, Queen Mary University of London, London, UK. [email protected]
  • PMID: 22150200
  • DOI: 10.1111/j.1365-2923.2011.04045.x

Context: Education is a complex intervention which produces different outcomes in different circumstances. Education researchers have long recognised the need to supplement experimental studies of efficacy with a broader range of study designs that will help to unpack the 'how' and 'why' questions and illuminate the many, varied and interdependent mechanisms by which interventions may work (or fail to work) in different contexts.

Methods: One promising approach is realist evaluation, which seeks to establish what works, for whom, in what circumstances, in what respects, to what extent, and why. This paper introduces the realist approach and explains why it is particularly suited to education research. It gives a brief introduction to the philosophical assumptions underlying realist methods and outlines key principles of realist evaluation (designed for empirical studies) and realist review (the application of realist methods to secondary research).

Discussion: The paper warns that realist approaches are not a panacea and lists the circumstances in which they are likely to be particularly useful.

© Blackwell Publishing Ltd 2012.


Reflections on experimental research in medical education

  • Reflections
  • Published: 22 April 2008
  • Volume 15, pages 455–464 (2010)


  • David A. Cook
  • Thomas J. Beckman


As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft-neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest–posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest–posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.



For a recent discussion of whether medical education is a hard or soft science, see Gruppen ( 2008 ).

Randomization cannot control for mortality (loss to follow-up), but it can facilitate analyses seeking to explore the implications of high participant dropout.

When pretests are used, researchers should not calculate the difference between pretest and posttest scores and statistically analyze the difference or change scores. Although this method is commonly used (indeed, we are guilty of having used it), it is inferior to the more appropriate use of the pretest as a covariate (along with treatment group and other relevant variables) in multivariate statistical models. See Cronbach and Furby ( 1970 ) and Norman and Streiner ( 2007 ) for detailed discussions.

Pretests may also be useful in randomized trials comparing active interventions if no treatment effect is found, by providing evidence that the lack of effect is not due to similarly ineffective interventions or an insensitive measurement tool (an exploration of the absolute effects of the treatments rather than the relative effects between groups). However, this analysis parallels the single-group pretest-posttest study with all attendant limitations.
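The footnotes above recommend using the pretest as a covariate in a regression model rather than analyzing post-minus-pre change scores. The sketch below illustrates that covariate-adjustment approach on simulated data (all numbers hypothetical; NumPy's least-squares solver stands in for a full statistical package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized trial (hypothetical data): 50 learners per arm.
n = 50
pre = rng.normal(60.0, 10.0, 2 * n)       # pretest scores
group = np.repeat([0.0, 1.0], n)          # 0 = control, 1 = intervention
# Posttest depends on pretest (slope 0.8) plus a 5-point intervention effect
post = 10.0 + 0.8 * pre + 5.0 * group + rng.normal(0.0, 5.0, 2 * n)

# ANCOVA-style analysis: regress posttest on the group indicator with the
# pretest as a covariate, instead of computing post - pre change scores.
X = np.column_stack([np.ones(2 * n), group, pre])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"adjusted intervention effect = {coef[1]:.2f} (true simulated effect = 5)")
```

The coefficient on the group indicator is the pretest-adjusted intervention effect; unlike a change-score analysis, this model also estimates how strongly the pretest predicts the posttest, and it avoids the reliability problems of difference scores discussed by Cronbach and Furby.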

Baernstein, A., Liss, H. K., Carney, P. A., & Elmore, J. G. (2007). Trends in study methods used in Undergraduate Medical Education Research, 1969–2007. Journal of the American Medical Association, 298 , 1038–1045.


Beckman, T. J., & Cook, D. A. (2004). Educational epidemiology. Journal of the American Medical Association, 292 , 2969.

Benson, K., & Hartz, A. J. (2000). A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine, 342 , 1878–1886.

Bland, J. M., & Altman, D. G. (1994). Statistic notes: Regression towards the mean. British Medical Journal, 308 , 1499.


Bordage, G. (2007). Moving the field forward: Going beyond quantitative–qualitative. Academic Medicine, 82 (10 suppl), S126–S128.

Callahan, C. A., Hojat, M., & Gonnella, J. S. (2007). Volunteer bias in medical education research: An empirical study of over three decades of longitudinal data. Medical Education, 41 , 746–753.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Chicago: Rand McNally.

Carney, P. A., Nierenberg, D. W., Pipas, C. F., Brooks, W. B., Stukel, T. A., & Keller, A. M. (2004). Educational epidemiology: Applying population-based design and analytic approaches to study medical education. Journal of the American Medical Association, 292 , 1044–1050.

Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342 , 1887–1892.

Cook, D. A. (2005). The research we still are not doing: An agenda for the study of computer-based learning. Academic Medicine, 80 , 541–548.

Cook, D. A., Beckman, T. J., & Bordage, G. (2007). Quality of reporting of experimental studies in medical education: A systematic review. Medical Education, 41 , 737–745.

Cook, D. A., Bordage, G., & Schmidt, H. G. (2008). Description, justification, and clarification: A framework for classifying the purposes of research in medical education. Medical Education, 42 , 128–133.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings . Boston: Houghton Mifflin.

Cook, D. A., Thompson, W. G., Thomas, K. G., Thomas, M. R., & Pankratz, V. S. (2006). Impact of self-assessment questions and learning styles in web-based learning: A randomized, controlled, crossover trial. Academic Medicine, 81 , 231–238.

Cronbach, L. J. (1982). Designing evaluations of educational and social problems . San Francisco: Jossey-Bass.

Cronbach, L. J., & Furby, L. (1970). How should we measure “change”—or should we? Psychological Bulletin, 74 , 68–80.

Dauphinee, W. D., & Wood-Dauphinee, S. (2004). The need for evidence in medical education: The development of best evidence medical education as an opportunity to inform, guide, and sustain medical education research. Academic Medicine, 79 , 925–930.

Des Jarlais, D. C., Lyles, C., & Crepaz, N. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94 , 361–366.

Education Group for Guidelines on Evaluation. (1999). Guidelines for evaluating papers on educational interventions. British Medical Journal, 318 , 1265–1267.

Fraenkel, J. R., & Wallen, N. E. (2003). How to design and evaluate research in education . New York, NY: McGraw-Hill.

Gruppen, L. D. (2008). Is medical education research ‘hard’ or ‘soft’ research? Advances in Health Sciences Education: Theory and Practice, 13, 1–2.

Harden, R. M., Grant, J., Buckley, G., & Hart, I. R. (1999). BEME Guide No. 1: Best evidence medical education. Medical Teacher, 21 , 553–562.

Harris, I. (2003). What does “the discovery of grounded theory” have to say to medical education? Advances in Health Sciences Education, 8 , 49–61.

Hutchinson, L. (1999). Evaluating and researching the effectiveness of educational interventions. British Medical Journal, 318 , 1267–1269.

Kennedy, T. J., & Lingard, L. A. (2007). Questioning competence: A discourse analysis of attending physicians’ use of questions to assess trainee competence. Academic Medicine, 82 (10 suppl), S12–S15.

Kirkpatrick, D. (1996). Revisiting Kirkpatrick’s four-level model. Training and Development, 50 (1), 54–59.

Norman, G. (2003). RCT = results confounded and trivial: The perils of grand educational experiments. Medical Education, 37 , 582–584.

Norman, G. R., & Streiner, D. L. (2007). Biostatistics: The bare essentials (Vol. 3). Hamilton: BC Decker.

Papadakis, M. A., Teherani, A., Banach, M. A., Knettler, T. R., Rattner, S. L., Stern, D. T., et al. (2005). Disciplinary action by medical boards and prior behavior in medical school. New England Journal of Medicine, 353 , 2673–2682.

Price, E. G., Beach, M. C., Gary, T. L., Robinson, K. A., Gozu, A., Palacio, A., et al. (2005). A systematic review of the methodological rigor of studies evaluating cultural competence training of health professionals. Academic Medicine, 80 , 578–586.

Shea, J. A., Arnold, L., & Mann, K. V. (2004). A RIME perspective on the quality and relevance of current and future medical education research. Academic Medicine, 79 , 931–938.

Tamblyn, R., Abrahamowicz, M., Dauphinee, D., Wenghofer, E., Jacques, A., Klass, D., et al. (2007). Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. Journal of the American Medical Association, 298 , 993–1001.

Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: Evidence from meta-analysis. Psychological Methods, 6 , 413–429.

Woods, N. N., Brooks, L. R., & Norman, G. R. (2005). The value of basic science in clinical diagnosis: Creating coherence among signs and symptoms. Medical Education, 39 , 107–112.

Authors and affiliations

Division of General Internal Medicine, Mayo Clinic College of Medicine, Baldwin 4-A, 200 First Street SW, Rochester, Minnesota, 55905, USA

David A. Cook & Thomas J. Beckman


Corresponding author

Correspondence to David A. Cook .


Cook, D.A., Beckman, T.J. Reflections on experimental research in medical education. Adv in Health Sci Educ 15 , 455–464 (2010). https://doi.org/10.1007/s10459-008-9117-3


Received : 18 March 2008

Accepted : 08 April 2008

Published : 22 April 2008

Issue Date : August 2010


  • Medical education
  • Research methods
  • Research design

Dtsch Arztebl Int. 2009 Apr; 106(15)

Types of Study in Medical Research

Bernd Röhrig

1 MDK Rheinland-Pfalz, Referat Rehabilitation/Biometrie, Alzey

Jean-Baptist du Prel

2 Zentrum für Präventive Pädiatrie, Zentrum für Kinder- und Jugendmedizin, Mainz

Daniel Wachtlin

3 Interdisziplinäres Zentrum Klinische Studien (IZKS), Fachbereich Medizin der Universität Mainz

Maria Blettner

4 Institut für Medizinische Biometrie, Epidemiologie und Informatik (IMBEI), Johannes Gutenberg Universität Mainz

The choice of study type is an important aspect of the design of medical studies. The study design and consequent study type are major determinants of a study’s scientific quality and clinical value.

This article describes the structured classification of studies into two types, primary and secondary, as well as a further subclassification of studies of primary type. This is done on the basis of a selective literature search concerning study types in medical research, in addition to the authors’ own experience.

Three main areas of medical research can be distinguished by study type: basic (experimental), clinical, and epidemiological research. Furthermore, clinical and epidemiological studies can be further subclassified as either interventional or noninterventional.

Conclusions

The study type that can best answer the particular research question at hand must be determined not only on a purely scientific basis, but also in view of the available financial resources, staffing, and practical feasibility (organization, medical prerequisites, number of patients, etc.).

The quality, reliability, and publishability of a study are decisively influenced by the selection of a proper study design. The study type is a component of the study design (see the article "Study Design in Medical Research") and must be specified before the study starts. The study type is determined by the question to be answered, and it governs how useful the study will be and how well its results can be interpreted. If the wrong study type has been selected, this cannot be rectified once the study has started.

After an earlier publication dealing with aspects of study design, the present article deals with study types in primary and secondary research, with a focus on primary research. A separate article will be devoted to study types in secondary research, such as meta-analyses and reviews. This article covers the classification of individual study types; the conception, implementation, advantages, disadvantages, and possible uses of the different study types are illustrated by examples. The article is based on a selective literature search on study types in medical research, as well as the authors' own experience.

Classification of study types

In principle, medical research is classified into primary and secondary research. While secondary research summarizes available studies in the form of reviews and meta-analyses, the actual studies are performed in primary research. Three main areas are distinguished: basic medical research, clinical research, and epidemiological research. In individual cases, it may be difficult to assign a given study to one of these three main categories or to their subcategories. In the interests of clarity and to avoid excessive length, the authors will dispense with discussing special areas of research, such as health services research, quality assurance, or clinical epidemiology. Figure 1 gives an overview of the different study types in medical research.

Figure 1. Classification of different study types. *1 sometimes known as experimental research; *2 analogous term: interventional; *3 analogous term: noninterventional or nonexperimental

This scheme is intended to classify the study types as clearly as possible. In the interests of clarity, we have excluded clinical epidemiology — a subject which borders on both clinical and epidemiological research ( 3 ). The study types in this area can be found under clinical research and epidemiology.

Basic research

Basic medical research (otherwise known as experimental research) includes animal experiments, cell studies, biochemical, genetic and physiological investigations, and studies on the properties of drugs and materials. In almost all experiments, at least one independent variable is varied and the effects on the dependent variable are investigated. The procedure and the experimental design can be precisely specified and implemented ( 1 ). For example, the population, number of groups, case numbers, treatments and dosages can be exactly specified. It is also important that confounding factors should be specifically controlled or reduced. In experiments, specific hypotheses are investigated and causal statements are made. High internal validity (= unambiguity) is achieved by setting up standardized experimental conditions, with low variability in the units of observation (for example, cells, animals or materials). External validity is a more difficult issue. Laboratory conditions cannot always be directly transferred to normal clinical practice and processes in isolated cells or in animals are not equivalent to those in man (= generalizability) ( 2 ).

Basic research also includes the development and improvement of analytical procedures—such as analytical determination of enzymes, markers or genes—, imaging procedures—such as computed tomography or magnetic resonance imaging—, and gene sequencing—such as the link between eye color and specific gene sequences. The development of biometric procedures—such as statistical test procedures, modeling and statistical evaluation strategies—also belongs here.

Clinical studies

Clinical studies include both interventional (or experimental) studies and noninterventional (or observational) studies. A clinical drug study is an interventional clinical study, defined according to §4 Paragraph 23 of the Medicines Act [Arzneimittelgesetz; AMG] as "any study performed on man with the purpose of studying or demonstrating the clinical or pharmacological effects of drugs, to establish side effects, or to investigate absorption, distribution, metabolism or elimination, with the aim of providing clear evidence of the efficacy or safety of the drug."

Interventional studies also include studies on medical devices and studies in which surgical, physical or psychotherapeutic procedures are examined. In contrast to clinical studies, §4 Paragraph 23 of the AMG describes noninterventional studies as follows: "A noninterventional study is a study in the context of which knowledge from the treatment of persons with drugs in accordance with the instructions for use specified in their registration is analyzed using epidemiological methods. The diagnosis, treatment and monitoring are not performed according to a previously specified study protocol, but exclusively according to medical practice."

The aim of an interventional clinical study is to compare treatment procedures within a patient population, which should exhibit as few as possible internal differences, apart from the treatment ( 4 , e1 ). This is to be achieved by appropriate measures, particularly by random allocation of the patients to the groups, thus avoiding bias in the result. Possible therapies include a drug, an operation, the therapeutic use of a medical device such as a stent, or physiotherapy, acupuncture, psychosocial intervention, rehabilitation measures, training or diet. Vaccine studies also count as interventional studies in Germany and are performed as clinical studies according to the AMG.

Interventional clinical studies are subject to a variety of legal and ethical requirements, including the Medicines Act and the Law on Medical Devices. Studies with medical devices must be registered by the responsible authorities, who must also approve studies with drugs. Drug studies also require a favorable ruling from the responsible ethics committee. A study must be performed in accordance with the binding rules of Good Clinical Practice (GCP) ( 5 , e2 – e4 ). For clinical studies on persons capable of giving consent, it is absolutely essential that the patient should sign a declaration of consent (informed consent) ( e2 ). A control group is included in most clinical studies. This group receives another treatment regimen and/or placebo—a therapy without substantial efficacy. The selection of the control group must not only be ethically defensible, but also be suitable for answering the most important questions in the study ( e5 ).

Clinical studies should ideally include randomization, in which the patients are allocated by chance to the therapy arms. This procedure is performed with random numbers or computer algorithms ( 6 – 8 ). Randomization ensures that the patients will be allocated to the different groups in a balanced manner and that possible confounding factors—such as risk factors, comorbidities and genetic variabilities—will be distributed by chance between the groups (structural equivalence) ( 9 , 10 ). Randomization is intended to maximize homogeneity between the groups and prevent, for example, a specific therapy being reserved for patients with a particularly favorable prognosis (such as young patients in good physical condition) ( 11 ).
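A common computer algorithm for such allocation is permuted-block randomization. The sketch below is illustrative only (the two-arm labels and the fixed seed, included for reproducibility, are assumptions, not part of any real trial protocol); it keeps the group sizes balanced after every complete block:

```python
import random

def block_randomize(n_patients, block_size=4, seed=42):
    """Allocate patients to arms A/B in permuted blocks so group sizes stay balanced."""
    assert block_size % 2 == 0, "block size must be even for a 1:1 allocation"
    rng = random.Random(seed)  # in practice the seed is concealed from investigators
    allocation = []
    while len(allocation) < n_patients:
        # each block contains equal numbers of A and B, shuffled at random
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_patients]

arms = block_randomize(20)
print(arms.count("A"), arms.count("B"))  # → 10 10 (balanced arms)
```

Because every completed block contains equal numbers of each arm, the imbalance between groups can never exceed half a block, which is why blocked schemes are preferred over simple coin-flip randomization in small trials.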

Blinding is another suitable method to avoid bias. A distinction is made between single and double blinding. With single blinding, the patient is unaware which treatment he is receiving, while, with double blinding, neither the patient nor the investigator knows which treatment is planned. Blinding the patient and investigator excludes possible subjective (even subconscious) influences on the evaluation of a specific therapy (e.g. drug administration versus placebo). Thus, double blinding ensures that the patient or therapy groups are both handled and observed in the same manner. The highest possible degree of blinding should always be selected. The study statistician should also remain blinded until the details of the evaluation have finally been specified.

A well designed clinical study must also include case number planning. This ensures that the assumed therapeutic effect can be recognized as such, with a previously specified statistical probability (statistical power) ( 4 , 6 , 12 ).
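The logic of case number planning can be sketched with the standard normal-approximation formula for comparing two means; the effect size and standard deviation below are hypothetical, not drawn from any particular trial:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-sided comparison of two means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    d = delta / sd                       # standardized effect size
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# Detecting a 5-point difference with SD 10 (standardized effect 0.5) at 80% power:
print(n_per_group(5, 10))  # → 63 per arm (normal approximation)
```

In practice the exact calculation, and inflation for expected dropout, is done with a biometrician; the approximation only shows how the significance level, desired power, and assumed effect size drive the required case numbers.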

It is important for the performance of a clinical trial that it should be carefully planned and that the exact clinical details and methods should be specified in the study protocol ( 13 ). It is, however, also important that the implementation of the study according to the protocol, as well as data collection, must be monitored. For a first class study, data quality must be ensured by double data entry, programming plausibility tests, and evaluation by a biometrician. International recommendations for the reporting of randomized clinical studies can be found in the CONSORT statement (Consolidated Standards of Reporting Trials, www.consort-statement.org ) ( 14 ). Many journals make this an essential condition for publication.

For all the methodological reasons mentioned above and for ethical reasons, the randomized controlled and blinded clinical trial with case number planning is accepted as the gold standard for testing the efficacy and safety of therapies or drugs ( 4 , e1 , 15 ).

In contrast, noninterventional clinical studies (NIS) are patient-related observational studies, in which patients are given an individually specified therapy. The responsible physician specifies the therapy on the basis of the medical diagnosis and the patient's wishes. NIS include noninterventional therapeutic studies, prognostic studies, observational drug studies, secondary data analyses, case series and single case analyses ( 13 , 16 ). Like interventional clinical studies, noninterventional therapy studies include comparisons between therapies; however, the treatment is exclusively at the physician's discretion. The evaluation is often retrospective. Prognostic studies examine the influence of prognostic factors (such as tumor stage, functional state, or body mass index) on the further course of a disease. Diagnostic studies are another class of observational studies, in which either the quality of a diagnostic method is compared with an established method (ideally a gold standard), or an investigator is compared with one or several other investigators (inter-rater comparison) or with himself at different time points (intra-rater comparison) ( e1 ). If an event is very rare (such as a rare disease or an individual course of treatment), a single-case study or a case series is a suitable approach. A case series is a study of a larger patient group with a specific disease. For example, after the discovery of the AIDS virus, the Centers for Disease Control and Prevention (CDC) in the USA collected a case series of 1000 patients in order to study frequent complications of this infection. The lack of a control group is a disadvantage of case series. For this reason, case series are primarily used for descriptive purposes ( 3 ).

Epidemiological studies

The main point of interest in epidemiological studies is to investigate the distribution and historical changes in the frequency of diseases and the causes for these. Analogously to clinical studies, a distinction is made between experimental and observational epidemiological studies ( 16 , 17 ).

Interventional studies are experimental in character and are further subdivided into field studies (sample from an area, such as a large region or a country) and group studies (sample from a specific group, such as a particular social or ethnic group). One example is the iodine supplementation of cooking salt to prevent cretinism in a region with iodine deficiency. On the other hand, many interventions are unsuitable for randomized intervention studies for ethical, social or political reasons, as the exposure may be harmful to the subjects ( 17 ).

Observational epidemiological studies can be further subdivided into cohort studies (follow-up studies), case control studies, cross-sectional studies (prevalence studies), and ecological studies (correlation studies or studies with aggregated data).

In contrast, studies with only descriptive evaluation are restricted to a simple depiction of the frequency (incidence and prevalence) and distribution of a disease within a population. The objective of the description may also be the regular recording of information (monitoring, surveillance). Registry data are also suited for the description of prevalence and incidence; for example, they are used for national health reports in Germany.

In the simplest case, cohort studies involve the observation of two healthy groups of subjects over time. One group is exposed to a specific substance (for example, workers in a chemical factory) and the other is not exposed. It is recorded prospectively (into the future) how often a specific disease (such as lung cancer) occurs in the two groups ( figure 2a ). The incidence for the occurrence of the disease can be determined for both groups. Moreover, the relative risk (quotient of the incidence rates) is a very important statistical parameter which can be calculated in cohort studies. For rare types of exposure, the general population can be used as controls ( e6 ). All evaluations naturally consider the age and gender distributions in the corresponding cohorts. The objective of cohort studies is to record detailed information on the exposure and on confounding factors, such as the duration of employment, the maximum and the cumulated exposure. One well known cohort study is the British Doctors Study, which prospectively examined the effect of smoking on mortality among British doctors over a period of decades ( e7 ). Cohort studies are well suited for detecting causal connections between exposure and the development of disease. On the other hand, cohort studies often demand a great deal of time, organization, and money. So-called historical cohort studies represent a special case. In this case, all data on exposure and effect (illness) are already available at the start of the study and are analyzed retrospectively. For example, studies of this sort are used to investigate occupational forms of cancer. They are usually cheaper ( 16 ).
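The incidence comparison described above reduces to simple arithmetic on the two cohorts; the counts here are invented purely for illustration:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk = quotient of the incidence proportions in the two cohorts."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical cohort study: 30 of 1000 exposed workers fall ill, vs 10 of 1000 unexposed
print(relative_risk(30, 1000, 10, 1000))  # → 3.0 (triple the risk in the exposed group)
```

A relative risk above 1 suggests the exposure increases disease risk; it can only be computed in designs, such as cohort studies, where incidence is actually observed.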

Figure 2. Graphical depiction of a prospective cohort study (simplest case [2a]) and a retrospective case control study (2b)

In case control studies, cases are compared with controls. Cases are persons who have fallen ill with the disease in question. Controls are persons who are not ill but are otherwise comparable to the cases. A retrospective analysis is performed to establish to what extent persons in the case and control groups were exposed ( figure 2b ). Possible exposure factors include smoking, nutrition and pollutant load. Care should be taken that the intensity and duration of the exposure are analyzed as carefully and in as much detail as possible. If ill people are found to have been exposed more often than healthy people, it may be concluded that there is a link between the illness and the risk factor. In case control studies, the most important statistical parameter is the odds ratio. Case control studies usually require less time and fewer resources than cohort studies ( 16 ). Their disadvantage is that the incidence rate (rate of new cases) cannot be calculated. There is also a great risk of bias from the selection of the study population ("selection bias") and from faulty recall ("recall bias") (see also the article "Avoiding Bias in Observational Studies"). Table 1 presents an overview of possible types of epidemiological study ( e8 ). Table 2 summarizes the advantages and disadvantages of observational studies ( 16 ).
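The odds ratio is the cross-product ratio of the 2x2 table of a case control study; the counts below are hypothetical:

```python
def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Cross-product ratio from the 2x2 table of a case control study."""
    return (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Hypothetical: 40 of 100 cases were exposed, versus 20 of 100 controls
print(round(odds_ratio(40, 60, 20, 80), 2))  # → 2.67
```

For rare diseases the odds ratio approximates the relative risk, which is why it is the standard effect measure when incidence itself cannot be observed.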

Table 1. Research questions and suitable epidemiological study types (e8)

Research question | Suitable study type
Study of rare diseases, such as cancers | Case control studies
Study of rare exposures, such as exposure to industrial chemicals | Cohort studies in a population group in which there has been exposure (e.g. industrial workers)
Study of multiple exposures, such as the combined effect of oral contraceptives and smoking on myocardial infarction | Case control studies
Study of multiple end points, such as mortality from different causes | Cohort studies
Estimate of the incidence rate in exposed populations | Exclusively cohort studies
Study of covariables which change over time | Preferably cohort studies
Study of the effect of interventions | Intervention studies

Table 2. Advantages and disadvantages of observational studies* (16)

Criterion | Ecological studies | Cross-sectional studies | Case control studies | Cohort studies
Selection bias | N/A | 2 | 3 | 1
Recall bias | N/A | 3 | 3 | 1
Loss to follow-up | N/A | N/A | 1 | 3
Confounding | 3 | 2 | 2 | 1
Time required | 1 | 2 | 2 | 3
Costs | 1 | 2 | 2 | 3

1 = slight; 2 = moderate; 3 = high; N/A, not applicable.
*Individual cases may deviate from this pattern.

Selecting the correct study type is an important aspect of study design (see "Study Design in Medical Research" in volume 11/2009). However, the scientific questions can only be correctly answered if the study is planned and performed at a qualitatively high level ( e9 ). It is very important to consider or even eliminate possible interfering factors (or confounders), as otherwise the result cannot be adequately interpreted. Confounders are characteristics which influence the target parameters. Although this influence is not of primary interest, it can interfere with the connection between the target parameter and the factors that are of interest. The influence of confounders can be minimized or eliminated by standardizing the procedure, stratification ( 18 ), or adjustment ( 19 ).
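One standard adjustment technique for a known confounder is the Mantel-Haenszel pooled estimate, which computes the effect within each stratum of the confounder and then combines the strata. This is a minimal sketch with invented strata (the age-group labels and counts are assumptions for illustration):

```python
def mantel_haenszel_or(strata):
    """Pooled odds ratio across confounder strata.
    Each stratum is a 2x2 table (a, b, c, d):
    a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two hypothetical age strata, each with a within-stratum odds ratio of 2.0
young = (20, 10, 50, 50)   # (20*50)/(10*50) = 2.0
old = (60, 30, 20, 20)     # (60*20)/(30*20) = 2.0
print(round(mantel_haenszel_or([young, old]), 2))  # → 2.0
```

Because the estimate is formed within strata, any association between the confounder (here, age) and both exposure and outcome cannot distort the pooled effect in the way it distorts a crude, unstratified table.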

The decision as to which study type is suitable to answer a specific primary research question must be based not only on scientific considerations, but also on issues related to resources (personnel and finances), hospital capacity, and practicability. Many epidemiological studies can only be implemented if there is access to registry data. The demands on planning, implementation, and statistical evaluation should be just as high for observational studies as for experimental studies. There are particularly strict requirements, with legally based regulations (such as the Medicines Act and Good Clinical Practice), for the planning, implementation, and evaluation of clinical studies. A study protocol must be prepared for both interventional and noninterventional studies ( 6 , 13 ). The study protocol must contain information on the conditions, the question to be answered (objective), the methods of measurement, the implementation, organization, study population, data management, case number planning, biometric evaluation, and the clinical relevance of the question to be answered ( 13 ).

Important and justified ethical considerations may restrict studies with optimal scientific and statistical features. A randomized intervention study under strictly controlled conditions of the effect of exposure to harmful factors (such as smoking, radiation, or a fatty diet) is not possible and not permissible for ethical reasons. Observational studies are a possible alternative to interventional studies, even though observational studies are less reliable and less easy to control ( 17 ).

A medical study should always be published in a peer reviewed journal. Depending on the study type, there are recommendations and checklists for presenting the results. For example, these may include a description of the population, the procedure for missing values and confounders, and information on statistical parameters. Recommendations and guidelines are available for clinical studies ( 14 , 20 , e10 , e11 ), for diagnostic studies ( 21 , 22 , e12 ), and for epidemiological studies ( 23 , e13 ). Since 2004, the WHO has demanded that studies should be registered in a public registry, such as www.controlled-trials.com or www.clinicaltrials.gov . This demand is supported by the International Committee of Medical Journal Editors (ICMJE) ( 24 ), which specifies that the registration of the study before inclusion of the first subject is an essential condition for the publication of the study results ( e14 ).

When specifying the study type and study design for medical studies, it is essential to collaborate with an experienced biometrician. The quality and reliability of the study can be decisively improved if all important details are planned together ( 12 , 25 ).

Acknowledgments

Translated from the original German by Rodney A. Yeates, M.A., Ph.D.

Conflict of interest statement

The authors declare that there is no conflict of interest in the sense of the International Committee of Medical Journal Editors.

Application of the Case Study Method in Medical Education

Anastasiya Spaska, Ajman University

Figure: The results of student surveys on the impact of case study methods on the acquisition of practical clinical skills



Open access

Vaccine hesitancy educational interventions for medical students: A systematic narrative review in western countries

https://doi.org/10.1080/21645515.2024.2397875


Physician recommendations can reduce vaccine hesitancy (VH) and improve uptake, yet they are often delivered poorly and can be improved by early-career training. We examined educational interventions for medical students in Western countries to explore what is being taught, identify effective elements, and review the quality of the evidence. A mixed-methods systematic narrative review, guided by the JBI framework, assessed study quality using the MERSQI and Cote & Turgeon frameworks. Data were extracted to analyze content and framing, and effectiveness was graded using value-based judgment. Among the 33 included studies (30 unique interventions), effective studies used multiple methods grounded in educational theory to teach knowledge, skills, and attitudes. Most interventions reinforced a deficit-based approach (assuming VH stems from misinformation), which can be counterproductive. Effective interventions used hands-on, interactive methods emulating real practice, with short- and long-term follow-up. Evidence-based approaches such as motivational interviewing should frame interventions instead of the deficit model.

  • Vaccine hesitancy
  • medical student education
  • educational interventions
  • vaccine uptake
  • early-career training
  • mixed methods review
  • deficit-based approach
  • evidence-based approaches
  • motivational interviewing
  • systematic narrative review

Vaccines protect populations from serious communicable diseases, yet the persistent phenomenon of vaccine hesitancy (VH) presents an escalating challenge to public health. Citation 1 , Citation 2 VH is defined as a motivational state of being conflicted about, or opposed to, getting vaccinated Citation 3 that may lead to a delay in acceptance or refusal of vaccines despite availability of vaccination services. Citation 4 It has been among the World Health Organization's top ten major health concerns since 2019. Citation 5 VH continues to grow in the general population, with worsening childhood and flu vaccine uptake over time Citation 6 , Citation 7 that has declined further in many countries since the COVID pandemic. Citation 8 , Citation 9

Key studies Citation 10 , Citation 11 highlight healthcare professionals' (HCPs) recommendations as a central strategy for improving population vaccine confidence. Unlike public health policy and media communication, this personal approach allows for discussion of concerns, and may be more effective. Citation 12 , Citation 13 These discussions are complex, challenging, and require specific training and skills to be effective. Citation 13–16 Vaccine recommendation can also differ between HCPs, likely influenced by models of medical training, professional values, and culture, Citation 17–19 as well as by HCPs' own VH. Citation 19–21 Grouping interventions for different HCPs together can therefore lose granularity for application, so this study focuses on medical students as proto-doctors.

Addressing this topic in medical school allows for early modification of attitudes, skills, and knowledge that may enhance patient-doctor communications throughout a career. Yet various national curriculum reviews across western cultures show that VH is not often taught – or taught well – in medical schools, resulting in low feelings of preparedness and student VH. Citation 22–26

The most instinctive strategies to navigate these conversations, such as the deficit model, Citation 27 are not necessarily the best. This model, devised in the 1970s as a way to improve public understanding of science, is less of a technique and more of a lens through which decision-making is viewed. Citation 28 , Citation 29 It assumes that patients’ choices are determined by their knowledge – they are refusing vaccines because they have the ‘wrong’ information. Giving them the ‘right’ information therefore enables them to make the ‘right’ choice. Citation 27 This is commonly found elsewhere in medical communication skills teaching, underpinning approaches to information giving in practice. Citation 29–31 Where patients may be encountering a concept for the first time with few pre-conceptions and emotions that may influence a decision, the model has some applications. However, it has come under more recent criticism for this assumption since most decisions are rarely made in a vacuum. Citation 32 , Citation 33 VH is a spectrum but can be categorized into hesitant, delayers/selective refusers and refusers. Citation 14 , Citation 34 For these groups, information given using the deficit model in VH can come across as paternalistic, be ineffective, and even backfire by entrenching hesitant patients further in their views. Citation 13 , Citation 35–38 Alternative models of communication include approaches which take a holistic view, prioritize trust-building, individualize responses, and utilize motivational interviewing techniques. Citation 14 , Citation 34 , Citation 39

Recent reviews Citation 40 , Citation 41 have focused on educational interventions for HCPs, including students globally across a range of healthcare professions. Lip et al. Citation 40 also included professionals and focused on the regulation of clinicians' emotional reactions. Our review explores the methods taught and their effectiveness, and most of the studies we identified did not feature in those reviews.

This review selected studies from culturally western nations (Appendix 1). While an imperfect definition, there are important pragmatic differences between western and non-western countries that should not be ignored Citation 42 , Citation 43 and may affect the nature of VH education. Important differences in healthcare systems such as structure, patient profiles, and burden of disease Citation 42 , Citation 44 , Citation 45 may affect the wider relationship and perceptions between patients and HCPs. Patient-doctor communication models in non-western countries may lean toward more collectivist values such as reinforcement of social hierarchy, while individualistic communication models such as mutualism or shared-decision making tend to be less well taught and embedded in non-western medical schools. Citation 46–55 The nature of VH is complex and multi-factorial, rooted in many traits innate to all people regardless of cultural background. However, some differences between western and non-western countries, such as barriers to vaccine access, Citation 56–58 may impact on the context and priorities of educational interventions. Finally, much of the research into novel patient-doctor communication models has only taken place in culturally western nations, Citation 14 , Citation 59–64 so caution should be taken when applying to all cultures. Citation 65

We carried out a systematic narrative review of interventions that included medical students in western culturally similar countries with the aim to synthesize what is being taught about VH, to identify which elements are effective, and to review the quality of evidence available.

Defining the intervention

This review used a mixed methods systematic narrative review with convergent integrated approach (PROSPERO ID = CRD42022320425), guided by the JBI methodological framework. Citation 66 PRISMA Extension for systematic reviews was followed for reporting. A systematic approach was chosen as the research question aims to explore the nature of what is currently taught. The narrative/mixed methods elements were chosen because it was anticipated that educational interventions would be challenging to directly compare against each other due to heterogenous evaluation measures.

Kirkpatrick (KP) levels were used to stratify study outcomes. Originally developed to assess the impact of training initiatives in industry, these have been adapted by the Best Evidence Medical Education collaboration for medical education and are in widespread use. Citation 67 , Citation 68 While elements of a hierarchy exist, patient outcomes need not always be the priority outcome of evaluation; the development of knowledge, attitudes, and skills has inherent value in itself. Citation 69
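The stratification described above can be sketched as a simple lookup. The level labels below follow the commonly cited BEME adaptation of Kirkpatrick, and the example outcomes are illustrative assumptions rather than categories quoted from the reviewed studies:

```python
# Illustrative sketch (assumption: BEME-adapted Kirkpatrick labels) mapping
# each level to the kind of outcome a study at that level would report.
KIRKPATRICK_LEVELS = {
    "1":  "Reaction: learner satisfaction with the intervention",
    "2a": "Learning: change in attitudes or perceptions",
    "2b": "Learning: change in knowledge or skills",
    "3":  "Behavior: change in practice (e.g., vaccine recommendation)",
    "4a": "Results: change in organizational practice",
    "4b": "Results: patient or population outcomes (e.g., vaccine uptake)",
}

def stratify(outcome_level: str) -> str:
    """Return the description for a study outcome level (hypothetical helper)."""
    return KIRKPATRICK_LEVELS[outcome_level]
```

For example, a study reporting only learner satisfaction would sit at level 1, while the single study in this review measuring patient impact would sit at level 4b.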

Search strategy

We searched for studies that described educational interventions regarding vaccination or VH in medical students. An initial limited search for index terms was undertaken in Medline, EMBASE, CINAHL, PsycINFO, ERIC, SCOPUS, and Web of Science with assistance from a medical librarian and subject field experts. This informed the development of keywords for a full search (Appendix 1) in May 2023 (re-run for updates in December 2023) with no language restrictions and publications from January 2005. Population VH became more discussed in the literature around this time following the Wakefield controversy, Citation 70 , Citation 71 a significant avian influenza outbreak in the Americas and European regions, Citation 72 and the start of social media. Reference lists of all selected studies were also searched.

Study selection

Inclusion/exclusion criteria were finalized by all authors after initial studies review. Titles and abstracts were screened by one reviewer (PW) after removal of duplicates using EndNote, with 10% of studies screened by all other authors. All full texts selected were assessed against the inclusion criteria by PW and at least one other author. Any disagreements were resolved through discussion.

Critical appraisal/quality assessment

Eligible studies were critically appraised by two independent reviewers for methodological quality using the MERSQI tool Citation 73 (quantitative) and Cote & Turgeon's framework Citation 74 (qualitative) (Appendix 2). MERSQI has ten items in 6 domains, scoring up to 3 points in each domain for a maximum score of 18. Studies were deemed low quality (0–6), medium quality (7–12), or high quality (13–18). Cote & Turgeon's framework has 12 questions over 5 domains, each with a judgment-based response of yes/no. We modified this to high/medium/low for more granularity, assigning numerical values of low = 0 points, medium = 1 point, and high = 2 points to reach final overall gradings of low (0–8), medium (9–16), and high (17–24) quality. Disagreements were resolved through discussion.
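The banding logic can be sketched as follows. The band boundaries are from the text above; the function names are hypothetical, and the treatment of half-point MERSQI scores that fall between bands is our assumption, since the text does not specify:

```python
def mersqi_band(score: float) -> str:
    """Band a MERSQI total (max 18): low 0-6, medium 7-12, high 13-18.
    Half-point scores between bands (e.g., 6.5) fall into the higher
    band here -- an assumption, as the text does not specify."""
    if score <= 6:
        return "low"
    if score <= 12:
        return "medium"
    return "high"

def cote_turgeon_band(responses: list[int]) -> str:
    """Band the modified Cote & Turgeon framework: 12 questions scored
    low=0 / medium=1 / high=2 (max 24); low 0-8, medium 9-16, high 17-24."""
    total = sum(responses)
    if total <= 8:
        return "low"
    if total <= 16:
        return "medium"
    return "high"
```

For example, the median MERSQI score of 9.5 reported in the Results falls in the medium band.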

Data charting/extraction

Extracted data were gathered into a team-developed Excel sheet (Appendix 3). The authors of eleven studies were contacted for additional data to determine inclusion/exclusion. Seven responded with the full data requested (Appendix 4), while four did not reply after three attempts. Interventions evaluated with more than one quantitative study were combined and treated as one study with one reference (outlined in the Results table). Where multiple measures existed, the quality score for each was reported in the Results table.

We explored the theoretical approach (whether an intervention was based on any educational theory, and if so how) and framing (the nature and evidence-base of the methods taught) of studies. We also examined whether interventions reinforced a deficit model of communication and alternative approaches. Where this was not clear in the study, this was discussed within the whole research team using the definition outlined above.

Data transformation and synthesis

Where statistical tests were present and correctly used, we have defined p  < .05, effect size > 0.25, and confidence interval not including zero as significant. Quantitative results were transformed into narrative description or interpretation. These ‘qualitized’ data were assembled with the qualitative data based on similarity in meaning to produce a set of integrated findings using a convergent integrated approach. Citation 66
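These thresholds can be expressed as a small helper. This is a sketch: the function name and the decision to apply each threshold only when the corresponding statistic was reported are our assumptions, not a procedure specified by the review:

```python
def meets_significance_criteria(p=None, effect_size=None, ci=None):
    """Apply the review's thresholds to whichever statistics a study
    reported: p < .05, |effect size| > 0.25, and a confidence interval
    that excludes zero. Returns False if nothing was reported."""
    checks = []
    if p is not None:
        checks.append(p < 0.05)
    if effect_size is not None:
        checks.append(abs(effect_size) > 0.25)
    if ci is not None:
        low, high = ci
        checks.append(low > 0 or high < 0)
    return bool(checks) and all(checks)
```

For example, a study reporting p = 0.03 with an effect size of 0.4 would be graded significant, while one reporting only a confidence interval of (-0.1, 0.4) would not, since that interval includes zero.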

Figure 1. PRISMA diagram for study identification.


Table 1. Summary of content.

Interventions were of generally medium quality with MERSQI scores from 5.5 to 14.5 (median 9.5/18). Interventions aimed to improve medical student knowledge, attitudes, and skills (communication and administration) around vaccines and VH. Taught content and teaching methods were heterogenous. These were generally theorized to improve provider recommendation and ability to address concerns, resulting in theoretically improved patient vaccine uptake and reduced patient VH.

Nineteen studies took place in the USA, Citation 26 , Citation 77–94 ten in Europe Citation 75–76 , Citation 95–102 and one in Australia. Citation 103 Student sample sizes ranged from 4 Citation 103 to 421 Citation 99 (median 84). All Italian, German, and Spanish interventions were didactic in nature, focusing on knowledge of processes in development and delivery. Other European (UK, Finland) and Australian interventions were either interactive, including skills, attitudes, and knowledge, or focused on service delivery. North America involved a mix of both. Two studies took place across multiple institutions, Citation 91 , Citation 92 while seven collected data over sequential years. Citation 78 , Citation 79 , Citation 82 , Citation 84 , Citation 88 , Citation 92 , Citation 94 Sixteen studies included students in pre-clinical years (1 st –2 nd year, or all years) Citation 26–77 , Citation 78–80 , Citation 82–84 , Citation 86–93 , Citation 99–100 while 17 included clinical years (3 rd –final year, or all years). Citation 26–75 , Citation 76–79 , Citation 81–85 , Citation 90–91 , Citation 93–97 , Citation 99–103 One did not specify the student stage. Citation 98

What is the content, approach, and framing of these interventions?

Twenty-four interventions described content around vaccinology. Citation 26 , Citation 77–85 , Citation 87–92 , Citation 95–101 , Citation 103 Vaccinology included vaccine biochemistry, development, delivery and monitoring, and vaccine-preventable diseases. Improving knowledge was generally thought to improve student vaccine confidence and recommendations. This was mostly delivered in a didactic format through expert-delivered lecture or pre-reading. Twenty-five interventions included content around VH. Citation 26 , Citation 75–88 , Citation 90–98 This involved identifying and addressing patient VH, either in theory or in practice. Interventions included reasons for hesitancy and how to address these Citation 77 , Citation 78 , Citation 82 , Citation 83 , Citation 85 , Citation 87 , Citation 88 , Citation 96–98 and/or provided opportunities to practice VH communication skills with real or simulated patients. Citation 75–77 , Citation 80 , Citation 82 , Citation 83 , Citation 87 , Citation 90 , Citation 93 , Citation 94 Some only taught theoretical communication skills. Citation 26 , Citation 81 , Citation 86 , Citation 91 , Citation 92 , Citation 95 , Citation 98 Eight interventions provided opportunities for students to give or practice giving vaccinations. Citation 77 , Citation 78 , Citation 84 , Citation 86 , Citation 89 , Citation 96 , Citation 102 , Citation 103

Thirteen interventions focused on childhood vaccinations. Citation 75–77 , Citation 81–83 , Citation 85 , Citation 87 , Citation 88 , Citation 94 , Citation 96 , Citation 98 , Citation 100 These mostly addressed knowledge of vaccine safety and the importance of recommendation to correct misinformation. Some studies addressed improving student attitudes toward the vaccines and vaccine mandates Citation 82 , Citation 88 , Citation 100 or toward hesitant patients, Citation 75 , Citation 76 though baseline attitudes toward vaccines were already supportive. Nine interventions focused on HPV vaccines. Citation 26 , Citation 77–79 , Citation 85–87 , Citation 90–92 , Citation 96 These all theorized that improving student vaccination attitudes through knowledge-based teaching and emphasizing strong, consistent recommendation to patients will improve uptake.

Seven interventions addressed flu vaccines. Citation 77 , Citation 78 , Citation 84 , Citation 89 , Citation 96 , Citation 99 , Citation 103 These focused on students' perceived importance of flu vaccines, aiming to encourage student vaccination to protect others and improve likelihood of vaccine recommendation. Some used flu vaccination service delivery as a way to improve this. Citation 84 , Citation 89 , Citation 103 Six interventions addressed COVID. Citation 82 , Citation 86 , Citation 93 , Citation 97 , Citation 101 , Citation 102 These used the pandemic to capitalize on student interest and explain vaccine production and safety processes to build trust in vaccine safety. This was then often related back to other vaccines. Five interventions discussed vaccines generally, usually referring to the processes around vaccine development and delivery. Citation 80 , Citation 86 , Citation 88 , Citation 95 , Citation 97 Three interventions also addressed other vaccines, including travel vaccines Citation 96 , Citation 103 or shingles. Citation 87

Design and framing

Most interventions did not overtly describe any theoretical grounding for how the intervention was delivered. Of those that did, there was variable quality in how this was presented. Two studies Citation 87 , Citation 102 clearly described and applied educational theory; other authors referenced theories but did not explain how these underpinned their approach. Citation 26 , Citation 95 , Citation 96

Seven interventions Citation 26 , Citation 75–76 , Citation 85–87 , Citation 90 overtly taught evidence-based methods, such as motivational interviewing or the presumptive approach, Citation 39 , Citation 107–109 to address VH. Two studies Citation 81 , Citation 102 described student-patient or observed doctor-patient discussions, while another study Citation 94 described student discussions with standardized patient role-players without supporting taught communication techniques. Both evidence-based (e.g., trust-building) and non-evidence-based methods (e.g., participatory methods and deficit model communication) were observed, but students were unsure which to emulate or use. Two studies Citation 92 , Citation 93 did not overtly name evidence-based approaches but suggested approaches that attempted to address patient health ideas and concerns meaningfully. Seven studies Citation 26 , Citation 77 , Citation 78 , Citation 82 , Citation 85 , Citation 90 , Citation 101 focused on countering vaccine misinformation using a clear deficit model approach. Six others Citation 83 , Citation 84 , Citation 89 , Citation 96 , Citation 100 , Citation 103 described a patient presenting without significant prior knowledge or concerns, also conveying a deficit model approach to vaccine counseling. Eight studies Citation 79 , Citation 80 , Citation 88 , Citation 91 , Citation 95 , Citation 97–99 did not contain enough information in the text to determine a clear approach; however, the framing of evaluation questions and the limited details provided suggest a deficit model lens.

What level of evidence is available for how well these interventions work and why?

Study design.

Twenty studies used pre-post study design, with four of these comparing two or more groups against each other Citation 77 , Citation 84 , Citation 92 , Citation 100 and 16 using pre-post tests on the same group. Citation 26 , Citation 78–80 , Citation 85–88 , Citation 90–91 , Citation 93–95 , Citation 97–98 , Citation 101–103 One study stated a cross-sectional study design with controls. Citation 99 Nine did not state their study design but collected data on participants post-intervention. Citation 75–76 , Citation 81–83 , Citation 89–94 , Citation 96–102 Six studies were qualitative in nature Citation 75 , Citation 76 , Citation 81 , Citation 89 , Citation 94 , Citation 102 while the other 22 were quantitative. Five of the quantitative studies also gathered qualitative data through survey format. Citation 80 , Citation 84 , Citation 93 , Citation 96 , Citation 103

Evaluation tools

Most studies used a single evaluation tool, generally a survey, Citation 26 , Citation 77–80 , Citation 82–88 , Citation 90–101 , Citation 103 with only two using multiple evaluation tools to gather data. Citation 77 , Citation 87 Surveys generally asked questions about vaccination attitudes, with some testing (usually biomedical) knowledge. Most evaluations were administered immediately post-intervention; however, eight studies measured impact between 1 week and 3 months post-intervention, Citation 77 , Citation 78 , Citation 83 , Citation 84 , Citation 87 , Citation 92 , Citation 93 , Citation 100 and two studies measured impact more than 1 year post-intervention. Citation 77 , Citation 87 Belterman et al. Citation 96 used a validated questionnaire to measure the effectiveness of teaching interventions. No other validated surveys were used; however, two studies used original surveys informed by validated surveys. Citation 77 , Citation 87

Effectiveness

Most studies assessed attitudes and knowledge, with a few assessing skills, though these were mostly self-perceived abilities. Mixed quality of evidence for effectiveness was found throughout. High quality of evidence for effectiveness was provided by only a few studies, for attitudes toward mandates, Citation 79 self-efficacy, Citation 94 , Citation 102 and own VH; Citation 92 knowledge of vaccine-preventable diseases (VPDs)/vaccines Citation 77 , Citation 92 and VH; Citation 92 and satisfaction. Citation 92 , Citation 100 Sutton et al. Citation 77 showed no significant effect of the intervention in maintaining long-term recommendation confidence and pro-mandate attitudes above a control.

Most provided medium-low quality of evidence for effectiveness for attitudes (confidence recommending, Citation 26 , Citation 78 , Citation 79 , Citation 85 , Citation 89 , Citation 90 , Citation 93 , Citation 94 , Citation 103 pro-mandates, Citation 78 , Citation 82 student vaccine confidence, Citation 78 , Citation 79 , Citation 82 , Citation 84 , Citation 88 , Citation 90 , Citation 91 , Citation 99 , Citation 100 , Citation 103 self-efficacy, Citation 80 and patient perspective Citation 75 , Citation 76 , Citation 93 ); knowledge (of VPDs, Citation 26 , Citation 83 , Citation 84 , Citation 91 of vaccines, Citation 79–83 , Citation 84–100 , Citation 101 , Citation 103 of VH, Citation 89 of patient-provider discussions, Citation 81 , Citation 94 , Citation 102 understanding of patient perspective, Citation 75 , Citation 84 and confidence in own knowledge Citation 84 , Citation 87 , Citation 88 , Citation 95 , Citation 96 , Citation 103 ); skills (observed ability to recommend, Citation 75–77 , Citation 87 self-perceived ability to recommend, Citation 89 , Citation 93 , Citation 96 , Citation 102 confidence in own ability to administer, Citation 84 , Citation 103 and ability to determine vaccine indication Citation 100 ); and satisfaction. Citation 82 , Citation 85 , Citation 86 , Citation 93 , Citation 102 Correlation was shown between knowledge and attitudes. Citation 92 Interactive online methods were more effective than leaflets, video, or a control in changing attitudes. Citation 99

Some studies offered evidence of limited/no change in attitudes Citation 26 , Citation 90 or knowledge, Citation 83 , Citation 84 , Citation 91 or in certain demographic groups. Citation 79 , Citation 90 , Citation 100 Jenkins et al. Citation 94 found that when students were given opportunities to practice, discuss, and reflect on skills, but with minimal structured communication guidance, some identified approaches that are likely to be unsuccessful or even backfire. A practical session improved confidence but reduced competence, with no difference in overall attitudes, when compared with a lecture. Citation 100 Perceived educational benefits of service provision diminished rapidly, with long-term commitment motivated intrinsically or financially. Citation 93 , Citation 102

Discussion

This review explored educational interventions to address vaccination/VH for medical students in western cultural settings. It identified a wide range of interventions in the peer-reviewed literature, of mixed quality in educational/study design and outcome. Interventions addressed knowledge, skills, and attitudes around vaccines/VH, mostly showing improvement in these domains. Conclusions around effectiveness were limited by study design and heterogeneity, with no single objective conclusion possible. Most studies measured knowledge or attitudes, with skills and satisfaction less well measured. Only one intervention measured clinical outcomes, reducing translational impact on practice.

A value-based judgment weighing results against study quality, design, and sample size suggested that the most effective interventions used multiple methods and grounding in educational theory to address knowledge, skills, and attitudes together and were supported by considered study designs with multiple forms of evaluation. Interactive group work with opportunities for reflection was more effective at shaping attitudes and improving knowledge retention (especially when supported by brief didactic methods), while actor-based role-plays showed long-term improvement in skills. There were two further key findings from this review: intervention framing and evaluation quality.

Intervention framing

No single approach to addressing VH has been proven universally effective. However, some approaches, such as adapted motivational interviewing models Citation 39 , Citation 60 , Citation 107 , Citation 108 or the presumptive method, Citation 63 , Citation 64 , Citation 107 , Citation 109 , Citation 110 show promise as emerging communication models in western countries. These should still be applied critically. For example, the presumptive method has had less evaluation of its effects on the long-term patient-doctor relationship and has largely been tested in the USA. In countries like the UK, where such direct communication may be less well received, it may also backfire.

Despite being shown to be ineffective and potentially backfire, Citation 13 , Citation 35–38 most interventions analyzed in this review unintentionally reinforced a deficit-based approach. This framing may have been an unconscious decision, since it is a prevailing approach in much of medicine. Citation 111–113 This is even evident in one study in this review, Citation 94 where students were allowed to find their own approaches to addressing VH. The inclusion of a deficit-based approach in just a small part of an educational intervention could be argued to have minimal impact. However, its presence reinforces the unchallenged assumption (both in learned practice and in the metrics used to assess intervention effectiveness) that VH individuals make their decisions purely on a lack of correct information, rather than taking a more holistic view. Many of these studies reported good educational outcomes, but it is unclear, and perhaps unlikely, that these will later translate into good clinical outcomes if physicians have effectively learned ineffective methods.

Further, deficit model approaches may support polarized attitudes around vaccine mandates being desirable. Citation 77–79 , Citation 82 While ethically justifiable, strongly pro-mandate attitudes, unmoderated by understanding of patient values and worldviews, may exacerbate disconnection between HCPs and patients. Citation 11 , Citation 114 , Citation 115

Finally, deficit model framing has been suggested by some authors to impact student attitudes toward vaccines. For example, some interventions suggest a link between improved knowledge and reduced hesitancy. Citation 78 , Citation 88 , Citation 92 However, several deficit-framed interventions that improved knowledge showed persistent, unchanged hesitancy or lack of vaccine confidence. Citation 79 , Citation 82 , Citation 90 , Citation 99 , Citation 100 VH in medical students should therefore be examined when evaluating interventions, especially when interventions do not improve vaccine confidence.

Evaluation quality

Studies that contained more meaningful results had pre-post study design, with students tested immediately before, afterward, and again several months later, with control groups if possible. Citation 77 , Citation 87 , Citation 92 While few interventions took place at multiple sites, multi-site comparison added little to the results and complicated reporting with variability of delivery and lack of local context that confounded results. Instead, evaluations were more meaningful when taking place at well-described single sites over multiple years. Citation 79 , Citation 81 , Citation 88 , Citation 92 Descriptions of local context and VH were largely absent but remain important since cultural attitudes across western cultures cannot be assumed to be homogenous. Citation 116 , Citation 117 Further, descriptions of medical students' prior learning were rare Citation 96 but were valuable for transferability since appropriate learning objectives may vary depending on student stage.

Evaluations also offered the most value when using multiple evaluation tools. Carefully designed surveys that considered the nature of what was being investigated were able to capture important data about student knowledge and attitudes and allowed comparison over time. Objective measures of performance in skill were particularly useful Citation 77 , Citation 87 – if underused – and complemented knowledge and attitude data by illuminating where confidence may outweigh competence. Citation 100

Most studies lacked in-depth exploration into what was being learned and how but rather speculated on learning processes from limited quantitative data. Since VH is a psycho-behavioral issue likely to vary between contexts, Citation 118 qualitative data are particularly important to examine why an intervention works, so results can be transferable. Dialog rather than survey methods offered richer results. Citation 75 , Citation 102 Patient perspectives were entirely lacking in evaluation bar one. Citation 87

Strengths and limitations

This study took a systematic approach to data collection, so it is unlikely to have missed important published studies. The decision not to restrict based on study quality gives a better portrait of what may or may not work, with further detail possible through the narrative format. Omitting gray literature may have missed some interventions but does predispose the included studies toward a higher baseline quality.

All findings are also limited to western cultural settings. The rationale for selecting countries within western settings rests on broad trends and generalizations about comparable healthcare settings, which may support implementation of the findings. VH as defined in this paper is a global problem not limited to high-income Western countries. However, the nature, prevalence, and reasons for hesitancy may differ significantly between Western and non-Western countries, as suggested by other related reviews. Citation 11 A worldwide report by the Wellcome Trust in 2018 Citation 119 found that high-income countries had significantly more concerns about the safety of vaccines than low- or middle-income countries. More recently, a report by the Vaccine Confidence Project Citation 120 found that since the COVID pandemic, confidence in the USA and Europe remains low, with large losses of confidence in several countries within these areas. However, China, India, and Mexico were all found to buck the global trend of declining vaccine confidence, with significant improvements shaped by collectivist cultural contexts not embedded in Western individualist cultures. Various countries in Asia and Africa have also suffered significant losses in confidence; however, for many of these countries there is not yet sufficient research exploring the reasons for this.

As outlined in the introduction, there are important differences between Western and non-Western countries in healthcare contexts, the roles and medical education of different healthcare professionals, and socio-cultural communication norms or values, especially between doctors and patients. These mean that synthesizing and applying interventions designed within Western cultural settings for implementation outside of these settings may not be appropriate. However, we also recognize that intra-cultural differences within countries may sometimes be even greater than those between countries by different metrics. We recommend reviews of interventions in various non-western cultural settings, considering the respective nature of VH and patient-doctor communication models. We also recommend that each institution considers their local sociocultural demographics and engages with local communities to take account of communication norms and trust in HCPs and vaccines to develop tailored interventions.

MERSQI has several limitations. High-scoring designs (multi-site studies, RCTs) are not always practical for educational interventions. Measuring response rates as percentages means that total participant numbers are not weighted. Recording whether something was reported, rather than the quality of reporting, lacks granularity. The Kirkpatrick scale, also reflected in MERSQI, assumes a hierarchy, when capturing multiple levels may be useful and 'lower' levels still have value in themselves. Many studies used evaluations that may be more prone to bias, such as self-reported measures; however, these are reflected in MERSQI ratings. Further, these scoring systems do not assess the accuracy of reported studies and assume their infallibility. One study claims significance when no statistical test of significance was done, Citation 82 while another used Pearson's correlation coefficient when a Wilcoxon signed-rank test may have been more appropriate, likely leading to misleading results. Citation 91 We have adjusted our reporting in this review around these additional observations.
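To illustrate the statistical point about the misapplied Pearson correlation, consider a toy pre-post dataset (synthetic numbers; the normal-approximation Wilcoxon below is a sketch that omits tie and zero-difference corrections):

```python
from math import sqrt
from statistics import mean

pre = [50, 55, 60, 65, 70, 75]
post = [p + 5 for p in pre]  # every student improves by exactly 5 points

def pearson_r(x, y):
    """Pearson's r measures association between pre and post scores. It is
    1.0 here, yet it says nothing about whether scores changed pre-to-post."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def wilcoxon_z(x, y):
    """Normal-approximation Wilcoxon signed-rank z on paired differences:
    the test that actually asks whether scores shifted (sketch only --
    no tie or zero-difference corrections)."""
    d = [b - a for a, b in zip(x, y) if b != a]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    w_plus = sum(rank for rank, i in enumerate(order, start=1) if d[i] > 0)
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mu) / sigma
```

Here pearson_r(pre, post) is 1.0, a "perfect" correlation, while wilcoxon_z(pre, post) is about 2.2, exceeding the two-sided 5% threshold of 1.96: only the latter tests whether scores actually shifted, which is why a correlation coefficient misleads in a pre-post design.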

Because this is a mixed-methods review, it cannot provide definitive, objective markers of evidence; it does, however, capture more data and offer greater detail to support transferability.

The heterogeneity of studies made direct comparisons impractical and limits the generalizability of results when viewed through a positivist paradigm. Instead, the evidence-for-effectiveness rating was developed within a post-positivist paradigm, balancing the goal of generalizability against the acknowledged complexity and variability of educational interventions within a wider social context. Many interventions included a self-selected population, either in the participating group or through survey completion. While this has been reported in Appendix 3 and taken into account in the effectiveness rating, its effect is difficult to quantify. The evidence-for-effectiveness rating does not pretend to be an objective or validated tool. Conclusions drawn from it must be treated with caution; however, we hope that it offers greater granularity.

Effective interventions used hands-on, interactive methods emulating real practice, supported by didactic methods, to develop knowledge, skills, and attitudes for addressing VH. However, most interventions teach the deficit model, a framing that is not evidence-based and may significantly reduce effectiveness in practice. Interventions should instead consider the overt and covert framing of knowledge, skills, and attitudes within evidence-based approaches such as motivational interviewing.

Future research should evaluate interventions with study designs that incorporate short- and long-term follow-up. These should include multiple objective assessments of knowledge and skill, evaluation of attitudes toward vaccines and VH patients, and, where possible, assessment of real-world patient impact.

No ethics approval was required as this is a review of published literature.

Author contributions statement

Philip White – conception and design, analysis, and interpretation of the data, drafting of the paper, revising it critically for intellectual content, and final approval of the version to be published.

Hugh Alberti – conception and design, analysis, and interpretation of the data, revising it critically for intellectual content, and final approval of the version to be published.

Gill Rowlands – conception and design, analysis, and interpretation of the data, revising it critically for intellectual content, and final approval of the version to be published.

Eugene Tang – conception and design, analysis, and interpretation of the data, revising it critically for intellectual content, and final approval of the version to be published.

Dominique Gagnon – analysis and interpretation of the data, revising it critically for intellectual content, and final approval of the version to be published.

Ève Dubé – conception and design, analysis, and interpretation of the data, revising it critically for intellectual content, and final approval of the version to be published.

Appendix 1 Inclusion and Exclusion criteria.docx

Appendix 4 supplementary study material_.docx, Appendix 2 quality rating.pdf, Appendix 3 additional extracted data.pdf

Acknowledgments

Many thanks to Anthony Codd, Lily Lamb, Greet Hendrickx, Rebecca Harris, and Fiona Beyer for their advice and assistance in developing this review.

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed on the publisher’s website at https://doi.org/10.1080/21645515.2024.2397875

Notes on contributors

Philip White

Philip White is a National Institute for Health and Care Research (NIHR) Academic Clinical Fellow in General Practice based at Newcastle University. His research interests include vaccine hesitancy and medical education.

Hugh Alberti

Hugh Alberti is a General Practitioner and the Subdean for Primary and Community Care at Newcastle University. He leads a team of GP trainees and clinical teaching fellows focused on the development of educational initiatives and research projects within the realm of primary care. His research interests focus primarily on medical education, particularly within primary care and community settings.

Gill Rowlands

Gill Rowlands is a General Practitioner and a Professor in the Population Health Sciences Institute. Her main research interests are in the area of health inequalities, particularly the role of health literacy in health, and the role of GPs in identifying and addressing the problems faced by patients with lower health literacy. She founded the Health Literacy Group UK and has authored and co-authored over 70 publications in peer-reviewed journals, co-edited three health literacy textbooks and authored seven chapters in health literacy textbooks. She provides expert advice on health literacy to the Royal College of General Practitioners, NHS England, Belfast Healthy Cities, and the Health Service Executive (Ireland). She chairs the Health Literacy Global Working Group of the International Union for Health Promotion and Education.

Eugene Tang

Eugene Tang is an NIHR Clinical Lecturer and General Practitioner based at Newcastle University. His research interests include post-stroke cognition, dementia, risk prediction modeling, and reducing inequalities in care.

Dominique Gagnon

Dominique Gagnon is a scientific advisor in immunization at the Quebec National Institute of Public Health. She has over a decade of experience working on various vaccination-related projects, with a significant focus on addressing vaccine hesitancy.

Ève Dubé

Ève Dubé is a medical anthropologist. She is a professor in the Department of Anthropology at Laval University in Quebec (Canada). She is also affiliated with the Quebec National Institute of Public Health and the Research Center of the CHU de Québec-Université Laval. Most of her research focuses on the socio-cultural aspects of vaccination. She is also interested in vaccine hesitancy and is involved in various projects in that field. She was a member of the WHO working group on vaccine hesitancy. Since 2014, she has led the Social Sciences and Humanities Network (SSHN) of the Canadian Immunization Research Network.

  • Cascini F, Pantovic A, Al-Ajlouni Y, Failla G, Ricciardi W. Attitudes, acceptance and hesitancy among the general population worldwide to receive the COVID-19 vaccines and their contributing factors: a systematic review. EClinicalMedicine . 2021; 40 :101113. doi:10.1016/j.eclinm.2021.101113.   PubMed Google Scholar
  • Hussain A, Ali S, Ahmed M, Hussain S. The anti-vaccination movement: a regression in modern medicine. Cureus . 2018; 10 ( 7 ):e2919. doi:10.7759/cureus.2919.   PubMed Web of Science ® Google Scholar
  • WHO. Understanding the behavioural and social drivers of vaccine uptake: position paper. Wkly Epidemiol Rec. 2022;97.   Google Scholar
  • MacDonald NE. Vaccine hesitancy: definition, scope and determinants. Vaccine . 2015; 33 ( 34 ):4161–4. doi:10.1016/j.vaccine.2015.04.036.   PubMed Web of Science ® Google Scholar
  • WHO. Ten Threats to global health in 2019 [internet]. World Health Organisation. 2019 [accessed 2023 Nov 7]. https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019 .   Google Scholar
  • Larson HJ, de Figueiredo A, Karafillakis E, Rawal M. State of vaccine confidence in the EU 2018 . Luxembourg: The European Commission; 2018.   Google Scholar
  • Larson HJ, de Figueiredo A, Xiahong Z, Schulz WS, Verger P, Johnston IG, Cook AR, Jones NS. The state of vaccine confidence 2016: global insights through a 67-country survey. EBioMedicine . 2016; 12 :295–301. doi:10.1016/j.ebiom.2016.08.042.   PubMed Web of Science ® Google Scholar
  • Causey K, Fullman N, Sorensen RJD, Galles NC, Zheng P, Aravkin A, Danovaro-Holliday MC, Martinez-Piedra R, Sodha SV, Velandia-González MP, et al. Estimating global and regional disruptions to routine childhood vaccine coverage during the COVID-19 pandemic in 2020: a modelling study. Lancet . 2021; 398 ( 10299 ):522–34. doi:10.1016/S0140-6736(21)01337-4.   PubMed Web of Science ® Google Scholar
  • Falope O, Nyaku MK, O’Rourke C, Hermany LV, Plavchak B, Mauskopf J, Hartley L, Kruk ME. Resilience learning from the COVID-19 pandemic and its relevance for routine immunization programs. Expert Rev Vaccines . 2022; 21 ( 11 ):1621–36. doi:10.1080/14760584.2022.2116007.   PubMed Web of Science ® Google Scholar
  • Dubé E, Laberge C, Guay M, Bramadat P, Roy R, Bettinger J. Vaccine hesitancy: an overview. Hum Vaccin Immunother . 2013; 9 ( 8 ):1763–73. doi:10.4161/hv.24657.   PubMed Web of Science ® Google Scholar
  • Verger P, Botelho-Nevers E, Garrison A, Gagnon D, Gagneur A, Gagneux-Brunon A, Dubé E. Vaccine hesitancy in health-care providers in Western countries: a narrative review. Expert Rev Vaccines . 2022; 21 ( 7 ):909–27. doi:10.1080/14760584.2022.2056026.   PubMed Web of Science ® Google Scholar
  • Jarrett C, Wilson R, O’Leary M, Eckersberger E, Larson HJ. Strategies for addressing vaccine hesitancy – a systematic review. Vaccine . 2015; 33 ( 34 ):4180–90. doi:10.1016/j.vaccine.2015.04.040.   PubMed Web of Science ® Google Scholar
  • Dubé E, Gagnon D, MacDonald NE. Strategies intended to address vaccine hesitancy: review of published reviews. Vaccine . 2015; 33 ( 34 ):4191–203. doi:10.1016/j.vaccine.2015.04.041.   PubMed Web of Science ® Google Scholar
  • Leask J, Kinnersley P, Jackson C, Cheater F, Bedford H, Rowles G. Communicating with parents about vaccination: a framework for health professionals. BMC Pediatrics . 2012; 12 ( 1 ):154. doi:10.1186/1471-2431-12-154.   PubMed Web of Science ® Google Scholar
  • Shen SC, Dubey V. Addressing vaccine hesitancy: clinical guidance for primary care physicians working with parents. Can Fam Physician . 2019;65:175–81.   PubMed Web of Science ® Google Scholar
  • Paterson P, Meurice F, Stanberry LR, Glismann S, Rosenthal SL, Larson HJ. Vaccine hesitancy and healthcare providers. Vaccine . 2016; 34 ( 52 ):6700–6. doi:10.1016/j.vaccine.2016.10.042.   PubMed Web of Science ® Google Scholar
  • Dybsand LL, Hall KJ, Carson PJ. Immunization attitudes, opinions, and knowledge of healthcare professional students at two midwestern universities in the United States. BMC Med Educ . 2019; 19 ( 1 ):242. doi:10.1186/s12909-019-1678-8.   PubMed Web of Science ® Google Scholar
  • Voglino G, Barbara A, Dallagiacoma G, Santangelo OE, Provenzano S, Gianfredi V. Do degree programs affect health profession students’ attitudes and opinions toward vaccinations? An Italian multicenter study. Saf Health Work . 2022; 13 ( 1 ):59–65. doi:10.1016/j.shaw.2021.10.005.   PubMed Web of Science ® Google Scholar
  • Kaur M, Coppeta L, Olesen OF. Vaccine hesitancy among healthcare workers in Europe: a systematic review. Vaccines . 2023; 11 ( 11 ):1657. doi:10.3390/vaccines11111657.   Web of Science ® Google Scholar
  • Cuschieri S, Grech V. A comparative assessment of attitudes and hesitancy for influenza vis-à-vis COVID-19 vaccination among healthcare students and professionals in Malta. Z Gesundh Wiss . 2022; 30 ( 10 ):2441–8. doi:10.1007/s10389-021-01585-z.   PubMed Web of Science ® Google Scholar
  • Neufeind J, Betsch C, Zylka-Menhorn V, Wichmann O. Determinants of physician attitudes towards the new selective measles vaccine mandate in Germany. BMC Public Health . 2021; 21 ( 1 ):566. doi:10.1186/s12889-021-10563-9.   PubMed Web of Science ® Google Scholar
  • Baessler F, Zafar A, Mengler K, Natus RN, Dutt AJ, Kuhlmann M, Çinkaya E, Hennes S. A needs-based analysis of teaching on vaccinations and COVID-19 in German medical schools. Vaccines . 2022; 10 ( 6 ):975. doi:10.3390/vaccines10060975.   Web of Science ® Google Scholar
  • Costantino C, Amodio E, Calamusa G, Vitale F, Mazzucco W. Could university training and a proactive attitude of coworkers be associated with influenza vaccination compliance? A multicentre survey among Italian medical residents. BMC Med Educ . 2016; 16 ( 1 ):38. doi:10.1186/s12909-016-0558-8.   PubMed Google Scholar
  • Kerneis S, Jacquet C, Bannay A, May T, Launay O, Verger P, Pulcini C, Abgueguen P, Ansart S, Bani-Sadr F, et al. Vaccine education of medical students: a nationwide Cross-sectional survey. Am J Prev Med . 2017; 53 ( 3 ):97–104. doi:10.1016/j.amepre.2017.01.014.   PubMed Web of Science ® Google Scholar
  • Pelly LP, Pierrynowski Macdougall DM, Halperin BA, Strang RA, Bowles SK, Baxendale DM, McNeil SA. The VAXED PROJECT: an assessment of immunization education in Canadian health professional programs. BMC Med Educ . 2010; 10 ( 1 ):86. doi:10.1186/1472-6920-10-86.   PubMed Web of Science ® Google Scholar
  • Richman AR, Torres E, Wu Q, Eldridge D, Lawson L. The evaluation of a digital health intervention to improve human papillomavirus vaccine recommendation practices of medical students. J Cancer Educ: Off J Am Assoc Cancer Educ . 2022; 38 ( 4 ):1208–14. doi:10.1007/s13187-022-02250-z.   PubMed Web of Science ® Google Scholar
  • Hornsey MJ, Harris EA, Fielding KS. The psychological roots of anti-vaccination attitudes: a 24-nation investigation. Health Psychol . 2018; 37 ( 4 ):307–15. doi:10.1037/hea0000586.   PubMed Web of Science ® Google Scholar
  • Ko H. In science communication, why does the idea of a public deficit always return? How do the shifting information flows in healthcare affect the deficit model of science communication? Public Underst Sci . 2016; 25 ( 4 ):427–32. doi:10.1177/0963662516629746.   PubMed Web of Science ® Google Scholar
  • Schiavo R. How the information deficit model helps create unidirectional and paternalistic mode of healthcare communication. J Commun Healthcare . 2018;11(4):239–40.   Google Scholar
  • Health Education England. Public Health England, NHS england and the community health and learning foundation. Health literacy 'how to' guide. Health Education England; 2020 [accessed 2024 Sept 03]. https://www.hee.nhs.uk/sites/default/files/documents/Health%20literacy%20how%20to%20guide.pdf   Google Scholar
  • Paterick TE, Patel N, Tajik AJ, Chandrasekaran K. Improving health outcomes through patient education and partnerships with patients. Proc (Bayl Univ Med Cent) . 2017; 30 ( 1 ):112–13. doi:10.1080/08998280.2017.11929552.   PubMed Google Scholar
  • Jones AM, Omer SB, Bednarczyk RA, Halsey NA, Moulton LH, Salmon DA. Parents’ source of vaccine information and impact on vaccine attitudes, beliefs, and nonmedical exemptions. Adv Prev Med . 2012; 2012 :932741. doi:10.1155/2012/932741.   PubMed Google Scholar
  • Larson HJ, Jarrett C, Eckersberger E, Smith DM, Paterson P. Understanding vaccine hesitancy around vaccines and vaccination from a global perspective: a systematic review of published literature, 2007–2012. Vaccine . 2014; 32 ( 19 ):2150–9. doi:10.1016/j.vaccine.2014.01.081.   PubMed Web of Science ® Google Scholar
  • Healy CM, Pickering LK. How to communicate with vaccine-hesitant parents. Pediatrics . 2011; 127 ( Suppl 1 ):S127–33. doi:10.1542/peds.2010-1722S.   Google Scholar
  • Kaufman J, Ryan R, Walsh L, Horey D, Leask J, Robinson P, Hill S. Face-to-face interventions for informing or educating parents about early childhood vaccination. Cochrane Database Syst Rev . 2018; 5 ( 5 ):Cd010038. doi:10.1002/14651858.CD010038.pub3.   PubMed Web of Science ® Google Scholar
  • Tuckerman J, Kaufman J, Danchin M. Effective approaches to combat vaccine hesitancy. Pediatr Infect Dis J . 2022; 41 ( 5 ):243–5. doi:10.1097/INF.0000000000003499.   PubMed Web of Science ® Google Scholar
  • Betsch C, Sachse K. Debunking vaccination myths: strong risk negations can increase perceived vaccination risks. Health Psychol . 2013; 32 ( 2 ):146–55. doi:10.1037/a0027387.   PubMed Web of Science ® Google Scholar
  • Nyhan B, Reifler J, Richey S, Freed GL. Effective messages in vaccine promotion: a randomized trial. Pediatrics . 2014; 133 ( 4 ):e835–42. doi:10.1542/peds.2013-2365.   PubMed Web of Science ® Google Scholar
  • Gagneur A, Gosselin V, Dubé È. Motivational interviewing: a promising tool to address vaccine hesitancy. Vaccine . 2018; 36 ( 44 ):6553–5. doi:10.1016/j.vaccine.2017.10.049.   PubMed Web of Science ® Google Scholar
  • Lip A, Pateman M, Fullerton MM, Chen HM, Bailey L, Houle S, Davidson S, Constantinescu C. Vaccine hesitancy educational tools for healthcare providers and trainees: a scoping review. Vaccine . 2023; 41 ( 1 ):23–35. doi:10.1016/j.vaccine.2022.09.093.   PubMed Web of Science ® Google Scholar
  • Tchoualeu DD, Fleming M, Traicoff DA. A systematic review of pre-service training on vaccination and immunization. Vaccine . 2023; 41 ( 20 ):3156–70. doi:10.1016/j.vaccine.2023.03.062.   PubMed Web of Science ® Google Scholar
  • Sims DA. When I say … global south and global north. Med Educ . 2024; 58 ( 3 ):286–7. doi:10.1111/medu.15263.   PubMed Web of Science ® Google Scholar
  • Khan T, Abimbola S, Kyobutungi C, Pai M. How we classify countries and people—and why it matters. BMJ Global Health . 2022; 7 ( 6 ):e009704. doi:10.1136/bmjgh-2022-009704.   PubMed Web of Science ® Google Scholar
  • Dubé E, Gagnon D, Nickels E, Jeram S, Schuster M. Mapping vaccine hesitancy–country-specific characteristics of a global phenomenon. Vaccine . 2014; 32 ( 49 ):6649–54. doi:10.1016/j.vaccine.2014.09.039.   PubMed Web of Science ® Google Scholar
  • Naidu T. Southern exposure: levelling the northern tilt in global medical and medical humanities education. Adv Health Sci Educ Theory Pract . 2021; 26 ( 2 ):739–52. doi:10.1007/s10459-020-09976-9.   PubMed Web of Science ® Google Scholar
  • Claramita M, Nugraheni MDF, van Dalen J, van der Vleuten C. Doctor–patient communication in Southeast Asia: a different culture? Adv Health Sci Educ Theory Pract . 2013; 18 ( 1 ):15–31. doi:10.1007/s10459-012-9352-5.   PubMed Web of Science ® Google Scholar
  • Claramita M, Utarini A, Soebono H, Van Dalen J, Van der Vleuten C. Doctor–patient communication in a Southeast Asian setting: the conflict between ideal and reality. Adv Health Sci Educ Theory Pract . 2011; 16 ( 1 ):69–80. doi:10.1007/s10459-010-9242-7.   PubMed Web of Science ® Google Scholar
  • Mathooko JM, Kipkemboi JK. African perspectives. In: ten Have HAMJ, Gordijn B, editors. Handbook of global bioethics . Dordrecht: Springer; 2013. p. 252–68.   Google Scholar
  • Nagpal N. Incidents of violence against doctors in India: can these be prevented? Natl Med J India . 2017;30:97–100.   PubMed Web of Science ® Google Scholar
  • Ozeki-Hayashi R, Wilkinson DJC. Shinmi (親身): a distinctive Japanese medical virtue? Asian Bioethics Rev . 2023; doi:10.1007/s41649-023-00261-6.   Google Scholar
  • Pun JKH, Chan EA, Wang S, Slade D. Health professional-patient communication practices in east asia: an integrative review of an emerging field of research and practice in Hong Kong, South Korea, Japan, Taiwan, and Mainland China. Patient Educ Couns . 2018; 101 ( 7 ):1193–206. doi:10.1016/j.pec.2018.01.018.   PubMed Web of Science ® Google Scholar
  • Sekimoto M, Asai A, Ohnishi M, Nishigaki E, Fukui T, Shimbo T, et al. Patients’ preferences for involvement in treatment decision making in Japan. BMC Fam Pract . 2004; 5 :1. doi:10.1186/1471-2296-5-1.   PubMed Google Scholar
  • Thompson GA, Segura J, Cruz D, Arnita C, Whiffen LH. Cultural differences in patients’ preferences for paternalism: comparing Mexican and American patients’ preferences for and experiences with physician paternalism and patient autonomy. Int J Environ Res Public Health . 2022; 19 ( 17 ):10663. doi:10.3390/ijerph191710663.   PubMed Web of Science ® Google Scholar
  • Thompson GA, Whiffen LH. Can physicians demonstrate high quality care using paternalistic practices? A case study of paternalism in latino physician–patient interactions. Qual Health Res . 2018; 28 ( 12 ):1910–22. doi:10.1177/1049732318783696.   PubMed Web of Science ® Google Scholar
  • Unger J-P, Van Dormael M, Criel B, Van der Vennet J, De Munck P. A PLEA for an initiative to strengthen family medicine in public health care services of developing countries. Int J Health Serv . 2002; 32 ( 4 ):799–815. doi:10.2190/FN20-AGDQ-GYCP-P8R6.   PubMed Web of Science ® Google Scholar
  • Phillips DE, Dieleman JL, Lim SS, Shearer J. Determinants of effective vaccine coverage in low and middle-income countries: a systematic review and interpretive synthesis. BMC Health Serv Res . 2017; 17 ( 1 ):681. doi:10.1186/s12913-017-2626-0.   PubMed Google Scholar
  • Sallam M. COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines (Basel) . 2021; 9 ( 2 ):160. doi:10.3390/vaccines9020160.   PubMed Web of Science ® Google Scholar
  • Solís Arce JS, Warren SS, Meriggi NF, Scacco A, McMurry N, Voors M, Syunyaev G, Malik AA, Aboutajdine S, Adeojo O, et al. COVID-19 vaccine acceptance and hesitancy in low- and middle-income countries. Nat Med . 2021; 27 ( 8 ):1385–94. doi:10.1038/s41591-021-01454-y.   PubMed Web of Science ® Google Scholar
  • Dubé E, Vivion M, MacDonald NE. Vaccine hesitancy, vaccine refusal and the anti-vaccine movement: influence, impact and implications. Expert Rev Vaccines . 2015; 14 ( 1 ):99–117. doi:10.1586/14760584.2015.964212.   PubMed Web of Science ® Google Scholar
  • Bidkhanian PA. Motivational interviewing technique as a means of decreasing vaccine hesitancy in children and adolescents during the COVID-19 pandemic. Eur Psychiatry . 2023; 66 ( S1 ):S740–S. doi:10.1192/j.eurpsy.2023.1555.   Google Scholar
  • Brewer NT, Hall ME, Malo TL, Gilkey MB, Quinn B, Lathren C. Announcements versus conversations to improve HPV vaccination coverage: a randomized trial. Pediatrics . 2017; 139 ( 1 ):e20161764. doi:10.1542/peds.2016-1764.   PubMed Web of Science ® Google Scholar
  • Dybsand LL, Hall KJ, Ulven JC, Carson PJ. Improving provider confidence in addressing the vaccine-hesitant parent: a pilot project of 2 contrasting communication strategies. Clin Pediatrics . 2019; 59 ( 1 ):87–91. doi:10.1177/0009922819884572.   PubMed Web of Science ® Google Scholar
  • Opel DJ, Heritage J, Taylor JA, Mangione-Smith R, Salas HS, Devere V, Zhou C, Robinson JD. The architecture of provider-parent vaccine discussions at health supervision visits. Pediatrics . 2013; 132 ( 6 ):1037–46. doi:10.1542/peds.2013-2037.   PubMed Web of Science ® Google Scholar
  • Opel DJ, Robinson JD, Spielvogle H, Spina C, Garrett K, Dempsey AF. ‘Presumptively initiating vaccines and optimizing talk with motivational interviewing’ (PIVOT with MI) trial: a protocol for a cluster randomised controlled trial of a clinician vaccine communication intervention. BMJ Open . 2020; 10 ( 8 ):e039299. doi:10.1136/bmjopen-2020-039299.   PubMed Web of Science ® Google Scholar
  • Rashid MA, Griffin A. Is west really best? The discourse of modernisation in global medical school regulation policy. Teach Learn Med . 2023:1–12. doi:10.1080/10401334.2023.2230586.   PubMed Web of Science ® Google Scholar
  • Lizarondo L, Stern C, Carrier J, Godfrey C, Rieger K, Salmond S, Apostolo J, Kirkpatrick P, Loveday H. Chapter 8: mixed methods systematic reviews. JBI manual for evidence synthesis [internet]. JBI. 2020. https://synthesismanual.jbi.global .   Google Scholar
  • Harden RM, Grant J, Buckley G, Hart IR. BEME guide No. 1: best evidence medical education. Med Teach . 1999; 21 ( 6 ):553–62. doi:10.1080/01421599978960.   PubMed Web of Science ® Google Scholar
  • Sullivan GM. Deconstructing quality in education research. J Grad Med Educ . 2011; 3 ( 2 ):121–4. doi:10.4300/JGME-D-11-00083.1.   PubMed Google Scholar
  • Yardley S, Dornan T. Kirkpatrick’s levels and education ‘evidence’. Med Educ . 2012; 46 ( 1 ):97–106. doi:10.1111/j.1365-2923.2011.04076.x.   PubMed Web of Science ® Google Scholar
  • Eggertson L. Lancet retracts 12-year-old article linking autism to MMR vaccines. Cmaj . 2010; 182 ( 4 ):E199–200. doi:10.1503/cmaj.109-3179.   PubMed Web of Science ® Google Scholar
  • Rao TS, Andrade C. The MMR vaccine and autism: sensation, refutation, retraction, and fraud. Indian J Psychiatry . 2011; 53 ( 2 ):95–6. doi:10.4103/0019-5545.82529.   PubMed Google Scholar
  • Schmid P, Rauber D, Betsch C, Lidolt G, Denker ML. Barriers of influenza vaccination intention and behavior - a systematic review of influenza vaccine hesitancy, 2005 - 2016. PLOS ONE . 2017; 12 ( 1 ):e0170550. doi:10.1371/journal.pone.0170550.   PubMed Web of Science ® Google Scholar
  • Cook DA, Reed DA. Appraising the quality of medical education research methods: the medical education research study quality instrument and the Newcastle-Ottawa Scale-education. Acad Med . 2015; 90 ( 8 ):1067–76. doi:10.1097/ACM.0000000000000786.   PubMed Web of Science ® Google Scholar
  • Côté L, Turgeon J. Appraising qualitative research articles in medicine and medical education. Med Teach . 2005; 27 ( 1 ):71–5. doi:10.1080/01421590400016308.   PubMed Web of Science ® Google Scholar
  • Koski K, Lehto JT, Hakkarainen K. Simulated encounters with vaccine-hesitant parents: arts-based video scenario and a writing exercise. J Med Educ Curric Dev . 2018; 5 :9. doi:10.1177/2382120518790257.   Google Scholar
  • Koski K, Lehto JT, Hakkarainen K. Physician self-disclosure and vaccine-critical parents׳ trust: preparing medical students for parents׳ difficult questions. Health Professions Educ . 2019; 5 ( 3 ):253–8. doi:10.1016/j.hpe.2018.09.005.   Google Scholar
  • Sutton S, Azar SS, Evans LK, Murtagh A, McCarthy C, John MS. HPV knowledge retention and concurrent increase in vaccination rates 1.5 years after a novel HPV workshop in medical school. J Cancer Educ: Off J Am Assoc Cancer Educ . 2021; 38 ( 1 ):240–7. doi:10.1007/s13187-021-02106-y.   PubMed Web of Science ® Google Scholar
  • Afonso N, Kavanagh M, Swanberg S. Improvement in attitudes toward influenza vaccination in medical students following an integrated, curricular intervention. Vaccine . 2014; 32 ( 4 ):502–6. doi:10.1016/j.vaccine.2013.11.043.   PubMed Web of Science ® Google Scholar
  • Berenson AB, Hirth JM, Fuchs EL, Chang M, Rupp RE. An educational intervention to improve attitudes regarding HPV vaccination and comfort with counseling among US medical students. Hum Vaccin Immunother . 2020; 16 ( 5 ):1139–44. doi:10.1080/21645515.2019.1692558.   PubMed Web of Science ® Google Scholar
  • Brisolara KF, Gasparini S, Davis AH, Sanne S, Andrieu SC, James J, Mercante DE, De Carvalho RB, Patel Gunaldo T. Supporting health system transformation through an interprofessional education experience focused on population health. J Interprofessional Care . 2019; 33 ( 1 ):125–8. doi:10.1080/13561820.2018.1530646.   PubMed Web of Science ® Google Scholar
  • Caruso Brown AE, Suryadevara M, Welch TR, Botash AS. “Being persistent without being pushy”: student reflections on vaccine hesitancy. Narrat Inq Bioeth . 2017; 7 ( 1 ):59–70. doi:10.1353/nib.2017.0018.   Google Scholar
  • Chase AJ, Demory ML. Counteracting vaccine misinformation: an active learning module. Med Sci Educ . 2023; 33 ( 3 ):1–4. doi:10.1007/s40670-023-01785-0.   PubMed Google Scholar
  • Chase AJ, Heck AJ. Educating first‐year medical students on inactive vaccine components. Med Educ . 2020; 54 ( 5 ):446–7. doi:10.1111/medu.14111.   PubMed Web of Science ® Google Scholar
  • Chen G, Kazmi M, Chen DL, Phillips J. Improving medical student clinical knowledge and skills through influenza education. Med Sci Educ . 2021; 31 ( 5 ):1645–51. doi:10.1007/s40670-021-01355-2.   PubMed Google Scholar
  • Coleman A, Lehman D. A flipped classroom and case-based curriculum to prepare medical students for vaccine-related conversations with parents. MedEdPORTAL: J Teach Learn Resour . 2017; 13 :10582. doi:10.15766/mep_2374-8265.10582.   PubMed Google Scholar
  • Collins RA, Zeitouni J, Veesart A, Chacon J, Wong A, Byrd T. Establishment of a vaccine administration training program for medical students. Bayl Univ Med Cent Proc . 2023; 36 ( 2 ):157–60. doi:10.1080/08998280.2022.2137373.   PubMed Google Scholar
  • Kelekar A, Rubino I, Kavanagh M, Lewis-Bedz R, LeClerc G, Pedell L, Afonso N. Vaccine hesitancy counseling—an educational intervention to teach a critical skill to preclinical medical students. Med Sci Educ . 2022; 32 ( 1 ):141–7. doi:10.1007/s40670-021-01495-5.   PubMed Google Scholar
  • Onello E, Friedrichsen S, Krafts K, Simmons G, Diebel K. First year allopathic medical student attitudes about vaccination and vaccine hesitancy. Vaccine . 2020; 38 ( 4 ):808–14. doi:10.1016/j.vaccine.2019.10.094.   PubMed Web of Science ® Google Scholar
  • Rizal RE, Mediratta RP, Xie J, Kambhampati S, Hills-Evans K, Montacute T, Zhang M, Zaw C, He J, Sanchez M, et al. Galvanizing medical students in the administration of influenza vaccines: the stanford flu crew. Adv Med Educ Pract . 2015; 6 :471–7. doi:10.2147/AMEP.S70294.   PubMed Google Scholar
  • Schnaith A, Erickson B, Evans E, Vogt C, Tinsay A, Schmidt T, Tessier K. An innovative medical student curriculum to address human papillomavirus vaccine hesitancy. Pediatrics Conf: Natl Conf Educ . 2018; 144 ( 2 ):237–237. doi:10.1542/peds.144.2MA3.237.   Google Scholar
  • Thanasuwat B, Leung SOA, Welch K, Duffey-Lind E, Pena N, Feldman S, Villa A. Human papillomavirus (HPV) education and knowledge among medical and dental trainees. J Cancer Educ: Off J Am Assoc Cancer Educ . 2022; 38 ( 3 ):971–6. doi:10.1007/s13187-022-02215-2.   PubMed Web of Science ® Google Scholar
  • Wiley R, Shelal Z, Bernard C, Urbauer D, Toy E, Ramondetta L. Human papillomavirus: from basic science to clinical management for preclinical medical students. MedEdPORTAL publ . 2018; 14 :10787. doi:10.15766/mep_2374-8265.10787.   PubMed Google Scholar
  • Wu JF, Abenoza N, Bosco JM, Minshew LM, Beckius A, Kastner M, Hilgeman B, Muntz MD. COVID-19 vaccination telephone outreach: an analysis of the medical student experience. Med Educ Online . 2023; 28 ( 1 ):2207249. doi:10.1080/10872981.2023.2207249.   PubMed Web of Science ® Google Scholar
  • Jenkins MC, Paul CR, Chheda S, Hanson JL. Qualitative analysis of reflective writing examines medical student learning about vaccine hesitancy. Asia Pac Scholar . 2023; 8 ( 2 ):36–46. doi:10.29060/TAPS.2023-8-2/OA2855.   Google Scholar
  • Bechini A, Moscadelli A, Sartor G, Shtylla J, Guelfi MR, Bonanni P, Boccalini S. Impact assessment of an educational course on vaccinations in a population of medical students. J Prev Med Hyg . 2019;60(3):171–7.   Google Scholar
  • Beltermann E, Krane S, Kiesewetter J, Fischer MR, Schelling J. See your GP, see the world - an activating course concept for fostering students’ competence in performing vaccine and travel consultations. GMS Z Med Ausbild . 2015; 32 ( 3 ):Doc28. doi:10.3205/zma000970.   PubMed Google Scholar
  • Boccalini S, Vannacci A, Crescioli G, Lombardi N, Del Riccio M, Albora G, Shtylla J, Masoni M, Guelfi MR, Bonanni P, et al. Knowledge of university students in health care settings on vaccines and vaccinations strategies: impact evaluation of a specific educational training course during the COVID-19 pandemic period in Italy. Vaccines . 2022; 10 ( 7 ):1085. doi:10.3390/vaccines10071085.   Web of Science ® Google Scholar
  • Marotta C, Raia DD, Ventura G, Casuccio N, Dieli F, D’Angelo C, Restivo V, Costantino C, Vitale F, Casuccio A. Improvement in vaccination knowledge among health students following an integrated extracurricular intervention, an explorative study in the University of Palermo. J Prev Med Hyg . 2017;58(2):93–8.   Google Scholar
  • Mena G, Llupia A, Garcia-Basteiro AL, Sequera VG, Aldea M, Bayas JM, Trilla A. Educating on professional habits: attitudes of medical students towards diverse strategies for promoting influenza vaccination and factors associated with the intention to get vaccinated. BMC Med Educ . 2013; 13 ( 1 ):6. doi:10.1186/1472-6920-13-99.   PubMed Google Scholar
  • Rill V, Steffen B, Wicker S. Evaluation of a vaccination seminar in regard to medical students’ attitudes and their theoretical and practical vaccination-specific competencies. GMS J Med Educ . 2020; 37 ( 4 ):14. doi:10.3205/zma001331.   Web of Science ® Google Scholar
  • Bechini A, Vannacci A, Salvati C, Crescioli G, Lombardi N, Chiesi F, Shtylla J, Del Riccio M, Bonanni P, Boccalini S, et al. Knowledge and training of Italian students in healthcare settings on COVID-19 vaccines and vaccination strategies, one year after the immunization campaign. J Preventative Med Hyg . 2023; 64 ( 2 ):152–60. doi:10.15167/2421-4248/jpmh2023.64.2.2934.   Google Scholar
  • Driessen J, Hearn R. Development of hidden curriculum skills in a COVID-19 vaccination centre. Clin Teach . 2023; 21 ( 2 ). doi:10.1111/tct.13642.   PubMed Google Scholar
  • Carroll PR, Hanrahan J. Development and evaluation of an interprofessional student-led influenza vaccination clinic for medical, nursing and pharmacy students. Pharm Pract (1886-3655) . 2021; 19 ( 4 ):1–12. doi:10.18549/PharmPract.2021.4.2449.   Web of Science ® Google Scholar
  • Hanrahan JR, Carroll PR. Student-led interprofessional influenza vaccination clinic in a time of coronavirus. Med Educ . 2020; 54 ( 11 ):1078–9. doi:10.1111/medu.14323.   PubMed Web of Science ® Google Scholar
  • Evans L, Matley E, Oberbillig M, Margetts E, Darrow L. HPV knowledge and attitudes among medical and professional students at a Nevada university: a focus on oropharyngeal cancer and mandating the vaccine. J Cancer Educ . 2020; 35 ( 4 ):774–81. doi:10.1007/s13187-019-01529-y.   PubMed Web of Science ® Google Scholar
  • Wiley R, Shelal Z, Bernard C, Urbauer D, Toy E, Ramondetta L. Team-based learning module for undergraduate medical education: a module focused on the human papilloma virus to increase willingness to vaccinate. J Cancer Educ . 2019; 34 ( 2 ):357–62. doi:10.1007/s13187-017-1311-7.   PubMed Web of Science ® Google Scholar
  • Dempsey AF, Pyrznawoski J, Lockhart S, Barnard J, Campagna EJ, Garrett K, Fisher A, Dickinson LM, O’Leary ST. Effect of a health care professional communication training intervention on adolescent human papillomavirus vaccination: a cluster randomized clinical trial. JAMA Pediatr . 2018; 172 ( 5 ):e180016. doi:10.1001/jamapediatrics.2018.0016.   PubMed Web of Science ® Google Scholar
  • Lemaitre T, Carrier N, Farrands A, Gosselin V, Petit G, Gagneur A. Impact of a vaccination promotion intervention using motivational interview techniques on long-term vaccine coverage: the PromoVac strategy. Hum Vaccin Immunother . 2019; 15 ( 3 ):732–9. doi:10.1080/21645515.2018.1549451.   PubMed Web of Science ® Google Scholar
  • Opel DJ, Mangione-Smith R, Robinson JD, Heritage J, DeVere V, Salas HS, Zhou C, Taylor JA. The influence of provider communication behaviors on parental vaccine acceptance and visit experience. Am J Public Health . 2015; 105 ( 10 ):1998–2004. doi:10.2105/AJPH.2014.302425.   PubMed Web of Science ® Google Scholar
  • Opel DJ, Robinson JD, Heritage J, Korfiatis C, Taylor JA, Mangione-Smith R. Characterizing providers’ immunization communication practices during health supervision visits with vaccine-hesitant parents: a pilot study. Vaccine . 2012; 30 ( 7 ):1269–75. doi:10.1016/j.vaccine.2011.12.129.   PubMed Web of Science ® Google Scholar
  • Clapper TC. How the information deficit model helps create unidirectional and paternalistic mode of healthcare communication. J Commun Healthcare . 2018;11(4):239–40.   Google Scholar
  • Alexandra LJF. How to communicate evidence to patients. Drug Ther Bull . 2019; 57 ( 8 ):119. doi:10.1136/dtb.2019.000008.   PubMed Google Scholar
  • King A, Hoppe RB. “Best practice” for patient-centered communication: a narrative review. J Grad Med Educ . 2013; 5 ( 3 ):385–93. doi:10.4300/JGME-D-13-00072.1.   PubMed Google Scholar
  • Bardosh K, de Figueiredo A, Gur-Arie R, Jamrozik E, Doidge J, Lemmens T, Keshavjee S, Graham JE, Baral S. The unintended consequences of COVID-19 vaccine policy: why mandates, passports and restrictions may cause more harm than good. BMJ Glob Health . 2022; 7 ( 5 ):e008684. doi:10.1136/bmjgh-2022-008684.   PubMed Web of Science ® Google Scholar
  • Sween L, Ekeoduru R, Mann D. Ethics and pitfalls of vaccine mandates. ASA Monit . 2022; 86 ( 2 ):24–5. doi:10.1097/01.ASM.0000820408.65886.28.   Google Scholar
  • de Figueiredo A, Simas C, Karafillakis E, Paterson P, Larson HJ. Mapping global trends in vaccine confidence and investigating barriers to vaccine uptake: a large-scale retrospective temporal modelling study. Lancet . 2020; 396 ( 10255 ):898–908. doi:10.1016/S0140-6736(20)31558-0.   PubMed Web of Science ® Google Scholar
  • Nuwarda RF, Ramzan I, Weekes L, Kayser V. Vaccine hesitancy: contemporary issues and historical background. Vaccines (Basel) . 2022; 10 ( 10 ):1595. doi:10.3390/vaccines10101595.   PubMed Web of Science ® Google Scholar
  • Oduwole EO, Pienaar ED, Mahomed H, Wiysonge CS. Overview of tools and measures investigating vaccine hesitancy in a Ten year period: a scoping review. Vaccines . 2022; 10 ( 8 ):1198. doi:10.3390/vaccines10081198.   Web of Science ® Google Scholar
  • Trust W. Long term outcome and pulmonary vein reconnection of patients undergoing cryoablation and/or radiofrequency ablation: results from the cryo versus RF trial. J Atr Fibrillation . 2018; 11 ( 3 ). doi:10.4022/jafib.2072.   Google Scholar
  • Wiegand M, Eagan R, Karimov R, Lin L, Larson H, Figueiredo A. Global declines in vaccine confidence from 2015 to 2022: a large-scale retrospective analysis. Preprints with The Lancet . 2023. doi:10.2139/ssrn.4438003.   Google Scholar

77 interesting medical research topics for 2024

Last updated: 25 November 2023

Reviewed by: Brittany Ferri, PhD, OTR/L

Medical research is the gateway to improved patient care and expanded treatment options. However, finding a relevant and compelling research topic can be challenging.

Use this article as a jumping-off point to select an interesting medical research topic for your next paper or clinical study.

  • How to choose a medical research topic

When choosing a research topic, it’s essential to consider a couple of things: What topics interest you? What unanswered questions do you want to address?

During the decision-making and brainstorming process, here are a few helpful tips to help you pick the right medical research topic:

Focus on a particular field of study

The best medical research is specific to a particular area. Generalized studies are often too broad to produce meaningful results, so we advise picking a specific niche early in the process. 

Maybe a certain topic interests you, or your industry knowledge reveals areas of need.

Look into commonly researched topics

Once you’ve chosen your research field, do some preliminary research. What have other academics done in their papers and projects? 

From this list, you can focus on specific topics that interest you without accidentally creating a copycat project. This groundwork will also help you uncover any literature gaps—those may be beneficial areas for research.

Get curious and ask questions

Now you can get curious. Ask questions that start with why, how, or what. These questions are the starting point of your project design and will act as your guiding light throughout the process. 

For example: 

What impact does pollution have on children’s lung function in inner-city neighborhoods? 

Why is pollution-based asthma on the rise? 

How can we address pollution-induced asthma in young children? 
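A question like the first one above can be operationalized as a quantitative comparison. A minimal sketch in Python of a two-group comparison of lung-function measurements (all values are simulated for illustration; the effect sizes and sample sizes are assumptions, not findings):

```python
# Hypothetical two-group comparison: FEV1 (litres) in children from
# low- vs. high-pollution neighborhoods. All values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fev1_low_pollution = rng.normal(loc=2.1, scale=0.3, size=50)
fev1_high_pollution = rng.normal(loc=1.9, scale=0.3, size=50)

# Two-sample t-test for a difference in mean lung function.
t_stat, p_value = stats.ttest_ind(fev1_low_pollution, fev1_high_pollution)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A real study would measure actual cohorts and pre-specify the analysis; the sketch only shows how a "what impact" question becomes a testable comparison.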

  • 77 medical research topics worth exploring in 2024

Need some research inspiration for your upcoming paper or clinical study? We’ve compiled a list of 77 topical and in-demand medical research ideas. Let’s take a look. 

  • Exciting new medical research topics

If you want to study cutting-edge topics, here are some exciting options:

COVID-19 and long COVID symptoms

Since 2020, COVID-19 has been a hot-button topic in medicine, along with the long-term symptoms in those with a history of COVID-19. 

Examples of COVID-19-related research topics worth exploring include:

The long-term impact of COVID-19 on cardiac and respiratory health

COVID-19 vaccination rates

The evolution of COVID-19 symptoms over time

New variants and strains of the COVID-19 virus

Changes in social behavior and public health regulations amid COVID-19

Vaccinations

Finding ways to cure or reduce the disease burden of chronic infectious diseases is a crucial research area. Vaccination is a powerful option and a great topic to research. 

Examples of vaccination-related research topics include:

mRNA vaccines for viral infections

Biomaterial vaccination capabilities

Vaccination rates based on location, ethnicity, or age

Public opinion about vaccination safety 
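A topic like "vaccination rates based on location, ethnicity, or age" ultimately comes down to computing group-level proportions. A minimal sketch with pandas (the records and column names are invented for illustration):

```python
import pandas as pd

# Toy survey records: a 0/1 vaccination indicator per respondent, by region.
records = pd.DataFrame({
    "region": ["urban", "urban", "rural", "rural", "rural", "urban"],
    "vaccinated": [1, 1, 0, 1, 0, 0],
})

# The vaccination rate per region is the mean of the indicator in each group.
rates = records.groupby("region")["vaccinated"].mean()
print(rates)  # rural: 1/3, urban: 2/3
```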

Artificial tissues fabrication

With the need for donor organs increasing, finding ways to fabricate artificial bioactive tissues (and possibly organs) is a popular research area. 

Examples of artificial tissue-related research topics you can study include:

The viability of artificially printed tissues

Tissue substrate and building block material studies

The ethics and efficacy of artificial tissue creation

  • Medical research topics for medical students

For many medical students, research is a big driver for entering healthcare. If you’re a medical student looking for a research topic, here are some great ideas to work from:

Sleep disorders

Poor sleep quality is a growing problem, and it can significantly impact a person’s overall health. 

Examples of sleep disorder-related research topics include:

How stress affects sleep quality

The prevalence and impact of insomnia on patients with mental health conditions

Possible triggers for sleep disorder development

The impact of poor sleep quality on psychological and physical health

How melatonin supplements impact sleep quality

Alzheimer’s and dementia 

Cognitive conditions like dementia and Alzheimer’s disease are on the rise worldwide. They currently have no cure. As a result, research about these topics is in high demand. 

Examples of dementia-related research topics you could explore include:

The prevalence of Alzheimer’s disease in a chosen population

Early onset symptoms of dementia

Possible triggers or causes of cognitive decline with age

Treatment options for dementia-like conditions

The mental and physical burden of caregiving for patients with dementia

  • Lifestyle habits and public health

Modern lifestyles have profoundly impacted the average person’s daily habits, and plenty of interesting topics explore its effects. 

Examples of lifestyle and public health-related research topics include:

The nutritional intake of college students

The impact of chronic work stress on overall health

The rise of upper back and neck pain from laptop use

Prevalence and cause of repetitive strain injuries (RSI)

  • Controversial medical research paper topics

Medical research is a hotbed of controversial topics, content, and areas of study. 

If you want to explore a more niche (and attention-grabbing) concept, here are some controversial medical research topics worth looking into:

The benefits and risks of medical cannabis

Depending on where you live, the legalization and use of cannabis for medical conditions are controversial among both the general public and healthcare providers.

Examples of medical cannabis-related research topics that might grab your attention include:

The legalization process of medical cannabis

The impact of cannabis use on developmental milestones in youth users

Cannabis and mental health diagnoses

CBD’s impact on chronic pain

Prevalence of cannabis use in young people

The impact of maternal cannabis use on fetal development 

Understanding how THC impacts cognitive function

Human genetics

The Human Genome Project mapped and sequenced essentially all human DNA, identifying our genes in the process. Its completion in 2003 opened up a world of exciting and controversial studies in human genetics.

Examples of human genetics-related research topics worth delving into include:

Medical genetics and the incidence of genetic-based health disorders

Behavioral genetics differences between identical twins

Genetic risk factors for neurodegenerative disorders

Machine learning technologies for genetic research

Sexual health studies

Human sexuality and sexual health are important (yet often stigmatized) medical topics that need new research and analysis.

As a diverse field ranging from sexual orientation studies to sexual pathophysiology, examples of sexual health-related research topics include:

The incidence of sexually transmitted infections within a chosen population

Mental health conditions within the LGBTQIA+ community

The impact of untreated sexually transmitted infections

Access to safe sex resources (condoms, dental dams, etc.) in rural areas

  • Health and wellness research topics

Human wellness and health are trendy topics in modern medicine as more people are interested in finding natural ways to live healthier lifestyles. 

If this field of study interests you, here are some big topics in the wellness space:

Gluten sensitivity

Gluten allergies and intolerances have risen over the past few decades. If you’re interested in exploring this topic, note that reactions range in severity from mild gastrointestinal symptoms to full-blown anaphylaxis.

Some examples of gluten sensitivity-related research topics include:

The pathophysiology and incidence of Celiac disease

Early onset symptoms of gluten intolerance

The prevalence of gluten allergies within a set population

Gluten allergies and the incidence of other gastrointestinal health conditions

Pollution and lung health

Living in large urban cities means regular exposure to high levels of pollutants. 

As more people become interested in protecting their lung health, examples of impactful lung health and pollution-related research topics include:

The extent of pollution in densely packed urban areas

The prevalence of pollution-based asthma in a set population

Lung capacity and function in young people

The benefits and risks of steroid therapy for asthma

Pollution risks based on geographical location

Plant-based diets

Plant-based diets, such as vegan and vegetarian diets, are an emerging trend in healthcare, though the research supporting them remains limited.

If you’re interested in learning more about the potential benefits or risks of holistic, diet-based medicine, examples of plant-based diet research topics to explore include:

Vegan and plant-based diets as part of disease management

Potential risks and benefits of specific plant-based diets

Plant-based diets and their impact on body mass index

The effect of diet and lifestyle on chronic disease management

Health supplements

Supplements are a multi-billion dollar industry. Many health-conscious people take supplements, including vitamins, minerals, herbal medicine, and more. 

Examples of health supplement-related research topics worth investigating include:

Omega-3 fish oil safety and efficacy for cardiac patients

The benefits and risks of regular vitamin D supplementation

Health supplementation regulation and product quality

The impact of social influencer marketing on consumer supplement practices

Analyzing added ingredients in protein powders

  • Healthcare research topics

Working within the healthcare industry means you have insider knowledge and opportunity. Maybe you’d like to research the overall system, administration, and inherent biases that disrupt access to quality care. 

While these topics are essential to explore, it is important to note that these studies usually require approval and oversight from an Institutional Review Board (IRB). This ensures the study is ethical and does not harm any subjects. 

For this reason, the IRB sets protocols that require additional planning, so consider this when mapping out your study’s timeline. 

Here are some examples of trending healthcare research areas worth pursuing:

The pros and cons of electronic health records

The rise of electronic healthcare charting and records has forever changed how medical professionals and patients interact with their health data. 

Examples of electronic health record-related research topics include:

The number of medication errors reported during a software switch

Nurse sentiment analysis of electronic charting practices

Ethical and legal studies into encrypting and storing personal health data
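To make one of the topics above concrete, the sentiment-analysis idea can be sketched with a simple lexicon-based scorer. A minimal Python illustration (the word lists and comments are invented; a real study would use a validated sentiment lexicon or model):

```python
# Hypothetical lexicon-based sentiment scoring of nurse comments about
# electronic charting. Word lists are invented for demonstration only.
POSITIVE = {"fast", "intuitive", "reliable", "helpful"}
NEGATIVE = {"slow", "confusing", "crashes", "tedious"}

def sentiment_score(comment: str) -> int:
    """Count positive words minus negative words in a comment."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "charting is slow and tedious",
    "the new interface is intuitive and fast",
]
scores = [sentiment_score(c) for c in comments]
print(scores)  # prints [-2, 2]
```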

Inequities within healthcare access

Many barriers inhibit people from accessing the quality medical care they need. These issues result in health disparities and injustices. 

Examples of research topics about health inequities include:

The impact of social determinants of health in a set population

Early and late-stage cancer stage diagnosis in urban vs. rural populations

Affordability of life-saving medications

Health insurance limitations and their impact on overall health

Diagnostic and treatment rates across ethnicities

People who belong to an ethnic minority are more likely to experience barriers and restrictions when trying to receive quality medical care. This is due to systemic healthcare racism and bias. 

As a result, diagnostic and treatment rates in minority populations are a hot-button field of research. Examples of ethnicity-based research topics include:

Cancer biopsy rates in BIPOC women

The prevalence of diabetes in Indigenous communities

Access inequalities in women’s health preventative screenings

The prevalence of undiagnosed hypertension in Black populations

  • Pharmaceutical research topics

Large pharmaceutical companies are incredibly interested in investing in research to learn more about potential cures and treatments for diseases. 

If you’re interested in building a career in pharmaceutical research, here are a few examples of in-demand research topics:

Cancer treatment options

Clinical research is in high demand as pharmaceutical companies explore novel cancer treatment options outside of chemotherapy and radiation. 

Examples of cancer treatment-related research topics include:

Stem cell therapy for cancer

Oncogenic gene dysregulation and its impact on disease

Cancer-causing viral agents and their risks

Treatment efficacy based on early vs. late-stage cancer diagnosis

Cancer vaccines and targeted therapies

Immunotherapy for cancer

Pain medication alternatives

Historically, opioid medications were the primary treatment for short- and long-term pain. But, with the opioid epidemic getting worse, the need for alternative pain medications has never been more urgent. 

Examples of pain medication-related research topics include:

Opioid withdrawal symptoms and risks

Early signs of pain medication misuse

Anti-inflammatory medications for pain control

  • Identify trends in your medical research with Dovetail

Are you interested in contributing life-changing research? Today’s medical research is part of the future of clinical patient care. 

As your go-to resource for speedy and accurate data analysis, we are proud to partner with healthcare researchers to innovate and improve the future of healthcare.



Practical Guide to Education Program Evaluation Research

  • 1 Department of Surgery, Medical College of Wisconsin, Milwaukee
  • 2 Department of Emergency Medicine, University of Colorado, Aurora
  • 3 Statistical Editor, JAMA Surgery
  • 4 Department of Surgery, VA Boston Healthcare System, Boston University, Harvard Medical School, Boston, Massachusetts

Program evaluation is the systematic assessment of a program’s implementation. In medical education, evaluation includes the synthesis and analysis of educational programs, which in turn provides evidence for educational best practices. As in other fields, the quality of that synthesis depends on the rigor with which evaluations are performed. Individual program evaluation is most useful when similar programs apply the same scientific rigor and methodology to assess outcomes, allowing direct comparisons. The pedagogy of a given program, particularly in medical education, can be driven by nonscientific forces (ie, politics, fads, or ideology) rather than evidence. 1 Such forces often slow the progress of educational methods toward their goals compared with a more evidence-based practice. 2
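As one concrete illustration of the rigor described above, a single-group pretest-posttest evaluation can be summarized with a paired comparison. A minimal Python sketch (all scores are simulated, not drawn from any real program):

```python
# Hypothetical pretest/posttest scores for one cohort of learners,
# before and after an educational intervention (simulated data).
from scipy import stats

pretest = [62, 55, 70, 58, 64, 61, 67, 59]
posttest = [71, 60, 78, 66, 70, 69, 74, 65]

# Paired t-test: each learner serves as their own control.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Reporting the same statistics across similar programs is what makes the direct comparisons mentioned above possible.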


de Moya M , Haukoos JS , Itani KMF. Practical Guide to Education Program Evaluation Research. JAMA Surg. 2024;159(6):706–707. doi:10.1001/jamasurg.2023.6702



Perelman School of Medicine at the University of Pennsylvania


Elena Atochina-Vasserman, MD


Dr. Atochina earned her M.D. from Tomsk Medical School and a Ph.D. in Biochemistry in 1990 from the Russian Cardiology Research Center, Moscow, Russia. She completed post-doctoral training at the University of Miami with Dr. James Ryan and at the University of Pennsylvania with Dr. Aron B. Fisher and Dr. Vladimir Muzykantov at the Institute for Environmental Medicine. Following three years as a Research Associate in the laboratory of Dr. Michael F. Beers, she was appointed as a Senior Laboratory Investigator and now directs the Lung Host Defense and Inflammation Core in Dr. Beers’ laboratory.

Dr. Atochina-Vasserman’s early work included studies of selective vascular (in vivo) targeting of enzyme therapeutics for containment of oxidative and vascular stress, inflammation, endothelial injury, and ischemia-reperfusion. More recently, her work has focused primarily on the response of the lung to a variety of challenges, including Pneumocystis murina, Aspergillus fumigatus, bleomycin, and hyperoxia. She has also examined the interrelationship between surfactant protein D and macrophage function, with a special emphasis on the biochemistry of the structure-function changes induced by post-translational modification of SP-D by nitric oxide.


COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK


Research Assistant

  • Pathology and Cell Biology
  • Columbia University Medical Center
  • Opening on: Sep 26 2024
  • Technical Grade 5
  • Job Type: Support Staff - Union
  • Bargaining Unit: SSA
  • Regular/Temporary: Regular
  • End Date if Temporary:
  • Hours Per Week: 35
  • Standard Work Schedule:
  • Salary Range: $59,845.49 - $59,845.49

Position Summary

The Department of Pathology and Cell Biology is seeking a Research Assistant with demonstrated experience in flow cytometry to work in the Immunogenetics Lab. The work will involve hands-on processing of biological samples for clinical flow cytometry tests for lymphoid, non-lymphoid, and malignant leukocyte immunophenotyping. The candidate will be involved in R&D work developing new assays for primary immunodeficiency diagnosis. Training in laboratory protocols and result reporting will be provided. An important part of the job will relate to the development of multi-color (10+) flow cytometry assays. In addition, the responsibilities will include a minor component of quality control/assurance (validation of new reagents, tests, and instrumentation; instrument maintenance and troubleshooting; regulatory compliance), and training of residents and new personnel.

Responsibilities

  • 40% - Performing flow cytometry
  • 40% - Recording/analyzing data
  • 15% - General lab maintenance
  • 5% - Other duties as assigned

Minimum Qualifications

  • Bachelor's Degree and at least one and one-half years of related experience or equivalent education, training and experience

Preferred Qualifications

  • Bachelor's degree (or higher) in natural sciences (Biology preferred) 
  • Prior experience in flow cytometry is required; experience in immunology and tissue culture is a plus
  • Ability to communicate effectively with colleagues, and internal and external stakeholders.
  • Impeccable record-keeping skills, compatible with a clinical operation
  • Dedication and commitment to patient needs, willingness to work overtime on emergency cases
  • Ability to work in teams, good interpersonal skills 
  • A good degree of computer literacy (word processing, Excel spreadsheets)

Any of the following qualifications confers a strong advantage:

  • NY State clinical laboratory technologist ‘CLT’ license
  • Familiarity with BD hardware and software (especially FACS Canto) and/or FCS Express

Equal Opportunity Employer / Disability / Veteran

Columbia University is committed to the hiring of qualified local residents.

Commitment to Diversity

Columbia University is dedicated to increasing diversity in its workforce, its student body, and its educational programs. Achieving continued academic excellence and creating a vibrant university community require nothing less. In fulfilling its mission to advance diversity at the University, Columbia seeks to hire, retain, and promote exceptionally talented individuals from diverse backgrounds.



  7. Making sense of meta-analysis in medical education research

    In medical education research, the experimental variable is typically an education intervention, for instance, the impact of simulation-based education on the development of clinical reasoning or the performance of medical students. The researcher conducting an experimental study manipulates the intervention of interest (e.g., simulation-based ...

  8. Research in medical education: three decades of progress

    Educ Res 1995;24:5-11. This research has led to major advances in performance assessment—for example, the Medical Council of Canada now administers a performance examination to 1800 licensure candidates each year. 16 Changes in assessment methods at the school level have, however, been much slower in coming. 17.

  9. How Common Are Experimental Designs in Medical Education

    perts in education research note that experimental designs largely are incompatible with educational studies due to various contextual, legal, and ethical issues. Purpose: We sought to investigate the frequency with which experimental designs have been utilized in recent medical education dissertations and theses. Methods: A bibliometric analysis of dissertations and theses completed in the ...

  10. PDF Reflections on experimental research in medical education

    appropriate for many education research questions (Fraenkel and Wallen 2003). However, we will focus on experimental research, which seems particularly problematic for medical education investigators.

  11. Realist methods in medical education research: what are they ...

    Context: Education is a complex intervention which produces different outcomes in different circumstances. Education researchers have long recognised the need to supplement experimental studies of efficacy with a broader range of study designs that will help to unpack the 'how' and 'why' questions and illuminate the many, varied and interdependent mechanisms by which interventions may work (or ...

  12. Reflections on experimental research in medical education

    1. We agree that medical education. research should be rigorous. However, certain aspects of study design require special. consideration and emphasis. Descriptive, correlational, causal ...

  13. Reflections on experimental research in medical education

    As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently ...

  14. Quantitative Research Methods in Medical Education

    The past three decades of research have seen substantial advances in medical education, much of it directly related to the application of sophisticated quantitative methods, particularly in the area of student assessment. ... The chapter distinguishes four research traditions - experimental, epidemiological, psychometric, and correlational ...

  15. Types of Study in Medical Research

    Basic medical research (otherwise known as experimental research) includes animal experiments, cell studies, biochemical, genetic and physiological investigations, and studies on the properties of drugs and materials. In almost all experiments, at least one independent variable is varied and the effects on the dependent variable are investigated.

  16. Application of the Case Study Method in Medical Education

    case study method is considered to be the link between theory and practice in. medical education (Turk et al., 2019). The case study dates back to Harvard Law School in the 1870s (Servant-Miklos ...

  17. 2.4: Experimental Design and rise of statistics in medical research

    The principles of good experiments include many steps beyond simply choosing treatments and controls. In Chapter 5 we'll go into more depth, but I wished to list for you some of the key principles of good experimental design. With respect to human-subject research, the researcher needs to protect against many sources of potential bias.

  18. Vaccine hesitancy educational interventions for medical students: A

    His research interests focus primarily on medical education, particularly within primary care and community settings. Gill Rowlands Her main research interests are in the area of health inequalities, particularly the role of health literacy in health, and the role of GPs in identifying and addressing the problems faced by patients with lower ...

  19. 77 Exciting Medical Research Topics (2024)

    These issues result in health disparities and injustices. Examples of research topics about health inequities include: The impact of social determinants of health in a set population. Early and late-stage cancer stage diagnosis in urban vs. rural populations. Affordability of life-saving medications.

  20. Practical Guide to Education Program Evaluation Research

    Practical Guide to Experimental and Quasi-Experimental Research in Surgical Education. Roy Phitayakorn, MD, MHPE; Todd A. Schwartz, DrPH; Gerard M. Doherty, MD. JAMA Surgery. Guide to Statistics and Methods. ... In medical education, as in other fields, the quality of the synthesis is dependent on the rigor by which evaluations are performed. ...

  21. Clinical virtual simulation in nursing education: Randomized controlled

    [Correction Notice: An Erratum for this article was reported in Vol 21(6)[e14155] of Journal of Medical Internet Research (see record 2019-44689-001). In the original article, a paragraph from the Results section under the subheading "Self-Efficacy Perception" was erroneously duplicated in the Discussion section under the subheading "Clinical Virtual Simulation in Nursing Education." The ...

  22. Education

    Department of Medical Biology - Head of Department - Ph.D., Professor Dmitri Menglet Programs of the Departments include face-to-face classes and webinars, as well as lecture courses. RMS regularly publishes methodical manuals, reference and informational materials on various issues of clinical and experimental medicine.

  23. Cultural Competency Medical Education Program

    Elena Atochina-Vasserman, M.D.,Ph.D, is a Senior Laboratory Investigator in the Pulmonary, Allergy, and Critical Care Division (PACCD) of the Department of Medicine at the University of Pennsylvania. Dr. Atochina earned her M.D. from Tomsk Medical School and a Ph.D. in Biochemistry in 1990 from Russian Cardiology Research Center, Moscow, Russia.

  24. Research Assistant

    Job Type: Support Staff - Union Bargaining Unit: SSA Regular/Temporary: Regular End Date if Temporary: Hours Per Week: 35 Standard Work Schedule: Building: Salary Range: $59,845.49 - $59,845.49 The salary of the finalist selected for this role will be set based on a variety of factors, including but not limited to departmental budgets, qualifications, experience, education, licenses, specialty ...

  25. About Institute

    The National Medical Research Center of Surgery named after A. Vishnevsky is a large educational center. Each year about 250 young professionals and experienced Russian doctors come to the Center to pass their post-graduate programs and to get additional professional education.