Nursing Research Nursing Test Bank and Practice Questions (60 Items)


Welcome to your nursing test bank and practice questions for nursing research.

Nursing Research Test Bank

Nursing research has great significance for contemporary and future professional nursing practice, making it an essential component of the educational process. Research is typically not among the traditional responsibilities of an entry-level nurse. Many nurses are involved in either direct patient care or the administrative aspects of health care. However, nursing research is a growing field in which individuals within the profession can contribute a variety of skills and experiences to the science of nursing care. Nursing research is critical to the nursing profession and is necessary for continuing advancements that promote optimal nursing care. Test your knowledge about nursing research in this 60-item nursing test bank.

Quiz Guidelines

Before you start, here are some examination guidelines and reminders you must read:

  ‱ Practice Exams: Engage with our Practice Exams to hone your skills in a supportive, low-pressure environment. These exams provide immediate feedback and explanations, helping you grasp core concepts, identify improvement areas, and build confidence in your knowledge and abilities.
  ‱ You’re given 2 minutes per item.
  ‱ For Challenge Exams, click on the “Start Quiz” button to start the quiz.
  ‱ Complete the quiz: Ensure that you answer the entire quiz. Only after you’ve answered every item will the score and rationales be shown.
  ‱ Learn from the rationales: After each quiz, click on the “View Questions” button to understand the explanation for each answer.
  ‱ Free access: Guess what? Our test banks are 100% FREE. Skip the hassle – no sign-ups or registrations here. A sincere promise from Nurseslabs: we have not and won’t ever request your credit card details or personal info for our practice questions. We’re dedicated to keeping this service accessible and cost-free, especially for our amazing students and nurses. So, take the leap and elevate your career hassle-free!
  ‱ Share your thoughts: We’d love your feedback, scores, and questions! Please share them in the comments below.


Recommended Resources

Recommended books and resources for your NCLEX success:

Disclosure: Included below are affiliate links from Amazon, at no additional cost to you. We may earn a small commission from your purchase. For more information, check out our privacy policy.

Saunders Comprehensive Review for the NCLEX-RN Examination is often referred to as the best nursing exam review book ever. More than 5,700 practice questions are available in the text. Detailed test-taking strategies are provided for each question, with hints for analyzing and uncovering the correct answer option.


Strategies for Student Success on the Next Generation NCLEX® (NGN) Test Items Next Generation NCLEX®-style practice questions of all types are illustrated through stand-alone case studies and unfolding case studies. The NCSBN Clinical Judgment Measurement Model (NCJMM) is included throughout, with case scenarios that integrate the six clinical judgment cognitive skills.


Saunders Q & A Review for the NCLEX-RN® Examination This edition contains over 6,000 practice questions with each question containing a test-taking strategy and justifications for correct and incorrect answers to enhance review. Questions are organized according to the most recent NCLEX-RN test blueprint Client Needs and Integrated Processes. Questions are written at higher cognitive levels (applying, analyzing, synthesizing, evaluating, and creating) than those on the test itself.


NCLEX-RN Prep Plus by Kaplan employs expert critical thinking techniques and targeted sample questions. This edition identifies seven types of NGN questions and explains in detail how to approach and answer each type. In addition, it provides 10 critical thinking pathways for analyzing exam questions.


Illustrated Study Guide for the NCLEX-RN® Exam The 10th edition of this study guide gives you a robust, visual, less-intimidating way to remember key facts. 2,500 review questions are now included on the Evolve companion website, and 25 additional illustrations and mnemonics make the book more appealing than ever.


NCLEX RN Examination Prep Flashcards (2023 Edition) NCLEX RN Exam Review FlashCards Study Guide with Practice Test Questions [Full-Color Cards] from Test Prep Books. These flashcards are ready for use, allowing you to begin studying immediately. Each flash card is color-coded for easy subject identification.


Recommended Links

If you need more information or practice quizzes, please do visit the following links:

An investment in knowledge pays the best interest. Keep up the pace and continue learning with these practice quizzes:

  ‱ Nursing Test Bank: Free Practice Questions UPDATED! Our most comprehensive and updated nursing test bank, with over 3,500 practice questions covering a wide range of nursing topics, absolutely free!
  ‱ NCLEX Questions Nursing Test Bank and Review UPDATED! More than 1,000 comprehensive NCLEX practice questions covering different nursing topics. We’ve made a significant effort to provide you with the most challenging questions, along with insightful rationales for each question to reinforce learning.

Research Methods in Early Childhood: An Introductory Guide

Student Resources: Multiple Choice Quiz

Test your understanding of each chapter by taking the quiz below. Click anywhere on the question to reveal the answer. Good luck!

1. A literature review is best described as:

  • A list of relevant articles and other published material you have read about your topic, describing the content of each source
  ‱ An internet search for articles describing research relevant to your topic, criticising the methodology and reliability of the findings
  • An evaluative overview of what is known about a topic, based on published research and theoretical accounts, which serves as a basis for future research or policy decisions
  • An essay looking at the theoretical background to your research study

2. Choose the best answer. A literature review:

  ‱ Is conducted after you have decided upon your research question
  ‱ Helps in the formulation of your research aim and research question
  ‱ Is the last thing to be written in your research report
  ‱ Is not part of a research proposal

3. Choose the best answer. Which is the most reliable source of information for your literature review?

  • A TV documentary
  • A newspaper article
  • A peer reviewed research article
  • A relevant chapter from a textbook

4. Choose the best answer. Critical analysis means

  • Subjecting the literature to a process of interrogation in order to assess the relevance, authenticity and reliability of the literature together with the summarising of common thematic areas of discussion
  • An evaluation of past research being critical of the methodology used and describing how your methodology will be an improvement
  • An analysis of theoretical approaches showing how they are no longer valid according to our current state of knowledge
  • Looking at the way articles are structured, pointing out logical inconsistencies

5. Which is not a reason for accurate referencing in your literature review?

  • Accurate referencing is needed so that tutors can follow up your sources and check that you have reported them accurately
  ‱ Accurate referencing is needed so that researchers who read your work are alerted to sources that might be helpful for them
  • Referencing shows that you go to the library when not in lectures
  • Accurate referencing is required because it is an academic convention


Designing, Conducting, and Reporting Survey Studies: A Primer for Researchers

Olena Zimba

1 Department of Clinical Rheumatology and Immunology, University Hospital in Krakow, Krakow, Poland.

2 National Institute of Geriatrics, Rheumatology and Rehabilitation, Warsaw, Poland.

3 Department of Internal Medicine N2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.

Armen Yuri Gasparyan

4 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, UK.

Survey studies have become instrumental in contributing to the accumulation of evidence in rapidly developing medical disciplines such as medical education, public health, and nursing. The global medical community has seen an upsurge of surveys covering the experience and perceptions of health specialists, patients, and public representatives in the peri-pandemic coronavirus disease 2019 period. Currently, surveys can play a central role in increasing research activities in non-mainstream science countries where limited research funding and other barriers hinder science growth. Planning a survey starts with an overview of related reviews and other publications, which may help in designing questionnaires with comprehensive coverage of all related points. The validity and reliability of questionnaires rely on input from experts and potential responders, who may suggest pertinent revisions to prepare forms with attractive designs, easily understandable questions, and correctly ordered points that appeal to target respondents. Numerous online platforms, such as Google Forms and SurveyMonkey, enable moderating online surveys and collecting responses from large numbers of responders. Online surveys benefit from the dissemination of questionnaires via social media and other online platforms, which facilitates survey internationalization and the participation of large groups of responders. Survey reporting can be arranged in line with related recommendations and reporting standards, all of which have their own strengths and limitations. The current article overviews available recommendations and presents pointers on designing, conducting, and reporting surveys.

INTRODUCTION

Surveys are increasingly popular research studies that are aimed at collecting and analyzing opinions of diverse subject groups at certain periods. Initially and predominantly employed for applied social science research, 1 surveys have maintained their social dimension and transformed into indispensable tools for analyzing knowledge, perceptions, prevalence of clinical conditions, and practices in the medical sciences. 2 In rapidly developing disciplines with social dimensions such as medical education, public health, and nursing, online surveys have become essential for monitoring and auditing healthcare and education services 3 , 4 and generating new hypotheses and research questions. 5 In non-mainstream science countries with uninterrupted Internet access, online surveys have also been praised as useful studies for increasing research activities. 6

In 2016, the Medical Subject Headings (MeSH) vocabulary of the US National Library of Medicine introduced "surveys and questionnaires" as a structured keyword, defining survey studies as "collections of data obtained from voluntary subjects" ( https://www.ncbi.nlm.nih.gov/mesh/?term=surveys+and+questionnaires ). Such studies are instrumental in the absence of evidence from randomized controlled trials, systematic reviews, and cohort studies. Tagging survey reports with this MeSH term is advisable for increasing the retrieval of relevant documents while searching through Medline, Scopus, and other global databases.
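To illustrate the retrieval point, the short Python sketch below counts PubMed records that carry the "surveys and questionnaires" MeSH tag for a chosen topic. It is a minimal example that assumes the public NCBI E-utilities esearch endpoint and the third-party requests package; the topic string and function name are illustrative only.

```python
# Minimal sketch: counting PubMed records tagged with the "surveys and
# questionnaires" MeSH term for a chosen topic. Assumes the public NCBI
# E-utilities esearch endpoint and the third-party "requests" package;
# the topic string is illustrative only.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_tagged_surveys(topic: str) -> int:
    """Return the number of PubMed records matching the topic AND the MeSH tag."""
    term = f'({topic}) AND "surveys and questionnaires"[MeSH Terms]'
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

print(count_tagged_surveys("medical education"))
```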

Surveys are relatively easy to conduct by distributing web-based and non-web-based questionnaires to large groups of potential responders. The ease of conduct primarily depends on the way of approaching potential respondents. Face-to-face interviews, regular postal mail, e-mails, phone calls, and social media posts can be employed to reach numerous potential respondents. Digitization and the popularization of social media have improved the distribution of questionnaires, expanded respondents' engagement, facilitated swift data processing, and globalized survey studies. 7

SURVEY REPORTING GUIDANCE

Despite the ease of conducting survey studies and their importance for maintaining research activities across academic disciplines, their methodological quality, reproducibility, and implications vary widely. Deficiencies in designing and reporting are the main reason for the inefficiency of some surveys. For instance, systematic analyses of survey methodologies in nephrology, transfusion medicine, and radiology have indicated that less than one-third of related reports provide valid and reliable data. 8 , 9 , 10 Additionally, the absence of discussions of respondents' representativeness, reasons for nonresponse, and the generalizability of the results has been pinpointed as a drawback of some survey reports. The revealed deficiencies have justified the need for survey designing and data processing in line with reporting recommendations, including those listed on the EQUATOR Network website ( https://www.equator-network.org/ ).

Arguably, survey studies lack discipline-specific and globally acceptable reporting guidance. The diversity of surveyed subjects and populations is perhaps the main confounder. Although most questionnaires contain socio-demographic questions, there are no reporting guidelines specifically tailored to comprehensive surveys of specialists across different academic disciplines, patients, and public representatives.

The EQUATOR Network platform currently lists some widely promoted documents with statements on conducting and reporting web-based and non-web-based surveys (Table 1). 11 , 12 , 13 , 14 The oldest published recommendation provides guidance on postal, face-to-face, and telephone interviews. 1 One of its critical points highlights the need to formulate a clear and explicit question/objective to run a focused survey and to design questionnaires with a respondent-friendly layout and content. 1 The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is the most-used document for reporting online surveys. 11 The CHERRIES checklist includes points on ensuring the reliability of online surveys and avoiding manipulations with multiple entries by the same users. 11 A specific set of recommendations, listed by the EQUATOR Network, is available for specialists who plan web-based and non-web-based surveys of knowledge, attitude, and practice in clinical medicine. 12 These recommendations help design valid questionnaires, survey representative subjects with clinical knowledge, and complete transparent reporting of the obtained results. 12


From January 2018 to December 2019, three rounds of surveying experts with an interest in surveys and questionnaires led to consensus on a set of points for reporting web-based and non-web-based surveys. 13 The resulting Consensus-Based Checklist for Reporting of Survey Studies rates 19 items of survey reports, from titles to acknowledgments. 13 Finally, rapid recommendations on online surveys amid the coronavirus disease 2019 (COVID-19) pandemic were published to guide authors on how to choose social media and other online platforms for disseminating questionnaires and targeting representative groups of respondents. 14

Adhering to a combination of these recommendations is advisable to minimize the limitations of each document and increase the transparency of survey reports. For cross-sectional analyses of large sample sizes, additionally consulting the STROBE standard of the EQUATOR Network may further improve the accuracy of reporting respondents' inclusion and exclusion criteria. In fact, there are examples of online survey reports adhering to both CHERRIES and STROBE recommendations. 15 , 16

ETHICS CONSIDERATIONS

Although health research authorities in some countries lack mandates for full ethics review of survey studies, obtaining formal review protocols or ethics waivers is advisable for most surveys involving respondents from more than one country. Following country-based regulations and ethical norms of research is therefore mandatory. 14 , 17

Full ethics review or exemption procedures are important steps in planning and conducting ethically sound surveys. Given their non-interventional nature and the absence of immediate health risks for participants, ethics committees may approve survey protocols without a full ethics review. 18 A full ethics review is, however, required when the informational and psychological harms of a survey increase the risks. 18 Informational harms may result from unauthorized access to respondents' personal data and stigmatization of respondents through leaked information about social diseases. Psychological harms may include anxiety, depression, and exacerbation of underlying psychiatric diseases.

Survey questionnaires submitted for evaluation should indicate how informed consent is obtained from respondents. 13 Additionally, information about confidentiality, anonymity, questionnaire delivery modes, compensations, and mechanisms preventing unauthorized access to questionnaires should be provided. 13 , 14 Ethical considerations and validation are especially important in studies involving vulnerable and marginalized subjects with diminished autonomy and poor social status due to dementia, substance abuse, inappropriate sexual behavior, and certain infections. 18 , 19 , 20 Precautions should be taken to avoid confidentiality breaches and bot activities when surveying via insecure online platforms. 21

Monetary compensation helps attract respondents to fill out lengthy questionnaires. However, such incentives may encourage surveyees whose primary interest is the compensation to deceive the system. 22 Ethics review protocols may include points on recording online responders' IP addresses and blocking duplicate submissions from the same Internet locations. 22 IP addresses are viewed as personal information in the EU, but not in the US. Notably, IP identification may deter some potential responders in the EU. 21
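As a rough illustration of that safeguard, the Python sketch below drops duplicate submissions that arrive from the same Internet location while storing only salted hashes of IP addresses rather than the raw values. The response format, field names, and salt handling are assumptions made for the example, not part of the cited recommendations.

```python
# Minimal sketch: keeping only the first submission from each Internet
# location while storing salted hashes of IP addresses instead of raw
# values (IP addresses are treated as personal data in the EU). The
# response format, field names, and salt handling are assumptions.
import hashlib
from typing import Iterable

def deduplicate_by_ip(responses: Iterable[dict], salt: str) -> list[dict]:
    """Keep the first submission per hashed IP address and drop the rest."""
    seen: set[str] = set()
    kept: list[dict] = []
    for response in responses:
        digest = hashlib.sha256((salt + response["ip"]).encode()).hexdigest()
        if digest in seen:
            continue  # duplicate entry from the same location
        seen.add(digest)
        record = {key: value for key, value in response.items() if key != "ip"}
        record["ip_hash"] = digest  # retain the hash only, never the raw address
        kept.append(record)
    return kept
```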

PATIENT KNOWLEDGE AND PERCEPTION SURVEYS

The design of patient knowledge and perception surveys is insufficiently defined and poorly explored. Although such surveys are aimed at consistently covering research questions on clinical presentation, prevention, and treatment, more emphasis is now placed on psychometric aspects of designing related questionnaires. 23 , 24 , 25 Targeting responsive patient groups to collect reliable answers is yet another challenge that can be addressed by distributing questionnaires to patients with good knowledge of their diseases, particularly those registering with university-affiliated clinics and representing patient associations. 26 , 27 , 28

The structure of questionnaires may differ for surveys of patient groups with various age-dependent health issues. Care should be taken when children are targeted since they often report a variety of modifiable conditions, such as anxiety and depression, musculoskeletal problems, and pain, affecting their quality of life. 29 Likewise, gender and age differences should be considered in questionnaires addressing quality of life in association with mental health and social status. 30 Questionnaires for older adults may benefit from including questions about social support and assistance in the context of care for age-related diseases. 31 Finally, addressing the need for digital technologies and home-care applications may help to ensure the completeness of questionnaires for older adults with sedentary lifestyles and mobility disabilities. 32 , 33

SOCIAL MEDIA FOR QUESTIONNAIRE DISTRIBUTION

The widespread use of social media has made it easier to distribute questionnaires to a large number of potential responders. Employing popular platforms such as Twitter and Facebook has become particularly useful for conducting nationwide surveys on awareness and concerns about global health and pandemic issues. 34 , 35 When various social media platforms are simultaneously employed, participants' sociodemographic factors such as gender, age, and level of education may confound the study results. 36 Knowing targeted groups' preferred online networking and communication sites may better direct the questionnaire distribution. 37 , 38 , 39

Preliminary evidence suggests that distributing survey links via the social media accounts of individual users and organized e-groups with an interest in specific health issues may increase engagement and the correctness of responses. 40 , 41

Since surveys employing social media are publicly accessible, related questionnaires should be professionally edited so that they pose questions target populations can easily answer, avoid sensitive and disturbing points, and ensure privacy and confidentiality. 42 , 43 Although counting e-post views is feasible, the response rates of questionnaires distributed via social media are practically impossible to record. The latter is an inherent limitation of such surveys.

SURVEY SAMPLING

Establishing connections with target populations and diversifying questionnaire dissemination may increase the rigor of the surveys that are now administered in abundance. 44 Sample sizes depend on various factors, including the chosen topic, aim, and sampling strategy (random or non-random). 12 Some topics such as COVID-19 and global health may easily attract the attention of large respondent groups motivated to answer a variety of questionnaire questions. At the beginning of the pandemic, most surveys employed non-random (non-probability) sampling strategies, which resulted in analyses of numerous responses without response rate calculations. These qualitative research studies were mainly aimed at analyzing the opinions of specialists and patients exposed to COVID-19 in order to develop rapid guidelines and initiate clinical trials.

Outside the pandemic, and beyond hot topics, there is a growing trend of low response rates and inadequate representation of target populations. 45 Such a trend makes it difficult to design and conduct random (probability) surveys. Consequently, current online surveys often omit randomization and sample size calculation, ending up as qualitative analyses and pilot studies. In fact, convenience (non-random or non-probability) sampling can be particularly suitable for previously unexplored and emerging topics, when overviewing the literature cannot help estimate optimal samples and entirely new questionnaires should be designed and tested. The limitations of convenience sampling reduce the generalizability of the conclusions, since sample representativeness is uncertain. 45
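Where probability sampling is feasible, the required sample size for estimating a proportion is often approximated with the normal-approximation formula n0 = z^2 * p * (1 - p) / e^2, optionally adjusted with a finite population correction. The Python sketch below is a minimal illustration of that calculation; the default confidence level, margin of error, and example population are assumptions, not values prescribed by the sources cited here.

```python
# Minimal sketch: required sample size for estimating a proportion, using
# the normal-approximation formula n0 = z^2 * p * (1 - p) / e^2 with an
# optional finite population correction. The defaults (95% confidence,
# 5% margin of error, p = 0.5) and the example population are assumptions.
import math
from typing import Optional

def required_sample_size(population: Optional[int] = None,
                         margin_of_error: float = 0.05,
                         confidence_z: float = 1.96,
                         expected_proportion: float = 0.5) -> int:
    n0 = (confidence_z ** 2) * expected_proportion * (1 - expected_proportion) / margin_of_error ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n0)

print(required_sample_size())                 # ~385 for a very large population
print(required_sample_size(population=2000))  # ~323 for a hypothetical society of 2,000 members
```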

Researchers often employ 'snowball' sampling techniques, with initial surveyees forwarding the questionnaires to other interested respondents, thereby maximizing the sample size. Another common technique for obtaining more responses relies on generating regular social media reminders and resending e-mails to interested individuals and groups. Such tactics can increase the study duration but cannot exclude participation bias and non-response.

Purposive or targeted sampling is perhaps the most precise technique when the target population size is known and respondents are ready to fill in the questionnaires correctly, allowing an exact estimate of the response rate, close to 100%. 46

DESIGNING QUESTIONNAIRES

Correctness, confidentiality, privacy, and anonymity are critical points of inquiry in questionnaires. 47 Correctly worded and convincingly presented survey invitations with consenting options and reassurances of secure data processing may increase response rates and ensure the validity of responses. 47 Online surveys are believed to be more advantageous than offline inquiries for ensuring anonymity and privacy, particularly for targeting socially marginalized and stigmatized subjects. Online study design is indeed optimal for collecting more responses in surveys of sex- and gender-related and otherwise sensitive topics.

Performing comprehensive literature reviews, consultations with subject experts, and Delphi exercises may all help to specify survey objectives, identify questionnaire domains, and formulate pertinent questions. Literature searches are required for in-depth topic coverage and identification of previously published relevant surveys. Analyzing the characteristics of previous questionnaires allows modifications to be made when designing new self-administered surveys. The justification of new studies should correctly acknowledge similar published reports to avoid redundancies.

The initial part of a questionnaire usually includes a short introduction/preamble/cover letter that specifies the objectives, target respondents, potential benefits and risks, and the moderators' contact details for further inquiries. This part may motivate potential respondents to consent and answer questions. The specifics, volume, and format of the other parts depend on revisions in response to pretesting and pilot testing. 48 Pretesting usually involves co-authors, other contributors, and colleagues with an interest in the subject, while pilot testing usually involves 5-10 target respondents who are well familiar with the subject and can swiftly complete the questionnaires. The guidance obtained at the pretesting and pilot testing stages allows editing, shortening, or expanding questionnaire sections. Although guidance on questionnaire length and question numbers is scarce, some experts empirically consider 5 domains with 5 questions each as optimal. 12 Lengthy questionnaires may be biased due to respondents' fatigue and inability to answer numerous and complicated questions. 46

Questionnaire revisions are aimed at ensuring the validity and consistency of questions, meaning that they appeal to relevant responders and accurately cover all essential points. 45 Valid questionnaires enable reliable and reproducible survey studies that yield the same responses to variably worded and differently placed questions. 45

Various combinations of open-ended and close-ended questions are advisable to comprehensively cover all pertinent points and enable easy and quick completion of questionnaires. Open-ended questions are usually included in small numbers since they require more time to answer. 46 Also, the interpretation and analysis of responses to open-ended questions hardly contribute to generating robust qualitative data. 49 Close-ended questions with single and multiple-choice answers constitute the main part of a questionnaire, with single answers easier to analyze and report. Questions with single answers can be presented as Likert-type items with 3 or more points (e.g., yes/no/do not know).

Avoiding too simplistic (yes/no) questions and replacing them with Likert-scale items may increase the robustness of questionnaire analyses. 50 Additionally, constructing easily understandable questions, excluding merged items with two or more points, and moving sophisticated questions to the beginning of a questionnaire may add to the quality and feasibility of the study. 50
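During pilot testing, the internal consistency of a block of Likert-scale items is commonly summarized with Cronbach's alpha, computed as alpha = k/(k - 1) * (1 - sum of item variances / variance of the total score). The sketch below is a minimal NumPy implementation; the 5-point pilot responses are invented purely for illustration.

```python
# Minimal sketch: Cronbach's alpha for a block of Likert-scale items,
# alpha = k/(k - 1) * (1 - sum(item variances) / variance(total score)).
# The 5-point pilot responses below are invented for illustration.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array with rows = respondents and columns = items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

pilot = np.array([[4, 5, 4, 4],
                  [2, 2, 3, 2],
                  [5, 4, 5, 5],
                  [3, 3, 3, 4],
                  [1, 2, 2, 1]])
print(round(cronbach_alpha(pilot), 2))  # internal consistency of the four items
```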

Survey studies are increasingly conducted by health professionals to swiftly explore opinions on a wide range of topics by diverse groups of specialists, patients, and public representatives. Arguably, quality surveys with generalizable results can be instrumental for guiding health practitioners in times of crises such as the COVID-19 pandemic when clinical trials, systematic reviews, and other evidence-based reports are scarcely available or absent. Online surveys can be particularly valuable for collecting and analyzing specialist, patient, and other subjects' responses in non-mainstream science countries where top evidence-based studies are scarce commodities and research funding is limited. Accumulated expertise in drafting quality questionnaires and conducting robust surveys is valuable for producing new data and generating new hypotheses and research questions.

The main advantages of surveys are related to the ease of conducting such studies with limited or no research funding. Digitization and social media advances have further contributed to the ease of surveying and the growing global interest in surveys among health professionals. Some disadvantages of current surveys are perhaps those related to imperfections of the digital platforms used for disseminating questionnaires and analyzing responses.

Although some survey reporting standards and recommendations are available, none of them comprehensively covers all items of questionnaires and steps in surveying. None of the survey reporting standards is based on summarizing the guidance of a large number of contributors involved in related research projects. As such, presenting the current guidance with a list of items for survey reports (Table 2) may help authors better design and publish related articles.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Zimba O.
  • Formal analysis: Zimba O, Gasparyan AY.
  • Writing - original draft: Zimba O.
  • Writing - review & editing: Zimba O, Gasparyan AY.

Tool in School: Quizlet

  • Posted May 22, 2018
  • By Lory Hough


When 15-year-old Andrew Sutherland created a software program in 2005 to help him study 111 French terms for a test on animals, little did he imagine that the program would eventually become one of the fastest-growing free education tools, with 30 million monthly users from 130 countries.

“Quizlet has absolutely become a valuable tool,” Sutherland says. “In the United States, half of all high school students and a third of all college students use us every month. That’s not something I expected to happen when I made it in high school, and it speaks to how essential it has become.”

Part of the appeal is that Quizlet takes a simple idea — picture paper flash cards — but gives it a modern twist. Online users create study sets (terms and definitions) or use study sets created by others, including classmates. They then have multiple ways to study the information: virtual flashcards or typing in answers to written or audio prompts. There are also two games: match (drag the correct answer) and gravity (type the correct answer as asteroids fall).

The online format is key, he says. “The appeal of a digital learning tool is that it can ask much more dynamic questions than what you can do with paper. Quizlet can figure out what material you’re struggling with and just focus on that. It can also verify what you know and coach you to only stop studying when it thinks you’re ready.”

Recently, they launched Quizlet Live for students to work in teams during class. Teacher feedback was key, he says, but adds, “My favorite type of feedback is hearing from teachers about new use cases. The other day I was at a chocolate store and wearing my Quizlet shirt. The woman there said she uses Quizlet to train all their new employees about their chocolate. I want that job!”



9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, survey as a formal research method was pioneered in the 1930–40s by sociologist Paul Lazarsfeld to examine the effects of the radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.
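As a concrete illustration of that aggregation step, the sketch below averages one respondent's item scores into a composite scale score after reverse-coding negatively worded items on a 1-5 scale. The item names, the choice of reverse-coded items, and the scoring convention are assumptions for the example.

```python
# Minimal sketch: aggregating item responses into a composite scale score,
# reverse-coding negatively worded items on a 1-5 scale. Item names and the
# choice of reverse-coded items are assumptions made for the example.

def composite_score(responses: dict[str, int],
                    reverse_items: set[str],
                    scale_max: int = 5) -> float:
    """Mean of item scores after reverse-coding, on the original 1..scale_max metric."""
    adjusted = []
    for item, value in responses.items():
        if item in reverse_items:
            value = scale_max + 1 - value  # e.g., 5 -> 1 and 4 -> 2 on a 5-point scale
        adjusted.append(value)
    return sum(adjusted) / len(adjusted)

one_respondent = {"item1": 4, "item2": 5, "item3_negative": 2, "item4": 4}
print(composite_score(one_respondent, reverse_items={"item3_negative"}))
```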

Most questionnaire surveys tend to be self-administered postal surveys, where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low, since most people ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey, or they may even simply lose it. Hence, the researcher must continuously monitor responses as they are being returned, track non-respondents, and send them repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire . A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
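Before analysis, responses in each of these formats are usually coded numerically. The sketch below shows one plausible coding scheme in plain Python; the labels, category orders, and variable names are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch: numeric coding for the response formats described above.
# The labels, category orders, and variable names are illustrative assumptions.

dichotomous = {"no": 0, "yes": 1}                          # two possible choices
nominal = {"manufacturing": 1, "retail": 2, "other": 3}    # unordered categories
ordinal = {"high school": 1, "bachelor's degree": 2,       # ordered categories
           "postgraduate degree": 3}
likert_5 = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
            "agree": 4, "strongly agree": 5}               # interval-level by convention

def code_response(raw: str, codebook: dict[str, int]) -> int:
    """Map a raw answer to its numeric code; unexpected answers raise KeyError."""
    return codebook[raw.strip().lower()]

age = 37  # continuous (ratio-scaled) responses are recorded as entered
print(code_response("Agree", likert_5), age)
```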

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable ?: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner ?: Negatively worded questions such as ‘Should your local government not raise taxes?’ tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous ?: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you are referring to salary/wages, or also dividend, rental, and other income, whether you are referring to personal income, family income (including spouse’s wages), or personal and business income. Different interpretation by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words ?: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ and less for ‘welfare’, even though both terms had the same meaning. In this study, more support was also observed for ‘halting rising crime rate’ and less for ‘law enforcement’, more for ‘solving problems of big cities’ and less for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ and less for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled ?: Double-barrelled questions are those that can have multiple answers. For example, ‘Are you satisfied with the hardware and software provided for your work?’. In this example, how should a respondent answer if they are satisfied with the hardware, but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: ‘Are you satisfied with the hardware provided for your work?’, and ’Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general ?: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book on a response scale ranging from ‘not at all’ to ‘extremely well’, and that person selected ‘extremely well’, what do they mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others, or do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’, and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed ?: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.

Is the question presumptuous ?: If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary ?: A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money before and have never thought about it—they may not even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period—and so their answers tend to be quite random, such as take a tour around the world, buy a restaurant or bar, spend on education, save for retirement, help parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question ?: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about 
’.

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3’.
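In an online instrument, a filter or contingency rule like the one above is typically implemented as branching (skip) logic. The sketch below is a minimal illustration; the question identifier and section names are assumptions for the example.

```python
# Minimal sketch: skip (branching) logic for a contingency question, e.g.,
# "If you answered 'yes' to question 5, proceed to Section 2; otherwise go
# to Section 3." The question identifier and section names are illustrative.

def next_section(answers: dict[str, str]) -> str:
    """Route the respondent based on the answer to the filter question q5."""
    if answers.get("q5", "").strip().lower() == "yes":
        return "section_2"  # follow-up block for those who answered "yes"
    return "section_3"      # everyone else skips the follow-up block

print(next_section({"q5": "Yes"}))  # -> section_2
print(next_section({"q5": "no"}))   # -> section_3
```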

Other golden rules . Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview , where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group. In this technique, a small group of respondents (usually 6–10 respondents) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussion may be dominated by a strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially while dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly being used by academic, government, and commercial survey researchers. Here, the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
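The random digit dialling step can be sketched as follows: candidate numbers are generated at random within chosen area codes and then screened before dialling. This is a minimal illustration only; the number format and area codes are assumptions, and production CATI systems also filter out invalid and business numbers.

```python
# Minimal sketch: generating a random-digit-dialling sample of candidate
# phone numbers within chosen area codes. The number format and area codes
# are assumptions; real CATI systems also screen out invalid and business
# numbers before dialling.
import random

def rdd_sample(area_codes: list[str], n: int, seed: int = 7) -> list[str]:
    rng = random.Random(seed)  # seeded for reproducibility
    numbers: set[str] = set()
    while len(numbers) < n:
        area = rng.choice(area_codes)
        local = rng.randint(0, 9_999_999)
        numbers.add(f"{area}-{local:07d}")
    return sorted(numbers)

print(rdd_sample(["212", "617"], n=5))
```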

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themself, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary, and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without going into the next question may suggest to respondents that the interviewer is waiting for more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives : Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias . Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe his/her utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
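For illustration, below is a minimal sketch of Harman’s single-factor check, using an unrotated principal component analysis as a common approximation of the test; the `responses` matrix, its dimensions, and the 50 per cent variance rule of thumb are illustrative assumptions rather than details drawn from any study cited here.

```python
# Minimal sketch of Harman's single-factor check (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

# 'responses' stands in for a respondents-by-items matrix of survey answers
# covering both the independent and dependent variable items (hypothetical data).
rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 12))

pca = PCA()                     # unrotated principal components
pca.fit(responses)
first_factor_share = pca.explained_variance_ratio_[0]

# Common rule of thumb (an assumption, not a universal standard): if one
# unrotated factor explains the majority of the variance (e.g., > 0.50),
# common method bias may be a concern.
print(f"Variance explained by the first factor: {first_factor_share:.2%}")
```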

  • Dillman, D. (1978). Mail and telephone surveys: The total design method . New York: Wiley. ↵
  ‱ Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly , 53(3), 388–394. ↵
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology , 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879. ↵
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology , 86(1), 114–121. ↵

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


SYSTEMATIC REVIEW article

Quantifying cognitive and affective impacts of Quizlet on learning outcomes: a systematic review and comprehensive meta-analysis

Osman Özdemir

  • 1 Foreign Language Education, School of Foreign Languages, Selcuk University, Konya, Türkiye
  • 2 Foreign Language Education, School of Foreign Languages, Akdeniz University, Antalya, Türkiye

Background: This study synthesizes research on the impact of Quizlet on learners’ vocabulary learning achievement, retention, and attitude. Quizlet’s implementation in language education is posited to enhance the learning experience by facilitating the efficient and engaging assimilation of new linguistic concepts. The study aims to determine the extent to which Quizlet influences vocabulary learning achievement, retention, and attitude.

Methods: Employing a meta-analysis approach, this study investigates the primary research question: “Does Quizlet affect students’ vocabulary learning achievement, learning retention, and attitude?” Data were collected from various databases, identifying 94 studies, of which 23 met the inclusion criteria. The coding reliability was established at 98%, indicating a high degree of agreement among experts. A combination of random and fixed effects models was used to analyze the effect size of Quizlet on each outcome variable.

Results: Quizlet was found to have a statistically significant impact on learners’ vocabulary learning achievement, retention, and attitude. Specifically, it showed moderate effects on vocabulary learning achievement ( g = 0.62) and retention ( g = 0.74), and a small effect on student attitude ( g = 0.37). The adoption of the fixed effects model for attitude was due to homogeneous distribution, while the random effects model was used for achievement and retention because of heterogeneous distribution.

Conclusion: Quizlet enhances vocabulary learning achievement and retention, and has a small positive effect on learner attitude. Its integration into language education curricula is recommended to leverage these benefits. Further research is encouraged to explore the optimization of Quizlet and similar platforms for educational success.

1 Introduction

The exponential expansion of digital technologies within the realm of pedagogy has sparked an escalating curiosity in scrutinizing their effects on academic achievement among students. This surge in interest calls for a thorough examination of how these technological tools are reshaping educational practices and outcomes. Students are frequently referred to as “Digital Natives” on account of their innate fluency with various technological devices such as computers, the internet, and video games ( Prensky, 2009 ). This inherent proficiency has been pivotal in driving the integration of digital tools in educational settings. The seamless incorporation of these technologies into classrooms, particularly language learning classrooms, highlights the evolving dynamics of modern education and emphasizes the need for empirical research to assess their impact. The field of language education has undergone a significant transformation due to the increasing influence of technology, resulting in a shift towards the integration of computers, mobile devices, and technology into teaching and learning practices ( Aprilani, 2021 ). This paradigm shift demonstrates the critical role of technology in facilitating innovative teaching and learning strategies and thus improving the quality and accessibility of education. This integration has not only reshaped traditional educational methodologies but has also necessitated the incorporation of information technology (IT) into the teaching and learning process ( Eady and Lockyer, 2013 ). As a result, the academic community is increasingly focused on understanding the impact of these changes on pedagogical practices, teaching and learning processes and student outcomes. In the domain of language education, the utilization of mobile technology has the capacity to transcend the constraints imposed by conventional learning methodologies in terms of spatial and temporal limitations, which ultimately caters to the individualized learning requirements of contemporary tertiary level scholars ( Lin and Chen, 2022 ). This situation emphasizes the importance of investigating the effectiveness of mobile technologies in improving the quality of the learning process in language education and meeting the different needs of learners. Moreover, the global application of technology in English teaching and learning has been instrumental, benefiting both teachers and learners by facilitating classroom activities and accelerating language acquisition through the use of technology and its services ( Nguyen and Van Le, 2022 ). Moreover, the integration of technology within the language learning milieu cultivates a heightened sense of self-directed and malleable learning methodology. Studies conducted on the domain of Computer-Assisted Language Learning (CALL) and Mobile-Assisted Language Learning (MALL) have inferred that the application of technological tools in the language learning process especially in the acquisition of vocabulary, particularly for non-native speakers, can be an efficacious strategy ( McLean et al., 2013 ; Chatterjee, 2022 ). This body of research provides a compelling rationale for investigating specific digital tools like Quizlet and their potential to enhance language learning. As this digital transformation in language education continues, it becomes critical to examine specific digital tools and their unique contributions to this evolving educational paradigm, especially how they combine traditional methods with innovative technology-based strategies. 
Amidst this technological revolution in education, the role of specific tools such as Quizlet becomes increasingly significant. By focusing on Quizlet, this study aims to bridge the gap in the literature regarding the effectiveness of digital tools in enhancing vocabulary acquisition among language learners. Integrating technology into the language learning environment fosters a sense of independent and flexible learning methodology, especially in terms of vocabulary acquisition for non-native speakers. It is precisely at this point that the functionality and applicability of tools such as Quizlet, as part of the trend in digital education, becomes important in representing an intersection between traditional learning methodologies and modern, technology-enhanced approaches. Therefore, this study seeks to contribute to the broader discourse on digital education by examining the impact of Quizlet on vocabulary learning, retention and attitude, thereby offering insights into its role as a transformative tool in language education. By systematically reviewing and meta-analyzing existing literature, our study aims to provide a definitive assessment of Quizlet’s role in the digital education landscape, highlighting its potential as a transformative educational tool.

2 Literature review

In light of this reality, a myriad of applications with a focus on improving cognitive and emotional aspects of learners have surfaced on the internet, a substantial proportion of which can be readily downloaded and employed by users without incurring any costs. This proliferation of digital resources presents a convenient and easily accessible means for language learners to supplement their vocabulary acquisition, retention, motivation and attitude endeavors. An exemplar of such innovative technological solutions is Quizlet, a widely utilized online platform that provides a diverse array of educational tools, comprising interactive flashcards, gamified activities, and evaluation assessments, among others.

Andrew Sutherland designed a learning aid in 2005 that facilitated his academic excellence in French vocabulary assessment. He imparted it to his peers, and it resulted in a similar achievement in their respective assessments. Quizlet has since emerged as a powerful educational resource that has gained immense popularity, serving more than 60 million students and learners each month. Its widespread usage spans a wide range of disciplines, including mathematics, medicine, and foreign language acquisition, among others ( Quizlet, 2023 ). Quizlet provides a platform that enables learners to curate personalized study materials consisting of conceptual units coupled with their corresponding definitions or elucidations. Learners engage with these instructional modules through varied modes of learning, such as flashcards, games, and quizzes ( Fursenko et al., 2021 ). It is a popular web-based platform that offers a range of study tools, including flashcards, games, and quizzes. Quizlet is a well-known online learning application that enables users to build and study interactive resources like games and flashcards. According to Quizlet (2023) , learning can be improved by using it in a variety of contexts and areas. The Quizlet mobile application is particularly effective for constructing vocabulary content. It has been proposed as a convenient and pleasurable method for acquiring vocabulary knowledge ( Davie and Hilber, 2015 ). Within the app, users can access vocabulary “sets” created by other users, or they can generate their own sets and access them as flashcards or through a gaming interface ( Senior, 2022 ). Quizlet is renowned for its distinctive attributes that pertain to the creation of flashcards, multilingual capacity, and the ability to incorporate images, among other forms of diverse exercises. However, it lacks the provision of scheduling and expanded retrieval intervals as the learning process advances. There are various ways in which vocabulary sets in Quizlet can be disseminated, including but not limited to printing, embedding, incorporating URL links, and utilizing QR codes. These options provide a range of alternatives for learners to study at their preferred pace, allowing for individualized and autonomous learning experiences ( Waluyo and Bucol, 2021 ). This shift places Quizlet within a broader movement towards digitalization in education, juxtaposing its role with other emerging educational technologies.

The implementation of Quizlet in language education can potentially augment the learning experience and facilitate the assimilation of new linguistic concepts in a more efficient and engaging manner ( Wang et al., 2021 ). While many educators and students have claimed that Quizlet improves cognitive and emotional learning outcomes, the empirical evidence supporting this claim has been mixed. Alastuey and Nemeth (2020) discussed the effects of Quizlet on vocabulary acquisition, highlighting the cognitive, affective, and motivational benefits for students creating their own learning material. However, Nguyen and Van Le (2022) pointed out the limited empirical research on the effectiveness of Quizlet, indicating a gap in the evidence. İnci (2020) reported that the use of the application in foreign language lessons improved learner engagement. On the other hand, Berliani and Katemba (2021) found that most students considered Quizlet effective in learning vocabulary, supporting the positive impact on cognitive and emotional learning outcomes. Additionally, Setiawan and Wiedarti (2020) reported that the Quizlet application positively influences students’ performance and motivation in learning vocabulary. Therefore, while some studies support the claim of Quizlet’s positive effects on cognitive and emotional learning outcomes, there is also a need for further empirical research to provide a more comprehensive understanding of its impact. Nguyen et al. (2022) found that Quizlet positively influences students’ performance and autonomy in learning vocabulary ( Nguyen et al., 2022 ). Similarly, Sanosi (2018) conducted an experimental-design study that investigated the effect of Quizlet on vocabulary acquisition, highlighting its potential for enhancing vocabulary learning ( Sanosi, 2018 ). Anjaniputra and Salsabila (2018) reported that Quizlet fostered learners’ engagement and persistence in vocabulary learning, indicating its usefulness as a learning tool ( Anjaniputra and Salsabila, 2018 ). Furthermore, Alastuey and Nemeth (2020) examined the use of Quizlet in an urban high school language arts class and demonstrated that students using Quizlet outperformed those in the Non-Quizlet group on weekly vocabulary tests, emphasizing its positive impact on vocabulary acquisition ( Alastuey and Nemeth, 2020 ). Setiawan and Wiedarti (2020) found that the Quizlet application positively influenced students’ performance and autonomy in learning vocabulary, further supporting its effectiveness ( Setiawan and Wiedarti, 2020 ). These findings align with the research by Körlü and Mede (2018) , which indicated that Quizlet had a positive impact on students’ performance and autonomy in vocabulary learning ( Körlü and Mede, 2018 ). Also, several studies have demonstrated the positive influence of Quizlet on students’ cognitive learning such as vocabulary learning and retention ( Barr, 2016 ; Özer and Koçoğlu, 2017 ; Ashcroft et al., 2018 ; Körlü and Mede, 2018 ; Sanosi, 2018 ; Andarab, 2019 ; Çinar and Ari, 2019 ; Toy, 2019 ; Arslan, 2020 ; Chaikovska and Zbaravska, 2020 ; Tanjung, 2020 ; Van et al., 2020 ; Akhshik, 2021 ; Aksel, 2021 ; Fursenko et al., 2021 ; Ho and Kawaguchi, 2021 ; Kurtoğlu, 2021 ; Setiawan and Putro, 2021 ; Atalan, 2022 ; Lin and Chen, 2022 ; Nguyen and Van Le, 2022 , 2023 ).

However, these studies also show that the effectiveness of tools such as Quizlet can vary considerably depending on various factors such as students’ readiness, the teaching-learning process, the learning environment and the type of language skill targeted. Along these lines, a critical review of how Quizlet affects learning outcomes is still evolving. While some studies demonstrate the positive effects of technologies such as Quizlet, others offer a more cautious view, painting the other side of the coin and pointing to limitations and variable outcomes in different educational contexts. For example, some students prefer Quizlet because of its convenience, usefulness, practicality and effectiveness, while others express dissatisfaction with certain features and errors ( Pham, 2022 ). Moreover, the effectiveness of Quizlet in vocabulary learning has been a subject of research, with some studies indicating its success in enhancing vocabulary acquisition and retention ( Körlü and Mede, 2018 ; Al-Malki, 2020 ; Mykytka, 2023 ), while others suggest that its use does not necessarily lead to autonomous learning ( Setiawan and Wiedarti, 2020 ).

Given the varied and sometimes contradictory findings of related studies in the literature, a more general, systematic and comprehensive approach is needed to understand the real impact of Quizlet on learning outcomes. While generally positive in the literature, the different perspectives and diverse results reported highlight the complexity of evaluating the effectiveness of digital learning tools such as Quizlet. This underlines the need for a more comprehensive, nuanced and evidence-based evaluation and is the main focus of the current research. Unfortunately, there is little and contradictory empirical data to support Quizlet’s impact on learning outcomes. As a result, the purpose of this research is to consolidate and assess the body of knowledge on the effect of Quizlet on learning outcomes through a thorough meta-analysis and systematic review. This research aims to fill the existing gap in empirical research, provide valuable insights, and contribute to the literature in terms of having a final say on the broader understanding of Quizlet’s effectiveness in language learning. Consequently, a comprehensive and systematic review of the existing literature is warranted to evaluate the impact of Quizlet especially on student emotional learning outcomes. This meta-analysis aims to fill this gap by synthesizing the available research on Quizlet and providing a quantitative assessment of its effectiveness in enhancing student cognitive and emotional learning outcomes. By conducting a systematic review and comprehensive meta-analysis, this research aims to determine whether Quizlet’s utilization leads to a significant difference in student outcomes in these key areas compared to traditional or alternative learning methods. Through a rigorous and transparent synthesis of the empirical evidence, this study seeks to shed light on the potential of Quizlet to improve student cognitive and emotional learning outcomes and inform future research and practice in the field of digital education.

The detailed literature review revealed a scarcity of quantitative studies suitable for a meta-analysis, particularly concerning the effects of Quizlet on aspects such as student motivation, confidence, learner engagement, and anxiety. Consequently, this study focuses on quantifying the impact of Quizlet on foreign language learners’ vocabulary learning achievement, retention, and attitude through a systematic review and comprehensive meta-analysis. The study investigates whether the use of Quizlet results in a significant difference in student scores in these areas compared to other learning methods.

3 Methodology

3.1 Model of the research

In this study, we conducted a comprehensive examination of quantitative research focusing on the application of Quizlet in vocabulary learning. The selection of studies for this meta-analysis was guided by systematic and rigorous methods, as recommended by the PRISMA guidelines. In accordance with PRISMA guidelines, researchers have employed systematic review and meta-analysis methodologies to ensure transparency, reproducibility, and rigor in their studies ( Page et al., 2021 ). The PRISMA guidelines provide a comprehensive checklist for reporting systematic reviews and meta-analyses, encompassing various aspects such as study selection, data extraction, and synthesis methods ( Page et al., 2021 ). Adhering to these guidelines enhances the quality and reliability of the research findings, thereby contributing to evidence-based decision-making in diverse fields.

This study involved a comprehensive search of literature from the inception of Quizlet in 2005 to the present year, 2023. Our focus on the period from 2016 to 2023 is based on the emergence of a pivotal study in 2016, which was the first to explore the impact of Quizlet on the identified outcomes. Our analysis encompassed a broad spectrum of educational settings and demographics, reflecting the diverse populations engaged in using this tool. The core intervention we scrutinized was Quizlet’s utilization for enhancing vocabulary learning, its retention, and its influence on learner attitudes. To gauge Quizlet’s efficacy, we included studies that provided a comparative analysis between Quizlet and traditional learning methodologies or other educational technologies. This comparative approach enabled us to assess the relative effectiveness of Quizlet in achieving the desired educational outcomes. Our primary outcomes of interest were learners’ achievement in vocabulary acquisition, the retention of this knowledge over time, and their attitudes towards the use of Quizlet as a learning tool. The temporal scope of our review was strategically chosen, with 2016 marking the emergence of a pivotal study that set a precedent in this research area, thereby shaping the subsequent investigations into Quizlet’s impact in educational contexts. In accordance with PRISMA guidelines, we have employed a meta-analytic survey approach to determine the effect size of Quizlet’s impact on foreign language learners’ vocabulary learning, retention, and attitudes. This includes a thorough evaluation of study quality, risk of bias, and the applicability of findings.

3.2 Data collection and coding

In the course of gathering data for this research, an examination of diverse databases was conducted, encompassing the YÖK national thesis center, Google Scholar, Selcuk University Academic Search Engine, DergiPark, Proquest, Sage Journals, Eric, Wiley Online Library, Taylor & Francis Online, Science Direct, Jstor, and Springer Link databases. Throughout this phase, the fundamental concepts under consideration were “Quizlet” and “mobile flashcards.” In the preliminary scrutiny of this research, a comprehensive total of 94 studies were discerned. Nevertheless, 71 of these studies were omitted from the meta-analysis owing to their qualitative nature, to irrelevant outcomes (such as the absence of any exploration of academic achievement, learning retention, or student attitude), or to duplicated format (such as the transformation from thesis to article format). A significant number of studies were based on qualitative research methodologies. While these studies provide valuable insights, our meta-analysis focused on quantifiable outcomes that could be statistically analyzed. Therefore, studies that primarily employed qualitative methods such as interviews, narrative analysis, or case studies were excluded. Several studies did not align with the specific outcomes of interest for our research. Our meta-analysis aimed to explore the impact of “Quizlet” on academic achievement, learning retention, and student attitudes. Studies that did not investigate these specific outcomes, or that focused on peripheral aspects not directly related to our research questions, were omitted. This included studies that might have used similar tools or technologies but did not measure the outcomes relevant to our analysis. For studies that were available in both thesis and article formats, we chose to exclude the thesis versions. This decision was made because our research aimed to understand how the condensation and refinement involved in transforming a thesis into a journal article could impact the presentation and interpretation of research findings. Including both formats of the same study could lead to redundancy and skew the meta-analysis results. The flow chart in Figure 1 shows the process of scanning the literature and the inclusion–exclusion of accessed studies in the meta-analysis.


Figure 1 . Flow chart showing the process of scanning the literature and inclusion of accessed studies in meta-analysis.

To ensure systematic data analysis, we employed a two-stage coding process. In the first stage, each study was preliminarily coded based on its relevance to our key concepts. This step helped in filtering out studies that did not directly contribute to our research questions. The second stage involved a more detailed coding procedure, where two independent researchers coded the remaining studies for more specific variables such as research methodology, population, outcomes measured, and main findings. Any discrepancies between the coders were resolved through discussion and consensus, ensuring a high degree of inter-coder reliability. This step was crucial in maintaining the objectivity and consistency of the data coding process. The validity and reliability calculations related to the coding process are discussed in detail in the section titled “Reliability and validity of the research” of this study.

3.3 Inclusion criteria

Prescribed protocols are advised for the execution of meta-analyses, as delineated by Field and Gillett (2010) and Bernard et al. (2014) . Fundamentally, these scholars advocate for a meticulous assessment of the literature search process and a thorough evaluation of the selected studies with regard to potential publication bias, which should be undertaken prior to the initiation of pertinent statistical analyses. In order to be incorporated into the meta-analysis, prospective studies were required to conform to several specific eligibility criteria. Firstly, an eligible study had to investigate the impact of Quizlet. Secondly, the study had to make a comparative assessment of the impact of Quizlet in relation to a control group. Prospective pre-post designs that did not incorporate a control or comparison group were excluded from consideration, as they failed to account for potential influences stemming from natural development or extraneous variables. Additionally, in order to meet the inclusion criteria, a study had to be conducted within an educational setting or possess a clear relevance to educational outcomes. This encompassed all levels of education, spanning from tertiary to secondary and primary levels. Furthermore, an eligible study was required to furnish adequate data for the computation of an effect size. Figure 1 presents the flow chart of the process of scanning the literature and including accessed studies in the meta-analysis.

In order to ensure comprehensive coverage, a meticulous internet search was carried out according to the inclusion criteria mentioned in Figure 1 and a total of 94 studies were initially identified related to the topic. During the screening process, a significant portion of these studies were excluded due to several factors. Excluded studies included those reporting non-parametric data, those with qualitative features, those lacking sufficient quantitative data, and studies that did not examine the effects of Quizlet on cognitive and emotional outcomes. Exclusion of studies with qualitative characteristics was necessary since they did not comply with the quantitative methodology needed for this meta-analysis. Some studies were excluded due to the lack of control groups. The inclusion of both control and experimental groups in studies is crucial for ensuring the validity of research findings. The importance of experimental and quasi-experimental designs for generalized causal inference in meta-analyses of research is emphasized in the literature ( Preece, 1983 ; Anderson-Cook, 2005 ; Morris, 2007 ). By meticulously applying these inclusion criteria, only the most relevant and suitable sources were included, thereby enhancing the overall quality and reliability of the analysis. To measure the effects of Quizlet on vocabulary learning achievement, retention, and attitude, 23 carefully selected studies that met the inclusion requirements made up the study’s sample. These studies had a range of sample sizes and covered a range of study types. Table 1 provides a detailed summary of the publication year, study type, research courses, and sample sizes of the included studies, giving a clear picture of the make-up of the meta-analysis sample.


Table 1 . Features of the studies included in the meta-analysis.

3.4 Reliability and validity of the research

When performing meta-analysis studies, it is imperative to methodically assemble descriptive data that highlights the important aspects of the included research. Careful data collection is necessary for this, where relevant information is meticulously recorded to facilitate additional analysis. The process of coding in meta-analysis studies is crucial for converting descriptive data into numerical data, enabling statistical analysis ( Neyeloff et al., 2012 ). Coding processes are essential for the synthesis of data and are used to convert descriptive data into a format that can be subjected to statistical analysis ( Berkeljon and Baldwin, 2009 ). According to Karadağ et al. (2015) , coding is a methodical and scientific approach that is utilized to extract pertinent and accurate data from the massive amount of data that is collected throughout the investigations.

To guarantee the accuracy of the coding process in this study, a stringent methodology was developed. The coding was done in compliance with the coding form, which was established prior to the analysis. Creating a unique coding system that was both general and distinct enough to capture the features of any kind of research was the primary goal of this process. The coding process was done independently by two experts, each with extensive knowledge and expertise in the field. The coding forms completed by the first and second experts were carefully compared in order to assess the level of agreement between them. Inter-rater Reliability (IRR) can be quantitatively assessed using the formula agreement / (agreement + disagreement) × 100. This formula enables a quantitative assessment of the consistency between the two experts’ coding ( Hripcsak and Rothschild, 2005 ). It is important to note that the calculation of IRR is crucial in various fields, including medical informatics, forensic psychology, and educational measurement ( Hripcsak and Heitjan, 2002 ; Cook et al., 2008 ; Guarnera and Murrie, 2017 ). The use of this formula allows for the measurement of specific agreement, which is essential in quantifying interrater reliability and assessing the reliability of a gold standard in various studies ( Hripcsak and Heitjan, 2002 ). The calculated Inter-rater Reliability (IRR) score provided a measure of the degree of agreement between the two experts, indicating the reliability of the coding process. In this study, the reliability was determined to be 98%, signifying a high level of concordance between the experts’ assessments. The high reliability score of this meta-analysis enhances the overall reliability and robustness of the results by providing assurance about the coding process’s precision and consistency.
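As a concrete illustration of the percent-agreement formula quoted above, the sketch below computes IRR for two hypothetical coders; the function simply mirrors agreement / (agreement + disagreement) × 100, and the example codes are invented for demonstration.

```python
def inter_rater_reliability(codes_a, codes_b):
    """Percent agreement between two coders: agreement / (agreement + disagreement) * 100."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must rate the same number of items.")
    agreement = sum(a == b for a, b in zip(codes_a, codes_b))
    return agreement / len(codes_a) * 100

# Hypothetical example: two experts coding ten studies by design type.
expert_1 = ["exp", "quasi", "exp", "exp", "quasi", "exp", "exp", "exp", "quasi", "exp"]
expert_2 = ["exp", "quasi", "exp", "exp", "quasi", "exp", "exp", "exp", "quasi", "quasi"]
print(inter_rater_reliability(expert_1, expert_2))  # 90.0
```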

3.5 Data analysis procedure

This study utilizes Hedges’s g as the measurement unit for effect size. A 95% confidence level is established. The total effect size is then determined by first calculating the effect sizes for each study included in the meta-analysis. Two models, namely fixed effects and random effects, are employed to determine the overall effect size.

The effect sizes obtained from the analyses were interpreted using the effect size classification proposed by Thalheimer and Cook (2002) and Hunter and Schmidt (2015) . These researchers discuss methods of meta-analysis and provide insights into the interpretation of effect sizes. According to them, effect sizes are classified as small (0.15 ≀ Hedges’s g  < 0.40), moderate (0.40 ≀ Hedges’s g  < 0.75), large (0.75 ≀ Hedges’s g  < 1.10), very large (1.10 ≀ Hedges’s g  < 1.45), and excellent (Hedges’s g  ≄ 1.45). These references provide a comprehensive understanding of the interpretation of effect sizes, aligning with the specified ranges for Hedges’s g .
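These bands can be captured in a small helper function; the thresholds below are exactly the ones listed above, the “negligible” label for values under 0.15 is our own placeholder, and the three example calls use the effect sizes reported later in this article.

```python
def classify_effect_size(g):
    """Label a Hedges' g value using the bands of Thalheimer and Cook (2002)."""
    g = abs(g)
    if g >= 1.45:
        return "excellent"
    if g >= 1.10:
        return "very large"
    if g >= 0.75:
        return "large"
    if g >= 0.40:
        return "moderate"
    if g >= 0.15:
        return "small"
    return "negligible"  # below the smallest band listed in the classification

print(classify_effect_size(0.62))  # moderate (vocabulary learning achievement)
print(classify_effect_size(0.74))  # moderate (vocabulary retention)
print(classify_effect_size(0.37))  # small (student attitude)
```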

This study’s data analysis produced several noteworthy conclusions, which are discussed in more detail below.

4.1 Effect size

The effect size, represented as “d,” was determined by dividing the difference between the treatment condition means by the pooled standard deviation of the two study groups ( Borenstein et al., 2021 ). Cohen’s d or Hedges’ g represent the effect size when utilizing contrast groups ( Hedges, 1983 ; Hedges and Olkin, 1985 ; Cohen, 1988 ; Hartung et al., 2008 ; Borenstein et al., 2009 ). Hedges’ g is particularly useful for small sample sizes and is preferred when the studies being compared have different sample sizes or variances ( Light et al., 1994 ).
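The computation just described can be sketched as follows; the group statistics are hypothetical, and the small-sample correction J = 1 − 3 / (4(n₁ + n₂) − 9) is the standard Hedges adjustment rather than a value reported in the reviewed studies.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Difference between treatment and control means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d with the small-sample correction J = 1 - 3 / (4(n_t + n_c) - 9)."""
    d = cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c)
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))

# Hypothetical vocabulary post-test scores for a Quizlet group and a control group.
print(hedges_g(mean_t=78.0, mean_c=71.5, sd_t=10.2, sd_c=11.0, n_t=30, n_c=28))
```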

The decision on whether to use a fixed effects model or a random effects model for calculating effect sizes in a study is crucial. Calculating effect sizes “d” and “g” by dividing the discrepancy between the means of each group’s experimental and control cohorts by their pooled standard deviations is a purposeful and meaningful procedure ( Borenstein et al., 2009 ). While the random effects model aims to generalize findings beyond the included studies by assuming that the selected studies are random samples from a larger population, the fixed effects model is ideal for drawing conclusions on the studies included in the meta-analysis ( Cheung et al., 2012 ). The random effects model is a good option for meta-analysis when the goal is to make the results more broadly applicable than the individual research. A random effects model takes into account both within- and between-study variation, making it more cautious and producing a broader confidence interval ( Ma et al., 2015 ). The second criterion is dependent upon the number of studies that are included in the meta-analysis. The fixed effects model is considered appropriate when the number of studies is less than five ( Aydin et al., 2020 ). Studies are deemed homogenous if the variance in effect sizes amongst them is only attributable to sampling error; in a meta-analysis, this source of variation can be accounted for by employing the fixed effect model ( Idris and Saidin, 2010 ). The random effects model assumes a normal distribution of genuine effect sizes and estimates the mean and variance of this distribution, whereas the fixed effects model assumes a common true effect size across all studies and calculates this common effect size ( Spineli and Pandis, 2020 ). The third criterion is whether statistical heterogeneity exists between effect sizes. The random effects model must be used when heterogeneity is found, as explained by Tufanaru et al. (2015) . A random effects model, which takes into account both within- and between-study variance, can be more suited if there is significant heterogeneity among the studies. Conversely, a fixed effects approach would be more appropriate if the trials are quite homogeneous ( Danos, 2020 ). It is important to consider the assumptions and implications of each model when making this decision, as the choice of model can impact the interpretation and generalization of the results ( Konstantopoulos, 2006 ).
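To make the distinction between the two models concrete, the sketch below pools a set of hypothetical study effects with inverse-variance weights (fixed effects) and with a DerSimonian-Laird τÂČ added to each study’s variance (random effects); it is a generic illustration, not the exact computation performed by the meta-analysis software used in this study.

```python
import numpy as np

def pool_effects(g, var):
    """Fixed-effect and DerSimonian-Laird random-effects pooled estimates."""
    g, var = np.asarray(g, float), np.asarray(var, float)
    w = 1 / var                                    # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - fixed) ** 2)               # Cochran's Q
    df = len(g) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance (DL estimator)
    w_re = 1 / (var + tau2)                        # random-effects weights
    random = np.sum(w_re * g) / np.sum(w_re)
    return fixed, random, tau2

# Hypothetical per-study Hedges' g values and their variances.
print(pool_effects([0.4, 0.9, 0.6, 1.2], [0.04, 0.06, 0.05, 0.09]))
```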

In this study, the statistical data on vocabulary learning and retention of Quizlet application were interpreted according to the random effects model since they showed heterogeneous distribution (see Tables 2 , 3 ), and the data on student attitude were interpreted according to the fixed effects model since they showed homogeneous distribution (see Table 4 ).


Table 2 . Homogeneity test results: Q-statistic, IÂČ, and tau-square statistics assessing the impact of Quizlet on vocabulary learning achievement.


Table 3 . Homogeneity test results: Q-statistic, IÂČ, and tau-square statistics assessing the impact of Quizlet on vocabulary retention.


Table 4 . Homogeneity test results: Q-statistic, IÂČ, and tau-square statistics assessing the impact of Quizlet on student attitude.

4.2 The meta-analysis outcomes pertaining to the influence of Quizlet on vocabulary learning achievement

In pursuit of addressing the primary research query, the study sought to ascertain the extent to which Quizlet, as supported by experimental study findings, contributes to students’ vocabulary learning. To unravel this quandary, meticulous analyses were conducted on the pertinent data extracted from the studies encompassed within the research. Heterogeneity in the context of meta-analysis refers to sampling error and the variation in results seen across many research papers ( Borenstein et al., 2009 ). To evaluate the degree to which the conclusions of each research study are influenced by both the sampling error and the fluctuation or population variance in the estimated effect size, it becomes essential to conduct a heterogeneity assessment. The results of the heterogeneity test also help to identify whether the study fits better with a fixed effect model or a random effect model. As a result, one of these effect models is used to calculate the effect magnitude or summary effect of the study’s findings for further research. In this work, heterogeneity is examined using the Q-statistic in conjunction with its p-value, IÂČ, and τÂČ parameters, all of which are listed in Table 2 .

According to the homogeneity test in Table 2 , the average effect size Q-statistic of Quizlet on vocabulary learning is calculated as 113.069 at 20 degrees of freedom (95% confidence level) and is found to be statistically significant ( Q  = 113.069; p  < 0.05). According to the Q-value results of the research data, it can be said that the distribution is heterogeneous. The tau-square value (τÂČ), which estimates the variance of the true mean effect size, is calculated as 0.284, and the IÂČ statistic is calculated as 82.312. This IÂČ value calculated for the vocabulary learning variable indicates that 82.312% of the variation in the average effect size across the studies included in the meta-analysis is attributable to true between-study differences rather than sampling error, indicating a high level of heterogeneity.
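The reported IÂČ follows directly from Q and its degrees of freedom via IÂČ = max(0, (Q − df) / Q) × 100; the short check below reproduces the 82.312% figure from the values given in Table 2.

```python
def i_squared(q, df):
    """Higgins' I-squared: share of total variation attributable to between-study heterogeneity."""
    return max(0.0, (q - df) / q) * 100

print(round(i_squared(113.069, 20), 3))  # 82.312, matching Table 2
```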

In Figure 2 , the lines flanking the squares represent the lower and upper bounds of effect sizes within a 95% confidence interval, while the rhombus indicates the overall effect size of the studies. Upon examination, the smallest effect size is −0.408, and the largest is 2.083. The weight percentage provided alongside the effect size values quantitatively represents the contribution of each study to the overall outcome of the meta-analysis.


Figure 2 . Effect size values related to vocabulary learning achievement.

The results that are provided in Table 5 indicate that Quizlet has a moderate impact on vocabulary learning. These empirical results highlight the critical need and effectiveness of using Quizlet as an instructional tool to help students develop higher order lexical knowledge. Using the Classic Fail-Safe N analysis, a technique used to determine the strength of the meta-analysis under consideration, it was confirmed that Quizlet has a moderate effect (effect size, g  = 0.62) on the improvement of vocabulary proficiency. Table 6 presents the Classic Fail-Safe N Analysis of this examination, providing additional insight into the validity and strength of the determined outcomes. Classic fail-safe N analysis is utilized to determine the stability of results and to identify the potential impact of unpublished studies on the overall conclusions of a meta-analysis ( Erford et al., 2010 ) and it provides an indication of the robustness of the findings and is employed to evaluate the stability of the meta-analytic results ( Acar et al., 2017 ).


Table 5 . Average effect sizes and confidence interval lower and upper values by effect model.


Table 6 . Classic fail-safe N analysis.

Based on the Classic Fail-Safe N analysis, an additional 590 studies reporting negligible or negative effects would be necessary to render the meta-analytic conclusion that Quizlet has a significant positive impact on vocabulary learning statistically non-significant ( p  < 0.05). In other words, only the inclusion of 590 further studies reporting no substantial impact of Quizlet on vocabulary learning would be sufficient to overturn the overall outcome of the meta-analysis, which underscores the robustness of the observed relationship between Quizlet and vocabulary learning and its potential effects as an instructional tool.
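For orientation, Rosenthal’s classic fail-safe N can be sketched as below; this is a minimal illustration assuming Stouffer-combined z-scores and a one-tailed α of .05 (z = 1.645), with hypothetical per-study z-values, so the meta-analysis software used by the authors may apply a slightly different criterion.

```python
import math

def classic_fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe N: number of null (z = 0) studies needed to push the
    Stouffer-combined z below the chosen significance threshold."""
    sum_z = sum(z_values)
    k = len(z_values)
    n = (sum_z / z_alpha) ** 2 - k
    return max(0, math.ceil(n))

# Hypothetical per-study z-scores; with the real study-level data this would return
# a value comparable to the 590 reported for vocabulary learning achievement.
print(classic_fail_safe_n([2.1, 3.4, 1.8, 2.9, 2.5]))  # 55 with these invented values
```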

Figure 3 illustrates the distribution of effect sizes in accordance with Hedges’s funnel chart (Funnel plot of precision). The funnel plot is a widely used tool in meta-analysis to visually assess the presence of publication bias and small-study effects ( Egger et al., 1997 ). It provides a graphical display of the relationship between the effect size estimates and a measure of study precision, such as the standard error or sample size ( Kiran et al., 2016 ). Funnel plots are particularly useful in exploring sources of heterogeneity and bias in meta-analyses ( Schild and Voracek, 2013 ). These graphical representations not only aid in visualizing the effect size distribution but also underscore the importance of assessing publication bias in meta-analytical studies, ensuring a comprehensive and unbiased evaluation of Quizlet’s impact on vocabulary learning achievement.
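A funnel plot of this kind can be reproduced with a few lines of plotting code; the effect sizes and standard errors below are hypothetical placeholders used only to show the construction (effect size on the x-axis, precision increasing towards the top, pseudo 95% limits around the pooled estimate).

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical study-level effect sizes (Hedges' g) and standard errors.
g = np.array([0.10, 0.40, 0.55, 0.62, 0.70, 0.90, 1.30, -0.20, 0.45, 0.80])
se = np.array([0.35, 0.22, 0.18, 0.10, 0.25, 0.30, 0.40, 0.38, 0.15, 0.28])

pooled = np.average(g, weights=1 / se**2)          # inverse-variance pooled estimate
se_grid = np.linspace(0.01, se.max(), 100)

plt.scatter(g, se)
plt.plot(pooled - 1.96 * se_grid, se_grid, "k--")  # pseudo 95% confidence limits
plt.plot(pooled + 1.96 * se_grid, se_grid, "k--")
plt.axvline(pooled, color="k", linewidth=0.8)
plt.gca().invert_yaxis()                           # more precise studies plot higher
plt.xlabel("Hedges' g")
plt.ylabel("Standard error")
plt.title("Funnel plot (illustrative data)")
plt.show()
```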


Figure 3 . Funnel plot on publication bias of studies examining the effect of Quizlet on vocabulary learning achievement.

According to Figure 3 , it becomes evident that the investigations fail to exhibit an asymmetrical distribution with respect to the overall effect size. The funnel’s edge in the graphic is marked by a ± slope. Figure 3 reveals that significant differences or anomalies in distribution are conspicuously missing. To put it differently, the distribution does not display a pronounced concentration on one side. The graphic clearly conveys that there are many studies that are located outside of the funnel, highlighting the significant diversity that exists within this cohort and making it possible to say that the group is heterogeneous. Our thorough investigation turned up no concrete proof of publication bias among the variety of research we included in our meta-analysis. The absence of an asymmetric clustering at a singular point within the distribution signifies that the study sample does not exhibit a predisposition towards favoring the Quizlet, thereby enhancing the reliability of this meta-analysis study.

Table 7 meticulously catalogues the results of the Begg and Mazumdar rank correlation test. This statistical test evaluates the relationship between the standardized treatment effect and the variance of the treatment effect using Kendall’s tau ( Gjerdevik and Heuch, 2014 ).
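In practice, this test is a Kendall rank correlation between the standardized effect sizes and their variances; the sketch below illustrates it with hypothetical values and scipy’s kendalltau, and is not the exact routine used by the authors’ software.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical study-level effects (Hedges' g) and their variances.
g = np.array([0.20, 0.50, 0.62, 0.80, 1.10, 0.35, 0.90])
var = np.array([0.09, 0.05, 0.02, 0.06, 0.12, 0.04, 0.08])

# Standardize each effect around the fixed-effect pooled estimate, as in the
# Begg and Mazumdar procedure.
w = 1 / var
pooled = np.sum(w * g) / np.sum(w)
std_effect = (g - pooled) / np.sqrt(var - 1 / np.sum(w))

tau, p_value = kendalltau(std_effect, var)
print(tau, p_value)  # a non-significant tau suggests no evidence of small-study bias
```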


Table 7 . Begg and Mazumdar rank correlation.

In Table 7 , the Begg and Mazumdar rank correlation test has unveiled that the composite study sample integrated into the meta-analysis does not demonstrate any signs of bias (tau b  = 0.25; p  > 0.05). Consequently, the findings derived from the scrutiny of effect sizes originating from the constituent studies are deemed to possess a high degree of reliability. This signifies that the inferences drawn from the meta-analysis concerning the influence of the Quizlet on the assessed parameters can be characterized as sturdy and trustworthy. These results highlight the validity and reliability of the conclusions drawn from our research, which can be regarded as robust and unwavering.

4.3 The meta-analysis outcomes pertaining to the influence of Quizlet on vocabulary retention

The secondary inquiry in this study aimed to ascertain the impact of Quizlet on retaining vocabulary. To address this, a meticulous analysis of relevant data gleaned from the research was undertaken. Homogeneity assessments, as detailed in Table 3 , were conducted to ascertain the suitability of employing either the fixed effects model or the random effects model for computing the effect sizes associated with the influence of Quizlet on vocabulary retention. These assessments aimed to discern the best approach to quantify the impact of Quizlet on the retention of vocabulary across different study conditions or populations. Moving from these methodological decisions, the subsequent focus lay in interpreting the implications of Quizlet’s influence on vocabulary retention within diverse study contexts.

As a result of the homogeneity test, the average effect size Q-statistic of Quizlet on vocabulary retention is calculated as 20.997 at 4 degrees of freedom (95% confidence level) and is found to be statistically significant ( Q  = 20.997; p  < 0.05). According to the Q-value results of the research data, it can be said that the distribution is heterogeneous. The tau-square value (τÂČ), which estimates the variance of the true mean effect size, is calculated as 0.259, and the IÂČ statistic is calculated as 80.947. This IÂČ value calculated for the vocabulary retention variable indicates that 80.947% of the variation in the average effect size across the studies included in the meta-analysis is attributable to true between-study differences rather than sampling error, indicating a high level of heterogeneity.

In Figure 4 , the lines bordering the squares depict the range of effect sizes encompassed by a 95% confidence interval, with the rhombus denoting the aggregate effect size derived from the studies. Analysis reveals effect sizes ranging from 0.000 to 1.217. Additionally, the weight percentage accompanying each effect size quantifies the relative impact of individual studies on the collective result of the meta-analysis. This visual representation not only delineates the variability in effect sizes but also emphasizes the influence of each study on the overall outcome, providing a nuanced understanding of the meta-analytical findings.


Figure 4 . Effect size values related to vocabulary retention.

The findings outlined in Table 8 demonstrate that Quizlet exerts a moderate impact on vocabulary retention (Hedges’ g  = 0.743). Employing the Classic Fail-Safe N analysis, a method aimed at gauging the robustness of the meta-analysis, affirmed the moderate effect of Quizlet ( g  = 0.74) in advancing vocabulary retention. Table 9 within the study further expounds upon this analysis, offering deeper insights into the credibility and potency of the conclusions drawn from the investigation. This robust analysis not only validates the efficacy of Quizlet but also emphasizes its substantive contribution to enhancing vocabulary retention among learners.


Table 8 . Average effect sizes and confidence interval lower and upper values by effect model.


Table 9 . Classic fail-safe N analysis.

Classic Fail-Safe N analysis reveals that an additional 44 studies reporting insignificant or negative effects would be required to render the meta-analytic conclusion about Quizlet’s impact on vocabulary retention statistically non-significant ( p  < 0.05). In other words, only the inclusion of 44 further studies reporting no substantial impact of Quizlet on vocabulary retention would be sufficient to overturn the overall outcome of the meta-analysis, which frames the scope of the evidence concerning Quizlet’s association with vocabulary retention and its potential as an instructional tool. Additionally, Figure 5 demonstrates the distribution of effect sizes based on Hedges’s funnel chart (Funnel plot of precision).


Figure 5 . Funnel plot on publication bias of studies examining the effects of Quizlet on vocabulary retention.

Figure 5 displays a funnel plot illustrating the distribution of effect sizes across studies. The funnel in the plot is bounded by a ± slope. The graphic indicates that some studies fall outside the slope curve, suggesting heterogeneity within the group.

Based on the data synthesized in Table 10 , the Begg and Mazumdar rank correlation test revealed no indication of publication bias within the combined study sample used in the meta-analysis (tau b  = 0.60; p  > 0.05). The effect sizes extracted from these studies can therefore be regarded as reliable, which substantiates the robustness of the conclusions drawn from the meta-analysis regarding Quizlet’s influence on the evaluated parameters and reinforces confidence in the observed effects.
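
The Begg and Mazumdar test is essentially Kendall’s rank correlation between standardized effect sizes and their variances. A hedged sketch of that computation, using hypothetical inputs rather than the study data, is shown below.

```python
# Hedged sketch of the Begg and Mazumdar rank-correlation test for publication
# bias: Kendall's tau between standardized effect sizes and their variances.
import numpy as np
from scipy.stats import kendalltau

g = np.array([0.00, 0.45, 0.80, 1.00, 1.217])    # hypothetical effect sizes
v = np.array([0.05, 0.06, 0.04, 0.07, 0.05])     # hypothetical variances

w = 1.0 / v
pooled = np.sum(w * g) / np.sum(w)                # fixed-effect pooled estimate
v_star = v - 1.0 / np.sum(w)                      # variance of (g_i - pooled)
z_star = (g - pooled) / np.sqrt(v_star)           # standardized deviates

tau, p_value = kendalltau(z_star, v)              # rank correlation with variance
print(f"Kendall's tau_b = {tau:.2f}, p = {p_value:.3f}")
```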


Table 10 . Begg and Mazumdar rank correlation.

4.4 The meta-analysis outcomes pertaining to the influence of Quizlet on student attitude

As a result of the homogeneity test, the Q-statistic for the average effect size of Quizlet on student attitude is calculated as 2.003 with 1 degree of freedom and is found to be statistically non-significant at the 95% confidence level ( Q  = 2.003; p  > 0.05). According to the Q-value results of the research data, the distribution of effect sizes can be described as homogeneous. The tau-squared value (τ²), which estimates the variance of the true mean effect size, was calculated as 0.065, and the I² statistic was calculated as 50.081%, meaning that about half of the observed variance in effect sizes reflects between-study differences rather than sampling error. Since the homogeneity test indicates that the effect sizes of the studies on Quizlet and student attitude do not differ statistically from each other, the analyses were calculated according to the fixed effects model ( Figure 6 ).


Figure 6 . Effect size values related to student attitude.

The weights of the studies included in the meta-analysis are close to each other. The forest plot shows that the individual effect sizes are generally concentrated at a low level and that the overall effect size is small.

The meta-analysis results of the 2 studies included are given in Table 11 , which shows the homogeneity value, average effect size, and confidence interval lower and upper limits of the studies according to the effect model.


Table 11 . Average effect sizes and confidence interval lower and upper values by effect model.

The findings outlined in Table 11 demonstrate that Quizlet exerts a small impact on student attitude (Hedges’ g  = 0.377). According to the fixed effects model, the lower limit of the 95% confidence interval is 0.029 and the upper limit is 0.725. Classic Fail-Safe N analysis, funnel plot, and Begg and Mazumdar rank correlation values could not be calculated because the number of studies examining the effect of Quizlet on student attitude that met the inclusion criteria of this study was limited to 2 in the literature.
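
To make the fixed-effects computation above concrete, the following minimal Python sketch pools two effect sizes by inverse-variance weighting and reports a 95% confidence interval. The g values and variances are illustrative assumptions, not the two included studies’ actual data.

```python
# Minimal sketch of a fixed-effects pooled estimate and its 95% CI
# for two studies; effect sizes and variances are hypothetical.
import numpy as np

g = np.array([0.25, 0.55])        # hypothetical Hedges' g for two studies
v = np.array([0.06, 0.08])        # hypothetical within-study variances

w = 1.0 / v                        # inverse-variance weights
g_fixed = np.sum(w * g) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))
ci = (g_fixed - 1.96 * se_fixed, g_fixed + 1.96 * se_fixed)

print(f"Pooled g = {g_fixed:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```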

5 Discussion

The objective of this meta-analysis was to analyze the overall results acquired from studies that examined the impact of Quizlet on foreign language learners’ cognitive and emotional outcomes, namely vocabulary learning achievement, retention, and attitude. This study synthesized previous research to determine Quizlet’s level of impact. The trustworthiness of the research findings is demonstrated by the confidence intervals derived from the meta-analysis. The combination of experimental and quasi-experimental research made possible a thorough examination of Quizlet’s effect on numerous outcome measures.

To begin with, the study’s first research question focuses on the impact of Quizlet on students’ vocabulary learning achievement as measured in experimental studies. To address this question, a meta-analysis incorporating 21 relevant studies was conducted. To determine whether the fixed effects model or the random effects model is appropriate for the research, a homogeneity test was first carried out. The homogeneity test results showed a statistically significant difference ( Q = 113.069; p < 0.05), indicating a heterogeneous distribution of effect sizes among the studies. A further indication of the considerable variability among the studies was the obtained I² value of 82.312%. Because of the observed heterogeneity, effect estimates were calculated with the random effects model. According to the random effects model, the average effect size of the studies included in the meta-analysis on vocabulary learning achievement was calculated as 0.62 ( g = 0.62). The findings of the meta-analysis showed an increase in vocabulary learning achievement in favor of students who were involved in the learning and teaching process using Quizlet. In terms of vocabulary learning achievement, the effect size falls in the moderate interval. This moderate effect size emphasizes the nuanced role of digital tools in education, where the impact of technology is meaningful but not uniform across settings and groups of learners. The adoption of technologically enhanced learning environments, such as the use of Quizlet, signals a broader shift towards digital literacy and its integration into educational frameworks. This effect can also be explained by students’ rapid adaptation to technological integration and the effective use of innovative, technology-based learning methods in classroom teaching. Given the large number of words to be learned in a foreign language, technological applications are thought to provide more effective vocabulary learning. Chen et al. (2021) express that integrating educational games into language education is effective in improving students’ vocabulary acquisition, aligning with the idea of a broader shift towards digital literacy. In addition, Dewi (2023) highlights the modern shift towards the use of Quizlet for vocabulary learning, suggesting that the integration of digital tools into educational frameworks is important. The portability of laptops and smartphones has prompted the development of novel instructional methods that are thought to improve English language proficiency, especially vocabulary learning (Chaikovska and Zbaravska, 2020). In this process, associating target words with visuals such as graphics, pictures, images, and cartoons is thought to benefit vocabulary learning by enriching the cognitive schemas of foreign language vocabulary learners. Chaikovska and Zbaravska (2020) also state that associating unknown words with visuals, such as bright graphics and exaggerated pictures, benefits vocabulary acquisition. According to Andarab (2019), using technology to contextualize vocabulary items during vocabulary acquisition improves the vocabulary learning process. This reinforcement of learning through visual aids and contextualization aligns with cognitive theories of multimedia learning, which posit that learners process and retain information more effectively when it is presented in both verbal and visual formats.
Visual context has been suggested to play an important role in memory and learning: it directs spatial attention, supports implicit learning, and can increase memory retention (Chun and Jiang, 1998). The findings also highlight Quizlet’s usefulness in vocabulary learning and the value of using Quizlet to contextualize lexical items in collocations to improve vocabulary acquisition. Chaikovska and Zbaravska (2020), while explaining the effect of the Quizlet application on vocabulary learning, emphasized that the graphic presentation of the words in the program sets enriches cognitive visualization and can increase word memorization by drawing on the potential of the right hemisphere of learners’ brains. In his study, Sanosi (2018) also highlights Quizlet’s potential for improving vocabulary learning. Sanosi (2018) asserts that Quizlet’s effectiveness as an e-learning tool for enhancing vocabulary acquisition can be linked to the increasing influence of information technology in many facets of life. The majority of daily tasks for the younger generation of learners are completed on smart devices that are connected to the internet. Quizlet’s incorporation into daily technology use illustrates how learning and technology can coexist and shows how relevant it is to the modern student’s digital habits. The principles of spaced repetition and active recall, which support effective learning processes, form the foundation of Quizlet’s cognitive engagement, which is facilitated through repeated exposure and interactive quizzes.

The second research question addressed in this study focused on the effectiveness of Quizlet on students’ vocabulary retention. A meta-analysis including five relevant studies was carried out to answer this question. A homogeneity test was performed to determine the suitability of employing either the fixed effects model or the random effects model in the research. The results of the homogeneity test indicated a statistically significant difference, signifying a heterogeneous distribution of effect sizes among the studies ( Q  = 20.997; p  < 0.05). The mean of the effect sizes was calculated as 0.74 ( g  = 0.74). The I² value of 80.947% obtained in the study indicated heterogeneity. Since there was heterogeneity among the studies, effect sizes were calculated with the random effects model. According to the research findings, the average effect size indicated a moderate level of effectiveness in favor of Quizlet: Quizlet has a moderate impact on vocabulary retention for those learning a foreign language. This impact can be attributed to several factors. The positive impact of Quizlet on retention can be attributed to the testing effect, which suggests that the act of quizzing helps learners identify knowledge gaps and actively seek out new information (Karpicke and Roediger, 2008; Nguyen and Van Le, 2022). This finding illuminates the importance of Quizlet’s interactive features for the retention of learned information and of engaging in retrieval practice. The theoretical basis for this effect lies in the testing effect, which holds that retrieval practice increases long-term memory retention through repeated retrieval. This principle underlines the importance of incorporating active retrieval techniques into learning strategies, especially for vocabulary retention. The theory is supported by research indicating that repeated testing produces a significantly larger positive effect on delayed recall than repeated studying after learning (Karpicke and Roediger, 2008). Additionally, Nguyen et al. (2022) expressed that Quizlet creates motivation and arouses interest in learning among students, contributing to its impact on vocabulary retention. Moreover, the convenience and effectiveness of Quizlet, along with its features designed to be fun, make it appealing to students, thereby positively influencing vocabulary retention (Aprilani and Suryaman, 2021; Pham, 2022). The fun and motivational aspects of Quizlet, along with its interactive and engaging design, support intrinsic motivation theories that emphasize the role of enjoyment and interest in sustaining learning engagement, achievement, and retention. The impact of Quizlet on vocabulary retention can also be attributed to its ability to address prevalent problems among learners in the digital era, such as low participation and difficulties in maintaining learners’ attention to lessons (Anjaniputra and Salsabila, 2018). Moreover, the incorporation of integrated skills and cognitive visualization in Quizlet makes it a useful ICT tool for vocabulary learning, contributing to its impact on vocabulary retention (Chaikovska and Zbaravska, 2020). Dual coding theory, which explains enhanced vocabulary retention through the activation of both verbal and nonverbal systems, has also been invoked to account for Quizlet’s effect on students’ success in vocabulary learning and retention (Körlü and Mede, 2018).
Quizlet’s effectiveness in increasing vocabulary retention through dual coding is thought to promote more robust encoding and retrieval of vocabulary by strengthening the synergistic interaction between linguistic and visual information processing. Indeed, Quizlet’s influence on students’ performance in vocabulary learning and retention has been attributed in part to dual coding theory (DCT) (Körlü and Mede, 2018). According to the participants in that study, utilizing Quizlet gave them confidence that they had learned the vocabulary items and helped them rapidly and easily remember the new words they had learned thanks to its study and fun aspects (Körlü and Mede, 2018). This confidence and ease of recall provided by Quizlet can not only make the learning experience more effective but also contribute to a positive learning environment, in line with self-efficacy theory, which suggests that belief in one’s ability to succeed in certain situations or perform a task can significantly influence learning outcomes (Laufer and Hulstijn, 2001). The application of Quizlet has been found to contribute to the development of linguistic intelligence, enriching students’ vocabulary banks and enhancing their vocabulary mastery, thereby impacting vocabulary retention (Lubis et al., 2022). The use of Quizlet as a learning resource has also been recognized as an effort to develop learners’ digital literacy and motivate them to learn, further contributing to its impact on vocabulary retention (Setiawan and Putro, 2021). The role of Quizlet in advancing digital literacy and linguistic intelligence underlines the multifaceted benefits of digital learning tools, suggesting that their impact extends beyond immediate learning outcomes to include broader educational and developmental gains. Research on the impact of visual features on vocabulary learning and retention (Hashemi and Pourgharib, 2013) suggests that incorporating visual aids helps students remember and retain words more readily. On the other hand, there is also a study suggesting that Quizlet is less effective for retention than traditional paper flashcards. In the study by Ashcroft et al. (2018), the findings indicate that for delayed gains, there is an even stronger negative association between proficiency and Quizlet’s advantage over paper flashcards. In fact, advanced individuals lost their digital gains much more quickly than their paper gains. This contrast provides an opportunity to further investigate the conditions under which digital tools such as Quizlet optimize learning outcomes and highlights the importance of personalized and adaptive learning approaches that address learners’ different needs and proficiency levels.

The last research question of this study was to investigate the effectiveness of Quizlet on students’ attitude based on the findings of the included studies. To answer this question, a meta-analysis of 2 studies was conducted. A homogeneity test was conducted to determine whether it is appropriate to use the fixed effects model or the random effects model in the research. According to the results of the homogeneity test, no significant difference was found, and it was concluded that the effect size distribution of the studies was homogeneous ( Q = 2.003; p > 0.05). The I² value obtained in the study was 50.081%. Since the homogeneity test indicated no significant heterogeneity among the studies, effect sizes were calculated with the fixed effects model. According to the fixed effects model, the average effect size of the studies included in the meta-analysis on attitude was calculated as 0.37 ( g = 0.37). According to the research findings, the mean effect size value was positive, indicating a small level of effectiveness. This small but positive effect on attitudes towards language learning with Quizlet can be explained by the fact that digital tools can increase student engagement and motivation, but the magnitude of this effect can vary depending on factors such as implementation processes, student preferences, and educational context. This variability requires a careful and detailed understanding of how digital tools such as Quizlet fit into the wider educational system and of their role in shaping student attitudes and motivation. The small effect can also be explained by the scarcity of quantitative studies measuring student attitudes towards the Quizlet application. The paucity of quantitative research on student attitudes towards Quizlet underlines the need for more robust empirical research that can offer deeper insights into how digital tools influence students’ psychological and emotional engagement in language learning. This scarcity is further noted by Mykytka (2023), who highlighted the field’s focus on vocabulary acquisition and the limited investigation of the effect of Quizlet on other skills. On the other hand, there are many qualitative studies indicating that the Quizlet application positively influences student attitude and motivation (Chien, 2015; Dizon, 2016; Alastuey and Nemeth, 2020; Setiawan and Wiedarti, 2020; Lubis et al., 2022; Nguyen et al., 2022; Pham, 2022; Zeitlin and Sadhak, 2022). Qualitative studies supporting the positive impact of Quizlet on student attitude and motivation emphasize the subjective and experiential dimensions of learning with digital tools and reveal that the effectiveness of these tools can be significantly affected by students’ perceptions and experiences. The effectiveness of Quizlet in increasing students’ attitude and motivation towards learning has been widely documented in the literature, and the reasons for this positive influence can be attributed to various factors. Firstly, Quizlet has been shown to create a motivational learning environment by making the process of vocabulary acquisition more enjoyable and engaging for students, thus having a positive impact on attitude (Alastuey and Nemeth, 2020; Chaikovska and Zbaravska, 2020; Setiawan and Wiedarti, 2020; Aprilani and Suryaman, 2021; Berliani and Katemba, 2021; Mykytka, 2023).
This enhancement of the learning environment through engagement and enjoyment reflects broader principles of educational psychology that emphasize the importance of positive emotional experiences in enhancing learning and student attitude. The interactive and gamified nature of Quizlet, including its flashcards, quizzes, and other interactive activities, enhances students’ attitude, interest, and intrinsic motivation in learning vocabulary (Körlü and Mede, 2018; Alastuey and Nemeth, 2020; Nguyen et al., 2022; Mykytka, 2023). The gamification of learning processes facilitated by Quizlet is also consistent with the principles of game-based learning, which suggest that game elements can significantly increase attitude and engagement and thus positively affect learning outcomes. Additionally, the convenience and effectiveness of Quizlet have been reported to positively influence students’ attitude, motivation, and interest in vocabulary learning (Pham, 2022; Zeitlin and Sadhak, 2022). This positive effect of Quizlet on student attitude and achievement demonstrates the importance of user-friendly and effective digital tools in increasing student engagement and satisfaction in the learning process. In addition, the Quizlet app’s wide range of activities and high degree of instant feedback, which paper flashcards cannot match, may have increased and maintained students’ engagement, attitude, and motivation (Ashcroft et al., 2018). In contrast to traditional learning methods, the immediate feedback provided by Quizlet facilitates more effective learning by helping students quickly identify and correct errors; this can be explained by feedback loop theory, which suggests that timely and relevant feedback is crucial for learning, engagement, and attitude. This view is supported by research discussing the immediate feedback assessment technique and its role in promoting learning and correcting inaccurate first responses (Epstein et al., 2002; Dihoff et al., 2004).

The current study demonstrates that Quizlet is an appropriate educational technology for fostering learners’ cognitive and emotional dimensions in the foreign language learning process, especially in vocabulary learning. Among the technology tools and programs that aid language acquisition, Quizlet has become a highly effective instrument for improving vocabulary knowledge and fostering positive learner attitudes. Taking into account all of its benefits, Quizlet is an efficient web-based and mobile learning tool that is entertaining, motivating, and helpful for learning vocabulary. Also, the design of the Quizlet application is suitable for autonomous learners, which enhances students’ attitude and motivation in learning vocabulary (Setiawan and Wiedarti, 2020). This suitability for autonomous learning reflects the growing demand for personalized and self-directed learning opportunities in modern education and highlights Quizlet’s role in meeting these evolving educational needs. The literature indicates that students’ opinions of Quizlet as a tool for vocabulary development were important in determining whether language learners should utilize it to increase their vocabulary. The evaluation of learner views underscores the importance of learner-centered approaches in educational technology research and shows that the effectiveness of digital tools such as Quizlet can be significantly shaped by learners’ achievement, retention, and attitudes. Overall, the literature supports the importance of Quizlet in enhancing learners’ vocabulary learning, retention, and attitude, making it a valuable tool for educators and learners alike. This comprehensive perspective emphasizes the multifaceted impact of Quizlet on language learning, from learning achievement and retention to student attitudes, and underlines the need for its integration into language teaching practices. The study’s overall conclusions highlight the importance of incorporating mobile assisted language learning and teaching resources like Quizlet into teaching and learning practices and curricula. Quizlet was found to be a useful and effective application that supports students’ performance and autonomy in vocabulary learning by giving them increased exposure to the target words through a variety of functions and a game-like study environment. The integration of Quizlet into language teaching, especially vocabulary teaching, may signal a shift towards more interactive, engaging, and learner-centered approaches, reflecting the ongoing evolution of educational paradigms in the digital age.

6 Conclusion

This meta-analysis was conducted to investigate the effects of Quizlet on the cognitive and affective outcomes of foreign language learners, especially vocabulary learning achievement, retention, and attitude, and to reach a general conclusion on this issue. The findings indicate that Quizlet has a moderate positive effect on vocabulary learning and retention and a small effect on learner attitude, suggesting that Quizlet has the potential to be a valuable educational technology tool in language acquisition. The integration of Quizlet into the learning process not only facilitates students’ adaptation to modern technological tools but also enhances their ability to effectively learn and retain large amounts of foreign vocabulary. This effectiveness can be partly attributed to Quizlet’s combination of visual aids and cognitive visualization techniques, which, as many studies have shown, play a crucial role in enhancing vocabulary acquisition and aiding retention. These visual components, coupled with the interactive nature of Quizlet, can help create a more engaging and immersive learning environment, which can be particularly useful in foreign language acquisition, especially in vocabulary learning, where visual associations can significantly support the learning process.

Despite these positive findings, our analysis also indicates a relatively smaller effect of Quizlet on learners’ attitudes towards vocabulary learning. This may be attributed to the limited number of quantitative studies specifically addressing this aspect of language learning. While existing qualitative research provides some evidence of Quizlet’s positive impact on learner motivation and attitude, there is a clear need for more rigorous, quantitative investigations to substantiate these observations. The discrepancy between qualitative and quantitative findings highlights a potential gap in the research, suggesting that future studies should aim to quantitatively measure and understand the nuances of how digital learning tools like Quizlet influence learners’ attitudes. Thanks to technological advances and the latest educational techniques, teachers are now able to use a wide range of online and mobile applications. The nuanced relationship between technology use and student attitude raises important questions about the adaptability and effectiveness of digital tools in different learning contexts. As digital learning environments continue to evolve, understanding these dynamics becomes increasingly important to optimize their design and implementation for maximum educational benefit. In light of these findings, it is clear that Quizlet shows considerable promise in improving vocabulary learning and retention but that further research is needed to fully understand its impact on student attitudes. Such insights can be important for educators and curriculum designers to make informed decisions about integrating digital tools such as Quizlet into language learning programs. The future of language education increasingly involves the integration of technology into educational processes. It is of utmost importance that such endeavors are developed and channeled in such a way that this integration contributes positively to student learning, achievement, attitudes and many other aspects.

In conclusion, while our study confirms the effectiveness of Quizlet in improving vocabulary learning, retention, and learner attitude, it also highlights areas where further research is needed. Quizlet, with its interactive and engaging features, emerges as a valuable tool that facilitates the learning process for language educators and learners alike. Integrating mobile assisted language learning resources such as Quizlet into educational practices and curricula as auxiliary tools may be important for supporting foreign language learners’ performance in vocabulary acquisition and improving their attitudes. This study may shed light on, and open new avenues for, future research on how different aspects of Quizlet and similar platforms can be optimized for educational success.

7 Limitations and future research directions

7.1 Quantitative research on learner attitudes

A significant gap has been identified in quantitative research addressing the impact of Quizlet on students’ attitudes. Future studies should utilize robust quantitative methodologies to systematically assess how Quizlet affects student attitudes.

7.2 Temporal scope of studies

Longitudinal studies are recommended to examine the sustained impact of Quizlet on vocabulary learning over long periods of time. Such studies would provide invaluable insights into the long-term effectiveness and adaptability of Quizlet in evolving educational settings.

7.3 Comparative efficacy studies

Future research should take a more comprehensive approach, including a wider range of comparative analyses between Quizlet and both traditional and digital learning tools. Such comparative studies would deepen our understanding of Quizlet’s effectiveness relative to other methods and provide detailed insights into Quizlet’s unique advantages and areas for improvement.

7.4 Contextual diversity in learning environments

The majority of the studies reviewed focused on formal educational settings. There is a need for research exploring the use of Quizlet in more varied contexts, including informal learning environments and self-directed learning scenarios. Investigating Quizlet’s application in these diverse settings would offer a more holistic understanding of its adaptability and effectiveness across different learning modalities.

7.5 Technological advancements and evolving educational technologies

The rapid development of technology and its integration into educational contexts requires the continuous evaluation of tools such as Quizlet. Future studies should not only focus on Quizlet’s current functionalities, but also consider emerging technological developments that may affect its educational utility. Continuous evaluation is crucial to understand how evolving features and changes in user interaction with technology affect learning outcomes.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

OÖ: Writing – original draft. HS: Writing – review & editing.

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Note: References marked with an asterisk indicate studies included in the meta-analysis.


Acar, F., Seurinck, R., Eickhoff, S. B., and Moerkerke, B. (2017). Assessing robustness against potential publication bias in coordinate based fMRI meta-analyses using the fail-safe N. BioRxiv 189001, 1–48. doi: 10.1101/189001


*Akhshik, M. (2021). Learn language vocabulary with mobile application: a case study of Quizlet. In The 6th international conference on computer games; challenges and opportunities (CGCO2021), Isfahan, Iran, 1–5.

*Aksel, A. (2021). Vocabulary learning with Quizlet in higher education. Lang. Educ. Technol. , 1, 53–62.

Alastuey, M., and Nemeth, K. (2020). Quizlet and podcasts: effects on vocabulary acquisition. Comput. Assist. Lang. Learn. 35, 1407–1436. doi: 10.1080/09588221.2020.1802601

Al-Malki, M. (2020). Quizlet: an online application to enhance EFL foundation students’ vocabulary acquisition at Rustaq college of education, Oman. Arab World English J. 6, 332–343. doi: 10.24093/awej/call6.22

*Andarab, M. S. (2019). Learning vocabulary through collocating on Quizlet. Univ. J. Educ. Res. , 7, 980–985. doi: 10.13189/ujer.2019.070409

Anderson-Cook, C. (2005). Experimental and quasi-experimental designs for generalized causal inference. J. Am. Stat. Assoc. 100:708. doi: 10.1198/jasa.2005.s22

Anjaniputra, A., and Salsabila, V. (2018). The merits of quizlet for vocabulary learning at tertiary level. Indonesian Efl J. 4:1. doi: 10.25134/ieflj.v4i2.1370

Aprilani, D. N. (2021). Students’ perception in learning English vocabulary through quizlet. J. English Teach. 7, 343–353. doi: 10.33541/jet.v7i3.3064

*Arslan, M. S. (2020). The effects of using quizlet on vocabulary enhancement of tertiary level ESP learners. (Unpublished Master’s Thesis). Çağ University, Mersin.

*Ashcroft, R. J., Cvitkovic, R., and Praver, M. (2018). Digital flashcard L2 vocabulary learning out-performs traditional flashcards at lower proficiency levels: a mixed-methods study of 139 Japanese university students. Eurocall Rev. , 26, 14–28, doi: 10.4995/eurocall.2018.7881

*Atalan, E. (2022). The use of quizlet in teaching vocabulary to 9th grade EFL students. Unpublished Master’s Thesis, Anadolu University, Eskişehir.

Aydin, M., Okmen, B., Sahin, S., and Kilic, A. (2020). The meta-analysis of the studies about the effects of flipped learning on students’ achievement. Turk. Online J. Dist. Educ. 22, 33–51. doi: 10.17718/tojde.849878

*Barr, B. W. B. (2016). Checking the effectiveness of quizlet as a tool for vocabulary learning. Center English Lingua Franca J. , 2, 36–48

Berkeljon, A., and Baldwin, S. (2009). An introduction to meta-analysis for psychotherapy outcome research. Psychother. Res. 19, 511–518. doi: 10.1080/10503300802621172

Berliani, N., and Katemba, C. (2021). The art of enhancing vocabulary through technology. J. Smart 7, 35–45. doi: 10.52657/js.v7i1.1340

Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., and Abrami, P. C. (2014). A meta-analysis of blended learning and technology use in higher education: from the general to the applied. J. Comput. High. Educ. 26, 87–122. doi: 10.1007/s12528-013-9077-3

Borenstein, M., Hedges, L. V., Higgins, J. P. T., and Rothstein, H. R. (2009). Introduction to meta-analysis . United Kingdom: John Wiley & Sons Ltd.

Borenstein, M., Hedges, L. V., Higgins, J. P., and Rothstein, H. R. (2021). Introduction to meta-analysis . Oxford, UK: John Wiley & Sons.

*Chaikovska, O., and Zbaravska, L. (2020). The efficiency of quizlet-based EFL vocabulary learning in preparing undergraduates for state English exam. Adv. Educ. , 7, 84–90. doi: 10.20535/2410-8286.197808

Chatterjee, S. (2022). Computer assisted language learning (CALL) and mobile assisted language learning (MALL); hefty tools for workplace English training: an empirical study. Int. J. English Learn. Teach. Skills 4, 1–7. doi: 10.15864/ijelts.4212

Chen, J., Yang, S., and Mei, B. (2021). Towards the sustainable development of digital educational games for primary school students in China. Sustain. For. 13:7919. doi: 10.3390/su13147919

Cheung, M., Ho, R., Lim, Y., and Mak, A. (2012). Conducting a meta-analysis: basics and good practices. Int. J. Rheum. Dis. 15, 129–135. doi: 10.1111/j.1756-185x.2012.01712.x


Chien, C. W. (2015). Analysis the effectiveness of three online vocabulary flashcard websites on L2 learners’ level of lexical knowledge. English Lang. Teach. 8, 111–121.

Chun, M. M., and Jiang, Y. (1998). Contextual cueing: implicit learning and memory of visual context guides spatial attention. Cogn. Psychol. 36, 28–71. doi: 10.1006/cogp.1998.0681

*Çinar, İ., and Ari, A. (2019). The effects of Quizlet on secondary school students’ vocabulary learning and attitudes towards English. Asian J. Instruct. , 7, 60–73.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences . US: Lawrence, Erlbaum.

Cook, D., Dupras, D., Beckman, T., Thomas, K., and Pankratz, V. (2008). Effect of rater training on reliability and accuracy of mini-CEX scores: a randomized, controlled trial. J. Gen. Intern. Med. 24, 74–79. doi: 10.1007/s11606-008-0842-3

Danos, D. (2020). Toward a transparent meta-analysis. Southwest Respirat. Crit. Care Chronicles 8, 60–62. doi: 10.12746/swrccc.v8i33.641

Davie, N., and Hilber, T. (2015). Mobile-assisted language learning: student attitudes to using smartphones to learn English vocabulary. In 11th international conference Mobile learning, Madeira, Portugal

Dewi, N. P. (2023). The implementation of quizlet to learn vocabulary towards junior high school students. JOEPALLT 11:66. doi: 10.35194/jj.v11i1.2967

Dihoff, R. E., Brosvic, G. M., Epstein, M. A., and Cook, M. J. (2004). Provision of feedback during preparation for academic testing: learning is enhanced by immediate but not delayed feedback. Psychol. Rec. 54, 207–231. doi: 10.1007/bf03395471

Dizon, G. (2016). Quizlet in the EFL classroom: enhancing academic vocabulary acquisition of Japanese university students. Teach. English Technol. 16, 40–56.

Eady, M. J., and Lockyer, L. (2013). Tools for learning: technology and teaching strategies: learning to teach in the primary school. Queensland University of Technology, Australia, 71–89. Available at: https://scholars.uow.edu.au/display/publication76376

Egger, M., Smith, G., Schneider, M., and Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ 315, 629–634. doi: 10.1136/bmj.315.7109.629

Epstein, M. A., Lazarus, A. D., Calvano, T. B., Matthews, K., Hendel, R. A., Epstein, B., et al. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. Psychol. Rec. 52, 187–201. doi: 10.1007/bf03395423

Erford, B., Savin-Murphy, J., and Butler, C. (2010). Conducting a meta-analysis of counseling outcome research. Counsel. Outcome Res. Eval. 1, 19–43. doi: 10.1177/2150137809356682

Field, A. P., and Gillett, R. (2010). How to do a meta-analysis. Br. J. Math. Statistic. Psychol. 63, 665–694. doi: 10.1348/000711010X502733

*Fursenko, T., Bystrova, B., and Druz, Y. (2021). Integrating Quizlet into aviation English course. Advanced Educ. , 8, 118–127. doi: 10.20535/2410-8286.217990

Gjerdevik, M., and Heuch, I. (2014). Improving the error rates of the Begg and mazumdar test for publication bias in fixed effects meta-analysis. BMC Med. Res. Methodol. 14, 1–16. doi: 10.1186/1471-2288-14-109

Guarnera, L., and Murrie, D. (2017). Field reliability of competency and sanity opinions: a systematic review and meta-analysis. Psychol. Assess. 29, 795–818. doi: 10.1037/pas0000388

Hartung, J., Knapp, G., and Sinha, B. K. (2008). Statistical meta-analysis with applications . New York: John Wiley and Sons

Hashemi, M., and Pourgharib, B. (2013). The effect of visual instruction on new vocabularies learning. Int. J. Basic Sci. Appl. Res. 2, 623–627.

Hedges, L. V. (1983). A random effects model for effect sizes. Psychol. Bull. 93, 388–395. doi: 10.1037/0033-2909.93.2.388

Hedges, L. V., and Olkin, I. (1985). Statistical methods for meta-analysis . London: Academic Press.

*Ho, T. T. H., and Kawaguchi, S. (2021). The effectiveness of Quizlet in improving EFL learners’ receptive vocabulary acquisition. J. Engl. Lang. Lit. , 15, 115–159

Hripcsak, G., and Heitjan, D. (2002). Measuring agreement in medical informatics reliability studies. J. Biomed. Inform. 35, 99–110. doi: 10.1016/s1532-0464(02)00500-2

Hripcsak, G., and Rothschild, A. (2005). Agreement, the f-measure, and reliability in information retrieval. J. Am. Med. Inform. Assoc. 12, 296–298. doi: 10.1197/jamia.m1733

Hunter, J., and Schmidt, F. (2015). Methods of meta-analysis: Correcting error and bias in research findings . California, USA: Sage Publications.

Idris, N. R. N., and Saidin, N. (2010). The effects of the choice of Meta analysis model on the overall estimates for continuous data with missing standard deviations. In 2010 second international conference on computer engineering and applications 2, 369–373

İnci, A. Ö. (2020). The impact of CALL on learners’ engagement and building vocabulary through quizlet. Unpublished Master’s Thesis, Bahçeşehir University, İstanbul.

Karadağ, E., Bektaş, F., Çoğaltay, N., and Yalçın, M. (2015). The effect of educational leadership on students’ achievement: a meta-analysis study. Asia Pac. Educ. Rev. 16, 79–93. doi: 10.1007/s12564-015-9357-x

Karpicke, J., and Roediger, H. (2008). The critical importance of retrieval for learning. Science 319, 966–968. doi: 10.1126/science.1152408

Kiran, A., Crespillo, A., and Rahimi, K. (2016). Graphics and statistics for cardiology: data visualisation for meta-analysis. Heart 103, 19–23. doi: 10.1136/heartjnl-2016-309685

Konstantopoulos, S. (2006). Fixed and mixed effects models in meta-analysis. IZA Discussion Paper No. 2198, Available at SSRN: https://ssrn.com/abstract=919993

*Körlü, H., and Mede, E. (2018). Autonomy in vocabulary learning of Turkish EFL learners. Eurocall Rev. , 26, 58–70. doi: 10.4995/eurocall.2018.10425

*Kurtoğlu, U. (2021). Vocabulary teaching through web 2.0 tools: A comparison of Kahoot! And Quizlet. (Unpublished Master’s Thesis), Trakya University, Edirne.

Laufer, B., and Hulstijn, J. (2001). Incidental vocabulary acquisition in a second language: the construct of task-induced involvement. Appl. Linguis. 22, 1–26. doi: 10.1093/applin/22.1.1

Light, R., Cooper, H., and Hedges, L. (1994). The handbook of research synthesis. J. Am. Stat. Assoc. 89:1560. doi: 10.2307/2291021

*Lin, S., and Chen, Y. (2022). An empirical study of the effectiveness of Quizlet - based IELTS reading vocabulary acquisition. Int. J. Soc. Sci. Educ. Res. , 5, 198–204. doi: 10.6918/IJOSSER.202205_5(5).0031

Lubis, A. H., Johan, S. A., and Alessandro, R. V. (2022). Quizlet as an electronic flashcard to assist foreign language vocabulary learning. In Proceedings of the sixth international conference on language, literature, culture, and education (ICOLLITE 2022), 71–76

Ma, Y., Yuan, W., Cui, W., and Li, M. (2015). Meta-analysis reveals significant association of 3′-utr vntr in slc6a3 with smoking cessation in Caucasian populations. Pharmacogenomics J. 16, 10–17. doi: 10.1038/tpj.2015.44

McLean, S., Hogg, N., and Rush, T. W. (2013). Vocabulary learning through an online computerized flashcard site. JALT CALL J. 9, 79–98. doi: 10.29140/jaltcall.v9n1.149

Morris, S. (2007). Estimating effect sizes from pretest-posttest-control group designs. Organ. Res. Methods 11, 364–386. doi: 10.1177/1094428106291059

Mykytka, I. (2023). The use of Quizlet to enhance L2 vocabulary acquisition. Encuentro J. 31, 56–69. doi: 10.37536/ej.2023.31.2123

Neyeloff, J., Fuchs, S., and Moreira, L. (2012). Meta-analyses and forest plots using a Microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis. BMC. Res. Notes 5, 1–6. doi: 10.1186/1756-0500-5-52

Nguyen, T. T., Nguyen, D. T., Nguyen, D. L. Q. K., Mai, H. H., and Le, T. T. X. (2022). Quizlet as a tool for enhancing autonomous learning of English vocabulary. AsiaCALL Online J. 13, 150–165. doi: 10.54855/acoj221319

*Nguyen, L. Q., and Le, H.Van. (2022). Quizlet as a learning tool for enhancing L2 learners’ lexical retention: should it be used in class or at home? Hum. Behav. Emerg. Technol. , 2022, 1–10. doi: 10.1155/2022/8683671

Nguyen, L. Q., and Le, H.Van. (2023). The role of Quizlet learning tool in learners’ lexical retention: a quasi-experimental study. Int. J. Emerg. Technol. Learn. , 18, 38–50. doi: 10.3991/ijet.v18i03.34919

*Özer, Y. E., and Koçoğlu, Z. (2017). The use of quizlet flashcard software and its effects on vocabulary learning. Dil Dergisi , 168, 61–82. doi: 10.1501/dilder_0000000238

Page, M., Moher, D., Bossuyt, P., Boutron, I., Hoffmann, T., Mulrow, C., et al. (2021). Prisma 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ 372, 1–36. doi: 10.1136/bmj.n160

Pham, A. T. (2022). University students’ perceptions on the use of Quizlet in learning vocabulary. Int. J. Emerg. Technol. Learn. 17, 54–63. doi: 10.3991/ijet.v17i07.29073

Preece, P. (1983). The calculation of effect size in meta-analyses of research. J. Res. Sci. Teach. 20, 183–184. doi: 10.1002/tea.3660200210

Prensky, M. (2009). H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate: Journal of Online Education 5. Available at: https://www.learntechlib.org/p/104264/ (Accessed February 23, 2024).

Quizlet (2023). About Quizlet. Available at: https://quizlet.com/mission (Accessed April 15, 2023)

*Sanosi, A. B. (2018). The effect of quizlet on vocabulary acquisition. Asian J. Educ. E-Learn. , 6, 71–77. doi: 10.24203/ajeel.v6i4.5446

Schild, A., and Voracek, M. (2013). Less is less: a systematic review of graph use in meta-analyses. Res. Synth. Methods 4, 209–219. doi: 10.1002/jrsm.1076

Senior, J. (2022). Vocabulary taught via mobile application gamification: receptive, productive and long-term usability of words taught using Quizlet and Quizlet live. In 2022 international conference on business analytics for technology and security (ICBATS), 1–7.

*Setiawan, M. R., and Putro, N. P. S. (2021). Quizlet application effect on senior high school students vocabulary acquisition. In ELLiC (English language and literature international conference) proceedings, 4, 84–98.

Setiawan, M., and Wiedarti, P. (2020). The effectiveness of Quizlet application towards students’ motivation in learning vocabulary. Stud. English Lang. Educ. 7, 83–95. doi: 10.24815/siele.v7i1.15359

Spineli, L., and Pandis, N. (2020). Fixed-effect versus random-effects model in meta-regression analysis. Am. J. Orthod. Dentofacial Orthop. 158, 770–772. doi: 10.1016/j.ajodo.2020.07.016

*Tanjung, A. P. (2020). The effect of the quizlet application on the vocabulary mastery of students in class VII Mts Al-Washliyah bah Gunung. (Unpublished Master’s Thesis), State Islamic University of North Sumatra.

Thalheimer, W., and Cook, S. (2002). How to calculate effect sizes from published research: a simplified methodology. Work Learn. Res. 1, 1–9.

*Toy, F. (2019). The effects of quizlet on students’ and EFL teachers’ perceptions on vocabulary learning/teaching process. (Unpublished Master’s Thesis), Süleyman Demirel University, Isparta.

Tufanaru, C., Munn, Z., Stephenson, M., and Aromataris, E. (2015). Fixed or random effects meta-analysis? Common methodological issues in systematic reviews of effectiveness. Int. J. Evid. Based Healthc. 13, 196–207. doi: 10.1097/XEB.0000000000000065

*Van, H. D., Thuyet, P. T. S., and Thanh, H. N. (2020). Using quizlet to enhance vocabulary acquisition of non-English major freshmen. In Proceedings of the 8th OpenTesol international conference 2020 language education for global competence: Finding authentic voices and embracing meaningful practices, 576–590.

Waluyo, B., and Bucol, J. L. (2021). The impact of gamified vocabulary learning using Quizlet on low-proficiency students. Comput. Assist. Lang. Learn. Electronic J. 22, 164–185.

*Wang, L.-C. C., Lam, E. T. C., and Hu, Z. (2021). Effects of Quizlet-based learning activities on American high school students’ beliefs and confidence in learning Chinese as a foreign language. Int. J. Technol. Teach. Learn. , 17, 18–37. doi: 10.37120/ijttl.2021.17.1.02

Zeitlin, B. D., and Sadhak, N. D. (2022). Attitudes of an international student cohort to the Quizlet study system employed in an advanced clinical health care review course. Educ. Inf. Technol. 28, 3833–3857. doi: 10.1007/s10639-022-11371-3

Keywords: Quizlet, attitude, retention, achievement, meta-analysis, vocabulary learning, mobile assisted learning

Citation: Özdemir O and Seçkin H (2024) Quantifying cognitive and affective impacts of Quizlet on learning outcomes: a systematic review and comprehensive meta-analysis. Front. Psychol . 15:1349835. doi: 10.3389/fpsyg.2024.1349835

Received: 05 December 2023; Accepted: 16 February 2024; Published: 06 March 2024.


Copyright © 2024 Özdemir and Seçkin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Osman Özdemir, [email protected]



Published on 31.5.2024 in Vol 26 (2024)


Examining the Role of Information Behavior in Linking Cancer Risk Perception and Cancer Worry to Cancer Fatalism in China: Cross-Sectional Survey Study

Authors of this article:


Original Paper

  • Lianshan Zhang 1 , PhD
  • Shaohai Jiang 2 , PhD

1 School of Media and Communication, Shanghai Jiao Tong University, Shanghai, China

2 Department of Communications and New Media, National University of Singapore, Singapore, Singapore

Corresponding Author:

Shaohai Jiang, PhD

Department of Communications and New Media

National University of Singapore

Blk AS6, 11 Computing Drive

Singapore, 117416

Phone: 65 6516 2003

Email: [email protected]

Background: Reducing cancer fatalism is essential because of its detrimental impact on cancer-related preventive behaviors. However, little is known about factors influencing individuals’ cancer fatalism in China.

Objective: With a general basis of the extended parallel process model, this study aims to examine how distinct cancer-related mental conditions (risk perception and worry) and different information behaviors (information seeking vs avoidance) become associated with cancer fatalism, with an additional assessment of the moderating effect of information usefulness.

Methods: Data were drawn from the Health Information National Trends Survey in China, which was conducted in 2017 (N=2358). Structural equation modeling and bootstrapping methods were performed to test a moderated mediation model and hypothesized relationships.
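
As a rough illustration of the bootstrapping approach described above, the following self-contained Python sketch estimates a single indirect effect (a×b) with a percentile bootstrap. The simulated variables, path coefficients, and the use of simple least-squares regressions in place of a full structural equation model are all assumptions made for illustration, not the authors' actual analysis.

```python
# Hedged sketch: percentile-bootstrap confidence interval for an indirect
# effect (a*b) in a simple mediation model with simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 2358                                     # sample size matching the survey
x = rng.normal(size=n)                       # e.g., cancer risk perception
m = 0.30 * x + rng.normal(size=n)            # e.g., online information seeking
y = -0.25 * m + rng.normal(size=n)           # e.g., cancer fatalism

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # path a: m regressed on x
    X = np.column_stack([m, x, np.ones_like(x)])     # path b: y regressed on m and x
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boots.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])   # percentile bootstrap 95% CI
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```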

Results: The results showed that cancer risk perception and cancer worry were positively associated with online health information seeking. In addition, cancer worry was positively related to cancer information avoidance. Moreover, online health information seeking was found to reduce cancer fatalism, while cancer information avoidance was positively associated with cancer fatalism. The results also indicated that the perceived usefulness of cancer information moderated this dual-mediation pathway.

Conclusions: The national survey data indicate that cancer mental conditions should not be treated as homogeneous entities, given their varying functions and effects. Apart from disseminating useful cancer information to encourage individuals to adaptively cope with cancer threats, we advocate for health communication programs to reduce cancer information avoidance to alleviate fatalistic beliefs about cancer prevention.

Introduction

Cancer is rapidly becoming a global health burden and is the leading cause of death in >110 countries [ 1 ]. In China, the context for this study, cancer incidence and mortality have escalated, with an estimated 4.8 million new cancer cases and 3.2 million new cancer deaths in 2022, approximately 40% higher cancer mortality than in the United States [ 2 ]. Despite these crude cancer death figures, studies have found that globally 40% to 50% of cancers are preventable through positive lifestyle choices, such as following a healthy diet, maintaining regular exercise and cancer screenings, and reducing tobacco use and alcohol consumption [ 3 ].

To promote cancer prevention, fostering positive coping beliefs is an essential step. However, many people still hold fatalistic beliefs about cancer, considering it as neither preventable nor curable [ 4 ]. Those with fatalism contend that external forces, such as fate and predestination, control the causes and outcomes of cancer and hence deny the need to engage in any other form of coping [ 5 ]. Such maladaptive coping modes have been documented in both Eastern and Western societies, despite limited studies in Asia [ 6 ]. In China, cancer fatalism has long been prevalent, and it carries a negative connotation (eg, hopelessness and pessimism) that is associated with negative action tendencies in the face of cancer risks [ 7 ].

Hence, it is important to reduce cancer fatalism, and health information seeking may play a key role. Past research has documented the benefits of health information seeking, such as lowering health anxiety, managing health-related uncertainties, and increasing health literacy and confidence in fighting cancer [ 8 ]. However, people are not always active in searching for health information. Instead, some people intentionally avoid cancer information or prevent exposure to related topics, which is called cancer information avoidance (CIA) [ 9 ]. People who consistently avoid cancer information may miss opportunities to be empowered to make informed decisions and take positive coping behaviors. In past studies, CIA has been shown to be associated with low levels of perceived behavioral control and cancer knowledge, as well as delays in seeking help [ 10 , 11 ]. Although the detrimental impacts of CIA on maladaptive coping have been suggested, prior research predominantly concentrated on health information seeking, inadvertently overlooking the simultaneous examination of information seeking and CIA as distinct appraisals within the context of cancer fatalism development from the theoretical lens of the extended parallel process model (EPPM). This narrow focus hinders a comprehensive understanding of various information behaviors and their potentially varying implications for fatalistic beliefs concerning cancer prevention, particularly considering that CIA is more prevalent than avoidance of any other disease-related information, given its threatening nature [ 12 ]. Thus, it is valuable to investigate how cancer fatalism may be influenced by 2 distinct appraisals, the danger control process related to information seeking and the fear control process related to CIA, which are believed to have contrasting effects on cancer fatalism.

To investigate why some people actively seek cancer information on their own initiative while others choose to avoid it, one must take into account cancer-related affect and cognition, such as cancer worry and cancer risk perception [ 13 ]. Noticeably, cancer worry and cancer risk perception are distinct constructs, and they act in different ways in influencing people’s information behavior [ 14 ]. However, how distinct cancer mental conditions are associated with different information behaviors (seeking vs avoidance), which further become associated with cancer fatalism, has not been addressed. As mentioned earlier, the EPPM provides a guiding theoretical framework for our examination, which demonstrates that in the face of a threat, individuals may engage in different information responses (adaptive vs maladaptive) based on their risk appraisals, which can further make a difference in outcome variables such as one’s threat coping tendencies [ 15 ]. Considering that cancer fatalism involves individuals who deny their coping or behavioral needs [ 16 - 18 ], it is reasonable to expect that different information behaviors that individuals engaged in would be associated with different levels of fatalism, reflecting individuals’ negative coping needs.

Apart from cancer-related mental conditions, people’s information behavior is also influenced by information-carrier characteristics, for example, perceived cancer information usefulness, especially in the complex digital information environment [ 19 ]. In our study context, individuals perceive cancer information to be useful if they deem that it provides them with helpful knowledge or resources for dealing with cancer. In this sense, perceived cancer information usefulness can be understood as a manifestation of response efficacy (eg, a belief as to whether a recommended response works in preventing a given threat) from the theoretical perspective of the EPPM. Moreover, the EPPM articulates that whether individuals engage in an adaptive response (eg, information seeking) or a maladaptive response (eg, information avoidance) depends on the interplay between threat appraisals (eg, severity and susceptibility) and efficacy appraisals (eg, response efficacy), suggesting the moderating role that cancer information usefulness plays in our study.

The EPPM has traditionally been applied to elucidate how perceived self-threat influences an individual’s coping tendencies in the context of explicit message persuasion. However, a growing body of research has expanded the EPPM’s scope to include contexts beyond message persuasion, such as the incidental influence context [ 20 ]. While originally not designed for persuasion purposes, media coverage of health concerns has been found to incidentally influence variables relevant to public health, such as risk perceptions and effective responses [ 21 , 22 ]. This is not surprising, given that individuals need to be made aware of potential threats, and authorities are tasked with providing guidance on how to address them [ 23 ]. Consequently, the EPPM has been a valuable framework for application in nonpersuasion contexts to understand why and how people respond to health threats, often influenced by daily exposure to media reports containing persuasive health messages [ 23 ]. Hence, building upon the core tenets of the EPPM and drawing from prior empirical studies applying the EPPM to nonpersuasion contexts [ 24 , 25 ], one of the objectives of our study is to examine how individuals’ subjective evaluation of a threat (ie, cancer) becomes associated with their coping responses via 2 appraisals in a nonpersuasion context. Within this context, the perceived threat is expected to be shaped by persuasive health messages that individuals encounter daily in the media. In this regard, it is important to note that our study does not seek to examine the effects of the intentionally crafted persuasive message on health outcomes (eg, attitude or behavioral change) or to test all the postulations of the EPPM. Instead, our focus centers on predicting individuals’ coping responses through 2 appraisals (ie, danger control and fear control), which are grounded in their subjective evaluations of a threat and efficacy.

In light of the above, this study examines the path from 2 distinct cancer mental conditions (cancer worry and cancer risk perception) to 2 information behaviors (health information seeking and CIA) and further onward to cancer fatalism, considering the moderating role of perceived cancer information usefulness ( Figure 1 ). The next sections discuss the key concepts of this study and offer evidence for the proposed pathways.

Study Hypotheses

Cancer Risk Perception, Cancer Worry, and Information Behavior

Cancer risk perception and cancer worry are salient cancer-related thoughts and feelings that have been frequently investigated. Specifically, cancer risk perception has largely been conceptualized as one’s cognitive evaluation of perceived susceptibility to getting cancer, whereas cancer worry has primarily been regarded as an affective response to the threat of cancer [ 26 , 27 ]. In particular, Chae [ 14 ] developed a cancer-related mental condition model that differentiates cancer worry from cancer risk perception. She contended that cancer worry is a more affective condition, whereas cancer risk perception is a more cognitive state. In other words, cancer worry is a mental activity closely linked to one’s emotions (eg, anxiety and fear) triggered by cancer threats and is thus an affective-cognitive condition. Cancer risk perception, by contrast, centers on one’s rational judgment of the likelihood of getting cancer, which often involves deliberative and intellectual assessment, and is thus a cognitive appraisal.

Previous studies have documented ample evidence linking cancer risk perception and cancer worry to health information seeking. For example, Nan et al [ 28 ] found that higher levels of cancer risk perception were associated with a greater likelihood of seeking prostate and breast cancer information. Yoo et al [ 29 ] indicated that people who perceived themselves to be at high risk of cervical cancer were more likely to seek health information on social media. In the same vein, heightened cancer worry has been argued to be a motivator for information acquisition. For instance, Griffin et al [ 30 ] demonstrated that personal worry prompts information needs when coping with health risks. The planned risk information seeking model [ 31 ] and its subsequent studies further confirmed the positive association between personal worry and searches for health information. Consistent with previous research, this study has the following hypotheses:

  • Hypothesis 1: Cancer risk perception will be positively associated with online health information seeking (OHIS).
  • Hypothesis 2: Cancer worry will be positively associated with OHIS.

Despite such motivational triggers, a growing body of research has made a seemingly competing argument: cancer risk perception and cancer worry may lead to more CIA. For instance, Moser et al [ 32 ] found that cancer is a substantial threat to many people who consider it a death sentence, increasing their fear and anxiety. Under such circumstances, people refuse to be exposed to cancer-related information in order to reduce uncomfortable feelings [ 33 ]. This inhibiting role of cancer worry and risk perception is also elucidated by the EPPM [ 15 ], which describes 2 appraisals people may adopt in dealing with threats. On the one hand, when people appraise a threat as high (eg, heightened risk perception), they may be activated to take adaptive actions (eg, information seeking) to better equip themselves to cope with the threat, known as the danger control process. On the other hand, people might choose defensive avoidance (eg, information avoidance) to escape negative emotions and feelings, known as the fear control process. In line with this notion of the EPPM, some people engage in CIA to reduce uncomfortable feelings, especially when they perceive a high degree of cancer threat [ 34 ]. Several studies provide empirical evidence for this argument [ 11 ]. For example, Case et al [ 35 ] demonstrated that people tended to avoid or ignore threatening information to manage emotional states such as anxiety and fear. Vrinten et al [ 11 ] also found that CIA significantly increased as cancer worry escalated. Hence, in light of prior literature, this study postulates the following hypotheses:

  • Hypothesis 3: Cancer risk perception will be positively associated with CIA.
  • Hypothesis 4: Cancer worry will be positively associated with CIA.

OHIS, CIA, and Fatalistic Beliefs About Cancer Prevention

Fatalism, a deterministic outlook holding that one’s health is controlled by external forces and that there is therefore no need to engage in positive coping behaviors, has been viewed as a prominent barrier to cancer prevention and screening behaviors [ 5 , 36 , 37 ]. By definition [ 5 ], cancer fatalism can be understood as a negative behavioral tendency (eg, seeing no need to cope and refusing coping behaviors) in the face of cancer threats [ 16 , 18 ]. Although some studies have approached cancer fatalism as a concept embedded in culture, primarily investigating its influence on information behaviors [ 12 , 38 ], we argue that cancer fatalism is a malleable concept that can be modified through media learning and health education, such as information and knowledge acquisition from media use. In fact, numerous empirical studies have provided strong evidence of the positive impact of educational attainment and health literacy in reducing cancer fatalism [ 10 , 39 - 42 ]. These findings suggest that diverse information behaviors (ie, seeking and avoidance), involving varying levels of media exposure and educational opportunities, can make a significant difference in shaping the development of cancer fatalism. Thus, it is both reasonable and essential to examine the relationship from information behaviors to cancer fatalism. It is worth noting that both cancer-specific information seeking and general health information seeking are beneficial. While cancer-specific information seeking aids in gaining cancer-related knowledge, general health information seeking is effective in narrowing disparities in health literacy, thereby reducing cancer fatalism [ 43 ]. Particularly in this digital era, the internet offers convenient access to health information. With useful health information, patients gain a better understanding of their health conditions, prescription drugs, treatments, and disease management options, which can empower them and reduce cancer fatalism [ 41 , 44 ]. Health information exchange with physicians or peers on the internet may also encourage individuals to take a more active role in preventive behaviors, lowering fatalistic beliefs about cancer [ 43 ]. By contrast, if people intentionally avoid cancer information, they may lose opportunities to receive information relevant to them, increasing health uncertainties and cancer fatalism [ 33 ].

Our reasoning aligns well with the theoretical standpoint of the EPPM, which holds that the 2 information responses (adaptive vs maladaptive) that individuals adopt, driven by their threat appraisals, lead to disparities in outcome variables such as one’s threat-coping tendency. Contextualized in this study, individuals who take adaptive actions by engaging in health information seeking tend to be well equipped with cancer-related knowledge, which in turn helps counter fatalistic beliefs about cancer prevention, whereas individuals who take defensive steps by engaging in information avoidance are more likely to be vulnerable to cancer fatalism because they refuse coping behaviors [ 18 ]. Several empirical studies have also illustrated that CIA can lead to fatalistic beliefs about cancer and less frequent cancer screenings [ 10 , 17 ]. Hence, based on prior literature, we proposed the following hypotheses:

  ‱ Hypothesis 5: OHIS will be negatively associated with fatalistic beliefs about cancer prevention.
  ‱ Hypothesis 6: CIA will be positively associated with fatalistic beliefs about cancer prevention.

So far, this study has reviewed 2 well-established relationships linking 3 elements: cancer mental conditions, information behaviors, and cancer fatalism. Given this established 2-step relationship, the underlying dual pathways between cancer risk perception and cancer fatalism and between cancer worry and cancer fatalism are likely to be mediated by OHIS and CIA, which suggests the following hypotheses:

  • Hypothesis 7: OHIS will mediate (1) the relationship between cancer risk perception and cancer fatalism and (2) the relationship between cancer worry and cancer fatalism.
  • Hypothesis 8: CIA will mediate (1) the relationship between cancer risk perception and cancer fatalism and (2) the relationship between cancer worry and cancer fatalism.

Moderating Role of Perceived Usefulness of Online Cancer Information

Given the dynamic process of information seeking that involves interactions between information seekers and information platforms, we need to consider how information seekers perceive health information. Specifically, we investigated the moderating role of one’s perceived information usefulness, a vital information-carrier predictor of individuals’ information behavior [ 45 ]. Barbour et al [ 46 ] demonstrated that if people viewed health information as questionable and unclear, they tended to avoid such information to reduce stress and uncertainties despite their serious illnesses. A review study also concluded that the decision to seek or avoid cancer information was contingent upon situational factors, such as the usefulness of the information [ 33 ]. As Johnson [ 45 ] posited, information seekers are concerned about the content of the information. They put greater effort into seeking information that is deemed useful in coping with their cancer threats. Conversely, if they consider the information to be less effective, they may have a higher tendency to avoid it.

Moreover, echoing the EPPM [ 15 ], whether individuals engage in fear control (information avoidance) or danger control (information seeking) results from the interplay of 2 appraisals: threat (eg, severity and susceptibility) and efficacy (eg, response efficacy and self-efficacy). Specifically, response efficacy refers to the perception of whether the provided information or recommended response works in allaying the threat [ 34 ]. Particularly relevant to the information environment, useful cancer information is a typical resource that offers people informational and emotional strategies to cope with threats [ 47 ]. Therefore, conceptualizing the usefulness of cancer information as a manifestation of response efficacy, we expect, from the theoretical perspective of the EPPM, that the relationship between one’s cancer-related mental conditions and information behavior will be moderated by the perceived usefulness of cancer information. Accordingly, the following hypothesis is posited:

  • Hypothesis 9: The perceived usefulness of online cancer information will moderate (1) the relationship between cancer risk perception and OHIS, (2) the relationship between cancer risk perception and CIA, (3) the relationship between cancer worry and OHIS, and (4) the relationship between cancer worry and CIA.

Data and Participants

This study used cross-sectional data from the Health Information National Trends Survey (HINTS) in China (HINTS-China). Similar to the HINTS that has been implemented in the United States since 2003, the current HINTS-China was conducted in May 2017. HINTS-China is an international collaboration involving the National Cancer Institute, the Chinese Ministry of Health, and the Chinese Food and Drug Administration, in conjunction with George Mason University. It was initially established with Renmin University of China and has subsequently collaborated with Beijing Normal University [ 48 ]. A multistage stratified random sampling method was adopted, and a door-to-door survey method was used. Specifically, 2 cities were purposively chosen because of their representativeness: Beijing (representing a tier-1 city) and Hefei (representing a tier-2 city). Then, 1 urban district (representing an urban area) and 1 rural county (representing a rural area) were randomly selected in each of the 2 cities. Within each urban district and rural county, 1 subdistrict or township was randomly selected from 3 strata defined by level of economic development (high, medium, and low). A total of 4 residential neighborhoods were then randomly selected from each subdistrict or township, stratified by sex and age (for detailed sampling methodology, refer to the study by Zhao et al [ 49 ]).

A total of 3090 respondents completed the survey. In this study, we only included those who had internet access, as 1 focal variable was OHIS. In addition, patients with cancer were excluded from our sample because 1 key variable, cancer risk perception, measured people’s evaluation of the likelihood of getting cancer. Therefore, the final sample size in this study was 2358. The participants’ mean age was 33.98 (SD 10.88, range 18 to 60) years. In total, 60.3% (1421/2358) were female. More than half of the participants (1332/2358, 56.49%) obtained some college education or more. Less than a third (705/2358, 29.9%) earned monthly income >CNY 5000 (US$700). Approximately 94.44% (2227/2358) of the respondents had health insurance coverage, and 16% (377/2358) had at least 1 chronic condition. The average self-reported health condition was at the “good” level (mean 3.98, SD 0.76).

Ethical Considerations

The HINTS-China survey was approved by the institutional review board (IRB) at Beijing Normal University in 2017. Respondents who participated in the survey gave their written consent. The data were deidentified and publicly available [ 50 ]. The secondary data analysis of the HINTS-China data set in our study did not require separate IRB approval because research involving existing data that are publicly available or in which participants cannot be identified falls within the IRB exemption category [ 51 ]. This is also common practice in prior research using HINTS-China data [ 50 , 52 ].

Cancer Risk Perception

Drawing from prior research examining cancer risk perception [ 14 , 53 ], respondents were asked to indicate their judgment of the likelihood of getting cancer on a 5-point Likert scale (1= very unlikely and 5= very likely ): “Compared to the average person of your age and sex, how likely would you rate your chance of developing cancer sometime in your life?” (mean 2.32, SD 0.84).

Cancer Worry

Similar to prior studies using HINTS data [ 54 ], this study used a single item to ask participants to indicate to what extent they worried about getting cancer on a 5-point Likert scale (1= not at all and 5= extremely ): “How worried are you about getting cancer?” (mean 2.25, SD 1.00).

Perceived Usefulness of Cancer Information

Adapted from prior research using a single item from HINTS data [ 55 ], to examine the extent to which respondents considered online cancer information to be useful, participants were asked to rate the overall usefulness of online cancer information on a 4-point Likert scale (1= not at all useful and 4= very useful ; mean 2.35, SD 0.68).

OHIS Measure

To investigate the extent to which respondents sought general health information on the internet, we used 6 items, drawn from previous research [ 43 ], that asked participants whether they had carried out the following activities on the internet in the last 12 months (1= yes and 0= no ): (1) looked for a health care provider or information about hospitals, (2) looked for exercise, weight control, or fitness information, (3) looked for information about quitting smoking, (4) looked for health or medical information for someone else, (5) asked about and exchanged health-related information and topics, and (6) downloaded health or medical information. A summative scale of these 6 dichotomous items was created (mean 1.03, SD 0.99).
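
As an illustration of how such a summative index is typically built, the minimal R sketch below sums the 6 dichotomous items; the data frame name (hints) and item column names are hypothetical placeholders, not the actual HINTS-China variable labels.

```r
# Minimal sketch (hypothetical column names): the OHIS index is the sum of
# 6 yes/no items coded 1 = yes and 0 = no, yielding scores from 0 to 6.
ohis_items <- c("ohis_provider", "ohis_fitness", "ohis_quit_smoking",
                "ohis_for_others", "ohis_exchange", "ohis_download")
hints$OHIS <- rowSums(hints[, ohis_items])

# Quick check against the reported descriptives (mean 1.03, SD 0.99)
c(mean = mean(hints$OHIS, na.rm = TRUE), sd = sd(hints$OHIS, na.rm = TRUE))
```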

CIA Measure

To estimate the extent to which people intentionally avoid cancer information, participants were asked to report their agreement with the following statement on a 5-point scale (1= strongly disagree and 5= strongly agree ) adapted from prior research [ 9 ]: “I avoid being exposed to cancer information” (mean 2.76, SD 0.98).

Fatalistic Beliefs About Cancer Prevention

Drawing from previous studies using HINTS data [ 37 , 41 ], participants were asked to evaluate their agreement with 3 statements about fatalistic beliefs concerning cancer prevention on a 5-point Likert scale (1= strongly disagree and 5= strongly agree ): (1) “There is not much I can do to lower my chances of getting cancer,” (2) “It seems that everything causes cancer,” and (3) “When I think about cancer, I automatically think about death.” (mean 3.16, SD 0.74; Cronbach α=0.74).
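
For readers reproducing this kind of scale check, the short R sketch below computes Cronbach α for a 3-item measure; the item names and data frame are assumed placeholders rather than the survey’s actual labels.

```r
# Minimal sketch (hypothetical item names): internal consistency of the
# 3 fatalism items, each rated on a 5-point agreement scale.
# Requires the psych package; the study reports Cronbach alpha = 0.74.
fatal_items <- hints[, c("fatal_1", "fatal_2", "fatal_3")]
psych::alpha(fatal_items)
```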

Control Variables

Control variables included social demographics such as age, sex, education, and personal monthly income. In addition, as this study investigated people’s cancer-related beliefs, health-related variables were also controlled, including participants’ general health status (1= very poor and 5= very good ), chronic disease conditions (1= yes and 0= no ), health insurance coverage (1= yes and 0= no ), and family cancer history (1= yes and 0= no ).

Analytic Approach

To examine the hypothesized model, structural equation modeling was conducted using the lavaan package in R. Maximum likelihood estimation was adopted. Following widely used combinational rules and prior research [ 56 ], the hypothesized model was considered to have a good fit if (1) the Tucker-Lewis index or comparative fit index was ≥0.95 and the standardized root mean square residual (SRMR) was ≤0.08, or, alternatively, (2) the root mean square error of approximation was <0.05 and the SRMR was <0.06. To assess the mediating effects (ie, hypothesis 7 and hypothesis 8), the bias-corrected bootstrapping method was used to estimate the CIs of the indirect paths [ 57 ]. A 95% CI that does not include 0 indicates a statistically significant mediating effect. To examine the moderating effect of the perceived usefulness of cancer information (ie, hypothesis 9), interaction terms between the independent variables (ie, cancer risk perception and cancer worry) and the perceived usefulness of cancer information were created, and the 3 variables were mean-centered before forming the interaction terms to reduce multicollinearity.
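
To make these analytic steps concrete, the following R sketch specifies a simplified version of the hypothesized model in lavaan. It is not the authors’ original code: the variable names (risk, worry, useful, CIA, OHIS, fatal_1 to fatal_3) and the data frame hints are assumptions, and the demographic and health-related covariates are omitted for brevity (they would enter as additional predictors of the mediators and the outcome).

```r
library(lavaan)

# Mean-center the predictors and the moderator, then form product terms
# (this reduces multicollinearity between main effects and interactions).
hints$risk_c   <- as.numeric(scale(hints$risk,   scale = FALSE))
hints$worry_c  <- as.numeric(scale(hints$worry,  scale = FALSE))
hints$useful_c <- as.numeric(scale(hints$useful, scale = FALSE))
hints$risk_x_useful  <- hints$risk_c  * hints$useful_c
hints$worry_x_useful <- hints$worry_c * hints$useful_c

model <- '
  # Measurement model: latent cancer fatalism with 3 observed indicators
  fatalism =~ fatal_1 + fatal_2 + fatal_3

  # Mediators regressed on predictors, moderator, and interaction terms
  OHIS ~ a1*risk_c + a2*worry_c + useful_c + risk_x_useful + worry_x_useful
  CIA  ~ a3*risk_c + a4*worry_c + useful_c + risk_x_useful + worry_x_useful

  # Outcome regressed on mediators and predictors (direct paths)
  fatalism ~ b1*OHIS + b2*CIA + risk_c + worry_c

  # Indirect effects: danger control (via OHIS) vs fear control (via CIA)
  ind_risk_ohis  := a1 * b1
  ind_worry_ohis := a2 * b1
  ind_risk_cia   := a3 * b2
  ind_worry_cia  := a4 * b2
'

fit <- sem(model, data = hints, estimator = "ML",
           se = "bootstrap", bootstrap = 5000)

fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
parameterEstimates(fit, boot.ci.type = "bca.simple", standardized = TRUE)
```

Here cancer fatalism is modeled as a latent factor with 3 indicators, the labeled products (eg, a1*b1) define the indirect effects, and boot.ci.type="bca.simple" requests bias-corrected bootstrap CIs, mirroring the criterion that a 95% CI excluding 0 indicates a significant mediating effect.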

Structural Model and Path Coefficients

Table 1 shows the descriptive statistics and bivariate correlations for measured variables. Controlling for social demographics and health-related variables, the structural model showed an acceptable fit (χ2 (df=92)=254.4; P <.001; comparative fit index=0.95; Tucker-Lewis index=0.94; root mean square error of approximation=0.05; 90% CI 0.041 to 0.053; SRMR=0.04). As shown in Figure 2 , our findings revealed that cancer risk perception positively predicted OHIS (β=.08; P =.007). Similarly, cancer worry was positively related to OHIS (β=.10; P <.001), supporting hypothesis 1 and hypothesis 2. In addition, cancer worry was positively associated with CIA (β=.11; P <.001), whereas the results indicated a nonsignificant relationship between cancer risk perception and CIA (β=–.03; P =.23). Hence, hypothesis 4 was supported but not hypothesis 3. Moreover, the results showed that CIA was positively associated with fatalistic beliefs about cancer prevention (β=.29; P <.001); conversely, OHIS was negatively related to cancer fatalism (β=–.09; P =.003), supporting hypothesis 5 and hypothesis 6. In total, our hypothesized model explained 25.1% of the variance in cancer fatalism among the participants.


Mediation and Moderated Mediation

To assess the mediating effects, the bias-corrected bootstrapping method was used to estimate the CIs of the indirect paths. The results of the bootstrapped CIs, with 5000 resamples, showed that cancer risk perception indirectly reduced cancer fatalism through OHIS (95% CI –0.010 to –0.003) but not through CIA (95% CI –0.014 to 0.003). In addition, the results supported an indirect effect of cancer worry on cancer fatalism, as mediated by both OHIS (95% CI –0.012 to –0.004) and CIA (95% CI 0.009 to 0.025). Hence, hypothesis 7 and hypothesis 8b were supported, but not hypothesis 8a.
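
Assuming the fit object from the sketch in the Analytic Approach section, the bootstrapped CIs for the labeled indirect effects can be inspected as follows; whether each interval excludes 0 corresponds to the pattern of support described above.

```r
# Bias-corrected bootstrap CIs for the defined (":=") indirect effects.
pe <- parameterEstimates(fit, boot.ci.type = "bca.simple")
pe[pe$op == ":=", c("label", "est", "ci.lower", "ci.upper")]
```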

As for the moderating effects, the results revealed a main effect of the perceived usefulness of online cancer information on OHIS (β=.17; P <.001) but not on CIA (β=−.01; P =.58). More importantly, there was a significant interaction effect between cancer risk perception and perceived usefulness in predicting OHIS (β=.06; P =.01). The simple slope of the relationship between cancer risk perception and OHIS differed significantly from 0 when the perceived usefulness of cancer information was 1 SD above the mean (β=0.14; SE=0.05; P <.001) but not 1 SD below (β=0.06; SE=0.04; P =.11). This indicates that the positive association between cancer risk perception and OHIS was salient only among participants who perceived online cancer information to be of high usefulness and not among those who deemed the information to be of low usefulness ( Figure 3 ). However, there was no significant interaction effect between cancer risk perception and perceived usefulness in predicting CIA (β=−.04; P =.18).
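
A simple-slopes probe of this kind can be sketched as follows, again building on the hypothetical model object above: the slope of cancer risk perception on OHIS is evaluated with the centered moderator fixed at 1 SD above and below its mean.

```r
# Simple slopes of cancer risk perception on OHIS at +/- 1 SD of the
# centered usefulness moderator: slope = main effect + interaction * W.
est   <- parameterEstimates(fit)
b_rsk <- est$est[est$op == "~" & est$lhs == "OHIS" & est$rhs == "risk_c"]
b_int <- est$est[est$op == "~" & est$lhs == "OHIS" & est$rhs == "risk_x_useful"]
sd_u  <- sd(hints$useful_c, na.rm = TRUE)

slope_high <- b_rsk + b_int * sd_u   # perceived usefulness 1 SD above the mean
slope_low  <- b_rsk - b_int * sd_u   # perceived usefulness 1 SD below the mean
```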

Moreover, a significant interaction between cancer worry and information usefulness was observed in predicting OHIS (β=.10; P <.001). The simple slopes revealed that when online cancer information was perceived as highly useful, worried participants frequently acquired health information on the web (B=0.18; SE=0.04; P <.001). However, this conditional effect of cancer worry was not observed when online cancer information was perceived as being of low usefulness (B=0.02; SE=0.04; P =.61; Figure 4 ).

Furthermore, a significant interaction between cancer worry and perceived usefulness was detected in predicting CIA (β=−.07; P =.009). Specifically, the positive association between cancer worry and CIA existed only for people who rated the usefulness of online cancer information as low (B=0.16; SE=0.04; P <.001) but not for those who scored the usefulness as high (B=0.06; SE=0.04; P =.09; Figure 5 ). In sum, hypotheses 9a, 9c, and 9d were supported but not hypothesis 9b.

The results also revealed significant moderated mediation effects ( Table 2 ). A high level of perceived usefulness of cancer information strengthened the negative indirect effects of cancer risk perception and cancer worry on cancer fatalism through OHIS. By contrast, a lower level of perceived usefulness significantly intensified the positive indirect effect of cancer worry on cancer fatalism through CIA.
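
The moderated mediation estimates summarized here follow the same logic: the first-stage path is evaluated at high and low values of the moderator and multiplied by the second-stage path. A point-estimate sketch, reusing the objects from the earlier snippets, is given below; the bootstrapped CIs reported in Table 2 would be obtained by bootstrapping these quantities.

```r
# Conditional indirect effect of cancer worry on fatalism through CIA,
# with perceived usefulness at +/- 1 SD (point estimates only).
a_worry <- est$est[est$op == "~" & est$lhs == "CIA" & est$rhs == "worry_c"]
a_int   <- est$est[est$op == "~" & est$lhs == "CIA" & est$rhs == "worry_x_useful"]
b_cia   <- est$est[est$op == "~" & est$lhs == "fatalism" & est$rhs == "CIA"]

cond_ind_high <- (a_worry + a_int * sd_u) * b_cia  # high perceived usefulness
cond_ind_low  <- (a_worry - a_int * sd_u) * b_cia  # low perceived usefulness
```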

Principal Findings

This study reveals a dual-mediation pathway linking distinct cancer mental conditions to cancer fatalism, focusing on different information behaviors and considering the moderating role of the perceived usefulness of online cancer information. Findings from the HINTS-China data revealed that cancer risk perception and cancer worry were positively associated with OHIS. Consistent with previous studies [ 28 , 30 ], individuals who perceived a high susceptibility to getting cancer and felt worried about it tended to actively engage in OHIS, such as looking for exercise, weight control, or fitness information and exchanging health-related information on the internet. As such, the results suggest that both affective-cognitive (cancer worry) and cognitive (cancer risk perception) mental conditions can serve as driving forces for people’s self-protective behaviors, such as health information acquisition.

However, the results indicated a different relationship between cancer worry and CIA when compared to risk perception and CIA, such that cancer worry rather than risk perception was positively associated with CIA. This finding might suggest that unlike risk perception, which has been widely noted as a problem-solving mechanism that leads to active information seeking [ 13 , 31 ], cancer worry tends to increase both general health information seeking and cancer-specific information avoidance, with mixed findings in the literature [ 58 - 60 ]. On the one hand, the finding that cancer worry was positively associated with both OHIS and CIA suggests the operation of moderating factors (eg, message characteristics) that facilitate seeking behaviors in some circumstances but avoidance actions in other contexts. On the other hand, psychologically, cancer worry is closely related to negative emotions, such as fear and anxiety. As noted by uncertainty management theory, information avoidance serves as a way of managing uncertainty and providing an escape from negative emotions [ 61 ]. This avoidance behavior tends to be more pronounced when confronting threatening and complex cancer information that may bring about more confusion and mental discomfort, even though it might compromise treatment. Hence, the results highlight that cancer risk perception and cancer worry should not be treated as homogeneous entities or used interchangeably because of their varying functions and effects.

Our study also found that OHIS was negatively associated with cancer fatalism, while CIA was positively related to it. In accord with previous studies [ 8 , 41 ], health information seeking, particularly via the internet, offers people diverse formats and depths of information across various health topics, helps specify a diagnosis or treatment plan, and provides clarity about prognoses. All these outcomes contribute to individuals’ increased health literacy and cancer knowledge, which are critical in reducing individuals’ negative coping needs embedded in cancer fatalism. In addition, OHIS offers people more opportunities to interact with others in social media communities and support groups, providing a broad sense of social proof that many others are actively engaging in self-protective behaviors for cancer prevention [ 43 ]. These perceptions help reduce people’s cancer fatalism, especially in societies that tend toward collectivism, such as China. In contrast to this study’s findings about OHIS, the positive association between CIA and cancer fatalism implies a detrimental influence of CIA on cancer prevention. Consistent with previous studies [ 10 ], people who refused to be exposed to cancer information delayed the discovery of positive information, thus maintaining their biased perceptions of their actual risks and self-agency. This biased belief is closely related to individuals’ tendencies to avoid physicians, other forms of help, and preventive screening [ 60 , 62 ]. These behaviors exacerbate individuals’ health risks, especially for those who are vulnerable to cancer and for whom early detection is quite literally a life-or-death matter.

Another key finding pertains to the moderation effect. The results indicate that people only sought health information on the internet when they perceived it to be useful. If they deemed the information to be useless, they tended to avoid it despite their cancer worries. Such results align with the central postulate of the updated EPPM, the additive model, which suggests that higher levels of threat and efficacy each lead to a more favorable impact and that the effects of threat and efficacy combine additively [ 34 ]. In addition, the mediating effect of people’s information behavior on cancer fatalism was found to be contingent upon perceived information usefulness. This finding is consistent with the 3-stage model developed by Street [ 63 ], which highlights the vital role that positive experiences play in producing desired health outcomes from user-media-message interactions. Particularly in the context of China, researchers have long raised concerns about the problematic digital information environment and the negative influence of poor-quality health information, which is exacerbated by low levels of health literacy [ 64 ]. Therefore, positive media message characteristics (eg, information usefulness) are particularly important to encourage people to engage in more adaptive information behavior, better reap health benefits, and combat cancer. Useless and low-quality cancer information may make people frustrated and overwhelmed, dampening their information seeking and even spurring CIA that leads to cancer fatalism. Hence, the results reinforce a challenging but imperative public health goal: providing more useful, understandable, and high-quality cancer information for people in China, especially in the digital era.

Theoretical and Practical Implications

This study contributes new insights to inform future research on health-related information behavior and the EPPM. First, in contrast to some previous studies that focused primarily on either information seeking or information avoidance, a strength of this study is that it considers both information-related behaviors, which are of equal importance in understanding the development of fatalistic beliefs about cancer prevention. More to the point, this study broadens the scope of the EPPM by incorporating cancer fatalism, which reflects individuals’ negative behavioral coping tendencies, as a fear control response and by exploring its connection with both OHIS and CIA. This expansion helps elucidate the differing implications of these 2 distinct appraisals for fatalistic beliefs concerning cancer prevention. Second, by conceptualizing the perceived usefulness of online cancer information as a manifestation of response efficacy, this study adds a new perspective to the EPPM and to the literature on health-related information management. Third, building upon the cancer-related mental condition model [ 14 ], this study takes a step further to investigate how distinct cancer-related mental conditions differentially influence disparate information behaviors, which advances theory on the effects of cancer-related affective responses and cognitive thoughts on cancer communication. In addition, Witte [ 65 ] has demonstrated that the "danger control processes are primarily cognitive processes," whereas the "fear control processes are mainly emotional processes." By establishing the positive relationship between cancer worry (an affective-cognitive condition) and OHIS (a danger control process) as well as the positive relationship between cancer worry and CIA (a fear control process), this study contributes to the EPPM by highlighting the dual nature of cancer worry in engaging the 2 different appraisals proposed by the model. This paves the way for future EPPM research to thoroughly explore how various cancer-related mental conditions (eg, affective, cognitive, and affective-cognitive) may either motivate or inhibit individuals in safeguarding themselves against threats such as cancer, and under which conditions. This is particularly significant because the EPPM has traditionally focused on purely emotional appeals (eg, fear) or cognitions (eg, perceived susceptibility and perceived severity).

The findings also provide useful implications for cancer communication and control. First, cancer worry had both positive and negative influences in our model; thus, developers of future health campaigns aimed at increasing people’s risk perceptions should be cautious about unintended outcomes. They must be vigilant about negative affective responses toward cancer threats so as not to elicit excessive cancer worry that provokes an "ostrich effect." More tailored communication strategies are needed to promote rational thinking about cancer and personal risk, avoid inflating anxiety, and avert CIA and possible anxiety disorders [ 58 ]. Second, in clinical settings, it would be useful for physicians to identify and pay special attention to patients with high trait anxiety. Practitioners should help them attenuate unnecessary worry and anxiety through compassionate and personalized counseling. Instead of information avoidance, training in new coping skills (eg, breathing exercises, relaxation strategies, and mental imagery exercises) should be incorporated into health education and counseling. Given the moderating role of perceived information usefulness and the effectiveness of OHIS in reducing cancer fatalism, health educators are encouraged to disseminate useful, accurate, and feasible information with concrete skill sets that are easy to apply and effective in fighting cancer threats. Furthermore, considering the potential of OHIS to alleviate cancer fatalism, public health practitioners need to make efforts to promote information seeking behavior that informs and empowers people, particularly among groups who are vulnerable to cancer fatalism (eg, those with low educational attainment or low health literacy).

Limitations and Suggestions for Future Research

Several limitations are worth examining more closely in future research. First, the use of cross-sectional data precludes causal inference. To determine causality, longitudinal studies with panel data are encouraged to establish the temporal order of these relationships. Second, because of the use of secondary data, cancer worry, information usefulness, and CIA were each assessed with a single item. Although these measures have been frequently used in previous studies [ 28 , 54 , 55 ], future research would ideally use multiple-item scales to enhance content validity. Third, this study did not directly investigate what types of external stimuli trigger individuals’ perceived threats; it only examined the relationship between individuals’ perceived cancer threats and their subsequent coping responses. To expand upon the EPPM, it is essential for future research to use experimental methods to evaluate the message characteristics that can effectively induce adequate levels of risk perception, thereby encouraging adaptive actions. This endeavor holds significant promise for advancing our theoretical understanding of how various persuasive messages, including those designed to induce fear, are processed within the theoretical framework of the EPPM.

Moreover, given the specific scope of this study, our research model exclusively examined the relationship between 2 appraisals and a fear control outcome (ie, cancer fatalism), without delving into potential danger control outcomes, such as changes in belief, attitude, and behaviors concerning cancer prevention. Future research is encouraged to incorporate potential manifestations of danger control in providing a more comprehensive understanding of both fear control and danger control outcomes and their relationship with appraisals of threat and efficacy. Finally, cancer fatalism is a multidimensional construct that has been conceptualized differently across the cancer continuum [ 5 ]. A direct extension of this study would be to include other aspects of fatalism, such as fatalistic beliefs about cancer treatments among survivors of cancer who are receiving treatments.

Conclusions

Cancer is a threatening health problem and an increasing burden on a global scale. Although a large proportion of cancer cases can be prevented and cured, cancer fatalism is one of the major obstacles to cancer prevention and control. This study provides evidence that OHIS is an effective mechanism for reducing cancer fatalism and that minimizing CIA is necessary to allay fatalistic beliefs about cancer prevention. To facilitate healthy behavior, apart from disseminating more useful cancer information that assists people in coping with cancer threats, future endeavors should heighten rational risk perception while being cautious about elevating unnecessary cancer worry that may prompt information avoidance.

Acknowledgments

The authors are grateful to the Health Information National Trends Survey in China research program for sharing the data used for this study. This research was supported by the Shanghai Philosophy and Social Sciences Program (grant 2022ZXW005).

Data Availability

The data sets generated or analyzed during this study are available from the corresponding author upon reasonable request.

Authors' Contributions

LZ and SJ conceptualized the study. LZ conducted data analysis and wrote the original draft. LZ and SJ revised the manuscript. Both authors reviewed and approved the final version of the manuscript.

Conflicts of Interest

None declared.

  • Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. May 2021;71(3):209-249. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Xia C, Dong X, Li H, Cao M, Sun D, He S, et al. Cancer statistics in China and United States, 2022: profiles, trends, and determinants. Chin Med J (Engl). Feb 09, 2022;135(5):584-590. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Soerjomataram I, Bray F. Planning for tomorrow: global cancer incidence and the role of prevention 2020-2070. Nat Rev Clin Oncol. Oct 2021;18(10):663-672. [ CrossRef ] [ Medline ]
  • Niederdeppe J, Fowler EF, Goldstein K, Pribble J. Does local television news coverage cultivate fatalistic beliefs about cancer prevention? J Commun. Jun 01, 2010;60(2):230-253. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Powe BD, Finnie R. Cancer fatalism: the state of the science. Cancer Nurs. Dec 2003;26(6):454-65; quiz 466-7. [ CrossRef ] [ Medline ]
  • Jun J, Oh KM. Asian and Hispanic Americans' cancer fatalism and colon cancer screening. Am J Health Behav. Mar 2013;37(2):145-154. [ CrossRef ] [ Medline ]
  • Cheng H, Sit JW, Twinn SF, Cheng KK, Thorne S. Coping with breast cancer survivorship in Chinese women: the role of fatalism or fatalistic voluntarism. Cancer Nurs. 2013;36(3):236-244. [ CrossRef ] [ Medline ]
  • Lee CJ, Niederdeppe J, Freres D. Socioeconomic disparities in fatalistic beliefs about cancer prevention and the internet. J Commun. Dec 2012;62(6):972-990. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Miles A, Voorwinden S, Chapman S, Wardle J. Psychologic predictors of cancer information avoidance among older adults: the role of cancer fear and fatalism. Cancer Epidemiol Biomarkers Prev. Aug 2008;17(8):1872-1879. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Emanuel AS, Kiviniemi MT, Howell JL, Hay JL, Waters EA, Orom H, et al. Avoiding cancer risk information. Soc Sci Med. Dec 2015;147:113-120. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vrinten C, Boniface D, Lo SH, Kobayashi LC, von Wagner C, Waller J. Does psychosocial stress exacerbate avoidant responses to cancer information in those who are afraid of cancer? A population-based survey among older adults in England. Psychol Health. Jan 2018;33(1):117-129. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chae J, Lee CJ, Kim K. Prevalence, predictors, and psychosocial mechanism of cancer information avoidance: findings from a national survey of U.S. adults. Health Commun. Mar 2020;35(3):322-330. [ CrossRef ] [ Medline ]
  • Griffin RJ, Dunwoody S, Neuwirth K. Proposed model of the relationship of risk information seeking and processing to the development of preventive behaviors. Environ Res. Feb 1999;80(2 Pt 2):S230-S245. [ CrossRef ] [ Medline ]
  • Chae J. A three-factor cancer-related mental condition model and its relationship with cancer information use, cancer information avoidance, and screening intention. J Health Commun. 2015;20(10):1133-1142. [ CrossRef ] [ Medline ]
  • Witte K. Putting the fear back into fear appeals: the extended parallel process model. Commun Monogr. 1992;59(4):329-349. [ CrossRef ]
  • Hong SJ. Linking environmental risks and cancer risks within the framework of genetic-behavioural causal beliefs, cancer fatalism, and macrosocial worry. Health Risk Soc. Nov 29, 2020;22(7-8):379-402. [ CrossRef ]
  • Ramondt S, Ramírez AS. Fatalism and exposure to health information from the media: examining the evidence for causal influence. Ann Int Commun Assoc. 2017;41(3-4):298-320. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rippetoe PA, Rogers RW. Effects of components of protection-motivation theory on adaptive and maladaptive coping with a health threat. J Pers Soc Psychol. 1987;52(3):596-604. [ CrossRef ]
  • Johnson JD, Case DO. Health Information Seeking. New York, NY. Peter Lang Publishing Group; 2012.
  • Goodall C, Sabo J, Cline R, Egbert N. Threat, efficacy, and uncertainty in the first 5 months of national print and electronic news coverage of the H1N1 virus. J Health Commun. 2012;17(3):338-355. [ CrossRef ] [ Medline ]
  • Slater MD, Goodall CE, Hayes AF. Self-reported news attention does assess differential processing of media content: an experiment on risk perceptions utilizing a random sample of US local crime and accident news. J Commun. Mar 2009;59(1):117-134. [ CrossRef ]
  • Slater MD, Hayes AF, Ford VL. Examining the moderating and mediating roles of news exposure and attention on adolescent judgments of alcohol-related risks. Commun Res. 2007;34(4):355-381. [ CrossRef ]
  • Goodall CE, Reed P. Threat and efficacy uncertainty in news coverage about bed bugs as unique predictors of information seeking and avoidance: an extension of the EPPM. Health Commun. 2013;28(1):63-71. [ CrossRef ] [ Medline ]
  • Ivanova A, Kvalem IL. Psychological predictors of intention and avoidance of attending organized mammography screening in Norway: applying the Extended Parallel Process Model. BMC Womens Health. Feb 15, 2021;21(1):67. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Roberto AJ, Goodall CE. Using the extended parallel process model to explain physicians' decisions to test their patients for kidney disease. J Health Commun. Jun 2009;14(4):400-412. [ CrossRef ] [ Medline ]
  • Borkovec TD. The nature, functions, and origins of worry. In: Davey GC, Tallis F, editors. Worrying: Perspectives on Theory, Assessment and Treatment. Hoboken, NJ. John Wiley & Sons; 1994.
  • McQueen A, Vernon SW, Meissner HI, Rakowski W. Risk perceptions and worry about cancer: does gender make a difference? J Health Commun. 2008;13(1):56-79. [ CrossRef ] [ Medline ]
  • Nan X, Underhill J, Jiang H, Shen H, Kuch B. Risk, efficacy, and seeking of general, breast, and prostate cancer information. J Health Commun. 2012;17(2):199-211. [ CrossRef ] [ Medline ]
  • Yoo SW, Kim J, Lee Y. The effect of health beliefs, media perceptions, and communicative behaviors on health behavioral intention: an integrated health campaign model on social media. Health Commun. Jan 2018;33(1):32-40. [ CrossRef ] [ Medline ]
  • Griffin RJ, Neuwirth K, Dunwoody S, Giese J. Information sufficiency and risk communication. Media Psychol. 2004;6(1):23-61. [ CrossRef ]
  • Kahlor L. PRISM: a planned risk information seeking model. Health Commun. Jun 2010;25(4):345-356. [ CrossRef ] [ Medline ]
  • Moser RP, Arndt J, Han PK, Waters EA, Amsellem M, Hesse BW. Perceptions of cancer as a death sentence: prevalence and consequences. J Health Psychol. Dec 2014;19(12):1518-1524. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sweeny K, Melnyk D, Miller W, Shepperd JA. Information avoidance: who, what, when, and why. Rev Gen Psychol. 2010;14(4):340-353. [ CrossRef ]
  • Witte K, Allen M. A meta-analysis of fear appeals: implications for effective public health campaigns. Health Educ Behav. Oct 2000;27(5):591-615. [ CrossRef ] [ Medline ]
  • Case DO, Andrews JE, Johnson JD, Allard SL. Avoiding versus seeking: the relationship of information seeking to avoidance, blunting, coping, dissonance, and related concepts. J Med Libr Assoc. Jul 2005;93(3):353-362. [ FREE Full text ] [ Medline ]
  • Miles A, Rainbow S, von Wagner C. Cancer fatalism and poor self-rated health mediate the association between socioeconomic status and uptake of colorectal cancer screening in England. Cancer Epidemiol Biomarkers Prev. Oct 2011;20(10):2132-2140. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Niederdeppe J, Levy AG. Fatalistic beliefs about cancer prevention and three prevention behaviors. Cancer Epidemiol Biomarkers Prev. May 2007;16(5):998-1003. [ CrossRef ] [ Medline ]
  • Lu L, Liu J, Yuan YC. Cultural differences in cancer information acquisition: cancer risk perceptions, fatalistic beliefs, and worry as predictors of cancer information seeking and avoidance in the U.S. and China. Health Commun. Oct 2022;37(11):1442-1451. [ CrossRef ] [ Medline ]
  • Chung JE, Lee CJ. The impact of cancer information online on cancer fatalism: education and eHealth literacy as moderators. Health Educ Res. Dec 01, 2019;34(6):543-555. [ CrossRef ] [ Medline ]
  • Fleary SA, Paasche-Orlow MK, Joseph P, Freund KM. The relationship between health literacy, cancer prevention beliefs, and cancer prevention behaviors. J Cancer Educ. Oct 2019;34(5):958-965. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kobayashi LC, Smith SG. Cancer fatalism, literacy, and cancer information seeking in the American public. Health Educ Behav. Aug 2016;43(4):461-470. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Morris NS, Field TS, Wagner JL, Cutrona SL, Roblin DW, Gaglio B, et al. The association between health literacy and cancer-related attitudes, behaviors, and knowledge. J Health Commun. 2013;18 Suppl 1(Suppl 1):223-241. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lee CJ, Chae J. An initial look at the associations of a variety of health-related online activities with cancer fatalism. Health Commun. Nov 2016;31(11):1375-1384. [ CrossRef ] [ Medline ]
  • Lee SY, Hwang H, Hawkins R, Pingree S. Interplay of negative emotion and health self-efficacy on the use of health information and its outcomes. Commun Res. 2008;35(3):358-381. [ CrossRef ]
  • Johnson JD. Cancer-related Information Seeking. New York, NY. Hampton Press; 1997.
  • Barbour JB, Rintamaki LS, Ramsey JA, Brashers DE. Avoiding health information. J Health Commun. 2012;17(2):212-229. [ CrossRef ] [ Medline ]
  • Brashers DE, Goldsmith DJ, Hsieh E. Information seeking and avoiding in health contexts. Hum Commun Res. Apr 2002;28(2):258-271. [ CrossRef ]
  • Yang Y, Yu G, Pan J, Kreps GL. Public trust in sources and channels on judgment accuracy in food safety misinformation with the moderation effect of self‐affirmation: evidence from the HINTS‐China database. World Med Health Policy. Aug 24, 2022;15(2):148-162. [ CrossRef ]
  • Zhao X, Mao Q, Kreps GL, Yu G, Li Y, Chou SW, et al. Cancer information seekers in China: a preliminary profile. J Health Commun. 2015;20(5):616-626. [ CrossRef ] [ Medline ]
  • Chang A, Schulz PJ, Jiao W, Yu G, Yang Y. Media source characteristics regarding food fraud misinformation according to the Health Information National Trends Survey (HINTS) in China: comparative study. JMIR Form Res. Mar 16, 2022;6(3):e32302. [ FREE Full text ] [ CrossRef ] [ Medline ]
  ‱ NUS-IRB’S amended exemption categories for social, behavioural and educational research (SBER). National University of Singapore Institutional Review Board. URL: https://fass.nus.edu.sg/cnm/wp-content/uploads/sites/2/2021/06/SBER-Exemption-Categories-2020-08-25.pdf [accessed 2024-03-22]
  • Zhang L, Jiang S. Linking health information seeking to patient-centered communication and healthy lifestyles: an exploratory study in China. Health Educ Res. Apr 12, 2021;36(2):248-260. [ CrossRef ] [ Medline ]
  • Chae J, Lee CJ. The psychological mechanism underlying communication effects on behavioral intention: focusing on affect and cognition in the cancer context. Commun Res. 2019;46(5):597-618. [ CrossRef ]
  • Chen Y, Yang Q. How do cancer risk perception, benefit perception of quitting, and cancer worry influence quitting intention among current smokers: a study using the 2013 HINTS. J Subst Use. Jan 27, 2017;22(5):555-560. [ CrossRef ]
  • Rains SA. Perceptions of traditional information sources and use of the world wide web to seek health information: findings from the health information national trends survey. J Health Commun. 2007;12(7):667-680. [ CrossRef ] [ Medline ]
  • Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling Multidiscip J. 1999;6(1):1-55. [ CrossRef ]
  • Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY. Guilford Publications; 2013.
  • Borkovec TD, Alcaine OM, Behar E. Avoidance theory of worry and generalized anxiety disorder. In: Heimberg RG, Turk CL, Mennin DS, editors. Generalized Anxiety Disorder: Advances in Research and Practice. New York, NY. Guilford Press; 2004.
  • Ferrer RA, Portnoy DB, Klein WM. Worry and risk perceptions as independent and interacting predictors of health protective behaviors. J Health Commun. 2013;18(4):397-409. [ CrossRef ] [ Medline ]
  • Persoskie A, Ferrer RA, Klein WM. Association of cancer worry and perceived risk with doctor avoidance: an analysis of information avoidance in a nationally representative US sample. J Behav Med. Oct 2014;37(5):977-987. [ CrossRef ] [ Medline ]
  • Brashers DE. Communication and uncertainty management. J Commun. Sep 2001;51(3):477-497. [ CrossRef ]
  • Kim HK, Lwin MO. Cultural determinants of cancer fatalism and cancer prevention behaviors among Asians in Singapore. Health Commun. Jul 2021;36(8):940-949. [ CrossRef ] [ Medline ]
  • Street RLJ. Mediated consumer-provider communication in cancer care: the empowering potential of new technologies. Patient Educ Couns. May 2003;50(1):99-104. [ CrossRef ] [ Medline ]
  • Jiang S, Street RL. Pathway linking internet health information seeking to better health: a moderated mediation study. Health Commun. Aug 2017;32(8):1024-1031. [ CrossRef ] [ Medline ]
  • Witte K. Fear control and danger control: a test of the extended parallel process model (EPPM). Commun Monogr. Jun 02, 2009;61(2):113-134. [ CrossRef ]

Abbreviations

CIA: cancer information avoidance
EPPM: extended parallel process model
HINTS: Health Information National Trends Survey
IRB: institutional review board
OHIS: online health information seeking
SRMR: standardized root mean square residual

Edited by T de Azevedo Cardoso; submitted 26.05.23; peer-reviewed by JS Tham, L Chen; comments to author 07.10.23; revised version received 27.10.23; accepted 25.04.24; published 31.05.24.

©Lianshan Zhang, Shaohai Jiang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.05.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

IMAGES

  1. Intro to research methods midterm Flashcards

    research article 1 questionnaire quizlet

  2. www.newsmoor.com: Questionnaire Sample- Questionnaire Sample For

    research article 1 questionnaire quizlet

  3. 30+ Questionnaire Templates (Word) ᐅ TemplateLab

    research article 1 questionnaire quizlet

  4. 5.2 Fundamentals of Survey Research

    research article 1 questionnaire quizlet

  5. Examples Of Questionnaires For Students

    research article 1 questionnaire quizlet

  6. Ch. 7 Preliminary Steps in Research Flashcards

    research article 1 questionnaire quizlet

VIDEO

  1. biological width (part 4 ) fixed prothodontics(tips and tricks)

  2. Research Questionnaire Give Away # 4 l How To Find Questionnaire l How To Research Questionnaire

  3. Exp19_Word_Intro_CapAssessment_Research

  4. biological width (part 5 ) fixed prothodontics(tips and tricks)

  5. Practical Research 1 Module 1 Answers & Explanations

  6. Research Methods: Interviews Vs Questionnaire

COMMENTS

  1. Research article Flashcards

    Explains what was found (or how successful the study was), and any problems encountered by the researchers. If different from the abstract, go with the information given in the Discussion. Study with Quizlet and memorize flashcards containing terms like Confirm that it is a scholarly article, The article should clearly state that the author (s ...

  2. Exam 1: Research Articles

    Article should be published within the last 5 years and at least one of the articles should be a nurse. It is a generic (Circular) process that describes part of the thinking and doing of solving a problem. 1. define the problem and cause. 2. generate alternative solutions. 3.

  3. Research Quiz #1 (Chapters 1-4) Flashcards

    evidence hierarchy for rating levels of evidence, associated with the study's design. The extent to which a study's design, implementation and analysis minimizes bias. Quality. Research Quiz #1 (Chapters 1-4) Abstract. Click the card to flip 👆. A short, comprehensive synopsis or summary of a study at the beginning of an article.

  4. research exam 1 practice questions Flashcards

    Study with Quizlet and memorize flashcards containing terms like Scientific research emphasizes the obtainment of _____ evidence, in which kind of research are data gathered about a phenomenon before an explanation is hypothesized or suggested, In which kind of research the first step is to start with research question backed by a theoretical framework and then collect data? and more.

  5. Research Practice Questions Flashcards

    Study with Quizlet and memorize flashcards containing terms like In which section of a research report does the author summarize the study's background, methods, findings, and conclusions? a.) The introduction b.) The discussion c.) The results sections d.) The abstract, In assessing the degree to which the findings of a quantitative study are accurate and valid, which dimension would be ...

  6. Quiz 1

    Science is a process, not an outcome. Method. Results. Don't know? 10 of 10. Quiz yourself with questions and answers for Quiz 1, so you can be ready for test day. Explore quizzes and practice tests created by teachers and students or create one from your course material.

  7. Nursing Research Nursing Test Bank and Practice Questions ...

    These exams offer a rigorous question set to assess your understanding, prepare you for actual examinations, and benchmark your performance. You're given 2 minutes per item. For Challenge Exams, click on the "Start Quiz" button to start the quiz. Complete the quiz: Ensure that you answer the entire quiz.

  8. Formulation of Research Question

    Abstract. Formulation of research question (RQ) is an essentiality before starting any research. It aims to explore an existing uncertainty in an area of concern and points to a need for deliberate investigation. It is, therefore, pertinent to formulate a good RQ. The present paper aims to discuss the process of formulation of RQ with stepwise ...

  8. Conducting a Literature Review: Research Question

    Your problem statement or research question should interest the reader, describe exactly what you intend to show, and explain why your problem is worth addressing. A good problem statement or research question comes from a broad subject area that interests you and is narrow enough to allow you to become a local expert on it.

  9. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" (Check & Schutt, 2012, p. 160). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various instruments, and it can draw on quantitative or qualitative strategies, or both.

  10. Multiple choice quiz

    A chapter-by-chapter self-test. One sample item asks how a literature review is best described, with one option being "a list of relevant articles and other published material you have read about your topic, describing the content of each source."

  11. Designing a Questionnaire for a Research Paper: A Comprehensive Guide (PDF)

    Covers writing questions and building the construct of the questionnaire, along with the need to pre-test and finalize the questionnaire before conducting the survey. Keywords: questionnaire, academic survey, questionnaire design, research methodology. The questionnaire, as the heart of the survey, is based on a set of questions.

  12. A Practical Guide to Writing Quantitative and Qualitative Research

    Scientific research is usually initiated by posing evidence-based research questions, which are then explicitly restated as hypotheses. The hypotheses provide directions to guide the study, its solutions, explanations, and expected results. Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes.

  13. Designing and validating a research questionnaire

    The quality and accuracy of data collected using a questionnaire depend on how it is designed, used, and validated. This two-part series discusses how to design (part 1) and how to use and validate (part 2) a research questionnaire, emphasizing that questionnaires seek to gather information from other people.

  14. Question and Questionnaire Design (PDF)

    Guidelines include: (1) early questions should be easy and pleasant to answer and should build rapport between the respondent and the researcher; (2) questions at the very beginning of a questionnaire should explicitly address the topic of the survey, as it was described to the respondent prior to the interview; and (3) questions on the same topic should be grouped together.

  15. Designing, Conducting, and Reporting Survey Studies: A Primer

    Summarizes Burns et al. (2008), a guide for the design and conduct of self-administered surveys of clinicians. The guide includes statements on designing, conducting, and reporting web- and non-web-based surveys of clinicians' knowledge, attitudes, and practice; the statements are based on a literature review rather than the Delphi method.

  16. Quizlet, a tool for teaching and learning

    Quizlet has become one of the most widely used tools for teaching and learning. When 15-year-old Andrew Sutherland created a software program in 2005 to help him study 111 French terms for a test on animals, little did he imagine that the program would eventually become one of the fastest-growing free education tools, with 30 million ...

  17. Survey research

    Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930s-40s by sociologist Paul Lazarsfeld.

  18. Behind the Numbers: Questioning Questionnaires

    Based on our observations of participants' spontaneous thoughts and confusions as they filled in questionnaires on "leadership" and "teamwork", we draw attention to hidden problems in much organizational research. Many respondents found measures ambiguous, irrelevant, or misleading.

  19. Primary Research

    Primary research is any research that you conduct yourself. It can be as simple as a two-question survey or as in-depth as a years-long longitudinal study; the key is that the data must be collected firsthand by you. Primary research is often used to supplement or strengthen existing secondary research.

  20. Frontiers

    A study synthesizing research on the impact of Quizlet on learners' vocabulary learning achievement, retention, and attitude, with a focus on Quizlet's implementation in language education.

  21. Reducing respondents' perceptions of bias in survey research

    Survey research has become increasingly challenging: in many nations, response rates have declined steadily for decades, and the costs and time involved in collecting survey data have risen with them (Connelly et al., 2003; Curtin et al., 2005; Keeter et al., 2017). Still, social surveys are a cornerstone of social science research and are routinely used by government and the private sector.

  22. Journal of Medical Internet Research

    Background: Reducing cancer fatalism is essential because of its detrimental impact on cancer-related preventive behaviors, yet little is known about the factors that influence individuals' cancer fatalism in China. Objective: With the extended parallel process model as its general basis, the study examines how distinct cancer-related mental conditions (risk perception and worry) and other factors influence cancer fatalism.