A Consensus-Based Checklist for Reporting of Survey Studies (CROSS)

  • Research and Reporting Methods
  • Published: 22 April 2021
  • Volume 36, pages 3179–3187 (2021)

Keywords: reporting guidelines; survey research

  • Akash Sharma MBBS   ORCID: orcid.org/0000-0002-6822-4946 1 , 2   na1 ,
  • Nguyen Tran Minh Duc MD   ORCID: orcid.org/0000-0002-9333-7539 2 , 3   na1 ,
  • Tai Luu Lam Thang MD   ORCID: orcid.org/0000-0003-1062-2463 2 , 4 ,
  • Nguyen Hai Nam MD   ORCID: orcid.org/0000-0001-5184-6936 2 , 5 ,
  • Sze Jia Ng MD   ORCID: orcid.org/0000-0001-5353-6499 2 , 6 ,
  • Kirellos Said Abbas MBCH   ORCID: orcid.org/0000-0003-0339-9339 2 , 7 ,
  • Nguyen Tien Huy MD, PhD   ORCID: orcid.org/0000-0002-9543-9440 8 ,
  • Ana Marušić MD, PhD   ORCID: orcid.org/0000-0001-6272-0917 9 ,
  • Christine L. Paul PhD 10 ,
  • Janette Kwok MBBS   ORCID: orcid.org/0000-0003-0038-1897 11 ,
  • Juntra Karbwang MD, PhD 12 ,
  • Chiara de Waure MD, MSc, PhD   ORCID: orcid.org/0000-0002-4346-1494 13 ,
  • Frances J. Drummond PhD   ORCID: orcid.org/0000-0002-7802-776X 14 ,
  • Yoshiyuki Kizawa MD, PhD   ORCID: orcid.org/0000-0003-2456-5092 15 ,
  • Erik Taal PhD   ORCID: orcid.org/0000-0002-9822-4488 16 ,
  • Joeri Vermeulen MSN, CM   ORCID: orcid.org/0000-0002-9568-3208 17 , 18 ,
  • Gillian H. M. Lee PhD   ORCID: orcid.org/0000-0002-6192-4923 19 ,
  • Adam Gyedu MD, MPH   ORCID: orcid.org/0000-0002-4186-2403 20 ,
  • Kien Gia To PhD   ORCID: orcid.org/0000-0001-5038-5584 21 ,
  • Martin L. Verra PhD   ORCID: orcid.org/0000-0002-3933-8020 22 ,
  • Évelyne M. Jacqz-Aigrain MD, PhD   ORCID: orcid.org/0000-0002-4285-7067 23 ,
  • Wouter K. G. Leclercq MD   ORCID: orcid.org/0000-0003-1159-1857 24 ,
  • Simo T. Salminen PhD 25 ,
  • Cathy Donald Sherbourne PhD 26 ,
  • Barbara Mintzes PhD   ORCID: orcid.org/0000-0002-8671-915X 27 ,
  • Sergi Lozano PhD   ORCID: orcid.org/0000-0003-1895-9327 28 ,
  • Ulrich S. Tran DSc   ORCID: orcid.org/0000-0002-6589-3167 29 ,
  • Mitsuaki Matsui MD, MSc, PhD   ORCID: orcid.org/0000-0003-4075-1266 12 &
  • Mohammad Karamouzian DVM, MSc, PhD candidate   ORCID: orcid.org/0000-0002-5631-4469 30 , 31  


INTRODUCTION

A survey is a list of questions aiming to extract a set of desired data or opinions from a particular group of people. 1 Surveys can be administered more quickly than some other methods of data gathering and facilitate data collection from a large number of participants. Numerous questions can be included in a survey, allowing flexible evaluation of several research areas, such as analysis of risk factors, treatment outcomes, disease trends, cost-effectiveness of care, and quality of life. Surveys can be conducted by phone, mail, face-to-face, or online using web-based software and applications. Online surveys can reduce or remove geographical constraints and increase the validity, reliability, and statistical power of studies. Moreover, online surveys facilitate rapid survey administration as well as data collection and analysis. 2

Surveys are frequently used in a variety of research areas. For example, a PubMed search of the keyword “survey” on January 7, 2021, generated over 1,519,000 results. These studies serve a number of purposes, including but not limited to opinion polls, trend analyses, evaluation of policies, and measuring the prevalence of diseases. 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 Although many surveys have been published in high-impact journals, comprehensive reporting guidelines for survey research are limited 13 , 14 and substantial variability and inconsistency can be identified in the reporting of survey studies. Indeed, different studies have presented varied survey designs and reported their results in non-systematic ways. 15 , 16 , 17

Evidence-based tools developed by experts could help streamline particular procedures that authors can follow to create reproducible and higher-quality studies. 18 , 19 , 20 Research studies with transparent and accurate reporting may be more reliable and could have a more significant impact on their potential audience. 19 However, that is often not the case when it comes to reporting research findings. For example, Moher et al. 20 reported that, although over 63,000 new studies are published in PubMed on a monthly basis, many publications suffer from inadequate reporting. Given the lack of standardization and poor quality of reporting, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network was created to help researchers publish high-impact health research. 20 Several important guidelines for various types of research studies have been created and listed on the EQUATOR website, including but not limited to the Consolidated Standards of Reporting Trials (CONSORT) for randomized controlled trials, Strengthening the Reporting of Observational studies in Epidemiology (STROBE) for observational studies, and Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) for systematic reviews and meta-analyses. The introduction of the PRISMA checklist in 2009 led to a substantial increase in the quality of systematic reviews and is a good example of how poor reporting, biases, and unsatisfactory results can be significantly addressed by implementing and following a validated reporting guideline. 21

The SUrvey Reporting GuidelinE (SURGE) 22 and the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) 23 are frequently recommended for reporting of non-web-based and web-based surveys, respectively. However, a report by Turk et al. found that many items of the SURGE and CHERRIES guidelines (e.g., development, description, and testing of the questionnaire; advertisement and administration of the questionnaire; sample representativeness; response rates; informed consent; and statistical analysis) had been missed by authors. The authors therefore concluded that a single universal guideline is needed as a standard quality-reporting tool for surveys. Moreover, these guidelines were not themselves developed through a structured process. For example, CHERRIES, which was developed in 2004, was not informed by a comprehensive literature review or a Delphi exercise. These steps are crucial in developing guidelines, as they help identify potential gaps and capture the opinions of different experts in the field. 20 , 24 While the SURGE checklist used a literature review to generate its items, it also lacks a Delphi exercise and is limited to self-administered postal surveys. There is also little information available about the experts involved in the development of these checklists. SURGE’s limited citations since its publication suggest that it is not commonly used by authors and not recommended by journals. Furthermore, even after the development of these guidelines (SURGE and CHERRIES), there has been limited improvement in the reporting of surveys. For example, Li et al. reviewed 102 surveys in top nephrology journals, found that the quality of surveys was suboptimal, and highlighted the need for new reporting guidelines to improve reporting quality and increase transparency. 25 Similarly, Shankar et al. found significant heterogeneity in reporting of radiology surveys published in major radiology journals and suggested the need for guidelines to increase the homogeneity and generalizability of survey results. 26 Duffett et al. also found several deficiencies in survey methodologies and reporting practices and suggested a need for establishing minimum reporting standards for survey studies. 27 Similar concerns regarding the quality of surveys have been raised in other medical fields. 28 , 29 , 30 , 31 , 32 , 33

Because of concerns regarding survey quality and the lack of well-developed guidelines, there is a need for a single comprehensive tool that can be used as a standard reporting checklist for survey research and address significant discrepancies in the reporting of survey studies. 13 , 25 , 26 , 27 , 28 , 31 , 32 The purpose of this study was to develop a universal checklist for both web- and non-web-based surveys. Firstly, we established a workgroup to search the literature for potential items that could be included in our checklist. Secondly, we collected information about experts in the field of survey research and emailed them an invitation letter. Lastly, we conducted three rounds of rating using the Delphi method.

METHODS

Our study was performed from January 2018 to December 2019 using the Delphi method. This method is encouraged for use in scientific research as a feasible and reliable approach to reach consensus among experts. 34 The process of checklist development included five phases: (i) planning; (ii) drafting of checklist items; (iii) consensus building using the Delphi method; (iv) dissemination of the guideline; and (v) maintenance of the guideline.

Planning Phase

In the planning phase, we established a workgroup, secured resources, reviewed the existing reporting guidelines, and drafted the plan and timeline of our project. To facilitate the development of the Checklist for Reporting of Survey Studies (CROSS), a reporting checklist workgroup was set up. This workgroup had seven members from five countries. The expert panel members were identified by searching original survey-based studies published between January 2004 and December 2016. The experts were selected based on their number of high-impact and highly cited publications using survey research methods. Furthermore, members of the EQUATOR Network and contributors to the PRISMA checklist were involved. Panel members’ information, such as current affiliation, email address, and the number of survey studies they had been involved in, was collected through their ResearchGate profiles (see Supplement 1). Lastly, a list of potential panel members was created and an invitation letter was emailed to every expert to inquire about their interest in participating in our study. Consenting experts received a follow-up email with a detailed explanation of the research objectives and the Delphi approach.

Drafting the Checklist

This phase generated a list of potential items for the checklist. It involved searching the literature for candidate items, establishing a draft checklist based on those items, and revising the checklist. Firstly, we conducted a literature review to identify survey studies published in major medical journals and extracted relevant information for drafting our potential checklist items (see Supplement 2 for a sample search strategy). Secondly, we searched the EQUATOR Network for previously published checklists for reporting of survey studies. Thirdly, three teams of two researchers independently extracted the potential items that could be included in our checklist. Lastly, our group members worked together to revise the checklist and remove any duplicates (Fig. 1). We discussed the importance and relevance of each potential item and compared each of them against the selected literature.

Figure 1. Different stages of developing the checklist.

Consensus Phase Using the Delphi Method

The first round of Delphi was conducted using SurveyMonkey (SurveyMonkey Inc., San Mateo, CA, USA; www.surveymonkey.com ). An email was sent to the expert panel containing information about the Delphi process, the timeline of each Delphi phase, and a detailed overview of the project. A Likert scale from 1 (strongly disagree) to 5 (strongly agree) was used to rate the items. Experts were also encouraged to provide comments, modify items, or propose any new item they felt was necessary to include in the checklist. Nonresponding experts were sent weekly follow-up reminders. The main objectives of the first round were to identify unnecessary items and incomplete items in the survey checklist. A pre-set 70% agreement (70% of experts rating an item 4 or 5) was used as the cutoff for including an item in the final checklist. 35 Items that did not reach the 70% agreement threshold were adjusted according to experts’ feedback and redistributed to the panelists for round 2. In the second round, we therefore included the items that had not reached consensus in round one, along with the modified or newly added items. In this round, experts were also provided with their round-one scores so that they could modify or preserve their previous responses. Lastly, a third round of Delphi was launched to resolve any remaining disagreements about the inclusion of items that did not reach consensus in the second round.
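The consensus rule described above is a simple proportion. As a minimal sketch (hypothetical ratings, not the study's actual data or tooling), the per-item agreement and the include/re-rate decision could be computed like this:

```python
# Sketch of the 70% agreement rule: an item is retained when at least 70% of
# experts rate it 4 or 5 on the 5-point Likert scale. Data are hypothetical.
from statistics import mean, stdev

ratings = {
    "item_01_title": [5, 4, 4, 5, 3, 4, 5, 4],   # hypothetical expert ratings
    "item_02_design": [3, 2, 4, 3, 3, 4, 2, 3],
}

THRESHOLD = 0.70

for item, scores in ratings.items():
    agreement = sum(s >= 4 for s in scores) / len(scores)
    decision = "include" if agreement >= THRESHOLD else "re-rate next round"
    print(f"{item}: agreement={agreement:.0%}, mean={mean(scores):.2f}, "
          f"SD={stdev(scores):.2f} -> {decision}")
```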

RESULTS

A total of 24 experts with a median (Q1, Q3) of 20 (15.75, 31) years of research experience participated in our study. In the first round, 24 items were accepted in their original form and 27 items were carried forward for review in the second round. Of these 27 items, 10 were merged into five and 11 were modified based on experts’ comments. In the second round, 24 experts participated and 18 items were included. Eighteen experts responded in the third round, in which only one additional item was included.

All details regarding the percentage agreement and mean and standard deviation (SD) of items included in the checklist are presented in Table 1 . CROSS contains 19 sections with 40 different items, including “Title and abstract” (section 1); “Introduction” (sections 2 and 3); “Methods” (sections 4–10); “Results” (sections 11–13); “Discussion” (sections 14–16); and other items (sections 17–19). Please see Supplement 3 for the final checklist.

DISCUSSION

The development of CROSS is the result of a literature review and a Delphi process involving international experts with significant expertise in the development and implementation of survey studies. CROSS includes both evidence-informed and expert consensus-based items and is intended to serve as a tool that helps improve the quality of survey studies.

The detailed descriptions of the methods and procedures used in developing this guideline are provided in this paper so that the quality of the checklist can be assessed by other scholars. Our Delphi panel comprised experts with backgrounds in different disciplines. We also spent a considerable amount of time researching and debating the potential items to be included in our checklist. During the Delphi process, agreement with each potential item was rated by participants on a 5-point Likert scale. The entire process was conducted electronically; we gathered data and feedback from the participants via email instead of conducting Skype or face-to-face discussions as suggested by the EQUATOR Network. 13

In comparison to the CHERRIES or SURGE checklists, CROSS provides a single but comprehensive tool organized according to the typical primary sections required for peer-reviewed publications. It also assists researchers in developing a comprehensive research protocol prior to conducting a survey. The “Introduction” items call for a clear overview of the aim of the survey. In the “Methods” section, our checklist asks for a detailed explanation of initiating and developing the survey, including study design, data collection methods, sample size calculation, survey administration, study preparation, ethical considerations, and statistical analysis. The “Results” section of CROSS covers the respondent characteristics followed by the descriptive and main results, issues that are not addressed in the CHERRIES and SURGE checklists. Also, our checklist can be used for both non-web-based and web-based surveys, serving all types of survey-based studies. New items were added to our checklist to address gaps in the available tools. For example, in item 10b, we included reporting of any modification of variables. This can help researchers justify, and readers understand, why there was a need to modify the variables. In item 11b, we encourage researchers to state the reasons for non-participation at each stage. Publishing these reasons can be useful for future researchers intending to conduct a similar survey. Finally, we added components related to limitations, interpretation, and generalizability of study results to the “Discussion” section, an important step toward increasing transparency and external validity. These components are missing from previous checklists (i.e., CHERRIES and SURGE).

Dissemination and Maintenance of the Checklist

Following the consensus phase, we will publish our checklist statement together with a detailed Explanation and Elaboration (E&E) document that provides an in-depth explanation of the scientific rationale for each recommendation. To disseminate the final checklist widely, we aim to promote it in various journals, make it easily available on multiple websites including the EQUATOR Network, and present it at relevant conferences where appropriate. We will also use social media to reach specific audiences, as well as key persons in research organizations that regularly conduct surveys in different specialties. We also aim to seek endorsement of CROSS by journal editors, professional societies, and researchers, and to collect feedback from scholars about their experience.

Incorporating comments, critiques, and suggestions from experts to revise and correct the guideline will help maintain the relevance of the checklist. Lastly, we are planning to publish CROSS in several non-English languages to increase its accessibility across the scientific community.

Limitations

We acknowledge the limitations of our study. First, the use of the Delphi consensus method may involve some subjectivity in interpreting experts’ responses and suggestions. Second, six experts were lost to follow-up. Nonetheless, we think our checklist could improve the quality of the reporting of survey studies. Similar to other reporting checklists, CROSS will need to be re-evaluated and revised over time to ensure it remains relevant and up to date with evolving survey research methodologies. We therefore welcome feedback, comments, critiques, and suggestions for improvement from the research community.

CONCLUSIONS

We think CROSS has the potential to be a beneficial resource for researchers who are designing and conducting survey studies. Following CROSS before and during survey administration could help researchers ensure their surveys are sufficiently reliable, reproducible, and transparent.

REFERENCES

1. Wikipedia contributors. Survey (human research). In Wikipedia, The Free Encyclopedia. Retrieved December 26, 2020, from https://en.wikipedia.org/w/index.php?title=Survey_(human_research)&oldid=994953597

2. Maymone MBC, Venkatesh S, Secemsky E, Reddy K, Vashi NA. Research Techniques Made Simple: Web-Based Survey Research in Dermatology: Conduct and Applications. J Invest Dermatol 2018;138(7):1456-1462. doi: https://doi.org/10.1016/j.jid.2018.02.032

3. Alcock I, White MP, Pahl S, Duarte-Davidson R, Fleming LE. Associations Between Pro-environmental Behaviour and Neighbourhood Nature, Nature Visit Frequency and Nature Appreciation: Evidence from a Nationally Representative Survey in England. Environ Int 2020;136:105441. doi: https://doi.org/10.1016/j.envint.2019.105441

4. Siddiqui J, Brown K, Zahid A, Young CJ. Current Practices and Barriers to Referral for Cytoreductive Surgery and HIPEC Among Colorectal Surgeons: a Binational Survey. Eur J Surg Oncol 2020;46(1):166-172. doi: https://doi.org/10.1016/j.ejso.2019.09.007

5. Lee JG, Park CH, Chung H, Park JC, Kim DH, Lee BI, Byeon JS, Jung HY. Current Status and Trend in Training for Endoscopic Submucosal Dissection: a Nationwide Survey in Korea. PLoS One 2020;15(5):e0232691. doi: https://doi.org/10.1371/journal.pone.0232691

6. McChesney SL, Zelhart MD, Green RL, Nichols RL. Current U.S. Pre-Operative Bowel Preparation Trends: a 2018 Survey of the American Society of Colon and Rectal Surgeons Members. Surg Infect 2020;21(1):1-8. doi: https://doi.org/10.1089/sur.2019.125

7. Núñez A, Manzano CA, Chi C. Health Outcomes, Utilization, and Equity in Chile: an Evolution from 1990 to 2015 and the Effects of the Last Health Reform. Public Health 2020;178:38-48. doi: https://doi.org/10.1016/j.puhe.2019.08.017

8. Blackwell AKM, Kosīte D, Marteau TM, Munafò MR. Policies for Tobacco and E-Cigarette Use: a Survey of All Higher Education Institutions and NHS Trusts in England. Nicotine Tob Res 2020;22(7):1235-1238. doi: https://doi.org/10.1093/ntr/ntz192

9. Liu S, Zhu Y, Chen W, Wang L, Zhang X, Zhang Y. Demographic and Socioeconomic Factors Influencing the Incidence of Ankle Fractures, a National Population-Based Survey of 512187 Individuals. Sci Rep 2018;8(1):10443. doi: https://doi.org/10.1038/s41598-018-28722-1

10. Tamanini JTN, Pallone LV, Sartori MGF, Girão MJBC, Dos Santos JLF, de Oliveira Duarte YA, van Kerrebroeck PEVA. A Populational-Based Survey on the Prevalence, Incidence, and Risk Factors of Urinary Incontinence in Older Adults-Results from the “SABE STUDY”. Neurourol Urodyn 2018;37(1):466-477. doi: https://doi.org/10.1002/nau.23331

11. Tink W, Tink JC, Turin TC, Kelly M. Adverse Childhood Experiences: Survey of Resident Practice, Knowledge, and Attitude. Fam Med 2017;49(1):7-13.

12. Shi S, Lio J, Dong H, Jiang I, Cooper B, Sherer R. Evaluation of Geriatrics Education at a Chinese University: a Survey of Attitudes and Knowledge Among Undergraduate Medical Students. Gerontol Geriatr Educ 2020;41(2):242-249. doi: https://doi.org/10.1080/02701960.2018.1468324

13. Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, Grimshaw JM. Reporting Guidelines for Survey Research: an Analysis of Published Guidance and Reporting Practices. PLoS Med 2010;8(8):e1001069. doi: https://doi.org/10.1371/journal.pmed.1001069

14. Turk T, Elhady MT, Rashed S, Abdelkhalek M, Nasef SA, Khallaf AM, Mohammed AT, Attia AW, Adhikari P, Amin MA, Hirayama K, Huy NT. Quality of Reporting Web-Based and Non-web-based Survey Studies: What Authors, Reviewers and Consumers Should Consider. PLoS One 2018;13(6):e0194239. doi: https://doi.org/10.1371/journal.pone.0194239

15. Jones TL, Baxter MA, Khanduja V. A Quick Guide to Survey Research. Ann R Coll Surg Engl 2013;95(1):5-7. doi: https://doi.org/10.1308/003588413X13511609956372

16. Jones D, Story D, Clavisi O, Jones R, Peyton P. An Introductory Guide to Survey Research in Anaesthesia. Anaesth Intensive Care 2006;34(2):245-53. doi: https://doi.org/10.1177/0310057X0603400219

17. Alderman AK, Salem B. Survey Research. Plast Reconstr Surg 2010;126(4):1381-9. doi: https://doi.org/10.1097/PRS.0b013e3181ea44f9

18. Moher D, Weeks L, Ocampo M, Seely D, Sampson M, Altman DG, Schulz KF, Miller D, Simera I, Grimshaw J, Hoey J. Describing Reporting Guidelines for Health Research: a Systematic Review. J Clin Epidemiol 2011;64(7):718-42. doi: https://doi.org/10.1016/j.jclinepi.2010.09.013

19. Simera I, Moher D, Hirst A, et al. Transparent and Accurate Reporting Increases Reliability, Utility, and Impact of Your Research: Reporting Guidelines and the EQUATOR Network. BMC Med 2010;8:24. doi: https://doi.org/10.1186/1741-7015-8-24

20. Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLoS Med 2010;7(2):e1000217. doi: https://doi.org/10.1371/journal.pmed.1000217

21. Tan WK, Wigley J, Shantikumar S. The Reporting Quality of Systematic Reviews and Meta-analyses in Vascular Surgery Needs Improvement: a Systematic Review. Int J Surg 2014;12(12):1262-5. doi: https://doi.org/10.1016/j.ijsu.2014.10.015

22. Grimshaw J. SURGE (The SUrvey Reporting GuidelinE). In: Moher D, Altman DG, Schulz KF, Simera I, Wager E, eds. Guidelines for Reporting Health Research: a User’s Manual. 2014. doi: https://doi.org/10.1002/9781118715598.ch20

23. Eysenbach G. Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004;6(3):e34. doi: https://doi.org/10.2196/jmir.6.3.e34

24. EQUATOR Network. Developing Your Reporting Guideline. 3 July 2018 [cited 28 December 2020]. Available from: https://www.equator-network.org/toolkits/developing-a-reporting-guideline/developing-your-reporting-guideline/

25. Li AH, Thomas SM, Farag A, Duffett M, Garg AX, Naylor KL. Quality of Survey Reporting in Nephrology Journals: a Methodologic Review. Clin J Am Soc Nephrol 2014;9(12):2089-94. doi: https://doi.org/10.2215/CJN.02130214

26. Shankar PR, Maturen KE. Survey Research Reporting in Radiology Publications: a Review of 2017 to 2018. J Am Coll Radiol 2019;16(10):1378-1384. doi: https://doi.org/10.1016/j.jacr.2019.07.012

27. Duffett M, Burns KE, Adhikari NK, Arnold DM, Lauzier F, Kho ME, Meade MO, Hayani O, Koo K, Choong K, Lamontagne F, Zhou Q, Cook DJ. Quality of Reporting of Surveys in Critical Care Journals: a Methodologic Review. Crit Care Med 2012;40(2):441-9. doi: https://doi.org/10.1097/CCM.0b013e318232d6c6

28. Story DA, Gin V, na Ranong V, Poustie S, Jones D; ANZCA Trials Group. Inconsistent Survey Reporting in Anesthesia Journals. Anesth Analg 2011;113(3):591-5. doi: https://doi.org/10.1213/ANE.0b013e3182264aaf

29. Marcopulos BA, Guterbock TM, Matusz EF. Survey Research in Neuropsychology: a Systematic Review. Clin Neuropsychol 2020;34(1):32-55. doi: https://doi.org/10.1080/13854046.2019.1590643

30. Rybakov KN, Beckett R, Dilley I, Sheehan AH. Reporting Quality of Survey Research Articles Published in the Pharmacy Literature. Res Soc Adm Pharm 2020;16(10):1354-1358. doi: https://doi.org/10.1016/j.sapharm.2020.01.005

31. Pagano MB, Dunbar NM, Tinmouth A, Apelseth TO, Lozano M, Cohn CS, Stanworth SJ; Biomedical Excellence for Safer Transfusion (BEST) Collaborative. A Methodological Review of the Quality of Reporting of Surveys in Transfusion Medicine. Transfusion 2018;58(11):2720-2727. doi: https://doi.org/10.1111/trf.14937

32. Mulvany JL, Hetherington VJ, VanGeest JB. Survey Research in Podiatric Medicine: an Analysis of the Reporting of Response Rates and Non-response Bias. Foot (Edinb) 2019;40:92-97. doi: https://doi.org/10.1016/j.foot.2019.05.005

33. Tabernero P, Parker M, Ravinetto R, Phanouvong S, Yeung S, Kitutu FE, Cheah PY, Mayxay M, Guerin PJ, Newton PN. Ethical Challenges in Designing and Conducting Medicine Quality Surveys. Tropical Med Int Health 2016;21(6):799-806. doi: https://doi.org/10.1111/tmi.12707

34. Keeney S, Hasson F, McKenna H. Consulting the Oracle: Ten Lessons from Using the Delphi Technique in Nursing Research. J Adv Nurs 2006;53(2):205-12. doi: https://doi.org/10.1111/j.1365-2648.2006.03716.x

35. Zamanzadeh V, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar A, Ghahramanian A. Details of Content Validity Index and Objectifying It in Instrument Development. Nursing Pract Today 2014;1(3):163-71.


Acknowledgements

We are thankful to Dr. David Moher (Ottawa Hospital Research Institute, Canada) and Dr. Masahiro Hashizume (Department of Global Health Policy, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan) for their initial contributions to the project and their help in rating and developing the checklist. We are also grateful to Obaida Istanbuly (Keele University, UK) and Omar Diab (Private Dental Practice, Jordan) for their contributions in the earlier phases of the project.

Author information

Akash Sharma and Minh Duc Nguyen Tran contributed equally to this work.

Authors and Affiliations

University College of Medical Sciences and Guru Teg Bahadur Hospital, Dilshad Garden, Delhi, India

Akash Sharma MBBS

Online Research Club, Nagasaki, Japan

Akash Sharma MBBS, Nguyen Tran Minh Duc MD, Tai Luu Lam Thang MD, Nguyen Hai Nam MD, Sze Jia Ng MD & Kirellos Said Abbas MBCH

Faculty of Medicine, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam

Nguyen Tran Minh Duc MD

Department of Emergency, City’s Children Hospital, Ho Chi Minh City, Vietnam

Tai Luu Lam Thang MD

Division of Hepato-Biliary-Pancreatic Surgery and Transplantation, Department of Surgery, Graduate School of Medicine, Kyoto University, Kyoto, Japan

Nguyen Hai Nam MD

Department of Medicine, Crozer Chester Medical Center, Upland, PA, USA

Sze Jia Ng MD

Faculty of Medicine, Alexandria University, Alexandria, Egypt

Kirellos Said Abbas MBCH

Institute of Tropical Medicine (NEKKEN) and School of Tropical Medicine and Global Health, Nagasaki University, Nagasaki, 852-8523, Japan

Nguyen Tien Huy MD, PhD

Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia

Ana Marušić MD, PhD

School of Medicine and Public Health, University of Newcastle, Callaghan, Australia

Christine L. Paul PhD

Division of Transplantation and Immunogenetics, Department of Pathology, Queen Mary Hospital Hong Kong, Pok Fu Lam, Hong Kong

Janette Kwok MBBS

School of Tropical Medicine and Global Health, Nagasaki University, Nagasaki, 852-8523, Japan

Juntra Karbwang MD, PhD & Mitsuaki Matsui MD, MSc, PhD

Department of Medicine and Surgery, University of Perugia, Perugia, Italy

Chiara de Waure MD, MSc, PhD

Cancer Research at UCC, University College Cork, Cork, Ireland

Frances J. Drummond PhD

Department of Palliative Medicine, Kobe University School of Medicine, Hyogo, Japan

Yoshiyuki Kizawa MD, PhD

Department of Psychology, Health & Technology, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, Netherlands

Erik Taal PhD

Department of Public Health, Biostatistics and Medical Informatics Research Group, Vrije Universiteit Brussel (VUB), Brussels, Belgium

Joeri Vermeulen MSN, CM

Department of Health Care, Knowledge Centre Brussels Integrated Care, Erasmus Brussels University of Applied Sciences and Arts, Brussels, Belgium

Paediatric Dentistry and Orthodontics, Faculty of Dentistry, University of Hong Kong, Pok Fu Lam, Hong Kong

Gillian H. M. Lee PhD

Department of Surgery, School of Medicine and Dentistry, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana

Adam Gyedu MD, MPH

Faculty of Public Health, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam

Kien Gia To PhD

Department of Physiotherapy, Bern University Hospital, Insel Group, Bern, Switzerland

Martin L. Verra PhD

Hopital Robert-Debre AP-HP, Clinical Investigation Center, Paris, France

Évelyne M. Jacqz-Aigrain MD, PhD

Department of Surgery, Máxima Medical Center, Veldhoven, Veldhoven, the Netherlands

Wouter K. G. Leclercq MD

Department of Social Psychology, University of Helsinki, Helsinki, Finland

Simo T. Salminen PhD

RAND, Santa Monica, CA, USA

Cathy Donald Sherbourne PhD

School of Pharmacy and Charles Perkins Centrey, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia

Barbara Mintzes PhD

School of Economics, University of Barcelona, Barcelona, Spain

Sergi Lozano PhD

Department of Cognition, Emotion, and Methods in Psychology, School of Psychology, University of Vienna, Vienna, Austria

Ulrich S. Tran DSc

School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada

Mohammad Karamouzian DVM, MSc, PhD candidate

HIV/STI Surveillance Research Center, and WHO Collaborating Center for HIV Surveillance, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran


Contributions

NTH conceived the idea, supervised the project, and helped with writing, reviewing, and mediating the Delphi process; AS participated in drafting the guideline, mediating the Delphi process, analyzing the results, writing, and process validation; TLT helped draft the guideline and with the analysis; MNT helped draft the checklist and mediate the Delphi process; NNH, NSJ, KSA, and MK helped with writing and mediating the Delphi process; AM, JK, CLP, JKB, CDW, FJD, MH, YK, EK, JV, GHL, AG, KGT, ML, EMJ, WKL, STS, CDS, BM, SL, UST, MM and MK helped rate items in the Delphi rounds and review the manuscript.

Corresponding author

Correspondence to Nguyen Tien Huy MD, PhD .

Ethics declarations

Conflict of interest.

The authors declare that they do not have a conflict of interest.

Ethics approval

Ethics approval was not required for the study.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

(DOCX 22 kb)


About this article

Sharma, A., Minh Duc, N., Luu Lam Thang, T. et al. A Consensus-Based Checklist for Reporting of Survey Studies (CROSS). J GEN INTERN MED 36 , 3179–3187 (2021). https://doi.org/10.1007/s11606-021-06737-1


Received: 15 September 2020

Accepted: 17 March 2021

Published: 22 April 2021

Issue Date: October 2021

DOI: https://doi.org/10.1007/s11606-021-06737-1


  • Surveys and Questionnaires
  • Delphi technique

AAPOR Reporting Guidelines for Survey Studies


Although survey studies allow researchers to gather unique information not readily available from other data sources on disease epidemiology, human behaviors and beliefs, and knowledge about health care topics from a specific population, they may be fraught with bias if not well designed and executed. The American Association for Public Opinion Research (AAPOR) Survey Disclosure Checklist (2009) and Code of Professional Ethics and Practices (2015) can guide researchers in their efforts. 1 , 2 The standards were first proposed in 1948 and arose in direct response to that year’s presidential election, in which Harry Truman defeated Thomas Dewey. 3 Truman’s victory surprised the US because Gallup and other polls had predicted a Dewey win. This divergence forced pollsters and statisticians to recognize flaws in their quota sampling methods, which had produced a nonrepresentative sample and a misprediction of the 33rd president of the United States. The confusion over the election prompted leaders to propose standards for survey research. Despite the long-standing nature of these guidelines, recent data show survey reporting is often subpar. 4 Compliance with disclosure requirements is often lacking, and articles in some specialties report only 75% of the required methodologic elements on average. 4


Pitt SC , Schwartz TA , Chu D. AAPOR Reporting Guidelines for Survey Studies. JAMA Surg. 2021;156(8):785–786. doi:10.1001/jamasurg.2021.0543


Best Practices for Survey Research

Below you will find recommendations on how to produce the best survey possible.

Included are suggestions on the design, data collection, and analysis of a quality survey. For more detailed information on assessing the rigor of survey methodology, see the AAPOR Transparency Initiative.


"The quality of a survey is best judged not by its size, scope, or prominence, but by how much attention is given to [preventing, measuring and] dealing with the many important problems that can arise."

“What is a Survey?”, American Statistical Association

1. Planning Your Survey

Is a survey the best method for answering your research question?

Surveys are an important research tool for learning about the feelings, thoughts, and behaviors of groups of individuals. However, surveys may not always be the best tool for answering your research questions. They may be appropriate when there is not already sufficiently timely or relevant existing data on the topic of study. Researchers should consider the following questions when deciding whether to conduct a survey:

  • What are the objectives of the research? Are they unambiguous and specific?
  • Have other surveys already collected the necessary data?
  • Are other research methods such as focus groups or content analyses more appropriate?
  • Is a survey alone enough to answer the research questions, or will you also need to use other types of data (e.g., administrative records)?

Surveys should not be used to produce predetermined results or for campaigning, fundraising, or selling. Doing so is a violation of the  AAPOR Code of Professional Ethics .

Should the survey be offered online, by mail, in person, on the phone, or in some combination of these modes?

Once you have decided to conduct a survey, you will need to decide in what mode(s) to offer it. The most common modes are online, on the phone, in person, or by mail.  The choice of mode will depend at least in part on the type of information in your survey frame and the quality of the contact information. Each mode has unique advantages and disadvantages, and the decision should balance the data quality needs of the research alongside practical considerations such as the budget and time requirements.

  • Compared with other modes, online surveys can be administered quickly and at lower cost. However, older respondents, those with lower incomes, or respondents living in rural areas are less likely to have reliable internet access or to be comfortable using computers. Online surveys may work well when the primary way you contact respondents is via email. They may also elicit more honest answers on sensitive topics because respondents do not have to disclose sensitive information directly to another person (an interviewer).
  • Telephone surveys are often more costly than online surveys because they require the use of interviewers. Well trained interviewers can help guide the respondent through questions that might be hard to understand and encourage them to keep going if they start to lose interest, reducing the number of people who do not complete the survey. Telephone surveys are often used when the sampling frame consists of telephone numbers. Quality standards can be easier to maintain in telephone surveys if interviewers are in one centralized location.
  • In-person, or face-to-face, surveys tend to cost the most and generally take more time than either online or telephone surveys.  With an in-person survey, the interviewer can build a rapport with the respondent and help with questions that might be hard to understand. This is particularly relevant for long or complex surveys. In-person surveys are often used when the sampling frame consists of addresses.
  • Mailed paper surveys can work well when the mailing addresses of the survey respondents are known. Respondents can complete the survey at their own convenience and do not need to have computer or internet access. Like online surveys, they can work well for surveys on sensitive topics. However, since mail surveys cannot be automated, they work best when the flow of the questionnaire is relatively straightforward. Surveys with complex skip patterns based on prior responses may be confusing to respondents and therefore better suited for other modes.

Some surveys use multiple modes, particularly if a subset of the people in the sample are more reachable via a different mode. Often, a less costly method is employed first or used concurrently with another method, for example offering a choice between online and telephone response, or mailing a paper survey with a telephone follow-up with those who have not yet responded.

2. Designing Your Sample

How to design your sample.

When you run a survey, the people who respond to your survey are called your sample because they are a sample of people from the larger population you are studying, such as adults who live in the U.S. A sampling frame is a list of information that will allow you to contact potential respondents – your sample – from a population. Ultimately, it’s the sampling frame that allows you to draw a sample from the larger population. For a mail-based survey, it’s a list of addresses in the geographic area in which your population is located; for an online panel survey, it’s the people in the panel; for a telephone survey, it’s a list of phone numbers. Thinking through how to design your sample to best match the population of study can help you run a more accurate survey that will require fewer adjustments afterwards to match the population.

One approach is to use multiple sampling frames; for example, in a phone survey, you can combine a sampling frame of people with cell phones and a sampling frame of people with landlines (or both), which is now considered a best practice for phone surveys.

Surveys can be either probability-based or nonprobability-based. For decades, probability samples, often used for telephone surveys, were the gold standard for public opinion polling. In these types of samples, there is a frame that covers all or almost all of the population of interest, such as a list of all the phone numbers in the U.S. or all the residential addresses, and individuals are selected using random methods to complete the survey. More recently, nonprobability samples and online surveys have gained popularity due to the rising cost of conducting probability-based surveys. A survey conducted online can use probability samples, such as those recruited using residential addresses, or can use nonprobability samples, such as “opt-in” online panels or participants recruited through social media or personal networks. Analyzing and reporting nonprobability-based survey results often requires using special statistical techniques and taking great care to ensure transparency about the methodology.
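As an illustration of the probability-based approach described above, a simple random sample can be drawn from a sampling frame with a few lines of code. This is only a sketch with a made-up frame; real frames are far larger and often require stratification or weighting not shown here.

```python
# Sketch: drawing a simple random sample of size n from a sampling frame.
# The frame here is a hypothetical list of household identifiers.
import random

frame = [f"household_{i:05d}" for i in range(10_000)]  # hypothetical frame
n = 500

random.seed(42)                     # fixed seed so the draw is reproducible
sample = random.sample(frame, n)    # equal-probability selection without replacement

print(len(sample), sample[:3])
```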

3. Designing Your Questionnaire

What are some best practices for writing survey questions?

  • Questions should be specific and ask only about one concept at a time. For example, respondents may interpret a question about the role of “government” differently – some may think of the federal government, while others may think of state governments.
  • Write questions that are short and simple and use words and concepts that the target audience will understand. Keep in mind that knowledge,  literacy skills , and  English proficiency  vary widely among respondents.
  • Keep questions free of bias by avoiding language that pushes respondents to respond in a certain way or that presents only one side of an issue. Also be aware that respondents may tend toward a socially desirable answer or toward saying “yes” or “agree” in an effort to please the interviewer, even if unconsciously.
  • Arrange questions in an order that will be logical to respondents but not influence how they answer. Often, it’s better for general questions to come earlier than specific questions about the same concept in the survey. For example, asking respondents whether they favor or oppose certain policy positions of a political leader prior to asking a general question about the favorability of that leader may prime them to weigh those certain policy positions more heavily than they otherwise would in determining how to answer about favorability.
  • Choose whether a question should be closed-ended or open-ended. Closed-ended questions, which provide a list of response options to choose from, place less of a burden on respondents to come up with an answer and are easier to interpret, but they are more likely to influence how a respondent answers. Open-ended questions allow respondents to respond in their own words but require coding in order to be interpreted quantitatively.
  • Response options for closed-ended questions should be chosen with care. They should be mutually exclusive, include all reasonable options (including, in some cases, options such as “don’t know” or “does not apply” or neutral choices such as “neither agree nor disagree”), and be in a logical order. In some circumstances, response options should be rotated (for example, half the respondents see response options in one order while the other half see it in reverse order) due to an observed tendency of respondents to pick the first answer in self-administered surveys and the last answer in interviewer-administered surveys. Randomization allows researchers to check whether there are order effects (see the sketch after this list).
  • Consider what languages you will offer the survey in. Many U.S. residents speak limited or no English. Most nationally representative surveys in the U.S. offer questionnaires in both English and Spanish, with bilingual interviewers available in interviewer-administered modes.
  • See AAPOR’s  resources on question wording for more details
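As noted in the list above, rotating the order of response options lets researchers check for order effects. A minimal sketch (hypothetical options, not a production survey platform):

```python
# Sketch: randomly reversing closed-ended response options for half of
# respondents so order effects can be checked later. Options are hypothetical.
import random

OPTIONS = ["Strongly agree", "Agree", "Neither agree nor disagree",
           "Disagree", "Strongly disagree"]

def options_for_respondent(rng: random.Random) -> tuple[str, list[str]]:
    """Assign a respondent at random to the forward or reversed order."""
    if rng.random() < 0.5:
        return "forward", list(OPTIONS)
    return "reversed", list(reversed(OPTIONS))

rng = random.Random(7)
order, shown = options_for_respondent(rng)
print(order, shown)
```

Recording the assigned order alongside each answer allows the two groups to be compared during analysis.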

How can I measure change over time?

If you want to measure change, don’t change the measure.

To accurately determine whether an observed change between surveys taken at two points in time reflects a true shift in public attitudes or behaviors, it is critical to keep the question wording, framing, and methodology of the survey as similar as possible across the two surveys. Changes in question wording, and even in the context created by preceding questions, can influence how respondents answer and make it appear that there has been a change in public opinion even if the only change is in how respondents interpret the question (or potentially mask an actual shift in opinion).

Changes in mode, such as comparing a survey conducted over the telephone to one conducted online, can sometimes also mimic a real change because many people respond to certain questions differently when speaking to an interviewer on the phone versus responding in private to a web survey. Questions that are very personal, or that have a response option respondents see as socially undesirable or embarrassing, are particularly sensitive to this mode effect.

If changing the measure is necessary — perhaps due to flawed question wording or a desire to switch modes for logistical reasons — the researcher can employ a split-ballot experiment to test whether respondents will be sensitive to the change. This would involve fielding two versions of a survey — one with the previous mode or question wording and one with the new mode or question wording — with all other factors kept as similar as possible across the two versions. If respondents answer both versions similarly, there is evidence that any change over time is likely due to a real shift in attitudes or behaviors rather than an artifact of the change in measurement. If response patterns differ according to which version respondents see, then change over time should be interpreted cautiously if the researcher moves ahead with the change in measurement.
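Under the split-ballot logic described above, respondents are randomly assigned to one of the two versions and the resulting answer distributions are compared. A sketch of that comparison using a chi-square test on hypothetical counts (SciPy assumed to be available; this is not the only valid analysis):

```python
# Sketch: comparing answer distributions between two split-ballot versions with
# a chi-square test of independence. All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: ballot version A / version B; columns: Favor, Oppose, No opinion
counts = [
    [210, 150, 40],   # version A (previous wording)
    [190, 170, 45],   # version B (new wording)
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")
# A small p-value suggests respondents answer the two versions differently,
# so trend comparisons across the wording change should be made cautiously.
```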

How can I ensure the safety, confidentiality, and comfort of respondents?

  • Follow your institution’s guidance and policies on the protection of personal identifiable information and determine whether any data privacy laws apply to the study. If releasing individual responses in a public dataset, keep in mind that demographic information and survey responses may make it possible to identify respondents even if personal identifiable information like names and addresses are removed.
  • Consult an  Institutional Review Board  for recommendations on how to mitigate the risk, even if not required by your institution.
  • Disclose the sensitive topic at the beginning of the survey, or just before the questions appear in the survey, and  inform respondents  that they can skip the questions if they are not comfortable answering them (and be sure to program an online survey to allow skipping, or instruct interviewers to allow refusals without probing).
  • Provide links or hotlines to resources that can help respondents who were affected by the sensitive questions (for example, a hotline that provides help for those suffering from eating disorders if the survey asks about disordered eating behaviors).
  • Build rapport with a respondent by beginning with easy and not-too-personal questions and keeping sensitive topics for later in the survey.
  • Keep respondent burden low by keeping questionnaires and individual questions short and limiting the number of difficult, sensitive, or open-ended questions.
  • Allow respondents to skip a question or provide an explicit “don’t know” or “don’t want to answer” response, especially for difficult or sensitive questions. Requiring an answer increases the risk of respondents choosing to leave the survey early.

4. Fielding Your Survey

If I am using interviewers, how should they be trained?

Interviewers need to undergo training that covers both recruiting respondents into the survey and administering the survey. Recruitment training should cover topics such as contacting sampled respondents and convincing reluctant respondents to participate. Interviewers should be comfortable navigating the hardware and software used to conduct the survey and pronouncing difficult names or terms. They should have familiarity with the concepts the survey questions are asking about and know how to help respondents without influencing their answers. Training should also involve practice interviews to familiarize the interviewers with the variety of situations they are likely to encounter. If the survey is being administered in languages other than English, interviewers should demonstrate language proficiency and cultural awareness. Training should address how to conduct non-English interviews appropriately.

Interviewers should be trained in protocols on how best to protect the health and well-being of themselves and respondents, as needed. As an example, during the COVID-19 pandemic, training in the proper use of personal protective equipment and social distancing would be appropriate for field staff.

What kinds of testing should I do before fielding a survey?

Before fielding a survey, it is important to pretest the questionnaire. This typically consists of conducting cognitive interviews or using another qualitative research method to understand respondents’ thought processes, including their interpretation of the questions and how they came up with their answers. Pretesting should be conducted with respondents who are similar to those who will be in the survey (e.g., students if the survey sample is college students).

Conducting a pilot test to ensure that all survey procedures (e.g., recruiting respondents, administering the survey, cleaning data) work as intended is recommended. If it is unclear what question-wording or survey design choice is best, implementing an experiment during data collection can help systematically compare the effects of two or more alternatives.
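For instance, a minimal sketch of such an experiment (the wordings and respondent IDs below are hypothetical, not from any specific survey) might randomly assign each respondent to one of two question versions so the two groups can be compared after fielding:

```python
# A minimal sketch of randomly assigning respondents to one of two question
# wordings during data collection, so the effect of the wording can be
# compared systematically. Wordings and IDs are hypothetical examples.
import random

WORDING_A = "How satisfied are you with your current health care coverage?"
WORDING_B = "Overall, how happy are you with your current health care coverage?"

def assign_wording(respondent_id: str) -> str:
    """Assign a respondent to one of the two wordings with a 50/50 split."""
    random.seed(respondent_id)  # deterministic per respondent, so re-contacts see the same wording
    return WORDING_A if random.random() < 0.5 else WORDING_B

# Log the assignment alongside the respondent ID so the two groups
# can be compared once data collection is complete.
print(assign_wording("R-000123"))
```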

What kinds of monitoring or quality checks should I do on my survey?

Checks must be made at every step of the survey life cycle to ensure that the sample is selected properly, the questionnaire is programmed accurately, interviewers do their work properly, information from questionnaires is edited and coded accurately, and proper analyses are used. The data should be monitored while it is being collected by using techniques such as observation of interviewers, replication of some interviews (re-interviews), and monitoring of response and paradata distributions. Odd patterns of responses may reflect a programming error or interviewer training issue that needs to be addressed immediately.
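As an illustration, a small sketch of two such checks on incoming data, assuming a tabular export and hypothetical column names, might look like this in Python:

```python
# A minimal sketch (column names and file name are hypothetical) of two routine
# quality checks while data are coming in: the response distribution for one
# question, and "straight-lining" across a battery of rating items.
import pandas as pd

df = pd.read_csv("partial_survey_data.csv")  # data collected so far

# 1. Watch the distribution of a key question for odd patterns.
print(df["q1_overall_rating"].value_counts(dropna=False, normalize=True))

# 2. Flag respondents who gave the identical answer to every item in a grid,
#    which may indicate low effort or an interviewer/programming problem.
grid_items = ["q5_a", "q5_b", "q5_c", "q5_d"]
straight_lined = df[grid_items].nunique(axis=1) == 1
print(f"Straight-lining rate: {straight_lined.mean():.1%}")
```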

How do I get as many people to respond to the survey as possible?

It is important to monitor responses and attempt to maximize the number of people who respond to your survey. If very few people respond to your survey, there is a risk that you may be missing some types of respondents entirely, and your survey estimates may be biased. There are a variety of ways to incentivize respondents to participate in your survey, including offering monetary or non-monetary incentives, contacting them multiple times in different ways and at different times of the day, and/or using different persuasive messages. Interviewers can also help convince reluctant respondents to participate. Ideally,  reasonable efforts  should be made to convince both respondents who have not acknowledged the survey requests as well as those who refused to participate.

5. Analyzing and Reporting the Survey Results

What are the common methods of analyzing survey data?

Analyzing survey data is, in many ways, similar to data analysis in other fields. However, there are a few details unique to survey data analysis to take note of. It is important to be as transparent as possible, including about any statistical techniques used to adjust the data.

Depending on your survey mode, you may have respondents who answer only part of your survey and then end the survey before finishing it. These are called partial responses, drop offs, or break offs. You should make sure to indicate these responses in your data and use a value to indicate there was no response. Questions with no response should have a different value than answer options such as “none of the above,” “I don’t know,” or “I prefer not to answer.” The same applies if your survey allows respondents to skip questions but continue in the survey.
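One way to handle this in practice, sketched below with hypothetical codes and column names, is to convert break-off and skipped-question codes to missing values while keeping explicit answers such as "don't know" as real categories:

```python
# A minimal sketch (codes, column names, and file name are hypothetical) of
# keeping break-offs and skipped questions distinct from substantive answers
# such as "don't know" when preparing survey data for analysis.
import numpy as np
import pandas as pd

df = pd.read_csv("raw_responses.csv")

# Suppose the raw file uses -99 for "question never shown / respondent broke off"
# and -88 for "respondent saw the question but skipped it".
NO_RESPONSE_CODES = {-99: np.nan, -88: np.nan}

# Explicit options such as "Don't know" (code 8) and "Prefer not to answer"
# (code 9) stay as real categories rather than being recoded as missing.
df["q3_income_band"] = df["q3_income_band"].replace(NO_RESPONSE_CODES)

print(df["q3_income_band"].value_counts(dropna=False))
```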

A common way of reporting on survey data is to show cross-tabulated results, or crosstabs for short. Crosstabs are when you show a table with one question’s answers as the column headers and another question’s answers as the row names. The values in the crosstab can be either counts — the number of respondents who chose those specific answers to those two questions — or percentages. Typically, when showing percentages, the columns total to 100%.
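For example, a crosstab with column percentages can be produced in a few lines of pandas (the question names and file name below are hypothetical):

```python
# A minimal sketch of a crosstab with one question's answers as columns and
# another's as rows, shown both as counts and as column percentages that
# total 100%. Column names and file name are hypothetical.
import pandas as pd

df = pd.read_csv("survey_data.csv")

counts = pd.crosstab(df["q2_preferred_mode"], df["age_group"])
column_pct = pd.crosstab(df["q2_preferred_mode"], df["age_group"], normalize="columns") * 100

print(counts)
print(column_pct.round(1))
```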

Analyzing survey data allows us to estimate findings about the population under study by using a sample of people from that population. An industry standard is to calculate and report on the margin of sampling error, often shortened to the margin of error. The margin of error is a measurement of confidence in how close the survey results are to the true value in the population. To learn more about the margin of error and the credibility interval, a similar measurement used for nonprobability surveys, please see AAPOR’s  Margin of Error resources.
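For a simple random sample, the margin of error for an estimated proportion is commonly computed as z * sqrt(p(1 - p) / n), with z = 1.96 for 95% confidence; a small sketch:

```python
# A minimal sketch of the usual margin-of-error calculation for a simple
# random sample, at 95% confidence, for an estimated proportion p.
# The example figures are illustrative only.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: 52% of 1,000 respondents chose an option.
print(f"{margin_of_error(0.52, 1000):.3f}")  # roughly 0.031, i.e. about +/- 3.1 points
```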

What is weighting and why is it important?

Ideally, the composition of your sample would match the population under study for all the characteristics that are relevant to the topic of your survey: characteristics such as age, sex, race/ethnicity, location, educational attainment, political party identification, etc. However, this is rarely the case in practice, which can skew the results of your survey. Weighting is a statistical technique that adjusts the relative contributions of your respondents so that the weighted sample matches the population characteristics more closely. Learn more about weighting.
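A minimal sketch of simple post-stratification weighting is shown below; the age groups, population shares, and column names are hypothetical, with the population shares assumed to come from an external benchmark such as census data:

```python
# A minimal sketch of simple post-stratification weighting: each respondent's
# weight is the population share of their group divided by the sample share
# of that group. All categories and figures are hypothetical.
import pandas as pd

df = pd.read_csv("survey_data.csv")

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g., from census data
sample_share = df["age_group"].value_counts(normalize=True)

df["weight"] = df["age_group"].map(population_share) / df["age_group"].map(sample_share)

# Weighted estimate of a proportion, compared with the unweighted one.
print("Unweighted:", (df["q1_supports_policy"] == "Yes").mean())
print("Weighted:  ", df.loc[df["q1_supports_policy"] == "Yes", "weight"].sum() / df["weight"].sum())
```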

What are the common industry standards for transparency in reporting data?

Because there are so many different ways to run surveys, it’s important to be transparent about how a survey was run and analyzed so that people know how to interpret and draw conclusions from it. AAPOR’s Transparency Initiative has established  a list of items to report with your survey results that uphold the industry transparency standards. These items include sample size, margin of sampling error, weighting attributes, the full text of the questions and answer options, the survey mode, the population under study, the way the sample was constructed, recruitment, and several other details of how the survey was run. The list of items to report can vary based on the mode of your survey — online, phone, face-to-face, etc. Organizations that want to commit to upholding these standards can also become members of the Transparency Initiative .


2019 Presidential Address from the 74th Annual Conference

David Dutwin May 2019

“Many of you know me primarily as a methodologist.  But in fact, my path to AAPOR had nothing to do with methodology.  My early papers, in fact, wholly either provided criticism of, or underscored the critical value of, public opinion and public opinion polls.

And so, in some respects, this Presidential Address completes for me a full circle of thought and passion I have for AAPOR, for today I would like to discuss matters pertaining to the need to reconsider, strengthen, and advance the mission of survey research in democracy.

Historically, there has been much to say on the role of public opinion in democracy.   George Gallup summarized the role of polls quite succinctly when he said,  “Without polls, [elites] would be guided only by letters to congressmen, the lobbying of pressure groups, and the reports of political henchmen.”

Further, democratic theory notes the critical, if not pivotal, role of public opinion in democratic practice.  Storied political scientist V.O. Key said:  “The poll furnishes a means for the deflation of the extreme claims of pressure groups and for the testing of their extravagant claims of public sentiment in support of their demands.”

Furthermore, surveys provide a critical check and balance to other claims of what the American public demands in terms of policies and their government.   Without polls, it would be all that much harder to verify and combat claims of public sentiment made by politicians, elites, lobbyists, and interest groups.  [“No policy that does not rest upon some public opinion can be permanently maintained.”- Abe Lincoln; “Public opinion is a thermometer a monarch should constantly consult” – Napoleon]

It is sometimes asked whether leaders do consult polls and whether polls have any impact on policy.   The relationship here is complex, but time and again researchers have found a meaningful and significant effect of public opinion, typically as measured by polling, on public policy. As one example, Page and Shapiro explored trends in American public opinion from the 1930s to the 1980s and found no less than 231 different changes in public policy following shifts in public opinion.

And certainly, in modern times around the world, there is recognition that the loss of public opinion would be, indeed, the loss of democracy itself. [“Where there is no public opinion, there is likely to be bad government, which sooner or later, becomes autocratic government.” – William Lyon Mackenzie King]

And yet, not all agree.  Some twist polling to be a tool that works against democratic principles.  [“The polls are just being used as another tool for voter suppression.” – Rush Limbaugh]

And certainly, public opinion itself is imperfect, filled with non-attitudes, the will of the crowd, and can often lead to tyranny of the majority, as Jon Stewart nicely pointed out. [“You have to remember one thing about the will of the people: It wasn’t that long ago that we were swept away by the Macarena.” – Jon Stewart]

If these latter quotes were the extent of criticism on the role of public opinion and survey research in liberal democracy, I would not be up here today discussing what soon follows in this address.  Unfortunately, however, we live in a world in which many of the institutions of democracy and society are under attack.

It is important to start by recognizing that AAPOR is a scientific organization.  Whether you are a quantitative or qualitative researcher, a political pollster or developer of official statistics, a sociologist or a political scientist, someone who works for a commercial entity or nonprofit, we are all survey scientists, and we come together as a great community of scientists within AAPOR, no matter our differences.

And so we, AAPOR, should be as concerned as any other scientific community regarding the current environment where science is under attack, devalued, and delegitimized.  It is estimated that since the 2016 election, federal policy has moved to censor, misrepresent, or curtail and suppress scientific data and discoveries over 200 times, according to the Sabin Center at Columbia University.  Not only is this a concern to AAPOR as a community of scientists, but we should also be concerned about the impact of these attacks on public opinion itself.

Just as concerning is the attack on democratic information in general.  Farrell and Schneier argue that there are two key types of knowledge in democracy, common and contested.  And while we should be free to argue and disagree over policy choices, our pick of democratic leaders, and even many of the rules and mores that guide us as a society, what is called contested knowledge, what cannot be up for debate is the common knowledge of democracy: for example, the legitimacy of the electoral process itself, the validity of data attained by the Census, or, even more so, I would argue, the premise that public opinion research tells us what the public thinks.

As the many quotes I provided earlier attest, democracy is dependent upon a reliable and nonideological measure of the will of the people.  For more than a half century, survey research has been the principal and predominant vehicle by which such knowledge is generated.

And yet, we are on that doorstep where common knowledge is becoming contested.  We are entering, I fear, a new phase of poll delegitimization.  I am not here to advocate any political ideology and it is critical for pollsters to remain within the confines of science.  Yet there has been a sea change in how polls are discussed by the current administration.  To constantly call out polls for being fake is to delegitimize public opinion itself and is a threat to our profession.

Worse still, many call out polls as mere propaganda (see Joondeph, 2018).  Such statements are more so a direct attack on our science, our field, and frankly, the entire AAPOR community.  And yet even worse is for anyone to actually rig poll results.  Perhaps nothing may undermine the science and legitimacy of polling more.

More pernicious still, we are on the precipice of an age where faking anything is possible.  The technology now exists to fake actual videos of politicians, or anyone for that matter, and to create realistic false statements.  The faking of poll results is merely in lockstep with these developments.

There are, perhaps, many of you in this room who don’t directly connect with this.  You do not do political polling.  You do government statistics.  Sociology.  Research on health, on education, or consumer research.  But we must all realize that polling is the tip of the spear.  It is what the ordinary citizen sees of our trade and our science.  As Andy Kohut once noted, it represents all of survey research. [Political polling is the “most visible expression of the validity of the survey research method.“ – Andrew Kohut]

With attacks on science at an all-time high in the modern age, including attacks on the science of surveys; with denigration of common knowledge, the glue that holds democracy together, including denunciation of the reliability of official statistics; with slander on polling that goes beyond deliberation on the validity of good methods but rather attacks good methods as junk, as propaganda, and as fake news; and worst of all, a future that, by all indications, will if anything include the increased frequency of fake polls and fake data, well, what are we, AAPOR, to do?

We must respond.  We must react.  And, we must speak out.  What does this mean, exactly?  First, AAPOR must be able to respond.  Specifically, AAPOR must have vehicles and avenues of communication and the tools by which it can communicate.  Second, AAPOR must know how to respond.  That is to say, AAPOR must have effective and timely means of responding.  We are in an every-minute-of-the-day news cycle.  AAPOR must adapt to this environment and maximize its impact by speaking effectively within this communication environment.  And third, AAPOR must, quite simply, have the willpower to respond.  AAPOR is a fabulous member organization, providing great service to its members in terms of education, a code of ethics, guidelines for best practices, and promotion of transparency and diversity in the field of survey research.  But we have to do more.  We have to learn to professionalize our communication and advocate for our members and our field.  There is no such thing as sidelines anymore.  We must do our part to defend survey science, polling, and the very role of public opinion in a functioning democracy.

This might seem to many of you like a fresh idea, and bold new step for AAPOR.  But in fact, there has been a common and consistent call for improved communication abilities, communicative outreach, and advocacy by many past Presidents, from Diane Colasanto to Nancy Belden to Andy Kohut.

Past President Frank Newport for example was and is a strong supporter of the role of public opinion in democracy, underscoring in his Presidential address that quote, “the collective views of the people…are absolutely vital to the decision-making that ultimately affects them.” He argued in his Presidential address that AAPOR must protect the role of public opinion in society.

A number of Past Presidents have rightly noted that AAPOR must recognize the central role of journalists in this regard, who have the power to frame polling as a positive or negative influence on society.  President Nancy Mathiowetz rightly pointed out that AAPOR must play a role in, and even financially support, endeavors to guarantee that journalists support AAPOR’s position on the role of polling in society and journalists’ treatment of polls.  And Nancy’s vision, in fact, launched a relationship with Poynter in building a number of resources for journalist education on polling.

Past President Scott Keeter also noted the need for AAPOR to do everything it can to promote public opinion research.  He said that “we all do everything we can to defend high-quality survey research, its producers, and those who distribute it.”  But at the same time, Scott noted clearly that, unfortunately, “At AAPOR we are fighting a mostly defensive war.”

And finally, Past President Cliff Zukin got straight to the point in his Presidential address, noting that, quote “AAPOR needs to increase its organizational capacity to respond and communicate, both internally and externally. We need to communicate our positions and values to the outside world, and we need to diffuse ideas more quickly within our profession.”

AAPOR is a wonderful organization, and in my biased opinion, the best professional organization I know.  How have we responded to the call of past Presidents?  I would say, we responded with vigor, with energy, and with passion.  But we are but a volunteer organization of social scientists.  And so, we make task forces.  We write reports.  These reports are well researched, well written, and at the same time, I would argue, do not work effectively to create impact in the modern communication environment.

We have taken one small step to ameliorate this, with the report on polling in the 2016 election, which was publicly released via a quite successful live Facebook video event.  But we can still do better.  We need to be more timely for one, as that event occurred 177 days after the election, when far fewer people were listening, and the narrative was largely already written.  And we need to find ways to make such events have greater reach and impact.  And of course, we need more than just one event every four years.

I have been proud to have been a part of, and even be the chair of, a number of excellent task force reports.  But we cannot, I submit, continue to respond only with task force reports.  AAPOR is comprised of the greatest survey researchers in the world.  But it is not comprised of professional communication strategists, plain and simple.  We need help, and we need professional help.

In the growth of many organizations, there comes a time when the next step must be taken.  The ASA many years ago, for example, hired a full time strategic communications firm.  Other organizations, including the NCA, APSA, and others, chose instead to hire their own full time professional communication strategist.

AAPOR has desired to better advocate for itself for decades.  We recognize that we have to get into the fight, that there are, again, no sidelines.  And we have put forward a commendable effort in this regard, building educational resources for journalists, and writing excellent reports on elections, best practices, sugging and frugging, data falsification, and other issues.  But we need to do more, and in the context of the world outside of us, we need to speak a language that resonates with journalists, political elites, and perhaps most importantly the public.

I want to stop right here and make it clear that the return on investment on such efforts is not going to be quick.  And the goal here is not to improve response rates, though I would like that very much!  No, it is not likely that any effort in the near term will reverse trends in nonresponse.

It may very well be that our efforts only slow or at best stop the decline. But that would be an important development.  The Washington Post says that democracy dies in darkness.  If I may, I would argue that AAPOR must say, democracy dies in silence, when the vehicle for public opinion, surveys, has been twisted to be distrusted by the very people who need it most, ordinary citizens.  For the most part, AAPOR has been silent.  We can be silent no more.

This year, Executive Council has deliberated the issues outlined in this address, and we have chosen to act.  The road will be long, and at this time, I cannot tell you where it will lead.  But I can tell you our intentions and aspirations.  We have begun to execute a 5 point plan that I present here to you.

First, AAPOR Executive Council developed and released a request for proposals for professional strategic communication services.  Five highly regarded firms responded.  After careful deliberation and in person meetings with the best of these firms, we have chosen Stanton Communications to help AAPOR become a more professionalized association.  Our goals in the short term are as follows.

We desire to become more nimble and effective at responding to attacks on polls, with key AAPOR members serving as spokespersons when needed, but only after professional development of the messages they will promulgate, approved by Council, and professionalized by the firm.  Stanton brings with it a considerable distribution network of journalists and media outlets.  AAPOR, through its professional management firm Kellen, has access to audio and video services of the National Press Club, and will utilize these services when needed to respond to attacks on polls, and for other communications deemed important by AAPOR Executive Council.

Our plan is to begin small.  We are cognizant of the cost that professional communication can entail, and for now, we have set very modest goals. The first step is to be prepared, and have a plan for, the 2020 election, with fast response options of communication during the campaign, and perhaps most importantly, directly thereafter.

The second element of our plan is to re-envision AAPOR’s role in journalism education.  In short, we believe we need to own this space, not farm it out to any other entity.  We need refreshed educational videos, and many more of them, from explaining response rates to the common criticisms made on the use of horserace polling in the media.

We need to travel.  Willing AAPOR members should be funded to travel and present at journalism conferences, to news rooms, and to journalism schools on an annual basis. AAPOR could as well have other live events, for example a forum on the use of polls in journalism.  There should be a consistent applied effort over time.  The media and journalists are AAPOR’s greatest spokespeople.  By and large, much of our image is shaped through them.

The third element looks at the long game.  And that is, for AAPOR to help in developing civics education on public opinion and the role of public opinion in democracy.  With the help of educational experts, and importantly, tipping our hats to our AAPOR’s Got Talent winner last year, Allyson Holbrook, who proposed exactly this kind of strategy, we believe AAPOR can help develop a curriculum and educational materials and engage with educators to push for the inclusion of this curriculum in primary education.  AAPOR can and should develop specific instructional objectives of civics education by grade and develop a communications plan to lobby for the inclusion of this civics curriculum by educators.

The fourth element is for AAPOR to direct the Transparency Initiative to develop a strategic plan for the next ten years.  We recognize that it is not always the case that polls are executed with best practices.  How does AAPOR respond in these instances?  With a plethora of new sampling approaches and modalities, we believe the TI needs to have a full-throated conversation about these challenges and how AAPOR should handle them.  After all, this too is part of the conversation of AAPOR communication.

Finally, AAPOR should, as past President Tim Johnson called for last year, learn as much as it can about the perceptions of polls in society.  We cannot make effective strategic communication plans without first knowing how they will resonate and, to some degree, their expected effectiveness.  Such an effort should continue over time, building both a breadth and depth of understanding.

If this sounds a bit like a wish list, well, you would be right.  For now, the immediate goal for AAPOR and its communication firm is to prepare for 2020 and to take some modest steps toward professionalizing AAPOR’s ability to effectively and quickly communicate and advocate.    Looking toward the future, AAPOR Council has authorized the development of the Ad Hoc Committee on Public Opinion.  This committee will be comprised of AAPOR members dedicated to pushing forward this agenda.

We recognize the potential cost of these endeavors in terms of money and labor, and so in each area, there will be mission leaders on the committee whose goal is to push forward with two goals.  The first is funding.  We cannot and should not fund these endeavors alone.  We will be seeking foundational funding for each of these areas, and are developing a proposal for each specifically.  Perhaps only one area attains funding, perhaps all of them.  No matter, the committee will adjust its goals contingent on the means it has available.

A number of members have already asked to be part of these efforts.  But I call on all of you, the AAPOR membership, to reach out and join the effort as well.  We need people experienced in seeking funding, and people passionate about moving the needle with regard to polling journalism, civics education, and the role of public opinion in democracy.  AAPOR’s secret sauce has always been the passion of its members and we call on you to help.  Please go to the link below to tell us you want to join the effort.

Friends and colleagues, one of the many excellent AAPOR task forces has, in fact, already explored this issue: the task force on polling and democracy and leadership.  They argued that “AAPOR should adopt an increased public presence arguing for the importance of public opinion in a democracy and the importance of rigorous, unbiased, scientific research assessing public opinion.”

It is time we strive to realize these aspirations.  For the good of our association, our field, and our very democracy.  If past efforts by AAPOR volunteers are any indication, we anticipate great success and health in the future of our field and our endeavors.

It has been an honor and a privilege serving as your President. Thank you.”

BMJ Author Hub

Before you submit


The EQUATOR network (Enhancing the QUAlity and Transparency Of health Research) is an international initiative that seeks to improve the reliability and value of published health research literature. Reporting guidelines promote clear reporting of methods and results to allow critical appraisal of the manuscript.

All research articles should be written in accordance with the relevant research reporting guideline; this ensures that you provide enough information for editors, peer reviewers, and readers to understand how the research was performed and to judge whether the findings are likely to be reliable. Reporting guidelines should be submitted with research articles as supplemental material, and the checklist should indicate the page of your research article on which each checklist item appears.

All available guidelines can be found on the EQUATOR Network website. The most frequently used are listed below, but others may apply:

  • Randomised controlled trials (RCTs): CONSORT guidelines, flowchart and structured abstract checklist
  • Systematic reviews and meta-analyses: PRISMA guidelines , flowchart and structured abstract checklist
  • Observational studies in epidemiology: STROBE guidelines (also refer to RECORD for observational studies using routinely collected health data) and MOOSE guidelines
  • Diagnostic accuracy studies: STARD guidelines
  • Quality improvement studies: SQUIRE guidelines
  • Multivariate prediction models: TRIPOD guidelines
  • Economic evaluation studies: CHEERS guidelines
  • Animal pre-clinical studies: ARRIVE guidelines
  • Web-based surveys: CHERRIES guidelines
  • Studies using data from electronic health records: CODE-EHR guidelines
  • Reporting of sex and gender information: SAGER guidelines

If you are not sure which guidelines are the most relevant for your type of study, please use the online tool developed by the EQUATOR Network and Penelope Research.

Reporting Guidelines

It is important that your manuscript gives a clear and complete account of the research that you have done. Well-reported research is more useful, and complete reporting allows editors, peer reviewers, and readers to understand what you did and how.

Poorly reported research can distort the literature and lead to studies that cannot be replicated or used in future meta-analyses or systematic reviews.

You should make sure that your manuscript is written in a way that the reader knows exactly what you did and could repeat your study, if they wanted to, with no additional information. It is particularly important that you give enough information in the methods section of your manuscript.

To help with reporting your research, there are reporting guidelines available for many different study designs. These contain a checklist of minimum points that you should cover in your manuscript. You should use these guidelines when you are preparing and writing your manuscript, and you may be required to provide a completed version of the checklist when you submit your manuscript. 

The EQUATOR (Enhancing the Quality and Transparency Of health Research) Network is an international initiative that aims to improve the quality of research publications. It provides a comprehensive list of reporting guidelines and other material to help improve reporting. 

A full list of the reporting guidelines endorsed by the EQUATOR Network can be found here. Some of the reporting guidelines for common study designs are:

  • Randomized controlled trials – CONSORT
  • Systematic reviews – PRISMA
  • Observational studies – STROBE
  • Case reports – CARE
  • Qualitative research – COREQ
  • Pre-clinical animal studies – ARRIVE

Peer reviewers may be asked to use these checklists when assessing your manuscript. If you follow these guidelines, editors and peer reviewers will be able to assess your manuscript better as they will more easily understand what you did. It may also mean that they ask you for fewer revisions.

What are Reporting Guidelines for Research?

Reporting guidelines are recommendations of what information authors should include in their manuscripts when writing about their research. These are imperative for ensuring ethical and valid research, especially in the health sciences.

Updated on December 21, 2022


Many reporting guidelines were written for health science research. The Equator Network lists 542. There are reporting guidelines for different types of manuscripts (for example, PRISMA was designed for systematic reviews), different types of study designs (CONSORT for randomized controlled trials), and for research on specific topics or fields (STROBE for epidemiology and REMARK for tumor marker prognostic studies).

Many journals also endorse or even require the use of these guidelines. The Journal of the American Medical Association , The Lancet , and the New England Journal of Medicine , for instance, require authors to follow the CONSORT guidelines in randomized controlled trials (RCTs).

This article will familiarize you with the main clinical guidelines and provide specific examples of how they’re reflected in published studies. These studies give good examples of what’s expected of you when you communicate the outcomes of these guidelines in your scientific manuscript.

What do reporting guidelines consist of?

Reporting guidelines can be in the form of structured text, a checklist, or a flow diagram.

These guidelines are meant to improve the quality, transparency, repeatability, comparability, and applicability of research in a field. The use of standardized guidelines allows research to be included in systematic reviews and meta-analyses and it assists scientists, especially health science researchers and professionals, to make research-based decisions.

The Equator Network lists reporting guidelines commonly used in the health sciences. It also offers a search engine to browse for guidelines by study type, clinical area, or using keywords. Equator is also a great resource when searching for keywords to include after your abstract.

The most-encountered reporting guidelines are listed below so you can see their various differences and specifics. You’ll also know what to do when encountering one of these guidelines as you seek to publish your research, systematic review, protocol, and so forth.

The main reporting guidelines for journal publication

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were designed to improve the reporting of systematic reviews. PRISMA consists of a 27-item checklist , an expanded checklist , a flow diagram , and an Explanation and Elaboration document .

Specifically, PRISMA requires you to report:

  • which databases or websites you searched
  • which search terms or criteria you used to find studies
  • which criteria you used to include or exclude such studies in your review

For example, a systematic review on perioperative interventions to prevent postoperative pulmonary complications used search terms such as “postoperative care” and “postoperative complications” combined with “respiratory failure” and similar terms.

In this case, researchers used only studies published between 1990 and December 12, 2017, and only included those related to patients aged 18 and over.

The Consolidated Standards of Reporting Trials (CONSORT) guidelines were designed to improve the reporting of randomized controlled trials (RCTs).

CONSORT consists of a 25-item checklist , a flow diagram , and an Explanation and Elaboration document. Specifically, it’s important to report:

  • the trial design
  • how participants were selected
  • sample sizes obtained for each group
  • how many participants were included and excluded, and for what reason
  • how participants were randomized
  • whether participants and care providers were blinded during the study

For example, a trial comparing fractional flow reserve- and angiography-guided percutaneous coronary intervention assessed 1,905 patients, but 900 were not eligible for various reasons. Therefore, 1,005 patients were randomly assigned to receive either angiography-guided percutaneous coronary intervention (496 patients) or fractional flow reserve-guided percutaneous coronary intervention (509 patients).

The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines were designed to improve the reporting of observational studies in epidemiology, including cohort, case-control, and cross-sectional studies. STROBE consists of a 22-item checklist and an Explanation and Elaboration document .

Specifically, these guidelines require authors to describe the eligibility criteria of participants included in cohort and cross-sectional studies, as well as a rationale for assigning participants to case or control groups. This case-control study , for example, used 373 age and gender-matched colorectal cancer patients from the Iowa Cancer Registry as controls for 368 cutaneous melanoma cases to determine if arsenic exposure increased the risk of developing cutaneous melanoma.

The Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines were designed to improve the reporting of qualitative research, including interviews and focus groups. COREQ consists of a 32-item checklist requiring a description of the person leading the interviews or focus groups. This should include their credentials, background, training, and gender, as well as their relationship with participants.

Researchers should also report:

  • participants’ demographic information
  • how participants were recruited and selected
  • the setting in which the data were collected
  • questions and prompts used to lead discussions

A good example of applying these guidelines is this study on nursing home staff’s perceptions of barriers toward implementing person-centered care for people living with dementia.

This study used convenience sampling to recruit 24 staff members from six nursing homes in South Korea for semi-structured interviews. The researchers included a table with all interview questions. This also comprised a detailed description of the authors who conducted the interviews, their qualifications, background, relationship with the nursing homes, and experience in nursing home care as well as in conducting qualitative research.

The Consensus-based Clinical Case Report Guidelines (CARE) were developed to improve the reporting of case reports. CARE consists of a 13-item checklist , a flow diagram , and an Explanation and Elaboration document. Specifically, it requires thorough reporting of patient information and symptoms, clinical findings, diagnostic assessment, therapeutic interventions, and outcomes.

In one example, this study reported a case of a 2-year-old who presented with a stiff and swollen knee without any other symptoms. The initial diagnosis was oligoarthritis, but treatment with steroid injections was ineffective. Further tests were conducted when a relative of the patient was diagnosed with pulmonary tuberculosis. The chest radiograph was normal, but an effusion from the swollen knee tested positive for *Mycobacterium tuberculosis*. Anti-tuberculous therapy resolved the joint swelling.

The Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were developed to improve the reporting of bioscience research using laboratory animals.

ARRIVE consists of an essential checklist of 10 items, a full checklist with a further set of 11 recommended items, and an Explanation & Elaboration document. The essential checklist in this case requires that you report the details of animals used (species, gender, age, etc.), the exact number in each group, if any animals were excluded from analysis, and why, as well as a detailed description of experimental procedures.

This recommended checklist also adds the name of the ethical review committee that approved the study, with relevant document numbers, animal housing conditions, steps taken to reduce their pain or suffering, and whether any adverse events occurred during the experiment.

In one example, this study investigating the influence of chronic L-DOPA treatment on immune response following allogeneic and xenogeneic graft in a rat model of Parkinson’s disease included a table summarizing the sample size of each treatment group. This research also disclosed that some of their animals had to be removed from the study because they developed tumors unrelated to the experiment.

The Standards for Reporting Diagnostic Accuracy (STARD) guidelines were developed to improve the reporting of diagnostic accuracy studies. STARD consists of a 30-item checklist , a flow diagram , and an Explanation and Elaboration document. Specifically, STARD requires authors to include the eligibility criteria used to include participants and how many patients underwent the index and reference standard test.

For example, this study investigating the diagnostic accuracy of fecal immunochemical testing for hemoglobin as a screening tool for colorectal cancer provides a flow diagram of participants throughout the study. Of the original 360 patients screened, only 255 were eligible and completed the immunochemical test for fecal hemoglobin (the index test), while only 229 received a colonoscopy (the reference test).

The Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines were designed to improve the reporting of meta-analyses of observational studies in epidemiology.

MOOSE consists of a six-item checklist that requires authors to report:

  • the details of the search strategy they used
  • how studies were assessed
  • how data were classified, coded, and analyzed
  • relevant descriptive information for each study included

For example, this study on peripheral fibroblast growth factor-2 levels in major depressive patients provides a flow diagram showing how many search results the researchers acquired from each database, which articles were included, and which were excluded. In this case, from an original 243 articles found, only seven were included in the meta-analysis.

These studies were summarized in a table, describing the sample size, age, and gender of participants, country where the study was conducted, and fibroblast growth factor levels found.

The Qualitative Research Review Guidelines or RATS is a checklist written to guide peer reviewers working on qualitative research manuscripts. The guidelines recommend reviewers think about the relevance of the study design (R), the appropriateness of the qualitative method used (A), the transparency of procedures used (T), and the soundness of the interpretive approach (S) to make up the acronym RATS.

For example, the checklist prompts the reviewer to ask whether qualitative methodology was the best approach to achieve the study aims.

The Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) guideline was written to improve the reporting of intervention evaluation studies using nonrandomized designs. TREND consists of a 22-point checklist including requirements such as including eligibility criteria of participants and recruitment methods, how participants were assigned to different groups, and details of the study interventions. Results should include sample sizes for each analysis, and report estimated effect sizes and confidence intervals.

For example, in a study that investigated the efficiency of using infrared vein imaging to insert intravenous catheters in COVID-19 patients, patients were assigned to either the control or intervention group based on the ward they were admitted to due to logistical considerations in the COVID-19 isolation wards during the pandemic. The researchers had 62 patients in the intervention group and 60 in the control group. There was no statistical difference between the demographic details of the patients in the two groups.

The Minimum Information About a Microarray Experiment (MIAME) guideline was written to improve the reporting of microarray experiments. The guideline document comprises six parts that describe what information should be included in manuscripts that used microarray experiments.

MIAME requires that authors describe the samples used, the experimental design, the assay design, hybridizations, measurements (including raw images of each assay, image analysis, and a summary of the final processed data), and normalization controls.

For example, a study on genes in developing mouse lungs extracted total RNA from lung tissue from 5- to 7-day-old mice. RNA was subjected to synthesis, fragmentation, and hybridization using the CodeLink Expression Assay Kit. The raw data and details of the label protocol, hybridization protocol, scan protocol, and data processing were made available in NCBI's Gene Expression Omnibus database under accession number GSM5702660.

The Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK) guideline was written to improve the reporting of tumor marker prognostic studies. REMARK consists of a 20-item checklist and an Explanation and Elaboration document. Specifically, they require authors to report patient characteristics and treatments, specimen characteristics, assay methods, study design, and statistical analysis.

For example, this study evaluating the prognostic value of cathepsin-D in primary breast cancer included 2,810 samples that had been stored in liquid nitrogen in a tumor bank from patients who were diagnosed with breast cancer between 1978 and 1992, had no metastatic disease at diagnosis, no previous diagnosis of carcinoma, and no evidence of disease within 1 month of surgery. Patients with inoperable T4 tumors and patients who received neoadjuvant therapy before surgery were excluded from the study.

Make sure you get your guidelines and journal requirements 100% right

We hope this article provides a quick and useful reference on the main reporting guidelines you may face as a (clinical) researcher, and how they’re represented in studies. To be sure you’ve met the research requirements and the other requirements of your target publication, we recommend an AJE edit . A publication professional will work with you to ensure you’ve satisfied the requirements, and they’ll get your scientific English in tip-top shape, ready for the world to see.

The AJE Team


What is decision making?


Decisions, decisions. When was the last time you struggled with a choice? Maybe it was this morning, when you decided to hit the snooze button—again. Perhaps it was at a restaurant, with a miles-long menu and the server standing over you. Or maybe it was when you left your closet in a shambles after trying on seven different outfits before a big presentation. Often, making a decision—even a seemingly simple one—can be difficult. And people will go to great lengths—and pay serious sums of money—to avoid having to make a choice. The expensive tasting menu at the restaurant, for example. Or limiting your closet choices to black turtlenecks, à la Steve Jobs.


If you’ve ever wrestled with a decision at work, you’re definitely not alone. According to McKinsey research, executives spend a significant portion of their time—nearly 40 percent, on average—making decisions. Worse, they believe most of that time is poorly used. People struggle with decisions so much that we actually get exhausted from having to decide too much, a phenomenon called decision fatigue.

But decision fatigue isn’t the only cost of ineffective decision making. According to a McKinsey survey of more than 1,200 global business leaders, inefficient decision making costs a typical Fortune 500 company 530,000 days  of managers’ time each year, equivalent to about $250 million in annual wages. That’s a lot of turtlenecks.

How can business leaders ease the burden of decision making and put this time and money to better use? Read on to learn the ins and outs of smart decision making—and how to put it to work.


How can organizations untangle ineffective decision-making processes?

McKinsey research has shown that agile is the ultimate solution for many organizations looking to streamline their decision making . Agile organizations are more likely to put decision making in the right hands, are faster at reacting to (or anticipating) shifts in the business environment, and often attract top talent who prefer working at companies with greater empowerment and fewer layers of management.

For organizations looking to become more agile, it’s possible to quickly boost decision-making efficiency by categorizing the type of decision to be made and adjusting the approach accordingly. In the next section, we review three types of decision making and how to optimize the process for each.

What are three keys to faster, better decisions?

Business leaders today have access to more sophisticated data than ever before. But it hasn’t necessarily made decision making any easier. For one thing, organizational dynamics—such as unclear roles, overreliance on consensus, and death by committee—can get in the way of straightforward decision making. And more data often means more decisions to be taken, which can become too much for one person, team, or department. This can make it more difficult for leaders to cleanly delegate, which in turn can lead to a decline in productivity.

Leaders are growing increasingly frustrated with broken decision-making processes, slow deliberations, and uneven decision-making outcomes. Fewer than half  of the 1,200 respondents of a McKinsey survey report that decisions are timely, and 61 percent say that at least half the time they spend making decisions is ineffective.

What’s the solution? According to McKinsey research, effective solutions center around categorizing decision types and organizing different processes to support each type. Further, each decision category should be assigned its own practice—stimulating debate, for example, or empowering employees—to yield improvements in effectiveness.

Here are the three decision categories  that matter most to senior leaders, and the standout practice that makes the biggest difference for each type of decision.

  • Big-bet decisions are infrequent but high risk, such as acquisitions. These decisions carry the potential to shape the future of the company, and as a result are generally made by top leaders and the board. Spurring productive debate by assigning someone to argue the case for and against a potential decision can improve big-bet decision making.
  • Cross-cutting decisions, such as pricing, can be frequent and high risk. These are usually made by business unit heads, in cross-functional forums as part of a collaborative process. These types of decisions can be improved by doubling down on process refinement. The ideal process should be one that helps clarify objectives, measures, and targets.
  • Delegated decisions are frequent but low risk and are handled by an individual or working team with some input from others. Delegated decision making can be improved by ensuring that the responsibility for the decision is firmly in the hands of those closest to the work. This approach also enhances engagement and accountability.

In addition, business leaders can take the following four actions to help sustain rapid decision making :

  • Focus on the game-changing decisions, ones that will help an organization create value and serve its purpose.
  • Convene only necessary meetings, and eliminate lengthy reports. Turn unnecessary meetings into emails, and watch productivity bloom. For necessary meetings, provide short, well-prepared prereads to aid in decision making.
  • Clarify the roles of decision makers and other voices. Who has a vote, and who has a voice?
  • Push decision-making authority to the front line—and tolerate mistakes.


How can business leaders effectively delegate decision making?

Business is more complex and dynamic than ever, meaning business leaders are faced with needing to make more decisions in less time. Decision making takes up an inordinate amount of management’s time—up to 70 percent for some executives—which leads to inefficiencies and opportunity costs.

As discussed above, organizations should treat different types of decisions differently . Decisions should be classified  according to their frequency, risk, and importance. Delegated decisions are the most mysterious for many organizations: they are the most frequent, and yet the least understood. Only about a quarter of survey respondents  report that their organizations make high-quality and speedy delegated decisions. And yet delegated decisions, because they happen so often, can have a big impact on organizational culture.

The key to better delegated decisions is to empower employees by giving them the authority and confidence to act. That means not simply telling employees which decisions they can or can’t make; it means giving employees the tools they need to make high-quality decisions and the right level of guidance as they do so.

Here’s how to support delegation and employee empowerment:

  • Ensure that your organization has a well-defined, universally understood strategy. When the strategic intent of an organization is clear, empowerment is much easier because it allows teams to pull in the same direction.
  • Clearly define roles and responsibilities. At the foundation of all empowerment efforts is a clear understanding of who is responsible for what, including who has input and who doesn’t.
  • Invest in capability building (and coaching) up front. To help managers spend meaningful coaching time, organizations should also invest in managers’ leadership skills.
  • Build an empowerment-oriented culture. Leaders should role model mindsets that promote empowerment, and managers should build the coaching skills they want to see. Managers and employees, in particular, will need to get comfortable with failure as a necessary step to success.
  • Decide when to get involved. Managers should spend effort up front to decide what is worth their focused attention. They should know when it’s appropriate to provide close guidance and when not to.

How can you guard against bias in decision making?

Cognitive bias is real. We all fall prey, no matter how we try to guard ourselves against it. And cognitive and organizational bias undermines good decision making, whether you’re choosing what to have for lunch or whether to put in a bid to acquire another company.

Here are some of the most common cognitive biases and strategies for how to avoid them:

  • Confirmation bias. Often, when we already believe something, our minds seek out information to support that belief—whether or not it is actually true. Confirmation bias  involves overweighting evidence that supports our belief, underweighting evidence against our belief, or even failing to search impartially for evidence in the first place. Confirmation bias is one of the most common traps organizational decision makers fall into. One famous—and painful—example of confirmation bias is when Blockbuster passed up the opportunity  to buy a fledgling Netflix for $50 million in 2000. (Actually, that’s putting it politely. Netflix executives remember being “laughed out” of Blockbuster’s offices.) Fresh off the dot-com bubble burst of 2000, Blockbuster executives likely concluded that Netflix had approached them out of desperation—not that Netflix actually had a baby unicorn on its hands.
  • Herd mentality. First observed by Charles Mackay in his 1841 study of crowd psychology, herd mentality happens when information that’s available to the group is determined to be more useful than privately held knowledge. Individuals buy into this bias because there’s safety in the herd. But ignoring competing viewpoints might ultimately be costly. To counter this, try a teardown exercise , wherein two teams use scenarios, advanced analytics, and role-playing to identify how a herd might react to a decision, and to ensure they can refute public perceptions.
  • Sunk-cost fallacy. Executives frequently hold onto underperforming business units or projects because of emotional or legacy attachment . Equally, business leaders hate shutting projects down . This, researchers say, is due to the ingrained belief that if everyone works hard enough, anything can be turned into gold. McKinsey research indicates two techniques for understanding when to hold on and when to let go. First, change the burden of proof from why an asset should be cut to why it should be retained. Next, categorize business investments according to whether they should be grown, maintained, or disposed of—and follow clearly differentiated investment rules  for each group.
  • Ignoring unpleasant information. Researchers call this the “ostrich effect”—when people figuratively bury their heads in the sand , ignoring information that will make their lives more difficult. One study, for example, found that investors were more likely to check the value of their portfolios when the markets overall were rising, and less likely to do so when the markets were flat or falling. One way to help get around this is to engage in a readout process, where individuals or teams summarize discussions as they happen. This increases the likelihood that everyone leaves a meeting with the same understanding of what was said.
  • Halo effect. Important personal and professional choices are frequently affected by people’s tendency to make specific judgments based on general impressions. Humans are tempted to use simple mental frames to understand complicated ideas, which means we frequently draw conclusions faster than we should. The halo effect is particularly common in hiring decisions. To avoid this bias, use structured interviews, which help mitigate the tendency to essentialize. When candidates are measured against consistent indicators, intuition is less likely to play a role.

For more common biases and how to beat them, check out McKinsey’s Bias Busters Collection.


Articles referenced include:

  • “Bias busters: When the crowd isn’t necessarily wise,” McKinsey Quarterly, May 23, 2022, Eileen Kelly Rinaudo, Tim Koller, and Derek Schatz
  • “Boards and decision making,” April 8, 2021, Aaron De Smet, Frithjof Lund, Suzanne Nimocks, and Leigh Weiss
  • “To unlock better decision making, plan better meetings,” November 9, 2020, Aaron De Smet, Simon London, and Leigh Weiss
  • “Reimagine decision making to improve speed and quality,” September 14, 2020, Julie Hughes, J. R. Maxwell, and Leigh Weiss
  • “For smarter decisions, empower your employees,” September 9, 2020, Aaron De Smet, Caitlin Hewes, and Leigh Weiss
  • “Bias busters: Lifting your head from the sand,” McKinsey Quarterly, August 18, 2020, Eileen Kelly Rinaudo
  • “Decision making in uncertain times,” March 24, 2020, Andrea Alexander, Aaron De Smet, and Leigh Weiss
  • “Bias busters: Avoiding snap judgments,” McKinsey Quarterly, November 6, 2019, Tim Koller, Dan Lovallo, and Phil Rosenzweig
  • “Three keys to faster, better decisions,” McKinsey Quarterly, May 1, 2019, Aaron De Smet, Gregor Jost, and Leigh Weiss
  • “Decision making in the age of urgency,” April 30, 2019, Iskandar Aminov, Aaron De Smet, Gregor Jost, and David Mendelsohn
  • “Bias busters: Pruning projects proactively,” McKinsey Quarterly, February 6, 2019, Tim Koller, Dan Lovallo, and Zane Williams
  • “Decision making in your organization: Cutting through the clutter,” McKinsey Quarterly, January 16, 2018, Aaron De Smet, Simon London, and Leigh Weiss
  • “Untangling your organization’s decision making,” McKinsey Quarterly, June 21, 2017, Aaron De Smet, Gerald Lackey, and Leigh Weiss
  • “Are you ready to decide?,” McKinsey Quarterly, April 1, 2015, Philip Meissner, Olivier Sibony, and Torsten Wulf


Understanding the Intersection of Medicaid & Work: A Look at What the Data Say

Madeline Guth, Patrick Drake, Robin Rudowitz, and Maiss Mohamed. Published: Apr 24, 2023


While data show that the majority of Medicaid enrollees are working, there has been long-standing debate about imposing work requirements in Medicaid. For the first time in the history of the Medicaid program, the Trump Administration approved Section 1115 waivers that included  work and reporting requirements as a condition of Medicaid eligibility in some states. However, courts struck down many of these requirements and the Biden Administration withdrew these provisions in all states that had approvals. Arkansas was the only state to implement Medicaid work requirements with consequences for noncompliance.

Work requirements are now back on the agenda as some Congressional Republicans have indicated that they will rely on a budget outline that would require Medicaid enrollees to work, or look for work, in order to receive coverage. In a speech on April 17, Speaker McCarthy emphasized work requirements as part of negotiations to increase the debt limit, and such requirements were included in the Republicans’ proposed debt limit bill released on April 19. In addition, Republican legislators in several states have proposed seeking work requirement waivers. In July 2023, Georgia intends to expand Medicaid eligibility to 100% of the federal poverty level (FPL), with initial and continued enrollment conditioned on meeting work requirements (after a federal judge overturned the Biden Administration’s withdrawal of Georgia’s work requirement).

Experience in Arkansas and earlier estimates of implementing work requirements nationally suggest that many could lose coverage, due primarily to barriers in meeting work reporting requirements. An analysis from the Congressional Budget Office (CBO) found that a national Medicaid work requirement would result in 2.2 million adults losing Medicaid coverage per year (and subsequently experiencing increases in medical expenses), and lead to only a very small increase in employment. CBO estimates that this policy would decrease federal spending by $15 billion annually due to the reduction in enrollment. New attention on work and reporting requirements comes as millions are at risk of losing coverage due to administrative barriers as states resume routine renewals and disenrollments with the unwinding of the Medicaid continuous enrollment provision that was included in the Families First Coronavirus Response Act (FFCRA) enacted at the start of the COVID-19 pandemic.

To provide context to these debates, this brief explores work status and characteristics of Medicaid enrollees in 2021 to answer three key questions:

  • What is the work status of Medicaid covered adults?
  • What do we know about Medicaid adults who are working?
  • What do we know about the intersection of work and health and the impact of Medicaid work requirements?

These data show that most Medicaid covered adults were either working or faced a barrier to work, leaving just nine percent of enrollees who could be directly targeted by work requirement policies.

What is the work status of Medicaid adults?

Many Medicaid adults who are not working face barriers to moving into employment, such as functional disability. Even if they do not qualify for Medicaid on the basis of a disability through SSI, adults on Medicaid have high rates of functional disability and serious medical conditions, particularly those who are not working. Approximately 17% of Medicaid adults have a functional disability, with the highest rates of disability among Medicaid adults not in the labor force (27%) (data not shown). Medicaid adults may also experience mental health conditions that impede their ability to work, with about one in three (30%) non-working Medicaid adults reporting depression. 1

What do we know about the intersection of work and health and the impact of work requirements?

Medicaid can support employment by providing affordable health coverage, and research suggests that the effects of work requirements on health and employment are likely limited. Research shows that being in poor health is associated with increased risk of job loss, while access to affordable health insurance has a positive effect on the ability to obtain and maintain employment. Medicaid coverage helps low-wage workers get care that enables them to remain healthy enough to work. Also, states may launch initiatives, such as voluntary employment referral programs, to support employment for Medicaid enrollees without making employment a condition of eligibility. In focus groups, enrollees report that Medicaid coverage helps them to manage chronic conditions and supports their ability to work jobs that may be physically demanding. However, a review of research on the relationship between work and health found that although there is strong evidence of an association between unemployment and poorer health outcomes, there is limited evidence on the effect of employment on health. Further, research from other public programs, including TANF and SNAP, suggests that work requirements have had little impact on increasing employment. A CBO report finds that earnings associated with employment gains due to TANF and SNAP work requirements were offset by loss of income for those no longer eligible for the programs.

In part due to evidence on the impacts of work requirements, courts and the Biden Administration determined that such requirements do not further Medicaid program objectives. A January 2021 executive order from President Biden directed HHS to review waiver policies that may undermine Medicaid. CMS subsequently withdrew Medicaid work requirement waivers in all states that had approvals. Previously, in 2020 the DC appeals court affirmed that the Trump Administration’s approvals of work requirements in Arkansas and New Hampshire were unlawful because the Secretary failed to consider the impact on coverage; before leaving office, the Trump Administration asked the Supreme Court to reverse these decisions. After the Biden Administration withdrew the Arkansas and New Hampshire work requirements, the Administration asked the Supreme Court to vacate the lower court decisions and dismiss the Arkansas case as moot (as that waiver had expired) and send the New Hampshire case back to HHS (as New Hampshire had not asked the Court to review the case involving its waiver). In April 2022, the Court granted this motion, effectively putting an end to the pending litigation. This dismissal does not preclude a future presidential administration from revisiting work requirements; however, any future work requirements approved would likely face legal challenges.

Looking Ahead

Right now, Georgia is the only state with an approved work requirement waiver, as a Federal District Court judge vacated the Biden Administration’s withdrawal. Once implemented in July 2023, Georgia’s waiver will expand Medicaid eligibility to 100% of the federal poverty level (FPL), with initial and continued enrollment conditioned on meeting work and premium requirements. Section 1115 monitoring and evaluation requirements will require Georgia to track and report the number of enrollees who gain and maintain coverage. As only Arkansas has implemented Medicaid work requirements with consequences for noncompliance, the results of monitoring and evaluation in Georgia will provide further evidence as to the impacts of work requirements—however, Georgia is unique in applying work requirements to a new coverage group rather than to an existing Medicaid population.

Additionally, other states have indicated they may pursue work requirement waivers in the future, and Congressional Republicans have recently discussed a federal Medicaid work requirement tied to approval to raise the debt limit. Although the Biden Administration has said work requirements do not further Medicaid objectives, a future presidential administration could revisit this view and allow state waivers (though any future work requirements approved via waiver could face legal challenges).



Social Media Fact Sheet

Many Americans use social media to connect with one another, engage with news content, share information and entertain themselves. Explore the patterns and trends shaping the social media landscape.

To better understand Americans’ social media use, Pew Research Center surveyed 5,733 U.S. adults from May 19 to Sept. 5, 2023. Ipsos conducted this National Public Opinion Reference Survey (NPORS) for the Center using address-based sampling and a multimode protocol that included both web and mail. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race and ethnicity, education and other categories.
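To make the weighting step concrete, here is a minimal sketch of cell weighting to population targets in Python. The education categories, target shares, and survey item are hypothetical placeholders, and the actual NPORS weighting adjusts on several variables at once; this is only an illustration of the general idea.

```python
import pandas as pd

# Hypothetical illustration of cell weighting: each respondent's weight is
# the population share of their demographic cell divided by that cell's
# share of the sample. The categories, target shares, and survey item
# below are made up; they are not the actual NPORS benchmarks.
sample = pd.DataFrame({
    "education": ["college", "college", "no_college", "no_college", "no_college"],
    "uses_platform": [1, 1, 0, 1, 0],   # hypothetical yes/no survey item
})

population_targets = {"college": 0.38, "no_college": 0.62}  # assumed shares

sample_shares = sample["education"].value_counts(normalize=True)
sample["weight"] = sample["education"].map(
    lambda cell: population_targets[cell] / sample_shares[cell]
)

# Weighted estimate of the survey item (the unweighted share would be 0.60)
weighted_share = (sample["uses_platform"] * sample["weight"]).sum() / sample["weight"].sum()
print(round(weighted_share, 3))  # ~0.587
```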

Polls from 2000 to 2021 were conducted via phone. For more on this mode shift, read our Q&A.

Here are the questions used for this analysis, along with responses, and its methodology.

A note on terminology: Our May-September 2023 survey was already in the field when Twitter changed its name to “X.” The terms Twitter and X are both used in this report to refer to the same platform.


YouTube and Facebook are the most-widely used online platforms. About half of U.S. adults say they use Instagram, and smaller shares use sites or apps such as TikTok, LinkedIn, Twitter (X) and BeReal.

Note: The vertical line indicates a change in mode. Polls from 2012-2021 were conducted via phone. In 2023, the poll was conducted via web and mail. For more details on this shift, please read our Q&A. Refer to the topline for more information on how question wording varied over the years. Pre-2018 data is not available for YouTube, Snapchat or WhatsApp; pre-2019 data is not available for Reddit; pre-2021 data is not available for TikTok; pre-2023 data is not available for BeReal. Respondents who did not give an answer are not shown.

Source: Surveys of U.S. adults conducted 2012-2023.


Usage of the major online platforms varies by factors such as age, gender and level of formal education.

Interactive chart: % of U.S. adults who say they ever use each platform, by demographic group (including race and ethnicity and political affiliation).

This fact sheet was compiled by Research Assistant Olivia Sidoti, with help from Research Analyst Risa Gelles-Watnick, Research Analyst Michelle Faverio, Digital Producer Sara Atske, Associate Information Graphics Designer Kaitlyn Radde and Temporary Researcher Eugenie Park.

Follow these links for more in-depth analysis of the impact of social media on American life.

  • Americans’ Social Media Use  Jan. 31, 2024
  • Americans’ Use of Mobile Technology and Home Broadband  Jan. 31, 2024
  • Q&A: How and why we’re changing the way we study tech adoption  Jan. 31, 2024

Find more reports and blog posts related to internet and technology.



What does the public in six countries think of generative AI in news?


DOI: 10.60625/risj-4zb8-cg87

Executive summary

Based on an online survey focused on understanding if and how people use generative artificial intelligence (AI), and what they think about its application in journalism and other areas of work and life across six countries (Argentina, Denmark, France, Japan, the UK, and the USA), we present the following findings.

Findings on the public’s use of generative AI

ChatGPT is by far the most widely recognised generative AI product – around 50% of the online population in the six countries surveyed have heard of it. It is also by far the most widely used generative AI tool in the six countries surveyed. That being said, frequent use of ChatGPT is rare, with just 1% using it on a daily basis in Japan, rising to 2% in France and the UK, and 7% in the USA. Many of those who say they have used generative AI have used it just once or twice, and it is yet to become part of people’s routine internet use.

In more detail, we find:

  • While there is widespread awareness of generative AI overall, a sizable minority of the public – between 20% and 30% of the online population in the six countries surveyed – have not heard of any of the most popular AI tools.
  • In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.
  • Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18–24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.
  • Roughly equal proportions across the six countries say that they have used generative AI for getting information (24%) and for creating various kinds of media, including text but also audio, code, images, and video (28%).
  • Just 5% across the six countries covered say that they have used generative AI to get the latest news.

Findings on public opinion about the use of generative AI in different sectors

Most of the public expect generative AI to have a large impact on virtually every sector of society in the next five years, ranging from 51% expecting a large impact on political parties to 66% for news media and 66% for science. But there is significant variation in whether people expect different sectors to use AI responsibly – ranging from around half trusting scientists and healthcare professionals to do so, to less than one-third trusting social media companies, politicians, and news media to use generative AI responsibly.

  • Expectations around the impact of generative AI in the coming years are broadly similar across age, gender, and education, except for expectations around what impact generative AI will have for ordinary people – younger respondents are much more likely to expect a large impact in their own lives than older people are.
  • Asked if they think that generative AI will make their life better or worse, a plurality in four of the six countries covered answered ‘better’, but many have no strong views, and a significant minority believe it will make their life worse. People’s expectations when asked whether generative AI will make society better or worse are generally more pessimistic.
  • Asked whether generative AI will make different sectors better or worse, there is considerable optimism around science, healthcare, and many daily routine activities, including in the media space and entertainment (where there are 17 percentage points more optimists than pessimists), and considerable pessimism for issues including cost of living, job security, and news (8 percentage points more pessimists than optimists).
  • When asked their views on the impact of generative AI, between one-third and half of our respondents opted for middle options or answered ‘don’t know’. While some have clear and strong views, many have not made up their mind.

Findings on public opinion about the use of generative AI in journalism

Asked to assess what they think news produced mostly by AI with some human oversight might mean for the quality of news, people tend to expect it to be less trustworthy and less transparent, but more up to date and (by a large margin) cheaper for publishers to produce. Very few people (8%) think that news produced by AI will be more worth paying for compared to news produced by humans.

  • Much of the public think that journalists are currently using generative AI to complete certain tasks, with 43% thinking that they always or often use it for editing spelling and grammar, 29% for writing headlines, and 27% for writing the text of an article.
  • Around one-third (32%) of respondents think that human editors check AI outputs to make sure they are correct or of a high standard before publishing them.
  • People are generally more comfortable with news produced by human journalists than by AI.
  • Although people are generally wary, there is somewhat more comfort with using news produced mostly by AI with some human oversight when it comes to soft news topics like fashion (+7 percentage point difference between comfortable and uncomfortable) and sport (+5) than with ‘hard’ news topics, including international affairs (-21) and, especially, politics (-33).
  • Asked whether news that has been produced mostly by AI with some human oversight should be labelled as such, the vast majority of respondents want at least some disclosure or labelling. Only 5% of our respondents say none of the use cases we listed need to be disclosed.
  • There is less consensus on what uses should be disclosed or labelled. Around one-third think ‘editing the spelling and grammar of an article’ (32%) and ‘writing a headline’ (35%) should be disclosed, rising to around half for ‘writing the text of an article’ (47%) and ‘data analysis’ (47%).
  • Again, when asked their views on generative AI in journalism, between a third and half of our respondents opted for neutral middle options or answered ‘don’t know’, reflecting a large degree of uncertainty and/or recognition of complexity.

Introduction

The public launch of OpenAI’s ChatGPT in November 2022 and subsequent developments have spawned huge interest in generative AI. Both the underlying technologies and the range of applications and products involving at least some generative AI have developed rapidly (though unevenly), especially since the publication in 2017 of the breakthrough ‘transformers’ paper (Vaswani et al. 2017) that helped spur new advances in what foundation models and Large Language Models (LLMs) can do.

These developments have attracted much important scholarly attention, ranging from computer scientists and engineers trying to improve the tools involved, to scholars testing their performance against quantitative or qualitative benchmarks, to lawyers considering their legal implications. Wider work has drawn attention to built-in limitations, issues around the sourcing and quality of training data, and the tendency of these technologies to reproduce and even exacerbate stereotypes and thus reinforce wider social inequalities, as well as the implications of their environmental impact and political economy.

One important area of scholarship has focused on public use and perceptions of AI in general, and generative AI in particular (see, for example, Ada Lovelace Institute 2023; Pew 2023). In this report, we build on this line of work by using online survey data from six countries to document and analyse public attitudes towards generative AI, its application across a range of different sectors in society, and, in greater detail, in journalism and the news media specifically.

We go beyond already published work on countries including the USA (Pew 2023; 2024), Switzerland (Vogler et al. 2023), and Chile (Mellado et al. 2024), both in terms of the questions we cover and specifically in providing a cross-national comparative analysis of six countries that are all relatively privileged, affluent, free, and highly connected, but have very different media systems (Humprecht et al. 2022) and degrees of platformisation of their news media system in particular (Nielsen and Fletcher 2023).

The report focuses on the public because we believe that – in addition to economic, political, and technological factors – public uptake and understanding of generative AI will be among the key factors shaping how these technologies are being developed and are used, and what they, over time, will come to mean for different groups and different societies (Nielsen 2024). There are many powerful interests at play around AI, and much hype – often positive salesmanship, but sometimes wildly pessimistic warnings about possible future risks that might even distract us from already present issues. But there is also a fundamental question of whether and how the public at large will react to the development of this family of products. Will it be like blockchain, virtual reality, and Web3? All promoted with much bombast but little popular uptake so far. Or will it be more like the internet, search, and social media – hyped, yes, but also quickly becoming part of billions of people’s everyday media use.

To advance our understanding of these issues, we rely on data from an online survey focused on understanding if and how people use generative AI, and what they think about its application in journalism and other areas of work and life. In the first part of the report, we present the methodology, then we go on to cover public awareness and use of generative AI, expectations for generative AI’s impact on news and beyond, how people think AI is being used by journalists right now, and how people think about how journalists should use generative AI, before offering a concluding discussion.

As with all survey-based work, we are reliant on people’s own understanding and recall. This means that many responses here will draw on broad conceptions of what AI is and might mean, and that, when it comes to generative AI in particular, people are likely to answer based on their experience of using free-standing products explicitly marketed as being based on generative AI, like ChatGPT. Most respondents will be less likely to be thinking about incidents where they may have come across functionalities that rely in part on generative AI, but do not draw as much attention to it – a version of what is sometimes called ‘invisible AI’ (see, for example, Alm et al. 2020). We are also aware that these data reflect a snapshot of public opinion, which can fluctuate over time.

We hope the analysis and data published here will help advance scholarly analysis by complementing the important work done on the use of AI in news organisations (for example, Beckett and Yaseen 2023; Caswell 2024; Diakopoulos 2019; Diakopoulos et al 2024; Newman 2024; Simon 2024), including its limitations and inequities (see, for example, Broussard 2018, 2023; Bender et al. 2021), and help centre the public as a key part of how generative AI will develop and, over time, potentially impact many different sectors of society, including journalism and the news media.

Methodology

The report is based on a survey conducted by YouGov on behalf of the Reuters Institute for the Study of Journalism (RISJ) at the University of Oxford. The main purpose is to understand if and how people use generative AI, and what they think about its application in journalism and other areas of work and life.

The data were collected by YouGov using an online questionnaire fielded between 28 March and 30 April 2024 in six countries: Argentina, Denmark, France, Japan, the UK, and the USA.

YouGov was responsible for the fieldwork and provision of weighted data and tables only, and RISJ was responsible for the design of the questionnaire and the reporting and interpretation of the results.

Samples in each country were assembled using nationally representative quotas for age group, gender, region, and political leaning. The data were weighted to targets based on census or industry-accepted data for the same variables.

Sample sizes are approximately 2,000 in each country. The use of a non-probability sampling approach means that it is not possible to compute a conventional ‘margin of error’ for individual data points. However, differences of +/- 2 percentage points (pp) or less are very unlikely to be statistically significant and should be interpreted with a very high degree of caution. We typically do not regard differences of +/- 2pp as meaningful, and as a general rule we do not refer to them in the text.
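For orientation only, the conventional 95% margin of error for a simple random sample of about 2,000 respondents works out to roughly +/- 2 percentage points, which is consistent with the caution above. The calculation below is a minimal illustration of that arithmetic and, as noted, does not strictly apply to non-probability online panels.

```python
import math

# Illustrative only: conventional 95% margin of error for a simple random
# sample. This does not strictly apply to non-probability online panels,
# but it shows why differences of around +/- 2pp deserve caution.
n = 2000   # approximate sample size per country
p = 0.5    # proportion that maximises the margin of error
z = 1.96   # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe * 100:.1f} percentage points")  # +/- 2.2 percentage points
```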

It is important to note that online samples tend to under-represent the opinions and behaviours of people who are not online (typically those who are older, less affluent, and have limited formal education). Moreover, because people usually opt in to online survey panels, they tend to over-represent people who are well educated and socially and politically active.

Some parts of the survey require respondents to recall their past behaviour, which can be flawed or influenced by various biases. Additionally, respondents’ beliefs and attitudes related to generative AI may be influenced by social desirability bias, and when asked about complex socio-technical issues, people will not always be familiar with the terminology experts rely on or understand the terms the same way. We have taken steps to mitigate these potential biases and sources of error by implementing careful questionnaire design and testing.

1. Public awareness and use of generative AI

Most of our respondents have, by now, heard of at least some of the most popular generative AI tools. ChatGPT is by far the most widely recognised of these, with between 41% (Argentina) and 61% (Denmark) saying they’d heard of it.

Other tools, typically those built by incumbent technology companies – such as Google Gemini, Microsoft Copilot, and Snapchat My AI – are some way behind ChatGPT, even with the boost that comes from being associated with a well-known brand. They are, with the exception of Grok from X, each recognised by roughly 15–25% of the public.

Tools built by specialised AI companies, such as Midjourney and Perplexity, currently have little to no brand recognition among the public at large. And there’s little national variation here, even when it comes to brands like Mistral in France; although it is seen by some commentators as a national champion, it clearly hasn’t yet registered with the wider French population.

We should also remember that a sizable minority of the public – between 19% of the online population in Japan and 30% in the UK – have not heard of any of the most popular AI tools (including ChatGPT) despite nearly two years of hype, policy conversations, and extensive media coverage.

While our Digital News Report (Newman et al. 2023) shows that in most countries the news market is dominated by domestic brands that focus on national news, in contrast, the search and social platform space across countries tends to feature the same products from large technology companies such as Google, Meta, and Microsoft. At least for now, it seems like the generative AI space will follow the pattern from the technology sector, rather than the more nationally oriented one of news providers serving distinct markets defined in part by culture, history, and language.

The pattern we see for awareness in Figure 1 extends to use, with ChatGPT by far the most widely used generative AI tool in the six countries surveyed. Use of ChatGPT is roughly two or three times more widespread than the next products, Google Gemini and Microsoft Copilot. What’s also clear from Figure 2 is that, even when it comes to ChatGPT, frequent use is rare, with just 1% using it on a daily basis in Japan, rising to 2% in France and the UK, and 7% in the USA. Many of those who say they have used generative AI have only used it once or twice, and it is yet to become part of people’s routine internet use.

Use of ChatGPT is slightly more common among men and those with higher levels of formal education, but the biggest differences are by age group, with younger people much more likely to have ever used it, and to use it on a regular basis (Figure 3). Averaging across all six countries, 16% of those aged 55 and over say they have used ChatGPT at least once, compared to 56% of 18–24s. But even among this age group infrequent use is the norm, with just over half of users saying they use it monthly or less.

Although people working in many different industries – including news and journalism – are looking for ways of deploying generative AI, people in every country apart from Argentina are slightly more likely to say they are using it in their private life rather than at work or school (Figure 4). If providers of AI products convince more companies and organisations that these tools can deliver great efficiencies and new opportunities this may change, with professional use becoming more widespread and potentially spilling over to people’s personal lives – a dynamic that was part of how the use of personal computers, and later the internet, spread. However, at this stage private use is more widespread.

Averaging across six countries, roughly equal proportions say that they have used generative AI for getting information (24%) and for creating media (28%), a category that includes creating images (9%), audio (3%), video (4%), code (5%), and generating text (Figure 5). When it comes to creating text more specifically, people report using generative AI to write emails (9%) and essays (8%), and for creative writing (e.g. stories and poems) (7%). But it’s also clear that many people who say they have used generative AI for creating media have just been playing around or experimenting (11%) rather than looking to complete a specific real-world task. This is also true when it comes to using generative AI to get information (9%), but people also say they have used it for answering factual questions (11%), advice (10%), generating ideas (9%), and summarisation (8%).

An average of 5% across the six countries say that they have used generative AI to get the latest news, making it less widespread than most of the other uses that were mentioned previously. One reason for this is that the free version of the most widely used generative AI product – ChatGPT – is not yet connected to the web, meaning that it cannot be used for the latest news. Furthermore, our previous research has shown that around half of the most widely used news websites are blocking ChatGPT (Fletcher 2024), and partly as a result, it is rarely able to deliver the latest news from specific outlets (Fletcher et al. 2024).

The figures for using generative AI for news vary by country, from just 2% in the UK and Denmark to 10% in the USA (Figure 6). The 10% figure in the USA is probably partly due to the fact that Google has been trialling Search Generative Experiences (SGE) there for the last year, meaning that people who use Google to search for a news-related topic – something that 23% of Americans do each week (Newman et al. 2023) – may see some generative AI text that attempts to provide an answer. However, given the documented limitations of generative AI when it comes to factual precision, companies like Google may well approach news more cautiously than other types of content and information, and the higher figure in the USA may also simply be because generative AI is more widely used there generally.

Numerous examples have been documented of generative AI giving incorrect answers when asked factual questions, as well as other forms of so-called ‘hallucination’ that result in poor-quality outputs (e.g. Angwin et al. 2024). Although some are quick to point out that it is wrong to expect generative AI to be good at information-based tasks – at least at its current state of development – some parts of the public are experimenting with doing exactly that.

Given the known problems when it comes to reliability and veracity, it is perhaps concerning that our data also show that users seem reasonably content with the performance – most of those (albeit a rather small slice of the online population) who have tried to use generative AI for information-based tasks generally say they trusted the outputs (Figure 7).

In interpreting this, it is important to keep in mind two important caveats.

First, the vast majority of the public has not used generative AI for information-based tasks, so we do not know about their level of trust. Other evidence suggests that trust among the large part of the public that has not used generative AI is low, meaning overall trust levels are likely to be low (Pew 2024).

Second, people are more likely to say that they ‘somewhat trust’ the outputs rather than ‘strongly trust’, which indicates a degree of scepticism – their trust is far from unconditional. However, this may also mean that, from the point of view of members of the public who have used the tools, information from generative AI, while clearly not perfect, is already good enough for many purposes, especially tasks like generating ideas.

When we ask people who have used generative AI to create media whether they think the product they used did it well or badly, we see a very similar picture. Most of those who have tried to use generative AI to create media think that it did it ‘very’ or ‘somewhat’ well, but again, we can only use this data to know what users of the technology think.

The general population’s views on the media outputs may look very different, and while early adopters seem to have some trust in generative AI, and feel these technologies do a somewhat good job for many tasks, it is not certain that everyone will feel the same, even if or when they start using generative AI tools.

2. Expectations for generative AI’s impact on news and beyond

We now move from people’s awareness and use of generative AI products to their expectations around what the development of these technologies will mean. First, we find that most of the public expect generative AI to have a large impact on virtually every sector of society in the next five years (Figure 8). For every sector, there is a smaller number who expect low impact (compared to a large impact), and a significant number of people (roughly between 15% and 20%) who answer ‘don’t know’.

Averaging across six countries, we find that around three-quarters of respondents think generative AI will have a large impact on search and social media companies (72%), while two-thirds (66%) think that it will have a large impact on the news media – strikingly, the same proportion who think it will have a large impact upon the work of scientists (66%). Around half think that generative AI will have a large impact upon national governments (53%) and politicians and political parties (51%).

Interestingly, there are generally fewer people who expect it will have a large impact on ordinary people (48%). Much of the public clearly thinks the impact of generative AI will be mediated by various existing social institutions.

Bearing in mind how different the countries we cover are in many respects, including in terms of how people use and think about news and media (see, for example, Newman et al. 2023), it is striking that we find few cross-country differences in public expectations around the impact of generative AI. There are a few minor exceptions. For example, expectations around impact for politicians and political parties are a bit higher than average in the USA (60% vs 51%) and a bit lower in Japan (44% vs 51%) – but, for the most part, views across countries are broadly similar.

For almost all these sectors, there is little variation across age and gender, and the main difference when it comes to different levels of education is that respondents with lower levels of formal education are more likely to respond with ‘don’t know’, and those with higher levels of education are more likely to expect a large impact. The number who expect a small impact remains broadly stable across levels of education.

The only exception to this relative lack of variation by demographic factors is expectations around what impact generative AI will have for ordinary people. Younger respondents, who, as we have shown in earlier sections, are much more likely to have used generative AI tools, are also much more likely to expect a large impact within the next five years than older people, who often have little or no personal experience of using generative AI (Figure 9).

Expectations around the impact of generative AI, whether large or small, in themselves say nothing about how people think about whether this impact will, on balance, be for better or for worse.

Because generative AI use is highly mediated by institutions, and our data document that much of the public clearly recognise this, a useful additional way to think about expectations is to consider whether members of the public trust different sectors to make responsible use of generative AI.

We find that public trust in different institutions to make responsible use of generative AI is generally quite low (Figure 10). While around half in most of the six countries trust scientists and healthcare professionals to use generative AI responsibly, the figures drop below 40% for most other sectors in most countries. Figures for social media companies are lower than many other sectors, as are those for news media, ranging from 12% in the UK to 30% in Argentina and the USA.

There is more cross-country variation in public trust and distrust in different institutions’ potential use of generative AI, partly in line with broader differences from country to country in terms of trust in institutions.

But there are also some overarching patterns.

First, younger people, while still often sceptical, are for many sectors more likely to say they trust a given institution to use generative AI responsibly, and less likely to express distrust. This tendency is most pronounced in the sectors viewed with greatest scepticism by the public at large, including the government, politicians, and ordinary people, as well as news media, social media, and search engines.

Second, a significant part of the public does not have a firm view on whether they trust or distrust different institutions to make responsible use of generative AI. Varying from sector to sector and from country to country, between roughly one-quarter and half of respondents answer ‘neither trust nor distrust’ or ‘don’t know’ when asked. There is much uncertainty and often limited personal experience; in that sense, the jury is still out.

Leaving aside country differences for a moment and looking at the aggregate across all six countries, we can combine our data on public expectations around the size of the impact that generative AI will have with expectations around whether various sectors will use these technologies responsibly. This will provide an overall picture of how people think about these issues across different social institutions (Figure 11).

If we compare public perceptions relative to the average percentage of respondents who expect a large impact across all sectors (58%, marked by the vertical dashed line in Figure 11) and the average percentage of respondents who distrust actors in a given sector to make responsible use of generative AI (33%, marked by the horizontal dashed line), we can group expectations from sector to sector into four quadrants.

  • First, there are those sectors where people expect generative AI to have a relatively large impact, but relatively few expect it will be used irresponsibly (e.g. healthcare and science).
  • Second, there are sectors where people expect the impact may not be as great, and relatively fewer fear irresponsible use (e.g. ordinary people and retailers).
  • Third, there are sectors where relatively few people expect a large impact, and relatively more people are worried about irresponsible use (e.g. government and political parties).
  • Finally, there are sectors where more people expect large impact, and more people fear irresponsible use by the actors involved (e.g. social media and the news media, who are viewed very similarly by the public in this respect).
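As a minimal sketch of the grouping just described, the snippet below classifies sectors into the four quadrants using the two cross-sector averages given above (58% expecting a large impact, 33% distrusting responsible use). Only the impact figures for science, news media, and government come from the text; the remaining values are placeholders for illustration.

```python
# Minimal sketch of the four-quadrant grouping described above.
# Thresholds come from the text (58% average "large impact",
# 33% average distrust); values marked as placeholders are invented.

AVG_IMPACT = 58
AVG_DISTRUST = 33

sectors = {
    # sector: (% expecting large impact, % distrusting responsible use)
    "science":    (66, 24),   # impact from text, distrust placeholder
    "healthcare": (63, 25),   # placeholder values
    "retailers":  (50, 28),   # placeholder values
    "government": (53, 40),   # impact from text, distrust placeholder
    "news media": (66, 40),   # impact from text, distrust placeholder
}

def quadrant(impact: float, distrust: float) -> str:
    high_impact = impact >= AVG_IMPACT
    high_distrust = distrust >= AVG_DISTRUST
    if high_impact and not high_distrust:
        return "large impact, less feared misuse"
    if not high_impact and not high_distrust:
        return "smaller impact, less feared misuse"
    if not high_impact and high_distrust:
        return "smaller impact, more feared misuse"
    return "large impact, more feared misuse"

for name, (impact, distrust) in sectors.items():
    print(f"{name}: {quadrant(impact, distrust)}")
```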

It is important to keep this quite nuanced and differentiated set of expectations in mind in interpreting people’s general expectations around what impact they think generative AI will have for them personally, as well as for society at large.

Asked if they think that generative AI will make their life better or worse, more than half of our respondents answer ‘neither better nor worse’ or ‘don’t know’, with a plurality in four of the six countries covered answering ‘better’, and a significant minority ‘worse’ (Figure 12). The large number of people with no strong expectations either way is consistent across countries, but the balance between more optimistic responses and more pessimistic ones varies.

People’s expectations when asked whether generative AI will make society better or worse are more pessimistic on average. There are about the same number of optimists, but significantly more pessimists who believe generative AI will make society worse. Expectations around what generative AI might mean for society are more varied across the six countries we cover. In two (France and the UK), there are more who expect it will make society worse than better. In another two (Denmark and the USA), there are as many pessimists as optimists. And in the remaining two (Argentina and Japan) more respondents expect generative AI products will make society better than expect them to make society worse.

Looking more closely at people’s expectations, both in terms of their own life and in terms of society, younger people and people with more formal education also often opt for ‘neither better nor worse’ or ‘don’t know’, but in most countries – Argentina being the exception – they are more likely to answer ‘better’ (Figure 13).

Asked whether they think the use of generative AI will make different areas of life better or worse, again, much of the public is undecided, either opting for ‘neither better nor worse’ or answering ‘don’t know’, underlining that it is still early days.

We now look specifically at the percentage point difference between optimists who expect AI to make things better and pessimists who expect it to make them worse, which gives a sense of public expectations across different areas (Figure 14). Large parts of the public think generative AI will make science (net ‘better’ of +44 percentage points), healthcare (+36), and many daily routine activities, including transportation (+26), shopping (+22), and entertainment (+17), better. There is much less optimism when it comes to core areas of the rule of law, including criminal justice (+1) and, more broadly, legal rights and due process (-3), and considerable pessimism for some very bread-and-butter issues, including cost of living (-6), equality (-6), and job security (-18).

News and journalism is also an area where, on balance, there is more pessimism than optimism (-8) – a striking contrast to another area involving the media, namely entertainment (+17). But there is a lot of national variation here. In countries that are more optimistic about the potential effects of generative AI, namely Argentina (+19) and Japan (+8), the proportion that think it will make news and journalism better is larger than the proportion that think it will become worse. The UK public are particularly negative about the effect of generative AI on journalism, with a net score of -35. There is a similar lack of consensus across different countries on whether crime and justice, legal rights and due process, cost of living, equality, and job security will be made better or worse.

3. How people think generative AI is being used by journalists right now

Many of the conversations around generative AI and journalism are about what might happen in the future – speculation about what the technology may or may not be able to do one day, and how this will shape the profession as we know it. But it is important to remember that some journalists and news organisations are using generative AI right now, and they have been using some form of AI in the newsroom for several years.

We now focus on how much the public knows about this, what they think journalists currently use generative AI for, and what processes they think news media have in place to ensure quality.

In the survey, we showed respondents a list of journalistic tasks and asked them how often they think journalists perform them ‘using artificial intelligence with some human oversight’. The tasks ranged from behind-the-scenes work like ‘editing the spelling and grammar of an article’ and ‘data analysis’ through to much more audience-facing outputs like ‘writing the text of an article’ and ‘creating a generic image/illustration to accompany the text of an article’.

We specifically asked about doing these ‘using artificial intelligence with some human oversight’ because we know that some newsrooms are already performing at least some tasks in this way, while few are currently doing them entirely using AI without a human in the loop. Even tasks that may seem fanciful to some, like ‘creating an artificial presenter or author’, are not without precedent. In Germany, for example, the popular regional newspaper Express has created a profile for an artificial author called Klara Indernach, 1 which it uses as the byline for its articles created with the help of AI, and several news organisations across the world already use AI-generated artificial presenters for various kinds of video and audio.

Figure 15 shows that a substantial minority of the public believe that journalists already always or often use generative AI to complete a wide range of different tasks. Around 40% believe that journalists often or always use AI for translation (43%), checking spelling and grammar (43%), and data analysis (40%). Around 30% think that journalists often or always use AI for re-versioning – whether it’s rewriting the same article for different people (28%) or turning text into audio or video (30%) – writing headlines (29%), or creating stock images (30%).

In general, the order of the tasks in Figure 15 reflects the fact that people – perhaps correctly – believe that journalists are more likely to employ AI for behind-the-scenes work like spellchecking and translation than they are for more audience-facing outputs. This may be because people understand that some tasks carry a greater reputational risk for journalists, and/or that the technology is simply better at some things than others.

The results may also reveal a degree of cynicism about journalism from some parts of the public. The fact that around a quarter think that journalists always or often use AI to create an image if a real photograph is not available (28%) and 17% think they create an artificial presenter or author may say more about their attitudes towards journalism as an institution than about how they think generative AI is actually being used. However unwelcome they might be – and however wrong they are about how many news media use AI – these perceptions are a social reality, shaping how parts of the public think about the intersection between journalism and AI.

Public perceptions of what journalists and news media already use AI for are quite consistent across different genders and age groups, but there are some differences by country, with respondents in Argentina and the USA a little more likely to believe that AI is used for each of these tasks, and respondents in Denmark and the UK less likely.

Among those news organisations that have decided to implement generative AI for certain tasks, the importance of ‘having a human in the loop’ to oversee processes and check errors is often stressed. Human oversight is nearly always mentioned in public-facing guidelines on the use of AI for editorial work, and journalists themselves mention it frequently (Becker et al. 2024).

Large parts of the public, however, do not think this is happening (Figure 16). Averaging across the six countries, around one-third think that human editors ‘always’ or ‘often’ check AI outputs to make sure they are correct or of a high standard before publishing them. Nearly half think that journalists ‘sometimes’, ‘rarely’, or ‘never’ do this – again, perhaps, reflecting a level of cynicism about the profession among the public, or a tendency to judge the whole profession and industry on the basis of how some parts of it act.

The proportion that think checking is commonplace is lowest in the UK, where only one-third of the population say they ‘trust most news most of the time’ (Newman et al. 2023), but we also see similarly low figures in Denmark, where trust in the news is much higher. The results may, therefore, also partly reflect more than just people’s attitudes towards journalism and the news media.

4. What does the public think about how journalists should use generative AI?

Various forms of AI have long been used to produce news stories by publishers including, for example, Associated Press, Bloomberg, and Reuters. And content produced with newer forms of generative AI has, with mixed results, been published by titles including BuzzFeed, the Los Angeles Times, the Miami Herald, USA Today, and others.

Publishers may be more or less comfortable with how they are using these technologies to produce various kinds of content, but our data suggest that much of the public is not – at least not yet. As we explore in greater detail in our forthcoming 2024 Reuters Institute Digital News Report (Newman et al. 2024), people are generally more comfortable with news produced by human journalists than by AI.

However, averaging across six countries, younger people are significantly more likely to say they are comfortable with using news produced in whole or in part by AI (Figure 17). The USA and Argentina have somewhat higher levels of comfort with news made by generative AI, but there too, much of the public remains sceptical.

We also asked respondents whether they are comfortable or uncomfortable using news produced mostly by AI with some human oversight on a range of different topics. Figure 18 shows the net percentage point difference between those that selected ‘very’ or ‘somewhat’ comfortable and those that selected ‘very’ or ‘somewhat’ uncomfortable (though, as ever, a significant minority selected the ‘neither’ or ‘don’t know’ options). Looking across different topics, there is somewhat more comfort with using news produced mostly by AI with some human oversight when it comes to ‘softer’ news topics, like fashion (+7) and sports (+5), than ‘hard’ news topics including politics (-33) and international affairs (-21).

But at this point in time, only for a very small number of topics are there more people comfortable with relying on AI-generated news than uncomfortable. As with overall comfort, there is somewhat greater acceptance of the use of AI for generating various kinds of news with at least some human oversight in the USA and Argentina.

Putting aside country differences, there is again a marked difference between our respondents overall and younger respondents. Among respondents overall, there are only three topic areas out of ten where slightly more respondents are comfortable with news made mostly by AI with some human oversight than are uncomfortable with this. Among respondents aged 18 to 24, this rises to six out of ten topic areas.

It is important to remember that much of the public does not have strong views either way, at least at this stage. Between one-quarter and one-third of respondents answer either ‘neither comfortable nor uncomfortable’ or ‘don’t know’ when asked the general questions about comfort with different degrees of reliance on generative AI versus human journalists, and between one-third and half of respondents do the same when asked about generative AI news for specific topics. It is an open question as to how these less clearly formed views will evolve.

One way to assess what the public expects it will mean if and when AI comes to play a greater role in news production is to gauge people’s views on how it will change news, compared to a baseline of news produced entirely by human journalists.

We map this by asking respondents if they think that news produced mostly by AI with some human oversight will differ from what most are used to across a range of different qualities and attributes.

Between one-third and half of our respondents do not have a strong view either way. Focusing on those respondents who do have a view, we can look, for each attribute, at the net percentage point difference between the share who think AI will make the news somewhat or much more of it (e.g. more ‘up to date’ or more ‘transparent’) and the share who think it will make the news somewhat or much less of it, which helps provide an overarching picture of public expectations.
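To make the net-score arithmetic concrete, here is a minimal sketch. The response shares are invented, chosen only so that the resulting scores line up with the +22 (‘up to date’) and -17 (‘trustworthy’) figures reported below; they are not the survey's actual distributions.

```python
# Minimal sketch of the net score used here:
# (% "much more" + % "somewhat more") - (% "much less" + % "somewhat less").
# Response shares are invented for illustration; the remainder of
# respondents chose a middle option or "don't know".
responses = {
    "up to date":  {"much more": 0.12, "somewhat more": 0.25,
                    "somewhat less": 0.10, "much less": 0.05},
    "trustworthy": {"much more": 0.05, "somewhat more": 0.13,
                    "somewhat less": 0.22, "much less": 0.13},
}

def net_score(shares: dict) -> int:
    positive = shares["much more"] + shares["somewhat more"]
    negative = shares["much less"] + shares["somewhat less"]
    return round((positive - negative) * 100)

for attribute, shares in responses.items():
    print(f"{attribute}: {net_score(shares):+d}")  # up to date: +22, trustworthy: -17
```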

On balance, more respondents expect news produced mostly by AI with some human oversight to be less trustworthy (-17) and less transparent (-8), but more up to date (+22) and – by a large margin – cheaper to make (+33) (Figure 19). There is considerable national variation here, but with the exception of Argentina, the balance of public opinion (net positive or negative) is usually the same for these four attributes. For the others, the balance often varies.

Essentially our data suggest that the public, at this stage, primarily think that the use of AI in news production will help publishers by cutting costs, but identify few, if any, ways in which they expect it to help them – and several key areas where many expect news made with AI to be worse.

In light of this, it makes sense that, when asked if news produced mostly by AI with some human oversight is more or less worth paying for than news produced entirely by a human journalist, an average of 41% across six countries say less worth paying for (Figure 20). Just 8% say they think that news made in this way will be more valuable.

There is some variation here by country and by age, but even among the generally more AI-positive younger respondents aged 18–24, most say either less worth paying for (33%) or about the same (38%). How the spread of generative AI, and publishers’ use of it, affects people’s willingness to pay for news will be worth following, as tensions may well mount between the ‘pivot to pay’ we have seen from many news media in recent years and the views we map here.

Looking across a range of different tasks that journalists and news media might use generative AI for, and in many cases already are using generative AI for, we can again gauge how comfortable the public is by looking at the balance between how many are comfortable with a particular use case and how many are uncomfortable.

As with several of the questions above, about a third have no strong view either way at this stage – but many others do. Across six countries, the balance of public opinion ranges from relatively high levels of comfort with back-end tasks, including editing spelling and grammar (+38), translation (+35), and the making of charts (+28), to widespread net discomfort with synthetic content, including creating an image if a real photo is not available (-13) and artificial presenters and authors (-24) (Figure 21).

When asked if it should be disclosed or labelled as such if news has been produced mostly by AI with some human oversight, only 5% of our respondents say none of the use cases included above need to be disclosed, and the vast majority of respondents say they want some form of disclosure or labelling in at least some cases. Research on the effect of labelling AI-generated news is ongoing, but early results suggest that although labelling may be desired by audiences, it may have a negative effect on trust (Toff and Simon 2023).

There is, however, less consensus on what exactly should be disclosed or labelled, except for somewhat lower expectations around the back-end tasks people are frequently comfortable with AI completing (Figure 22). Averaging across six countries, around half say that ‘creating an image if a real photograph is not available’ (49%), ‘writing the text of an article’ (47%), and ‘data analysis’ (47%) should be labelled as such if generative AI is used. However, this figure drops to around one-third for ‘editing the spelling and grammar of an article’ (32%) and ‘writing a headline’ (35%). Again, variation exists between both countries and demographic groups that are generally more positive about AI.

Based on online surveys of nationally representative samples in six countries, we have, with a particular focus on journalism and news, documented how aware people are of generative AI, how they use it, and their expectations on the magnitude of impact it will have in different sectors – including whether it will be used responsibly.

We find that most of the public are aware of various generative AI products, and that many have used them, especially ChatGPT. But between 19% and 30% of the online population in the six countries surveyed have not heard of any of the most popular generative AI tools, and while many have tried one or more of them, only a very small minority are, at this stage, frequent users. Going forward, some use will be driven by people seeking out and using stand-alone generative AI tools such as ChatGPT, but it seems likely that much of it will be driven by a combination of professional adaptation, through products used in the workplace, and the introduction of more generative AI-powered elements into platforms already widely used in people’s private lives, including social media and search engines, as illustrated by the recent announcements of much greater integration of generative AI into Google Search.

When it comes to public expectations around the impact of generative AI and whether these technologies are likely to be used responsibly, we document a differentiated and nuanced picture. First, there are sectors where people expect generative AI will have a greater impact, and relatively fewer people expect it will be used irresponsibly (including healthcare and science). Second, there are sectors where people expect the impact may not be as great, and relatively fewer fear irresponsible use (this includes ordinary people and retailers). Third, there are sectors where relatively fewer people expect large impact, and relatively more people are worried about irresponsible use (including government and political parties). Fourth, there are sectors where more people expect large impact, and more people fear irresponsible use by the actors involved (this includes social media and the news media).

Much of the public is still undecided on what the impact of generative AI will be. They are unsure whether, on balance, generative AI will make their own lives and society better or worse. This is understandable, given many are not aware of any of these products, and few have personal experience of using them frequently. Younger people and those with higher levels of formal education – who are also more likely to have used generative AI – are generally more positive.

Expectations around what generative AI might mean for society are more varied across the six countries we cover. In two, there are more who expect it will make society worse than better, in another two, there are as many pessimists as optimists, and in the final two, more respondents expect generative AI products will make society better than expect them to make society worse. These differences may also partly reflect the current situation societies find themselves in, and whether people think AI can fundamentally change the direction of those societies. To some extent we also see this pattern reflected in how people think about AI in news. Across a range of measures, in some countries people are generally more optimistic, but in others more pessimistic.

Looking at journalism and news media more closely, we have found that many believe generative AI is already relatively widely used for many different tasks, but that they are, in most cases, not convinced these uses of AI make news better – they mostly expect it to make it cheaper to produce.

While there is certainly curiosity, openness to new approaches, and some optimism in parts of the public (especially when it comes to the use of these technologies in the health sector and by scientists), generally, the role of generative AI in journalism and news media is seen quite negatively compared to many other sectors – in some ways similar to how much of the public sees social media companies. Basically, we find that the public primarily think that the use of generative AI in news production will help publishers cut costs, but identify few, if any, ways in which they expect it to help them as audiences, and several key areas where many expect news made with AI to be worse.

These views are not solely informed by how people think generative AI will impact journalism in the future. A substantial minority of the public believe that journalists already always or often use generative AI to complete a wide range of different tasks. Some of these are tasks that most are comfortable with, and are within the current capabilities of generative AI, like checking spelling and grammar. But many others are not. More than half of our respondents believe that news media at least sometimes use generative AI to create images if no real photographs are available, and as many believe that news media at least sometimes create artificial authors or presenters. These are forms of use that much of the public are uncomfortable with.

Every individual journalist and every news organisation will need to make their own decisions about which, if any, uses of generative AI they believe are right for them, given their editorial principles and their practical imperatives. Public opinion cannot – and arguably should not – dictate these decisions. But public opinion provides a guide on which uses are likely to influence how people judge the quality of news and their comfort with relying on it, and thus helps, among other things, to identify areas where it is particularly important for journalists and news media to communicate and explain their use of AI to their target audience.

It is still early days, and it remains to be seen how public use and perception of generative AI in general, and its role in journalism and news specifically, will evolve. On many of the questions asking respondents to evaluate AI in different sectors and for different uses, between roughly a quarter and half of respondents pick relatively neutral middle options or answer ‘don’t know’. There is still much uncertainty around what role generative AI should and will have, in different sectors, and for different purposes. And, especially in light of how many have limited personal experience of using these products, it makes sense that much of the public has not made up their minds.

Public debate, opinion commentary, and news coverage will be among the factors influencing how this evolves. So will people’s own experience of using generative AI products, whether for private or professional purposes. Here, it is important to note two things. First, younger respondents generally are much more open to, and in many cases optimistic about, generative AI than respondents overall. Second, despite the many documented limitations and problems with state-of-the-art generative AI products, those respondents who use these tools themselves tend to offer a reasonably positive assessment of how well they work, and how much they trust them. This does not necessarily mean that future adopters will feel the same. But if they do, and use becomes widespread and routine, overall public opinion will change – in some cases perhaps towards a more pessimistic view, but, at least if our data are anything to go by, in a more grounded and cautiously optimistic direction.

  • Ada Lovelace Institute and The Alan Turing Institute. 2023. How Do People Feel About AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain.
  • Alm, C. O., Alvarez, A., Font, J., Liapis, A., Pederson, T., Salo, J. 2020. ‘Invisible AI-driven HCI Systems – When, Why and How’, Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, 1–3. https://doi.org/10.1145/3419249.3420099.
  • Angwin, J., Nelson, A., Palta, R. 2024. ‘Seeking Reliable Election Information? Don’t Trust AI’, Proof News.
  • Becker, K. B., Simon, F. M., Crum, C. 2023. ‘Policies in Parallel? A Comparative Study of Journalistic AI Policies in 52 Global News Organisations’. https://doi.org/10.31235/osf.io/c4af9.
  • Beckett, C., Yaseen, M. 2023. ‘Generating Change: A Global Survey of What News Organisations Are Doing with Artificial Intelligence’. London: JournalismAI, London School of Economics.
  • Bender, E. M., Gebru, T., McMillan-Major, A., Shmitchell, S. 2021. ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. New York: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.
  • Broussard, M. 2018. Artificial Unintelligence: How Computers Misunderstand the World. Reprint edition. Cambridge, MA: The MIT Press.
  • Broussard, M. 2023. More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Cambridge, MA: The MIT Press.
  • Caswell, D. 2024. ‘AI in Journalism Challenge 2023’. London: Open Society Foundations.
  • Diakopoulos, N. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA: Harvard University Press.
  • Diakopoulos, N., Cools, H., Li, C., Helberger, N., Kung, E., Rinehart, A., Gibbs, L. 2024. ‘Generative AI in Journalism: The Evolution of Newswork and Ethics in a Generative Information Ecosystem’. New York: Associated Press.
  • Fletcher, R. 2024. How Many News Websites Block AI Crawlers? Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-xm9g-ws87.
  • Fletcher, R., Adami, M., Nielsen, R. K. 2024. ‘I’m Unable To’: How Generative AI Chatbots Respond When Asked for the Latest News. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/RISJ-HBNY-N953.
  • Humprecht, E., Herrero, L. C., Blassnig, S., Brüggemann, M., Engesser, S. 2022. ‘Media Systems in the Digital Age: An Empirical Comparison of 30 Countries’, Journal of Communication 72(2): 145–64. https://doi.org/10.1093/joc/jqab054.
  • Mellado, C., Cruz, A., Dodds, T. 2024. Inteligencia Artificial y Audiencias en Chile.
  • Newman, N. 2024. Journalism, Media, and Technology Trends and Predictions 2024. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-0s9w-z770.
  • Newman, N., Fletcher, R., Eddy, K., Robertson, C. T., Nielsen, R. K. 2023. Reuters Institute Digital News Report 2023. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-p6es-hb13.
  • Newman, N., Fletcher, R., Robertson, C. T., Ross Arguedas, A. A., Nielsen, R. K. 2024. Reuters Institute Digital News Report 2024 (forthcoming). Reuters Institute for the Study of Journalism.
  • Nielsen, R. K. 2024. ‘How the News Ecosystem Might Look like in the Age of Generative AI’.
  • Nielsen, R. K., Fletcher, R. 2023. ‘Comparing the Platformization of News Media Systems: A Cross-Country Analysis’, European Journal of Communication 38(5): 484–99. https://doi.org/10.1177/02673231231189043.
  • Pew. 2023. ‘Growing Public Concern about the Role of Artificial Intelligence in Daily Life’.
  • Pew. 2024. ‘Americans’ Use of ChatGPT is Ticking Up, but Few Trust its Election Information’.
  • Simon, F. M. 2024. ‘Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena’, Columbia Journalism Review.
  • Toff, B., Simon, F. M. 2023. ‘Or They Could Just Not Use It?’: The Paradox of AI Disclosure for Audience Trust in News. https://doi.org/10.31235/osf.io/mdvak.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., Polosukhin, I. 2017. Attention Is All You Need. https://doi.org/10.48550/ARXIV.1706.03762.
  • Vogler, D., Eisenegger, M., Fürst, S., Udris, L., Ryffel, Q., Rivière, M., Schäfer, M. S. 2023. Künstliche Intelligenz in der journalistischen Nachrichtenproduktion: Jahrbuch Qualität der Medien Studie 1/2023. https://doi.org/10.5167/UZH-238634.



About the authors

Dr Richard Fletcher is Director of Research at the Reuters Institute for the Study of Journalism. He is primarily interested in global trends in digital news consumption, the use of social media by journalists and news organisations, and, more broadly, the relationship between computer-based technologies and journalism.

Professor Rasmus Kleis Nielsen is Director of the Reuters Institute for the Study of Journalism, Professor of Political Communication at the University of Oxford, and served as Editor-in-Chief of the International Journal of Press/Politics from 2015 to 2018. His work focuses on changes in the news media, political communication, and the role of digital technologies in both.

Acknowledgements

We would like to thank Caryhs Innes, Xhoana Beqiri, and the rest of the team at YouGov for their work on fielding the survey. We would also like to thank Felix Simon for his help with the data analysis. We are grateful to the other members of the research team at RISJ for their input on the questionnaire and interpretation of the results, and to Kate Hanneford-Smith, Alex Reid, and Rebecca Edwards for helping to move this project forward and keeping us on track.

Funding acknowledgement

Report published by the Reuters Institute for the Study of Journalism (2024) as part of our work on AI and the Future of News , supported by seed funding from Reuters News and made possible by core funding from the Thomson Reuters Foundation.




May 31, 2024


Research finds limits to access of emergency contraceptive pill in Australia

by Flinders University


The accessibility of first-line oral emergency contraceptives in Australian community pharmacies is problematic, with a national survey finding that almost one-third of pharmacies do not stock the ulipristal acetate pill recommended by medical authorities.

Only 70% of the 233 pharmacies surveyed stocked ulipristal acetate emergency contraceptive (EC) pills, compared with 98% that stocked levonorgestrel. The survey also found that ulipristal acetate was much less likely to be stocked in community pharmacies in rural and remote areas, and was even more expensive when it was available.

"This is despite evidence that unintended pregnancies are more common among those living in rural and remote areas and highlights a clear equity issue that should be addressed," researchers say in an article due to be published in the journal Contraception .

"Despite guidelines recommending it as the first line oral emergency contraceptive, ulipristal acetate is less likely to be available in community pharmacies, and when it is available it is likely to be much more expensive," says corresponding author Flinders University Associate Professor Luke Grzeskowiak, who leads the Reproductive and Perinatal Pharmacoepidemiology Research Group at Flinders University and the South Australian Health and Medical Research Institute (SAHMRI).

"Several measures could be taken to improve women's ability to receive evidence-based treatments. With medication costs ranging from $26 to $80, this calls into question whether government subsidies should be available," says Associate Professor Grzeskowiak.

Emergency contraception has the potential to reduce the risk of unintended pregnancy following an episode of unprotected sexual intercourse. A number of factors must be considered when selecting the most appropriate EC product for each consumer, such as time since unprotected sexual intercourse, use of other oral contraceptives, and body mass index.

First author Tahlee Stevenson, a Research Associate from the University of Adelaide School of Public Health, says, "We need to better understand why pharmacies are choosing not to stock ulipristal acetate. Is this because of low consumer awareness and/or higher prices impacting demand, or is it related to a lack of awareness and understanding among pharmacy owners regarding evidence-based recommendations for emergency contraception?

"To truly work towards improving accessibility, we must address these factors and ensure that all consumers can source their preferred emergency contraceptive method in a timely and cost-effective manner. By only stocking levonorgestrel, pharmacies are inhibiting their capacity to follow clinical guidelines , and this may mean that some consumers aren't able to access the EC that is appropriate for their individual needs and circumstances," she says.

While legislation and guidelines cover the supply of emergency contraception, they do not extend to whether or not individual products are stocked, and pharmacies can choose not to stock any product at all. The result is a postcode lottery in terms of access.

Pharmacists must be aware of key differences in the available methods of EC to ensure that they are prepared to facilitate shared decision-making based on the individual needs of each woman.


National Mortgage Database Program


Introduction

The National Mortgage Database (NMDB®)  [1]  program is jointly funded and managed by the Federal Housing Finance Agency (FHFA) and the  Consumer Financial Protection Bureau  (CFPB). This program is designed to provide a rich source of information about the U.S. mortgage market. It has three primary components: 

  • the National Mortgage Database  (NMDB),
  • the quarterly National Survey of Mortgage Originations (NSMO), [2]
  • the annual American Survey of Mortgage Borrowers  (ASMB). [3]

The NMDB program enables FHFA to meet the statutory requirements of section 1324(c) of the Federal Housing Enterprises Financial Safety and Soundness Act of 1992, as amended by the Housing and Economic Recovery Act of 2008, to conduct a monthly mortgage market survey. Specifically, FHFA must, through a survey of the mortgage market, collect data on the characteristics of individual mortgages, including those eligible for purchase by Fannie Mae and Freddie Mac and those that are not, and including subprime and nontraditional mortgages. In addition, FHFA must collect information on the creditworthiness of borrowers, including a determination of whether subprime and nontraditional borrowers would have qualified for prime lending.  [4]

For CFPB, the NMDB program supports policymaking and research efforts and helps identify and understand emerging mortgage and housing market trends. CFPB uses the NMDB, among other purposes, in support of the market monitoring called for by the Dodd-Frank Wall Street Reform and Consumer Protection Act, including understanding how mortgage debt affects consumers and for retrospective rule review required by the statute.  

No information on borrower names, addresses, Social Security numbers, or dates of birth is ever used or stored by FHFA or CFPB as part of the NMDB program.  Furthermore, safeguards are in place to ensure that information in the database is not used to identify individual borrowers or lenders and is handled in full accordance with federal privacy laws and the Fair Credit Reporting Act (FCRA).

National Mortgage Database

The National Mortgage Database (NMDB) is the first component of the National Mortgage Database program. The NMDB is updated quarterly and covers a nationally representative five percent sample of closed-end first-lien residential mortgages in the United States.

The purpose of NMDB is to inform and educate FHFA, CFPB and other federal agencies about lending products and mortgage market health. The database is comprehensive, and there are many possibilities for how it may be used. Some examples include:

  • Studying the subprime mortgage crisis:  Because the data goes back to 1998, the database can be used to assess possible causes of the recent subprime crisis.
  • Monitoring new and emerging products in the mortgage market:  The database allows agencies to monitor volume and performance of products in the mortgage market and help regulators identify potential problems or new risks.
  • Monitoring the relative health of mortgage markets and consumers:  The database provides detailed mortgage loan performance information including whether payments are made on-time, as well as information regarding loan modifications, foreclosures, and bankruptcies. This can help policy makers better understand how various products are being used and how they are performing.
  • Evaluating loss mitigation, borrower counseling, and loan modification programs:  The database can be used to evaluate the efficacy and potential impact of counseling programs.
  • Monitoring affordable lending:  Since the database is updated quarterly, it provides information on mortgage access and mortgage terms for low-income borrowers and communities faster than data required by the Home Mortgage Disclosure Act, or HMDA. Currently, HMDA data does not become available until the year following origination.
  • Performing stress tests and prepayment/default modeling:  The database can be used by policy makers, researchers, and regulators to improve prepayment and default modeling and to implement stress-test scenarios for the entire national mortgage market.

Description

The NMDB assembles credit, administrative, servicing, and property data for a nationally representative five percent sample of closed-end first-lien residential mortgages in the United States. The database includes the following information:

  • mortgage performance from origination to termination;
  • mortgage terms;
  • property value and characteristics;
  • type and purpose of the mortgage product;
  • sale in the secondary mortgage market; and
  • credit-related information on all mortgage cosigners, including second liens, other past and present mortgages, and credit scores from one year before origination to one year after termination.

Related Documents

Notice of Revision to an Existing System of Records: National Mortgage Database Project  (12/28/2016)

Revised System of Records-National Mortgage Database Project  (8/28/2015)

FHFA Update About the National Mortgage Database  (8/1/2014)

System of Records: National Mortgage Database Project  (4/16/2014)

Privacy Impact Assessment  (11/6/2013)

Notice of Proposed Establishment of New System of Records  (12/10/2012)

Privacy Impact Assessment  (9/17/2012)

National Survey of Mortgage Originations


The National Survey of Mortgage Originations (NSMO) is the second component of the  National Mortgage Database program . The NSMO is conducted quarterly and is jointly sponsored by the Federal Housing Finance Agency (FHFA) and the  Consumer Financial Protection Bureau  (CFPB).  

The purpose of the NSMO is to collect voluntary feedback directly from mortgage borrowers about their experience obtaining a mortgage. The information will provide researchers, policy makers, and others with data that they can analyze to inform housing and mortgage-related public policy and to understand consumers’ experiences taking out a mortgage. The data will help shape policies in the future to better protect consumers.

For Survey Respondents

If you are here, you probably received our letter asking for your help with an important national survey of mortgage borrowers.

If you obtained a mortgage to purchase or refinance either a personal home or a home for someone else (such as a rental property), we would like to know more about your experience in obtaining that mortgage. Hearing directly from borrowers provides valuable information about the functioning of the mortgage market that will help us improve lending practices and the mortgage process for future borrowers.

This survey is jointly sponsored by the Federal Housing Finance Agency and the Consumer Financial Protection Bureau (CFPB), two Federal agencies that are working together to improve the safety and transparency of the lending process for all consumers.

The responses to this survey will remain anonymous. The questionnaire does not ask you for any identifying information, so please do not identify yourself in any way on the envelope or the returned questionnaire. The code numbers on the survey are there to aid in the scanning process and to keep track of returned surveys.

We greatly appreciate your effort to answer the questions and return the questionnaire. We thank you for your help with this important national survey.

For those who have been selected to be a part of the survey, it can be completed online. Go to  www.NSMOsurvey.com  and enter your personal PIN number that was included in the letter mailed to you.

If you have any questions about this survey, please feel free to call us at 1-​855-531-0724. We look forward to hearing from you.

Current Survey Cover Letter and Questionnaire

NSMO 2024Q2 Survey Cover Letter

Survey Questionnaire

Related Documents

30-Day Notice of Submission of National Survey of Mortgage Originations (N​SMO) Information Collection  (4/14/2023)

60-Day Notice of Submission of National Survey of Mortgage Originations (NSMO) Information Collection  (12/6/2022)

30-Day Notice of Submission of National Survey of Mortgage Originations (NSMO) Information Collection  (4/3/2020)

60-Day Notice of Submission of National Survey of Mortgage Originations (NSMO) Information Collection  (12/10/2019)

30-Day Notice of Submission of National Survey of Mortgage Originations (NSMO) Information Collection  (9/13/2016)

60-Day Notice of Submission of National Survey of Mortgage Originations (NSMO) Information Collection  (12/28/2016)

Proposed Collection; Comment Request: National Survey of Mortgage Borrowers  (30-Day Notice) (7/1/2013)

Proposed Collection; Comment Request: National Survey of Mortgage Borrowers  (60-Day Notice) (4/25/2013)

American Survey of Mortgage Borrowers


The American Survey of Mortgage Borrowers (ASMB) is the third component of the  National Mortgage Database program . The ASMB is conducted annually and is jointly sponsored by the Federal Housing Finance Agency and the  Consumer Financial Protection Bureau  (CFPB).  

The purpose of the ASMB is to collect voluntary feedback directly from mortgage borrowers about their experience with their mortgage and property. ASMB respondents are representative of the overall population of borrowers with a mortgage loan, including those who recently took out a loan and those who have had their loan for multiple years. The feedback collected by the ASMB includes information about a range of topics related to maintaining a mortgage and property, such as borrowers’ experiences with managing their mortgage, responding to financial stressors, insuring against risks, seeking assistance from federally-sponsored programs and other sources, and terminating a mortgage loan. The information will provide researchers, policy makers, and others with data that they can analyze to inform housing and mortgage-related public policy and to understand consumers' experiences maintaining a mortgage. The data will help shape policies in the future to better protect consumers.

If you have or recently had a mortgage on a personal home or a home for someone else (such as a rental property), we would like to know more about your experiences with your mortgage and with property ownership. Hearing directly from borrowers provides valuable information about the functioning of the mortgage market that will help us improve lending practices and the mortgage process for future borrowers.

For those who have been selected to be a part of the survey, it can be completed online. Go to  www.ASMBsurvey.com  and enter your personal PIN number that was included in the letter mailed to you.

If you have any questions about this survey, please feel free to call us at 855-339-7877. We look forward to hearing from you.


30-Day Notice of Submission of Information Collection American Survey of Mortgage Borrowers (ASMB) for OMB Approval (5/19/2022)

60-Day Notice of Submission of Information Collection American Survey of Mortgage Borrowers for Approval from OMB (12/18/2021)

30-Day Notice of Submission of American Survey of Mortgage Borrowers (ASMB) Emergency Information Collection - Correction  (8/7/2020)

30-Day Notice of Submission of American Survey of Mortgage Borrowers (ASMB) Emergency Information Collection  (7/31/2020)

30-Day Notice of Submission of American Survey of Mortgage Borrowers (ASMB) Information Collection  (3/24/2016)

60-Day Notice of Submission of National Survey of Existing Mortgage Borrowers (NSEMB) Information Collection  (11/10/2015)

[1]  NMDB® and FHFA® are federally registered trademarks of the Federal Housing Finance Agency (FHFA Marks) and are subject to all applicable laws governing the use of trademarks. FHFA Marks may be used for educational, informational, non-promotional and non-commercial purposes. FHFA requires all third parties referring to FHFA Marks to do so in a manner that does not imply a relationship with the Federal Housing Finance Agency. Material in which FHFA Marks appear must acknowledge that the trademarks are federally registered trademarks of the Federal Housing Finance Agency.

[2]  The National Survey of Mortgage Originations was originally called the National Survey of Mortgage Borrowers. The name of the survey was changed to avoid confusion with the American Survey of Mortgage Borrowers, effective May 9, 2016.

[3]  The American Survey of Mortgage Borrowers was originally called the National Survey of Existing Mortgage Borrowers. The name of the survey was changed to avoid confusion with the National Survey of Mortgage Originations, effective March 24, 2016.

[4]  FHFA interprets the NMDB program as a whole, including the NSMO, as the “survey” required by the Safety and Soundness Act. The statutory requirement is for a monthly survey. Core inputs to the NMDB, such as a regular refresh of credit-bureau data, occur monthly, though the NSMO is conducted quarterly.

Page last updated: May 9, 2024

Press Release

FHFA and CFPB Release Updated Data from the National Survey of Mortgage Originations for Public Use  (12/13/2022)

FHFA and CFPB Release Updated Data from the National Survey of Mortgage Originations for Public Use  (7/29/2021)

FHFA Announces New and Expanded Statistical Products from the National Mortgage Database  (6/30/2021)

FHFA and CFPB Release Additional Data from the National Survey of Mortgage Originations for Public Use  (2/20/2020)

FHFA Releases U.S. Mortgage Statistics from the National Mortgage Database  (12/12/2018)

FHFA and CFPB Release National Survey of Mortgage Originations Dataset for Public Use  (11/8/2018)

FHFA and CFPB Partner on Development of National Mortgage Database  (11/1/2012)      

NMDB Aggregate Data  (3/31/2023)

NSMO Public Use File  (3/3/2023)   

Technical Documentation

NMDB  (12/28/2022)

NSMO  (12/13/2022)

FHFA Stats Blog

What Types of Mortgages Do Fannie Mae and Freddie Mac Acquire? (4/14/2021)

Mortgage Performance During the COVID-19 Pandemic  (2/2/2021)            

NMDB Staff Working Papers

18-02: First-Time Homebuyer Counseling and the Mortgage Selection Experience  (3/14/2018)

18-01: Mortgage Experience of Rural Borrowers  (3/14/2018)

NSMO Symposium in Cityscape

Link to Cityscape

NMDB Staff at FHFA

Daniel Grodzicki, Principal Economist

Elizabeth Hoeffel, Senior Survey Statistician

Ian Keith, Senior Program Analyst

Ismail Mohamed, Senior Financial Analyst

Saty Patrabansh, Associate Director, Office of Data and Statistics

Jay Schultz, Senior Economist

Jonathan Spader, Manager

Matthew Streeter, Economist

Rebecca Sullivan, Economist

Guidelines for Reporting Survey-Based Research Submitted to Academic Medicine

Acad Med. 2018;93(3):337–340. PMID: 29485492. DOI: 10.1097/ACM.0000000000002094.
