A Consensus-Based Checklist for Reporting of Survey Studies (CROSS)

  • Research and Reporting Methods
  • Published: 22 April 2021
  • Volume 36, pages 3179–3187 (2021)


  • Akash Sharma MBBS (1, 2), ORCID: orcid.org/0000-0002-6822-4946
  • Nguyen Tran Minh Duc MD (2, 3), ORCID: orcid.org/0000-0002-9333-7539
  • Tai Luu Lam Thang MD (2, 4), ORCID: orcid.org/0000-0003-1062-2463
  • Nguyen Hai Nam MD (2, 5), ORCID: orcid.org/0000-0001-5184-6936
  • Sze Jia Ng MD (2, 6), ORCID: orcid.org/0000-0001-5353-6499
  • Kirellos Said Abbas MBCH (2, 7), ORCID: orcid.org/0000-0003-0339-9339
  • Nguyen Tien Huy MD, PhD (8), ORCID: orcid.org/0000-0002-9543-9440
  • Ana Marušić MD, PhD (9), ORCID: orcid.org/0000-0001-6272-0917
  • Christine L. Paul PhD (10)
  • Janette Kwok MBBS (11), ORCID: orcid.org/0000-0003-0038-1897
  • Juntra Karbwang MD, PhD (12)
  • Chiara de Waure MD, MSc, PhD (13), ORCID: orcid.org/0000-0002-4346-1494
  • Frances J. Drummond PhD (14), ORCID: orcid.org/0000-0002-7802-776X
  • Yoshiyuki Kizawa MD, PhD (15), ORCID: orcid.org/0000-0003-2456-5092
  • Erik Taal PhD (16), ORCID: orcid.org/0000-0002-9822-4488
  • Joeri Vermeulen MSN, CM (17, 18), ORCID: orcid.org/0000-0002-9568-3208
  • Gillian H. M. Lee PhD (19), ORCID: orcid.org/0000-0002-6192-4923
  • Adam Gyedu MD, MPH (20), ORCID: orcid.org/0000-0002-4186-2403
  • Kien Gia To PhD (21), ORCID: orcid.org/0000-0001-5038-5584
  • Martin L. Verra PhD (22), ORCID: orcid.org/0000-0002-3933-8020
  • Évelyne M. Jacqz-Aigrain MD, PhD (23), ORCID: orcid.org/0000-0002-4285-7067
  • Wouter K. G. Leclercq MD (24), ORCID: orcid.org/0000-0003-1159-1857
  • Simo T. Salminen PhD (25)
  • Cathy Donald Sherbourne PhD (26)
  • Barbara Mintzes PhD (27), ORCID: orcid.org/0000-0002-8671-915X
  • Sergi Lozano PhD (28), ORCID: orcid.org/0000-0003-1895-9327
  • Ulrich S. Tran DSc (29), ORCID: orcid.org/0000-0002-6589-3167
  • Mitsuaki Matsui MD, MSc, PhD (12), ORCID: orcid.org/0000-0003-4075-1266
  • Mohammad Karamouzian DVM, MSc, PhD candidate (30, 31), ORCID: orcid.org/0000-0002-5631-4469


INTRODUCTION

A survey is a list of questions designed to extract a set of desired data or opinions from a particular group of people. 1 Surveys can be administered more quickly than some other methods of data gathering and facilitate data collection from large numbers of participants. Numerous questions can be included in a survey, allowing flexibility in evaluating several research areas, such as analysis of risk factors, treatment outcomes, disease trends, cost-effectiveness of care, and quality of life. Surveys can be conducted by phone, mail, face-to-face, or online using web-based software and applications. Online surveys can help reduce or prevent geographical dependence and increase the validity, reliability, and statistical power of studies. Moreover, online surveys facilitate rapid survey administration as well as data collection and analysis. 2

Surveys are frequently used in a variety of research areas. For example, a PubMed search of the keyword "survey" on January 7, 2021, generated over 1,519,000 results. These studies serve a number of purposes, including but not limited to opinion polls, trend analyses, evaluation of policies, and measuring the prevalence of diseases. 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 Although many surveys have been published in high-impact journals, comprehensive reporting guidelines for survey research are limited 13, 14 and substantial variability and inconsistency can be found in the reporting of survey studies. Indeed, different studies have presented widely varying survey designs and reported results in various non-systematic ways. 15, 16, 17

Evidence-based tools developed by experts could help streamline particular procedures that authors could follow to create reproducible and higher quality studies. 18, 19, 20 Research studies with transparent and accurate reporting may be more reliable and could have a more significant impact on their potential audience. 19 However, that is often not the case when it comes to reporting research findings. For example, Moher et al. 20 reported that, although over 63,000 new studies are published in PubMed every month, many publications suffer from inadequate reporting. Given the lack of standardization and the poor quality of reporting, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network was created to help researchers publish high-impact health research. 20 Several important guidelines for various types of research studies have been created and listed on the EQUATOR website, including but not limited to the Consolidated Standards of Reporting Trials (CONSORT) for randomized controlled trials, Strengthening the Reporting of Observational studies in Epidemiology (STROBE) for observational studies, and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for systematic reviews and meta-analyses. The introduction of the PRISMA checklist in 2009 led to a substantial increase in the quality of systematic reviews and is a good example of how poor reporting, biases, and unsatisfactory results can be significantly addressed by implementing and following a validated reporting guideline. 21

SURGE 22 and CHERRIES 23 are frequently recommended for the reporting of non-web-based and web-based surveys, respectively. However, Turk et al. 14 found that many items of the SURGE and CHERRIES guidelines (e.g., development, description, and testing of the questionnaire; advertisement and administration of the questionnaire; sample representativeness; response rates; informed consent; and statistical analysis) were frequently missed by authors. They therefore concluded that a single universal guideline is needed as a standard quality-reporting tool for surveys. Moreover, these guidelines were not themselves developed using a structured approach. For example, CHERRIES, which was developed in 2004, was produced without a comprehensive literature review or a Delphi exercise. These steps are crucial in developing guidelines because they help identify potential gaps and gather the opinions of different experts in the field. 20, 24 While the SURGE checklist used a literature review to generate its items, it also lacks a Delphi exercise and is limited to self-administered postal surveys. There is also little information available about the experts involved in the development of these checklists. SURGE's limited citations since its publication suggest that it is not commonly used by authors or recommended by journals. Furthermore, even after the development of SURGE and CHERRIES, there has been limited improvement in the reporting of surveys. For example, Li et al. 25 reviewed 102 surveys in top nephrology journals, found that their quality was suboptimal, and highlighted the need for new reporting guidelines to improve reporting quality and increase transparency. Similarly, Shankar and Maturen 26 found significant heterogeneity in the reporting of radiology surveys published in major radiology journals and suggested the need for guidelines to increase the homogeneity and generalizability of survey results. Duffett et al. 27 also found several deficiencies in survey methodologies and reporting practices and suggested a need to establish minimum reporting standards for survey studies. Similar concerns regarding the quality of surveys have been raised in other medical fields. 28, 29, 30, 31, 32, 33

Because of concerns regarding survey quality and the lack of well-developed guidelines, there is a need for a single comprehensive tool that can be used as a standard reporting checklist for survey research and that addresses the significant discrepancies in the reporting of survey studies. 13, 25, 26, 27, 28, 31, 32 The purpose of this study was to develop a universal checklist for both web-based and non-web-based surveys. First, we established a workgroup to search the literature for potential items to include in our checklist. Second, we collected information about experts in the field of survey research and emailed them an invitation letter. Last, we conducted three rounds of rating using the Delphi method.

METHODS

Our study was performed from January 2018 to December 2019 using the Delphi method, a feasible and reliable approach for reaching consensus among experts that is encouraged for use in scientific research. 34 The process of checklist development included five phases: (i) planning; (ii) drafting of checklist items; (iii) consensus building using the Delphi method; (iv) dissemination of the guideline; and (v) maintenance of the guideline.

Planning Phase

In the planning phase, we established a workgroup, secured resources, reviewed the existing reporting guidelines, and drafted the plan and timeline of our project. To facilitate the development of the Checklist for Reporting of Survey Studies (CROSS), a reporting checklist workgroup was set up. This workgroup had seven members from five countries. The expert panel members were identified by searching original survey-based studies published between January 2004 and December 2016. The experts were selected based on their number of high-impact and highly cited publications using survey research methods. Furthermore, members of the EQUATOR Network and contributors to the PRISMA checklist were involved. Panel members' information, such as current affiliation, email address, and the number of survey studies they had been involved in, was collected through their ResearchGate profiles (see Supplement 1). Lastly, a list of potential panel members was created and an invitation letter was emailed to every expert to inquire about their interest in participating in our study. Consenting experts received a follow-up email with a detailed explanation of the research objectives and the Delphi approach.

Drafting the Checklist

This phase generated a list of potential items for inclusion in the checklist. It involved searching the literature for candidate items, establishing a draft checklist based on those items, and revising the checklist. First, we conducted a literature review to identify survey studies published in major medical journals and extracted relevant information for drafting our potential checklist items (see Supplement 2 for a sample search strategy). Second, we searched the EQUATOR Network for previously published checklists for reporting of survey studies. Third, three teams of two researchers independently extracted the potential items that could be included in our checklist. Last, our group members worked together to revise the checklist and remove duplicates (Fig. 1). We discussed the importance and relevance of each potential item and compared each against the selected literature.

Figure 1. Different stages of developing the checklist.

Consensus Phase Using the Delphi Method

The first round of Delphi was conducted using SurveyMonkey (SurveyMonkey Inc., San Mateo, CA, USA; www.surveymonkey.com). An email was sent to the expert panel containing information about the Delphi process, the timeline of each Delphi phase, and a detailed overview of the project. A Likert scale was used to rate items from 1 (strongly disagree) to 5 (strongly agree). Experts were also encouraged to provide comments, modify items, or propose new items they felt should be included in the checklist. Nonresponding experts were sent weekly follow-up reminders. The main objectives of the first round were to identify unnecessary and incomplete items in the survey checklist. A pre-set 70% agreement level (70% of experts rating an item 4 or 5) was used as the cutoff for including an item in the final checklist. 35 Items that did not reach the 70% agreement threshold were adjusted according to experts' feedback and redistributed to the panelists for the second round, along with modified or newly added items. In the second round, experts were also provided with their round-one scores so that they could modify or preserve their previous responses. Finally, a third round of Delphi was launched to resolve any disagreements about the inclusion of items that did not reach consensus in the second round.
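As an illustration of the pre-set agreement rule described above, the short Python sketch below computes the percentage of experts rating an item 4 or 5 and applies the 70% cutoff. The item names and ratings are hypothetical, not the actual CROSS panel data.

```python
# Minimal sketch of a Delphi inclusion rule: an item is retained when at
# least 70% of experts rate it 4 or 5 on the 1-5 Likert scale.
# Item names and ratings below are hypothetical, for illustration only.

def agreement(ratings, threshold=0.70):
    """Return (proportion rating 4 or 5, whether the item reaches consensus)."""
    prop = sum(1 for r in ratings if r >= 4) / len(ratings)
    return prop, prop >= threshold

round_one = {
    "state_the_study_design": [5, 4, 4, 5, 3, 4, 5, 4],  # 7/8 = 88% -> include
    "report_incentives_used": [3, 2, 4, 5, 3, 4, 2, 3],  # 3/8 = 38% -> rerate
}

for item, ratings in round_one.items():
    prop, included = agreement(ratings)
    status = "include" if included else "revise and rerate next round"
    print(f"{item}: {prop:.0%} agreement -> {status}")
```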

RESULTS

A total of 24 experts with a median (Q1, Q3) of 20 (15.75, 31) years of research experience participated in our study. Overall, 24 items were accepted in their original form in the first round, and 27 items were reviewed in the second round. Of these 27 items, 10 were merged into five and 11 were modified based on experts' comments. In the second round, 24 experts participated and 18 items were included. Eighteen experts responded in the third round, in which one additional item was included.

All details regarding the percentage agreement and the mean and standard deviation (SD) of items included in the checklist are presented in Table 1. CROSS contains 19 sections with 40 items, including "Title and abstract" (section 1); "Introduction" (sections 2 and 3); "Methods" (sections 4–10); "Results" (sections 11–13); "Discussion" (sections 14–16); and other items (sections 17–19). Please see Supplement 3 for the final checklist.

DISCUSSION

The development of CROSS is the result of a literature review and a Delphi process involving international experts with significant expertise in the development and implementation of survey studies. CROSS includes both evidence-informed and expert consensus-based items and is intended to serve as a tool that helps improve the quality of survey studies.

Detailed descriptions of the methods and procedures used to develop this guideline are provided in this paper so that other scholars can assess the quality of the checklist. Our Delphi panel was made up of experts with backgrounds in different disciplines. We also spent a considerable amount of time researching and debating the potential items to be included in our checklist. During the Delphi process, agreement with each potential item was rated by participants on a 5-point Likert scale. The entire process was conducted electronically: we gathered data and feedback from the participants via email rather than through Skype or face-to-face discussions as suggested by the EQUATOR Network. 13

In comparison with the CHERRIES and SURGE checklists, CROSS provides a single, comprehensive tool organized according to the typical primary sections required for peer-reviewed publications. It also assists researchers in developing a comprehensive research protocol prior to conducting a survey. The "Introduction" items call for a clear overview of the aim of the survey. In the "Methods" section, our checklist asks for a detailed explanation of initiating and developing the survey, including study design, data collection methods, sample size calculation, survey administration, study preparation, ethical considerations, and statistical analysis. The "Results" section of CROSS covers the respondent characteristics followed by the descriptive and main results, issues that are not addressed in the CHERRIES and SURGE checklists. Our checklist can also be used for both non-web-based and web-based surveys, making it suitable for all types of survey-based studies. New items were added to our checklist to address gaps in the available tools. For example, in item 10b, we included reporting of any modification of variables. This can help researchers justify, and readers understand, why variables needed to be modified. In item 11b, we encourage researchers to state the reasons for non-participation at each stage. Publishing these reasons can be useful for future researchers intending to conduct a similar survey. Finally, we added components related to limitations, interpretation, and generalizability of study results to the "Discussion" section, an important step toward increasing transparency and external validity. These components are missing from previous checklists (i.e., CHERRIES and SURGE).

Dissemination and Maintenance of the Checklist

Following the consensus phase, we will publish our checklist statement together with a detailed Explanation and Elaboration (E&E) document providing an in-depth explanation of the scientific rationale for each recommendation. To disseminate the final checklist widely, we aim to promote it in various journals, make it easily available on multiple websites including the EQUATOR Network's, and present it at relevant conferences. We will also use social media to reach relevant audiences, as well as key persons in research organizations who regularly conduct surveys in different specialties. We also aim to seek endorsement of CROSS by journal editors, professional societies, and researchers, and to collect feedback from scholars about their experience with it.

Collecting comments, critiques, and suggestions from experts and using them to revise and correct the guideline will help maintain the relevance of the checklist. Lastly, we plan to publish CROSS in several non-English languages to increase its accessibility across the scientific community.

Limitations

We acknowledge the limitations of our study. First, the Delphi consensus method may involve some subjectivity in interpreting experts' responses and suggestions. Second, six experts were lost to follow-up. Nonetheless, we think our checklist could improve the quality of the reporting of survey studies. Like other reporting checklists, CROSS will need to be re-evaluated and revised over time to ensure that it remains relevant and up to date with the evolving research methodologies of survey studies. We therefore welcome feedback, comments, critiques, and suggestions for improvement from the research community.

CONCLUSIONS

We think CROSS has the potential to be a beneficial resource for researchers who are designing and conducting survey studies. Following CROSS before and during survey administration can help researchers ensure that their surveys are reliable, reproducible, and transparent.

REFERENCES

1. Wikipedia contributors. Survey (human research). In Wikipedia, The Free Encyclopedia. Retrieved December 26, 2020, from https://en.wikipedia.org/w/index.php?title=Survey_(human_research)&oldid=994953597
2. Maymone MBC, Venkatesh S, Secemsky E, Reddy K, Vashi NA. Research Techniques Made Simple: Web-Based Survey Research in Dermatology: Conduct and Applications. J Invest Dermatol 2018;138(7):1456-1462. doi: https://doi.org/10.1016/j.jid.2018.02.032
3. Alcock I, White MP, Pahl S, Duarte-Davidson R, Fleming LE. Associations Between Pro-environmental Behaviour and Neighbourhood Nature, Nature Visit Frequency and Nature Appreciation: Evidence from a Nationally Representative Survey in England. Environ Int 2020;136:105441. doi: https://doi.org/10.1016/j.envint.2019.105441
4. Siddiqui J, Brown K, Zahid A, Young CJ. Current Practices and Barriers to Referral for Cytoreductive Surgery and HIPEC Among Colorectal Surgeons: a Binational Survey. Eur J Surg Oncol 2020;46(1):166-172. doi: https://doi.org/10.1016/j.ejso.2019.09.007
5. Lee JG, Park CH, Chung H, Park JC, Kim DH, Lee BI, Byeon JS, Jung HY. Current Status and Trend in Training for Endoscopic Submucosal Dissection: a Nationwide Survey in Korea. PLoS One 2020;15(5):e0232691. doi: https://doi.org/10.1371/journal.pone.0232691
6. McChesney SL, Zelhart MD, Green RL, Nichols RL. Current U.S. Pre-Operative Bowel Preparation Trends: a 2018 Survey of the American Society of Colon and Rectal Surgeons Members. Surg Infect 2020;21(1):1-8. doi: https://doi.org/10.1089/sur.2019.125
7. Núñez A, Manzano CA, Chi C. Health Outcomes, Utilization, and Equity in Chile: an Evolution from 1990 to 2015 and the Effects of the Last Health Reform. Public Health 2020;178:38-48. doi: https://doi.org/10.1016/j.puhe.2019.08.017
8. Blackwell AKM, Kosīte D, Marteau TM, Munafò MR. Policies for Tobacco and E-Cigarette Use: a Survey of All Higher Education Institutions and NHS Trusts in England. Nicotine Tob Res 2020;22(7):1235-1238. doi: https://doi.org/10.1093/ntr/ntz192
9. Liu S, Zhu Y, Chen W, Wang L, Zhang X, Zhang Y. Demographic and Socioeconomic Factors Influencing the Incidence of Ankle Fractures, a National Population-Based Survey of 512187 Individuals. Sci Rep 2018;8(1):10443. doi: https://doi.org/10.1038/s41598-018-28722-1
10. Tamanini JTN, Pallone LV, Sartori MGF, Girão MJBC, Dos Santos JLF, de Oliveira Duarte YA, van Kerrebroeck PEVA. A Populational-Based Survey on the Prevalence, Incidence, and Risk Factors of Urinary Incontinence in Older Adults: Results from the "SABE STUDY". Neurourol Urodyn 2018;37(1):466-477. doi: https://doi.org/10.1002/nau.23331
11. Tink W, Tink JC, Turin TC, Kelly M. Adverse Childhood Experiences: Survey of Resident Practice, Knowledge, and Attitude. Fam Med 2017;49(1):7-13.
12. Shi S, Lio J, Dong H, Jiang I, Cooper B, Sherer R. Evaluation of Geriatrics Education at a Chinese University: a Survey of Attitudes and Knowledge Among Undergraduate Medical Students. Gerontol Geriatr Educ 2020;41(2):242-249. doi: https://doi.org/10.1080/02701960.2018.1468324
13. Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, Grimshaw JM. Reporting Guidelines for Survey Research: an Analysis of Published Guidance and Reporting Practices. PLoS Med 2010;8(8):e1001069. doi: https://doi.org/10.1371/journal.pmed.1001069
14. Turk T, Elhady MT, Rashed S, Abdelkhalek M, Nasef SA, Khallaf AM, Mohammed AT, Attia AW, Adhikari P, Amin MA, Hirayama K, Huy NT. Quality of Reporting Web-Based and Non-web-based Survey Studies: What Authors, Reviewers and Consumers Should Consider. PLoS One 2018;13(6):e0194239. doi: https://doi.org/10.1371/journal.pone.0194239
15. Jones TL, Baxter MA, Khanduja V. A Quick Guide to Survey Research. Ann R Coll Surg Engl 2013;95(1):5-7. doi: https://doi.org/10.1308/003588413X13511609956372
16. Jones D, Story D, Clavisi O, Jones R, Peyton P. An Introductory Guide to Survey Research in Anaesthesia. Anaesth Intensive Care 2006;34(2):245-253. doi: https://doi.org/10.1177/0310057X0603400219
17. Alderman AK, Salem B. Survey Research. Plast Reconstr Surg 2010;126(4):1381-1389. doi: https://doi.org/10.1097/PRS.0b013e3181ea44f9
18. Moher D, Weeks L, Ocampo M, Seely D, Sampson M, Altman DG, Schulz KF, Miller D, Simera I, Grimshaw J, Hoey J. Describing Reporting Guidelines for Health Research: a Systematic Review. J Clin Epidemiol 2011;64(7):718-742. doi: https://doi.org/10.1016/j.jclinepi.2010.09.013
19. Simera I, Moher D, Hirst A, et al. Transparent and Accurate Reporting Increases Reliability, Utility, and Impact of Your Research: Reporting Guidelines and the EQUATOR Network. BMC Med 2010;8:24. doi: https://doi.org/10.1186/1741-7015-8-24
20. Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLoS Med 2010;7(2):e1000217. doi: https://doi.org/10.1371/journal.pmed.1000217
21. Tan WK, Wigley J, Shantikumar S. The Reporting Quality of Systematic Reviews and Meta-analyses in Vascular Surgery Needs Improvement: a Systematic Review. Int J Surg 2014;12(12):1262-1265. doi: https://doi.org/10.1016/j.ijsu.2014.10.015
22. Grimshaw J. SURGE (The SUrvey Reporting GuidelinE). In: Moher D, Altman DG, Schulz KF, Simera I, Wager E, eds. Guidelines for Reporting Health Research: a User's Manual. 2014. doi: https://doi.org/10.1002/9781118715598.ch20
23. Eysenbach G. Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004;6(3):e34. doi: https://doi.org/10.2196/jmir.6.3.e34
24. EQUATOR Network. Developing Your Reporting Guideline. 3 July 2018 [cited 28 December 2020]. Available from: https://www.equator-network.org/toolkits/developing-a-reporting-guideline/developing-your-reporting-guideline/
25. Li AH, Thomas SM, Farag A, Duffett M, Garg AX, Naylor KL. Quality of Survey Reporting in Nephrology Journals: a Methodologic Review. Clin J Am Soc Nephrol 2014;9(12):2089-2094. doi: https://doi.org/10.2215/CJN.02130214
26. Shankar PR, Maturen KE. Survey Research Reporting in Radiology Publications: a Review of 2017 to 2018. J Am Coll Radiol 2019;16(10):1378-1384. doi: https://doi.org/10.1016/j.jacr.2019.07.012
27. Duffett M, Burns KE, Adhikari NK, Arnold DM, Lauzier F, Kho ME, Meade MO, Hayani O, Koo K, Choong K, Lamontagne F, Zhou Q, Cook DJ. Quality of Reporting of Surveys in Critical Care Journals: a Methodologic Review. Crit Care Med 2012;40(2):441-449. doi: https://doi.org/10.1097/CCM.0b013e318232d6c6
28. Story DA, Gin V, na Ranong V, Poustie S, Jones D; ANZCA Trials Group. Inconsistent Survey Reporting in Anesthesia Journals. Anesth Analg 2011;113(3):591-595. doi: https://doi.org/10.1213/ANE.0b013e3182264aaf
29. Marcopulos BA, Guterbock TM, Matusz EF. Survey Research in Neuropsychology: a Systematic Review. Clin Neuropsychol 2020;34(1):32-55. doi: https://doi.org/10.1080/13854046.2019.1590643
30. Rybakov KN, Beckett R, Dilley I, Sheehan AH. Reporting Quality of Survey Research Articles Published in the Pharmacy Literature. Res Soc Adm Pharm 2020;16(10):1354-1358. doi: https://doi.org/10.1016/j.sapharm.2020.01.005
31. Pagano MB, Dunbar NM, Tinmouth A, Apelseth TO, Lozano M, Cohn CS, Stanworth SJ; Biomedical Excellence for Safer Transfusion (BEST) Collaborative. A Methodological Review of the Quality of Reporting of Surveys in Transfusion Medicine. Transfusion 2018;58(11):2720-2727. doi: https://doi.org/10.1111/trf.14937
32. Mulvany JL, Hetherington VJ, VanGeest JB. Survey Research in Podiatric Medicine: an Analysis of the Reporting of Response Rates and Non-response Bias. Foot (Edinb) 2019;40:92-97. doi: https://doi.org/10.1016/j.foot.2019.05.005
33. Tabernero P, Parker M, Ravinetto R, Phanouvong S, Yeung S, Kitutu FE, Cheah PY, Mayxay M, Guerin PJ, Newton PN. Ethical Challenges in Designing and Conducting Medicine Quality Surveys. Trop Med Int Health 2016;21(6):799-806. doi: https://doi.org/10.1111/tmi.12707
34. Keeney S, Hasson F, McKenna H. Consulting the Oracle: Ten Lessons from Using the Delphi Technique in Nursing Research. J Adv Nurs 2006;53(2):205-212. doi: https://doi.org/10.1111/j.1365-2648.2006.03716.x
35. Zamanzadeh V, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar A, Ghahramanian A. Details of Content Validity Index and Objectifying It in Instrument Development. Nursing Pract Today 2014;1(3):163-171.


Acknowledgements

We are thankful to Dr. David Moher (Ottawa Hospital Research Institute, Canada) and Dr. Masahiro Hashizume (Department of Global Health Policy, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan) for their initial contributions to the project and to the rating and development of the checklist. We are also grateful to Obaida Istanbuly (Keele University, UK) and Omar Diab (Private Dental Practice, Jordan) for their contributions in the earlier phases of the project.

Author information

Akash Sharma and Minh Duc Nguyen Tran contributed equally to this work.

Authors and Affiliations

University College of Medical Sciences and Guru Teg Bahadur Hospital, Dilshad Garden, Delhi, India

Akash Sharma MBBS

Online Research Club, Nagasaki, Japan

Akash Sharma MBBS, Nguyen Tran Minh Duc MD, Tai Luu Lam Thang MD, Nguyen Hai Nam MD, Sze Jia Ng MD & Kirellos Said Abbas MBCH

Faculty of Medicine, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam

Nguyen Tran Minh Duc MD

Department of Emergency, City’s Children Hospital, Ho Chi Minh City, Vietnam

Tai Luu Lam Thang MD

Division of Hepato-Biliary-Pancreatic Surgery and Transplantation, Department of Surgery, Graduate School of Medicine, Kyoto University, Kyoto, Japan

Nguyen Hai Nam MD

Department of Medicine, Crozer Chester Medical Center, Upland, PA, USA

Sze Jia Ng MD

Faculty of Medicine, Alexandria University, Alexandria, Egypt

Kirellos Said Abbas MBCH

Institute of Tropical Medicine (NEKKEN) and School of Tropical Medicine and Global Health, Nagasaki University, Nagasaki, 852-8523, Japan

Nguyen Tien Huy MD, PhD

Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia

Ana Marušić MD, PhD

School of Medicine and Public Health, University of Newcastle, Callaghan, Australia

Christine L. Paul PhD

Division of Transplantation and Immunogenetics, Department of Pathology, Queen Mary Hospital, Pok Fu Lam, Hong Kong

Janette Kwok MBBS

School of Tropical Medicine and Global Health, Nagasaki University, Nagasaki, 852-8523, Japan

Juntra Karbwang MD, PhD & Mitsuaki Matsui MD, MSc, PhD

Department of Medicine and Surgery, University of Perugia, Perugia, Italy

Chiara de Waure MD, MSc, PhD

Cancer Research at UCC, University College Cork, Cork, Ireland

Frances J. Drummond PhD

Department of Palliative Medicine, Kobe University School of Medicine, Hyogo, Japan

Yoshiyuki Kizawa MD, PhD

Department of Psychology, Health & Technology, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, Netherlands

Erik Taal PhD

Department of Public Health, Biostatistics and Medical Informatics Research Group, Vrije Universiteit Brussel (VUB), Brussels, Belgium

Joeri Vermeulen MSN, CM

Department of Health Care, Knowledge Centre Brussels Integrated Care, Erasmus Brussels University of Applied Sciences and Arts, Brussels, Belgium

Paediatric Dentistry and Orthodontics, Faculty of Dentistry, University of Hong Kong, Pok Fu Lam, Hong Kong

Gillian H. M. Lee PhD

Department of Surgery, School of Medicine and Dentistry, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana

Adam Gyedu MD, MPH

Faculty of Public Health, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam

Kien Gia To PhD

Department of Physiotherapy, Bern University Hospital, Insel Group, Bern, Switzerland

Martin L. Verra PhD

Hôpital Robert-Debré AP-HP, Clinical Investigation Center, Paris, France

Évelyne M. Jacqz-Aigrain MD, PhD

Department of Surgery, Máxima Medical Center, Veldhoven, the Netherlands

Wouter K. G. Leclercq MD

Department of Social Psychology, University of Helsinki, Helsinki, Finland

Simo T. Salminen PhD

RAND, Santa Monica, CA, USA

Cathy Donald Sherbourne PhD

School of Pharmacy and Charles Perkins Centre, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia

Barbara Mintzes PhD

School of Economics, University of Barcelona, Barcelona, Spain

Sergi Lozano PhD

Department of Cognition, Emotion, and Methods in Psychology, School of Psychology, University of Vienna, Vienna, Austria

Ulrich S. Tran DSc

School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada

Mohammad Karamouzian DVM, MSc, PhD candidate

HIV/STI Surveillance Research Center, and WHO Collaborating Center for HIV Surveillance, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran


Contributions

NTH conceived the idea, supervised the project, and helped with writing, reviewing, and mediating the Delphi process; AS participated in drafting the guideline, mediating the Delphi process, analyzing results, writing, and process validation; TLT helped draft the guideline and perform the analysis; MNT helped draft the checklist and mediate the Delphi process; NNH, NSJ, KSA, and MK helped with writing and mediating the Delphi process; AM, JK, CLP, JKB, CDW, FJD, MH, YK, EK, JV, GHL, AG, KGT, ML, EMJ, WKL, STS, CDS, BM, SL, UST, MM, and MK helped rate items in the Delphi rounds and review the manuscript.

Corresponding author

Correspondence to Nguyen Tien Huy MD, PhD .

Ethics declarations

Conflict of interest.

The authors declare that they do not have a conflict of interest.

Ethics approval

Ethics approval was not required for the study.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information (DOCX 22 kb)


Sharma, A., Minh Duc, N., Luu Lam Thang, T. et al. A Consensus-Based Checklist for Reporting of Survey Studies (CROSS). J Gen Intern Med 36, 3179–3187 (2021). https://doi.org/10.1007/s11606-021-06737-1


Received: 15 September 2020

Accepted: 17 March 2021

Published: 22 April 2021

Issue Date: October 2021

DOI: https://doi.org/10.1007/s11606-021-06737-1


  • Surveys and Questionnaires
  • Delphi technique


Survey Research | Definition, Examples & Methods

Published on August 20, 2019 by Shona McCombes. Revised on June 22, 2023.

Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyze the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.


What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: investigating the experiences and characteristics of different social groups
  • Market research: finding out what customers think about products, services, and companies
  • Health research: collecting data from patients about symptoms and treatments
  • Politics: measuring public opinion about parties and policies
  • Psychology: researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and in longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • US college students
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18-24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalized to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Several common research biases can arise if your survey is not generalizable, particularly sampling bias and selection bias. The presence of these biases has serious repercussions for the validity of your results.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every college student in the US. Instead, you will usually survey a sample from the population.

The required sample size depends on the size of the population, as well as the margin of error and the confidence level you can accept. You can use an online sample size calculator to work out how many responses you need.
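To show what such a calculator typically does, here is a minimal Python sketch, assuming one common approach: Cochran's formula for estimating a proportion, with a finite population correction. The margin of error, confidence level, and population sizes are illustrative assumptions, not recommendations.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's formula for a proportion, with finite population correction.

    margin: acceptable margin of error; z: z-score for the confidence level
    (1.96 for 95%); p: expected proportion (0.5 is the most conservative).
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # The correction shrinks the required sample when the population is small.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(population=20_000))  # large population -> 377
print(sample_size(population=500))     # small population -> 218
```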

There are many sampling methods that allow you to generalize to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions. Again, beware of various types of sampling bias as you design your sample, particularly self-selection bias, nonresponse bias, undercoverage bias, and survivorship bias.

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by mail, online, or in person, and respondents fill it out themselves.
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses.

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g. residents of a specific region).
  • The response rate is often low, and responses are at risk for biases like self-selection bias.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyze.
  • The anonymity and accessibility of online surveys mean you have less control over who responds, which can lead to biases like self-selection bias.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping mall or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g. the opinions of a store’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations and is at risk for sampling bias.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people's opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyzes the results. But they are more commonly used to collect qualitative data: the interviewees' full responses are transcribed and analyzed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It's important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g. yes/no or agree/disagree)
  • A scale (e.g. a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g. age categories)
  • A list of options with multiple answers possible (e.g. leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analyzed to find patterns, trends, and correlations.
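As a small illustration of that last point, the sketch below cross-tabulates two closed-ended questions and runs a chi-square test of association. The responses are invented, and a real analysis would need far more data than this.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical closed-ended responses: an age category crossed with a
# binary yes/no question.
responses = pd.DataFrame({
    "age_group":    ["18-24", "18-24", "25-34", "25-34", "35-44", "35-44", "18-24", "25-34"],
    "uses_product": ["yes",   "no",    "yes",   "yes",   "no",    "no",    "yes",   "no"],
})

table = pd.crosstab(responses["age_group"], responses["uses_product"])
chi2, p_value, dof, expected = chi2_contingency(table)

print(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```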

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an “other” field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic. Avoid jargon or industry-specific terminology.

Survey questions are at risk for biases like social desirability bias, the Hawthorne effect, or demand characteristics. It's critical to use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no indication that you'd prefer a particular answer or emotion.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.


Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by mail, online, or in person.

Step 5: Analyze the survey results

There are many methods of analyzing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.
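A minimal pandas sketch of this cleaning step, assuming made-up column names and an inline table standing in for your survey tool's export:

```python
import pandas as pd

# Hypothetical raw responses; in practice this would come from an export,
# e.g. pd.read_csv("responses.csv").
raw = pd.DataFrame({
    "q1_age":          [23, 31, None, 45, 29],
    "q2_satisfaction": [4, 5, 3, 9, None],  # 1-5 Likert; 9 is out of range
    "q3_frequency":    [2, 1, 3, 2, 2],
})

required = ["q1_age", "q2_satisfaction", "q3_frequency"]
clean = raw.dropna(subset=required)                    # drop incomplete rows
clean = clean[clean["q2_satisfaction"].between(1, 5)]  # drop invalid answers

print(f"Kept {len(clean)} of {len(raw)} responses")    # Kept 2 of 5 responses
```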

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organizing them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analyzing interviews.
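To illustrate what coding means in practice, here is a deliberately simplified keyword-based sketch. Real qualitative coding is usually done, and cross-checked, by human coders; the themes and keywords here are hypothetical.

```python
# Assign theme labels to free-text answers by keyword matching.
themes = {
    "price":   ["expensive", "cheap", "cost", "price"],
    "service": ["staff", "helpful", "rude", "service"],
}

def code_response(text):
    text = text.lower()
    labels = [theme for theme, keywords in themes.items()
              if any(word in text for word in keywords)]
    return labels or ["uncoded"]

print(code_response("The staff were helpful but it was too expensive"))
# -> ['price', 'service']
```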

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analyzed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyze it. In the results section, you summarize the key results from your analysis.

In the discussion and conclusion, you give your explanations and interpretations of these results, answer your research question, and reflect on the implications and limitations of the research.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about surveys

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
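The practical consequence is in how you summarize the data. Below is a minimal sketch, with hypothetical responses to a four-item Likert scale: single items get a median (ordinal), while summed scale scores are often given a mean (treated as interval).

```python
import statistics

# Hypothetical responses: each row is one respondent's answers to a
# four-item Likert scale (1-5).
respondents = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
]

# Single Likert-type items are ordinal, so report a median.
item_1 = [r[0] for r in respondents]
print("Item 1 median:", statistics.median(item_1))

# Summed scale scores are often treated as interval, so a mean is common.
scale_scores = [sum(r) for r in respondents]
print("Mean scale score:", round(statistics.mean(scale_scores), 2))
```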

The type of data determines what statistical tests you should use to analyze your data.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

McCombes, S. (2023, June 22). Survey Research | Definition, Examples & Methods. Scribbr. https://www.scribbr.com/methodology/survey-research/


Best Practices for Survey Research Reports: A Synopsis for Authors and Reviewers

Am J Pharm Educ. 2008 Feb 15;72(1).

Jolaine Reierson Draugalis (a), Stephen Joel Coons (b), Cecilia M. Plaza (c)

a The University of Oklahoma College of Pharmacy
b The University of Arizona College of Pharmacy
c American Association of Colleges of Pharmacy

INTRODUCTION

As survey researchers, as well as reviewers, readers, and end users of the survey research literature, we are all too often disheartened by the poor quality of survey research reports published in the peer-reviewed literature. For the most part, poor quality can be attributed to 2 primary problems: (1) ineffective reporting of sufficiently rigorous survey research, or (2) poorly designed and/or executed survey research, regardless of the reporting quality. The standards for rigor in the design, conduct, and reporting of survey research in pharmacy should be no lower than the standards for the creation and dissemination of scientific evidence in any other discipline. This article provides a checklist and recommendations for authors and reviewers to use when submitting or evaluating manuscripts reporting survey research that used a questionnaire as the primary data collection tool.

To place elements of the checklist in context, a systematic review of the Journal was conducted for 2005 (volume 69) and 2006 (volume 70) to identify articles that reported the results of survey research. Survey research methods were used in 10 of 39 (26%) research articles in 2005 (volume 69) and in 10 of 29 (35%) in 2006 (volume 70), not including studies using personal or telephone interviews. As stated by Kerlinger and Lee, “Survey research studies large and small populations (or universes) by selecting and studying samples chosen from the population to discover the relative incidence, distribution, and interrelations of sociological and psychological variables.” 1 Easier said than done; that is, if done in a methodologically sound way. Although survey research projects may use personal interviews, panels, or telephones to collect data, this paper will only consider mail, e-mail, and Internet-based data collection approaches. For clarity, the term survey should be reserved for the research method, whereas a questionnaire or survey instrument is the data collection tool. In other words, the terms survey and questionnaire should not be used interchangeably. Likewise, data collection instruments are used in many research designs, such as pretest/posttest and experimental designs, and use of the term survey is inappropriate to describe the instrument or the methodology in these cases. In Journal volumes 69 and 70 (2005-2006), 11 of 68 research articles (16%) used inappropriate terminology. Survey research can be very powerful and may well be the only way to conduct a particular inquiry or ongoing body of research.

There is no shortage of textbooks and reference books; to name but a few of our favorites: Dillman's Mail and Internet Surveys: The Tailored Design Method, 2 Fowler's Survey Research Methods, 3 Salant and Dillman's How to Conduct Your Own Survey, 4 and Aday and Cornelius's Designing and Conducting Health Surveys: A Comprehensive Guide. 5 In addition, numerous guidelines, position statements, and best practices are available from a wide variety of associations in the professional literature and via the Internet. We will cite a number of these throughout this paper. Unfortunately, it is apparent from both the published literature and the many requests to contribute data to survey research projects that these materials are not always consulted and applied. In fact, it seems quite evident that there is a false impression that conducting survey research is relatively easy. As an aside to his determination of the effectiveness of follow-up techniques in mail surveys, Stratton found, “the number of articles that fell short of a scholarly level of execution and reporting was surprising.” 6 In addition, Desselle more recently observed that, “Surveys are perhaps the most used, and sometimes misused, methodological tools among academic researchers.” 7

We will structure this paper based on a modified version of the 10 guiding questions established in the Best Practices for Survey and Public Opinion Research by the American Association for Public Opinion Research (AAPOR). 8 The 10 guiding questions are: (1) was there a clearly defined research question? (2) did the authors select samples that well represent the population to be studied? (3) did the authors use designs that balance costs with errors? (4) did the authors describe the research instrument? (5) was the instrument pretested? (6) were quality control measures described? (7) was the response rate sufficient to enable generalizing the results to the target population? (8) were the statistical, analytic, and reporting techniques appropriate to the data collected? (9) was evidence of ethical treatment of human subjects provided? and (10) were the authors transparent to ensure evaluation and replication? These questions can serve as a guide for reviewers and researchers alike in identifying features of quality survey research. A grid addressing the 10 questions and subcategories is provided in Appendix 1 for use in preparing and reviewing submissions to the Journal.

Clearly Defined Research Question

Formulating the research questions and study objectives depends on prior work and knowing what is already available either in archived literature, American Association of Colleges of Pharmacy (AACP) institutional research databases, or from various professional organizations and associations. 9, 10 The article should clearly state why the research is necessary, placing it in context, and drawing upon previous work via a literature review. 9 This is especially pertinent to the measurement of psychological constructs, such as satisfaction (eg, satisfaction with pharmacy services). Too many researchers just put items down on a page that they think measure the construct (and answer the research question); however, they may miss the mark because they have not approached the research question and, subsequently, item selection or development from the perspective of a theoretical framework or existing model that informs the measurement of satisfaction. Another important consideration is whether alternatives to using survey research methods have been considered, in essence asking whether the information could better be obtained using a different methodology. 8

Sampling Considerations

For a number of reasons (eg, time, cost), data are rarely obtained from every member of a population. A census, while appropriate in certain specific cases where responses from an entire population are needed to adequately answer the research question, is not generally required in order to obtain the desired data. In the majority of situations, sampling from the population under study will both answer the research question and save both time and money. Survey research routinely involves gathering data from a subset or sample of individuals intended to represent the population being studied. 11 Therefore, since researchers are relying on data from samples to reflect the characteristics and attributes of interest in the target population, the samples must be properly selected. 12 To enable the proper selection of a sample, the target population has to be clearly identified. The sample frame should closely approximate the full target population; any significant departure from that should be justified. Once the sample frame has been identified, the sample selection process needs to be delineated including the sampling method (eg, probability sampling techniques such as simple random or stratified). Although nonprobability sample selection approaches (eg, convenience, quota, or snowball sampling) are used in certain circumstances, probability sampling is preferred if the survey results are to be credibly generalized to the target population. 13
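To make the distinction between the two probability sampling techniques named above concrete, here is a minimal Python sketch; the sample frame, strata, and sample size are hypothetical.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical sample frame: (id, stratum) pairs, e.g. by practice setting.
frame = [(i, "hospital" if i % 3 == 0 else "community") for i in range(1, 101)]

# Simple random sampling: every member has the same chance of selection.
simple = random.sample(frame, 10)

# Stratified sampling: draw from each stratum in proportion to its size.
strata = {}
for member in frame:
    strata.setdefault(member[1], []).append(member)

stratified = []
for name, members in strata.items():
    n = round(10 * len(members) / len(frame))
    stratified.extend(random.sample(members, n))

print("simple random:", sorted(m[0] for m in simple))
print("stratified:   ", sorted(m[0] for m in stratified))
```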

The required sample size depends on a variety of factors, including whether the purpose of the survey is to simply describe population characteristics or to test for differences in certain attributes of interest by subgroups within the population. Authors of survey research reports should describe the process they used to estimate the necessary sample size including the impact of potential nonresponse. An in-depth discussion of sample size determination is beyond the scope of this paper; readers are encouraged to refer to the excellent existing literature on this topic. 13 , 14
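As one deliberately simplified illustration of such an estimate, the sketch below applies the standard formula for estimating a single population proportion and then inflates the result for anticipated nonresponse; the margin of error, confidence level, and expected response rate are hypothetical inputs, and designs that test subgroup differences require different calculations.

```python
# A textbook sample-size calculation for estimating a single population
# proportion, inflated for anticipated nonresponse. All inputs (margin of
# error, confidence level, expected response rate) are hypothetical.
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96, response_rate=0.60):
    """Return (completed responses needed, invitations to send)."""
    n = math.ceil(z ** 2 * p * (1 - p) / margin ** 2)  # n = z^2 * p(1-p) / e^2
    return n, math.ceil(n / response_rate)             # inflate for nonresponse

# p = 0.5 maximizes p(1-p), giving the most conservative (largest) estimate:
print(sample_size_for_proportion())  # (385, 642)
```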

Balance Between Costs and Errors

Balancing costs against errors requires a realistic appraisal of the resources, both monetary and human, needed to carry out the study. Tradeoffs are necessary and involve more than just the number of subjects: for example, pursuing a large sample with insufficient follow-up versus a smaller, more targeted representative sample with multiple follow-ups. A seemingly large sample does not necessarily constitute a probability sample. If follow-ups are not planned and budgeted for, the study should not be initiated. The effectiveness of incentives and approaches to follow-up are discussed in detail elsewhere, 2, 4, 5 but the importance of well-planned follow-up procedures cannot be overstated. In volumes 69 and 70 of the Journal, 11/20 (55%) survey research papers reported the use of at least 1 follow-up to the initial invitation to participate.

Description of the Survey Instrument

The survey instrument or questionnaire used in the research should be described fully. If an existing questionnaire was used, evidence of psychometric properties such as reliability and validity should be provided from the relevant literature. Evidence of reliability indicates that the questionnaire is measuring the variable or variables in a reproducible manner. Evidence supporting a questionnaire's validity indicates that it is measuring what is intended to be measured. 15 In addition, the questionnaire's measurement model (ie, scale structure and scoring system) should be described in sufficient detail to enable the reader to understand the meaning and interpretation of the resulting scores. When open-ended, or qualitative, questions are included in the questionnaire, a clear description must be provided as to how the resulting text data will be summarized and coded, analyzed, and reported.

If a new questionnaire was created, a full description of its development and testing should be provided. This should include discussion of the item generation and selection process, choice of response options/scales, construction of multi-item scales (if included), and initial testing of the questionnaire's psychometric properties. 15 As with an existing questionnaire, evidence supporting the validity and reliability of the new questionnaire should be clearly provided by authors. If researchers are using only selected items from scales in an existing questionnaire, justification for doing so should be provided and their measurement properties in the new context should be properly tested prior to use. In addition, proper attribution of the source of scale items should be provided in the study report. In volumes 69 and 70 of the Journal, 10/20 (50%) survey research papers provided no or insufficient information concerning the reliability and/or validity of the survey instrument used in the study.
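To illustrate one commonly reported piece of reliability evidence, the sketch below computes Cronbach's alpha for a multi-item scale; the small response matrix is invented for illustration, and real studies would use far larger samples and typically dedicated statistical software.

```python
# Cronbach's alpha for a multi-item scale: alpha = k/(k-1) * (1 - sum of
# item variances / variance of scale totals). The 5x4 response matrix
# (rows = respondents, columns = Likert items) is invented for illustration.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(rows):
    k = len(rows[0])                                  # number of items on the scale
    item_vars = [variance(col) for col in zip(*rows)] # per-item score variances
    total_var = variance([sum(row) for row in rows])  # variance of scale totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [[4, 5, 4, 4],
             [2, 3, 2, 3],
             [5, 5, 4, 5],
             [3, 3, 3, 2],
             [4, 4, 5, 4]]
print(round(cronbach_alpha(responses), 2))  # 0.93
```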

Commonly measured phenomena in survey research include frequency, quantity, feelings, evaluations, satisfaction, and agreement. 16 Authors should provide sufficient detail for reviewers to be able to discern that the items and response options are congruent and appropriate for the variables being measured. For instance, a reviewer would question an item asking about the frequency of a symptom with the response options ranging from “excellent” to “poor.” In an extensive review article, Desselle provides an overview of the construction, implementation, and analysis of summated rating attitude scales. 7

Pretesting is critical because it identifies ambiguous questions or wording, unclear instructions, and other problems with the instrument prior to widespread dissemination, providing valuable information about reliability and validity before data collection begins. In volumes 69 and 70 of the Journal, only 8/20 survey research papers (40%) reported that pretesting of the survey instrument was conducted. Authors should clearly describe how a survey instrument was pretested. While pretesting is often conducted with a focus group of peers or others similar to the subjects, cognitive interviewing is becoming increasingly important in the development and testing of questionnaires to explore the way in which members of the target population understand, mentally process, and respond to the items on a questionnaire. 17, 18 Cognitive testing, for example, uses both verbal probing by the interviewer (eg, “What does the response ‘some of the time’ mean to you?”) and think-aloud methods, in which the interviewer asks the respondent to verbalize whatever comes to mind as he or she answers the question. 16 This technique helps determine whether respondents are interpreting the questions and response sets as the questionnaire developers intended. If done with a sufficient number of subjects, cognitive interviewing can also fulfill some of the roles of a pilot test, in which length, flow, ease of administration, ease of response, and acceptability to respondents are assessed. 19

Quality Control Measures

The methods section should describe whether procedures such as omit or skip patterns (which direct respondents to answer only those items relevant to them) were used in the survey instrument. The article should also describe whether a code book was used for data entry and organization and what data verification procedures were used, for example, spot-checking a random 10% of data entries against the original survey instruments. Outliers should be verified, and the procedure for handling missing data should be explained.
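As a minimal sketch of the spot-checking procedure described above, the following code flags a reproducible random 10% of entered records for verification against the original instruments; the record IDs are hypothetical.

```python
# Flag a reproducible random 10% of entered records for verification against
# the original survey instruments, per the spot-check described above.
# The record IDs are hypothetical.
import random

def records_to_verify(record_ids, fraction=0.10, seed=7):
    k = max(1, round(len(record_ids) * fraction))  # verify at least 1 record
    return sorted(random.Random(seed).sample(record_ids, k))

print(records_to_verify(list(range(1, 201))))      # 20 of 200 entered records
```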

Response Rates

In general, response rate can be defined as the number of respondents divided by the number of eligible subjects in the sample. A review of survey response rates reported in the professional literature found that over a quarter of the articles audited failed to define response rate. 20 As stated by Johnson and Owens, “when a ‘response rate’ is given with no definition, it can mean anything, particularly in the absence of any additional information regarding sample disposition.” 20 Hence, of equal importance to the response rate itself is transparency in its reporting. As with the CONSORT guidelines for randomized controlled trials, the flow of study subjects from initial sample selection and contact through study completion and analysis should be provided. 21 Drop-out or exclusion for any reason should be documented and every individual in the study sample should be accounted for clearly. In addition, there may be a need to distinguish between the overall response rate and item-level response rates. Very low response rates for individual items on a questionnaire can be problematic, particularly if they represent important study variables.
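To make the definition concrete, the sketch below computes an overall response rate and item-level response rates among returned questionnaires; the disposition counts and the small answer matrix are hypothetical, and because published definitions vary, the calculation actually used should always be reported.

```python
# Overall response rate = respondents / eligible subjects in the sample;
# item-level rates = share of returned questionnaires answering each item.
# The counts and the tiny answer matrix are hypothetical (None = skipped).
def overall_response_rate(n_respondents, n_eligible):
    return n_respondents / n_eligible

def item_response_rates(returned):
    items = sorted({key for row in returned for key in row})
    return {key: sum(row.get(key) is not None for row in returned) / len(returned)
            for key in items}

print(overall_response_rate(312, 520))               # 0.6
print(item_response_rates([{"q1": 4, "q2": None},
                           {"q1": 5, "q2": 3},
                           {"q1": None, "q2": 2}]))  # q1 and q2 each ~0.67
```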

Fowler states that there is no agreed-upon standard for acceptable response rates; however, he notes that some federal funding agencies ask that survey procedures be used that are likely to result in a response rate of over 75%. 3 Bailey also asserted that the minimal acceptable response rate is 75%. 22 Schutt indicated that below 60% was unacceptable, whereas Babbie stated that a 50% response rate was adequate. 23, 24 As noted in the Canadian Medical Association Journal's editorial policy, “Except for in unusual circumstances, surveys are not considered for publication in CMAJ if the response rate is less than 60% of eligible participants.” 10 Fowler states that, “…one occasionally will see reports of mail surveys in which 5% to 20% of the selected sample responded. In such instances, the final sample has little relationship to the original sampling process; those responding are essentially self-selected. It is very unlikely that such procedures will provide any credible statistics about the characteristics of the population as a whole.” 3 Although the literature does not reflect agreement on a minimum acceptable response rate, there is general consensus that at least half of the sample should have completed the survey instrument. In volumes 69 and 70 of the Journal, 7/20 survey research papers (35%) had response rates less than 30%, 6/20 (30%) had response rates between 31% and 60%, and 7/20 (35%) had response rates of 61% or greater. Of the 13 survey research articles with less than a 60% response rate, 8/13 (61.5%) mentioned the possibility of response bias.

The lower the response rate, the higher the likelihood of response bias or nonresponse error. 4 , 25 “Nonresponse error occurs when a significant number of subjects in the sample do not respond to the survey and when they differ from respondents in a way that influences, or could influence, the results.” 26 Response bias stems from the survey respondents being somehow different from the nonrespondents and, therefore, not representative of the target population. The article should address both follow-up procedures (timing, method, and quantity) and response rate. While large sample sizes are often deemed desirable, they must be tempered by the consideration that low response rates are more damaging to the credibility of results than a small sample. 12 Most of the time, response bias is very hard to rule out due to lack of sufficient information regarding the nonrespondents. Therefore, it is imperative that researchers design their survey method to optimize response rates. 2 , 27 To be credible, published survey research must meet acceptable levels of scientific rigor, particularly in regard to response rate transparency and the representativeness or generalizability of the study's results.
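Although the nonrespondents themselves are usually unobservable, one partial check, sketched below with hypothetical values, is to compare respondents with the full sampling frame on characteristics known for every sampled unit; divergence suggests nonresponse may not be ignorable, while agreement cannot rule out bias on unmeasured variables.

```python
# Compare respondents with the full sampling frame on a characteristic known
# for everyone (here, a hypothetical practice setting). Divergence hints at
# nonresponse bias; agreement does not rule out bias on unmeasured variables.
from collections import Counter

def composition(units, key):
    counts = Counter(u[key] for u in units)
    return {k: round(v / len(units), 2) for k, v in sorted(counts.items())}

frame = [{"setting": s} for s in ["hospital"] * 250 + ["community"] * 250]
respondents = [{"setting": s} for s in ["hospital"] * 180 + ["community"] * 120]
print(composition(frame, "setting"))        # {'community': 0.5, 'hospital': 0.5}
print(composition(respondents, "setting"))  # {'community': 0.4, 'hospital': 0.6}
```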

Statistical, Analytic, and Reporting Techniques

As noted in the Journal's Instructions to Reviewers, there should be a determination of whether the appropriate statistical techniques were used. The article should indicate what statistical package was used and what statistical technique was applied to what variables. Decisions must be made as to how data will be presented, for example, using a pie chart to provide simple summaries of data but not to present linear or relational data. 28 The authors should provide sufficient detail to allow reviewers to match up hypothesis testing and relevant statistical analyses. In addition, if the questionnaire included qualitative components (eg, open-ended questions), a thorough description should be provided as to how and by whom the textual responses were coded for analysis.

Human Subjects Considerations

Even though most journals now require authors to indicate Institutional Review Board (IRB) compliance, there are still many examples of requests to participate, particularly in web-based or e-mail data collection modes, that have obviously not been subjected to IRB scrutiny. Evidence of this includes insufficient information in the invitation to participate (eg, estimates of time to complete, perceived risks and benefits), “mandatory” items (which violate subjects' right to refuse to answer any or all items), and use of listservs for “quick and dirty” data gathering when the ultimate intent is to disseminate the findings. The authors should explicitly state which IRB approved the study, the IRB designation received (eg, exempt, expedited), and how consent was obtained.

Transparency

The authors should fully specify their methods and report in sufficient detail such that another researcher could replicate the study. This consideration permeates the previous 9 sections. For example, an offer to provide the instrument upon request does not substitute for the provision of reliability and validity evidence in the article itself. Another example related to transparency of methods is the description of the mode of administration. In volume 69 of the Journal, 3/10 (30%) survey research articles used mixed survey methods (both Internet and first-class mail) but did not provide sufficient detail as to what was collected by each respective method. Also, in volume 69 of the Journal, 1 survey research article simply used the word “sent” without providing any information as to how the instrument was delivered.

Additional Considerations Regarding Internet or Web-based Surveys

The use of Internet- or e-mail-based surveys has grown in popularity as a proposed less expensive and more efficient method of conducting survey research. 2, 29-32 The apparent ease of data collection can give the impression that survey research is easily conducted; however, the good principles established for traditional mail surveys still apply: the mode of administration does not change any of the design work that must be done beforehand. One potential problem associated with web-based surveys is that they can be forwarded to inappropriate or unintended recipients. 31 Web-based surveys also suffer from undeliverable e-mails due to outdated listservs or incorrect e-mail addresses, which affects the calculation of the response rate and the determination of the most appropriate denominator. 2, 30, 31 The authors should describe specifically how the survey instrument was disseminated (eg, e-mail with a link to the survey) and what web-based survey tool was used.

We have provided 10 guiding questions and recommendations regarding what we consider to be best practices for survey research reports. Although our recommendations are not minimal standards for manuscripts submitted to the Journal, we hope that they provide guidance that will result in an enhancement of the quality of published reports of questionnaire-based survey research. It is important for both researchers/authors and reviewers to seriously consider the rigor that needs to be applied in the design, conduct, and reporting of survey research so that the reported findings credibly reflect the target population and are a true contribution to the scientific literature.

ACKNOWLEDGEMENT

The ideas expressed in this manuscript are those of the authors and do not represent the position of the American Association of Colleges of Pharmacy.

Appendix 1. Criteria for Survey Research Reports

  • Are the study objectives clearly identified?
  • Was the necessity for the research established by drawing on prior work? For example:
  • —AACP databases
  • —Readily available literature
  • —Other professional organizations
  • What sampling approaches were used?
  • Did the authors provide a description of how coverage and sampling error were minimized?
  • Did the authors describe the process to estimate the necessary sample size?
  • Did the authors use designs that balance costs with errors? (eg, strive for a census with inadequate follow-up versus smaller sample but aggressive follow-up)
  • Was evidence provided regarding the reliability and validity of an existing instrument?
  • How was a new instrument developed and assessed for reliability and validity?
  • Was the scoring scheme for the instrument sufficiently described?
  • Was the procedure used to pretest the instrument described?
  • Was a code book used?
  • Did the authors discuss what techniques were used for verifying data entry?
  • What was the response rate?
  • How was response rate calculated?
  • Were follow-ups planned for and used?
  • Do authors address potential nonresponse bias?
  • Were the statistical, analytic, and reporting techniques appropriate to the data collected?
  • Did the authors list which IRB they received approval from?
  • Did the authors explain how consent was obtained?
  • Was evidence for validity provided?
  • Was evidence of reliability provided?
  • Were results generalizable?
  • Is replication possible given information provided?
