
Survey Research: Definition, Examples and Methods


Survey research is a quantitative research method used for collecting data from a set of respondents. It has long been one of the most widely used methodologies in the industry because of the many benefits and advantages it offers when collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services in order to make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, the collection of data, and the analysis of data. It is useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step toward obtaining quick information about mainstream topics; more rigorous and detailed quantitative methods such as surveys and polls, or qualitative methods such as focus groups and on-call interviews, can follow. There are many situations where researchers can combine both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified based on two critical factors: the tool used to conduct the research and the time involved in conducting it. Based on the medium used to conduct the research, there are three main survey research methods:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The cost involved in online survey research is minimal, and the responses gathered are highly accurate.
  • Phone: Survey research conducted over the telephone (CATI) can be useful for collecting data from a more extensive section of the target population. However, phone surveys tend to cost more and take longer than other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research: Longitudinal survey research involves conducting survey research over a continuum of time, spread across years or decades. The data collected using this method from one time period to another may be qualitative or quantitative. Respondent behavior, preferences, and attitudes are observed continuously over time to analyze reasons for changes in behavior or preferences. For example, if a researcher intends to learn about the eating habits of teenagers, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study.
  • Cross-sectional survey research: Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular point in time. This survey research method is implemented in various sectors such as retail, education, healthcare, and SME businesses. Cross-sectional studies can be either descriptive or analytical. They are quick and help researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method when a descriptive analysis of a subject is required.

Survey research is also classified according to the sampling method used to form the sample: probability and non-probability sampling. In probability sampling, every individual in the population has a known, nonzero chance of being selected, and the researcher chooses the elements based on probability theory. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, and stratified random sampling. In non-probability sampling, the researcher uses his/her knowledge and experience to form the sample (a short code sketch contrasting the two approaches follows the list below).


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
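
To make the distinction concrete, here is a minimal Python sketch contrasting probability and non-probability selection on a hypothetical respondent frame; the frame, column names, and sample sizes are illustrative assumptions, not part of the original article.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical sampling frame: 1,000 people with an age-group attribute.
frame = pd.DataFrame({
    "person_id": range(1000),
    "age_group": rng.choice(["18-29", "30-49", "50-64", "65+"], size=1000),
})

# Probability sampling: simple random sample of 100 (every unit has an equal, known chance).
srs = frame.sample(n=100, random_state=42)

# Probability sampling: stratified random sample, 25 respondents per age group.
stratified = (
    frame.groupby("age_group", group_keys=False)
         .apply(lambda g: g.sample(n=25, random_state=42))
)

# Non-probability sampling: convenience sample, e.g. whoever is easiest to reach
# (here, simply the first 100 rows of the frame).
convenience = frame.head(100)

print(len(srs), len(stratified), len(convenience))
```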

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where the details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. If researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, a survey should include a smart balance of open-ended and closed-ended questions. Use question types like the Likert scale, semantic scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys to the target audience and filter out irrelevant questions as required. Survey research works best when the sample is drawn from a well-defined target population; that way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via the decided mediums:  Distribute the surveys to the target audience and patiently wait for feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real time and identify patterns in the responses that might lead to a much-needed breakthrough for your organization. GAP analysis, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to shed light on respondent behavior (a small cross-tabulation sketch follows this list). Researchers can use the results to implement corrective measures that improve customer/employee satisfaction.
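
As an illustration of the analysis step, the following is a small Python sketch of a cross tabulation using pandas; the response data and column names are hypothetical and chosen purely for illustration.

```python
import pandas as pd

# Hypothetical survey responses: satisfaction by customer segment.
responses = pd.DataFrame({
    "segment": ["new", "new", "returning", "returning", "returning", "new"],
    "satisfied": ["yes", "no", "yes", "yes", "no", "yes"],
})

# Cross tabulation: counts of satisfied/unsatisfied respondents within each segment.
counts = pd.crosstab(responses["segment"], responses["satisfied"])

# Row percentages make the segments directly comparable.
row_pct = pd.crosstab(responses["segment"], responses["satisfied"], normalize="index") * 100

print(counts)
print(row_pct.round(1))
```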

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have proven effective at protecting respondent privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics, like product quality or the quality of customer service, can be put on the table for discussion. One way to do this is by including open-ended questions where respondents can write their thoughts. This will make it easy for you to correlate your survey with what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. Through this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead of the other two scales. Along with establishing the rank and name of variables, it also makes known the difference between two variables. The only drawback is that there is no fixed starting point on the scale, i.e., a true zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale; its variables are labeled, ordered, and have a calculated difference between them. In addition to the properties of the interval scale, this scale has a fixed starting point, i.e., a true zero value is present.
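
A minimal Python sketch of how these four levels of measurement can be encoded and summarized, assuming a hypothetical data frame; only operations that are valid for each level are shown.

```python
import pandas as pd

df = pd.DataFrame({
    # Nominal: labels with no inherent order (only counting/mode make sense).
    "region": ["north", "south", "south", "east"],
    # Ordinal: ordered categories; rank is meaningful, differences are not.
    "satisfaction": ["poor", "good", "excellent", "good"],
    # Interval: ordered with meaningful differences but no true zero (e.g., temperature in Celsius).
    "temp_c": [20.0, 25.0, 18.5, 30.0],
    # Ratio: meaningful differences and a true zero (e.g., monthly spend), so ratios are valid.
    "monthly_spend": [120.0, 0.0, 75.5, 240.0],
})

# Encode the ordinal variable with an explicit category order.
df["satisfaction"] = pd.Categorical(
    df["satisfaction"], categories=["poor", "fair", "good", "excellent"], ordered=True
)

print(df["region"].value_counts())    # nominal: frequencies
print(df["satisfaction"].min())       # ordinal: comparisons respect the order
print(df["temp_c"].mean())            # interval: means and differences are valid
print(df["monthly_spend"].max() / 2)  # ratio: ratios are meaningful
```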

Benefits of survey research

If survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile and online surveys require minimal financial investment per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums, like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. Thanks to offline response collection, researchers can also conduct surveys in remote areas with limited internet connectivity, which makes data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure, as respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking candid responses to its survey research should state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design when costs must be kept low and detailed information needs to be gathered easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide on an aim for the research:  There can be multiple reasons for a researcher to conduct a survey, but a clear purpose needs to be decided. This is the primary stage of survey research, as it shapes the entire path of the survey and affects its results.
  • Filter the sample from the target population:  “Whom to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of the sample are and how useful their opinions are. The quality of respondents in a sample matters more for the results than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc., can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher must answer this question to design it effectively. What will the content of the cover letter be? What survey questions will the questionnaire contain? Understand the target market thoroughly to create a questionnaire that targets the sample and gains insights about the survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides which questions to include in the study, they can send it to the selected sample. Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART goals. What is it that you want to achieve with the survey? How will you measure it promptly, and what results are you expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose specific questions that are relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the 15-20 most relevant questions. Frame each question as a different question type based on the kind of answer you would like to gather from it. Create a survey using different types of questions, such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and the four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions (a small branching sketch follows this list).
  • Test on all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices, like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any needed corrections at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can distribute it as handouts or share it via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all the responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. As a researcher, you must know where your responses are coming from. This will help you analyze the results, predict decisions, and write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study and the answers to questions such as: Has the product or service been used and preferred? Do respondents prefer one product over another? Are there any recommendations?
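
As referenced in the yes/no questions tip above, here is a minimal Python sketch of branching (skip logic) that routes respondents to different follow-up questions depending on a screener answer; the questions and structure are hypothetical.

```python
def follow_up_questions(has_purchased: bool) -> list[str]:
    """Route respondents to different follow-ups based on a yes/no screener."""
    if has_purchased:
        return [
            "How satisfied are you with your purchase?",
            "How likely are you to recommend the product to a friend?",
        ]
    return [
        "What has kept you from purchasing so far?",
        "Which features would make you consider purchasing?",
    ]

# A respondent who answered "yes" to the screener sees the purchaser branch.
print(follow_up_questions(True))
print(follow_up_questions(False))
```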

Having a tool that helps you carry out all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards and advanced analysis tools to automation and dedicated functions, QuestionPro has everything you need to execute your research projects effectively. Uncover the insights that matter the most!

Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy effect”).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.
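
The following is a minimal Python sketch of per-respondent randomization of response options; the option list is hypothetical and the shuffle is a generic illustration, not Pew Research Center's production survey software.

```python
import random

# Hypothetical response options for a single closed-ended question.
options = [
    "The economy",
    "The war in Iraq",
    "Health care",
    "Terrorism",
    "Energy policy",
]

def render_question(respondent_id: int) -> list[str]:
    """Return the answer choices in a per-respondent random order."""
    rng = random.Random(respondent_id)  # seed by respondent so the order is reproducible
    shuffled = options.copy()
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees the same set of options, but in a different order.
for rid in range(3):
    print(rid, render_question(rid))
```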

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
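
A hedged sketch of how a split-form wording experiment of this kind might be analyzed, using a two-proportion z-test from statsmodels; the counts are invented for illustration and are not Pew Research Center data.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: respondents answering "favor" under two question wordings.
favor_counts = np.array([340, 275])   # form A, form B
sample_sizes = np.array([500, 500])   # respondents randomly assigned to each form

# Because assignment to forms is random, a significant difference in the
# proportion favoring can be attributed to the wording change.
z_stat, p_value = proportions_ztest(favor_counts, sample_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```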


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).
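
As a generic illustration of how one might gauge whether an order effect like this is larger than chance, here is a small Python sketch applying a chi-square test of independence to a 2x2 table of question order by response; the counts are hypothetical (chosen to mirror the reported 81% vs. 66% split under an assumed 500 respondents per order) and are not the actual survey data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: question order (rows) by response (columns).
#                  "work with"   "stand up to"
table = np.array([
    [405, 95],   # asked after the Democratic-leaders question (81% "work with")
    [330, 170],  # asked first (66% "work with")
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```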

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


A Survey of U.S Adults’ Opinions about Conduct of a Nationwide Precision Medicine Initiative® Cohort Study of Genes and Environment

Contributed equally to this work with: David J. Kaufman, Rebecca Baker, Lauren C. Milner, Kathy L. Hudson

Authors: David J. Kaufman, Rebecca Baker, Lauren C. Milner, Stephanie Devaney, Kathy L. Hudson

Affiliations: National Human Genome Research Institute, Division of Genomics and Society, National Institutes of Health, Rockville, MD, United States of America; National Institutes of Health, Office of the Director, Bethesda, MD, United States of America

Published: August 17, 2016 | https://doi.org/10.1371/journal.pone.0160461

A survey of a population-based sample of U.S. adults was conducted to measure their attitudes about, and to inform the design of, the Precision Medicine Initiative’s planned national cohort study.

An online survey was conducted by GfK between May and June of 2015. The influence of different consent models on willingness to share data was examined by randomizing participants to one of eight consent scenarios.

Of 4,777 people invited to take the survey, 2,706 responded and 2,601 (54% response rate) provided valid responses. Most respondents (79%) supported the proposed study, and 54% said they would definitely or probably participate if asked. Support for and willingness to participate in the study varied little among demographic groups; younger respondents, LGBT respondents, and those with more years of education were significantly more likely to take part if asked. The most important study incentive that the survey asked about was learning about one’s own health information. Willingness to share data and samples under broad, study-by-study, menu and dynamic consent models was similar when a statement about transparency was included in the consent scenarios. Respondents were generally interested in taking part in several governance functions of the cohort study.
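
As a quick check of the arithmetic behind the reported figures (assuming the 54% response rate is valid responses divided by invitations):

```python
invited = 4777
responded = 2706
valid = 2601

print(f"Raw response rate:   {responded / invited:.1%}")  # about 56.6%
print(f"Valid response rate: {valid / invited:.1%}")      # about 54.4%, the reported 54%
```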

Conclusions

A large majority of the U.S. adults who responded to the survey supported a large national cohort study. Levels of support for the study and willingness to participate were both consistent across most demographic groups. The opportunity to learn health information about oneself from the study appears to be a strong motivation to participate.

Citation: Kaufman DJ, Baker R, Milner LC, Devaney S, Hudson KL (2016) A Survey of U.S Adults’ Opinions about Conduct of a Nationwide Precision Medicine Initiative® Cohort Study of Genes and Environment. PLoS ONE 11(8): e0160461. https://doi.org/10.1371/journal.pone.0160461

Editor: Alejandro Raul Hernandez Montoya, Universidad Veracruzana, MEXICO

Received: January 18, 2016; Accepted: July 19, 2016; Published: August 17, 2016

This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

Data Availability: With respect to our ability to share the data, the ethos of the Precision Medicine Initiative is to share data openly. However, in this instance, the survey data used in the paper were collected under a contractual agreement with the survey research company GfK. GfK carries out the survey on a sample drawn from a large population-based sample that GfK recruits and maintains. GfK has an ethical and contractual obligation to protect the privacy of its panel members and their households. To this end GfK passes these obligations on to its clients. We are bound ethically and legally not to share respondent identifiers, or data that could be linked to the larger dataset we possess that would allow for the identification of respondents or households. Requests for collaborations to examine aggregate analyses of these data are welcome and can be sent to [email protected] .

Funding: The Foundation for the National Institutes of Health (FNIH) directly paid for GfK to field the survey. The authors themselves received no specific funding for the work. FNIH did not participate in data collection, analysis, decisions to publish, or preparation of the manuscript. It was involved in discussions about the logistics of study design but did not influence survey content.

Competing interests: During the study and at the time of publication, DK worked at the National Institutes of Health (NIH) which is sponsoring and developing the PMI cohort study. He was not involved in those efforts. During the study and at the time of publication, KH, RB, and LM worked at the NIH which is sponsoring and developing the PMI cohort study, and were directly involved in development of the PMI cohort study. SD worked at the NIH at the time the survey was developed where she worked directly on development of the PMI study. At the time of publication, she worked at the White House where she was involved in some aspects of developing the PMI study. This does not alter the authors' adherence to all PLOS ONE policies on data sharing and materials.

Introduction

Precision medicine is an emerging approach to disease prevention, diagnosis and treatment that takes into account differences between individuals. While not new, to date it has only been applied to certain conditions. The Precision Medicine Initiative® (PMI) plans to build a comprehensive scientific knowledge base to implement precision medicine on a larger scale by launching a national cohort study of a million or more Americans [ 1 ]. The national cohort study will aim to foster open, responsible data sharing, maintain participant privacy, and build on strong partnerships between researchers and participants [ 2 ].

Prospective cohort studies using biospecimens are a common approach taken to examine the effects and interactions of genes, environment, and lifestyle [ 3 – 7 ]. Although they are labor-, time-, and capital-intensive [ 8 ], these studies can provide the statistical power needed to detect small biological effects on disease [ 9 – 10 ]. Both public [ 3 , 5 , 11 , 12 ] and private [ 4 , 6 , 7 ] cohort studies and biobanks have been created, and genetic analyses have been incorporated into existing cohort studies as genotyping and computational tools become more accessible [ 11 , 13 , 14 ].

The Precision Medicine Initiative® aims to expand on these efforts to engage a wider group who would volunteer a standardized set of health information that can be shared broadly with qualified researchers. Cohort volunteers would share their information and biological specimens for genomic and other analyses. Genomic information would be combined with clinical data from electronic health records, lifestyle data, and data measured through mobile health devices, for use by a broad range of researchers. Participants would have access to information about cohort-fueled research findings, as well as some individual research results.

The National Institutes of Health (NIH) along with other federal agencies has begun to design and execute this large prospective study as part of the White House’s Precision Medicine Initiative® [ 1 , 2 , 10 ]. During the initial planning process for this PMI Cohort Program, NIH engaged a wide variety of expertise through four public workshops on issues of design and vision for the cohort [ 15 – 18 ], and two Requests for Information [ 19 , 20 ].

At a July 2015 workshop on participant engagement and health equity, a broad range of experts discussed the role of participant engagement in the design and conduct of an inclusive PMI cohort [17]. The discussions, which focused on building and sustaining public trust, actively engaging participants, and enlisting participants to set research priorities and directly collect study data, informed the strategic design of the PMI cohort [21].

The workshop concluded that continued engagement of a broad range of stakeholders will be needed to plan, carry out, and sustain the PMI cohort program. As part of this larger public engagement effort, a survey of U.S. adults was conducted to measure support for such a study, to measure the acceptability of various design features, and to identify and prioritize public concerns.

Materials and Methods

Survey methods.

A 44-question online survey, determined by the NIH Office of Human Subjects Research to be exempt from human subjects review (Exemption #12889), collected U.S. adults’ opinions about a national cohort study. Formal consent was not obtained both because the study was judged exempt and because completion of the survey was taken as a form of consent to participate.

The survey was not intended to collect psychometric data and thus did not rely on validated psychometric scales. However, to examine changes over time in public support for and hypothetical willingness to take part in a large U.S. cohort study, exact wording from two previous surveys was used for some questions [22–24]. Some questions came from a related survey of biobank participants under development at the time by the NIH-funded Electronic Medical Records and Genomics (eMERGE) consortium [25]. Response choices consisted of pre-defined options. Most of these question choices were developed based on findings of focus groups conducted as part of prior studies [22–24].

The survey addressed support for and willingness to take part in the cohort study, specific aspects of participation, study oversight including participant involvement in governance, and the return of information to participants. Respondents were first shown a description of the cohort study (S1 Appendix). At the end of the description, respondents were told that participants in the cohort study “might get access to the information collected about their health”.

Respondents were then asked several questions about their support for the concept and willingness to take part if they were asked. (S2 Appendix contains the exact wording of all of the questions analyzed here.) Respondents were also shown one of eight different scenarios, selected at random, describing study consent and data sharing, and asked whether they would “consent to share your samples and information with researchers in this manner”. The eight scenarios varied with respect to two factors: the structure of consent (broad, study by study, menu, or dynamic consent) and the presence or absence of a statement that cohort study participants would “have access to a website where you would be able to see what studies are going on, which studies are using your information, and what each study has learned.” The exact wording of all eight versions of consent is found in S3 Appendix.
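
The paper does not describe the randomization mechanics, but the 4 × 2 design (four consent models crossed with the presence or absence of the website statement) can be illustrated with a minimal sketch; the scenario labels come from the text above, while the function and variable names are hypothetical.

```python
import random
from itertools import product

CONSENT_MODELS = ["broad", "study-by-study", "menu", "dynamic"]
WEBSITE_STATEMENT = [False, True]  # whether the "access to a website" sentence is appended

# Eight scenarios: every consent model shown with and without the website statement.
SCENARIOS = list(product(CONSENT_MODELS, WEBSITE_STATEMENT))

def assign_scenario(rng=random):
    """Pick one of the eight consent scenarios uniformly at random for a respondent."""
    return rng.choice(SCENARIOS)

consent_model, show_website = assign_scenario()
```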

A pilot survey (n = 30) fielded between May 5 and May 7, 2015 evaluated the instrument length and logic. Median completion time for the pilot was 23 minutes; the instrument was shortened to 20 minutes. The final instrument was translated into Spanish for use by respondents who preferred it. The translation was back-translated and certified.

Sample selection and online survey administration were managed by the online survey firm GfK. The survey sample was drawn from GfK’s KnowledgePanel, which is itself a probability-based pool of approximately 55,000 people designed to be representative of the U.S. population.

Individuals can become GfK panelists only after being randomly selected by GfK; no one can volunteer to be a member. GfK selects people using probability-based sampling of addresses from the U.S. Postal Service’s Delivery Sequence File, which includes 97% of residential U.S. households. Excluded from eligibility are those in residential care settings, institutionalized and incarcerated people, and homeless individuals. Individuals residing at randomly sampled addresses are invited to join KnowledgePanel through a series of mailings in English and Spanish; non-responders are phoned when a telephone number can be matched to the sampled address.

For those who agree to be part of the GfK panel, but do not have Internet access, GfK provides at no cost a laptop and Internet connection. GfK also optimized the survey for administration via smartphone or tablet. When GfK enrolls participants into its panel, each panel participant answers demographic questions which are banked and periodically updated by GfK. GfK can then provide data on common demographics for each of its participants, allowing surveys to reduce burden by not asking these questions. Data in this paper on participants’ self-reported race and ethnic group, age, education, gender, sexual orientation or gender identity, household income, and residence in a metropolitan statistical area were all measured by GfK prior to this survey.

GfK attempts to address sources of survey error, including sampling error and the non-coverage and non-response that arise from panel recruitment methods and panel attrition, by using demographic post-stratification weights benchmarked to the U.S. Current Population Survey (CPS). Once the data are collected, post-stratification weights are constructed so that the study data can be adjusted for the study’s sample design and for survey nonresponse [26].
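
The weighting algorithm itself is not described in the paper. As a rough illustration of how such post-stratification weights are commonly produced, the sketch below applies iterative proportional fitting ("raking") to marginal targets; the column names and target shares are invented for the example and are not the CPS benchmarks GfK actually used.

```python
import pandas as pd

def rake(df, targets, weight_col="weight", iters=25):
    """Iteratively adjust weights so the weighted margins of each listed
    variable approach its population target shares (generic raking sketch)."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(iters):
        for col, shares in targets.items():
            total = df[weight_col].sum()
            current = df.groupby(col)[weight_col].sum() / total
            factors = {cat: share / current[cat]
                       for cat, share in shares.items() if current.get(cat, 0) > 0}
            df[weight_col] *= df[col].map(factors).fillna(1.0)
    df[weight_col] *= len(df) / df[weight_col].sum()  # rescale to a mean weight of 1
    return df

# Invented marginal targets, for illustration only (not actual CPS figures).
targets = {
    "gender": {"female": 0.51, "male": 0.49},
    "education": {"0-11 yrs": 0.11, "HS": 0.29, "some college": 0.29, "BA+": 0.31},
}
# weighted = rake(respondents, targets)  # 'respondents' is a hypothetical DataFrame
```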

This series of methods has resulted in GfK survey samples that compare favorably to other gold-standard methods designed to generate population-based samples [27]. During the field period for this survey, GfK first drew a random sample of 3,271 U.S. adults from their Web-enabled panel of approximately 55,000 U.S. residents. This initial sample included Hispanics and Black non-Hispanics. To meet oversampling goals of 500 in each of these two groups, three additional random samples were drawn: one of 665 Black non-Hispanic adults and two of 541 and 320 Hispanic adults. GfK contacted each of these 4,777 individuals via email to invite them to take part in this survey. Non-respondents received up to four email reminders from GfK.

The survey was fielded online between May 28, 2015 and June 9, 2015. Participants received the equivalent of $2 for their time. After survey data were collected, information previously collected by GfK on panel members’ demographics was added to the dataset.

Analysis Methods

Data were cleaned and analyzed using SPSS software [28]. Respondents who skipped more than one-third of the questions, or who completed the survey in less than one-quarter of the median survey completion time, were excluded from analyses.
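
A minimal sketch of that exclusion rule, assuming a hypothetical data frame with one column per question and a recorded completion time in minutes (the file and column names are illustrative, not the authors'):

```python
import pandas as pd

# Hypothetical data: one row per respondent, question columns named q1..q44,
# plus the completion time in minutes recorded by the survey platform.
responses = pd.read_csv("survey_responses.csv")  # illustrative file name
question_cols = [c for c in responses.columns if c.startswith("q")]

median_minutes = responses["completion_minutes"].median()
share_skipped = responses[question_cols].isna().mean(axis=1)

# Keep respondents who skipped no more than a third of questions and took
# at least a quarter of the median completion time.
keep = (share_skipped <= 1 / 3) & (responses["completion_minutes"] >= median_minutes / 4)
analysis_sample = responses[keep].copy()
```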

Support for the study and willingness to participate were measured using 4-point Likert scales; two binary variables were created for analysis from these scales. Demographic variables were analyzed using the categories shown in Table 1. Two sets of multiple logistic regressions were conducted (Table 2), with support for the study and willingness to participate as the dependent variables. In these models, race and ethnic group were treated as a single categorical variable using dummy variables, with white non-Hispanics as the reference group. Education, household income, and age were each modeled as ordinal variables using the categories shown in Tables 1 and 2. Respondents who identified as lesbian, gay, bisexual or transgender (LGBT) were analyzed together as a single group.
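
The models were fit in SPSS; purely as an illustration of the specification described above, the sketch below expresses a comparable model in Python's statsmodels with hypothetical variable names, reusing the analysis_sample data frame from the cleaning sketch and omitting the survey weights applied in the published analysis.

```python
import statsmodels.formula.api as smf

# support: 1 = study definitely/probably should be done, 0 = probably/definitely not.
# race_eth is dummy-coded with white non-Hispanics as the reference category;
# education_ord, income_ord, and age_ord are ordinal scores per Tables 1 and 2.
formula = (
    "support ~ C(race_eth, Treatment('white_nh')) + C(gender)"
    " + education_ord + income_ord + age_ord + metro + lgbt"
)
support_model = smf.logit(formula, data=analysis_sample).fit()
print(support_model.summary())
```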

Table 1. Demographic characteristics of the surveyed population (n = 2,601).

https://doi.org/10.1371/journal.pone.0160461.t001

Table 2. Each multiple logistic regression included independent covariates for gender, self-identified race and ethnic group, survey language (among Hispanics only), age, household income, educational attainment, residence within or outside a metropolitan statistical area, and identification as lesbian, gay, bisexual or transgender. For purposes of the analysis, race and ethnicity was treated as a categorical variable, using dummy variables for Black non-Hispanics, Hispanics, and other non-white, non-Hispanics. Education, household income, and age were each treated as 4-level ordinal variables using the categories shown in the table. To examine whether Hispanics who took the survey in Spanish differed from those who took it in English, separate regressions were conducted using Hispanic respondents’ data only, adjusting for all of these variables except race and ethnic group.

https://doi.org/10.1371/journal.pone.0160461.t002

The multiple logistic regressions examined demographic factors associated with support and participation. Analyses that included the entire sample were weighted to 2014 U.S. Census demographic benchmarks. To examine whether Hispanics who took the survey in English differed from those who took it in Spanish with respect to support and willingness to participate, separate regressions were carried out using only Hispanic respondents’ data, adjusting for the covariates in Table 1 except for race and ethnic group. Analyses within or between different races and ethnic groups used an alternate set of weights calculated for these oversampled groups.

In total, 4,777 people were contacted by GfK via email and invited to take the survey, and 2,706 provided at least some responses, for an overall response rate of 56.6%. Response rates were 62.2% (2,036 of 3,271) in the general population sample, 51.9% (345 of 665) in the Black non-Hispanic oversample, and 38.6% (325 of 841) in the Hispanic oversample. It should be noted that 320 Hispanic cases in the oversample were invited to respond on June 5, 2015 and the survey was closed on June 9, 2015. Members of this oversample received fewer email reminders to take part and had a shorter field period to participate, which could account for some but not all of the lower completion rate in the Hispanic oversample.

Responses from 105 people (3.9%) were excluded from analysis because they skipped more than one-third of the questions or completed the survey in less than 6 minutes, leaving a valid response rate of 2,601/4,777 or 54%. The excluded people did not differ demographically from those retained in the analysis. The margin of error on opinion estimates based on the sample of 2,601 is +/- 1.9%.
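
Those figures can be reconstructed from the counts reported above; the margin of error shown here is the conventional 95% half-width for a proportion near 0.5, ignoring any design effect from weighting.

```python
import math

invited, started, excluded = 4777, 2706, 105
analyzed = started - excluded                 # 2,601 retained for analysis

overall_rate = started / invited              # ~0.566 -> 56.6%
valid_rate = analyzed / invited               # ~0.545 -> 54%

# 95% margin of error for a proportion of 0.5 with n = 2,601
moe = 1.96 * math.sqrt(0.5 * 0.5 / analyzed)  # ~0.019 -> +/- 1.9%
print(f"{overall_rate:.1%}, {valid_rate:.1%}, +/- {moe:.1%}")
```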

Demographic characteristics of the surveyed population are found in Table 1 . After weighting the sample, people with less than twelve years of education were still somewhat underrepresented compared to census data. This should be considered where differences in opinions exist across education groups.

General Support for the Cohort Study

Immediately after viewing the description of the cohort study, participants were asked ‘Based on the description you just read, do you think this study should be done?’ Seventy-nine percent said the study definitely (22%) or probably (57%) should be done, while 21% said probably not (16%) or definitely not (5%).

Similar levels of support were observed across most demographic groups (Table 2). A multiple logistic regression treating support for the cohort study as a binary dependent variable showed that, adjusting for the other factors in Table 1, no significant differences in support were observed between genders, age groups, or races or ethnic groups, or between Hispanics who took the survey in Spanish and those who took it in English. Fewer years of education (p<0.0001), lower household income (p = 0.04), and residence outside of metropolitan statistical areas (a proxy for rural residence, p = 0.03) were independently associated with lower levels of support for the study. However, in all but one of the demographic categories examined (0–11 years of education), 70% or more said they supported the study.

Stated Willingness to Participate in the Cohort Study if Asked

The question ‘Would you participate in the cohort study if you were asked?’ was also posed at the survey’s start. Prior to this question, the only possible personal benefit of participating that was mentioned was that cohort participants “might get access to the information collected about their health.” A majority of participants (54%) said they definitely (14%) or probably (40%) would participate if asked, while the rest said they probably (30%) or definitely (16%) would not take part. Willingness to participate did not vary considerably between demographic groups (Table 2). Majorities (>50%) in most groups said they would participate if asked, and in each group, at least 1 in 9 people said they would definitely take part. A second multiple logistic regression treated willingness to participate as a binary dependent variable. Adjusting for the other factors in Table 1, more years of education (p<0.0001) and younger age (p<0.0001) were independently associated with greater willingness to participate. Compared to white non-Hispanics, Hispanic respondents were more likely to say they would participate (59% vs 53%, adjusted p = 0.009). Respondents who identified as lesbian, gay, bisexual or transgender were also significantly more likely to say they would participate if asked (p = 0.01).

One in four respondents said that they supported the idea of the study, but also said they would not participate if they were asked. People who supported the study but would not participate if asked were more than twice as likely as those who would participate to agree that the study “would take too much of my time” (77% vs 30%), and were less likely than those who would take part to agree with the statement “I trust the study to protect my privacy” (51% vs. 81% respectively).

The survey was not designed to educate people about precision medicine or biomedical research. However, it was hypothesized that thinking about some of the attitudinal questions in the survey could influence respondents’ opinions about taking part in the study. To test this hypothesis, near the end of the survey, respondents were asked again, “Now that you have had a chance to think about the study, would you participate in the study if you were asked?” Overall, responses were fairly similar to the earlier question: 56% said they would definitely (15%) or probably (41%) take part if asked; 25% said they probably would not participate and 19% said they definitely would not. Seven in ten (70%) did not change their answer from the beginning to the end of the survey. However, 15% had grown more positive about participating by the end of the survey, and 15% had grown more negative. Some demographic differences were observed in shifts from the beginning of the survey to the end. For example, among those who took the survey in Spanish, willingness fell from 61% at the beginning to 55% at the end, while willingness rose among people with some college (from 55% to 60%) and people with a bachelor’s degree (from 60% to 64%).

Respondents were also asked about specific things they would be willing to do as study participants. Among all respondents, one in seven (14%) said they would participate for their lifetime, and an additional 11% said they would take part for at least ten years. Among people who said they would definitely or probably participate if asked, 42% said they would take part for at least ten years. However, only one in four of the Black non-Hispanics and Hispanics who were willing to take part said they would do so for at least ten years.

All respondents, including those who said they would not participate, were asked to “[i]magine you were considering participating in the study”, and then asked about their willingness to provide various types of data and samples. Nearly three quarters of respondents (73%) said that if they were participating they would be willing to provide the study with a blood sample. Higher fractions said they would provide urine, hair, and saliva samples (75%), data from an activity tracker (e.g., a Fitbit) (75%), genetic information (76%), a family medical history (77%), soil and water samples from their home (83%), and data on lifestyle, diet and exercise (84%). Among those with a social media account (n = 1,641), only 43% responded that they would share their social media information with the study.

In each demographic group listed in Table 1 , at least 9% of people (one in eleven) said they would definitely participate, would take part for at least ten years, and would provide the study with a blood sample.

In the sample, 87% owned either a smartphone (62%) or another type of cell phone (25%). Three-quarters of these phone users responded that if they “were texted or prompted on your cell phone to answer a question from the study, or measure your pulse”, they would be willing to respond at least once a week. A majority (59%) said they would respond at least once a day, and 28% would be willing to respond at least twice a day.

Incentives for Participation

Respondents were asked about the importance of six different incentives in their decision about whether or not to participate. The most important incentive was “learning information about my health”, listed as either somewhat or very important by 90% of people, including at least 85% of people in each group in Table 1 . Receiving payment for their time (80%) and getting health care (77%) were important to more people than receiving free internet connections (56%), activity trackers (55%) or smartphones and data plans (52%). However, the technology incentives were of more interest to younger respondents, those with lower household incomes, and those with fewer years of education.

Respondents said they would be interested in a wide variety of information that the study might return to them ( Fig 1 ). Three in four would be interested in “lab results” (examples given were cholesterol and blood sugar) as well as genetic results. Slightly fewer (68%) said they would like a copy of their medical record. Six in ten (60%) said they would be interested in receiving information about other research studies related to their health.

Fig 1. Respondents’ interest in different types of information the study could return to participants.

https://doi.org/10.1371/journal.pone.0160461.g001

Consent and Sharing of Data and Samples

As described above, respondents were randomly selected to view one of eight consent scenarios and asked “would you consent to share your samples and information with researchers in this manner”. There were four models of consent: broad, study-by-study, menu, and dynamic consent. The exact wording of the four consent scenarios is found in S3 Appendix. Two versions of these four scenarios were presented: four in which the consent description stood alone, and four in which the consent option was followed by this sentence: “You would (also) have access to a website where you would be able to see what studies are going on, which studies are using your information, and what each study has learned.”

When the consent models were displayed alone, similar fractions of respondents said they would share samples and data under the study-by-study (72%), menu (75%), and dynamic consent models (73%) ( Fig 2 ) while 64% would share with the study under the broad consent model. However, when the consent scenarios were accompanied by the statement about a website that displays how samples and data are being used, there was essentially no difference in support for the four consent models.

Fig 2. Willingness to share samples and information under different consent models.

https://doi.org/10.1371/journal.pone.0160461.g002

When asked about allowing different categories of researchers to use their samples and information, people were most likely to say they would share with researchers at the NIH (79%) and U.S. academic researchers (71%). There was more reluctance to share with “pharmaceutical or drug company researchers” (52%) or “other government researchers” (44%). The category “other government researchers” may be overly broad and non-specific; for example, had the survey named researchers at specific health-related agencies, responses may have differed. Consistent with two prior surveys, people were least willing to share with university researchers in other countries (39%) [ 23 , 24 ].

In a separate question, 43% of people agreed that if their personal information was removed first, they would be willing to have their “information and research results available on the Internet to anyone”.

Involvement of Participants in Design and Conduct of the Study

To create a cohort study that addresses health related questions that are relevant to the lives of participants, study designers are embracing new models of participants as partners in research. Several questions addressed respondents’ interest in this area. A large majority (76%) agreed with the statement “research participants and researchers should be equal partners in the study”.

Fig 3 shows that between 34% and 62% of respondents said that participants should be involved in various phases of the study. Participant involvement in three governance tasks—deciding what kinds of research are appropriate, deciding what to do with study results, and deciding what research questions to answer—was important to the largest shares of respondents. Between 35% and 45% said they would like to be involved themselves in those three aspects of the study.

Fig 3. Aspects of the study that participants should be involved in generally, and aspects the respondents themselves would want to be involved in.

https://doi.org/10.1371/journal.pone.0160461.g003

One in four people said that including research participants in planning and running the study would increase their willingness to participate, including 18% of those people who said earlier in the survey that they would not take part if asked. Another 17% said it would make them less willing to take part if participants were included, and 58% said it would not affect their decision.

The Precision Medicine Initiative® cohort program has been engaging and partnering with participant representatives prior to the launch of the study, and plans to actively continue this work with cohort study participants. This survey reflects an early effort to understand the views and preferences of potential participants toward the PMI cohort program. Findings from this survey were incorporated into the final report that the Precision Medicine Initiative Working Group made to the National Institutes of Health, and are reflected in recommendations about how the cohort study might be designed [21]. As such, the survey represents one of several early efforts to engage the public in order to inform whether and how the PMI cohort study might move forward.

Across most demographic groups this survey found consistent levels of support for and willingness to participate in the PMI cohort study as it was described. The overall support (79%) and willingness to take part (54%) observed are comparable to measures in previous public surveys conducted using nationwide GfK samples in 2008 and 2012, which found overall support for a large nationwide biobank at 84% and 83% respectively, and willingness to participate if asked at 60% and 56% respectively [ 23 , 25 ]. Support in this study could be lower than that measured in earlier surveys in part due to the explicitly stated association with the NIH (and thus the federal government) in this survey, as well as high profile privacy breaches associated with the federal government and health providers in the six months prior to the survey [ 29 , 30 ]. Differences in levels of support and willingness to participate might also result from this survey’s mention of smartphones and activity trackers to collect data, especially among older respondents, who had more concern about the privacy of electronic media (data not shown). This study’s estimates of overall support and willingness to participate are also biased slightly upward, since people with fewer years of education, who were underrepresented in the sample compared to U.S. demographics, were less likely to support the study and participate. However, extrapolating support and willingness observed in each category of education to U.S. census frequencies of the education categories suggests the magnitude of inflation from this source is less than 1% in both figures.

The findings suggest that certain groups, including older Americans and those with lower socioeconomic status, may require additional engagement if they are to take part. However, the survey findings do not support the idea that people from communities that have historically been understudied in research are not interested in participating in this cohort. On the contrary, in each demographic group in Table 1, at least one in eleven people (9%) said they would definitely participate if asked, would donate blood, and would take part for at least ten years. The willingness to take part observed here is only the foundation for efforts needed to engage, recruit and retain people in traditionally underrepresented groups. Researchers likely must work as part of communities that have been underrepresented if those communities are going to feel and be a part of the study [17]. To this end, scientists may consider adopting language and policies that bond researchers and potential participants together to design and govern the study [17, 31].

Thirty percent of survey respondents shifted their opinions about their willingness to take part in the study from the beginning to the end of the survey. This suggests that considering some of the potential risks and benefits of participation may inform and influence people’s decision to take part. Engagement before and during study recruitment may help people make better informed decisions about participation.

The observation that receipt of health information was the most important incentive was consistent with results of a 2008 nationwide survey [ 23 ]. Maximizing information shared with research participants will be a key challenge of the PMI. Survey respondents expressed interest in a wide range of information, including but not limited to genetic information. Laboratory measurements such as blood sugar were seen as equally interesting. The return of information may also benefit research, encouraging participants to stay engaged and enrolled, and to take part in other research studies based on their results.

There was considerable enthusiasm among respondents about participant involvement in different phases of the study. Between 19% and 45% said they themselves would take part in various study-related functions. The tasks of greatest interest to the most people were governance-related. Developing “participants as equal partners” may not drastically improve enrollment. However, it may establish the kind of study identity and enthusiasm that others have cited as one key to the success of this effort [16–18, 21].

NIH researchers were the group respondents most trusted with the data and samples to be collected. If the NIH serves as a leader in the PMI cohort, it must be prepared to understand and meet those expectations. For example, if PMI cohort data are shared with foreign academics, study leaders may need to address negative attitudes about such sharing, perhaps by engaging the public to understand their reservations and explaining how the sharing benefits U.S. medicine and research.

Limitations

It is important to note that the results of this survey were not meant to, and do not, accurately predict what portion of American adults would take part in the PMI cohort study if they were asked. First, although half of respondents said they would take part in PMI if asked, only 54% of the sample contacted for this survey agreed to participate in the survey itself. Second, respondents in this survey are members of the GfK panel; they may be more favorably inclined toward research participation than the general population. This limitation is inherent in most studies of attitudes about taking part in biomedical research, since people must be willing to take part in a survey study like this one to collect such data. On the other hand, the bias may not be particularly strong, since sharing opinions on a survey is likely to be a smaller, lower-risk commitment than sharing one’s biospecimens and medical data. Third, people’s stated willingness to take part in a hypothetical study will not correlate perfectly with actual behaviors. The PMI study should not be expected to enjoy a 54% success rate in its recruitment based on these data. Even given these limitations, the data likely provide valid estimates of support for the study, as well as the relative willingness of different groups to participate, the relative importance of different incentives, and the relative acceptability of different consent models.

Supporting Information

S1 Appendix. Text used to describe the PMI cohort study in the survey.

https://doi.org/10.1371/journal.pone.0160461.s001

S2 Appendix. Exact wording of survey questions used in this manuscript.

https://doi.org/10.1371/journal.pone.0160461.s002

S3 Appendix. Wording used to describe eight consent scenarios in the survey.

https://doi.org/10.1371/journal.pone.0160461.s003

Acknowledgments

The authors wish to thank the Foundation for the National Institutes of Health, which funded this survey. The authors would also like to thank Vence Bonham, Laura Rodriguez, Alex Lee and the members of the Consent, Education, Regulation and Consultation Working Group of the eMERGE research consortium for their contributions to the survey and manuscript development.

Author Contributions

  • Conceived and designed the experiments: DK RB LM SD KH.
  • Performed the experiments: DK RB LM SD KH.
  • Analyzed the data: DK RB.
  • Wrote the paper: DK RB LM SD KH.
  • 2. The White House. Precision Medicine Initiative: Proposed Privacy and Trust Principles [Internet]. Washington: The White House; 2015 [cited 2015 October 8]. Available: https://www.whitehouse.gov/sites/default/files/docs/pmi_privacy_and_trust_principles_july_2015.pdf .
  • 3. Va.gov [Internet]. Washington: Million Veteran Program (MVP) [Accessed 8 October 2015]. Available: http://www.research.va.gov/mvp/ .
  • 4. Genomeweb.com [Internet]. New York: Regeneron Launches 100K-Patient Genomics Study with Geisinger, Forms New Genetics Center [updated 2014 January 14; Accessed 8 October 2015]. Available: https://www.genomeweb.com/sequencing/regeneron-launches-100k-patient-genomics-study-geisinger-forms-new-genetics-cent .
  • 5. Ukbiobank.ac.uk [Internet]. London: Biobank UK Homepage [updated 2015 October 7; Accessed 8 October 2015]. Available: http://www.ukbiobank.ac.uk .
  • 7. Dor.kaiser.org [Internet]. Oakland: Kaiser Permanente Division of Research The research program on genes, environment, and health [updated 2015 January; Accessed 8 October 2015]. Available: http://www.dor.kaiser.org/external/DORExternal/rpgeh/index.aspx .
  • 9. National Human Genome Research Institute. Design considerations for a potential United States population-based cohort to determine the relationships among genes, environment, and health: `Recommendations of an expert panel’ [Internet]. Bethesda: National Human Genome Research Institute; 2005 [Accessed 8 October 2015] Available: https://www.genome.gov/Pages/About/OD/ReportsPublications/PotentialUSCohort.pdf .
  • 15. National Institutes of Health. ACD Precision Medicine Initiative Working Group Public Workshop: Unique Scientific Opportunities for the Precision Medicine Initiative National Research Cohort [Internet]. Bethesda: National Institutes of Health; 2015 April 28 [updated 2015 June 9; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/workshop-20150428.htm .
  • 16. National Institutes of Health. ACD Precision Medicine Initiative Working Group Public Workshop: Digital Health Data in a Million-Person Precision Medicine Initiative Cohort [Internet]. Bethesda: National Institutes of Health; 2015 May 28 [updated 2015 June 30; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/workshop-20150528.htm .
  • 17. National Institutes of Health. ACD Precision Medicine Initiative Working Group Public Workshop: Participant Engagement and Health Equity Workshop [Internet]. Bethesda: National Institutes of Health; 2015 July 1 [updated 2015 August 18; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/workshop-20150701.htm .
  • 18. National Institutes of Health. Mobile and Personal Technologies in Precision Medicine Workshop—Precision Medicine Initiative Cohort [Internet]. Bethesda: National Institutes of Health; 2015 July 27 [updated 2015 August 5; Accessed 8 October 2015] Available: http://www.nih.gov/precisionmedicine/workshop-20150727.htm .
  • 19. National Institutes of Health. Summary of Responses from the Request for Information on Building the Precision Medicine Initiative National Research Participant Group [Internet]. Bethesda: National Institutes of Health; 2015 [Accessed 8 October 2015]. Available: https://www.nih.gov/sites/default/files/research-training/initiatives/pmi/pmi-workshop-20150528-rfi-summary.pdf .
  • 20. National Institutes of Health. Request for Information: NIH Precision Medicine Cohort—Strategies to Address Community Engagement and Health Disparities [Internet]. Bethesda: National Institutes of Health; 2015 June 2 [updated 2015 August 18; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/rfi-announcement-06022015.htm .
  • 21. Precision Medicine Initiative (PMI) Working Group. The Precision Medicine Initiative Cohort Program—Building a Research Foundation for 21st Century Medicine [Internet]. Bethesda: National Institutes of Health; 2015 September 17 [Accessed 8 October 2015]. Available: http://acd.od.nih.gov/reports/DRAFT-PMI-WG-Report-9-11-2015-508.pdf .
  • 25. Vanderbilt.edu [Internet]. Nashville: Welcome to eMerge: Collaborate [Accessed 8 October 2015]. Available: https://emerge.mc.vanderbilt.edu/ .
  • 26. GfK. GfK KnowledgePanel [Internet]. Nuremberg: GfK; 2015 [Accessed 8 October 2015]. Available: http://www.gfk.com/Documents/GfK-KnowledgePanel.pdf .
  • 27. Baker LC, Bundorf MK, Singer S, Wagner TH. Validity of the survey of health and internet and Knowledge Network's panel and sampling. Available: http://cdc.gov/PCD/issues/2004/oct/pdf/04_0004_01.pdf . Accessed 14 January 2016.
  • 28. SPSS for Windows, rel. 21.0. 2012. Chicago, IL: SPSS Inc.


Americans' Primary Care Experiences

U.S. News & World Report surveyed 2,000 U.S. adults about health care issues, including why and how often they go to the doctor, how they choose their doctors and why they choose to (or don't) follow the advice of their doctors.

Annika Urban April 24, 2024

Global Perceptions of Women’s Rights

On International Women’s Day, advocates push for gender equality – and survey data suggests some countries are more receptive than others.

Julia Haines March 8, 2024


Report: Criticism of Democracy Grows

Support for representative democracy is down in many countries surveyed by the Pew Research Center, with particularly noticeable declines in Kenya and Sweden.

Elliott Davis Jr. Feb. 28, 2024


Consumers Stay Optimistic on the Economy

The sentiment survey from the University of Michigan did show a slight increase in inflation expectations.

Tim Smart Feb. 16, 2024


Study Links Living Alone to Depression

New research bound to influence conversations about America’s ‘loneliness epidemic’ suggests living alone could have implications for physical and mental health.

Steven Ross Johnson Feb. 15, 2024


Long COVID Prevalence by State

Long COVID tended to be more prevalent in the South, Midwest and West, according to a CDC analysis.

Cecelia Smith-Schoenwalder Feb. 15, 2024


States With the Highest Marriage Rates

Colorado had the highest marriage rate of any state, while New Mexico had the lowest.

Steven Ross Johnson Feb. 8, 2024


Consumers Turn the Corner on Economy

Increased income expectations and declining inflation are behind the rosier outlook.

Tim Smart Jan. 19, 2024


Poll: Russians Still Approve of Putin

Most Russian survey respondents see the war in Ukraine as a broader conflict with the West and support it amid concerns about their own country’s economy.

Elliott Davis Jr. Jan. 9, 2024


Consumer Sentiment Up on Lower Inflation

Progress on inflation is finally changing the mood of consumers to a more positive state.

Tim Smart Dec. 22, 2023


The scourge of customer satisfaction surveys

On a scale of 1 to 5, how likely are you to share this story with a friend?


All I can really say about the appointment at my kid's allergist is that it occurred. We waited weeks to get in, got some tests, received a diagnosis and a treatment plan, had a weird insurance thing that wasted our time. American healthcare took place.

Then I got a survey. 

The email contained the usual set of questions. How would I rate the service I received? How likely was I to recommend them to a friend? But I've gotta say, getting asked how satisfied I was with the care provided by a pediatric allergist was baffling to me. My child received necessary medical treatment at a speed commensurate with its urgency. It was fine. What aspect of it could I possibly evaluate? I don't need to express an opinion about the chairs in the waiting room.

The whole thing vexed me enough that I started to really notice customer satisfaction surveys — and, as I'm sure you've seen, they are everywhere. It seems like every interaction I have with a money-involving organization also comes with a polite request for my feedback. A restaurant. A hotel. A shop. The insurance company that wasted my time. Every time I buy something or interact with someone: another survey. While I was pitching this story to my editor, his email dinged. A survey! How'd we do? How long was your wait time? How satisfied were you with the knowledge and professionalism of the salesperson who served you?

Most of the time I'm not asked to evaluate the quality of a product or service. I'm asked to evaluate the experience, the meta-consumption that drives our hyperactive service economy. A tsunami of surveys has turned us all into optimization analysts for multibillion-dollar companies. Bad enough I'm providing free labor to help a transnational corporation improve its share price or "evaluate" a low-paid, overworked, nonunion employee. It's more than annoying. I'm starting to suspect it's unethical.

This isn't just my imagination. We're all getting more requests for feedback. Global spending on market research has doubled since 2016, to more than $80 billion a year. More than half of that money is doled out in the United States, and a fifth of it — $16 billion! — is devoted to customer surveys.

Consider the experience of Qualtrics, one of the largest survey-data companies. In the past year, the firm has analyzed 1.6 billion survey responses. That's a 4% increase over the prior year — and responses for the first quarter of 2024 were 10% above what Qualtrics projected. Its analysis of "non-structured data," which is to say customer-service phone calls and online chatter, hit 2 billion conversations last year. This year the company projects an increase of 62%.

Why are there suddenly so many surveys? Because people have so many options today that they're not bothering to complain when something sucks. They just move on to a different, equally accessible website. A company pisses them off or disappoints them, and poof! They're gone.

"When a customer has a poor experience, 10% fewer of them are telling the company about it than they did in 2021," says Brad Anderson, the president of product and engineering at Qualtrics. "What's happening is they're just switching." So companies are using surveys in a bid to hang on to those unloyal customers. After all, it's way more expensive to acquire a new customer than keep an old one.

The tricky part is that marketing research has shown that the objective quality of a product, its nominal goodness, matters less than whether it meets customer expectations. "Quality," as one research paper put it, "is what the customer says it is." Customer satisfaction correlates with profitability, with share price, with success.

Now, to get all philosophical for a moment, what even is satisfaction, anyway? People tried for decades to figure that out. Then, in 2004, a Bain consultant named Fred Reichheld came up with an answer. He called it the Net Promoter Score.

Before I tell you what that is, let me ask you a question: On a scale of 1 to 10, how likely would you be to recommend this article to someone else?

That's it. That's what the Net Promoter Score does. If you'd recommend something to someone else, it has by definition satisfied you. Mystery solved.
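
In its standard form, the score comes from a 0-to-10 recommendation question: respondents answering 9 or 10 count as promoters, 0 through 6 as detractors, and the NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch, with invented ratings for illustration:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings) / n
    detractors = sum(r <= 6 for r in ratings) / n
    return 100 * (promoters - detractors)

print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10, 9, 5]))  # -> 20.0
```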

The NPS came along at the same time as the widening use of the internet and social media, which made it very easy to ask about. Phone calls, snail mail — that stuff is time-consuming and expensive. But surveys sent via email and text are fast and cheap. 

"People don't choose based on objective quality anymore," says one marketing expert.

In American marketing, NPS became an unstoppable craze. Other metrics followed: the Customer Satisfaction Score, the Customer Effort Score, measurements of the entire Customer Experience. A survey, or monitoring calls to customer service, could reveal loyalty, intent to buy again, the specific parts of the "customer journey" that were most pleasant. "People don't choose based on objective quality anymore," says Nick Lee, a marketing professor at the Warwick Business School. "Value is added by way more than what we would call objective product features."

At the peak of the so-called sharing economy, customer surveys were all-powerful. They went both ways: Suddenly, Uber drivers and Uber riders both had star ratings to care about. Customer surveys were going to fix asymmetrical marketplace information. But of course, the whole thing was frothier than a five-star milkshake. By the late 2010s it was becoming clear that all those reviews and ratings were getting less useful over time. They were subject, it turned out, to "reputation inflation." Eventually everything gets four stars out of five.

The glut of customer surveys has created an additional problem for marketers. Email surveys are like the robocalls of old: You hit delete without even looking at them. "People receive so many survey requests that they're more likely to refuse to participate in any survey," says James Wagner, a researcher at the University of Michigan's Institute for Social Research. It's called oversurveying, and it makes people less likely to respond. Which means that, for statistical validity, companies have to send out more surveys. Which lowers the response rate even further, which means that companies have to send out yet more surveys, in a never-ending doom loop. On a scale of 1 to 5, customer satisfaction with customer-satisfaction surveys is headed to zero.

In reality, nobody's even sure these surveys are measuring the right thing. "Companies regularly collect customer-satisfaction measures, Net Promoter Scores, things like that," says Christine Moorman, a business-administration professor at Duke University who heads up a semiannual survey of hundreds of chief marketing officers. "But then the question is what do they do with it, and to what strategic ends? Most of them are doing it out of habit, not because they're thinking about the larger strategic questions they have."

Big survey companies don't just dump a giant Excel spreadsheet on their clients and send them an invoice. They offer sophisticated analyses of the data they collect. But unless those numbers are tied to possible changes the client might make, what's the point? "It's a huge arms race," says Lee, the Warwick marketing prof. "If you can give me more data rather than less data, I want more data. But the business model as to whether that data is valuable, it's sometimes questionable. Because people don't know what to do with the data, and they let the agency tell them what it says." Just because a company gets a bunch of survey results doesn't mean it knows what to do with them.

Customer surveys aren't just bad for companies. After reading the copious research on how surveys are actually used, I've come to the conclusion that they're even worse for us, the oversurveyed customers.

Any time a scientist wants to do research involving humans, it's a whole thing. That always comes with risks, from exposing people to an untested drug to simply wasting their time. To get approved by an Institutional Review Board, the potential results have to be worth the risks, to provide some benefit to humanity. That's called "equipoise." And if a proposed experiment on living things doesn't have it, you ain't supposed to do the experiment. 


Perhaps customer surveys should be evaluated for equipoise. If the companies actually use the data to improve a product or experience, that's good for us subjects. But what if it's used only to improve the company's share value or profitability? Or to discipline or fire employees? That only helps the company. And that doesn't even take into account whether I, the surveyed one, gave my consent for data I provided to be used in that way — a key to ethical research. 

"Maybe we should have to pre-tell people what we're going to do with the data before we get it," Lee says. "That would be a way to stop companies from doing it indiscriminately." But he knows that's a nonstarter. "We'd be adding bureaucracy into the system. Never a popular thing to do with companies."

Worse, for vast swaths of services, you and I are the last people anyone should be soliciting opinions from. Things like doctor visits, legal services, or school classes are "pretty hard for the user to evaluate," Lee says. "We ask for customer feedback on these things all the time, but it's hard for a customer to give you immediate feedback, because a customer doesn't know what quality is yet." The college class you hated because it was hard, and at 8 a.m., might turn out to be your favorite academic memory and the foundation for your professional skill set 15 years later. Whether a visit to the mechanic was pleasant doesn't tell you how well they fixed your car. You have to drive around with your new drive shaft awhile to know whether you got shafted. 

Lee has unpublished data, which hasn't been peer-reviewed, comparing hospital performance in Britain's National Health Service with surveys of both patients and employees. "It's not surprising that the best hospitals have the best patient feedback and best worker feedback," he says. But what is surprising is that worker feedback, not customer responses, correlates most closely with quality. Users, it turns out, aren't very good at telling what's what.

You know what is good at sorting through tons of data? Artificial intelligence. As email surveys get lower and lower response rates, consumer marketing companies have begun to tout their acumen at applying AI to the unstructured verbiage of online reviews, social-media posts, and call-center transcripts. Maybe these new tools, based on large language models, will be able to coax better responses from oversurveyed consumers. "It's the ability to be able to detect when there's a low-quality answer and come back and ask the customer for more data," Anderson says. "When we ask the second question, 40% of the time the customer engages and provides more data. The number of syllables in the second response increases by 9x." 

Now, if I get a callback from a customer-survey robot, there's a good chance most of those additional syllables will be profane. How will I rate my experience getting interviewed by an AI? It might get more actionable data out of me than that email from my kid's allergist did. But I'm pretty sure I won't recommend it to a friend.

Adam Rogers is a senior correspondent at Business Insider.




Helping women get better sleep by calming the relentless 'to-do lists' in their heads

Yuki Noguchi


Katie Krimitsos is among the majority of American women who have trouble getting healthy sleep, according to a new Gallup survey. Krimitsos launched a podcast called Sleep Meditation for Women to offer some help. (Photo: Natalie Champa Jennings, courtesy of Katie Krimitsos)

When Katie Krimitsos lies awake watching sleepless hours tick by, it's almost always because her mind is wrestling with a mental checklist of things she has to do. In high school, that was made up of homework, tests or a big upcoming sports game.

"I would be wide awake, just my brain completely spinning in chaos until two in the morning," says Krimitsos.

There were periods in adulthood, too, when sleep wouldn't come easily, like when she started a podcasting company in Tampa, or nursed her first daughter eight years ago. "I was already very used to the grainy eyes," she says.

Now 43, Krimitsos says in recent years she found that mounting worries brought those sleepless spells more often. Her mind would spin through "a million, gazillion" details of running a company and a family: paying the electric bill, making dinner and dentist appointments, monitoring the pets' food supply or her parents' health checkups. This checklist never, ever shrank, despite her best efforts, and perpetually chased away her sleep.

"So we feel like there are these enormous boulders that we are carrying on our shoulders that we walk into the bedroom with," she says. "And that's what we're laying down with."

By "we," Krimitsos means herself and the many other women she talks to or works with who complain of fatigue.

Women are one of the most sleep-troubled demographics, according to a recent Gallup survey that found sleep patterns of Americans deteriorating rapidly over the past decade.

"When you look in particular at adult women under the age of 50, that's the group where we're seeing the most steep movement in terms of their rate of sleeping less or feeling less satisfied with their sleep and also their rate of stress," says Gallup senior researcher Sarah Fioroni.

Overall, Americans' sleep is at an all-time low, in terms of both quantity and quality.

A majority – 57% – now say they could use more sleep, which is a big jump from a decade ago. It's an acceleration of an ongoing trend, according to the survey. In 1942, 59% of Americans said that they slept 8 hours or more; today, that applies to only 26% of Americans. One in five people, also an all-time high, now sleep fewer than 5 hours a day.


"If you have poor sleep, then it's all things bad," says Gina Marie Mathew, a post-doctoral sleep researcher at Stony Brook Medicine in New York. The Gallup survey did not cite reasons for the rapid decline, but Mathew says her research shows that smartphones keep us — and especially teenagers — up later.

She says sleep, as well as diet and exercise, is considered one of the three pillars of health. Yet American culture devalues rest.

"In terms of structural and policy change, we need to recognize that a lot of these systems that are in place are not conducive to women in particular getting enough sleep or getting the sleep that they need," she says, arguing things like paid family leave and flexible work hours might help women sleep more, and better.

No one person can change a culture that discourages sleep. But when faced with her own sleeplessness, Tampa mom Katie Krimitsos started a podcast called Sleep Meditation for Women, a soothing series of episodes in which she acknowledges and tries to calm the stresses typical of many women.


That podcast alone averages about a million unique listeners a month, and is one of 20 podcasts produced by Krimitsos's firm, Women's Meditation Network.

"Seven of those 20 podcasts are dedicated to sleep in some way, and they make up for 50% of my listenership," Krimitsos notes. "So yeah, it's the biggest pain point."

Krimitsos says she thinks women bear the burdens of a pace of life that keeps accelerating. "Our interpretation of how fast life should be and what we should 'accomplish' or have or do has exponentially increased," she says.

She only started sleeping better, she says, when she deliberately cut back on activities and commitments, both for herself and her two kids. "I feel more satisfied at the end of the day. I feel more fulfilled and I feel more willing to allow things that are not complete to let go."



Title: Continual Learning of Large Language Models: A Comprehensive Survey

Abstract: The recent success of large language models (LLMs) trained on static, pre-collected, general datasets has sparked numerous research directions and applications. One such direction addresses the non-trivial challenge of integrating pre-trained LLMs into dynamic data distributions, task structures, and user preferences. Pre-trained LLMs, when tailored for specific needs, often experience significant performance degradation in previous knowledge domains -- a phenomenon known as "catastrophic forgetting". While extensively studied in the continual learning (CL) community, it presents new manifestations in the realm of LLMs. In this survey, we provide a comprehensive overview of the current research progress on LLMs within the context of CL. This survey is structured into four main sections: we first describe an overview of continually learning LLMs, consisting of two directions of continuity: vertical continuity (or vertical continual learning), i.e., continual adaptation from general to specific capabilities, and horizontal continuity (or horizontal continual learning), i.e., continual adaptation across time and domains (Section 3). We then summarize three stages of learning LLMs in the context of modern CL: Continual Pre-Training (CPT), Domain-Adaptive Pre-training (DAP), and Continual Fine-Tuning (CFT) (Section 4). Then we provide an overview of evaluation protocols for continual learning with LLMs, along with the current available data sources (Section 5). Finally, we discuss intriguing questions pertaining to continual learning for LLMs (Section 6). The full list of papers examined in this survey is available at this https URL .


Opportunities for industry leaders as new travelers take to the skies

Travel fell sharply during the COVID-19 pandemic—airline revenues dropped by 60 percent in 2020, and air travel and tourism are not expected to return to 2019 levels before 2024. 1 “ Back to the future? Airline sector poised for change post-COVID-19 ,” McKinsey, April 2, 2021; “ What will it take to go from ‘travel shock’ to surge? ” McKinsey, November 23, 2021. While this downturn is worrisome, it is likely to be temporary. McKinsey’s latest survey of more than 5,500 air travelers around the world shows that the aviation industry faces an even bigger challenge: sustainability.

The survey results indicate emerging trends in passenger priorities:

About the survey

We asked about 5,500 people in 13 countries, half of them women, to answer 36 questions in July 2021. Each had taken one or more flights in the previous 12 months. More than 25 percent took at least half of their flights for business reasons; 5 percent had taken more than eight flights in the previous 24 months. They ranged in age from 18 to over 75 and hailed from the US and Canada, the UK, Sweden, Spain, Poland, Germany, Saudi Arabia, India, China, Japan, Australia, and Brazil.

Topics included concerns about climate change and carbon emissions, carbon reduction measures, and factors influencing tourism stays and activities.

We compared the results to those of a survey asking the same questions that we conducted in July 2019.

  • Most passengers understand that aviation has a significant impact on the environment. Emissions are now the top concern of respondents in 11 of the 13 countries polled, up from four in the 2019 survey. More than half of respondents said they’re “really worried” about climate change, and that aviation should become carbon neutral in the future.
  • Travelers continue to prioritize price and connections over sustainability in booking decisions, for now. This may be partly because no airline has built a business system or brand promise on sustainability. Also, some consumers may currently be less concerned about their own impact because they’re flying less frequently in the pandemic. That said, almost 40 percent of travelers globally are now willing to pay at least two percent more for carbon-neutral tickets, or about $20 for a $1,000 round-trip, and 36 percent plan to fly less to reduce their climate impact.
  • Attitudes and preferences vary widely among countries and customer segments. Around 60 percent of travelers in Spain are willing to pay more for carbon-neutral flights, for example, compared to nine percent in India and two percent in Japan.

This article outlines steps that airlines, airports, and their suppliers could take to respond to changing attitudes and preferences. The survey findings suggest that airlines may need to begin by gaining a deeper understanding of changes across heterogeneous customer segments and geographies. With those insights in hand, they could tailor their communications, products, and services to differentiate their brands, build awareness among each passenger segment, and better connect with customers.

Would you like to learn more about our Travel, Logistics & Infrastructure Practice ?

The survey findings point to fundamental and ongoing changes in consumer behavior.

After a decade of steady growth in passenger traffic, air travel was hit hard by the pandemic. International air travel immediately fell by almost 100 percent, and overall bookings declined by more than 60 percent for 2020, according to Airports Council International. At the time of writing, revenue passenger miles have returned to close to pre-pandemic levels in the United States, but still lag behind in other markets. 2 “COVID-19: October 2021 traffic data,” International Air Transport Association (IATA), December 8, 2021. In its October 2021 report, before the Omicron variant emerged, the International Air Transport Association (IATA) forecast that the industry’s losses would be around $52 billion in 2021 and $12 billion in 2022. 3 “Economic performance of the airline industry,” IATA, October 4, 2021.

Furthermore, travelers’ preferences and behaviors have changed sharply during the pandemic, particularly around health and safety requirements. An Ipsos survey for the World Economic Forum found that, on average, three in four adults across 28 countries agreed that COVID-19 vaccine passports should be required of travelers to enter their country and that they would be effective in making travel and large events safe. 4 “Global public backs COVID-19 vaccine passports for international travel,” Ipsos, April 28, 2021. And a 2021 survey by Expedia Group found that people buying plane tickets now care more about health, safety, and flexibility than previously. But, there is also renewed interest in travel as nearly one in five travelers expected travel to be the thing they spent the most on in 2021, one in three had larger travel budgets for the year, and many were looking for new experiences such as once-in-a-lifetime trips. 5 “New research: How travelers are making decisions for the second half of 2021,” Skift, August 26, 2021.

Comparing McKinsey’s 2019 and 2021 survey results, sustainability remains a priority as respondents show similar levels of concern about climate change, continue to believe that aviation must become carbon neutral, and want their governments to step in to reduce airline emissions. Some changes were more striking. The share of respondents who say they plan to fly less to minimize their environmental impact rose five percentage points to 36 percent. In 2021, half of all respondents said they want to fly less after the pandemic. Changes in opinion varied across markets. Passengers in the UK, US, and Saudi Arabia, for example, were more likely to feel “flygskam” (shame about flying), while those in Spain, Poland, and Australia felt significantly less guilty about flying.

It is worth tracking these trends in each market and demographic, because passengers’ experiences and opinions are increasingly relevant: passengers spend far more time online, increasingly trust each other’s recommendations more than traditional marketing, and can reshape brand perceptions faster than ever. 6 “ Understanding the ever-evolving, always-surprising consumer ,” McKinsey, August 31, 2021. In some markets consumers may reward airlines that meet rising demands for environmental sustainability—and punish those who fall behind.

The Australian airline Qantas may be acting on a similar belief. In November 2021, it announced a new “green tier” in its loyalty program. The initiative, based on feedback from passengers, is “designed to encourage, and recognize the airline’s 13 million frequent flyers for doing things like offsetting their flights, staying in eco-hotels, walking to work, and installing solar panels at home”. Qantas states that it is one of the largest private-sector buyers of Australian carbon credits, and it will use program funds to support more conservation and environmental projects. 7 “Qantas frequent flyers to be rewarded for being sustainable,” Qantas media release, November 26, 2021.

Given these shifting trends, it may be helpful for all industry stakeholders to maintain a deep and up-to-date understanding of consumer segments in each market that they serve. Three main findings about today’s travelers emerged from the 2021 survey:

Finding 1: Most travelers now have concerns about climate change and carbon emissions—and many are prepared to act on these concerns

Concern about carbon emissions from aviation did not rise much during the pandemic, probably in part because air travel declined so sharply. About 56 percent of respondents said they were worried about climate change, and 54 percent said aviation should “definitely become carbon neutral” in the future.

While these numbers have increased only one or two percentage points since 2019, the share of respondents who rank CO2 emissions as their top concern about aviation—ahead of concerns such as noise pollution and mass tourism—rose by nine percentage points to 34 percent. More than 30 percent of respondents have paid to offset their CO2 emissions from air travel.

Finding 2: Price and connections still matter much more than emissions to most travelers

Of the nine major factors travelers consider when booking a flight, carbon emissions consistently rank sixth in importance across customer segments. This may be partly because most airline marketing centers on low cost or superior service, and pricing and revenue management systems are optimized for price and connections. Most booking websites allow prospective travelers to sort by price and number of connections, for example, but not by carbon footprint. Google Flights has taken a first step, showing average CO2 emissions per flight and improving transparency for travelers.
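Mechanically, exposing emissions in the booking flow is a small change: the same ranking logic used for price could rank itineraries by a per-passenger emissions estimate. The snippet below is purely illustrative; the itinerary records, field names, and figures are invented for the example and do not come from any airline's or booking site's data.

```python
from dataclasses import dataclass

@dataclass
class Itinerary:
    carrier: str
    price_usd: float
    connections: int
    est_co2_kg: float  # hypothetical per-passenger emissions estimate

options = [
    Itinerary("Carrier A", 420.0, 1, 310.0),
    Itinerary("Carrier B", 395.0, 2, 365.0),
    Itinerary("Carrier C", 455.0, 0, 248.0),
]

# Typical booking sites sort by price or connections ...
by_price = sorted(options, key=lambda o: o.price_usd)

# ... but the same mechanism could rank by estimated carbon footprint.
by_emissions = sorted(options, key=lambda o: o.est_co2_kg)

for o in by_emissions:
    print(f"{o.carrier}: {o.est_co2_kg:.0f} kg CO2, ${o.price_usd:.0f}, {o.connections} stop(s)")
```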

Travelers might begin to make different choices if emissions featured more prominently in the booking process—particularly if more airlines offered CO2 reduction measures that delivered genuine environmental impact.

Finding 3: Attitudes vary widely by demographics and geography

Beliefs about the seriousness of climate change, and how to respond to it, vary across demographics and geographies (exhibit). Although younger people are generally more aware of the predicted consequences of climate change, older cohorts have become more concerned about climate change since the 2019 survey. In some countries, large majorities see climate change as a major threat, while that represents a minority view in other countries.

The survey shows that frequent travelers feel slightly more shame about flying than other respondents—37 percent compared to 30 percent—but show a much lower intention to reduce their air travel to minimize their climate impact, at 19 percent compared to 38 percent.

According to Pew Research, more than 80 percent of people in Greece, Spain, France, and South Korea believe climate change is a major threat, compared to around 40 percent of those in Russia, Nigeria, and Israel. 8 “A look at how people around the world view climate change,” Pew Research April 18, 2019. According to 2019 polling by the Washington Post and Kaiser Family Foundation, more than three-quarters of Americans believe it represents a major problem or a crisis—but fewer than half are willing to pay to help address it. 9 Washington Post-Kaiser Family Foundation climate change survey, July 9 to August 5, 2019.

These numbers may change quickly in the next few years as discussions about climate change become less abstract: oceans are rising, and storms, forest fires, and droughts are becoming more severe. Instead of being one topic of concern among many, millions more people around the world may come to see climate change as today’s greatest challenge.

This shift is apparent in government action, especially in mature economies. The US, for example, announced its intention to exit the Paris Agreement in June 2017 and formally withdrew in November 2020, but rejoined in early 2021 and set a new 2030 emissions-reduction target that April. 10 “Climate change: US formally withdraws from Paris agreement,” BBC, November 4, 2020; “President Biden sets 2030 greenhouse gas pollution reduction target,” White House fact sheet, April 22, 2021. And in September 2021, the White House set a goal for the country to produce 3 billion gallons of sustainable aircraft fuel annually by 2030—up from about 4.5 million gallons produced in the US in 2020—which would cut carbon emissions from flying by 20 percent compared with taking no action. 11 “Biden administration advances the future of sustainable fuels in American aviation,” White House fact sheet, September 9, 2021.


Travelers’ attitudes and behaviors appear to be in flux, and will likely continue to change. Depending on the world’s progress in preventing and treating COVID-19, the industry will likely take at least a couple of years to recover from the downdrafts caused by the pandemic.

In this unique moment in aviation history, airlines may be able to communicate in new ways to inspire passengers to join the fight against climate change. Based on McKinsey’s experience in aviation and other industries around the world, there may be an opportunity for carriers to make it “easy to do good”. When following such an approach, experience shows that customers are drawn to straightforward language, demonstrations of what the industry is doing in this area, and the tangible benefits of those efforts. The most compelling stories are positive and connect with customers’ emotional needs.

As in the early days of travel advertising, airlines could reinforce the idea that the journey is the destination—that “getting there is half the fun.” By inviting customers to get involved in creating a greener future and own the solution, they could forge new partnerships and deepen loyalty.

Actual progress will be essential; organizations that talk about sustainability without demonstrating action may quickly be held to account. Simply keeping pace with trends or regulatory requirements will offer no advantages. Airlines that move boldly, for example by replacing a loyalty program with a “planet-positive” scheme rather than merely modifying it, will stand out from competitors.

The survey results and McKinsey’s work in the industry lead us to believe that the market is ready for a forward-thinking airline to chart a route to a cleaner future for the industry. Leading airlines that build a business strategy and brand promise on sustainability will likely attract a growing share of business and leisure travelers, fresh capital and talent, and new allies across the industry, government, and society at large.

In the years ahead, more customers will be willing to pay for sustainability, particularly if airlines can engage them with interesting approaches, such as gamification in frequent flyer programs, opt-out rather than opt-in offsets, “green fast lanes” for check-ins and security control, and customized emission-reduction offers. Decarbonization could become the standard to reach and maintain next-tier levels in loyalty programs. Passengers will be able to join the global decarbonization team and transform flight shame into flight pride.

Like many private flyers, corporate customers will look for ways to mitigate their CO2 footprint. Passenger and cargo airlines could craft attractive decarbonization programs to engage the rising numbers of corporates aiming to significantly reduce their scope 3 emissions from air transport.

No single set of approaches will be effective in every geography or with every passenger segment. But airlines with a deep understanding of their customers’ changing needs and desires will continue to outperform those that don’t. Such organizations could recruit more of their passengers to the decarbonization team while protecting their brands, the future of aviation, and the planet itself.

Mishal Ahmad is a manager in McKinsey’s New Jersey office, Frederik Franz is a senior associate in the Berlin office, Tomas Nauclér is a senior partner in the Stockholm office, and Daniel Riefer is an associate partner in the Munich office.

The authors would like to thank Joost Krämer for his contributions to this article.



Following Up on Employee Surveys: A Conceptual Framework and Systematic Review

Lena-Alyeska Huebner

1 Wilhelm Wundt Institute of Psychology, Leipzig University, Leipzig, Germany

2 Volkswagen AG, Wolfsburg, Germany

Hannes Zacher

Associated Data

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

Employee surveys are often used to support organizational development (OD), and particularly the follow-up process after surveys, including action planning, is important. Nevertheless, this process is oftentimes neglected in practice, and research on it is limited as well. In this article, we first define the employee survey follow-up process and differentiate it from other common feedback practices. Second, we develop a comprehensive conceptual framework that integrates the relevant variables of this process. Third, we describe the methods and results of a systematic review that synthesizes the literature on the follow-up process based on the conceptual framework with the purpose of discussing remaining research gaps. Overall, this paper contributes to a better understanding of the organizational and human factors that affect this process. This is useful for practitioners, as it provides guidance for the successful implementation of this human resource practice. For example, research suggests that it is important to enable managers as change agents and to provide them with sufficient resources.

Introduction

Employee surveys are widely used in organizations today, and their popularity continues to grow ( Church and Waclawski, 2017 ). Their implementation varies from annual surveys to surveying in shorter intermittent time intervals (e.g., “pulse surveys;” Welbourne, 2016 ). The purposes of employee surveys include, but are not limited to, enhancing communication between management and staff, giving employees a voice, reducing social distance between management and employees, and intervention/organizational development (OD) ( Hartley, 2001 ; Kraut, 2006 ). The implementation of an employee survey is not limited to only one of these purposes, but can serve several of them simultaneously ( Burke et al., 1996 ). The success of employee surveys for OD depends heavily on the implementation of a proper follow-up process, that is, the use of the collected data for the initiation of organizational changes ( Falletta and Combs, 2002 ).

Despite its importance, the employee survey follow-up process is often neglected, limiting the effectiveness of this widely used management tool ( De Waal, 2014 ). Many times, organizations view the employee survey process as completed once the data have been collected, consequently failing to properly follow up on the results and use them as a tool to drive change ( Church et al., 2012 ). Similarly, the literature on the employee survey follow-up process is scarce, as this stage receives less attention from researchers than the actual surveying process, which has been examined in numerous studies ( Fraser et al., 2009 ). For example, research has investigated why surveys are conducted at all and what types of items they include ( Sugheir et al., 2011 ), as well as the issue of social desirability in survey responses ( Keiser and Payne, 2019 ). In addition, the sparse literature on the employee survey follow-up process is conceptually fragmented, published across various academic disciplines, and uses inconsistent labels (e.g., employee survey follow-up, feedback intervention). This is especially disadvantageous for practitioners, as it makes it difficult for them to locate reliable evidence-based research, even though employee surveys are a common OD technique ( Falletta and Combs, 2002 ). Also, practitioners lack an extensive overview of relevant factors to consider during implementation, as no comprehensive theoretical model of the process exists. Lastly, there have been reviews of survey feedback interventions, or reviews that included them as one of several OD practices, but the most recent work was published over 30 years ago (see Neuman et al., 1989 ). More research on the topic has been conducted since then, yet we lack guidance on which variables and domains future studies in this line of research should examine. Hence, the lack of an updated review of the employee survey follow-up process literature prevents systematic theoretical and empirical research on this important topic and practical progress in this area.

To advance this area of research and practice, we conducted a systematic literature review ( Daniels, 2018 ; Siddaway et al., 2019 ) on the employee survey follow-up process. First, we define employee surveys, conceptually integrate them into the existing feedback and change management/OD literature, and differentiate them from other feedback practices, such as 360 degree feedback. Describing the nomological network of employee surveys is important because past literature on the topic has been mainly on “survey feedback interventions,” rather than specifically the employee survey follow-up process. Also, differentiating this process from other feedback practices (e.g., 360 degree feedback) demonstrates the necessity of treating this concept as a distinct human resource practice even though it shows similarities to other feedback processes. Second, we developed a conceptual framework to depict the relationships between the relevant variables for the employee survey follow-up process as a change tool. Third, we systematically reviewed and evaluated the literature on the follow-up of employee surveys based on the components of the comprehensive conceptual model. With this approach, the present systematic review explores the following research question: Which variables of our conceptual model have been sufficiently informed by past research and which variables require future research? Finally, we discuss the implications of our review for future research and offer several recommendations for organizational practice.

Overall, our conceptual framework and systematic review contribute to the organizational change and development literature and to practice in four important ways. First, based on a conceptual integration and framework, our review highlights which variables research in this area has investigated, and which variables have been neglected and require further attention. Second, the employee survey follow-up process can generally be categorized as a survey feedback intervention, but is nevertheless a distinct process that deserves focused attention. For example, in contrast to reviews on survey feedback interventions, this review excludes studies conducted with student samples (e.g., Brown, 1972 ) but includes other empirical research conducted on the topic, such as cross-sectional work (e.g., Church et al., 1995 ) or qualitative interviews with survey practitioners (e.g., Gable et al., 2010 ). Third, past reviews on survey feedback are outdated, as more research has been conducted on the topic since their publication. Hence, our review includes all relevant literature published to date. Fourth, the results of our review are useful for practitioners as they provide an integrated overview of the current state of knowledge on the employee survey follow-up process and of the factors that should be taken into account for the successful implementation of this human resource practice.

Theoretical Background

We begin by conceptually integrating the employee survey follow-up process into the literature on related and overarching topics, including feedback, feedback interventions, survey feedback interventions, and other formats (see Figure 1 ).

Figure 1. The nomological network of employee surveys. 360 degree-, multisource-, and upward feedback practices are by definition also survey feedback interventions, but generally not explicitly labeled as such in the literature, hence the dotted line.

In the broadest sense, an employee survey is a form of feedback, defined as a communication process in which a sender sends a message to a recipient, with the message containing information about the recipient ( Ilgen et al., 1979 ). The term feedback is poorly defined and used inconsistently in the literature ( Besieux, 2017 ). It has been conceptualized and labeled in many different ways, for example as process feedback (how) and performance feedback (what) ( Besieux, 2017 ), as feedback to the individual or the group ( Nadler, 1979 ), or as cognitive (how and why) and outcome feedback (what) ( Jacoby et al., 1984 ). This has led to a plethora of literature on feedback, for example on how to give effective feedback (e.g., Aguinis et al., 2012 ) or on recipients’ reactions to feedback (e.g., Fedor et al., 1989 ).

Feedback Interventions

When feedback is used as an intentional intervention by providing information about a recipient’s task performance and actions being taken by an agent to intervene, this is called a feedback intervention ( Kluger and DeNisi, 1996 ). A meta-analysis on feedback interventions by Kluger and DeNisi (1996) showed large variability in their effects, but there was also large variability in the types of feedback interventions included in the analyses, for example feedback on memory tasks, test performance, and physical tasks.

Feedback interventions have also been considered in the change literature. Guzzo et al. (1985) examined 11 different types of organizational interventions, with feedback interventions being one of them. They found positive effects for this type of intervention practice, yet their scope was broad, too, in that they also included performance appraisal techniques and access to performance data. Nadler’s review ( 1979 ) of experimental research on feedback regarding task group behavior, on the other hand, found conflicting results for the effectiveness of feedback interventions to groups. However, feedback was again considered in a broad sense, including feedback for coding or sorting tasks, problem solving exercises, or group discussions.

Survey Feedback Interventions

When feedback is solicited through the medium of surveying and transferred back to relevant stakeholders for the purpose of diagnosis and intervention, it is called survey feedback (intervention) ( Nadler, 1976 ). Throughout the industrial and organizational (IO) psychology literature, this is generally referred to as “survey feedback,” whereas such interventions can also be applied in different contexts, as for example education or research (e.g., Gehlbach et al., 2018 ). In the work context, survey feedback interventions entail systematic data collection and feeding the results back to organizational members ( Nadler, 1976 ).

Studies on survey feedback interventions are scattered across the OD literature. Several reviews and meta-analyses have included them as one of many OD interventions. For example, Friedlander and Brown (1974) conducted a review on several different approaches to OD, with survey feedback being one of them. They summarized ten survey feedback intervention studies and concluded that this technique can have positive effects on the attitudes of those involved. Shortly after, Margulies et al. (1977) summarized six studies relevant to this type of OD intervention and concluded that more research was needed on this technique to understand under which circumstances it produces the most benefits. A few years later, Porras and Berg (1978) and Porras (1979) reviewed four survey feedback intervention studies as one of several different OD techniques, but did not find this technique to be superior to others. Another example of survey feedback relevant for the OD literature is a meta-analysis by Neuman et al. (1989) . The authors identified six survey feedback intervention studies among 84 studies implementing human process approaches to OD, that is, techniques that attempt to achieve improved organizational performance via improved human functioning. Indeed, the human process techniques were found to be more effective than techno-structural interventions (i.e., modifications to work or the work environment) in changing organizational attitudes. Lastly, Hopkins (1982) reviewed the use of survey feedback in educational settings and concluded that it is generally useful as a tool in educational organizations. In summary, there is much research on survey feedback interventions, but previous reviews and meta-analyses on this topic have shown mixed results. The majority of authors concluded that more research is needed on this topic, and this conclusion still holds today.

Other Types of Feedback Practices

Other related human resource practices used for performance appraisal, such as 360 degree, multisource, and upward feedback, also rely on systematic data collection and feeding the results back to organizational members ( DeNisi and Kluger, 2000 ). Due to the necessity of collecting anonymous feedback, the data for these practices are usually collected with surveys ( Bracken et al., 2001 ), similarly to employee surveys. Therefore, by definition, these practices are survey feedback interventions, but are usually not labeled as such throughout the literature (see dotted line in Figure 1 ). Also, as the following discussion will show, the specific processes of these practices differ from those of employee surveys.

360 Degree Feedback

One popular practice of performance management is 360 degree feedback, which is a type of performance appraisal that solicits feedback from several sources, mostly for employees in management positions ( Atwater et al., 2007 ). As the name implies, the vertical and horizontal feedback that is collected from multiple rating sources can be conceptualized as a circle. A full circle of feedback constitutes feedback from superiors and subordinates (vertical feedback), peers (horizontal feedback), and self-ratings ( Foster and Law, 2006 ). The goal is to provide feedback to a single person regarding their management qualities ( Vukotich, 2014 ). The two general frameworks in which 360 degree feedback programs are implemented are either for developmental purposes of the rated manager or for administrative purposes, such as promotions ( Hannum, 2007 ).

Generally though, only a small group of people provides feedback. Usually, these are individuals capable of making statements about leadership behaviors because they have worked closely with the rated person. However, the effectiveness of the process is rather limited when feedback recipients are left to act on it without support, which is why it is recommended to have trained facilitators or consultants deliver the anonymous feedback and support managers in understanding the data ( Nowack and Mashihi, 2012 ; Vukotich, 2014 ).

Multisource Feedback

The term multisource feedback (MSF) is often used interchangeably with 360 degree feedback, even though this is not accurate ( Foster and Law, 2006 ). MSF involves more than one source of feedback (e.g., self-ratings and peer-ratings), but it need not involve the full circle of 360 degree feedback. Hence, 360 degree feedback is a type of MSF, but MSF is not necessarily 360 degree feedback ( Foster and Law, 2006 ). However, MSF programs share similar processes with 360 degree feedback initiatives and generally also provide feedback to a single recipient, most often a leader ( Atwater et al., 2007 ). They can also be implemented for developmental or administrative purposes, for example as part of performance appraisal processes ( Timmreck and Bracken, 1997 ).

Upward Feedback

Upward feedback is a narrower form of 360 degree feedback and MSF. It is the vertical feedback derived from subordinates with the purpose of appraising a manager’s performance ( van Dierendonck et al., 2007 ). Upward feedback programs typically include self-ratings of leader behaviors that can then be compared to subordinates’ ratings to help feedback recipients identify development needs and subsequently improve their leadership skills. Similar to 360 degree feedback or MSF programs, upward feedback programs aim to support leadership development or administrative decision-making and entail comparable processes ( Atwater et al., 2000 ).

Comparing Other Feedback Practices to Employee Surveys

Employee surveys are similar to the above-mentioned human resource feedback practices, but are nevertheless distinct in their processes and goals. The overlap is greatest when an employee survey contains items on leadership behavior, specifically that of direct leaders. In such a case, the employee survey functions as upward feedback to managers in addition to the assessment of general work conditions ( Church and Oliver, 2006 ). The most prominent differences between the various human resource feedback practices and the employee survey are the type of feedback that is solicited and the handling of the data following the survey. Employee surveys only utilize vertical feedback, meaning feedback is carried up the organizational hierarchy starting at the bottom. They entail formal feedback derived from large groups of or all employees in an organization (ideally at least from a representative sample), and the results are aimed at evaluating general work conditions. The goal is therefore not to evaluate a specific employee’s leadership skills, but to obtain feedback from a wide range of employees on more general work-related topics ( Bungard et al., 2007 ).

The employee survey follow-up process then entails using the group-level feedback data for organizational change initiatives. Some organizations choose to implement top–down initiatives in reaction to survey results in which management or other stakeholders review the data at a higher and aggregated level than that of single teams. They then decide on overarching action plans for the whole company or certain departments, such as the implementation of new performance appraisal systems, overhauling internal communication, or changing the company strategy ( Linke, 2018 ). Such top–down approaches are not the focus of this review, but the interested reader is referred to different case study descriptions (see e.g., Chesler and Flanders, 1967 ; Goldberg and Gordon, 1978 ; Rollins, 1994 ; Falletta and Combs, 2002 ; Feather, 2008 ; Tomlinson, 2010 ; Costello et al., 2011 ; Cattermole et al., 2013 ).

The focus of this review is the bottom–up approach to change, which focuses on employee involvement and participation and is of a more decentralized nature ( Conway and Monks, 2011 ). The employee survey follow-up in line with this approach entails managers discussing psychosocial working-environment data with their teams and engaging in a dialogue about results that indicate a need for action. Ideally, action planning and proper action plan implementation should follow these discussions ( Welbourne, 2016 ).

As mentioned previously, such follow-up steps after the survey are oftentimes neglected in practice ( Church et al., 2012 ). One reason for this could be that employee surveys generally have different purposes in comparison to 360 degree, multisource, and upward feedback approaches. They are mostly used for OD or assessment purposes ( Hartley, 2001 ). They are much less likely to be tied to personal rewards, such as promotions of specific managers. Hence, the responsibility to review the data and to implement changes based on it does not lie as clearly with managers as it does with the feedback practices described above.

Overall, there is little empirical evidence regarding the follow-up on employee surveys, and the research that is available is scattered and labeled inconsistently (e.g., employee satisfaction survey, opinion survey, engagement survey). As noted above, researchers have offered reviews and meta-analyses on different types of feedback, feedback interventions, and specifically survey feedback interventions. From a holistic perspective, however, the results of these reviews are mixed and inconsistent, calling for a systematic review on the distinct concept of the employee survey follow-up. In the following section, we offer a conceptual framework for presenting research on this topic.

A Conceptual Framework of the Employee Survey Follow-Up Process

We developed a conceptual framework of the employee survey process, with particular focus on the follow-up (see Figure 2 ). For its development, we drew from existing theory and research. Mainly, the OD/change and organizational behavior literature informed this model, more specifically models proposed by Nadler and Tushman (1980) ; Burke and Litwin (1992) , and Porras and Robertson (1992) .

Figure 2. Conceptual framework of the employee survey process, specifically the follow-up process. Variables listed as external factors serve as examples; the list is not exhaustive.

Nadler and Tushman’s Congruence Model of Organizational Behavior (1980) informed the general structure of our model with its input-, transformation process-, and output approach to behavioral systems in an organization, which is in alignment with open systems theory ( Katz and Kahn, 1978 ). According to their conceptualization, there are inputs for the behavioral system (i.e., the organization). This behavioral system consists of specific organizational elements and produces behaviors that ultimately lead to certain levels of organizational performance (i.e., outputs).

This systems and transformation view of the organization is applicable to the employee survey (follow-up), as this process itself is an approach to identifying and solving organizational problems. Specifically, the post-survey follow-up is an organizational transformation process fed with data from certain input sources, such as the employee survey ( Falletta and Combs, 2002 ). This transformation process emerges, like any other organizational process, through the interaction of human and organizational factors and the resulting behaviors ( Nadler and Tushman, 1980 ). Lastly, such systems put forth outputs that can be categorized into organizational and individual performance ( Nadler, 1981 ).

Two other common and popular change models inform the more specific variables of the model: Burke and Litwin’s Model of Organizational Performance and Change (1992) and Porras and Robertson’s Change-Based Organizational Framework (1992). Figure 2 attempts to portray the primary variables and components relevant to the employee survey follow-up process. Below we describe each component of the model in more detail.

The Employee Survey

The employee survey itself produces the necessary data for all subsequent steps (i.e., teams receive their results and plan actions based on them) ( Linke, 2018 ); hence, it can be considered an antecedent of the survey follow-up process. Much research has accumulated on survey development and administration, but it stands mostly in isolation from the steps following the actual survey: most studies do not connect this knowledge to the survey follow-up, creating a disconnect between these bodies of literature.

External Factors

Besides the survey delivering data as input for the follow-up process, there are also factors external to the organization that provide input for the follow-up. As other researchers have noted, external factors affect and sometimes initiate organizational change ( Burke and Litwin, 1992 ; Porras and Robertson, 1992 ). These factors can include any outside conditions that influence the organization, for example political circumstances, culture, marketplaces, or even industry category ( Burke and Litwin, 1992 ). These external factors represent the context in which the employee survey is embedded and therefore also have an effect on the employee survey and follow-up. For example, the culture of the country that the company resides in will most likely influence what kind of questions are asked in an employee survey (e.g., collectivist vs. individualistic cultures). Culture most likely also influences participation rates in an employee survey (e.g., there might be low participation rates when the survey content does not fit the cultural context).

The Employee Survey Follow-Up Process

Consistent with Porras and Robertson’s (1992) Change-Based Organizational Framework, we identified two main factors that are relevant for the follow-up process: The work setting (i.e., organizational system) and its organizational members (i.e., the human system).

Organizational System

There are many ways to think about the components of an organization, hence there is no single agreed-upon description ( Nadler and Tushman, 1980 ). Generally, these components refer to the organizational arrangements that characterize how an organization functions. We have listed the components we deemed most important for the implementation of employee surveys and their follow-up: Structure, resources, culture/climate, and strategy. Structure refers to the arrangement of people and their functions into different levels of responsibility and authority ( Duncan, 1979 ). As employee survey follow-up processes take place in work groups, the structure of an organization becomes defining for the constellations in which the process is carried out ( Nadler, 1980 ). Resources refer to any organizational, physical, psychological, or social aspects of work that help achieve work goals ( Demerouti et al., 2001 ) and are hence also relevant for all work-related processes, such as employee surveys and their follow-up. Culture and climate are related constructs, with culture referring to the collection of rules, values, and principles that guide organizational behavior. Climate refers to the collective impressions, feelings, and expectations of members in a team or work unit ( Burke and Litwin, 1992 ). Culture has long been recognized to play an important role in OD ( Beer and Walton, 1987 ), and with the follow-up process being a team-level task, there is reason to believe that the climate in a work unit in particular will affect this process as well. Strategy is how an organization intends to achieve effectiveness over an extended time frame ( Burke and Litwin, 1992 ), and the literature on employee surveys suggests that the goals of employee surveys (including their follow-up) should be aligned with the company’s strategy ( Falletta and Combs, 2002 ). Generally, surveys can and should also be used to support the organization’s strategy ( Macey and Fink, 2020 ).

Human System

The human system refers to any participants and change agents involved in the process of the employee survey and its follow-up. Leaders are important change agents in OD ( Beer and Walton, 1987 ), and the employee survey (follow-up) process requires dedication from top management down to direct supervisors ( Knapp and Mujtaba, 2010 ). Whereas the top–down approach to change is of a strategic and centralized nature and managed from higher levels of the organization, the bottom–up approach to change focuses on employee involvement and participation ( Conway and Monks, 2011 ). Hence, employees are also important to the process and take on the role of change agents.

Lastly, whereas some literature on employee surveys recommends that only employees and team leaders are present during the feedback and action planning meetings (see e.g., Knapp and Mujtaba, 2010 ), some sources recommend that trainers or consultants help facilitate during the process by supporting managers in making sense of the data and engaging in action planning discussions with their teams (see e.g., Bungard et al., 2007 ; Linke, 2018 ). Consequently, other change agents besides managers and employees can play an important role in the process.

Output

Output is what the organization produces, more specifically its performance ( Nadler and Tushman, 1980 ), but there is a lack of consensus as to what constitutes a valid set of performance criteria in an organization ( Ostroff, 1992 ). There is, however, general agreement that performance is multi-dimensional and applies to the multiple levels of an organization (i.e., the individual-, team-, and organizational level) ( Sonnentag and Frese, 2002 ). In the context of this research, we drew from the above-mentioned change models by Nadler and Tushman (1980) ; Burke and Litwin (1992) , and Porras and Robertson (1992) and differentiate between individual (psychological vs. physiological) and organizational outcomes, assuming that these two can influence each other.

Feedback Loops

The feedback loops pertain to the process of reviewing developed action plans and evaluating them regarding their effectiveness and sustainability. This helps create accountability and guide future decisions regarding readjustment of action plans or the necessity to develop additional action plans based on the current survey cycle (see smaller loop circling back to the follow-up process in Figure 2 ; Bungard et al., 2007 ).

The second loop connects back to a new survey cycle, restarting the process of action plan development based on newly collected data (see Figure 2 ). This feedback loop informs the future survey and follow-up process in that the outcomes of previous action plans can shape new ones. For example, if an action plan was not successfully implemented, an additional action might be developed. Also, past research has shown that previous experiences with change initiatives can shape attitudes toward future change initiatives, such as levels of trust in future change programs ( Bordia et al., 2011 ). More specifically, past research suggests that the quality of handling survey data and conducting a follow-up process might influence attitudes toward future surveys, including perceptions of their usefulness ( Thompson and Surface, 2009 ) or the intent to participate in future surveys ( Rogelberg et al., 2000 ).

Literature Search

From September 2020 to December 2020 and in June 2021, we conducted several comprehensive literature searches in Google Scholar and PsycInfo. We used the search terms “employee survey,” “survey feedback,” “organizational survey,” “employee engagement survey,” “employee opinion survey,” “employee satisfaction survey,” “survey feedback intervention,” and “survey key driver analysis.” We also searched “upward feedback,” as we expected that this term might not only refer to traditional upward feedback programs but might also surface research on vertical feedback more broadly.

The literature seldom discusses the follow-up process without the preceding surveying process. Therefore, during the initial phase of the database search, we included all titles that indicated a discussion of employee surveys in general. An important distinction was whether the title of the study indicated merely the use of surveys as the data collection method for other research purposes or whether the record discussed the process of conducting an employee survey. This especially posed a challenge for this review, as surveys are the most popular method of research in psychology ( Dillman et al., 2014 ). The search resulted in 462 initial records (see Figure 3 ).

Figure 3. Systematic literature review process.

Inclusion and Exclusion Criteria

Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA-P) protocol ( Moher et al., 2015 ), we screened all articles. 1 The inclusion criteria applied during the scanning of abstracts and full texts were that the record (1) primarily discusses the bottom–up approach to organizational change in the context of the employee survey follow-up process, which constitutes the group discussion of fed-back psychosocial data, (2) constitutes primary empirical literature published in peer-reviewed academic journals or chapters of edited books, and (3) is written in English or German. Regarding point (2), we chose not to include gray literature (e.g., dissertations, conference papers) to ensure a sufficient level of quality of the included literature, which is guaranteed by the peer-review process of academic journals and of edited books.

We excluded general books on the matter because, as employee surveys are a common and popular human resource practice, there are numerous books on them, which are ultimately based on the empirical literature we summarize in this review. The employee survey process in organizations is defined by the dynamics between managers and teams, which differs from a teacher–student context. Hence, we excluded research conducted in educational settings when it was conducted with teacher and student samples (e.g., Brown, 1972 ; Hand et al., 1975 ). We did, however, include studies in educational settings when the survey feedback was used among educational staff (e.g., between principals and teachers) for the development of the educational institution as an organization (e.g., Miles et al., 1969 ). We also excluded non-primary literature, such as book reviews and commentaries, because these are also based on the primary work we summarize in this review. Finally, we searched the references of relevant papers until no new records were identified, which resulted in an additional 11 records. The final sample constitutes 53 records published between 1952 and 2021.

For each paper, we tabulated and extracted the following information: Author(s), year of publication, the research field the study was published in, the terms used to describe the employee survey/the follow-up process, the study type/analytic methods, and a short summary of findings (see Appendix). We also coded all records according to which components of the conceptual model they inform. When the record contained information pertaining specifically to a variable as listed in the conceptual model, the record was coded and listed accordingly in Table 1 . In addition, we coded records according to whether the study indicated the involvement of an external change agent, and more specifically the level of that involvement. We coded a study as indicating low change agent involvement when there was no or little involvement either during the preparation stage of feedback meetings or during the actual feedback meetings. We coded a study as indicating medium involvement when the change agent supported managers thoroughly either during the preparation phase (e.g., thoroughly briefed managers on how to conduct meetings) or during the actual feedback meetings (e.g., moderated the feedback meetings for or with managers), or when they provided moderate support during both phases. We coded a record as indicating high involvement of an external change agent when they thoroughly supported managers both during preparation and during the actual feedback meetings.
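The coding rules for external change-agent involvement can be expressed compactly. The function below is only a sketch of how we read the authors' scheme; the input categories ("none", "moderate", "thorough") for the two phases are our own hypothetical simplification, not the authors' actual coding instrument.

```python
def code_involvement(prep_support: str, meeting_support: str) -> str:
    """Map support during the preparation phase and the feedback meetings
    (each rated "none", "moderate", or "thorough") to the review's
    low/medium/high involvement categories, as described in the text."""
    levels = {"none": 0, "moderate": 1, "thorough": 2}
    prep, meeting = levels[prep_support], levels[meeting_support]

    if prep == 2 and meeting == 2:
        return "high"    # thorough support before and during the meetings
    if prep == 2 or meeting == 2 or (prep == 1 and meeting == 1):
        return "medium"  # thorough support in one phase, or moderate in both
    return "low"         # no or little involvement in either phase

# Example: a consultant who briefs managers thoroughly but does not attend meetings.
print(code_involvement("thorough", "none"))  # -> "medium"
```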

Table 1. Reviewed empirical studies coded according to which components of the conceptual model of the employee survey follow-up process they inform.

Total number of studies: 53. *For the corresponding citations, see Appendix. **Studies were coded according to involvement levels of additional change agents other than managers: Low (no involvement or little involvement before or during feedback meetings); medium (thorough involvement either before or during feedback meetings, or moderate involvement during both phases); high (high involvement before and during feedback meetings).

Coder(s) also recoded 10% of the studies to check their consistency ( Daniels, 2018 ). 2

Six records indicated that data was used for multiple publications (i.e., constituting three unique publications) and were marked as such in the Appendix. We suspected eight additional records to constitute only four unique publications based on their analogous study design descriptions. We were able to acquire contact information from at least one author of two (i.e., four) of these manuscripts. One confirmed the multiple use of data, and one was not able to provide information due to the long time that had passed since publication.

In the following, we summarize and integrate the findings derived from the records we identified via our literature searches and structure them according to the components of our conceptual model with the purpose of revealing domains in which our evidence-based knowledge remains underdeveloped.

None of the studies included in this review investigated the characteristics of the employee survey as antecedents to the follow-up process. A variety of different questionnaires served as the basis for follow-up activities, and there was also much variety in the extent of information that the authors provided about the questionnaires. Whereas some provided many details and item examples, others merely named the survey. In some instances, the questionnaires were matched to the specific context and circumstances of the organization, for example to a military setting [the Army’s General Organizational Questionnaire (GOQ); Adams and Sherwood, 1979 ], or to mining and milling (the Work Attitudes Survey; Gavin and Krois, 1983 ).

Overall, the surveys contained items regarding a variety of psychosocial constructs relevant to the workplace. Examples include, but are not limited to, job demands, control at work, social interactions, leadership, and commitment to the organization ( Björklund et al., 2007 ), as well as rewards, communication, quality of management, participation, employee satisfaction, organizational climate, and effectiveness ( Amba-Rao, 1989 ). More examples include items on responses to stress, the need for work development, and perceived work environment ( Elo et al., 1998 ), as well as items regarding quality of work life, individual morale, individual distress, supportive leadership, and role clarity ( Jury et al., 2009a , b ).

Results by Gavin and McPhail (1978) of an implemented employee survey at an Admissions and Records Department suggest that it might be more beneficial to use items developed for the specific context of an organization, rather than general organizational climate measures, as those items improved more in comparison to the general items. Consequently, the authors suggested that tailored survey interventions might be more effective than global initiatives. Similarly, Adams and Sherwood (1979) also suggested that items tailored to the specific context might be more beneficial than general items.

Lastly, one study discussed the usefulness of survey key driver analysis (SKDA) for managers in the survey data analysis process. SKDA is a statistical procedure to identify, among the variety of topics measured in a survey, those that can be prioritized for action planning. More specifically, the identified key drivers are the topics most highly associated with the outcome (oftentimes employee engagement). Cucina et al. (2017) called for a moratorium on this practice, which evoked a series of commentaries (see Hyland et al., 2017 ; Johnson, 2017 ; Klein et al., 2017 ; Macey and Daum, 2017 ; Rotolo et al., 2017 ; Scherbaum et al., 2017 ). Similarly, some authors have suggested that managers do not need statistical training to recognize significant differences, but instead can deal best with their data by examining percentages of favorable and unfavorable results and comparing them to other departments or past survey results ( Dodd and Pesci, 1977 ). However, in some studies, managers received survey results prepared through SKDA (e.g., Griffin et al., 2000 ; Ward, 2008 ).
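In essence, SKDA ranks survey topics by their statistical association with an outcome such as engagement. The following minimal sketch uses synthetic respondent-level data, hypothetical topic names, and a plain standardized least-squares regression; it illustrates the general idea only and is not any particular vendor's or study's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic respondent-level data: topic scores and an engagement outcome.
n = 500
topics = {
    "leadership": rng.normal(size=n),
    "workload": rng.normal(size=n),
    "communication": rng.normal(size=n),
}
engagement = (0.6 * topics["leadership"]
              + 0.1 * topics["workload"]
              + 0.3 * topics["communication"]
              + rng.normal(scale=0.5, size=n))

# Standardize predictors and outcome, then fit a least-squares regression;
# the absolute standardized coefficients serve as a simple "key driver" ranking.
X = np.column_stack([(v - v.mean()) / v.std() for v in topics.values()])
y = (engagement - engagement.mean()) / engagement.std()
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, beta in sorted(zip(topics, coefs), key=lambda t: -abs(t[1])):
    print(f"{name}: standardized beta = {beta:.2f}")
```

The debate summarized above is precisely about whether such rankings should guide action planning, or whether simpler favorable/unfavorable percentage comparisons serve managers better.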

In summary, whereas all studies provide a mostly sufficient description of the employee survey that was used for the intervention, we recognized a disconnect between the survey items and their significance as antecedents to the action planning process. It is reasonable to assume, though, that the questionnaires help participants structure their subjective impressions and guide subsequent action planning by providing relevant concepts for discussion. The way the data is prepared by or for managers most likely also affects the subsequent action planning process.

None of the studies included in this review explicitly examined external factors, but as we described in earlier sections, such factors are complex and difficult to define and measure. One important factor to consider could be, for example, the national culture in which the organization is embedded. None of the empirical studies examined the employee survey follow-up process from a cross-national perspective, but our review yielded studies conducted in Australia, Germany, Finland, South Africa, Sweden, the United Kingdom, and the United States. The studies included in this review were also implemented in a variety of industries, such as the military, banking, schools, hospitals, manufacturing, and mining, but none of them examined effects across different industries. Therefore, our results suggest that the role of external factors is yet to be explored in the context of employee surveys and their follow-up.

The classic structure for the implementation of employee surveys is the waterfall design in work families. Within this approach, higher-level feedback sessions serve as role models for lower-level work groups, and results are presented to and within the respective work families (i.e., a manager with her/his subordinates) ( Nadler, 1980 ). Most reviewed intervention studies made use of this model (see Table 1 ); Adams and Sherwood (1979), for example, reported some improvements following an employee survey conducted in a military setting with strong hierarchical structures, which matched the classic waterfall and work family design.

However, some researchers have suggested the superiority of other structure models for survey feedback meetings. For example, Alderfer and Ferriss (1972) found that higher-level managers denied their problems in feedback meetings while exhibiting a decline in workplace morale. The authors suggested that the traditional work family model might not be the most effective way to conduct survey feedback meetings, as it might lack psychological safety for participants. Instead, it could be useful to first conduct peer meetings, which can be followed by work family meetings. One year later, Alderfer and Holbrook (1973) followed up with a study in which they implemented a peer-/intergroup model instead of a work family design and found some positive effects: Individuals who shared a common organizational fate, for instance because they had similar tasks but no direct authority relationships with one another, were brought together for the employee survey follow-up meetings. Managers also met among themselves, and these meetings were followed up by intergroup sessions in which members of the different systems at different levels of authority interacted. The authors proposed that employees might be less hesitant to speak up in such meetings because direct managers are not present.

Eklöf et al. (2004) compared other types of structure models in which feedback was provided by a trained ergonomist: individually to each person in the group, to only the group supervisors, or to the entire group with the supervisor present. Results suggested potential benefits of giving feedback to supervisors only. This was the most cost-effective intervention group, as it resulted in a higher average number of psychosocial modification types per individual (i.e., different types of modifications to the workplace) and required the least time investment. It is important to note, though, that the average number of psychosocial modification types per individual decreased for all groups during this intervention; the supervisor feedback group merely showed the smallest decrease.

In summary, research suggests that other implementation structures besides the classic waterfall and work family design for the employee survey follow-up could be useful, but we require additional research to compare and further explore such implementation strategies.

Only three of the studies specifically examined resources in the employee survey follow-up context. Dodd and Pesci (1977) found that managers who received feedback training seemed to conduct more feasible, measurable, visible, and timely action programs than managers without training. Trained managers were also more likely to improve employee attitudes and morale through the feedback intervention. Wiley (2012) surveyed 31 survey practitioners from a sample of large organizations; the top three barriers to effective post-survey action planning they named were execution (following through), importance (lacking attention by executive management), and resources (especially time, but also lacking training, technical, and financial resources). Lastly, Fraser et al. (2009) interviewed 18 managers from large multi-site companies that had implemented employee surveys in the past. Results indicated that important resources for the implementation of a successful follow-up process included a clear action purpose of the survey itself, senior management endorsement of the implementation, experienced leaders, and the support of trained change agents to drive the process.

Some other studies mentioned, almost in passing, different types of resources (e.g., time, financial resources, training) that affected the employee survey process. For example, participation in the survey intervention implemented by Elo et al. (1998) was voluntary, but sessions were held at times when shift workers could participate immediately before or after their working hours, and the company paid compensation to those who participated. Church et al. (2012) provided some qualitative data from employees who reported that action was not taken based on their survey results, and they named a lack of commitment by managers to follow through as one of the reasons. The participants also named a lack of other resources, including time, funding, and manpower. Lastly, Baker et al. (2003) reported that some managers noted that the pressures of their daily work made it difficult to disseminate the results to the entire staff.

Overall, resources received little attention in the reviewed literature. The reason for this neglect could be that the majority of study contexts did not suffer from a lack of resources: organizations consenting to collaborate in research and the research teams implementing the intervention are likely to ensure that the research can be carried out appropriately with sufficient resources.

Culture/climate

Similar to resources, organizational culture/climate was given little attention throughout the literature. An exception was a study by Bowers (1973) , in which he examined organizational climate as a mediator. He found that the positive effects of survey feedback on measures of organizational functioning were weaker when controlling for climate. Other anecdotal descriptions provide inferential information about the importance of culture/climate to the employee survey follow-up. Swanson and Zuber (1996) described the hostile organizational culture of the mailing company in which their intervention was to be implemented. There was high turnover, with managers routinely being fired or demoted without clearly stated reasons, which resulted in managers maintaining low profiles and not speaking up. Top management generally showed defensiveness toward the survey reports and an unwillingness to change. Overall, the organizational culture was hostile, hierarchical, and demonstrated little ability to change, which contributed to the failure of the employee survey intervention.

In strong contrast to this stands a case study by Ward (2008) , which describes the successful implementation of a survey endeavor at Fujitsu through a consulting firm whose methodology was “say, stay, strive.” This strategy was aimed at giving employees a voice, giving them an incentive to stay with the company, and striving to be better. It fit Fujitsu’s organizational culture well, and top management was very supportive of the survey implementation. The company made an effort to share best practices, and improvements in employee engagement were noted through action planning at the local level.

In summary, only one study specifically examined climate or culture, but we can draw inferences from the descriptions provided by some of the authors. Most likely, this research topic has been given little attention for similar reasons as the neglect of resources. An organization is not likely to collaborate in intervention research when their culture does not allow such efforts.

None of the studies included in this review contained specific information pertaining to organizational strategy, which constitutes a large research gap.

Nearly all studies provided descriptions of the employees involved in the studies, as they constituted the participants of their research. Only five studies examined the relevance of group composition and the characteristics of employees participating in feedback meetings. For example, Alderfer and Holbrook (1973) found that group composition was related to the length of time that different topics were discussed. Branch managers mainly discussed authority, control, communication, and conflict, whereas management trainees were mainly concerned with communication, conflict, and careers. Church et al. (2012) examined whether the same pattern of results (i.e., groups that reported action was being taken based on their survey results showed more favorable survey responses over the years) held up for different groups of employees, such as frontline employees, executives, and professionals. Results suggested that frontline managers were more dissatisfied when results were not acted on in comparison to the other two groups, but they were equally satisfied when results were acted upon. Hence, the results held up across different groups of employees.

Gavin and Krois (1983) examined the demographic characteristics of the feedback groups, including employees. For example, younger groups displayed more constructive problem-solving and fewer avoidance behaviors. More highly educated groups spent relatively more time on problem resolution. Nadler et al. (1976 , 1980) found differing effects in different departments of a bank (i.e., tellers, financial consultants). The authors concluded that different approaches may be called for in different types of work units that are made up of different kinds of organizational members. Tellers, for example, have little control over their tasks, and higher performance might be less rewarding for them than for financial consultants, who have more autonomy in their tasks. Hence, these groups might have different levels of motivation to engage with the survey feedback data.

Overall, we still know very little about employees’ roles in the survey feedback process and how different individuals might perform and engage in it. Church et al. (2012) already highlighted this gap in the literature almost 10 years ago and noted that different individual predispositions might lead to differing response profiles and subsequently might also affect all following steps (e.g., action planning).

Leaders/managers

It is widely accepted throughout the change management literature that leaders and managers play a central role as change agents ( Conway and Monks, 2011 ). Nevertheless, only nine studies gave specific attention to leadership (see Table 1 ). For example, results suggest that teams led by managers with poor leadership skills potentially benefit most from survey feedback interventions ( Solomon, 1976 ), but managers with low ratings on leadership questions might also be less likely to use the feedback with their units, even though they have the most need to do so ( Born and Mathieu, 1996 ). In contrast, Conlon and Short (1984) found that managers with higher ratings on items asking how the manager performs under pressure and how often the manager holds group meetings for communication purposes were more likely to provide survey feedback to their teams. Even though these items were weak predictors, the authors concluded that supervisors who have preexisting processes in place to discuss work-related matters with their teams might be at an advantage to continue such behavior within the scope of the employee survey follow-up. Supervisor ratings also improved after the intervention; more specifically, the intervention had the greatest effect on supervisor ratings in comparison to all other measures (e.g., climate or resources).

Jöns (2000) examined leadership and the type of feedback discussion (with or without a neutral presenter/moderator) as moderators of the perceived quality of the feedback meetings and their outcomes. However, the author jointly examined leadership assessments (upward feedback) and employee surveys while acknowledging their close parallels. Self-guided feedback meetings, in comparison to moderated meetings, led to greater improvements in leadership behaviors, but only for groups in which leaders had been rated as satisfactory rather than high or very high. Results also suggested that managers improved in their moderation skills over time.

In summary, the results of these studies suggest that managers and leaders play a central role in the employee survey follow-up process, but only a few studies examined the characteristics of leaders in depth to determine which factors contribute to and which might inhibit the employee survey follow-up.

Other change agents

Overall, most studies included some kind of change agent or consultant (internal or external) who accompanied the employee survey endeavors in addition to work unit managers. Their involvement in the process differed with regard to intensity, but also with regard to the steps of the employee survey process they supported. However, only three studies specifically examined the role of change agents. For example, Alderfer and Ferriss (1972) found that managers who received support from a consulting team that consisted of insider and outsider consultants were more likely to view the intervention positively and showed more awareness of interpersonal problems. This suggests that it might be beneficial to utilize the expertise of an external consultant who can foster communication across organizational boundaries, but to also have an internal consultant present who understands the specific needs of the team and can evaluate the feasibility of action plans.

We will now provide a few study examples of different levels of change agent involvement, from least to most (see Table 1 for an overview). Some studies described no or low involvement of other change agents, meaning that there was little, if any, involvement either during the preparation stage or during the feedback meetings. For example, some studies did not mention any consultant or other change agent supporting the survey feedback process ( Björklund et al., 2007 ; Huebner and Zacher, 2021 ). Other studies described low involvement of other change agents. For example, in a study of survey feedback in neonatal intensive care units, Baker et al. (2003) reported that team leaders participated in some exercises to foster their own understanding of the data, an approach the study relied on heavily, rather than having the data interpreted for managers. However, respondents in several care center units commented that a facilitator or an expert in organizational behavior would have been helpful during the actual feedback meetings to support them in reviewing, interpreting, and highlighting the relevant results and deciding on which topics to target with action planning.

Other studies described a medium level of involvement of consultants, which means that managers received thorough support either during the preparation phase of feedback meetings or during the actual meetings. For example, Born and Mathieu (1996) provided thorough training for supervisors in which they were coached on how to conduct feedback meetings with their teams and how to develop action plans. Then, supervisors were independently responsible for holding the corresponding meetings with their teams. Similarly, Solomon (1976) reported that managers participated in a workshop in which they received the result reports of their teams, received help in interpreting the data, and were guided on how to develop action plans. Subsequently, they held feedback meetings with their teams.

Lastly, some studies described high involvement of other change agents, which means managers received thorough support before and during feedback meetings. For example, in an intervention study by Elo et al. (1998) , occupational health physicians and nurses took on active roles by providing consultative support in the face-to-face discussions with work teams and managers, which was furthermore supported by an external researcher–consultant. The occupational health personnel also ensured the continuity of the process and kept participating in the meetings.

Overall, the different degrees of change agent involvement and the contrasting results across studies make it challenging to offer a definite statement regarding the effectiveness of involving other change agents in the process.

Individual Outcomes

Psychological outcomes

The majority of studies (38) provided information about a variety of psychological outcomes following employee survey follow-up processes (see Table 1 ). For example, a large-scale survey feedback intervention showed improvements in all areas measured by the survey, which mainly related to indicators of workplace culture, such as quality of work life, morale, opportunity for professional growth, and supportive leadership ( Jury et al., 2009a , b ). Survey feedback has also been shown to lead to increases in readiness to change among executives of the organization ( Alderfer and Holbrook, 1973 ) and to improvements in communication, easing of tension in the organization, satisfaction, and employee relations ( Amba-Rao, 1989 ). Conlon and Short (1984) reported that ratings of supervisor behavior, goal clarity, task perceptions, and opportunity for advancement improved during their intervention, although at least a medium level of feedback was needed to produce meaningful changes.

However, most results of the studies included in this review were rather mixed. In a short case description by Miles et al. (1969) , survey feedback meetings among school staff led to improvements in participants’ ratings of their own openness and collaborative problem-solving, but other improvements, such as in communication, were short-lived. The authors suspected that a lack of follow-through regarding the planned actions and a generally low number of actions were the reasons that changes did not persist. Björklund et al. (2007) reported that groups with feedback and action plans showed improvements on leadership factors and commitment to the organization, but job demands and control at work did not improve. Adams and Sherwood (1979) reported that one of the intervention groups in a military setting even showed a decline in job satisfaction. However, this group experienced a change in commanders during the intervention, which could have confounded the study.

Anderzén and Arnetz (2005) found improvements 1 year after their intervention in terms of employee well-being, work-related exhaustion, performance feedback, participatory management, skills development, efficiency, and leadership, but no changes in goal clarity. Church and Oliver (2006) showed that respondents who reported that their survey results had been used for action rated overall job satisfaction more favorably. Church et al. (2012) followed up on these results with more longitudinal data from the same organization and found that the group that indicated its survey results had been shared and acted upon rated consistently more favorably across all items and across all years.

Another type of psychological outcome is participants’ satisfaction with the feedback process, which most likely influences their motivation to participate in subsequent feedback sessions. In a study by Peter (1994) , the follow-up survey needed for a proper comparison of employee attitudes and turnover intentions before and after the survey feedback intervention could not be administered due to administrative changes in the organization. However, nursing managers reported high satisfaction with the survey intervention and process in general. Specifically, 75% responded that they would want to use the intervention again, and 25% indicated that they would probably use it again. In a follow-up study, improvements in job satisfaction could be found for one work unit ( Peter et al., 1997 ). Klein et al. (1971) found that variables such as the quality of meetings, the person presenting the information, and the number of meetings influenced how satisfied participants were with the feedback process and data utilization. Also, ratings of feedback quality were higher when meetings were held in person by frontline managers.

In summary, most studies were able to find improvements on a variety of psychosocial outcomes, but results were generally mixed and seemed to differ depending on various factors that could have acted as moderators of the observed relationships.

Physiological outcomes

Only four studies examined physiological changes following survey feedback interventions, and they were all published in medical and health journals rather than in industrial and organizational psychology journals. For example, Anderzén and Arnetz (2005) found that improvements in psychosocial work factors were associated with improvements in self-rated health and ratings of quality of sleep. Also, levels of stress-related biological markers (i.e., serum triglycerides and serum cholesterol) in blood samples were reduced at an aggregate level after the intervention, and serum testosterone (an important restorative hormone) increased. The authors also measured increased levels of cortisol; low levels of cortisol are indicative of chronic fatigue and burnout. Similarly, Elo et al. (1998) reported reduced mental, but also physical, strain for one of the three departments (i.e., the finishing department of a factory) in which the survey feedback was implemented.

Eklöf et al. (2004) examined the proportion of workgroup members who reported any workplace modifications with regard to ergonomics (e.g., screen placement, visual conditions) or with regard to psychosocial aspects (e.g., social support, support from supervisor) following a survey feedback intervention. They found that both outcomes decreased for all feedback groups (i.e., feedback to groups, only to supervisors, only to individual employees) and for the control group. However, the feedback groups differed positively from the control group in that they showed less decrease in ergonomic workplace modifications. Importantly, this study did not measure actual modifications or physiological benefits, such as reduced musculoskeletal complaints. The authors also caution that intervention effects could have been inflated or diminished by a variety of confounds, such as recall bias, control-group effects, and social desirability. This study was followed up by Eklöf and Hagberg (2006) using the same intervention implementation. The researchers could not find any intervention effects for symptom indicators, such as eye discomfort or musculoskeletal symptoms, which were self-reported as pain or discomfort in the neck, shoulders, or upper or lower back. There was, however, an improvement in social support measures when feedback was given to supervisors only.

In sum, results suggest that physiological benefits can be derived from employee surveys, but results were generally mixed and require further investigation.

Organizational Outcomes

Nine studies examined organizational outcomes following survey feedback. For example, Church and Oliver (2006) found that groups that reported action was taken following their surveys showed 50% lower incident rates of accidents on the job and 48% less lost time in days due to accidents. Those groups also showed lower turnover intentions and actual turnover. However, as the turnover data was not longitudinal, causality cannot be inferred. Similarly, Nadler et al. (1976) reported reduced turnover in one of the branches for bank tellers that used the feedback system effectively. Branches that used the feedback system ineffectively even showed a slight increase in turnover. Hautaluoma and Gavin (1975) reported a lower turnover rate for older employees and less absenteeism for blue-collar workers at an organization in which consultants held quite intense survey feedback meetings with staff.

Anderzén and Arnetz (2005) found that as self-rated health ratings increased following the survey intervention, absenteeism decreased. Also, decreased work tempo and improved work climate were related to decreased absenteeism. In contrast, Björklund et al. (2007) could not replicate these findings and did not find decreased sick leave for any of the comparison groups (a group without any feedback, a group with feedback only, and a group with feedback and action planning).

In summary, employee surveys seem to have the potential to lead to improvements in organizational outcomes, such as reduced turnover or absenteeism, but results are mixed and do not seem to hold up in every context.

With an increasing number of organizations that survey their employees ( Welbourne, 2016 ), it is likely that the topic of implementing a proper follow-up process will also continue to gain importance. We reviewed the literature on this topic based on an integrative conceptual model that we developed drawing from Nadler and Tushman’s Congruence Model of Organizational Behavior (1980), Burke and Litwin’s Model of Organizational Performance and Change (1992), and Porras and Robertson’s Change-Based Organizational Framework (1992).

In the following, we summarize the major insights of our review pertaining to each component of the model. By doing so, we answer our research question regarding which variables of our conceptual model have been sufficiently informed by past research and which require future research. Based on this discussion, we also provide implications for practice and offer suggestions for future research. Overall, we conclude that research on the employee survey follow-up process has investigated some of the relevant aspects, but large gaps in knowledge remain. Most of the research we reviewed focused on the measurement and achievement of human or organizational outcomes following a survey feedback intervention, which was mostly accomplished with pre/post designs. There were fewer studies focusing on the process of the employee survey follow-up. Some studies did investigate the process with other research designs, including qualitative interviews with survey practitioners or managers (e.g., Fraser et al., 2009 ; Wiley, 2012 ) or surveys of managers who conducted employee follow-up meetings (e.g., Gable et al., 2010 ). Researchers use longitudinal designs to measure change and to answer questions of causality ( Wang et al., 2017 ); however, there may also be value in other designs that collect cross-sectional or qualitative data.

In this regard, we suggest that more attention should be paid to the organizational actors who drive the employee survey (follow-up) process. In the majority of studies, managers and employees played what seemed to be a rather passive role in the process, in the sense that they were described as attendees of the survey feedback meetings, but their specific characteristics were often not examined. Sometimes, demographic variables (e.g., age, education, marital status) were merely treated as correlates rather than independent variables (e.g., Peter, 1994 ). However, these actors are the main organizational stakeholders who drive the process and are most affected by it as well. Hence, they play an essential role and should receive more research attention.

The topic of leadership is especially significant. Leaders generally constitute important change agents in organizations ( Conway and Monks, 2011 ) and, accordingly, they play an important role in the employee survey process ( Welbourne, 2016 ). Despite their importance, only a few studies examined leadership in this context. However, several studies included in this review mentioned the potential for tension between leaders and subordinates and the resulting lack of psychological safety for participants in the employee survey process (e.g., Alderfer and Holbrook, 1973 ; Dodd and Pesci, 1977 ; Baker et al., 2003 ). This potential for tension between managers/supervisors and subordinates during the employee survey follow-up has not yet been fully explored; instead, it was mostly named as a limitation to or challenge of the included studies. In contrast, the issue of reactions to received feedback has received more attention in the upward feedback and 360-degree feedback literature (e.g., Atwater et al., 2000 ; Atwater and Brett, 2005 ) and in the performance appraisal literature as well (e.g., Pichler, 2012 ).

Experts often recommend that an additional change agent should be involved in these other feedback practices to support the recipients of the feedback in the process of understanding the data and using it for developmental purposes. The majority of studies included in this review involved change agents in addition to managers, such as human resource personnel or consultants. However, their level of involvement varied greatly between studies, and differences between groups with and without support by a change agent remain largely unexplored. Some results suggest that some type of support for managers, such as training, may present advantages for the process ( Dodd and Pesci, 1977 ).

Furthermore, additional research gaps emerged in light of our conceptual model, including the effects of survey items/questionnaires as antecedents to the follow-up tasks. Whereas most studies sufficiently described the surveys they used, none of them examined the characteristics of the survey as predictors. Related to this, another gap concerns the interpretation of the survey data after it is available to managers. It remains unclear how the data should best be presented to managers (and also employees) and how much support managers should receive in the process. Another gap concerns the effects of organizational culture/climate, organizational strategy, and the availability of resources on the follow-up process. Almost none of the studies explicitly examined these factors, whereas the results of some case study descriptions suggest that organizational culture and climate could be important to consider (e.g., Swanson and Zuber, 1996 ; Ward, 2008 ). As the majority of research described some type of intervention in an organization, it is possible that the above-mentioned factors were not explicitly studied because they are likely to be sufficient when an organization agrees to collaborate in such research. Examining natural settings, for example by retrospectively asking survey practitioners about their experiences in the survey implementation process, could prove useful to further explore these variables.

Generally, this body of literature remains underdeveloped, which stands in contrast to research on more specific workplace interventions that aim to improve worker well-being and job attitudes (e.g., Fox et al., 2021 ; Solinger et al., 2021 ). However, other OD interventions are more clearly defined in terms of their goals and, hence, they must be carefully chosen to match the characteristics of the target group ( Bowers and Hausser, 1977 ). For example, a team building intervention might be appropriate to help ameliorate issues pertaining to communication and collaboration in a team ( Margulies et al., 1977 ). There have also been suggestions for interventions targeted at supporting an age-diverse workforce ( Truxillo et al., 2015 ).

In contrast, the employee survey is much less clearly defined as an intervention tool, as the reasons to implement an employee survey vary. Research suggests that, generally, employee surveys are implemented for the purpose of organizational assessment, organizational change ( Hartley, 2001 ), or improving communication ( Kraut, 2006 ). Also, the assessment of a current situation or the current state of organizational culture might serve to prepare for the upcoming implementation of change interventions ( Hartley, 2001 ). Hence, the survey is the diagnostic tool that precedes an intervention and is an indicator of the kind of action plans that could be useful. Given the variety of topics a survey can cover, the types of identified needs to implement a change initiative can be just as versatile and can target different levels of the organization ( Falletta and Combs, 2002 ).

Therefore, examining employee surveys as change tools might be more challenging than examining targeted change initiatives with predefined goals. As the following discussion will show, this also hinders a general estimation of employee surveys’ effectiveness in achieving change. It does, however, argue for the necessity to view the employee survey follow-up in a more differentiated manner, rather than as a dichotomous process (i.e., action planning was or was not completed). Different types of interventions following the survey might require different implementation and research approaches than those currently applied.

The Effectiveness of Employee Surveys for Change

Generally, findings were mixed regarding the effectiveness of survey feedback and the employee survey follow-up process. Several studies found benefits for a variety of outcomes, but others could not replicate those findings. As Born and Mathieu (1996) already noted, the quality of change interventions is difficult to gauge between and even within studies, as any given survey feedback intervention is most likely not implemented equally well. For example, Nadler et al. (1980) reported varying levels of intervention implementation between departments regarding the number of meetings held, the people who led discussions, and the extent to which employees got involved in the action planning process. Also, throughout the literature included in this review, some employees received the survey results shortly after the survey (e.g., 2 weeks later; Hautaluoma and Gavin, 1975 ), and others waited 12 weeks or longer (e.g., Jury et al., 2009a , b ). However, most practitioner books and other resources on the topic recommend that results should be available as quickly as possible after survey participation, so that feelings and thoughts during the survey are still present when results are discussed (e.g., Kraut, 2006 ; Bungard et al., 2007 ). Also, study durations and the (number of) measurement time points varied greatly from a few weeks or months (e.g., Eklöf et al., 2004 ; Eklöf and Hagberg, 2006 ) to several years (e.g., Church et al., 2012 ). Some results suggested though that the more time participants had to conduct action planning (e.g., 2 years vs. 1 year), the more scores tended to improve ( Church and Oliver, 2006 ; Huebner and Zacher, 2021 ).

Furthermore, many studies reported issues during the implementation and confounds that could have diluted the results. For example, some researchers reported major restructuring of the organization during the intervention period of 2 years and generally much skepticism and apprehension of the workforce to participate in the survey ( Jury et al., 2009a , b ). Alderfer and Holbrook (1973) reported that some executives of the company thought that the researchers might have exaggerated the degree of problems that persisted in the company, which indicated a general lack of trust toward the research endeavor.

Related to this issue, we found that the literature provided differing levels of information about and descriptions of the actual feedback meetings and developed action plans. Some studies described the intervention in much detail. For example, as one of the few studies to do so, Gavin and Krois (1983) specifically examined the topics discussed in feedback meetings and the duration of those discussions. Other studies reported that feedback meetings were conducted, but the authors admitted that they did not examine how these meetings were conducted (e.g., Björklund et al., 2007 ; Huebner and Zacher, 2021 ). Furthermore, very few studies reported or discussed the effect sizes of their interventions (for exceptions, see e.g., La Grange and Geldenhuys, 2008 ; Huebner and Zacher, 2021 ). Even though the reporting of standardized effect sizes is widely recommended ( Appelbaum et al., 2018 ), it is oftentimes neglected in research, which hinders the ability to draw inferential conclusions from the study results ( Kelley and Preacher, 2012 ).
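
Because effect size reporting comes up repeatedly as a gap, a brief illustration may be helpful. The sketch below computes Cohen's d, one common standardized effect size, for a hypothetical pre/post comparison of unit-level survey scores; the numbers are invented, and the pooled standard deviation shown assumes equally sized groups.

```python
# Hypothetical example: standardized pre/post effect size (Cohen's d) for survey scores.
import numpy as np

pre = np.array([3.1, 2.8, 3.5, 3.0, 2.9, 3.2])   # invented pre-intervention unit means
post = np.array([3.6, 3.1, 3.7, 3.4, 3.3, 3.5])  # invented post-intervention unit means

# Pooled standard deviation (simplified form for equally sized groups).
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
cohens_d = (post.mean() - pre.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```

Reporting such a standardized value alongside significance tests would make interventions considerably easier to compare across studies.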

In summary, we conclude that such a great variety in quality of implementation and descriptions of the interventions limits their comparability and the conclusions that can be drawn from this research. Nevertheless, the majority of studies were able to find positive effects on some outcomes, which suggests that employee surveys can have beneficial effects in organizations when used to implement a proper follow-up. These conclusions should be viewed with caution though, as results might have been affected by publication bias because null results tend to not get published ( Landis et al., 2014 ).

Implications for Practice

Even though there are many books on the topic, the employee survey process remains challenging, and many organizations fail to reap the full benefits of this common human resource practice ( Brown, 2021 ). Depending on the organization, different change agents or organizational actors might be responsible for the implementation of the process (e.g., internal or external consultants/survey practitioners, human resource administrators, or managers), which creates ambiguity and makes it difficult to find the best implementation strategy. It is important for the responsible organizational actors to acknowledge that there is no “one size fits all” approach to employee surveys and their follow-up. Different organizations will thrive with different implementation models, depending on their culture, work environment, and staff.

Nevertheless, some recommendations can be offered based on this review. It seems to be most effective to not only provide survey feedback data but to also make sure that actual action planning takes place ( Bowers, 1973 ; Björklund et al., 2007 ; Church et al., 2012 ). Also, it is beneficial when the questionnaire fits the organization and the items are actionable for managers and their teams ( Church et al., 2012 ). Managers should be properly involved in the follow-up process, as they are the key change agents who must drive the implementation of action plans ( Mann and Likert, 1952 ; Welbourne, 2016 ). However, it is also important that managers receive the necessary tools to do the job. These tools include training, sufficient time, support from top management, and other necessary resources ( Wiley, 2012 ). The involvement of other change agents, such as consultants who help analyze the data, can be beneficial, but managers should not develop the habit of relying too heavily on such resources. They should rather be enabled and trained to understand and utilize the data self-reliantly in collaboration with their teams. On that note, other supporting tools, such as SKDA, can be useful aids, but they do not exempt managers from properly understanding the data. Supporting change agents might also be helpful in situations where there is much tension between managers and subordinates, which could potentially inhibit fruitful feedback discussions. Lastly, high involvement of all stakeholders seems to be most beneficial, as it creates accountability and a deeper understanding and acceptance of the actions following the survey ( Mann and Likert, 1952 ).

Whereas following this set of recommendations will not guarantee a perfect employee survey follow-up implementation, we believe it can help. Implementing employee surveys is costly, and designing a useful follow-up can help organizations get the most out of their investment. Benefits can manifest as improvements in employee attitudes, physiological outcomes, and even organizational factors, such as reduced turnover. Consequently, organizations should evaluate how ready their workforce is to master the employee survey follow-up. In the beginning, managers might require more support, but as they become more acquainted and comfortable with the process and have been enabled to function as active change agents in the organization, they might need fewer supporting resources.

Limitations and Suggestions for Future Research

There are a few limitations to this systematic review worth noting. One limitation includes our method of searching for relevant literature in Google Scholar. One of this database’s shortcomings is that the search algorithm changes every day, making the search not completely replicable at a later point in time ( Bramer et al., 2016 ). Also, Google Scholar has low recall capabilities (only the first 1000 results are viewable), which is why it is preferable to also search in an additional database ( Bramer et al., 2016 ).

Another limitation is the exclusion of gray literature. As we only included studies published in peer-reviewed journals and edited books, the overall results might be subject to publication bias as null results tend to not get published ( Landis et al., 2014 ). Hence, as previously mentioned, the results of this review regarding the effectiveness of employee surveys for the purpose of OD should be viewed with caution.

Overall, drawing from other areas of industrial and organizational psychology, such as the literature on leadership, teams, employee voice, and engagement, could prove useful to examine the variables of the model that have not been sufficiently explored. For example, research on leadership suggests that different kinds of leadership behaviors contribute to the job performance of employees, but that such effects also depend on certain characteristics of employees ( Breevaart et al., 2016 ). Hence, leadership is an important variable that deserves more research attention, which could be accomplished by applying leadership theories. Group voice climate also seems to be related to perceptions of leadership and group performance ( Frazier and Bowler, 2015 ), but as can be seen in Table 1 , culture and climate have not been fully explored as predictors or moderators of the employee survey process. Hence, we recommend cross-cultural examinations of post-survey practices. The alignment between company strategy and employee survey strategy could also be crucial for this process, and we suggest conducting research in which the degree of alignment is measured. We also suggest that external factors should be examined in this research context. For example, the type of industry in which the feedback meetings are held could influence meeting effectiveness because action planning could be more or less influential depending on industry-bound work environments.

Furthermore, we believe that research on the post-survey process would benefit from integrating and drawing from survey research, for example research pertaining to survey modes (e.g., Borg and Zuell, 2012 ; Mueller et al., 2014 ) or questionnaire design and development (e.g., Roberson and Sundstrom, 1990 ; Alden, 2007 ). The survey itself should be considered an antecedent of the follow-up, as the type of data and data format could influence how the follow-up is carried out. Lastly, most studies included additional change agents who were involved in the survey feedback process, but future research should investigate these organizational actors in more depth. For example, qualitative data from experienced change agents could render important findings with regard to factors that inhibit or enhance the process from their perspective.

Overall, this body of literature provides much opportunity for the further integration of adjacent research areas, including other areas within industrial and organizational psychology, and for more theory-driven research. Whereas most records were published in industrial and organizational psychology journals, we also found some studies in journals of other disciplines, such as education or medicine (see Appendix). We propose that research in this area would benefit from more cross-disciplinary approaches. For example, research regarding physiological outcomes of survey feedback interventions might require the expertise of medical professionals.

Other disciplines, such as social psychology, could also provide useful insights for this research area. For example, the theory of planned behavior ( Ajzen, 2002 ) or control theory ( Carver and Scheier, 1982 ) could help explain certain behaviors during the employee survey follow-up discussions and render important findings for these processes. Applying other behavioral theories to the employee survey context, such as goal setting theory ( Locke and Latham, 1990 ) or the fundamental attribution error ( Ross, 1977 ), could also yield important findings.

Due to the applied nature of employee survey research, using experimental designs, specifically randomized controlled trials, can be challenging. Nevertheless, we believe such designs would be useful to examine the factors named above in more detail, as they could aid in systematically testing different process variables that are relevant for the employee survey follow-up. Also, the differing time intervals between the survey, receiving feedback, and action planning should be examined, especially in light of the growing popularity of pulse surveys ( Welbourne, 2016 ). However, natural experiments can also render important findings regarding, for example, resources, as deficits in such resources might not become exposed unless natural settings are studied.

Overall, research on this topic seems to have almost come to a halt. Out of 53 studies, 47 (∼90%) were published before 2010, that is, over 10 years ago. However, with increasing digitalization and the influx of new tools and ways of collaborating at the workplace, we require more research in this area to meet the newly emerging needs of organizations. This is especially relevant in light of the ongoing COVID-19 pandemic, which has started to change everyday life at work ( Allen et al., 2020 ; Rudolph et al., 2021 ).

Even though leaders can talk to their subordinates regularly, even daily, the employee survey and its follow-up remain an important communication forum. Generally, the results of this review suggest that the employee survey follow-up can lead to a variety of benefits for and improvements in organizations, but not all factors that can support or inhibit this process have been sufficiently explored. The literature yields many important findings for practitioners regarding the implementation and effectiveness of the process, but some research gaps remain. Hence, future research in this area should focus more on the relevant process variables and the organizational actors involved, especially leaders, who function as the main change agents in this data-based approach to OD.

Data Availability Statement

Author Contributions

L-AH and HZ contributed to conception and planning of the systematic review. L-AH performed the literature searches, organized the data, and wrote the first draft of the manuscript. Both authors contributed to manuscript revision, read, and approved the submitted version.

Author Disclaimer

The results, opinions, and conclusions expressed in this paper are not necessarily those of Volkswagen Aktiengesellschaft.

Conflict of Interest

L-AH was employed by the company Volkswagen AG. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1 The protocol can be found here: https://osf.io/y5be9/?view_only=f0ca973da2334db1b504291318b7c402

2 The list of all references can be found here: https://osf.io/y5be9/?view_only=f0ca973da2334db1b504291318b7c402

We acknowledge support from Leipzig University for Open Access Publishing.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.801073/full#supplementary-material

  • Adams J., Sherwood J. J. (1979). An evaluation of organizational effectiveness: an appraisal of how army internal consultants use survey feedback in a military setting. Group Organ. Stud. 4, 170–182. doi: 10.1177/105960117900400205
  • Aguinis H., Gottfredson R. K., Joo H. (2012). Delivering effective performance feedback: the strengths-based approach. Bus. Horiz. 55, 105–111. doi: 10.1016/j.bushor.2011.10.004
  • Ajzen I. (2002). Residual effects of past behavior on later behavior: habituation and reasoned action perspectives. Pers. Soc. Psychol. Rev. 6, 107–122. doi: 10.1207/S15327957PSPR0602_02
  • Alden J. (2007). Surveying attitudes: questionnaires versus opinionnaires. Perform. Improv. 46, 42–48. doi: 10.1002/pfi.141
  • Alderfer C. P., Ferriss R. (1972). “Understanding the impact of survey feedback,” in The Social Technology of Organization Development, eds Burke W. W., Hornstein H. A. (Bethel, ME: National Training Laboratories), 234–243.
  • Alderfer C. P., Holbrook J. (1973). A new design for survey feedback. Educ. Urban Soc. 5, 437–464. doi: 10.1177/001312457300500405
  • Allen J. B., Jain S., Church A. H. (2020). Using a pulse survey approach to drive organizational change. Organ. Dev. Rev. 52, 62–68.
  • Amba-Rao S. (1989). Survey feedback in a small manufacturing firm: an application. Organ. Dev. J. 7, 92–100.
  • Anderzén I., Arnetz B. B. (2005). The impact of a prospective survey-based workplace intervention program on employee health, biologic stress markers, and organizational productivity. J. Occup. Environ. Med. 47, 671–682. doi: 10.1097/01.jom.0000167259.03247.1e
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA publications and communications board task force report. Am. Psychol. 73, 3–25. doi: 10.1037/amp0000191
  • Atwater L. E., Brett J. F. (2005). Antecedents and consequences of reactions to developmental 360° feedback. J. Vocat. Behav. 66, 532–548. doi: 10.1016/j.jvb.2004.05.003
  • Atwater L. E., Brett J. F., Charles A. C. (2007). Multisource feedback: lessons learned and implications for practice. Hum. Resour. Manag. 46, 285–307. doi: 10.1002/hrm.20161
  • Atwater L. E., Waldman D. A., Atwater D., Cartier P. (2000). An upward feedback field experiment: supervisors’ cynicism, reactions, and commitment to subordinates. Pers. Psychol. 53, 275–297. doi: 10.1111/j.1744-6570.2000.tb00202.x
  • Baker R. G., King H., MacDonald J. L., Horbar J. D. (2003). Using organizational assessment surveys for improvement in neonatal intensive care. Pediatrics 111, 419–425. doi: 10.1542/peds.111.SE1.e419
  • Beer M., Walton A. E. (1987). Organization change and development. Annu. Rev. Psychol. 38, 339–367. doi: 10.1146/annurev.ps.38.020187.002011
  • Besieux T. (2017). Why I hate feedback: anchoring effective feedback within organizations. Bus. Horiz. 60, 435–439. doi: 10.1016/j.bushor.2017.03.001
  • Björklund C., Grahn A., Jensen I., Bergström G. (2007). Does survey feedback enhance the psychosocial work environment and decrease sick leave? Eur. J. Work Organ. Psychol. 16, 76–93. doi: 10.1080/13594320601112169
  • Bordia P., Restubog S. L. D., Jimmieson N. L., Irmer B. E. (2011). Haunted by the past: effects of poor change management history on employee attitudes and turnover. Group Organ. Manag. 36, 191–222. doi: 10.1177/1059601110392990
  • Borg I., Zuell C. (2012). Write-in comments in employee surveys. Int. J. Manpow. 33, 206–220. doi: 10.1108/01437721211225453
  • Born D., Mathieu J. (1996). Differential effects of survey-guided feedback. Group Organ. Manag. 21, 388–403. doi: 10.1177/1059601196214002
  • Bowers D. G. (1973). OD techniques and their results in 23 organizations: the Michigan ICL study. J. Appl. Behav. Sci. 9, 21–43. doi: 10.1177/002188637300900103
  • Bowers D. G., Hausser D. L. (1977). Work group types and intervention effects in organizational development. Adm. Sci. Q. 22, 76–94. doi: 10.2307/2391747
  • Bracken D. W., Timmreck C. W., Fleenor J. W., Lynn S. (2001). 360 feedback from another angle. Hum. Resour. Manag. 40, 3–20. doi: 10.1002/hrm.4012
  • Bramer W. M., Giustini D., Kramer B. M. R. (2016). Comparing the coverage, recall, and precision of searches for 120 systematic reviews in Embase, MEDLINE, and Google Scholar: a prospective study. Syst. Rev. 5, 1–7. doi: 10.1186/s13643-016-0215-7
  • Breevaart K., Bakker A. B., Demerouti E., Derks D. (2016). Who takes the lead? A multi-source diary study on leadership, work engagement, and job performance. J. Organ. Behav. 37, 309–325. doi: 10.1002/job.2041
  • Brown L. D. (1972). “Research action”: organizational feedback, understanding, and change. J. Appl. Behav. Sci. 8, 697–711. doi: 10.1177/002188637200800606
  • Brown M. I. (2021). Does action planning create more harm than good? Common challenges in the practice of action planning after employee surveys. J. Appl. Behav. Sci. doi: 10.1177/00218863211007555 [Epub ahead of print].
  • Bungard W., Müller K., Niethammer C. (eds). (2007). Mitarbeiterbefragung - Was Dann …? MAB und Folgeprozesse Erfolgreich Gestalten [Employee Surveys – and Then What …? Successfully Designing Employee Surveys and the Follow-Up Process]. Heidelberg: Springer Medizin Verlag. doi: 10.1007/978-3-540-47841-6
  • Burke W. W., Coruzzi C. A., Church A. H. (1996). “The organizational survey as an intervention for change,” in Organizational Surveys: Tools for Assessment and Change, ed. Kraut A. I. (San Francisco, CA: Jossey-Bass), 41–66.
  • Burke W. W., Litwin G. H. (1992). A causal model of organizational performance and change. J. Manag. 18, 523–545. doi: 10.1177/014920639201800306
  • Carver C. S., Scheier M. F. (1982). Control theory: a useful conceptual framework for personality–social, clinical, and health psychology. Psychol. Bull. 92, 111–135. doi: 10.1037/0033-2909.92.1.111
  • Cattermole G., Johnson J., Roberts K. (2013). Employee engagement welcomes the dawn of an empowerment culture. Strateg. HR Rev. 12, 250–254. doi: 10.1108/shr-04-2013-0039
  • Chesler M., Flanders M. (1967). Resistance to research and research utilization: the death and life of a feedback attempt. J. Appl. Behav. Sci. 3, 469–487. doi: 10.1177/002188636700300403
  • Church A. H., Golay L. M., Rotolo C. T., Tuller M. D., Shull A. C., Desrosiers E. I. (2012). “Without effort there can be no change: reexamining the impact of survey feedback and action planning on employee attitudes,” in Research in Organizational Change and Development, Vol. 20, eds Woodman R. W., Pasmore W. A., Rami Shani A. B. (Bingley: Emerald Group Publishing), 223–264.
  • Church A. H., Margiloff A., Coruzzi C. (1995). Using surveys for change: an applied example in a pharmaceuticals organization. Leadersh. Organ. Dev. J. 16, 3–11. doi: 10.1108/01437739510089049
  • Church A. H., Oliver D. H. (2006). “The importance of taking action, not just sharing survey feedback,” in Getting Action From Organizational Surveys: New Concepts, Technologies, and Applications, ed. Kraut A. I. (San Francisco, CA: Jossey-Bass), 102–130.
  • Church A. H., Waclawski J. (2017). Designing and Using Organizational Surveys. London: Routledge.
  • Conlon E. J., Short L. O. (1984). An empirical examination of survey feedback as an organizational change device. Group Organ. Stud. 9, 399–416. doi: 10.5465/ambpp.1983.4976350
  • Conway E., Monks K. (2011). Change from below: the role of middle managers in mediating paradoxical change. Hum. Resour. Manag. J. 21, 190–203. doi: 10.1111/j.1748-8583.2010.00135.x
  • Costello J., Clarke C., Gravely G., D’Agostino-Rose D., Puopolo R. (2011). Working together to build a respectful workplace: transforming OR culture. AORN J. 93, 115–126. doi: 10.1016/j.aorn.2010.05.030
  • Cucina J. M., Walmsey P. T., Gast I. F., Martin N. R., Curtin P. (2017). Survey key driver analysis: are we driving down the right road? Ind. Organ. Psychol. 10, 234–257. doi: 10.1017/iop.2016.97
  • Daniels K. (2018). Guidance on conducting and reviewing systematic reviews (and meta-analyses) in work and organizational psychology. Eur. J. Work Organ. Psychol. 28, 1–10. doi: 10.1080/1359432X.2018.1547708
  • De Waal A. (2014). The employee survey: benefits, problems in practice, and the relation with the high performance organization. Strateg. HR Rev. 13, 227–232. doi: 10.1108/SHR-07-2014-0041
  • Demerouti E., Bakker A. B., Nachreiner F., Schaufeli W. B. (2001). The job demands-resources model of burnout. J. Appl. Psychol. 86, 499–512. doi: 10.1037//0021-9010.86.3.499
  • DeNisi A. S., Kluger A. N. (2000). Feedback effectiveness: can 360-degree appraisals be improved? Acad. Manag. Exec. 14, 129–139. doi: 10.5465/ame.2000.2909845
  • Dillman D. A., Smyth J. D., Christian L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th Edn. Hoboken, NJ: John Wiley & Sons, Inc.
  • Dodd W. E., Pesci M. L. (1977). Managing morale through survey feedback. Bus. Horiz. 20, 36–45. doi: 10.1016/0007-6813(77)90069-6
  • Duncan R. (1979). What is the right organization structure? Decision tree analysis provides the answer. Organ. Dyn. 7, 59–80. doi: 10.1016/0090-2616(79)90027-5
  • Eklöf M., Hagberg M. (2006). Are simple feedback interventions involving workplace data associated with better working environment and health? A cluster randomized controlled study among Swedish VDU workers. Appl. Ergon. 37, 201–210. doi: 10.1016/j.apergo.2005.04.003
  • Eklöf M., Hagberg M., Toomingas A., Tornqvist E. W. (2004). Feedback of workplace data to individual workers, workgroups or supervisors as a way to stimulate working environment activity: a cluster randomized controlled study. Int. Arch. Occup. Environ. Health 77 505–514. 10.1007/s00420-004-0531-4 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Elo A.-L., Leppänen A., Sillanpää P. (1998). Applicability of survey feedback for an occupational health method in stress management. Occup. Med. 48 181–188. 10.1093/occmed/48.3.181 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Falletta S. V., Combs W. (2002). “ Surveys as a tool for organization development and change ,” in Organization Development: A Data-Driven Approach to Organizational Change , eds Waclawski J., Church A. H. (Hoboken, NJ: John Wiley & Sons; ), 78–101. [ Google Scholar ]
  • Feather K. (2008). Helping HR to measure up: arming the “soft” function with hard metrics. Strateg. HR Rev. 7 28–33. 10.1108/14754390810847531 [ CrossRef ] [ Google Scholar ]
  • Fedor D. B., Eder R. W., Buckley M. R. (1989). The contributory effects of supervisor intentions on subordinate feedback responses. Organ. Behav. Hum. Decis. Process. 44 396–414. 10.1016/0749-5978(89)90016-2 [ CrossRef ] [ Google Scholar ]
  • Foster C. A., Law M. R. F. (2006). How many perspectives provide a compass? Differentiating 360-degree feedback and multi-source feedback. Int. J. Sel. Assess. 14 288–291. 10.1111/j.1468-2389.2006.00347.x [ CrossRef ] [ Google Scholar ]
  • Fox K. E., Johnson S. T., Berkman L. F., Sianoja M., Soh Y., Kubzansky L. D., et al. (2021). Organisational- and group-level workplace interventions and their effect on multiple domains of worker well-being: a systematic review . Work Stress. 10.1080/02678373.2021.1969476 [Epub ahead of print]. [ CrossRef ] [ Google Scholar ]
  • Fraser K. J., Leach D. J., Webb S. (2009). Employee surveys: guidance to facilitate effective action. Eur. Work Organ. Psychol. Pract. 3 16–23. [ Google Scholar ]
  • Frazier M. L., Bowler W. M. (2015). Voice climate, supervisor undermining, and work outcomes. J. Manag. 41 841–863. 10.1177/0149206311434533 [ CrossRef ] [ Google Scholar ]
  • Friedlander F., Brown L. D. (1974). Organization development. Annu. Rev. Psychol. 1 313–341. 10.1146/annurev.ps.25.020174.001525 [ CrossRef ] [ Google Scholar ]
  • Gable S. A., Chyung S. Y., Marker A., Winiecki D. (2010). How should organizational leaders use employee engagement survey data? Perform. Improv. 49 17–25. 10.1002/pfi.20140 [ CrossRef ] [ Google Scholar ]
  • Gavin J. F., Krois P. A. (1983). Content and process of survey feedback sessions and their relation to survey responses: an initial study. Group Organ. Stud. 8 221–247. 10.1177/105960118300800208 [ CrossRef ] [ Google Scholar ]
  • Gavin J. F., McPhail S. M. (1978). Intervention and evaluation: a proactive team approach to OD. J. Appl. Behav. Sci. 14 175–194. 10.1177/002188637801400203 [ CrossRef ] [ Google Scholar ]
  • Gehlbach H., Robinson C. D., Finefter-Rosenbluh I., Benshoof C., Schneider J. (2018). Questionnaires as interventions: can taking a survey increase teachers’ openness to student feedback surveys? Educ. Psychol. 38 350–367. 10.1080/01443410.2017.1349876 [ CrossRef ] [ Google Scholar ]
  • Goldberg B., Gordon G. G. (1978). Designing attitude surveys for management action. Pers. J. 57 546–549. [ Google Scholar ]
  • Griffin M. A., Hart P. M., Wilson-Evered E. (2000). “ Using employee opinion surveys to improve organizational health ,” in Healthy and Productive Work: An International Perspective , eds Murphy L. R., Cooper C. L. (London: Taylor & Francis; ), 15–36. [ Google Scholar ]
  • Guzzo R. A., Jette R. D., Katzell R. A. (1985). The effects of psychologically based intervention programs on worker productivity: a meta-analysis. Pers. Psychol. 38 275–291. 10.1111/j.1744-6570.1985.tb00547.x [ CrossRef ] [ Google Scholar ]
  • Hand H. H., Estafen B. D., Sims H. P., Jr. (1975). How effective is data survey and feedback as a technique of organization development? An experiment. J. Appl. Behav. Sci. 11 333–347. 10.1177/002188637501100306 [ CrossRef ] [ Google Scholar ]
  • Hannum K. M. (2007). Measurement equivalence of 360 ° assessment data: are different raters rating the same constructs. Int. J. Sel. Assess. 27 293–301. 10.1111/j.1468-2389.2007.00389.x [ CrossRef ] [ Google Scholar ]
  • Hartley J. (2001). Employee surveys–strategic aid or hand-grenade for organisational and cultural change? Int. J. Public Sect. Manag. 14 184–204. 10.1108/09513550110390846 [ CrossRef ] [ Google Scholar ]
  • Hautaluoma J. E., Gavin J. F. (1975). Effects of organizational diagnosis and intervention on blue-collar “blues”. J. Appl. Behav. Sci. 11 475–496. 10.1177/002188637501100408 [ CrossRef ] [ Google Scholar ]
  • Hopkins D. (1982). Survey feedback as an organisation development intervention in educational settings: a review. Educ. Manag. Adm. 10 203–215. 10.1177/174114328201000304 [ CrossRef ] [ Google Scholar ]
  • Huebner L.-A., Zacher H. (2021). Effects of action planning after employee surveys . J. Pers. Psychol. 10.1027/1866-5888/a000285 Advance online publication [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hyland P. K., Woo V. A., Reeves D. W., Garrad L. (2017). In defense of responsible survey key driver analysis. Ind. Organ. Psychol. 10 277–283. 10.1017/iop.2017.19 [ CrossRef ] [ Google Scholar ]
  • Ilgen D. R., Fisher C. D., Taylor S. M. (1979). Consequences of individual feedback on behavior in organizations. J. Appl. Psychol. 64 349–371. 10.1037//0021-9010.64.4.349 [ CrossRef ] [ Google Scholar ]
  • Jacoby J., Mazursky D., Troutman T. (1984). When feedback is ignored: disutility of outcome feedback. J. Appl. Psychol. 69 531–545. 10.1037/0021-9010.69.3.531 [ CrossRef ] [ Google Scholar ]
  • Johnson J. W. (2017). Best practice recommendations for conducting key driver analysis. Ind. Organ. Psychol. 10 298–305. 10.1017/iop.2017.22 [ CrossRef ] [ Google Scholar ]
  • Jöns I. (2000). “ Supervisors as moderators of survey feedback and change processes in teams ,” in Innovative Theories, Tools and Practices in Work and Organizational Psychology , eds Vartiainen M., Avallone F., Anderson N. (Toronto, ON: Hogrefe & Huber; ), 155–171. [ Google Scholar ]
  • Jury C., Goh H. E., Olsen S. P., Elston J., Phillips J. (2009a). Actions and results from the Queensland health “better workplaces” staff opinion survey. Aust. Health Rev. 33 371–376. 10.1071/AH090371 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Jury C., Machin M. A., Phillips J., Goh H. E., Olsen S. P., Patrick J. (2009b). Developing and implementing an action-oriented staff survey: Queensland health and the “better workplaces” initiative. Australian Health Review 33 365–370. 10.1071/AH090365 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Katz D., Kahn R. L. (eds). (1978). The Social Psychology of Organizations , Vol. 2 . New York, NY: Wiley. [ Google Scholar ]
  • Keiser N. L., Payne S. C. (2019). Are employee surveys biased? Impression management as a response bias in workplace safety constructs. Saf. Sci. 118 453–465. 10.1016/j.ssci.2019.05.051 [ CrossRef ] [ Google Scholar ]
  • Kelley K., Preacher K. J. (2012). On effect size. Psychol . Methods 17 , 137–152. 10.1037/a0028086 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Klein C., Synovec R., Zhang H., Lovato C., Howes J., Feinzig S. (2017). Survey key driver analysis: perhaps the right question is, “are we there yet?”. Ind. Organ. Psychol. 10 283–290. 10.1017/iop.2017.20 [ CrossRef ] [ Google Scholar ]
  • Klein S., Kraut A., Wolfson A. (1971). Employee reactions to attitude survey feedback: a study of the impact of structure and process. Adm. Sci. Q. 16 497–514. 10.2307/2391769 [ CrossRef ] [ Google Scholar ]
  • Kluger A. N., DeNisi A. (1996). The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 119 254–284. 10.1037//0033-2909.119.2.254 [ CrossRef ] [ Google Scholar ]
  • Knapp P., Mujtaba B. (2010). Designing, administering, and utilizing an employee attitude survey. J. Behav. Stud. Bus. 2 1–14. [ Google Scholar ]
  • Kraut A. I. (ed.) (2006). Getting Action from Organizational Surveys: New Concepts, Technologies, and Applications. San Francisco, CA: Jossey-Bass. [ Google Scholar ]
  • La Grange A., Geldenhuys D. J. (2008). The impact of feedback on changing organisational culture. South. Afr. Bus. Rev. 12 37–66. [ Google Scholar ]
  • Landis R. S., James L. R., Lance C. E., Pierce C. A., Rogelberg S. G. (2014). When is nothing something? Editorial for the null results special issue of journal of business and psychology. J. Bus. Psychol. 29 163–167. 10.1007/s10869-014-9347-8 [ CrossRef ] [ Google Scholar ]
  • Linke R. (2018). Mitarbeiterbefragungen Optimieren: Von der Befragung zum Wirksamen Management-Instrument [Optimizing Employee Surveys: From the Survey to the Effective Management Tool]. Wiesbaden: Springer Gabler. [ Google Scholar ]
  • Locke E. A., Latham G. P. (1990). A Theory of Goal Setting & Task Performance. Englewood Cliffs, NJ: Prentice Hall, Inc. [ Google Scholar ]
  • Macey W. H., Daum D. L. (2017). SKDA in context. Ind. Organ. Psychol. 10 268–277. 10.1017/iop.2017.18 [ CrossRef ] [ Google Scholar ]
  • Macey W. H., Fink A. A. (2020). “ Surveys and sensing: realizing the promise of listening to employees ,” in Employee Surveys and Sensing: Challenges and Opportunities , eds Macey W. H., Fink A. A. (Oxford: Oxford University Press; ), 1–22. [ Google Scholar ]
  • Mann F., Likert R. (1952). The need for research on the communication of research results. Hum. Organ. 11 15–19. [ Google Scholar ]
  • Margulies N., Wright P. L., Scholl R. W. (1977). Organization development techniques: their impact on change. Group Organ. Stud. 2 428–448. 10.1177/105960117700200405 [ CrossRef ] [ Google Scholar ]
  • Miles M. B., Hornstein H. A., Callahan D. M., Calder P. H., Schiavo S. R. (1969). “ The consequence of survey feedback: theory and evaluation ,” in The Planning of Change , 2nd Edn, eds Bennis W. G., Benne K. D., Chin R. (New York, NY: Holt, Rinehart and Winston; ), 457–468. [ Google Scholar ]
  • Moher D., Shamseer L., Clarke M., Ghersi D., Liberati A., Petticrew M., et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 4 1–9. 10.1186/2046-4053-4-1 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mueller K., Straatmann T., Hattrup K., Jochum M. (2014). Effects of personalized versus generic implementation of an intra-organizational online survey on psychological anonymity and response behavior: a field experiment. J. Bus. Psychol. 29 169–181. 10.1007/s10869-012-9262-9 [ CrossRef ] [ Google Scholar ]
  • Nadler D. A. (1976). The use of feedback for organizational change: promises and pitfalls. Group Organ. Stud. 1 177–186. 10.1177/105960117600100205 [ CrossRef ] [ Google Scholar ]
  • Nadler D. A. (1979). The effects of feedback on task group behavior: a review of the experimental research. Organ. Behav. Hum. Perform. 23 309–338. 10.1016/0030-5073(79)90001-1 [ CrossRef ] [ Google Scholar ]
  • Nadler D. A. (1980). “ Using organizational assessment data for planned organizational change ,” in Organizational Assessment: Perspectives on the Measurement of Organizational Behavior and the Quality of Work Life , eds Lawler E. E., III, Nadler D., Cammann C. (New York, NY: Wiley; ), 72–90. [ Google Scholar ]
  • Nadler D. A. (1981). Managing organizational change: an integrative perspective. J. Appl. Behav. Sci. 17 191–211. 10.1177/002188638101700205 [ CrossRef ] [ Google Scholar ]
  • Nadler D. A., Cammann C., Mirvis P. H. (1980). Developing a feedback system for work units: a field experiment in structural change. J. Appl. Behav. Sci. 16 41–62. 10.1177/002188638001600105 [ CrossRef ] [ Google Scholar ]
  • Nadler D. A., Mirvis P., Cammann C. (1976). The ongoing feedback system: experimenting with a new managerial tool. Organ. Dyn. 4 63–80. 10.1016/0090-2616(76)90045-0 [ CrossRef ] [ Google Scholar ]
  • Nadler D. A., Tushman M. L. (1980). A model for diagnosing organizational behavior. Organ. Dyn. 9 35–51. 10.1016/0090-2616(80)90039-x [ CrossRef ] [ Google Scholar ]
  • Neuman G. A., Edwards J. E., Raju N. S. (1989). Organizational development interventions: a meta-analysis of their effects on satisfaction and other attitudes. Pers. Psychol. 42 461–489. 10.1111/j.1744-6570.1989.tb00665.x [ CrossRef ] [ Google Scholar ]
  • Nowack K. M., Mashihi S. (2012). Evidence-based answers to 15 questions about leveraging 360-degree feedback. Consult. Psychol. J. Pract. Res. 64 157–182. 10.1037/a0030011 [ CrossRef ] [ Google Scholar ]
  • Ostroff C. (1992). The relationship between satisfaction, attitudes, and performance: an organizational level analysis. J. Appl. Psychol. 77 963–974. 10.1037//0021-9010.77.6.963 [ CrossRef ] [ Google Scholar ]
  • Peter M. A. (1994). Making the hidden obvious. Management education through survey feedback. J. Nurs. Adm. 24 13–19. 10.1097/00005110-199406000-00006 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peter M. A., Lytle K. S., Swearengen P. (1997). Feedback to nurse managers about staff nurses’ perceptions of their jobs. Semin. Nurse Manag. 5 209–216. [ PubMed ] [ Google Scholar ]
  • Pichler S. (2012). The social context of performance appraisal and appraisal reactions: a meta-analysis. Hum. Resour. Manag. 51 709–732. 10.1002/hrm.21499 [ CrossRef ] [ Google Scholar ]
  • Porras J. I. (1979). The comparative impact of different OD techniques and intervention intensities. J. Appl. Behav. Sci. 15 156–178. 10.1177/002188637901500204 [ CrossRef ] [ Google Scholar ]
  • Porras J. I., Berg P. O. (1978). The impact of organization development. Acad. Manag. Rev. 3 249–266. 10.2307/257666 [ CrossRef ] [ Google Scholar ]
  • Porras J. I., Robertson P. J. (1992). “ Organizational development: theory, practice, and research ,” in Handbook of Industrial and Organizational Psychology , 3rd Edn, eds Dunnette M. D., Hough L. M. (Palo Alto, CA: Consulting Psychologists Press; ), 719–822. [ Google Scholar ]
  • Roberson M., Sundstrom E. (1990). Questionnaire design, return dates, and response favorableness in an employee attitude questionnaire. J. Appl. Psychol. 75 354–357. 10.1037/0021-9010.75.3.354 [ CrossRef ] [ Google Scholar ]
  • Rogelberg S. G., Luong A., Sederburg M. E., Cristol D. S. (2000). Employee attitude surveys: examining the attitudes of noncompliant employees. J. Appl. Psychol. 85 284–293. 10.1037/0021-9010.85.2.284 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rollins T. (1994). Turning employee survey results into high-impact improvements. Employ. Relat. Today 21 35–44. 10.1002/ert.3910210105 [ CrossRef ] [ Google Scholar ]
  • Ross L. (1977). “ The intuitive psychologist and his shortcomings: distortions in the attribution process ,” in Advances in Experimental Social Psychology , 10th Edn, ed. Berkowitz L. (New York, NY: Academic Press; ), 173–220. [ Google Scholar ]
  • Rotolo C. T., Price B. A., Fleck C. R., Smoak V. J., Jean V. (2017). Survey key driver analysis: our GPS to navigating employee attitudes. Ind. Organ. Psychol. Perspect. Sci. Pract. 10 306–313. 10.1017/iop.2017.23 [ CrossRef ] [ Google Scholar ]
  • Rudolph C. W., Allan B., Clark M., Hertel G., Hirschi A., Kunze F., et al. (2021). Pandemics: implications for research and practice in industrial and organizational psychology. Indus. Organ. Psychol. 14 1–35. 10.1017/iop.2020.48 [ CrossRef ] [ Google Scholar ]
  • Scherbaum C. A., Black J., Weiner S. P. (2017). With the right map, survey key driver analysis can help get organizations to the right destination. Indus. Organ. Psychol. 10 290–298. 10.1017/iop.2017.21 [ CrossRef ] [ Google Scholar ]
  • Siddaway A. P., Wood A. M., Hedges L. V. (2019). How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annu. Rev. Psychol. 70 747–770. 10.1146/annurev-psych-010418-102803 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Solinger O. N., Joireman J., Vantillborgh T., Balliet D. P. (2021). Change in unit-level job attitudes following strategic interventions: a meta-analysis of longitudinal studies. J. Organ. Behav. 42 964–986. 10.1002/job.2523 [ CrossRef ] [ Google Scholar ]
  • Solomon R. J. (1976). An examination of the relationship between a survey feedback O.D. technique and the work environment. Pers. Psychol. 29 583–594. 10.1111/j.1744-6570.1976.tb02081.x [ CrossRef ] [ Google Scholar ]
  • Sonnentag S., Frese M. (2002). “ Performance concepts and performance theory ,” in Psychological Management of Individual Performance , ed. Sonnentag S. (Chichester: John Wiley & Sons; ), 3–25. [ Google Scholar ]
  • Sugheir J., Coco M., Kaupins G. (2011). Perceptions of organizational survey within employee engagement efforts. Int. J. Bus. Public Adm. 8 48–61. [ Google Scholar ]
  • Swanson R. A., Zuber J. A. (1996). A case study of a failed organization development intervention rooted in the employee survey process. Perform. Improv. Q. 9 42–56. 10.1111/j.1937-8327.1996.tb00719.x [ CrossRef ] [ Google Scholar ]
  • Thompson L. F., Surface E. A. (2009). Promoting favorable attitudes toward personnel surveys: the role of follow-up. Mil. Psychol. 21 139–161. 10.1080/08995600902768693 [ CrossRef ] [ Google Scholar ]
  • Timmreck C. W., Bracken D. W. (1997). Multisource feedback: a study of its use in decision making. Employ. Relat. Today 24 21–27. 10.1002/ert.3910240104 [ CrossRef ] [ Google Scholar ]
  • Tomlinson G. (2010). Building a culture of high employee engagement. Strateg. HR Rev. 9 25–31. 10.1108/14754391011040046 [ CrossRef ] [ Google Scholar ]
  • Truxillo D. M., Cadiz D. M., Hammer L. B. (2015). Supporting the aging workforce: a review and recommendations for workplace intervention research. Annu. Rev. Organ. Psychol. Organ. Behav. 2 351–381. 10.1146/annurev-orgpsych-032414-111435 [ CrossRef ] [ Google Scholar ]
  • van Dierendonck D., Haynes C., Borrill C., Stride C. (2007). Effects of upward feedback on leadership behaviour toward subordinates. J. Manag. Dev. 26 228–238. 10.1108/02621710710732137 [ CrossRef ] [ Google Scholar ]
  • Vukotich G. (2014). 360 ° feedback: ready, fire, aim - issues with improper implementation. Perform. Improv. 53 30–35. 10.1002/pfi.21390 [ CrossRef ] [ Google Scholar ]
  • Wang M., Beal D. J., Chan D., Newman D. A., Vancouver J. B., Vandenberg R. J. (2017). Longitudinal research: a panel discussion on conceptual issues, research design, and statistical techniques. Work Aging Retire. 3 1–24. 10.1093/workar/waw033 [ CrossRef ] [ Google Scholar ]
  • Ward P. (2008). Reinventing the employee survey at Fujitsu Services. Strateg. Commun. Manag. 12 32–35. [ Google Scholar ]
  • Welbourne T. M. (2016). The potential of pulse surveys: transforming surveys into leadership tools. Employ. Relat. Today 43 33–39. 10.1002/ert.21548 [ CrossRef ] [ Google Scholar ]
  • Wiley J. (2012). Achieving change through a best practice employee survey. Strateg. HR Rev. 11 265–271. 10.1108/14754391211248675 [ CrossRef ] [ Google Scholar ]

Poll: Election interest hits new low in tight Biden-Trump race

The share of voters who say they have high interest in the 2024 election has hit a nearly 20-year low at this point in a presidential race, according to the latest national NBC News poll, with majorities holding negative views of both President Joe Biden and former President Donald Trump.

The poll also shows Biden trimming Trump’s previous lead to just 2 points in a head-to-head contest, an improvement within the margin of error compared to the previous survey, as Biden bests Trump on the issues of abortion and uniting the country, while Trump is ahead on competency and dealing with inflation.

And it finds inflation and immigration topping the list of most important issues facing the country, as just one-third of voters give Biden credit for an improving economy.

But what also stands out in the survey is how the low voter interest and the independent candidacy of Robert F. Kennedy Jr. could scramble what has been a stable presidential contest with more than six months until Election Day. While Trump holds a 2-point edge over Biden head to head, Biden leads Trump by 2 points in a five-way ballot test including Kennedy and other third-party candidates.

“I don’t think Biden has done much as a president. And if Trump gets elected, I just feel like it’s going to be the same thing as it was before Biden got elected,” said poll respondent Devin Fletcher, 37, of Wayne, Michigan, a Democrat who said he’s still voting for Biden.

“I just don’t feel like I have a candidate that I’m excited to vote for,” Fletcher added.

Another poll respondent from New Jersey, who declined to provide her name and voted for Biden in 2020, said she wouldn’t be voting in November.

“Our candidates are horrible. I have no interest in voting for Biden. He did nothing. And I absolutely will not vote for Trump,” she said.

Democratic pollster Jeff Horwitt of Hart Research Associates, who conducted the survey with Republican pollster Bill McInturff of Public Opinion Strategies, said, “Americans don’t agree on much these days, but nothing unites the country more than voters’ desire to tune this election out.”

The poll was conducted April 12-16, during yet another turbulent time in American politics, including the beginning of Trump’s criminal trial in New York and new attacks and heightened tensions in the Middle East.

According to the poll, 64% of registered voters say they have high levels of interest in November’s election — registering either a “9” or a “10” on a 10-point scale of interest.

That’s lower than what the NBC News poll showed at this time in the 2008 (74%), 2012 (67%), 2016 (69%) and 2020 (77%) presidential contests.

The question dates to the 2008 election cycle. The lowest level of high election interest in the poll during a presidential cycle was in March 2012 — at 59%. But it quickly ticked up in the next survey.

This election cycle, high interest has been both low and relatively flat for months, according to the poll.

McInturff, the Republican pollster, says the high level of interest in the poll has “always been a signal for the level of turnout” for a presidential contest.

“It makes it very hard for us to predict turnout this far in advance of November, but every signal is turnout will be a lower percentage of eligible voters than in 2020,” he said.

By party, the current poll shows 70% of self-identified Republicans saying they have high interest in the coming election, compared with 65% of Democrats who say so.

Independents are at 48%, while only 36% of voters ages 18 to 34 rate themselves as highly interested in the election.

“They just aren’t low interest,” McInturff said of young voters. “They are off-the-charts low.”

NBC News poll: Frequently asked questions

Professional pollsters at a Democratic polling firm (Hart Research Associates) and a Republican firm (Public Opinion Strategies) have worked together to conduct and operate this poll since 1989. (Coldwater Corporation served as the Republican firm from 1989-2004.)

The polling firms employ a call center, where live interviewers speak by cell phone and landline telephone with a cross section of (usually) 1,000 respondents. The respondents are randomly selected from national lists of households and cell numbers. Respondents are asked for by name, starting with the youngest male adult or female adult in the household.

One of the common questions that critics ask of polls is, “I wasn’t interviewed, so why should this poll matter?” By interviewing 1,000 respondents and applying minimal weights based on race, ethnicity, age, gender, education and the 2020 presidential vote, the poll achieves a representative sample of the nation at large – with a margin of error at a 95% confidence level.
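
The weighting step mentioned above is commonly implemented with raking (iterative proportional fitting), which repeatedly rescales respondent weights until the weighted sample matches population benchmarks on each demographic variable. The short Python sketch below illustrates only the general idea: the respondents, categories and target shares are invented for illustration, and this is not the procedure Hart Research or Public Opinion Strategies actually uses.

    # Minimal raking (iterative proportional fitting) sketch with made-up data.
    # Nothing here reflects the NBC News poll's actual targets or categories.
    from collections import defaultdict

    respondents = [
        {"gender": "M", "age": "18-34"},
        {"gender": "M", "age": "35+"},
        {"gender": "M", "age": "35+"},
        {"gender": "F", "age": "18-34"},
        {"gender": "F", "age": "35+"},
        {"gender": "F", "age": "35+"},
    ]

    # Hypothetical population benchmarks (shares sum to 1 within each variable).
    targets = {
        "gender": {"M": 0.49, "F": 0.51},
        "age": {"18-34": 0.30, "35+": 0.70},
    }

    weights = [1.0] * len(respondents)

    for _ in range(50):  # cycle over the margins until the weighted shares settle
        for var, target in targets.items():
            totals = defaultdict(float)
            for person, w in zip(respondents, weights):
                totals[person[var]] += w
            grand = sum(totals.values())
            # Scale everyone in a category by (target share / current weighted share).
            for i, person in enumerate(respondents):
                current_share = totals[person[var]] / grand
                weights[i] *= target[person[var]] / current_share

    # Normalize so the weights average to 1, as survey weights conventionally do.
    mean_w = sum(weights) / len(weights)
    weights = [w / mean_w for w in weights]
    print([round(w, 3) for w in weights])

In practice pollsters rake over more variables at once (race, ethnicity, age, gender, education, past vote) and then report a design-adjusted margin of error, but the mechanics are the same rescale-and-repeat loop.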

NBC News editors and reporters — along with the pollsters at Hart Research and Public Opinion Strategies — work together to formulate questions that capture the news and current events NBC is trying to gauge. Both Hart Research and Public Opinion Strategies work to ensure the language and placement of the questions are as neutral as possible.

Biden trims Trump’s lead

The poll also finds Trump narrowly ahead of Biden by 2 points among registered voters in a head-to-head matchup, 46% to 44% — down from Trump’s 5-point advantage in January, 47% to 42%.

The movement, which is within the poll’s margin of error of plus or minus 3.1 percentage points, is consistent with what other national polls have found in the Trump-Biden race.
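
For readers who wonder where a figure like plus or minus 3.1 percentage points comes from, the arithmetic below is a minimal check using the textbook margin-of-error formula for a simple random sample of 1,000 respondents at a 95% confidence level and the most conservative proportion of 50%; the published figure may additionally reflect design effects from weighting, which this sketch ignores.

    import math

    # Margin of error for a simple random sample, worst case p = 0.5.
    n = 1000   # respondents (matches the poll's sample size)
    z = 1.96   # z-score for ~95% confidence
    p = 0.5    # most conservative assumed proportion

    moe = z * math.sqrt(p * (1 - p) / n) * 100
    print(f"margin of error: +/-{moe:.1f} points")   # about +/-3.1

    # The 46%-44% head-to-head gap is 2 points, smaller than the margin,
    # which is why the shift is described as within the margin of error.
    print(2 < moe)   # True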

Trump’s biggest advantages are among men (53% to 37%), white voters (54% to 37%) and white voters without college degrees (65% to 25%).

Biden’s top advantages are among Black voters (71% to 13%), women (50% to 39%) and Latinos (49% to 39%).

The poll shows the two candidates are essentially tied among independents (Biden 36%, Trump 34%) and voters ages 18-34 (Biden 44%, Trump 43%). One of the big polling mysteries this cycle is whether young voters have defected from Biden (as the NBC News poll has found over multiple surveys) or whether Democrats have maintained their advantage among that demographic.

When the ballot is expanded to five named candidates, Biden takes a 2-point lead over Trump: Biden 39%, Trump 37%, Kennedy 13%, Jill Stein 3% and Cornel West 2%.

Again, the result between Biden and Trump is within the poll’s margin of error.

Notably, the poll finds a greater share of Trump voters from the head-to-head matchup supporting Kennedy in the expanded ballot compared with Biden voters, different from the results of some other surveys.

(Read more here about how Kennedy's candidacy affects the 2024 race, according to the poll.)

The president’s approval rating ticks up to 42%

In addition, the poll found 42% of registered voters approving of Biden’s overall job performance — up 5 points since January’s NBC News poll, which found Biden at the lowest point of his presidency.

Fifty-six percent of voters say they disapprove of the job he has done, which is down 4 points from January.

Biden’s gains over the past few months have come from key parts of his 2020 base, especially among Democrats and Black voters. But he continues to hold low ratings among Latinos (40% approval), young voters (37%) and independents (36%).

“The data across this poll show that Joe Biden has begun to gain some ground in rebuilding his coalition from 2020,” said Horwitt, the Democratic pollster. “The question is whether he can build upon this momentum and make inroads with the groups of voters that still are holding back support.”

But McInturff, the GOP pollster, points out that the only recent presidents who lost re-election had approval ratings higher than Biden’s at this point in the election cycle: George H.W. Bush (43%) and Trump (46%).

“President Biden has a precarious hold on the presidency and is in a difficult position as it relates to his re-election,” McInturff said.

On the issues, 39% of voters say they approve of Biden’s handling of the economy (up from 36% in January), 28% approve of his handling of border security and immigration, and just 27% approve of his handling of the Israel-Hamas war (down from 29% in January).

Voters gave Biden his highest issue rating on addressing student loan debt, with 44% approving of his handling of the issue, compared with 51% who say they disapprove.

Biden leads on abortion and unity; Trump leads on inflation and competency

The NBC News poll asked voters to determine which candidate they thought is better on several different issues and attributes.

Biden holds a 15-point advantage over Trump on dealing with the issue of abortion, and he is ahead by 9 points on having the ability to bring the country together — though that is down from his 24-point advantage on that issue in the September 2020 NBC News poll.

Trump, meanwhile, leads in having the ability to handle a crisis (by 4 points), in having a strong record of accomplishments (by 7 points), in being competent and effective (by 11 points), in having the necessary mental and physical health to be president (by 19 points) and in dealing with inflation and the cost of living (by 22 points).

Inflation, immigration are the top 2024 issues

Inflation and the cost of living top the list of issues in the poll, with 23% of voters saying they’re the most important issue facing the country.

The other top issue is immigration and the situation at the border (22%) — followed by threats to democracy (16%), jobs and the economy (11%), abortion (6%) and health care (6%).

In addition, 63% of voters say their families’ incomes are falling behind the cost of living — essentially unchanged from what the poll found in 2022 and 2023.

And 53% of voters say the country’s economy hasn’t improved, compared with 33% who say that it has improved and that Biden deserves some credit for it and another 8% who agree the economy has improved but don’t give him credit for it.

“If I look back to when I had all three of my children in the house — we only have one child left in the house now, and we’re spending more now than what we did when we had a family of five,” said poll respondent Art Fales, 45, of Florida, who says he’s most likely voting for Trump.

But on a separate question — is there an issue so important that you’ll vote for or against a candidate solely on that basis? — the top responses are protecting democracy and constitutional rights (28%), immigration and border security (20%) and abortion (19%).

Indeed, 30% of Democrats, 29% of young voters and 27% of women say they are single-issue voters on abortion.

“I have a right to what I do with my body,” said poll respondent Amanda Willis, 28, of Louisiana, who said she’s voting for Biden. “And I don’t believe that other people should have the ability to determine that.”

Other poll findings

  • With Trump’s first criminal trial underway, 50% of voters say he is being held to the same standard as anyone else when it comes to his multiple legal challenges. That compares with 43% who believe he’s being unfairly targeted in the trials. 
  • 52% of voters have unfavorable views of Biden, while 53% share the same views of Trump.
  • And Democrats and Republicans are essentially tied in congressional preference, with 47% of voters preferring Republicans to control Congress and 46% wanting Democrats in charge. Republicans held a 4-point lead on this question in January.

The NBC News poll of 1,000 registered voters nationwide — 891 contacted via cellphone — was conducted April 12-16, and it has an overall margin of error of plus or minus 3.1 percentage points.

Mark Murray is a senior political editor at NBC News.

Sarah Dean is a 2024 NBC News campaign embed.

COMMENTS

  1. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  2. High-Impact Articles

    High-Impact Articles. Journal of Survey Statistics and Methodology, sponsored by the American Association for Public Opinion Research and the American Statistical Association, began publishing in 2013. Its objective is to publish cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data.

  3. Survey response rates: Trends and a validity assessment framework

    Survey methodology has been and continues to be a pervasively used data-collection method in social science research. To better understand the state of the science, we first analyze response-rate information reported in 1014 surveys described in 703 articles from 17 journals from 2010 to 2020.

  4. Journal of Survey Statistics and Methodology

    Why Submit to JSSAM? The Journal of Survey Statistics and Methodology is an international, high-impact journal sponsored by the American Association for Public Opinion Research (AAPOR) and the American Statistical Association. Published since 2013, the journal has quickly become a trusted source for a wide range of high quality research in the field.

  5. Advance articles

    Research Article 27 June 2023. Survey Consent to Administrative Data Linkage: Five Experiments on Wording and Format. Annette Jäckle and others. ... Effects of a Web-Mail Mode on Response Rates and Responses to a Care Experience Survey: Results of a Randomized Experiment

  6. Conducting Online Surveys

    Abstract. There is an established methodology for conducting survey research that aims to ensure rigorous research and robust outputs. With the advent of easy-to-use online survey platforms, however, the quality of survey studies has declined. This article summarizes the pros and cons of online surveys and emphasizes the key principles of ...

  7. (PDF) Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" (Check & Schutt, 2012, p. 160). This type of research ...

  8. A quick guide to survey research

    Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs ...

  9. Publishing survey research

    Publishing survey research. The questionnaire is the backbone of survey research for gathering primary data on attitudes, beliefs, and behaviors across populations, making it the primary tool for gauging public sentiment, including public trust in science. Nevertheless, it is not without limitations. As a tool, it has undergone refinement ...

  10. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  11. Designing, Conducting, and Reporting Survey Studies: A Primer for

    Burns et al., 2008 12. A guide for the design and conduct of self-administered surveys of clinicians. This guide includes statements on designing, conducting, and reporting web- and non-web-based surveys of clinicians' knowledge, attitude, and practice. The statements are based on a literature review, but not the Delphi method.

  12. Reducing respondents' perceptions of bias in survey research

    Survey research has become increasingly challenging. In many nations, response rates have continued a steady decline for decades, and the costs and time involved with collecting survey data have risen with it (Connelly et al., 2003; Curtin et al., 2005; Keeter et al., 2017). Still, social surveys are a cornerstone of social science research and are routinely used by the government and private ...

  13. The online survey as a qualitative research tool

    ABSTRACT. Fully qualitative surveys, which prioritise qualitative research values, and harness the rich potential of qualitative data, have much to offer qualitative researchers, especially given online delivery options. Yet the method remains underutilised, and there is little in the way of methodological discussion of qualitative surveys. Underutilisation and limited methodological ...

  14. Reporting Guidelines for Survey Research: An Analysis of ...

    Methods and Findings. We conducted a three-part project: (1) a systematic review of the literature (including "Instructions to Authors" from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of ...

  15. Survey Research

    Survey designs. Kerry Tanner, in Research Methods (Second Edition), 2018. Conclusion. Survey research designs remain pervasive in many fields. Surveys can appear deceptively simple and straightforward to implement. However valid results depend on the researcher having a clear understanding of the circumstances where their use is appropriate and the constraints on inference in interpreting and ...

  16. Survey Research: Definition, Examples and Methods

    Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization's eager to understand what their customers think ...

  17. Writing Survey Questions

    Writing Survey Questions. Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.

  18. Reporting Survey Based Studies

    CHOOSING A TARGET JOURNAL FOR SURVEY-BASED RESEARCH. Surveys can be published as original articles, brief reports or as a letter to the editor. Interestingly, most modern journals do not actively make mention of surveys in the instructions to the author. Thus, depending on the study design, the authors may choose the article category, cohort or ...

  19. Google Scholar

    Google Scholar provides a simple way to broadly search for scholarly literature. Search across a wide variety of disciplines and sources: articles, theses, books, abstracts and court opinions.

  20. A Survey of U.S Adults' Opinions about Conduct of a Nationwide ...

    Objectives A survey of a population-based sample of U.S adults was conducted to measure their attitudes about, and inform the design of the Precision Medicine Initiative's planned national cohort study. Methods An online survey was conducted by GfK between May and June of 2015. The influence of different consent models on willingness to share data was examined by randomizing participants to ...

  21. Behind the Numbers: Questioning Questionnaires

    1. The research community needs to become more aware of and open to issues related to interpretation, language, and communication when conducting or assessing the quality of a survey study. The idea of so much of social reality being readily measurable (or even straightforwardly reported in interview statements) needs to be critically addressed.

  22. Surveys

    U.S. News & World Report surveyed 2,000 U.S. adults about health care issues, including why and how often they go to the doctor, how they choose their doctors and why they choose to (or don't ...

  23. The Rise of Annoying Customer Satisfaction Surveys and Questionnaires

    In the past year, the firm has analyzed 1.6 billion survey responses. That's a 4% increase over the prior year — and responses for the first quarter of 2024 were 10% above what Qualtrics ...

  24. Americans are getting less sleep. The biggest burden falls on women

    A majority - 57% - now say they could use more sleep, which is a big jump from a decade ago. It's an acceleration of an ongoing trend, according to the survey. In 1942, 59% of Americans said ...

  25. A critical look at online survey or questionnaire-based research

    Online survey or questionnaire-based studies collect information from participants responding to the study link using internet-based communication technology (e.g. E-mail, online survey platform). There has been a growing interest among researchers for using internet-based data collection methods during the COVID-19 pandemic, also reflected in ...

  26. The Limitations of Online Surveys

    Online surveys are becoming increasingly popular. There were 1682 PubMed hits for "online survey" (search phrase entered with quotes) in 2015; this number increased to 1994 in 2016, 2425 in 2017, 2872 in 2018, and 3182 in 2019. On August 15, 2020, the number of hits for 2020 was already 2742; when annualized, this number projects to 4387.

  27. Continual Learning of Large Language Models: A Comprehensive Survey

    The recent success of large language models (LLMs) trained on static, pre-collected, general datasets has sparked numerous research directions and applications. One such direction addresses the non-trivial challenge of integrating pre-trained LLMs into dynamic data distributions, task structures, and user preferences. Pre-trained LLMs, when tailored for specific needs, often experience ...

  28. Green future for air travel

    Travel fell sharply during the COVID-19 pandemic—airline revenues dropped by 60 percent in 2020, and air travel and tourism are not expected to return to 2019 levels before 2024. 1 While this downturn is worrisome, it is likely to be temporary. McKinsey's latest survey of more than 5,500 air travelers around the world shows that the ...

  29. Following Up on Employee Surveys: A Conceptual Framework and Systematic

    Employee surveys are often used to support organizational development (OD), and particularly the follow-up process after surveys, including action planning, is important. Nevertheless, this process is oftentimes neglected in practice, and research on it is limited as well. In this article, we first define the employee survey follow-up process ...
