
Questionnaire Design | Methods, Question Types & Examples

Published on July 15, 2021 by Pritha Bhandari. Revised on June 22, 2023.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Other interesting articles
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives , placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias , including sampling bias , ascertainment bias , and undercoverage bias .
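For example, a simple random draw from a sampling frame gives every member an equal chance of selection, which supports external validity. Below is a minimal Python sketch, assuming a hypothetical frame of customer IDs and a sample size of 500 (both made up for illustration):

```python
import random

# Minimal sketch: drawing a simple random sample from a sampling frame.
# The frame (customer IDs) and the sample size are hypothetical, not from the article.
sampling_frame = [f"customer_{i}" for i in range(10_000)]

rng = random.Random(42)  # fixed seed so the draw is reproducible
sample = rng.sample(sampling_frame, k=500)  # every member has an equal chance of selection

print(f"Sampled {len(sample)} of {len(sampling_frame)} customers, e.g. {sample[:3]}")
```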


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer because impersonal survey requests often go ignored.

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

What is your race?

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.
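As an illustration of composite scoring, you can reverse-code any negatively worded items and then sum or average each respondent’s Likert ratings. The sketch below assumes hypothetical item names and a 1–5 agreement coding (neither is from the article):

```python
# Minimal sketch: combining Likert-type items into a composite score.
# Item names, the 1-5 coding, and the reversed item are hypothetical assumptions.
responses = {
    "item_1": 4,   # 1 = strongly disagree ... 5 = strongly agree
    "item_2": 5,
    "item_3": 2,   # negatively worded item, reverse-coded below
    "item_4": 4,
}
REVERSED_ITEMS = {"item_3"}
SCALE_MAX = 5

# Reverse-score negatively worded items so every item points in the same direction.
recoded = {
    item: (SCALE_MAX + 1 - score) if item in REVERSED_ITEMS else score
    for item, score in responses.items()
}

# With four or more items, the sum or mean is commonly treated as interval data.
composite_sum = sum(recoded.values())
composite_mean = composite_sum / len(recoded)
print(f"Composite sum: {composite_sum}, mean: {composite_mean:.2f}")
```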

With interval or ratio scales , you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability .

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid research bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counter argument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents. For example: “Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?”

This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead, but neither of these options captures their true answer.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing clean drinking water to everyone?

  • Strongly agree
  • Agree
  • Undecided
  • Disagree
  • Strongly disagree

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect responses by priming respondents in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
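For example, an online questionnaire tool can shuffle the question order independently for each respondent while keeping the wording identical. Here is a minimal Python sketch; the question wording and respondent IDs are invented for illustration, and seeding on the respondent ID simply makes each respondent’s order reproducible:

```python
import random

# Minimal sketch: per-respondent randomization of question order.
# Question wording and respondent IDs are hypothetical, not from the article.
QUESTIONS = [
    "How knowledgeable are you about the new policy?",
    "Are you satisfied or dissatisfied with how the policy was rolled out?",
    "Do you approve or disapprove of the policy overall?",
]

def questionnaire_for(respondent_id: str) -> list[str]:
    """Return the same questions in a randomized order that is reproducible per respondent."""
    rng = random.Random(respondent_id)  # seed on the respondent ID for reproducibility
    order = QUESTIONS.copy()
    rng.shuffle(order)
    return order

for respondent in ["R001", "R002"]:
    print(respondent, questionnaire_for(respondent))
```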

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it may require more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection , and analysis. You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.


28 Questionnaire Examples, Questions, & Templates to Survey Your Clients

Swetha Amaresan

Published: May 15, 2023

The adage "the customer is always right" has received some pushback in recent years, but when it comes to conducting surveys , the phrase is worth a deeper look. In the past, representatives were tasked with solving client problems as they happened. Now, they have to be proactive by solving problems before they come up.


Salesforce found that 63% of customers expect companies to anticipate their needs before they ask for help. But how can a customer service team recognize these customer needs in advance and effectively solve them on a day-to-day basis?


A customer questionnaire is a tried-and-true method for collecting survey data to inform your customer service strategy . By hearing directly from the customer, you'll capture first-hand data about how well your service team meets their needs. In this article, you'll get free questionnaire templates and best practices on how to administer them for the most honest responses.

Table of Contents:

  • Questionnaire Definition
  • Survey vs. Questionnaire
  • Questionnaire Templates
  • Questionnaire Examples
  • Questionnaire Design
  • Survey Question Examples
  • Examples of Good Survey Questions
  • How to Make a Questionnaire

A questionnaire is a research tool used to conduct surveys. It includes specific questions with the goal to understand a topic from the respondents' point of view. Questionnaires typically have closed-ended, open-ended, short-form, and long-form questions.

The questions should always stay as unbiased as possible. For instance, it's unwise to ask for feedback on a specific product or service that’s still in the ideation phase. To complete the questionnaire, the customer would have to imagine how they might experience the product or service rather than sharing their opinion about their actual experience with it.

Ask broad questions about the kinds of qualities and features your customers enjoy in your products or services and incorporate that feedback into new offerings your team is developing.

What makes a good questionnaire?

  • Define the goal
  • Make it short and simple
  • Use a mix of question types
  • Proofread carefully
  • Keep it consistent

A good questionnaire should uncover what you need to know, not just what you want to hear. It should be valuable and give you a chance to understand the respondent’s point of view.

Make the purpose of your questionnaire clear. While it's tempting to ask a range of questions simultaneously, you'll get more valuable results if you stay specific to a set topic.

According to HubSpot research , 47% of those surveyed say their top reason for abandoning a survey is the time it takes to complete.

So, questionnaires should be concise and easy to finish. If you're looking for a respondent’s experience with your business, focus on the most important questions.


Your questionnaire should include a combination of question types, like open-ended, closed-ended, short-form, and long-form questions.

Open-ended questions give users a chance to share their own answers. But closed-ended questions are more efficient and easy to quantify, with specific answer choices.

If you're not sure which question types are best, read here for more survey question examples .

While it's important to check spelling and grammar, there are two other things you'll want to check for a great questionnaire.

First, edit for clarity. Jargon, technical terms, and brand-specific language can be confusing for respondents. Next, check for leading questions. These questions can produce biased results that will be less useful to your team.

Consistency makes it easier for respondents to quickly complete your questionnaire. This is because it makes the questions less confusing. It can also reduce bias.

Being consistent is also helpful for analyzing questionnaire data because it makes it easier to compare results. With this in mind, keep response scales, question types, and formatting consistent.

In-Depth Interviews vs. Questionnaire

Questionnaires can be a more feasible and efficient research method than in-depth interviews, and they are much cheaper to conduct. In-depth interviews can require you to compensate interviewees for their time and to cover accommodation and travel.

Questionnaires also save time for both parties. Customers can quickly complete them on their own time, and employees of your company don't have to spend time conducting the interviews. They can capture a larger audience than in-depth interviews, making them much more cost-effective.

It would be impossible for a large company to interview tens of thousands of customers in person. The same company could potentially get feedback from its entire customer base using an online questionnaire.

When considering your current products and services (as well as ideas for new products and services), it's essential to get the feedback of existing and potential customers. They are the ones who have a say in purchasing decisions.

A questionnaire is a tool that’s used to conduct a survey. A survey is the process of gathering, sampling, analyzing, and interpreting data from a group of people.

The confusion between these terms most likely stems from the fact that questionnaires and data analysis were treated as very separate processes before the Internet became popular. Questionnaires used to be completed on paper, and data analysis occurred later as a separate process. Nowadays, these processes are typically combined since online survey tools allow questionnaire responses to be analyzed and aggregated all in one step.

But questionnaires can still be used for reasons other than data analysis. Job applications and medical history forms are examples of questionnaires that are not intended to be statistically analyzed. The key distinction is that a questionnaire can be used within a survey or on its own, while a survey always involves collecting and analyzing responses from a group of people.

Below are some of the best free questionnaire templates you can download to gather data that informs your next product or service offering.

What makes a good survey question?

  • Have a goal in mind
  • Draft clear and distinct questions and answers
  • Ask one question at a time
  • Check for bias and sensitivity
  • Include follow-up questions

To make a good survey question, you have to choose the right type of questions to use. Include concise, clear, and appropriate questions with answer choices that won’t confuse the respondent and will clearly offer data on their experience.

Good survey questions can give a business good data to examine. Here are some more tips to follow as you draft your survey questions.

To make a good survey, consider what you are trying to learn from it. Understanding why you need to do a survey will help you create clear and concise questions that you need to ask to meet your goal. The more your questions focus on one or two objectives, the better your data will be.

You have a goal in mind for your survey. Now you have to write the questions and answers depending on the form you’re using.

For instance, if you’re using ranks or multiple-choice in your survey, be clear. Here are examples of good and poor multiple-choice answers:

Poor Survey Question and Answer Example

California:

  • Contains the tallest mountain in the United States.
  • Has an eagle on its state flag.
  • Is the second-largest state in terms of area.
  • Was the location of the Gold Rush of 1849.

Good Survey Question and Answer Example

What is the main reason so many people moved to California in 1849?

  • California's land was fertile, plentiful, and inexpensive.
  • The discovery of gold in central California.
  • The East was preparing for a civil war.
  • They wanted to establish religious settlements.

In the poor example, the question may confuse the respondent because it's not clear what is being asked or how the answers relate to the question. The survey didn’t fully explain the question, and the options are also confusing.

In the good example above, the question and answer choices are clear and easy to understand.

Always make sure answers and questions are clear and distinct to create a good experience for the respondent. This will offer your team the best outcomes from your survey.

It's surprisingly easy to combine multiple questions into one. These are called "double-barreled" questions, and a good survey avoids them by asking one question at a time.

For example, a survey question could read, "What is your favorite sneaker and clothing apparel brand?" This is bad because you’re asking two questions at once.

By asking two questions simultaneously, you may confuse your respondents and get unclear answers. Instead, each question should focus on getting specific pieces of information.

For example, ask, "What is your favorite sneaker brand?" then, "What is your favorite clothing apparel brand?" By separating the questions, you allow your respondents to give separate and precise answers.

Biased questions can lead a respondent toward a specific response. They can also be vague or unclear. Sensitive questions such as age, religion, or marital status can be helpful for demographics. These questions can also be uncomfortable for people to answer.

There are a few ways to create a positive experience with your survey questions.

First, think about question placement. Sensitive questions that appear in context with other survey questions can help people understand why you are asking. This can make them feel more comfortable responding.

Next, check your survey for leading questions, assumptions, and double-barreled questions. You want to make sure that your survey is neutral and free of bias.

Asking more than one survey question about an area of interest can make a survey easier to understand and complete. It also helps you collect more in-depth insights from your respondents.

1. Free HubSpot Questionnaire Template

HubSpot offers a variety of free customer surveys and questionnaire templates to analyze and measure customer experience. Choose from five templates: net promoter score, customer satisfaction, customer effort, open-ended questions, and long-form customer surveys.

2. Client Questionnaire Template

It's a good idea to gauge your clients' experiences with your business to uncover opportunities to improve your offerings. That will, in turn, better suit their lifestyles. You don't have to wait for an entire year to pass before polling your customer base about their experience either. A simple client questionnaire, like the one below, can be administered as a micro survey several times throughout the year. These types of quick survey questions work well to retarget your existing customers through social media polls and paid interactive ads.

1. How much time do you spend using [product or service]?

  • Less than a minute
  • About 1 - 2 minutes
  • Between 2 and 5 minutes
  • More than 5 minutes

2. In the last month, what has been your biggest pain point?

  • Finding enough time for important tasks
  • Delegating work
  • Having enough to do

3. What's your biggest priority right now?

  • Finding a faster way to work
  • Problem-solving
  • Staff development


3. Website Questionnaire Template

Whether you just launched a brand new website or you're gathering data points to inform a redesign, you'll find customer feedback to be essential in both processes. A website questionnaire template will come in handy to collect this information using an unbiased method.

1. How many times have you visited [website] in the past month?

  • More than once

2. What is the primary reason for your visit to [website]?

  • To make a purchase
  • To find more information before making a purchase in-store
  • To contact customer service

3. Are you able to find what you're looking for on the website homepage?

4. Customer Satisfaction Questionnaire Template

If you've never surveyed your customers and are looking for a template to get started, this one includes some basic customer satisfaction questions. These will apply to just about any customer your business serves.

1. How likely are you to recommend us to family, friends, or colleagues?

  • Extremely unlikely
  • Somewhat unlikely
  • Somewhat likely
  • Extremely likely

2. How satisfied were you with your experience?

1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10

3. Rank the following items in terms of their priority to your purchasing process.

  • Helpful staff
  • Quality of product
  • Price of product
  • Ease of purchase
  • Proximity of store
  • Online accessibility
  • Current need
  • Appearance of product

4. Who did you purchase these products for?

  • Family member
  • On behalf of a business

5. Please rate our staff on the following terms:

  • Friendly __ __ __ __ __ Hostile
  • Helpful __ __ __ __ __ Useless
  • Knowledgeable __ __ __ __ __ Inexperienced
  • Professional __ __ __ __ __ Inappropriate

6. Would you purchase from our company again?

7. How can we improve your experience for the future?

________________________________.

5. Customer Effort Score Questionnaire Template

The following template gives an example of a brief customer effort score (CES) questionnaire. This free template works well for new customers to measure their initial reaction to your business.

1. What was the ease of your experience with our company?

  • Extremely difficult
  • Somewhat difficult
  • Somewhat easy
  • Extremely easy

2. The company did everything it could to make my process as easy as possible.

  • Strongly disagree
  • Somewhat disagree
  • Somewhat agree
  • Strongly agree

3. On a scale of 1 to 10 (1 being "extremely quickly" and 10 being "extremely slowly"), how fast were you able to solve your problem?

4. How much effort did you have to put forth while working with our company?

  • Much more than expected
  • Somewhat more than expected
  • As much as expected
  • Somewhat less than expected
  • Much less than expected

6. Demographic Questionnaire Template

Here's a template for surveying customers to learn more about their demographic background. You could substantiate the analysis of this questionnaire by corroborating the data with other information from your web analytics, internal customer data, and industry data.

1. How would you describe your employment status?

  • Employed full-time
  • Employed part-time
  • Freelance/contract employee
  • Self-employed

2. How many employees work at your company?

3. How would you classify your role?

  • Individual Contributor

4. How would you classify your industry?

  • Technology/software
  • Hospitality/dining
  • Entertainment

Below, we have curated a list of questionnaire examples that do a great job of gathering valuable qualitative and quantitative data.

Questionnaire Examples

1. Customer Satisfaction Questions

(Image example: a patient satisfaction survey.)

Survey Question Examples

Multiple Choice

(Image example: a multiple-choice survey question.)

Rating Scale

Rating scale questions offer a scale of numbers and ask respondents to rate topics based on the sentiments assigned to that scale. This is effective when assessing customer satisfaction.

Rating scale survey question examples : "Rate your level of satisfaction with the customer service you received today on a scale of 1-10."

Yes or No Questions

Yes or no survey questions are a type of dichotomous question. These are questions that only offer two possible responses. They’re useful because they’re quick to answer and can help with customer segmentation.

Yes or no survey questions example : "Have you ever used HubSpot before?"

Likert Scale

Likert scale questions assess whether a respondent agrees with the statement, as well as the extent to which they agree or disagree.

These questions typically offer five or seven responses, with sentiments ranging from items such as "strongly disagree" to "strongly agree." Check out this post to learn more about the Likert scale .

Likert scale survey question examples : “How satisfied are you with the service from [brand]?”

Open-Ended Questions

Open-ended questions ask a broader question or offer a chance to elaborate on a response to a close-ended question. They're accompanied by a text box that leaves room for respondents to write freely. This is particularly important when asking customers to expand on an experience or recommendation.

Open-ended survey question examples : "What are your personal goals for using HubSpot? Please describe."


Matrix Table

A matrix table is usually a group of multiple-choice questions grouped in a table. Choices for these survey questions are usually organized in a scale. This makes it easier to understand the relationships between different survey responses.

Matrix table survey question examples : "Rate your level of agreement with the following statements about HubSpot on a scale of 1-5."


Rank Order Scaling

These questions ask respondents to rank a set of terms by order of preference or importance. This is useful for understanding customer priorities.

Rank order scaling examples : "Rank the following factors in order of importance when choosing a new job."


Semantic Differential Scale

This scale features pairs of opposite adjectives that respondents use for rating, usually for a feature or experience. This type of question makes it easier to understand customer attitudes and beliefs.

Semantic differential scale question examples : "Rate your overall impression of this brand as friendly vs. unfriendly, innovative vs. traditional, and boring vs. exciting."


Side-By-Side Matrix

This matrix table format includes two sets of questions horizontally for easy comparison. This format can help with customer gap analysis.

Side-by-side matrix question examples : "Rate your level of satisfaction with HubSpot's customer support compared to its ease of use."


Stapel Scale

The Stapel rating scale presents a single adjective or idea for rating on a numerical scale, typically running from +5 to -5 with no neutral zero point. This survey question type helps with in-depth analysis.

Stapel scale survey question examples : "Rate your overall experience with this product as +5 (excellent) to -5 (terrible)."


Constant Sum Survey Questions

In this question format, respondents distribute a fixed number of points across different choices based on the perceived importance of each one. This kind of question is often used in market research and can help your team better understand customer choices.

Constant sum survey question examples : "What is your budget for the following marketing expenses: Paid campaigns, Events, Freelancers, Agencies, Research."


Image Choice

This survey question type shows several images. Then, it asks the respondent to choose the image that best matches their response to the question. These questions are useful for understanding your customers’ design preferences.

Image choice survey questions example : "Which of these three images best represents your brand voice?"


Choice Model

This survey question presents a hypothetical scenario, and the respondent must choose from the options provided. It's a useful type of question when you are refining a product or strategy.

Choice model survey questions example : "Which of these three deals would be most appealing to you?"

Click Map Questions

Click map questions present an image and ask respondents to click on specific areas of it in response to a question. This question type uses data visualization to learn about customer preferences for design and user experience.

Click map question examples : "Click on the section of the website where you would expect to find pricing information."


Data Upload

This survey question example asks the respondent to upload a file or document in response to a question. This type of survey question can help your team collect data and context that might be tough to collect otherwise.

Data upload question examples : "Please upload a screenshot of the error you encountered during your purchase."


Benchmarkable Questions

This question type asks a respondent to compare their answers to a group or benchmark. These questions can be useful if you're trying to compare buyer personas or other customer groups.

Benchmarkable survey questions example : "Compare your company's marketing budget to other companies in your industry."

Good Survey Questions

  • What is your favorite product?
  • Why did you purchase this product?
  • How satisfied are you with [product]?
  • Would you recommend [product] to a friend?
  • Would you recommend [company name] to a friend?
  • If you could change one thing about [product], what would it be?
  • Which other options were you considering before [product or company name]?
  • Did [product] help you accomplish your goal?
  • How would you feel if we did not offer this product, feature, or service?
  • What would you miss the most if you couldn't use your favorite product from us?
  • What is one word that best describes your experience using our product?
  • What's the primary reason for canceling your account?
  • How satisfied are you with our customer support?
  • Did we answer all of your questions and concerns?
  • How can we be more helpful?
  • What additional features would you like to see in this product?
  • Are we meeting your expectations?
  • How satisfied are you with your experience?

1. "What is your favorite product?"

This question is a great starter for your survey. Most companies want to know what their most popular products are, and this question cuts right to the point.

It's important to note that this question gives you the customer's perspective, not empirical evidence. You should compare the results to your inventory to see if your customers' answers match your actual sales. You may be surprised to find your customers' "favorite" product isn't the highest-selling one.

2. "Why did you purchase this product?"

Once you know their favorite product, you need to understand why they like it so much. The qualitative data will help your marketing and sales teams attract and engage customers. They'll know which features to advertise most and can seek out new leads similar to your existing customers.

3. "How satisfied are you with [product]?"

When you have a product that isn't selling, you can ask this question to see why customers are unhappy with it. If the reviews are poor, you'll know that the product needs reworking, and you can send it back to product management for improvement. Or, if these results are positive, they may have something to do with your marketing or sales techniques. You can then gather more info during the questionnaire and restrategize your campaigns based on your findings.

4. "Would you recommend [product] to a friend?"

This is a classic survey question used with most NPS® surveys. It asks the customer if they would recommend your product to one of their peers. This is extremely important because most people trust customer referrals more than traditional advertising. So, if your customers are willing to recommend your products, you'll have an easier time acquiring new leads.
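For context, NPS is conventionally calculated from these 0-10 recommendation ratings as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal Python sketch with made-up ratings:

```python
# Minimal sketch: Net Promoter Score from 0-10 "would you recommend" ratings.
# The ratings below are made up for illustration.
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(1 for r in ratings if r >= 9)   # scores of 9 or 10
detractors = sum(1 for r in ratings if r <= 6)  # scores of 0 through 6

# NPS = % promoters - % detractors, reported on a -100 to +100 scale.
nps = (promoters - detractors) / len(ratings) * 100
print(f"NPS: {nps:.0f}")
```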

5. "Would you recommend [company name] to a friend?"

Similar to the question above, this one asks the customer to consider your business as a whole and not just your product. This gives you insight into your brand's reputation and shows how customers feel about your company's actions. Even if you have an excellent product, your brand's reputation may be the cause of customer churn . Your marketing team should pay close attention to this question to see how they can improve the customer experience .

6. "If you could change one thing about [product], what would it be?"

This is a good question to ask your most loyal customers or ones that have recently churned. For loyal customers, you want to keep adding value to their experience. Asking how your product can improve helps your development team find flaws and increases your chances of retaining a valuable customer segment.

For customers that have recently churned, this question gives insight into how you can retain future users that are unhappy with your product or service. By giving these customers a space to voice their criticisms, you can either reach out and offer solutions or relay feedback for consideration.

7. "Which other options were you considering before [product or company name]?"

If you're operating in a competitive industry, customers will have more than one choice when considering your brand. And if you sell variations of your product or produce new models periodically, customers may prefer one version over another.

For this question, you should offer answers to choose from in a multiple-selection format. This will limit the types of responses you'll receive and help you get the exact information you need.

8. "Did [product] help you accomplish your goal?"

The purpose of any product or service is to help customers reach a goal. So, you should be direct and ask them if your company steered them toward success. After all, customer success is an excellent retention tool. If customers are succeeding with your product, they're more likely to stay loyal to your brand.

9. "How would you feel if we did not offer this product, feature, or service?"

Thinking about discontinuing a product? This question can help you decide whether or not a specific product, service, or feature will be missed if you were to remove it.

Even if you know that a product or service isn't worth offering, it's important to ask this question anyway because there may be a certain aspect of the product that your customers like. They'll be delighted if you can integrate that feature into a new product or service.

10. "If you couldn't use your favorite product from us, what would you miss the most about it?"

This question pairs well with the one above because it frames the customer's favorite product from a different point of view. Instead of describing why they love a particular product, the customer can explain what they'd be missing if they didn't have it at all. This type of question uncovers "fear of loss," which can be a very different motivating factor than "hope for gain."

11. "What word best describes your experience using our product?"

Your marketing team will love this question. A single word or a short phrase can easily sum up your customers’ emotions when they experience your company, product, or brand. Those emotions can be translated into relatable marketing campaigns that use your customers’ exact language.

If the responses reveal negative emotions, it's likely that your entire customer service team can relate to that pain point. Rather than calling it "a bug in the system," you can describe the problem as a "frustrating roadblock" to keep their experience at the forefront of the solution.

12. "What's the primary reason for canceling your account?"

Finding out why customers are unhappy with your product or service is key to decreasing your churn rate . If you don't understand why people leave your brand, it's hard to make effective changes to prevent future turnover. Or worse, you might alter your product or service in a way that increases your churn rate, causing you to lose customers who were once loyal supporters.

13. "How satisfied are you with our customer support?"

It's worth asking customers how happy they are with your support or service team. After all, an excellent product doesn't always guarantee that customers will stay loyal to your brand. Research shows that one in six customers will leave a brand they love after just one poor service experience.

14. "Did we answer all of your questions and concerns?"

This is a good question to ask after a service experience. It shows how thorough your support team is and whether they're prioritizing speed too much over quality. If customers still have questions and concerns after a service interaction, your support team is focusing too much on closing tickets and not enough on meeting customer needs .

15. "How can we be more helpful?"

Sometimes it's easier to be direct and simply ask customers what else you can do to help them. This shows a genuine interest in your buyers' goals which helps your brand foster meaningful relationships with its customer base. The more you can show that you sincerely care about your customers' problems, the more they'll open up to you and be honest about how you can help them.

16. What additional features would you like to see in this product?

With this question, your team can get inspiration for the company's next product launch. Think of the responses as a wish list from your customers. You can discover what features are most valuable to them and whether they already exist within a competitor's product.

Incorporating every feature suggestion is nearly impossible, but it's a convenient way to build a backlog of ideas that can inspire future product releases.

17. "Are we meeting your expectations?"

This is a really important question to ask because customers won't always tell you when they're unhappy with your service. Not every customer will ask to speak with a manager when they're unhappy with your business. In fact, most will quietly move on to a competitor rather than broadcast their unhappiness to your company. To prevent this type of customer churn, you need to be proactive and ask customers if your brand is meeting their expectations.

18. "How satisfied are you with your experience?"

This question asks the customer to summarize their experience with your business. It gives you a snapshot of how the customer is feeling in that moment and their perception of your brand. Asking this question at the right stage in the customer's journey can tell you a lot about what your company is doing well and where you can stand to improve.

Next, let's dig into some tips for creating your own questionnaire.

1. Start with templates as a foundation.
2. Know your question types.
3. Keep it brief when possible.
4. Choose a simple visual design.
5. Use a clear research process.
6. Create questions with straightforward, unbiased language.
7. Make sure every question is important.
8. Ask one question at a time.
9. Order your questions logically.
10. Consider your target audience.
11. Test your questionnaire.

1. Use questionnaire templates.

Rather than build a questionnaire from scratch, consider using questionnaire templates to get started. HubSpot's collection of customer-facing questionnaire templates can help you quickly build and send a questionnaire to your clients and analyze the results right on Google Drive.

2. Know your question types.

Vrnda LeValley, customer training manager at HubSpot, recommends starting with an alignment question like, "Does this class meet your expectations?" because it gives more context to any positive or negative scores that follow. She continues, "If it didn't meet expectations, then there will potentially be negative responses across the board (as well as the reverse)."

3. Keep it brief, when possible.

Most questionnaires don't need to be longer than a page. For routine customer satisfaction surveys, it's unnecessary to ask 50 slightly varied questions about a customer's experience when those questions could be combined into 10 solid questions.

The shorter your questionnaire is, the more likely a customer will complete it. Plus a shorter questionnaire means less data for your team to collect and analyze. Based on the feedback, it will be a lot easier for you to get the information you need to make the necessary changes in your organization and products.

4. Choose a simple visual design.

There's no need to make your questionnaire a stunning work of art. As long as it's clear and concise, it will be attractive to customers. When asking questions that are important to furthering your company, it's best to keep things simple. Select a font that’s common and easy to read, like Helvetica or Arial. Use a text size that customers of all abilities can navigate.

A questionnaire is most effective when all the questions are visible on a single screen. The layout is important. If a questionnaire is even remotely difficult to navigate, your response rate could suffer. Make sure that buttons and checkboxes are easy to click and that questions are visible on both computer and mobile screens.

5. Use a clear research process.

Before planning questions for your questionnaire, you'll need to have a definite direction for it. A questionnaire is only effective if the results answer an overarching research question. After all, the research process is an important part of the survey, and a questionnaire is a tool that's used within the process.

In your research process, you should first come up with a research question. What are you trying to find out? What's the point of this questionnaire? Keep this in mind throughout the process.

After coming up with a research question, it's a good idea to have a hypothesis. What do you predict the results will be for your questionnaire? This can be structured in a simple "If … then …" format. A structured experiment — yes, your questionnaire is a type of experiment — will confirm that you're only collecting and analyzing data necessary to answer your research question. Then, you can move forward with your survey .

6. Create questions with straightforward, unbiased language.

When crafting your questions, it's important to structure them to get the point across. You don't want any confusion for your customers because this may influence their answers. Instead, use clear language. Don't use unnecessary jargon, and use simple terms in favor of longer-winded ones.

You may risk the reliability of your data if you try to combine two questions. Rather than asking, "How was your experience shopping with us, and would you recommend us to others?" split it into two separate questions. Customers will then be clear on each question and can choose the most appropriate response for each.

You should always keep the language in your questions unbiased. You never want to sway customers one way or another because this will cause your data to be skewed. Instead of asking, "Some might say that we create the best software products in the world. Would you agree or disagree?" it may be better to ask, "How would you rate our software products on a scale of 1 to 10?" This removes any bias and confirms that all the responses are valid.

7. Ask only the most important questions.

When creating your questionnaire, keep in mind that time is one of the most valuable commodities for customers. Most aren't going to sit through a 50-question survey, especially when they're being asked about products or services they didn't use. Even if they do complete it, most of these will be half-hearted responses from fatigued customers who simply want to be finished with it.

If your questionnaire has five or 55 questions, make sure each has a specific purpose. Individually, they should be aimed at collecting certain pieces of information that reveal new insights into different aspects of your business. If your questions are irrelevant or seem out of place, your customers will be easily derailed by the survey. And, once the customer has lost interest, it'll be difficult to regain their focus.

8. Ask one question at a time.

Since every question has a purpose, ask them one at a time. This lets the customer focus and encourages them to share a thoughtful response. This is particularly important for open-ended questions where customers need to describe an experience or opinion.

By grouping questions together, you risk overwhelming busy customers who don't have time for a long survey. They may think you're asking them too much, or they might see your questionnaire as a daunting task. You want your survey to appear as painless as possible. Keeping your questions separated will make it more user-friendly.

9. Order your questions logically.

A good questionnaire is like a good book. The beginning questions should lay the framework, the middle ones should cut to the core issues, and the final questions should tie up all loose ends. This flow keeps customers engaged throughout the entire survey.

When creating your questionnaire, start with the most basic questions about demographics. You can use this information to segment your customer base and create different buyer personas.

Next, add in your product and services questions. These are the ones that offer insights into common customer roadblocks and where you can improve your business's offerings. Questions like these guide your product development and marketing teams looking for new ways to enhance the customer experience.

Finally, you should conclude your questionnaire with open-ended questions to understand the customer journey. These questions let customers voice their opinions and point out specific experiences they've had with your brand.

10. Consider your target audience.

Whenever you collect customer feedback, you need to keep in mind the goals and needs of your target audience. After all, the participants in this questionnaire are your active customers. Your questions should be geared toward the interests and experiences they've already had with your company.

You can even create multiple surveys that target different buyer personas. For example, if you have a subscription-based pricing model, you can personalize your questionnaire for each type of subscription your company offers.

11. Test your questionnaire.

Once your questionnaire is complete, it's important to test it. If you don't, you may end up asking the wrong questions and collecting irrelevant or inaccurate information. Start by giving your employees the questionnaire to test, then send it to small groups of customers and analyze the results. If you're gathering the data you're looking for, then you should release the questionnaire to all of your customers.

How Questionnaires Can Benefit Your Customer Service Strategy

Whether you have one customer or 1000 customers, their opinions matter when it comes to the success of your business. Their satisfaction with your offerings can reveal how well or how poorly your customer service strategy and business are meeting their needs. A questionnaire is one of the most powerful, cost-effective tools to uncover what your customers think about your business. When analyzed properly, it can inform your product and service launches.

Use the free questionnaire templates, examples, and best practices in this guide to conduct your next customer feedback survey.

Now that you know the slight difference between a survey and a questionnaire, it's time to put it into practice with your products or services. Remember, a good survey and questionnaire always start with a purpose. But a great survey and questionnaire produce data you can actually use to improve how customers respond to your products or services.

Net Promoter, Net Promoter System, Net Promoter Score, NPS, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld, and Satmetrix Systems, Inc.

Editor's note: This post was originally published in July 2018 and has been updated for comprehensiveness.


Writing survey questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.
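As a small illustration of "trending the data" (a hypothetical sketch; the wave labels and responses below are invented, and this is not Pew Research Center's code), the share of respondents giving a particular answer can be compared across two waves of the same question:

```python
from collections import Counter

# Hypothetical responses to the same question asked in two survey waves.
wave_2022 = ["favor", "oppose", "favor", "favor", "oppose", "favor"]
wave_2024 = ["favor", "oppose", "oppose", "favor", "oppose", "oppose"]

def pct(responses, target="favor"):
    """Percentage of respondents choosing the target answer."""
    return 100 * Counter(responses)[target] / len(responses)

change = pct(wave_2024) - pct(wave_2022)
print(f"2022: {pct(wave_2022):.0f}% favor | 2024: {pct(wave_2024):.0f}% favor | change: {change:+.0f} pts")
```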

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
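As a rough sketch of how this kind of randomization could be implemented in survey software (hypothetical code, not Pew Research Center's production system; the question wording and the five options shown are illustrative), the response options can be shuffled independently for each respondent:

```python
import random

# Hypothetical question and response options; the goal is that no option
# systematically appears first or last across respondents.
QUESTION = "What one issue mattered most to you in deciding how you voted?"
OPTIONS = ["The economy", "Health care", "Energy policy", "Terrorism", "Immigration"]

def options_for_respondent(respondent_id: int) -> list[str]:
    """Shuffle the options independently for each respondent."""
    rng = random.Random(respondent_id)  # seeded per respondent for reproducibility
    shuffled = OPTIONS.copy()
    rng.shuffle(shuffled)
    return shuffled

for respondent_id in (1, 2, 3):
    print(respondent_id, options_for_respondent(respondent_id))
```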

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.

An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice close-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions.  Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
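To give a feel for how results from two randomly assigned forms might be compared (a minimal sketch with made-up counts, not the Center's actual analysis pipeline), a simple two-proportion z-test indicates whether the gap between forms is larger than chance alone would explain:

```python
from math import sqrt

# Hypothetical split-form experiment: number favoring a proposal under two wordings.
favor_a, n_a = 340, 500   # Form A
favor_b, n_b = 215, 500   # Form B

p_a, p_b = favor_a / n_a, favor_b / n_b
p_pool = (favor_a + favor_b) / (n_a + n_b)                # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error of the difference
z = (p_a - p_b) / se

print(f"Form A: {p_a:.0%} favor, Form B: {p_b:.0%} favor, z = {z:.2f}")
# |z| > 1.96 suggests the wording difference is unlikely to be due to chance (5% level).
```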


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


Best Survey Examples for your research


Whether you are creating a survey for market research, customer satisfaction evaluation, academic study, or human resource evaluation for your organization, good survey examples can go a long way toward ensuring that your survey is well designed, collects quality responses, and gives you the best possible insights for your research.

Here are some critical survey examples by categories for your next research project:

  • Market Research

Market research is one of the most common reasons to conduct a survey. This also includes product evaluation and product testing, where you would like to collect feedback on the potential of a product or service in a given market and demographic segment.

Here are a few survey examples for market research that will help you create a great market research survey:

Concept Evaluation and Pricing survey: This survey is used for evaluating a potential product or service concept and how it correlates with its pricing. This is a critical market research example because more than half of all market research surveys are product evaluations in relation to pricing models.

This brings us to our next example.

Conjoint Analysis survey: Whenever you want to study a product or service and its correlation to pricing or any other attribute, a conjoint analysis survey is the best template to use. Conjoint analysis is used in studies that aim to understand how one aspect or feature may affect purchasing or choice patterns for another feature and for the product overall.

Advertising Effectiveness survey: Any marketing or advertising campaign has to answer for its effectiveness, ROI, and consumer or audience impact. This example template covers the important, standard questions to include in your next advertising effectiveness survey so that you can draw insightful conclusions about how well your campaign performed, how aware your audience became of your brand, and how well you were able to convince them to purchase your product.

See more examples : Marketing and market research surveys

  • Customer Satisfaction

As businesses become more customer-centric, benchmarking customer satisfaction using surveys has become a defined metric for customer success. Here are a few examples for your next customer satisfaction evaluation survey:

Net Promoter Score Survey: Any customer experience evaluation must include the most critical and most heavily used survey question: the Net Promoter Score question. What makes this question so important is that it gives you a numeric metric based on your customer’s experience: was it good enough for them to recommend your brand to friends and family, or are they a potential brand detractor? As a business, you need answers to these questions, and the NPS survey gets you this insight with just a single question.
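As a quick sketch of the arithmetic behind that single question (the ratings below are invented), the standard NPS calculation subtracts the share of detractors (0 to 6) from the share of promoters (9 to 10):

```python
# Hypothetical answers to the 0-10 "How likely are you to recommend us?" question.
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8, 10, 5]

promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9-10
detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0-6
nps = 100 * (promoters - detractors) / len(ratings)

print(f"Promoters: {promoters}, Detractors: {detractors}, NPS: {nps:.0f}")  # NPS: 25
```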

Product Satisfaction Survey : This survey example explores customer feedback based on their experience with the product as well as the organization at points of contact of purchase and post-purchase support. For any business or non-profit, wanting to understand customer satisfaction with their product is a critical step towards improving overall customer experience – through product changes as well as service improvements.

This brings us to our next survey example.

Motivation and Buying Experience: While it is great to understand customer experience with your product, you need to dig deeper to the source if you want to make fundamental changes that increase your product purchases. You need to understand why people buy your product, what their buying experience was like, how to leverage your existing advantages over the competition, and what you need to do to improve further.

See more examples : Customer satisfaction surveys

  • Human Resources and Employee Evaluation

The buzz around employee engagement has led most mature industries to understand how employee engagement and motivation can affect productivity and commitment to product and service improvements and customer success.

Here are a few great survey examples to evaluate if your human resources are motivated and contribute towards creating a winning work culture:

Job Satisfaction Survey: A popular saying goes, “If you want satisfied customers, you need satisfied employees.” Your employees are the resources that keep your customers happy; every division of employee work is ultimately geared towards getting more customers and keeping them satisfied. If an employee is unsatisfied with their job, there is a good chance they won’t be able to provide the dedicated, satisfactory output that keeps your customers happy either. Therefore, first begin to understand how your employees feel at work using this survey example.

Supervisor Evaluation Survey: Managers and supervisors form the first layer of the employee hierarchy and are responsible for translating company values and team motivation down to the last employee. This makes it critically important to evaluate whether your managers and supervisors are well trained to carry out their daily tasks and whether you need to make improvements to your mid-management process.

Senior Management Survey: After employees and managers/supervisors, the cycle completes with collecting feedback on the senior-most management and leadership teams, including your executive team. After their immediate reporting managers, every employee of an organization looks up to senior management for reference and motivation. Through ideas and actions, senior management must be well equipped to drive the core values of your organization and pass employee motivation and team-building skills down through the ranks.

See more examples : HR and Employee Surveys

  • Academic Evaluation

Colleges, schools, and academic institutions are becoming increasingly active in collecting insightful feedback through surveys. More and more educational powerhouses now conduct academic surveys to actively collect feedback from students, parents, and professors to improve their quality of education.

Here are some popular survey examples:

Professor Evaluation Survey: Professors are “gurus.” What they know needs to be passed down to every generation that studies under them, with hands-on experience in conducting case studies and research projects together. A university’s collective reputation begins with the quality of education imparted by its professors. This survey example collects feedback on professors for great insights into what can be improved in your current standards and what your professors are doing well.

Student Stress Evaluation Survey: Student stress is a major reason for dropouts and even suicides. A university’s reputation depends heavily on how it helps students cope with stress from studies and academic processes. Use this survey example to check your students for signs of stress and fatigue. Student years are also when people learn how to deal with stress, including the stress that comes with making career decisions. This is the time you can train your students in stress management, both in their student lives and ahead in their jobs and professions.

Graduation / University Completion Survey: Your exiting students are the best source of feedback on your university. Students who have successfully completed their degrees are not just success stories; they are also stories of their experience at the university, what they felt helped them complete their studies, and what they found to have hindered progress. This is a great survey example for your next academic survey!

See more examples : Academic Evaluation and Student Surveys

  • Psychographic and Demographics

Psychology and demographic surveys are important to researchers in most fields because these surveys focus on understanding the psychology and demographic categorization of a respondent.

Here are some great survey examples :

Lifestyle Survey : This survey example explores the general lifestyle of a respondent and collects feedback on some basic demographic questions. This survey forms a good foundation for any psychological profile survey of a demographic segment or consumer base. This can also form the basis for a territorial survey of a particular region’s lifestyle and general choices.

Internet / Web Demographic Survey : In today’s increasing online world where most decisions and actions are made over the internet, it is important to understand not just the lifestyle of a demographic, but also their behaviour and preference while browsing the internet. This survey example will help you formulate the template needed for your next web survey.


Business / Profession Survey : Another fundamental survey example for any psychographic / demographic researcher is the business and profession survey. This survey helps you capture details of the respondent’s profession, along with basic demographics.

See more : Psychographic / Demographic Surveys


Research survey examples, templates, and types

Research surveys help base your next important decision on data. With our survey research templates and questions, gather valuable data easily and improve your business.


What are the benefits of survey research?

Research surveys provide data that can be relied on. Whether conducting market research or preparing a new product launch, research surveys supply the precise information needed to succeed. Avoid the confusion of conflicting opinions with data analysis that provides a clear picture of what people think.

At SurveyPlanet, we’re committed to making survey research easy to conduct. With our templates, you have access to questions that will deliver the data you need, and the wide variety of templates available makes it easy to get useful data quickly and develop more powerful solutions.

What are research questionnaires?

They are a tool that returns insight about any topic. Just asking friends, family, and coworkers about a new product is not the best approach. Why? To put it simply, they're not a representative sample and may have biases.

What you need are the opinions of your target audience. At the end of the day, it is their opinion that matters most. This requires a large enough sample to produce statistically significant data. That's where online surveys can play an important role.

Types of research surveys

Research questionnaires are a great tool to gain insights about all kinds of things (and not just business purposes). These surveys play an important role in extracting valuable insights from diverse populations. When thoughtfully designed, they become powerful instruments for informed decision-making and the advancement of knowledge across various domains.

Let's dive deeper into the types of surveys and where to apply them to get the best results.

Market research survey

Most businesses fail because their management believes their products and services are great—while the market thinks otherwise. To sell anything, the opinions of the people doing the buying need to be understood. Market research surveys offer insights about where a business stands with potential customers—and thus its potential market share—long before resources are dedicated to trying to make a product work in the marketplace.

Learn more about market research surveys.

Media consumption research survey

This type of survey explores how different people consume media content. It provides answers about what they view, how often they do so, and what kind of media they prefer. With a media consumption survey, learn everything about people's viewing and reading habits.

Reading preferences research survey

Ever wondered how, why, and what people enjoy reading? With a reading preferences research survey, such information can be discovered. By further analyzing the data, learn what different groups of people read (and the similarities and differences between different groups).

Product research survey

When launching a new product, understanding its target audience is crucial. This type of survey is a great tool that provides valuable feedback and insight that can be incorporated into a successful product launch.

Learn more about product research surveys.

Brand surveys

These help ascertain how customers feel about a brand. People buy from those they connect with; therefore, ask about their experiences and occasionally check in with them to see if they trust your brand.

Learn more about brand surveys.

Path-to-purchase research surveys

A path-to-purchase research survey investigates the steps consumers take from initial product awareness to final purchase. It typically includes questions about the decision-making process, product research, and factors influencing the ultimate purchasing decision. Such surveys can be conducted through various methods, but the best is via online surveys. The results of path-to-purchase surveys help businesses and marketers understand their target audience and develop effective marketing strategies.

Marketing research surveys

These help a company stand out from competitors and tailor marketing messages that better resonate with a target audience. Market research surveys are another type of research that is crucial when launching a new product or service.

Learn more about marketing research surveys.

Academic research surveys

These surveys are instrumental in improving knowledge about a specific subject. Consolidated results can be used to improve the efficiency of decision-making. Reliable results are produced using methodologies and tools like questionnaires, interviews, and structured online forms.

Learn more about academic surveys.

Types of research methods

The three main types of research methods are exploratory, descriptive, and causal research.

Exploratory research

Exploratory research is conducted when a researcher seeks to explore a new subject or phenomenon with limited or no prior understanding. The primary goal of exploratory research is to gain insights, generate ideas, and form initial hypotheses for more in-depth investigation. This type of research is often the first step in the research process and is particularly useful when the topic is not well-defined or when there is a lack of existing knowledge. Researchers often use open-ended questions and qualitative methods to gather data, allowing them to adapt their approach as they learn more about the topic.

Descriptive research

Descriptive research aims to provide an accurate and detailed portrayal of a specific phenomenon or group. Unlike exploratory research, which seeks to generate insights and hypotheses, descriptive research is focused on describing the characteristics, behaviors, or conditions of a subject without manipulating variables.

Causal research

Causal research, also known as explanatory or experimental research, seeks to establish a cause-and-effect relationship between two or more variables. The primary goal of causal research is to determine whether a change in one variable causes a change in another variable. Unlike descriptive research, which focuses on describing relationships and characteristics, causal research involves manipulating one or more independent variables to observe their impact on dependent variables.

The research survey application

Research methods are designed to produce the best information from a group of research subjects (i.e., the focus group). Such methods are used in many types of research and studies, both for study design and for data collection.

Depending on the kind of research and research methodology being carried out, different types of research survey questions are used, including multiple choice questions, Likert scale questions, open-ended questions, demographic questions, and even image choice questions.
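As a loose illustration (a hypothetical structure, not tied to any particular survey tool's API), a mixed-type research survey might be represented programmatically like this:

```python
# Hypothetical representation of a small mixed-type research survey.
survey = [
    {"type": "multiple_choice", "text": "Which of these brands have you purchased?",
     "options": ["Brand A", "Brand B", "Brand C", "None of these"]},
    {"type": "likert", "text": "I would recommend this product to a friend.",
     "options": ["Strongly agree", "Agree", "Neither agree nor disagree",
                 "Disagree", "Strongly disagree"]},
    {"type": "open_ended", "text": "What improvements would you suggest?"},
    {"type": "demographic", "text": "What is your age group?",
     "options": ["18-24", "25-34", "35-44", "45-54", "55+"]},
]

for number, question in enumerate(survey, start=1):
    print(f"{number}. [{question['type']}] {question['text']}")
```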

There are many survey applications that can collect data from many customers quickly and easily—a great way to get information about products, services, customer experiences, and marketing efforts.

Why you should use research questionnaires

The power of research questionnaires lies in their ease of use and cost-effectiveness. They provide answers to the most vital questions. What are the main benefits of these surveys?

  • You don't have to wonder WHO, WHAT, and WHY because this type of analysis provides answers to those—and many other—questions.
  • With a complete understanding of what's important in a research project, the best inquiries can be incorporated into survey questions.
  • Get an unbiased opinion from a target audience and use it to your advantage.
  • Collect data that matters and have it at your fingertips at all times.

Advantages and disadvantages of survey research

People use these surveys because they have many advantages compared to other research tools. What are the main advantages?

  • Cost-effective.
  • Collect data from many respondents.
  • Quantifiable results.
  • Convenient.
  • The most practical solution for gathering data.
  • Fast and reliable.
  • Easily comparable results.
  • Allows for the exploration of any topic.

While such advantages make it a no-brainer to use research questionnaires, it's always good to know their disadvantages:

  • Biased responses.
  • Cultural differences in understanding questions.
  • Analyzing and understanding responses can be difficult.
  • Some people won't read the questions before answering.
  • Survey fatigue.

However, when these issues are understood, mitigation strategies can be activated. Every research method has flaws, but we firmly believe their benefits outweigh their disadvantages.

To execute a research campaign, the creation of a survey is one of the first steps. This includes designing questions or using a premade template. Below are some of the best research survey examples, templates, and tips for designing these surveys.

20 research survey examples and templates

Specific survey questions for research depend on your goals. A research questionnaire can be conducted about any topic or interest. Here are some of the best questions and ranking prompts:

  • How often do you purchase books without actually reading them?
  • What is your favorite foreign language film?
  • During an average day, how many times do you check the news?
  • Who is your favorite football player of all time? Why?
  • Have you ever used any of the following travel websites to plan a vacation?
  • Do you currently use a similar or competing product?
  • On a scale of 1 to 5, how satisfied are you with the product?
  • What is your single favorite feature of our product?
  • When our product becomes available, are you likely to use it instead of a similar or competing product?
  • What improvements would you suggest for our service?
  • Please rank the following features in order of importance.
  • How often do you consume fruits and vegetables in a typical week?
  • How many days per week do you engage in physical activity?
  • Do you prefer traditional classroom learning or online learning?
  • How many hours a week do you spend studying for your courses?
  • What are your career aspirations upon completing your education?
  • Please rate our website's user interface from poor to excellent.
  • In what ways can we better support you as a customer?
  • Please rank the following factors in order of importance when choosing a new car.
  • Order the following smartphone features based on your preference.

Of course, you can also gather demographic information like:

  • Employment status
  • Marital status
  • Household income

No matter the research topic, this demographic information will lead to better data-driven conclusions. Interested in knowing more about demographic survey questions? Check out our blog post explaining the advantages of gathering demographic information and how to do it appropriately.

Sign up for SurveyPlanet for free. Conduct your first survey to explore what people think. And don't worry about questions because we have some amazing templates to get you started.



Questionnaire – Definition, Types, and Examples


Definition:

A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people.

It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective. The questions may be open-ended or closed-ended, and the responses can be quantitative or qualitative. Questionnaires are widely used in research, marketing, social sciences, healthcare, and many other fields to collect data and insights from a target population.

History of Questionnaire

The history of questionnaires can be traced back to the ancient Greeks, who used questionnaires as a means of assessing public opinion. However, the modern history of questionnaires began in the late 19th century with the rise of social surveys.

The first social survey was conducted in the United States in 1874 by Francis A. Walker, who used a questionnaire to collect data on labor conditions. In the early 20th century, questionnaires became a popular tool for conducting social research, particularly in the fields of sociology and psychology.

One of the most influential figures in the development of the questionnaire was the psychologist Raymond Cattell, who in the 1940s and 1950s developed the personality questionnaire, a standardized instrument for measuring personality traits. Cattell’s work helped establish the questionnaire as a key tool in personality research.

In the 1960s and 1970s, the use of questionnaires expanded into other fields, including market research, public opinion polling, and health surveys. With the rise of computer technology, questionnaires became easier and more cost-effective to administer, leading to their widespread use in research and business settings.

Today, questionnaires are used in a wide range of settings, including academic research, business, healthcare, and government. They continue to evolve as a research tool, with advances in computer technology and data analysis techniques making it easier to collect and analyze data from large numbers of participants.

Types of Questionnaire

Types of Questionnaires are as follows:

Structured Questionnaire

This type of questionnaire has a fixed format with predetermined questions that the respondent must answer. The questions are usually closed-ended, which means that the respondent must select a response from a list of options.

Unstructured Questionnaire

An unstructured questionnaire does not have a fixed format or predetermined questions. Instead, the interviewer or researcher can ask open-ended questions to the respondent and let them provide their own answers.

Open-ended Questionnaire

An open-ended questionnaire allows the respondent to answer the question in their own words, without any pre-determined response options. The questions usually start with phrases like “how,” “why,” or “what,” and encourage the respondent to provide more detailed and personalized answers.

Close-ended Questionnaire

In a closed-ended questionnaire, the respondent is given a set of predetermined response options to choose from. This type of questionnaire is easier to analyze and summarize, but may not provide as much insight into the respondent’s opinions or attitudes.

Mixed Questionnaire

A mixed questionnaire is a combination of open-ended and closed-ended questions. This type of questionnaire allows for more flexibility in terms of the questions that can be asked, and can provide both quantitative and qualitative data.

Pictorial Questionnaire

In a pictorial questionnaire, instead of using words to ask questions, the questions are presented in the form of pictures, diagrams or images. This can be particularly useful for respondents who have low literacy skills, or for situations where language barriers exist. Pictorial questionnaires can also be useful in cross-cultural research where respondents may come from different language backgrounds.

Types of Questions in Questionnaire

The types of Questions in Questionnaire are as follows:

Multiple Choice Questions

These questions have several options for participants to choose from. They are useful for getting quantitative data and can be used to collect demographic information.

  • a. Red b. Blue c. Green d. Yellow

Rating Scale Questions

These questions ask participants to rate something on a scale (e.g. from 1 to 10). They are useful for measuring attitudes and opinions.

  • On a scale of 1 to 10, how likely are you to recommend this product to a friend?

Open-Ended Questions

These questions allow participants to answer in their own words and provide more in-depth and detailed responses. They are useful for getting qualitative data.

  • What do you think are the biggest challenges facing your community?

Likert Scale Questions

These questions ask participants to rate how much they agree or disagree with a statement. They are useful for measuring attitudes and opinions.

How strongly do you agree or disagree with the following statement:

“I enjoy exercising regularly.”

  • a. Strongly Agree
  • b. Agree
  • c. Neither Agree nor Disagree
  • d. Disagree
  • e. Strongly Disagree

Demographic Questions

These questions ask about the participant’s personal information such as age, gender, ethnicity, education level, etc. They are useful for segmenting the data and analyzing results by demographic groups.

  • What is your age?

Yes/No Questions

These questions only have two options: Yes or No. They are useful for getting simple, straightforward answers to a specific question.

Have you ever traveled outside of your home country?

Ranking Questions

These questions ask participants to rank several items in order of preference or importance. They are useful for measuring priorities or preferences.

Please rank the following factors in order of importance when choosing a restaurant:

  • a. Quality of Food
  • c. Ambiance
  • d. Location

Matrix Questions

These questions present a matrix or grid of options that participants can choose from. They are useful for getting data on multiple variables at once.

Dichotomous Questions

These questions present two options that are opposite or contradictory. They are useful for measuring binary or polarized attitudes.

Do you support the death penalty?

How to Make a Questionnaire

Step-by-Step Guide for Making a Questionnaire:

  • Define your research objectives: Before you start creating questions, you need to define the purpose of your questionnaire and what you hope to achieve from the data you collect.
  • Choose the appropriate question types: Based on your research objectives, choose the appropriate question types to collect the data you need. Refer to the types of questions mentioned earlier for guidance.
  • Develop questions: Develop clear and concise questions that are easy for participants to understand. Avoid leading or biased questions that might influence the responses.
  • Organize questions: Organize questions in a logical and coherent order, starting with demographic questions followed by general questions, and ending with specific or sensitive questions.
  • Pilot the questionnaire: Test your questionnaire on a small group of participants to identify any flaws or issues with the questions or the format.
  • Refine the questionnaire: Based on feedback from the pilot, refine and revise the questionnaire as necessary to ensure that it is valid and reliable.
  • Distribute the questionnaire: Distribute the questionnaire to your target audience using a method that is appropriate for your research objectives, such as online surveys, email, or paper surveys.
  • Collect and analyze data: Collect the completed questionnaires and analyze the data using appropriate statistical methods (see the sketch after this list for a minimal example of a first-pass analysis). Draw conclusions from the data and use them to inform decision-making or further research.
  • Report findings: Present your findings in a clear and concise report, including a summary of the research objectives, methodology, key findings, and recommendations.
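
For closed-ended items, a first analysis pass is usually just frequency counts and simple descriptive statistics. Below is a minimal, illustrative sketch in Python using pandas; the column names and response values are hypothetical, not taken from any real questionnaire.

```python
import pandas as pd

# Hypothetical responses: one row per respondent, one column per question
responses = pd.DataFrame({
    "age_group": ["18-24", "25-34", "25-34", "35-44", "18-24"],
    "favorite_color": ["Red", "Blue", "Blue", "Green", "Yellow"],
    "enjoys_exercise": [4, 2, 5, 3, 4],  # Likert item coded 1 (Strongly Disagree) to 5 (Strongly Agree)
})

# Frequency counts and percentages for a multiple-choice (nominal) question
print(responses["favorite_color"].value_counts())
print(responses["favorite_color"].value_counts(normalize=True).mul(100).round(1))

# Descriptive statistics for a Likert item treated as numeric
print(responses["enjoys_exercise"].describe())

# Cross-tabulate answers against a demographic question
print(pd.crosstab(responses["age_group"], responses["favorite_color"]))
```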

Questionnaire Administration Modes

There are several modes of questionnaire administration. The choice of mode depends on the research objectives, sample size, and available resources. Some common modes of administration include:

  • Self-administered paper questionnaires: Participants complete the questionnaire on paper, either in person or by mail. This mode is relatively low cost and easy to administer, but it may result in lower response rates and greater potential for errors in data entry.
  • Online questionnaires: Participants complete the questionnaire on a website or through email. This mode is convenient for both researchers and participants, as it allows for fast and easy data collection. However, it may be subject to issues such as low response rates, lack of internet access, and potential for fraudulent responses.
  • Telephone surveys: Trained interviewers administer the questionnaire over the phone. This mode allows for a large sample size and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Face-to-face interviews: Trained interviewers administer the questionnaire in person. This mode allows for a high degree of control over the survey environment and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Mixed-mode surveys: Researchers use a combination of two or more modes to administer the questionnaire, such as using online questionnaires for initial screening and following up with telephone interviews for more detailed information. This mode can help overcome some of the limitations of individual modes, but it requires careful planning and coordination.

Example of Questionnaire

Title of the Survey: Customer Satisfaction Survey

Introduction:

We appreciate your business and would like to ensure that we are meeting your needs. Please take a few minutes to complete this survey so that we can better understand your experience with our products and services. Your feedback is important to us and will help us improve our offerings.

Instructions:

Please read each question carefully and select the response that best reflects your experience. If you have any additional comments or suggestions, please feel free to include them in the space provided at the end of the survey.

1. How satisfied are you with our product quality?

  • Very satisfied
  • Somewhat satisfied
  • Somewhat dissatisfied
  • Very dissatisfied

2. How satisfied are you with our customer service?

3. How satisfied are you with the price of our products?

4. How likely are you to recommend our products to others?

  • Very likely
  • Somewhat likely
  • Somewhat unlikely
  • Very unlikely

5. How easy was it to find the information you were looking for on our website?

  • Very easy
  • Somewhat easy
  • Somewhat difficult
  • Very difficult

6. How satisfied are you with the overall experience of using our products and services?

7. Is there anything that you would like to see us improve upon or change in the future?

…………………………………………………………………………………………………………………………..

Conclusion:

Thank you for taking the time to complete this survey. Your feedback is valuable to us and will help us improve our products and services. If you have any further comments or concerns, please do not hesitate to contact us.

Applications of Questionnaire

Some common applications of questionnaires include:

  • Research: Questionnaires are commonly used in research to gather information from participants about their attitudes, opinions, behaviors, and experiences. This information can then be analyzed and used to draw conclusions and make inferences.
  • Healthcare: In healthcare, questionnaires can be used to gather information about patients’ medical history, symptoms, and lifestyle habits. This information can help healthcare professionals diagnose and treat medical conditions more effectively.
  • Marketing: Questionnaires are commonly used in marketing to gather information about consumers’ preferences, buying habits, and opinions on products and services. This information can help businesses develop and market products more effectively.
  • Human Resources: Questionnaires are used in human resources to gather information from job applicants, employees, and managers about job satisfaction, performance, and workplace culture. This information can help organizations improve their hiring practices, employee retention, and organizational culture.
  • Education: Questionnaires are used in education to gather information from students, teachers, and parents about their perceptions of the educational experience. This information can help educators identify areas for improvement and develop more effective teaching strategies.

Purpose of Questionnaire

Some common purposes of questionnaires include:

  • To collect information on attitudes, opinions, and beliefs: Questionnaires can be used to gather information on people’s attitudes, opinions, and beliefs on a particular topic. For example, a questionnaire can be used to gather information on people’s opinions about a particular political issue.
  • To collect demographic information: Questionnaires can be used to collect demographic information such as age, gender, income, education level, and occupation. This information can be used to analyze trends and patterns in the data.
  • To measure behaviors or experiences: Questionnaires can be used to gather information on behaviors or experiences such as health-related behaviors or experiences, job satisfaction, or customer satisfaction.
  • To evaluate programs or interventions: Questionnaires can be used to evaluate the effectiveness of programs or interventions by gathering information on participants’ experiences, opinions, and behaviors.
  • To gather information for research: Questionnaires can be used to gather data for research purposes on a variety of topics.

When to use Questionnaire

Here are some situations when questionnaires might be used:

  • When you want to collect data from a large number of people: Questionnaires are useful when you want to collect data from a large number of people. They can be distributed to a wide audience and can be completed at the respondent’s convenience.
  • When you want to collect data on specific topics: Questionnaires are useful when you want to collect data on specific topics or research questions. They can be designed to ask specific questions and can be used to gather quantitative data that can be analyzed statistically.
  • When you want to compare responses across groups: Questionnaires are useful when you want to compare responses across different groups of people. For example, you might want to compare responses from men and women, or from people of different ages or educational backgrounds.
  • When you want to collect data anonymously: Questionnaires can be useful when you want to collect data anonymously. Respondents can complete the questionnaire without fear of judgment or repercussions, which can lead to more honest and accurate responses.
  • When you want to save time and resources: Questionnaires can be more efficient and cost-effective than other methods of data collection such as interviews or focus groups. They can be completed quickly and easily, and can be analyzed using software to save time and resources.

Characteristics of Questionnaire

Here are some of the characteristics of questionnaires:

  • Standardization: Questionnaires are standardized tools that ask the same questions in the same order to all respondents. This ensures that all respondents are answering the same questions and that the responses can be compared and analyzed.
  • Objectivity: Questionnaires are designed to be objective, meaning that they do not contain leading questions or bias that could influence the respondent’s answers.
  • Predefined responses: Questionnaires typically provide predefined response options for the respondents to choose from, which helps to standardize the responses and make them easier to analyze.
  • Quantitative data: Questionnaires are designed to collect quantitative data, meaning that they provide numerical or categorical data that can be analyzed using statistical methods.
  • Convenience: Questionnaires are convenient for both the researcher and the respondents. They can be distributed and completed at the respondent’s convenience and can be easily administered to a large number of people.
  • Anonymity: Questionnaires can be anonymous, which can encourage respondents to answer more honestly and provide more accurate data.
  • Reliability: Questionnaires are designed to be reliable, meaning that they produce consistent results when administered multiple times to the same group of people.
  • Validity: Questionnaires are designed to be valid, meaning that they measure what they are intended to measure and are not influenced by other factors.

Advantages of Questionnaire

Some advantages of questionnaires are as follows:

  • Standardization: Questionnaires allow researchers to ask the same questions to all participants in a standardized manner. This helps ensure consistency in the data collected and eliminates potential bias that might arise if questions were asked differently to different participants.
  • Efficiency: Questionnaires can be administered to a large number of people at once, making them an efficient way to collect data from a large sample.
  • Anonymity: Participants can remain anonymous when completing a questionnaire, which may make them more likely to answer honestly and openly.
  • Cost-effective: Questionnaires can be relatively inexpensive to administer compared to other research methods, such as interviews or focus groups.
  • Objectivity: Because questionnaires are typically designed to collect quantitative data, they can be analyzed objectively without the influence of the researcher’s subjective interpretation.
  • Flexibility: Questionnaires can be adapted to a wide range of research questions and can be used in various settings, including online surveys, mail surveys, or in-person interviews.

Limitations of Questionnaire

The limitations of questionnaires are as follows:

  • Limited depth: Questionnaires are typically designed to collect quantitative data, which may not provide a complete understanding of the topic being studied. Questionnaires may miss important details and nuances that could be captured through other research methods, such as interviews or observations.
  • Response bias: Participants may not always answer questions truthfully or accurately, either because they do not remember or because they want to present themselves in a particular way. This can lead to response bias, which can affect the validity and reliability of the data collected.
  • Limited flexibility: While questionnaires can be adapted to a wide range of research questions, they may not be suitable for all types of research. For example, they may not be appropriate for studying complex phenomena or for exploring participants’ experiences and perceptions in-depth.
  • Limited context: Questionnaires typically do not provide a rich contextual understanding of the topic being studied. They may not capture the broader social, cultural, or historical factors that may influence participants’ responses.
  • Limited control: Researchers may not have control over how participants complete the questionnaire, which can lead to variations in response quality or consistency.


Survey questions 101: 70+ survey question examples, types of surveys, and FAQs

How well do you understand your prospects and customers—who they are, what keeps them awake at night, and what brought them to your business in search of a solution? Asking the right survey questions at the right point in their customer journey is the most effective way to put yourself in your customers’ shoes.


This comprehensive intro to survey questions contains over 70 examples of effective questions, an overview of different types of survey questions, and advice on how to word them for maximum effect. Plus, we’ll toss in our pre-built survey templates, expert survey insights, and tips to make the most of AI for Surveys in Hotjar. ✨

Surveying your users is the simplest way to understand their pain points, needs, and motivations. But first, you need to know how to set up surveys that give you the answers you—and your business—truly need. Impactful surveys start here:

❓ The main types of survey questions: most survey questions are classified as open-ended, closed-ended, nominal, Likert scale, rating scale, and yes/no. The best surveys often use a combination of questions.

💡 70+ good survey question examples: our top 70+ survey questions, categorized across ecommerce, SaaS, and publishing, will help you find answers to your business’s most burning questions

✅ What makes a survey question ‘good’: a good survey question is anything that helps you get clear insights and business-critical information about your customers

❌ The dos and don’ts of writing good survey questions: remember to be concise and polite, use the foot-in-the-door principle, alternate questions, and test your surveys. But don’t ask leading or loaded questions, overwhelm respondents with too many questions, or neglect other tools that can get you the answers you need.

👍 How to run your surveys the right way: use a versatile survey tool like Hotjar Surveys that allows you to create on-site surveys at specific points in the customer journey or send surveys via a link

🛠️ 10 use cases for good survey questions: use your survey insights to create user personas, understand pain points, measure product-market fit, get valuable testimonials, measure customer satisfaction, and more

Use Hotjar to build your survey and get the customer insight you need to grow your business.

6 main types of survey questions

Let’s dive into our list of survey question examples, starting with a breakdown of the six main categories your questions will fall into:

Open-ended questions

Closed-ended questions

Nominal questions

Likert scale questions

Rating scale questions

'Yes' or 'no' questions

1. Open-ended survey questions

Open-ended questions give your respondents the freedom to answer in their own words, instead of limiting their response to a set of pre-selected choices (such as multiple-choice answers, yes/no answers, 0–10 ratings, etc.).

Examples of open-ended questions:

What other products would you like to see us offer?

If you could change just one thing about our product, what would it be?

When to use open-ended questions in a survey

The majority of example questions included in this post are open-ended, and there are some good reasons for that:

Open-ended questions help you learn about customer needs you didn’t know existed , and they shine a light on areas for improvement that you may not have considered before. If you limit your respondents’ answers, you risk cutting yourself off from key insights.

Open-ended questions are very useful when you first begin surveying your customers and collecting their feedback. If you don't yet have a good amount of insight, answers to open-ended questions will go a long way toward educating you about who your customers are and what they're looking for.

There are, however, a few downsides to open-ended questions:

First, people tend to be less likely to respond to open-ended questions in general because they take comparatively more effort to answer than, say, a yes/no one

Second, but connected: if you ask consecutive open-ended questions during your survey, people will get tired of answering them, and their answers might become less helpful the more you ask

Finally, the data you receive from open-ended questions will take longer to analyze compared to easy 1-5 or yes/no answers—but don’t let that stop you. There are plenty of shortcuts that make it easier than it looks (we explain it all in our post about how to analyze open-ended questions, which includes a free analysis template.)

💡 Pro tip: if you’re using Hotjar Surveys, let our AI for Surveys feature analyze your open-ended survey responses for you. Hotjar AI reviews all your survey responses and provides an automated summary report of key findings, including supporting quotes and actionable recommendations for next steps.

2. Closed-ended survey questions

Closed-ended questions limit a user’s response options to a set of pre-selected choices. This broad category includes nominal questions, Likert scale questions, rating scale questions, and ‘yes’ or ‘no’ questions.

When to use closed-ended questions

Closed-ended questions work brilliantly in two scenarios:

To open a survey, because they require little time and effort and are therefore easy for people to answer. This is called the foot-in-the-door principle: once someone commits to answering the first question, they may be more likely to answer the open-ended questions that follow.

When you need to create graphs and trends based on people’s answers. Responses to closed-ended questions are easy to measure and use as benchmarks. Rating scale questions, in particular (e.g. where people rate customer service on a scale of 1-10), allow you to gather customer sentiment and compare your progress over time.

3. Nominal questions

A nominal question is a type of survey question that presents people with multiple answer choices; the answers are  non-numerical in nature and don't overlap  (unless you include an ‘all of the above’ option).

Example of nominal question:

What are you using [product name] for?

Personal use

Business use

Both business and personal use

When to use nominal questions

Nominal questions work well when there is a limited number of categories for a given question (see the example above). They’re easy to create graphs and trends from, but the downside is that you may not be offering enough categories for people to choose from.

For example, if you ask people what type of browser they’re using and only give them three options to choose from, you may inadvertently alienate everybody who uses a fourth type and now can’t tell you about it.

That said, you can add an open-ended component to a nominal question with an expandable ’other’ category, where respondents can write in an answer that isn’t on the list. This way, you essentially ask an open-ended question that doesn’t limit them to the options you’ve picked.

4. Likert scale questions

The Likert scale is typically a 5- or 7-point scale that evaluates a respondent’s level of agreement with a statement or the intensity of their reaction toward something.

The scale develops symmetrically: the median number (e.g. a 3 on a 5-point scale) indicates a point of neutrality, the lowest number (always 1) indicates an extreme view, and the highest number (e.g. a 5 on a 5-point scale) indicates the opposite extreme view.

Example of a Likert scale question:

The British Museum uses a Likert scale Hotjar survey to gauge visitors’ reactions to their website optimizations

When to use Likert scale questions

Likert-type questions are also known as ordinal questions because the answers are presented in a specific order. Like other multiple-choice questions, Likert scale questions come in handy when you already have some sense of what your customers are thinking. For example, if your open-ended questions uncover a complaint about a recent change to your ordering process, you could use a Likert scale question to determine how the average user felt about the change.

A series of Likert scale questions can also be turned into a matrix question. Since they share identical response options, they can be combined into a single grid, which breaks up the repetitive pattern of answering one standalone question after another.

5. Rating scale questions

Rating scale questions are questions where the answers map onto a numeric scale (such as rating customer support on a scale of 1-5, or likelihood to recommend a product from 0-10).

Examples of rating questions:

How likely are you to recommend us to a friend or colleague on a scale of 0-10?

How would you rate our customer service on a scale of 1-5?

When to use rating questions

Whenever you want to assign a numerical value to your survey or visualize and compare trends, a rating question is the way to go.

A typical rating question is used to determine Net Promoter Score® (NPS®): the question asks customers to rate their likelihood of recommending products or services to their friends or colleagues, and allows you to look at the results historically and see if you're improving or getting worse. Rating questions are also used for customer satisfaction (CSAT) surveys and product reviews.
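
The NPS arithmetic itself is simple: respondents who answer 9 or 10 are promoters, those who answer 0 through 6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A quick sketch of that calculation in Python (the ratings below are made up):

```python
# Hypothetical answers to "How likely are you to recommend us?" (0-10 scale)
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(1 for r in ratings if r >= 9)   # scores of 9-10
detractors = sum(1 for r in ratings if r <= 6)  # scores of 0-6

# NPS = % promoters - % detractors; the result ranges from -100 to +100
nps = (promoters - detractors) / len(ratings) * 100
print(f"NPS: {nps:.0f}")  # 5 promoters and 2 detractors out of 10 -> NPS 30
```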

When you use a rating question in a survey, be sure to explain what the scale means (e.g. 1 for ‘Poor’, 5 for ‘Amazing’). And consider adding a follow-up open-ended question to understand why the user left that score.

Example of a rating question (NPS):

Hotjar's Net Promoter Score® (NPS®) survey template lets you add open-ended follow-up questions so you can understand the reasons behind users' ratings

6. ‘Yes’ or ‘no’ questions

These dichotomous questions are super straightforward, requiring a simple ‘yes’ or ‘no’ reply.

Examples of yes/no questions:

Was this article useful? (Yes/No)

Did you find what you were looking for today? (Yes/No)

When to use ‘yes’ or ‘no’ questions

‘Yes’ and ‘no’ questions are a good way to quickly segment your respondents . For example, say you’re trying to understand what obstacles or objections prevent people from trying your product. You can place a survey on your pricing page asking people if something is stopping them, and follow up with the segment who replied ‘yes’ by asking them to elaborate further.

These questions are also effective for getting your foot in the door: a ‘yes’ or ‘no’ question requires very little effort to answer. Once a user commits to answering the first question, they tend to become more willing to answer the questions that follow, or even leave you their contact information.

Web design agency NerdCow used Hotjar Surveys to add a yes/no survey on The Transport Library’s website, and followed it up with an open-ended question for more insights

70+ more survey question examples

Below is a list of good survey questions, categorized across ecommerce, software as a service (SaaS), and publishing. You don't have to use them word-for-word, but hopefully, this list will spark some extra-good ideas for the surveys you’ll run immediately after reading this article. (Plus, you can create all of them with Hotjar Surveys—stick with us a little longer to find out how. 😉)

📊 9 basic demographic survey questions

Ask these questions when you want context about your respondents and target audience, so you can segment them later. Consider including demographic information questions in your survey when conducting user or market research as well. 

But don’t ask demographic questions just for the sake of it—if you're not going to use some of the data points from these sometimes sensitive questions (e.g. if gender is irrelevant to the result of your survey), move on to the ones that are truly useful for you, business-wise. 

Take a look at the selection of examples below, and keep in mind that you can convert most of them to multiple choice questions:

What is your name?

What is your age?

What is your gender?

What company do you work for?

What vertical/industry best describes your company?

What best describes your role?

In which department do you work?

What is the total number of employees in your company (including all locations where your employer operates)?

What is your company's annual revenue?

🚀 Get started: gather more info about your users with our product-market fit survey template.

👥 20+ effective customer questions

These questions are particularly recommended for ecommerce companies:

Before purchase

What information is missing or would make your decision to buy easier?

What is your biggest fear or concern about purchasing this item?

Were you able to complete the purpose of your visit today?

If you did not make a purchase today, what stopped you?

After purchase

Was there anything about this checkout process we could improve?

What was your biggest fear or concern about purchasing from us?

What persuaded you to complete the purchase of the item(s) in your cart today?

If you could no longer use [product name], what’s the one thing you would miss the most?

What’s the one thing that nearly stopped you from buying from us?

👉 Check out our 7-step guide to setting up an ecommerce post-purchase survey.

Other useful customer questions

Do you have any questions before you complete your purchase?

What other information would you like to see on this page?

What were the three main things that persuaded you to create an account today?

What nearly stopped you from creating an account today?

Which other options did you consider before choosing [product name]?

What would persuade you to use us more often?

What was your biggest challenge, frustration, or problem in finding the right [product type] online?

Please list the top three things that persuaded you to use us rather than a competitor.

Were you able to find the information you were looking for?

How satisfied are you with our support?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

How likely are you to recommend us to a friend or colleague? (NPS question)

Is there anything preventing you from purchasing at this point?

🚀 Get started: learn how satisfied customers are with our expert-built customer satisfaction and NPS survey templates.


🛍 30+ product survey questions

These questions are particularly recommended for SaaS companies:

Questions for new or trial users

What nearly stopped you from signing up today?

How likely are you to recommend us to a friend or colleague on a scale of 0-10? (NPS question)

Is our pricing clear? If not, what would you change?

Questions for paying customers

What convinced you to pay for this service?

What’s the one thing we are missing in [product type]?

What's one feature we can add that would make our product indispensable for you?

If you could no longer use [name of product], what’s the one thing you would miss the most?

🚀 Get started: find out what your buyers really think with our pricing plan feedback survey template.

Questions for former/churned customers

What is the main reason you're canceling your account? Please be blunt and direct.

If you could have changed one thing in [product name], what would it have been?

If you had a magic wand and could change anything in [product name], what would it be?

🚀 Get started: find out why customers churn with our free-to-use churn analysis survey template.

Other useful product questions

What were the three main things that persuaded you to sign up today?

Do you have any questions before starting a free trial?

What persuaded you to start a trial?

Was this help section useful?

Was this article useful?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

Is there anything preventing you from upgrading at this point?

Is there anything on this page that doesn't work the way you expected it to?

What could we change to make you want to continue using us?

If you did not upgrade today, what stopped you?

What's the next thing you think we should build?

How would you feel if we discontinued this feature?

What's the next feature or functionality we should build?

🚀 Get started: gather feedback on your product with our free-to-use product feedback survey template.

🖋 20+ effective questions for publishers and bloggers

Questions to help improve content

If you could change just one thing in [publication name], what would it be?

What other content would you like to see us offer?

How would you rate this article on a scale of 1–10?

If you could change anything on this page, what would you have us do?

If you did not subscribe to [publication name] today, what was it that stopped you?

🚀 Get started: find ways to improve your website copy and messaging with our content feedback survey template.

New subscriptions

What convinced you to subscribe to [publication] today?

What almost stopped you from subscribing?

What were the three main things that persuaded you to join our list today?

Cancellations

What is the main reason you're unsubscribing? Please be specific.

Other useful content-related questions

What’s the one thing we are missing in [publication name]?

What would persuade you to visit us more often?

How likely are you to recommend us to someone with similar interests? (NPS question)

What’s missing on this page?

What topics would you like to see us write about next?

How useful was this article?

What could we do to make this page more useful?

Is there anything on this site that doesn't work the way you expected it to?

What's one thing we can add that would make [publication name] indispensable for you?

If you could no longer read [publication name], what’s the one thing you would miss the most?

💡 Pro tip: do you have a general survey goal in mind, but are struggling to pin down the right questions to ask? Give Hotjar’s AI for Surveys a go and watch as it generates a survey for you in seconds with questions tailored to the exact purpose of the survey you want to run.

What makes a good survey question?

We’ve run through more than 70 of our favorite survey questions—but what is it that makes a good survey question, well, good? An effective question is anything that helps you get clear insights and business-critical information about your customers, including

Who your target market is

How you should price your products

What’s stopping people from buying from you

Why visitors leave your website

With this information, you can tailor your website, products, landing pages, and messaging to improve the user experience and, ultimately, maximize conversions .

How to write good survey questions: the DOs and DON’Ts

To help you understand the basics and avoid some rookie mistakes, we asked a few experts to give us their thoughts on what makes a good and effective survey question.

Survey question DOs

✅ DO focus your questions on the customer

It may be tempting to focus on your company or products, but it’s usually more effective to put the focus back on the customer. Get to know their needs, drivers, pain points, and barriers to purchase by asking about their experience. That’s what you’re after: you want to know what it’s like inside their heads and how they feel when they use your website and products.

Rather than asking, “Why did you buy our product?” ask, “What was happening in your life that led you to search for this solution?” Instead of asking, “What's the one feature you love about [product],” ask, “If our company were to close tomorrow, what would be the one thing you’d miss the most?” These types of surveys have helped me double and triple my clients.

✅ DO be polite and concise (without skimping on micro-copy)

Put time into your micro-copy—those tiny bits of written content that go into surveys. Explain why you’re asking the questions, and when people reach the end of the survey, remember to thank them for their time. After all, they’re giving you free labor!

✅ DO consider the foot-in-the-door principle

One way to increase your response rate is to ask an easy question upfront, such as a ‘yes’ or ‘no’ question, because once people commit to taking a survey—even just the first question—they’re more likely to finish it.

✅ DO consider asking your questions from the first-person perspective

Disclaimer: we don’t do this here at Hotjar. You’ll notice all our sample questions are listed in second-person (i.e. ‘you’ format), but it’s worth testing to determine which approach gives you better answers. Some experts prefer the first-person approach (i.e. ‘I’ format) because they believe it encourages users to talk about themselves—but only you can decide which approach works best for your business.

I strongly recommend that the questions be worded in the first person. This helps create a more visceral reaction from people and encourages them to tell stories from their actual experiences, rather than making up hypothetical scenarios. For example, here’s a similar question, asked two ways: “What do you think is the hardest thing about creating a UX portfolio?” versus “My biggest problem with creating my UX portfolio is…” 

The second version helps get people thinking about their experiences. The best survey responses come from respondents who provide personal accounts of past events that give us specific and real insight into their lives.

✅ DO alternate your questions often

Shake up the questions you ask on a regular basis. Asking a wide variety of questions will help you and your team get a complete view of what your customers are thinking.

✅ DO test your surveys before sending them out

A few years ago, Hotjar created a survey we sent to 2,000 CX professionals via email. Before officially sending it out, we wanted to make sure the questions really worked. 

We decided to test them out on internal staff and external people by sending out three rounds of test surveys to 100 respondents each time. Their feedback helped us perfect the questions and clear up any confusing language.

Survey question DON’Ts

❌ DON’T ask closed-ended questions if you’ve never done research before

If you’ve just begun asking questions, make them open-ended questions since you have no idea what your customers think about you at this stage. When you limit their answers, you just reinforce your own assumptions.

There are two exceptions to this rule:

Using a closed-ended question to get your foot in the door at the beginning of a survey

Using rating scale questions to gather customer sentiment (like an NPS survey)

❌ DON’T ask a lot of questions if you’re just getting started

Having to answer too many questions can overwhelm your users. Stick with the most important points and discard the rest.

Try starting off with a single question to see how your audience responds, then move on to two questions once you feel like you know what you’re doing.

How many questions should you ask? There’s really no perfect answer, but we recommend asking as few as you need to ask to get the information you want. In the beginning, focus on the big things:

Who are your users?

What do potential customers want?

How are they using your product?

What would win their loyalty?

❌ DON’T just ask a question when you can combine it with other tools

Don’t just use surveys to answer questions that other tools (such as analytics) can also answer. If you want to learn about whether people find a new website feature helpful, you can also observe how they’re using it through traditional analytics, session recordings , and other user testing tools for a more complete picture.

Don’t use surveys to ask people questions that other tools are better equipped to answer. I’m thinking of questions like “What do you think of the search feature?” with pre-set answer options like ‘Very easy to use,’ ‘Easy to use,’ etc. That’s not a good question to ask. 

Why should you care about what people ‘think’ about the search feature? You should find out whether it helps people find what they need and whether it helps drive conversions for you. Analytics, user session recordings, and user testing can tell you whether it does that or not.

❌ DON’T ask leading questions

A leading question is one that prompts a specific answer. Avoid asking leading questions because they’ll give you bad data. For example, asking, “What makes our product better than our competitors’ products?” might boost your self-esteem, but it won’t get you good information. Why? You’re effectively planting the idea that your own product is the best on the market.

❌ DON’T ask loaded questions

A loaded question is similar to a leading question, but it does more than just push a bias—it phrases the question such that it’s impossible to answer without confirming an underlying assumption.

A common (and subtle) form of loaded survey question would be, “What do you find useful about this article?” If we haven’t first asked you whether you found the article useful at all, then we’re asking a loaded question.

❌ DON’T ask about more than one topic at once

For example, “Do you believe our product can help you increase sales and improve cross-collaboration?”

This complex question, also known as a ‘double-barreled question’, requires a very complex answer as it begs the respondent to address two separate questions at once:

Do you believe our product can help you increase sales?

Do you believe our product can help you improve cross-collaboration?

Respondents may very well answer 'yes', but actually mean it for the first part of the question, and not the other. The result? Your survey data is inaccurate, and you’ve missed out on actionable insights.

Instead, ask two specific questions to gather customer feedback on each concept.

How to run your surveys

The format you pick for your survey depends on what you want to achieve and also on how much budget or resources you have. You can

Use an on-site survey tool, like Hotjar Surveys, to set up a website survey that pops up whenever people visit a specific page: this is useful when you want to investigate website- and product-specific topics quickly. This format is relatively inexpensive—with Hotjar’s free forever plan, you can even run up to 3 surveys with unlimited questions for free.


Use Hotjar Surveys to embed a survey as an element directly on a page: this is useful when you want to grab your audience’s attention and connect with customers at relevant moments, without interrupting their browsing. (Scroll to the bottom of this page to see an embedded survey in action!) This format is included on Hotjar’s Business and Scale plans—try it out for 15 days with a free Ask Business trial.

Use a survey builder and create a survey people can access in their own time: this is useful when you want to reach out to your mailing list or a wider audience with an email survey (you just need to share the URL the survey lives at). Sending in-depth questionnaires this way allows for more space for people to elaborate on their answers. This format is also relatively inexpensive, depending on the tool you use.

Place survey kiosks in a physical location where people can give their feedback by pressing a button: this is useful for quick feedback on specific aspects of a customer's experience (there’s usually plenty of these in airports and waiting rooms). This format is relatively expensive to maintain due to the material upkeep.

Run in-person surveys with your existing or prospective customers: in-person questionnaires help you dig deep into your interviewees’ answers. This format is relatively cheap if you do it online with a user interview tool or over the phone, but it’s more expensive and time-consuming if done in a physical location.

💡 Pro tip: looking for an easy, cost-efficient way to connect with your users? Run effortless, automated user interviews with Engage , Hotjar’s user interview tool. Get instant access to a pool of 200,000+ participants (or invite your own), and take notes while Engage records and transcribes your interview.

10 survey use cases: what you can do with good survey questions

Effective survey questions can help improve your business in many different ways. We’ve written in detail about most of these ideas in other blog posts, so we’ve rounded them up for you below.

1. Create user personas

A user persona is a character based on the people who currently use your website or product. A persona combines psychographics and demographics and reflects who they are, what they need, and what may stop them from getting it.

Examples of questions to ask:

Describe yourself in one sentence, e.g. “I am a 30-year-old marketer based in Dublin who enjoys writing articles about user personas.”

What is your main goal for using this website/product?

What, if anything, is preventing you from doing it?

👉 Our post about creating simple and effective user personas in four steps highlights some great survey questions to ask when creating a user persona.

🚀 Get started: use our user persona survey template or AI for Surveys to inform your user persona.

2. Understand why your product is not selling

Few things are more frightening than stagnant sales. When the pressure is mounting, you’ve got to get to the bottom of it, and good survey questions can help you do just that.

What made you buy the product? What challenges are you trying to solve?

What did you like most about the product? What did you dislike the most?

What nearly stopped you from buying?

👉 Here’s a detailed piece about the best survey questions to ask your customers when your product isn’t selling, and why they work so well.

🚀 Get started: our product feedback survey template helps you find out whether your product satisfies your users. Or build your surveys in the blink of an eye with Hotjar AI.

3. Understand why people leave your website

If you want to figure out why people are leaving your website , you’ll have to ask questions.

A good format for that is an exit-intent pop-up survey, which appears when a user clicks to leave the page, giving them the chance to leave website feedback before they go.

Another way is to focus on the people who did convert, but just barely—something Hotjar founder David Darmanin considers essential for taking conversions to the next level. By focusing on customers who bought your product (but almost didn’t), you can learn how to win over another set of users who are similar to them: those who almost bought your products, but backed out in the end.

Example of questions to ask:

Not for you? Tell us why. (Exit-intent pop-up—ask this when a user leaves without buying.)

What almost stopped you from buying? (Ask this post-conversion.)

👉 Find out how HubSpot Academy increased its conversion rate by adding an exit-intent survey that asked one simple question when users left their website: “Not for you? Tell us why.”

🚀 Get started: place an exit-intent survey on your site. Let Hotjar AI draft the survey questions by telling it what you want to learn.

I spent the better half of my career focusing on the 95% who don’t convert, but it’s better to focus on the 5% who do. Get to know them really well, deliver value to them, and really wow them. That’s how you’re going to take that 5% to 10%.

4. Understand your customers’ fears and concerns

Buying a new product can be scary: nobody wants to make a bad purchase. Your job is to address your prospective customers’ concerns, counter their objections, and calm their fears, which should lead to more conversions.

👉 Take a look at our no-nonsense guide to increasing conversions for a comprehensive write-up about discovering the drivers, barriers, and hooks that lead people to converting on your website.

🚀 Get started: understand why your users are tempted to leave and discover potential barriers with a customer retention survey.

5. Drive your pricing strategy

Are your products overpriced and scaring away potential buyers? Or are you underpricing and leaving money on the table?

Asking the right questions will help you develop a pricing structure that maximizes profit, but you have to be delicate about how you ask. Don’t ask directly about price, or you’ll seem unsure of the value you offer. Instead, ask questions that uncover how your products serve your customers and what would inspire them to buy more.

How do you use our product/service?

What would persuade you to use our product more often?

What’s the one thing our product is missing?

👉 We wrote a series of blog posts about managing the early stage of a SaaS startup, which included a post about developing the right pricing strategy—something businesses in all sectors could benefit from.

🚀 Get started: find the sweet spot in how to price your product or service with a Van Westendorp price sensitivity survey or get feedback on your pricing plan.

6. Measure and understand product-market fit

Product-market fit (PMF) is about understanding demand and creating a product that your customers want, need, and will actually pay money for. A combination of online survey questions and one-on-one interviews can help you figure this out.

What's one thing we can add that would make [product name] indispensable for you?

If you could change just one thing in [product name], what would it be?

👉 In our series of blog posts about managing the early stage of a SaaS startup, we covered a section on product-market fit, which has relevant information for all industries.

🚀 Get started: discover if you’re delivering the best products to your market with our product-market fit survey.

7. Choose effective testimonials

Human beings are social creatures—we’re influenced by people who are similar to us. Testimonials that explain how your product solved a problem for someone are the ultimate form of social proof. The following survey questions can help you get some great testimonials.

What changed for you after you got our product?

How does our product help you get your job done?

How would you feel if you couldn’t use our product anymore?

👉 In our post about positioning and branding your products, we cover the type of questions that help you get effective testimonials.

🚀 Get started: add a question asking respondents whether you can use their answers as testimonials in your surveys, or conduct user interviews to gather quotes from your users.

8. Measure customer satisfaction

It’s important to continually track your overall customer satisfaction so you can address any issues before they start to impact your brand’s reputation. You can do this with rating scale questions.

For example, at Hotjar, we ask for feedback after each customer support interaction (which is one important measure of customer satisfaction). We begin with a simple, foot-in-the-door question to encourage a response, and use the information to improve our customer support, which is strongly tied to overall customer satisfaction.

How would you rate the support you received? (1-5 scale)

If 1-3: How could we improve?

If 4-5: What did you love about the experience?
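
If you want to track that 1-5 rating over time, the CSAT score is conventionally reported as the share of respondents who chose one of the top two ratings (4 or 5). A tiny sketch of that calculation, using made-up ratings:

```python
# Hypothetical answers to "How would you rate the support you received?" (1-5)
ratings = [5, 4, 3, 5, 2, 4, 5, 4]

# CSAT is commonly reported as the percentage of 'satisfied' responses (4 or 5)
satisfied = sum(1 for r in ratings if r >= 4)
csat = satisfied / len(ratings) * 100
print(f"CSAT: {csat:.0f}%")  # 6 of 8 responses are a 4 or 5 -> 75%
```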

👉 Our beginner’s guide to website feedback goes into great detail about how to measure customer service, NPS, and other important success metrics.

🚀 Get started: gauge short-term satisfaction level with a CSAT survey.

9. Measure word-of-mouth recommendations

Net Promoter Score is a measure of how likely your customers are to recommend your products or services to their friends or colleagues. NPS is a higher bar than customer satisfaction because customers have to be really impressed with your product to recommend you.

Example of NPS questions (to be asked in the same survey):

How likely are you to recommend this company to a friend or colleague? (0-10 scale)

What’s the main reason for your score?

What should we do to WOW you?

👉 We created an NPS guide with ecommerce companies in mind, but it has plenty of information that will help companies in other industries as well.

🚀 Get started: measure whether your users would refer you to a friend or colleague with an NPS survey. Then, use our free NPS calculator to crunch the numbers.

10. Redefine your messaging

How effective is your messaging? Does it speak to your clients' needs, drives, and fears? Does it speak to your strongest selling points?

Asking the right survey questions can help you figure out what marketing messages work best, so you can double down on them.

What attracted you to [brand or product name]?

Did you have any concerns before buying [product name]?

Since you purchased [product name], what has been the biggest benefit to you?

If you could describe [brand or product name] in one sentence, what would you say?

What is your favorite thing about [brand or product name]?

How likely are you to recommend this product to a friend or colleague? (NPS question)

👉 We talk about positioning and branding your products in a post that’s part of a series written for SaaS startups, but even if you’re not in SaaS (or you’re not a startup), you’ll still find it helpful.

Have a question for your customers? Ask!

Feedback is at the heart of deeper empathy for your customers and a more holistic understanding of their behaviors and motivations. And luckily, people are more than ready to share their thoughts about your business— they're just waiting for you to ask them. Deeper customer insights start right here, with a simple tool like Hotjar Surveys.


FAQs about survey questions

How many people should I survey, and what should my sample size be?

A good rule of thumb is to aim for at least 100 replies that you can work with.

You can use our sample size calculator to get a more precise answer, but understand that collecting feedback is research, not experimentation. Unlike experimentation (such as A/B testing), all is not lost if you can’t get a statistically significant sample size. In fact, as few as ten replies can give you actionable information about what your users want.
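
If you'd rather work from the standard statistics than a rule of thumb, the usual sample-size formula for estimating a proportion is n = z²·p(1−p)/e², optionally corrected for a finite population. A minimal sketch, assuming a 95% confidence level (z ≈ 1.96), a 5% margin of error, and the worst-case proportion p = 0.5:

```python
import math

def sample_size(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Sample size for estimating a proportion, with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population estimate (~384 here)
    n = n0 / (1 + (n0 - 1) / population)     # finite-population correction
    return math.ceil(n)

print(sample_size(population=10_000))  # ~370 respondents
print(sample_size(population=500))     # ~218 respondents
```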

How many questions should my survey have?

There’s no perfect answer to this question, but we recommend asking as few as you need to ask in order to get the information you want. Remember, you’re essentially asking someone to work for free, so be respectful of their time.

Why is it important to ask good survey questions?

A good survey question is asked in a precise way at the right stage in the customer journey to give you insight into your customers’ needs and drives. The qualitative data you get from survey responses can supplement the insight you can capture through other traditional analytics tools (think Google Analytics) and behavior analytics tools (think heatmaps and session recordings, which visualize user behavior on specific pages or across an entire website).

The format you choose for your survey—in-person, email, on-page, etc.—is important, but if the questions themselves are poorly worded you could waste hours trying to fix minimal problems while ignoring major ones a different question could have uncovered. 

How do I analyze open-ended survey questions?

A big pile of qualitative data can seem intimidating, but there are some shortcuts that make it much easier to analyze. We put together a guide for analyzing open-ended questions in 5 simple steps, which should answer all your questions.

But the fastest way to analyze open questions is to use the automated summary report with Hotjar AI in Surveys. AI turns the complex survey data into:

Key findings

Actionable insights
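
If you're doing a manual first pass instead, one rough approach is to define a handful of themes yourself and count how many responses mention each one. This is only a toy sketch; the themes, keywords, and responses below are invented for illustration:

```python
# Hypothetical open-ended responses and hand-picked theme keywords
responses = [
    "The checkout was confusing and the shipping cost surprised me",
    "Love the product, but shipping took too long",
    "Pricing feels high compared to alternatives",
    "Great support team, very fast replies",
]

themes = {
    "shipping": ["shipping", "delivery"],
    "pricing": ["price", "pricing", "cost", "expensive"],
    "support": ["support", "help", "replies"],
    "usability": ["confusing", "hard to use", "checkout"],
}

# Count how many responses mention each theme at least once
counts = {theme: 0 for theme in themes}
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

print(counts)  # {'shipping': 2, 'pricing': 2, 'support': 1, 'usability': 1}
```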

Will sending a survey annoy my customers?

Honestly, the real danger is  not  collecting feedback. Without knowing what users think about your page and  why  they do what they do, you’ll never create a user experience that maximizes conversions. The truth is, you’re probably already doing something that bugs them more than any survey or feedback button would.

If you’re worried that adding an on-page survey might hurt your conversion rate, start small and survey just 10% of your visitors. You can stop surveying once you have enough replies.


How to conduct your own market research survey (with example)


Real-life focus groups tend to be pretty small. Even without any over-the-top personalities involved, it's easy for these groups to go off the rails.

So what happens when you want to collect market research at a larger scale? That's where the market research survey comes in. Market surveys allow you to get just as much valuable information as an in-person interview, without the burden of herding hundreds of rowdy Eagles fans through a product test.

Table of contents:

  • What is a market research survey?
  • Why conduct market research?
  • Primary vs. secondary market research
  • 6 types of market research surveys
  • How to write and conduct a market research survey
  • Tips for running a market research survey
  • Market research survey campaign example questions
  • Market research survey template
  • Use automation to put survey results into action

A market research survey is a questionnaire designed to collect key information about a company's target market and audience that will help guide business decisions about products and services, branding angles, and advertising campaigns.

Market surveys are what's known as "primary research"—that is, information that the researching company gathers firsthand. Secondary research consists of data that another organization gathered and published, which other researchers can then use for their own reports. Primary research is more expensive and time-intensive than secondary research, which is why you should only use market research surveys to obtain information that you can't get anywhere else. 

A market research survey can collect information on your target customers':

Experiences

Preferences, desires, and needs

Values and motivations

The types of information that can usually be found in a secondary source, and therefore aren't good candidates for a market survey, include your target customers':

Demographic data

Consumer spending data

Household size

Lots of this secondary information can be found in a public database like those maintained by the Census Bureau and Bureau of Labor Statistics . There are also a few free market research tools that you can use to access more detailed data, like Think with Google , Data USA , and Statista . Or, if you're looking to learn about your existing customer base, you can also use a CRM to automatically record key information about your customers each time they make a purchase.

If you've exhausted your secondary research options and still have unanswered questions, it's time to start thinking about conducting a market research survey.

The first thing to figure out is what you're trying to learn, and from whom. Are you beta testing a new product or feature with existing users? Or are you looking to identify new customer personas for your marketers to target? There are a number of different ways to use a marketing research survey, and your choice will impact how you set up the questionnaire.

Here are some examples of how market research surveys can be used to fill a wide range of knowledge gaps for companies:

A B2B software company asks real users in its industry about Kanban board usage to help prioritize their project view change rollout.

A B2C software company asks its target demographic about their mobile browsing habits to help them find features to incorporate into their forthcoming mobile app.

A printing company asks its target demographic about fabric preferences to gauge interest in a premium material option for their apparel lines.

A wholesale food vendor surveys regional restaurant owners to find ideas for seasonal products to offer.

6 types of market research surveys

Depending on your goal, you'll need different types of market research. Here are six types of market research surveys.

1. Buyer persona research

A buyer persona or customer profile is a simple sketch of the types of people that you should be targeting as potential customers. 

A buyer persona research survey will help you learn more about things like demographics, household makeup, income and education levels, and lifestyle markers. The more you learn about your existing customers, the more specific you can get in targeting potential customers. You may find that there are more buyer personas within your user base than the ones that you've been targeting.

2. Sales funnel research

The sales funnel is the path that potential customers take to eventually become buyers. It starts with the target's awareness of your product, then moves through stages of increasing interest until they ultimately make a purchase. 

With a sales funnel research survey, you can learn about potential customers' main drivers at different stages of the sales funnel. You can also get feedback on how effective different sales strategies are. Use this survey to find out:

How close potential buyers are to making a purchase

What tools and experiences have been most effective in moving prospective customers closer to conversion

What types of lead magnets are most attractive to your target audience

3. Customer loyalty research

Whenever you take a customer experience survey after you make a purchase, you'll usually see a few questions about whether you would recommend the company or a particular product to a friend. After you've identified your biggest brand advocates , you can look for persona patterns to determine what other customers are most likely to be similarly enthusiastic about your products. Use these surveys to learn:

The demographics of your most loyal customers

What tools are most effective in turning customers into advocates

What you can do to encourage more brand loyalty

4. Branding and marketing research

The Charmin focus group featured in that SNL sketch is an example of branding and marketing research, in which a company looks for feedback on a particular advertising angle to get a sense of whether it will be effective before the company spends money on running the ad at scale. Use this type of survey to find out:

Whether a new advertising angle will do well with existing customers

Whether a campaign will do well with a new customer segment you haven't targeted yet

What types of campaign angles do well with a particular demographic

5. New products or features research

Whereas the Charmin sketch features a marketing focus group, this one features new product research for a variety of new Hidden Valley Ranch flavors. Though you can't get hands-on feedback on new products when you're conducting a survey instead of an in-person meeting, you can survey your customers to find out:

What features they wish your product currently had

What other similar or related products they shop for

What they think of a particular product or feature idea

Running a survey before investing resources into developing a new offering will save you and the company a lot of time, money, and energy.

6. Competitor research

You can get a lot of information about your own customers and users via automatic data collection , but your competitors' customer base may not be made up of the same buyer personas that yours is. Survey your competitors' users to find out:

Your competitors' customers' demographics, habits, and behaviors

Whether your competitors have found success with a buyer persona you're not targeting

Information about buyers for a product that's similar to one you're thinking about launching

Feedback on what features your competitors' customers wish their version of a product had

Once you've narrowed down your survey's objectives, you can move forward with designing and running your survey.

Step 1: Write your survey questions

A poorly worded survey, or a survey that uses the wrong question format, can render all of your data moot. If you write a question that results in most respondents answering "none of the above," you haven't learned much. 

You'll find dozens of question types and even pre-written questions in most survey apps . Here are a few common question types that work well for market surveys.

Categorical questions

Also known as a nominal question, this question type provides numbers and percentages for easy visualization, like "35% said ABC." It works great for bar graphs and pie charts, but you can't take averages or test correlations with nominal-level data.

Yes/No: The most basic survey question used in polls is the Yes/No question, which can be easily created using your survey app or by adding Yes/No options to a multiple-choice question. 

Multiple choice: Use this type of question if you need more nuance than a Yes/No answer gives. You can add as many answers as you want, and your respondents can pick only one answer to the question. 

Checkbox: Checkbox questions add the flexibility to select all the answers that apply. Add as many answers as you want, and respondents aren't limited to just one. 

A screenshot of a multiple choice question asking about how you travel to work with various answers and an option to type in your own answer in an "other" field

Ordinal questions

This type of question requires survey-takers to pick from options presented in a specific order, like "income of $0-$25K, $26K-$40K, $41K+." Like nominal questions, ordinal questions elicit responses that allow you to analyze counts and percentages, though you can't calculate averages or assess correlations with ordinal-level data.

Dropdown: Responses to ordinal questions can be presented as a dropdown, from which survey-takers can only make one selection. You could use this question type to gather demographic data, like the respondent's country or state of residence. 

Ranking: This is a unique question type that allows respondents to arrange a list of answers in their preferred order, providing feedback on each option in the process. 

Interval/ratio questions

For precise data and advanced analysis, use interval or ratio questions. These can help you calculate more advanced analytics, like averages, test correlations, and run regression models. Interval questions commonly use scales of 1-5 or 1-7, like "Strongly disagree" to "Strongly agree." Ratio questions have a true zero and often ask for numerical inputs (like "How many cups of coffee do you drink per day? ____").

Ranking scale: A ranking scale presents answer choices along an ordered value-based sequence, either using numbers, a like/love scale, a never/always scale, or some other ratio interval. It gives more insight into people's thoughts than a Yes/No question. 

Matrix: Have a lot of interval questions to ask? You can put a number of questions in a list and use the same scale for all of them. It simplifies gathering data about a lot of similar items at once. 

Example: How much do you like the following: oranges, apples, grapes? Hate / Dislike / OK / Like / Love

Textbox: A textbox question is needed for collecting direct feedback or personal data like names. There will be a blank space where the respondent can enter their answer to your question on their own. 

Screenshot example of an interval question about how much you enjoy commuting to work with options to indicate how much a person agrees and disagrees with a statement
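These three question families differ mainly in which summary statistics are meaningful. As a rough illustration, here is a small Python sketch (the data and variable names are hypothetical, not from any real survey) that summarizes nominal and ordinal answers with counts and percentages, and a ratio-scale answer with a mean and standard deviation.

```python
from collections import Counter
import statistics

# Hypothetical responses from ten respondents to three survey questions.
commute_mode = ["car", "bike", "car", "bus", "car", "walk", "bike", "car", "bus", "car"]  # nominal
income_band = ["$0-$25K", "$26K-$40K", "$41K+", "$26K-$40K", "$0-$25K",
               "$41K+", "$26K-$40K", "$41K+", "$0-$25K", "$26K-$40K"]                     # ordinal
coffee_cups = [0, 2, 1, 3, 2, 0, 1, 4, 2, 1]                                              # ratio

# Nominal and ordinal answers: report counts and percentages, not averages.
for name, answers in [("commute mode", commute_mode), ("income band", income_band)]:
    counts = Counter(answers)
    summary = ", ".join(f"{option}: {count / len(answers):.0%}"
                        for option, count in counts.most_common())
    print(f"{name}: {summary}")

# Ratio answers: averages and spread are meaningful.
print(f"coffee cups per day: mean={statistics.mean(coffee_cups):.1f}, "
      f"stdev={statistics.stdev(coffee_cups):.2f}")
```

The same idea carries over to reporting: percentages suit bar and pie charts, while averages and correlations need at least interval-level data.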

Step 2: Choose a survey platform

There are a lot of survey platforms to choose from, and they all offer different and unique features. Check out Zapier's list of the best online survey apps to help you decide.

Most survey apps today look great on mobile, but be sure to preview your survey on your phone and computer, at least, to make sure it'll look good for all of your users.

A screenshot image of two survey questions on a mobile device rather than a desktop view to illustrate the importance of checking to see how a survey will show up on multiple platforms

If you have the budget, you can also purchase survey services from a larger research agency. 

Step 3: Run a test survey

Before you run your full survey, conduct a smaller test on 5%-10% of your target respondent pool size. This will allow you to work out any confusing wording or questions that result in unhelpful responses without spending the full cost of the survey. Look out for:

Survey rejection from the platform for prohibited topics

Joke or nonsense textbox answers that indicate the respondent didn't answer the survey in earnest

Multiple choice questions with an outsized percentage of "none of the above" or "N/A" responses

Step 4: Launch your survey

If your test survey comes back looking good, you're ready to launch the full thing! Make sure that you leave ample time for the survey to run—you'd be surprised at how long it takes to get a few thousand respondents. 

Even if you've run similar surveys in the past, leave more time than you need. Some surveys take longer than others for no clear reason, and you also want to build in time to conduct a comprehensive data analysis.

Step 5: Organize and interpret the data

Unless you're a trained data analyst, you should avoid crunching all but the simplest survey data by hand. Most survey platforms include some form of reporting dashboard that will handle things like population weighting for you, but you can also connect your survey platform to other apps that make it easy to keep track of your results and turn them into actionable insights.

Tips for running a market research survey

You know the basics of how to conduct a market research survey, but here are some tips to enhance the quality of your data and the reliability of your findings.

Find the right audience: You could have meticulously crafted survey questions, but if you don't target the appropriate demographic or customer segment, it doesn't really matter. You need to collect responses from the people you're trying to understand. Targeted audiences you can send surveys to include your existing customers, current social media followers, newsletter subscribers, attendees at relevant industry events, and community members from online forums, discussion boards, or other online communities that cater to your target audience. 

Take advantage of existing resources: No need to reinvent the wheel. You may be able to use common templates and online survey platforms like SurveyMonkey for both survey creation and distribution. You can also use AI tools to create better surveys. For example, generative AI tools like ChatGPT can help you generate questions, while analytical AI tools can scan survey responses to help sort, tag, and report on them. Some survey apps have AI built into them already too.

Focus questions on a desired data type: As you conceptualize your survey, consider whether a qualitative or quantitative approach will better suit your research goals. Qualitative methods are best for exploring in-depth insights and underlying motivations, while quantitative methods are better for obtaining statistical data and measurable trends. For an outcome like "optimize our ice cream shop's menu offerings," you may want to find out which flavors of ice cream are most popular with teens. This would require a quantitative approach, for which you would use categorical questions that can help you rank potential flavors numerically.

Establish a timeline: Set a realistic timeline for your survey, from creation to distribution to data collection and analysis. You'll want to balance having your survey out long enough to generate a significant amount of responses but not so long that it loses relevance. That length can vary widely based on factors like type of survey, number of questions, audience size, time sensitivity, question format, and question length.

Define a margin of error: Your margin of error shows how much the survey results might differ from the real opinions of the entire group being studied. Since you can't survey every single person in your desired population, you'll have to settle on an acceptable percentage of error upfront, a figure that varies with sample size, sample proportion, and confidence level. According to the University of Wisconsin-Madison's Pamela Hunter, 95% is the industry-standard confidence level (though small sample sizes may get by with 90%). At the 95% level, for example, an acceptable margin of error for a survey of 500 respondents would be about 3%. That means that if 80% of respondents give a positive response to a question, you can expect the true population figure to fall between roughly 77% and 83% in 95 out of 100 samples (see the sketch below for how this figure is calculated).
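To make the arithmetic concrete, here is a minimal Python sketch of the standard normal-approximation formula for the margin of error of a proportion. The numbers mirror the example above (80% positive answers from 500 respondents at 95% confidence); the function name is invented for the example, and survey tools may apply additional corrections.

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a sample proportion.

    p -- observed sample proportion (0.80 means 80% positive responses)
    n -- number of respondents
    z -- critical value (1.96 for ~95% confidence, 1.645 for ~90%)
    """
    return z * sqrt(p * (1 - p) / n)

moe = margin_of_error(p=0.80, n=500)
print(f"margin of error: ±{moe:.1%}")                         # about ±3.5%
print(f"likely range: {0.80 - moe:.1%} to {0.80 + moe:.1%}")  # close to the 77-83% range cited above
```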

Market research survey campaign example

Let's say you own a market research company, and you want to use a survey to gain critical insights into your market. You prompt users to fill out your survey before they can access gated premium content.

Survey questions: 

1. What size is your business? 

<10 employees

11-50 employees

51-100 employees

101-200 employees

>200 employees

2. What industry type best describes your role?

3. On a scale of 1-4, how important would you say access to market data is?

1 - Not important

2 - Somewhat important

3 - Very important

4 - Critically important

4. On a scale of 1 (least important) to 5 (most important), rank how important these market data access factors are.

Accuracy of data

Attractive presentation of data

Cost of data access

Range of data presentation formats

Timeliness of data

5. True or false: your job relies on access to accurate, up-to-date market data.

Survey findings: 

63% of respondents represent businesses with over 100 employees, while only 8% represent businesses with under 10.

71% of respondents work in sales, marketing, or operations.

80% of respondents consider access to market data to be either very important or critically important.

"Timeliness of data" (38%) and "Accuracy of data" (32%) were most commonly ranked as the most important market data access factor.

86% of respondents claimed that their jobs rely on accessing accurate, up-to-date market data.

Insights and recommendations: Independent analysis of the survey indicates that a large percentage of users work in the sales, marketing, or operations fields of large companies, and these customers value timeliness and accuracy most. These findings can help you position future report offerings more effectively by highlighting key benefits that are important to customers that fit into related customer profiles. 

Market research survey example questions

Your individual questions will vary by your industry, market, and research goals, so don't expect a cut-and-paste survey to suit your needs. To help you get started, here are market research survey example questions to give you a sense of the format.

Yes/No: Have you purchased our product before?

Multiple choice: How many employees work at your company?

<10 / 10-20 / 21-50 / 51-100 / 101-250 / 250+

Checkbox: Which of the following features do you use in our app?

Push notifications / Dashboard / Profile customization / In-app chat

Dropdown: What's your household income? 

$0-$10K / $11-$35K / $36-$60K / $61K+

Ranking: Which social media platforms do you use the most? Rank in order, from most to least.

Facebook / Instagram / Twitter / LinkedIn / Reddit

Ranking scale: On a scale of 1-5, how would you rate our customer service? 

1 / 2 / 3 / 4 / 5

Textbox: How many apps are installed on your phone? Enter a number: 

Market research survey question types

Good survey apps typically offer pre-designed templates as a starting point. But to give you a more visual sense of what these questions might look like, we've put together a document showcasing common market research survey question types.

Screenshot of Zapier's market research survey question format guide

Use automation to put survey results into action

You're going to get a lot of responses back from your survey—why dig through them all manually if you don't have to? Automate your survey to aggregate information for you, so it's that much easier to uncover findings.

Related reading:

Poll vs. survey: What is a survey and what are polls?

The best online survey apps

The best free form builders and survey tools

How to get people to take a survey

This article was originally published in June 2015 by Stephanie Briggs. The most recent update, with contributions from Cecilia Gillen, was in September 2023.


Amanda Pell

Amanda is a writer and content strategist who built her career writing on campaigns for brands like Nature Valley, Disney, and the NFL. When she's not knee-deep in research, you'll likely find her hiking with her dog or with her nose in a good book.



Harvard University Program on Survey Research

Questionnaire Design Tip Sheet

This PSR Tip Sheet provides some basic tips about how to write good survey questions and design a good survey questionnaire.


Child Care and Early Education Research Connections

Survey Research and Questionnaires

Descriptions of key issues in survey research and questionnaire design are highlighted in the following sections. Modes of data collection approaches are described together with their advantages and disadvantages. Descriptions of commonly used sampling designs are provided and the primary sources of survey error are identified. Terms relating to the topics discussed here are defined in the  Research Glossary .

Survey Research


Survey research is a commonly-used method of collecting information about a population of interest. The population may be composed of a group of individuals (e.g., children under age five, kindergarteners, parents of young children) or organizations (e.g., early care and education programs, k-12 public and private schools).

There are many different types of surveys, several ways to administer them, and different methods for selecting the sample of individuals or organizations that will be invited to participate. Some surveys collect information on all members of a population and others collect data on a subset of a population. Examples of the former are the National Center for Education Statistics'  Common Core of Data  and the Administration for Children and Families'  Survey of Early Head Start Programs  (PDF).

A survey may be administered to a sample of individuals (or to the entire population) at a single point in time (cross-sectional survey), or the same survey may be administered to different samples from the population at different time points (repeat cross-sectional). Other surveys may be administered to the same sample of individuals at different time points (longitudinal survey). The Survey of Early Head Start Programs is an example of a cross-sectional survey and the National Household Education Survey Program is an example of a repeat cross-sectional survey. Examples of longitudinal surveys include the  Head Start Family and Child Experiences Survey  and the  Early Childhood Longitudinal Study , Birth and Kindergarten Cohorts.

Regardless of the type of survey, there are two key features of survey research:

  • Questionnaires—a predefined series of questions used to collect information from individuals.
  • Sampling—the process of selecting the individuals or organizations from the population who will be asked to complete the questionnaire.

The American Association for Public Opinion Research (AAPOR) offers recommendations on how to produce the best survey possible:  Best Practices for Survey Research .

AAPOR also provides guidelines on how to assess the quality of a survey:  Evaluating Survey Quality in Today's Complex Environment .

Advantages and Disadvantages of Survey Research

Advantages

  • Surveys are a cost-effective and efficient means of gathering information about a population.
  • Data can be collected from a large number of respondents. In general, the larger the number of respondents (i.e., the larger the sample size), the more accurate the information derived from the survey will be.
  • Sampling using probability methods to select potential survey respondents makes it possible to estimate the characteristics (e.g., socio-demographics, attitudes, behaviors, opinions, skills, preferences and values) of a population without collecting data from all members of the population.
  • Depending on the population and type of information sought, survey questionnaires can be administered in-person or remotely via telephone, mail, online and mobile devices.

Disadvantages

  • Questions asked in surveys tend to be broad in scope.
  • Surveys often do not allow researchers to develop an in-depth understanding of individual circumstances or the local culture that may be the root cause of respondent behavior.
  • Respondents may be reluctant to share sensitive information about themselves and others.
  • Respondents may provide socially desirable responses to the questions asked. That is, they may give answers that they believe the researcher wants to hear or answers that shed the best light on them and others. For example, they may over-report positive behaviors and under-report negative behaviors.
  • A growing problem in survey research is the widespread decline in response rates, or the percentage of those selected to participate who choose to do so.

Questionnaire Design

The two most common types of survey questions are closed-ended questions and open-ended questions.

Closed-Ended Questions

  • The respondents are given a list of predetermined responses from which to choose their answer.
  • The list of responses should include every possible response and the meaning of the responses should not overlap.
  • An example of a close-ended survey question would be, "Please rate how strongly you agree or disagree with the following statement: 'I feel good about my work on the job.' Do you strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, or strongly disagree?"
  • A Likert scale, which is used in the example above, is a commonly used set of responses for closed-ended questions.
  • Closed-ended questions are usually preferred in survey research because of the ease of counting the frequency of each response.

Open-Ended Questions

  • Survey respondents are asked to answer each question in their own words. An example would be, "In the last 12 months, what was the total income of all members of your household from all sources before taxes and other deductions?" Another would be, "Please tell me why you chose that child care provider?"
  • It is worth noting that a question can be either open-ended or close-ended depending on how it is asked. In the previous example, if the question on household income asked respondents to choose from a given set of income ranges instead, it would be considered close-ended.
  • Responses are usually categorized into a smaller list of responses that can be counted for statistical analysis (a toy illustration of this coding step follows this list).
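As a toy illustration of that coding step, the sketch below collapses hypothetical free-text answers to the child care question above into a handful of countable codes using simple keyword matching. Real studies use trained coders and a documented codebook rather than keyword rules; the categories and keywords here are invented for the example.

```python
# Hypothetical coding scheme: map each code to keywords that suggest it.
coding_scheme = {
    "location": ["close", "near", "nearby", "commute"],
    "cost": ["afford", "cheap", "price", "cost"],
    "quality": ["teacher", "curriculum", "safe", "clean"],
}

def code_response(text: str) -> list[str]:
    """Assign one or more codes to an open-ended answer based on keyword matches."""
    lowered = text.lower()
    codes = [code for code, keywords in coding_scheme.items()
             if any(keyword in lowered for keyword in keywords)]
    return codes or ["other"]

answers = [
    "It's close to my office, so drop-off is easy.",
    "The teachers seemed caring and the rooms were clean.",
    "It was the only provider we could afford.",
]
for answer in answers:
    print(code_response(answer), "-", answer)
```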

A well designed questionnaire is more than a collection of questions on one or more topics. When designing a questionnaire, researchers must consider a number of factors that can affect participation and the responses given by survey participants. Some of the things researchers must consider to help ensure high rates of participation and accurate survey responses include:

  • Sensitive questions, such as questions about income, drug use, or sexual activity, should generally be placed near the end of the survey. This allows a level of trust or psychological comfort to be established with the respondent before asking questions that might be embarrassing or more personal.
  • Researchers also recommend putting routine questions, such as age, gender, and marital status, at the end of the questionnaire.
  • Questions that are more central to the research topic or question and that may serve to engage the respondent should be asked early. For example, a survey on children's early development that is administered to parents should ask questions that are specific to their children in the beginning or near the beginning of the survey.
  • Double-barreled questions, which ask two questions in one, should never be used in a survey. An example of a double-barreled question is, "Please rate how strongly you agree or disagree with the following statement: 'I feel good about my work on the job, and I get along well with others at work.'" This question is problematic because survey respondents are asked to give one response for their feelings about two conditions of their job.
  • Researchers should avoid or limit the use of professional jargon or highly specialized terms, especially in surveys of the general population.
  • Question and response option text should use words that are at the appropriate reading level for research participants.
  • The use of complex sentence structures should be avoided.
  • Researchers should avoid using emotionally loaded or biased words and phrases.
  • The length of a questionnaire is always a consideration. There is a tendency to try and ask too many questions and cover too many topics. The questionnaire should be kept to a reasonable length and only include questions that are central to the research question(s). The length should be appropriate to the mode of administration. For example, in general, online surveys are shorter than surveys administered in-person.

Questionnaires and the procedures that will be used to administer them should be pretested (or field tested) before they are used in a main study. The goal of the pretest is to identify any problems with how questions are asked, whether they are understood by individuals similar to those who will participate in the main study, and whether response options in close-ended questions are adequate. For example, a parent questionnaire that will be used in a large study of preschool-age children may be administered first to a small (often non-random) sample of parents in order to identify any problems with how questions are asked and understood and whether the response options that are offered to parents are adequate.

Based on the findings of the pretest, additions or modifications to questionnaire items and administration procedures are made prior to their use in the main study.

See the following for more information about questionnaire design:

  • A Brief Guide to Questionnaire Development  (PDF)
  • Survey Design

Modes of Survey Administration

Surveys can be administered in four ways: through the mail, by telephone, in-person or online. When deciding which of these approaches to use, researchers consider: the cost of contacting the study participant and of data collection, the literacy level of participants, response rate requirements, respondent burden and convenience, the complexity of the information that is being sought and the mix of open-ended and close-ended questions.

Some of the main advantages and disadvantages of the different modes of administration are summarized below.

Mail Surveys

  • Advantages: Low cost; respondents may be more willing to share information and to answer sensitive questions; respondent convenience, can respond on their own schedule
  • Disadvantages: Generally lower response rates; only reaches potential respondents who are associated with a known address; not appropriate for low literacy audiences; no interviewer, so responses cannot be probed for more detail or clarification; participants' specific concerns and questions about the survey and its purpose cannot be addressed

Telephone Surveys

  • Advantages: Higher response rates; responses can be gathered more quickly; responses can be probed; participants' concerns and questions can be addressed immediately
  • Disadvantages: More expensive than mail surveys; depending on how telephone numbers are identified, some groups of potential respondents may not be reached; use of open-ended questions is limited given limits on survey length

In-Person Surveys

  • Advantages: Highest response rates; better suited to collecting complex information; more opportunities to use open-ended questions and to probe respondent answers; interviewer can immediately address any concerns participant has about the survey and answer their questions
  • Disadvantages: Very expensive; time-consuming; respondents may be reluctant to share personal or sensitive information when face-to-face with an interviewer

Online Surveys

  • Advantages: Very low cost; responses can be gathered quickly; respondents may be more willing to share information and to answer sensitive questions; questionnaires are programmed, which allows for more complex surveys that follow skip patterns based on previous responses; respondent convenience, can respond on their own schedule
  • Disadvantages: Potentially lower response rates; limited use of open-ended questions; not possible to probe respondents' answers or to address their concerns about participation

Increasingly, researchers are using a mix of these methods of administration. Mixed-mode or multi-mode surveys use two or more data collection modes in order to increase survey response. Participants are given the option of choosing the mode that they prefer, rather than this being dictated by the research team. For example, the  Head Start Family and Child Experience Survey (2014-2015)  offers teachers the option of completing the study's teacher survey online or using a paper questionnaire. Parents can complete the parent survey online or by phone.

See the following for additional information about survey administration:

  • Four Survey Methodologies: A Comparison of Pros and Cons  (PDF)
  • Collecting Survey Data
  • Improving Response to Web and Mixed-Mode Surveys

Sampling

In child care and early education research as well as research in other areas, it is often not feasible to survey all members of the population of interest. Therefore, a sample of the members of the population would be selected to represent the total population.

A primary strength of sampling is that estimates of a population's characteristics can be obtained by surveying a small proportion of the population. For example, it would not be feasible to interview all parents of preschool-age children in the U.S. in order to obtain information about their choices of child care and the reasons why they chose certain types of care as opposed to others. Thus, a sample of preschoolers' parents would be selected and interviewed, and the data they provide would be used to estimate the types of child care parents as a whole choose and their reasons for choosing these programs. There are two broad types of sampling:

  • Nonprobability sampling : The selection of participants from a population is not determined by chance. Each member of the population does not have a known or given chance of being selected into the sample. Findings from nonprobability (nonrandom) samples cannot be generalized to the population of interest. Consequently, it is problematic to make inferences about the population. Common nonprobability sampling techniques include convenience sampling, snowball sampling, quota sampling and purposive sampling.

  • Probability sampling : The selection of participants from the population is determined by chance, with each individual having a known, non-zero probability of selection. It provides accurate descriptions of the population and therefore good generalizability. In survey research, it is the preferred sampling method.

Three forms of probability sampling are described here:

Simple Random Sampling

This is the most basic form of sampling. Every member of the population has an equal chance of being selected. This sampling process is similar to a lottery: the entire population of interest could be selected for the survey, but only a few are chosen at random. For example, researchers may use random-digit dialing to perform simple random sampling for telephone surveys. In this procedure, telephone numbers are generated by a computer at random and called to identify individuals to participate in the survey.

Stratified Sampling

Stratified sampling is used when researchers want to ensure representation across groups, or strata, in the population. The researchers will first divide the population into groups based on characteristics such as race/ethnicity, and then draw a random sample from each group. The groups must be mutually exclusive and cover the population. Stratified sampling provides greater precision than a simple random sample of the same size.
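As a rough sketch of the idea (not a production sampling routine), the Python snippet below draws the same sampling fraction from each stratum of a hypothetical population list. The race_ethnicity field, the group labels, and the 10% fraction are invented for the example.

```python
import random
from collections import defaultdict

random.seed(42)  # reproducible example

def stratified_sample(population, strata_key, fraction):
    """Draw the same sampling fraction from every stratum (group) in the population."""
    strata = defaultdict(list)
    for member in population:
        strata[member[strata_key]].append(member)

    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))   # at least one member per stratum
        sample.extend(random.sample(group, k))
    return sample

# Hypothetical population of 1,000 people with a race/ethnicity field.
population = [{"id": i, "race_ethnicity": random.choice(["Group A", "Group B", "Group C"])}
              for i in range(1000)]

sample = stratified_sample(population, "race_ethnicity", fraction=0.10)
print(f"sampled {len(sample)} of {len(population)} people across "
      f"{len({m['race_ethnicity'] for m in sample})} strata")
```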

Cluster Sampling

Cluster sampling is generally used to control costs and when it is geographically impossible to undertake a simple random sample. For example, in a household survey with face-to-face interviews, it is difficult and expensive to survey households across the nation using a simple random sample design. Instead, researchers will randomly select geographic areas (for example, counties), then randomly select households within these areas. This creates a cluster sample, in which respondents are clustered together geographically.

Survey research studies often use a combination of these probability methods to select their samples.  Multistage sampling  is a probability sampling technique where sampling is carried out in several stages. It is often used to select samples when a single frame is not available to select members for a study sample. For example, there is no single list of all children enrolled in public school kindergartens across the U.S. Therefore, researchers who need a sample of kindergarten children will first select a sample of schools with kindergarten programs from a school frame (e.g., National Center for Education Statistics' Common Core of Data) (Stage 1). Lists of all kindergarten classrooms in selected schools are developed and a sample of classrooms selected in each of the sampled schools (Stage 2). Finally, lists of children in the sampled classrooms are compiled and a sample of children is selected from each of the classroom lists (Stage 3). Many of the national surveys of child care and early education (e.g., the Head Start Family and Child Experiences Survey and the Early Childhood Longitudinal Survey-Kindergarten Cohort) use a multistage approach.

Multistage, cluster and stratified sampling require that certain adjustments be made during the statistical analysis. Sampling or analysis weights are often used to account for differences in the probability of selection into the sample as well as for other factors (e.g., sampling frame, undercoverage, and nonresponse). Standard errors are calculated using methodologies that are different from those used for a simple random sample. Information on these adjustments is provided by the National Center for Education Statistics through its  Distance Learning Dataset Training System .
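As a minimal illustration of a design weight, assume two strata where one was deliberately oversampled. Each respondent's base weight is the inverse of their selection probability, and weighted estimates use those weights; the numbers below are invented, and real surveys adjust the weights further for nonresponse and coverage, as noted above.

```python
# Hypothetical respondents: (stratum, probability of selection, survey measure).
respondents = [
    ("A", 0.10, 4.0), ("A", 0.10, 5.0), ("A", 0.10, 3.0),   # stratum A was oversampled
    ("B", 0.02, 2.0), ("B", 0.02, 1.0),                     # stratum B was undersampled
]

# Base design weight = 1 / probability of selection.
total_weight = sum(1 / prob for _, prob, _ in respondents)
weighted_mean = sum(value / prob for _, prob, value in respondents) / total_weight
unweighted_mean = sum(value for _, _, value in respondents) / len(respondents)

print(f"unweighted mean: {unweighted_mean:.2f}")   # treats every respondent equally
print(f"weighted mean:   {weighted_mean:.2f}")     # restores each stratum's share of the population
```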

See the following for additional information about the different types of sampling approaches and their use:

  • National Center for Education Statistics Distance Learning Dataset Training System: Analyzing NCES Complex Survey Data
  • Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards
  • Nonprobability Sampling
  • The Future of Survey Sampling
  • Sampling Methods (StatPac)

Sources of Error

Estimates of the characteristics of a population using survey data are subject to two basic sources of error: sampling error and nonsampling error. The extent to which estimates of the population mean, proportion and other population values differ from the true values is affected by these errors.

  • Sampling error is the error that occurs because all members of the population are not sampled and measured. The value of a statistic (e.g., mean or percentage) calculated from different samples drawn from the same population will not always be the same. For example, if several different samples of 5,000 people are drawn at random from the U.S. population, the average income of the 5,000 people in those samples will differ. (In one sample, Bill Gates may have been selected at random, which would lead to a very high mean income for that sample.) Researchers use a statistic called the standard error to measure the extent to which estimated statistics (percentages, means, and coefficients) vary from what would be found in other samples. The smaller the standard error, the more precise the estimates from the sample. Generally, standard errors and sample size are negatively related; that is, larger samples have smaller standard errors. (The short simulation following this list illustrates how the standard error shrinks as the sample size grows.)

  • Nonsampling error includes all errors that can affect the accuracy of research findings other than errors associated with selecting the sample (sampling error). They can occur in any phase of a research study (planning and design, data collection, or data processing). They include errors that occur due to coverage error (when units in the target population are missing from the sampling frame), nonresponse to surveys (nonresponse error), measurement errors due to interviewer or respondent behavior, errors introduced by how survey questions were worded or by how data were collected (e.g., in-person interview, online survey), and processing error (e.g., errors made during data entry or when coding open-ended survey responses). While sampling error is limited to sample surveys, nonsampling error can occur in all surveys.
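Returning to the sampling error point above, the short simulation below draws repeated samples of different sizes from a synthetic, skewed "income" population and measures how much the sample mean varies from draw to draw. The population and numbers are invented; the pattern to notice is that the spread of the sample means (the empirical standard error) shrinks as the sample size grows.

```python
import random
import statistics

random.seed(0)

# Synthetic, skewed "income" population: most values modest, a few very large.
population = [random.lognormvariate(10, 1) for _ in range(100_000)]

def standard_error_of_mean(sample_size, n_samples=200):
    """Draw repeated samples and measure how much the sample mean varies across them."""
    means = [statistics.mean(random.sample(population, sample_size)) for _ in range(n_samples)]
    return statistics.stdev(means)

for n in (50, 500, 5000):
    print(f"n={n:>5}: standard error of the mean ≈ {standard_error_of_mean(n):,.0f}")
```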

Measurement Error

Measurement error is the difference between the value measured in a survey or on a test and the true value in the population. Some factors that contribute to measurement error include the environment in which a survey or test is administered (e.g., administering a math test in a noisy classroom could lead children to do poorly even though they understand the material), poor measurement tools (e.g., using a tape measure that is only marked in feet to measure children's height would lead to inaccurate measurement), and rater or interviewer effects (e.g., survey staff who deviate from the research protocol).

Measurement error falls into two broad categories: systematic error and random error. Systematic error is the more serious of the two.

Systematic error occurs when the survey responses are systematically different from the target population's responses. It is caused by factors that systematically affect the measurement of a variable across the sample.

For example, if a researcher only surveyed individuals who answered their phone between 9 and 5, Monday through Friday, the survey results would be biased toward individuals who are available to answer the phone during those hours (e.g., individuals who are not in the labor force or who work outside of the traditional Monday through Friday, 9 am to 5 pm schedule).

  • Nonobservational error -- Error introduced when individuals in the target population are systematically excluded from the sample, such as in the example above.
  • Observational error -- Error introduced when respondents systematically answer survey questions incorrectly. For example, surveys that ask respondents how much they weigh may underestimate the population's weight because some respondents are likely to report their weight as less than it actually is.
  • Systematic errors tend to have an effect on responses and scores that is consistently in one direction (positive or negative). As a result, they contribute to bias in estimates.

Random error is an expected part of survey research, and statistical techniques are designed to account for this sort of measurement error. It is caused by factors that randomly affect measurement of the variable across the sample.

Random error occurs because of natural and uncontrollable variations in the survey process, i.e., the mood of the respondent, lack of precision in measures used, and the particular measures/instruments (e.g., inaccuracy in scales used to measure children's weight).

For example, a researcher may administer a survey about marital happiness. However, some respondents may have had a fight with their spouse the evening prior to the survey, while other respondents' spouses may have cooked the respondent's favorite meal. The survey responses will be affected by the random day on which the respondents were chosen to participate in the study. With random error, the positive and negative influences on the survey measures are expected to balance out.

  • Unlike systematic errors, random errors do not have a consistent positive or negative effect on measurement. Instead, across the sample the effects are both positive and negative. Such errors are often considered noise and add variability, though not bias, to the data.

See the following for additional information about the different types and sources of errors:

  • Nonresponse Error, Measurement Error and Mode of Data Collection
  • Total Survey Error: Design, Implementation, and Evaluation
  • Data Accuracy


Questionnaire Design | Methods, Question Types & Examples

Published on 6 May 2022 by Pritha Bhandari . Revised on 10 October 2022.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs surveys
  • Questionnaire methods
  • Open-ended vs closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleaning and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalise your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimising these will help you avoid sampling bias .


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • Cost-effective
  • Easy to administer for small and large groups
  • Anonymous and suitable for sensitive topics

But they may also be:

  • Unsuitable for people with limited literacy or verbal skills
  • Susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • Biased towards people who volunteer because impersonal survey requests often go ignored

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • Help you ensure the respondents are representative of your target audience
  • Allow clarifications of ambiguous or unclear questions and answers
  • Have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • Costly and time-consuming to perform
  • More difficult to analyse if you have qualitative responses
  • Likely to contain experimenter bias or demand characteristics
  • Likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions, or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalisable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

Example response options: White / Black or African American / American Indian or Alaska Native / Asian / Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert-type questions collect ordinal data using rating scales with five or seven points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.

With interval or ratio data, you can apply strong statistical hypothesis tests to address your research aims.
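As a small sketch of this, the snippet below combines four hypothetical 5-point Likert items (1 = strongly disagree, 5 = strongly agree) into a composite score per respondent, which can then be summarised and tested as interval data. The items and responses are invented for the example.

```python
import statistics

# Hypothetical data: each row is one respondent's answers to four Likert items
# from the same scale (1 = strongly disagree ... 5 = strongly agree).
responses = [
    [4, 5, 4, 3],
    [2, 2, 3, 1],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
]

# Composite score per respondent: the mean of their item scores.
composites = [statistics.mean(items) for items in responses]

print("composite scores:", composites)
print(f"sample mean = {statistics.mean(composites):.2f}, "
      f"sd = {statistics.stdev(composites):.2f}")
```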

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer ‘multiracial’ for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle to productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarising responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorise answers, and you may also need to involve other researchers in data analysis for high reliability .

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favour flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barrelled questions. Double-barrelled questions ask about more than one item at a time, which can confuse respondents.

Example of a double-barrelled question: "Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?" This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead – but neither of these options captures their true answer.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Strongly agree / Agree / Undecided / Disagree / Strongly disagree

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

You can organise the questions logically, with a clear progression from simple to complex. Alternatively, you can randomise the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioural or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimise order effects because they can be a source of systematic error or bias in your study.

Randomisation

Randomisation involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomisation, order effects will be minimised in your dataset. But a randomised order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
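A minimal sketch of per-respondent randomisation is shown below, assuming a hypothetical list of questions; most survey platforms implement this for you. Seeding the shuffle with the respondent ID keeps each person's order reproducible while still varying the order across respondents.

```python
import random

questions = [
    "Q1: How satisfied are you with your commute?",
    "Q2: How many days per week do you work from home?",
    "Q3: How likely are you to recommend your employer to a friend?",
    "Q4: How satisfied are you with your current workload?",
]

def questionnaire_for(respondent_id: str) -> list[str]:
    """Return the same questions in a different, reproducible order for each respondent."""
    rng = random.Random(respondent_id)   # seed with the respondent ID for reproducibility
    shuffled = questions.copy()          # copy so the master list is left unchanged
    rng.shuffle(shuffled)
    return shuffled

print(questionnaire_for("respondent-001"))
print(questionnaire_for("respondent-002"))
```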

Follow this step-by-step guide to design your questionnaire.

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalise your variables of interest into questionnaire items. Operationalising concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivised or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomise questions. Randomising questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection , and analysis.

You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.


Survey Template

Experience management often requires surveys. As a Qualtrics customer or free account user, you have access to our complete collection of pre-made customer, product, employee, and brand experience survey templates.


  • Net Promoter Score (NPS) Survey
  • Logo Testing Survey
  • Employee Satisfaction Survey
  • Student Satisfaction Survey
  • Manager Feedback Survey
  • Employee Engagement Survey
  • Quick Poll Template
  • Customer Satisfaction Survey
  • Employee Exit Interview Survey
  • Product Research Survey
  • Customer Service Survey

Find a survey template that’s right for you

A free Qualtrics account gives you access to more than 50 free survey templates anytime you need inspiration or guidance.

Surveys are a powerful way to gather feedback and insights, whatever your role. If you come across a question that you can’t answer, save time on questionnaire design and get started with an expert-designed survey template to get the information you’ll need to make the right decision.

Our survey templates are designed by our specialist team of subject matter experts and researchers, so you can be sure that our best practice question choices and clear designs will get more engagement and better quality data.

We also know that if you have the right tool, you’ll see the benefits quickly. That’s why every free account uses the same sophisticated survey software that’s used by more than 18,000 brands around the world.

It’s easy to use and makes reporting a breeze with our powerful analytics and easy-to-use dashboards built-in. Plus, as a Qualtrics customer using one of our advanced products you’ll have access to even more sophisticated analysis like Stats iQ, and more research-focused question types and logic - all that pair perfectly with expert-built survey templates.

What options do I have?

In the pool of free survey templates, you can choose survey options that help you with your:

  • Customer experience (CX) - Customer satisfaction, Net Promoter Score, and event feedback surveys that let you hear what your customers think of what you’re doing and what you should do next.
  • Brand experience (BX) - Surveys such as a brand awareness or logo testing survey can help you see how effective and memorable your brand marketing, vision, and values are to your target market.
  • Employee experience (EX) - Your business can flourish with the help of an employee engagement survey or an employee onboarding survey that helps you listen in a quick and easy way.
  • Product experience (PX) - Give your products and services an edge over your competition by running feature prioritization or pricing surveys to understand your product’s potential at market.

Each survey template is ready to use straight away with no fuss, or you can customize the survey format by simply tweaking it from your account. Alternatively, you have the option to create your own survey layout and design from scratch, choosing from an unbeatable range of options and question choices.

Every team — management, marketing, HR, branding, recruitment, product development, research and more — will find a free survey they’ll love, helping them save time and avoid stressful days ahead.

Whether you’re starting out, branching out or just curious, we’re pleased that you’re here. Get your free account in a few minutes and then take your time to explore the different survey templates available to you here.


Hands-on guide to questionnaire research

Selecting, designing, and developing your questionnaire

Petra M Boynton

1 Department of Primary Care and Population Sciences, University College London, Archway Campus, London N19 5LW

Trisha Greenhalgh


Anybody can write down a list of questions and photocopy it, but producing worthwhile and generalisable data from questionnaires needs careful planning and imaginative design

The great popularity with questionnaires is they provide a “quick fix” for research methodology. No single method has been so abused. 1

Questionnaires offer an objective means of collecting information about people's knowledge, beliefs, attitudes, and behaviour. 2 , 3 Do our patients like our opening hours? What do teenagers think of a local antidrugs campaign and has it changed their attitudes? Why don't doctors use computers to their maximum potential? Questionnaires can be used as the sole research instrument (such as in a cross sectional survey) or within clinical trials or epidemiological studies.

Randomised trials are subject to strict reporting criteria, 4 but there is no comparable framework for questionnaire research. Hence, despite a wealth of detailed guidance in the specialist literature, 1 - 3 , 5 w1-w8 elementary methodological errors are common. 1 Inappropriate instruments and lack of rigour inevitably lead to poor quality data, misleading conclusions, and woolly recommendations. w8 In this series we aim to present a practical guide that will enable research teams to do questionnaire research that is well designed, well managed, and non-discriminatory and which contributes to a generalisable evidence base. We start with selecting and designing the questionnaire.


What information are you trying to collect?

You and your co-researchers may have different assumptions about precisely what information you would like your study to generate. A formal scoping exercise will ensure that you clarify goals and if necessary reach an agreed compromise. It will also flag up potential practical problems—for example, how long the questionnaire will be and how it might be administered.

As a rule of thumb, if you are not familiar enough with the research area or with a particular population subgroup to predict the range of possible responses, and especially if such details are not available in the literature, you should first use a qualitative approach (such as focus groups) to explore the territory and map key areas for further study. 6

Is a questionnaire appropriate?

People often decide to use a questionnaire for research questions that need a different method. Sometimes, a questionnaire will be appropriate only if used within a mixed methodology study—for example, to extend and quantify the findings of an initial exploratory phase. Table A on bmj.com gives some real examples where questionnaires were used inappropriately. 1

Box 1: Pitfalls of designing your own questionnaire

Natasha, a practice nurse, learns that staff at a local police station have a high incidence of health problems, which she believes are related to stress at work. She wants to test the relation between stress and health in these staff to inform the design of advice services. Natasha designs her own questionnaire. Had she completed a thorough literature search for validated measures, she would have found several high quality questionnaires that measure stress in public sector workers. 8 Natasha's hard work produces only a second rate study that she is unable to get published.

Research participants must be able to give meaningful answers (with help from a professional interviewer if necessary). Particular physical, mental, social, and linguistic needs are covered in the third article of this series. 7

Could you use an existing instrument?

Using a previously validated and published questionnaire will save you time and resources; you will be able to compare your own findings with those from other studies, you need only give outline details of the instrument when you write up your work, and you may find it easier to get published (box 1).

Increasingly, health services research uses standard questionnaires designed for producing data that can be compared across studies. For example, clinical trials routinely include measures of patients' knowledge about a disease, 9 satisfaction with services, 10 or health related quality of life. 11 - 13 w3 w9 The validity (see below) of this approach depends on whether the type and range of closed responses reflects the full range of perceptions and feelings that people in all the different potential sampling frames might hold. Importantly, health status and quality of life instruments lose their validity when used beyond the context in which they were developed. 12 , 14 , 15 w3 w10-12

If there is no “off the peg” questionnaire available, you will have to construct your own. Using one or more standard instruments alongside a short bespoke questionnaire could save you the need to develop and validate a long list of new items.

Is the questionnaire valid and reliable?

A valid questionnaire measures what it claims to measure. In reality, many fail to do this. For example, a self completion questionnaire that seeks to measure people's food intake may be invalid because it measures what they say they have eaten, not what they have actually eaten. 16 Similarly, responses on questionnaires that ask general practitioners how they manage particular clinical conditions differ significantly from actual clinical practice. w13 An instrument developed in a different time, country, or cultural context may not be a valid measure in the group you are studying. For example, the item “I often attend gay parties” may have been a valid measure of a person's sociability level in the 1950s, but the wording has a very different connotation today.

Reliable questionnaires yield consistent results from repeated samples and different researchers over time. Differences in results come from differences between participants, not from inconsistencies in how the items are understood or how different observers interpret the responses. A standardised questionnaire is one that is written and administered so that all participants are asked precisely the same questions in an identical format, with responses recorded in a uniform manner. Standardising a measure increases its reliability.

Just because a questionnaire has been piloted on a few of your colleagues, used in previous studies, or published in a peer reviewed journal does not mean it is either valid or reliable. The detailed techniques for achieving validity, reliability, and standardisation are beyond the scope of this series. If you plan to develop or modify a questionnaire yourself, you must consult a specialist text on these issues. 2 , 3

How should you present your questions?

Questionnaire items may be open or closed ended and be presented in various formats ( figure ). Table B on bmj.com examines the pros and cons of the two approaches. Two words that are often used inappropriately in closed question stems are “frequently” and “regularly.” A poorly designed item might read, “I frequently engage in exercise,” and offer a Likert scale giving responses from “strongly agree” through to “strongly disagree.” But “frequently” implies frequency, so a frequency based rating scale (with options such as at least once a day, twice a week, and so on) would be more appropriate. “Regularly,” on the other hand, implies a pattern: one person can regularly engage in exercise once a month whereas another can regularly do so four times a week. Other weasel words to avoid in question stems include commonly, usually, many, some, and hardly ever. 17 w14

Figure: Examples of formats for presenting questionnaire items

Box 2: A closed ended design that produced misleading information

Customer: I'd like to discontinue my mobile phone rental please.

Company employee: That's fine, sir, but I need to complete a form for our records on why you've made that decision. Is it (a) you have moved to another network; (b) you've upgraded within our network; or (c) you can't afford the payments?

Customer: It isn't any of those. I've just decided I don't want to own a mobile phone any more. It's more hassle than it's worth.

Company employee: [after a pause] In that case, sir, I'll have to put you down as “can't afford the payments.”

Closed ended designs enable researchers to produce aggregated data quickly, but the range of possible answers is set by the researchers not respondents, and the richness of potential responses is lower. Closed ended items often cause frustration, usually because researchers have not considered all potential responses (box 2). 18

Ticking a particular box, or even saying yes, no, or maybe can make respondents want to explain their answer, and such free text annotations may add richly to the quantitative data. You should consider inserting a free text box at the end of the questionnaire (or even after particular items or sections). Note that participants need instructions (perhaps with examples) on how to complete free text items in the same way as they do for closed questions.

If you plan to use open ended questions or invite free text comments, you must plan in advance how you will analyse these data (drawing on the skills of a qualitative researcher if necessary). 19 You must also build into the study design adequate time, skills, and resources for this analysis; otherwise you will waste participants' and researchers' time. If you do not have the time or expertise to analyse free text responses, do not invite any.

Some respondents (known as yea sayers) tend to agree with statements rather than disagree. For this reason, do not present your items so that strongly agree always links to the same broad attitude. For example, on a patient satisfaction scale, if one question is “my GP generally tries to help me out,” another question should be phrased in the negative, such as “the receptionists are usually impolite.”

Apart from questions, what else should you include?

A common error by people designing questionnaires for the first time is simply to hand out a list of the questions they want answered. Table C on bmj.com gives a checklist of other things to consider. It is particularly important to provide an introductory letter or information sheet for participants to take away after completing the questionnaire.

What should the questionnaire look like?

Researchers rarely spend sufficient time on the physical layout of their questionnaire, believing that the science lies in the content of the questions and not in such details as the font size or colour. Yet empirical studies have repeatedly shown that low response rates are often due to participants being unable to read or follow the questionnaire (box 3). 3 w6 In general, questions should be short and to the point (around 12 words or less), but for issues of a sensitive and personal nature, short questions can be perceived as abrupt and threatening, and longer sentences are preferred. w6

How should you select your sample?

Different sampling techniques will affect the questions you ask and how you administer your questionnaire (see table D on bmj.com ). For more detailed advice on sampling, see Bowling 20 and Sapsford. 3

If you are collecting quantitative data with a view to testing a hypothesis or assessing the prevalence of a disease or problem (for example, about intergroup differences in particular attitudes or health status), seek statistical advice on the minimum sample size. 3
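For a quick sense of scale before seeking that advice, the standard formula for estimating a single proportion, n = z²p(1−p)/e², is often used. The sketch below is an illustration only: it ignores design effects, non-response, and subgroup comparisons, all of which usually increase the required sample.

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Minimum n to estimate a prevalence p within +/- margin at ~95% confidence.

    Uses the textbook formula n = z^2 * p * (1 - p) / margin^2, with p = 0.5 as
    the most conservative assumption when the true prevalence is unknown.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size_for_proportion())              # 385 respondents for +/- 5%
print(sample_size_for_proportion(margin=0.03))   # 1068 respondents for +/- 3%
```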

What approvals do you need before you start?

Unlike other methods, questionnaires require relatively little specialist equipment or materials, which means that inexperienced and unsupported researchers sometimes embark on questionnaire surveys without completing the necessary formalities. In the United Kingdom, a research study on NHS patients or staff must be:

  • Formally approved by the relevant person in an organisation that is registered with the Department of Health as a research sponsor (typically, a research trust, university or college) 21 ;
  • Consistent with data protection law and logged on the organisation's data protection files (see next article in series) 19
  • Accordant with research governance frameworks 21
  • Approved by the appropriate research ethics committee (see below).

Box 3: Don't let layout let you down

Meena, a general practice tutor, wanted to study her fellow general practitioners' attitudes to a new training scheme in her primary care trust. She constructed a series of questions, but when they were written down, they covered 10 pages, which Meena thought looked off-putting. She reduced the font and spacing of her questionnaire, and printed it double-sided, until it was only four sides in length. But many of her colleagues refused to complete it, telling her they found it too hard to read and work through. She returned the questionnaire to its original 10-page format, which made it easier and quicker to complete, and her response rate increased greatly.

Summary points

Questionnaire studies often fail to produce high quality generalisable data

When possible, use previously validated questionnaires

Questions must be phrased appropriately for the target audience and information required

Good explanations and design will improve response rates

In addition, if your questionnaire study is part of a formal academic course (for example, a dissertation), you must follow any additional regulations such as gaining written approval from your supervisor.

A study is unethical if it is scientifically unsound, causes undue offence or trauma, breaches confidentiality, or wastes people's time or money. Written approval from a local or multicentre NHS research ethics committee (more information at www.corec.org.uk ) is essential but does not in itself make a study ethical. Those working in non-NHS institutions or undertaking research outside the NHS may need to submit an additional (non-NHS) ethical committee application to their own institution or research sponsor.

The committee will require details of the study design, copies of your questionnaire, and any accompanying information or covering letters. If the questionnaire is likely to cause distress, you should include a clear plan for providing support to both participants and researchers. Remember that just because you do not find a question offensive or distressing does not mean it will not upset others. 6

As we have shown above, designing a questionnaire study that produces usable data is not as easy as it might seem. Awareness of the pitfalls is essential both when planning research and appraising published studies. Table E on bmj.com gives a critical appraisal checklist for evaluating questionnaire studies. In the following two articles we will discuss how to select a sample, pilot and administer a questionnaire, and analyse data and approaches for groups that are hard to research.

Supplementary Material

This is the first in a series of three articles on questionnaire research

Susan Catt supplied additional references and feedback. We also thank Alicia O'Cathain, Jill Russell, Geoff Wong, Marcia Rigby, Sara Shaw, Fraser MacFarlane, and Will Callaghan for feedback on earlier versions. Numerous research students and conference delegates provided methodological questions and case examples of real life questionnaire research, which provided the inspiration and raw material for this series. We also thank the hundreds of research participants who over the years have contributed data and given feedback to our students and ourselves about the design, layout, and accessibility of instruments.

Contributors and sources: PMB and TG have taught research methods in a primary care setting for the past 13 years, specialising in practical approaches and using the experiences and concerns of researchers and participants as the basis of learning. This series of papers arose directly from questions asked about real questionnaire studies. To address these questions we explored a wide range of sources from the psychological and health services research literature.

Competing interests: None declared.


Original research article

Teachers’ Perceptions of the Barriers to STEM Teaching in Qatar’s Secondary Schools: A Structural Equation Modeling Analysis


  • 1 Educational Research Center, College of Education, Qatar University, Doha, Qatar
  • 2 Qatar University Young Scientists Center (QUYSC), Qatar University, Doha, Qatar

Introduction: Educators play a pivotal role in shaping students’ academic achievements, particularly in STEM (science, technology, engineering, and mathematics) fields. The instructional techniques employed by teachers significantly impact students’ decisions to pursue or persist in STEM disciplines. This research aims to explore the challenges faced by high school STEM teachers in Qatar in delivering effective STEM instruction.

Methods: Data was collected through a survey administered to 290 high school STEM teachers across thirty-nine schools in Qatar. The survey targeted teachers in the 11th and 12th grades. Structural Equation Modeling (SEM) was utilized to analyze the data and examine teachers’ perceived barriers to effective STEM instruction.

Results: The findings revealed various barriers hindering STEM instruction. These barriers were categorized into school-related, student-related, technology-related, and teaching-related factors. All the hypothesized teaching barriers [i.e., (student-related: β = –0.243, p < 0.001); (school-related: β = –0.122, p < 0.001), (technology-related: β = –0.123, p = 0.040); and (instruction-related: β = –0.112, p < 0.018)] were negatively related to teachers’ STEM teaching. Among the various obstacles, it appears that the most formidable challenges for high school STEM teachers are related to students (β = –0.243, p < 0.001).

Discussion: Understanding these barriers is crucial for informing educational policies and developing strategies to enhance STEM learning in Qatar’s high schools. Addressing these barriers is essential to provide adequate resources, professional development opportunities, and support systems. By addressing these challenges, Qatar can foster a conducive environment for effective STEM instruction, thereby nurturing a future generation of STEM professionals.

1 Introduction

Science, technology, engineering, and mathematics (STEM) education has garnered increased attention in the past decade, prompting calls for a heightened emphasis particularly on the quality of STEM teaching ( Btool and Koc, 2017 ). The STEM education approach advocates for a novel teaching and learning methodology, emphasizing hands-on inquiry and open-ended exploration ( Waters and Orange, 2022 ). This approach facilitates the development of 21st-century quintessential skills, such as problem-solving, creative thinking, collaborative teamwork, and technology literacy, catering to students with diverse interests, abilities, and experiences ( Ichsan et al., 2023 ). In light of the many global challenges and potential threats, the knowledge/skills pertaining to STEM are crucial for comprehending and addressing these pressing issues. This underscores the significance of STEM as a driver of prosperity and sustainable development for present and future generations ( AlMuraie et al., 2021 ).

In this regard, teachers are key figures in driving STEM initiatives globally, with a particular emphasis on those instructing science subjects ( Oliveros Ruiz et al., 2014 ). Numerous studies underscore the significance of science education across various academic levels ( Kola, 2013 ; Oliveros Ruiz et al., 2014 ). Researchers contend that the primary objective of science education is to equip individuals with the skills required to become scientists and technologists, crucial for advancing research and innovation ( Ichsan et al., 2023 ). This preparation serves as the cornerstone for the economic prosperity and well-being of emerging economies and contributes to the overall development of nations.

In the unique context of Qatar, the past few years have witnessed concerted efforts to shift from an economy that is reliant on gas and oil resource wealth to one centered on knowledge and innovation, as outlined in the Qatar National Vision 2030 ( Tan et al., 2014 ). Underlying this transformation is an earnest and compelling call for action to cultivate national expertise ( Ben Hassen, 2021 ). Indeed, there is a pressing demand for professionals in STEM fields in Qatar, a concern voiced repeatedly by educators, government officials, and industry stakeholders ( Cherif et al., 2016 , 2021 ). Despite the increasing demand for STEM professionals in Qatar, the number of Qatari citizens possessing the education and training necessary to support the vital industries of their country’s economy remains alarmingly low. This disconnect between education and the job market in Qatar has led to a significant proportion of unskilled and semi-skilled citizens being employed in the public sector ( Babar et al., 2019 ). Consequently, the private sector has had to rely on foreign workers to bridge the gap in STEM professions. With a scarcity of young individuals pursuing STEM careers, Qatar’s dependence on expatriate labor in these fields is set to persist.

Adding to the challenges of a foreign-dominated labor force in Qatar is the fact that many highly educated Qatari citizens hold degrees in non-STEM disciplines. Furthermore, there is clear evidence that a significant number of Qataris, particularly males, do not aspire to pursue higher education ( Sellami et al., 2017 ), which has serious implications for efforts to develop a sustainable local STEM workforce ( Al-Misnad, 2012 ). Interestingly, there is a dearth of documented research exploring these issues related to the shortage of skilled professionals in Qatar and the broader Gulf Cooperation Council (GCC) region ( Al-Misnad, 2012 ; Sellami et al., 2017 ; Babar et al., 2019 ). Despite notable progress in terms of equitable access to formal education, enrollment rates, and literacy rates in Qatar, critics argue that the country’s education system still falls short in producing highly skilled graduates who can contribute effectively to the nation’s development and prosperity ( Ben Hassen, 2021 ). This dependence on highly skilled foreign professionals further compounds the issue. To enhance the capabilities of its skilled workforce, Qatar must make concerted efforts to increase the enrollment of both men and women in disciplines aligned with the knowledge economy, on par with developing nations.

In light of the preceding background information, STEM teaching is pivotal to Qatar’s economic prosperity. While the country’s national development strategy underscores the importance of STEM education for progress and development, the practical implementation of STEM teaching faces numerous challenges, especially in developing countries such as Qatar and the larger GCC region ( Cherif et al., 2016 ). Accordingly, this study aims to explore teachers’ perceptions of salient barriers to STEM teaching in Qatar. The uniqueness of this present study lies in providing research-based insights into these obstacles from an Arab Middle Eastern perspective.

This paper is structured as follows. The section below offers a review of the relevant literature that has addressed the main challenges that impede STEM teaching. This is followed by a statement of the theoretical framework guiding this study as well as a description of the problem statement and the research questions. The next section details the research methods employed in our study, including a description of participants, instruments, and data analysis. A presentation of the study’s results is provided next, in turn, followed by a discussion and interpretation of these results. The paper concludes with some important recommendations for policy and practice.

2 Review of literature

In view of the growing demand for professionals possessing the critical skills and knowledge that are essential for economic growth and development, the responsibility lies with educational institutions to prepare students equipped with vital STEM skill sets ( AlMuraie et al., 2021 ). Improving students’ STEM-related capabilities requires schools to enhance their STEM education offerings and reconfigure their instructional methods. Central to such educational reforms is the imperative to incorporate teachers as a vital element ( Antonova et al., 2022 ).

Serving as essential catalysts in the educational journey, teachers play a central role in providing STEM education ( Kim, 2021 ). They possess the capacity to profoundly influence students’ academic performance in STEM subjects and, in the long run, shape their interest in and enthusiasm for pursuing STEM fields of study and eventual careers ( Blazar and Kraft, 2017 ). Students’ learning experiences, encompassing both theoretical classroom knowledge and hands-on practical experience, are pivotal factors in augmenting their proficiency in STEM-related skills and knowledge ( Romlie et al., 2021 ; Rohendi et al., 2023 ). When coupled with the guidance of dedicated teachers and access to high-quality STEM programs and curricula, these experiences create an optimal environment for nurturing students’ innate talents and capabilities within the realm of STEM disciplines ( MacFarlane, 2021 ).

The existing body of literature sheds light on the intricate interplay of several factors — broadly the individual (personal) and environmental (contextual) — that can either facilitate or impede STEM teaching ( Nugent et al., 2015 ; Sellami et al., 2017 ). For instance, researchers have proposed a range of social (i.e., contextual, school environment-related, family/peer/teacher support, etc.), individual (student-related, teacher-related in terms of knowledge, interest, self-efficacy, etc.), and instructional (curriculum, student-related, teacher-related, etc.) factors that contribute to the creation of favorable conditions for effective STEM teaching ( Nugent et al., 2015 ; Margot and Kettler, 2019 ; Wahono and Chang, 2019 ; Dong et al., 2020 ; Hamad et al., 2022 ; Karkouti et al., 2022 ). One comprehensive review of teachers’ perspectives on STEM education pinpointed six primary barriers that pose challenges to STEM teaching ( Margot and Kettler, 2019 ). These barriers are closely tied to the curriculum, pedagogical approaches, assessment methods, teacher support, student factors, and structural systems within the educational landscape ( Margot and Kettler, 2019 ). As outlined in the literature, these barriers encompass various facets, such as teachers’ beliefs, knowledge, and comprehension of STEM, as well as difficulties in applying STEM concepts to specific topics and challenges in establishing connections between different STEM subjects ( Wahono and Chang, 2019 ; Dong et al., 2020 ). Additional obstacles comprise inadequate teacher preparation, limited opportunities for professional development, a shortage of qualified STEM teachers, insufficient integration of cross-disciplinary content, low levels of student motivation, curriculum changes, inadequate resources and facilities, and assessments that may not effectively align with STEM education objectives ( Hamad et al., 2022 ; Karkouti et al., 2022 ). Ongoing discussions on STEM education continue to underscore the hindrances that obstruct the successful adoption of interdisciplinary STEM teaching approaches.

From a more extensive viewpoint, various obstacles may hinder STEM teaching, encompassing issues related to instruction, students, technology, school, etc. ( Al-Misnad, 2012 ; Sellami et al., 2017 ; Babar et al., 2019 ). Drawing from insights into existing literature, this present study seeks to explore the connections among impediments associated with students, technology, schools, and instruction as perceived by STEM teachers (refer to Figure 1 ).

Figure 1. Barriers to STEM teaching.

3 Theoretical framework

The conceptual foundation of this study is based on similar research, which delved into the perceptions of high school teachers regarding the obstacles to teaching STEM in Qatar, including student-related, technology-related, school-related, and instruction-related barriers in teaching STEM ( Sellami et al., 2022 ). The study employed descriptive statistics and logistic regression models to understand how teachers perceived these barriers. However, our current study distinguishes itself by using SEM to investigate the path coefficients and uncover the significant relationships between the investigated constructs.

The theoretical framework underpinning this study comprises Social Cognitive Theory (Bandura, 1989) and Attribution Theory (Bandura, 1997; Weiner, 2010). In this research, Social Cognitive Theory (SCT) serves as a valuable framework, offering insights into the barriers impeding STEM teaching by considering both individual factors (related to students and teachers) and environmental factors (associated with the context or school). In contrast, Attribution Theory (AT), a well-established research paradigm in social psychology, offers insights into why specific behaviors or events occur and how individuals contribute to these occurrences. In this research, AT, which focuses on how individuals explain the causes of events, is applied to understand the barriers to STEM teaching: it can provide insights into how teachers attribute the challenges and successes in STEM education, shedding light on the factors impacting their STEM teaching.

The use of SCT focuses on what aspects of STEM are perceived as barriers, whereas AT highlights how individuals attribute those barriers. Therefore, guided by the existing literature, our study postulates that high school STEM teachers in Qatar confront challenges that impact their teaching processes. These challenges are examined through the lenses of SCT and AT, which consider the interplay between individual beliefs and environmental factors in shaping STEM education.

3.1 Problem statement and research questions

As discussed, one of the key components of Qatar’s educational reform is to improve the standards of education by enhancing the quality of schoolteachers ( Nasser, 2017 ). In this respect, this study is important as it intends to investigate salient barriers to STEM education from a teacher’s perspective. Therefore, this study will extend knowledge related to the challenges that thwart STEM education. As such, this research aligns with Qatar National Vision-2030, which highlights Qatar’s need to transform into a knowledge-based economy. Based on the preceding deliberations, this study employs SEM to facilitate a comprehensive exploration of teachers’ perspectives on key obstacles to STEM education. After undergoing a critical literature review, this study put forth four hypotheses, as is shown in Figure 2 below:

Figure 2. Hypothesized model.

H1: Student-related barriers negatively influence high school STEM teaching.
H2: Technology-related barriers negatively influence high school STEM teaching.
H3: School-related barriers negatively influence high school STEM teaching.
H4: Instruction-related barriers negatively influence high school STEM teaching.

4 Research methods

An exploratory quantitative research approach was adopted to examine teachers’ perceptions of the main impediments to STEM education. This research design involved a review of the relevant literature on STEM-teaching barriers ( Al-Misnad, 2012 ; Nugent et al., 2015 ; Sellami et al., 2017 ; Babar et al., 2019 ), where themes were identified to guide the creation of a quantitative instrument. This instrument was then employed to delve deeper into the research problem ( Creswell et al., 2011 ; Berman, 2017 ). A survey questionnaire was then developed to explore the barriers related to STEM teaching (i.e., student-related, technology-related, school-related, and instruction-related barriers).

The survey was conducted both in person and virtually during the 2021 Spring Semester, spanning from March to April 2021. The survey administration involved physical questionnaires [paper-and-pencil interviewing (PAPI)] and computer-assisted personal interviews (CAPI). The latter involved gathering survey data through face-to-face interviews conducted by interviewers, using computers, smartphones, and tablets. This technique allowed the interviewers to input responses directly into these devices, enabling real-time data collection and reducing the need for manual data entry ( Blazar and Kraft, 2017 ).

For the purpose of this study, data was gathered from thirty-nine high schools randomly selected from across Qatar. These schools were a combination of both local government schools (56.4%) and private schools (43.6%) in Qatar. Following the approval process from Qatar University’s research ethics board (IRB), the research team contacted school board superintendents and teachers to secure their consent for data collection within their respective schools. After excluding teachers who did not complete the entire survey, a total of 290 STEM teachers participated in this research study. The study involved a nationwide survey and the sample was representative of the entire country. With the given number of completions, the maximum sampling error for a percentage in the teacher survey was approximately +/−2.4 percentage points. The computation of this sampling error accounts for design effects, encompassing influences from weighting, stratification, and clustering. One possible interpretation of sampling errors is that if the survey is repeated 100 times using the same procedure, the sampling errors would encompass the “true value” in 95 out of the 100 surveys. It is important to note that the calculation of sampling errors was feasible in this survey due to the sample being derived from a known probability-based sampling scheme set by the Ministry of Education.
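The reported figure incorporates design effects from the survey’s weighting, stratification, and clustering. Purely as a generic illustration of how such a margin of error is approximated, the sketch below applies the textbook formula for a percentage estimate with an optional design-effect multiplier; the function and variable names are ours, not the authors’.

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """Approximate 95% margin of error for a proportion estimate.

    p    : expected proportion (0.5 is the most conservative choice)
    n    : number of completed questionnaires
    deff : design effect accounting for weighting, stratification, clustering
    """
    return z * math.sqrt(deff * p * (1 - p) / n)

# Illustration only: 290 completions, worst-case p = 0.5, no design effect.
print(round(100 * margin_of_error(p=0.5, n=290), 1), "percentage points")
```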

Table 1 provides an overview of the teacher-related variables, showing their gender distribution (54.5% males and 45.5% females) and age groups, with the majority falling between the ages of 31 and 40 (40.1%). A significant portion of the participants held a bachelor’s degree (59.5%), and the vast majority of the teachers were expatriates (96%). In terms of their teaching assignments, the largest group of teachers taught both grades 11 and 12 (45.8%), while 25.8% exclusively taught grade 11, and 24.7% exclusively taught grade 12.

Table 1. Teacher-related variables (n = 290).

4.2 Survey instrument

The survey had three primary objectives: (a) Gathering fundamental background information, (b) Systematically documenting teaching approaches, and (c) Structurally documenting the key challenges encountered in effective STEM teaching. The implementation process involved three phases: (1) the development of the survey, (2) the testing of the survey through a pilot study, and (3) the administration of the survey.

Step 1: To develop the survey, we examined existing research on STEM teaching barriers ( Shadle et al., 2017 ; Sturtevant and Wheeler, 2019 ; Karkouti et al., 2022 ; Kayan-Fadlelmula et al., 2022 ; Sellami et al., 2022 ). The review of existing literature provided valuable insights into the specific target areas of the study and helped us gain a better comprehension of how teachers perceive STEM teaching and the associated barriers. The survey employed a five-point Likert scale to assess closed-ended items across five distinct constructs: (a) student-related barriers, (b) technology-related barriers, (c) school-related barriers, (d) teaching-related barriers, and (e) implementation of STEM instruction. For each construct, teachers were presented with various response options tailored to the type of question. These included disagree-agree questions (ranging from 1 = strongly disagree to 6 = strongly agree); frequency questions (ranging from 1 = never to 5 = always); percentage questions; rating questions (ranging from 1 = very poor to 5 = very good); emphasis questions (ranging from 1 = none to 5 = heavy); and significance questions (ranging from 1 = not important at all to 5 = very important). This diverse set of question types allowed for a comprehensive assessment of teachers’ perceptions and experiences related to STEM education.

Step 2: During this phase, the survey that was designed was pilot-tested with two focus groups, one conducted in Arabic and the other in English. This step was crucial for refining the survey instrument. The discussions within these focus groups proved invaluable in addressing concerns related to the wording of the survey questions. This process enabled us to rephrase and clarify questions that were inadequately worded or potentially confusing. The insights gained from the focus group discussions helped ensure that the survey was clear, and concise, and effectively collected the necessary data to achieve these goals.

Step 3: The third phase of survey execution involved the distribution of questionnaires after the reception of signed consent forms from both teachers and school authorities. Teachers were given the option to respond to the survey in either English or Arabic. On average, it took participants between 13 and 17 min to complete the study.

4.3 Data measures

The survey constructs were carefully designed as quantitative measures to capture key factors essential for addressing the research questions of this study. These measures encompassed various constructs, including student-related, technology-related, and school-related teaching barriers, as well as teacher STEM pedagogy implementations. The rationale behind selecting these measures stemmed from prior analyses that highlighted the existence of numerous obstacles impeding effective STEM teaching, such as restrictive teaching hours, curriculum challenges, student-related conflicts, evaluation difficulties, and lack of teacher support ( Margot and Kettler, 2019 ; Dong et al., 2020 ; Hamad et al., 2022 ; Karkouti et al., 2022 ). Below are the details of the formulation of these measures:

4.3.1 Student-related teaching barrier

The student-related teaching barrier explored the extent to which the teaching methods of educators were influenced by issues related to students. These issues covered the following areas: a lack of necessary skills, a lack of requisite knowledge, inadequate sleep, classroom disruptions, and reduced interest. Because the statements were negatively worded, teachers’ responses were reverse coded: the codes “−2” and “−1” were assigned to the responses “often” and “always”, respectively, a value of “0” was assigned to “undecided”, and the values “1” and “2” were assigned to “rarely” and “never”, respectively.

Technology-related teaching barrier: For this barrier, teachers evaluated the degree to which technology-related challenges affected their teaching. These challenges included insufficient computers, lack of internet speed or bandwidth, outdated or malfunctioning computers, lack of technical support, and insufficient interactive whiteboards. The responses provided by teachers were coded using the same methodology adopted for the student-related barriers.
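A minimal sketch of this reverse-coding step is shown below, assuming the raw responses are stored as text labels in a pandas DataFrame; the column names are hypothetical, and the exact label-to-code mapping should follow the study’s own scheme.

```python
import pandas as pd

# Illustrative reverse-coding map for negatively worded barrier items
# (labels -> numeric codes); adjust to match the coding scheme in use.
reverse_map = {"always": -2, "often": -1, "undecided": 0, "rarely": 1, "never": 2}

# Hypothetical raw responses from four teachers.
raw = pd.DataFrame({
    "students_lack_skills": ["often", "always", "rarely", "never"],
    "classroom_disruptions": ["undecided", "often", "never", "rarely"],
})

coded = raw.apply(lambda col: col.map(reverse_map))
print(coded)
```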

4.3.2 School-related teaching barrier

Here, teachers were tasked with assessing the degree to which their teaching was influenced by various challenges within the school environment. These challenges were represented by a variety of factors, which include technical support, STEM training, and pedagogical assistance, curriculum and teaching hours, availability of instructional materials and supplies, adequacy of classroom facilities, the state of school computers, organization of school spaces, administrative and budgetary constraints, the overall school environment, and the level of support and interest from fellow teachers. Similar coding on a 5-point Likert scale has been followed.

4.3.3 Instruction-related teaching barrier

The fourth construct utilized in this analysis is the instruction-related teaching barrier. Teachers were asked to detail the extent to which their teaching was impacted by these challenges, which included insufficient school laboratory resources, overcrowded classrooms, inefficient school time management, administrative limitations, budget constraints, and the pressure to prepare students for examinations. For consistency, a coding system similar to that used for the other barriers was implemented, based on the 5-point Likert scale.

4.3.4 STEM teaching

The fifth and final construct is STEM teaching, where teachers were presented with a scale to indicate the degree to which they utilized particular pedagogical approaches. This scale covered a spectrum from 0–20% to 81–100%. The pedagogical approaches under consideration include project- and problem-based methods, collaborative learning, and the flipped classroom model as examples. To streamline the analysis, the responses provided by teachers were translated into numerical values. Each specified percentage range was assigned a numerical code, ranging from 1 to 5, as follows: 0–20% corresponded to 1, 21–40% to 2, 41–60% to 3, 61–80% to 4, and 81–100% to 5.
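The band-to-code translation described above amounts to a simple lookup; the short sketch below is illustrative only, with hypothetical response strings.

```python
# Recode the STEM-teaching percentage bands into numerical codes (1-5).
band_to_code = {"0-20%": 1, "21-40%": 2, "41-60%": 3, "61-80%": 4, "81-100%": 5}

responses = ["41-60%", "0-20%", "81-100%"]  # hypothetical teacher responses
codes = [band_to_code[band] for band in responses]
print(codes)  # [3, 1, 5]
```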

4.4 Data analysis

The data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) statistics software and SPSS AMOS (Analysis of Moment Structures), version 29.0.0.0. Initially, an Exploratory Factor Analysis (EFA) was employed to gain insights into data reliability, item quality, and construct validity. Implementing the factor analysis involved five steps: (1) data adequacy and evaluation, assessing the suitability of the data for factor analysis; (2) construct extraction, in which factors or constructs were extracted from the data; (3) factor selection, where criteria were applied to determine which factors should be retained or removed; (4) rotation, in which a rotation technique was employed to optimize factor interpretability; and (5) results analysis, in which the results of the factor analysis were examined and non-contributing factors were removed, resulting in a structural model containing the significant constructs. For the EFA, statistical indicators such as the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity were computed to assess the appropriateness of the data for factor analysis.
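The authors carried out these steps in SPSS; purely as an illustration of the same workflow in code, the sketch below uses the third-party Python package factor_analyzer (our choice, not the study’s tooling) to compute Bartlett’s test and the KMO measure and to extract varimax-rotated factors. The file name, item columns, and number of factors are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file: one row per teacher, one numeric column per survey item.
items = pd.read_csv("survey_items.csv")

# Step 1: data adequacy - Bartlett's test of sphericity and the KMO measure.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}, KMO = {kmo_overall:.3f}")

# Steps 2-5: extract five factors with varimax rotation and inspect loadings
# (items with loadings below 0.50 would be candidates for removal).
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))
```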

To better understand how the different components (questions) overlap or differ in explaining the variance in their respective indicators, the study evaluated the construct validity of each component, specifically focusing on convergent validity and discriminant validity. Convergent validity was assessed using the average variance extracted (AVE), which represents the average of the squared loadings of the indicators associated with each component. Discriminant validity, on the other hand, was gauged using the heterotrait–monotrait ratio (HTMT) of correlations. HTMT compares the average correlations between indicators measuring different components to the average correlations among indicators measuring the same component.
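Once standardised loadings are available, the AVE for a construct is simply the mean of the squared loadings of its indicators; the sketch below uses hypothetical loadings. HTMT, by contrast, is usually obtained directly from SEM/PLS software because it requires the full inter-item correlation matrix.

```python
import numpy as np

def average_variance_extracted(standardised_loadings):
    """AVE for one construct: the mean of its squared standardised loadings."""
    loadings = np.asarray(standardised_loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Hypothetical loadings for a five-item construct.
print(round(average_variance_extracted([0.81, 0.78, 0.85, 0.74, 0.80]), 3))  # ~0.635
```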

Additionally, the survey model’s internal consistency reliability was evaluated using two tests: Cronbach’s alpha and McDonald’s omega. These tests provide insights into the reliability and consistency of the survey’s measurement scales. Descriptive statistics were also computed for an overall view of the data, in line with the paper’s scope. Finally, SEM was employed to address the stated hypotheses.
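Cronbach’s alpha can be computed directly from the item-score matrix via its standard formula, alpha = k/(k − 1) × (1 − Σ var(item) / var(total)); the sketch below uses made-up scores purely for illustration (the study’s values were computed in SPSS).

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) array of numeric scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 4 respondents x 3 items on a 1-5 scale.
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(scores), 3))  # ~0.939
```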

4.4.1 Goodness of fit measures for SEM

The study assessed various goodness-of-fit measures to evaluate the model’s fit in SEM. These measures included the chi-square divided by degrees of freedom (χ 2 /df), Tucker-Lewis Index (TLI), Comparative Fit Index (CFI), Root Mean Residual (RMR), and Root Mean Square Error of Approximation (RMSEA), Root Mean Square Residuals (RMSR), Normed Fit Index (NFI) ( Hair et al., 2012 ).
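The model itself was fitted in SPSS AMOS; as a rough, hedged illustration of how such a path model and its fit statistics can be specified in code, the sketch below uses the third-party Python package semopy (our choice, not the authors’ tooling) with hypothetical composite variable names matching the constructs described above.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical file: one row per teacher, one numeric column per construct score.
data = pd.read_csv("teacher_constructs.csv")

# Path model: STEM teaching regressed on the four barrier constructs.
description = """
stem_teaching ~ student_barriers + school_barriers + technology_barriers + instruction_barriers
"""

model = Model(description)
model.fit(data)               # maximum-likelihood estimation by default
print(model.inspect())        # path coefficients (estimates, standard errors, p-values)
print(calc_stats(model).T)    # fit indices such as chi2/df, CFI, TLI, RMSEA, GFI, AGFI
```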

4.5 Validation of the instruments

To derive constructs that would adequately tackle the research questions in this study, factor analysis was utilized. This analysis encompassed principal component analysis and varimax rotation, with a minimum factor loading requirement of 0.50. The suitability of the data for factor analysis was verified by the significance of the chi-squared test (χ2 = 5561.089, p < 0.001). To further confirm the adequacy of the sample, the Kaiser–Meyer–Olkin measure and Bartlett’s test of sphericity were employed. The Kaiser–Meyer–Olkin value, which stood at 0.919, indicated that the data were appropriate for factor analysis. To evaluate construct validity, convergent validity was determined by computing the AVE for all indicators within each construct. The AVE was calculated to be above 0.7, which is considered an acceptable value ( Fornell and Larcker, 1981 ).

Discriminant validity was evaluated using the HTMT ratio of correlations, and the resulting value was 0.8, which is also considered acceptable. Moreover, to validate internal consistency, Cronbach’s alpha and McDonald’s omega were computed; all values were within the acceptable range (>0.7) (Cohen et al., 2002). In addition, composite reliability (CR) was calculated, and all values fell within the acceptable threshold (>0.6). The results of the factor loadings and internal reliability tests are provided in Table 2 below. Finally, to evaluate the hypotheses, the study employed a SEM approach to analyze the relationships between the constructs concerning teachers’ barriers and STEM teaching.

Table 2. Results of confirmatory factor analysis and reliability tests ( n = 290).

5 Results

The findings of this study provide valuable insights into the main obstacles encountered by STEM teachers in their teaching. These findings are presented and structured in alignment with the four research hypotheses of the study, and they should serve as a compelling call to action for educators, scholars, and policymakers, urging them to implement necessary reforms in the field within the context of Qatar. Before delving into the research hypotheses, it is essential to examine the descriptive analysis of teachers’ responses concerning the different teaching barriers, namely student-related, technology-related, school-related, and teaching-related. This step is crucial for gaining an understanding of which barrier presents the greatest challenge to teachers.

The results indicate that of the obstacles linked to students, the most significant challenge for teachers is the issue of “inadequate sleep among students” (mean = 3.35, S.D. = 1.08). In terms of technology-related hindrances, “insufficient internet bandwidth or speed” (mean = 2.60, S.D. = 1.24) is the foremost challenge. Concerning school-related factors, the greatest challenge arises from the “pressure to prepare students for exams” (mean = 2.69, S.D. = 1.29). Lastly, regarding barriers connected to instruction, the most prominent challenge is “teachers having an excessive number of teaching hours” (mean = 3.34, S.D. = 1.08).

5.1 Structural model and hypothesis testing

In our SEM, the construct “STEM teaching” was employed as the dependent observed variable, while the other barriers (student-related, school-related, technology-related, and instruction-related) were treated as independent observed variables. We utilized the maximum-likelihood method for estimating the model’s parameters, and all analyses were based on the variance-covariance matrices. The model’s goodness of fit was assessed and found to be satisfactory (Hair et al., 2012). The goodness-of-fit indices fell within the acceptable ranges: chi-square divided by degrees of freedom (χ2/df) < 5, Goodness-of-Fit Index (GFI) > 0.9, Adjusted Goodness-of-Fit Index (AGFI) > 0.8, Comparative Fit Index (CFI) > 0.9, Root Mean Square Residual (RMSR) < 0.1, Root Mean Square Error of Approximation (RMSEA) < 0.08, Normed Fit Index (NFI) > 0.9, and Parsimony Normed Fit Index (PNFI) > 0.6 (see Table 3). In summary, the structural model’s good fit was verified, paving the way for further examination of the structural model.
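
As an illustration only, the sketch below shows one way a model of this form could be specified and estimated with maximum likelihood using the open-source semopy package (the study itself used SPSS AMOS). The construct names mirror the paper, but the item names (stu1, sch1, and so on) and the measurement structure are assumptions made for demonstration.

```python
import pandas as pd
import semopy

# Measurement part: each barrier construct is indicated by its questionnaire items.
# Structural part: STEM teaching is regressed on the four barrier constructs.
MODEL_DESC = """
Student =~ stu1 + stu2 + stu3 + stu4
School =~ sch1 + sch2 + sch3
Technology =~ tech1 + tech2 + tech3
Instruction =~ ins1 + ins2 + ins3
STEMteaching ~ Student + School + Technology + Instruction
"""

def estimate(df: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(df, obj="MLW")        # maximum-likelihood (Wishart) estimation
    estimates = model.inspect()     # path coefficients, standard errors, p-values
    fit = semopy.calc_stats(model)  # chi2, CFI, TLI, GFI, AGFI, NFI, RMSEA, etc.
    return estimates, fit
```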

Table 3. Measures of goodness-of-fit.

Figure 3 and Table 4 present the results of the SEM analysis. The findings indicate that all of the teaching barriers (student-related: β = −0.243, p < 0.001; school-related: β = −0.122, p < 0.001; technology-related: β = −0.123, p = 0.040; and instruction-related: β = −0.112, p = 0.018) were negatively related to teachers’ STEM teaching. These results indicate that all of the hypotheses formulated in the study were supported (Table 4). All of the estimated path coefficients were significant at the 0.05 level. Among the various obstacles, the most formidable challenges for high school STEM teachers appear to be those related to students (β = −0.243, p < 0.001). Teachers reported several student-related barriers, including students lacking the necessary skills and knowledge, students not getting sufficient sleep, classroom disruptions caused by students, and a lack of student interest.

Figure 3. Diagrammatic representation of SEM approach, illustrating the correlation between teachers’ STEM teaching and associated barriers.

Table 4. Results from SEM.

6 Discussion

This study delves into the barriers that high school teachers in Qatar encounter in teaching STEM subjects. Our research examined a series of variables, including those related to students, technology, the school environment, and instructional factors, from teachers’ perspectives. As stated previously, SCT and AT provided the theoretical foundation for exploring these factors (Heffernan, 1988). The research findings presented below are interpreted through the lenses of SCT and AT, which serve as the theoretical models underpinning our study. AT aided in comprehending how teachers attribute challenges in STEM teaching to individual and contextual factors. SCT, in turn, furnished a valuable framework for understanding the social and cognitive factors that affected STEM teaching and, similar to AT, offered insights into personal and environmental barriers. Both models were useful in exploring the significant inter-relationships within the teaching context (i.e., STEM teaching and associated barriers).

The findings derived from the present study indicated that student-related teaching barriers are negatively correlated with STEM teaching. The results disclosed three specific barriers to STEM teaching as reported by teachers: students’ lack of the required skills (mean = 3.34), students’ lack of the required knowledge (mean = 3.34), and students not getting enough sleep (mean = 3.45). These results are in alignment with recent research conducted by Tran and Moskovsky (2022) and Børte et al. (2023). These studies have unveiled teaching barriers associated with students encountering challenges in solving STEM-related problems, displaying lower academic performance, and struggling to apply their knowledge to independent STEM-related tasks. Whether this could be an indication of declining interest among students in STEM learning is yet to be confirmed by future research. Further empirical research is necessary to delve into the underlying causes and mechanisms that perpetuate these challenges.

Results suggest that technology-related teaching barriers are negatively correlated with STEM teaching. Teachers cited obstacles associated with the availability of technical resources and technical assistance/support. While teachers in Qatar emphasized the importance of having access to technology resources, they also reported that schools often lacked suitable or sufficient educational software (Moyo, 2017). Additionally, teachers indicated they had limited access to information and communication technology (ICT) infrastructure due to a restricted curriculum (Moyo, 2017). Alshaboul et al. (2022) report that teachers’ positive or negative beliefs also play a significant role in determining their access to electronic devices/technology in the classroom.

Discomfort and inconvenience in integrating/using technology in the classroom can be considered technology-related barriers to STEM teaching. Various factors may contribute to this, including insufficient skills (such as a lack of self-efficacy and confidence), difficulties with classroom management or appropriate online assessment, concerns about privacy, and a shortage of effective ICT-based training. According to Al-Thani et al. (2021), there is a notable absence of professional development (PD) opportunities in Qatar, and the existing PD strategies lack clear direction, purpose, or progress. Findings from a study conducted by Said et al. (2023), which involved 245 preparatory and secondary school teachers from 16 different schools in Qatar, highlight the pressing need for substantial PD to help teachers deliver STEM effectively. Teachers also emphasize the necessity for adequate PD to address pedagogical challenges associated with the adoption of new technology-enhanced teaching methods (Said et al., 2023).

Teachers also expressed a desire for improvements to the teacher training workshops, which are held annually but tend to be repetitive (Al-Thani et al., 2021). Certain studies advocated for validated models that assist teachers in overcoming technology-related barriers and enhancing effective pedagogical delivery. One such model is the mentoring model, which involves providing professional support from experienced teachers to newly hired teachers (Abu-Tineh and Sadiq, 2018). Similarly, Said et al. (2023) focused on teachers’ PD using the Technological Pedagogical and Content Knowledge (TPACK) model, which helps teachers effectively use/integrate technology during instruction. Another noteworthy model is PICRAT, which is student-centered, pedagogy-driven, tailored to specific contexts, and practical for teachers, as it guides all considerations for effective technology use in classrooms (Kimmons et al., 2020). In the PICRAT model, “PIC” stands for Passive, Interactive, and Creative learning, and it refers to how students engage with technology within a specific educational context or field. “RAT,” in turn, stands for Replacement, Amplification, and Transformation, and it signifies the influence of technology on a teacher’s practices when it is integrated into their teaching methods (Kimmons et al., 2020). Although there are several PD models for effective technology integration and combating technology-related barriers, only the TPACK and mentoring models have been reported in the context of Qatar.

The third variable that we investigated, namely school-related teaching barriers, was found to have a negative correlation with STEM teaching. Teachers reported various school-related challenges, including the pressure to prepare students for exams, budget and administrative constraints when accessing adequate teaching materials, concerns about the school environment, dealing with overcrowded classrooms, and facing limitations with inadequate school laboratories. These results echo findings from a recent study of the Qatari context by Sellami et al. (2022). While the influence of school-related variables on STEM teaching in Qatar is a largely understudied area, some recommendations to address the relevant challenges are proposed in this study. For instance, the issue of limited access to adequate teaching resources could potentially be resolved by enhancing school libraries through the expansion of library resources and the improvement of information technology facilities (Gunasekera and Balasubramani, 2020).

To address the issue of the pressure teachers feel in preparing students for exams, potential remedies include stress management interventions, such as cognitive-behavioral-based and mindfulness-based interventions ( von Keyserlingk et al., 2020 ). Cognitive-behavioral-based interventions involve cognitive training and the practice of strategic behaviors, equipping teachers with both knowledge and skills to effectively manage work-related stress ( von Keyserlingk et al., 2020 ). On the other hand, mindfulness-based interventions emphasize cognitive and behavioral strategies that focus on the experience of feelings and thoughts, rather than the specific content of those thoughts. These strategies aim to promote awareness and acceptance without judgment, making them integral components of mindfulness-based approaches ( von Keyserlingk et al., 2020 ).

Instruction-related teaching barriers have also been identified as having a negative correlation with STEM teaching. Teachers reported several challenges, including inadequate training in STEM education and a lack of pedagogical models tailored for STEM teaching. They also highlighted issues related to the imposed school curricula, excessive teaching hours, and a shortage of teaching materials. Existing literature demonstrates a positive relationship between the pressure stemming from imposed curricula and the perceived stress among teachers (Putwain and von der Embse, 2019; von Keyserlingk et al., 2020). Research has also shown a negative relationship between teachers’ self-efficacy and their perceived stress (Putwain and von der Embse, 2019). In simple terms, when teachers possess a high level of self-efficacy in STEM, they tend to experience less stress in response to curriculum changes (von Keyserlingk et al., 2020). This underscores the importance of implementing PD programs for teachers, specifically targeting STEM education, to enhance their self-efficacy and better equip them to handle curriculum changes with reduced stress. The literature has also shown that excessive teaching hours constitute a real challenge for teachers (Ismail et al., 2019). Demonstrably, this challenge has been consistently cited as a significant factor that greatly impacts teachers’ motivation to teach STEM subjects, contributes to increased stress levels, and leads to lower job satisfaction among teachers when they are teaching STEM (von Keyserlingk et al., 2020).

A comprehensive systematic review that drew data from 25 articles spanning the globe also reinforces the importance of providing support to teachers to enhance their capacity to implement STEM education effectively ( Margot and Kettler, 2019 ). This support includes collaborations with colleagues, ensuring access to well-crafted curricula, receiving support from the school, drawing upon past experiences, and having access to impactful professional development opportunities ( Margot and Kettler, 2019 ). As a result of these study findings, there is a clear and compelling need for school management to offer robust support to teachers. This support should encompass the provision of PD programs geared toward enhancing their skills in STEM education, as well as implementing stress management interventions to help teachers effectively manage the stress associated with their teaching responsibilities ( Karkouti et al., 2022 ).

Finally, the study’s main limitations stem from its exclusive reliance on quantitative survey data, specifically from high school teachers in Qatar, looking at their perceptions of challenges to STEM teaching. To gain a more informed insight into and understanding of the factors influencing technology integration, the study would benefit from also utilizing qualitative data. For instance, conducting focus group interviews, in-depth one-on-one discussions, and follow-up interviews would enable an in-depth exploration of the underlying reasons behind these barriers. Another limitation is that the study primarily focuses on high school teachers’ perspectives, overlooking those of educators in lower grade levels. Incorporating data from primary and preparatory teachers would broaden the study’s insights and offer a comparative viewpoint. It is worth noting that different results and conclusions might arise when considering teachers with diverse demographics. However, we believe that the study’s reliability is supported by robust statistical analysis using a more stringent significance level (e.g., p < 0.01). Furthermore, it is important to acknowledge the need for a longitudinal analysis of teachers’ perceptions of barriers to STEM instruction. Because teachers’ beliefs and attitudes change and evolve, a longitudinal study would capture these shifts and provide valuable insights into long-term trends in STEM education.

7 Conclusion and recommendations

Teachers are the cornerstone of educational excellence and hold significant sway over students’ academic achievements in STEM. Specifically, the teaching methods utilized by teachers and their skillful application in the classroom play a pivotal role in influencing whether students choose to pursue and persist in STEM fields of study and future careers. Therefore, it is important to understand teachers’ experiences of teaching STEM and the challenges they encounter. Guided by SCT and AT, this study identified a range of factors impeding STEM teaching: school-related, student-related, technology-related, and teaching-related barriers.

This research explored the experiences of high school STEM teachers in Qatar, focusing specifically on the barriers they face in teaching STEM. The research findings underscore the importance of barriers related to schools, students, technology, and teaching methods in the context of STEM teaching within the classroom. Additionally, the study highlighted that student-related barriers were the most prominent impediments affecting STEM instruction. We believe that these findings provide crucial insights that can inform the development of effective STEM learning practices in high schools in Qatar.

Overall, this study calls for investing in teachers’ knowledge and expertise and for providing them with emotional, informational, instrumental, and appraisal support in Qatar. Emotional support entails sharing personal experiences, demonstrating empathy toward teachers, and implementing effective stress management strategies to assist them in coping with work-related stress. Informational support involves creating well-thought-out plans and recommending actions to facilitate problem-solving. Instrumental support encompasses offering tangible assistance, direct aid, and PD programs to enable teachers to reach their objectives. Equally significant is the concept of appraisal support, which nurtures an environment promoting self-evaluation, constructive feedback, and affirmation, all contributing to enhancing teachers’ motivation and overall well-being.

Data availability statement

The original contributions presented in this study are included in this article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Qatar University (QU-IRB 1424-EA/20) on 1 March 2020. The participants provided written informed consent for participation in the study.

Author contributions

AS: Methodology, Writing – review and editing, Conceptualization, Funding acquisition, Project administration. MS: Formal analysis, Methodology, Writing – original draft. JB: Formal analysis, Writing – review and editing. ZA: Methodology, Supervision, Writing – review and editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The project was funded by Qatar University (Reference: QUCG-SESRI-20/21-1). This research has been partially supported by the Qatar University Exceptional Grant (Ref.: QU-ERC-23_24-1) entitled “Student interest and perseverance in STEM-related fields of study and Careers” from Qatar University.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abu-Tineh, A., and Sadiq, H. (2018). Characteristics and models of effective professional development: The case of school teachers in Qatar. Profess. Dev. Educ. 44, 311–322.


Al-Misnad, S. (2012). The dearth of Qatari men in higher education: Reasons and implications. Washington, DC: Middle East Institute.

AlMuraie, E., Algarni, N., and Alahmad, N. (2021). Upper-secondary school science teachers’ perceptions of the integrating mechanisms and importance of STEM education. J. Baltic Sci. Educ. 20, 546–557. doi: 10.33225/jbse/21.20.546


Alshaboul, Y., Alazaizeh, M., Al-Shboul, Q., Newsome, M., and Abu-Tineh, A. (2022). University instructors and students’ attitudes toward distance education: The case of Qatar. J. Positive Sch. Psychol. 6, 7940–7959.

Al-Thani, W., Ari, I., and Koç, M. (2021). Education as a critical factor of sustainability: Case study in Qatar from the teachers’ development perspective. Sustainability 13:11525.

Antonova, O., Antonov, O., and Polishchuk, N. (2022). STEM-approach in education and preparation of the teacher for its implementation. Zhyto. Ivan Franko State Univers. J. Pedagog. Sci. 3, 267–281. doi: 10.1088/1742-6596/1806/1/012219

Babar, Z., Ewers, M., and Khattab, N. (2019). Im/mobile highly skilled migrants in Qatar. J. Ethn. Migr. Stud. 45, 1553–1570. doi: 10.1080/1369183X.2018.1492372

Bandura, A. (1989). “Social Cognitive Theory,” in Annals of Child Development , ed. R. Vasta (Greenwich: Jai Press LTD), 1–60.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: WH Freeman.

Ben Hassen, T. (2021). The state of the knowledge-based economy in the Arab world: Cases of Qatar and Lebanon. Euromed J. Bus. 16, 129–153. doi: 10.1108/EMJB-03-2020-0026

Berman, E. (2017). An exploratory sequential mixed methods approach to understanding researchers’ data management practices at UVM: Integrated findings to develop research data services. Burlington, VT: University of Vermont.

Blazar, D., and Kraft, M. (2017). Teacher and teaching effects on students’ attitudes and behaviors. Educ. Eval. Policy Anal. 39, 146–170. doi: 10.3102/0162373716670260


Børte, K., Nesje, K., and Lillejord, S. (2023). Barriers to student active learning in higher education. Teach. High. Educ. 28, 597–615.

Btool, H., and Koc, M. (2017). Challenges and drivers for Qatar’s transformation into a knowledge-based economy and society-work in progress in education system reforms. PressAcad. Proc. 4, 281–288. doi: 10.17261/Pressacademia.2017.545

Cherif, R., Hasanov, F., and Pande, A. (2021). Riding the energy transition: Oil beyond 2040. Asian Econ. Policy Rev. 16, 117–137. doi: 10.1111/aepr.12317

Cherif, R., Hasanov, F., and Zhu, M. (2016). Breaking the oil spell: The Gulf falcons’ path to diversification. Washington, DC: International Monetary Fund.

Cohen, L., Manion, L., and Morrison, K. (2002). Research methods in education. Milton Park: Routledge.

Creswell, J., Klassen, A., Plano Clark, V., and Smith, K. (2011). Best practices for mixed methods research in the health sciences. Bethesda (Maryland) 2013, 541–545. doi: 10.1016/B978-0-323-91888-6.00033-8

Dong, Y., Wang, J., Yang, Y., and Kurup, P. (2020). Understanding intrinsic challenges to STEM instructional practices for Chinese teachers based on their beliefs and knowledge base. Int. J. STEM Educ. 7, 1–12. doi: 10.1186/s40594-020-00245-0

Fornell, C., and Larcker, D. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Los Angeles, CA: Sage Publications.

Gunasekera, C., and Balasubramani, R. (2020). Use of information and communication technologies by school teachers in Sri Lanka for information seeking. Libr. Philos. Pract. 2020:3979.

Hair, J., Sarstedt, M., Ringle, C., and Mena, J. (2012). An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Market. Sci. 40, 414–433. doi: 10.1108/IJCHM-10-2016-0568

Hamad, S., Tairab, H., Wardat, Y., Rabbani, L., AlArabi, K., Yousif, M., et al. (2022). Understanding science teachers’ implementations of integrated STEM: Teacher perceptions and practice. Sustainability 14:3594. doi: 10.1186/s40594-018-0101-z

Heffernan, C. (1988). Social foundations of thought and action: A social cognitive theory, by Albert Bandura. Englewood Cliffs, NJ: Prentice Hall, 1986. Behav. Change 5, 37–38.

Ichsan, I., Suharyat, Y., Santosa, T., and Satria, E. (2023). The effectiveness of STEM-based learning in teaching 21st century skills in generation Z student in science learning: A meta-analysis. J. Penelit. Pendidik. IPA 9, 150–166.

Ismail, M., Salleh, M., and Nasir, N. (2019). The issues and challenges in empowering STEM on science teachers in Malaysian secondary schools. Int. J. Acad. Res. Bus. Soc. Sci. 9, 430–444. doi: 10.6007/IJARBSS/v9-i13/6869

Karkouti, I., Abu-Shawish, R., and Romanowski, M. (2022). Teachers’ understandings of the social and professional support needed to implement change in Qatar. Heliyon 8:e08818. doi: 10.1016/j.heliyon.2022.e08818

Kayan-Fadlelmula, F., Sellami, A., Abdelkader, N., and Umer, S. (2022). A systematic review of STEM education research in the GCC countries: Trends, gaps and barriers. Int. J. STEM Educ. 9, 1–24. doi: 10.1186/s40594-021-00319-7

Kim, M. (2021). A systematic review of the design work of STEM teachers. Res. Sci. Technol. Educ. 39, 131–155. doi: 10.1080/02635143.2019.1682988

Kimmons, R., Graham, C., and West, R. (2020). The PICRAT model for technology integration in teacher preparation. Contemp. Issues Technol. Teach. Educ. 20, 176–198.

Kola, A. (2013). Importance of science education to national development and problems militating against its development. Am. J. Educ. Res. 1, 225–229. doi: 10.12691/education-1-7-2

MacFarlane, B. (2021). Infrastructure of comprehensive STEM programming for advanced learners. STEM education for high-ability learners. Milton Park: Routledge, 139–160.

Margot, K., and Kettler, T. (2019). Teachers’ perception of STEM integration and education: A systematic literature review. Int. J. STEM Educ. 6, 1–16. doi: 10.1186/s40594-018-0151-2

Moyo, C. (2017). Evaluating the current usage and integration of ICTs in education: A review of teachers’ usage and barriers to integration. TEXILA Int. J. Acad. Res. 4, 107–115.

Nasser, R. (2017). Qatar’s educational reform past and future: Challenges in teacher development. Open Rev. Educ. Res. 4, 1–19.

Nugent, G., Barker, B., Welch, G., Grandgenett, N., Wu, C., and Nelson, C. (2015). A model of factors contributing to STEM learning and career orientation. Int. J. Sci. Educ. 37, 1067–1088.

Oliveros Ruiz, M., Vargas Osuna, L., Valdez Salas, B., Schorr Wienner, M., Sevilla Garcia, J., Cabrera Cordova, E., et al. (2014). The importance of teaching science and technology in early education levels in an emerging economy. Bull. Sci. Technol. Soc. 34, 87–93.

Putwain, D., and von der Embse, N. (2019). Teacher self-efficacy moderates the relations between imposed pressure from imposed curriculum changes and teacher stress. Educ. Psychol. 39, 51–64.

Rohendi, D., Wahyudin, D., and Kusumah, I. (2023). Online learning using STEM-based media: To improve mathematics abilities of vocational high school students. Int. J. Instruct. 16, 377–392.

Romlie, M., Sudjono, I., and Subandi, M. (2021). Integrating STEM into the formal and instructional curriculum to support machine construction design. Teknol. Kejuruan 44, 100–107.

Said, Z., Mansour, N., and Abu-Tineh, A. (2023). Integrating technology pedagogy and content knowledge in Qatar’s preparatory and secondary schools: The perceptions and practices of STEM teachers. EURASIA J. Maths. Sci. Technol. Educ. 19:em2271.

Sellami, A., Ammar, M., and Ahmad, Z. (2022). Exploring teachers’ perceptions of the barriers to teaching STEM in high schools in Qatar. Sustainability 14:15192.

Sellami, A., El-Kassem, R., Al-Qassass, H., and Al-Rakeb, N. (2017). A path analysis of student interest in STEM, with specific reference to Qatari students. EURASIA J. Maths. Sci. Technol. Educ. 13, 6045–6067.

Shadle, S., Marker, A., and Earl, B. (2017). Faculty drivers and barriers: Laying the groundwork for undergraduate STEM education reform in academic departments. Int. J. STEM Educ. 4, 1–13. doi: 10.1186/s40594-017-0062-7

Sturtevant, H., and Wheeler, L. (2019). The STEM faculty instructional barriers and identity survey (FIBIS): Development and exploratory results. Int. J. STEM Educ. 6, 1–22. doi: 10.1186/s40594-019-0185-0

Tan, T., Al-Khalaqi, A., and Al-Khulaifi, N. (2014). Qatar national vision 2030. Sustain. Dev. 19, 65–81.

Tran, L., and Moskovsky, C. (2022). Students as the source of demotivation for teachers: A case study of Vietnamese university EFL teachers. Soc. Psychol. Educ. 25, 1527–1544.

von Keyserlingk, L., Becker, M., Jansen, M., and Maaz, K. (2020). Leaving the pond—Choosing an ocean: Effects of student composition on STEM major choices at university. J. Educ. Psychol. 112:751. doi: 10.1037/edu0000378

Wahono, B., and Chang, C. (2019). Assessing teacher’s attitude, knowledge, and application (AKA) on STEM: An effort to foster the sustainable development of STEM education. Sustainability 11:950. doi: 10.3390/su11040950

Waters, C., and Orange, A. (2022). STEM-driven school culture: Pillars of a transformative STEM approach. J. Pedagog. Res. 6, 72–90. doi: 10.33902/JPR.202213550

Weiner, B. (2010). The development of an attribution-based theory of motivation: A history of ideas. Educ. Psychol. 45, 28–36.

Keywords : teachers, teaching barriers, high school, STEM education, structural equation modeling (SEM)

Citation: Sellami A, Santhosh M, Bhadra J and Ahmad Z (2024) Teachers’ perceptions of the barriers to STEM teaching in Qatar’s secondary schools: a structural equation modeling analysis. Front. Educ. 9:1333669. doi: 10.3389/feduc.2024.1333669

Received: 05 November 2023; Accepted: 08 March 2024; Published: 04 April 2024.

Copyright © 2024 Sellami, Santhosh, Bhadra and Ahmad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zubair Ahmad, [email protected]

National Center for Science and Engineering Statistics


The Survey of Federal Funds for Research and Development is an annual census of federal agencies that conduct research and development (R&D) programs and the primary source of information about U.S. federal funding for R&D.

Survey Info

  • Methodology
  • Data
  • Analysis

The Survey of Federal Funds for Research and Development (R&D) is the primary source of information about federal funding for R&D in the United States. The survey is an annual census completed by the federal agencies that conduct R&D programs. Actual data are collected for the fiscal year just completed; estimates are obtained for the current fiscal year.

Areas of Interest

  • Government Funding for Science and Engineering
  • Research and Development

Survey Administration

Synectics for Management Decisions, Inc. (Synectics) performed the data collection for volume 72 (FYs 2022–23) under contract to the National Center for Science and Engineering Statistics.

Survey Details

  • Survey Description (PDF 127 KB)
  • Data Tables (PDF 4.8 MB)

Featured Survey Analysis

Federal R&D Obligations Increased 0.4% in FY 2022; Estimated to Decline in FY 2023.


Survey of Federal Funds for R&D Overview

Methodology: Survey Description, Survey Overview (FYs 2022–23 survey cycle; volume 72).

The annual Survey of Federal Funds for Research and Development (Federal Funds for R&D) is the primary source of information about federal funding for R&D in the United States. The results of the survey are also used in the federal government’s calculation of U.S. gross domestic product at the national and state levels, used for policy analysis, and used for budget purposes for the Federal Laboratory Consortium for Technology Transfer, the Small Business Innovation Research program, and the Small Business Technology Transfer program. The survey is sponsored by the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF).

Data collection authority

The information is solicited under the authority of the National Science Foundation Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010.

Major changes to recent survey cycle

Key survey information

Initial survey year

1951.

Reference period

FYs 2022–23.

Response unit

Federal agencies.

Sample or census

Census.

Population size

The population consists of the 32 federal agencies that conduct R&D programs, excluding the Central Intelligence Agency (CIA).

Sample size

Not applicable; the survey is a census of all federal agencies that conduct R&D programs, excluding the CIA.

Key variables

Key variables of interest are listed below.

The survey provides data on federal obligations by the following key variables:

  • Federal agency
  • Field of R&D (formerly field of science and engineering)
  • Geographic location (within the United States and by foreign country or economy)
  • Performer (type of organization doing the work)
  • R&D plant (facilities and major equipment)
  • Type of R&D (research, development, test, and evaluation [RDT&E] for Department of Defense [DOD] agencies)
  • Basic research
  • Applied research
  • Development, also known as experimental development

The survey provides data on federal outlays by the following key variables:

  • R&D (RDT&E for DOD agencies)
  • R&D plant

Note that the variables “R&D,” “type of R&D,” and “R&D plant” in this survey use definitions comparable to those used by the Office of Management and Budget Circular A-11, Section 84 (Schedule C).

Survey Design

Target population

The population consists of the federal agencies that conduct R&D programs, excluding the CIA. For the FYs 2022–23 cycle, a total of 32 federal agencies (14 federal departments and 18 independent agencies) reported R&D data.

Sampling frame

The survey is a census of all federal agencies that conduct R&D programs, excluding the CIA. The agencies are identified from information in the president’s budget submitted to Congress. The Analytical Perspectives volume and the “Detailed Budget Estimates by Agency” section of the appendix to the president’s budget identify agencies that receive funding for R&D.

Sample design

Not applicable.

Data Collection and Processing

Data collection

Synectics for Management Decisions, Inc. (Synectics) performed the data collection for volume 72 (FYs 2022–23) under contract to NCSES. Agencies were initially contacted by e-mail to verify the contact information of each agency-level survey respondent. A Web-based data collection system is used for the survey. Multiple subdivisions of some federal departments were permitted to submit information to create a complete accounting of the departments’ R&D funding activities.

Data collection for Federal Funds for R&D began in May 2023 and continued into September 2023.

Data processing

A Web-based data collection system is used to collect and manage data for the survey. This Web-based system was designed to help improve survey reporting and reduce data collection and processing costs by offering respondents direct online reporting and editing.

All data collection efforts, data imports, and trend checking are accomplished using the Web-based data collection system. The Web-based data collection system has a component that allows survey respondents to enter their data online; it also has a component that allows the contractor to monitor support requests, data entry, and data issues.

Estimation techniques

Published totals are created by summing respondent data; there are no survey weights or other adjustments.
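
As a simple illustration of this approach (not NCSES’s production code), a published total is just a sum over respondent records; the column names below are assumptions.

```python
import pandas as pd

def published_totals(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per agency report, with obligations in actual dollars."""
    return (
        responses
        .groupby(["fiscal_year", "field_of_rd"], as_index=False)["obligations"]
        .sum()
    )
```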

Survey Quality Measures

Sampling error

Not applicable; the survey is a census.

Coverage error

Given the existence of a complete list of all eligible agencies, there is no known coverage error. The CIA is purposely excluded.

Nonresponse error

There is no unit nonresponse. To increase item response, agencies are encouraged to estimate when actual data are unavailable. The survey instrument allows respondents to enter data or skip data fields. There are several possible sources of nonresponse error by respondents, including inadvertently skipping data fields or skipping data fields when data are unavailable.

Measurement error

Some measurement problems are known to exist in the Federal Funds for R&D data. Some agencies cannot report the full costs of R&D, the final performer of R&D, or R&D plant data.

For example, DOD does not include headquarters’ costs of planning and administering R&D programs, which are estimated at a fraction of 1% of its total cost. DOD has stated that identification of amounts at this level is impracticable.

The National Institutes of Health (NIH) in the Department of Health and Human Services currently has many of its awards in its financial system without any field of R&D code. Therefore, NIH uses an alternate source to estimate its research dollars by field of R&D. NIH uses scientific class codes (based upon history of grant, content of the title, and the name of the awarding institute or center) as an approximation for field of R&D.

The National Aeronautics and Space Administration (NASA) does not include any field of R&D codes in its financial database. Consequently, NASA must estimate what percentage of the agency’s research dollars are allocated into the fields of R&D.

Also, agencies are required to report the ultimate performer of R&D. However, through past workshops, NCSES has learned that some agencies do not always track their R&D dollars to the ultimate performer of R&D. This leads to some degree of misclassification of performers of R&D, but NCSES has not determined the extent of the errors in performer misclassification by the reporting agencies.

R&D plant data are underreported to some extent because of the difficulty some agencies, particularly DOD and NASA, encounter in identifying and reporting these data. DOD’s respondents report obligations for R&D plant funded under the agency’s appropriation for construction, but they are able to identify only a small portion of the R&D plant support that is within R&D contracts funded from DOD’s appropriation for RDT&E. Similarly, NASA respondents cannot separately identify the portions of industrial R&D contracts that apply to R&D plant because these data are subsumed in the R&D data covering industrial performance. NASA R&D plant data for other performing sectors are reported separately.

Data Availability and Comparability

Data availability

Annual data are available for FYs 1951–2023.

Data comparability

Until the release of volume 71 (FYs 2021–22), the information included in this survey had been unchanged since volume 23 (FYs 1973–75), when federal obligations for research to universities and colleges by agency and detailed field of science and engineering were added to the survey. Other variables (such as type of R&D and type of performer) are available from the early 1950s on. The volume 71 survey revisions maintained the four main R&D crosscuts (i.e., type of R&D, field of R&D [previously referred to as field of science and engineering], type of performer, and geographic area) collected previously. However, there were revisions within these crosscuts to ensure consistency with other NCSES surveys. These include revisions to the fields of R&D and the type of performer categories (see Technical Notes, table A-3 for a crosswalk of the fields of science and engineering to the fields of R&D). In addition, new variables were added, such as field of R&D for experimental development (whereas before, the survey participants had only reported fields of R&D [formerly fields of science] for basic research and applied research). Grants and contracts for extramural R&D performers and obligations to University Affiliated Research Centers were also added in volume 71.

Every time new data are released, there may be changes to past years’ data because agencies sometimes update older information or reclassify responses for prior years as additional budget data become available. For trend comparisons, use the historical data from only the most recent publication, which incorporates changes agencies have made in prior year data to reflect program reclassifications or other corrections. Do not use data published earlier.

Data Products

Publications

NCSES publishes data from this survey annually in tables and analytic reports available at Federal Funds for R&D Survey page and in the Science and Engineering State Profiles .

Electronic access

Access to the data for major data elements is available in NCSES’s interactive data tool at https://ncsesdata.nsf.gov/.

Technical Notes

Survey overview, data collection and processing methods, data comparability (changes), and definitions.

Purpose. The annual Survey of Federal Funds for Research and Development (Federal Funds for R&D) is the primary source of information about federal funding for R&D in the United States. The results of the survey are also used in the federal government’s calculation of U.S. gross domestic product at the national and state levels, for policy analysis, and for budget purposes for the Federal Laboratory Consortium for Technology Transfer, the Small Business Innovation Research program, and the Small Business Technology Transfer program. In addition, as of volume 71, the Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions (Federal S&E Support Survey) was integrated into this survey as a module, making Federal Funds for R&D the comprehensive data source on federal science and engineering (S&E) funding to individual academic and nonprofit institutions.

Data collection authority.  The information is solicited under the authority of the National Science Foundation Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010.

Survey contractor. Synectics for Management Decisions, Inc. (Synectics).

Survey sponsor. The National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF).

Frequency. Annual.

Initial survey year. 1951.

Reference period. FYs 2022–23.

Response unit. Federal agencies.

Sample or census. Census.

Population size. For the FYs 2022–23 cycle, a total of 32 federal agencies reported R&D data. (See section “ Survey Design ” for details.)

Sample size. Not applicable; the survey is a census of all federal agencies that conduct R&D programs, excluding the Central Intelligence Agency (CIA).

Target population. The population consists of the federal agencies that conduct R&D programs, excluding the CIA. For the FYs 2022–23 cycle, a total of 32 federal agencies (14 federal departments and 18 independent agencies) reported R&D data.

Sampling frame. The survey is a census of all federal agencies that conduct R&D programs, excluding the CIA. The agencies are identified from information in the president’s budget submitted to Congress. The Analytical Perspectives volume and the “Detailed Budget Estimates by Agency” section of the appendix to the president’s budget identify agencies that receive funding for R&D.

Sample design. Not applicable.

Data collection. Data for FYs 2022–23 (volume 72) were collected by Synectics under contract to NCSES (for a full list of fiscal years canvassed by survey volume reference, see Table A-4 ). Data collection began with an e-mail to each agency to verify the name, phone number, and e-mail address of each agency-level survey respondent. A Web-based data collection system is used for the survey. Because multiple subdivisions of some federal departments completed the survey, there were 72 agency-level respondents: 6 federal departments that reported for themselves, 48 agencies within another 8 federal departments, and 18 independent agencies. However, lower offices could also be authorized to enter data: in Federal Funds for R&D nomenclature, agency-level offices could authorize program offices, program offices could authorize field offices, and field offices could authorize branch offices. When these suboffices are included, there were 725 total respondents: 72 agencies, 95 program offices, 178 field offices, and 380 branch offices.

Since volume 66, each survey cycle collects information for 2 federal government fiscal years: the fiscal year just completed (FY 2022—i.e., 1 October 2021 through 30 September 2022) and the current fiscal year during the start of the survey collection period (i.e., FY 2023). FY 2022 data are completed transactions. FY 2023 data are estimates of congressional appropriation actions and apportionment and reprogramming decisions.
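
The fiscal-year convention used here can be made concrete with a small helper. This is purely illustrative and not part of the survey’s collection system.

```python
from datetime import date

def federal_fiscal_year(d: date) -> int:
    """U.S. federal fiscal year containing calendar date d (FY runs Oct 1 - Sep 30)."""
    return d.year + 1 if d.month >= 10 else d.year

print(federal_fiscal_year(date(2021, 10, 1)))   # 2022 (first day of FY 2022)
print(federal_fiscal_year(date(2022, 9, 30)))   # 2022 (last day of FY 2022)
```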

Data collection began on 10 May 2023, and the requested due date for data submissions was 5 August 2023. Data collection was extended until all surveyed agencies provided complete and final survey data in September 2023.

Mode. Federal Funds for R&D uses a Web-based data collection system. The Web-based system consists of a data collection component that allows survey respondents to enter their data online and a monitoring component that allows the data collection contractor to monitor support requests, data entry, and data issues. The Web-based system’s two components are password protected so that only authorized respondents and staff can access them. However, some agencies submit their data in alternative formats such as Excel files, which are later imported into the Web-based system. All edit and trend checks are accomplished through the Web-based system. Final submission occurs through the Web-based system after all edit failures and trend checks have been resolved.

Response rate. The unit response rate is 100%.

Data checking. Data errors in Federal Funds for R&D are flagged automatically by the Web-based data collection system: respondents cannot submit their final data to NCSES until all required fields have been completed without errors. Once data are submitted, specially written SAS programs are run to check each agency’s submission to identify possible discrepancies, to ensure data from all suboffices are included correctly, and to check that there were no inadvertent shifts in reporting from one year to the next. As always, respondents are contacted to resolve potential reporting errors that cannot be reconciled by the narratives. Explanations of questionable data are noted by the survey respondents for NCSES review.
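
The year-over-year trend check can be illustrated with a short sketch. The code below is not NCSES’s SAS program; it is a hypothetical pandas example that flags agencies whose reported obligations shift sharply between the two most recent fiscal years, with assumed column names and an arbitrary 50% threshold.

```python
import pandas as pd

def flag_trend_breaks(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """df columns assumed: agency, fiscal_year, obligations (actual dollars)."""
    wide = df.pivot(index="agency", columns="fiscal_year", values="obligations")
    prior, current = sorted(wide.columns)[-2:]
    change = (wide[current] - wide[prior]) / wide[prior]
    # Return agencies whose year-over-year change exceeds the threshold for follow-up.
    return wide.loc[change.abs() > threshold].assign(pct_change=change).reset_index()
```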

Imputation. None.

Weighting. None.

Variance estimation. Not applicable.

Sampling error. Not applicable.

Coverage error. Given the existence of a complete list of all eligible agencies, there is no known coverage error. The CIA is purposely excluded.

Nonresponse error. There is no unit nonresponse. To increase item response, agencies are encouraged to estimate when actual data are unavailable. The survey instrument allows respondents to enter data or skip data fields; however, blank fields are not accepted for survey submission, and respondents must either populate the fields with data or with $0 if the question is not applicable. There are several possible sources of nonresponse error by respondents, including inadvertently skipping data fields, skipping data fields when data are unavailable, or entering $0 when specific data are unavailable.

Measurement error. Some measurement problems are known to exist in the Federal Funds for R&D data. Some agencies cannot report the full costs of R&D, the final performer of R&D, or R&D plant data.

For example, the Department of Defense (DOD) does not include headquarters’ costs of planning and administering R&D programs, which are estimated at a fraction of 1% of its total cost. DOD has stated that identification of amounts at this level is impracticable.

The National Institutes of Health (NIH) in the Department of Health and Human Services (HHS) currently has many of its awards in its financial system without any field of R&D code. Therefore, NIH uses an alternate source to estimate its research dollars by field of R&D. NIH uses scientific class codes (based upon history of grant, content of the title, and the name of the awarding institute or center) as an approximation for field of R&D.

Agencies are asked to report the ultimate performer of R&D. However, through past workshops, NCSES has learned that some agencies do not always track their R&D dollars to the ultimate performer of R&D. In the case of transfers to other federal agencies, the originating agency often does not have information on the final disposition of funding made by the receiving agency. Therefore, intragovernmental transfers, which are classified as federal intramural funding, may have some degree of extramural performance. This leads to some degree of misclassification of performers of R&D, but NCSES has not determined the extent of the errors in performer misclassification by the reporting agencies.

Differences in agency and NCSES classification of some performers will also lead to some degree of measurement error. For example, although many university research foundations are legally organized as nonprofit organizations and may be classified as such within a reporting agency’s own system of record, NCSES classifies these as component units of higher education. These classification differences may contribute to differences in findings by the Federal Funds for R&D and the Federal S&E Support Survey in federal agency obligations to both higher education and nonprofit institutions.

R&D plant data are underreported to some extent because of the difficulty some agencies, particularly DOD and NASA, encounter in identifying and reporting these data. DOD’s respondents report obligations for R&D plant that are funded under the agency’s appropriation for construction, but they are able to identify only a small portion of the R&D plant support that is within R&D contracts funded from DOD’s appropriation for research, development, testing, and evaluation (RDT&E). Similarly, NASA respondents cannot separately identify the portions of industrial R&D contracts that apply to R&D plant because these data are subsumed in the R&D data covering industrial performance. NASA R&D plant data for other performing sectors are reported separately.

Data revisions. When completing the current year’s survey, agencies naturally revise their estimates for the last year of the previous report—in this case, FY 2022. Sometimes, survey submissions also reflect reappraisals and revisions in classification of various aspects of agencies’ R&D programs; in those instances, NCSES requests that agencies provide revised prior year data to maintain consistency and comparability with the most recent R&D concepts.

For trend comparisons, use the historical data from only the most recent publication, which incorporates changes agencies have made in prior year data to reflect program reclassifications or other corrections. Do not use data published earlier.

Changes in survey coverage and population. This cycle (volume 72, FYs 2022–23), one department, the Department of Homeland Security (DHS), became the agency respondent instead of continuing to delegate that role to its bureaus; one agency was added as a respondent—the Department of Agriculture’s (USDA’s) Natural Resources Conservation Service; one agency, the Department of Transportation’s Maritime Administration, resumed reporting; and two agencies, the Department of Treasury’s Internal Revenue Service (IRS) and the independent agency the Federal Communications Commission, ceased to report.

Changes in questionnaire.

  • No changes were made to the questionnaire for volume 72.
  • The survey was redesigned for volume 71 (FYs 2021–22). The Federal S&E Support Survey was integrated as the final two questions in the Federal Funds for R&D questionnaire. (NCSES will continue to publish these data separately at https://ncses.nsf.gov/surveys/federal-support-survey/ .)
  • Four other new questions were added to the standard and DOD versions of the questionnaire; the questions covered, for the fiscal year just completed (FY 2021), R&D deobligations (Standard and DOD Question 4), nonfederal R&D obligations by type of agreement (Standard Question 10 and DOD Question 11), R&D obligations provided to other federal agencies (Standard Question 11 and DOD Question 12), and R&D and R&D plant obligations to university affiliated research centers (Standard Question 17 and DOD Question 19). One new question added solely to the DOD questionnaire (DOD Question 6) was about obligations for Small Business Innovation Research and Small Business Technology Transfer for the fiscal year just completed and the current fiscal year at the time of collection (i.e., FYs 2021 and 2022). Many of the other survey questions were reorganized and revised.
  • For volume 71, some changes were made within the questions for consistency with other NCSES surveys. Among the performer categories, federally funded R&D centers (FFRDCs), which in previous volumes were included among the extramural performers, became one of the intramural performers. Other changes include retitling of certain performer categories, where “industry” was changed to “businesses” and “universities and colleges” was changed to “higher education.”
  • For volume 71, “field of R&D” was used instead of the former “field of science and engineering.” The survey started collecting field of R&D information for experimental development obligations; previously, field of R&D information was collected only for research obligations.
  • For volume 71, federal obligations for research performed at higher education institutions, by detailed field of R&D was asked of all agencies. Previously these data had only been collected from the Departments of Agriculture, Defense, Energy, HHS, and Homeland Security; NASA; and NSF. 
  • For volume 71, geographic distribution of R&D obligations was asked of all agencies. Previously, these data had only been collected from the Departments of Agriculture, Commerce, Defense, Energy, HHS, Homeland Security; NASA; and NSF. Agencies are asked to provide the principal location (state or outlying area) of the work performed by the primary contractor, grantee, or intramural organization; assign the obligations to the location of the headquarters of the U.S. primary contractor, grantee, or intramural organization; or, for DOD agencies, list the funds as undistributed for classified funds.
  • For volume 71, collection of data on funding type (stimulus and non-stimulus) was limited to Question 5 on type of R&D.
  • For volume 71, grants and contracts for extramural R&D performers and obligations to University Affiliated Research Centers were added.
  • For volume 70 (FYs 2020–21), agencies were requested to report COVID-19 pandemic-related R&D from the agency’s initial appropriations, as well as from any stimulus funds received from the Coronavirus Aid, Relief, and Economic Security (CARES) Act, plus any other pandemic-related supplemental appropriations. Two tables in the questionnaire were modified to collect the stimulus and non-stimulus amounts separately (tables 1 and 2), and seven tables in the questionnaire (tables 6.1, 6.2, 7.1, 11.1, 11.2, 12.1, and 13.1) were added for respondents to specify stimulus and non-stimulus funding by various categories. The data on stimulus funding is reported in volume 70’s data table 132. The Biomedical Advanced Research and Development Authority accounted for 66% of all COVID-19 R&D in FY 2020; these obligations primarily include transfers to the other agencies to help facilitate execution of contractual awards under Operation Warp Speed.
  • For volume 70 (FYs 2020–21), the optional narrative tables that ask for comparisons of the R&D obligations reported in Federal Funds for R&D with corresponding amounts in the Federal S&E Support Survey (standard questionnaire only) were renumbered from tables 6B and 6C to tables 6A and 6B.
  • In volumes 68 (FYs 2018–19) and 69 (FYs 2019–20), table 6A, which collected information on federal intramural R&D obligations, was deactivated, and agencies were instructed not to complete it.
  • For volumes 66 (FYs 2016–17) and 67 (FYs 2017–18), table 6A (formerly table VI.A) was included, but it was modified so that it no longer collected laboratory names.
  • Starting with volume 66 (FYs 2016–17), the survey collects 2 federal government fiscal years—actual data for the fiscal year just completed and estimates for the current fiscal year. Previously, the survey also collected projected obligations for the next fiscal year based on the president’s budget request to Congress. For volume 66, data were collected for only 2 fiscal years due to the delayed FY 2018 budget formulation process. However, after consultation with data users, NCSES determined that the projections were not as useful as the budget authority data presented in the budget request.
  • In volume 66, the survey table numbering was changed from Roman numerals I–XI and, for selected agencies, the letters A–E, to Arabic numerals 1–16. The order of tables remained the same.
  • In the volume 66 DOD-version of the questionnaire, the definition of major systems development was changed to represent DOD Budget Activities 4 through 6 instead of Budget Activities 4 through 7, and questions relating to funding for Operational Systems Development (Budget Activity 7) were added to the instrument. The survey’s narrative tables 6 and 11 were removed from the DOD-version of the questionnaire.
  • For volume 65 (FYs 2015–17), the survey reintroduced table VI.A to collect information on federal intramural R&D obligations, including the names and addresses of all federal laboratories that received federal intramural R&D obligations. The table was included in both the standard and DOD questionnaires.
  • For volume 62 (FYs 2012–14), the survey added table VI.A to the standard questionnaire for that volume only to collect information on FY 2012 federal intramural R&D obligations, including the names and addresses of all federal laboratories that received federal intramural R&D obligations.
  • In volumes 59 (FYs 2009–11) and 60 (FYs 2010–12), questions relating to funding from the American Recovery and Reinvestment Act of 2009 (ARRA) were added to the data collection instruments. The survey collected separate outlays and obligations for ARRA and non-ARRA sources of funding, by performer and geography for FYs 2009 and 2010.
  • Starting with volume 59 (FYs 2009–11), federal funding data were requested in actual dollars (instead of rounded in thousands, as was done through volume 58).

Changes in reporting procedures or classification.

  • FY 2022. During the volume 72 cycle (FYs 2022–23), NASA revised its FY 2021 data by field of R&D and performer categories based on improved classification procedures developed during the volume 72 reporting period.
  • FY 2021. During the volume 71 cycle (FYs 2021–22), NCSES decided to remove “U.S.” from names like “U.S. Space Force” to conform with other surveys. For Federal Funds for R&D, this change will first appear in the detailed statistical tables.
  • FY 2020. For volume 70 (FYs 2020 and 2021), data include obligations from supplemental COVID-19 pandemic-related appropriations (e.g., CARES Act) plus any other pandemic-related supplemental appropriations.
  • FY 2020. The Department of Energy’s (DOE’s) Naval Reactor Program reclassified some of its R&D obligations from industry-administered FFRDCs to the industry sector.
  • FY 2020. The Department of the Air Force (AF) and the DOE’s Energy Efficiency and Renewable Energy (EERE) partially revised their FY 2019 data. AF revised its operational system development classified program numbers for businesses excluding business or industry-administered FFRDCs, and EERE revised its outlay numbers.
  • FY 2019. For volume 69 (FYs 2019–20), FY 2020 preliminary data do not include obligations from supplemental COVID-19 pandemic-related appropriations (e.g., CARES Act).
  • FY 2019. The Biomedical Advanced Research and Development Authority began reporting. For volume 69 (FYs 2019–20), it could not submit any geographical data, so its data were reported as undistributed on the state tables.
  • FY 2019. The U.S. Agency for Global Media (formerly the Broadcasting Board of Governors), which did not report data between FY 2008 and FY 2018, resumed reporting.
  • FY 2018. Funding for the HHS Centers for Medicare & Medicaid Services (CMS) was reported by the CMS Office of Financial Management at an agency-wide level, instead of at a component level by the CMS Center for Medicare and Medicaid Innovation and its R&D group, the Office of Research, Development, and Information, which had previously reported these data.
  • FY 2018. The Department of State added the Global Health Programs R&D funding.
  • FY 2018. The Department of Veterans Affairs added funds for the Medical Services support to the existing R&D funding to fully report the total cost of intramural R&D. Although the Medical Services do not directly fund specific R&D activities, they host intramural research programs that were not previously reported.
  • FY 2018. DHS’s Countering Weapons of Mass Destruction (CWMD) Office was established on 7 December 2017. CWMD consolidated primarily the Domestic Nuclear Detection Office (DNDO) and a majority of the Office of Health Affairs, as well as other DHS elements. Prior to FY 2018, data reported for the CWMD would have been under the DNDO.
  • FY 2018. DOE revised its FYs 2016 and 2017 data after discovering its Office of Fossil Energy reported “in thousands” instead of actual dollars for volumes 66 (FYs 2016–17) and 67 (FYs 2017–18).
  • FY 2018. USDA’s Economic Research Service (ERS) partially revised its FYs 2009 and 2010 data during the volume 61 (FYs 2011–13) cycle. NCSES discovered a discrepancy that was corrected during the volume 68 cycle, completing the revision.
  • FY 2018. DHS’s Transportation Security Administration, which did not report data between FY 2010 and FY 2017, resumed reporting for volume 68 (FYs 2018–19).
  • FY 2018. DHS’s U.S. Secret Service, which did not report data between FY 2009 and FY 2017, resumed reporting for volume 68 (FYs 2018–19).
  • FY 2018. NCSES discovered that in some past volumes, the obligations reported for basic research in certain foreign countries were greater than the corresponding obligations reported for R&D; the following data were corrected as a result: DOD and Chemical and Biological Defense FY 2003 data, defense agencies and activities FY 2003 and FY 2011 data, AF FY 2009 data, and Department of the Navy FY 2005, FY 2011, and FY 2013 data; DOE and Office of Science FY 2009 data; HHS and Centers for Disease Control and Prevention (CDC) FY 2008 and FY 2017 data; and NSF FY 2001 data. NCSES also discovered that some obligations reported for academic performers were greater than the corresponding obligations reported for total performers, and DOD and AF FY 2009 data, DOE and Fossil Energy FY 1999 data, and NASA FY 2008 data were corrected. Finally, NCSES discovered a problem with FY 2017 HHS CDC personnel costs data, which were then also corrected.
  • FY 2017. The Department of the Treasury’s IRS performed a detailed evaluation and assessment of its programs and determined that none of its functions can be defined as R&D activity as defined in Office of Management and Budget (OMB) Circular A-11. The review included discussions with program owners and relevant contractors who perform work on behalf of the IRS. The IRS also provided a negative response to the OMB data call on R&D under Circular A-11 for the same reference period (FYs 2017–18). Despite no longer having any R&D obligations, the IRS still sponsors an FFRDC, the Center for Enterprise Modernization.
  • FY 2017. NASA estimated that the revised OMB definition for "experimental development" reduced its reported R&D total by about $2.7 billion in FY 2017 and $2.9 billion in FY 2018 from what would have been reported under the previous definition prior to volume 66 (FYs 2016–17).
  • FY 2017. The Patient-Centered Outcomes Research Trust Fund (PCORTF) was established by Congress through the Patient Protection and Affordable Care Act of 2010, signed by the president on 23 March 2010. PCORTF began reporting for volume 67 (FYs 2017–18), but it also submitted data for FYs 2011–16.
  • FY 2017. The Tennessee Valley Authority, which did not report data between FY 1999 and FY 2016, resumed reporting for volume 67 (FYs 2017–18).
  • FY 2017. The U.S. Postal Service, which did not report data between FY 1999 and FY 2016, resumed reporting for volume 67 (FYs 2017–18) and submitted data for FYs 2015–16.
  • FY 2017. During the volume 67 (FYs 2017–18) data collection, DHS’s Science and Technology Directorate revised its FY 2016 data.
  • FY 2016. The Administrative Office of the U.S. Courts began reporting as of volume 66 (FYs 2016–17).
  • Beginning with FY 2016, the totals reported for development obligations and outlays represent a refinement to this category by more narrowly defining it to be “experimental development.” Most notably, totals for development do not include the DOD Budget Activity 7 (Operational System Development) obligations and outlays. Those funds, previously included in DOD’s development totals, support the development efforts to upgrade systems that have been fielded or have received approval for full rate production and anticipate production funding in the current or subsequent fiscal year. Therefore, the data are not directly comparable with totals reported in previous years.
  • Prior to the volume 66 launch, the definitions of basic research, applied research, experimental development, R&D, and R&D plant were revised to match the definitions used by OMB in the July 2016 version of Circular A-11, Section 84 (Schedule C).
  • FYs 2016–17. Before the volume 66 survey cycle, NSF updated the list of foreign performers in Federal Funds R&D to match the list of countries and territories in the Department of State’s Bureau of Intelligence and Research fact sheet of Independent States in the World and fact sheet of Dependencies and Areas of Special Sovereignty. Country lists in volume 66 data tables and later may differ from those in previous reports.
  • FY 2015. The HHS Administration for Community Living (ACL) began reporting in FY 2015, replacing the Administration on Aging, which was transferred to ACL when ACL was established on 18 April 2012. Several programs that serve older adults and people with disabilities were transferred from other agencies to ACL, including a number of programs from the Department of Education due to the 2014 Workforce Innovation and Opportunities Act.
  • FY 2015. The Department of the Interior’s Bureau of Land Management and U.S. Fish and Wildlife Service, which did not report data between FY 1999 and FY 2014, resumed reporting.
  • In January 2014, all Research and Innovative Technology Administration programs were transferred into the Office of the Assistant Secretary for Research and Technology in the Office of the Secretary of Transportation.
  • FY 2014. DHS’s Domestic Nuclear Detection Office began reporting for FY 2014.
  • FY 2014. The Department of State data for FY 2014 were excluded due to their poor quality.
  • FY 2013. NASA revamped its reporting process so that the data for FY 2012 forward are not directly comparable with totals reported in previous years.
  • FY 2012. NASA began reporting International Space Station (ISS) obligations as research rather than R&D plant.
  • Starting with volume 62 (FYs 2012–14), an “undistributed” category was added to the geographic location tables for DOD obligations for which the location of performance is not reported. It includes DOD obligations for industry R&D that were included in individual state totals prior to FY 2012 and DOD obligations for other performers that were not reported prior to FY 2011. This change was applied retroactively to FY 2011 data.
  • Starting with volume 61 (FYs 2011–13), DOD subagencies other than the Defense Advanced Research Projects Agency were reported as an aggregate total under other defense agencies to enable complete reporting of DOD R&D (both unclassified and classified). Consequently, DOD began reporting additional classified R&D not previously reported by its subagencies.
  • FY 2011. USDA’s ERS partially revised its data for FYs 2009 and 2010 during the volume 61 (FYs 2011–13) cycle.
  • FY 2010. NASA resumed reporting ISS obligations as R&D plant.
  • FYs 2000–09. Beginning in FY 2000, AF did not report Budget Activity 6.7 Operational Systems Development data because the agency misunderstood the reporting requirements. During the volume 57 data collection cycle, AF edited prior year data for FYs 2000–07 to include Budget Activity 6.7 Operational Systems Development data. These data revisions were derived from FY 2007 distribution percentages that were then applied backward to revise data for FYs 2000–06.
  • FYs 2006–07. NASA’s R&D obligations decreased by $1 billion. Of this amount, $850 million was accounted for by obligations for operational projects that NASA excluded in FY 2007 but reported in FY 2006. The remainder was from an overall decrease in obligations between FYs 2006 and 2007.
  • FY 2006. NASA reclassified funding for the following items as operational costs: Space Operations, the Hubble Space Telescope, the Stratospheric Observatory for Infrared Astronomy, and the James Webb Space Telescope. This funding was previously reported as R&D plant.
  • FYs 2005–07. Before the volume 55 survey cycle, NSF updated the list of foreign performers in Federal Funds R&D to match the list of countries and territories in the Department of State’s Bureau of Intelligence and Research fact sheet of Independent States in the World and fact sheet of Dependencies and Areas of Special Sovereignty. Area and country lists in volume 55 data tables and later may differ from those in previous reports.
  • FYs 2004–06. NASA implemented a full-cost budget approach, which includes all of the direct and indirect costs for procurement, personnel, travel, and other infrastructure-related expenses relative to a particular program and project. NASA’s data for FY 2004 and later years may not be directly comparable with its data for FY 2003 and earlier years.
  • FY 2004. NIH revised its financial database; beginning with FY 2004, NIH records no longer contain information on the field of S&E. Data for FY 2004 and later years are not directly comparable with data for FY 2003 and earlier years.
  • Data for FYs 2003–06 from the Substance Abuse and Mental Health Services Administration (SAMHSA) are estimates based on SAMHSA's obligations by program activity budget and previously reported funding for development.
  • FY 2003. SAMHSA reclassified some of its funding categories as non-R&D that had been considered to be R&D in prior years.
  • On 25 November 2002, the president signed the Homeland Security Act of 2002, establishing DHS. DHS includes the R&D activities previously reported by the Federal Emergency Management Agency, the Science and Technology Directorate, the Transportation Security Administration, the U.S. Coast Guard, and the U.S. Secret Service.
  • FY 2000. NASA reclassified the ISS as a physical asset, reclassified ISS Research as equipment, and transferred funding for the program from R&D to R&D plant.
  • FY 2000. NIH reclassified as research the activities that it had previously classified as development. NIH data for FY 2000 forward reflect this change. For more information on the classification changes at NASA and NIH, refer to Classification Revisions Reduce Reported Federal Development Obligations (InfoBrief NSF 02-309), February 2002, available at https://www.nsf.gov/statistics/nsf02309 .
  • FYs 1996–98. The lines on the survey instrument for the special foreign currency program and for detailed field of S&E were eliminated beginning with the volume 46 survey cycle. Two tables depicting data on foreign performers by region, country, and agency that were removed before publication of volume 43 were reinstated with volume 46.
  • FYs 1994–96. During the volume 44 survey cycle, the Director for Defense Research and Engineering (DDR&E) at DOD requested that NSF further clarify the true character of DOD’s R&D program, particularly as it compares with other federal agencies, by adding more detail to development obligations reported by DOD respondents. Specifically, DOD requested that NSF allow DOD agencies to report development obligations in two separate categories: advanced technology development and major systems development. An excerpt from a letter written by Robert V. Tuohy, Chief, Program Analysis and Integration at DDR&E, to John E. Jankowski, Program Director, Research and Development Statistics Program, Division of Science Resources Statistics, NSF, explains the reasoning behind the DDR&E request: “The DOD’s R&D program is divided into two major pieces, Science and Technology (S&T) and Major Systems Development. The other federal agencies’ entire R&D programs are equivalent in nature to DOD’s S&T program, with the exception of the Department of Energy and possibly NASA. Comparing those other agency programs to DOD’s program, including the development of weapons systems such as F-22 Fighter and the New Attack Submarine, is misleading.”
  • FYs 1990–92. Since volume 40, DOD has reported research obligations and development obligations separately. Tables reporting obligations for research, by state and performer, and obligations for development, by state and performer, were specifically created for DOD. Circumstances specific to DOD are (1) DOD funds the preponderance of federal development and (2) DOD development funded at institutions of higher education is typically performed at university-affiliated nonacademic laboratories, which are separate from universities’ academic departments, where university research is typically performed.

Agency and subdivision. An agency is an organization of the federal government whose principal executive officer reports to the president. The Library of Congress and the Administrative Office of the U.S. Courts are also included in the survey, even though the chief officer of the Library of Congress reports to Congress and the U.S. Courts are part of the judicial branch. Subdivision refers to any organizational unit of a reporting agency, such as a bureau, division, office, or service.

Development. See R&D and R&D plant.

Fields of R&D (formerly fields of science and engineering). A list of the 41 fields of R&D reported on can be found on the survey questionnaire. In the data tables, the fields are grouped into 9 major areas: computer and information sciences; geosciences, atmospheric sciences, and ocean sciences; life sciences; mathematics and statistics; physical sciences; psychology; social sciences; engineering; and other fields. Table A-3 provides a crosswalk of the fields of science and engineering used in volume 70 and earlier surveys to the revised fields of R&D collected under volume 71.

Federal obligations for research performed at higher education institutions, by detailed field of R&D. As of volume 71, all respondents were required to report these obligations. Previously, this information was reported by seven agencies (the Departments of Agriculture, Defense, Energy, Health and Human Services, and Homeland Security; NASA; and NSF).

Geographic distribution of R&D obligations. As of volume 71, all respondents were required to respond to this portion of the survey. Previously, the 11 largest R&D funding agencies responded to this portion (the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, the Interior, and Transportation; the Environmental Protection Agency; NASA; and NSF). Respondents are asked to provide the principal location (state or outlying area) of the work performed by the primary contractor, grantee, or intramural organization; assign the obligations to the location of the headquarters of the U.S. primary contractor, grantee, or intramural organization; or list the funds as undistributed.
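This assignment can be read as a simple precedence order. The following is a minimal illustrative sketch, not part of the survey instrument; the function and field names are hypothetical, and classified DOD funds are shown as one example of obligations reported as undistributed.

```python
from typing import Optional

def assign_reporting_location(place_of_performance: Optional[str],
                              headquarters_location: Optional[str],
                              is_classified_dod: bool = False) -> str:
    """Illustrative location-assignment rule for R&D obligations.

    Mirrors the ordering described above: prefer the principal place of
    performance, fall back to the headquarters of the U.S. primary
    contractor/grantee/intramural organization, and otherwise (or for
    classified DOD funds) report the obligation as undistributed.
    """
    if is_classified_dod:
        return "undistributed"
    if place_of_performance:
        return place_of_performance
    if headquarters_location:
        return headquarters_location
    return "undistributed"

# Example: work performed mainly in New Mexico is assigned to New Mexico,
# while a classified DOD obligation is listed as undistributed.
print(assign_reporting_location("NM", "VA"))        # -> NM
print(assign_reporting_location(None, "VA"))        # -> VA
print(assign_reporting_location("NM", "VA", True))  # -> undistributed
```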

Obligations and outlays. Obligations represent the amounts for orders placed, contracts awarded, services received, and similar transactions during a given period, regardless of when funds were appropriated and when future payment of money is required. Outlays represent the amounts for checks issued and cash payments made during a given period, regardless of when funds were appropriated.
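As a purely illustrative example of the distinction, consider a hypothetical $3 million contract awarded in FY 2022 and paid out over two fiscal years: the obligation is recorded entirely in FY 2022, while the outlays follow the cash payments. The figures and code below are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical award: a $3.0 million contract is obligated in FY 2022,
# but the cash is disbursed over two fiscal years.
transactions = [
    {"fy": 2022, "obligation": 3_000_000, "outlay": 1_000_000},
    {"fy": 2023, "obligation": 0,         "outlay": 2_000_000},
]

obligations_by_fy = defaultdict(int)
outlays_by_fy = defaultdict(int)
for t in transactions:
    obligations_by_fy[t["fy"]] += t["obligation"]  # recorded when the order/contract is placed
    outlays_by_fy[t["fy"]] += t["outlay"]          # recorded when payments are actually made

print(dict(obligations_by_fy))  # {2022: 3000000, 2023: 0}
print(dict(outlays_by_fy))      # {2022: 1000000, 2023: 2000000}
```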

Performer. A group or organization carrying out an operational function, or an extramural organization or person receiving support or providing services under a contract or grant.

  • Intramural performers are agencies of the federal government, including federal employees who work on R&D both onsite and offsite and, as of volume 71, FFRDCs.
  • Federal. The work of agencies of the federal government is carried out directly by agency personnel. Obligations reported under this category are for activities performed or to be performed by the reporting agency itself or are for funds that the agency transfers to another federal agency for performance of R&D (intragovernmental transfers). Although the receiving agency may obligate these funds to extramural performers (businesses, universities and colleges, other nonprofit institutions, FFRDCs, nonfederal government, and foreign) they are reported as part of the federal sector by the originating agency. Federal activities cover not only actual intramural R&D performance but also the costs associated with administration of intramural R&D programs and extramural R&D procurements by federal personnel. Intramural activities also include the costs of supplies and off-the-shelf equipment (equipment that has gone beyond the development or prototype stage) procured for use in intramural R&D. For example, an operational launch vehicle purchased from an extramural source by NASA and used for intramural performance of R&D is reported as a part of the cost of intramural R&D.
  • Federally funded research and development centers (FFRDCs)—R&D-performing organizations that are exclusively or substantially financed by the federal government and are supported by the federal government either to meet a particular R&D objective or in some instances to provide major facilities at universities for research and associated training purposes. Each center is administered by an industrial firm, a university, or another nonprofit institution (see https://www.nsf.gov/statistics/ffrdclist/ for the Master Government List of FFRDCs maintained by NSF).
  • Extramural performers are organizations outside the federal sector that perform R&D with federal funds under contract, grant, or cooperative agreement. Only costs associated with actual R&D performance are reported. Types of extramural performers:
  • Businesses (previously "Industry or industrial firms")—Organizations that may legally distribute net earnings to individuals or to other organizations.
  • Higher education institutions (previously "Universities and colleges")—Institutions of higher education in the United States that engage primarily in providing resident or accredited instruction for not less than a 2-year program above the secondary school level that is acceptable for full credit toward a bachelor's degree, or that provide not less than a 1-year program of training above the secondary school level that prepares students for gainful employment in a recognized occupation. Included are colleges of liberal arts; schools of arts and sciences; professional schools, as in engineering and medicine, including affiliated hospitals and associated research institutes; and agricultural experiment stations. Other examples of universities and colleges include community colleges, 4-year colleges, universities, and freestanding professional schools (medical schools, law schools, etc.).
  • Other nonprofit institutions—Private organizations other than educational institutions whose net earnings do not benefit either private stockholders or individuals, and other private organizations organized for the exclusive purpose of turning over their entire net earnings to such nonprofit organizations. Examples of nonprofit institutions include foundations, trade associations, charities, and research organizations.
  • State and local governments—State and local government agencies, excluding state or local universities and colleges, agricultural experiment stations, medical schools, and affiliated hospitals. (Federal R&D funds obligated directly to such state and local institutions are excluded in this category. However, they are included under the universities and colleges category in this report.) R&D activities under the state and local governments category are performed either by the state or local agencies themselves or by other organizations under grants or contracts from such agencies. Regardless of the ultimate performer, federal R&D funds directed to state and local governments are reported only under this sector.
  • Non-U.S. performers (previously "Foreign performers")—Other nations' citizens, organizations, universities and colleges, and governments, as well as international organizations located outside the United States, that perform R&D. In most cases, foreigners performing R&D in the United States are not reported here. Excluded from this category are U.S. agencies, U.S. organizations, or U.S. citizens performing R&D abroad for the federal government. Examples of foreign performers include the North Atlantic Treaty Organization, the United Nations Educational, Scientific, and Cultural Organization, and the World Health Organization. An exception in the past was made in the case of U.S. citizens performing R&D abroad under special foreign-currency funds; these activities were included under the foreign performers category but have not been collected since the mid-1990s.
  • Private individuals—When an R&D grant or contract is awarded directly to a private individual, obligations incurred are placed under the category businesses.

R&D and R&D plant. Amounts for R&D and R&D plant include all direct, incidental, or related costs resulting from, or necessary to, performance of R&D and costs of R&D plant as defined below, regardless of whether R&D is performed by a federal agency (intramurally) or by private individuals and organizations under grant or contract (extramurally). R&D excludes routine product testing, quality control, mapping and surveys, collection of general-purpose statistics, experimental production, and the training of scientific personnel.

  • Research is defined as systematic study directed toward fuller scientific knowledge or understanding of the subject studied. Research is classified as either basic or applied, according to the objectives of the sponsoring agency.
  • Basic research is defined as experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts. Basic research may include activities with broad or general applications in mind, such as the study of how plant genomes change, but should exclude research directed toward a specific application or requirement, such as the optimization of the genome of a specific crop species.
  • Applied research is defined as original investigation undertaken in order to acquire new knowledge. Applied research is, however, directed primarily toward a specific practical aim or objective.
  • Development , also known as experimental development, is defined as creative and systematic work, drawing on knowledge gained from research and practical experience, which is directed at producing new products or processes or improving existing products or processes. Like research, experimental development will result in gaining additional knowledge.

For reporting experimental development activities, the following are included:

The production of materials, devices, and systems or methods, including the design, construction, and testing of experimental prototypes.

Technology demonstrations, in cases where a system or component is being demonstrated at scale for the first time, and it is realistic to expect additional refinements to the design (feedback R&D) following the demonstration. However, not all activities that are identified as “technology demonstrations” are R&D.

However, experimental development excludes the following:

User demonstrations where the cost and benefits of a system are being validated for a specific use case. This includes low-rate initial production activities.

Pre-production development, which is defined as non-experimental work on a product or system before it goes into full production, including activities such as tooling and development of production facilities.

To better differentiate between the part of the federal R&D budget that supports science and key enabling technologies (including technologies for military and nondefense applications) and the part that primarily supports testing and evaluation (mostly of defense-related systems), NSF collects development dollars from DOD in two categories: advanced technology development and major systems development.

DOD uses RDT&E Budget Activities 1–7 to classify data into the survey categories. Within DOD’s research categories, basic research is classified as Budget Activity 1, and applied research is classified as Budget Activity 2. Within DOD’s development categories, advanced technology development is classified as Budget Activity 3. Starting in volume 66, major systems development is classified as Budget Activities 4–6 instead of Budget Activities 4–7 and includes advanced component development and prototypes, system development and demonstration, and RDT&E management support; data on Budget Activity 7, operational systems development, are collected separately. (Note: As a historical artifact from previous DOD budget authority terminology, funds for Budget Activity categories 1 through 7 are sometimes referred to as 6.1 through 6.7 monies.)
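The mapping described in this paragraph can be summarized as a small lookup. The sketch below is illustrative only and simply restates the classification above; the function name and the volume-number parameter are hypothetical conveniences, not part of the survey instrument.

```python
def classify_dod_budget_activity(budget_activity: int, volume: int = 72) -> str:
    """Map a DOD RDT&E Budget Activity (1-7) to the survey's R&D categories.

    BA 1 = basic research, BA 2 = applied research, BA 3 = advanced technology
    development, BA 4-6 = major systems development (BA 4-7 before volume 66),
    and BA 7 = operational systems development, collected separately since
    volume 66.
    """
    if budget_activity == 1:
        return "basic research"
    if budget_activity == 2:
        return "applied research"
    if budget_activity == 3:
        return "advanced technology development"
    if volume >= 66:
        if 4 <= budget_activity <= 6:
            return "major systems development"
        if budget_activity == 7:
            return "operational systems development (collected separately)"
    elif 4 <= budget_activity <= 7:
        return "major systems development"
    raise ValueError(f"Unknown budget activity: {budget_activity}")

print(classify_dod_budget_activity(5))             # major systems development
print(classify_dod_budget_activity(7))             # operational systems development (collected separately)
print(classify_dod_budget_activity(7, volume=65))  # major systems development
```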

  • Demonstration includes amounts for activities that are part of R&D (i.e., that are intended to prove or to test whether a technology or method does in fact work). Demonstrations intended primarily to make information available about new technologies or methods are excluded.
  • R&D plant is defined as spending on both R&D facilities and major equipment as defined in OMB Circular A-11 Section 84 (Schedule C) and includes physical assets, such as land, structures, equipment, and intellectual property (e.g., software or applications) that have an estimated useful life of 2 years or more. Reporting for R&D plant includes the purchase, construction, manufacture, rehabilitation, or major improvement of physical assets regardless of whether the assets are owned or operated by the federal government, states, municipalities, or private individuals. The cost of the asset includes both its purchase price and all other costs incurred to bring it to a form and location suitable for use.
  • For reporting construction of R&D facilities and major movable R&D equipment, include the following:

Construction of facilities that are necessary for the execution of an R&D program. This may include land, major fixed equipment, and supporting infrastructure such as a sewer line or housing at a remote location. Many laboratory buildings will include a mixture of R&D facilities and office space. The fraction of the building that is considered to be used for R&D may be calculated based on the percentage of square footage that is used for R&D (a proration sketch follows this list).

Acquisition, design, or production of major movable equipment, such as mass spectrometers, research vessels, DNA sequencers, and other movable major instrumentation for use in R&D activities.

Programs of $1 million or more that are devoted to the purchase or construction of R&D major equipment.

Exclude the following:

Construction of other non-R&D facilities.

Minor equipment purchases, such as personal computers, standard microscopes, and simple spectrometers (report these costs under total R&D, not R&D Plant).

Obligations for foreign R&D plant are limited to federal funds for facilities that are located abroad and used in support of foreign R&D.
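As a minimal sketch of the square-footage proration mentioned in the inclusion list above, the hypothetical helper below splits a mixed-use building's construction cost in proportion to the floor space used for R&D; the function name and figures are invented for illustration.

```python
def prorate_rd_plant_cost(construction_cost: float,
                          rd_square_feet: float,
                          total_square_feet: float) -> float:
    """Prorate a mixed-use building's construction cost to R&D plant.

    Illustrative only: the share of the building reported as R&D plant equals
    the share of floor space used for R&D.
    """
    if total_square_feet <= 0:
        raise ValueError("total_square_feet must be positive")
    rd_fraction = rd_square_feet / total_square_feet
    return construction_cost * rd_fraction

# Example: a $40 million laboratory building with 60% of its floor space used
# for R&D would contribute $24 million to reported R&D plant.
print(prorate_rd_plant_cost(40_000_000, 60_000, 100_000))  # 24000000.0
```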

General Notes

These tables present the results of volume 72 (FYs 2022–23) of the Survey of Federal Funds for Research and Development. This annual census, completed by the federal agencies that conduct research and development (R&D) programs, is the primary source of information about federal funding for R&D in the United States. Actual data are collected for the fiscal year just completed; estimates are obtained for the current fiscal year.

Acknowledgments and Suggested Citation


Christopher V. Pece of the National Center for Science and Engineering Statistics (NCSES) developed and coordinated this report under the guidance of Amber Levanon Seligson, NCSES Program Director, and the leadership of Emilda B. Rivers, NCSES Director; Christina Freyman, NCSES Deputy Director; and John Finamore, NCSES Chief Statistician. Gary Anderson and Jock Black (NCSES) reviewed the report.

Under contract to NCSES, Synectics for Management Decisions, Inc. conducted the survey and prepared the statistics for this report. Synectics staff members who made significant contributions include LaVonda Scott, Elizabeth Walter, Suresh Kaja, Peter Ahn, and John Millen.

NCSES thanks the federal agency staff that provided information for this report.

National Center for Science and Engineering Statistics (NCSES). 2024. Federal Funds for Research and Development: Fiscal Years 2022–23. NSF 24-321. Alexandria, VA: National Science Foundation. Available at https://ncses.nsf.gov/surveys/federal-funds-research-development/2022-2023#data


  • Open access
  • Published: 03 April 2024

Perception, practice, and barriers toward research among pediatric undergraduates: a cross-sectional questionnaire-based survey

Canyang Zhan & Yuanyuan Zhang

BMC Medical Education, volume 24, Article number: 364 (2024)


Background

Scientific research activities are crucial for the development of clinician-scientists. However, little attention has been paid to the current situation of medical research among pediatric medical students in China. This study aims to assess the perceptions, practices and barriers toward medical research of pediatric undergraduates.

Methods

This cross-sectional study was conducted among third-year, fourth-year and fifth-year pediatric students from Zhejiang University School of Medicine in China via an anonymous online questionnaire. Questionnaires were also received from fifth-year students majoring in other medicine programs [clinical medicine ("5 + 3") and clinical medicine (5-year)].

Results

The response rate of pediatric undergraduates was 88.3% (68/77). The total sample of students enrolled in the study was 124, including 36 students majoring in clinical medicine ("5 + 3") and 20 students majoring in clinical medicine (5-year). Most students from pediatrics ("5 + 3") recognized that research was important. Practices in scientific research activities were not satisfactory. A total of 51.5%, 35.3% and 36.8% of the pediatric students participated in research training, research projects and scientific article writing, respectively. Only 4.4% of the pediatric students contributed to publishing a scientific article, and 14.7% had attended medical congresses. None of them had given a presentation at a congress. When compared with fifth-year students in the other medicine programs, the frequency of practices toward research projects and training was lower in the pediatric fifth-year students. Lack of time, lack of guidance and lack of training were perceived as the main barriers to scientific work. Limited English was another obvious barrier for pediatric undergraduates. Pediatric undergraduates preferred to participate in clinical research (80.9%) rather than basic research.

Conclusions

Although pediatric undergraduates recognized the importance of medical research, interest and practices in research still require improvement. Lack of time, lack of guidance, lack of training and limited English were the common barriers to scientific work. Therefore, research training and English improvement were recommended for pediatric undergraduates.


Background

Medical education includes both the learning of basic clinical medical knowledge and the cultivation of scientific research abilities. Scientific research, an essential part of medical education, is increasingly important, as it can greatly improve medical care [1, 2]. Scientific research activities are crucial for the development of clinician-scientists, who have key roles in clinical research and translational medicine. Therefore, medical education is placing increasing emphasis on the cultivation of scientific research abilities. Strengthening scientific research training helps students develop independent critical thinking, improve their observational abilities, and foster problem-solving skills. It has been suggested that developing undergraduate research benefits the students, the faculty mentors, the university or institution, and eventually society [2, 3]. As a result, there is a growing trend to integrate scientific research training into undergraduate medical education, and early exposure to scientific research has been recommended for undergraduate medical students [4, 5]. In fact, an international questionnaire study showed that among 1625 responses collected from 38 countries, less than half (42.7%) agreed or strongly agreed that their medical schools provided "sufficient training in medical research" [6]. Training and practice in medical research among undergraduates are therefore far from universal. In China, little attention has been paid to the current situation of medical research among undergraduates, especially pediatric medical students.

Due to changes in China's birth policy (the two-child policy in 2016 and the three-child policy in 2021), child health needs are increasing [7]. The shortage of pediatricians in China is alarming. Therefore, numerous policies have been implemented to meet this challenge, including reinstating pediatrics as an independent discipline in medical school enrollment and increasing pediatric enrollment. The number of pediatricians in China has increased year by year, from 118,500 in 2015 (0.52 pediatricians per 1000 children under the age of 14) to 206,000 in 2021 (0.78 pediatricians per 1000 children under the age of 14). With the increase in pediatric enrollment, pediatric medical education is facing new challenges. It is urgent to study the current situation of the cultivation of pediatric medical students, one aspect of which is scientific research ability [8, 9]. However, given the particular background of pediatrics, very little is known about the perception, practice and barriers toward medical research among pediatric undergraduates. The purpose of this study was to address this gap by assessing the practices, perceptions and barriers toward medical research of pediatric undergraduates at Zhejiang University. The results can help improve the cultivation of scientific research abilities among pediatric medical students.

Methods

The study was conducted from March to April 2023. The study was approved by the Ethics Review Committee of the Children's Hospital of Zhejiang University School of Medicine and was undertaken in accordance with the Declaration of Helsinki. Participants provided written informed consent upon applying to participate in the study.

Study design and setting

This cross-sectional study was conducted via an online questionnaire that was administered to all students simultaneously. The study aimed to investigate the perception, practices and barriers toward research among pediatric undergraduates from Zhejiang University School of Medicine, and to investigate the differences in research among undergraduate students from clinical medicine ("5 + 3" integrated program, pediatrics) [pediatrics ("5 + 3")], clinical medicine ("5 + 3" integrated program) [clinical medicine ("5 + 3")] and clinical medicine (5-year).

The clinical medicine programs of Zhejiang University School of Medicine (ZUSM) include a 5-year program, a "5 + 3" integrated program, and an 8-year MD program. The clinical medicine (5-year) program is the basis of clinical medicine education; graduates need to complete 3 years of standardized residency training to become doctors. The clinical medicine ("5 + 3") model combines 5-year medical undergraduate education, 3-year standardized residency training and postgraduate education. Since 2015, 20 to 30 students interested in pediatrics have been selected each year from second-year clinical medicine ("5 + 3") undergraduates to continue their studies in pediatrics ("5 + 3"). In 2019, ZUSM established an independent pediatrics ("5 + 3") program, which has enrolled 20 to 30 students directly every year.

Participants

All of the third-, fourth-, and fifth-year undergraduate students in pediatrics (“5 + 3”) and some of the fifth-year undergraduate students from clinical medicine (“5 + 3”) and clinical medicine (5-year) who expressed an interest in participating in the study were enrolled.

Data collection

The questionnaire was self-designed after reviewing the literature and consulting senior faculty. For the purpose of testing its clarity and reliability, the questionnaire was pilot tested among 36 undergraduate students. Their feedback was mainly related to the structure of the questionnaire. To address these comments, the questionnaire was modified to reach the final draft, which was distributed to the student sample included in the study. The reliability coefficient was assessed by Cronbach’s alpha, and the validity was evaluated by Kaiser-Meyer-Olkin (KMO).

The questionnaire used in this study consists of four sections:

The first part covered 3 statements (gender, grade and major).

The second part examined the participants’ perceptions of medical research, including 5 statements (importance, enhancement of competitiveness, practising thinking ability, solving clinical problems, and being interesting).

The third part examined practices in medical research, including 6 statements (participating in a research project, receiving research training, writing a paper, publishing a paper, attending an academic conference, and presenting at a conference).

The barriers to medical research were assessed in the last part, including 7 statements.

Perception and barriers toward medical research were evaluated using a five-point Likert scale ranging from 1 to 5 (1 = strongly disagree; 2 = disagree, 3 = uncertain, 4 = agree, 5 = strongly agree).

Statistical analysis

Categorical data are represented as numbers and frequencies. For ease of reporting and analyzing the data, the responses of "agree" and "strongly agree" were grouped and reported as agreement, and "disagree" and "strongly disagree" were grouped as disagreement. The chi-square test was used to test differences in the frequency of participation in research practices. Students' perception scores across grades were analyzed using Fisher's exact test, and attitudes across years of study were compared by ANOVA or a nonparametric test (Kruskal-Wallis H test). The statistical analysis was performed using IBM SPSS version 26. P < 0.05 was considered significant.
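The authors ran these tests in SPSS. As a purely illustrative sketch of the same workflow, the Python snippet below collapses hypothetical five-point Likert responses into agreement categories, applies a chi-square test to the resulting contingency table, and compares raw scores with a Kruskal-Wallis H test; all data, column names, and group labels here are invented.

```python
import pandas as pd
from scipy import stats

# Hypothetical Likert responses (1-5) for one perception item, by program.
df = pd.DataFrame({
    "program": ["pediatrics"] * 6 + ["clinical_5plus3"] * 6,
    "response": [5, 4, 3, 4, 2, 5, 4, 4, 5, 3, 4, 2],
})

# Collapse the five-point scale into agreement / uncertainty / disagreement,
# mirroring the grouping described above.
def collapse(x: int) -> str:
    if x >= 4:
        return "agree"
    if x <= 2:
        return "disagree"
    return "uncertain"

df["grouped"] = df["response"].map(collapse)

# Chi-square test on the frequency of grouped responses by program.
table = pd.crosstab(df["program"], df["grouped"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")

# Kruskal-Wallis H test comparing raw Likert scores across programs.
groups = [g["response"].to_numpy() for _, g in df.groupby("program")]
h, p_kw = stats.kruskal(*groups)
print(f"H={h:.2f}, p={p_kw:.3f}")
```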

Results

The reliability coefficient of the questionnaire was assessed by Cronbach's alpha; it was 0.73 for perception and 0.78 for barriers. The KMO measure was 0.80 for perception (Bartlett's sphericity test: χ2 = 200.4, p < 0.001) and 0.73 for barriers (Bartlett's sphericity test: χ2 = 278.4, p < 0.001), indicating the appropriateness of the factor analysis. The factor analysis was carried out using principal component analysis with varimax rotation. For perception, a single factor explained 58.2% of the variance; for barriers, a two-factor solution explained 60.2% of the variance.
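For readers unfamiliar with the reliability statistic, the short sketch below computes Cronbach's alpha from a respondents-by-items matrix using its standard formula; the example data are hypothetical, and the published values above came from the authors' own SPSS analysis.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score),
    using sample variances (ddof=1).
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 6 students to the 5 perception items (1-5 scale).
responses = np.array([
    [5, 4, 4, 5, 3],
    [4, 4, 3, 4, 3],
    [5, 5, 4, 5, 4],
    [3, 3, 2, 3, 2],
    [4, 4, 4, 4, 3],
    [2, 3, 2, 3, 2],
])
print(round(cronbach_alpha(responses), 2))
```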

The response rate was 79.2% (19/24) among third-year, 88% (22/25) among fourth-year, and 96.4% (27/28) among fifth-year students in pediatrics ("5 + 3"), for a total response rate of 88.3% (68/77). The number of fifth-year students majoring in clinical medicine ("5 + 3") and clinical medicine (5-year) was 36 and 20, respectively. Thus, a total of 124 students participated in the questionnaire. Among the participants, approximately 46% were male and 54% were female.

Perception regarding scientific research among the students majoring in pediatrics (“5 + 3”)

The majority of students in pediatrics ("5 + 3") recognized that research was important (92.6%), for example because it increases competitiveness, helps solve clinical problems and improves thinking (Fig. 1). Approximately half of the students in pediatrics ("5 + 3") were interested in research.

Figure 1. Perception regarding scientific research among the students majoring in pediatrics

Among the third-, fourth-, and fifth-year students in pediatrics (“5 + 3”), there was a significant difference in the effect of research on thinking ability (Table  1 ). A stronger understanding of the importance of research for thinking abilities was found in students from the fifth year.

Comparing the perception of medical research among the fifth-year students from the different medicine programs, there was a significant difference in the interest in research (Table  2 ). The fifth-year undergraduates from clinical medicine (5-year) received the highest score for interest in scientific research, followed by pediatrics (“5 + 3”).

Practices regarding scientific research among students majoring in pediatrics (“5 + 3”)

More than half of the students in pediatrics (“5 + 3”) participated in research training. Approximately 36.8% of them were involved in writing scientific articles, and 35.3% participated in research projects (Table  3 ). Only 4.4% of the students in pediatrics (“5 + 3”) contributed to publishing a scientific article, and 14.7% of the students in pediatrics (“5 + 3”) had attended medical congresses. However, none of the students had made a presentation at congresses.

A statistically significant difference was observed among different grades in the pediatrics (“5 + 3”) program, with fifth-year students having a much higher rate of participation in conferences. However, no significant differences were observed in other forms of medical research practices.

When compared with fifth-year students from the other programs (clinical medicine "5 + 3" or 5-year), the students in pediatrics ("5 + 3") had a lower rate of participation in research projects (Table 4). The rate of participation in research training among the pediatric students was also lower than that of clinical medicine (5-year) students (44.44% vs. 75%). There were no significant differences in other research practices, such as writing articles and attending congresses.

Barriers regarding scientific research among the students majoring in pediatrics (“5 + 3”)

The most common barriers to research work for pediatric students were lack of training (85.3%), lack of time (83.9%), and lack of mentorship (82.4%).

However, the top three barriers to research work in fifth-year pediatric students were lack of training (96.3%), limited English (88.89%) and lack of time (88.89%). We found that the barrier of "lack of training" became increasingly apparent with grade and was significantly more pronounced in fifth-year pediatric students than in the other grades (Table 5). The other barriers showed no significant differences among the three grades in the pediatrics ("5 + 3") program.

When compared with fifth-year students from other programs (clinical medicine “5 + 3” or 5-year), the rate of agreement about the barrier of “limited English” was significantly higher in fifth-year students from the pediatrics (“5 + 3”) program. There were no significant differences in other barriers among fifth-year students from different majors (Table  6 ).

Types of research activities that students majoring in pediatrics ("5 + 3") are willing to be involved with in the future

A total of 88.2% of students in pediatrics ("5 + 3") wanted to participate in scientific research training. Furthermore, when asked about the type of future scientific research activities, 80.9% of students wanted to participate in clinical research, and only 19.1% of students wanted to be involved in basic research. There was no significant difference among the different grades of students from the pediatrics ("5 + 3") program (Fig. 2A).

Figure 2. Types of research activities that students majoring in pediatrics are willing to be involved with in the future (A). Types of research activities that the students from different programs are willing to be involved with in the future (B). When compared with students in clinical medicine ("5 + 3"), fifth-year students in pediatrics ("5 + 3") were significantly less likely to participate in basic research (*P = 0.001)

Compared with students in clinical medicine (“5 + 3”), fifth-year students in pediatrics (“5 + 3”) were significantly less likely to participate in basic research (Fig.  2 B).

Discussion

In China, to solve the shortage of pediatricians, pediatric programs have resumed in some medical schools, including Zhejiang University, in recent years. In this study, we focused on the perceptions, practices and barriers to scientific research in pediatric undergraduates from Zhejiang University.

With global progress, more research is required to advance knowledge and innovation in all fields. Likewise, research skills are now highly important for medical practitioners. Medical students are encouraged to take an active part in scientific research and to prepare for today's knowledge-driven world [2]. In the current study, we found an overall positive perception of scientific research among pediatric undergraduates. More than 90% of pediatric students agreed ("strongly agree" and "agree") that scientific research was important, for example because it could make them more competitive and improve their thinking.

Although the students had a positive perception of medical research, their practice of conducting research remained unsatisfactory. Compared with the fifth-year undergraduates from clinical medicine ("5 + 3") (66.67%) and clinical medicine (5-year) (75%), only 33.33% of the fifth-year undergraduates in pediatrics ("5 + 3") had participated in scientific research projects. The publication rate was also very low (0 among third-year, 4.5% among fourth-year and 7.4% among fifth-year pediatrics ("5 + 3") students), far lower than the publication rate of final-year students in the United States (46.5%) and Australia (roughly one-third) [10, 11]. In a study in Romania, 31% of fifth-year students declared that they had prepared a scientific presentation for a medical congress at least once [12]; by contrast, none of the students in our study had presented a paper at a scientific forum. A study in India also found that only 5% of undergraduate students had presented a paper at a scientific forum and only 5.6% had published [13]. As part of the curriculum, some Indian universities require postgraduates to present papers and submit manuscripts for publication. Nevertheless, undergraduates' practice of scientific research is still relatively poor. Lack of time, lack of guidance and lack of training for research careers were found to be the major obstacles to medical research for both pediatric students and others, which is consistent with previous reports [5, 14, 15]. A questionnaire study of residents also found that lack of time was a critical problem for scientific research [16]. There is no common approach to solving this difficulty. In the literature, it is usually recommended that scientific research training be integrated into the curricular requirements for undergraduates or into residency programs for residents [7, 14, 17, 18]. An increasing number of medical schools have individual projects as a component of their curriculum or mandatory medical research projects to develop research competencies [19, 20].

Interestingly, among fifth-year pediatric undergraduates ("5 + 3"), limited English was found to be one of the most common barriers. This barrier became more pronounced as the grade increased among pediatric students. We speculate that this is related to a growing awareness of the importance of scientific research and greater participation in scientific research activities, which increase the demand for reading English literature and writing English articles. Furthermore, the limited-English barrier was more pronounced for pediatric students than for students from clinical medicine ("5 + 3") and clinical medicine (5-year); they are worried about academic English. Horwitz et al. first proposed the concept of "foreign language anxiety" [21]. Deng and Zhou explored medical students' medical English anxiety in Sichuan, China, and found that 85.2% of the students surveyed suffered moderate or above medical English anxiety [22]. In our questionnaire, 88.89% of the fifth-year pediatric students believed that limited English was one of the most important barriers to scientific research. Currently, English is the chief language of communication in the field of medical science, including correspondence, conferences, writing scientific articles, and reading literature. Ma Y noted that medical English should be the most important component of college English teaching for medical students [23]. At Zhejiang University, all of the students, including those majoring in pediatrics ("5 + 3"), clinical medicine ("5 + 3") and clinical medicine (5-year), take a medical English course during the undergraduate period. However, this course alone cannot satisfy the demands of scientific research, such as reading English literature, writing English papers and giving oral presentations in English. To address this barrier, we suggest assessing pediatric students' requirements for medical English learning and offering more medical English courses or English writing training for pediatric students. Furthermore, undergraduates should be encouraged to participate in local, regional or national conferences conducted in Chinese rather than English, which can increase their interest in participating in scientific research.

Most of the pediatric students tended to choose clinical research, while only 19.1% wanted to engage in basic research. The proportion of fifth-year students in pediatrics ("5 + 3") choosing basic research was much lower than that of students from the clinical medicine ("5 + 3") program. We speculate that, compared with doctors from other clinical departments, pediatricians in China usually carry a heavier clinical workload and have relatively less scientific research experience, so they are more likely to focus on clinical research. The students in pediatrics might also not obtain sufficient scientific guidance from their clinician teachers compared with those from other medicine programs. According to these data, the Pediatric College could conduct more scientific research training directed at clinical research, such as the design, conduct and administration of clinical trials. A simulation-based clinical research curriculum is considered a better approach to training clinician-scientists than traditional clinical research teaching [24]. On the other hand, we might need to do more to improve pediatric undergraduates' interest in basic research.

The major limitation of the present study is the small sample size: only 20 to 30 students are enrolled in the pediatrics ("5 + 3") program at ZUSM each year. Multicenter studies involving multiple medical schools would therefore be better suited to characterizing the perception, practice, and barriers of medical research among pediatric undergraduates. Even so, the findings of this study indicate that lack of time, lack of guidance, lack of training, and limited English may be common barriers to scientific work for pediatric undergraduates. In future work, a questionnaire for teachers and administrators could be administered to identify concrete solutions.
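
As a purely illustrative aside (not part of the original analysis), the short sketch below shows how imprecise a proportion estimated from a cohort of this size is. The count of 9 out of 27 students is an assumption chosen only to match the reported 33.33% participation rate, since the exact denominator is not given here; the 95% Wilson score interval is computed in plain Python.

```python
# Illustrative sketch only: Wilson score 95% CI for a small-cohort proportion.
# The figures (9 of 27 fifth-year pediatric "5 + 3" students, ~33.33%) are an
# assumption chosen to match the reported percentage, not data from the paper.
from math import sqrt


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return centre - half_width, centre + half_width


low, high = wilson_ci(9, 27)
print(f"9/27 participated: 95% CI {low:.1%} to {high:.1%}")
# -> roughly 18.6% to 52.2%, far too wide to compare programs reliably,
#    which is why pooling cohorts or multicenter data would help.
```

Under these assumed numbers, the interval spans roughly 19% to 52%, which illustrates why pooling several cohorts or conducting a multicenter survey would yield much more stable estimates.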

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

ZUSM: Zhejiang University School of Medicine

KMO: Kaiser-Meyer-Olkin

References

1. Hanney SR, González-Block MA. Health research improves healthcare: now we have the evidence and the chance to help the WHO spread such benefits globally. Health Res Policy Syst. 2015;13:12.
2. Adebisi YA. Undergraduate students' involvement in research: values, benefits, barriers and recommendations. Ann Med Surg (Lond). 2022;81:104384.
3. Petrella JK, Jung AP. Undergraduate research: importance, benefits, and challenges. Int J Exerc Sci. 2008;1(3):91–5.
4. Stone C, Dogbey GY, Klenzak S, Van Fossen K, Tan B, Brannan GD. Contemporary global perspectives of medical students on research during undergraduate medical education: a systematic literature review. Med Educ Online. 2018;23(1):1537430.
5. El Achi D, Al Hakim L, Makki M, Mokaddem M, Khalil PA, Kaafarani BR, et al. Perception, attitude, practice and barriers towards medical research among undergraduate students. BMC Med Educ. 2020;20(1):195.
6. Funston G, Piper RJ, Connell C, Foden P, Young AM, O'Neill P. Medical student perceptions of research and research-orientated careers: an international questionnaire study. Med Teach. 2016;38(10):1041–8.
7. Tatum M. China's three-child policy. Lancet. 2021;397:2238.
8. Rivkees SA, Kelly M, Lodish M, Weiner D. The Pediatric Medical Student Research Forum: fostering interest in pediatric research. J Pediatr. 2017;188:3–4.
9. Barrett KJ, Cooley TM, Schwartz AL, Hostetter MK, Clapp DW, Permar SR. Addressing gaps in pediatric scientist development: the department chair view of 2 AMSPDC-sponsored programs. J Pediatr. 2020;222:7–e124.
10. Jacobs CD, Cross PC. The value of medical student research: the experience at Stanford University School of Medicine. Med Educ. 1995;29(5):342–6.
11. Muhandiramge J, Vu T, Wallace MJ, Segelov E. The experiences, attitudes and understanding of research amongst medical students at an Australian medical school. BMC Med Educ. 2021;21(1):267.
12. Pop AI, Lotrean LM, Buzoianu AD, Suciu SM, Florea M. Attitudes and practices regarding research among Romanian medical undergraduate students. Int J Environ Res Public Health. 2022;19(3):1872.
13. Pallamparthy S, Basavareddy A. Knowledge, attitude, practice, and barriers toward research among medical students: a cross-sectional questionnaire-based survey. Perspect Clin Res. 2019;10:73–8.
14. Assar A, Matar SG, Hasabo EA, Elsayed SM, Zaazouee MS, Hamdallah A, et al. Knowledge, attitudes, practices and perceived barriers towards research in undergraduate medical students of six Arab countries. BMC Med Educ. 2022;22(1):44.
15. Kharraz R, Hamadah R, AlFawaz D, Attasi J, Obeidat AS, Alkattan W, et al. Perceived barriers towards participation in undergraduate research activities among medical students at Alfaisal University-College of Medicine: a Saudi Arabian perspective. Med Teach. 2016;38(Suppl 1):S12–8.
16. Fournier I, Stephenson K, Fakhry N, Jia H, Sampathkumar R, Lechien JR, et al. Barriers to research among residents in Otolaryngology - Head & Neck Surgery around the world. Eur Ann Otorhinolaryngol Head Neck Dis. 2019;136(3S):S3–7.
17. Abu-Zaid A, Alkattan K. Integration of scientific research training into undergraduate medical education: a reminder call. Med Educ Online. 2013;18:22832.
18. Eyigör H, Kara CO. Otolaryngology residents' attitudes, experiences, and barriers regarding medical research. Turk Arch Otorhinolaryngol. 2021;59(3):215–22.
19. Möller R, Shoshan M. Medical students' research productivity and career preferences; a 2-year prospective follow-up study. BMC Med Educ. 2017;17(1):51.
20. Laidlaw A, Aiton J, Struthers J, Guild S. Developing research skills in medical students: AMEE Guide 69. Med Teach. 2012;34(9):e754–71.
21. Horwitz EK, Horwitz MB, Cope J. Foreign language classroom anxiety. Mod Lang J. 1986;70(2):125–32.
22. Deng J, Zhou K, Al-Shaibani GKS. Medical English anxiety patterns among medical students in Sichuan, China. Front Psychol. 2022;13:895117.
23. Ma Y. Exploring medical English curriculum and teaching from the perspective of ESP: a case study of medical English teaching. Technol Enhan Lang Educ. 2009;125(1):60–3.
24. Yan S, Huang Q, Huang J, Wang Y, Li X, Wang Y, et al. Clinical research capability enhanced for medical undergraduates: an innovative simulation-based clinical research curriculum development. BMC Med Educ. 2022;22(1):543.

Acknowledgements

The authors thank all the students who participated as volunteers for their contribution to the study.

Funding

This work was supported by grants from the "14th Five-Year Plan" teaching reform project of an ordinary undergraduate university in Zhejiang Province (jg20220041) and the graduate education research project of Zhejiang University (20210317).

Author information

Authors and Affiliations

Department of Neonatology, Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China

Canyang Zhan

Department of Pulmonology, Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China

Yuanyuan Zhang

Contributions

CZ designed the study and supervised its progress. CZ and YZ wrote the manuscript and collected and analyzed the questionnaire data. All authors read and approved the manuscript prior to submission.

Corresponding author

Correspondence to Yuanyuan Zhang.

Ethics declarations

Ethics approval and consent to participate

Our study was approved by the Ethics Review Committee of the Children's Hospital of Zhejiang University School of Medicine and was conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from each participant prior to participation in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Zhan, C., Zhang, Y. Perception, practice, and barriers toward research among pediatric undergraduates: a cross-sectional questionnaire-based survey. BMC Med Educ 24, 364 (2024). https://doi.org/10.1186/s12909-024-05361-x

Received : 14 October 2023

Accepted : 27 March 2024

Published : 03 April 2024

DOI : https://doi.org/10.1186/s12909-024-05361-x


Keywords

  • Undergraduate research
  • Medical research
