
Your ultimate guide to questionnaires and how to design a good one

The written questionnaire is the heart and soul of any survey research project. Whether you conduct your survey using an online questionnaire, in person, by email or over the phone, the way you design your questionnaire plays a critical role in shaping the quality of the data and insights that you’ll get from your target audience. Keep reading to get actionable tips.

What is a questionnaire?

A questionnaire is a research tool consisting of a set of questions or other ‘prompts’ to collect data from a set of respondents.

When used in most research, a questionnaire will consist of a number of types of questions (primarily open-ended and closed) in order to gain both quantitative data that can be analyzed to draw conclusions, and qualitative data to provide longer, more specific explanations.

A research questionnaire is often mistaken for a survey, and many people use the terms questionnaire and survey interchangeably.

But that’s incorrect.

We’ll explain the difference next.


Survey vs. questionnaire – what’s the difference?

Before we go too much further, let’s consider the differences between surveys and questionnaires.

These two terms are often used interchangeably, but there is an important difference between them.

Survey definition

A survey is the process of collecting data from a set of respondents and using it to gather insights.

Survey research can be conducted using a questionnaire, but won’t always involve one.

Questionnaire definition

A questionnaire is the list of questions you circulate to your target audience.

In other words, the survey is the task you’re carrying out, and the questionnaire is the instrument you’re using to do it.

By itself, a questionnaire doesn’t achieve much.

It’s when you put it into action as part of a survey that you start to get results.

Advantages vs disadvantages of using a questionnaire

A questionnaire is a popular method of gathering data for market research and other studies, but there are a few disadvantages to using this method as well as plenty of advantages.

Let’s have a look at some of the advantages and disadvantages of using a questionnaire for collecting data.

Advantages of using a questionnaire

1. Questionnaires are relatively cheap

Depending on the complexity of your study, using a questionnaire can be cost effective compared to other methods.

You simply need to write your survey questionnaire, send it out, and then process the responses.

You can set up an online questionnaire relatively easily, or simply carry out market research on the street if that’s the best method.

2. You can get and analyze results quickly

Again, depending on the size of your survey, you can get results back from a questionnaire quickly, often within 24 hours of putting it live.

It also means you can start to analyze responses quickly too.

3. They’re easily scalable

You can easily send an online questionnaire to anyone in the world, and with the right software you can quickly identify your target audience and distribute your questionnaire to them.

4. Questionnaires are easy to analyze

If your questionnaire design has been done properly, it’s quick and easy to analyze results from questionnaires once responses start to come back.

This is particularly useful with large scale market research projects.

Because all respondents are answering the same questions, it’s simple to identify trends.

5. You can use the results to make accurate decisions

As a research instrument, a questionnaire is ideal for commercial research because the data comes directly from your target audience (or ideal customers), and the information you gather about their thoughts, preferences or behaviors lets you make informed business decisions.

6. A questionnaire can cover any topic

One of the biggest advantages of using questionnaires is that, because you can adapt them using different types and styles of open-ended and closed-ended questions, they can be used to gather data on almost any topic.

There are many types of questionnaires you can design to gather both quantitative data and qualitative data - so they’re a useful tool for all kinds of data analysis.

Disadvantages of using a questionnaire

1. Respondents could lie

This is by far the biggest risk with a questionnaire, especially when dealing with sensitive topics.

Rather than give their actual opinion, a respondent might feel pressured to give the answer they deem more socially acceptable, which doesn’t give you accurate results.

2. Respondents might not answer every question

There are all kinds of reasons respondents might not answer every question: the questionnaire might be too long, they might not understand what’s being asked, or they simply might not want to answer.

If you get questionnaires back without complete responses it could negatively affect your research data and provide an inaccurate picture.

3. They might interpret what’s being asked incorrectly

This is a particular problem when running a survey across geographical boundaries and often comes down to the design of the survey questionnaire.

If your questions aren’t written in a very clear way, the respondent might misunderstand what’s being asked and provide an answer that doesn’t reflect what they actually think.

Again this can negatively affect your research data.

4. You could introduce bias

The whole point of producing a questionnaire is to gather accurate data from which decisions can be made or conclusions drawn.

But the data collected can be heavily impacted if the researchers accidentally introduce bias into the questions.

This can easily happen if the researcher is trying to prove a certain hypothesis with their questionnaire and unwittingly writes questions that push people towards giving a certain answer.

In these cases, respondents’ answers won’t accurately reflect what is really happening, which prevents you from gathering accurate data.

5. Respondents could get survey fatigue

One issue you can run into when sending out a questionnaire, particularly if you send them out regularly to the same survey sample, is that your respondents could start to suffer from survey fatigue.

In these circumstances, rather than thinking about the response options in the questionnaire and providing accurate answers, respondents could start to just tick boxes to get through the questionnaire quickly.

Again, this won’t give you an accurate data set.

Questionnaire design: How to do it

It’s essential to carefully craft a questionnaire to reduce survey error and optimize your data. The best way to think about the questionnaire is with the end result in mind.

How do you do that?

Start with questions, like:

  • What is my research purpose?
  • What data do I need?
  • How am I going to analyze that data?
  • What questions are needed to best suit these variables?

Once you have a clear idea of the purpose of your survey, you’ll be in a better position to create an effective questionnaire.

Here are a few steps to help you get into the right mindset.

1. Keep the respondent front and center

A survey is the process of collecting information from people, so it needs to be designed around human beings first and foremost.

In his post about survey design theory, David Vannette, PhD, from the Qualtrics Methodology Lab explains the correlation between the way a survey is designed and the quality of data that is extracted.

“To begin designing an effective survey, take a step back and try to understand what goes on in your respondents’ heads when they are taking your survey.

This step is critical to making sure that your questionnaire makes it as likely as possible that the response process follows that expected path.”

From writing the questions to designing the survey flow, the respondent’s point of view should always be front and center in your mind during questionnaire design.

2. How to write survey questions

Your questionnaire should only be as long as it needs to be, and every question needs to deliver value.

That means your questions must each have an individual purpose and produce the best possible data for that purpose, all while supporting the overall goal of the survey.

A question must also be phrased in a way that is easy for all your respondents to understand and does not produce false results.

To do this, remember the following principles:

Get into the respondent's head

The process for a respondent answering a survey question looks like this:

  • The respondent reads the question and determines what information they need to answer it.
  • They search their memory for that information.
  • They make judgments about that information.
  • They translate that judgment into one of the answer options you’ve provided. This is the process of taking the data they have and matching that information with the question that’s asked.

When wording questions, make sure the question means the same thing to all respondents. Words should have one meaning, few syllables, and the sentences should have few words.

Only use the words needed to ask your question and not a word more.

Note that it’s important that the respondent understands the intent behind your question.

If they don’t, they may answer a different question and the data can be skewed.

Some contextual help text, either in the introduction to the questionnaire or before the question itself, can help make sure the respondent understands your goals and the scope of your research.

Use mutually exclusive responses

Be sure to make your response categories mutually exclusive.

Consider a question with overlapping response options, for example:

What is your age?

  • 18-31
  • 31-40
  • 40-55
  • 55 or older

Respondents who are 31 years old have two options, as do respondents who are 40 and 55. As a result, it is impossible to predict which category they will choose.

This can distort results and frustrate respondents. It can be easily avoided by making responses mutually exclusive.

The following version is much better, with mutually exclusive options such as:

  • 18-30
  • 31-40
  • 41-55
  • 56 or older

This question is clear and will give us better results.
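To make the idea concrete, here is a minimal sketch in Python (the bracket boundaries are just the illustrative ones above, not something prescribed by any survey tool) that checks how many response options a given age matches under each scheme:

```python
# Illustrative, mutually exclusive age brackets: each age falls into exactly one option.
MUTUALLY_EXCLUSIVE = [(18, 30, "18-30"), (31, 40, "31-40"), (41, 55, "41-55"), (56, 120, "56 or older")]

# Overlapping brackets like the first example: 31, 40 and 55 each match two options.
OVERLAPPING = [(18, 31, "18-31"), (31, 40, "31-40"), (40, 55, "40-55"), (55, 120, "55 or older")]

def matching_options(age: int, brackets) -> list[str]:
    """Return every response option whose range contains the given age."""
    return [label for low, high, label in brackets if low <= age <= high]

for age in (31, 40, 55):
    print(age,
          "exclusive:", matching_options(age, MUTUALLY_EXCLUSIVE),
          "overlapping:", matching_options(age, OVERLAPPING))
```

Running it shows each test age matching exactly one option in the mutually exclusive scheme and two options in the overlapping one, which is precisely the ambiguity the respondent would face.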

Ask specific questions

Nonspecific questions can confuse respondents and influence results.

Do you like orange juice?

  • Like very much
  • Neither like nor dislike
  • Dislike very much

This question is very unclear. Is it asking about taste, texture, price, or the nutritional content? Different respondents will read this question differently.

A specific question will get more specific answers that are actionable.

How much do you like the current price of orange juice?

This question is more specific and will get better results.

If you need to collect responses about more than one aspect of a subject, you can include multiple questions on it. (Do you like the taste of orange juice? Do you like the nutritional content of orange juice? etc.)

Use a variety of question types

If all of your questionnaire, survey or poll questions are structured the same way (e.g. yes/no or multiple choice) the respondents are likely to become bored and tune out. That could mean they pay less attention to how they’re answering or even give up altogether.

Instead, mix up the question types to keep the experience interesting and varied. It’s a good idea to include questions that yield both qualitative and quantitative data.

For example, an open-ended questionnaire item such as “describe your attitude to life” will provide qualitative data – a form of information that’s rich, unstructured and unpredictable. The respondent will tell you in their own words what they think and feel.

A quantitative / closed-ended questionnaire item, such as “Which word describes your attitude to life? a) practical b) philosophical” gives you a much more structured answer, but the answers will be less rich and detailed.

Open-ended questions take more thought and effort to answer, so use them sparingly. They also require a different kind of treatment once your survey is in the analysis stage.
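If you are building an online questionnaire programmatically, one common approach (sketched below with hypothetical field names, not tied to any particular survey platform) is to tag each item with an explicit type so that closed-ended answers can be tabulated while open-ended answers are routed to qualitative analysis:

```python
# Hypothetical representation of a mixed questionnaire: closed-ended items yield
# quantitative data; open-ended items yield qualitative data.
questionnaire = [
    {
        "id": "q1",
        "type": "closed",
        "text": "Which word best describes your attitude to life?",
        "options": ["practical", "philosophical"],
    },
    {
        "id": "q2",
        "type": "open",
        "text": "Describe your attitude to life in your own words.",
    },
]

def split_responses(answers: dict) -> tuple[dict, dict]:
    """Separate quantitative (closed) from qualitative (open) answers for analysis."""
    closed = {q["id"]: answers[q["id"]] for q in questionnaire if q["type"] == "closed"}
    open_ended = {q["id"]: answers[q["id"]] for q in questionnaire if q["type"] == "open"}
    return closed, open_ended

closed, open_ended = split_responses({"q1": "practical", "q2": "I take each day as it comes."})
print(closed)      # tabulate these counts across respondents
print(open_ended)  # code or text-mine these in the analysis stage
```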

3. Pre-test your questionnaire

Always pre-test a questionnaire before sending it out to respondents. This will help catch any errors you might have missed. You could ask a colleague, friend, or an expert to take the survey and give feedback. If possible, ask a few cognitive questions like, “how did you get to that response?” and “what were you thinking about when you answered that question?” Figure out what was easy for the responder and where there is potential for confusion. You can then re-word where necessary to make the experience as frictionless as possible.

If your resources allow, you could also consider using a focus group to test out your survey. Having multiple respondents road-test the questionnaire will give you a better understanding of its strengths and weaknesses. Match the focus group to your target respondents as closely as possible, for example in terms of age, background, gender, and level of education.

Note: Don't forget to make your survey as accessible as possible for increased response rates.

Questionnaire examples and templates

There are free questionnaire templates and example questions available for all kinds of surveys and market research, many of them online. But they’re not all created equal and you should use critical judgement when selecting one. After all, the questionnaire examples may be free but the time and energy you’ll spend carrying out a survey are not.

If you’re using online questionnaire templates as the basis for your own, make sure it has been developed by professionals and is specific to the type of research you’re doing to ensure higher completion rates. As we’ve explored here, using the wrong kinds of questions can result in skewed or messy data, and could even prompt respondents to abandon the questionnaire without finishing or give thoughtless answers.

You’ll find a full library of downloadable survey templates in the Qualtrics Marketplace, covering many different types of research from employee engagement to post-event feedback. All are fully customizable and have been developed by Qualtrics experts.

Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the Center’s American Trends Panel (ATP).

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based off that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
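The mechanics behind randomizing and reversing response options are straightforward. The sketch below is an illustration in Python rather than Pew Research Center’s actual survey software; the nominal option labels are placeholders, while the ordinal scale is the abortion item quoted above. Options for a nominal question are shuffled independently for each respondent, and the ordinal scale keeps its order but is reversed for a random half of the sample:

```python
import random

# Placeholder labels for a closed-ended question with no inherent order.
NOMINAL_OPTIONS = ["Issue A", "Issue B", "Issue C", "Issue D", "Issue E"]

# Ordinal scale: order carries meaning, so it is only ever shown forward or reversed.
ORDINAL_SCALE = ["legal in all cases", "legal in most cases",
                 "illegal in most cases", "illegal in all cases"]

def options_for_respondent(respondent_id: int) -> dict:
    """Return the option orderings one respondent would see."""
    rng = random.Random(respondent_id)  # seeding by ID keeps each assignment reproducible
    nominal = NOMINAL_OPTIONS[:]
    rng.shuffle(nominal)                # full randomization spreads order effects across options
    reverse = rng.random() < 0.5        # roughly half the sample sees the scale reversed
    ordinal = list(reversed(ORDINAL_SCALE)) if reverse else ORDINAL_SCALE[:]
    return {"issue_question": nominal, "abortion_question": ordinal}

for rid in (101, 102):
    print(rid, options_for_respondent(rid))
```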

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
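As a sketch of how such a split-form comparison can be checked (with made-up counts, not actual Pew Research Center data), random assignment plus a simple two-proportion z-test is enough to judge whether the two wordings produced genuinely different answers:

```python
import math
import random

def assign_form(respondent_id: int) -> str:
    """Randomly assign a respondent to questionnaire form A or form B."""
    return random.Random(respondent_id).choice(["A", "B"])

def two_proportion_z(yes_a: int, n_a: int, yes_b: int, n_b: int) -> float:
    """z statistic comparing the share giving a particular answer on each form."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical result: 510 of 1,000 respondents favor the policy under wording A,
# versus 440 of 1,000 under wording B.
z = two_proportion_z(510, 1000, 440, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference unlikely to be due to chance (5% level)
```

Because respondents are assigned to forms at random, a significant difference between the forms can be attributed to the wording rather than to differences between the two groups.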


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


Survey Research – Types, Methods, Examples


Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Survey research methods include the following:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • IVR surveys: A survey research method where questions are administered to respondents through an interactive voice response system, often used in automated customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The main types are as follows:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, survey research is divided into two types: quantitative survey research and qualitative survey research.

Quantitative Survey Research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative Survey Research

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several data analysis methods that survey researchers may use, including:

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns (a short worked example follows this list).
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
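As a minimal sketch of the first method in the list (the Likert-style responses and the use of pandas here are illustrative, not a requirement of survey research), the basic descriptive statistics can be computed in a few lines:

```python
import pandas as pd

# Hypothetical responses to a 5-point satisfaction item (1 = very dissatisfied, 5 = very satisfied).
responses = pd.Series([5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 4, 5, 3, 4], name="satisfaction")

print("n       :", responses.count())
print("mean    :", round(responses.mean(), 2))
print("median  :", responses.median())
print("mode    :", responses.mode().tolist())   # mode() can return several values, hence a list
print("std dev :", round(responses.std(), 2))

# Frequency distribution: how many respondents chose each point on the scale.
print(responses.value_counts().sort_index())
```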

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis (a minimal sketch of a frequency distribution follows this list).
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
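As a concrete illustration of the "analyze the data" step, the sketch below builds a simple frequency distribution with pandas. The question wording and answer options are hypothetical.

```python
import pandas as pd

# Hypothetical coded responses to a single closed-ended question
# (e.g. "How satisfied are you with our service?")
answers = pd.Series(
    ["Satisfied", "Very satisfied", "Neutral", "Satisfied",
     "Dissatisfied", "Satisfied", "Very satisfied", "Neutral"]
)

# Frequency distribution: counts and percentages per answer option
freq = answers.value_counts()
pct = answers.value_counts(normalize=True).mul(100).round(1)

summary = pd.DataFrame({"count": freq, "percent": pct})
print(summary)
```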

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

The following is an example of a survey sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  •  A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

__________________________________________

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.


Survey Research: Definition, Examples and Methods


Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been perhaps one of the most used methodologies in the industry for several years due to the multiple benefits and advantages that it has when collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what their customers think about their products or services and make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, and the collection and analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, it’s the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative research methods like surveys/polls, or qualitative research methods like focus groups/on-call interviews, can follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be derived based on two critical factors: Survey research tool and time involved in conducting research. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone: Survey research conducted over the telephone (CATI survey) can be useful in collecting data from a more extensive section of the target population. The money invested in phone surveys tends to be higher than in other mediums, and the time required is longer as well.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time and spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study .
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling methods used to form samples for research: probability and non-probability sampling. Every individual in a population should have an equal chance of being considered for the survey research sample. Probability sampling is a sampling method in which the researcher chooses the elements based on probability theory. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples (a minimal sampling sketch follows the list of non-probability techniques below).


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
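As mentioned above, probability sampling gives every element a known chance of selection. The sketch below, using pandas, draws from a hypothetical sampling frame with simple random sampling and with proportionate stratified sampling; the column names and sample sizes are illustrative only.

```python
import pandas as pd

# Hypothetical sampling frame of 10,000 customers with an age-group column
frame = pd.DataFrame({
    "respondent_id": range(10_000),
    "age_group": ["18-34", "35-54", "55+"] * 3333 + ["18-34"],
})

# Simple random sampling: every individual has an equal chance of selection
simple_random = frame.sample(n=500, random_state=42)

# Stratified random sampling: draw proportionally from each age group
# (GroupBy.sample requires pandas >= 1.1)
stratified = frame.groupby("age_group").sample(frac=0.05, random_state=42)

print(len(simple_random), len(stratified))
```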

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. Whereas, if researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, the surveys should include a smart balance of open-ended and closed-ended questions. Use survey questions like Likert Scale, Semantic Scale, Net Promoter Score question, etc., to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Drawing a well-defined sample from the target population is instrumental to the survey research. This way, results can reflect the desired market and be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments – this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF Analysis , Conjoint analysis, Cross tabulation, and many such survey feedback analysis methods can be used to spot and shed light on respondent behavior. Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
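Of the analysis methods named in the last step, cross tabulation is the simplest to illustrate. The sketch below uses pandas' crosstab on hypothetical responses to show how satisfaction can be broken down by age group; GAP, TURF, and conjoint analysis require more specialised tooling.

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent
responses = pd.DataFrame({
    "age_group": ["18-34", "35-54", "18-34", "55+", "35-54", "18-34"],
    "satisfaction": ["Satisfied", "Neutral", "Very satisfied",
                     "Dissatisfied", "Satisfied", "Satisfied"],
})

# Cross tabulation: how satisfaction varies by age group (row percentages)
table = pd.crosstab(responses["age_group"], responses["satisfaction"],
                    normalize="index").mul(100).round(1)
print(table)
```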

Reasons to conduct survey research

The most crucial reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying it out so that the study can be structured, planned, and executed to perfection.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have proven to be private and secure, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
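One practical consequence of these measurement scales is how the variables are stored and summarised during analysis. The sketch below, using pandas, maps each scale to a data type; the column names and values are illustrative assumptions, not part of any particular survey.

```python
import pandas as pd

df = pd.DataFrame({
    # Nominal: labels with no inherent order (e.g. preferred brand)
    "brand": pd.Categorical(["A", "B", "A", "C"]),
    # Ordinal: ordered labels, but distances between them are undefined
    "satisfaction": pd.Categorical(
        ["Low", "High", "Medium", "High"],
        categories=["Low", "Medium", "High"], ordered=True),
    # Interval: meaningful differences, no true zero (e.g. temperature in °C)
    "room_temp_c": [21.0, 23.5, 22.0, 20.5],
    # Ratio: meaningful differences and a true zero (e.g. monthly spend)
    "monthly_spend": [0.0, 120.0, 75.5, 40.0],
})

# Order comparisons are valid for ordinal data...
print((df["satisfaction"] >= "Medium").sum())
# ...while means and ratios are reserved for interval and ratio variables.
print(df[["room_temp_c", "monthly_spend"]].mean())
```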

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the organization's ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile surveys and online surveys have minimal finance invested per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups and interviews, and quantitative mediums like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must mention that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from target population:  “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of a sample are and how useful their opinions are. The quality of respondents in a sample, not the quantity, is essential to the research results. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant, 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can share it as handouts or distribute it via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior (a minimal segmentation sketch follows this list). This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should mention all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study, and about questions such as: has the product or service been used/preferred or not? Do respondents prefer one product to another? Any recommendations?
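As a minimal illustration of the segregation step above, the sketch below groups hypothetical responses by demographic and behavioural columns and summarises a score. The column names are assumptions for illustration, not any particular survey tool's export format.

```python
import pandas as pd

# Hypothetical response sheet exported from a survey tool
responses = pd.DataFrame({
    "age_group": ["18-34", "35-54", "18-34", "55+", "35-54"],
    "channel":   ["email", "social", "email", "email", "social"],
    "nps_score": [9, 7, 10, 6, 8],
})

# Segment responses by demographic/behavioural group and summarise the scores
summary = (
    responses.groupby(["age_group", "channel"])["nps_score"]
             .agg(["count", "mean"])
             .round(2)
)
print(summary)
```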

Having a tool that helps you carry out all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world to carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!


Designing and validating a research questionnaire - Part 1

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India

Carlo Caduff

Department of Global Health and Social Medicine, King’s College London, London, United Kingdom

Questionnaires are often used as part of research studies to collect data from participants. However, the information obtained through a questionnaire is dependent on how it has been designed, used, and validated. In this article, we look at the types of research questionnaires, their applications and limitations, and how a new questionnaire is developed.

INTRODUCTION

In research studies, questionnaires are commonly used as data collection tools, either as the only source of information or in combination with other techniques in mixed-method studies. However, the quality and accuracy of data collected using a questionnaire depend on how it is designed, used, and validated. In this two-part series, we discuss how to design (part 1) and how to use and validate (part 2) a research questionnaire. It is important to emphasize that questionnaires seek to gather information from other people and therefore entail a social relationship between those who are doing the research and those who are being researched. This social relationship comes with an obligation to learn from others , an obligation that goes beyond the purely instrumental rationality of gathering data. In that sense, we underscore that any research method is not simply a tool but a situation, a relationship, a negotiation, and an encounter. This points to both ethical questions (what is the relationship between the researcher and the researched?) and epistemological ones (what are the conditions under which we can know something?).

At the start of any kind of research project, it is crucial to select the right methodological approach. What is the research question, what is the research object, and what can a questionnaire realistically achieve? Not every research question and not every research object are suitable to the questionnaire as a method. Questionnaires can only provide certain kinds of empirical evidence and it is thus important to be aware of the limitations that are inherent in any kind of methodology.

WHAT IS A RESEARCH QUESTIONNAIRE?

A research questionnaire can be defined as a data collection tool consisting of a series of questions or items that are used to collect information from respondents and thus learn about their knowledge, opinions, attitudes, beliefs, and behavior. Informed by a positivist philosophy of the natural sciences that considers methods mainly as a set of rules for the production of knowledge, questionnaires are frequently used instrumentally as a standardized and standardizing tool to ask a set of questions to participants. Outside of such a positivist philosophy, questionnaires can be seen as an encounter between the researcher and the researched, where knowledge is not simply gathered but negotiated through a distinct form of communication that is the questionnaire.

STRENGTHS AND LIMITATIONS OF QUESTIONNAIRES

A questionnaire may not always be the most appropriate way of engaging with research participants and generating knowledge that is needed for a research study. Questionnaires have advantages that have made them very popular, especially in quantitative studies driven by a positivist philosophy: they are a low-cost method for the rapid collection of large amounts of data, even from a wide sample. They are practical, can be standardized, and allow comparison between groups and locations. However, it is important to remember that a questionnaire only captures the information that the method itself (as the structured relationship between the researcher and the researched) allows for and that the respondents are willing to provide. For example, a questionnaire on diet captures what the respondents say they eat and not what they are eating. The problem of social desirability emerges precisely because the research process itself involves a social relationship. This means that respondents may often provide socially acceptable and idealized answers, particularly in relation to sensitive questions, for example, alcohol consumption, drug use, and sexual practices. Questionnaires are most useful for studies investigating knowledge, beliefs, values, self-understandings, and self-perceptions that reflect broader social, cultural, and political norms that may well diverge from actual practices.

TYPES OF RESEARCH QUESTIONNAIRES

Research questionnaires may be classified in several ways:

Depending on mode of administration

Research questionnaires may be self-administered (by the research participant) or researcher-administered. Self-administered (also known as self-reported or self-completed) questionnaires are designed to be completed by respondents without assistance from a researcher. Self-reported questionnaires may be administered to participants directly during hospital or clinic visits, mailed through the post or e-mail, or accessed through websites. This technique allows respondents to answer at their own pace and simplifies research costs and logistics. The anonymity offered by self-reporting may facilitate more accurate answers. However, the disadvantages are that there may be misinterpretations of questions and low response rates. Significantly, relevant context information is missing to make sense of the answers provided. Researcher-administered (or interviewer-administered) questionnaires may be administered face-to-face or through remote techniques such as telephone or videoconference and are associated with higher response rates. They allow the researcher to have a better understanding of how the data are collected and how answers are negotiated, but are more resource intensive and require more training from the researchers.

The choice between self-administered and researcher-administered questionnaires depends on various factors such as the characteristics of the target audience (e.g., literacy and comprehension level and ability to use technology), costs involved, and the need for confidentiality/privacy.

Depending on the format of the questions

Research questionnaires can have structured or semi-structured formats. Semi-structured questionnaires allow respondents to answer more freely and on their terms, with no restrictions on their responses. They allow for unusual or surprising responses and are useful to explore and discover a range of answers to determine common themes. Typically, the analysis of responses to open-ended questions is more complex and requires coding and analysis. In contrast, structured questionnaires provide a predefined set of responses for the participant to choose from. The use of standard items makes the questionnaire easier to complete and allows quick aggregation, quantification, and analysis of the data. However, structured questionnaires can be restrictive if the scope of responses is limited and may miss potential answers. They also may suggest answers that respondents may not have considered before. Respondents may be forced to fit their answers into the predetermined format and may not be able to express personal views and say what they really want to say or think. In general, this type of questionnaire can turn the research process into a mechanical, anonymous survey with little incentive for participants to feel engaged, understood, and taken seriously.

STRUCTURED QUESTIONS: FORMATS

Some examples of closed-ended question formats include:

  • Single-response multiple-choice questions, e.g., “Please indicate your marital status,” with a fixed list of options that can include “Prefer not to say”
  • Checklist (multiple-response) questions, e.g., “Describe your areas of work (circle or tick all that apply),” with options such as “Clinical service” and “Administration”
  • Agreement (Likert-type) scales, with response options ranging from “Strongly agree” to “Strongly disagree”
  • Numerical scales: “Please rate your current pain on a scale of 1–10, where 1 is no pain and 10 is the worst imaginable pain”
  • Symbolic scales: for example, the Wong-Baker FACES scale to rate pain in older children
  • Ranking: “Rank the following cities as per the quality of public health care, where 1 is the best and 5 is the worst.”

A matrix questionnaire consists of a series of rows with items to be answered with a series of columns providing the same answer options. This is an efficient way of getting the respondent to provide answers to multiple questions. The EORTC QLQ-C30 is an example of a matrix questionnaire.[ 1 ]
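One common way to store responses to a matrix question is one column per item, with every item sharing the same ordered answer scale. The sketch below shows this layout in pandas; the items and answer options are invented for illustration and are not the actual EORTC QLQ-C30 instrument.

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

# Shared answer options for every item in the (invented) matrix question
scale = CategoricalDtype(
    categories=["Not at all", "A little", "Quite a bit", "Very much"],
    ordered=True,
)

# Hypothetical responses: one column per matrix item, one row per respondent
responses = pd.DataFrame({
    "trouble_taking_a_long_walk": ["A little", "Not at all", "Very much"],
    "trouble_sleeping":           ["Quite a bit", "A little", "A little"],
}).astype(scale)

# Every item can now be summarised on the same ordered scale
print(responses.describe())
```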

For a more detailed review of the types of research questions, readers are referred to a paper by Boynton and Greenhalgh.[ 2 ]

USING PRE-EXISTING QUESTIONNAIRES VERSUS DEVELOPING A NEW QUESTIONNAIRE

Before developing a questionnaire for a research study, a researcher can check whether there are any preexisting validated questionnaires that might be adapted and used for the study. The use of validated questionnaires saves time and resources needed to design a new questionnaire and allows comparability between studies.

However, certain aspects need to be kept in mind: is the population/context/purpose for which the original questionnaire was designed similar to the new study? Is cross-cultural adaptation required? Is any permission needed to use the questionnaire? In many situations, the development of a new questionnaire may be more appropriate given that any research project entails both methodological and epistemological questions: what is the object of knowledge and what are the conditions under which it can be known? It is important to understand that the standardizing nature of questionnaires contributes to the standardization of objects of knowledge. Thus, the seeming similarity in the object of study across diverse locations may be an artifact of the method. Whatever method one uses, it will always operate as the ground on which the object of study is known.

DESIGNING A NEW RESEARCH QUESTIONNAIRE

Once the researcher has decided to design a new questionnaire, several steps should be considered:

Gathering content

This involves creating a conceptual framework to identify all relevant areas for which the questionnaire will be used to collect information. This may require a scoping review of the published literature, appraising other questionnaires on similar topics, or the use of focus groups to identify common themes.

Create a list of questions

Questions need to be carefully formulated with attention to language and wording to avoid ambiguity and misinterpretation. Table 1 lists a few examples of poorly worded questions that could have been phrased in a more appropriate manner. Other important aspects to be noted are:

Examples of poorly phrased questions in a research questionnaire

  • Provide a brief introduction to the research study along with instructions on how to complete the questionnaire
  • Allow respondents to indicate levels of intensity in their replies, so that they are not forced into “yes” or “no” answers where intensity of feeling may be more appropriate
  • Collect specific and detailed data wherever possible – this can later be coded into categories. For example, age can be captured in years and later classified as <18 years, 18–45 years, and 46 years and above; the reverse is not possible (a minimal sketch of this kind of recoding follows this list)
  • Avoid technical terms, slang, and abbreviations. Tailor the reading level to the expected education level of respondents
  • The format of the questionnaire should be attractive with different sections for various subtopics. The font should be large and easy to read, especially if the questionnaire is targeted at the elderly
  • Question sequence: questions should be arranged from general to specific, from easy to difficult, from facts to opinions, and sensitive topics should be introduced later in the questionnaire.[ 3 ] Usually, demographic details are captured initially followed by questions on other aspects
  • Use contingency questions: these are questions which need to be answered only by a subgroup of the respondents who provide a particular answer to a previous question. This ensures that participants only respond to relevant sections of the questionnaire, for example, Do you smoke? If yes, then how long have you been smoking? If not, then please go to the next section.
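As referenced in the list above, collecting age in years and recoding it later into categories is straightforward. A minimal sketch with pandas, using the article's example categories:

```python
import pandas as pd

# Hypothetical ages captured in years (specific and detailed data)
ages = pd.Series([17, 23, 41, 46, 70, 34])

# Recode into the broader categories described above; the bin edges
# follow the article's example (<18, 18-45, 46 and above)
age_groups = pd.cut(
    ages,
    bins=[0, 17, 45, 120],
    labels=["<18 years", "18-45 years", "46 years and above"],
)
print(age_groups.value_counts())
```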

TESTING A QUESTIONNAIRE

A questionnaire needs to be valid and reliable, and therefore, any new questionnaire needs to be pilot tested in a small sample of respondents who are representative of the larger population. In addition to validity and reliability, pilot testing provides information on the time taken to complete the questionnaire and whether any questions are confusing or misleading and need to be rephrased. Validity indicates that the questionnaire measures what it claims to measure – this means taking into consideration the limitations that come with any questionnaire-based study. Reliability means that the questionnaire yields consistent responses when administered repeatedly even by different researchers, and any variations in the results are due to actual differences between participants and not because of problems with the interpretation of the questions or their responses. In the next article in this series, we will discuss methods to determine the reliability and validity of a questionnaire.
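The article defers the specific methods for assessing reliability and validity to its second part. Purely as an illustration of the internal-consistency idea behind reliability, the sketch below computes Cronbach's alpha, a commonly used statistic that the article itself does not name, on hypothetical pilot-test data.

```python
import numpy as np

# Hypothetical pilot data: rows = respondents, columns = Likert items (1-5)
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 1, 2, 2],
    [4, 4, 5, 4],
])

k = items.shape[1]                         # number of items in the scale
item_var = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score

alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(round(alpha, 3))  # values near 1 indicate high internal consistency
```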


Qualitative study design: Surveys & questionnaires


Qualitative surveys use open-ended questions to produce long-form written/typed answers. Questions will aim to reveal opinions, experiences, narratives or accounts. Often a useful precursor to interviews or focus groups as they help identify initial themes or issues to then explore further in the research. Surveys can be used iteratively, being changed and modified over the course of the research to elicit new information. 

Structured Interviews may follow a similar form of open questioning.  

Qualitative surveys frequently include quantitative questions to establish elements such as age, nationality etc. 

Qualitative surveys aim to elicit a detailed response to an open-ended topic question in the participant’s own words.  Like quantitative surveys, there are three main methods for using qualitative surveys including face to face surveys, phone surveys, and online surveys. Each method of surveying has strengths and limitations.

Face to face surveys  

  • Researcher asks participants one or more open-ended questions about a topic, typically while in view of the participant’s facial expressions and other behaviours while answering. Being able to view the respondent’s reactions enables the researcher to ask follow-up questions to elicit a more detailed response, and to follow up on any facial or behavioural cues that seem at odds with what the participant is explicitly saying.
  • Face to face qualitative survey responses are likely to be audio recorded and transcribed into text to ensure all detail is captured; however, some surveys may include both quantitative and qualitative questions using a structured or semi-structured format of questioning, and in this case the researcher may simply write down key points from the participant’s response.

Telephone surveys

  • Similar to the face to face method, but without the researcher being able to see the participant’s facial or behavioural responses to the questions asked. This means the researcher may miss key cues that would help them ask further questions to clarify or extend participant responses, and instead relies on vocal cues.

Online surveys

  • Open-ended questions are presented to participants in written format via email or within an online survey tool, often alongside quantitative survey questions on the same topic.
  • Researchers may provide some contextualising information or key definitions to help ‘frame’ how participants view the qualitative survey questions, since they can’t directly ask the researcher about it in real time. 
  • Participants are requested to respond to questions in text ‘in some detail’ to explain their perspective or experience to researchers; this can result in a diversity of responses (from brief to detailed).
  • Researchers cannot always probe or clarify participant responses to online qualitative survey questions, which can result in data from these responses being cryptic or vague to the researcher.
  • Online surveys can collect a greater number of responses in a set period of time compared to face to face and phone survey approaches, so while data may be less detailed, there is more of it overall to compensate.

Qualitative surveys can help a study early on, in finding out the issues/needs/experiences to be explored further in an interview or focus group. 

Surveys can be amended and re-run based on responses providing an evolving and responsive method of research. 

Online surveys will receive typed responses, reducing transcription work for the researcher.

Online surveys can be delivered broadly across a wide population with asynchronous delivery/response. 

Limitations

Hand-written notes will need to be transcribed (time-consuming) for digital study and kept physically for reference. 

Distance (or online) communication can be open to misinterpretations that cannot be corrected at the time. 

Questions can be leading/misleading, eliciting answers that are not core to the research subject. Researchers must aim to write a neutral question which does not give away the researcher’s expectations.

Even with transcribed/digital responses analysis can be long and detailed, though not as much as in an interview. 

Surveys may be left incomplete if performed online or taken by research assistants not well trained in giving the survey/structured interview. 

Narrow sampling may skew the results of the survey. 

Example questions

Here are some example survey questions which are open ended and require a long form written response:

  • Tell us why you became a doctor? 
  • What do you expect from this health service? 
  • How do you explain the low levels of financial investment in mental health services? (WHO, 2007) 

Example studies

  • Davey, L., Clarke, V., & Jenkinson, E. (2019). Living with alopecia areata: an online qualitative survey study. British Journal of Dermatology, 180, 1377-1389. Retrieved from https://onlinelibrary-wiley-com.ezproxy-f.deakin.edu.au/doi/10.1111%2Fbjd.17463
  • Richardson, J. (2004). What Patients Expect From Complementary Therapy: A Qualitative Study. American Journal of Public Health, 94(6), 1049–1053. Retrieved from http://ezproxy.deakin.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=s3h&AN=13270563&site=eds-live&scope=site  
  • Saraceno, B., van Ommeren, M., Batniji, R., Cohen, A., Gureje, O., Mahoney, J., ... & Underhill, C. (2007). Barriers to improvement of mental health services in low-income and middle-income countries. The Lancet, 370(9593), 1164-1174. Retrieved from https://www-sciencedirect-com.ezproxy-f.deakin.edu.au/science/article/pii/S014067360761263X?via%3Dihub  

The following WHO report gives more detail on the Lancet article above, including the actual survey questions:

  • World Health Organization. (2007.) Expert opinion on barriers and facilitating factors for the implementation of existing mental health knowledge in mental health services. Geneva: World Health Organization. https://apps.who.int/iris/handle/10665/44808
  • Green, J., & Thorogood, N. (2018). Qualitative methods for health research. SAGE. Retrieved from http://ezproxy.deakin.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=cat00097a&AN=deakin.b4151167&authtype=sso&custid=deakin&site=eds-live&scope=site
  • Jansen, H. The Logic of Qualitative Survey Research and its Position in the Field of Social Research Methods. Forum Qualitative Sozialforschung, 11(2). Retrieved from http://www.qualitative-research.net/index.php/fqs/article/view/1450/2946
  • Nielsen Norman Group (2019). 28 Tips for Creating Great Qualitative Surveys. Retrieved from https://www.nngroup.com/articles/qualitative-surveys/


Questionnaires

Questionnaires can be classified as both a quantitative and a qualitative method, depending on the nature of the questions. Specifically, answers obtained through closed-ended questions (also called restricted questions) with multiple choice answer options are analyzed using quantitative methods. Research findings in this case can be illustrated using tabulations, pie-charts, bar-charts and percentages.

Answers obtained to open-ended questionnaire questions (also known as unrestricted questions), on the other hand, are analyzed using qualitative methods. Primary data collected using open-ended questionnaires involve discussions and critical analyses without use of numbers and calculations.
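A minimal sketch of the quantitative summary described above, assuming the closed-ended answers are available as a pandas Series; the bar chart uses matplotlib, and the answer options are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical answers to a closed-ended, multiple-choice question
answers = pd.Series(["Yes", "No", "Yes", "Yes", "Undecided", "No", "Yes"])

# Tabulation with percentages, as suggested above
tabulation = answers.value_counts(normalize=True).mul(100).round(1)
print(tabulation)

# Simple bar chart of the same tabulation
ax = tabulation.plot(kind="bar", title="Responses to the closed-ended question")
ax.set_ylabel("% of respondents")
plt.tight_layout()
plt.show()
```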

Questionnaires can be of the following types:

Computer questionnaire. Respondents are asked to answer a questionnaire that is sent to them electronically, for example by e-mail. The advantages of computer questionnaires include their low cost and time-efficiency; respondents are also not under pressure and can answer when they have time, giving more accurate answers. However, the main shortcoming of computer questionnaires is that sometimes respondents do not bother answering them and simply ignore the questionnaire.

Telephone questionnaire. The researcher may choose to call potential respondents with the aim of getting them to answer the questionnaire. The advantage of the telephone questionnaire is that it can be completed in a short amount of time. The main disadvantage is that it is often expensive; moreover, most people do not feel comfortable answering many questions over the phone, and it is difficult to get a sample group to answer a questionnaire this way.

In-house survey. This type of questionnaire involves the researcher visiting respondents in their houses or workplaces. The advantage of the in-house survey is that respondents tend to give the questions more focused attention. However, in-house surveys also have a range of disadvantages, including being time consuming and more expensive, and respondents may not wish to have the researcher in their houses or workplaces for various reasons.

Mail questionnaire. This sort of questionnaire involves the researcher sending the questionnaire to respondents through the post, often attaching a pre-paid envelope. Mail questionnaires have the advantage of providing more accurate answers, because respondents can complete the questionnaire in their spare time. The disadvantages associated with mail questionnaires include being expensive and time consuming, and sometimes respondents simply throw them in the bin.

Questionnaires can include the following types of questions:

Open question questionnaires. Open questions differ from other types of questions in that they may produce unexpected results, which can make the research more original and valuable. However, it is difficult to analyze the findings when the data is obtained through a questionnaire with open questions.

Multiple choice questions. Respondents are offered a set of answers they have to choose from. The downside of a questionnaire with multiple choice questions is that, if there are too many answers to choose from, it makes the questionnaire confusing and boring, and discourages the respondent from answering.

Dichotomous questions. This type of question gives respondents two options to choose from – yes or no. It is the easiest form of question for respondents to answer.

Scaling questions. Also referred to as ranking questions, these present an option for respondents to rank the available answers on a scale of a given range of values (for example, from 1 to 10).

For a standard 15,000–20,000 word business dissertation, including 25–40 questions in the questionnaire will usually suffice. Questions need to be formulated in an unambiguous and straightforward manner, and they should be presented in a logical order.

Questionnaires as primary data collection method offer the following advantages:

  • Uniformity: all respondents are asked exactly the same questions
  • Cost-effectiveness
  • Possibility to collect the primary data in shorter period of time
  • Minimum or no bias from the researcher during the data collection process
  • Usually enough time for respondents to think before answering questions, as opposed to interviews
  • Possibility to reach respondents in distant areas through online questionnaire

At the same time, the use of questionnaires as primary data collection method is associated with the following shortcomings:

  • Random answer choices by respondents without properly reading the question.
  • In closed-ended questionnaires no possibility for respondents to express their additional thoughts about the matter due to the absence of a relevant question.
  • Collecting incomplete or inaccurate information because respondents may not be able to understand questions correctly.
  • High rate of non-response

Survey Monkey represents one of the most popular online platforms for facilitating data collection through questionnaires. Substantial benefits offered by Survey Monkey include its ease to use, presentation of questions in many different formats and advanced data analysis capabilities.


Survey Monkey as a popular platform for primary data collection

There are other alternatives to Survey Monkey you might want to consider as a platform for your survey. These include, but are not limited to, Jotform, Google Forms, Lime Survey, Crowd Signal, Survey Gizmo, Zoho Survey and many others.

My  e-book,  The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step approach  contains a detailed, yet simple explanation of quantitative methods. The e-book explains all stages of the research process starting from the selection of the research area to writing personal reflection. Important elements of dissertations such as research philosophy, research approach, research design, methods of data collection and data analysis are explained in simple words.

John Dudovskiy



Questionnaire Design | Methods, Question Types & Examples

Published on 6 May 2022 by Pritha Bhandari. Revised on 10 October 2022.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleaning and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalise your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimising these will help you avoid sampling bias .


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • Cost-effective
  • Easy to administer for small and large groups
  • Anonymous and suitable for sensitive topics

But they may also be:

  • Unsuitable for people with limited literacy or verbal skills
  • Susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • Biased towards people who volunteer because impersonal survey requests often go ignored

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • Help you ensure the respondents are representative of your target audience
  • Allow clarifications of ambiguous or unclear questions and answers
  • Have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • Costly and time-consuming to perform
  • More difficult to analyse if you have qualitative responses
  • Likely to contain experimenter bias or demand characteristics
  • Likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions, or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalisable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

Example response options for a nominal question about race:

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert-type questions collect ordinal data using rating scales with five or seven points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.

With interval or ratio data, you can apply strong statistical hypothesis tests to address your research aims.
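As a rough illustration of how composite Likert scores are computed in practice, the sketch below averages four hypothetical 5-point Likert items per respondent; the respondent IDs and ratings are invented for the example.

```python
from statistics import mean

# Hypothetical responses: each respondent answered four 5-point Likert items
# (1 = strongly disagree ... 5 = strongly agree). All values are made up.
responses = {
    "respondent_1": [4, 5, 4, 3],
    "respondent_2": [2, 2, 3, 1],
    "respondent_3": [5, 5, 4, 5],
}

# The mean of the items is treated as a composite score on an interval scale.
composite_scores = {rid: mean(items) for rid, items in responses.items()}

for rid, score in composite_scores.items():
    print(f"{rid}: composite score = {score:.2f}")
```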

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer ‘multiracial’ for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle to productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarising responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorise answers, and you may also need to involve other researchers in data analysis for high reliability .

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favour flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barrelled questions. Double-barrelled questions ask about more than one item at a time, which can confuse respondents.

For example, a double-barrelled question such as 'Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?' could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might answer only about the topic they feel passionate about or give a neutral answer instead – but neither of these options captures their true views.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Each question can use the same response scale: Strongly agree / Agree / Undecided / Disagree / Strongly disagree.

  • Do you agree or disagree that the government should be responsible for providing clean drinking water to everyone?
  • Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

You can organise the questions logically, with a clear progression from simple to complex. Alternatively, you can randomise the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioural or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimise order effects because they can be a source of systematic error or bias in your study.

Randomisation

Randomisation involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomisation, order effects will be minimised in your dataset. But a randomised order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
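If your questionnaire is administered digitally, randomising the question order can be as simple as shuffling a list per respondent. The sketch below is one possible approach (the question wording is invented), using a per-respondent seed so each person's order is reproducible.

```python
import random

# Hypothetical question bank; the wording is illustrative only.
questions = [
    "How satisfied are you with your current role?",
    "How often do you work remotely?",
    "How would you rate your team's communication?",
    "How likely are you to recommend your employer to a friend?",
]

def randomised_order(questions, seed=None):
    """Return a shuffled copy of the question list for one respondent."""
    rng = random.Random(seed)
    shuffled = questions[:]   # copy so the master list stays intact
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees the same questions in a different order.
for respondent_id in range(3):
    print(respondent_id, randomised_order(questions, seed=respondent_id))
```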

Follow this step-by-step guide to design your questionnaire.

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalise your variables of interest into questionnaire items. Operationalising concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivised or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomise questions. Randomising questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection , and analysis.

You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .
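To get a feel for what 'statistically underpowered' means, the sketch below uses the third-party statsmodels package (assumed to be installed) to compare the sample size needed for a conventional two-group comparison with the power achieved by a small pilot; the effect size and thresholds are illustrative choices, not values from this article.

```python
# Rough illustration of why a small pilot study is usually underpowered.
# Assumes statsmodels is installed; the effect size (d = 0.5), 80% power
# target, and 5% significance level are conventional, illustrative choices.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect with 80% power.
required_n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Required sample size per group: {required_n:.0f}")

# Power actually achieved by a pilot with only 15 respondents per group.
pilot_power = analysis.power(effect_size=0.5, nobs1=15, alpha=0.05)
print(f"Power with 15 respondents per group: {pilot_power:.2f}")
```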

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

Cite this Scribbr article

Bhandari, P. (2022, October 10). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved 31 May 2024, from https://www.scribbr.co.uk/research-methods/questionnaire-design/


Different methods of survey sampling

2022-02-05 Market Research

What are the different methods of survey sampling?

Determining your sample, or the group of people participating in your survey, is one of the most crucial steps of conducting it.

There are two main kinds of sampling techniques — probability and non-probability sampling. Learn about the specific types of sampling within these categories to help you with your next survey.

Probability sampling methods

With probability sampling, each element of the population has a known, non-zero probability of being included in the sample. This makes it possible to draw samples that represent the population as a whole. There are several types of probability sampling methods, including:

Simple random sampling

Simple random sampling is the purest type of probability sampling. With this method, individuals are chosen randomly, giving each member of the population an equal chance of being selected as the subject.
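For example, drawing a simple random sample from a list of members is a one-liner in most programming languages. The Python sketch below uses an invented sampling frame of customer IDs purely for illustration.

```python
import random

# Hypothetical sampling frame of 1,000 customer IDs.
population = [f"customer_{i}" for i in range(1, 1001)]

# Draw a simple random sample of 50 without replacement:
# every member has the same chance of being selected.
sample = random.sample(population, k=50)
print(sample[:5])
```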

Systematic sampling

In a systematic sample, individuals are selected at regular intervals. For example, every 10th person on the population list may be selected to participate. This method assures that the population is sampled evenly.
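A quick sketch of systematic sampling (with a made-up population list): pick a random starting point within the first interval, then take every k-th member after that.

```python
import random

# Hypothetical population list of 1,000 people.
population = [f"person_{i}" for i in range(1, 1001)]

k = 10                       # sampling interval: every 10th person
start = random.randrange(k)  # random starting point within the first interval
systematic_sample = population[start::k]

print(len(systematic_sample), systematic_sample[:3])
```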

Stratified sampling

Before a stratified sample is taken, the population is divided into groups based on characteristics pertinent to the research, such as age or gender. The population is then randomly sampled within these specific strata. This complex method of sampling ensures each category of the population is represented in the sample.
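The sketch below shows one way stratified sampling might look in code: the (invented) population is grouped by age group, then a fixed fraction is randomly drawn from each stratum.

```python
import random
from collections import defaultdict

# Hypothetical population of 1,000 people with an age-group characteristic.
population = [
    {"id": i, "age_group": random.choice(["18-29", "30-49", "50+"])}
    for i in range(1, 1001)
]

# Divide the population into strata by age group.
strata = defaultdict(list)
for person in population:
    strata[person["age_group"]].append(person)

# Randomly sample 10% within each stratum so every group is represented.
stratified_sample = []
for group, members in strata.items():
    n = max(1, len(members) // 10)
    stratified_sample.extend(random.sample(members, n))

print(len(stratified_sample))
```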

Cluster sampling

With cluster sampling, every member of the population is assigned to a group known as a cluster. A sample of clusters is chosen using a probability method like random sampling, and only individuals within the sampled cluster are surveyed.

Multistage sampling

Multistage sampling uses several different probability sampling methods. For example, your sampling process may begin with cluster sampling. Then, you use simple random sampling to choose a subset of participants from each cluster to create the final sample.
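Combining the two previous ideas, a multistage design might first sample clusters and then sample individuals within the chosen clusters. The sketch below uses invented schools and students as the clusters and members.

```python
import random

# Hypothetical population organised into 20 clusters (e.g. schools),
# each with 50 members.
clusters = {
    f"school_{c}": [f"school_{c}_student_{i}" for i in range(1, 51)]
    for c in range(1, 21)
}

# Stage 1 (cluster sampling): randomly select 5 clusters.
chosen_clusters = random.sample(list(clusters), k=5)

# Stage 2 (simple random sampling within clusters): 10 members per cluster.
multistage_sample = []
for cluster in chosen_clusters:
    multistage_sample.extend(random.sample(clusters[cluster], k=10))

print(chosen_clusters)
print(len(multistage_sample))  # 50 respondents in total
```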

Non-probability sampling methods

With non-probability sampling techniques, the sample is collected based on specific criteria, so not every member of the population has a chance of being selected. These sampling methods are often used for online surveys. The different types of non-probability sampling include:

Convenience sampling

In a convenience sample, individuals are selected for how easily accessible they are to the researcher. This method is typically used during preliminary research phases.

Quota sampling

Quota sampling is similar to stratified sampling, except it assigns a quota to each population subset, meaning that the sample must include a specific number of individuals from each group.
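In code, quota sampling usually looks like accepting incoming respondents until each subgroup's quota is filled, as in the sketch below; the quotas and the stream of respondents are simulated for illustration.

```python
import random

# Illustrative quotas for three age groups.
quotas = {"18-29": 20, "30-49": 20, "50+": 10}
filled = {group: [] for group in quotas}

# Simulated stream of incoming respondents (not a probability sample).
incoming = [{"id": i, "age_group": random.choice(list(quotas))} for i in range(500)]

for respondent in incoming:
    group = respondent["age_group"]
    if len(filled[group]) < quotas[group]:
        filled[group].append(respondent)
    if all(len(filled[g]) >= quotas[g] for g in quotas):
        break  # stop once every quota is met

print({group: len(members) for group, members in filled.items()})
```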

Judgment/purposive sampling

With judgment or purposive sampling, the researcher selects individuals for a specific quality relevant to the study. For example, if you want to study what it takes to graduate summa cum laude, you would survey individuals who graduated with that distinction.

Snowball sampling

In a snowball sample, you rely on your initial survey respondents to refer you to new participants.

Voluntary sampling

A voluntary sample is made up of people who volunteer to take part in the survey. Typically, these respondents have a strong interest in the survey topic.


18 Different Types of Survey Methods + Pros & Cons

There are many reasons why surveys are important. Surveys help researchers find solutions, create discussions, and make decisions. They can also get to the bottom of the really important stuff, like, coffee or tea? Dogs or cats? Elvis or The Beatles? When it comes to finding the answers to these questions, there are 18 different types of survey methods to use.


18 Different Types of Survey Methods

Different surveys serve different purposes, which is why there are a number of them to choose from. “What are the types of surveys I should use,” you ask? Here’s a look at the 18 types of survey methods researchers use today.

1. Interviews

Also known as in-person surveys or household surveys, this used to be one of the most popular types of survey to conduct. Researchers like them because they involve getting face-to-face with individuals. Of course, this method of surveying may seem antiquated when today we have online surveying at our fingertips. However, interviews still serve a purpose. 

Researchers conduct interviews when they want to discuss something personal with people. For example, they may have questions that may require extensive probing to uncover the truth. Sure, some interviewees may be more comfortable answering questions confidentially behind a keyboard. However, a skilled interviewer is able to put them at ease and get genuine responses. They can often go deeper than you may be able to using other surveying methods. 

Often, in-person interviews are recorded on camera. This way, an expert can review them afterward. They do this to determine if the answers given may be false based on an interviewee’s change in tone. A change in facial expressions and body movements may also be a signal they pick up on. 

2. Intercept Surveys

While interviews tend to choose respondents and have controls in place, intercept surveys (or "man on the spot" surveys) are conducted at certain locations or events. This involves having one or more interviewers scoping out an area and asking people, generally at random, for their thoughts or viewpoints on a particular topic.

3. Focus Groups

These types of surveys are conducted in person as well. However, focus groups involve a number of people rather than just one individual. The group is generally small but demographically diverse and led by a moderator. The focus group may be sampling new products or discussing a particular topic, often a hot-button one.

The purpose of a focus group survey is often to gauge people’s reaction to a product in a group setting or to get people talking, interacting—and yes, arguing—with the moderator taking notes on the group’s behavior and attitudes. This is often the most expensive survey method as a trained moderator must be paid. In addition, locations must be secured, often in various cities, and participants must be heavily incentivized to show up. Gift cards in the $75-100 range for each survey participant are the norm.   

4. Panel Sampling

Recruiting survey-takers from a panel maintained by a research company is a surefire way to get respondents. Why? Because people have specifically signed up to take surveys. The benefit of these types of surveys for research, of course, is that you can be assured of responses. In addition, you can filter respondents by a variety of criteria to be sure you're speaking with your target audience.

The downside is data quality. These individuals get survey offers frequently, so they may rush through them to get their incentive and move on to the next one. In addition, if you're constantly tapping into the same people from the same panel, are you truly getting a representative sample?

5. Telephone Surveys

Most telephone survey research types are conducted through random digit dialing (RDD). RDD can reach both listed  and  unlisted numbers, improving sampling accuracy. Surveys are conducted by interviewers through computer-assisted telephone interviewing (CATI) software. CATI displays the questionnaire to the interviewer with a rotation of questions.  

Telephone surveys started in the 1940s. In fact, in a  recent blog , we recount how the predictions for the 1948 presidential election were completely wrong because of sampling bias in telephone surveys. Telephone surveys rose in popularity in the late '50s and early '60s as the telephone became common in most American households, but they are no longer a very popular method of conducting a survey. Why? Because many people refuse to take telephone surveys or simply don't answer calls from numbers they don't recognize.

6. Post-Call Surveys

If a telephone survey is going to be conducted, today it is usually a post-call survey. This is often accomplished through IVR, or interactive voice response. IVR means there is no interviewer involved. Instead, customers record answers to pre-recorded questions using numbers on their touch-tone keypads. If a question is open-ended, the interviewee can respond by speaking and the system records the answer. IVR surveys are often deployed to measure how a customer feels about a service they just received. For example, after calling your bank, you may be asked to stay on the line to answer a series of questions about your experience.

Most post-call surveys are either  NPS surveys  or customer satisfaction (CSAT) surveys. The former asks the customer "How likely are you to recommend our organization to a friend or family member based on your most recent interaction?" while the CSAT survey asks customers "How satisfied are you with the results of your most recent interaction?". NPS survey results reflect how the customer feels about the brand, while CSAT surveys are all about individual agent and contact center performance.
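For readers curious how these two metrics are typically calculated, here is a small sketch with made-up ratings. The NPS bands (promoters 9–10, detractors 0–6) follow the standard definition; the CSAT calculation shown (share of 4s and 5s on a 5-point scale) is one common convention rather than the only one.

```python
# Illustrative NPS and CSAT calculations from made-up post-call ratings.

nps_ratings = [10, 9, 8, 6, 10, 7, 3, 9, 10, 5]   # 0-10 "likely to recommend"
csat_ratings = [5, 4, 3, 5, 4, 2, 5, 4, 5, 3]     # 1-5 "how satisfied"

promoters = sum(r >= 9 for r in nps_ratings)
detractors = sum(r <= 6 for r in nps_ratings)
nps = 100 * (promoters - detractors) / len(nps_ratings)

csat = 100 * sum(r >= 4 for r in csat_ratings) / len(csat_ratings)

print(f"NPS: {nps:.0f}")     # % promoters minus % detractors
print(f"CSAT: {csat:.0f}%")  # share of respondents answering 4 or 5
```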

7. SMS Text Surveys

Many people rarely use their phones to talk anymore and ignore calls from unknown numbers. This has given rise to the SMS (Short Message Service) text survey. SMS surveys are delivered via text to people who have opted in to receive notifications from the sender. This means there is usually some level of engagement, improving response rates. The one downside is that questions typically need to be short, and answers are generally 1-2 words or simply numbers (this is why many NPS surveys, gauging customer satisfaction, are often conducted via SMS text). Be careful not to send too many text surveys, as a person can opt out just as easily, usually by texting STOP.

8. Mail-in Surveys / Postal Surveys

These are delivered right to respondents’ doorsteps! Mail surveys were frequently used before the advent of the internet when respondents were spread out geographically and budgets were modest. After all, mail-in surveys didn’t require much cost other than the postage. 

So are mail-in surveys going the way of the dinosaur? Not necessarily. They are still occasionally more valuable compared to different methods of surveying. Because they are going to a specific name and home address, they often feel more personalized. This personalization can prompt the recipient to complete the survey. 

They’re also good for surveys of significant length. Most people have short attention spans, and won’t spend more than a few minutes on the phone or filling out an online survey. At least, not without an incentive! However, with a mail-in survey, the person can complete it at their leisure. They can fill out some of it, set it aside, and then come back to it later. This gives mail-in surveys a relatively high response rate.

9. Kiosk Surveys

These surveys happen on a computer screen at a physical location. You’ve probably seen them popping up in stores, hotel lobbies, hospitals, and office spaces. These days, they’re just about anywhere a researcher or marketer wants to collect data from customers or passers-by.  Kiosk surveys  provide immediate feedback following a purchase or an interaction. They collect responses while the experience is still fresh in the respondent’s mind. This makes their judgment more trustworthy. Below is an example of a SurveyLegend kiosk survey at McDonald’s. The kiosk survey collects information, thanks the respondent for their feedback, and then resets for the next customer. Read how to  create your own kiosk survey here .

Image: a SurveyLegend kiosk survey running in kiosk mode.

10. Email Surveys

Email surveys are one of the most effective surveying methods as they are delivered directly to your audience's inbox. They can be used by anyone for just about anything, and are easily customized for a particular audience. Another good thing about email surveys is that you can easily see who did or did not open the survey and make improvements for a future send to increase response rates. You can also A/B test subject lines, imagery, and so on to see which is more effective. SurveyLegend offers dozens of different types of online survey questions, which we explore in our blog  12 Different Types of Survey Questions and When to Use Them (with Examples) .


11. Pop-up Surveys

A pop-up survey is a feedback form that pops up on a website or app. Although the main window a person is reading on their screen remains visible, it is temporarily disabled until a user interacts with the pop-up, either agreeing to leave feedback or closing out of it. The survey itself is typically about the company whose site or app the user is currently visiting (as opposed to an intercept survey, which is an invitation to take a survey hosted on a different site).

A pop-up survey attempts to grab website visitors’ attention in a variety of ways, popping up in the middle of the screen, moving in from the side, or covering the entire screen. While they can be intrusive, they also have many benefits. Read about the  benefits of pop-up surveys here .

12. Embedded Surveys

The opposite of pop-up surveys, these surveys live directly on your website or another website of your choice. Because the survey cannot be X’ed out of like a pop-up, it takes up valuable real estate on your site, or could be expensive to implement on someone else’s site. In addition, although the  embedded survey  is there at all times, it may not get the amount of attention a pop-up does since it’s not “in the respondent’s face.”

13. Social Media Surveys

More than  3.5 billion people  use social media worldwide, a number projected to increase to almost 4.5 billion in 2025. This makes social media extremely important to marketers and researchers. Using platforms such as Facebook, Twitter, Instagram, and the new Threads, many companies and organizations send out social media surveys regularly. Because people check their social media accounts quite regularly, it's a good way to collect responses and monitor changes in satisfaction levels or popular opinion. Check out our blog on  social media surveys  for more benefits and valuable tips.

14. Mobile Surveys

Mobile devices have now overtaken desktop computers as the most used means of accessing the internet, accounting for more than 54% of traffic. But don't fret – you don't have to create an entirely new survey to reach people on their phones or tablets. Online poll makers like SurveyLegend are responsive, so when you create a desktop version of a survey, it automatically becomes mobile-friendly. The survey renders, or displays, on any device or screen regardless of size, with elements on the page automatically rearranging themselves, shrinking, or expanding as necessary. Learn more about our  responsive surveys .

15. Mobile App Surveys

Today, most companies have a mobile app. These can be an ideal way to conduct surveys because people have to willingly download your app; this means they already have a level of engagement with your company or brand, making them more likely to respond to your surveys.

16. QR Code Surveys

QR Code, or QRC, is an abbreviation of "Quick Response Code." These two-dimensional encoded images, when scanned, deliver the information stored in them. They're different from barcodes because they can hold a lot more information, including website URLs, phone numbers, or up to 4,000 characters of text. The recent QR code comeback provides a good opportunity for researchers to collect data. Place the QR code anywhere – on flyers, posters, billboards, commercials – and all someone has to do is scan it with a mobile device to get immediate access to a survey. Read more about the  benefits of QR code surveys .
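Generating a survey QR code yourself is straightforward. The sketch below uses the third-party Python package qrcode (with Pillow), which is assumed to be installed; the survey URL is a placeholder, not a real link.

```python
# Generate a QR code image that points to a survey link.
# Assumes the third-party "qrcode" package (with Pillow) is installed;
# the URL below is a placeholder, not a real survey.
import qrcode

survey_url = "https://example.com/my-survey"

img = qrcode.make(survey_url)   # build the QR code image
img.save("survey_qr.png")       # ready to print on flyers, posters, etc.
print("Saved QR code for", survey_url)
```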

17. Delphi Surveys

A Delphi survey is a structured research method used to gather the collective opinions and insights of a panel of experts on a particular topic. The process involves several rounds of questionnaires or surveys, each designed to narrow things down until a consensus or hypothesis can be formed. One of the key features of Delphi research is that participants remain unknown to each other, which eliminates the influence they might otherwise have on one another.

18. AI Surveys

Artificial intelligence powers the latest type of survey method. Using AI, researchers allow the technology to ask the survey questions. These "chatbots" can even ask follow-up questions on the spot based on a respondent's answer. There can be drawbacks, however. If a person suspects survey questions are coming from AI, they may be less likely to respond (or may respond incorrectly to mess with the AI). Additionally, AI is not good with emotions, so asking sensitive questions in an emotionless manner could be off-putting to people.  Read more about AI Surveys .

Online Surveys: Ideal for Collecting Data and Feedback

Chart: Countries with the largest digital populations in the world as of January 2023 (in millions). Source: Statista

That’s not all. People can take online surveys just about anywhere thanks to mobile devices. The use of these devices across age groups is balancing out as well. Check out smartphone use by age group below.

Chart: Share of adults in the United States who owned a smartphone from 2015 to 2021, by age group. Source: Statista

With more and more people accessing the internet through their mobile devices, now you can reach teens while they’re between classes and adults during their subway commute to work. Can’t say that for those other types of surveys !

Online surveys are also extremely cost-efficient. You don’t have to spend money on paper, printing, postage, or an interviewer. This significantly reduces set-up and administration costs. This also allows researchers and companies to send out a survey very expeditiously. Additionally, many online survey tools provide in-depth analysis of survey data. This saves you from having to spend money on further research once the survey is complete. 

Researchers have their pick of options when it’s time to survey people. Which method you choose may depend upon cost, reach, and the types of questions.

Now, you may be wondering, "Where can I make free surveys?" You can get started with free online surveys using SurveyLegend! Here are a few things that make SurveyLegend the ideal choice for different types of surveys for research (or for fun).

  • When it comes to surveys, brief is best to keep respondents' attention. So, SurveyLegend automatically collects some data, such as the participant's location, reducing the number of questions you have to ask.
  • People like eye candy, and many surveys are just plain dull. SurveyLegend offers beautifully rendered pre-designed surveys that will get your participant's attention – and keep it through to completion!
  • Today, most people take surveys on mobile devices. Often, desktop surveys don't translate well to mobile, resulting in a high drop-off rate. SurveyLegend's designs are responsive, automatically adjusting to any screen size.

What’s your favorite method of surveying people? (Hey… that’s a good topic for a survey!) Sound off in the comments!

Frequently Asked Questions (FAQs)

The 10 most common survey methods are online surveys, in-person interviews, focus groups, panel sampling, telephone surveys, post-call surveys, mail-in surveys, pop-up surveys, mobile surveys, and kiosk surveys.

Benefits of online surveys include their ability to reach a broad audience and that they are relatively inexpensive.

Kiosk surveys are surveys on a computer screen at the point of sale.

A focus group is an in-person interview or survey involving a group of people rather than just one individual. The group is generally small but demographically diverse, and led by a moderator. 

Jasko Mahmutovic


Research Question

A research question serves as the foundation of any academic study, driving the investigation and framing the scope of inquiry. It focuses the research efforts, ensuring that the study addresses pertinent issues systematically. Crafting a strong research question is essential as it directs the methodology, data collection, and analysis, ultimately shaping the study’s conclusions and contributions to the field.

What is a Research Question?

A research question is the central query that guides a study, focusing on a specific problem or issue. It defines the purpose and direction of the research, influencing the methodology and analysis. A well-crafted research question ensures the study remains relevant, systematic, and contributes valuable insights to the field.

Types of Research Questions

Research questions are a crucial part of any research project. They guide the direction and focus of the study. Here are the main types of research questions:

1. Descriptive Research Questions

These questions aim to describe the characteristics or functions of a specific phenomenon or group. They often begin with “what,” “who,” “where,” “when,” or “how.”

  • What are the common symptoms of depression in teenagers?

2. Comparative Research Questions

These questions compare two or more groups or variables to identify differences or similarities.

  • How do the academic performances of students in private schools compare to those in public schools?

3. Correlational Research Questions

These questions seek to identify the relationships between two or more variables. They often use terms like “relationship,” “association,” or “correlation.”

  • Is there a relationship between social media usage and self-esteem among adolescents?

4. Causal Research Questions

These questions aim to determine whether one variable causes or influences another. They are often used in experimental research.

  • Does a new teaching method improve student engagement in the classroom?

5. Exploratory Research Questions

These questions are used when the researcher is exploring a new area or seeking to understand a complex phenomenon. They are often open-ended.

  • What factors contribute to the success of start-up companies in the tech industry?

6. Predictive Research Questions

These questions aim to predict future occurrences based on current or past data. They often use terms like “predict,” “forecast,” or “expect.”

  • Can high school GPA predict college success?

7. Evaluative Research Questions

These questions assess the effectiveness or impact of a program, intervention, or policy.

  • How effective is the new community outreach program in reducing homelessness?

8. Ethnographic Research Questions

These questions are used in qualitative research to understand cultural phenomena from the perspective of the participants.

  • How do cultural beliefs influence healthcare practices in rural communities?

9. Case Study Research Questions

These questions focus on an in-depth analysis of a specific case, event, or instance.

  • What were the critical factors that led to the failure of Company X?

10. Phenomenological Research Questions

These questions explore the lived experiences of individuals to understand a particular phenomenon.

  • What is the experience of living with chronic pain?

Research Question Format

A well-formulated research question is essential for guiding your study effectively. Follow this format to ensure clarity and precision:

  • Begin with a broad subject area.
  • Example: “Education technology”
  • Define a specific aspect or variable.
  • Example: “Impact of digital tools”
  • Decide if you are describing, comparing, or investigating relationships.
  • Example: “Effectiveness”
  • Identify who or what is being studied.
  • Example: “High school students”
  • Formulate the complete question.
  • Example: “How effective are digital tools in enhancing the learning experience of high school students?”
Sample Format: “How [specific aspect] affects [target population] in [context]?” Example: “How does the use of digital tools affect the academic performance of high school students in urban areas?”

Research Question Examples

Research Questions in Business

  • “What are the primary factors influencing customer loyalty in the retail industry?”
  • “How does employee satisfaction differ between remote work and in-office work environments in tech companies?”
  • “What is the relationship between social media marketing and brand awareness among small businesses?”
  • “How does implementing a four-day workweek impact productivity in consulting firms?”
  • “What are the emerging trends in consumer behavior post-COVID-19 in the e-commerce sector?”
  • “Why do some startups succeed in attracting venture capital while others do not?”
  • “How effective is corporate social responsibility in enhancing brand reputation for multinational companies?”
  • “How do decision-making processes in family-owned businesses differ from those in publicly traded companies?”
  • “What strategies do successful entrepreneurs use to scale their businesses in competitive markets?”
  • “How does supply chain management affect the operational efficiency of manufacturing firms?”

Research Questions in Education

  • “What are the most common challenges faced by first-year teachers in urban schools?”
  • “How do student achievement levels differ between traditional classrooms and blended learning environments?”
  • “What is the relationship between parental involvement and student academic performance in elementary schools?”
  • “How does the implementation of project-based learning affect critical thinking skills in middle school students?”
  • “What are the emerging trends in the use of artificial intelligence in education?”
  • “Why do some students perform better in standardized tests than others despite similar instructional methods?”
  • “How effective is the flipped classroom model in improving student engagement and learning outcomes in high school science classes?”
  • “How do teachers’ professional development programs impact teaching practices and student outcomes in rural schools?”
  • “What strategies can be employed to reduce the dropout rate among high school students in low-income areas?”
  • “How does classroom size affect the quality of teaching and learning in elementary schools?”

Research Questions in Health Care

  • “What are the most common barriers to accessing mental health services in rural areas?”
  • “How does patient satisfaction differ between telemedicine and in-person consultations in primary care?”
  • “What is the relationship between diet and the incidence of type 2 diabetes in adults?”
  • “How does regular physical activity influence the recovery rate of patients with cardiovascular diseases?”
  • “What are the emerging trends in the use of wearable technology for health monitoring?”
  • “Why do some patients adhere to their medication regimen while others do not despite similar health conditions?”
  • “How effective are community-based health interventions in reducing obesity rates among children?”
  • “How do interdisciplinary team meetings impact patient care in hospitals?”
  • “What strategies can be implemented to reduce the spread of infectious diseases in healthcare settings?”
  • “How does nurse staffing level affect patient outcomes in intensive care units?”

Research Questions in Computer Science

  • “What are the key features of successful machine learning algorithms used in natural language processing?”
  • “How does the performance of quantum computing compare to classical computing in solving complex optimization problems?”
  • “What is the relationship between software development methodologies and project success rates in large enterprises?”
  • “How does the implementation of cybersecurity protocols impact the frequency of data breaches in financial institutions?”
  • “What are the emerging trends in blockchain technology applications beyond cryptocurrency?”
  • “Why do certain neural network architectures outperform others in image recognition tasks?”
  • “How effective are different code review practices in reducing bugs in open-source software projects?”
  • “How do agile development practices influence team productivity and product quality in software startups?”
  • “What strategies can improve the scalability of distributed systems in cloud computing environments?”
  • “How does the choice of programming language affect the performance and maintainability of enterprise-level software applications?”

Research Questions in Psychology

  • “What are the most common symptoms of anxiety disorders among adolescents?”
  • “How does the level of job satisfaction differ between remote workers and in-office workers?”
  • “What is the relationship between social media use and self-esteem in teenagers?”
  • “How does cognitive-behavioral therapy (CBT) affect the severity of depression symptoms in adults?”
  • “What are the emerging trends in the treatment of post-traumatic stress disorder (PTSD)?”
  • “Why do some individuals develop resilience in the face of adversity while others do not?”
  • “How effective are mindfulness-based interventions in reducing stress levels among college students?”
  • “How does group therapy influence the social skills development of children with autism spectrum disorder?”
  • “What strategies can improve the early diagnosis of bipolar disorder in young adults?”
  • “How do sleep patterns affect cognitive functioning and academic performance in high school students?”

More Research Question Examples

Research Question Examples for Students

  • “What are the primary study habits of high-achieving college students?”
  • “How do academic performances differ between students who participate in extracurricular activities and those who do not?”
  • “What is the relationship between time management skills and academic success in high school students?”
  • “How does the use of technology in the classroom affect students’ engagement and learning outcomes?”
  • “What are the emerging trends in online learning platforms for high school students?”
  • “Why do some students excel in standardized tests while others struggle despite similar study efforts?”
  • “How effective are peer tutoring programs in improving students’ understanding of complex subjects?”
  • “How do different teaching methods impact the learning process of students with learning disabilities?”
  • “What strategies can help reduce test anxiety among middle school students?”
  • “How does participation in group projects affect the development of collaboration skills in university students?”

Research Question Examples for College Students

  • “What are the most common stressors faced by college students during final exams?”
  • “How does academic performance differ between students who live on campus and those who commute?”
  • “What is the relationship between part-time employment and GPA among college students?”
  • “How does participation in study abroad programs impact cultural awareness and academic performance?”
  • “What are the emerging trends in college students’ use of social media for academic purposes?”
  • “Why do some college students engage in academic dishonesty despite awareness of the consequences?”
  • “How effective are university mental health services in addressing students’ mental health issues?”
  • “How do different learning styles affect the academic success of college students in online courses?”
  • “What strategies can be employed to improve retention rates among first-year college students?”
  • “How does participation in extracurricular activities influence leadership skills development in college students?”

Research Question Examples in Statistics

  • “What are the most common statistical methods used in medical research?”
  • “How does the accuracy of machine learning models compare to traditional statistical methods in predicting housing prices?”
  • “What is the relationship between sample size and the power of a statistical test in clinical trials?”
  • “How does the use of random sampling affect the validity of survey results in social science research?”
  • “What are the emerging trends in the application of Bayesian statistics in data science?”
  • “Why do some datasets require transformation before applying linear regression models?”
  • “How effective are bootstrapping techniques in estimating the confidence intervals of small sample data?”
  • “How do different imputation methods impact the results of analyses with missing data?”
  • “What strategies can improve the interpretation of interaction effects in multiple regression analysis?”
  • “How does the choice of statistical software affect the efficiency of data analysis in academic research?”

Research Question Examples in Sociology

  • “What are the primary social factors contributing to urban poverty in major cities?”
  • “How does the level of social integration differ between immigrants and native-born citizens in urban areas?”
  • “What is the relationship between educational attainment and social mobility in different socioeconomic classes?”
  • “How does exposure to social media influence political participation among young adults?”
  • “What are the emerging trends in family structures and their impact on child development?”
  • “Why do certain communities exhibit higher levels of civic engagement than others?”
  • “How effective are community policing strategies in reducing crime rates in diverse neighborhoods?”
  • “How do socialization processes differ in single-parent households compared to two-parent households?”
  • “What strategies can be implemented to reduce racial disparities in higher education enrollment?”
  • “How does the implementation of public housing policies affect the quality of life for low-income families?”

Research Question Examples in Biology

  • “What are the primary characteristics of the various stages of mitosis in eukaryotic cells?”
  • “How do the reproductive strategies of amphibians compare to those of reptiles?”
  • “What is the relationship between genetic diversity and the resilience of plant species to climate change?”
  • “How does the presence of pollutants in freshwater ecosystems impact the growth and development of aquatic organisms?”
  • “What are the emerging trends in the use of CRISPR technology for gene editing in agricultural crops?”
  • “Why do certain bacteria develop antibiotic resistance more rapidly than others?”
  • “How effective are different conservation strategies in protecting endangered species?”
  • “How do various environmental factors influence the process of photosynthesis in marine algae?”
  • “What strategies can enhance the effectiveness of reforestation programs in tropical rainforests?”
  • “How does the method of seed dispersal affect the spatial distribution and genetic diversity of plant populations?”

Research Question Examples in History

  • “What were the key social and economic factors that led to the Industrial Revolution in Britain?”
  • “How did the political systems of ancient Athens and ancient Sparta differ in terms of governance and citizen participation?”
  • “What is the relationship between the Renaissance and the subsequent scientific revolution in Europe?”
  • “How did the Treaty of Versailles contribute to the rise of Adolf Hitler and the onset of World War II?”
  • “What are the emerging perspectives on the causes and impacts of the American Civil Rights Movement?”
  • “Why did the Roman Empire decline and eventually fall despite its extensive power and reach?”
  • “How effective were the New Deal programs in alleviating the effects of the Great Depression in the United States?”
  • “How did the processes of colonization and decolonization affect the political landscape of Africa in the 20th century?”
  • “What strategies did the suffragette movement use to secure voting rights for women in the early 20th century?”
  • “How did the logistics and strategies of the D-Day invasion contribute to the Allied victory in World War II?”

Importance of Research Questions

Research questions are fundamental to the success and integrity of any study. Their importance can be highlighted through several key aspects:

  • Research questions provide a clear focus and direction for the study, ensuring that the researcher remains on track.
  • Example: “How does online learning impact student engagement in higher education?”
  • They establish the boundaries of the research, determining what will be included or excluded.
  • Example: “What are the effects of air pollution on respiratory health in urban areas?”
  • Research questions dictate the choice of research design, methodology, and data collection techniques.
  • Example: “What is the relationship between physical activity and mental health in adolescents?”
  • They make the objectives of the research explicit, providing clarity and precision to the study’s goals.
  • Example: “Why do some startups succeed in securing venture capital while others fail?”
  • Well-crafted research questions emphasize the significance and relevance of the study, justifying its importance.
  • Example: “How effective are public health campaigns in increasing vaccination rates among young adults?”
  • They enable a systematic approach to inquiry, ensuring that the study is coherent and logically structured.
  • Example: “What are the social and economic impacts of remote work on urban communities?”
  • Research questions offer a framework for analyzing and interpreting data, guiding the researcher in making sense of the findings.
  • Example: “How does social media usage affect self-esteem among teenagers?”
  • By addressing specific gaps or exploring new areas, research questions ensure that the study contributes meaningfully to the existing body of knowledge.
  • Example: “What are the emerging trends in the use of artificial intelligence in healthcare?”
  • Clear and precise research questions increase the credibility and reliability of the research by providing a focused approach.
  • Example: “How do educational interventions impact literacy rates in low-income communities?”
  • They help in clearly communicating the purpose and findings of the research to others, including stakeholders, peers, and the broader academic community.
  • Example: “What strategies are most effective in reducing youth unemployment in developing countries?”

Research Question vs. Hypothesis

A research question frames what the study sets out to explore, while a hypothesis is a testable prediction about the relationship between variables; the question comes first, and any hypothesis is derived from it.

Characteristics of Research Questions

Research questions are fundamental to the research process as they guide the direction and focus of a study. Here are the key characteristics of effective research questions:

1. Clear and Specific

  • The question should be clearly articulated and specific enough to be understood without ambiguity.
  • Example: “What are the effects of social media on teenagers’ mental health?” rather than “How does social media affect people?”

2. Focused and Researchable

  • The question should be narrow enough to be answerable through research and data collection.
  • Example: “How does participation in extracurricular activities impact academic performance in high school students?” rather than “How do activities affect school performance?”

3. Complex and Analytical

  • The question should require more than a simple yes or no answer and should invite analysis and discussion.
  • Example: “What factors contribute to the success of renewable energy initiatives in urban areas?” rather than “Is renewable energy successful?”

4. Relevant and Significant

  • The question should address an important issue or problem in the field of study and contribute to knowledge or practice.
  • Example: “How does climate change affect agricultural productivity in developing countries?” rather than “What is climate change?”

5. Feasible and Practical

  • The question should be feasible to answer within the constraints of time, resources, and access to information.
  • Example: “What are the challenges faced by remote workers in the tech industry during the COVID-19 pandemic?” rather than “What are the challenges of remote work?”

6. Original and Novel

  • The question should offer a new perspective or explore an area that has not been extensively studied.
  • Example: “How do virtual reality technologies influence empathy in healthcare training?” rather than “What is virtual reality?”

7. Ethical

  • The question should be framed in a way that ensures the research can be conducted ethically.
  • Example: “What are the impacts of privacy laws on consumer data protection in the digital age?” rather than “How can we collect personal data more effectively?”

8. Open-Ended

  • The question should encourage detailed responses and exploration, rather than limiting answers to a simple yes or no.
  • Example: “In what ways do cultural differences affect communication styles in multinational companies?” rather than “Do cultural differences affect communication?”

9. Aligned with Research Goals

  • The question should align with the overall objectives of the research project or study.
  • Example: “How do early childhood education programs influence long-term academic achievement?” if the goal is to understand educational impacts.

10. Based on Prior Research

  • The question should build on existing literature and research, identifying gaps or new angles to explore.
  • Example: “What strategies have proven effective in reducing urban air pollution in European cities?” after reviewing current studies on air pollution strategies.

Benefits of Research Questions

Research questions are fundamental to the research process and offer numerous benefits, which include the following:

1. Guides the Research Process

A well-defined research question provides a clear focus and direction for your study. It helps in determining what data to collect, how to collect it, and how to analyze it.

Benefit: Ensures that the research stays on track and addresses the specific issue at hand.

2. Clarifies the Purpose of the Study

Research questions help to articulate the purpose and objectives of the study. They make it clear what the researcher intends to explore, describe, compare, or test.

Benefit: Helps in communicating the goals and significance of the research to others, including stakeholders and funding bodies.

3. Determines the Research Design

The type of research question informs the research design, including the choice of methodology, data collection methods, and analysis techniques.

Benefit: Ensures that the chosen research design is appropriate for answering the specific research question, enhancing the validity and reliability of the results.

4. Enhances Literature Review

A well-crafted research question provides a framework for conducting a thorough literature review. It helps in identifying relevant studies, theories, and gaps in existing knowledge.

Benefit: Facilitates a comprehensive understanding of the topic and ensures that the research is grounded in existing literature.

5. Focuses Data Collection

Research questions help in identifying the specific data needed to answer them. This focus prevents the collection of unnecessary data and ensures that all collected data is relevant to the study.

Benefit: Increases the efficiency of data collection and analysis, saving time and resources.

6. Improves Data Analysis

Having a clear research question aids in the selection of appropriate data analysis methods. It helps in determining how the data will be analyzed to draw meaningful conclusions.

Benefit: Enhances the accuracy and relevance of the findings, making them more impactful.

7. Facilitates Hypothesis Formation

In quantitative research, research questions often lead to the development of hypotheses that can be tested statistically.

Benefit: Provides a basis for hypothesis testing, which is essential for establishing cause-and-effect relationships.

8. Supports Result Interpretation

Research questions provide a lens through which the results of the study can be interpreted. They help in understanding what the findings mean in the context of the research objectives.

Benefit: Ensures that the conclusions drawn from the research are aligned with the original aims and objectives.

9. Enhances Reporting and Presentation

A clear research question makes it easier to organize and present the research findings. It helps in structuring the research report or presentation logically.

Benefit: Improves the clarity and coherence of the research report, making it more accessible and understandable to the audience.

10. Encourages Critical Thinking

Formulating research questions requires critical thinking and a deep understanding of the subject matter. It encourages researchers to think deeply about what they want to investigate and why.

Benefit: Promotes a more thoughtful and analytical approach to research, leading to more robust and meaningful findings.

How to Write a Research Question

Crafting a strong research question is crucial for guiding your study effectively. Follow these steps to write a clear and focused research question:

Identify a Broad Topic:

Start with a general area of interest that you are passionate about or that is relevant to your field. Example: “Climate change”

Conduct Preliminary Research:

Explore existing literature and studies to understand the current state of knowledge and identify gaps. Example: “Impact of climate change on agriculture”

Narrow Down the Topic:

Focus on a specific aspect or issue within the broad topic to make the research question more manageable. Example: “Effect of climate change on crop yields”

Consider the Scope:

Ensure the question is neither too broad nor too narrow. It should be specific enough to be answerable but broad enough to allow for thorough exploration. Example: “How does climate change affect corn crop yields in the Midwest United States?”

Determine the Research Type:

Decide whether your research will be descriptive, comparative, relational, or causal, as this will shape your question. Example: “How does climate change affect corn crop yields in the Midwest United States over the past decade?”

Formulate the Question:

Write a clear, concise question that specifies the variables, population, and context. Example: “What is the impact of increasing temperatures and changing precipitation patterns on corn crop yields in the Midwest United States from 2010 to 2020?”

Ensure Feasibility:

Make sure the question can be answered within the constraints of your resources, time, and data availability. Example: “How have corn crop yields in the Midwest United States been affected by climate change-related temperature increases and precipitation changes between 2010 and 2020?”

Review and Refine:

Evaluate the question for clarity, focus, and relevance. Revise as necessary to ensure it is well-defined and researchable. Example: “What are the specific impacts of temperature increases and changes in precipitation patterns on corn crop yields in the Midwest United States from 2010 to 2020?”

What is a research question?

A research question is a specific query guiding a study’s focus and objectives, shaping its methodology and analysis.

Why is a research question important?

It provides direction, defines scope, ensures relevance, and guides the methodology of the research.

How do you formulate a research question?

Identify a topic, narrow it down, conduct preliminary research, and ensure it is clear, focused, and researchable.

What makes a good research question?

Clarity, specificity, feasibility, relevance, and the ability to guide the research effectively.

Can a research question change?

Yes, it can evolve based on initial findings, further literature review, and the research process.

What is the difference between a research question and a hypothesis?

A research question guides the study; a hypothesis is a testable prediction about the relationship between variables.

How specific should a research question be?

It should be specific enough to provide clear direction but broad enough to allow for comprehensive investigation.

What are examples of good research questions?

Examples include: “How does social media affect academic performance?” and “What are the impacts of climate change on agriculture?”

Can a research question be too broad?

Yes. An overly broad question can make the research unfocused and difficult to address comprehensively.

What role does a research question play in literature reviews?

It helps identify relevant studies, guides the search for literature, and frames the review’s focus.


Organizing Your Social Sciences Research Paper

Glossary of Research Terms


This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research. Also included are common words and phrases defined within the context of how they apply to research in the social and behavioral sciences.

  • Acculturation -- refers to the process of adapting to another culture, particularly in reference to blending in with the majority population [e.g., an immigrant adopting American customs]. However, acculturation also implies that both cultures add something to one another, but still remain distinct groups unto themselves.
  • Accuracy -- a term used in survey research to refer to the match between the target population and the sample.
  • Affective Measures -- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions.
  • Aggregate -- a total created from smaller units. For instance, the population of a county is an aggregate of the populations of the cities, rural areas, etc. that comprise the county. As a verb, it refers to total data from smaller units into a large unit.
  • Anonymity -- a research condition in which no one, including the researcher, knows the identities of research participants.
  • Baseline -- a control measurement carried out before an experimental treatment.
  • Behaviorism -- school of psychological thought concerned with the observable, tangible, objective facts of behavior, rather than with subjective phenomena such as thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the study of mental states such as feelings and fantasies to the extent that they can be directly observed and measured.
  • Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.
  • Benchmarking -- systematically measuring and comparing the operations and outcomes of organizations, systems, processes, etc., against agreed upon "best-in-class" frames of reference.
  • Bias -- a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.
  • Case Study -- the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.
  • Causal Hypothesis -- a statement hypothesizing that the independent variable affects the dependent variable in some way.
  • Causal Relationship -- the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
  • Causality -- the relation between cause and effect.
  • Central Tendency -- any way of describing or characterizing typical, average, or common values in some distribution.
  • Chi-square Analysis -- a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio.
  • Claim -- a statement, similar to a hypothesis, which is made in response to the research question and that is affirmed with evidence based on research.
  • Classification -- ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.
  • Cluster Analysis -- a method of statistical analysis where data that share a common trait are grouped together. The data is collected in a way that allows the data collector to group data according to certain characteristics.
  • Cohort Analysis -- group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].
  • Confidentiality -- a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original consent agreement, unless permission is granted by the participant.
  • Confirmability Objectivity -- the findings of the study could be confirmed by another person conducting the same study.
  • Construct -- refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.
  • Construct Validity -- seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
  • Constructivism -- the idea that reality is socially constructed. It is the view that reality cannot be understood outside of the way humans interact, and that knowledge is constructed rather than discovered. Constructivists believe that learning is more active and self-directed than either behaviorism or cognitive theory would postulate.
  • Content Analysis -- the systematic, objective, and quantitative description of the manifest or latent content of print or nonprint communications.
  • Context Sensitivity -- awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
  • Control Group -- the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.
  • Controlled Experiment -- an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].
  • Correlation -- a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to +1.00. Also, a non-cause and effect relationship between two variables.
  • Covariate -- a product of the correlation of two related variables times their standard deviations. Used in true experiments to measure the difference of treatment between them.
  • Credibility -- a researcher's ability to demonstrate that the object of a study is accurately identified and described based on the way in which the study was conducted.
  • Critical Theory -- an evaluative approach to social science research, associated with Germany's neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.
  • Data -- factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.
  • Data Mining -- the process of analyzing data from different perspectives and summarizing it into useful information, often to discover patterns and/or systematic relationships among variables.
  • Data Quality -- this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and  reliable [dependable].
  • Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises.
  • Dependability -- being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
  • Dependent Variable -- a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.
  • Deviation -- the distance between the mean and a particular data point in a given distribution.
  • Discourse Community -- a community of scholars and researchers in a given field who respond to and communicate to each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
  • Discrete Variable -- a variable that is measured solely in whole units, such as, gender and number of siblings.
  • Distribution -- the range of values of a particular variable.
  • Effect Size -- the amount of change in a dependent variable that can be attributed to manipulations of the independent variable. A large effect size exists when the value of the dependent variable is strongly influenced by the independent variable. It is the mean difference on a variable between experimental and control groups divided by the standard deviation on that variable of the pooled groups or of the control group alone.
  • Emancipatory Research -- research is conducted on and with people from marginalized groups or communities. It is led by a researcher or research team who is either an indigenous or external insider; is interpreted within intellectual frameworks of that group; and, is conducted largely for the purpose of empowering members of that community and improving services for them. It also engages members of the community as co-constructors or validators of knowledge.
  • Empirical Research -- the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.
  • Epistemology -- concerns knowledge construction; asks what constitutes knowledge and how knowledge is validated.
  • Ethnography -- method to study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through immersion into the culture or group. Research is completed through various methods but, since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
  • Expectancy Effect -- any unconscious or conscious cues that convey to the participant in a study how the researcher wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Expectancy effects can be minimized by using standardized interactions with subjects, automated data-gathering methods, and double blind protocols.
  • External Validity -- the extent to which the results of a study are generalizable or transferable.
  • Factor Analysis -- a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
  • Field Studies -- academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.
  • Focus Groups -- small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.
  • Framework -- the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.
  • Generalizability -- the extent to which findings and conclusions from a study conducted on a specific sample or setting can be applied to other groups, situations, or the population at large.
  • Grey Literature -- research produced by organizations outside of commercial and academic publishing that publish materials, such as, working papers, research reports, and briefing papers.
  • Grounded Theory -- practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
  • Group Behavior -- behaviors of a group as a whole, as well as the behavior of an individual as influenced by his or her membership in a group.
  • Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.
  • Independent Variable -- the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. In the earlier example of "gender" and "academic major," (see Dependent Variable) gender is the independent variable.
  • Individualism -- a theory or policy having primary regard for the liberty, rights, or independent actions of individuals.
  • Inductive -- a form of reasoning in which a generalized conclusion is formulated from particular instances.
  • Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.
  • Insiderness -- a concept in qualitative research that refers to the degree to which a researcher has access to and an understanding of persons, places, or things within a group or community based on being a member of that group or community.
  • Internal Consistency -- the extent to which all questions or items assess the same characteristic, skill, or quality.
  • Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
  • Life History -- a record of an event/events in a respondent's life told [written down, but increasingly audio or video recorded] by the respondent from his/her own perspective in his/her own words. A life history is different from a "research story" in that it covers a longer time span, perhaps a complete life, or a significant period in a life.
  • Margin of Error -- the permissible or acceptable deviation from the target or a specific value; the allowance for slight error, miscalculation, or changing circumstances in a study. In survey research it is typically reported alongside a confidence level (see the worked sketch following this glossary).
  • Measurement -- process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.
  • Meta-Analysis -- an analysis combining the results of several studies that address a set of related hypotheses.
  • Methodology -- a theory or analysis of how research does and should proceed.
  • Methods -- systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.
  • Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. It is also referred to as blended methods, combined methods, or methodological triangulation.
  • Modeling -- the creation of a physical or computer analogy to understand a particular phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomenon.
  • Models -- representations of objects, principles, processes, or ideas often used for imitation or emulation.
  • Naturalistic Observation -- observation of behaviors and events in natural settings without experimental manipulation or other forms of interference.
  • Norm -- the norm in statistics is the average or usual performance. For example, students usually complete their high school graduation requirements when they are 18 years old. Even though some students graduate when they are younger or older, the norm is that any given student will graduate when he or she is 18 years old.
  • Null Hypothesis -- the proposition, to be tested statistically, that the experimental intervention has "no effect," meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.
  • Ontology -- a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.
  • Panel Study -- a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.
  • Participant -- individuals whose physiological and/or behavioral characteristics and responses are the object of study in a research project.
  • Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. This is standard procedure in publishing scholarly research.
  • Phenomenology -- a qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
  • Philosophy -- critical examination of the grounds for fundamental beliefs and analysis of the basic concepts, doctrines, or practices that express such beliefs.
  • Phonology -- the study of the ways in which speech sounds form systems and patterns in language.
  • Policy -- governing principles that serve as guidelines or rules for decision making and action in a given area.
  • Policy Analysis -- systematic study of the nature, rationale, cost, impact, effectiveness, implications, etc., of existing or alternative policies, using the theories and methodologies of relevant social science disciplines.
  • Population -- the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.
  • Position Papers -- statements of official or organizational viewpoints, often recommending a particular course of action or response to a situation.
  • Positivism -- a doctrine in the philosophy of science, positivism argues that science can only deal with observable entities known directly to experience. The positivist aims to construct general laws, or theories, which express relationships between phenomena. Observation and experiment is used to show whether the phenomena fit the theory.
  • Predictive Measurement -- use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.
  • Principal Investigator -- the scientist or scholar with primary responsibility for the design and conduct of a research project.
  • Probability -- the chance that a phenomenon will occur randomly. As a statistical measure, it is shown as p [the "p" factor].
  • Questionnaire -- structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.
  • Random Sampling -- a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
  • Reliability -- the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.
  • Representative Sample -- sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.
  • Rigor -- degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.
  • Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical, generally the larger the number in the sample, the higher the likelihood of a representative distribution of the population.
  • Sampling Error -- the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondent and the corresponding reduction in reliability.
  • Saturation -- a situation in which data analysis begins to reveal repetition and redundancy and when new data tend to confirm existing findings rather than expand upon them.
  • Semantics -- the relationship between symbols and meaning in a linguistic system. Also, the cuing system that connects what is written in the text to what is stored in the reader's prior knowledge.
  • Social Theories -- theories about the structure, organization, and functioning of human societies.
  • Sociolinguistics -- the study of language in society and, more specifically, the study of language varieties, their functions, and their speakers.
  • Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
  • Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.
  • Statistical Bias -- characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.
  • Statistical Significance -- the probability that the difference between the outcomes of the control and experimental group are great enough that it is unlikely due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].
  • Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., there is a 95% probability that the differences are real] sufficiently determines significance.
  • Subcultures -- ethnic, regional, economic, or social groups exhibiting characteristic patterns of behavior sufficient to distinguish them from the larger society to which they belong.
  • Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.
  • Theory -- a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.
  • Treatment -- the intervention or stimulus administered to participants in an experiment; that is, the manipulation of the independent variable whose effect on the dependent variable is then measured.
  • Trend Samples -- method of sampling different groups of people at different points in time from the same population.
  • Triangulation -- a multi-method or pluralistic approach, using different methods in order to focus on the research topic from different viewpoints and to produce a multi-faceted set of data. Also used to check the validity of findings from any one method.
  • Unit of Analysis -- the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.
  • Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
  • Variable -- any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].
  • Weighted Scores -- scores in which the components are modified by different multipliers to reflect their relative importance.
  • White Paper -- an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.

Elliot, Mark, Fairweather, Ian, Olsen, Wendy Kay, and Pampaka, Maria. A Dictionary of Social Research Methods. Oxford, UK: Oxford University Press, 2016; Free Social Science Dictionary. Socialsciencedictionary.com [2008]; Glossary. Institutional Review Board. Colorado College; Glossary of Key Terms. Writing@CSU. Colorado State University; Glossary A-Z. Education.com; Glossary of Research Terms. Research Mindedness Virtual Learning Resource. Centre for Human Service Technology. University of Southampton; Miller, Robert L. and Brewer, John D. The A-Z of Social Research: A Dictionary of Key Social Science Research Concepts. London: SAGE, 2003; Jupp, Victor. The SAGE Dictionary of Social and Cultural Research Methods. London: SAGE, 2006.
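
To make a few of the statistical terms above more concrete, here is a minimal, self-contained Python sketch that computes a sample mean, standard deviation, Pearson correlation, and a survey margin of error on made-up Likert-style questionnaire data. It is an illustration only: the data, variable names, sample size, and the 95% confidence level are assumptions for the example, not part of the USC guide.

```python
# Minimal sketch (not from the USC guide): computing a few of the statistics
# defined above on made-up Likert-style questionnaire data.
import math
import random
import statistics

# Hypothetical population of 5,000 satisfaction scores (1-5 Likert points).
random.seed(42)
population = [random.randint(1, 5) for _ in range(5000)]

# Random Sampling: draw a simple random sample of n respondents by chance.
n = 400
sample = random.sample(population, n)

# Central Tendency and Standard Deviation of the sample.
mean_score = statistics.mean(sample)
std_dev = statistics.stdev(sample)  # sample standard deviation

# Correlation (Pearson's r) between two paired variables, e.g. satisfaction
# and a hypothetical loyalty score for the same respondents.
loyalty = [min(5, max(1, s + random.choice([-1, 0, 0, 1]))) for s in sample]
mean_loyalty = statistics.mean(loyalty)
covariance_sum = sum(
    (x - mean_score) * (y - mean_loyalty) for x, y in zip(sample, loyalty)
)
r = covariance_sum / math.sqrt(
    sum((x - mean_score) ** 2 for x in sample)
    * sum((y - mean_loyalty) ** 2 for y in loyalty)
)

# Margin of Error for a proportion (share of respondents rating 4 or 5),
# using the normal approximation at a 95% confidence level (z = 1.96).
p = sum(1 for s in sample if s >= 4) / n
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"mean = {mean_score:.2f}, standard deviation = {std_dev:.2f}")
print(f"Pearson r (satisfaction vs. loyalty) = {r:.2f}")
print(f"top-two-box proportion = {p:.2%} +/- {margin_of_error:.2%}")
```

Only the standard library (random, statistics, math) is used, so the snippet can be run as-is; in practice an analyst would usually reach for a statistics package, but the underlying formulas are the same ones the glossary entries describe.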


Identifying cloud internet of things requirements in healthcare: a Delphi-based study

  • Published: 30 May 2024


  • Leila Gholamhosseini
  • Farahnaz Sadoughi
  • Sorayya Rezayi
  • Somayeh Nasiri

The Internet of Things (IoT) and cloud computing are emerging technologies whose value has been demonstrated in many areas of medicine and healthcare, and their positive impact on the performance of medical centers is well documented. The purpose of the present study was therefore to investigate the technological requirements of the Cloud Internet of Things (CIoT) in the healthcare industry. This was a quantitative study conducted in two rounds using the Delphi method, with non-random, purposive, criterion-based sampling. To be included, participants had to be faculty members holding a master’s degree or specialized doctorate, have at least 5 years of work experience, and have completed at least one paper or research project in the IoT, cloud computing, or CIoT fields. The research sample comprised 15 participants in the first Delphi round and 12 in the second round who answered all of the questionnaire’s questions. Data were gathered with a questionnaire organized into four main parts comprising 14 axes and 48 questions; after the first Delphi round, each axis was scored on a five-point Likert scale, and the requirements for implementing CIoT in healthcare were determined by the collective agreement of the experts.

Functional requirements of the network layer and cloud computing, platform layer requirements, and components of quality requirements were extracted, and the architectures of the application, cloud, server, network, and user layers were explored in depth. Communication technologies, equipment types, analysis and management, wireless networks and sensors, secure data exchange and management, deployment details, and approaches to security management and control emerged as the main infrastructural elements identified during the Delphi rounds. The findings showed that the use of wireless local area networks was rated as more important than any other network layer component, while compliance with security principles emerged as the most crucial component of the platform layer. The use of hybrid clouds and the availability of software as a service were also found to be important. All the experts emphasized the need for application development tools, cloud services, development and deployment on platform as a service, data analysis, and application programming interfaces among the platform layer requirements, and rated secure information exchange, secure protocol communication, key exchange mechanisms, and maintaining information confidentiality as crucial for secure data exchange.

Based on the findings, implementing this technology can improve the quality of clinical services and increase responsiveness and patient satisfaction. By identifying the technological requirements of CIoT, a suitable model can be designed for healthcare.
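
As a rough illustration of how Delphi-style questionnaire data of this kind can be analyzed, the sketch below aggregates hypothetical five-point Likert ratings from a panel of experts and checks each item against a common consensus convention (at least 75% of experts rating it 4 or 5, with a median of at least 4). The item names, ratings, panel size, and thresholds are assumptions for the example; they are not the data or criteria reported by the study above.

```python
# Illustrative only: aggregating hypothetical five-point Likert ratings from a
# Delphi round to gauge expert consensus. The consensus rule used here
# (>= 75% of experts rating an item 4 or 5 and a median of at least 4) is a
# common convention, not the criterion reported by the study summarized above.
from statistics import median

# Hypothetical ratings: item -> one rating (1-5) per expert on the panel.
round_ratings = {
    "Use of wireless local area networks": [5, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5, 5],
    "Compliance with security principles": [5, 4, 5, 5, 4, 4, 5, 5, 5, 4, 5, 5],
    "Use of hybrid clouds":                [4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 3],
}

def consensus(ratings, agree_share=0.75, median_floor=4):
    """Return (reached, share_agreeing, median_rating) for one item."""
    share = sum(1 for r in ratings if r >= 4) / len(ratings)
    med = median(ratings)
    return share >= agree_share and med >= median_floor, share, med

for item, ratings in round_ratings.items():
    reached, share, med = consensus(ratings)
    status = "consensus reached" if reached else "re-rate in the next round"
    print(f"{item}: {share:.0%} agree, median {med} -> {status}")
```

Items that fail the rule would typically be fed back to the panel, together with the group statistics, for re-rating in the next Delphi round.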



Funding: This study was financially supported by the Iran University of Medical Sciences.

Author information

Authors and Affiliations

Department of Health Information Management, School of Paramedical Sciences, AJA University of Medical Sciences, Tehran, Iran

Leila Gholamhosseini & Sorayya Rezayi

Department of Health Information Management, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran

Farahnaz Sadoughi & Somayeh Nasiri


Contributions

LG and FS contributed conceptualization, methodology, and software. LG, FS, SR, and SN were involved in data curation and writing (original draft preparation). LG, FS, SR, and SN performed visualization and investigation. LG and FS were responsible for supervision.

Corresponding authors

Correspondence to Farahnaz Sadoughi or Sorayya Rezayi.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest in this research.

Compliance with ethical guidelines

In the present study, all ethical considerations in the research, including honesty and trustworthiness in handling the experts’ information, were observed. This article is part of a larger research project entitled “Integration of Internet of Things and Cloud Computing in Healthcare Systems,” approved by the Iran University of Medical Sciences (IUMS) in 2018. The ethics code for this study was obtained from the National Committee of Ethics in Biomedical Research under the number IR.IUMS.REC.1397.379.

Ethics approval and consent to participate statement

All methods were carried out in accordance with relevant guidelines and regulations. The methodology for this study was approved by the Ethics committee of the Iran University of Medical Sciences.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Gholamhosseini, L., Sadoughi, F., Rezayi, S. et al. Identifying cloud internet of things requirements in healthcare: a Delphi-based study. J Supercomput (2024). https://doi.org/10.1007/s11227-024-06253-z


Accepted: 18 May 2024

Published: 30 May 2024

DOI: https://doi.org/10.1007/s11227-024-06253-z


Keywords

  • Internet of Things
  • Cloud computing
  • Functional requirements
  • Quality requirements


Use of Menthol-Flavored Tobacco Products Among US Middle and High School Students: National Youth Tobacco Survey, 2022

ORIGINAL RESEARCH — Volume 21 — May 30, 2024

Monica E. Cornelius, PhD1; Andrea S. Gentzke, PhD1; Caitlin G. Loretan, MPH1; Nikki A. Hawkins, PhD1; Ahmed Jamal, MBBS1

Suggested citation for this article: Cornelius ME, Gentzke AS, Loretan CG, Hawkins NA, Jamal A. Use of Menthol-Flavored Tobacco Products Among US Middle and High School Students: National Youth Tobacco Survey, 2022. Prev Chronic Dis 2024;21:230305. DOI: http://dx.doi.org/10.5888/pcd21.230305.

PEER REVIEWED


What is already known on this topic?

Middle and high school students who currently use tobacco products report using a variety of flavors, including menthol.

What is added by this report?

In 2022, 23.8% of students who currently used any tobacco product and 39.5% who currently used flavored tobacco products reported using menthol-flavored tobacco products. Students who exhibited characteristics of addiction to tobacco product use had a higher prevalence of menthol-flavored tobacco product use.

What are the implications for public health practice?

Menthol and other characterizing flavors or additives in tobacco products may contribute to first-time tobacco use and sustained use among young people. Understanding this association can inform public health policy aimed at preventing and reducing tobacco product use in this population.

Menthol cigarettes have been associated with increased smoking initiation. Although numerous studies have focused on correlates of menthol cigarette smoking among youths, fewer studies have assessed the prevalence and correlates of overall menthol-flavored tobacco product use among middle and high school students.

We analyzed 2022 National Youth Tobacco Survey data to estimate the prevalence of menthol-flavored tobacco product use among US middle and high school students who used tobacco products within the past 30 days. Characteristics associated with menthol-flavored tobacco product use were also examined.

Use of menthol-flavored tobacco products was reported by 23.8% of students who currently used any tobacco product and by 39.5% of students who currently used any flavored tobacco product. Among students who reported past 30-day use of a flavored tobacco product, characteristics associated with a higher prevalence of menthol-flavored tobacco product use included non-Hispanic White race and ethnicity, frequent tobacco product use, use of multiple tobacco products, wanting to use a tobacco product within the first 30 minutes of awakening, and craving tobacco products within the past 30 days.

Unlike results of prior research focused on cigarette smoking among young people, prevalence of use of any menthol-flavored tobacco product was highest among non-Hispanic White youths. Any use of menthol-flavored tobacco products of any type (alone or in combination with other flavors) among young people may be associated with continued product use and symptoms of dependence.

Introduction

Menthol, an additive in commercial tobacco products, creates a cooling sensation when inhaled (1–3). Menthol has both flavor and sensation properties (1–3). The effects of menthol can make cigarette smoke or e-cigarette aerosol seem less irritating and can enhance the product-user’s experience (1–4). Menthol flavoring is not limited to cigarettes and e-cigarettes; most types of commercial tobacco products are available in menthol flavor (3). Menthol cigarettes have been associated with increased smoking initiation, nicotine dependence, and lower smoking cessation success (1,3,5). Results from modeling studies suggest that prohibiting menthol cigarettes in the US could result in a 15% reduction in smoking prevalence and prevent an estimated 324,000 to 654,000 deaths over the next 40 years (6–8).

Disparities among population groups that use menthol cigarettes are well-documented. Marketing directed at certain population groups has been associated with a higher prevalence of menthol cigarette smoking in these groups (1,3,9,10). Population groups most likely to smoke menthol cigarettes are non-Hispanic Black people, women, sexual minority groups, people identifying as transgender, people residing in low-income communities, people with mental health conditions, youths, and young adults (3).

Smoking initiation usually begins in adolescence (4) when use of nicotine can have negative consequences on brain development and may increase the risk for nicotine dependence (11). Middle and high school students often use a variety of commercial tobacco products available in flavors, including menthol (12). E-cigarettes are the most commonly used tobacco product among middle and high school students — with 9.4% reporting e-cigarette use in 2022 — followed by cigars (1.9%) and cigarettes (1.6%) (12,13). Almost 4 of 5 (79.1%) middle and high school students who reported current use of 1 or more tobacco products used a flavored tobacco product (12). Furthermore, among middle and high school students who currently used any flavored tobacco product, 38.8% reported smoking menthol cigarettes (12). Non-Hispanic Black, Hispanic, and female middle and high school students have reported a higher prevalence of smoking menthol cigarettes (14).

Although numerous studies have focused on correlates of menthol cigarette smoking among youths, fewer studies have assessed the prevalence of using both cigarette and noncigarette menthol-flavored tobacco products in this population (14,15). Such information is important because, although the prevalence of cigarette smoking among youths has declined, use of e-cigarettes has increased, and new tobacco product types (eg, heated tobacco products) continue to become available (13,14). To examine whether previously observed characteristics associated with menthol cigarette smoking (eg, higher prevalence among Black, Hispanic, and female adolescent populations) are similar for use of any menthol-flavored tobacco product among adolescents, our study 1) provides updated estimates of menthol-flavored tobacco product use among middle and high school students and 2) assesses correlates of use of any menthol-flavored tobacco products in this population. Assessing correlates of menthol-flavored tobacco product use among youths can also help identify populations that may benefit from public health strategies that address flavored tobacco products as part of efforts to reduce tobacco product use by young people.

Data sample

We analyzed data from the 2022 National Youth Tobacco Survey (NYTS), a cross-sectional, school-based, voluntary, self-administered survey of US middle and high school students in grades 6 to 12 (12,13). A stratified 3-stage cluster sampling procedure generated a nationally representative sample of US students attending public and private schools (16). We collected data from January through May 2022 from 28,291 middle and high school students (overall response rate: 45.2%) by using a web-based survey with 99.3% of respondents completing the survey on a school campus. The analytic sample consisted of middle and high school students who reported use of 1 or more tobacco products within the past 30 days. The 2022 NYTS was approved by the institutional review boards of the data collectors, the CDC institutional review board (45 C.F.R. part 46; 21 C.F.R. part 56), and the Office of Management and Budget.

We assessed current use of menthol-flavored tobacco products among students who indicated past 30-day use of any tobacco product (use of ≥1 tobacco products: e-cigarettes, cigarettes, cigars, smokeless tobacco [chewing tobacco, snuff, dip, snus], dissolvable tobacco products, waterpipes or hookahs, pipe tobacco, bidis, heated tobacco products, or nicotine pouches). We also assessed use of menthol-flavored tobacco products among students who indicated past 30-day use of any flavored tobacco products. Menthol-flavored tobacco product use was defined as using any menthol-flavored tobacco product within the past 30 days, regardless of whether other flavors of tobacco products were used. Responses of “yes” to questions about flavored tobacco product use and “menthol” to the type(s) of flavor used were categorized as menthol-flavored tobacco use. For cigarettes, respondents who, within the past 30 days 1) indicated using only 1 cigarette brand and indicated that the brand was a menthol-flavored brand (Kool, Newport), 2) responded that they smoked Kool or Newport brands to the question “During the past 30 days, what brand of cigarettes did you usually smoke? (Choose only one answer)” (asked among respondents who used multiple brands in the past 30 days), or 3) who answered yes to “During the past 30 days, were the cigarettes that you usually smoked menthol?” were considered to have used menthol-flavored tobacco products (12). Students indicating no use of menthol-flavored tobacco products were categorized as using nonmenthol tobacco products.
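To make this classification concrete, here is a minimal pandas sketch of the menthol flag described above. It is not the authors' code, and the column names (used_flavored, flavor_types, cig_usual_brand, cig_usual_menthol) are hypothetical placeholders rather than actual NYTS variable names.

```python
import pandas as pd

MENTHOL_CIG_BRANDS = {"Kool", "Newport"}

def flag_menthol_use(df: pd.DataFrame) -> pd.Series:
    """True where a respondent is classified as using a menthol-flavored
    tobacco product in the past 30 days, per the rules described above.
    Column names are hypothetical placeholders, not NYTS variables."""
    # Any flavored product with "menthol" among the reported flavor types.
    flavored_menthol = (
        df["used_flavored"].eq("yes")
        & df["flavor_types"].str.contains("menthol", case=False, na=False)
    )
    # Cigarette-specific rules: usual (or only) brand is Kool or Newport,
    # or the cigarettes usually smoked were reported as menthol.
    cigarette_menthol = (
        df["cig_usual_brand"].isin(MENTHOL_CIG_BRANDS)
        | df["cig_usual_menthol"].eq("yes")
    )
    return flavored_menthol | cigarette_menthol

# df["menthol_use"] = flag_menthol_use(df)
```

Respondents who used any tobacco product but for whom this flag is False would fall into the nonmenthol group.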

Among students who used a flavored tobacco product in the past 30 days, tobacco product use was categorized as follows: 1) e-cigarettes only; 2) combustible tobacco products (cigarettes, cigars, bidis, hookahs, or pipes) only; 3) other tobacco products (smokeless tobacco products [chewing tobacco, snuff, dip, snus], dissolvables, heated tobacco products, or nicotine pouches) only; and 4) any combination of the preceding 3 categories.
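A sketch of the four mutually exclusive product-group assignments follows; the per-product indicator columns are again illustrative placeholders, assuming one boolean column per product type.

```python
import pandas as pd

ECIG_COLS = ["used_ecig"]
COMBUSTIBLE_COLS = ["used_cigarette", "used_cigar", "used_bidi", "used_hookah", "used_pipe"]
OTHER_COLS = ["used_smokeless", "used_dissolvable", "used_heated", "used_pouch"]

def product_group(row: pd.Series) -> str:
    """Assign one of the four mutually exclusive product-use groups for a
    respondent who used a flavored tobacco product in the past 30 days."""
    def any_used(cols):
        return any(bool(row.get(c, False)) for c in cols)

    e, c, o = any_used(ECIG_COLS), any_used(COMBUSTIBLE_COLS), any_used(OTHER_COLS)
    if e and not (c or o):
        return "e-cigarettes only"
    if c and not (e or o):
        return "combustible products only"
    if o and not (e or c):
        return "other products only"
    if e or c or o:
        return "any combination"
    return "no product reported"  # should not occur in the analytic sample

# df["product_group"] = df.apply(product_group, axis=1)
```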

Covariates examined included sex (male/female), race and ethnicity (Hispanic, non-Hispanic Black, non-Hispanic White, non-Hispanic Other), sexual orientation (heterosexual, lesbian, gay, bisexual, not sure), transgender identity (yes, no, not sure, don’t know what question is asking), family affluence (scores of low [0–5], medium [6–7], high [8–9] on a 4-item scale), tobacco product advertising exposure (yes [most of the time/always/sometimes], no [rarely/none]), frequent use (≥20 of the past 30 days) of a tobacco product, use of multiple tobacco products (≥2 products), time to wanting to use a tobacco product after awakening (<30 minutes, ≥30 minutes), craving tobacco products within the past 30 days (yes, no), past-year quit attempts, and quit intentions. Categorization of family affluence, advertising exposure, and cessation behaviors was consistent with previous analyses (12).
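The fixed cut points given above for family affluence, and the frequent-use and polytobacco indicators, can be expressed directly. The sketch below assumes hypothetical days_used_* and used_* columns, one per product; it is an illustration, not the survey's actual coding scheme.

```python
import pandas as pd

def affluence_category(score: float) -> str:
    """Map the summed 4-item family affluence score (range 0-9) to the
    low/medium/high categories given above."""
    if score <= 5:
        return "low"      # 0-5
    if score <= 7:
        return "medium"   # 6-7
    return "high"         # 8-9

def add_use_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Add frequent-use and multiple-product indicators. Assumes hypothetical
    per-product columns: days_used_<product> (0-30) and used_<product> (0/1)."""
    out = df.copy()
    days_cols = [c for c in out.columns if c.startswith("days_used_")]
    product_cols = [c for c in out.columns if c.startswith("used_")]
    # Frequent use: any single product used on 20 or more of the past 30 days.
    out["frequent_use"] = (out[days_cols] >= 20).any(axis=1)
    # Multiple products: two or more product types used in the past 30 days.
    out["multiple_products"] = out[product_cols].sum(axis=1) >= 2
    return out
```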

Respondents who indicated seeing advertisements or promotions for e-cigarettes, cigarettes, and other tobacco products “sometimes,” “most of the time,” or “always” on the internet, in newspapers or magazines, at a store (convenience store, supermarket, gas station, kiosk/storefront, or shopping center), or on television or streaming services were categorized as having been exposed to tobacco product advertising. Those who responded “never” or “rarely” were categorized as unexposed. Those who reported “I do not use the internet,” “I do not read newspapers or magazines,” “I never go to a convenience store, supermarket, or gas station,” or “I do not watch television or streaming services or go to the movies” were excluded.

Respondents who indicated 1 or more for the number of times they had stopped using all tobacco products for 1 day or longer because they were trying to quit were categorized as having a past-year quit attempt. Those who indicated “I did not try to quit all tobacco products during the past 12 months” were categorized as not having made a past-year quit attempt. Respondents who indicated they were seriously thinking about quitting the use of all tobacco products were categorized as having quit intentions; those that responded “No, I am not thinking about quitting the use of all tobacco products” were categorized as not having quit intentions.
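The exposure and cessation recodes follow the same pattern, with the "I do not use this source" and missing responses set aside as excluded. The response strings below are taken from the text above; everything else is an illustrative assumption.

```python
import numpy as np

EXPOSED = {"sometimes", "most of the time", "always"}
UNEXPOSED = {"never", "rarely"}
NO_QUIT_ATTEMPT = "I did not try to quit all tobacco products during the past 12 months"
NO_QUIT_INTENT = "No, I am not thinking about quitting the use of all tobacco products"

def ad_exposure(response):
    """True = exposed, False = unexposed, NaN = excluded (does not use the source)."""
    if not isinstance(response, str):
        return np.nan
    r = response.strip().lower()
    if r in EXPOSED:
        return True
    if r in UNEXPOSED:
        return False
    return np.nan  # "I do not use the internet", etc.

def past_year_quit_attempt(response):
    """Any response other than the explicit 'did not try to quit' option
    counts as one or more quit attempts; missing values stay missing."""
    if not isinstance(response, str):
        return np.nan
    return response.strip() != NO_QUIT_ATTEMPT

def quit_intention(response):
    """Any of the 'Yes ...' options counts as intending to quit."""
    if not isinstance(response, str):
        return np.nan
    return response.strip() != NO_QUIT_INTENT
```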

We computed the weighted prevalence and 95% CIs separately for menthol-flavored and nonmenthol-flavored tobacco product use among students who used 1) 1 or more tobacco products within the past 30 days (n = 3,334) and 2) 1 or more flavored tobacco products within the past 30 days (n = 2,020), overall and by sociodemographic characteristics, tobacco use characteristics, cessation behaviors, and advertising exposure. We also computed the weighted percentage of menthol use by type of tobacco product. Additionally, we computed the percentage of each characteristic by menthol and nonmenthol tobacco product use among students who used flavored tobacco products (n = 2,020), which is the primary focus of our study. Chi-square tests of independence were used to test for differences in the proportions of each characteristic among menthol- and nonmenthol-flavored tobacco product use, with a P value of <.05 indicating significance. Nested logistic regression models (unadjusted models and models adjusted for sex, racial or ethnic group, and grade level) were used to estimate associations between each characteristic of interest and current use of menthol-flavored tobacco products among students who used 1 or more flavored tobacco products within the past 30 days. Model-adjusted prevalence ratios (APRs) with predicted marginals and Wald χ 2 statistics were computed. Models were adjusted to control for confounding in the associations between each covariate of interest and menthol-flavored tobacco product use. All analyses were performed using SAS-callable SUDAAN software, version 11.0.3 (RTI International).
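The published estimates were produced with SAS-callable SUDAAN using the NYTS design variables (strata, primary sampling units, and weights). As a rough, design-naive illustration of the same kinds of quantities, the sketch below computes a weighted prevalence and an unadjusted prevalence ratio from a weight-adjusted Poisson working model with robust standard errors. Because it ignores the clustered, stratified design, its standard errors would be too small, and the wt column name is a placeholder.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def weighted_prevalence(df: pd.DataFrame, flag: str, weight: str = "wt") -> float:
    """Survey-weighted prevalence of a 0/1 indicator column."""
    return float(np.average(df[flag].astype(float), weights=df[weight]))

def crude_prevalence_ratio(df: pd.DataFrame, outcome: str, exposure: str,
                           weight: str = "wt") -> float:
    """Unadjusted prevalence ratio from a weight-adjusted Poisson working
    model with robust (HC0) standard errors, a common PR approximation.
    Unlike the SUDAAN analysis, this ignores strata and PSUs."""
    X = sm.add_constant(df[exposure].astype(float))
    fit = sm.GLM(df[outcome].astype(float), X,
                 family=sm.families.Poisson(),
                 var_weights=df[weight]).fit(cov_type="HC0")
    return float(np.exp(fit.params[exposure]))

# e.g., weighted share of menthol use among students who used flavored products:
# weighted_prevalence(flavored_users, "menthol_use")
```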

Prevalence of menthol-flavored and nonmenthol-flavored tobacco product use

Nonmenthol- and menthol-flavored tobacco product use among students who used any tobacco products. In 2022, 3.1 million middle and high school students (11.3%) reported currently using any tobacco product. Most of these students reported using nonmenthol tobacco products (76.2%), ranging from 56.0% (those indicating a time of wanting to use a tobacco product after awakening of <30 min) to 92.2% (non-Hispanic Black students) ( Table 1 ). Among middle and high school students who reported current use of any tobacco product, 23.8% (an estimated 730,000 students) reported use of a menthol-flavored tobacco product; prevalence of menthol-flavored tobacco product use was 25.6% among males and 22.2% among females ( Table 1 ). Prevalence of menthol-flavored tobacco product use by race or ethnicity ranged from 7.8% among non-Hispanic Black students to 30.1% among non-Hispanic White students. Prevalence was 19.6% among middle school students and 24.3% among high school students. Prevalence of menthol-flavored tobacco product use across sexual orientation categories ranged from 24.4% to 26.5%. Prevalence of menthol-flavored tobacco product use by transgender identity ranged from 20.5% among students who didn’t know what the question was asking to 37.7% among students who identified as transgender. Prevalence of menthol-flavored tobacco use among students with characteristics indicative of tobacco addiction (frequent use of tobacco, craving tobacco products, use of multiple tobacco products, and time after awakening to wanting to use a tobacco product) ranged from 38.0% to 44.0% compared with 13.8% to 23.5% among students who did not report characteristics indicative of tobacco addiction. Prevalence of menthol-flavored tobacco use was 26.5% among students with exposure to tobacco product advertising, 24.8% among students who intended to quit using all tobacco products, and 26.2% among students who reported a past-year quit attempt.

Nonmenthol- and menthol-flavored tobacco product use among students who used flavored tobacco products. Most students who currently used any flavored tobacco product reported using nonmenthol tobacco products (60.5%), ranging from 41.2% (those indicating “not sure” if they were transgender) to 84.5% (non-Hispanic Black students) ( Table 1 ). Among students who reported current use of a flavored tobacco product, 39.5% reported use of menthol-flavored tobacco products ( Table 1 ) ( Figure ). Among middle and high school students who currently used any flavored tobacco products, prevalence of menthol-flavored tobacco product use by sex was 43.7% among males and 35.9% among females ( Table 1 ). Prevalence of menthol-flavored tobacco product use ranged from 15.5% among non-Hispanic Black students to 47.1% among non-Hispanic White students. Among middle school students, the prevalence was 34.7% compared with 39.9% among high school students and ranged from 39.4% to 44.3% across sexual orientation categories. Prevalence of menthol-flavored tobacco product use by transgender identity ranged from 37.6% among those who identified as not transgender to 58.8% among those who were not sure. Prevalence of menthol-flavored tobacco use among students with characteristics indicative of addiction (craving tobacco products, use of multiple tobacco products, frequent use of tobacco, and time after awakening to wanting to use a tobacco product) ranged from 50.7% to 57.9% compared with 26.4% to 36.5% among students who did not report characteristics indicative of tobacco addiction. Prevalence of menthol-flavored tobacco use was 41.2% among students with exposure to tobacco product advertising, 38.3% among students who intended to quit using all tobacco products, and 40.6% among students who reported a past-year quit attempt.

Menthol-flavored tobacco use by type of flavored tobacco product. Approximately 53.9% of students who used a combination of types of flavored tobacco products, including e-cigarettes, combustible tobacco products, and other types of tobacco product, indicated use of at least 1 menthol-flavored tobacco product ( Figure ). Among students who exclusively used e-cigarettes, 30.6% reported using menthol-flavored products, and 29.6% of students who exclusively used combustible tobacco products reported using menthol-flavored products. The estimate for prevalence of use of menthol-flavored tobacco products among students who exclusively used other types of tobacco products was not statistically reliable and is not presented.

Characteristics of middle and high school students who use menthol- and nonmenthol-flavored tobacco products among students who use flavored tobacco products. Among students who used any flavored tobacco products, those who used menthol-flavored products differed from those who used nonmenthol-flavored products ( Table 2 ). Compared with students who used nonmenthol-flavored tobacco products, a higher proportion of students who used menthol-flavored tobacco products were male (50.4% among menthol vs 42.2% among nonmenthol, P = .04) or non-Hispanic White, Hispanic, or non-Hispanic Other (96.2% menthol vs 86.5% nonmenthol, P < .001, not shown in table). In contrast, compared with students who used nonmenthol-flavored products, a lower proportion of students who used menthol-flavored products were non-Hispanic Black (3.8% menthol vs 13.5% nonmenthol, P < .001). A higher proportion of students who used menthol-flavored tobacco products (compared with students who used nonmenthol-flavored products) used tobacco products frequently (66.0% vs 38.1%, P < .001); used multiple tobacco products (54.0% vs 31.3%, P < .001); wanted to use a tobacco product within less than 30 minutes of awakening (48.1% vs 27.9%, P < .001); craved tobacco products within the past 30 days (44.8% vs 28.3%, P < .001); and did not intend to quit using tobacco products (39.9% vs 33.1%, P = .03).

Characteristics associated with menthol-flavored tobacco product use among students who use flavored tobacco products. We examined correlates of menthol-flavored tobacco product use among middle and high school students who reported current use of any flavored product. Except for sex and intending to quit using all tobacco products, significant associations between covariates and use of menthol-flavored tobacco products remained after adjustment for grade level, sex, and race and ethnicity, although some changes existed in the strengths of association. Compared with non-Hispanic White students, the prevalence of menthol-flavored tobacco product use was lower among Hispanic students (APR, 0.59; 95% CI, 0.45–0.77) and non-Hispanic Black students (APR, 0.34; 95% CI, 0.22–0.53) ( Table 3 ). Compared with students who were not transgender, current prevalence of menthol-flavored tobacco product use was also higher among students who were transgender (APR, 1.45; 95% CI, 1.03–2.03) and those who were not sure if they were transgender (APR, 1.55; 95% CI, 1.14–2.12). Current prevalence of menthol-flavored tobacco product use was also higher among students who indicated frequent tobacco product use (APR: 1.88; 95% CI, 1.59–2.22); use of multiple tobacco products (APR, 1.68; 95% CI, 1.36–2.05); wanting to use a tobacco product within 30 minutes of awakening (APR, 1.55; 95% CI, 1.27–1.88); and craving tobacco products within the past 30 days (APR, 1.34; 95% CI, 1.08–1.66), compared with respective reference categories.
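The model-adjusted prevalence ratios reported above were computed with predicted marginals. The sketch below shows the general idea under simplifying assumptions (an unweighted logistic model and a 0/1-coded exposure): fit the adjusted model, predict every respondent's probability with the exposure set to 1 and then to 0, and take the ratio of the average predictions. It approximates, but is not, the SUDAAN procedure used in the article.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def adjusted_prevalence_ratio(df: pd.DataFrame, outcome: str, exposure: str,
                              adjusters: list) -> float:
    """Adjusted prevalence ratio via predicted marginals (standardization).

    Assumes `outcome` and `exposure` are 0/1-coded columns and `adjusters`
    are terms usable in a patsy formula. Survey weights and the complex
    sample design are ignored here for brevity.
    """
    formula = f"{outcome} ~ {exposure} + " + " + ".join(adjusters)
    model = smf.glm(formula, data=df, family=sm.families.Binomial()).fit()

    # Predict each respondent's probability under exposure = 1 and = 0,
    # then compare the average (marginal) predicted prevalences.
    p1 = model.predict(df.assign(**{exposure: 1})).mean()
    p0 = model.predict(df.assign(**{exposure: 0})).mean()
    return float(p1 / p0)

# Example (hypothetical columns), mirroring the Table 3 adjustment set:
# apr = adjusted_prevalence_ratio(df, "menthol_use", "frequent_use",
#                                 ["C(sex)", "C(race_eth)", "C(grade)"])
```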

Discussion

We found that more than 1 in 5 students who reported current use of at least 1 tobacco product reported use of a menthol-flavored tobacco product. Among students who reported use of at least 1 flavored tobacco product, nearly 2 in 5 reported current use of a menthol-flavored tobacco product. Additionally, about 3 in 10 students who currently used only flavored e-cigarettes (30.6%) and nearly 3 in 10 who currently used only flavored combustible tobacco products (29.6%) reported using a menthol-flavored product, and more than half of all students who currently used a combination of flavored e-cigarettes, combustible tobacco products, and noncombustible tobacco products reported use of a menthol-flavored product. Differences in sociodemographic characteristics, tobacco product use behavior, and cessation indicators were found between students who used menthol-flavored tobacco products and those who used nonmenthol-flavored products.

The prevalence of menthol-flavored tobacco product use was highest among non-Hispanic White students and lowest among non-Hispanic Black students — a result that is contrary to studies focused on menthol cigarette smoking among youths and adults (14,15). At the time of our writing, we found no studies focused on prevalence of any menthol-flavored tobacco product use among youths by race or ethnicity; most studies focused on menthol cigarette smoking or any flavored tobacco product use or did not distinguish between menthol and mint flavors (14,15,17,18). Although our results contrast with some previous studies of cigarette smoking among young people, these findings align with recent research on menthol cigarette smoking that reported a similar pattern (14,19). Miech et al reported that Black adolescents had a lower prevalence of menthol cigarette smoking than adolescents of other races and ethnicities, although results from modeling showed that Black adolescents who smoked cigarettes were more likely to smoke menthol cigarettes compared with White adolescents (19). The results from our study and the Miech study could be partially attributable to a lower prevalence of cigarette smoking in general among young people (12,13) and later-age onset of cigarette smoking among non-Hispanic Black people (20,21). The higher prevalence of e-cigarette use compared with other tobacco products among youths may also play a role: e-cigarettes account for a large proportion of any tobacco product use in this population, and fruit- and candy-flavored e-cigarettes are popular among these students (12,13). Populations of young people, among whom e-cigarette use predominates, differ from adult populations, among whom cigarette smoking is more common relative to other tobacco products, and we observed corresponding differences by race and ethnicity in use of any menthol-flavored tobacco product (15).

Among students who reported past 30-day use of flavored tobacco products, we saw no association between sexual orientation and menthol-flavored tobacco product use. This is in contrast with previous literature among adults who smoke menthol cigarettes (3). This could be due partly to the high proportion of youths using e-cigarettes and nonmenthol-flavored noncigarette tobacco products (12).

Similar to results from previous studies focused on menthol cigarette smoking (17,22), our study’s results show that, among students who used menthol-flavored tobacco products within the past 30 days, use was associated with behaviors that indicated tobacco dependence. These behaviors include frequent tobacco product use, use of multiple tobacco products, wanting to use tobacco products within 30 minutes of awakening, and craving a tobacco product within the past 30 days. These results suggest use of any menthol-flavored tobacco product (alone or in combination with other flavors) among students who use any flavored tobacco products may be associated with symptoms of dependence, which in turn, can contribute to continued use.

We also found that in 2022, 30.6% of students who currently used only flavored e-cigarettes used menthol e-cigarettes. To our knowledge, our study is one of a few studies focused on the prevalence of menthol-flavored tobacco product use among middle and high school students who currently use any flavored tobacco product, although at least 1 study assessed this among all youths (not just those who currently use tobacco products) (18). Most studies have focused exclusively on the prevalence of menthol cigarette smoking (14,17,19). Thus, our study expands the knowledge base on use by young people of menthol flavor across multiple tobacco product types.

Findings of this study are subject to at least 4 limitations. First, the sample size was not large enough to present characteristics of menthol-flavored product use by exclusive use of individual tobacco product types (eg, cigarette smoking only, cigar use only). Second, NYTS data are cross-sectional, and identified associations reflect tobacco use patterns at the time of survey completion. Third, NYTS data are subject to response bias. However, the validity of self-reported tobacco product use in population-based studies has been shown to be high (23). Finally, our results are generalizable only to middle and high school students in public and private schools in the US.

As of July 2023, menthol remains the only nontobacco characterizing flavor permitted in cigarettes sold in the US; the 2009 Family Smoking Prevention and Tobacco Control Act prohibited the sale of cigarettes with any characterizing flavor except menthol and tobacco (24). Additionally, in early 2020, the US Food and Drug Administration (FDA) prohibited the use of characterizing flavors, excluding menthol, in cartridge-based e-cigarettes (25). In 2022, FDA proposed standards to prohibit menthol as a characterizing flavor in cigarettes and all flavored cigars (6).

Although prohibiting sales of flavors can have a significant effect on reducing tobacco product use among young people, the continued availability of menthol could mitigate the effects of policies prohibiting flavors (26). For example, immediately after the FDA announced that it would prioritize enforcement against sales of prefilled e-cigarette cartridges in flavors other than tobacco and menthol, the market share of menthol-flavored, prefilled, cartridge-based e-cigarettes and of nonmenthol-flavored (including fruit, candy, and alcohol flavored) disposable e-cigarettes increased (27,28). How this affected overall e-cigarette use among young people is currently unknown. However, a recent study in Minnesota reported changes in tobacco product use in this population after a flavor ban that included menthol was implemented in the Twin Cities (Minneapolis and St. Paul) (26). The study reported that any tobacco product use and e-cigarette use among youths increased to a greater extent in the rest of the state of Minnesota than in the Twin Cities (26). Additionally, use of noncigarette tobacco products with flavors other than mint or menthol by youths increased by 5% in the Twin Cities compared with 10.2% in the rest of the state (26). These findings suggest that including menthol in flavored tobacco product sales restrictions can further reduce overall tobacco product use among youths.

As new product types continue to be added to the tobacco landscape, examining the role of menthol and other characterizing flavors or additives in all tobacco products will be important to determine factors that may contribute to initiation and sustained use of tobacco products. Future studies are needed of menthol-flavored tobacco product use with sufficient sample sizes to assess use of specific tobacco products by demographic groups. Continued surveillance of the use of all characterizing flavored tobacco products (including menthol) and the effectiveness of restrictions on flavored tobacco product sales are needed to inform public health policy and tobacco prevention and control efforts.

This research did not receive funding from agencies in the public, commercial, or not-for-profit sectors. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention. The authors received no external financial support for the research, authorship, or publication of this article. The authors declared no potential conflicts of interest with respect to the research, authorship, or publication of this article. No copyrighted material, surveys, instruments, or tools were used in this research.

Corresponding Author: Monica E. Cornelius, PhD, Office on Smoking and Health, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, 4770 Buford Hwy, MS S107-7, Atlanta, GA 30341 ( [email protected] ).

Author Affiliations: 1 Office on Smoking and Health, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia.

References

  1. Klausner K. Menthol cigarettes and smoking initiation: a tobacco industry perspective. Tob Control. 2011;20(Suppl 2):ii12–ii19. PubMed doi:10.1136/tc.2010.041954
  2. Krishnan-Sarin S, Green BG, Kong G, Cavallo DA, Jatlow P, Gueorguieva R, et al. Studying the interactive effects of menthol and nicotine among youth: an examination using e-cigarettes. Drug Alcohol Depend. 2017;180:193–199. PubMed doi:10.1016/j.drugalcdep.2017.07.044
  3. Centers for Disease Control and Prevention. Menthol and cigarettes. Updated June 27, 2022. Accessed March 23, 2023. https://www.cdc.gov/tobacco/basic_information/menthol/index.html
  4. Centers for Disease Control and Prevention. Preventing tobacco use among youth and young adults: a report of the Surgeon General. 2012. Accessed March 24, 2023. https://www.ncbi.nlm.nih.gov/books/NBK99237/
  5. Villanti AC, Johnson AL, Glasser AM, Rose SW, Ambrose BK, Conway KP, et al. Association of flavored tobacco use with tobacco initiation and subsequent use among US youth and adults, 2013–2015. JAMA Netw Open. 2019;2(10):e1913804. PubMed doi:10.1001/jamanetworkopen.2019.13804
  6. US Food and Drug Administration. FDA proposes rules prohibiting menthol cigarettes and flavored cigars to prevent youth initiation, significantly reduce tobacco-related disease and death. FDA News Release. April 28, 2022. Accessed March 23, 2023. https://www.fda.gov/news-events/press-announcements/fda-proposes-rules-prohibiting-menthol-cigarettes-and-flavored-cigars-prevent-youth-initiation
  7. Levy DT, Meza R, Yuan Z, Li Y, Cadham C, Sanchez-Romero LM, et al. Public health impact of a US ban on menthol in cigarettes and cigars: a simulation study. Tob Control. 2023;32(e1):e37–e44. PubMed doi:10.1136/tobaccocontrol-2021-056604
  8. Levy DT, Pearson JL, Villanti AC, Blackman K, Vallone DM, Niaura RS, et al. Modeling the future effects of a menthol ban on smoking prevalence and smoking-attributable deaths in the United States. Am J Public Health. 2011;101(7):1236–1240. PubMed doi:10.2105/AJPH.2011.300179
  9. Centers for Disease Control and Prevention. The health consequences of smoking: a report of the Surgeon General; 2004. Accessed March 24, 2023. https://www.ncbi.nlm.nih.gov/books/NBK44695
  10. Watson CV, Puvanesarajah S, Hawkins NA, Trivers KF. Racial disparities in flavored tobacco product use, curiosity, susceptibility, and harm perception, National Youth Tobacco Survey 2019–2020. Health Equity. 2023;7(1):137–147. PubMed doi:10.1089/heq.2022.0087
  11. US Department of Health and Human Services. E-cigarette use among youth and young adults: a report of the Surgeon General: executive summary. 2016. Accessed March 14, 2024. https://e-cigarettes.surgeongeneral.gov/documents/2016_SGR_Exec_Summ_508.pdf
  12. Gentzke AS, Wang TW, Cornelius M, Park-Lee E, Ren C, Sawdey MD, et al. Tobacco product use and associated factors among middle and high school students — National Youth Tobacco Survey, United States, 2021. MMWR Surveill Summ. 2022;71(5):1–29. PubMed doi:10.15585/mmwr.ss7105a1
  13. Park-Lee E, Ren C, Cooper M, Cornelius M, Jamal A, Cullen KA. Tobacco product use among middle and high school students — United States, 2022. MMWR Morb Mortal Wkly Rep. 2022;71(45):1429–1435. PubMed doi:10.15585/mmwr.mm7145a1
  14. Sawdey MD, Chang JT, Cullen KA, Rass O, Jackson KJ, Ali FRM, et al. Trends and associations of menthol cigarette smoking among US middle and high school students — National Youth Tobacco Survey, 2011–2018. Nicotine Tob Res. 2020;22(10):1726–1735. PubMed doi:10.1093/ntr/ntaa054
  15. Watkins SL, Pieper F, Chaffee BW, Yerger VB, Ling PM, Max W. Flavored tobacco product use among young adults by race and ethnicity: evidence from the Population Assessment of Tobacco and Health Study. J Adolesc Health. 2022;71(2):226–232. PubMed doi:10.1016/j.jadohealth.2022.02.013
  16. Centers for Disease Control and Prevention. Methodology report of the 2022 National Youth Tobacco Survey. February 2023. Accessed March 16, 2024. https://www.cdc.gov/tobacco/data_statistics/surveys/nyts/pdfs/2022-NYTS-Public-Use-Methods-Report-508.pdf
  17. Leas EC, Benmarhnia T, Strong DR, Pierce JP. Use of menthol cigarettes, smoking frequency, and nicotine dependence among US youth. JAMA Netw Open. 2022;5(6):e2217144.
  18. Rose SW, Johnson AL, Glasser AM, Villanti AC, Ambrose BK, Conway K, et al. Flavour types used by youth and adult tobacco users in wave 2 of the Population Assessment of Tobacco and Health (PATH) Study 2014–2015. Tob Control. 2020;29(4):432–446. PubMed doi:10.1136/tobaccocontrol-2018-054852
  19. Miech RA, Leventhal AM, Johnson LD. Recent, national trends in US adolescent use of menthol and non-menthol cigarettes. Tob Control. 2023;32(e1):e10–e15. PubMed doi:10.1136/tobaccocontrol-2021-056970
  20. Roberts ME, Colby SM, Lu B, Ferketich AK. Understanding tobacco use onset among African Americans. Nicotine Tob Res. 2016;18(Suppl 1):S49–S56. PubMed doi:10.1093/ntr/ntv250
  21. Cheng YJ, Cornelius ME, Wang TW, Homa DM. Trends and demographic differences in the incidence and mean age of starting to smoke cigarettes regularly, National Health Interview Survey, 1997. Public Health Rep. 2023;138(6):908–915. PubMed doi:10.1177/00333549221138295
  22. Azagba S, King J, Shan L, Manzione L. Cigarette smoking behavior among menthol and nonmenthol adolescent smokers. J Adolesc Health. 2020;66(5):545–550. PubMed doi:10.1016/j.jadohealth.2019.11.307
  23. Boykan R, Messina CR, Chateau G, Eliscu A, Tolentino J, Goniewicz ML. Self-reported use of tobacco, e-cigarettes, and marijuana versus urinary biomarkers. Pediatrics. 2019;143(5):e20183531. PubMed doi:10.1542/peds.2018-3531
  24. US Food and Drug Administration. Family Smoking Prevention and Tobacco Control Act — an overview. 2020. Accessed July 17, 2023. https://www.fda.gov/tobacco-products/rules-regulations-and-guidance/family-smoking-prevention-and-tobacco-control-act-overview
  25. US Food and Drug Administration. FDA finalizes enforcement policy on unauthorized flavored cartridge-based e-cigarettes that appeal to children, including fruit and mint. FDA News Release. January 2, 2020. Accessed March 24, 2023. https://www.fda.gov/news-events/press-announcements/fda-finalizes-enforcement-policy-unauthorized-flavored-cartridge-based-e-cigarettes-appeal-children
  26. Olson LT, Coats EM, Rogers T, Brown EM, Nonnemaker J, Ross AM, et al. Youth tobacco use before and after local sales restrictions on flavored and menthol tobacco products in Minnesota. J Adolesc Health. 2022;70(6):978–984. PubMed doi:10.1016/j.jadohealth.2022.01.129
  27. Ali FRM, Seidenberg AB, Crane E, Seaman E, Tynan MA, Marynak K. E-cigarette unit sales by product and flavor type, and top-selling brands, United States, 2020–2022. MMWR Morb Mortal Wkly Rep. 2023;72(25):672–677. PubMed doi:10.15585/mmwr.mm7225a1
  28. Federal Trade Commission. Federal Trade Commission e-cigarette report for 2019–2020. 2022. Accessed July 17, 2023. https://www.ftc.gov/system/files/ftc_gov/pdf/E-Cigarette%20Report%202019-20%20final.pdf

Notes for Tables 1–3

  • Current menthol-flavored tobacco product use was assessed among students who indicated past 30-day use of ≥1 tobacco product (e-cigarettes, cigarettes, cigars, smokeless tobacco [chewing tobacco, snuff, dip, snus], dissolvable tobacco products, waterpipes/hookahs, pipe tobacco, bidis, heated tobacco products, or nicotine pouches). Respondents answering “yes” to using a flavored product and “menthol” as the type of flavor, or meeting the menthol cigarette criteria described in the Methods (usual or only brand Kool or Newport, or usual cigarettes reported as menthol), were categorized as using menthol-flavored tobacco products.
  • Estimated weighted totals were rounded to the nearest 10,000 people. Overall population estimates might not sum to corresponding subgroup estimates because of rounding or inclusion of students who did not self-report sex, race and ethnicity, or grade level.
  • P values were calculated by using the χ² test of independence (Table 2) or the Wald χ² test (Table 3) for differences between menthol-flavored and nonmenthol-flavored tobacco product use for each characteristic.
  • Unstable estimates (relative SE ≥0.3 or unweighted denominator <50) are not presented.
  • Family affluence was assessed with a 4-item composite scale: family vehicle ownership, having one’s own bedroom, number of computers owned (including laptops and tablets, not game consoles or smartphones), and number of family vacations in the past 12 months. Complete responses (n = 2,619 among students who currently used any tobacco product; n = 1,617 among students who currently used flavored tobacco products) were summed (range, 0–9) and categorized into approximate tertiles based on the sample’s weighted distribution of scores.
  • Exposure to tobacco product marketing (advertisements or promotions) was assessed separately for e-cigarettes and for cigarettes or other tobacco products across 4 sources: retail stores; the internet; television, streaming services, or movies; and newspapers or magazines. Respondents answering “sometimes,” “most of the time,” or “always” were categorized as exposed; those answering “never” or “rarely” as unexposed; and those reporting that they did not use a given source were excluded (476 exclusions among students who currently used any tobacco product; 262 among students who currently used flavored tobacco products).
  • Frequent tobacco product use was defined as use of any product on 20 or more of the past 30 days; respondents who used all products on fewer than 20 days were categorized as not using tobacco products frequently.
  • Craving was based on the question “During the past 30 days, have you had a strong craving or felt like you really needed to use a tobacco product of any kind?”; those answering “yes” were categorized as craving tobacco products within the past 30 days.
  • Past-year quit attempts were based on the question “During the past 12 months, how many times have you stopped using all tobacco products for 1 day or longer because you were trying to quit tobacco products for good?”; responses other than “I did not try to quit all tobacco products during the past 12 months” indicated 1 or more quit attempts. Respondents missing this item were excluded (n = 619 among students who currently used any tobacco product; n = 286 among students who currently used flavored tobacco products).
  • Quit intentions were based on the question “Are you seriously thinking about quitting the use of all tobacco products?”; any “Yes” response indicated quit intentions, and “No, I am not thinking about quitting the use of all tobacco products” indicated no quit intentions. Respondents missing this item were excluded (n = 578 among students who currently used any tobacco product; n = 265 among students who currently used flavored tobacco products).
  • Abbreviations (Table 3): APR, adjusted prevalence ratio; PR, prevalence ratio. Prevalence ratios were adjusted for sex, race and ethnicity, and grade level; the APR for each of these demographic characteristics was adjusted for the other two.

The opinions expressed by authors contributing to this journal do not necessarily reflect the opinions of the U.S. Department of Health and Human Services, the Public Health Service, the Centers for Disease Control and Prevention, or the authors’ affiliated institutions.


COMMENTS

  1. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  2. Questionnaire: Definition, How to Design, Types & Examples

    As a research instrument, a questionnaire is ideal for commercial research because the data you get back is from your target audience (or ideal customers) and the information you get back on their thoughts, preferences or behaviors allows you to make business decisions. 6. A questionnaire can cover any topic.

  3. Sampling Methods

    The sample is the group of individuals who will actually participate in the research. To draw valid conclusions from your results, you have to carefully decide how you will select a sample that is representative of the group as a whole. This is called a sampling method. There are two primary types of sampling methods that you can use in your ...

  4. Questionnaire

    Definition: A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people. It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective.

  5. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person). Design the survey questions and layout. Distribute the survey.

  6. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person). Design the survey questions and layout.

  7. PDF Question and Questionnaire Design

    ... the ordering of questions within a questionnaire, and then discuss methods for testing and evaluating questions and questionnaires. Finally, we offer two more general recommendations to guide questionnaire development. 9.1 Conventional Wisdom: Hundreds of methodology textbooks have offered various versions of conventional ...

  8. PDF Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    ... writing questions and building the construct of the questionnaire. It also addresses the need to pre-test the questionnaire and finalize it before conducting the survey. Keywords: Questionnaire, Academic Survey, Questionnaire Design, Research Methodology. I. INTRODUCTION: A questionnaire, as the heart of the survey, is based on a set of ...

  9. Hands-on guide to questionnaire research: Selecting, designing, and

    Questionnaires are popular because they provide a "quick fix" for research methodology. No single method has been so abused. 1 Questionnaires offer an objective means of collecting information about people's knowledge, beliefs, attitudes, and behaviour. 2,3 Do our patients like our opening hours? What do teenagers think of a local antidrugs campaign and has it changed their attitudes?

  10. Writing Survey Questions

    We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on ...

  11. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  12. Survey Research

    Survey Research. Definition: Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

  13. Survey Research: Definition, Examples and Methods

    Survey research is a quantitative research method used for collecting data from a set of respondents. It has been one of the most widely used methodologies in the industry for several years because of the benefits and advantages it offers when collecting and analyzing data.

  14. Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    A questionnaire is an important instrument in a research study to help the researcher collect relevant data regarding the research topic. It is important to ensure that the design of the ...

  15. Designing and validating a research questionnaire

    In research studies, questionnaires are commonly used as data collection tools, either as the only source of information or in combination with other techniques in mixed-method studies. However, the quality and accuracy of data collected using a questionnaire depend on how it is designed, used, and validated.

  16. Sampling Methods

    A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

  17. LibGuides: Qualitative study design: Surveys & questionnaires

    Qualitative surveys aim to elicit a detailed response to an open-ended topic question in the participant's own words. As with quantitative surveys, there are three main methods for conducting qualitative surveys: face-to-face surveys, phone surveys, and online surveys. Each method of surveying has strengths and limitations.

  18. Questionnaires

    Questionnaires can be classified as both quantitative and qualitative methods, depending on the nature of the questions. Specifically, answers obtained through closed-ended questions (also called restricted questions) with multiple choice answer options are analyzed using quantitative methods. Research findings in this case can be illustrated using ...

  19. PDF Structured Methods: Interviews, Questionnaires and Observation

    An offer of a copy of the final research report can help in some cases. Ensure that the questionnaire can be returned with the minimum of trouble and expense (e.g. by including a reply paid envelope). Keep the questionnaire short and easy to answer. Ensure that you send it to people for whom it is relevant.

  20. Questionnaire Design

    Questionnaires vs surveys. A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  21. (PDF) Questionnaires and Surveys

    For this study, a survey questionnaire was created as the tool to collect the data, using Google Forms. This survey method was chosen because, by analyzing a sample from a group, a survey ...

  22. PDF Fundamentals of Survey Research Methodology

    There are also minimal interviewer and respondent measurement errors due to the absence of direct contact (Salant & Dillman, 1994, p. 35). Written surveys allow the respondent the greatest latitude in pace and sequence of response (p. 18). Written surveys may be distributed using either postal or electronic mail.

  23. Survey Sampling Methods & Techniques

    Stratified sampling. Before a stratified sample is taken, the population is divided into groups based on characteristics pertinent to the research, such as age or gender. The population is then randomly sampled within these specific strata. This complex method of sampling ensures each category of the population is represented in the sample. A short code sketch of this approach appears after this list.

  24. 10 Different Types of Survey Methods + Pros & Cons

    Each round is designed to narrow things down until a consensus or hypothesis can be formed. One of the key features of Delphi survey research is that participants are unknown to each other, thereby eliminating influence. 18. AI Surveys. Artificial intelligence is the latest type of survey method.

  25. Sustainability

    Research investigating changes in the environmental awareness and attitudes of Polish e-consumers over a period of ten years and their impact on online purchasing behaviors was conducted in 2010 and 2020 using the same research methods and tools. The research questionnaire, which was used in 2010 and 2020, included questions about the ...

  26. Research Question

    They guide the direction and focus of the study. Here are the main types of research questions: 1. Descriptive Research Questions. These questions aim to describe the characteristics or functions of a specific phenomenon or group. They often begin with "what," "who," "where," "when," or "how."

  27. Organizing Your Social Sciences Research Paper

    Accuracy-- a term used in survey research to refer to the match between the target population and the sample. Affective Measures-- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions. Aggregate-- a total created from smaller units. For instance, the population of a county ...

  28. Research Methods

    Research methods are specific procedures for collecting and analyzing data. ... For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students. In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

  29. Identifying cloud internet of things requirements in ...

    The research sample comprised 15 participants in the first Delphi round and 12 participants in the second round who completed all of the questionnaire's questions. A questionnaire was used to gather information. This questionnaire was designed in four main parts, with 14 axes and 48 questions.

  30. Use of Menthol-Flavored Tobacco Products Among US Middle and High

    Unlike results of prior research focused on cigarette smoking among young people, prevalence of use of any menthol-flavored tobacco product was highest among non-Hispanic White youths. ... Methods (data sample): We analyzed data from the 2022 National Youth Tobacco Survey (NYTS), a cross-sectional, school-based, voluntary, self-administered ...
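
To make the stratified sampling described in item 23 concrete, here is a minimal sketch that draws a proportional random sample within each stratum using pandas. The respondent table, the age_group stratum column, and the 10% sampling fraction are all hypothetical, chosen only to illustrate the idea rather than reproduce any of the studies above.

```python
# Minimal sketch of proportional stratified sampling with pandas.
# The population table, the "age_group" strata, and the 10% sampling
# fraction are hypothetical and serve only to illustrate the technique.
import pandas as pd

population = pd.DataFrame({
    "respondent_id": range(1, 1001),
    "age_group": ["18-29", "30-44", "45-59", "60+"] * 250,
})

# Sample 10% within each stratum so every age group is represented
# in proportion to its share of the population.
sample = (
    population
    .groupby("age_group", group_keys=False)
    .sample(frac=0.10, random_state=42)
)

print(sample["age_group"].value_counts())  # 25 respondents per stratum here
```

For a disproportionate design (for example, oversampling a small stratum), the single sampling fraction could be replaced with per-stratum sample sizes applied group by group.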