


Business Insights

Harvard Business School Online's Business Insights Blog provides the career insights you need to achieve your goals and gain confidence in your business skills.


What Is Descriptive Analytics? 5 Examples


  • 09 Nov 2021

Data analytics is a valuable tool for businesses aiming to increase revenue, improve products, and retain customers. According to research by global management consulting firm McKinsey & Company, companies that use data analytics are 23 times more likely to outperform competitors in terms of new customer acquisition than non-data-driven companies. They were also nine times more likely to surpass them in measures of customer loyalty and 19 times more likely to achieve above-average profitability.

Data analytics can be broken into four key types:

  • Descriptive, which answers the question, “What happened?”
  • Diagnostic, which answers the question, “Why did this happen?”
  • Predictive, which answers the question, “What might happen in the future?”
  • Prescriptive, which answers the question, “What should we do next?”

Each type of data analysis can help you reach specific goals and be used in tandem to create a full picture of data that informs your organization’s strategy formulation and decision-making.

Descriptive analytics can be leveraged on its own or act as a foundation for the other three analytics types. If you’re new to the field of business analytics, descriptive analytics is an accessible and rewarding place to start.


What Is Descriptive Analytics?

Descriptive analytics is the process of using current and historical data to identify trends and relationships. It’s sometimes called the simplest form of data analysis because it describes trends and relationships but doesn’t dig deeper.

Descriptive analytics is relatively accessible and likely something your organization uses daily. Basic statistical software such as Microsoft Excel, or data visualization tools such as Google Charts and Tableau, can help parse data, identify trends and relationships between variables, and visually display information.

Descriptive analytics is especially useful for communicating change over time and uses trends as a springboard for further analysis to drive decision-making.

Here are five examples of descriptive analytics in action to apply at your organization.

Related: 5 Business Analytics Skills for Professionals

5 Examples of Descriptive Analytics

1. Traffic and Engagement Reports

One example of descriptive analytics is reporting. If your organization tracks engagement in the form of social media analytics or web traffic, you’re already using descriptive analytics.

These reports are created by taking raw data—generated when users interact with your website, advertisements, or social media content—and using it to compare current metrics to historical metrics and visualize trends.

For example, you may be responsible for reporting on which media channels drive the most traffic to the product page of your company’s website. Using descriptive analytics, you can analyze the page’s traffic data to determine the number of users from each source. You may decide to take it one step further and compare traffic source data to historical data from the same sources. This can enable you to update your team on movement; for instance, highlighting that traffic from paid advertisements increased 20 percent year over year.
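The year-over-year comparison described above reduces to a simple percentage computation. Here is a minimal sketch; the traffic sources and counts are invented for illustration, so swap in your own analytics export:

```python
# Hypothetical traffic counts by source for two years (all figures invented).
traffic_2020 = {"organic": 41000, "paid": 12500, "social": 8300}
traffic_2021 = {"organic": 43900, "paid": 15000, "social": 7900}

def yoy_change(current, previous):
    """Percent change from the prior period, rounded to one decimal place."""
    return round(100 * (current - previous) / previous, 1)

for source in traffic_2020:
    change = yoy_change(traffic_2021[source], traffic_2020[source])
    print(f"{source}: {change:+.1f}% year over year")
```

With these numbers, paid traffic comes out at +20.0% year over year, matching the movement you might highlight to your team.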

The three other analytics types can then be used to determine why traffic from each source increased or decreased over time, if trends are predicted to continue, and what your team’s best course of action is moving forward.

2. Financial Statement Analysis

Another example of descriptive analytics that may be familiar to you is financial statement analysis. Financial statements are periodic reports that detail financial information about a business and, together, give a holistic view of a company’s financial health.

There are several types of financial statements, including the balance sheet, income statement, cash flow statement, and statement of shareholders’ equity. Each caters to a specific audience and conveys different information about a company’s finances.

Financial statement analysis can be done in three primary ways: vertical, horizontal, and ratio.

Vertical analysis involves reading a statement from top to bottom and comparing each item to those above and below it. This helps determine relationships between variables. For instance, if each line item is a percentage of the total, comparing them can provide insight into which are taking up larger and smaller percentages of the whole.

Horizontal analysis involves reading a statement from left to right and comparing each item to itself from a previous period. This type of analysis determines change over time.

Finally, ratio analysis involves comparing one line item to another to express their relationship—for instance, net income as a proportion of revenue. You can compare these ratios across periods, as well as benchmark your company’s ratios against the industry’s to gauge whether yours is over- or underperforming.

Each of these financial statement analysis methods is an example of descriptive analytics, as each provides information about trends and relationships between variables based on current and historical data.
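Vertical and horizontal analysis can both be sketched as percentage computations over a statement. The toy two-year income statement below uses invented figures purely for illustration:

```python
# Toy income statement for two years (all figures invented for illustration).
income = {
    "revenue":       {"2022": 500_000, "2023": 550_000},
    "cost_of_sales": {"2022": 300_000, "2023": 319_000},
    "net_income":    {"2022":  50_000, "2023":  66_000},
}

def vertical(year):
    """Vertical analysis: each line item as a percentage of revenue in one year."""
    base = income["revenue"][year]
    return {item: round(100 * vals[year] / base, 1) for item, vals in income.items()}

def horizontal(prev, curr):
    """Horizontal analysis: percent change in each line item between two years."""
    return {item: round(100 * (vals[curr] - vals[prev]) / vals[prev], 1)
            for item, vals in income.items()}

print(vertical("2023"))             # line items as % of 2023 revenue
print(horizontal("2022", "2023"))   # change in each line item, 2022 to 2023
```

Reading the vertical output tells you what share of revenue each item consumes; the horizontal output tells you how each item moved over time.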


3. Demand Trends

Descriptive analytics can also be used to identify trends in customer preference and behavior and make assumptions about the demand for specific products or services.

Streaming provider Netflix’s trend identification provides an excellent use case for descriptive analytics. Netflix’s team—which has a track record of being heavily data-driven—gathers data on users’ in-platform behavior. They analyze this data to determine which TV series and movies are trending at any given time and list trending titles in a section of the platform’s home screen.

Not only does this data allow Netflix users to see what’s popular—and thus, what they might enjoy watching—but it allows the Netflix team to know which types of media, themes, and actors are especially favored at a certain time. This can drive decision-making about future original content creation, contracts with existing production companies, marketing, and retargeting campaigns.

4. Aggregated Survey Results

Descriptive analytics is also useful in market research. When it comes time to glean insights from survey and focus group data, descriptive analytics can help identify relationships between variables and trends.

For instance, you may conduct a survey and identify that as respondents’ age increases, so does their likelihood to purchase your product. If you’ve conducted this survey multiple times over several years, descriptive analytics can tell you if this age-purchase correlation has always existed or if it was something that only occurred this year.

Insights like this can pave the way for diagnostic analytics to explain why certain factors are correlated. You can then leverage predictive and prescriptive analytics to plan future product improvements or marketing campaigns based on those trends.

Related: What Is Marketing Analytics?

5. Progress to Goals

Finally, descriptive analytics can be applied to track progress to goals. Reporting on progress toward key performance indicators (KPIs) can help your team understand if efforts are on track or if adjustments need to be made.

For example, if your organization aims to reach 500,000 monthly unique page views, you can use traffic data to communicate how you’re tracking toward it. Perhaps halfway through the month, you’re at 200,000 unique page views. This would be underperforming because you’d like to be halfway to your goal at that point—at 250,000 unique page views. This descriptive analysis of your team’s progress can allow further analysis to examine what can be done differently to improve traffic numbers and get back on track to hit your KPI.
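The pacing check in this example amounts to comparing actual progress with a straight-line expectation. A minimal sketch, assuming a 30-day month:

```python
# KPI pacing check, mirroring the example above (500,000 monthly goal).
def pace_status(goal, actual, day, days_in_month=30):
    """Gap between actual progress and the straight-line expectation for `day`.

    Negative means behind pace; positive means ahead of pace.
    """
    expected = goal * day / days_in_month
    return actual - expected

gap = pace_status(goal=500_000, actual=200_000, day=15)
print(f"{gap:+,.0f} page views versus straight-line pace")  # -50,000
```

At the halfway point you would expect 250,000 unique page views, so 200,000 actual views shows the team 50,000 views behind pace.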


Using Data to Identify Relationships and Trends

“Never before has so much data about so many different things been collected and stored every second of every day,” says Harvard Business School Professor Jan Hammond in the online course Business Analytics. “In this world of big data, data literacy—the ability to analyze, interpret, and even question data—is an increasingly valuable skill.”

Leveraging descriptive analytics to communicate change based on current and historical data and as a foundation for diagnostic, predictive, and prescriptive analytics has the potential to take you and your organization far.

Do you want to become a data-driven professional? Explore our eight-week Business Analytics course and our three-course Credential of Readiness (CORe) program to deepen your analytical skills and apply them to real-world business problems.


14 Quantitative analysis: Descriptive statistics

Numeric data collected in a research project can be analysed quantitatively using statistical tools in two different ways. Descriptive analysis refers to statistically describing, aggregating, and presenting the constructs of interest or associations between these constructs. Inferential analysis refers to the statistical testing of hypotheses (theory testing). In this chapter, we will examine statistical techniques used for descriptive analysis, and the next chapter will examine statistical techniques for inferential analysis. Much of today’s quantitative data analysis is conducted using software programs such as SPSS or SAS. Readers are advised to familiarise themselves with one of these programs to understand the concepts described in this chapter.

Data preparation

In research projects, data may be collected from a variety of sources: postal surveys, interviews, pretest or posttest experimental data, observational data, and so forth. This data must be converted into a machine-readable, numeric format, such as in a spreadsheet or a text file, so that it can be analysed by computer programs like SPSS or SAS. Data preparation usually follows the following steps:

Data coding. Coding is the process of converting data into numeric format. A codebook should be created to guide the coding process. A codebook is a comprehensive document containing a detailed description of each variable in a research study, items or measures for that variable, the format of each item (numeric, text, etc.), the response scale for each item (i.e., whether it is measured on a nominal, ordinal, interval, or ratio scale, and whether this scale is a five-point, seven-point scale, etc.), and how to code each value into a numeric format. For instance, if we have a measurement item on a seven-point Likert scale with anchors ranging from ‘strongly disagree’ to ‘strongly agree’, we may code that item as 1 for strongly disagree, 4 for neutral, and 7 for strongly agree, with the intermediate anchors in between. Nominal data such as industry type can be coded in numeric form using a coding scheme such as: 1 for manufacturing, 2 for retailing, 3 for financial, 4 for healthcare, and so forth (note that these numeric codes are arbitrary labels: they can be counted and cross-tabulated, but arithmetic on them, such as averaging, is meaningless). Ratio scale data such as age, income, or test scores can be coded as entered by the respondent. Sometimes, data may need to be aggregated into a different form than the format used for data collection. For instance, if a survey measuring a construct such as ‘benefits of computers’ provided respondents with a checklist of benefits that they could select from, and respondents were encouraged to choose as many of those benefits as they wanted, then the total number of checked items could be used as an aggregate measure of benefits. Note that many other forms of data—such as interview transcripts—cannot be converted into a numeric format for statistical analysis.
Codebooks are especially important for large complex studies involving many variables and measurement items, where the coding process is conducted by different people, to help the coding team code data in a consistent manner, and also to help others understand and interpret the coded data.
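As a minimal sketch of the coding step, the dictionaries below act as a two-variable codebook. The Likert and industry codes follow the examples above; the raw records are invented for illustration:

```python
# Codebook: response labels mapped to numeric codes (per the examples above).
likert = {"strongly disagree": 1, "disagree": 2, "somewhat disagree": 3,
          "neutral": 4, "somewhat agree": 5, "agree": 6, "strongly agree": 7}
industry = {"manufacturing": 1, "retailing": 2, "financial": 3, "healthcare": 4}

# Invented raw survey records to be coded.
raw = [{"q1": "strongly agree", "sector": "retailing"},
       {"q1": "neutral",        "sector": "healthcare"}]

coded = [{"q1": likert[r["q1"]], "sector": industry[r["sector"]]} for r in raw]
print(coded)  # [{'q1': 7, 'sector': 2}, {'q1': 4, 'sector': 4}]
```

Because the mapping lives in one place, every coder applies identical codes, which is exactly the consistency a codebook is meant to enforce.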

Data entry. Coded data can be entered into a spreadsheet, database, text file, or directly into a statistical program like SPSS. Most statistical programs provide a data editor for entering data. However, these programs store data in their own native format—e.g., SPSS stores data as .sav files—which makes it difficult to share that data with other statistical programs. Hence, it is often better to enter data into a spreadsheet or database where it can be reorganised as needed, shared across programs, and subsets of data can be extracted for analysis. Smaller data sets with less than 65,000 observations and 256 items can be stored in a spreadsheet created using a program such as Microsoft Excel, while larger datasets with millions of observations will require a database. Each observation can be entered as one row in the spreadsheet, and each measurement item can be represented as one column. Data should be checked for accuracy during and after entry via occasional spot checks on a set of items or observations. Furthermore, while entering data, the coder should watch out for obvious evidence of bad data, such as the respondent selecting the ‘strongly agree’ response to all items irrespective of content, including reverse-coded items. If so, such data can be entered but should be excluded from subsequent analysis.


Data transformation. Sometimes, it is necessary to transform data values before they can be meaningfully interpreted. For instance, reverse coded items—where items convey the opposite meaning of that of their underlying construct—should be reversed (e.g., in a 1-7 interval scale, 8 minus the observed value will reverse the value) before they can be compared or combined with items that are not reverse coded. Other kinds of transformations may include creating scale measures by adding individual scale items, creating a weighted index from a set of observed measures, and collapsing multiple values into fewer categories (e.g., collapsing incomes into income ranges).
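The reverse-coding rule just described (8 minus the observed value on a seven-point scale) is a one-line transformation:

```python
# Reverse-coding for a seven-point scale: 8 minus the observed value
# flips 1<->7, 2<->6, 3<->5, and leaves 4 unchanged.
def reverse_7pt(value):
    return 8 - value

responses = [1, 4, 7, 2]  # invented observed values on a reverse-coded item
print([reverse_7pt(v) for v in responses])  # [7, 4, 1, 6]
```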

Univariate analysis

Univariate analysis—or analysis of a single variable—refers to a set of statistical techniques that can describe the general properties of one variable. Univariate statistics include: frequency distribution, central tendency, and dispersion. The frequency distribution of a variable is a summary of the frequency—or percentages—of individual values or ranges of values for that variable. For instance, we can measure how many times a sample of respondents attend religious services—as a gauge of their ‘religiosity’—using a categorical scale: never, once per year, several times per year, about once a month, several times per month, several times per week, and an optional category for ‘did not answer’. If we count the number or percentage of observations within each category—except ‘did not answer’ which is really a missing value rather than a category—and display it in the form of a table, as shown in Figure 14.1, what we have is a frequency distribution. This distribution can also be depicted in the form of a bar chart, as shown on the right panel of Figure 14.1, with the horizontal axis representing each category of that variable and the vertical axis representing the frequency or percentage of observations within each category.

Figure 14.1: Frequency distribution of religiosity
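A frequency distribution like the one in Figure 14.1 can be tallied in a few lines. The categories below follow the religiosity example; the individual responses are invented, and ‘did not answer’ is treated as a missing value rather than a category:

```python
from collections import Counter

# Invented attendance responses using the religiosity categories.
responses = ["never", "once per year", "never", "about once a month",
             "several times per year", "never", "did not answer"]

# Exclude the missing-value category before tallying.
valid = [r for r in responses if r != "did not answer"]
freq = Counter(valid)
n = len(valid)

for category, count in freq.items():
    print(f"{category}: {count} ({100 * count / n:.0f}%)")
```

The resulting counts (or percentages) are exactly what the table and bar chart in Figure 14.1 display.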

With very large samples, where observations are independent and random, the frequency distribution tends to follow a plot that looks like a bell-shaped curve—a smoothed bar chart of the frequency distribution—similar to that shown in Figure 14.2. Here most observations are clustered toward the centre of the range of values, with fewer and fewer observations clustered toward the extreme ends of the range. Such a curve is called a normal distribution .

For example, consider a set of eight test scores: 15, 20, 21, 20, 36, 15, 25, and 15. The mean of these scores is (15 + 20 + 21 + 20 + 36 + 15 + 25 + 15)/8 = 20.875.

Lastly, the mode is the most frequently occurring value in a distribution of values. In the previous example, the most frequently occurring value is 15, which is the mode of the above set of test scores. Note that any value estimated from a sample, such as the mean, median, mode, or any of the later estimates, is called a statistic.
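These sample statistics can be checked with Python’s statistics module, using the eight test scores from the example:

```python
import statistics

scores = [15, 20, 21, 20, 36, 15, 25, 15]  # the test scores from the example

print(statistics.mean(scores))    # 20.875
print(statistics.median(scores))  # 20.0 (mean of the two middle values)
print(statistics.mode(scores))    # 15
```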

The range of these test scores is the difference between the highest and lowest values: 36 - 15 = 21.

Bivariate analysis

Bivariate analysis examines how two variables are related to one another. The most common bivariate statistic is the bivariate correlation—often, simply called ‘correlation’—which is a number between -1 and +1 denoting the strength of the relationship between two variables. Say that we wish to study how age is related to self-esteem in a sample of 20 respondents—i.e., as age increases, does self-esteem increase, decrease, or remain unchanged? If self-esteem increases, we have a positive correlation between the two variables; if self-esteem decreases, we have a negative correlation; and if it remains the same, we have a zero correlation. To calculate the value of this correlation, consider the hypothetical dataset shown in Table 14.1.

Figure 14.2: Normal distribution

After computing bivariate correlation, researchers are often interested in knowing whether the correlation is significant (i.e., a real one) or caused by mere chance. Answering such a question would require testing the following hypothesis:

\[H_0:\quad r = 0 \]
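Since Table 14.1 is not reproduced in this excerpt, the sketch below computes the Pearson correlation from its definition on invented age and self-esteem values:

```python
import math

# Invented age and self-esteem values standing in for Table 14.1.
age =    [21, 25, 30, 34, 40, 45, 51, 58]
esteem = [3.2, 3.4, 3.9, 4.1, 4.4, 4.9, 5.1, 5.6]

def pearson_r(x, y):
    """Pearson correlation: covariance scaled by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(age, esteem)
print(round(r, 3))  # close to +1: a strong positive correlation
```

With these invented values, self-esteem rises almost linearly with age, so r lands near +1; a significance test of H0: r = 0 would then ask whether such a value could plausibly arise by chance.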

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Descriptive Statistics | Definitions, Types, Examples

Published on 4 November 2022 by Pritha Bhandari . Revised on 9 January 2023.

Descriptive statistics summarise and organise characteristics of a data set. A data set is a collection of responses or observations from a sample or entire population .

In quantitative research , after collecting data, the first step of statistical analysis is to describe characteristics of the responses, such as the average of one variable (e.g., age), or the relation between two variables (e.g., age and creativity).

The next step is inferential statistics , which help you decide whether your data confirms or refutes your hypothesis and whether it is generalisable to a larger population.

Table of contents

  • Types of descriptive statistics
  • Frequency distribution
  • Measures of central tendency
  • Measures of variability
  • Univariate descriptive statistics
  • Bivariate descriptive statistics
  • Frequently asked questions

There are 3 main types of descriptive statistics:

  • The distribution concerns the frequency of each value.
  • The central tendency concerns the averages of the values.
  • The variability or dispersion concerns how spread out the values are.

Types of descriptive statistics

You can apply these to assess only one variable at a time, in univariate analysis, or to compare two or more, in bivariate and multivariate analysis.

For example, a survey might ask respondents how many times in the past year they did each of the following:

  • Go to a library
  • Watch a movie at a theater
  • Visit a national park

A data set is made up of a distribution of values, or scores. In tables or graphs, you can summarise the frequency of every possible value of a variable in numbers or percentages.

Frequency distributions can be presented as a simple frequency distribution table or, for numerical data, a grouped frequency distribution table.

From this table, you can see that more women than men or people with another gender identity took part in the study. In a grouped frequency distribution, you can group numerical response values and add up the number of responses for each group. You can also convert each of these numbers to percentages.

Measures of central tendency estimate the center, or average, of a data set. The mean , median and mode are 3 ways of finding the average.

Here we will demonstrate how to calculate the mean, median, and mode using the first 6 responses of our survey.

The mean , or M , is the most commonly used method for finding the average.

To find the mean, simply add up all response values and divide the sum by the total number of responses. The total number of responses or observations is called N .

The median is the value that’s exactly in the middle of a data set.

To find the median, order each response value from the smallest to the biggest. Then, the median is the number in the middle. If there are two numbers in the middle, find their mean.

The mode is simply the most popular or most frequent response value. A data set can have no mode, one mode, or more than one mode.

To find the mode, order your data set from lowest to highest and find the response that occurs most frequently.

Measures of variability give you a sense of how spread out the response values are. The range, standard deviation and variance each reflect different aspects of spread.

The range gives you an idea of how far apart the most extreme response scores are. To find the range , simply subtract the lowest value from the highest value.

Standard deviation

The standard deviation ( s ) is the average amount of variability in your dataset. It tells you, on average, how far each score lies from the mean. The larger the standard deviation, the more variable the data set is.

There are six steps for finding the standard deviation:

  • List each score and find their mean.
  • Subtract the mean from each score to get the deviation from the mean.
  • Square each of these deviations.
  • Add up all of the squared deviations.
  • Divide the sum of the squared deviations by N – 1.
  • Find the square root of the number you found.

Step 5: 421.5/5 = 84.3

Step 6: √84.3 = 9.18
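The six steps translate directly into code. Because the article’s own data set is not shown in this excerpt, the sketch uses six invented scores and checks the result against the library’s sample standard deviation:

```python
import math
import statistics

scores = [24, 30, 36, 41, 22, 33]          # invented six-response sample

mean = sum(scores) / len(scores)           # step 1: find the mean
deviations = [x - mean for x in scores]    # step 2: deviation of each score
squared = [d ** 2 for d in deviations]     # step 3: square each deviation
total = sum(squared)                       # step 4: sum the squared deviations
variance = total / (len(scores) - 1)       # step 5: divide by N - 1
sd = math.sqrt(variance)                   # step 6: take the square root

print(round(sd, 2))  # 7.21
assert math.isclose(sd, statistics.stdev(scores))  # matches the sample SD
```

Squaring the standard deviation recovers the variance (step 5’s result), which is the relationship described in the next paragraph.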

The variance is the average of squared deviations from the mean. Variance reflects the degree of spread in the data set. The more spread the data, the larger the variance is in relation to the mean.

To find the variance, simply square the standard deviation. The symbol for variance is s 2 .

Univariate descriptive statistics focus on only one variable at a time. It’s important to examine data from each variable separately using multiple measures of distribution, central tendency and spread. Programs like SPSS and Excel can be used to easily calculate these.

If you were to consider only the mean as a measure of central tendency, your impression of the ‘middle’ of the data set could be skewed by outliers, unlike with the median or mode.

Likewise, while the range is sensitive to extreme values, you should also consider the standard deviation and variance to get easily comparable measures of spread.

If you’ve collected data on more than one variable, you can use bivariate or multivariate descriptive statistics to explore whether there are relationships between them.

In bivariate analysis, you simultaneously study the frequency and variability of two variables to see if they vary together. You can also compare the central tendency of the two variables before performing further statistical tests .

Multivariate analysis is the same as bivariate analysis but with more than two variables.

Contingency table

In a contingency table, each cell represents the intersection of two variables. Usually, an independent variable (e.g., gender) appears along the vertical axis and a dependent one appears along the horizontal axis (e.g., activities). You read ‘across’ the table to see how the independent and dependent variables relate to each other.

Interpreting a contingency table is easier when the raw data is converted to percentages. Percentages make each row comparable to the other by making it seem as if each group had only 100 observations or participants. When creating a percentage-based contingency table, you add the N for each independent variable on the end.

From this table, it is clearer that similar proportions of children and adults go to the library over 17 times a year. Additionally, children most commonly went to the library between 5 and 8 times, while for adults, this number was between 13 and 16.
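Converting a contingency table’s raw counts to row percentages is a small computation. The counts below are invented stand-ins for the library-visit example, with age group as the independent variable and visit bands as the dependent one:

```python
# Invented counts: library-visit bands by age group.
counts = {
    "children": {"0-4": 20, "5-8": 35, "9+": 10},
    "adults":   {"0-4": 32, "5-8": 20, "9+": 13},
}

def row_percentages(table):
    """Express each row as percentages, so groups of different sizes compare."""
    out = {}
    for group, row in table.items():
        n = sum(row.values())
        out[group] = {band: round(100 * c / n, 1) for band, c in row.items()}
    return out

print(row_percentages(counts))
```

Reading ‘across’ each percentage row now compares the groups as if each had 100 participants, which is the point of a percentage-based contingency table.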

Scatter plots

A scatter plot is a chart that shows you the relationship between two or three variables. It’s a visual representation of the strength of a relationship.

In a scatter plot, you plot one variable along the x-axis and another one along the y-axis. Each data point is represented by a point in the chart.

From your scatter plot, you see that as the number of movies seen at movie theaters increases, the number of visits to the library decreases. Based on your visual assessment of a possible linear relationship, you perform further tests of correlation and regression.
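The visual impression from a scatter plot can be quantified before running formal tests. The sketch below uses invented movie and library counts in which one falls as the other rises, and computes the correlation coefficient from its definition:

```python
# Invented paired observations: movie visits up, library visits down.
movies  = [1, 3, 5, 8, 10, 12]
library = [14, 12, 9, 6, 4, 2]

n = len(movies)
mx, my = sum(movies) / n, sum(library) / n
cov = sum((a - mx) * (b - my) for a, b in zip(movies, library))
var_x = sum((a - mx) ** 2 for a in movies)
var_y = sum((b - my) ** 2 for b in library)
r = cov / (var_x * var_y) ** 0.5

print(round(r, 2))  # strongly negative, consistent with the scatter plot
```

A value near -1 backs up the visual assessment of a negative linear relationship and motivates the follow-up tests of correlation and regression.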


Descriptive statistics summarise the characteristics of a data set. Inferential statistics allow you to test a hypothesis or assess whether your data is generalisable to the broader population.

The 3 main types of descriptive statistics concern the frequency distribution, central tendency, and variability of a dataset.

  • Distribution refers to the frequencies of different responses.
  • Measures of central tendency give you the average for each response.
  • Measures of variability show you the spread or dispersion of your dataset.
  • Univariate statistics summarise only one variable  at a time.
  • Bivariate statistics compare two variables .
  • Multivariate statistics compare more than two variables .


Data Analysis: Descriptive and Analytical Statistics

  • First Online: 01 March 2024


Analysing raw data is essential for making meaningful interpretations and for testing the research hypothesis. The process of data analysis in research involves removing redundancy, integrating, filtering, and statistically processing the data; each step is explained in the chapter. All researchers need to perform data analysis with accuracy after the collection of data. Raw data often does not provide meaningful results on its own, so it must be processed to support a scientific conclusion to the research hypothesis, by either accepting or rejecting it.


Author information

Animesh Hazari, College of Health Sciences, Gulf Medical University, Ajman, United Arab Emirates


© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Hazari, A. (2023). Data Analysis: Descriptive and Analytical Statistics. In: Research Methodology for Allied Health Professionals. Springer, Singapore. https://doi.org/10.1007/978-981-99-8925-6_10. Print ISBN 978-981-99-8924-9; Online ISBN 978-981-99-8925-6.

What is descriptive research?

Last updated: 5 February 2023. Reviewed by Cathy Heath.

Descriptive research is a common investigatory model used by researchers in various fields, including social sciences, linguistics, and academia.

Read on to understand the characteristics of descriptive research and explore its underlying techniques, processes, and procedures.


Descriptive research is an exploratory research method. It enables researchers to precisely and methodically describe a population, circumstance, or phenomenon.

As the name suggests, descriptive research describes the characteristics of the group, situation, or phenomenon being studied without manipulating variables or testing hypotheses. This can be reported using surveys, observational studies, and case studies. You can use both quantitative and qualitative methods to compile the data.

Besides making observations and then comparing and analyzing them, descriptive studies often develop knowledge concepts and provide solutions to critical issues. It always aims to answer how the event occurred, when it occurred, where it occurred, and what the problem or phenomenon is.

  • Characteristics of descriptive research

The following are some of the characteristics of descriptive research:

Quantitativeness

Descriptive research can be quantitative as it gathers quantifiable data to statistically analyze a population sample. These numbers can show patterns, connections, and trends over time and can be discovered using surveys, polls, and experiments.

Qualitativeness

Descriptive research can also be qualitative. It gives meaning and context to the numbers supplied by quantitative descriptive research.

Researchers can use tools like interviews, focus groups, and ethnographic studies to illustrate why things are what they are and help characterize the research problem. This is because it’s more explanatory than exploratory or experimental research.

Uncontrolled variables

Descriptive research differs from experimental research in that researchers cannot manipulate the variables. They are recognized, scrutinized, and quantified instead. This is one of its most prominent features.

Cross-sectional studies

Descriptive research is a cross-sectional study because it examines several areas of the same group. It involves obtaining data on multiple variables at the personal level during a certain period. It’s helpful when trying to understand a larger community’s habits or preferences.

Carried out in a natural environment

Descriptive studies are usually carried out in the participants’ everyday environment, which allows researchers to avoid influencing responders by collecting data in a natural setting. You can use online surveys or survey questions to collect data or observe.

Basis for further research

You can further dissect descriptive research’s outcomes and use them for different types of investigation. The outcomes also serve as a foundation for subsequent investigations and can guide future studies. For example, you can use the data obtained in descriptive research to help determine future research designs.

  • Descriptive research methods

There are three basic approaches for gathering data in descriptive research: observational, case study, and survey.

Surveys

You can use surveys to gather data in descriptive research. This involves gathering information from many people using questionnaires and interviews.

Surveys remain the dominant research tool for descriptive research design. Researchers can conduct various investigations and collect multiple types of data (quantitative and qualitative) using surveys with diverse designs.

You can conduct surveys over the phone, online, or in person. Your survey might be a brief interview or conversation with a set of prepared questions intended to obtain quick information from the primary source.

Observation

This descriptive research method involves observing and gathering data on a population or phenomenon without manipulating variables. It is employed in psychology, market research, and other social science studies to track and understand human behavior.

Observation is an essential component of descriptive research. It entails gathering data and analyzing it to see whether there are relationships between the variables in the study. This strategy usually allows for both qualitative and quantitative data analysis.

Case studies

A case study can outline a specific topic’s traits. The topic might be a person, group, event, or organization.

It involves using a subset of a larger group as a sample to characterize the features of that larger group.

You can generalize knowledge gained from studying a case study to benefit a broader audience.

This approach entails carefully examining a particular group, person, or event over time. You can learn something new about the study topic by using a small group to better understand the dynamics of the entire group.

  • Types of descriptive research

There are several types of descriptive study. The most well-known include cross-sectional studies, census surveys, sample surveys, case reports, and comparison studies.

Case reports and case series

In the healthcare and medical fields, a case report explains a patient’s circumstances when suffering from an uncommon illness or displaying unusual symptoms, while a case series is a collection of related cases. Both have aided the advancement of medical knowledge on countless occasions.

Descriptive–normative survey

The normative component is an addition to the descriptive survey: in a descriptive–normative survey, you compare the study’s results to an established norm.

Descriptive survey

This descriptive type of research employs surveys to collect information on various topics. This data aims to determine the degree to which certain conditions may be attained.

Sample survey

You can extrapolate or generalize the information you obtain from sample surveys to the larger group being researched.

Correlative survey

Correlative surveys help establish if there is a positive, negative, or neutral connection between two variables.
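
A common way to quantify such a connection is the Pearson correlation coefficient, where values near +1 indicate a positive relationship, near −1 a negative one, and near 0 no linear relationship. A minimal sketch with hypothetical ad-spend and sales figures:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: +1 positive, -1 negative, ~0 no linear relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: weekly ad spend vs. units sold
ad_spend = [100, 200, 300, 400, 500]
units = [12, 25, 31, 47, 55]
print(round(pearson_r(ad_spend, units), 3))
```

Note that even a strong correlation like this one does not establish cause and effect; it only flags a relationship worth studying further.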

Census survey

Performing census surveys involves gathering relevant data on several aspects of a given population. These units include individuals, families, organizations, objects, characteristics, and properties.

Cross-sectional study

In a cross-sectional study, you gather data on several variables of interest from a specific population at a single point in time. Cross-sectional studies provide a glimpse of a phenomenon’s prevalence and features in a population. They pose few ethical challenges and are quite simple and inexpensive to carry out.

Comparative studies

These studies compare the conditions or characteristics of two or more subjects. The subjects may include research variables, organizations, plans, and people.

Comparison points, assumption of similarities, and criteria of comparison are three important variables that affect how well and accurately comparative studies are conducted.

For instance, descriptive research can help determine how many CEOs hold a bachelor’s degree and what proportion of low-income households receive government help.

  • Pros and cons

The primary advantage of descriptive research designs is that researchers can create a reliable and beneficial database for additional study. To conduct any inquiry, you need access to reliable information sources that can give you a firm understanding of a situation.

Quantitative studies are time- and resource-intensive, so knowing the hypotheses viable for testing is crucial. The basic overview of descriptive research provides helpful hints as to which variables are worth quantitatively examining. This is why it’s employed as a precursor to quantitative research designs.

Some experts view this research as untrustworthy and unscientific because, without manipulating any variables, there is no way to statistically verify the findings.

Cause-and-effect relationships also can’t be established through descriptive investigations. Additionally, observational study findings are difficult to reproduce, which prevents others from reviewing and verifying the results.

The absence of statistical and in-depth analysis and the rather superficial character of the investigative procedure are drawbacks of this research approach.

  • Descriptive research examples and applications

Descriptive research examples vary by type, purpose, and application. Research questions often begin with “What is …” These studies help find solutions to practical issues in social science, physical science, and education.

Here are some examples and applications of descriptive research:

Determining consumer perception and behavior

Organizations use descriptive research designs to determine how various demographic groups react to a certain product or service.

For example, a business looking to sell to its target market should research the market’s behavior first. When researching human behavior in response to a cause or event, the researcher pays attention to the traits, actions, and responses before drawing a conclusion.

Scientific classification

Scientific descriptive research enables the classification of organisms and their traits and constituents.

Measuring data trends

A descriptive study design’s statistical capabilities allow researchers to track data trends over time. It’s frequently used to determine the study target’s current circumstances and underlying patterns.
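
One simple way to surface an underlying pattern is a moving average, which smooths short-term noise in a series. A sketch with hypothetical monthly sign-up counts:

```python
# Hypothetical monthly sign-up counts
signups = [120, 132, 118, 140, 151, 147, 160, 172]

def moving_average(values, window):
    """Smooth short-term noise to expose the underlying trend."""
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

trend = moving_average(signups, window=3)
print([round(t, 1) for t in trend])
```

The smoothed series makes the steady upward trend easier to see than the raw, noisier counts.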

Conduct comparison

Organizations can use a descriptive research approach to learn how various demographics react to a certain product or service. For example, you can study how the target market responds to a competitor’s product and use that information to infer their behavior.

  • Bottom line

A descriptive research design is suitable for exploring certain topics and serving as a prelude to larger quantitative investigations. It provides a comprehensive understanding of the “what” of the group or thing you’re investigating.

This research type acts as the cornerstone of other research methodologies. It is distinctive because it can use quantitative and qualitative research approaches at the same time.

What is descriptive research design?

Descriptive research design aims to systematically obtain information to describe a phenomenon, situation, or population. More specifically, it helps answer the what, when, where, and how questions regarding the research problem rather than the why.

How does descriptive research compare to qualitative research?

Despite certain parallels, descriptive research concentrates on describing phenomena, while qualitative research aims to understand people better.

How do you analyze descriptive research data?

Data analysis involves applying statistical or qualitative methodologies to the collected data, enabling the researcher to evaluate the results and report on their validity and reliability.

Descriptive research: what it is and how to use it.

Understanding the who, what and where of a situation or target group is an essential part of effective research and making informed business decisions.

For example, you might want to understand what percentage of CEOs have a bachelor’s degree or higher. Or you might want to understand what percentage of low-income families receive government support, and what kind of support they receive.

Descriptive research is what will be used in these types of studies.

In this guide we’ll look through the main issues relating to descriptive research to give you a better understanding of what it is, and how and why you can use it.

What is descriptive research?

Descriptive research is a research method used to try and determine the characteristics of a population or particular phenomenon.

Using descriptive research you can identify patterns in the characteristics of a group to essentially establish everything you need to understand apart from why something has happened.

Market researchers use descriptive research for a range of commercial purposes to guide key decisions.

For example, you could use descriptive research to understand fashion trends in a given city when planning your clothing collection for the year. Using descriptive research, you can conduct in-depth analysis of the demographic makeup of your target area and use the data to establish buying patterns.

Conducting descriptive research wouldn’t, however, tell you why shoppers are buying a particular type of fashion item.

Descriptive research design

Descriptive research design uses a range of both qualitative and quantitative data (although quantitative research is the primary method) to gather information and build an accurate picture of a particular problem or hypothesis.

As a survey method, descriptive research designs will help researchers identify characteristics in their target market or particular population.

These characteristics in the population sample can be identified, observed and measured to guide decisions.

Descriptive research characteristics

While there are a number of descriptive research methods you can deploy for data collection, descriptive research does have a number of predictable characteristics.

Here are a few of the things to consider:

Measure data trends with statistical outcomes

Descriptive research is often popular for survey research because it generates answers in a statistical form, which makes it easy for researchers to carry out a simple statistical analysis to interpret what the data is saying.
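
As a simple illustration of that statistical form, closed-ended survey answers can be tallied into counts and percentages. A sketch with hypothetical responses:

```python
from collections import Counter

# Hypothetical closed-ended survey answers
answers = ["yes", "no", "yes", "yes", "unsure", "no", "yes"]

counts = Counter(answers)
total = len(answers)

# Share of each answer, as a percentage of all responses
percentages = {answer: round(100 * n / total, 1) for answer, n in counts.items()}
print(percentages)
```

Tallies like these are often the entire analysis a descriptive survey needs before reporting.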

Descriptive research design is ideal for further research

Because the data collection for descriptive research produces statistical outcomes, it can also be used as secondary data for another research study.

Plus, the data collected from descriptive research can be subjected to other types of data analysis.

Uncontrolled variables

A key component of the descriptive research method is that it observes variables that are not controlled by the researchers. This is because descriptive research aims to understand the natural behavior of the research subject.

It’s carried out in a natural environment

Descriptive research is often carried out in a natural environment. This is because researchers aim to gather data in a natural setting to avoid swaying respondents.

Data can be gathered using survey questions or online surveys.

For example, if you want to understand the fashion trends we mentioned earlier, you would set up a study in which a researcher observes people in the respondent’s natural environment to understand their habits and preferences.

Descriptive research allows for cross-sectional study

Because of the nature of descriptive research design and the randomness of the sample group being observed, descriptive research is ideal for cross-sectional studies – essentially the demographics of the group can vary widely and your aim is to gain insights from within the group.

This can be highly beneficial when you’re looking to understand the behaviors or preferences of a wider population.

Descriptive research advantages

There are many advantages to using descriptive research, some of them include:

Cost effectiveness

Because the elements needed for descriptive research design are not specific or highly targeted (and occur within the respondent’s natural environment), this type of study is relatively cheap to carry out.

Multiple types of data can be collected

A big advantage of this research type is that you can use it to collect both quantitative and qualitative data. This means you can use the stats gathered to easily identify underlying patterns in your respondents’ behavior.

Descriptive research disadvantages

Potential reliability issues

When conducting descriptive research it’s important that the initial survey questions are properly formulated.

If not, it could make the answers unreliable and risk the credibility of your study.

Potential limitations

As we’ve mentioned, descriptive research design is ideal for understanding the what, who or where of a situation or phenomenon.

However, it can’t help you understand the cause or effect of the behavior. This means you’ll need to conduct further research to get a more complete picture of a situation.

Descriptive research methods

Because descriptive research methods include a range of quantitative and qualitative research, there are several research methods you can use.

Use case studies

Case studies in descriptive research involve conducting in-depth and detailed studies in which researchers get a specific person or case to answer questions.

Case studies shouldn’t be used to generate results; rather, they should be used to build or establish hypotheses that you can expand into further market research .

For example, you could gather detailed data about a specific business phenomenon, and then use this deeper understanding of that specific case to shape a broader study.

Use observational methods

This type of study uses qualitative observations to understand human behavior within a particular group.

By understanding how the different demographics respond within your sample you can identify patterns and trends.

As an observational method, descriptive research will not tell you the cause of any particular behaviors, but that could be established with further research.

Use survey research

Surveys are one of the most cost-effective ways to gather descriptive data.

An online survey or questionnaire can be used in descriptive studies to gather quantitative information about a particular problem.

Survey research is ideal if you’re using descriptive research as your primary research.

Descriptive research examples

Descriptive research is used for a number of commercial purposes or when organizations need to understand the behaviors or opinions of a population.

One of the biggest examples of descriptive research, used in every democratic country, is election polling.

Using descriptive research, researchers will use surveys to understand which of the available parties or candidates voters are more likely to choose.

Using the data provided, researchers can analyze the responses to project what the election result will be.
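
For a rough sense of how such projections are qualified, pollsters typically report a sample proportion together with a margin of error. A sketch with hypothetical poll numbers, assuming a simple random sample and a 95% confidence level:

```python
import math

# Hypothetical poll: 540 of 1,000 respondents favor candidate A
favor, n = 540, 1000
p = favor / n

# 95% margin of error for a simple random sample (z = 1.96)
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.1%} ± {moe:.1%}")
```

A result inside the margin of error of 50% would be too close to call from this sample alone.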

In a commercial setting, retailers often use descriptive research to figure out trends in shopping and buying decisions.

By gathering information on the habits of shoppers, retailers can get a better understanding of the purchases being made.

Another example that is widely used around the world is the national census that takes place to understand the population.

The research will provide a more accurate picture of a population’s demographic makeup and help to understand changes over time in areas like population age, health and education level.

Where Qualtrics helps with descriptive research

Whatever type of research you want to carry out, there’s a survey type that will work.

Qualtrics can help you determine the appropriate method and ensure you design a study that will deliver the insights you need.

Our experts can help you with your market research needs, ensuring you get the most out of Qualtrics market research software to design, launch and analyze your data to guide better, more accurate decisions for your organization.


Your Modern Business Guide To Data Analysis Methods And Techniques

Table of Contents

1) What Is Data Analysis?

2) Why Is Data Analysis Important?

3) What Is The Data Analysis Process?

4) Types Of Data Analysis Methods

5) Top Data Analysis Techniques To Apply

6) Quality Criteria For Data Analysis

7) Data Analysis Limitations & Barriers

8) Data Analysis Skills

9) Data Analysis In The Big Data Environment

In our data-rich age, understanding how to analyze and extract true meaning from our business’s digital insights is one of the primary drivers of success.

Despite the colossal volume of data we create every day, a mere 0.5% is actually analyzed and used for data discovery , improvement, and intelligence. While that may not seem like much, considering the amount of digital information we have at our fingertips, half a percent still accounts for a vast amount of data.

With so much data and so little time, knowing how to collect, curate, organize, and make sense of all of this potentially business-boosting information can be a minefield – but online data analysis is the solution.

In science, data analysis uses a more complex approach with advanced techniques to explore and experiment with data. On the other hand, in a business context, data is used to make data-driven decisions that will enable the company to improve its overall performance. In this post, we will cover the analysis of data from an organizational point of view while still going through the scientific and statistical foundations that are fundamental to understanding the basics of data analysis. 

To put all of that into perspective, we will answer a host of important analytical questions and explore analytical methods and techniques, demonstrating how to perform analysis in the real world with a 17-step blueprint for success.

What Is Data Analysis?

Data analysis is the process of collecting, modeling, and analyzing data using various statistical and logical methods and techniques. Businesses rely on analytics processes and tools to extract insights that support strategic and operational decision-making.

All these various methods are largely based on two core areas: quantitative and qualitative research.

Gaining a better understanding of different techniques and methods in quantitative research as well as qualitative insights will give your analyzing efforts a more clearly defined direction, so it’s worth taking the time to allow this particular knowledge to sink in. Additionally, you will be able to create a comprehensive analytical report that will skyrocket your analysis.

Apart from qualitative and quantitative categories, there are also other types of data that you should be aware of before diving into complex data analysis processes. These categories include: 

  • Big data: Refers to massive data sets that need to be analyzed using advanced software to reveal patterns and trends. It is considered to be one of the best analytical assets as it provides larger volumes of data at a faster rate. 
  • Metadata: Putting it simply, metadata is data that provides insights about other data. It summarizes key information about specific data that makes it easier to find and reuse for later purposes. 
  • Real time data: As its name suggests, real time data is presented as soon as it is acquired. From an organizational perspective, this is the most valuable data as it can help you make important decisions based on the latest developments. Our guide on real time analytics will tell you more about the topic. 
  • Machine data: This is more complex data that is generated solely by a machine such as phones, computers, or even websites and embedded systems, without previous human interaction.

Why Is Data Analysis Important?

Before we go into detail about the categories of analysis along with its methods and techniques, you must understand the potential that analyzing data can bring to your organization.

  • Informed decision-making: From a management perspective, you can benefit from analyzing your data as it helps you make decisions based on facts and not simple intuition. For instance, you can understand where to invest your capital, detect growth opportunities, predict your income, or tackle uncommon situations before they become problems. Through this, you can extract relevant insights from all areas in your organization, and with the help of dashboard software, present the data in a professional and interactive way to different stakeholders.
  • Reduce costs: Another great benefit is to reduce costs. With the help of advanced technologies such as predictive analytics, businesses can spot improvement opportunities, trends, and patterns in their data and plan their strategies accordingly. In time, this will help you save money and resources on implementing the wrong strategies. And not just that, by predicting different scenarios such as sales and demand you can also anticipate production and supply. 
  • Target customers better: Customers are arguably the most crucial element in any business. By using analytics to get a 360° vision of all aspects related to your customers, you can understand which channels they use to communicate with you, their demographics, interests, habits, purchasing behaviors, and more. In the long run, it will drive success to your marketing strategies, allow you to identify new potential customers, and avoid wasting resources on targeting the wrong people or sending the wrong message. You can also track customer satisfaction by analyzing your client’s reviews or your customer service department’s performance.

What Is The Data Analysis Process?

When we talk about analyzing data, there is an order to follow to extract the needed conclusions. The analysis process consists of 5 key stages. We will cover each of them in more detail later in the post, but to provide the context needed to understand what is coming next, here is a rundown of the 5 essential steps of data analysis. 

  • Identify: Before you get your hands dirty with data, you first need to identify why you need it in the first place. The identification is the stage in which you establish the questions you will need to answer. For example, what is the customer's perception of our brand? Or what type of packaging is more engaging to our potential customers? Once the questions are outlined you are ready for the next step. 
  • Collect: As its name suggests, this is the stage where you start collecting the needed data. Here, you define which sources of data you will use and how you will use them. The collection of data can come in different forms such as internal or external sources, surveys, interviews, questionnaires, and focus groups, among others.  An important note here is that the way you collect the data will be different in a quantitative and qualitative scenario. 
  • Clean: Once you have the necessary data it is time to clean it and leave it ready for analysis. Not all the data you collect will be useful, when collecting big amounts of data in different formats it is very likely that you will find yourself with duplicate or badly formatted data. To avoid this, before you start working with your data you need to make sure to erase any white spaces, duplicate records, or formatting errors. This way you avoid hurting your analysis with bad-quality data. 
  • Analyze: With the help of various techniques such as statistical analysis, regressions, neural networks, text analysis, and more, you can start analyzing and manipulating your data to extract relevant conclusions. At this stage, you find trends, correlations, variations, and patterns that can help you answer the questions you first thought of in the identify stage. Various technologies in the market assist researchers and average users with the management of their data. Some of them include business intelligence and visualization software, predictive analytics, and data mining, among others. 
  • Interpret: Last but not least you have one of the most important steps: it is time to interpret your results. This stage is where the researcher comes up with courses of action based on the findings. For example, here you would understand if your clients prefer packaging that is red or green, plastic or paper, etc. Additionally, at this stage, you can also find some limitations and work on them. 
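
The five stages above can be sketched end to end for one small, hypothetical question:

```python
# Identify: "What is the average satisfaction rating of our customers?"
# Collect: a raw export from a hypothetical survey tool
raw = ["4", " 5 ", "3", "4", "4", "", "5", "3 "]

# Clean: strip whitespace, drop blanks, convert to numbers
cleaned = [int(r.strip()) for r in raw if r.strip()]

# Analyze: a simple descriptive statistic
average = sum(cleaned) / len(cleaned)

# Interpret: compare the result against a target the team cares about
verdict = "meets" if average >= 4 else "below"
print(f"average={average:.2f} -> {verdict} the target of 4.0")
```

Real projects swap each stage for heavier tooling, but the order of the stages stays the same.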

Now that you have a basic understanding of the key data analysis steps, let’s look at the top 17 essential methods.

17 Essential Types Of Data Analysis Methods

Before diving into the 17 essential types of methods, it is important that we quickly go over the main analysis categories. Moving from descriptive up to prescriptive analysis, the complexity and effort of data evaluation increase, but so does the added value for the company.

a) Descriptive analysis - What happened.

The descriptive analysis method is the starting point for any analytic reflection, and it aims to answer the question: what happened? It does this by ordering, manipulating, and interpreting raw data from various sources to turn it into valuable insights for your organization.

Performing descriptive analysis is essential, as it enables us to present our insights in a meaningful way. This analysis on its own will not allow you to predict future outcomes or answer questions like why something happened, but it will leave your data organized and ready for further investigation.

b) Exploratory analysis - How to explore data relationships.

As its name suggests, the main aim of exploratory analysis is to explore. Prior to it, there is still no notion of the relationship between the data and the variables. Once the data is investigated, exploratory analysis helps you find connections and generate hypotheses and solutions for specific problems. A typical area of application for it is data mining.
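
One common exploratory move is to scan every pair of variables for candidate relationships worth investigating further. A sketch over a small hypothetical dataset:

```python
import itertools
import math

# Hypothetical dataset: three variables observed for the same five customers
data = {
    "visits": [2, 5, 7, 9, 12],
    "spend": [20, 48, 70, 95, 118],
    "age": [44, 31, 52, 27, 39],
}

def corr(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# Scan every pair of variables for relationships worth a closer look
for a, b in itertools.combinations(data, 2):
    print(f"{a} vs {b}: r = {corr(data[a], data[b]):+.2f}")
```

A strong pairwise correlation found this way is a hypothesis to test, not a conclusion.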

c) Diagnostic analysis - Why it happened.

Diagnostic data analytics empowers analysts and executives by helping them gain a firm contextual understanding of why something happened. If you know why something happened as well as how it happened, you will be able to pinpoint the exact ways of tackling the issue or challenge.

Designed to provide direct and actionable answers to specific questions, this is one of the most important methods in research, and it also serves key organizational functions such as retail analytics.

d) Predictive analysis - What will happen.

The predictive method allows you to look into the future to answer the question: what will happen? In order to do this, it uses the results of the previously mentioned descriptive, exploratory, and diagnostic analysis, in addition to machine learning (ML) and artificial intelligence (AI). Through this, you can uncover future trends, potential problems or inefficiencies, connections, and causalities in your data.

With predictive analysis, you can unfold and develop initiatives that will not only enhance your various operational processes but also help you gain an all-important edge over the competition. If you understand why a trend, pattern, or event happened through data, you will be able to develop an informed projection of how things may unfold in particular areas of the business.

e) Prescriptive analysis - How will it happen.

Prescriptive analysis is another of the most effective types of analysis methods in research. Prescriptive data techniques go a step beyond predictive analysis: they revolve around using patterns or trends to develop responsive, practical business strategies.

By drilling down into prescriptive analysis, you will play an active role in the data consumption process by taking well-arranged sets of visual data and using it as a powerful fix to emerging issues in a number of key areas, including marketing, sales, customer experience, HR, fulfillment, finance, logistics analytics, and others.

Top 17 data analysis methods

As mentioned at the beginning of the post, data analysis methods can be divided into two big categories: quantitative and qualitative. Each of these categories holds a powerful analytical value that changes depending on the scenario and type of data you are working with. Below, we will discuss 17 methods that are divided into qualitative and quantitative approaches. 

Without further ado, here are the 17 essential types of data analysis methods with some use cases in the business world: 

A. Quantitative Methods 

To put it simply, quantitative analysis refers to all methods that use numerical data, or data that can be turned into numbers (e.g. category variables like gender, age, etc.), to extract valuable insights. It is used to draw conclusions about relationships and differences and to test hypotheses. Below we discuss some of the key quantitative methods. 

1. Cluster analysis

Cluster analysis is the action of grouping a set of data elements so that said elements are more similar (in a particular sense) to each other than to those in other groups – hence the term ‘cluster.’ Since there is no target variable when clustering, the method is often used to find hidden patterns in the data. The approach is also used to provide additional context to a trend or dataset.

Let's look at it from an organizational perspective. In a perfect world, marketers would be able to analyze each customer separately and give them the best-personalized service, but let's face it, with a large customer base, it is practically impossible to do that. That's where clustering comes in. By grouping customers into clusters based on demographics, purchasing behaviors, monetary value, or any other factor that might be relevant for your company, you will be able to immediately optimize your efforts and give your customers the best experience based on their needs.
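The grouping idea can be sketched in a few lines of pure Python. The snippet below runs a minimal one-dimensional k-means on invented annual-spend figures; a real project would typically reach for a library such as scikit-learn and use many more features.

```python
# Minimal 1-D k-means sketch: split customers into two spending segments.
# The spend figures are invented for illustration.

def kmeans_1d(values, k=2, iterations=20):
    lo, hi = min(values), max(values)
    # Spread the initial centroids evenly across the data range.
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

annual_spend = [120, 150, 130, 900, 950, 1100]
centroids, clusters = kmeans_1d(annual_spend)
print(sorted(round(c) for c in centroids))  # one low-spend and one high-spend segment
```

Each customer now belongs to a segment whose centroid summarizes its typical spend, which is exactly the basis for the targeted treatment described above.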

2. Cohort analysis

This type of data analysis approach uses historical data to examine and compare a determined segment of users' behavior, which can then be grouped with others with similar characteristics. By using this methodology, it's possible to gain a wealth of insight into consumer needs or a firm understanding of a broader target group.

Cohort analysis can be really useful in marketing as it will allow you to understand the impact of your campaigns on specific groups of customers. To exemplify, imagine you send an email campaign encouraging customers to sign up for your site. For this, you create two versions of the campaign with different designs, CTAs, and ad content. Later on, you can use cohort analysis to track the performance of the campaign over a longer period of time and understand which type of content is driving your customers to sign up, repurchase, or engage in other ways.  
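At its core, a cohort analysis simply groups users by a shared start date and tracks a behavior over time. Here is a minimal sketch with invented signup records, measuring one-month retention per monthly cohort:

```python
# Cohort retention sketch: group users by signup month and compute the share
# still active one month later. All records are hypothetical.
from collections import defaultdict

events = [  # (user_id, signup_month, months_active_after_signup)
    ("u1", "2023-01", 3), ("u2", "2023-01", 0), ("u3", "2023-01", 1),
    ("u4", "2023-02", 2), ("u5", "2023-02", 0),
]

cohorts = defaultdict(lambda: {"size": 0, "retained": 0})
for _, month, active_months in events:
    cohorts[month]["size"] += 1
    if active_months >= 1:  # still active one month after signing up
        cohorts[month]["retained"] += 1

for month in sorted(cohorts):
    c = cohorts[month]
    print(month, f"{c['retained'] / c['size']:.0%} retained after one month")
```

Comparing these retention rates across cohorts exposed to different campaign versions is what lets you judge which content drives engagement.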

A useful tool to start performing cohort analysis is Google Analytics. You can learn more about the benefits and limitations of using cohorts in GA in this useful guide. In the image below, you can see an example of how a cohort is visualized in this tool. The segments (devices traffic) are divided into date cohorts (usage of devices) and then analyzed week by week to extract insights into performance.

Cohort analysis chart example from Google Analytics

3. Regression analysis

Regression uses historical data to understand how a dependent variable's value is affected when one (linear regression) or more independent variables (multiple regression) change or stay the same. By understanding each variable's relationship and how it developed in the past, you can anticipate possible outcomes and make better decisions in the future.

Let's break it down with an example. Imagine you did a regression analysis of your sales in 2019 and discovered that variables like product quality, store design, customer service, marketing campaigns, and sales channels affected the overall result. Now you want to use regression to analyze which of these variables changed or if any new ones appeared during 2020. For example, you couldn’t sell as much in your physical store due to COVID lockdowns. Therefore, your sales could’ve either dropped in general or increased in your online channels. Through this, you can understand which independent variables affected the overall performance of your dependent variable, annual sales.
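For a single independent variable, the underlying math is ordinary least squares. The sketch below fits a line through invented marketing-spend and sales figures; a real analysis would use a statistics library and many more observations.

```python
# Simple linear regression sketch via ordinary least squares.
# Spend and sales figures are made up for illustration.

def fit_line(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return slope, intercept

marketing_spend = [10, 20, 30, 40, 50]   # e.g. thousands of dollars
sales = [25, 45, 65, 85, 105]            # e.g. units sold
slope, intercept = fit_line(marketing_spend, sales)
prediction = slope * 60 + intercept      # forecast sales at a spend of 60
print(slope, intercept, prediction)
```

The slope tells you how much the dependent variable moves per unit change in the independent one, which is exactly the relationship regression is meant to surface.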

If you want to go deeper into this type of analysis, check out this article and learn more about how you can benefit from regression.

4. Neural networks

The neural network forms the basis for the intelligent algorithms of machine learning. It is a form of analytics that attempts, with minimal intervention, to understand how the human brain would generate insights and predict values. Neural networks learn from each and every data transaction, meaning that they evolve and advance over time.
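To make the "learning from each transaction" idea concrete, here is the smallest possible neural unit: a single neuron whose weight and bias are nudged by gradient descent after every data point. The training data is invented, and real networks stack thousands of such units behind libraries like TensorFlow or PyTorch.

```python
# A single artificial neuron trained with gradient descent on the
# (made-up) relationship y = 2x + 1.

weight, bias, lr = 0.0, 0.0, 0.05
data = [(1, 3), (2, 5), (3, 7), (4, 9)]

for _ in range(500):
    for x, y in data:
        pred = weight * x + bias   # forward pass
        error = pred - y
        weight -= lr * error * x   # gradient step on each parameter
        bias -= lr * error

print(round(weight, 2), round(bias, 2))  # converges toward 2 and 1
```

Every pass over the data nudges the parameters a little closer to the underlying relationship, which is the essence of how networks "evolve and advance over time."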

A typical area of application for neural networks is predictive analytics. There are BI reporting tools that have this feature implemented within them, such as the Predictive Analytics Tool from datapine. This tool enables users to quickly and easily generate all kinds of predictions. All you have to do is select the data to be processed based on your KPIs, and the software automatically calculates forecasts based on historical and current data. Thanks to its user-friendly interface, anyone in your organization can manage it; there’s no need to be an advanced scientist. 

Here is an example of how you can use the predictive analysis tool from datapine:

Example on how to use predictive analytics tool from datapine


5. Factor analysis

Factor analysis, also called “dimension reduction,” is a type of data analysis used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. The aim here is to uncover independent latent variables, making it an ideal method for streamlining specific segments.

A good way to understand this data analysis method is a customer evaluation of a product. The initial assessment is based on different variables like color, shape, wearability, current trends, materials, comfort, the place where they bought the product, and frequency of usage. The list can be endless, depending on what you want to track. In this case, factor analysis comes into the picture by summarizing all of these variables into homogenous groups, for example, by grouping the variables color, materials, quality, and trends into a broader latent variable of design.

If you want to start analyzing data using factor analysis, we recommend you take a look at this practical guide from UCLA.

6. Data mining

A method of data analysis that is the umbrella term for engineering metrics and insights for additional value, direction, and context. By using exploratory statistical evaluation, data mining aims to identify dependencies, relations, patterns, and trends to generate advanced knowledge.  When considering how to analyze data, adopting a data mining mindset is essential to success - as such, it’s an area that is worth exploring in greater detail.

An excellent use case of data mining is datapine's intelligent data alerts. With the help of artificial intelligence and machine learning, they provide automated signals based on particular commands or occurrences within a dataset. For example, if you’re monitoring supply chain KPIs, you could set an intelligent alarm to trigger when invalid or low-quality data appears. By doing so, you will be able to drill down deep into the issue and fix it swiftly and effectively.

In the following picture, you can see how the intelligent alarms from datapine work. By setting up ranges on daily orders, sessions, and revenues, the alarms will notify you if the goal was not completed or if it exceeded expectations.

Example on how to use intelligent alerts from datapine

7. Time series analysis

As its name suggests, time series analysis is used to analyze a set of data points collected over a specified period of time. Analysts use this method to monitor data points over a continuous interval rather than intermittently, but the goal is not simply to collect data over time. Rather, time series analysis allows researchers to understand whether variables changed over the duration of the study, how the different variables depend on one another, and how the end result was reached. 

In a business context, this method is used to understand the causes of different trends and patterns to extract valuable insights. Another way of using this method is with the help of time series forecasting. Powered by predictive technologies, businesses can analyze various data sets over a period of time and forecast different future events. 

A great use case to put time series analysis into perspective is seasonality effects on sales. By using time series forecasting to analyze sales data of a specific product over time, you can understand if sales rise over a specific period of time (e.g. swimwear during summertime, or candy during Halloween). These insights allow you to predict demand and prepare production accordingly.  
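A first pass at seasonality does not need heavy tooling: averaging the same calendar month across several years already exposes the pattern. The sketch below uses invented swimwear sales figures; production forecasting would use a dedicated time-series library and far more history.

```python
# Seasonality sketch: average each calendar month across two years of
# hypothetical swimwear sales to expose the summer peak.
from collections import defaultdict

monthly_sales = {  # "YYYY-MM": units sold
    "2022-01": 40, "2022-06": 180, "2022-07": 210, "2022-12": 35,
    "2023-01": 45, "2023-06": 190, "2023-07": 230, "2023-12": 30,
}

by_month = defaultdict(list)
for date, units in monthly_sales.items():
    by_month[date[-2:]].append(units)  # group observations by calendar month

seasonal_avg = {m: sum(v) / len(v) for m, v in by_month.items()}
peak = max(seasonal_avg, key=seasonal_avg.get)
print(peak, seasonal_avg[peak])  # July dominates in this toy data
```

Once the seasonal profile is known, production and inventory can be planned around the expected peak months.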

8. Decision Trees 

The decision tree analysis aims to act as a support tool to make smart and strategic decisions. By visually displaying potential outcomes, consequences, and costs in a tree-like model, researchers and company users can easily evaluate all factors involved and choose the best course of action. Decision trees are helpful to analyze quantitative data and they allow for an improved decision-making process by helping you spot improvement opportunities, reduce costs, and enhance operational efficiency and production.

But how does a decision tree actually work? This method works like a flowchart that starts with the main decision that you need to make and branches out based on the different outcomes and consequences of each decision. Each outcome will outline its own consequences, costs, and gains and, at the end of the analysis, you can compare each of them and make the smartest decision. 

Businesses can use them to understand which project is more cost-effective and will bring more earnings in the long run. For example, imagine you need to decide if you want to update your software app or build a new app entirely. Here you would compare the total costs, the time needed to be invested, potential revenue, and any other factor that might affect your decision. In the end, you would be able to see which of these two options is more realistic and attainable for your company or research.
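The update-vs-rebuild choice above can be sketched as a two-branch decision tree, where each branch's value is its probability-weighted revenue minus its upfront cost. Every figure below is invented purely for illustration.

```python
# Decision-tree sketch: compare two options by expected value
# (probability-weighted payoff minus upfront cost). Figures are made up.

options = {
    "update app": {"cost": 50_000,
                   "outcomes": [(0.7, 120_000), (0.3, 60_000)]},
    "new app":    {"cost": 200_000,
                   "outcomes": [(0.5, 400_000), (0.5, 100_000)]},
}

for name, branch in options.items():
    expected_revenue = sum(p * payoff for p, payoff in branch["outcomes"])
    branch["expected_value"] = expected_revenue - branch["cost"]
    print(name, round(branch["expected_value"]))

best = max(options, key=lambda n: options[n]["expected_value"])
print("best option:", best)
```

Laying the branches out numerically like this is what lets decision makers compare courses of action on a single scale instead of by gut feeling.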

9. Conjoint analysis 

Last but not least, we have the conjoint analysis. This approach is usually used in surveys to understand how individuals value different attributes of a product or service and it is one of the most effective methods to extract consumer preferences. When it comes to purchasing, some clients might be more price-focused, others more features-focused, and others might have a sustainable focus. Whatever your customer's preferences are, you can find them with conjoint analysis. Through this, companies can define pricing strategies, packaging options, subscription packages, and more. 

A great example of conjoint analysis is in marketing and sales. For instance, a cupcake brand might use conjoint analysis and find that its clients prefer gluten-free options and cupcakes with healthier toppings over super sugary ones. Thus, the cupcake brand can turn these insights into advertisements and promotions to increase sales of this particular type of product. And not just that, conjoint analysis can also help businesses segment their customers based on their interests. This allows them to send different messaging that will bring value to each of the segments. 

10. Correspondence Analysis

Also known as reciprocal averaging, correspondence analysis is a method used to analyze the relationship between categorical variables presented within a contingency table. A contingency table is a table that displays two (simple correspondence analysis) or more (multiple correspondence analysis) categorical variables across rows and columns that show the distribution of the data, which is usually answers to a survey or questionnaire on a specific topic. 

This method starts by calculating an “expected value” for each cell, obtained by multiplying the cell’s row total by its column total and dividing by the overall table total. The “expected value” is then subtracted from the observed value, resulting in a “residual,” which is what allows you to extract conclusions about relationships and distribution. The results of this analysis are later displayed using a map that represents the relationship between the different values. The closer two values are on the map, the stronger the relationship. Let’s put it into perspective with an example. 

Imagine you are carrying out a market research analysis about outdoor clothing brands and how they are perceived by the public. For this analysis, you ask a group of people to match each brand with a certain attribute which can be durability, innovation, quality materials, etc. When calculating the residual numbers, you can see that brand A has a positive residual for innovation but a negative one for durability. This means that brand A is not positioned as a durable brand in the market, something that competitors could take advantage of. 
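The residual computation for the brand example can be sketched directly. The snippet uses the standard expected-count formula (row total × column total ÷ grand total) on invented survey counts; a full correspondence analysis would go on to decompose these residuals into map coordinates.

```python
# Residuals for a 2x2 contingency table: brands (rows) vs. attributes
# (columns). A positive residual means the brand is matched with the
# attribute more often than chance would suggest. Counts are invented.

table = {              # [durability, innovation] mentions per brand
    "brand A": [10, 40],
    "brand B": [35, 15],
}
col_totals = [sum(row[j] for row in table.values()) for j in range(2)]
grand_total = sum(col_totals)

residuals = {}
for brand, row in table.items():
    row_total = sum(row)
    residuals[brand] = [
        observed - row_total * col_totals[j] / grand_total
        for j, observed in enumerate(row)
    ]
print(residuals)  # brand A: negative for durability, positive for innovation
```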

11. Multidimensional Scaling (MDS)

MDS is a method used to observe the similarities or disparities between objects, which can be colors, brands, people, geographical coordinates, and more. The objects are plotted using an “MDS map” that positions similar objects together and disparate ones far apart. The (dis)similarities between objects are represented using one or more dimensions that can be observed using a numerical scale. For example, if you want to know how people feel about the COVID-19 vaccine, you can use 1 for “don’t believe in the vaccine at all,” 10 for “firmly believe in the vaccine,” and 2 to 9 for the responses in between. When analyzing an MDS map, the only thing that matters is the distance between the objects; the orientation of the dimensions is arbitrary and has no meaning at all. 

Multidimensional scaling is a valuable technique for market research, especially when it comes to evaluating product or brand positioning. For instance, if a cupcake brand wants to know how they are positioned compared to competitors, it can define 2-3 dimensions such as taste, ingredients, shopping experience, or more, and do a multidimensional scaling analysis to find improvement opportunities as well as areas in which competitors are currently leading. 

Another business example is in procurement when deciding on different suppliers. Decision makers can generate an MDS map to see how the different prices, delivery times, technical services, and more of the different suppliers differ and pick the one that suits their needs the best. 

A final example comes from a research paper, “An Improved Study of Multilevel Semantic Network Visualization for Analyzing Sentiment Word of Movie Review Data.” The researchers used a two-dimensional MDS map to display the distances and relationships between different sentiments in movie reviews. They plotted 36 sentiment words based on their emotional distance, as we can see in the image below, where the words “outraged” and “sweet” sit on opposite sides of the map, marking the distance between the two emotions very clearly.

Example of multidimensional scaling analysis

Aside from being a valuable technique to analyze dissimilarities, MDS also serves as a dimension-reduction technique for large dimensional data. 

B. Qualitative Methods

Qualitative data analysis methods are defined as the observation of non-numerical data that is gathered and produced using methods such as interviews, focus groups, questionnaires, and more. As opposed to quantitative methods, qualitative data is more subjective and highly valuable in analyzing customer retention and product development.

12. Text analysis

Text analysis, also known in the industry as text mining, works by taking large sets of textual data and arranging them in a way that makes it easier to manage. By working through this cleansing process in stringent detail, you will be able to extract the data that is truly relevant to your organization and use it to develop actionable insights that will propel you forward.

Modern software accelerates the application of text analytics. Thanks to the combination of machine learning and intelligent algorithms, you can perform advanced analytical processes such as sentiment analysis. This technique allows you to understand the intentions and emotions of a text, for example, whether it's positive, negative, or neutral, and then give it a score depending on certain factors and categories that are relevant to your brand. Sentiment analysis is often used to monitor brand and product reputation and to understand how successful your customer experience is. To learn more about the topic, check out this insightful article.
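Under the hood, the simplest form of sentiment analysis is a lexicon lookup: count positive and negative words and compare. Production systems rely on trained models, but this hand-rolled sketch with tiny, made-up word lists shows the scoring idea.

```python
# Minimal lexicon-based sentiment sketch. Word lists are illustrative only.

POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text):
    # Lowercase, strip trailing punctuation, then tally lexicon hits.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Great product, I love it"))
print(sentiment("Terrible quality and poor service"))
```

Real scorers extend this with negation handling ("not good"), intensity weights, and context, but the classify-by-score structure is the same.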

By analyzing data from various word-based sources, including product reviews, articles, social media communications, and survey responses, you will gain invaluable insights into your audience, as well as their needs, preferences, and pain points. This will allow you to create campaigns, services, and communications that meet your prospects’ needs on a personal level, growing your audience while boosting customer retention. There are various other “sub-methods” that are an extension of text analysis. Each of them serves a more specific purpose and we will look at them in detail next. 

13. Content Analysis

This is a straightforward and very popular method that examines the presence and frequency of certain words, concepts, and subjects in different content formats such as text, image, audio, or video. For example, the number of times the name of a celebrity is mentioned on social media or online tabloids. It does this by coding text data that is later categorized and tabulated in a way that can provide valuable insights, making it the perfect mix of quantitative and qualitative analysis.

There are two types of content analysis. The first one is the conceptual analysis which focuses on explicit data, for instance, the number of times a concept or word is mentioned in a piece of content. The second one is relational analysis, which focuses on the relationship between different concepts or words and how they are connected within a specific context. 
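A conceptual analysis of the kind just described boils down to counting occurrences of chosen concepts. Here is a sketch over two invented customer reviews:

```python
# Conceptual content-analysis sketch: count how often predefined concepts
# appear across customer reviews. Reviews and concepts are made up.
from collections import Counter

reviews = [
    "The battery life is great but the battery drains fast on video",
    "Screen is sharp, battery could be better",
]
concepts = {"battery", "screen", "video"}

counts = Counter(
    word for review in reviews
    for word in review.lower().split()
    if word in concepts
)
print(counts.most_common())  # "battery" is the most frequent concept
```

A relational analysis would go one step further and also record which concepts co-occur within the same review or sentence.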

Content analysis is often used by marketers to measure brand reputation and customer behavior, for example, by analyzing customer reviews. It can also be used to analyze customer interviews and find directions for new product development. It is also important to note that, in order to extract the maximum potential out of this analysis method, it is necessary to have a clearly defined research question. 

14. Thematic Analysis

Very similar to content analysis, thematic analysis also helps in identifying and interpreting patterns in qualitative data, with the main difference being that content analysis can also be applied to quantitative data. The thematic method analyzes large pieces of text data, such as focus group transcripts or interviews, and groups them into themes or categories that come up frequently within the text. It is a great method when trying to figure out people's views and opinions about a certain topic. For example, if you are a brand that cares about sustainability, you can survey your customers to analyze their views and opinions about sustainability and how they apply it to their lives. You can also analyze customer service call transcripts to find common issues and improve your service. 

Thematic analysis is a very subjective technique that relies on the researcher’s judgment. Therefore, to avoid biases, it follows six steps: familiarization, coding, generating themes, reviewing themes, defining and naming themes, and writing up. It is also important to note that, because it is a flexible approach, the data can be interpreted in multiple ways, and it can be hard to select which data is most important to emphasize. 

15. Narrative Analysis 

A bit more complex in nature than the two previous ones, narrative analysis is used to explore the meaning behind the stories that people tell and most importantly, how they tell them. By looking into the words that people use to describe a situation you can extract valuable conclusions about their perspective on a specific topic. Common sources for narrative data include autobiographies, family stories, opinion pieces, and testimonials, among others. 

From a business perspective, narrative analysis can be useful to analyze customer behaviors and feelings towards a specific product, service, feature, or other aspect of the business. It provides unique and deep insights that can be extremely valuable. However, it has some drawbacks.  

The biggest weakness of this method is that the sample sizes are usually very small due to the complexity and time-consuming nature of the collection of narrative data. Plus, the way a subject tells a story will be significantly influenced by his or her specific experiences, making it very hard to replicate in a subsequent study. 

16. Discourse Analysis

Discourse analysis is used to understand the meaning behind any type of written, verbal, or symbolic discourse based on its political, social, or cultural context. It mixes the analysis of languages and situations together. This means that the way the content is constructed and the meaning behind it is significantly influenced by the culture and society it takes place in. For example, if you are analyzing political speeches you need to consider different context elements such as the politician's background, the current political context of the country, the audience to which the speech is directed, and so on. 

From a business point of view, discourse analysis is a great market research tool. It allows marketers to understand how the norms and ideas of the specific market work and how their customers relate to those ideas. It can be very useful to build a brand mission or develop a unique tone of voice. 

17. Grounded Theory Analysis

Traditionally, researchers decide on a method and hypothesis and start to collect data to test that hypothesis. Grounded theory is the only method that doesn’t require an initial research question or hypothesis, as its value lies in the generation of new theories. With the grounded theory method, you can go into the analysis process with an open mind and explore the data to generate new theories through tests and revisions. In fact, it is not necessary to finish collecting the data before starting to analyze it; researchers usually begin to find valuable insights as they gather the data. 

All of these elements make grounded theory a very valuable method, as theories are fully backed by data instead of initial assumptions. It is a great technique to analyze poorly researched topics or find the causes behind specific company outcomes. For example, product managers and marketers might use grounded theory to find the causes of high levels of customer churn, looking into customer surveys and reviews to develop new theories about the causes. 

How To Analyze Data? Top 17 Data Analysis Techniques To Apply

17 top data analysis techniques by datapine

Now that we’ve answered the questions “what is data analysis?” and “why is it important?” and covered the different data analysis types, it’s time to dig deeper into how to perform your analysis by working through these 17 essential techniques.

1. Collaborate on your needs

Before you begin analyzing or drilling down into any techniques, it’s crucial to sit down collaboratively with all key stakeholders within your organization, decide on your primary campaign or strategic goals, and gain a fundamental understanding of the types of insights that will best benefit your progress or provide you with the level of vision you need to evolve your organization.

2. Establish your questions

Once you’ve outlined your core objectives, you should consider which questions will need answering to help you achieve your mission. This is one of the most important techniques as it will shape the very foundations of your success.

To ensure your data works for you, you have to ask the right data analysis questions.

3. Data democratization

After giving your data analytics methodology some real direction, and knowing which questions need answering to extract optimum value from the information available to your organization, you should continue with democratization.

Data democratization is an action that aims to connect data from various sources efficiently and quickly so that anyone in your organization can access it at any given moment. You can extract data in text, images, videos, numbers, or any other format. And then perform cross-database analysis to achieve more advanced insights to share with the rest of the company interactively.  

Once you have decided on your most valuable sources, you need to take all of this into a structured format to start collecting your insights. For this purpose, datapine offers an easy all-in-one data connectors feature to integrate all your internal and external sources and manage them at your will. Additionally, datapine’s end-to-end solution automatically updates your data, allowing you to save time and focus on performing the right analysis to grow your company.

data connectors from datapine

4. Think of governance 

When collecting data in a business or research context, you always need to think about security and privacy. With data breaches becoming a topic of concern for businesses, the need to protect your clients' or subjects' sensitive information becomes critical. 

To ensure that all this is taken care of, you need to think of a data governance strategy. According to Gartner , this concept refers to “ the specification of decision rights and an accountability framework to ensure the appropriate behavior in the valuation, creation, consumption, and control of data and analytics .” In simpler words, data governance is a collection of processes, roles, and policies that ensure the efficient use of data while still achieving the main company goals. It ensures that clear roles are in place for who can access the information and how they can access it. In time, this not only ensures that sensitive information is protected but also allows for an efficient analysis as a whole. 

5. Clean your data

After harvesting data from so many sources, you will be left with a vast amount of information that can be overwhelming to deal with. At the same time, you can be faced with incorrect data that can be misleading to your analysis. The smartest thing you can do to avoid dealing with this in the future is to clean the data. This is fundamental before visualizing it, as it will ensure that the insights you extract from it are correct.

There are many things that you need to look for in the cleaning process. The most important one is to eliminate any duplicate observations; these usually appear when using multiple internal and external sources of information. You can also add any missing codes, fix empty fields, and eliminate incorrectly formatted data.

Another usual form of cleaning is done with text data. As we mentioned earlier, most companies today analyze customer reviews, social media comments, questionnaires, and several other text inputs. In order for algorithms to detect patterns, text data needs to be revised to avoid invalid characters or any syntax or spelling errors. 
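Two of the cleaning steps mentioned above, removing duplicate observations and fixing empty fields, can be sketched in plain Python on a handful of invented records:

```python
# Data-cleaning sketch: normalize text fields, fill empty values, and
# drop exact duplicates. Records are invented for illustration.

raw = [
    {"id": 1, "email": " Ana@Mail.com ", "country": "US"},
    {"id": 1, "email": " Ana@Mail.com ", "country": "US"},  # duplicate row
    {"id": 2, "email": "BOB@mail.com", "country": ""},      # empty field
]

seen, clean = set(), []
for rec in raw:
    rec = dict(rec)
    rec["email"] = rec["email"].strip().lower()   # normalize text
    rec["country"] = rec["country"] or "unknown"  # fill empty fields
    key = (rec["id"], rec["email"])
    if key not in seen:                           # drop exact duplicates
        seen.add(key)
        clean.append(rec)

print(len(clean), "records kept;", clean[1]["country"])
```

Real pipelines apply the same steps at scale with tools like pandas or a data-quality layer in a BI platform, but the logic is identical.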

Most importantly, the aim of cleaning is to prevent you from arriving at false conclusions that can damage your company in the long run. By using clean data, you will also help BI solutions to interact better with your information and create better reports for your organization.

6. Set your KPIs

Once you’ve set your sources, cleaned your data, and established clear-cut questions you want your insights to answer, you need to set a host of key performance indicators (KPIs) that will help you track, measure, and shape your progress in a number of key areas.

KPIs are critical to both qualitative and quantitative analysis research. This is one of the primary methods of data analysis you certainly shouldn’t overlook.

To help you set the best possible KPIs for your initiatives and activities, here is an example of a relevant logistics KPI : transportation-related costs. If you want to see more, explore our collection of key performance indicator examples .

Transportation costs logistics KPIs

7. Omit useless data

Having bestowed your data analysis tools and techniques with true purpose and defined your mission, you should explore the raw data you’ve collected from all sources and use your KPIs as a reference for chopping out any information you deem to be useless.

Trimming the informational fat is one of the most crucial methods of analysis as it will allow you to focus your analytical efforts and squeeze every drop of value from the remaining ‘lean’ information.

Any stats, facts, figures, or metrics that don’t align with your business goals or fit with your KPI management strategies should be eliminated from the equation.

8. Build a data management roadmap

While, at this point, this particular step is optional (you will have already gained a wealth of insight and formed a fairly sound strategy by now), creating a data management roadmap will help your data analysis methods and techniques become successful on a more sustainable basis. These roadmaps, if developed properly, are also built so they can be tweaked and scaled over time.

Invest ample time in developing a roadmap that will help you store, manage, and handle your data internally, and you will make your analysis techniques all the more fluid and functional – one of the most powerful types of data analysis methods available today.

9. Integrate technology

There are many ways to analyze data, but one of the most vital aspects of analytical success in a business context is integrating the right decision support software and technology.

A robust analysis platform will not only allow you to pull critical data from your most valuable sources while working with dynamic KPIs that offer actionable insights; it will also present them in a digestible, visual, interactive format from one central, live dashboard. That is a data methodology you can count on.

By integrating the right technology within your data analysis methodology, you’ll avoid fragmenting your insights, saving you time and effort while allowing you to enjoy the maximum value from your business’s most valuable insights.

For a look at the power of software for the purpose of analysis and to enhance your methods of analyzing, glance over our selection of dashboard examples .

10. Answer your questions

By considering each of the above efforts, working with the right technology, and fostering a cohesive internal culture where everyone buys into the different ways to analyze data as well as the power of digital intelligence, you will swiftly start to answer your most burning business questions. Arguably, the best way to make your data concepts accessible across the organization is through data visualization.

11. Visualize your data

Online data visualization is a powerful tool as it lets you tell a story with your metrics, allowing users across the organization to extract meaningful insights that aid business evolution – and it covers all the different ways to analyze data.

The purpose of analyzing is to make your entire organization more informed and intelligent, and with the right platform or dashboard, this is simpler than you think, as demonstrated by our marketing dashboard .

An executive dashboard example showcasing high-level marketing KPIs such as cost per lead, MQL, SQL, and cost per customer.

This visual, dynamic, and interactive online dashboard is a data analysis example designed to give Chief Marketing Officers (CMO) an overview of relevant metrics to help them understand if they achieved their monthly goals.

In detail, this example generated with a modern dashboard creator displays interactive charts for monthly revenues, costs, net income, and net income per customer; all of them are compared with the previous month so that you can understand how the data fluctuated. In addition, it shows a detailed summary of the number of users, customers, SQLs, and MQLs per month to visualize the whole picture and extract relevant insights or trends for your marketing reports .

The CMO dashboard is perfect for c-level management as it can help them monitor the strategic outcome of their marketing efforts and make data-driven decisions that can benefit the company exponentially.

12. Be careful with the interpretation

We already dedicated an entire post to data interpretation, as it is a fundamental part of the process of data analysis. It gives meaning to the analytical information and aims to derive a concise conclusion from the analysis results. Since most of the time companies are dealing with data from many different sources, the interpretation stage needs to be done carefully and properly in order to avoid misinterpretations. 

To help you through the process, here we list three common practices that you need to avoid at all costs when looking at your data:

  • Correlation vs. causation: The human brain is wired to find patterns. This tendency leads to one of the most common mistakes in interpretation: confusing correlation with causation. Although the two can exist simultaneously, it is not correct to assume that because two things happened together, one provoked the other. To avoid falling into this mistake, never trust intuition alone; trust the data. If there is no objective evidence of causation, then always stick to correlation. 
  • Confirmation bias: This phenomenon describes the tendency to select and interpret only the data necessary to prove one hypothesis, often ignoring the elements that might disprove it. Even if it's not done on purpose, confirmation bias can represent a real problem, as excluding relevant information can lead to false conclusions and, therefore, bad business decisions. To avoid it, always try to disprove your hypothesis instead of proving it, share your analysis with other team members, and avoid drawing any conclusions before the entire analytical project is finalized.
  • Statistical significance: In short, statistical significance helps analysts understand whether a result is actually accurate or whether it happened because of a sampling error or pure chance. The level of statistical significance needed might depend on the sample size and the industry being analyzed. In any case, ignoring the significance of a result when it might influence decision-making can be a huge mistake.
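
To make the correlation and significance points concrete, here is a minimal, illustrative sketch in Python (standard library only; the ad-spend and sign-up figures are made up). It computes a Pearson correlation and estimates its significance with a simple permutation test:

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(x, y, trials=2_000, seed=42):
    """Estimate significance: the share of random re-pairings of x and y
    whose |r| is at least as large as the observed one. A small value
    suggests the association is unlikely to be pure chance; it says
    nothing about which variable (if either) causes the other."""
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    y_shuffled = list(y)
    hits = 0
    for _ in range(trials):
        rng.shuffle(y_shuffled)
        if abs(pearson_r(x, y_shuffled)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical monthly figures: ad spend (in thousands) vs. sign-ups
ad_spend = [10, 12, 15, 18, 22, 25, 28, 30]
signups = [110, 118, 131, 140, 158, 165, 172, 180]
r = pearson_r(ad_spend, signups)
p = permutation_p_value(ad_spend, signups)
```

Even a near-perfect, highly significant correlation like this one would not prove that ad spend causes sign-ups; both could be driven by a third factor such as seasonality.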

13. Build a narrative

Now, we’re going to look at how you can bring all of these elements together in a way that will benefit your business - starting with a little something called data storytelling.

The human brain responds incredibly well to strong stories or narratives. Once you’ve cleansed, shaped, and visualized your most invaluable data using various BI dashboard tools , you should strive to tell a story - one with a clear-cut beginning, middle, and end.

By doing so, you will make your analytical efforts more accessible, digestible, and universal, empowering more people within your organization to use your discoveries to their actionable advantage.

14. Consider autonomous technology

Autonomous technologies, such as artificial intelligence (AI) and machine learning (ML), play a significant role in advancing how effectively data can be analyzed.

Gartner predicts that by the end of this year, 80% of emerging technologies will be developed with AI foundations. This is a testament to the ever-growing power and value of autonomous technologies.

At the moment, these technologies are revolutionizing the analysis industry. Some examples that we mentioned earlier are neural networks, intelligent alarms, and sentiment analysis.

15. Share the load

If you work with the right tools and dashboards, you will be able to present your metrics in a digestible, value-driven format, allowing almost everyone in the organization to connect with and use relevant data to their advantage.

Modern dashboards consolidate data from various sources, providing access to a wealth of insights in one centralized location, no matter if you need to monitor recruitment metrics or generate reports that need to be sent across numerous departments. Moreover, these cutting-edge tools offer access to dashboards from a multitude of devices, meaning that everyone within the business can connect with practical insights remotely - and share the load.

Once everyone is able to work with a data-driven mindset, you will catalyze the success of your business in ways you never thought possible. And when it comes to knowing how to analyze data, this kind of collaborative approach is essential.

16. Data analysis tools

In order to perform high-quality analysis of data, it is fundamental to use tools and software that will ensure the best results. Here we leave you a small summary of four fundamental categories of data analysis tools for your organization.

  • Business Intelligence: BI tools allow you to process significant amounts of data from several sources in any format. Through this, you can not only analyze and monitor your data to extract relevant insights but also create interactive reports and dashboards to visualize your KPIs and use them for your company's good. datapine is an amazing online BI software that is focused on delivering powerful online analysis features that are accessible to beginner and advanced users. In this way, it offers a full-service solution that includes cutting-edge analysis of data, KPI visualization, live dashboards, reporting, and artificial intelligence technologies to predict trends and minimize risk.
  • Statistical analysis: These tools are usually designed for scientists, statisticians, market researchers, and mathematicians, as they allow them to perform complex statistical analyses with methods like regression analysis, predictive analysis, and statistical modeling. A good tool to perform this type of analysis is RStudio, as it offers powerful data modeling and hypothesis testing features that can cover both academic and general data analysis. This tool is one of the favorites in the industry due to its capabilities for data cleaning, data reduction, and advanced analysis with several statistical methods. Another relevant tool to mention is SPSS from IBM. The software offers advanced statistical analysis for users of all skill levels. Thanks to a vast library of machine learning algorithms, text analysis, and a hypothesis testing approach, it can help your company find relevant insights to drive better decisions. SPSS also works as a cloud service that enables you to run it anywhere.
  • SQL Consoles: SQL is a programming language often used to handle structured data in relational databases. Tools like these are popular among data scientists as they are extremely effective in unlocking these databases' value. Undoubtedly, one of the most used SQL software in the market is MySQL Workbench . This tool offers several features such as a visual tool for database modeling and monitoring, complete SQL optimization, administration tools, and visual performance dashboards to keep track of KPIs.
  • Data Visualization: These tools are used to represent your data through charts, graphs, and maps that allow you to find patterns and trends in the data. datapine's already mentioned BI platform also offers a wealth of powerful online data visualization tools with several benefits. Some of them include: delivering compelling data-driven presentations to share with your entire company, the ability to see your data online with any device wherever you are, an interactive dashboard design feature that enables you to showcase your results in an interactive and understandable way, and to perform online self-service reports that can be used simultaneously with several other people to enhance team productivity.
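
To illustrate the kind of query an SQL console runs, here is a small sketch using Python's built-in sqlite3 module (the orders table and its values are hypothetical, and production tools such as MySQL Workbench connect to a database server rather than an in-memory file):

```python
import sqlite3

# In-memory database with a hypothetical orders table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("North", 120.0), ("North", 80.0), ("South", 200.0), ("South", 50.0)],
)

# A typical analytical query: revenue per region, highest first
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
conn.close()
```

The same `GROUP BY` / `ORDER BY` pattern underlies most aggregation work in relational databases, whatever console or client you use to run it.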

17. Refine your process constantly 

Last is a step that might seem obvious to some people, but it can be easily ignored if you think you are done. Once you have extracted the needed results, you should always take a retrospective look at your project and think about what you can improve. As you saw throughout this long list of techniques, data analysis is a complex process that requires constant refinement. For this reason, you should always go one step further and keep improving. 

Quality Criteria For Data Analysis

So far we’ve covered a list of methods and techniques that should help you perform efficient data analysis. But how do you measure the quality and validity of your results? This is done with the help of scientific quality criteria. Here we will go into a more theoretical area that is critical to understanding the fundamentals of statistical analysis in science. However, you should also be aware of these steps in a business context, as they will allow you to assess the quality of your results in the correct way. Let’s dig in. 

  • Internal validity: The results of a survey are internally valid if they measure what they are supposed to measure and thus provide credible results. In other words, internal validity measures the trustworthiness of the results and how they can be affected by factors such as the research design, operational definitions, how the variables are measured, and more. For instance, imagine you are conducting interviews to ask people whether they brush their teeth twice a day. While most of them will answer yes, you may notice that their answers correspond to what is socially acceptable, which is to brush your teeth at least twice a day. In this case, you can’t be 100% sure whether respondents actually brush their teeth twice a day or just say that they do; therefore, the internal validity of this interview is very low. 
  • External validity: Essentially, external validity refers to the extent to which the results of your research can be applied to a broader context. It basically aims to prove that the findings of a study can be applied in the real world. If the research can be applied to other settings, individuals, and times, then the external validity is high. 
  • Reliability : If your research is reliable, it means that it can be reproduced. If your measurement were repeated under the same conditions, it would produce similar results. This means that your measuring instrument consistently produces reliable results. For example, imagine a doctor building a symptoms questionnaire to detect a specific disease in a patient. Then, various other doctors use this questionnaire but end up diagnosing the same patient with a different condition. This means the questionnaire is not reliable in detecting the initial disease. Another important note here is that in order for your research to be reliable, it also needs to be objective. If the results of a study are the same, independent of who assesses them or interprets them, the study can be considered reliable. Let’s see the objectivity criteria in more detail now. 
  • Objectivity: In data science, objectivity means that the researcher needs to stay fully objective during the analysis. The results of a study need to be determined by objective criteria and not by the beliefs, personality, or values of the researcher. Objectivity needs to be ensured when you are gathering the data; for example, when interviewing individuals, the questions need to be asked in a way that doesn't influence the results. Paired with this, objectivity also needs to be considered when interpreting the data. If different researchers reach the same conclusions, then the study is objective. For this last point, you can set predefined criteria for interpreting the results to ensure all researchers follow the same steps. 

The discussed quality criteria cover mostly potential influences in a quantitative context. Analysis in qualitative research has by default additional subjective influences that must be controlled in a different way. Therefore, there are other quality criteria for this kind of research such as credibility, transferability, dependability, and confirmability. You can see each of them more in detail on this resource . 

Data Analysis Limitations & Barriers

Analyzing data is not an easy task. As you’ve seen throughout this post, there are many steps and techniques that you need to apply in order to extract useful information from your research. While a well-performed analysis can bring various benefits to your organization, it doesn’t come without limitations. In this section, we will discuss some of the main barriers you might encounter when conducting an analysis. Let’s see them in more detail. 

  • Lack of clear goals: No matter how good your data or analysis might be, if you don’t have clear goals or a hypothesis, the process might be worthless. While we mentioned some methods that don’t require a predefined hypothesis, it is always better to enter the analytical process with clear guidelines about what you expect to get out of it, especially in a business context in which data is utilized to support important strategic decisions. 
  • Objectivity: Arguably one of the biggest barriers when it comes to data analysis in research is to stay objective. When trying to prove a hypothesis, researchers might find themselves, intentionally or unintentionally, directing the results toward an outcome that they want. To avoid this, always question your assumptions and avoid confusing facts with opinions. You can also show your findings to a research partner or external person to confirm that your results are objective. 
  • Data representation: A fundamental part of the analytical procedure is the way you represent your data. You can use various graphs and charts to represent your findings, but not all of them will work for all purposes. Choosing the wrong visual can not only damage your analysis but can mislead your audience, therefore, it is important to understand when to use each type of data depending on your analytical goals. Our complete guide on the types of graphs and charts lists 20 different visuals with examples of when to use them. 
  • Flawed correlation : Misleading statistics can significantly damage your research. We’ve already pointed out a few interpretation issues previously in the post, but it is an important barrier that we can't avoid addressing here as well. Flawed correlations occur when two variables appear related to each other but they are not. Confusing correlations with causation can lead to a wrong interpretation of results which can lead to building wrong strategies and loss of resources, therefore, it is very important to identify the different interpretation mistakes and avoid them. 
  • Sample size: A very common barrier to a reliable and efficient analysis process is the sample size. In order for the results to be trustworthy, the sample size should be representative of what you are analyzing. For example, imagine you have a company of 1,000 employees and you ask the question “do you like working here?” to 50 employees, of which 48 say yes, which means 96%. Now, imagine you ask the same question to all 1,000 employees and 960 say yes, which also means 96%. Saying that 96% of employees like working at the company when the sample size was only 50 is not a representative or trustworthy conclusion. The significance of the results is far more accurate when surveying a bigger sample size.   
  • Privacy concerns: In some cases, data collection can be subjected to privacy regulations. Businesses gather all kinds of information from their customers from purchasing behaviors to addresses and phone numbers. If this falls into the wrong hands due to a breach, it can affect the security and confidentiality of your clients. To avoid this issue, you need to collect only the data that is needed for your research and, if you are using sensitive facts, make it anonymous so customers are protected. The misuse of customer data can severely damage a business's reputation, so it is important to keep an eye on privacy. 
  • Lack of communication between teams : When it comes to performing data analysis on a business level, it is very likely that each department and team will have different goals and strategies. However, they are all working for the same common goal of helping the business run smoothly and keep growing. When teams are not connected and communicating with each other, it can directly affect the way general strategies are built. To avoid these issues, tools such as data dashboards enable teams to stay connected through data in a visually appealing way. 
  • Innumeracy : Businesses are working with data more and more every day. While there are many BI tools available to perform effective analysis, data literacy is still a constant barrier. Not all employees know how to apply analysis techniques or extract insights from them. To prevent this from happening, you can implement different training opportunities that will prepare every relevant user to deal with data. 
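
The sample-size barrier discussed above can be made concrete with the normal-approximation margin of error for a proportion. This is a rough sketch, not a full significance analysis; the 1.96 factor corresponds to a 95% confidence level, and the proportion is an illustrative figure around 95%:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation 95% margin of error for a sample proportion
    p observed among n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.95  # an observed proportion of "yes" answers
small = margin_of_error(p_hat, 50)    # 50 respondents
large = margin_of_error(p_hat, 1000)  # 1,000 respondents
```

With 50 respondents the estimate carries roughly a ±6 percentage point margin, versus about ±1.4 points with 1,000 respondents, which is exactly why the smaller survey's headline figure is far less trustworthy.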

Key Data Analysis Skills

As you've learned throughout this lengthy guide, analyzing data is a complex task that requires a lot of knowledge and skill. That said, thanks to the rise of self-service tools, the process is far more accessible and agile than it once was. Regardless, there are still some key skills that are valuable when working with data; we list the most important ones below.

  • Critical and statistical thinking: To successfully analyze data you need to be creative and think outside the box. Yes, that might sound like a strange statement considering that data is often tied to facts. However, a great deal of critical thinking is required to uncover connections, come up with a valuable hypothesis, and extract conclusions that go a step beyond the surface. This, of course, needs to be complemented by statistical thinking and an understanding of numbers. 
  • Data cleaning: Anyone who has ever worked with data will tell you that cleaning and preparation account for around 80% of a data analyst's work, so the skill is fundamental. What's more, failing to clean the data adequately can significantly damage the analysis, which can lead to poor decision-making in a business scenario. While there are multiple tools that automate the cleaning process and reduce the possibility of human error, it is still a valuable skill to master. 
  • Data visualization: Visuals make the information easier to understand and analyze, not only for professional users but especially for non-technical ones. Having the necessary skills to not only choose the right chart type but know when to apply it correctly is key. This also means being able to design visually compelling charts that make the data exploration process more efficient. 
  • SQL: The Structured Query Language or SQL is a programming language used to communicate with databases. It is fundamental knowledge as it enables you to update, manipulate, and organize data from relational databases which are the most common databases used by companies. It is fairly easy to learn and one of the most valuable skills when it comes to data analysis. 
  • Communication skills: This is a skill that is especially valuable in a business environment. Being able to clearly communicate analytical outcomes to colleagues is incredibly important, especially when the information you are trying to convey is complex for non-technical people. This applies to in-person communication as well as written format, for example, when generating a dashboard or report. While this might be considered a “soft” skill compared to the other ones we mentioned, it should not be ignored as you most likely will need to share analytical findings with others no matter the context. 

Data Analysis In The Big Data Environment

Big data is invaluable to today’s businesses, and by using different methods for data analysis, it’s possible to view your data in a way that can help you turn insight into positive action.

To inspire your efforts and put the importance of big data into context, here are some insights that you should know:

  • By 2026, the big data industry is expected to be worth approximately $273.4 billion.
  • 94% of enterprises say that analyzing data is important for their growth and digital transformation. 
  • Companies that exploit the full potential of their data can increase their operating margins by 60%.
  • We already covered the benefits of artificial intelligence earlier in this article; its financial impact is expected to grow to $40 billion by 2025.

Data analysis concepts may come in many forms, but fundamentally, any solid methodology will help to make your business more streamlined, cohesive, insightful, and successful than ever before.

Key Takeaways From Data Analysis 

As we reach the end of our data analysis journey, we leave a small summary of the main methods and techniques to perform excellent analysis and grow your business.

17 Essential Types of Data Analysis Methods:

  • Cluster analysis
  • Cohort analysis
  • Regression analysis
  • Factor analysis
  • Neural Networks
  • Data Mining
  • Text analysis
  • Time series analysis
  • Decision trees
  • Conjoint analysis 
  • Correspondence Analysis
  • Multidimensional Scaling 
  • Content analysis 
  • Thematic analysis
  • Narrative analysis 
  • Grounded theory analysis
  • Discourse analysis 

Top 17 Data Analysis Techniques:

  • Collaborate your needs
  • Establish your questions
  • Data democratization
  • Think of data governance 
  • Clean your data
  • Set your KPIs
  • Omit useless data
  • Build a data management roadmap
  • Integrate technology
  • Answer your questions
  • Visualize your data
  • Interpretation of data
  • Consider autonomous technology
  • Build a narrative
  • Share the load
  • Data Analysis tools
  • Refine your process constantly 

We’ve pondered the data analysis definition and drilled down into the practical applications of data-centric analytics, and one thing is clear: by taking measures to arrange your data and make your metrics work for you, it’s possible to transform raw information into action - the kind that will push your business to the next level.

Yes, good data analytics techniques result in enhanced business intelligence (BI). To help you understand this notion in more detail, read our exploration of business intelligence reporting .

And, if you’re ready to perform your own analysis, drill down into your facts and figures while interacting with your data on astonishing visuals, you can try our software for a free, 14-day trial .

Chapter 14: Quantitative Analysis (Descriptive Statistics)

Numeric data collected in a research project can be analyzed quantitatively using statistical tools in two different ways. Descriptive analysis refers to statistically describing, aggregating, and presenting the constructs of interest or associations between these constructs. Inferential analysis refers to the statistical testing of hypotheses (theory testing). In this chapter, we will examine statistical techniques used for descriptive analysis, and the next chapter will examine statistical techniques for inferential analysis. Much of today’s quantitative data analysis is conducted using software programs such as SPSS or SAS. Readers are advised to familiarize themselves with one of these programs for understanding the concepts described in this chapter.

Data Preparation

In research projects, data may be collected from a variety of sources: mail-in surveys, interviews, pretest or posttest experimental data, observational data, and so forth. This data must be converted into a machine-readable, numeric format, such as a spreadsheet or a text file, so that it can be analyzed by computer programs like SPSS or SAS. Data preparation usually involves the following steps.

Data coding. Coding is the process of converting data into numeric format. A codebook should be created to guide the coding process. A codebook is a comprehensive document containing a detailed description of each variable in a research study, the items or measures for that variable, the format of each item (numeric, text, etc.), the response scale for each item (i.e., whether it is measured on a nominal, ordinal, interval, or ratio scale; whether such a scale is a five-point, seven-point, or some other type of scale), and how to code each value into a numeric format. For instance, if we have a measurement item on a seven-point Likert scale with anchors ranging from “strongly disagree” to “strongly agree”, we may code that item as 1 for strongly disagree, 4 for neutral, and 7 for strongly agree, with the intermediate anchors in between. Nominal data such as industry type can be coded in numeric form using a coding scheme such as: 1 for manufacturing, 2 for retailing, 3 for financial, 4 for healthcare, and so forth (of course, such nominal codes identify categories only and support little statistical analysis beyond frequency counts). Ratio scale data such as age, income, or test scores can be coded as entered by the respondent. Sometimes, data may need to be aggregated into a different form than the format used for data collection. For instance, for measuring a construct such as “benefits of computers,” if a survey provided respondents with a checklist of benefits that they could select from (i.e., they could choose as many of those benefits as they wanted), then the total number of checked items can be used as an aggregate measure of benefits. Note that many other forms of data, such as interview transcripts, cannot be converted into a numeric format for statistical analysis. 
Coding is especially important for large complex studies involving many variables and measurement items, where the coding process is conducted by different people, to help the coding team code data in a consistent manner, and also to help others understand and interpret the coded data.
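
A minimal sketch of data coding in Python, assuming hypothetical codebook entries for the seven-point Likert item and the industry coding scheme described above:

```python
# Hypothetical codebook entries; adapt the anchor labels to your own instrument.
LIKERT_7 = {
    "strongly disagree": 1, "disagree": 2, "somewhat disagree": 3,
    "neutral": 4, "somewhat agree": 5, "agree": 6, "strongly agree": 7,
}
INDUSTRY = {"manufacturing": 1, "retailing": 2, "financial": 3, "healthcare": 4}

def code_response(raw, codebook):
    """Convert one raw text response into its numeric code."""
    return codebook[raw.strip().lower()]

responses = ["Strongly Agree", "neutral", "disagree"]
coded = [code_response(r, LIKERT_7) for r in responses]  # [7, 4, 2]
```

Keeping the codebook as a shared, version-controlled mapping like this is one simple way to make sure different coders on a large project apply the same scheme consistently.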

Data entry. Coded data can be entered into a spreadsheet, database, text file, or directly into a statistical program like SPSS. Most statistical programs provide a data editor for entering data. However, these programs store data in their own native format (e.g., SPSS stores data as .sav files), which makes it difficult to share that data with other statistical programs. Hence, it is often better to enter data into a spreadsheet or database, where it can be reorganized as needed, shared across programs, and subsets of data can be extracted for analysis. Smaller data sets with fewer than 65,000 observations and 256 items can be stored in a spreadsheet such as Microsoft Excel, while larger data sets with millions of observations will require a database. Each observation can be entered as one row in the spreadsheet and each measurement item can be represented as one column. The entered data should be frequently checked for accuracy, via occasional spot checks on a set of items or observations, during and after entry. Furthermore, while entering data, the coder should watch out for obvious evidence of bad data, such as the respondent selecting the “strongly agree” response to all items irrespective of content, including reverse-coded items. If so, such data can be entered but should be excluded from subsequent analysis.

Missing values. Missing data is an inevitable part of any empirical data set. Respondents may not answer certain questions if they are ambiguously worded or too sensitive. Such problems should be detected earlier during pretests and corrected before the main data collection process begins. During data entry, some statistical programs automatically treat blank entries as missing values, while others require a specific numeric value such as -1 or 999 to be entered to denote a missing value. During data analysis, the default mode of handling missing values in most software programs is to simply drop the entire observation containing even a single missing value, in a technique called listwise deletion . Such deletion can significantly shrink the sample size and make it extremely difficult to detect small effects. Hence, some software programs allow the option of replacing missing values with an estimated value via a process called imputation . For instance, if the missing value is one item in a multi-item scale, the imputed value may be the average of the respondent’s responses to the remaining items on that scale. If the missing value belongs to a single-item scale, many researchers use the average of other respondents’ responses to that item as the imputed value. Such imputation may be biased if the missing value is of a systematic nature rather than a random nature. Two methods that can produce relatively unbiased estimates for imputation are maximum likelihood procedures and multiple imputation methods, both of which are supported in popular software programs such as SPSS and SAS.
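
The within-respondent imputation described above (filling a missing item with the average of that respondent's remaining items on the scale) can be sketched as:

```python
import statistics

def impute_scale_item(items):
    """Fill None entries in one respondent's multi-item scale with the
    average of that same respondent's remaining (non-missing) items."""
    observed = [v for v in items if v is not None]
    fill = statistics.fmean(observed)
    return [fill if v is None else v for v in items]

# One respondent's answers to a four-item scale, with the second item skipped
answers = [5, None, 4, 6]
completed = impute_scale_item(answers)  # [5, 5.0, 4, 6]
```

As the chapter notes, this simple mean fill is only reasonable when values are missing at random; systematically missing data calls for maximum likelihood or multiple imputation instead.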

Data transformation. Sometimes, it is necessary to transform data values before they can be meaningfully interpreted. For instance, reverse coded items, where items convey the opposite meaning of that of their underlying construct, should be reversed (e.g., in a 1-7 interval scale, 8 minus the observed value will reverse the value) before they can be compared or combined with items that are not reverse coded. Other kinds of transformations may include creating scale measures by adding individual scale items, creating a weighted index from a set of observed measures, and collapsing multiple values into fewer categories (e.g., collapsing incomes into income ranges).
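
Both transformations mentioned above, reverse coding on a 1-7 scale and collapsing values into fewer categories, can be sketched in a few lines; the income cut-offs are purely illustrative:

```python
def reverse_code(value, scale_max=7):
    """Reverse a 1..scale_max item: on a 1-7 scale this is 8 minus the value."""
    return (scale_max + 1) - value

def income_bracket(income):
    """Collapse a raw income figure into a coarser category (illustrative cut-offs)."""
    if income < 25_000:
        return "low"
    if income < 75_000:
        return "middle"
    return "high"

recoded = [reverse_code(v) for v in [1, 2, 7]]  # [7, 6, 1]
brackets = [income_bracket(i) for i in (18_000, 40_000, 90_000)]
```

Reverse-coded items must be transformed like this before being summed or averaged with the rest of the scale, or the combined score will be meaningless.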

Univariate Analysis

Univariate analysis, or analysis of a single variable, refers to a set of statistical techniques that can describe the general properties of one variable. Univariate statistics include: (1) frequency distribution, (2) central tendency, and (3) dispersion. The frequency distribution of a variable is a summary of the frequency (or percentages) of individual values or ranges of values for that variable. For instance, we can measure how many times a sample of respondents attend religious services (as a measure of their “religiosity”) using a categorical scale: never, once per year, several times per year, about once a month, several times per month, several times per week, and an optional category for “did not answer.” If we count the number (or percentage) of observations within each category (except “did not answer” which is really a missing value rather than a category), and display it in the form of a table as shown in Figure 14.1, what we have is a frequency distribution. This distribution can also be depicted in the form of a bar chart, as shown on the right panel of Figure 14.1, with the horizontal axis representing each category of that variable and the vertical axis representing the frequency or percentage of observations within each category.
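A frequency distribution like the one described above can be tabulated in a few lines of Python, treating "did not answer" as a missing value rather than a category (the responses are hypothetical):

```python
from collections import Counter

# Hypothetical responses to the religiosity item from the text
responses = ["never", "once per year", "never", "about once a month",
             "several times per year", "never", "did not answer"]

# "Did not answer" is a missing value, not a category, so exclude it.
valid = [r for r in responses if r != "did not answer"]

freq = Counter(valid)                                     # raw counts
pct = {k: 100 * v / len(valid) for k, v in freq.items()}  # percentages
```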


Figure 14.1. Frequency distribution of religiosity.

With very large samples where observations are independent and random, the frequency distribution tends to follow a plot that looks like a bell-shaped curve (a smoothed bar chart of the frequency distribution) similar to that shown in Figure 14.2, where most observations are clustered toward the center of the range of values, with fewer and fewer observations toward the extreme ends of the range. Such a curve is called a normal distribution.

Central tendency is an estimate of the center of a distribution of values. There are three major estimates of central tendency: mean, median, and mode. The arithmetic mean (often simply called the "mean") is the simple average of all values in a given distribution. Consider a set of eight test scores: 15, 22, 21, 18, 36, 15, 25, 15. The arithmetic mean of these values is (15 + 22 + 21 + 18 + 36 + 15 + 25 + 15)/8 = 20.875. Other types of means include the geometric mean (the nth root of the product of n numbers in a distribution) and the harmonic mean (the reciprocal of the arithmetic mean of the reciprocals of the values in a distribution), but these means are not very popular for statistical analysis of social research data.

The second measure of central tendency, the median , is the middle value within a range of values in a distribution. This is computed by sorting all values in a distribution in increasing order and selecting the middle value. In case there are two middle values (if there is an even number of values in a distribution), the average of the two middle values represents the median. In the above example, the sorted values are: 15, 15, 15, 18, 21, 22, 25, 36. The two middle values are 18 and 21, and hence the median is (18 + 21)/2 = 19.5.

Lastly, the mode is the most frequently occurring value in a distribution of values. In the previous example, the most frequently occurring value is 15, which is the mode of the above set of test scores. Note that any value estimated from a sample, such as the mean, median, mode, or any of the later estimates, is called a statistic .
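All three measures of central tendency, plus the less common geometric and harmonic means, can be computed directly with Python's standard statistics module, using the eight test scores from the example above:

```python
import statistics

scores = [15, 22, 21, 18, 36, 15, 25, 15]

mean = statistics.mean(scores)      # arithmetic mean: 167 / 8 = 20.875
median = statistics.median(scores)  # average of middle values 18 and 21 = 19.5
mode = statistics.mode(scores)      # most frequent value: 15

# The less common means mentioned in the text
gmean = statistics.geometric_mean(scores)  # nth root of the product
hmean = statistics.harmonic_mean(scores)   # reciprocal of mean reciprocal
```

For any distribution of distinct positive values, the harmonic mean is smaller than the geometric mean, which in turn is smaller than the arithmetic mean.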

Dispersion refers to the way values are spread around the central tendency, for example, how tightly or how widely the values are clustered around the mean. Two common measures of dispersion are the range and the standard deviation. The range is the difference between the highest and lowest values in a distribution. The range in our previous example is 36 - 15 = 21.

The range is particularly sensitive to the presence of outliers. For instance, if the highest value in the above distribution were 85 and the other values remained the same, the range would be 85 - 15 = 70. Standard deviation , the second measure of dispersion, corrects for such outliers by using a formula that takes into account how close or how far each value is from the distribution mean:

s = √[ Σ (x_i − x̄)² / (n − 1) ]

Figure 14.2. Normal distribution.
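The range and sample standard deviation for the example test scores can be computed with Python's standard statistics module:

```python
import statistics

scores = [15, 22, 21, 18, 36, 15, 25, 15]

# Range: difference between the highest and lowest values
value_range = max(scores) - min(scores)   # 36 - 15 = 21

# Sample standard deviation (divides by n - 1, as in the formula above)
stdev = statistics.stdev(scores)
```

Note that the single outlier of 36 inflates the range far more than it inflates the standard deviation, which averages across all deviations from the mean.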


Table 14.1. Hypothetical data on age and self-esteem.

The two variables in this dataset are age (x) and self-esteem (y). Age is a ratio-scale variable, while self-esteem is an average score computed from a multi-item self-esteem scale measured using a 7-point Likert scale, ranging from “strongly disagree” to “strongly agree.” The histogram of each variable is shown on the left side of Figure 14.3. The formula for calculating bivariate correlation is:

r = Σ (x_i − x̄)(y_i − ȳ) / [ (n − 1) s_x s_y ]

Figure 14.3. Histogram and correlation plot of age and self-esteem.
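The bivariate correlation formula can be implemented directly in Python. Since the data of Table 14.1 is not reproduced here, the age and self-esteem values below are hypothetical:

```python
import math

def pearson_r(x, y):
    """Bivariate (Pearson) correlation: the sum of products of deviations
    from each mean, divided by the product of the sums of squared
    deviations (equivalently, covariance over the standard deviations)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical age / self-esteem pairs (Table 14.1 is not reproduced here)
age = [21, 25, 30, 35, 40, 45]
esteem = [4.1, 4.5, 4.8, 5.2, 5.9, 6.3]
r = pearson_r(age, esteem)   # strongly positive for these values
```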

After computing bivariate correlation, researchers are often interested in knowing whether the correlation is significant (i.e., a real one) or caused by mere chance. Answering such a question would require testing the following hypothesis:

H₀: r = 0

H₁: r ≠ 0

H₀ is called the null hypothesis , and H₁ is called the alternative hypothesis (sometimes also represented as Hₐ). Although they may seem like two hypotheses, H₀ and H₁ together represent a single hypothesis, since they are direct opposites of each other. We are interested in testing H₁ rather than H₀. Also note that H₁ is a non-directional hypothesis, since it does not specify whether r is greater than or less than zero. A directional hypothesis would be specified as H₀: r ≤ 0; H₁: r > 0 (if we are testing for a positive correlation). Significance testing of a directional hypothesis is done using a one-tailed t-test, while that of a non-directional hypothesis is done using a two-tailed t-test.

In statistical testing, the alternative hypothesis cannot be tested directly. Rather, it is tested indirectly by rejecting the null hypothesis with a certain level of probability. Statistical testing is always probabilistic, because we are never sure whether our inferences, based on sample data, apply to the population, since our sample never equals the population. The probability that a statistical inference is caused by pure chance is called the p-value . The p-value is compared with the significance level (α), which represents the maximum level of risk that we are willing to take that our inference is incorrect. For most statistical analyses, α is set to 0.05. A p-value less than α = 0.05 indicates that we have enough statistical evidence to reject the null hypothesis, and thereby indirectly accept the alternative hypothesis. If p > 0.05, then we do not have adequate statistical evidence to reject the null hypothesis or accept the alternative hypothesis.

The easiest way to test the above hypothesis is to look up critical values of r from statistical tables available in any standard textbook on statistics or on the Internet (most software programs also perform significance testing). The critical value of r depends on our desired significance level (α = 0.05), the degrees of freedom (df), and whether the desired test is one-tailed or two-tailed. The degrees of freedom is the number of values that can vary freely in any calculation of a statistic. In the case of correlation, the df simply equals n − 2, or for the data in Table 14.1, df is 20 − 2 = 18. There are two different statistical tables for one-tailed and two-tailed tests. In the two-tailed table, the critical value of r for α = 0.05 and df = 18 is 0.44. For our computed correlation of 0.79 to be significant, it must be larger than the critical value of 0.44 or less than -0.44. Since our computed value of 0.79 is greater than 0.44, we conclude that there is a significant correlation between age and self-esteem in our data set, or in other words, the odds are less than 5% that this correlation is a chance occurrence. Therefore, we can reject the null hypothesis that r ≤ 0, which is an indirect way of saying that the alternative hypothesis r > 0 is probably correct.

Most research studies involve more than two variables. If there are n variables, then we will have a total of n*(n-1)/2 possible correlations between these n variables. Such correlations are easily computed using a software program like SPSS, rather than manually using the formula for correlation (as we did in Table 14.1), and represented using a correlation matrix, as shown in Table 14.2. A correlation matrix is a matrix that lists the variable names along the first row and the first column, and depicts the bivariate correlation between each pair of variables in the appropriate cell of the matrix. The values along the principal diagonal (from the top left to the bottom right corner) of this matrix are always 1, because any variable is always perfectly correlated with itself. Further, since correlations are non-directional, the correlation between variables V1 and V2 is the same as that between V2 and V1. Hence, the lower triangular matrix (values below the principal diagonal) is a mirror reflection of the upper triangular matrix (values above the principal diagonal), and therefore, we often list only the lower triangular matrix for simplicity. If the correlations involve variables measured using interval scales, then this specific type of correlation is called a Pearson product moment correlation .
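The lower triangular correlation matrix described above can be sketched as follows, reusing the Pearson correlation formula (the three variables and their values are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data for three variables
data = {
    "V1": [1, 2, 3, 4, 5],
    "V2": [2, 4, 5, 4, 5],
    "V3": [5, 4, 3, 2, 1],  # exact reverse of V1, so r(V1, V3) = -1
}
names = list(data)

# Store only the lower triangle (plus the diagonal of 1s), since the
# upper triangle is a mirror image: n*(n-1)/2 distinct correlations.
matrix = {(a, b): (1.0 if a == b else pearson_r(data[a], data[b]))
          for i, a in enumerate(names) for b in names[:i + 1]}
```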

Another useful way of presenting bivariate data is cross-tabulation (often abbreviated to cross-tab, and sometimes called more formally a contingency table). A cross-tab is a table that describes the frequency (or percentage) of all combinations of two or more nominal or categorical variables. As an example, let us assume that we have the following observations of gender and grade for a sample of 20 students, as shown in Table 14.3. Gender is a nominal variable (male/female or M/F), and grade is a categorical variable with three levels (A, B, and C). A simple cross-tabulation of the data may display the joint distribution of gender and grades (i.e., how many students of each gender are in each grade category, as a raw frequency count or as a percentage) in a 2 x 3 matrix. This matrix will help us see if A, B, and C grades are equally distributed across male and female students. The cross-tab data in Table 14.3 shows that the distribution of A grades is biased heavily toward female students: in a sample of 10 male and 10 female students, five female students received the A grade compared to only one male student. In contrast, the distribution of C grades is biased toward male students: three male students received a C grade, compared to only one female student. However, the distribution of B grades was somewhat uniform, with six male students and five female students. The last row and the last column of this table are called marginal totals because they indicate the totals across each category and are displayed along the margins of the table.


Table 14.2. A hypothetical correlation matrix for eight variables.


Table 14.3. Example of cross-tab analysis.

Although we can see a distinct pattern of grade distribution between male and female students in Table 14.3, is this pattern real or "statistically significant"? In other words, do the above frequency counts differ from what may be expected from pure chance? To answer this question, we should compute the expected count of observations in each cell of the 2 x 3 cross-tab matrix. This is done by multiplying the marginal column total and the marginal row total for each cell and dividing it by the total number of observations. For example, for the male/A grade cell, expected count = 5 * 10 / 20 = 2.5. In other words, we were expecting 2.5 male students to receive an A grade, but in reality, only one student received the A grade. Whether this difference between expected and actual counts is significant can be tested using a chi-square test . The chi-square statistic is computed by summing, across all cells, the squared difference between the observed and expected counts divided by the expected count. We can then compare this number to the critical value associated with a desired probability level (p < 0.05) and the degrees of freedom, which is simply (m-1)*(n-1), where m and n are the number of rows and columns respectively. In this example, df = (2 − 1) * (3 − 1) = 2. From standard chi-square tables in any statistics book, the critical chi-square value for p = 0.05 and df = 2 is 5.99. The computed chi-square value, based on our observed data, is 1.00, which is less than the critical value. Hence, we must conclude that the observed grade pattern is not statistically different from the pattern that can be expected by pure chance.
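The expected-count and chi-square computations can be sketched in Python. The observed counts below are hypothetical, not the ones in Table 14.3:

```python
# Hypothetical gender x grade cross-tab counts
observed = {
    "male":   {"A": 2, "B": 5, "C": 3},
    "female": {"A": 6, "B": 3, "C": 1},
}

rows = list(observed)
cols = list(observed["male"])
total = sum(sum(r.values()) for r in observed.values())            # 20
row_tot = {r: sum(observed[r].values()) for r in rows}
col_tot = {c: sum(observed[r][c] for r in rows) for c in cols}

# expected count = (row marginal total * column marginal total) / grand total
expected = {r: {c: row_tot[r] * col_tot[c] / total for c in cols} for r in rows}

# chi-square = sum over cells of (observed - expected)^2 / expected
chi_sq = sum((observed[r][c] - expected[r][c]) ** 2 / expected[r][c]
             for r in rows for c in cols)

df = (len(rows) - 1) * (len(cols) - 1)   # (2-1) * (3-1) = 2
```

The resulting chi-square value would then be compared against the tabulated critical value for the chosen significance level and df, exactly as described in the text.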

  • Social Science Research: Principles, Methods, and Practices. Authored by : Anol Bhattacherjee. Provided by : University of South Florida. Located at : http://scholarcommons.usf.edu/oa_textbooks/3/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

PESTLE Analysis


Descriptive Analysis: How-To, Types, Examples

Last Updated: Mar 29, 2024 by Thomas Bush Filed Under: Business

From diagnostic to predictive, there are many different types of data analysis . Perhaps the most straightforward of them is descriptive analysis, which seeks to describe or summarize past and present data, helping to create accessible data insights. In this short guide, we’ll review the basics of descriptive analysis, including what exactly it is, what benefits it has, how to do it, as well as some types and examples.

What Is Descriptive Analysis?

Descriptive analysis, also known as descriptive analytics or descriptive statistics, is the process of using statistical techniques to describe or summarize a set of data. As one of the major types of data analysis, descriptive analysis is popular for its ability to generate accessible insights from otherwise uninterpreted data.

Unlike other types of data analysis, descriptive analysis does not attempt to make predictions about the future. Instead, it draws insights solely from past data, manipulating it in ways that make it more meaningful.

Benefits of Descriptive Analysis

Descriptive analysis is all about trying to describe or summarize data. Although it doesn’t make predictions about the future, it can still be extremely valuable in business environments . This is chiefly because descriptive analysis makes it easier to consume data, which can make it easier for analysts to act on.

Another benefit of descriptive analysis is that it can help to filter out less meaningful data. This is because the statistical techniques used within this type of analysis usually focus on the patterns in data, and not the outliers.

Types of Descriptive Analysis

According to CampusLabs.com , descriptive analysis can be categorized as one of four types. They are measures of frequency, central tendency, dispersion or variation, and position.

Measures of Frequency

In descriptive analysis, it’s essential to know how frequently a certain event or response occurs. This is the purpose of measures of frequency, like a count or percent. For example, consider a survey where 1,000 participants are asked about their favorite ice cream flavor. A list of 1,000 responses would be difficult to consume, but the data can be made much more accessible by measuring how many times a certain flavor was selected.

Measures of Central Tendency

In descriptive analysis, it’s also worth knowing the central (or average) event or response. Common measures of central tendency include the three averages — mean, median, and mode. As an example, consider a survey in which the height of 1,000 people is measured. In this case, the mean average would be a very helpful descriptive metric.

Measures of Dispersion

Sometimes, it may be worth knowing how data is distributed across a range. To illustrate this, consider the average height in a sample of two people. If both individuals are six feet tall, the average height is six feet. However, if one individual is five feet tall and the other is seven feet tall, the average height is still six feet. In order to measure this kind of distribution, measures of dispersion like range or standard deviation can be employed.

Measures of Position

Last of all, descriptive analysis can involve identifying the position of one event or response in relation to others. This is where measures like percentiles and quartiles can be used.
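Measures of position such as quartiles and percentile ranks can be computed with Python's standard statistics module (the scores are hypothetical):

```python
import statistics

# Hypothetical test scores
scores = [15, 15, 15, 18, 21, 22, 25, 36]

# Quartiles: the three cut points that split the data into four groups
q1, q2, q3 = statistics.quantiles(scores, n=4)

def percentile_rank(value, data):
    """Percent of observations strictly below the given value."""
    return 100 * sum(1 for d in data if d < value) / len(data)

rank_of_25 = percentile_rank(25, scores)
```

Note that the second quartile is simply the median, tying measures of position back to measures of central tendency.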

How to Do Descriptive Analysis

Like many types of data analysis, descriptive analysis can be quite open-ended. In other words, it’s up to you what you want to look for in your analysis. With that said, the process of descriptive analysis usually consists of the same few steps.

  • Collect data

The first step in any type of data analysis is to collect the data. This can be done in a variety of ways, but surveys and good old-fashioned measurements are often used.

  • Clean data

Another important step in descriptive and other types of data analysis is to clean the data. This is because data may be formatted in inaccessible ways, which will make it difficult to manipulate with statistics. Cleaning data may involve changing its textual format, categorizing it, and/or removing outliers.

  • Apply methods

Finally, descriptive analysis involves applying the chosen statistical methods so as to draw the desired conclusions. What methods you choose will depend on the data you are dealing with and what you are looking to determine. If in doubt, review the four types of descriptive analysis methods explained above.

When to Do Descriptive Analysis

Descriptive analysis is often used when reviewing any past or present data. This is because raw data is difficult to consume and interpret, while the metrics offered by descriptive analysis are much more focused.

Descriptive analysis can also be conducted as the precursor to diagnostic or predictive analysis , providing insights into what has happened in the past before attempting to explain why it happened or predicting what will happen in the future.

Descriptive Analysis Example

As an example of descriptive analysis, consider an insurance company analyzing its customer base.

The insurance company may know certain traits about its customers, such as their gender, age, and nationality. To gain a better profile of their customers, the insurance company can apply descriptive analysis.

Measures of frequency can be used to identify how many customers are under a certain age; measures of central tendency can be used to identify who most of their customers are; measures of dispersion can be used to identify the variation in, for example, the age of their customers; finally, measures of position can be used to compare segments of customers based on specific traits.

Final Thoughts

Descriptive analysis is a popular type of data analysis. It’s often conducted before diagnostic or predictive analysis, as it simply aims to describe and summarize past data.

To do so, descriptive analysis uses a variety of statistical techniques, including measures of frequency, central tendency, dispersion, and position. How exactly you conduct descriptive analysis will depend on what you are looking to find out, but the steps usually involve collecting, cleaning, and finally analyzing data.

In any case, this business analysis process is invaluable when working with data.

Image by  Pexels

8 Types of Data Analysis


Data analysis is an aspect of  data science and data analytics that is all about analyzing data for different kinds of purposes. The data analysis process involves inspecting, cleaning, transforming and modeling data to draw useful insights from it.

What Are the Different Types of Data Analysis?

  • Descriptive analysis
  • Diagnostic analysis
  • Exploratory analysis
  • Inferential analysis
  • Predictive analysis
  • Causal analysis
  • Mechanistic analysis
  • Prescriptive analysis

With its multiple facets, methodologies and techniques, data analysis is used in a variety of fields, including business, science and social science, among others. As businesses thrive under the influence of technological advancements in data analytics, data analysis plays a huge role in  decision-making , providing a better, faster and more efficacious system that minimizes risks and reduces  human biases .

That said, there are different kinds of data analysis catered with different goals. We’ll examine each one below.

Two Camps of Data Analysis

Data analysis can be divided into two camps, according to the book  R for Data Science :

  • Hypothesis Generation — This involves looking deeply at the data and combining your domain knowledge to generate hypotheses about why the data behaves the way it does.
  • Hypothesis Confirmation — This involves using a precise mathematical model to generate falsifiable predictions with statistical sophistication to confirm your prior hypotheses.

Types of Data Analysis

Data analysis can be separated and organized into types, arranged in an increasing order of complexity.

1. Descriptive Analysis

The goal of descriptive analysis is to describe or summarize a set of data. Here’s what you need to know:

  • Descriptive analysis is the very first analysis performed in the data analysis process.
  • It generates simple summaries about samples and measurements.
  • It involves common, descriptive statistics like measures of central tendency, variability, frequency and position.

Descriptive Analysis Example

Take the  Covid-19 statistics page on Google, for example. The line graph is a pure summary of the cases/deaths, a presentation and description of the population of a particular country infected by the virus.

Descriptive analysis is the first step in analysis where you summarize and describe the data you have using descriptive statistics, and the result is a simple presentation of your data.

More on Data Analysis: Data Analyst vs. Data Scientist: Similarities and Differences Explained

2. Diagnostic Analysis 

Diagnostic analysis seeks to answer the question “Why did this happen?” by taking a more in-depth look at data to uncover subtle patterns. Here’s what you need to know:

  • Diagnostic analysis typically comes after descriptive analysis, taking initial findings and investigating why certain patterns in data happen. 
  • Diagnostic analysis may involve analyzing other related data sources, including past data, to reveal more insights into current data trends.  
  • Diagnostic analysis is ideal for further exploring patterns in data to explain anomalies.  

Diagnostic Analysis Example

A footwear store wants to review its website traffic levels over the previous 12 months. Upon compiling and assessing the data, the company’s marketing team finds that June experienced above-average levels of traffic while July and August witnessed slightly lower levels of traffic. 

To find out why this difference occurred, the marketing team takes a deeper look. Team members break down the data to focus on specific categories of footwear. In the month of June, they discovered that pages featuring sandals and other beach-related footwear received a high number of views while these numbers dropped in July and August. 

Marketers may also review other factors like seasonal changes and company sales events to see if other variables could have contributed to this trend.   

3. Exploratory Analysis (EDA)

Exploratory analysis involves examining or exploring data and finding relationships between variables that were previously unknown. Here’s what you need to know:

  • EDA helps you discover relationships between measures in your data, but these relationships are not evidence of causation, as denoted by the phrase, “ Correlation doesn’t imply causation .”
  • It’s useful for discovering new connections and forming hypotheses. It drives design planning and data collection.

Exploratory Analysis Example

Climate change is an increasingly important topic as the global temperature has gradually risen over the years. One example of exploratory data analysis on climate change involves taking the rise in temperature from 1950 to 2020 alongside the growth of human activities and industrialization over the same period, and looking for relationships in the data. For example, you may examine how the number of factories, cars on the road, and airplane flights correlates with the rise in temperature.

Exploratory analysis explores data to find relationships between measures without identifying the cause. It’s most useful when formulating hypotheses.

4. Inferential Analysis

Inferential analysis involves using a small sample of data to infer information about a larger population of data.

The goal of statistical modeling itself is all about using a small amount of information to extrapolate and generalize information to a larger group. Here’s what you need to know:

  • Inferential analysis involves using estimated data that is representative of a population and gives a measure of uncertainty or standard deviation to your estimation.
  • The  accuracy of inference depends heavily on your sampling scheme. If the sample isn’t representative of the population, the generalization will be inaccurate. (This is a matter of sampling bias; the  central limit theorem , by contrast, describes how the distribution of sample means approaches a normal distribution as the sample size grows, which underpins many inferential methods.)

Inferential Analysis Example

The idea of drawing an inference about the population at large from a smaller sample is intuitive. Many statistics you see in the media and on the internet are inferential: predictions about an event based on a small sample. For example, a psychological study on the benefits of sleep might involve a total of 500 people. When researchers followed up with the participants, those who slept seven to nine hours reported better overall attention spans and well-being, while those who slept less or more than that range suffered from reduced attention spans and energy. This study of 500 people covers only a tiny portion of the roughly 7 billion people in the world, and its conclusion is thus an inference about the larger population.

Inferential analysis extrapolates and generalizes the information of the larger group with a smaller sample to generate analysis and predictions.

5. Predictive Analysis

Predictive analysis involves using historical or current data to find patterns and make predictions about the future. Here’s what you need to know:

  • The accuracy of the predictions depends on the input variables.
  • Accuracy also depends on the types of models. A linear model might work well in some cases, and in other cases it might not.
  • Using a variable to predict another one doesn’t denote a causal relationship.

Predictive Analysis Example

The 2020 US election is a popular topic and many  prediction models are built to predict the winning candidate. FiveThirtyEight did this to forecast the 2016 and 2020 elections. Prediction analysis for an election would require input variables such as historical polling data, trends and current polling data in order to return a good prediction. Something as large as an election wouldn’t just be using a linear model, but a complex model with certain tunings to best serve its purpose.

Predictive analysis takes data from the past and present to make predictions about the future.

More on Data: Explaining the Empirical for Normal Distribution

6. Causal Analysis

Causal analysis looks at the cause and effect of relationships between variables and is focused on finding the cause of a correlation. Here’s what you need to know:

  • To find the cause, you have to question whether the observed correlations driving your conclusion are valid. Just looking at the surface data won’t help you discover the hidden mechanisms underlying the correlations.
  • Causal analysis is applied in randomized studies focused on identifying causation.
  • Causal analysis is the gold standard in data analysis and scientific studies where the cause of phenomenon is to be extracted and singled out, like separating wheat from chaff.
  • Good data is hard to find and requires expensive research and studies. These studies are analyzed in aggregate (multiple groups), and the observed relationships are just average effects (mean) of the whole population. This means the results might not apply to everyone.

Causal Analysis Example  

Say you want to test out whether a new drug improves human strength and focus. To do that, you perform randomized control trials for the drug to test its effect. You compare the sample of candidates for your new drug against the candidates receiving a mock control drug through a few tests focused on strength and overall focus and attention. This will allow you to observe how the drug affects the outcome.

Causal analysis is about finding out the causal relationship between variables, and examining how a change in one variable affects another.

7. Mechanistic Analysis

Mechanistic analysis is used to understand exact changes in variables that lead to other changes in other variables. Here’s what you need to know:

  • It’s applied in the physical and engineering sciences, in situations that require high precision and leave little room for error, where the only noise in the data is measurement error.
  • It’s designed to understand a biological or behavioral process, the pathophysiology of a disease or the mechanism of action of an intervention. 

Mechanistic Analysis Example

Many graduate-level research and complex topics are suitable examples, but to put it in simple terms, let’s say an experiment is done to simulate safe and effective nuclear fusion to power the world. A mechanistic analysis of the study would entail a precise balance of controlling and manipulating variables with highly accurate measures of both variables and the desired outcomes. It’s this intricate and meticulous modus operandi toward these big topics that allows for scientific breakthroughs and advancement of society.

Mechanistic analysis is in some ways a predictive analysis, but modified to tackle studies that require high precision and meticulous methodologies for physical or engineering science .

8. Prescriptive Analysis 

Prescriptive analysis compiles insights from other previous data analyses and determines actions that teams or companies can take to prepare for predicted trends. Here’s what you need to know: 

  • Prescriptive analysis may come right after predictive analysis, but it may involve combining many different data analyses. 
  • Companies need advanced technology and plenty of resources to conduct prescriptive analysis. AI systems that process data and adjust automated tasks are an example of the technology required to perform prescriptive analysis.  

Prescriptive Analysis Example

Prescriptive analysis is pervasive in everyday life, driving the curated content users consume on social media. On platforms like TikTok and Instagram, algorithms can apply prescriptive analysis to review past content a user has engaged with and the kinds of behaviors they exhibited with specific posts. Based on these factors, an algorithm seeks out similar content that is likely to elicit the same response and recommends it on a user’s personal feed. 

When to Use the Different Types of Data Analysis 

  • Descriptive analysis summarizes the data at hand and presents your data in a comprehensible way.
  • Diagnostic analysis takes a more detailed look at data to reveal why certain patterns occur, making it a good method for explaining anomalies. 
  • Exploratory data analysis helps you discover correlations and relationships between variables in your data.
  • Inferential analysis is for generalizing the larger population with a smaller sample size of data.
  • Predictive analysis helps you make predictions about the future with data.
  • Causal analysis emphasizes finding the cause of a correlation between variables.
  • Mechanistic analysis is for measuring the exact changes in variables that lead to changes in other variables.
  • Prescriptive analysis combines insights from different data analyses to develop a course of action teams and companies can take to capitalize on predicted outcomes. 

A few important tips to remember about data analysis include:

  • Correlation doesn’t imply causation.
  • EDA helps discover new connections and form hypotheses.
  • Accuracy of inference depends on the sampling scheme.
  • A good prediction depends on the right input variables.
  • A simple linear model with enough data usually does the trick.
  • Using a variable to predict another doesn’t denote causal relationships.
  • Good data is hard to find, and to produce it requires expensive research.
  • Study results are reported in aggregate as average effects and might not apply to every individual.


Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of research in data analysis: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is summarization and categorization, which together reduce the data and help find patterns and themes for easy identification and linking. The third is the analysis itself, which researchers carry out in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “data analysis and interpretation is a process representing the application of deductive and inductive logic to research.”

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when initiating the analysis. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data has the quality of describing things once a specific value is assigned to it. For analysis, you need to organize these values and process and present them in a given context to make them useful. Data can take different forms; here are the primary data types.

  • Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data . Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews , qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data . This type of data can be categorized, grouped, measured, calculated, or ranked. Example: responses to questions about age, rank, cost, length, weight, scores, etc. all come under this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. The Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups, where an item cannot belong to more than one group. Example: a survey respondent’s living style, marital status, smoking habit, or drinking habit is categorical data. A chi-square test is a standard method used to analyze this data.
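As a sketch of the chi-square method mentioned for categorical data, the test statistic for a small contingency table can be computed by hand. The counts below are invented; in practice a library routine such as scipy.stats.chi2_contingency would also supply the p-value and degrees of freedom.

```python
# Pearson chi-square statistic for a 2x2 contingency table of observed
# counts (e.g. smoking habit vs. marital status). Counts are invented.

def chi_square_statistic(table):
    """Return the Pearson chi-square statistic for a table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[30, 10],   # smokers: married / single
            [20, 40]]   # non-smokers: married / single
print(round(chi_square_statistic(observed), 2))  # 16.67
```

A large statistic relative to the chi-square distribution for the table’s degrees of freedom suggests the two categorical variables are associated.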


Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is an involved process; hence it is typically used for exploratory research and data analysis .

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers read the available data and look for repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find that “food” and “hunger” are the most commonly used words and will highlight them for further analysis.
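A few lines of Python sketch this word-frequency pass; the survey responses and stopword list are made-up illustrations.

```python
# Word-frequency pass over open-ended survey responses, as in the
# word-based technique described above. Responses are invented examples.
from collections import Counter
import re

responses = [
    "Food prices and hunger are the biggest problems",
    "Hunger affects children; food aid is scarce",
    "Clean water and food security matter most",
]

words = re.findall(r"[a-z]+", " ".join(responses).lower())
stopwords = {"and", "are", "the", "is", "most", "a"}
counts = Counter(w for w in words if w not in stopwords)
print(counts.most_common(2))  # [('food', 3), ('hunger', 2)]
```

In a real study the stopword list and tokenization would be tuned to the language and domain of the responses.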


The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is also one of the highly recommended  text analysis  methods for identifying patterns in qualitative data. Compare and contrast is the most widely used method under this technique, differentiating how one piece of text is similar to or different from another.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types .

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.


There are several techniques to analyze the data in qualitative research, but here are some commonly used methods,

  • Content Analysis:  It is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes physical items. When and where to use this method depends on the research questions.
  • Narrative Analysis: This method is used to analyze content gathered from various sources, such as personal interviews, field observation, and  surveys . Most of the time, the stories or opinions people share are examined for answers to the research questions.
  • Discourse Analysis:  Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this particular method considers the social context within which the communication between researcher and respondent takes place. In addition, discourse analysis weighs the lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory:  When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. It is applied to study data about a host of similar cases occurring in different settings. Researchers using this method may alter their explanations or produce new ones until they arrive at a conclusion.


Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis, so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire.
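The completeness stage above might be sketched like this; the required field names are illustrative assumptions.

```python
# Completeness check from Phase I: flag respondents who skipped required
# questions. The field names below are illustrative assumptions.

REQUIRED_FIELDS = ["age", "city", "satisfaction"]

def incomplete_responses(responses):
    """Return ids of respondents who left any required question blank."""
    return [r["id"] for r in responses
            if any(r.get(field) in (None, "") for field in REQUIRED_FIELDS)]

responses = [
    {"id": 1, "age": 34, "city": "Pune", "satisfaction": 4},
    {"id": 2, "age": None, "city": "Delhi", "satisfaction": 5},  # skipped age
    {"id": 3, "age": 28, "city": "", "satisfaction": 3},         # skipped city
]

print(incomplete_responses(responses))  # [2, 3]
```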

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or skip them accidentally. Data editing is a process wherein the researchers confirm that the provided data is free of such errors. They need to conduct the necessary consistency and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses . If a survey is completed with a sample size of 1,000, the researcher may create age brackets to distinguish respondents by age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile.
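The age-bracket coding described above can be sketched as follows; the bracket boundaries are illustrative assumptions.

```python
# Data-coding sketch: assign each respondent's age to a bracket so the
# analysis works with a few buckets instead of 1,000 raw values.
# Bracket boundaries are illustrative assumptions.

def age_bracket(age):
    if age < 25:
        return "18-24"
    elif age < 45:
        return "25-44"
    elif age < 65:
        return "45-64"
    return "65+"

ages = [19, 33, 52, 70, 41]
print([age_bracket(a) for a in ages])
# ['18-24', '25-44', '45-64', '65+', '25-44']
```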


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers can use different research and data analysis methods to derive meaningful insights. Statistical analysis is the most favored approach to analyzing numerical data. Here, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. The method is classified into two groups: ‘descriptive statistics,’ used to describe data, and ‘inferential statistics,’ which help in comparing data.

Descriptive statistics

This method is used to describe the basic features of versatile types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not go beyond describing the data; any conclusions drawn rest on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to demonstrate distribution by various points.
  • Researchers use this method when they want to showcase the most commonly or averagely indicated response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • Range = high point minus low point.
  • Standard deviation reflects the average difference between the observed scores and the mean; variance is the standard deviation squared.
  • It is used to identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is and how strongly that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
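The four families of measures above can be computed directly with Python’s standard-library statistics module; the score list is a made-up example.

```python
# The descriptive measures above, computed with Python's standard library.
# The score list is a made-up example.
import statistics

scores = [4, 8, 6, 5, 3, 8, 9, 5, 8]

# Measures of frequency
print(scores.count(8))                     # 3: how often the score 8 occurs

# Measures of central tendency
print(round(statistics.mean(scores), 2))   # 6.22
print(statistics.median(scores))           # 6
print(statistics.mode(scores))             # 8

# Measures of dispersion
print(max(scores) - min(scores))           # range: 6
print(round(statistics.stdev(scores), 2))  # sample standard deviation: 2.11

# Measures of position
print(statistics.quantiles(scores, n=4))   # quartile cut points: [4.5, 6.0, 8.0]
```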

For quantitative research, descriptive analysis often gives absolute numbers, but on its own it is never sufficient to demonstrate the rationale behind those numbers. Nevertheless, it is necessary to think of the method best suiting your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate students’ average scores in a school. It is better to rely on descriptive statistics when the researchers intend to keep the research or outcome limited to the provided  sample  without generalizing it: for example, when comparing the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample that represents it. For example, you can ask some 100-odd audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected  sample  to reason that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and demonstrates something about the population parameter.
  • Hypothesis test: It’s about sampling research data to answer the survey research questions. For example, researchers might want to know whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.
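Parameter estimation from the movie-theater example can be sketched as a normal-approximation 95% confidence interval for the proportion of all viewers who like the film; the counts are invented.

```python
# Estimating a population parameter from a sample (movie-theater example):
# 82 of 100 sampled viewers liked the film. Build a normal-approximation
# 95% confidence interval for the true proportion. Counts are invented.
import math

liked, n = 82, 100
p_hat = liked / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion
z = 1.96                                  # z value for 95% confidence
low, high = p_hat - z * se, p_hat + z * se
print(f"{low:.2f} to {high:.2f}")         # 0.74 to 0.90
```

The interval quantifies how far the sample proportion might plausibly sit from the population value, which is exactly the generalization step inferential statistics makes.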

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental research or quasi-experimental research wherein the researchers are interested to understand the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables,  cross-tabulation  is used to analyze the relationship between multiple variables.  Suppose provided data has age and gender categories presented in rows and columns. A two-dimensional cross-tabulation helps for seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: For understanding the strong relationship between two variables, researchers do not look beyond the primary and commonly used regression analysis method, which is also a type of predictive analysis used. In this method, you have an essential factor called the dependent variable. You also have multiple independent variables in regression analysis. You undertake efforts to find out the impact of independent variables on the dependent variable. The values of both independent and dependent variables are assumed as being ascertained in an error-free random manner.
  • Frequency tables: This procedure summarizes how often each value or category occurs in the data, making it easy to spot the most and least common responses before applying further tests.
  • Analysis of variance: The statistical procedure is used for testing the degree to which two or more vary or differ in an experiment. A considerable degree of variation means research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze and manipulate the data , and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of the analysis helps in designing the survey questionnaire, selecting data collection methods , and choosing samples.


  • The primary aim of data research and analysis is to derive insights that are unbiased. Any mistake in collecting data, selecting an analysis method, or choosing an  audience  sample, or approaching the data with a biased mind, will lead to a biased inference.
  • No degree of sophistication in research data analysis can rectify poorly defined objective outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find a way to deal with everyday challenges like outliers, missing data, data altering, data mining , or developing graphical representations.
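As a concrete illustration of the regression analysis listed among the quantitative methods above, here is a minimal ordinary-least-squares fit using the closed-form slope and intercept formulas; the hours-studied and test-score data are invented.

```python
# Simple linear regression (ordinary least squares) fit by the closed-form
# slope/intercept formulas. The hours/scores data is invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# independent variable: hours studied; dependent variable: test score
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 68]
slope, intercept = fit_line(hours, scores)
print(round(slope, 1), round(intercept, 1))  # 4.1 47.7
```

Remember the caution from earlier: using one variable to predict another does not by itself establish a causal relationship.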

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in this hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.


Enago Academy

Bridging the Gap: Overcome these 7 flaws in descriptive research design


Descriptive research design is a powerful tool used by scientists and researchers to gather information about a particular group or phenomenon. This type of research provides a detailed and accurate picture of the characteristics and behaviors of a particular population or subject. By observing and collecting data on a given topic, descriptive research helps researchers gain a deeper understanding of a specific issue and provides valuable insights that can inform future studies.

In this blog, we will explore the definition, characteristics, and common flaws in descriptive research design, and provide tips on how to avoid these pitfalls to produce high-quality results. Whether you are a seasoned researcher or a student just starting, understanding the fundamentals of descriptive research design is essential to conducting successful scientific studies.

Table of Contents

What Is Descriptive Research Design?

The descriptive research design involves observing and collecting data on a given topic without attempting to infer cause-and-effect relationships. The goal of descriptive research is to provide a comprehensive and accurate picture of the population or phenomenon being studied and to describe the relationships, patterns, and trends that exist within the data.

Descriptive research methods can include surveys, observational studies , and case studies, and the data collected can be qualitative or quantitative . The findings from descriptive research provide valuable insights and inform future research, but do not establish cause-and-effect relationships.

Importance of Descriptive Research in Scientific Studies

1. Understanding of a Population or Phenomenon

Descriptive research provides a comprehensive picture of the characteristics and behaviors of a particular population or phenomenon, allowing researchers to gain a deeper understanding of the topic.

2. Baseline Information

The information gathered through descriptive research can serve as a baseline for future research and provide a foundation for further studies.

3. Informative Data

Descriptive research can provide valuable information and insights into a particular topic, which can inform future research, policy decisions, and programs.

4. Sampling Validation

Descriptive research can be used to validate sampling methods and to help researchers determine the best approach for their study.

5. Cost Effective

Descriptive research is often less expensive and less time-consuming than other research methods , making it a cost-effective way to gather information about a particular population or phenomenon.

6. Easy to Replicate

Descriptive research is straightforward to replicate, making it a reliable way to gather and compare information from multiple sources.

Key Characteristics of Descriptive Research Design

1. Purpose

The primary purpose of descriptive research is to describe the characteristics, behaviors, and attributes of a particular population or phenomenon.

2. Participants and Sampling

Descriptive research studies a particular population or sample that is representative of the larger population being studied. Furthermore, sampling methods can include convenience, stratified, or random sampling.

3. Data Collection Techniques

Descriptive research typically involves the collection of both qualitative and quantitative data through methods such as surveys, observational studies, case studies, or focus groups.

4. Data Analysis

Descriptive research data is analyzed to identify patterns, relationships, and trends within the data. Statistical techniques , such as frequency distributions and descriptive statistics, are commonly used to summarize and describe the data.
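A minimal sketch of the frequency-distribution step, run on an invented categorical variable:

```python
# Frequency distribution over a made-up categorical variable, the kind
# of summary a descriptive design typically reports.
from collections import Counter

marital_status = ["single", "married", "married", "single", "divorced",
                  "married", "single", "married"]
freq = Counter(marital_status)
print(freq.most_common())
# [('married', 4), ('single', 3), ('divorced', 1)]
```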

5. Focus on Description

Descriptive research is focused on describing and summarizing the characteristics of a particular population or phenomenon. It does not make causal inferences.

6. Non-Experimental

Descriptive research is non-experimental, meaning that the researcher does not manipulate variables or control conditions. The researcher simply observes and collects data on the population or phenomenon being studied.

When Can a Researcher Conduct Descriptive Research?

A researcher can conduct descriptive research in the following situations:

  • To better understand a particular population or phenomenon
  • To describe the relationships between variables
  • To describe patterns and trends
  • To validate sampling methods and determine the best approach for a study
  • To compare data from multiple sources.

Types of Descriptive Research Design

1. Survey Research

Surveys are a type of descriptive research that involves collecting data through self-administered or interviewer-administered questionnaires. Additionally, they can be administered in-person, by mail, or online, and can collect both qualitative and quantitative data.

2. Observational Research

Observational research involves observing and collecting data on a particular population or phenomenon without manipulating variables or controlling conditions. It can be conducted in naturalistic settings or controlled laboratory settings.

3. Case Study Research

Case study research is a type of descriptive research that focuses on a single individual, group, or event. It involves collecting detailed information on the subject through a variety of methods, including interviews, observations, and examination of documents.

4. Focus Group Research

Focus group research involves bringing together a small group of people to discuss a particular topic or product. Furthermore, the group is usually moderated by a researcher and the discussion is recorded for later analysis.

5. Ethnographic Research

Ethnographic research involves conducting detailed observations of a particular culture or community. It is often used to gain a deep understanding of the beliefs, behaviors, and practices of a particular group.

Advantages of Descriptive Research Design

1. Provides a Comprehensive Understanding

Descriptive research provides a comprehensive picture of the characteristics, behaviors, and attributes of a particular population or phenomenon, which can be useful in informing future research and policy decisions.

2. Non-invasive

Descriptive research is non-invasive and does not manipulate variables or control conditions, making it a suitable method for sensitive or ethical concerns.

3. Flexibility

Descriptive research allows for a wide range of data collection methods , including surveys, observational studies, case studies, and focus groups, making it a flexible and versatile research method.

4. Cost-effective

Descriptive research is often less expensive and less time-consuming than other research methods, making it a cost-effective option for many researchers.

5. Easy to Replicate

Descriptive research is easy to replicate, making it a reliable way to gather and compare information from multiple sources.

6. Informs Future Research

The insights gained from descriptive research can inform future research, policy decisions, and programs.

Disadvantages of Descriptive Research Design

1. Limited Scope

Descriptive research only provides a snapshot of the current situation and cannot establish cause-and-effect relationships.

2. Dependence on Existing Data

Descriptive research relies on existing data, which may not always be comprehensive or accurate.

3. Lack of Control

Researchers have no control over the variables in descriptive research, which can limit the conclusions that can be drawn.

4. Researcher Bias

The researcher’s own biases and preconceptions can influence the interpretation of the data.

5. Lack of Generalizability

Descriptive research findings may not be applicable to other populations or situations.

6. Lack of Depth

Descriptive research provides a surface-level understanding of a phenomenon, rather than a deep understanding.

7. Time-consuming

Descriptive research often requires a large amount of data collection and analysis, which can be time-consuming and resource-intensive.

7 Ways to Avoid Common Flaws While Designing Descriptive Research


1. Clearly define the research question

A clearly defined research question is the foundation of any research study, and it is important to ensure that the question is both specific and relevant to the topic being studied.

2. Choose the appropriate research design

Choosing the appropriate research design for a study is crucial to the success of the study. Moreover, researchers should choose a design that best fits the research question and the type of data needed to answer it.

3. Select a representative sample

Selecting a representative sample is important to ensure that the findings of the study are generalizable to the population being studied. Researchers should use a sampling method that provides a random and representative sample of the population.

4. Use valid and reliable data collection methods

Using valid and reliable data collection methods is important to ensure that the data collected is accurate and can be used to answer the research question. Researchers should choose methods that are appropriate for the study and that can be administered consistently and systematically.

5. Minimize bias

Bias can significantly impact the validity and reliability of research findings.  Furthermore, it is important to minimize bias in all aspects of the study, from the selection of participants to the analysis of data.

6. Ensure adequate sample size

An adequate sample size is important to ensure that the results of the study are statistically significant and can be generalized to the population being studied.

7. Use appropriate data analysis techniques

The appropriate data analysis technique depends on the type of data collected and the research question being asked. Researchers should choose techniques that are appropriate for the data and the question being asked.

Have you worked on descriptive research designs? How was your experience creating a descriptive design? What challenges did you face? Do write to us or leave a comment below and share your insights on descriptive research designs!


  • Open access
  • Published: 17 August 2023

Data visualisation in scoping reviews and evidence maps on health topics: a cross-sectional analysis

  • Emily South (ORCID: orcid.org/0000-0003-2187-4762) &
  • Mark Rodgers

Systematic Reviews, volume 12, Article number: 142 (2023)

Abstract

Background

Scoping reviews and evidence maps are forms of evidence synthesis that aim to map the available literature on a topic and are well-suited to visual presentation of results. A range of data visualisation methods and interactive data visualisation tools exist that may make scoping reviews more useful to knowledge users. The aim of this study was to explore the use of data visualisation in a sample of recent scoping reviews and evidence maps on health topics, with a particular focus on interactive data visualisation.

Methods

Ovid MEDLINE ALL was searched for recent scoping reviews and evidence maps (June 2020-May 2021), and a sample of 300 papers that met basic selection criteria was taken. Data were extracted on the aim of each review and the use of data visualisation, including types of data visualisation used, variables presented and the use of interactivity. Descriptive data analysis was undertaken of the 238 reviews that aimed to map evidence.

Results

Of the 238 scoping reviews or evidence maps in our analysis, around one-third (37.8%) included some form of data visualisation. Thirty-five different types of data visualisation were used across this sample, although most data visualisations identified were simple bar charts (standard, stacked or multi-set), pie charts or cross-tabulations (60.8%). Most data visualisations presented a single variable (64.4%) or two variables (26.1%). Almost a third of the reviews that used data visualisation did not use any colour (28.9%). Only two reviews presented interactive data visualisation, and few reported the software used to create visualisations.

Conclusions

Data visualisation is currently underused by scoping review authors. In particular, there is potential for much greater use of more innovative forms of data visualisation and interactive data visualisation. Where more innovative data visualisation is used, scoping reviews have made use of a wide range of different methods. Increased use of these more engaging visualisations may make scoping reviews more useful for a range of stakeholders.

Background

Scoping reviews are “a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue” ([ 1 ], p. 950). While they include some of the same steps as a systematic review, such as systematic searches and the use of predetermined eligibility criteria, scoping reviews often address broader research questions and do not typically involve the quality appraisal of studies or synthesis of data [ 2 ]. Reasons for conducting a scoping review include the following: to map types of evidence available, to explore research design and conduct, to clarify concepts or definitions and to map characteristics or factors related to a concept [ 3 ]. Scoping reviews can also be undertaken to inform a future systematic review (e.g. to assure authors there will be adequate studies) or to identify knowledge gaps [ 3 ]. Other evidence synthesis approaches with similar aims have been described as evidence maps, mapping reviews or systematic maps [ 4 ]. While this terminology is used inconsistently, evidence maps can be used to identify evidence gaps and present them in a user-friendly (and often visual) way [ 5 ].

Scoping reviews are often targeted to an audience of healthcare professionals or policy-makers [ 6 ], suggesting that it is important to present results in a user-friendly and informative way. Until recently, there was little guidance on how to present the findings of scoping reviews. In recent literature, there has been some discussion of the importance of clearly presenting data for the intended audience of a scoping review, with creative and innovative use of visual methods if appropriate [ 7 , 8 , 9 ]. Lockwood et al. suggest that innovative visual presentation should be considered over dense sections of text or long tables in many cases [ 8 ]. Khalil et al. suggest that inspiration could be drawn from the field of data visualisation [ 7 ]. JBI guidance on scoping reviews recommends that reviewers carefully consider the best format for presenting data at the protocol development stage and provides a number of examples of possible methods [ 10 ].

Interactive resources are another option for presentation in scoping reviews [ 9 ]. Researchers without the relevant programming skills can now use several online platforms (such as Tableau [ 11 ] and Flourish [ 12 ]) to create interactive data visualisations. The benefits of using interactive visualisation in research include the ability to easily present more than two variables [ 13 ] and increased engagement of users [ 14 ]. Unlike static graphs, interactive visualisations can allow users to view hierarchical data at different levels, exploring both the “big picture” and looking in more detail ([ 15 ], p. 291). Interactive visualizations are often targeted at practitioners and decision-makers [ 13 ], and there is some evidence from qualitative research that they are valued by policy-makers [ 16 , 17 , 18 ].

Given their focus on mapping evidence, we believe that scoping reviews are particularly well-suited to visually presenting data and the use of interactive data visualisation tools. However, it is unknown how many recent scoping reviews visually map data or which types of data visualisation are used. The aim of this study was to explore the use of data visualisation methods in a large sample of recent scoping reviews and evidence maps on health topics. In particular, we were interested in the extent to which these forms of synthesis use any form of interactive data visualisation.

Methods

This study was a cross-sectional analysis of studies labelled as scoping reviews or evidence maps (or synonyms of these terms) in the title or abstract.

The search strategy was developed with help from an information specialist. Ovid MEDLINE® ALL was searched in June 2021 for studies added to the database in the previous 12 months. The search was limited to English language studies only.

The search strategy was as follows:

Ovid MEDLINE(R) ALL

1. (scoping review or evidence map or systematic map or mapping review or scoping study or scoping project or scoping exercise or literature mapping or evidence mapping or systematic mapping or literature scoping or evidence gap map).ab,ti.

2. limit 1 to english language

3. (202006* or 202007* or 202008* or 202009* or 202010* or 202011* or 202012* or 202101* or 202102* or 202103* or 202104* or 202105*).dt.

The search returned 3686 records. Records were de-duplicated in EndNote 20 software, leaving 3627 unique records.
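The de-duplication step can be sketched in code. This is not EndNote's matching algorithm; it is a minimal illustration that treats two records as duplicates when their normalised titles and years match:

```python
def dedupe(records):
    """Keep the first record seen for each normalised (title, year) key."""
    seen = set()
    unique = []
    for rec in records:
        # Normalise: lowercase and strip punctuation/whitespace
        key = ("".join(ch for ch in rec["title"].lower() if ch.isalnum()), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "A Scoping Review of X", "year": 2020},
    {"title": "A scoping review of X.", "year": 2020},  # duplicate: differs only in case/punctuation
    {"title": "An Evidence Map of Y", "year": 2021},
]
print(len(dedupe(records)))  # → 2
```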

A sample of these reviews was taken by screening the search results against basic selection criteria (Table 1). These criteria were piloted and refined after discussion between the two researchers. A single researcher (E.S.) screened the records in EPPI-Reviewer Web software using the machine-learning priority screening function. Where a second opinion was needed, decisions were checked by a second researcher (M.R.).

Our initial plan for sampling, informed by pilot searching, was to screen records and extract data in batches of 50 included reviews at a time. We planned to stop screening once a batch of 50 reviews yielded no new types of data visualisation, or after screening time reached 2 days. However, once data extraction was underway, we found the sample to be richer in data visualisation than anticipated. After including 300 reviews, we ended screening to keep the study manageable.
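Priority screening of this kind ranks unscreened records so that likely includes surface first. A toy sketch follows; the keyword scorer is a crude stand-in for EPPI-Reviewer's trained classifier, which we have not reproduced:

```python
def relevance_score(title, keywords=("scoping", "map", "evidence")):
    """Crude stand-in for a classifier: count keyword hits in the title."""
    t = title.lower()
    return sum(t.count(k) for k in keywords)

titles = [
    "Effects of drug X on blood pressure",
    "A scoping review mapping the evidence on telehealth",
    "Evidence map of exercise interventions",
]

# Screen in priority order: highest-scoring records first
for title in sorted(titles, key=relevance_score, reverse=True):
    print(relevance_score(title), title)
```

In a real workflow the scores come from a model retrained as screening decisions accumulate, so the ranking improves over time.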

Data extraction

A data extraction form was developed in EPPI-Reviewer Web, piloted on 50 reviews and refined. Data were extracted by one researcher (E. S. or M. R.), with a second researcher (M. R. or E. S.) providing a second opinion when needed. The data items extracted were as follows: type of review (term used by authors), aim of review (mapping evidence vs. answering specific question vs. borderline), number of visualisations (if any), types of data visualisation used, variables/domains presented by each visualisation type, interactivity, use of colour and any software requirements.

When categorising review aims, we considered “mapping evidence” to incorporate all of the six purposes for conducting a scoping review proposed by Munn et al. [ 3 ]. Reviews were categorised as “answering a specific question” if they aimed to synthesise study findings to answer a particular question, for example on effectiveness of an intervention. We were inclusive with our definition of “mapping evidence” and included reviews with mixed aims in this category. However, some reviews were difficult to categorise (for example where aims were unclear or the stated aims did not match the actual focus of the paper) and were considered to be “borderline”. It became clear that a proportion of identified records that described themselves as “scoping” or “mapping” reviews were in fact pseudo-systematic reviews that failed to undertake key systematic review processes. Such reviews attempted to integrate the findings of included studies rather than map the evidence, and so reviews categorised as “answering a specific question” were excluded from the main analysis. Data visualisation methods for meta-analyses have been explored previously [ 19 ]. Figure  1 shows the flow of records from search results to final analysis sample.

figure 1

Flow diagram of the sampling process

Data visualisation was defined as any graph or diagram that presented results data, including tables with a visual mapping element, such as cross-tabulations and heat maps. However, tables which displayed data at a study level (e.g. tables summarising key characteristics of each included study) were not included, even if they used symbols, shading or colour. Flow diagrams showing the study selection process were also excluded. Data visualisations in appendices or supplementary information were included, as well as any in publicly available dissemination products (e.g. visualisations hosted online) if mentioned in papers.

The typology used to categorise data visualisation methods was based on an existing online catalogue [ 20 ]. Specific types of data visualisation were categorised into five broad categories: graphs, diagrams, tables, maps/geographical and other. If a data visualisation appeared in our sample that did not feature in the original catalogue, we checked a second online catalogue [ 21 ] for an appropriate term, followed by wider Internet searches. These additional visualisation methods were added to the appropriate section of the typology. The final typology can be found in Additional file 1.

We conducted descriptive data analysis in Microsoft Excel 2019 and present frequencies and percentages. Where appropriate, data are presented using graphs or other data visualisations created using Flourish. We also link to interactive versions of some of these visualisations.
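Frequencies and percentages of this kind do not require a spreadsheet. As a sketch, the headline counts reported in the Results can be tabulated with `collections.Counter` (the "other" entry pools the remaining visualisation types):

```python
from collections import Counter

# Counts of visualisation types reported in the Results section
counts = Counter({
    "bar chart (all variants)": 78,
    "pie chart": 33,
    "cross-tabulation": 24,
    "other types combined": 87,
})
total = sum(counts.values())  # 222 individual data visualisations

for viz_type, n in counts.most_common():
    print(f"{viz_type}: {n} ({n / total:.1%})")
```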

Results

Almost all of the 300 reviews in the total sample were labelled by review authors as “scoping reviews” (n = 293, 97.7%). There were also four “mapping reviews”, one “scoping study”, one “evidence mapping” and one that was described as a “scoping review and evidence map”. Included reviews were all published in 2020 or 2021, with the exception of one review published in 2018. Just over one-third of these reviews (n = 105, 35.0%) included some form of data visualisation. However, we excluded 62 reviews that did not focus on mapping evidence from the following analysis (see “Methods” section). Of the 238 remaining reviews (that either clearly aimed to map evidence or were judged to be “borderline”), 90 reviews (37.8%) included at least one data visualisation. The references for these reviews can be found in Additional file 2.

Number of visualisations

Thirty-six (40.0%) of these 90 reviews included just one example of data visualisation (Fig. 2). Less than a third (n = 28, 31.1%) included three or more visualisations. The greatest number of data visualisations in one review was 17 (all bar or pie charts). In total, 222 individual data visualisations were identified across the sample of 238 reviews.

figure 2

Number of data visualisations per review

Categories of data visualisation

Graphs were the most frequently used category of data visualisation in the sample. Over half of the reviews with data visualisation included at least one graph (n = 59, 65.6%). The least frequently used category was maps, with 15.6% (n = 14) of these reviews including a map.

Of the total number of 222 individual data visualisations, 102 were graphs (45.9%), 34 were tables (15.3%), 23 were diagrams (10.4%), 15 were maps (6.8%) and 48 were classified as “other” in the typology (21.6%).

Types of data visualisation

All of the types of data visualisation identified in our sample are reported in Table 2. In total, 35 different types were used across the sample of reviews.

The most frequently used data visualisation type was a bar chart. Of 222 total data visualisations, 78 (35.1%) were a variation on a bar chart (either standard bar chart, stacked bar chart or multi-set bar chart). There were also 33 pie charts (14.9% of data visualisations) and 24 cross-tabulations (10.8% of data visualisations). In total, these five types of data visualisation accounted for 60.8% (n = 135) of all data visualisations. Figure 3 shows the frequency of each data visualisation category and type; an interactive online version of this treemap is also available (https://public.flourish.studio/visualisation/9396133/). Figure 4 shows how users can further explore the data using the interactive treemap.

figure 3

Data visualisation categories and types. An interactive version of this treemap is available online: https://public.flourish.studio/visualisation/9396133/. Through the interactive version, users can further explore the data (see Fig. 4). The unit of this treemap is the individual data visualisation, so multiple data visualisations within the same scoping review are represented in this map. Created with flourish.studio (https://flourish.studio)

figure 4

Screenshots showing how users of the interactive treemap can explore the data further. Users can explore each level of the hierarchical treemap (A Visualisation category > B Visualisation subcategory > C Variables presented in visualisation > D Individual references reporting this category/subcategory/variable permutation). Created with flourish.studio (https://flourish.studio)

Data presented

Around two-thirds of data visualisations in the sample presented a single variable (n = 143, 64.4%). The most frequently presented single variables were themes (n = 22, 9.9% of data visualisations), population (n = 21, 9.5%), country or region (n = 21, 9.5%) and year (n = 20, 9.0%). There were 58 visualisations (26.1%) that presented two different variables. The remaining 21 data visualisations (9.5%) presented three or more variables. Figure 5 shows the variables presented by each different type of data visualisation (an interactive version of this figure is available online).

figure 5

Variables presented by each data visualisation type. Darker cells indicate a larger number of reviews. An interactive version of this heat map is available online: https://public.flourish.studio/visualisation/10632665/. Users can hover over each cell to see the number of data visualisations for that combination of data visualisation type and variable. The unit of this heat map is the individual data visualisation, so multiple data visualisations within a single scoping review are represented in this map. Created with flourish.studio (https://flourish.studio)
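The data behind such a heat map is a cross-tabulation of visualisation type against variable, which can be accumulated as a nested counter; the extraction records below are invented for illustration:

```python
from collections import defaultdict

# Each extracted data visualisation as a (type, variable presented) pair — invented examples
extractions = [
    ("bar chart", "year"), ("bar chart", "country"), ("bar chart", "year"),
    ("pie chart", "population"), ("map", "country"),
]

# Nested tally: crosstab[type][variable] -> count
crosstab = defaultdict(lambda: defaultdict(int))
for viz_type, variable in extractions:
    crosstab[viz_type][variable] += 1

print(crosstab["bar chart"]["year"])  # → 2
```

Each cell of the resulting table maps directly onto one cell of the heat map, with the count driving the colour intensity.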

Most reviews presented at least one data visualisation in colour (n = 64, 71.1%). However, almost a third (n = 26, 28.9%) used only black and white or greyscale.

Interactivity

Only two of the reviews included data visualisations with any level of interactivity. One scoping review on music and serious mental illness [ 22 ] linked to an interactive bubble chart hosted online on Tableau. Functionality included the ability to filter the studies displayed by various attributes.

The other review was an example of evidence mapping from the environmental health field [ 23 ]. All four of the data visualisations included in the paper were available in an interactive format hosted either by the review management software or on Tableau. The interactive versions linked to the relevant references so users could directly explore the evidence base. This was the only review that provided this feature.

Software requirements

Nine reviews clearly reported the software used to create data visualisations. Three reviews used Tableau (one of them also used review management software as discussed above) [ 22 , 23 , 24 ]. Two reviews generated maps using ArcGIS [ 25 ] or ArcMap [ 26 ]. One review used Leximancer for a lexical analysis [ 27 ]. One review undertook a bibliometric analysis using VOSviewer [ 28 ], and another explored citation patterns using CitNetExplorer [ 29 ]. Other reviews used Excel [ 30 ] or R [ 26 ].

Discussion

To our knowledge, this is the first systematic and in-depth exploration of the use of data visualisation techniques in scoping reviews. Our findings suggest that the majority of scoping reviews do not use any data visualisation at all, and, in particular, more innovative examples of data visualisation are rare. Around 60% of data visualisations in our sample were simple bar charts, pie charts or cross-tabulations. There appears to be very limited use of interactive online visualisation, despite the potential this has for communicating results to a range of stakeholders. While it is not always appropriate to use data visualisation (or a simple bar chart may be the most user-friendly way of presenting the data), these findings suggest that data visualisation is being underused in scoping reviews. In a large minority of reviews, visualisations were not published in colour, potentially limiting how user-friendly and attractive papers are to decision-makers and other stakeholders. Also, very few reviews clearly reported the software used to create data visualisations. However, 35 different types of data visualisation were used across the sample, highlighting the wide range of methods that are potentially available to scoping review authors.

Our results build on the limited research that has previously been undertaken in this area. Two previous publications also found limited use of graphs in scoping reviews. Results were “mapped graphically” in 29% of scoping reviews in any field in one 2014 publication [ 31 ] and 17% of healthcare scoping reviews in a 2016 article [ 6 ]. Our results suggest that the use of data visualisation has increased somewhat since these reviews were conducted. Scoping review methods have also evolved in the last 10 years; formal guidance on scoping review conduct was published in 2014 [ 32 ], and an extension of the PRISMA checklist for scoping reviews was published in 2018 [ 33 ]. It is possible that an overall increase in use of data visualisation reflects increased quality of published scoping reviews. There is also some literature supporting our findings on the wide range of data visualisation methods that are used in evidence synthesis. An investigation of methods to identify, prioritise or display health research gaps (25/139 included studies were scoping reviews; 6/139 were evidence maps) identified 14 different methods used to display gaps or priorities, with half being “more advanced” (e.g. treemaps, radial bar plots) ([ 34 ], p. 107). A review of data visualisation methods used in papers reporting meta-analyses found over 200 different ways of displaying data [ 19 ].

Only two reviews in our sample used interactive data visualisation, and one of these was an example of systematic evidence mapping from the environmental health field rather than a scoping review (in environmental health, systematic evidence mapping explicitly involves producing a searchable database [ 35 ]). A scoping review of papers on the use of interactive data visualisation in population health or health services research found a range of examples but still limited use overall [ 13 ]. For example, the authors noted the currently underdeveloped potential for using interactive visualisation in research on health inequalities. It is possible that the use of interactive data visualisation in academic papers is restricted by academic publishing requirements; for example, it is currently difficult to incorporate an interactive figure into a journal article without linking to an external host or platform. However, we believe that there is a lot of potential to add value to future scoping reviews by using interactive data visualisation software. Few reviews in our sample presented three or more variables in a single visualisation, something which can easily be achieved using interactive data visualisation tools. We have previously used EPPI-Mapper [ 36 ] to present results of a scoping review of systematic reviews on behaviour change in disadvantaged groups, with links to the maps provided in the paper [ 37 ]. These interactive maps allowed policy-makers to explore the evidence on different behaviours and disadvantaged groups and access full publications of the included studies directly from the map.

We acknowledge there are barriers to use for some of the data visualisation software available. EPPI-Mapper and some of the software used by reviews in our sample incur a cost. Some software requires a certain level of knowledge and skill in its use. However, numerous free online data visualisation tools and resources exist. We have used Flourish to present data for this review, a basic version of which is currently freely available and easy to use. Previous health research has used a range of different interactive data visualisation software, much of which does not require advanced knowledge or skills to use [ 13 ].

There are likely to be other barriers to the use of data visualisation in scoping reviews. Journal guidelines and policies may present barriers for using innovative data visualisation. For example, some journals charge a fee for publication of figures in colour. As previously mentioned, there are limited options for incorporating interactive data visualisation into journal articles. Authors may also be unaware of the data visualisation methods and tools that are available. Producing data visualisations can be time-consuming, particularly if authors lack experience and skills in this. It is possible that many authors prioritise speed of publication over spending time producing innovative data visualisations, particularly in a context where there is pressure to achieve publications.

Limitations

A limitation of this study was that we did not assess how appropriate the use of data visualisation was in our sample as this would have been highly subjective. Simple descriptive or tabular presentation of results may be the most appropriate approach for some scoping review objectives [ 7 , 8 , 10 ], and the scoping review literature cautions against “over-using” different visual presentation methods [ 7 , 8 ]. It cannot be assumed that all of the reviews that did not include data visualisation should have done so. Likewise, we do not know how many reviews used methods of data visualisation that were not well suited to their data.

We initially relied on authors’ own use of the term “scoping review” (or equivalent) to sample reviews but identified a relatively large number of papers labelled as scoping reviews that did not meet the basic definition, despite the availability of guidance and reporting guidelines [ 10 , 33 ]. It has previously been noted that scoping reviews may be undertaken inappropriately because they are seen as “easier” to conduct than a systematic review ([ 3 ], p.6), and that reviews are often labelled as “scoping reviews” while not appearing to follow any established framework or guidance [ 2 ]. We therefore took the decision to remove these reviews from our main analysis. However, decisions on how to classify review aims were subjective, and we did include some reviews that were of borderline relevance.

A further limitation is that this was a sample of published reviews, rather than a comprehensive systematic scoping review as have previously been undertaken [ 6 , 31 ]. The number of scoping reviews that are published has increased rapidly, and this would now be difficult to undertake. As this was a sample, not all relevant scoping reviews or evidence maps that would have met our criteria were included. We used machine learning to screen our search results for pragmatic reasons (to reduce screening time), but we do not see any reason that our sample would not be broadly reflective of the wider literature.

Conclusions

Data visualisation, and in particular more innovative examples of it, is currently underused in published scoping reviews on health topics. The examples that we have found highlight the wide range of methods that scoping review authors could draw upon to present their data in an engaging way. In particular, we believe that interactive data visualisation has significant potential for mapping the available literature on a topic. Appropriate use of data visualisation may increase the usefulness, and thus uptake, of scoping reviews as a way of identifying existing evidence or research gaps by decision-makers, researchers and commissioners of research. We recommend that scoping review authors explore the extensive free resources and online tools available for data visualisation. However, we also think that it would be useful for publishers to explore allowing easier integration of interactive tools into academic publishing, given the fact that papers are now predominantly accessed online. Future research may be helpful to explore which methods are particularly useful to scoping review users.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

JBI: Organisation formerly known as Joanna Briggs Institute

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

References

Munn Z, Pollock D, Khalil H, Alexander L, McLnerney P, Godfrey CM, Peters M, Tricco AC. What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis. JBI Evid Synth. 2022;20:950–952.

Peters MDJ, Marnie C, Colquhoun H, Garritty CM, Hempel S, Horsley T, Langlois EV, Lillie E, O’Brien KK, Tunçalp Ӧ, et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Rev. 2021;10:263.

Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18:143.

Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements. Health Info Libr J. 2019;36:202–22.

Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5:28.

Tricco AC, Lillie E, Zarin W, O’Brien K, Colquhoun H, Kastner M, Levac D, Ng C, Sharpe JP, Wilson K, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15.

Khalil H, Peters MDJ, Tricco AC, Pollock D, Alexander L, McInerney P, Godfrey CM, Munn Z. Conducting high quality scoping reviews-challenges and solutions. J Clin Epidemiol. 2021;130:156–60.

Lockwood C, dos Santos KB, Pap R. Practical guidance for knowledge synthesis: scoping review methods. Asian Nurs Res. 2019;13:287–94.

Pollock D, Peters MDJ, Khalil H, McInerney P, Alexander L, Tricco AC, Evans C, de Moraes ÉB, Godfrey CM, Pieper D, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. 2022;10:11124.

Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping reviews (2020 version). In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. Available from https://synthesismanual.jbi.global. Accessed 1 Feb 2023.

Tableau Public. https://www.tableau.com/en-gb/products/public . Accessed 24 January 2023.

flourish.studio. https://flourish.studio/ . Accessed 24 January 2023.

Chishtie J, Bielska IA, Barrera A, Marchand J-S, Imran M, Tirmizi SFA, Turcotte LA, Munce S, Shepherd J, Senthinathan A, et al. Interactive visualization applications in population health and health services research: systematic scoping review. J Med Internet Res. 2022;24: e27534.

Isett KR, Hicks DM. Providing public servants what they need: revealing the “unseen” through data visualization. Public Adm Rev. 2018;78:479–85.

Carroll LN, Au AP, Detwiler LT, Fu T-c, Painter IS, Abernethy NF. Visualization and analytics tools for infectious disease epidemiology: a systematic review. J Biomed Inform. 2014;51:287–298.

Lundkvist A, El-Khatib Z, Kalra N, Pantoja T, Leach-Kemon K, Gapp C, Kuchenmüller T. Policy-makers’ views on translating burden of disease estimates in health policies: bridging the gap through data visualization. Arch Public Health. 2021;79:17.

Zakkar M, Sedig K. Interactive visualization of public health indicators to support policymaking: an exploratory study. Online J Public Health Inform. 2017;9:e190–e190.

Park S, Bekemeier B, Flaxman AD. Understanding data use and preference of data visualization for public health professionals: a qualitative study. Public Health Nurs. 2021;38:531–41.

Kossmeier M, Tran US, Voracek M. Charting the landscape of graphical displays for meta-analysis and systematic reviews: a comprehensive review, taxonomy, and feature analysis. BMC Med Res Methodol. 2020;20:26.

Ribecca, S. The Data Visualisation Catalogue. https://datavizcatalogue.com/index.html . Accessed 23 November 2021.

Ferdio. Data Viz Project. https://datavizproject.com/ . Accessed 23 November 2021.

Golden TL, Springs S, Kimmel HJ, Gupta S, Tiedemann A, Sandu CC, Magsamen S. The use of music in the treatment and management of serious mental illness: a global scoping review of the literature. Front Psychol. 2021;12: 649840.

Keshava C, Davis JA, Stanek J, Thayer KA, Galizia A, Keshava N, Gift J, Vulimiri SV, Woodall G, Gigot C, et al. Application of systematic evidence mapping to assess the impact of new research when updating health reference values: a case example using acrolein. Environ Int. 2020;143: 105956.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Jayakumar P, Lin E, Galea V, Mathew AJ, Panda N, Vetter I, Haynes AB. Digital phenotyping and patient-generated health data for outcome measurement in surgical care: a scoping review. J Pers Med. 2020;10:282.

Qu LG, Perera M, Lawrentschuk N, Umbas R, Klotz L. Scoping review: hotspots for COVID-19 urological research: what is being published and from where? World J Urol. 2021;39:3151–60.

Article   CAS   PubMed   Google Scholar  

Rossa-Roccor V, Acheson ES, Andrade-Rivas F, Coombe M, Ogura S, Super L, Hong A. Scoping review and bibliometric analysis of the term “planetary health” in the peer-reviewed literature. Front Public Health. 2020;8:343.

Hewitt L, Dahlen HG, Hartz DL, Dadich A. Leadership and management in midwifery-led continuity of care models: a thematic and lexical analysis of a scoping review. Midwifery. 2021;98: 102986.

Xia H, Tan S, Huang S, Gan P, Zhong C, Lu M, Peng Y, Zhou X, Tang X. Scoping review and bibliometric analysis of the most influential publications in achalasia research from 1995 to 2020. Biomed Res Int. 2021;2021:8836395.

Vigliotti V, Taggart T, Walker M, Kusmastuti S, Ransome Y. Religion, faith, and spirituality influences on HIV prevention activities: a scoping review. PLoS ONE. 2020;15: e0234720.

van Heemskerken P, Broekhuizen H, Gajewski J, Brugha R, Bijlmakers L. Barriers to surgery performed by non-physician clinicians in sub-Saharan Africa-a scoping review. Hum Resour Health. 2020;18:51.

Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5:371–85.

Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, McInerney P, Godfrey CM, Khalil H. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18:2119–26.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, Moher D, Peters MDJ, Horsley T, Weeks L, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–73.

Nyanchoka L, Tudur-Smith C, Thu VN, Iversen V, Tricco AC, Porcher R. A scoping review describes methods used to identify, prioritize and display gaps in health research. J Clin Epidemiol. 2019;109:99–110.

Wolffe TAM, Whaley P, Halsall C, Rooney AA, Walker VR. Systematic evidence maps as a novel tool to support evidence-based decision-making in chemicals policy and risk management. Environ Int. 2019;130:104871.

Digital Solution Foundry and EPPI-Centre. EPPI-Mapper, Version 2.0.1. EPPI-Centre, UCL Social Research Institute, University College London. 2020. https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3790 .

South E, Rodgers M, Wright K, Whitehead M, Sowden A. Reducing lifestyle risk behaviours in disadvantaged groups in high-income countries: a scoping review of systematic reviews. Prev Med. 2022;154: 106916.

Download references

Acknowledgements

We would like to thank Melissa Harden, Senior Information Specialist, Centre for Reviews and Dissemination, for advice on developing the search strategy.

Funding

This work received no external funding.

Author information

Authors and Affiliations

Centre for Reviews and Dissemination, University of York, York, YO10 5DD, UK

Emily South & Mark Rodgers


Contributions

Both authors conceptualised and designed the study and contributed to screening, data extraction and the interpretation of results. ES undertook the literature searches, analysed data, produced the data visualisations and drafted the manuscript. MR contributed to revising the manuscript, and both authors read and approved the final version.

Corresponding author

Correspondence to Emily South.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Typology of data visualisation methods.

Additional file 2.

References of scoping reviews included in main dataset.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

South, E., Rodgers, M. Data visualisation in scoping reviews and evidence maps on health topics: a cross-sectional analysis. Syst Rev 12 , 142 (2023). https://doi.org/10.1186/s13643-023-02309-y


Received : 21 February 2023

Accepted : 07 August 2023

Published : 17 August 2023

DOI : https://doi.org/10.1186/s13643-023-02309-y


Keywords

  • Scoping review
  • Evidence map
  • Data visualisation

Systematic Reviews

ISSN: 2046-4053



Understanding data analysis: A beginner's guide

Before data can be used to tell a story, it must go through a process that makes it usable. Explore the role of data analysis in decision-making.

What is data analysis?

Data analysis is the process of gathering, cleaning, and modeling data to reveal meaningful insights. This data is then crafted into reports that support the strategic decision-making process.

Types of data analysis

There are many different types of data analysis. Each type can be used to answer a different question.


Descriptive analytics

Descriptive analytics refers to the process of analyzing historical data to understand trends and patterns, such as success or failure in achieving key performance indicators like return on investment.

An example of descriptive analytics is generating reports to provide an overview of an organization's sales and financial data, offering valuable insights into past activities and outcomes.
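As a minimal sketch of this idea (in Python, using invented monthly sales figures), descriptive analytics can be as simple as summarizing historical data into a handful of headline numbers:

```python
import statistics

# Invented monthly sales figures (in thousands) -- illustration only
monthly_sales = [120, 135, 128, 150, 142, 160, 155, 170, 165, 180, 175, 190]

# Descriptive analytics: summarize what has already happened
summary = {
    "total": sum(monthly_sales),
    "mean": round(statistics.mean(monthly_sales), 1),
    "median": statistics.median(monthly_sales),
    "best_month": max(monthly_sales),
    "worst_month": min(monthly_sales),
}
print(summary)
```

In practice these summaries would be computed over real sales or financial records and surfaced in a report or dashboard.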


Predictive analytics

Predictive analytics uses historical data to help predict what might happen in the future, such as identifying past trends in data to determine if they’re likely to recur.

Methods include a range of statistical and machine learning techniques, including neural networks, decision trees, and regression analysis.
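To make this concrete, here is a toy Python sketch of one of the simplest of those techniques, an ordinary least-squares trend line fitted to invented quarterly revenue and extrapolated one period ahead:

```python
# Invented quarterly revenue; fit a least-squares trend line and
# extrapolate one quarter ahead (a toy stand-in for regression analysis).
quarters = [1, 2, 3, 4, 5, 6, 7, 8]
revenue = [100, 104, 109, 113, 118, 122, 127, 131]

n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(revenue) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, revenue)) / \
        sum((x - mean_x) ** 2 for x in quarters)
intercept = mean_y - slope * mean_x

next_quarter = 9
forecast = intercept + slope * next_quarter
print(round(forecast, 1))
```

Real predictive work would validate the model on held-out data before trusting its forecasts.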


Diagnostic analytics

Diagnostic analytics helps answer questions about what caused certain events by looking at performance indicators. Diagnostic analytics techniques supplement basic descriptive analysis.

Generally, diagnostic analytics involves spotting anomalies in data (like an unexpected shift in a metric), gathering data related to these anomalies, and using statistical techniques to identify potential explanations.
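The anomaly-spotting step can be illustrated with a simple z-score rule in Python (the daily sign-up counts are invented, and the two-standard-deviation threshold is an arbitrary illustrative choice):

```python
import statistics

# Flag any day whose metric sits more than 2 standard deviations
# from the mean -- a common first pass at diagnostic analytics.
daily_signups = [50, 52, 49, 51, 48, 50, 53, 47, 51, 95]  # last day looks odd

mean = statistics.mean(daily_signups)
stdev = statistics.stdev(daily_signups)
anomalies = [i for i, v in enumerate(daily_signups) if abs(v - mean) / stdev > 2]
print(anomalies)
```

Once an anomaly is flagged, the diagnostic work proper begins: gathering related data and testing candidate explanations.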


Cognitive analytics

Cognitive analytics is a sophisticated form of data analysis that goes beyond traditional methods. This method uses machine learning and natural language processing to understand, reason, and learn from data in a way that resembles human thought processes.

The goal of cognitive analytics is to simulate human-like thinking to provide deeper insights, recognize patterns, and make predictions.


Prescriptive analytics

Prescriptive analytics helps answer questions about what needs to happen next to achieve a certain goal or target. By using insights from prescriptive analytics, organizations can make data-driven decisions in the face of uncertainty.

Data analysts performing prescriptive analysis often rely on machine learning to find patterns in large semantic models and estimate the likelihood of various outcomes.
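As a hedged illustration of the underlying idea (not a real machine learning model), the following Python sketch scores candidate actions by expected value and recommends the best one; the action names, probabilities, payoffs, and costs are all invented:

```python
# Prescriptive analytics in miniature: pick the action whose
# estimated expected outcome is best. All inputs are made up.
actions = {
    "discount_10pct": {"p_success": 0.7, "payoff": 8000, "cost": 2000},
    "email_campaign": {"p_success": 0.4, "payoff": 12000, "cost": 1000},
    "do_nothing": {"p_success": 0.0, "payoff": 0, "cost": 0},
}

def expected_value(action):
    # Expected gain = probability-weighted payoff minus the action's cost
    return action["p_success"] * action["payoff"] - action["cost"]

best = max(actions, key=lambda name: expected_value(actions[name]))
print(best)
```

In practice the success probabilities themselves would come from predictive models rather than being assumed.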


Text analytics

Text analytics is a way to teach computers to understand human language. It involves using algorithms and other techniques to extract information from large amounts of text data, such as social media posts or customer reviews.

Text analytics helps data analysts make sense of what people are saying, find patterns, and gain insights that can be used to make better decisions in fields like business, marketing, and research.
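A toy version of this in Python: tokenize a few invented customer reviews, drop common stop words, and count the most frequent terms:

```python
import re
from collections import Counter

# Invented customer reviews for illustration only
reviews = [
    "The delivery was fast and the packaging was great",
    "Great product but the delivery was slow",
    "Fast delivery, great price",
]

stop_words = {"the", "was", "and", "but", "a"}
tokens = []
for review in reviews:
    # Lowercase, keep alphabetic words, filter out stop words
    tokens += [w for w in re.findall(r"[a-z]+", review.lower()) if w not in stop_words]

top_terms = Counter(tokens).most_common(2)
print(top_terms)
```

Even this crude frequency count hints at what customers talk about most; real text analytics layers on stemming, sentiment scoring, and topic modeling.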

The data analysis process

Compiling and interpreting data so it can be used in decision-making is a detailed process that requires a systematic approach. Here are the steps that data analysts follow:

1. Define your objectives.

Clearly define the purpose of your analysis. What specific question are you trying to answer? What problem do you want to solve? Identify your core objectives. This will guide the entire process.

2. Collect and consolidate your data.

Gather your data from all relevant sources using data analysis software. Ensure that the data is representative and actually covers the variables you want to analyze.

3. Select your analytical methods.

Investigate the various data analysis methods and select the technique that best aligns with your objectives. Many free data analysis software solutions offer built-in algorithms and methods to facilitate this selection process.

4. Clean your data.

Scrutinize your data for errors, missing values, or inconsistencies using the cleansing features already built into your data analysis software. Cleaning the data ensures accuracy and reliability in your analysis and is an important part of data analytics.

5. Uncover valuable insights.

Delve into your data to uncover patterns, trends, and relationships. Use statistical methods, machine learning algorithms, or other analytical techniques that are aligned with your goals. This step transforms raw data into valuable insights.

6. Interpret and visualize the results.

Examine the results of your analyses to understand their implications. Connect these findings with your initial objectives. Then, leverage the visualization tools within free data analysis software to present your insights in a more digestible format.

7. Make an informed decision.

Use the insights gained from your analysis to inform your next steps. Think about how these findings can be utilized to enhance processes, optimize strategies, or improve overall performance.

By following these steps, analysts can systematically approach large sets of data, breaking down the complexities and ensuring the results are actionable for decision makers.
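The steps above can be sketched end to end on invented data: collect raw values, clean out missing entries, analyze, and summarize the result for a decision maker:

```python
# A minimal sketch of the process on invented order data.
raw_orders = [120.0, None, 95.5, 110.0, None, 130.25]  # step 2: collected data

# Step 4: cleaning -- drop missing values so they don't skew the analysis
clean_orders = [v for v in raw_orders if v is not None]

# Steps 5-6: analyze and present an interpretable summary
insight = {
    "orders_analyzed": len(clean_orders),
    "average_order_value": round(sum(clean_orders) / len(clean_orders), 2),
}
print(insight)
```

A real pipeline would also log how many records were dropped and why, so the cleaning step stays auditable.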

The importance of data analysis

Data analysis is critical because it helps business decision makers make sense of the information they collect in our increasingly data-driven world. Imagine you have a massive pile of puzzle pieces (data), and you want to see the bigger picture (insights). Data analysis is like putting those puzzle pieces together—turning that data into knowledge—to reveal what’s important.

Whether you’re a business decision maker trying to make sense of customer preferences or a scientist studying trends, data analysis is an important tool that helps us understand the world and make informed choices.

Primary data analysis methods


Quantitative analysis

Quantitative analysis deals with numbers and measurements (for example, looking at survey results captured through ratings). When performing quantitative analysis, you’ll use mathematical and statistical methods exclusively and answer questions like ‘how much’ or ‘how many.’ 
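For instance, a minimal Python sketch over invented survey ratings answers "how many" and "how much" questions numerically:

```python
import statistics

# Invented survey ratings on a 1-5 scale
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

result = {
    "how_many_responses": len(ratings),
    "how_much_avg_rating": statistics.mean(ratings),
    "how_many_rated_4_or_5": sum(1 for r in ratings if r >= 4),
}
print(result)
```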


Qualitative analysis

Qualitative analysis is about understanding the subjective meaning behind non-numerical data. For example, analyzing interview responses or looking at pictures to understand emotions. Qualitative analysis looks for patterns, themes, or insights, and is mainly concerned with depth and detail.

Data analysis solutions and resources

Turn your data into actionable insights and visualize the results with ease.

Microsoft 365

Process data and turn ideas into reality with innovative apps, including Excel.

Importance of backing up data

Learn how to back up your data and devices for peace of mind—and added security. 

Copilot in Excel

Go deeper with your data using Microsoft Copilot—your AI assistant.

Excel expense template

Organize and track your business expenses using Excel.

Excel templates

Boost your productivity with free, customizable Excel templates for all types of documents.

Chart designs

Enhance presentations, research, and other materials with customizable chart templates.


  • Open access
  • Published: 13 May 2024

Patient medication management, understanding and adherence during the transition from hospital to outpatient care - a qualitative longitudinal study in polymorbid patients with type 2 diabetes

  • Léa Solh Dost   ORCID: orcid.org/0000-0001-5767-1305 1 , 2 ,
  • Giacomo Gastaldi   ORCID: orcid.org/0000-0001-6327-7451 3 &
  • Marie P. Schneider   ORCID: orcid.org/0000-0002-7557-9278 1 , 2  

BMC Health Services Research volume 24, Article number: 620 (2024)


Background

Continuity of care is under great pressure during the transition from hospital to outpatient care. Medication changes during hospitalization may be poorly communicated and understood, compromising patient safety during the transition from hospital to home. The main aims of this study were to investigate the perspectives of patients with type 2 diabetes and multimorbidities on their medications from hospital discharge to outpatient care, and their healthcare journey through the outpatient healthcare system. In this article, we present the results focusing on patients’ perspectives of their medications from hospital to two months after discharge.

Methods

Patients with type 2 diabetes, with at least two comorbidities and who returned home after discharge, were recruited during their hospitalization. A descriptive qualitative longitudinal research approach was adopted, with four in-depth semi-structured interviews per participant over a period of two months after discharge. Interviews were based on semi-structured guides, transcribed verbatim, and a thematic analysis was conducted.

Results

Twenty-one participants were included from October 2020 to July 2021. Seventy-five interviews were conducted. Three main themes were identified: (A) Medication management, (B) Medication understanding, and (C) Medication adherence, during three periods: (1) Hospitalization, (2) Care transition, and (3) Outpatient care. Participants had varying levels of need for medication information and involvement in medication management during hospitalization and in outpatient care. The transition from hospital to autonomous medication management was difficult for most participants, who quickly returned to their routines, with some participants experiencing difficulties in medication adherence.

Conclusions

The transition from hospital to outpatient care is a challenging process during which discharged patients are vulnerable and are willing to take steps to better manage, understand, and adhere to their medications. The resulting tension between patients’ difficulties with their medications and lack of standardized healthcare support calls for interprofessional guidelines to better address patients’ needs, increase their safety, and standardize physicians’, pharmacists’, and nurses’ roles and responsibilities.

Peer Review reports

Introduction

Continuity of patient care is characterized as the collaborative engagement between the patient and their physician-led care team in the ongoing management of healthcare, with the mutual objective of delivering high-quality and cost-effective medical care [ 1 ]. Continuity of care is under great pressure during the transition of care from hospital to outpatient care, with a risk of compromising patients’ safety [ 2 , 3 ]. The early post-discharge period is a high-risk and fragile transition: once discharged, one in five patients experience at least one adverse event during the first three weeks following discharge, and more than half of these adverse events are drug-related [ 4 , 5 ]. A retrospective study examining all discharged patients showed that adverse drug events (ADEs) account for up to 20% of 30-day hospital emergency readmissions [ 6 ]. During hospitalization, patients’ medications are generally modified, with an average of nearly four medication changes per patient [ 7 ]. Information regarding medications such as medication changes, the expected effect, side effects, and instructions for use are frequently poorly communicated to patients during hospitalization and at discharge [ 8 , 9 , 10 , 11 ]. Between 20 and 60% of discharged patients lack knowledge of their medications [ 12 , 13 ]. Consideration of patients’ needs and their active engagement in decision-making during hospitalization regarding their medications are often lacking [ 11 , 14 , 15 ]. This can lead to unsafe discharge and contribute to medication adherence difficulties, such as non-implementation of newly prescribed medications [ 16 , 17 ].

Patients with multiple comorbidities and polypharmacy are at higher risk of ADEs [ 18 ]. Type 2 diabetes is one of the chronic health conditions most frequently associated with comorbidities, and patients with type 2 diabetes often experience poor continuity of care [ 19 , 20 , 21 ]. The prevalence of patients hospitalized with type 2 diabetes can exceed 40% [ 22 ], and these patients are at higher risk of readmission due to their comorbidities and their medications, such as insulin and oral hypoglycemic agents [ 23 , 24 , 25 ].

Interventions and strategies to improve patient care and safety at transition have shown mixed results worldwide in reducing cost, rehospitalization, ADE, and non-adherence [ 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ]. However, interventions that are patient-centered, with a patient follow-up and led by interprofessional healthcare teams showed promising results [ 34 , 35 , 36 ]. Most of these interventions have not been implemented routinely due to the extensive time to translate research into practice and the lack of hybrid implementation studies [ 37 , 38 , 39 , 40 , 41 ]. In addition, patient-reported outcomes and perspectives have rarely been considered, yet patients’ involvement is essential for seamless and integrated care [ 42 , 43 ]. Interprofessional collaboration in which patients are full members of the interprofessional team, is still in its infancy in outpatient care [ 44 ]. Barriers and facilitators regarding medications at the transition of care have been explored in multiple qualitative studies at one given time in a given setting (e.g., at discharge, one-month post-discharge) [ 8 , 45 , 46 , 47 , 48 ]. However, few studies have adopted a holistic methodology from the hospital to the outpatient setting to explore changes in patients’ perspectives over time [ 49 , 50 , 51 ]. Finally, little is known about whether, how, and when patients return to their daily routine following hospitalization and the impact of hospitalization weeks after discharge.

In Switzerland, continuity of care after hospital discharge is still poorly documented, both in terms of contextual analysis and interventional studies, and is mainly conducted in the hospital setting [ 31 , 35 , 52 , 53 , 54 , 55 , 56 ]. The first step of an implementation science approach is to perform a contextual analysis to set up effective interventions adapted to patients’ needs and aligned to healthcare professionals’ activities in a specific context [ 41 , 57 ]. Therefore, the main aims of this study were to investigate the perspectives of patients with type 2 diabetes and multimorbidities on their medications from hospital discharge to outpatient care, and on their healthcare journey through the outpatient healthcare system. In this article, we present the results focusing on patients’ perspectives of their medications from hospital to two months after discharge.

Study design

This qualitative longitudinal study, conducted from October 2020 to July 2021, used a qualitative descriptive methodology through four consecutive in-depth semi-structured interviews per participant at 3, 10, 30, and 60 days post-discharge, as illustrated in Fig.  1 . Longitudinal qualitative research is characterized by qualitative data collection at different points in time and focuses on temporality, such as time and change [ 58 , 59 ]. Qualitative descriptive studies aim to explore and describe the depth and complexity of human experiences or phenomena [ 60 , 61 , 62 ]. We focused our qualitative study on the first 60 days after discharge, as this period is considered highly vulnerable and because studies often use 30- or 60-day readmission as an outcome measure [ 5 , 63 ].

This qualitative study follows the Consolidated Criteria for Reporting Qualitative Research (COREQ). Ethics committee approval was sought and granted by the Cantonal Research Ethics Commission, Geneva (CCER) (2020-01779).

Recruitment took place during participants’ hospitalization in the general internal medicine divisions at the Geneva University Hospitals in the canton of Geneva (500 000 inhabitants), Switzerland. Interviews took place at participants’ homes, in a private office at the University of Geneva, by telephone or by secure video call, according to participants’ preference. Informal caregivers could also participate alongside the participants.

figure 1

Study flowchart

Researcher characteristics

All the researchers were trained in qualitative studies. The diabetologist and researcher (GG) who enrolled the patients in the study was involved directly or indirectly (through advice sought from the Geneva University Hospital diabetes team, of which he was a part) in most participants’ care during hospitalization. LS (Ph.D. student and community pharmacist) was unknown to participants and presented herself during hospitalization as a “researcher” and not as a healthcare professional, to avoid any risk of influencing participants’ answers. This study was not interventional, and the interviewer (LS) invited participants to contact a healthcare professional for any questions related to their medication or medical issues.

Population and sampling strategy

Patients with type 2 diabetes were chosen as an example population to describe polypharmacy patients, as these patients usually have several health issues and polypharmacy [ 20 , 22 , 25 ]. Inclusion criteria were: adult patients with type 2 diabetes, with at least two other comorbidities, hospitalized for at least three days in a general internal medicine ward, with a minimum of one medication change during the hospital stay, and who self-managed their medications once discharged home. Exclusion criteria were patients not reachable by telephone following discharge, unable to give consent (patients with schizophrenia, dementia, brain damage, or drug/alcohol misuse), and who could not communicate in French. A purposive sampling methodology was applied, aiming to include participants of different ages, genders, and types and numbers of health conditions by listing participants’ characteristics in a double-entry table, available in Supplementary Material 1 , until thematic saturation was reached. Thematic saturation was considered achieved when no new code or theme emerged and new data repeated previously coded information [ 64 ]. Participants were identified if they were hospitalized in the ward dedicated to diabetes care or when the diabetes team was contacted for advice. The senior ward physician (GG) screened eligible patients and the interviewer (LS) obtained written consent before hospital discharge.

Data collection and instruments

Sociodemographic (age, gender, educational level, living arrangement) and clinical characteristics (reason for hospitalization, date of admission, health conditions, diabetes diagnosis, medications before and during hospitalization) were collected by interviewing participants before their discharge and by extracting participants’ data from electronic hospital files by GG and LS. Participants’ pharmacies were contacted with the participant’s consent to obtain medication records from the last three months if information regarding medications before hospitalization was missing in the hospital files.

Semi-structured interview guides for each interview (at 3, 10, 30, and 60 days post-discharge) were developed based on different theories and components of health behavior and medication adherence: the World Health Organization’s (WHO) five dimensions for adherence, the Information-Motivation-Behavioral skills model, and the Social Cognitive Theory [ 65 , 66 , 67 ]. Each interview explored participants’ itinerary in the healthcare system and their perspectives on their medications. The following themes were addressed at each interview: changes in medications, patients’ understanding and involvement, information on their medications, self-management of their medications, and medication adherence. Other aspects were addressed in specific interviews: patients’ hospitalization and experience of their return home (interview 1), motivation (interviews 2 and 4), and patients’ feedback on the past two months (interview 4). Interview guides translated from French are available in Supplementary Material 2 . To capture factors that may affect medication management and adherence, and to follow trends in these determinants over time, participants completed self-reported, self-administered questionnaires at the end of selected interviews: quality of life (EQ-5D-5L) [ 68 ], literacy (Schooling-Opinion-Support questionnaire) [ 69 ], medication adherence (Adherence Visual Analogue Scale, A-VAS) [ 70 ], and the Beliefs about Medicines Questionnaire (BMQ) [ 71 ]. The BMQ contains two subscores, Specific-Necessity and Specific-Concerns, addressing respectively patients’ perceived need for their medications and their concerns about adverse consequences associated with taking them [ 72 ].

Data management

Informed consent forms, including consent to obtain health data, were securely stored in a private office at the University of Geneva. The participants’ identification key was protected by a password known only to MS and LS. Confidentiality was guaranteed by pseudonymization of participants’ information, and audio recordings were destroyed once analyzed. Sociodemographic and clinical characteristics, medication changes, and answers to questionnaires were securely collected in electronic case report forms (eCRFs) on RedCap®. Interviews were double audio-recorded and field notes were taken during interviews. Recorded interviews were manually transcribed verbatim in MAXQDA® (2018.2) by research assistants and LS, and transcripts were validated for accuracy by LS. A random sample of 20% of questionnaires was checked for accuracy of transcription from the paper questionnaires to the eCRFs. Recorded sequences with no link to the discussed topics were not transcribed, and this was noted in the transcripts.

Data analysis

A descriptive statistical analysis of sociodemographic and clinical characteristics and self-reported questionnaire data was carried out. A thematic analysis of transcripts was performed, as described by Braun and Clarke [ 73 ], following these steps: raw data were read; text segments related to the study objectives were identified; text segments were grouped into new categories; similar or redundant categories were reduced; and a model integrating all significant categories was created. The analysis was conducted in parallel with patient enrolment to ensure data saturation. To ensure the validity of the coding method, transcripts were double-coded independently and discussed by the research team until similar themes were obtained. The research group developed and validated an analysis grid, with which LS systematically coded the transcripts, meeting regularly with the research team to discuss questions on data analysis and to ensure the quality of coding. The analysis was carried out in French, and the verbatims of interest cited in the manuscript were translated and validated by a native English-speaking researcher to preserve the meaning.

In this analysis, we used the term “healthcare professionals” when more than one profession could be involved in participants’ medication management. Otherwise, when a specific healthcare professional was involved, we used the designated profession (e.g. physicians, pharmacists).

Patient and public involvement

During the development phase of the study, interview guides and questionnaires were reviewed for clarity and validity, and adapted, by two patient partners with multiple health conditions who had previously experienced a hospital discharge. They are part of the HUG Patients Partners + 3P platform for research and patient and public involvement.

Interviews and participants’ descriptions

A total of 75 interviews were conducted with 21 participants. In total, 31 patients were contacted: seven refused to participate (four at the project presentation and three at consent), two did not meet the selection criteria at discharge, and one was unreachable after discharge. Among the 21 participants, 15 participated in all four interviews, four in three interviews, one in two interviews, and one in one interview, due to scheduling constraints. Details regarding interviews and participants’ characteristics are presented in Tables  1 and 2 .

The median length of time between hospital discharge and interviews 1, 2, 3, and 4 was 5 (IQR: 4–7), 14 (13–20), 35 (22–38), and 63 days (61–68), respectively. Comparing medications at hospital admission and discharge, a median of 7 medication changes (IQR: 6–9, range: 2–17) occurred per participant during hospitalization, and a median of 7 changes (IQR: 5–12) occurred during the two months following discharge. Details of participants' medications are described in Table 3.
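For readers unfamiliar with the summary statistics used above, the median and interquartile range can be reproduced with a short sketch. The values below are hypothetical, chosen for illustration to match the reported in-hospital summary (median 7, IQR 6–9, range 2–17); they are not the study data, and the quartile convention used (median of halves) is an assumption, as quartile definitions vary across statistical software.

```python
from statistics import median

def iqr_bounds(values):
    """Quartiles via the median-of-halves (Tukey) convention."""
    s = sorted(values)
    mid = len(s) // 2
    lower = s[:mid]                               # lower half (overall median excluded for odd n)
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]  # upper half
    return median(lower), median(upper)

# Hypothetical per-participant counts of in-hospital medication changes;
# illustrative only, NOT the study data.
changes = [2, 5, 6, 6, 7, 7, 7, 8, 9, 9, 17]

med = median(changes)
q1, q3 = iqr_bounds(changes)
print(f"median={med}, IQR: {q1}-{q3}, range: {min(changes)}-{max(changes)}")
# -> median=7, IQR: 6-9, range: 2-17
```

Note that other quartile conventions (e.g., linear interpolation, as in `numpy.percentile`'s default) can give slightly different IQR bounds on the same data.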

Patients' self-reported adherence over the past week for their three most challenging medications is available in Supplementary Material 3.

Qualitative analysis

We defined care transition as the period from discharge until the first medical appointment post-discharge, and outpatient care as the period starting after the first medical appointment. Data was organized into three key themes (A. Medication management, B. Medication understanding, and C. Medication adherence) divided into subthemes at three time points (1. Hospitalization, 2. Care transition and 3. Outpatient care). Figure  2 summarizes and illustrates the themes and subthemes with their influencing factors as bullet points.

Figure 2: Participants’ medication management, understanding and adherence during hospitalization, care transition and outpatient care

A. Medication management

A.1 Medication management during hospitalization: medication management by hospital staff

Medications during hospitalization were mainly managed by hospital healthcare professionals (i.e. nurses and physicians), with varying degrees of patient involvement: “At the hospital, they prepared the medications for me. […] I didn’t even know what the packages looked like.” Participant 22, interview 1 (P22.1). Some participants reported having therapeutic education sessions with specialized nurses and physicians, for example an explanation and demonstration of insulin injection and glucose monitoring. One patient reported being given the choice of several treatments and being involved in shared decision-making. Other participants took an active role in managing and optimizing dosages, such as rapid insulin, owing to prior knowledge and use of the medications before hospitalization.

A.2 Medication management at transition: obtaining the medication and initiating self-management

Once discharged, some participants had difficulties obtaining their medications at the pharmacy because some medications were not in stock and had to be ordered, delaying medication initiation. To counter this problem upstream, a few participants were provided with a 24-to-48-hour supply of medications at discharge; this supply was sometimes requested by the patient or suggested by healthcare professionals, but it was not systematic. The transition from medication management by hospital staff to self-management was exhausting for most participants, who were faced with a large amount of new information and changes to their medications: “When I was in the hospital, I didn’t even realize all the changes. When I came back home, I took away the old medication packages and got out the new ones. And then I thought: « my God, all this… I didn’t know I had all these changes ».” P2.1 Written documentation, such as the discharge prescription or the dosage labels on medication packages, was helpful for managing medications at home. Most participants used weekly pill organizers to manage their medications, which were either already in use before hospitalization or were introduced post-discharge. The help of a family caregiver in managing and obtaining medications was reported as a facilitator.

A.3 Medication management in outpatient care: daily self-management and medication burden

A couple of days or weeks after discharge, most participants had acquired a routine, so medication management was less demanding, but the medication burden varied between participants. For some, taking medication became a simple action well embedded in their routine (“It has become automatic”, P23.4), while for others, the number of medications and the fact that the medications were reminders of their disease was a heavy burden to bear daily (“During the first few days after getting out of the hospital, I thought I was going to do everything right. In the end, well [laughs], it’s complicated. I ended up not always taking the medication, not monitoring the blood sugar.” P12.2). To support medication self-management, some participants kept written documentation such as treatment plans and medication lists, or pictures of their medication packages on their phones. Some participants had difficulties obtaining medications weeks after discharge because discharge prescriptions were not renewable and they did not see their physician in time; others had to visit multiple physicians to have their prescriptions updated. A few participants encountered prescription or dispensing errors, such as the wrong dosage being prescribed or dispensed, which affected medication management and decreased their trust in healthcare professionals. In most cases, according to participants, the pharmacy staff collaborated with physicians to provide new and updated prescriptions.

B. Medication understanding

B.1 Medication understanding during hospitalization: new information and instructions

The amount of information received during hospitalization varied considerably among participants: some reported having received too much, while others said they received too little information regarding medication changes, the reasons for changes, or the introduction of new medications: “They told me I had to take this medication all my life, but they didn’t tell me what the effects were or why I was taking it.” P5.3

Hospitalization was seen by some participants as a vulnerable and tiring period during which they were less receptive to information. Information and explanations were generally given verbally, making them complicated for most participants to recall. Some participants reported that hospital staff were attentive to their information needs and used communication techniques such as teach-back (a way of checking understanding by asking participants to say in their own words what they need to know or do about their health or medications). Some participants were willing to be proactive in understanding their medications, while others were more passive, had no specific information needs, and did not see how they could be more engaged.

B.2 Medication understanding at transition: facing medication changes

At hospital discharge, the greatest difficulty for participants was understanding the changes made to their medications. For newly diagnosed participants, the addition of new medications was the most difficult to understand, whereas for experienced participants it was changes to known medications, such as dosage modifications, changes within a therapeutic class, and generic substitutions. Not having been informed about changes caused confusion and misunderstanding, and medication reconciliation done by the patient was time-consuming, especially for participants with multiple medications: “They didn’t tell me at all that they had changed my treatment completely. They just told me: « We’ve changed a few things ». But it was the whole treatment.” P2.3 Written information, such as the discharge prescription, the discharge report (a brief letter summarizing the hospitalization, given to the patient at discharge), or the label on the medication box (written by the pharmacist with dosage instructions), helped participants find or recall information about their medications and diagnoses. However, the technical terms used in hospital documents were not always understandable. For example, one participant said: “On the prescription of valsartan, they wrote: ‘resume in the morning once profile…’ [once hypertension profile allows]… I don’t know what that means.” P8.1 In addition, some documents were incomplete, as mentioned by a patient whose insulin dosage was missing from the hospital prescription. Some participants sought help from healthcare professionals, such as pharmacists, hospital physicians, or general practitioners, a few days after discharge to review medications, answer questions, or obtain additional information.

B.3 Medication understanding in outpatient care: concerns and knowledge

Weeks after discharge, most participants had concerns about the long-term use of their medications, their usefulness, and the possible risk of interactions or side effects. Some participants also reported lacking knowledge of indications, names, or how a medication worked: “I don’t even know what Brilique® [ticagrelor, antiplatelet agent] is for. It’s for blood pressure, isn’t it? I don’t know.” P11.4 According to participants, the main reasons for this lack of understanding were the lack of information at the time of prescribing and the large number of medications, which made information difficult to search for and remember. Participants sought information from different healthcare professionals or by themselves, from package inserts, the internet, or family and friends. Others reported having had all the information they needed or were not interested in more. In addition, participants with low medication literacy, such as non-native speakers or elderly people, struggled more with medication understanding and sought help from family caregivers or healthcare professionals, even weeks after discharge: “I don’t understand French very well […] [The doctor] explained it very quickly… […] I didn’t understand everything he was saying.” P16.2

C. Medication adherence

C.2 Medication adherence at transition: adopting new behaviors

Medication adherence was not mentioned as a concern during hospitalization, but a few participants reported difficulties initiating medications once back home: “I have an injection of Lantus® [insulin] in the morning, but obviously, the first day [after discharge], I forgot to do it because I was not used to it.” P23.1 Participants had to adopt new behaviors quickly in the first few days after discharge, especially those with few medications pre-hospitalization. The use of weekly pill organizers, alarms, and dedicated storage spaces was reported as a facilitator of adherence. One patient did not initiate one of his medications because he did not understand its indication, and another patient took her old medications because she was used to them. Moreover, most participants experienced their hospitalization as a turning point: a time when they focused on their health, reflected on the importance of their medications, and discussed new lifestyle or dietary measures that might be implemented.

C.3 Medication adherence in outpatient care: ongoing medication adherence

More medication adherence difficulties appeared a few weeks after hospital discharge, when most participants reported nonadherence behaviors such as difficulties implementing the dosage regimen, intentionally discontinuing a medication, or modifying the regimen on their own initiative. Determinants positively influencing medication adherence were the establishment of a routine; organizing medications in weekly pill organizers; preparing pocket doses (medications for a short period that participants take with them when away from home); seeking support from family caregivers; using alarm clocks; and using dedicated storage places. Reasons for nonadherence were changes in daily routine; intake times that were not convenient for the patient; the large number of medications; and poor knowledge of the medication or its side effects. Healthcare professionals’ assistance with medication management, such as help from home nurses or pharmacists in preparing weekly pill organizers, was requested by participants or offered by healthcare professionals to support adherence: “I needed [a home nurse] to put my pills in the pillbox. […] I felt really weak […] and I was making mistakes. So, I’m very happy [the doctor] offered me [home care]. […] I have so many medications.” P22.3 Some participants who had experienced pre-hospitalization nonadherence were more aware of it and implemented strategies, such as modifying the timing of intake: “I said to my doctor: « I forget one time out of two […], can I take them in the morning? » We looked it up and yes, I can take it in the morning.” P11.2 In contrast, some participants still struggled with adherence difficulties that predated their hospitalization.
Participants’ motivations for taking medications two months after discharge were to improve health, avoid complications, reduce symptoms, reduce the number of medications in the future, or out of obligation: “I force myself to take them because I want to get to the end of my diabetes, I want to reduce the number of pills as much as possible.” P14.2 A few weeks post-hospitalization, for some participants, health and illness were no longer the priority because of other life imperatives (e.g., family or financial situation).

Discussion

This longitudinal study provided a multifaceted picture of how patients manage, understand, and adhere to their medications from hospital discharge to two months after discharge. Our findings highlighted the varying degrees of participants’ involvement in managing their medications during hospitalization, the individualized information needs during and after hospitalization, the complicated transition from hospital care to autonomous medication management, the adaptation of daily routines around medication once back home, and the adherence difficulties that surfaced in outpatient care, with nonadherence prior to hospitalization being an indicator of post-discharge behavior. Finally, our results confirmed the lack of continuity of care and showed the lack of standardization of patient care experienced by the participants during the transition from hospital to outpatient care.

This in-depth analysis of patients’ experiences reinforces common challenges identified in the existing literature, such as the lack of personalized information [ 9 , 10 , 11 ], loss of autonomy during hospitalization [ 14 , 74 , 75 ], difficulties in obtaining medication at discharge [ 11 , 45 , 76 ], and challenges in understanding treatment modifications and generic substitutions [ 11 , 32 , 77 , 78 ]. Some of these studies were conducted during patients’ hospitalization [ 10 , 75 , 79 ] or up to 12 months after discharge [ 80 , 81 ], but most focused on the few days following hospital discharge [ 9 , 11 , 14 , 82 ]. Qualitative studies on medications at transition have often focused on a specific topic, such as medication information, or on a specific moment in time, and have often included healthcare professionals, which muted patients’ voices [ 9 , 10 , 11 , 47 , 49 ]. Our qualitative longitudinal methodology aimed to capture the temporal dynamics, in-depth narratives, and contextual nuances of patients’ medication experiences during transitions of care [ 59 , 83 ]. This approach provided a comprehensive understanding of how patients’ perspectives and behaviors evolved over time, offering insights into the complex interactions of medication management, understanding, and adherence, and into turning points within patients’ medication journeys. A qualitative longitudinal design was used by Fylan et al. to underline patients’ resilience in medication management during and after discharge, by Brandberg et al. to show the dynamic process of self-management during the four weeks post-discharge, and by Lawton et al. to examine how patients with type 2 diabetes perceived their care after discharge over a period of four years [ 49 , 50 , 51 ].
Our study focused on the first two months following hospitalization; future studies should follow discharged and at-risk patients over a longer period, as “transitions of care do not comprise linear trajectories of patients’ movements, with a starting and finishing point. Instead, they are endless loops of movements” [ 47 ].

Our results provide a particularly thorough description of how participants moved from total dependency regarding medication management during hospitalization to sudden and complete autonomy after discharge, which affected medication management, understanding, and adherence in the first days post-discharge for some participants. Several qualitative studies have described the lack of shared decision-making and the loss of patient autonomy during hospitalization, which affected self-management and created conflicts with healthcare professionals [ 75 , 81 , 84 ]. Our study also highlights nuanced patient experiences, including varying levels of patient needs, involvement, and proactivity during hospitalization and outpatient care; these results help capture perspectives that contrast with some of the literature, which often portrays patients as passive recipients of care [ 14 , 15 , 74 , 75 ]. Shared decision-making and proactive medication management are key elements, as they contribute to a smoother transition and better outcomes for patients post-discharge [ 85 , 86 , 87 ].

Consistent with the literature, our study identified challenges in medication initiation post-discharge [ 16 , 17 , 88 ], but our results also describe how daily routine rapidly takes over, either solidifying adherence behavior or generating barriers to it. Participants’ nonadherence prior to hospitalization influenced their adherence post-hospitalization; this association should be further investigated, as the literature shows that hospitalized patients have high nonadherence scores [ 89 ]. Mortel et al. showed that more than 20% of discharged patients stopped their medications earlier than agreed with the physician and 25% adapted their medication intake [ 90 ]. Furthermore, patients who self-managed their medications perceived their medication as less necessary than patients who received help, which could negatively affect adherence [ 91 ]. Although participants in our study had high BMQ necessity scores and lower concern scores, some expressed doubts about the need for their medications and a lack of motivation a few weeks after discharge. Targeted pharmacy interventions for newly prescribed medications have been shown to improve medication adherence, and hospital discharge is an opportune moment to implement such a service [ 92 , 93 ].

Many medication changes were made during the transition of care (a median of 7 changes during hospitalization and 7 during the two months after discharge), especially medication additions during hospitalization and interruptions after it. While medication changes during hospitalization are well described, the many changes following discharge are less discussed [ 7 , 94 ]. A Danish study showed that approximately 65% of changes made during hospitalization were accepted by primary healthcare professionals, but only 43% of new medications initiated during hospitalization were continued after discharge [ 95 ]. The numerous changes after discharge may be caused by unnecessary intensification of medications during hospitalization, delayed discharge letters, lack of standardized procedures, miscommunication, patient self-management difficulties, or responses to acute situations [ 96 , 97 , 98 ]. In our study, both new and experienced participants faced difficulties in managing and understanding medication changes during the transition of care, whether for newly prescribed medications or for changes to previous ones. These difficulties corroborate the literature [ 9 , 10 , 47 ], and our results showed that a lack of understanding during hospitalization left participants with questions about their medications even weeks after discharge. Physicians, nurses, and pharmacists should jointly give particular attention to patients’ understanding of medication changes during the transition of care and in the months that follow, as medications are likely to undergo as many changes after discharge as during hospitalization.

Implications for practice and future research

The patients’ perspectives in this study showed, at a system level, a lack of standardization in healthcare professionals’ practices regarding medication dispensing and follow-up. Currently, there are no official Swiss guidelines on medication prescription and dispensing during the transition of care, although some international guidelines have been developed for outpatient healthcare professionals [ 3 , 99 , 100 , 101 , 102 ]. Our results suggest several improvements. Patients should be included as partners, and healthcare professionals should systematically assess (i) previous medication adherence, (ii) patients’ desired level of involvement, and (iii) their information needs during hospitalization. Hospital discharge processes should be routinely implemented to standardize discharge preparation, medication prescribing, and dispensing. Discharge should be planned with community pharmacies to ensure that all medications are available and, if necessary, the hospital should supply doses of medications to bridge the gap. A partnership with outpatient healthcare professionals, such as general practitioners, community pharmacists, and homecare nurses, should be set up for effective asynchronous interprofessional collaboration to consolidate patients’ medication management, knowledge, and adherence, and to monitor signs of deterioration or adverse drug events.

Future research should consolidate our first attempt at a framework characterizing medication during the transition of care, using Fig. 2 as a starting point. Contextualized interventions, co-designed by healthcare professionals, patients, and stakeholders, should then be evaluated in a hybrid implementation study assessing both the implementation and the effectiveness of the intervention for the health system [ 103 ].

Limitations

This study has some limitations. First, the transcripts were validated for accuracy by the interviewer but not by a third party, which could have increased the robustness of the transcription; nevertheless, the interviewer followed all methodological recommendations for transcription. Second, patient inclusion took place during the COVID-19 pandemic, which may have affected patient care and the availability of healthcare professionals. Third, we cannot guarantee the accuracy of some participants’ pre-hospitalization medication histories, even though we contacted each participant’s main pharmacy, as participants could have obtained their medications from different pharmacies. Fourth, our findings may not be generalizable to other populations and healthcare systems, because some issues may be specific to multimorbid patients with type 2 diabetes or to the Swiss healthcare setting; nevertheless, the medication issues our participants encountered correlate with findings in the literature. Fifth, only 15 of 21 participants took part in all the interviews, but most took part in at least three, and data saturation was reached. Lastly, given the qualitative and longitudinal design, the discussions during interviews and participants’ reflections between interviews may have influenced their medication management, knowledge, and adherence, even though the study was observational and the interviewer gave no advice or recommendations.

Conclusion

Discharged patients are willing to take steps to better manage, understand, and adhere to their medications, yet they face difficulties both in hospital and in outpatient care. Furthermore, extensive medication changes occur not only during hospitalization but also during the two months following discharge, to which healthcare professionals should pay particular attention. Patients’ differing degrees of involvement, needs, and resources should be carefully considered to enable them to better manage, understand, and adhere to their medications. At a system level, patients’ experiences revealed a lack of standardization of medication practices during the transition of care. The healthcare system should provide the ecosystem needed for the healthcare professionals responsible for or involved in managing patients’ medications during the hospital stay, discharge, and outpatient care to standardize their practices while considering the patient as an active partner.

Data availability

The anonymized quantitative survey datasets and the qualitative codes are available in French from the corresponding author on reasonable request.

Abbreviations

ADE: adverse drug events

AVAS: Adherence Visual Analogue Scale

BMQ: Belief in Medication Questionnaire

COREQ: Consolidated Criteria for Reporting Qualitative Research

CRF: case report form

SD: standard deviation

WHO: World Health Organization

American Academy of Family Physicians. Definition of Continuity of Care. 2020. Accessed 10 July 2022. https://www.aafp.org/about/policies/all/continuity-of-care-definition.html

Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297(8):831–41.


World Health Organization (WHO). Medication Safety in Transitions of Care. 2019.

Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161–7.


Krumholz HM. Post-hospital syndrome–an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–2.


Banholzer S, Dunkelmann L, Haschke M, Derungs A, Exadaktylos A, Krähenbühl S, et al. Retrospective analysis of adverse drug reactions leading to short-term emergency hospital readmission. Swiss Med Wkly. 2021;151:w20400.

Blozik E, Signorell A, Reich O. How does hospitalization affect continuity of drug therapy: an exploratory study. Ther Clin Risk Manag. 2016;12:1277–83.


Allen J, Hutchinson AM, Brown R, Livingston PM. User experience and care for older people transitioning from hospital to home: patients’ and carers’ perspectives. Health Expect. 2018;21(2):518–27.

Daliri S, Bekker CL, Buurman BM, Scholte Op Reimer WJM, van den Bemt BJF, Karapinar-Çarkit F. Barriers and facilitators with medication use during the transition from hospital to home: a qualitative study among patients. BMC Health Serv Res. 2019;19(1):204.

Bekker CL, Mohsenian Naghani S, Natsch S, Wartenberg NS, van den Bemt BJF. Information needs and patient perceptions of the quality of medication information available in hospitals: a mixed method study. Int J Clin Pharm. 2020;42(6):1396–404.

Foulon V, Wuyts J, Desplenter F, Spinewine A, Lacour V, Paulus D, et al. Problems in continuity of medication management upon transition between primary and secondary care: patients’ and professionals’ experiences. Acta Clin Belgica: Int J Clin Lab Med. 2019;74(4):263–71.


Micheli P, Kossovsky MP, Gerstel E, Louis-Simonet M, Sigaud P, Perneger TV, et al. Patients’ knowledge of drug treatments after hospitalisation: the key role of information. Swiss Med Wkly. 2007;137(43–44):614–20.


Ziaeian B, Araujo KL, Van Ness PH, Horwitz LI. Medication reconciliation accuracy and patient understanding of intended medication changes on hospital discharge. J Gen Intern Med. 2012;27(11):1513–20.

Allen J, Hutchinson AM, Brown R, Livingston PM. User experience and care integration in Transitional Care for older people from hospital to home: a Meta-synthesis. Qual Health Res. 2016;27(1):24–36.

Mackridge AJ, Rodgers R, Lee D, Morecroft CW, Krska J. Cross-sectional survey of patients’ need for information and support with medicines after discharge from hospital. Int J Pharm Pract. 2018;26(5):433–41.

Mulhem E, Lick D, Varughese J, Barton E, Ripley T, Haveman J. Adherence to medications after hospital discharge in the elderly. Int J Family Med. 2013;2013:901845.

Fallis BA, Dhalla IA, Klemensberg J, Bell CM. Primary medication non-adherence after discharge from a general internal medicine service. PLoS ONE. 2013;8(5):e61735.

Zhou L, Rupa AP. Categorization and association analysis of risk factors for adverse drug events. Eur J Clin Pharmacol. 2018;74(4):389–404.

Moreau-Gruet F. La multimorbidité chez les personnes de 50 ans et plus. Résultats basés sur l’enquête SHARE (Survey of Health, Ageing and Retirement in Europe). Obsan Bulletin 4/2013. Neuchâtel: Observatoire suisse de la santé; 2013.

Iglay K, Hannachi H, Joseph Howie P, Xu J, Li X, Engel SS, et al. Prevalence and co-prevalence of comorbidities among patients with type 2 diabetes mellitus. Curr Med Res Opin. 2016;32(7):1243–52.

Sibounheuang P, Olson PS, Kittiboonyakun P. Patients’ and healthcare providers’ perspectives on diabetes management: a systematic review of qualitative studies. Res Social Adm Pharm. 2020;16(7):854–74.

Müller-Wieland D, Merkel M, Hamann A, Siegel E, Ottillinger B, Woker R, et al. Survey to estimate the prevalence of type 2 diabetes mellitus in hospital patients in Germany by systematic HbA1c measurement upon admission. Int J Clin Pract. 2018;72(12):e13273.

Blanc AL, Fumeaux T, Stirnemann J, Dupuis Lozeron E, Ourhamoune A, Desmeules J, et al. Development of a predictive score for potentially avoidable hospital readmissions for general internal medicine patients. PLoS ONE. 2019;14(7):e0219348.

Hansen LO, Greenwald JL, Budnitz T, Howell E, Halasyamani L, Maynard G, et al. Project BOOST: effectiveness of a multihospital effort to reduce rehospitalization. J Hosp Med. 2013;8(8):421–7.

Khalid JM, Raluy-Callado M, Curtis BH, Boye KS, Maguire A, Reaney M. Rates and risk of hospitalisation among patients with type 2 diabetes: retrospective cohort study using the UK General Practice Research Database linked to English Hospital Episode statistics. Int J Clin Pract. 2014;68(1):40–8.

Lussier ME, Evans HJ, Wright EA, Gionfriddo MR. The impact of community pharmacist involvement on transitions of care: a systematic review and meta-analysis. J Am Pharm Assoc. 2020;60(1):153–.

van der Heijden A, de Bruijne MC, Nijpels G, Hugtenburg JG. Cost-effectiveness of a clinical medication review in vulnerable older patients at hospital discharge, a randomized controlled trial. Int J Clin Pharm. 2019;41(4):963–71.

Bingham J, Campbell P, Schussel K, Taylor AM, Boesen K, Harrington A, et al. The Discharge Companion Program: an interprofessional collaboration in Transitional Care Model Delivery. Pharm (Basel). 2019;7(2):68.


Farris KB, Carter BL, Xu Y, Dawson JD, Shelsky C, Weetman DB, et al. Effect of a care transition intervention by pharmacists: an RCT. BMC Health Serv Res. 2014;14:406.

Meslot C, Gauchet A, Hagger MS, Chatzisarantis N, Lehmann A, Allenet B. A Randomised Controlled Trial to test the effectiveness of planning strategies to improve Medication Adherence in patients with Cardiovascular Disease. Appl Psychol Health Well Being. 2017;9(1):106–29.

Garnier A, Rouiller N, Gachoud D, Nachar C, Voirol P, Griesser AC, et al. Effectiveness of a transition plan at discharge of patients hospitalized with heart failure: a before-and-after study. ESC Heart Fail. 2018;5(4):657–67.

Daliri S, Bekker CL, Buurman BM, Scholte Op Reimer WJM, van den Bemt BJF, Karapinar-Çarkit F. Medication management during transitions from hospital to home: a focus group study with hospital and primary healthcare providers in the Netherlands. Int J Clin Pharm. 2020.

Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–8.

Leppin AL, Gionfriddo MR, Kessler M, Brito JP, Mair FS, Gallacher K, et al. Preventing 30-day hospital readmissions: a systematic review and meta-analysis of randomized trials. JAMA Intern Med. 2014;174(7):1095–107.

Donzé J, John G, Genné D, Mancinetti M, Gouveia A, Méan M, et al. Effects of a multimodal transitional care intervention in patients at high risk of readmission: the TARGET-READ randomized clinical trial. JAMA Intern Med. 2023.

Rodrigues CR, Harrington AR, Murdock N, Holmes JT, Borzadek EZ, Calabro K, et al. Effect of pharmacy-supported transition-of-care interventions on 30-Day readmissions: a systematic review and Meta-analysis. Ann Pharmacother. 2017;51(10):866–89.

Lam MYY, Dodds LJ, Corlett SA. Engaging patients to access the community pharmacy medicine review service after discharge from hospital: a cross-sectional study in England. Int J Clin Pharm. 2019;41(4):1110–7.

Hossain LN, Fernandez-Llimos F, Luckett T, Moullin JC, Durks D, Franco-Trigo L, et al. Qualitative meta-synthesis of barriers and facilitators that influence the implementation of community pharmacy services: perspectives of patients, nurses and general medical practitioners. BMJ Open. 2017;7(9):e015471.

En-Nasery-de Heer S, Uitvlugt EB, Bet PM, van den Bemt BJF, Alai A, van den Bemt P, et al. Implementation of a pharmacist-led transitional pharmaceutical care programme: process evaluation of medication actions to reduce hospital admissions through a collaboration between Community and Hospital pharmacists (MARCH). J Clin Pharm Ther. 2022.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

De Geest S, Zúñiga F, Brunkert T, Deschodt M, Zullig LL, Wyss K, et al. Powering Swiss health care for the future: implementation science to bridge the valley of death. Swiss Med Wkly. 2020;150:w20323.

Noonan VK, Lyddiatt A, Ware P, Jaglal SB, Riopelle RJ, Bingham CO 3rd, et al. Montreal Accord on patient-reported outcomes (PROs) use series - paper 3: patient-reported outcomes can facilitate shared decision-making and guide self-management. J Clin Epidemiol. 2017;89:125–35.

Hesselink G, Schoonhoven L, Barach P, Spijker A, Gademan P, Kalkman C, et al. Improving patient handovers from hospital to primary care: a systematic review. Ann Intern Med. 2012;157(6):417–28.

Office fédéral de la santé publique (OFSP). Interprofessionnalité dans le domaine de la santé: soins ambulatoires. Accessed 4 January 2024. https://www.bag.admin.ch/bag/fr/home/strategie-und-politik/nationale-gesundheitspolitik/foerderprogramme-der-fachkraefteinitiative-plus/foerderprogramme-interprofessionalitaet.html

Mitchell SE, Laurens V, Weigel GM, Hirschman KB, Scott AM, Nguyen HQ, et al. Care transitions from patient and caregiver perspectives. Ann Fam Med. 2018;16(3):225–31.

Davoody N, Koch S, Krakau I, Hägglund M. Post-discharge stroke patients’ information needs as input to proposing patient-centred eHealth services. BMC Med Inf Decis Mak. 2016;16:66.

Ozavci G, Bucknall T, Woodward-Kron R, Hughes C, Jorm C, Joseph K, et al. A systematic review of older patients’ experiences and perceptions of communication about managing medication across transitions of care. Res Social Adm Pharm. 2021;17(2):273–91.

Fylan B, Armitage G, Naylor D, Blenkinsopp A. A qualitative study of patient involvement in medicines management after hospital discharge: an under-recognised source of systems resilience. BMJ Qual Saf. 2018;27(7):539–46.

Fylan B, Marques I, Ismail H, Breen L, Gardner P, Armitage G, et al. Gaps, traps, bridges and props: a mixed-methods study of resilience in the medicines management system for patients with heart failure at hospital discharge. BMJ Open. 2019;9(2):e023440.

Brandberg C, Ekstedt M, Flink M. Self-management challenges following hospital discharge for patients with multimorbidity: a longitudinal qualitative study of a motivational interviewing intervention. BMJ Open. 2021;11(7):e046896.

Lawton J, Rankin D, Peel E, Douglas M. Patients’ perceptions and experiences of transitions in diabetes care: a longitudinal qualitative study. Health Expect. 2009;12(2):138–48.

Mabire C, Bachnick S, Ausserhofer D, Simon M. Patient readiness for hospital discharge and its relationship to discharge preparation and structural factors: a cross-sectional study. Int J Nurs Stud. 2019;90:13–20.

Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50(3–4):462–80.

Meyer-Massetti C, Hofstetter V, Hedinger-Grogg B, Meier CR, Guglielmo BJ. Medication-related problems during transfer from hospital to home care: baseline data from Switzerland. Int J Clin Pharm. 2018;40(6):1614–20.

Neeman M, Dobrinas M, Maurer S, Tagan D, Sautebin A, Blanc AL, et al. Transition of care: a set of pharmaceutical interventions improves hospital discharge prescriptions from an internal medicine ward. Eur J Intern Med. 2017;38:30–7.

Geese F, Schmitt KU. Interprofessional Collaboration in Complex Patient Care Transition: a qualitative multi-perspective analysis. Healthc (Basel). 2023;11(3).

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. Int J Nurs Stud. 2013;50(5):587–92.

Thomson R, Plumridge L, Holland J, Editorial. Int J Soc Res Methodol. 2003;6(3):185–7.

Audulv Å, Hall EOC, Kneck Å, Westergren T, Fegran L, Pedersen MK, et al. Qualitative longitudinal research in health research: a method study. BMC Med Res Methodol. 2022;22(1):255.

Kim H, Sefcik JS, Bradway C. Characteristics of qualitative descriptive studies: a systematic review. Res Nurs Health. 2017;40(1):23–42.

Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23(4):334–40.

Bradshaw C, Atkinson S, Doody O. Employing a qualitative description Approach in Health Care Research. Glob Qual Nurs Res. 2017;4:2333393617742282.

Bellone JM, Barner JC, Lopez DA. Postdischarge interventions by pharmacists and impact on hospital readmission rates. J Am Pharm Assoc (2003). 2012;52(3):358–62.

Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are Enough? Qual Health Res. 2016;27(4):591–608.

World Health Organization. Adherence to long-term therapies: evidence for action. 2003.

Fisher JD, Fisher WA, Amico KR, Harman JJ. An information-motivation-behavioral skills model of adherence to antiretroviral therapy. Health Psychol. 2006;25(4):462–73.

Bandura A. Health promotion from the perspective of social cognitive theory. Psychol Health. 1998;13(4):623–49.

EuroQol Research Foundation. EQ-5D instruments. Accessed 30 July 2022. https://euroqol.org/eq-5d-instruments/sample-demo/

Jeppesen KM, Coyle JD, Miser WF. Screening questions to predict limited health literacy: a cross-sectional study of patients with diabetes mellitus. Ann Fam Med. 2009;7(1):24–31.

Giordano TP, Guzman D, Clark R, Charlebois ED, Bangsberg DR. Measuring adherence to antiretroviral therapy in a diverse population using a visual analogue scale. HIV Clin Trials. 2004;5(2):74–9.

Horne R, Weinman J, Hankins M. The beliefs about medicines questionnaire: the development and evaluation of a new method for assessing the cognitive representation of medication. Psychol Health. 1999;14(1):1–24.

Horne R, Chapman SC, Parham R, Freemantle N, Forbes A, Cooper V. Understanding patients’ adherence-related beliefs about medicines prescribed for long-term conditions: a meta-analytic review of the necessity-concerns Framework. PLoS ONE. 2013;8(12):e80633.

Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Res Psychol. 2006;3(2):77–101.

Waibel S, Henao D, Aller M-B, Vargas I, Vázquez M-L. What do we know about patients’ perceptions of continuity of care? A meta-synthesis of qualitative studies. Int J Qual Health Care. 2011;24(1):39–48.

Rognan SE, Jørgensen MJ, Mathiesen L, Druedahl LC, Lie HB, Bengtsson K, et al. ‘The way you talk, do I have a choice?’ Patient narratives of medication decision-making during hospitalization. Int J Qualitative Stud Health Well-being. 2023;18(1):2250084.

Michel B, Hemery M, Rybarczyk-Vigouret MC, Wehrle P, Beck M. Drug-dispensing problems community pharmacists face when patients are discharged from hospitals: a study about 537 prescriptions in Alsace. Int J Qual Health Care. 2016;28(6):779–84.

Bruhwiler LD, Hersberger KE, Lutters M. Hospital discharge: what are the problems, information needs and objectives of community pharmacists? A mixed method approach. Pharm Pract (Granada). 2017;15(3):1046.

Knight DA, Thompson D, Mathie E, Dickinson A. ‘Seamless care? Just a list would have helped!’ Older people and their carer’s experiences of support with medication on discharge home from hospital. Health Expect. 2013;16(3):277–91.

Gualandi R, Masella C, Viglione D, Tartaglini D. Exploring the hospital patient journey: what does the patient experience? PLoS ONE. 2019;14(12):e0224899.

Norberg H, Håkansson Lindqvist M, Gustafsson M. Older individuals’ experiences of Medication Management and Care after Discharge from Hospital: an interview study. Patient Prefer Adherence. 2023;17:781–92.

Jones KC, Austad K, Silver S, Cordova-Ramos EG, Fantasia KL, Perez DC, et al. Patient perspectives of the hospital discharge process: a qualitative study. J Patient Exp. 2023;10:23743735231171564.

Hesselink G, Flink M, Olsson M, Barach P, Dudzik-Urbaniak E, Orrego C, et al. Are patients discharged with care? A qualitative study of perceptions and experiences of patients, family members and care providers. BMJ Qual Saf. 2012;21(Suppl 1):i39–49.

Murray SA, Kendall M, Carduff E, Worth A, Harris FM, Lloyd A, et al. Use of serial qualitative interviews to understand patients’ evolving experiences and needs. BMJ. 2009;339:b3702.

Berger ZD, Boss EF, Beach MC. Communication behaviors and patient autonomy in hospital care: a qualitative study. Patient Educ Couns. 2017;100(8):1473–81.

Davis RE, Jacklin R, Sevdalis N, Vincent CA. Patient involvement in patient safety: what factors influence patient participation and engagement? Health Expect. 2007;10(3):259–67.

Greene J, Hibbard JH. Why does patient activation matter? An examination of the relationships between patient activation and health-related outcomes. J Gen Intern Med. 2012;27(5):520–6.

Mitchell SE, Gardiner PM, Sadikova E, Martin JM, Jack BW, Hibbard JH, et al. Patient activation and 30-day post-discharge hospital utilization. J Gen Intern Med. 2014;29(2):349–55.

Weir DL, Motulsky A, Abrahamowicz M, Lee TC, Morgan S, Buckeridge DL, et al. Failure to follow medication changes made at hospital discharge is associated with adverse events in 30 days. Health Serv Res. 2020;55(4):512–23.

Kripalani S, Goggins K, Nwosu S, Schildcrout J, Mixon AS, McNaughton C, et al. Medication nonadherence before hospitalization for Acute Cardiac events. J Health Commun. 2015;20(Suppl 2):34–42.

Mortelmans L, De Baetselier E, Goossens E, Dilles T. What happens after Hospital Discharge? Deficiencies in Medication Management encountered by geriatric patients with polypharmacy. Int J Environ Res Public Health. 2021;18(13).

Mortelmans L, Goossens E, Dilles T. Beliefs about medication after hospital discharge in geriatric patients with polypharmacy. Geriatr Nurs. 2022;43:280–7.

Bandiera C, Ribaut J, Dima AL, Allemann SS, Molesworth K, Kalumiya K et al. Swiss Priority setting on implementing Medication Adherence interventions as Part of the European ENABLE COST action. Int J Public Health. 2022;67.

Elliott R, Boyd M, Nde S, et al. Supporting adherence for people starting a new medication for a long-term condition through community pharmacies: a pragmatic randomised controlled trial of the New Medicine Service. 2015.

Grimmsmann T, Schwabe U, Himmel W. The influence of hospitalisation on drug prescription in primary care–a large-scale follow-up study. Eur J Clin Pharmacol. 2007;63(8):783–90.

Larsen MD, Rosholm JU, Hallas J. The influence of comprehensive geriatric assessment on drug therapy in elderly patients. Eur J Clin Pharmacol. 2014;70(2):233–9.

Viktil KK, Blix HS, Eek AK, Davies MN, Moger TA, Reikvam A. How are drug regimen changes during hospitalisation handled after discharge: a cohort study. BMJ Open. 2012;2(6):e001461.

Strehlau AG, Larsen MD, Søndergaard J, Almarsdóttir AB, Rosholm J-U. General practitioners’ continuation and acceptance of medication changes at sectorial transitions of geriatric patients - a qualitative interview study. BMC Fam Pract. 2018;19(1):168.

Anderson TS, Lee S, Jing B, Fung K, Ngo S, Silvestrini M, et al. Prevalence of diabetes medication intensifications in older adults discharged from US Veterans Health Administration Hospitals. JAMA Netw Open. 2020;3(3):e201511.

Royal Pharmaceutical Society. Keeping patients safe when they transfer between care providers – getting the medicines right. June 2012. Accessed 27 October 2023. https://www.rpharms.com/Portals/0/RPS%20document%20library/Open%20access/Publications/Keeping%20patients%20safe%20transfer%20of%20care%20report.pdf

International Pharmaceutical Federation (FIP). Medicines reconciliation: A toolkit for pharmacists. Accessed 23 September 2023 https://www.fip.org/file/4949

Californian Pharmacist Association. Transitions of Care Resource Guide. https://cdn.ymaws.com/www.cshp.org/resource/resmgr/Files/Practice-Policy/For_Pharmacists/transitions_of_care_final_10.pdf

Royal College of Physicians. Medication safety at hospital discharge: improvement guide and resource. Accessed 18 September 2023. https://www.rcplondon.ac.uk/file/33421/download

Douglas N, Campbell W, Hinckley J. Implementation science: buzzword or game changer. J Speech Lang Hear Res. 2015;58.

Acknowledgements

The authors would like to thank all the patients who took part in this study. We would also like to thank the Geneva University Hospitals Patients Partners + 3P platform as well as Mrs. Tourane Corbière and Mr. Joël Mermoud, patient partners, who reviewed interview guides for clarity and significance. We would like to thank Samuel Fabbi, Vitcoryavarman Koh, and Pierre Repiton for the transcriptions of the audio recordings.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Open access funding provided by the University of Geneva.

Author information

Authors and affiliations

School of Pharmaceutical Sciences, University of Geneva, Geneva, Switzerland

Léa Solh Dost & Marie P. Schneider

Institute of Pharmaceutical Sciences of Western Switzerland, University of Geneva, Geneva, Switzerland

Division of Endocrinology, Diabetes, Hypertension and Nutrition, Department of Medicine, Geneva University Hospitals, Geneva, Switzerland

Giacomo Gastaldi

Contributions

LS, GG, and MS conceptualized and designed the study. LS and GG screened and recruited participants. LS conducted the interviews. LS, GG, and MS performed data analysis and interpretation. LS drafted the manuscript, and LS and MS revised its successive versions. MS and GG approved the final manuscript.

Corresponding authors

Correspondence to Léa Solh Dost or Marie P. Schneider.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was sought and granted by the Cantonal Research Ethics Commission, Geneva (CCER) (2020-01779), and informed consent to participate was obtained from all participants.

Consent for publication

Informed consent for publication was obtained from all participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Solh Dost, L., Gastaldi, G. & Schneider, M. Patient medication management, understanding and adherence during the transition from hospital to outpatient care - a qualitative longitudinal study in polymorbid patients with type 2 diabetes. BMC Health Serv Res 24, 620 (2024). https://doi.org/10.1186/s12913-024-10784-9

Download citation

Received : 28 June 2023

Accepted : 26 February 2024

Published : 13 May 2024

DOI : https://doi.org/10.1186/s12913-024-10784-9


  • Continuity of care
  • Transition of care
  • Patient discharge
  • Medication management
  • Medication adherence
  • Qualitative research
  • Longitudinal studies
  • Patient-centered care
  • Interprofessional collaboration
  • Type 2 diabetes

BMC Health Services Research

ISSN: 1472-6963
