The Visual Display of Quantitative Information

The Visual Display of Quantitative Information, 2nd Edition

Product details

  • Publisher: Graphics Pr; 2nd edition (February 14, 2001)
  • Language: English
  • Paperback: 197 pages
  • ISBN-10: 1930824130
  • ISBN-13: 978-1930824133
  • Item Weight: 1.65 pounds
  • Dimensions: 11 x 9 x 1 inches
  • Best Sellers Rank: #11 in Science & Mathematics, #23 in Education (Books)

About the author

Edward R. Tufte

Statistician/visualizer/artist Edward Tufte is Professor Emeritus of Political Science, Statistics, and Computer Science at Yale University. He wrote, designed, and self-published 5 classic books on data visualization.

The New York Times described Tufte as the "Leonardo da Vinci of data," and Bloomberg as the "Galileo of graphics."

Having completed his book Seeing With Fresh Eyes: Meaning, Space, Data, Truth, ET is now constructing a 234-acre tree farm and sculpture park in northwest Connecticut, which will show his artworks and remain open space in perpetuity.

He founded Graphics Press, ET Modern Gallery/Studio, and Hogpen Hill Farms.

Statistics LibreTexts

1.3: Visual Representation of Data II - Quantitative Variables


  • Jonathan A. Poritz
  • Colorado State University – Pueblo

Now suppose we have a population and a quantitative variable in which we are interested. We get a sample, which could be large or small, and look at the values of our variable for the individuals in that sample. There are two ways we tend to make pictures of datasets like this: stem-and-leaf plots and histograms.

Stem-and-leaf Plots

One somewhat old-fashioned way to handle a modest amount of quantitative data produces something between simply a list of all the data values and a graph. It’s not a bad technique to know about in case one has to write down a dataset by hand, but very tedious – and quite unnecessary, if one uses modern electronic tools instead – if the dataset has more than a couple dozen values. The easiest case of this technique is where the data are all whole numbers in the range \(0-99\). In that case, one can take off the tens place of each number – call it the stem – and put it on the left side of a vertical bar, and then line up all the ones places – each is a leaf – to the right of that stem. The whole thing is called a stem-and-leaf plot or, sometimes, just a stemplot.

It’s important not to skip any stems which are in the middle of the dataset, even if there are no corresponding leaves. It is also a good idea to allow repeated leaves, if there are repeated numbers in the dataset, so that the length of the row of leaves will give a good representation of how much data is in that general group of data values.

Example 1.3.1. Here is a list of the scores of 30 students on a statistics test: \[\begin{matrix} 86 & 80 & 25 & 77 & 73 & 76 & 88 & 90 & 69 & 93\\ 90 & 83 & 70 & 73 & 73 & 70 & 90 & 83 & 71 & 95\\ 40 & 58 & 68 & 69 & 100 & 78 & 87 & 25 & 92 & 74 \end{matrix}\] As we said, using the tens place (and the hundreds place as well, for the data value \(100\)) as the stem and the ones place as the leaf, we get

  2 | 5 5
  3 |
  4 | 0
  5 | 8
  6 | 8 9 9
  7 | 0 0 1 3 3 3 4 6 7 8
  8 | 0 3 3 6 7 8
  9 | 0 0 0 2 3 5
 10 | 0

One nice feature stem-and-leaf plots have is that they contain all of the data values: they do not lose anything (unlike our next visualization method, for example).
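The construction described above can be sketched in a few lines of Python (a minimal sketch, not part of the text; the function and variable names are my own):

```python
from collections import defaultdict

def stem_and_leaf(data):
    """Split each whole-number value into a stem (all but the last digit)
    and a leaf (the last digit), keeping repeated leaves."""
    rows = defaultdict(list)
    for value in sorted(data):
        stem, leaf = divmod(value, 10)
        rows[stem].append(leaf)
    # Do not skip empty stems in the middle of the dataset.
    lines = []
    for stem in range(min(rows), max(rows) + 1):
        leaves = " ".join(str(leaf) for leaf in rows.get(stem, []))
        lines.append(f"{stem:>3} | {leaves}".rstrip())
    return lines

scores = [86, 80, 25, 77, 73, 76, 88, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 25, 92, 74]
for line in stem_and_leaf(scores):
    print(line)
```

Because the leaves are sorted within each stem, the length of each row directly shows how much data lies in that group of values.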

[Frequency] Histograms

The most important visual representation of quantitative data is a histogram. A histogram actually looks a lot like a stem-and-leaf plot, except turned on its side and with each row of numbers turned into a vertical bar, like a bar graph. The height of each of these bars would be how many data values fall into the range covered by that bar.

Another way of saying that is that we would be making bars whose heights were determined by how many scores were in each group of ten. Note there is still a question of into which bar a value right on the edge would count: e.g., does the data value \(50\) count in the bar to the left of that number, or the bar to the right? It doesn’t actually matter which side, but it is important to state which choice is being made.

Example 1.3.2 Continuing with the score data in Example 1.3.1 and putting all data values \(x\) satisfying \(20\le x<30\) in the first bar, values \(x\) satisfying \(30\le x<40\) in the second, values \(x\) satisfying \(40\le x<50\) in the third, etc. – that is, put data values on the edges in the bar to the right – we get the figure

[Figure: histogram of the test scores with bins of width 10]
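The same counting can be done directly in Python (a sketch, not from the text, assuming the half-open bins \([e, e+10)\) just described; the names are my own):

```python
scores = [86, 80, 25, 77, 73, 76, 88, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 25, 92, 74]

def histogram_counts(data, start, width):
    """Count values in half-open bins [e, e + width), so a value
    exactly on an edge lands in the bar to its right."""
    edges = range(start, max(data) + width, width)
    return {e: sum(1 for x in data if e <= x < e + width) for e in edges}

counts = histogram_counts(scores, start=20, width=10)
for e, c in counts.items():
    # A crude text rendering: one '#' per data value in the bin.
    print(f"[{e:3d},{e + width:3d}): {'#' * c}" if False else f"[{e:3d},{e + 10:3d}): {'#' * c}")
```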

Actually, there is no reason that the bars always have to be ten units wide: what is important is that they are all the same width and that they handle the edge cases consistently (whether the left or right bar gets a data value on the edge), but they could be any width. We call the successive ranges of the \(x\) coordinates which get put together for each bar the bins or classes, and it is up to the statistician to choose whichever bins – where they start and how wide they are – show the data best.

Typically, the smaller the bin size, the more variation (precision) can be seen in the bars ... but sometimes there is so much variation that the result seems to have a lot of random jumps up and down, like static on the radio. On the other hand, using a large bin size makes the picture smoother ... but sometimes it is so smooth that very little information is left. Some of this is shown in the following example.

Example 1.3.3. Continuing with the score data in Example 1.3.1 and now using the bins with \(x\) satisfying \(10\le x<12\), then \(12\le x<14\), etc., we get the histogram with bins of width 2:

[Figure: histogram of the test scores with bins of width 2]

If we use the bins with \(x\) satisfying \(10\le x<15\), then \(15\le x<20\), etc., we get the histogram with bins of width 5:

[Figure: histogram of the test scores with bins of width 5]

If we use the bins with \(x\) satisfying \(20\le x<40\), then \(40\le x<60\), etc., we get the histogram with bins of width 20:

[Figure: histogram of the test scores with bins of width 20]

Finally, if we use the bins with \(x\) satisfying \(0\le x<50\), then \(50\le x<100\), and then \(100\le x<150\), we get the histogram with bins of width 50:

[Figure: histogram of the test scores with bins of width 50]

[Relative Frequency] Histograms

Just as we could have bar charts with absolute (§2.1) or relative (§2.2) frequencies, we can do the same for histograms. Above, in §3.2, we made absolute frequency histograms. If, instead, we divide each of the counts used to determine the heights of the bars by the total sample size, we will get fractions or percents – relative frequencies. We should then change the label on the \(y\)-axis and the tick-mark numbers on the \(y\)-axis, but otherwise the graph will look exactly the same (as it did with relative frequency bar charts compared with absolute frequency bar charts).

Example 1.3.4. Let’s make the relative frequency histogram corresponding to the absolute frequency histogram in Example 1.3.2, based on the data from Example 1.3.1 – all we have to do is change the numbers used to make the heights of the bars in the graph by dividing them by the sample size, 30, and then also change the \(y\)-axis label and tick-mark numbers.

[Figure: relative frequency histogram of the test scores with bins of width 10]
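Numerically, the only change is a division by the sample size (a sketch, assuming the same width-10 bins starting at 20; a plotting library would then use these fractions as the bar heights):

```python
scores = [86, 80, 25, 77, 73, 76, 88, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 25, 92, 74]

n = len(scores)  # sample size, 30
edges = range(20, 110, 10)
# Absolute frequencies: how many scores land in each bin [e, e+10).
counts = [sum(1 for x in scores if e <= x < e + 10) for e in edges]
# Relative frequencies: the same counts divided by the sample size.
rel = [c / n for c in counts]
```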

How to Talk About Histograms

Histograms of course tell us what the data values are – the location along the \(x\)-axis of a bar is the value of the variable – and how many of them have each particular value – the height of the bar tells how many data values are in that bin. This information is given a technical name:

[def:distribution] Given a variable defined on a population, or at least on a sample, the distribution of that variable is a list of all the values the variable actually takes on and how many times it takes on these values.

The reason we like the visual version of a distribution, its histogram, is that our visual intuition can then help us answer general, qualitative questions about what those data must be telling us. The first questions we usually want to answer quickly about the data are

  • What is the shape of the histogram?
  • Where is its center ?
  • How much variability [also called spread ] does it show?

When we talk about the general shape of a histogram, we often use the terms

[def:symmskew] A histogram is symmetric if the left half is (approximately) the mirror image of the right half.

We say a histogram is skewed left if the tail on the left side is longer than on the right. In other words, left skew is when the left half of the histogram – half in the sense that the total of the bars in this left part is half of the size of the dataset – extends farther to the left than the right does to the right. Conversely, the histogram is skewed right if the right half extends farther to the right than the left does to the left.
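A common rule of thumb (a heuristic, not part of the definition above, and using the mean and median that the text only makes precise in the next section) is that a long left tail drags the mean below the median. For the score data:

```python
import statistics

scores = [86, 80, 25, 77, 73, 76, 88, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 25, 92, 74]

mean = statistics.mean(scores)      # about 74.87
median = statistics.median(scores)  # 76.5
# The few very low scores (25, 25, 40) pull the mean below the
# median, the usual signature of a left-skewed distribution.
skew_hint = "left" if mean < median else "right" if mean > median else "none"
```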

If the shape of the histogram has one significant peak, then we say it is unimodal , while if it has several such, we say it is multimodal .

It is often easy to point to where the center of a distribution looks like it lies, but it is hard to be precise. It is particularly difficult if the histogram is “noisy,” maybe multimodal. Similarly, looking at a histogram, it is often easy to say it is “quite spread out” or “very concentrated in the center,” but it is then hard to go beyond this general sense.

Precision in our discussion of the center and spread of a dataset will only be possible in the next section, when we work with numerical measures of these features.

Business Insights

Harvard Business School Online's Business Insights Blog provides the career insights you need to achieve your goals and gain confidence in your business skills.


17 Data Visualization Techniques All Professionals Should Know


  • 17 Sep 2019

There’s a growing demand for business analytics and data expertise in the workforce. But you don’t need to be a professional analyst to benefit from data-related skills.

Becoming skilled at common data visualization techniques can help you reap the rewards of data-driven decision-making , including increased confidence and potential cost savings. Learning how to effectively visualize data could be the first step toward using data analytics and data science to your advantage to add value to your organization.

Several data visualization techniques can help you become more effective in your role. Here are 17 essential data visualization techniques all professionals should know, as well as tips to help you effectively present your data.


What Is Data Visualization?

Data visualization is the process of creating graphical representations of information. This process helps the presenter communicate data in a way that’s easy for the viewer to interpret and draw conclusions.

There are many different techniques and tools you can leverage to visualize data, so you want to know which ones to use and when. Here are some of the most important data visualization techniques all professionals should know.

Data Visualization Techniques

The type of data visualization technique you leverage will vary based on the type of data you’re working with, in addition to the story you’re telling with your data .

Here are some important data visualization techniques to know:

  • Pie Chart
  • Bar Chart
  • Histogram
  • Gantt Chart
  • Heat Map
  • Box and Whisker Plot
  • Waterfall Chart
  • Area Chart
  • Scatter Plot
  • Pictogram Chart
  • Timeline
  • Highlight Table
  • Bullet Graph
  • Choropleth Map
  • Word Cloud
  • Network Diagram
  • Correlation Matrix
1. Pie Chart


Pie charts are one of the most common and basic data visualization techniques, used across a wide range of applications. Pie charts are ideal for illustrating proportions, or part-to-whole comparisons.

Because pie charts are relatively simple and easy to read, they’re best suited for audiences who might be unfamiliar with the information or are only interested in the key takeaways. For viewers who require a more thorough explanation of the data, pie charts fall short in their ability to display complex information.

2. Bar Chart


The classic bar chart , or bar graph, is another common and easy-to-use method of data visualization. In this type of visualization, one axis of the chart shows the categories being compared, and the other, a measured value. The length of the bar indicates how each group measures according to the value.

One drawback is that labeling and clarity can become problematic when there are too many categories included. Like pie charts, they can also be too simple for more complex data sets.

3. Histogram


Unlike bar charts, histograms illustrate the distribution of data over a continuous interval or defined period. These visualizations are helpful in identifying where values are concentrated, as well as where there are gaps or unusual values.

Histograms are especially useful for showing the frequency of a particular occurrence. For instance, if you’d like to show how many clicks your website received each day over the last week, you can use a histogram. From this visualization, you can quickly determine which days your website saw the greatest and fewest number of clicks.

4. Gantt Chart


Gantt charts are particularly common in project management, as they’re useful in illustrating a project timeline or progression of tasks. In this type of chart, tasks to be performed are listed on the vertical axis and time intervals on the horizontal axis. Horizontal bars in the body of the chart represent the duration of each activity.

Utilizing Gantt charts to display timelines can be incredibly helpful, and enable team members to keep track of every aspect of a project. Even if you’re not a project management professional, familiarizing yourself with Gantt charts can help you stay organized.

5. Heat Map


A heat map is a type of visualization used to show differences in data through variations in color. These charts use color to communicate values in a way that makes it easy for the viewer to quickly identify trends. Having a clear legend is necessary for a user to successfully read and interpret a heat map.

There are many possible applications of heat maps. For example, if you want to analyze which time of day a retail store makes the most sales, you can use a heat map that shows the day of the week on the vertical axis and time of day on the horizontal axis. Then, by shading in the matrix with colors that correspond to the number of sales at each time of day, you can identify trends in the data that allow you to determine the exact times your store experiences the most sales.

6. Box and Whisker Plot


A box and whisker plot, or box plot, provides a visual summary of data through its quartiles. First, a box is drawn from the first quartile to the third quartile of the data set. A line within the box represents the median. “Whiskers,” or lines, are then drawn extending from the box to the minimum (lower extreme) and maximum (upper extreme). Outliers are represented by individual points that are in line with the whiskers.

This type of chart is helpful in quickly identifying whether the data is symmetric or skewed, as well as providing a visual summary of the data set that can be easily interpreted.
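The five numbers a box plot draws can be computed directly; a sketch using Python's standard library (I use `statistics.quantiles` with `method="inclusive"`, one of several quartile conventions, and whiskers to the min and max as described above; some box plots instead cap whiskers at 1.5 times the interquartile range):

```python
import statistics

def five_number_summary(data):
    """Minimum, Q1, median, Q3, maximum: the five values a box plot
    draws (box from Q1 to Q3, whiskers out to min and max)."""
    q1, med, q3 = statistics.quantiles(data, n=4, method="inclusive")
    return (min(data), q1, med, q3, max(data))

data = [2, 4, 4, 5, 6, 7, 8, 9, 12]
summary = five_number_summary(data)
```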

7. Waterfall Chart


A waterfall chart is a visual representation that illustrates how a value changes as it’s influenced by different factors, such as time. The main goal of this chart is to show the viewer how a value has grown or declined over a defined period. For example, waterfall charts are popular for showing spending or earnings over time.
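Numerically, a waterfall chart is just a running total: each floating bar starts where the previous one ended. A sketch with made-up cash-flow figures:

```python
start = 100                   # opening balance (made-up figure)
changes = [30, -20, 10, -40]  # per-period gains and losses (made up)

levels = [start]
for c in changes:
    levels.append(levels[-1] + c)  # each bar starts where the last ended
# 'levels' now holds the endpoints of each floating bar in the chart.
```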

8. Area Chart


An area chart , or area graph, is a variation on a basic line graph in which the area underneath the line is shaded to represent the total value of each data point. When several data series must be compared on the same graph, stacked area charts are used.

This method of data visualization is useful for showing changes in one or more quantities over time, as well as showing how each quantity combines to make up the whole. Stacked area charts are effective in showing part-to-whole comparisons.

9. Scatter Plot


Another technique commonly used to display data is a scatter plot . A scatter plot displays data for two variables as represented by points plotted against the horizontal and vertical axis. This type of data visualization is useful in illustrating the relationships that exist between variables and can be used to identify trends or correlations in data.

Scatter plots are most effective for fairly large data sets, since it’s often easier to identify trends when there are more data points present. Additionally, the closer the data points are grouped together, the stronger the correlation or trend tends to be.

10. Pictogram Chart


Pictogram charts , or pictograph charts, are particularly useful for presenting simple data in a more visual and engaging way. These charts use icons to visualize data, with each icon representing a different value or category. For example, data about time might be represented by icons of clocks or watches. Each icon can correspond to either a single unit or a set number of units (for example, each icon represents 100 units).

In addition to making the data more engaging, pictogram charts are helpful in situations where language or cultural differences might be a barrier to the audience’s understanding of the data.

11. Timeline


Timelines are the most effective way to visualize a sequence of events in chronological order. They’re typically linear, with key events outlined along the axis. Timelines are used to communicate time-related information and display historical data.

Timelines allow you to highlight the most important events that occurred, or need to occur in the future, and make it easy for the viewer to identify any patterns appearing within the selected time period. While timelines are often relatively simple linear visualizations, they can be made more visually appealing by adding images, colors, fonts, and decorative shapes.

12. Highlight Table


A highlight table is a more engaging alternative to traditional tables. By highlighting cells in the table with color, you can make it easier for viewers to quickly spot trends and patterns in the data. These visualizations are useful for comparing categorical data.

Depending on the data visualization tool you’re using, you may be able to add conditional formatting rules to the table that automatically color cells that meet specified conditions. For instance, when using a highlight table to visualize a company’s sales data, you may color cells red if the sales data is below the goal, or green if sales were above the goal. Unlike a heat map, the colors in a highlight table are discrete and represent a single meaning or value.

13. Bullet Graph


A bullet graph is a variation of a bar graph that can act as an alternative to dashboard gauges to represent performance data. The main use for a bullet graph is to inform the viewer of how a business is performing in comparison to benchmarks that are in place for key business metrics.

In a bullet graph, the darker horizontal bar in the middle of the chart represents the actual value, while the vertical line represents a comparative value, or target. If the horizontal bar passes the vertical line, the target for that metric has been surpassed. Additionally, the segmented colored sections behind the horizontal bar represent range scores, such as “poor,” “fair,” or “good.”

14. Choropleth Map


A choropleth map uses color, shading, and other patterns to visualize numerical values across geographic regions. These visualizations use a progression of color (or shading) on a spectrum to distinguish high values from low.

Choropleth maps allow viewers to see how a variable changes from one region to the next. A potential downside to this type of visualization is that the exact numerical values aren’t easily accessible because the colors represent a range of values. Some data visualization tools, however, allow you to add interactivity to your map so the exact values are accessible.

15. Word Cloud


A word cloud , or tag cloud, is a visual representation of text data in which the size of the word is proportional to its frequency. The more often a specific word appears in a dataset, the larger it appears in the visualization. In addition to size, words often appear bolder or follow a specific color scheme depending on their frequency.

Word clouds are often used on websites and blogs to identify significant keywords and compare differences in textual data between two sources. They are also useful when analyzing qualitative datasets, such as the specific words consumers used to describe a product.
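At its core a word cloud is a frequency count mapped to font sizes; a sketch with made-up text and size limits:

```python
from collections import Counter

text = ("data tells a story and good data visualization "
        "helps the story of the data reach its audience")

freq = Counter(text.split())  # word -> number of occurrences
# Scale each word's font size in proportion to its frequency,
# between a chosen minimum and maximum point size (assumed values).
min_pt, max_pt = 12, 48
most = max(freq.values())
sizes = {w: min_pt + (max_pt - min_pt) * c / most for w, c in freq.items()}
```

A real word cloud would also filter out common stop words ("a", "the", "of") before sizing, so only meaningful keywords dominate the image.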

16. Network Diagram


Network diagrams are a type of data visualization that represent relationships between qualitative data points. These visualizations are composed of nodes and links, also called edges. Nodes are singular data points that are connected to other nodes through edges, which show the relationship between multiple nodes.

There are many use cases for network diagrams, including depicting social networks, highlighting the relationships between employees at an organization, or visualizing product sales across geographic regions.

17. Correlation Matrix


A correlation matrix is a table that shows correlation coefficients between variables. Each cell represents the relationship between two variables, and a color scale is used to communicate whether the variables are correlated and to what extent.

Correlation matrices are useful to summarize and find patterns in large data sets. In business, a correlation matrix might be used to analyze how different data points about a specific product might be related, such as price, advertising spend, launch date, etc.
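A small correlation matrix can be built by computing the Pearson coefficient for every pair of variables; a sketch with hypothetical product data (the numbers and names are invented for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical product data: each entry is one variable's observations.
table = {
    "price":    [10, 12, 14, 16],
    "ad_spend": [1.0, 1.5, 2.0, 2.5],
    "units":    [90, 80, 70, 60],
}
names = list(table)
# One cell per pair of variables; a visualization tool would then
# color each cell by its value.
matrix = {a: {b: round(pearson_r(table[a], table[b]), 3)
              for b in names} for a in names}
```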

Other Data Visualization Options

While the examples listed above are some of the most commonly used techniques, there are many other ways you can visualize data to become a more effective communicator. Some other data visualization options include:

  • Bubble clouds
  • Circle views
  • Dendrograms
  • Dot distribution maps
  • Open-high-low-close charts
  • Polar areas
  • Radial trees
  • Ring charts
  • Sankey diagrams
  • Span charts
  • Streamgraphs
  • Wedge stack graphs
  • Violin plots


Tips For Creating Effective Visualizations

Creating effective data visualizations requires more than just knowing how to choose the best technique for your needs. There are several considerations you should take into account to maximize your effectiveness when it comes to presenting data.

Related : What to Keep in Mind When Creating Data Visualizations in Excel

One of the most important steps is to evaluate your audience. For example, if you’re presenting financial data to a team that works in an unrelated department, you’ll want to choose a fairly simple illustration. On the other hand, if you’re presenting financial data to a team of finance experts, it’s likely you can safely include more complex information.

Another helpful tip is to avoid unnecessary distractions. Although visual elements like animation can be a great way to add interest, they can also distract from the key points the illustration is trying to convey and hinder the viewer’s ability to quickly understand the information.

Finally, be mindful of the colors you utilize, as well as your overall design. While it’s important that your graphs or charts are visually appealing, there are more practical reasons you might choose one color palette over another. For instance, using low contrast colors can make it difficult for your audience to discern differences between data points. Using colors that are too bold, however, can make the illustration overwhelming or distracting for the viewer.

Related : Bad Data Visualization: 5 Examples of Misleading Data

Visuals to Interpret and Share Information

No matter your role or title within an organization, data visualization is a skill that’s important for all professionals. Being able to effectively present complex data through easy-to-understand visual representations is invaluable when it comes to communicating information with members both inside and outside your business.

There’s no shortage in how data visualization can be applied in the real world. Data is playing an increasingly important role in the marketplace today, and data literacy is the first step in understanding how analytics can be used in business.

Are you interested in improving your analytical skills? Learn more about Business Analytics , our eight-week online course that can help you use data to generate insights and tackle business decisions.

This post was updated on January 20, 2022. It was originally published on September 17, 2019.


Ohio University Libraries

Research Data Literacy 101


Tutorials & Books on Visualization

There are endless videos and help tutorials freely available on the web, even for specific programs.

  • Data Visualization For Dummies eBook through Alden Library - This full-color guide introduces you to a variety of ways to handle and synthesize data in much more interesting ways than mere columns and rows of numbers.
  • Information Visualization: An Introduction eBook from Alden Library - Information visualization is the act of gaining insight into data, and is carried out by virtually everyone. It is usually facilitated by turning data - often a collection of numbers - into images that allow much easier comprehension.
  • Information Visualization: Perception for Design, 3rd edition, eBook from Alden Library
  • Designing Creative Content: How Visualising Data Helps Us See - Data vis is as much about what story you want to tell as it is about visual design. This presentation digs into case studies and projects with a particular focus on answering the why, what, and how of data visualization.

What is Data Visualization?

Data visualization “...helps the human eye perceive huge amounts of data in an understandable way. It enables researchers to represent, draft, and display data visually in a way that allows humans to more efficiently identify, visualize, and remember it.” (Orkodashvili, 2014)

Building an information visualization can aid your research process in many ways, but predominantly:

  • Early in the research process - information visualization can help you explore and understand patterns in your data in a non-traditional way, giving you a different perspective.
  • Later in the research process - visualization can help you communicate important aspects of your data in a (hopefully) easy-to-understand fashion.

Orkodashvili, Mariam, PhD, Salem Press Encyclopedia, January, 2014.

Let's Get Started

Choosing your visualization method is important. Ask yourself these questions:

  • Who is my audience, are they familiar with this information?
  • What am I trying to convey? What am I showing them? What is my audience supposed to understand?
  • How much time or space do I have? Does that matter?

Here is a cheat sheet for helping you choose a graphic that best suits your goal: Mike Parkinson's Graphic Cheat Sheet.

There are six types of visualization strategies and each of these six types can be further described with examples on this Periodic Table of Visualization Methods .

  • Data visualization - visual representations of quantitative data in a schematic form
  • Information visualization - interactive, visual representations of data to amplify understanding. Data is transformed into an interactive image
  • Concept visualization - methods to elaborate mostly qualitative concepts, ideas, plans and analysis
  • Strategy visualization - systematic use of complementary visual representations of the analysis, conclusion, and implementation of strategies
  • Metaphor visualization - position information graphically to organize and structure information and convey insight about the information
  • Compound visualization - complementary use of different graphic representation formats in one scheme

Data is Beautiful - TED Talk

David McCandless turns complex data sets (like worldwide military spending, media buzz, Facebook status updates) into beautiful, simple diagrams that tease out unseen patterns and connections. Good design, he suggests, is the best way to navigate information glut — and it may just change the way we see the world.


Analyst Answers

Data & Finance for Work & Life


What is a data display? Definition, Types, & Examples

You may have heard that “data is the new oil” — the most valuable commodity of the 21st century. But just like oil is useless until refined, data is useless until simplified and communicated. Data displays are a tool to help analysts do just that.

Because they’re so important to data, data displays can be found in virtually every discipline that deals with large amounts of information. Consequently, the precise meaning behind data displays has become blurred, resulting in a lot of unanswered questions — many of which you may have already asked yourself.

The purpose of this article is to clear things up. I will briefly define data displays, show examples of 17 popular displays, answer some common questions, compare data displays and data visualization, cover data displays in the context of qualitative data, and explore common data visualization tools.

Don’t forget, you can get the free Intro to Data Analysis eBook to get a strong start.

Data Display Definition

Also known as data visualization, a data display is a visual representation of raw or processed data that aims to communicate a small number of insights about the behavior of an underlying table, which is otherwise difficult or impossible to understand with the naked eye. Common examples include graphs and charts, but any visual depiction of information, even maps, can be considered a data display.

Additionally, the term “data display” can refer to a legal agreement in which a publishing entity, often a stock exchange, obtains the rights from a partner to publicly display the partner’s data. This is important to know, but it’s outside the scope of this article.

Types of Data Display: 17 Actionable Visualizations with Examples

The most common types of data displays are the 17 that follow:

  • bar charts,
  • column charts,
  • stacked bar and column charts,
  • tree maps,
  • line graphs,
  • area charts,
  • stacked area charts,
  • unstacked area charts,
  • combo bar-and-line charts,
  • waterfall charts,
  • tree diagrams,
  • bullet graphs,
  • scatter plots,
  • histograms,
  • heat maps,
  • packed bubble charts, and
  • box & whisker plots.

Let’s look at each of these with an example. I’ll be using Tableau software to show these, but many of them are available in Excel.

Bar Charts

Bar charts show the value of dimensions as horizontal rectangles. They’re useful for comparing simple items side-by-side. This image shows total checkouts for two book IDs.


Column Charts

Column charts show the value of dimensions as vertical rectangles. Like bar charts, they’re useful for comparing simple items side-by-side. This image shows total checkouts for two book IDs.


Stacked Bar/Stacked Columns Charts

Stacked bar or column charts show the value of dimensions with more granular dimensions “inside.” They’re useful for comparing dimensions with additional breakdown. In this image, the columns represent total checkouts by book ID, and the colors represent month of checkout.


Tree Maps

Tree maps show the value of multiple dimensions by their relative size and split them into rectangles in a “spiral” fashion. As you can see here, book IDs are shown in size by the number of checkouts they had.


Line Graphs

Line graphs show the value of two dimensions that are continuous, most often wherein one of the dimensions is time. This image shows five book IDs by number of checkouts over time.


Area Charts

Area charts show the value of a dimension as all the space under a line (often over time).


Stacked Area Charts

Stacked area charts show the values of two dimensions as areas stacked on top of each other, such that one starts where the other ends on the vertical axis.


Unstacked Area Charts

Unstacked area charts show two area charts layered on top of each other such that both start from zero. As you can see below, this view is useful for comparing the sum of two values over time.


Combo Bar-and-Line Charts

A bar-and-line chart shows two different measures — one as a line and the other as bars. These are particularly useful when showing a running total as a line and the individual values of the total as bars.


Waterfall Charts

Waterfall charts show a beginning balance, additions, subtractions, and an ending balance, all as a sequence of connected bars. These are useful for showing additions and subtractions, or corkscrew calculations, around a project or account.


Tree Diagrams

Tree diagrams show the hierarchical relationship between elements of a system.

data analysis types, methods, and techniques tree diagram

Bullet Graphs

Bullet graphs show a column value of actual real numbers (blue bars), a marker for a target number (the small black vertical lines), and shading at different intervals to indicate quality of performance such as bad, acceptable, and good.


Scatter Plots

Scatter plots show points on a plane where two variables meet — very similar to a line graph but used to compare any kind of variables, not just a value over time.


Histograms

Histograms show bars representing groupings of a given dimension. This is easier to understand in the picture: each column represents the number of entries that fall into a range, e.g. 10 values fall into a bin ranging from 1-4, 29 values fall into a bin of 5-8, etc.
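The binning a histogram performs can be sketched in a few lines of plain Python. This is only an illustration: the values and bin edges below are made up, not taken from the charts in this article.

```python
def bin_counts(values, edges):
    """Count how many values fall into each half-open bin [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts

# Hypothetical data: 12 measurements binned into ranges 1-4, 5-8, 9-12
values = [1, 2, 2, 3, 5, 5, 6, 7, 7, 8, 10, 11]
print(bin_counts(values, [1, 5, 9, 13]))  # → [4, 6, 2]
```

Each count in the result becomes the height of one histogram column.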


Heat Maps

Heat maps show the intensity of a grid of values through color shading and size.


Packed Bubble Charts

Packed bubble charts show the intensity of dimension values based on relative size of “bubbles,” which are nothing more than circles.


Box & Whisker Plots

Box & whisker plots show the values of a series based on five markers: the minimum, the lower quartile (25th percentile), the median, the upper quartile (75th percentile), and the maximum.
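Those five markers are easy to compute yourself. Here's a minimal sketch using Python's built-in statistics module on a made-up series:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8, 9]  # hypothetical values, e.g. checkouts per book

# quantiles with n=4 returns the three quartile cut points
q1, median, q3 = statistics.quantiles(data, n=4, method="inclusive")
five_number = (min(data), q1, median, q3, max(data))
print(five_number)  # → (1, 3.0, 5.0, 7.0, 9)
```

A box & whisker plot is just this five-number summary drawn as a box (Q1 to Q3, split at the median) with whiskers out to the min and max.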


Don’t Use Pie Charts!

One of the most common chart types is a pie chart, and I’m asking you to never use one. Why? Because pie charts don’t provide any value to the viewer.

A pie chart shows the percent that parts of a total represent. But what does that mean for the viewer? Visually, it’s difficult to distinguish which slices are largest, unless you have one slice that dominates 80% or more of the pie — or you use labels on each slice.

If you want to show percent of total, use a percent of total bar chart. Or better yet, use a waterfall chart! These will be much more informative to the viewer.
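Whichever chart you pick, the underlying "percent of total" figures are trivial to compute. A quick sketch with invented category totals:

```python
# Hypothetical sales by category
sales = {"Fiction": 120, "Nonfiction": 60, "Poetry": 20}

total = sum(sales.values())
shares = {k: round(100 * v / total, 1) for k, v in sales.items()}
print(shares)  # → {'Fiction': 60.0, 'Nonfiction': 30.0, 'Poetry': 10.0}
```

Feeding these percentages into a bar chart lets the viewer compare slices by length rather than by angle, which is exactly why bars beat pies.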

Packed Bubble Charts: the New Cool Thing

I’m not totally against bubble charts, but they’re not the most insightful visual we can provide. A bubble chart has no axes or structure, so it’s hard to compare values precisely. They’re similar to pie charts in that it’s difficult to draw insight from them.

That said, there is some creative value to the viewer. Bubble charts grab attention, which means you can use them to draw in users and show them more insightful charts.

Why display data in different ways?

In all of the example charts above, I used the same two data tables. What this means is that any given data set can be represented in many different data displays. So why would we represent data in different ways?

The simple answer is that it helps the viewer think differently about information. When I showed the stacked area chart of number of checkouts for two different books, it appeared as though the books followed the same trend.

However, when I showed them in an unstacked view, we clearly see that the book colored orange performed slightly better in Q1 – Q3, whereas the book colored blue performed better in Q4.

Displaying data in different ways allows us to think differently about it — to gain insights and understand it in new and creative ways.

Another reason for using multiple data displays is for an analyst to cater to his/her audience. For example, take another look at the bullet graph and scatter plot above.

Managers in a book selling firm are likely very interested in the performance of sales in Q1 vs Q2, so the bullet graph is better for them. However, a writer looking to better understand the relationship between sales of individual books in Q1 vs Q2 will prefer the scatter plot.

Which data display shows the number of observations?

I’m not sure where this question comes from, but it’s asked a lot. An observation is nothing more than one line in a data table, and many wonder what data display shows the total number of these lines.

In short, any data display can show the number of observations in the underlying data set — it’s only a question of granularity of dimensions. However, the most common data display showing number of observations is a scatter plot. As long as you include a measure at the observation level of detail, the scatter will show the number of observations.

If the goal, however, is simply to count the number of observations, most data table software has a simple count function. In Excel, it’s COUNTA(array of one column). In Tableau, it’s COUNT([observation metric]).
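The same two counts translate directly to code. This sketch mimics COUNTA (every row) versus COUNT (non-null values of a measure) on a small hypothetical table:

```python
# Hypothetical observations: one dict per row of the data table
rows = [
    {"book_id": 1001, "checkouts": 42},
    {"book_id": 1002, "checkouts": 35},
    {"book_id": 1003, "checkouts": None},  # measure is missing for this row
]

total_rows = len(rows)  # every observation, like COUNTA over a full column
non_null = sum(1 for r in rows if r["checkouts"] is not None)  # like COUNT([metric])
print(total_rows, non_null)  # → 3 2
```

The gap between the two numbers is itself useful: it tells you how many observations are missing the measure you plan to chart.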

What data display uses intervals and frequency?

Another common question, and this one is easy. Take another look at the histogram above. It pinpoints intervals and counts the number of records within each interval. The number of records is also known as frequency. In short, the data display showing intervals and frequency is a histogram.

Which data display is misleading?

You may have heard the term “misleading data.” Unfortunately, misleading displays are a common hazard in the world of informatics. While any data display can be misleading, the most common examples are bar charts in which the axis starts at a non-zero value and line charts in which the time axis (x-axis) is reversed. The first results in an inflated visual value of bars, and the second results in the reverse interpretation of a trend over time.
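A little arithmetic shows just how much a truncated axis distorts bars. The values below are invented for illustration:

```python
# Two bars with true values 100 and 110: a 10% difference
a, b = 100, 110
baseline = 95  # the chart's axis starts at 95 instead of 0

true_ratio = b / a                             # how the values actually compare
drawn_ratio = (b - baseline) / (a - baseline)  # how tall the drawn bars look
print(round(true_ratio, 2), drawn_ratio)  # → 1.1 3.0
```

With the axis truncated at 95, the second bar is drawn three times as tall as the first, even though the underlying value is only 10% larger.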

Misleading data is a huge topic and is outside the scope of this article. If you’re interested in it, check out these articles:

  • Data Distortion: What is it? And how is it misleading?
  • Pros & Cons of Data Visualization: the Good, Bad, & Ugly

Data Visualization vs Data Display

Alas, we arrive at what is likely the most common source of confusion surrounding data displays: the difference between a data display and a data visualization.

In most cases, there is no difference between a data visualization and a data display — they are synonyms. However, the term “visualization” is a buzzword that invites the image of aesthetically-pleasing data displays, whereas “data display” can refer to visualizations OR aesthetically-simple charts and graphs like those used in academic papers.

What is a data display in qualitative research?

Since data displays are quantitative by nature, applying them to qualitative research can be challenging, but it’s 100% possible. It requires converting qualitative data into quantitative data. In most cases qualitative data consists of words, so “conversion” involves counting them. In practice, counting manifests as (1) idea coding and/or (2) determining word frequency.

Idea coding consists of reading through text, assigning designated phrases per idea covered, then counting the number of times these phrases appear. Word frequency consists of passing a text through word analyzer software and counting the most common combinations. The details around these techniques are outside the scope of this article, but you can learn more in the article Qualitative Content Analysis: a Simple Guide with Examples.
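Word frequency is the easier of the two to automate. A minimal sketch using Python's collections.Counter; the sample text is invented:

```python
from collections import Counter
import re

text = "The library was quiet. I love a quiet library; quiet helps me read."

# Lowercase, then pull out word tokens (letters and apostrophes)
words = re.findall(r"[a-z']+", text.lower())
freq = Counter(words)
print(freq.most_common(2))  # → [('quiet', 3), ('library', 2)]
```

The resulting counts are ordinary quantitative data, ready to drop into a bar chart or any of the other displays above.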

Once converted into numbers, we can display qualitative data just like we display quantitative data in the 17 actionable visualizations above. So, how do we answer the question “what is a data display in qualitative research?”

In short, a data display in qualitative research is the visualization of words after they have been quantified through idea coding, word frequency, or both.

Data Display Tools and Products (5 Examples)

Any article on data displays worth its salt shows data tools. Here are five free data visualization tools you can get started with today.

Excel

Admittedly, Excel is not a data display tool in the strict sense of the term. However, it offers several user-friendly visualization options. You can navigate to them via the Insert ribbon. The options include:

  • Column charts,
  • Bar charts,
  • Line graphs,
  • Histograms,
  • Box & Whisker Plots,
  • Waterfall charts,
  • Pie charts (but don’t use them!),
  • Scatter plots, and
  • Combo charts

They’re displayed in the icons as shown in the below picture:


Tableau

Tableau is the leading data visualization software, and for good reason. It’s what I used to build all of the data displays earlier in this article. Tableau interacts directly with data stored in Excel, on a local server, in Google Sheets, and many other sources.

It provides one of the most flexible interfaces available, allowing you to rapidly “slice and dice” different dimensions and measures and switch between visualizations with the click of a button.

The one downside is that Tableau takes some time to learn. Its flexibility requires the use of many functional buttons, and you’ll need some time to understand them.

You can download the free version of the paid product, called Tableau Public.

Flourish

I only recently learned about Flourish. It’s a pre-set data display tool that’s much less flexible than Tableau, but much easier to get started on. Given a set of static and dynamic charts to choose from, Flourish prompts you to fill in data in a format compatible with the chart.

Have you seen those “bar chart race” videos where GDP per country or market cap by company is shown over time, with the leaders moving to the top over the years? You can build that in Flourish.

Infogram

Infogram allows the creation of a fixed number of data displays, similar to those available in Excel. Its added value, however, is that Infogram is aesthetically pleasing and browser-based. This means you won’t bore an audience with classic Excel charts, and you can access your work anywhere you have an internet connection. Check out Infogram here.

Datawrapper

Datawrapper is similar to Flourish and Infogram. The key difference is that you have a wide variety of displays to choose from like Flourish, but it requires a standard input format like Infogram.

At the end of the day, Tableau is by far the best visualization software in terms of flexibility and power. But if you’re looking for a simple, accessible solution, Flourish, Infogram and Datawrapper will do the trick. Try them out to see which is best for you!

Data Display in Excel

A quick note on data display in Excel: in addition to using the visualizations discussed above on a normal range, you can use them on a pivot table.

What are the steps to display data in a pivot table?

Imagine you have a normal range in Excel that you want to convert to a pivot table. You can do so by highlighting the range and navigating to Insert > Tables > Pivot Table. Once the field appears, drag the dimensions and measures you want into the fields.

From there, you can create a pivot table data display by placing your cursor anywhere in the pivot table and navigating to Insert and clicking a visualization. The data display is now connected to the pivot table and will change with it.
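The same pivot-then-chart step exists outside Excel. For instance, here's a minimal sketch with pandas' pivot_table; the table and column names are hypothetical:

```python
import pandas as pd

# Hypothetical checkout records (the "normal range")
df = pd.DataFrame({
    "book_id": [1001, 1001, 1002, 1002],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "checkouts": [10, 15, 7, 9],
})

# index = the Rows field, columns = the Columns field, values/aggfunc = the measure
pivot = pd.pivot_table(df, values="checkouts", index="book_id",
                       columns="quarter", aggfunc="sum")
print(pivot.loc[1001, "Q2"])  # → 15
```

Just like the Excel version, a chart built from `pivot` updates when the underlying records change and the pivot is recomputed.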


Data Display from Database

So far we’ve discussed the data display definition, types of displays, answered some common questions, compared data displays and data visualization, covered data displays with qualitative data, and explored common tools.

All of these items can be considered “front-end” topics, meaning they don’t require you to work with programming languages and underlying datasets. However, it’s worth addressing how to create a data display from a database.

At its core, a database is a storage location with two or more joinable tables. While IT professionals would laugh at me for saying this, two tabs in Excel with a data table in each could technically be considered a database. This means that any time you create a data display or visualization using data from a structure of this nature, you’re displaying from a database.

But this would be an oversimplification!

In reality, serious databases are stored on servers accessible with SQL. Displaying data from those databases requires a tool, such as Tableau, capable of accessing those servers directly. If not, you would need to export them into Excel first, then display the data with a tool.

In short, displaying data from a database requires either a powerful visualization tool or preparatory export from the database into Excel using SQL.
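Here's a minimal sketch of that flow using Python's built-in sqlite3 module (table name and values are invented): let the database do the aggregation with SQL, then hand the result rows to whatever charting tool you use.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE checkouts (book_id INTEGER, n INTEGER)")
conn.executemany("INSERT INTO checkouts VALUES (?, ?)",
                 [(1001, 10), (1001, 15), (1002, 7)])

# Aggregate on the database side, then fetch only the summarized rows
rows = conn.execute(
    "SELECT book_id, SUM(n) FROM checkouts GROUP BY book_id ORDER BY book_id"
).fetchall()
print(rows)  # → [(1001, 25), (1002, 7)]
```

Pushing the GROUP BY into SQL means only the small summary table crosses the wire, which is exactly what tools like Tableau do when they connect to a server directly.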

What is a data display in math?

Because we’re talking about data, the numeric affiliation with math comes up often. Data displays are used in math insofar as math is used in almost every discipline. This means we don’t need to explore it extensively. However, the specific use of data displays in statistics is important.

Data Displays and Statistics

Statistics is the specific discipline of math that deals with datasets. More specifically, it deals with descriptive and inferential analytics. In short, descriptive analysis tries to understand a dataset’s distribution, which can be broken down into central tendency and dispersion.

Inferential analysis, on the other hand, uses descriptive statistics on known data to make assumptions about a broader population. If a penny is copper (descriptive), and all pennies are the same, then all pennies are copper (inferential).
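Both halves of descriptive analysis, central tendency and dispersion, are one-liners to compute. A sketch with Python's statistics module on a made-up sample:

```python
import statistics

sample = [4, 8, 6, 5, 7]  # hypothetical daily checkouts

center = statistics.mean(sample)    # central tendency
spread = statistics.pstdev(sample)  # dispersion (population standard deviation)
print(center, round(spread, 3))
```

These two numbers summarize the distribution that displays like histograms and box plots draw in full.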

Data displays in statistics can be used for both descriptive and inferential analysis. They help the analyst understand how well their models represent the data.

Much of statistics is polluted with discipline-specific jargon, and it’s not the goal of this article to deep-dive into that world. Instead, I encourage you to get ahold of one of the data display tools we discussed and start playing with them. This is the best way to learn data display skills.

At AnalystAnswers.com, I’m working to build task-based packets to help you improve your skills. So stay tuned for those!

If you found this article helpful, check out more free content on data, finance, and business analytics at the AnalystAnswers.com homepage !

About the Author

Noah is the founder & Editor-in-Chief at AnalystAnswers. He is a transatlantic professional and entrepreneur with 5+ years of corporate finance and data analytics experience, as well as 3+ years in consumer financial products and business software. He started AnalystAnswers to provide aspiring professionals with accessible explanations of otherwise dense finance and data concepts. Noah believes everyone can benefit from an analytical mindset in a growing digital world. When he's not busy at work, Noah likes to explore new European cities, exercise, and spend time with friends and family.


Data visualization is the representation of data through use of common graphics, such as charts, plots, infographics and even animations. These visual displays of information communicate complex data relationships and data-driven insights in a way that is easy to understand.

Data visualization can be utilized for a variety of purposes, and it’s important to note that it is not only reserved for use by data teams. Management also leverages it to convey organizational structure and hierarchy, while data analysts and data scientists use it to discover and explain patterns and trends. Harvard Business Review (link resides outside ibm.com) categorizes data visualization into four key purposes: idea generation, idea illustration, visual discovery, and everyday data viz. We’ll delve deeper into these below:

Idea generation

Data visualization is commonly used to spur idea generation across teams. Visualizations are frequently leveraged during brainstorming or Design Thinking sessions at the start of a project, supporting the collection of different perspectives and highlighting the common concerns of the collective. While these visualizations are usually unpolished and unrefined, they help set the foundation within the project to ensure that the team is aligned on the problem that they’re looking to address for key stakeholders.

Idea illustration

Data visualization for idea illustration assists in conveying an idea, such as a tactic or process. It is commonly used in learning settings, such as tutorials, certification courses, and centers of excellence, but it can also be used to represent organization structures or processes, facilitating communication between the right individuals for specific tasks. Project managers frequently use Gantt charts and waterfall charts to illustrate workflows. Data modeling also uses abstraction to represent and better understand data flow within an enterprise’s information system, making it easier for developers, business analysts, data architects, and others to understand the relationships in a database or data warehouse.

Visual discovery

Visual discovery and everyday data viz are more closely aligned with data teams. While visual discovery helps data analysts, data scientists, and other data professionals identify patterns and trends within a dataset, everyday data viz supports the subsequent storytelling after a new insight has been found.

Everyday data viz

Data visualization is a critical step in the data science process, helping teams and individuals convey data more effectively to colleagues and decision makers. Teams that manage reporting systems typically leverage defined template views to monitor performance. However, data visualization isn’t limited to performance dashboards. For example, while text mining, an analyst may use a word cloud to capture key concepts, trends, and hidden relationships within this unstructured data. Alternatively, they may utilize a graph structure to illustrate relationships between entities in a knowledge graph. There are a number of ways to represent different types of data, and it’s important to remember that it is a skillset that should extend beyond your core analytics team.


The earliest forms of data visualization can be traced back to the Egyptians before the 17th century, largely used to assist in navigation. As time progressed, people leveraged data visualizations for broader applications, such as in economic, social, and health disciplines. Perhaps most notably, Edward Tufte published The Visual Display of Quantitative Information (link resides outside ibm.com), which illustrated that individuals could utilize data visualization to present data in a more effective manner. His book continues to stand the test of time, especially as companies turn to dashboards to report their performance metrics in real-time. Dashboards are effective data visualization tools for tracking and visualizing data from multiple data sources, providing visibility into the effects of specific behaviors by a team or an adjacent one on performance. Dashboards include common visualization techniques, such as:

  • Tables: These consist of rows and columns used to compare variables. Tables can show a great deal of information in a structured way, but they can also overwhelm users that are simply looking for high-level trends.
  • Pie charts and stacked bar charts: These graphs are divided into sections that represent parts of a whole. They provide a simple way to organize data and compare the size of each component to one another.
  • Line charts and area charts: These visuals show change in one or more quantities by plotting a series of data points over time and are frequently used within predictive analytics. Line graphs utilize lines to demonstrate these changes while area charts connect data points with line segments, stacking variables on top of one another and using color to distinguish between variables.
  • Histograms: This graph plots a distribution of numbers using a bar chart (with no spaces between the bars), representing the quantity of data that falls within a particular range. This visual makes it easy for an end user to identify outliers within a given dataset.
  • Scatter plots: These visuals are beneficial in revealing the relationship between two variables, and they are commonly used within regression data analysis. However, they can sometimes be confused with bubble charts, which are used to visualize three variables via the x-axis, the y-axis, and the size of the bubble.
  • Heat maps: These graphical representations are helpful in visualizing behavioral data by location. This can be a location on a map, or even a webpage.
  • Tree maps: These display hierarchical data as a set of nested shapes, typically rectangles. Tree maps are great for comparing the proportions between categories via their area size.

Access to data visualization tools has never been easier. Open source libraries, such as D3.js, provide a way for analysts to present data in an interactive way, allowing them to engage a broader audience with new data. Some of the most popular open source visualization libraries include:

  • D3.js: It is a front-end JavaScript library for producing dynamic, interactive data visualizations in web browsers.  D3.js  (link resides outside ibm.com) uses HTML, CSS, and SVG to create visual representations of data that can be viewed on any browser. It also provides features for interactions and animations.
  • ECharts: A powerful charting and visualization library that offers an easy way to add intuitive, interactive, and highly customizable charts to products, research papers, presentations, etc. ECharts (link resides outside ibm.com) is based on JavaScript and ZRender, a lightweight canvas library.
  • Vega:   Vega  (link resides outside ibm.com) defines itself as “visualization grammar,” providing support to customize visualizations across large datasets which are accessible from the web.
  • deck.gl: It is part of Uber's open source visualization framework suite.  deck.gl  (link resides outside ibm.com) is a framework, which is used for  exploratory data analysis  on big data. It helps build high-performance GPU-powered visualization on the web.

With so many data visualization tools readily available, there has also been a rise in ineffective information visualization. Visual communication should be simple and deliberate to ensure that your data visualization helps your target audience arrive at your intended insight or conclusion. The following best practices can help ensure your data visualization is useful and clear:

Set the context: It’s important to provide general background information to ground the audience around why this particular data point is important. For example, if e-mail open rates were underperforming, we may want to illustrate how a company’s open rate compares to the overall industry, demonstrating that the company has a problem within this marketing channel. To drive an action, the audience needs to understand how current performance compares to something tangible, like a goal, benchmark, or other key performance indicators (KPIs).

Know your audience(s): Think about who your visualization is designed for and then make sure your data visualization fits their needs. What is that person trying to accomplish? What kind of questions do they care about? Does your visualization address their concerns? You’ll want the data that you provide to motivate people to act within their scope of their role. If you’re unsure if the visualization is clear, present it to one or two people within your target audience to get feedback, allowing you to make additional edits prior to a large presentation.

Choose an effective visual: Specific visuals are designed for specific types of datasets. For instance, scatter plots display the relationship between two variables well, while line graphs display time-series data well. Ensure that the visual actually helps the audience understand your main takeaway. A mismatch between chart and data can have the opposite effect, confusing your audience rather than providing clarity.

Keep it simple: Data visualization tools make it easy to add all sorts of information to your visual. However, just because you can doesn’t mean you should! In data visualization, be deliberate about any additional information you add so that it focuses user attention. For example, do you need data labels on every bar in your bar chart? Perhaps you only need one or two to help illustrate your point. Do you need a variety of colors to communicate your idea? Are you using colors that are accessible to a wide range of audiences (e.g., accounting for color-blind audiences)? Design your data visualization for maximum impact by eliminating information that may distract your target audience.


Presentation of Quantitative Data: Data Visualization

  • First Online: 15 December 2018

  • Hector Guerrero

We often think of data as being numerical values, and in business, those values are often stated in terms of units of currency (dollars, pesos, dinars, etc.). Although data in the form of currency are ubiquitous, it is quite easy to imagine other numerical units: percentages, counts in categories, units of sales, etc. This chapter, in conjunction with Chap. 3 , discusses how we can best use Excel’s graphic capabilities to effectively present quantitative data ( ratio and interval ) to inform and influence an audience, whether it is in euros or some other quantitative measure. In Chaps. 4 and 5 , we will acknowledge that not all data are numerical by focusing on qualitative ( categorical/nominal or ordinal ) data. The process of data gathering often produces a combination of data types, and throughout our discussions, it will be impossible to ignore this fact: quantitative and qualitative data often occur together. Let us begin our study of data visualization .


Author information

Authors and Affiliations

College of William & Mary, Mason School of Business, Williamsburg, VA, USA

Hector Guerrero



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Guerrero, H. (2019). Presentation of Quantitative Data: Data Visualization. In: Excel Data Analysis. Springer, Cham. https://doi.org/10.1007/978-3-030-01279-3_2


Print ISBN : 978-3-030-01278-6

Online ISBN : 978-3-030-01279-3

eBook Packages: Business and Management; Business and Management (R0)



Perspect Behav Sci. v.45(1); 2022 Mar

Quantitative Techniques and Graphical Representations for Interpreting Results from Alternating Treatment Design

Rumen Manolov

1 Department of Social Psychology and Quantitative Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Spain

René Tanious

2 Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium

Patrick Onghena

Associated Data

The data used for the illustrations are available from https://osf.io/ks4p2/

Multiple quantitative methods for single-case experimental design data have been applied to multiple-baseline, withdrawal, and reversal designs. The advanced data analytic techniques historically applied to single-case design data are primarily applicable to designs that involve clear sequential phases, such as repeated measurement during baseline and treatment phases, but these techniques may not be valid for alternating treatment design (ATD) data, where two or more treatments are rapidly alternated. Some recently proposed data analytic techniques applicable to ATD are reviewed. For ATDs with random assignment of condition ordering, Edgington’s randomization test is one type of inferential statistical technique that can complement descriptive data analytic techniques for comparing data paths and for assessing the consistency of effects across blocks in which different conditions are being compared. In addition, several recently developed graphical representations are presented, alongside the commonly used time series line graph. The quantitative and graphical data analytic techniques are illustrated with two previously published data sets. Apart from discussing the potential advantages provided by each of these data analytic techniques, barriers to applying them are reduced by disseminating open access software to quantify or graph data from ATDs.

Alternating treatment design (ATD) is a single-case experimental design (SCED), characterized by a rapid and frequent alternation of conditions (Barlow & Hayes, 1979; Kratochwill & Levin, 1980), that can be used to compare two (or more) different treatments, or a control and a treatment condition. An ATD can be understood as a type of “multielement design” (see Hammond et al., 2013; Kennedy, 2005; Riley-Tillman et al., 2020; see Barlow & Hayes, 1979, for a discussion), but it is important to mention two potential distinctions. On the one hand, the term “multielement design” is employed when an ATD is used for test-control pairwise functional analysis methodology (Hagopian et al., 1997; Hall et al., 2020; Hammond et al., 2013; Iwata et al., 1994). On the other hand, a multielement design can be used for assessing contextual variables and an ATD for assessing interventions (Ledford et al., 2019). Previous publications on best practices for applying ATD recommend a minimum of five data points per condition, and limiting consecutive repeated exposure to two sessions of any one condition (What Works Clearinghouse, 2020; Wolery et al., 2018). The rapid alternation between conditions distinguishes ATDs from other SCEDs, which are characterized by more consecutive repeated measurements for the same condition (Onghena & Edgington, 2005).

In relation to the previously mentioned distinguishing features of ATDs, it is important to identify under what conditions this design is most useful and should be recommended to applied researchers. ATDs are applicable to reversible behaviors (Wolery et al., 2018) that are sensitive to interventions that can be introduced and removed quickly, prior to the maintenance and generalization phases of treatment analyses. Thus, for nonreversible behaviors, an AB (Michiels & Onghena, 2019), a multiple-baseline, and/or a changing-criterion design can be used (Ledford et al., 2019), whereas for reversible behaviors and interventions that require more time to demonstrate a treatment effect (or for an effect to wear off), an ABAB design is typically recommended.

ATD can be useful for applied researchers for several reasons. First, an ATD can be used to compare the efficiency of different interventions (Holcombe et al., 1994), instead of only comparing a baseline to an intervention condition. Second, an ATD enables researchers to perform, in a brief period of time, several attempts to demonstrate whether one condition is superior to the other. This rapid alternation of conditions is useful for reducing the threat of history because it decreases the likelihood that confounding external events occur at exactly the same time as the conditions change (Petursdottir & Carr, 2018). This rapid alternation is also useful for reducing the threat of maturation, which usually entails a gradual process (Petursdottir & Carr, 2018), because the total duration of the ATD study is likely to be shorter when conditions change rapidly and the same condition is not in place for many consecutive measurements. Third, an ATD entailing a random determination of the sequence of conditions further increases the level of internal validity and makes the design equivalent to medical N-of-1 trials, which also entail block randomization and are considered Level-1 empirical evidence for treatment effectiveness for individual cases (Howick et al., 2011). The use of randomization when determining the alternating sequence has been recommended (Barlow & Hayes, 1979; Horner & Odom, 2014; Kazdin, 2011) and is relatively common: Manolov and Onghena (2018) and Tanious and Onghena (2020) report that 51% and 59% of ATD studies, respectively, use randomization in the design. The fact that randomization is not always used limits the data analysis options available to the investigator. In the following paragraphs, we refer to different options for determining the condition sequence for ATDs. It is important to note that the way in which the sequence is determined affects the number of options available for data analysis.

Among the possibilities for a random determination for condition ordering, a completely randomized design (Onghena & Edgington, 2005 ) entails that the conditions are randomly alternated without any restriction, but this could lead to problematic sequences such as AAAAABBBBB or AAABBBBBAA. Given that such sequences do not allow for a rapid alternation of conditions, other randomization techniques are more commonly used to select the ordering of conditions. In particular, a “random alternation with no condition repeating until all have been conducted” (Wolery et al., 2018 , p. 304) describes block randomization (Ledford, 2018 ) or a randomized block design (Onghena & Edgington, 2005 ), in which all conditions are grouped in blocks and the order of conditions within each block is determined at random. For instance, sequences such as AB-BA-BA-AB-BA and BA-AB-BA-BA-AB can be obtained. A randomly determined sequence arising from an ATD with block randomization is equivalent to the N-of-1 trials used in the health sciences (Guyatt et al., 1990 ; Krone et al., 2020 ; Nikles & Mitchell, 2015 ), in which several random-order blocks are referred to as multiple crossovers. Another option is to use “random alternation with no more than two consecutive sessions in a single condition” (Wolery et al., 2018 , p. 304). Such an ATD with restricted randomization could lead to a sequence such as ABBABAABAB or AABABBABBA, with the latter being impossible when using block randomization. An alternative procedure for determining the sequence is through counterbalancing (Barlow & Hayes, 1979 ; Kennedy, 2005 ), which is especially relevant if there are multiple conditions and participants. Counterbalancing enables different ordering of the conditions to be present for different participants. For instance, the sequence could be ABBABAAB for participant 1 and BAABABBA for participant 2.
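The two randomization schemes described above can be sketched in a few lines of code. This is an illustration only: the function names and the rejection-sampling approach for restricted randomization are our choices, not taken from the cited sources.

```python
import random

def block_randomized_sequence(conditions, n_blocks, rng=random):
    """Block randomization: shuffle the full set of conditions within each block,
    so no condition repeats until all have been conducted in that block."""
    seq = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)
        seq.extend(block)
    return "".join(seq)

def restricted_randomized_sequence(conditions, length, max_run=2, rng=random):
    """Restricted randomization: random alternation with no more than `max_run`
    consecutive sessions of any one condition. Implemented here by rejection
    sampling: redraw until the run-length constraint holds."""
    while True:
        seq = [rng.choice(conditions) for _ in range(length)]
        # every window of max_run + 1 sessions must contain at least two conditions
        runs_ok = all(len(set(seq[i:i + max_run + 1])) > 1
                      for i in range(length - max_run))
        if runs_ok and len(set(seq)) == len(set(conditions)):
            return "".join(seq)

print(block_randomized_sequence("AB", 5))        # e.g., "ABBABAABBA"
print(restricted_randomized_sequence("AB", 10))  # e.g., "ABBABAABAB"
```

Note that a block-randomized sequence always satisfies the restricted-randomization constraint for two conditions, but not vice versa (e.g., AABABBABBA cannot arise from block randomization).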

Aims and Organization

In the remaining sections of this manuscript, the emphasis is placed on data analysis options for ATD data. In particular, we illustrate the use of several quantitative techniques as complements to (rather than substitutes for) visual analysis. Quantifications are highlighted in relation to the importance of increasing the objectivity of the assessment of intervention effectiveness (Cox & Friedel, 2020 ; Laraway et al., 2019 ), reducing difficulties with accurately identifying clear differences between ATD data paths (Kranak et al., 2021 ), and making ATD results more likely to meet the requirements for including the data in meta-analyses (Onghena et al., 2018 ). The descriptive quantifications of differences in treatment effects and the inferential techniques (i.e., a randomization test) are applicable to both ATDs with block randomization and restricted randomization. However, the quantifications for assessing the consistency of effects across blocks are only applicable to ATDs with block randomization assignment for the conditions. The analytical options the current manuscript focuses on are scattered across several texts published since 2018. This article is aimed at providing behavior analysts with additional data analytic options, using freely available web-based software.

In the following text, we first discuss visual analysis, several descriptive quantitative techniques, and one inferential statistical technique. Next, we present the potential advantages of the proposed quantifications that complement visual inspection of graphed ATD data. Third, in order to enhance the applicability of the techniques and to make replication of the presented results possible, we describe several existing software tools for these data analysis options. Finally, we illustrate these quantitative data analytic techniques with two previously published ATD data sets.

Data Analysis Options for Alternating Treatment Design

Visual Analysis

Visual inspection has long been the first choice for investigators (Barlow et al., 2009 ; Sidman, 1960 ). The data analysis focuses on the degree to which the data path for one condition is differentiable from (and clearly superior to) the data path for the other condition (Ledford et al., 2019 ). The data paths are represented by lines connecting sessions within each condition of the ATD. Thus, visual analysis assesses the magnitude and consistency of the separation between conditions (Horner & Odom, 2014 ), also referred to as differentiation (Riley-Tillman et al., 2020 ) between the data paths (e.g., whether they cross or not and what is the vertical distance between them). This comparison usually incorporates consistency and level or magnitude of the difference in the dependent variable across the treatment conditions (Ledford et al., 2019 ).

Descriptive Data Analytic Techniques

The main strengths and limitations of the descriptive data analytic techniques reviewed are presented in Table 1. Examples of their use are provided in the section entitled “Illustrations and Comparison of the Results,” including a graphical representation of most of these techniques. In Table 1, we also refer to the particular figure that represents an application of each technique.

Table 1. Summary of the main features of several data analytic techniques applicable to alternating treatments designs

Comparing Data Paths

Quantifying the difference between the data paths entails using observed behavior via direct measurement and linearly interpolated values. The linearly interpolated values are the specific locations within a data path for one condition; they lie between session data points from that condition. The interpolated data points represent the value that hypothetically would have been obtained for a given condition if it had taken place on a given measurement occasion; however, in the ATD, the alternative treatment condition is imposed instead.

One approach to comparing two or more data paths is to use the visual structured criterion (VSC; Lanovaz et al., 2019). The comparison is performed ordinally, that is, considering only whether one condition is superior to the other; it does not measure the degree of superiority (unlike the quantification described in the following paragraph). In particular, the VSC first quantifies the number of comparisons (measurement sessions) for which one condition is superior. Afterwards, the VSC compares this quantity to the cut-off points empirically derived by Lanovaz et al. (2019) for detecting superiority greater than that expected by chance.

A comparison involving actual and linearly interpolated values (abbreviated as ALIV, Manolov & Onghena, 2018 ) assesses the magnitude of effect, by focusing on the average distance between the data paths. Complementary to the visual structured criterion, ALIV quantifies the magnitude of the separation between data paths.
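The idea behind ALIV can be illustrated by pairing each obtained value with a linearly interpolated value from the other condition's data path and averaging the differences. The function name and toy data below are ours, and implementation details (e.g., how sessions outside the other path's range are handled) may differ from the published definition.

```python
import numpy as np

def aliv(sessions, values, labels, a="A", b="B"):
    """Sketch of an ALIV-style quantification: average A-minus-B distance,
    pairing each actual value with a linearly interpolated value from the
    other condition's data path."""
    sessions = np.asarray(sessions, float)
    values = np.asarray(values, float)
    labels = np.asarray(labels)
    sa, va = sessions[labels == a], values[labels == a]
    sb, vb = sessions[labels == b], values[labels == b]
    diffs = []
    for t, v in zip(sa, va):
        if sb.min() <= t <= sb.max():          # interpolate B's path at A's sessions
            diffs.append(v - np.interp(t, sb, vb))
    for t, v in zip(sb, vb):
        if sa.min() <= t <= sa.max():          # interpolate A's path at B's sessions
            diffs.append(np.interp(t, sa, va) - v)
    return float(np.mean(diffs))

# Toy ATD data with alternation sequence ABBABA
sessions = [1, 2, 3, 4, 5, 6]
labels   = ["A", "B", "B", "A", "B", "A"]
values   = [6, 3, 2, 7, 3, 8]
print(aliv(sessions, values, labels))  # 4.25 for this toy series
```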

Assessment of Level and Trend

Comparing data paths is common in visual analysis of graphed SCED data, and in many ways relies on implicit use of interpolated values between sessions for each data path. In addition to visual comparison, a quantification using only the obtained (observed) measurements may be preferable to a quantification using the interpolated values from the ALIV. A possible quantification using only observed values is the “average difference between successive observations” (ADISO; Manolov & Onghena, 2018 ). As suggested by Ledford et al. ( 2019 ), measurements from one condition are compared to adjacent measurements of the other condition. The calculations focus on level, whereas potential distinct trends are quantified via increasing or decreasing differences between adjacent values. For an ATD with block randomization of condition ordering, it is straightforward to perform the comparisons within blocks. However, a substantial limitation arises when ADISO is used for ATD data with restricted randomization because the analyst would have to decide exactly how to segment the alternation sequence (i.e., which comparisons to perform). With different segmentations, the quantification of the difference between conditions can lead to different results. The recommendation is to segment the sequence in such a way that it allows for the maximum number of possible comparisons (e.g., segment AABBABBAABBA as AABB-AB-BA-AB-BA and not as AAB-BA-BBAA-BBA). In cases where different segmentations lead to the same number of comparisons (e.g., BAABAABABABB can be segmented as BAA-BA-AB-AB-ABB and BA-AB-AAB-AB-ABB), a sensitivity analysis comparing the results across different segmentations is warranted.
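For an ATD with block randomization, an ADISO-style comparison of adjacent measurements within blocks can be sketched as follows. The function name is ours, and the sketch assumes each block contains each condition exactly once; the published procedure, including the segmentation rules for restricted randomization, may differ in detail.

```python
def adiso(values, labels, block_size=2, a="A", b="B"):
    """Sketch of an ADISO-style quantification: compute the A-minus-B
    difference within each block of adjacent sessions, then average
    these per-block differences. Assumes each block contains each
    condition (here, once per block)."""
    diffs = []
    for i in range(0, len(values), block_size):
        block_vals = values[i:i + block_size]
        block_labs = labels[i:i + block_size]
        mean_a = sum(v for v, l in zip(block_vals, block_labs) if l == a) / block_labs.count(a)
        mean_b = sum(v for v, l in zip(block_vals, block_labs) if l == b) / block_labs.count(b)
        diffs.append(mean_a - mean_b)
    return sum(diffs) / len(diffs), diffs

# Block-randomized sequence AB-BA-BA, one measurement per session
labels = ["A", "B", "B", "A", "B", "A"]
values = [6, 3, 2, 7, 3, 8]
mean_diff, per_block = adiso(values, labels)
print(mean_diff, per_block)  # per-block differences 3, 5, 5; mean ≈ 4.33
```

Inspecting the per-block differences (rather than only their mean) also gives a first impression of whether the effect is consistent across blocks.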

Taking into Account the Variability within Conditions

In ATD research, the measures of variability within a condition commonly reported are the (1) range and (2) standard deviation (Manolov & Onghena, 2018). Beyond reporting these values, the “visual aid and objective rule” (VAIOR; Manolov & Vannest, 2019) also includes the degree of variability within conditions. VAIOR assesses whether the data from one condition are superior to the data from the other condition, with the latter being summarized by a trend line and a variability band. The trend line is fitted by applying the Theil-Sen method (Vannest et al., 2012) to the data obtained in one condition (usually the baseline condition or another reference condition). The Theil-Sen method is a robust (i.e., resistant to outliers) technique based on finding the median of the slopes of all possible trend lines connecting all values pairwise. The variability band is constructed on the basis of the median absolute deviations from the median, which is a measure of scatter that is also resistant to outliers. The assessment in VAIOR focuses on whether the data from a given condition exceed the variability band. Similar to the visual structured criterion, a dichotomous decision is reached regarding whether there is sufficient evidence for the superiority of one condition over another, with the degree of variability within each condition affecting this determination.
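The two robust ingredients just described, a Theil-Sen trend line and a variability band based on median absolute deviations, can be computed directly. This sketch covers only those ingredients; the full VAIOR decision rule, the band's scaling factor, and the toy data are not taken from the published sources.

```python
import numpy as np

def theil_sen(x, y):
    """Theil-Sen fit: the slope is the median of all pairwise slopes,
    which makes it resistant to outliers."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x)) if x[j] != x[i]]
    slope = float(np.median(slopes))
    intercept = float(np.median(y - slope * x))
    return slope, intercept

def mad_band(y, scale=1.0):
    """Half-width of a variability band: median absolute deviation from the
    median (the scale factor is an illustrative choice)."""
    y = np.asarray(y, float)
    return scale * float(np.median(np.abs(y - np.median(y))))

# Reference condition (e.g., the baseline sessions of an ATD)
x_ref = [1, 3, 5, 7, 9]
y_ref = [4, 5, 4, 6, 5]
slope, intercept = theil_sen(x_ref, y_ref)
half = mad_band(y_ref)

# VAIOR-style check: does a comparison-condition point exceed the band?
x_new, y_new = 8, 9
above = y_new > slope * x_new + intercept + half
print(slope, intercept, half, above)
```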

Consistency of Effects when Comparing Conditions

When analyzing SCED data, the consistency of the data within the same condition and the consistency of effects are two crucial aspects for establishing a functional relation between the independent variable and the change it causes (if any) on the dependent variable (Lane et al., 2017; Maggin et al., 2018). Two different approaches can be used for quantifying the consistency of effects for data obtained following an ATD with block randomization. The first, called “consistency of effects across blocks” (CEAB), is based on variance partitioning (Manolov et al., 2020): the total variance is divided into variance explained by the intervention effect, variance attributed to differences between blocks, and residual or interaction variance. The total variance is the sum of the squared deviations between each value and the mean of all values. The explained variance reflects the squared differences between the mean in each condition and the mean of all values, regardless of the condition in which they were obtained. The variance attributed to the blocks reflects the squared differences between the mean of the values from each block (mixing both conditions being compared) and the mean of all values. The residual or interaction variance represents the lack of consistency of the effect across blocks, because the difference between conditions is larger in some blocks than in others. The smaller the residual or interaction variability, the more consistent the effect was across blocks. In the context of this data analytic technique, several graphical representations are also suggested to facilitate interpreting the CEAB (Manolov et al., 2020), as shown in the section entitled “Illustrations and Comparison of the Results.”
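The variance partitioning underlying CEAB can be illustrated for a small two-condition ATD with one measurement per condition per block. The data layout and function name are illustrative; the published technique may scale or report these components differently.

```python
import numpy as np

def ceab(data):
    """Sketch of CEAB-style variance partitioning for an ATD with block
    randomization. `data[b][c]` is the measurement for block b, condition c
    (one observation per block-condition cell). Sums of squared deviations:
    total = condition (intervention effect) + block + residual, where the
    residual captures the inconsistency of the effect across blocks."""
    y = np.asarray(data, float)                                    # (n_blocks, n_conditions)
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_cond = (y.shape[0] * (y.mean(axis=0) - grand) ** 2).sum()   # condition means vs grand mean
    ss_block = (y.shape[1] * (y.mean(axis=1) - grand) ** 2).sum()  # block means vs grand mean
    ss_resid = ss_total - ss_cond - ss_block
    return {"total": ss_total, "condition": ss_cond,
            "block": ss_block, "residual": ss_resid}

# Three blocks, conditions (A, B) in each row
data = [[6, 3],
        [7, 2],
        [8, 3]]
parts = ceab(data)
print(parts)  # most variability is explained by the condition effect
```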

Another approach is based on a graphical representation called the modified Brinley plot (Blampied, 2017 ) in which the measurements in one condition are plotted (on the Y-axis) against the measurements in the other condition (on the X-axis). A single data point represents the block. For designs that have phases (e.g., a multiple-baseline design or an ABAB design), each point represents the mean of a phase for a condition, with baseline means represented on the X-axis and adjacent intervention phase means on the Y-axis. A diagonal line (slope = 1, intercept = 0) shows the absence of difference or the equality between conditions. If all points are above the diagonal line, there is consistent superiority of treatment over baseline (assuming a high score represents improvement). If all points are below the diagonal then the treatment made behavior worse. The consistency in the magnitude of the effect across blocks is assessed in relation to the degree to which the points are close to a parallel diagonal line marking the average difference between conditions. If the slope is not equal to 1.0, then the interpretation is a bit more complex but quite revealing. If, for example, the treatment works best when baseline values are low, then data points on the left end of the graph will be farther above the baseline than points on the right end.

The calculation is actually a mean absolute percentage error, computed when comparing different conditions, which is why this data analytic technique is abbreviated MAPEDIFF (Manolov & Tanious, 2020). Thus, the modified Brinley plot can be used to represent visually the outcome of the specific comparisons performed between measurements in an ATD with block randomization, or between phases in a multiple-baseline or an ABAB design. It also enables checking whether the direction of the difference is consistently in favor of one of the conditions, whether this difference is of sufficient magnitude for all comparisons (in case a meaningful cut-off point is available), whether treatment efficacy depends on baseline levels, and whether this difference is consistent across all comparisons.
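One possible reading of a MAPEDIFF-style consistency index is sketched below: the mean absolute percentage deviation of the per-block condition differences from their mean, so that 0% indicates a perfectly consistent effect across blocks. This is an illustration of the idea only; the exact formula in Manolov and Tanious (2020) may differ.

```python
def mapediff_sketch(diffs):
    """Hedged sketch of a MAPEDIFF-style consistency index: mean absolute
    percentage deviation of the per-block A-minus-B differences from their
    mean. 0% means the effect is identical in every block."""
    mean_diff = sum(diffs) / len(diffs)
    return 100 * sum(abs(d - mean_diff) for d in diffs) / (len(diffs) * abs(mean_diff))

# Per-block differences from three AB blocks (the points of a modified Brinley plot)
print(mapediff_sketch([3, 5, 5]))  # ≈ 20.5 (percent)
```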

In both cases, the consistency of effects can be conceptualized as the degree to which the effects observed in the different blocks are comparable to the average of these effects across blocks. Nonetheless, we prefer to separate the assessment of variability (usually assessed within each condition separately, before exploring whether there is a difference in variability across conditions) from the assessment of consistency of effects (which necessarily entails a comparison across conditions). These separate assessments are well-aligned with the recommendations for performing visual analysis (Lane et al., 2017; Ledford et al., 2019; Maggin et al., 2018).

Inferential Data Analytical Techniques

In the following section we refer to randomization tests as an inferential technique based on a stochastic element in the design (i.e., the use of randomization for determining the alternation sequence for conditions). In fact, randomization tests were historically the first statistical option proposed for ATD (Edgington, 1967; Kratochwill & Levin, 1980), and several studies using ATD have applied this analytical option (Weaver & Lloyd, 2019). However, despite the frequent use of randomization of condition assignment, randomization tests are not yet commonly applied to SCEDs (Manolov & Onghena, 2018). The aim of the current section is to justify and encourage both the use of randomization of condition presentation and the employment of randomization tests as an inferential analytical tool, as well as to describe their main features. Other inferential techniques, based on random sampling, are not discussed here. The interested reader is referred to regression-based procedures for model-based inference (Onghena, 2020). In particular, these techniques allow modeling the average level of the measurements in each condition and, if desired, the trends. The readings suggested for regression-based options in the SCED context are Moeyaert et al. (2014), Shadish et al. (2013), and Solmi et al. (2014), whereas for options in the context of N-of-1 trials, Krone et al. (2020) and Zucker et al. (2010) can be consulted.

What is Gained by Using Randomization of Condition Ordering

Randomization can address threats to internal validity and increase the scientific credibility of the results of a study, including SCED studies (Edgington, 1996; Kratochwill & Levin, 2010; Tate et al., 2013). For ATDs, alternating the sequence randomly makes it less likely that external events are systematically associated with the exact moments in which conditions change. Randomization, along with counterbalancing, has also been suggested for decreasing condition sequencing effects, i.e., the possibility that one condition consistently precedes the other condition (Horner & Odom, 2014; Kennedy, 2005). The usefulness of randomization for addressing threats to internal validity is likely the reason for the original introduction of ATDs, as discussed by Barlow and Hayes (1979).

The inclusion of randomization of condition ordering in the design also allows the investigator to use a specific analytical technique called randomization tests (Edgington, 1967, 1975). Randomization tests are applicable across different kinds of SCEDs (Craig & Fisher, 2019; Heyvaert & Onghena, 2014; Kratochwill & Levin, 2010), as long as there is randomization in the design, such as the random assignment of conditions to measurement occasions (Edgington, 1980; Levin et al., 2019). Randomization tests are also flexible in the selection of a test statistic according to the type of effect expected (Heyvaert & Onghena, 2014). In particular, the test statistic can be defined according to whether the effect is expected to be a change in level or in slope (Levin et al., 2020), and whether the change is expected to be immediate or delayed (Levin et al., 2017; Michiels & Onghena, 2019). The test statistic is just the computation of a specific measure of the difference between conditions that is of interest to the researcher and for which a p-value will be obtained. Owing to the presence of randomization in condition ordering, there is no need to refer to any theoretical sampling distribution that would require random sampling. The test statistic is usually the mean difference actually obtained, due to its frequent use as a summary measure in ATD (Manolov & Onghena, 2018). Any aspect of the observed data (e.g., level, trend, overlap) or any effect size or quantification (e.g., ALIV; Manolov, 2019) can be used as a test statistic. To conduct the analysis, the test statistic is computed for the actual (obtained) alternation sequence (for instance, ABBAAB). Then the same test statistic is computed for all possible alternation sequences. In particular, the measurements obtained (e.g., 6, 8, 9, 7, 5, 7) maintain their order, as they cannot be placed elsewhere due to the likely presence of autocorrelation in the data (Shadish & Sullivan, 2011). What changes in each possible alternation sequence, from which the actual alternation sequence was selected at random, are the labels that denote the treatment conditions. Thus, when constructing the randomization distribution, other possible orderings/labels such as ABABAB and ABABBA are assigned to each measurement in its original sequence (6, 8, 9, 7, 5, 7), and the test statistic is computed according to these labels. The randomization distribution is constructed by computing the test statistic for all possible alternation sequences, whose number is 2^k when there are k blocks or pairs of conditions and, for each block, a random selection is performed regarding which condition is first and which is second (Onghena & Edgington, 2005). The actually obtained test statistic is compared to the test statistics computed for all possible alternation sequences under the randomization scheme (these are called “pseudostatistics,” because they are computed for alternation sequences that did not actually take place, but could have occurred). If an increase in the target behavior is desired, the p-value is the proportion of pseudostatistics as large as or larger than the actual test statistic. Alternatively, if a decrease is the aim of the intervention, the p-value is the proportion of pseudostatistics as small as or smaller than the actual test statistic.
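The construction of the randomization distribution for an ATD with block randomization can be sketched as follows, using the mean difference as the test statistic. The function name and toy data are ours; with k = 3 blocks, the distribution contains 2^3 = 8 pseudostatistics.

```python
from itertools import product
from statistics import mean

def randomization_test_atd(values, observed_labels, n_blocks, alternative="greater"):
    """Sketch of a randomization test for an ATD with block randomization.
    Test statistic: mean(A) - mean(B). The randomization distribution
    enumerates all 2**k block orderings ("AB" or "BA" within each block);
    the measurements keep their temporal order."""
    def stat(labels):
        a = [v for v, l in zip(values, labels) if l == "A"]
        b = [v for v, l in zip(values, labels) if l == "B"]
        return mean(a) - mean(b)

    observed = stat(observed_labels)
    pseudo = [stat([c for block in ordering for c in block])
              for ordering in product(["AB", "BA"], repeat=n_blocks)]
    if alternative == "greater":   # an increase in the target behavior is desired
        extreme = sum(1 for s in pseudo if s >= observed)
    else:                          # a decrease is desired
        extreme = sum(1 for s in pseudo if s <= observed)
    return observed, extreme / len(pseudo)

# Observed sequence AB-BA-AB, measurements in temporal order
values = [6, 3, 2, 7, 5, 3]
labels = ["A", "B", "B", "A", "A", "B"]
obs, p = randomization_test_atd(values, labels, n_blocks=3)
print(obs, p)  # observed mean difference 10/3; p = 1/8
```

With only three blocks the smallest attainable p-value is 1/8, which illustrates why a sufficient number of blocks is needed for the test to be able to reach conventional significance levels.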

As an additional strength, although their use requires random ordering of conditions for each participant, randomization tests are free from the assumptions of random sampling of participants from a population, normality, or independence of the data (Dugard et al., 2012; Edgington & Onghena, 2007). This is important because, in the SCED context, it cannot be assumed that either the individual or their behavior was sampled at random. Moreover, the data are autocorrelated and not necessarily normally distributed (Pustejovsky et al., 2019; Shadish & Sullivan, 2011; Solomon, 2014). Finally, when using a randomization test, missing data can be handled effectively in a straightforward way by randomizing a missing-data marker, as if it were just another observed value, when obtaining the value of the test statistic for all possible random assignments (De et al., 2020). The use of randomization of condition ordering entails no specific limitation, because it is also possible to combine randomization and counterbalancing (e.g., see Edgington & Onghena, 2007, ch. 6). This could occur, for instance, when determining the sequence at random for participant 1 (e.g., ABABBAAB) and counterbalancing for participant 2 (i.e., BABAABBA).

Interpreting the p-Value

The null hypothesis is that there is no effect of the intervention and thus that the measurements obtained would have been the same under any of the possible randomizations (Jacobs, 2019), and in the ATD case, under any of the possible random sequences. The p-value quantifies the probability of obtaining a difference between conditions as large as, or larger than, the actually observed difference, conditional on there being no difference between the conditions. A small p-value entails that the observed difference is unlikely if the null hypothesis is true. Hence, either we observed an unlikely event or it is not true that the intervention is ineffective. If we do not believe in unlikely events, our tentative conclusion is that the intervention is effective; however, a statistically significant result does not show the actual probability that the intervention is superior to another treatment or baseline.

In addition, it should be noted that p-values should not be interpreted in isolation. Other analytical methods, such as visual analysis and clinical significance measures, as well as the assessment of social validity, should also be considered. We do not suggest that a p-value is the only way of tentatively inferring a substantial treatment effect, because the assessment of the presence of a functional relation is usually performed via visual analysis of graphed data (Maggin et al., 2018), especially in terms of the consistency of the effects (Ledford et al., 2019). However, the p-value based on the presence of randomization in the design is an objective quantification, which is valid thanks to the randomization of condition ordering as it was actually implemented during the study.

Assessing Intervention Effectiveness: Beyond p-Values

A randomization test is not to be applied arbitrarily (Gigerenzer, 2004), nor is it free of interpretation by the researcher (see Perone, 1999). In fact, the researcher selects a priori the most reasonable method for choosing the condition ordering at random (e.g., block randomization vs. restricted randomization; Manolov, 2019) and which test statistic to use according to the expected effects (change in level or change in trend, immediate or delayed), in relation to the six data aspects emphasized by Kratochwill et al. (2013). Moreover, the researcher is encouraged to use other data analytic outcomes besides the p-value; interpreting a p-value does not entail discarding or disregarding other sources of data analysis. In terms of inferential quantifications, confidence intervals are important for informing about the precision of estimates (Wilkinson & The Task Force on Statistical Inference, 1999) and they can be constructed based on randomization test inversion (Michiels et al., 2017). The visual representation of the data should always be inspected, and the individual values can be analyzed. The researchers can, and must, still seek the possible causes of specific outlier measurements according to their knowledge about the client, the context, and the target behavior. Finally, maintenance, generalization, and any subjective opinion expressed by the client or significant others can be considered, along with normative data (if available), to assess the social validity of the results (Horner et al., 2005; Kazdin, 1977).

The Need for Quantifications Complementing Visual Analysis

Visual and quantitative analyses should be used in conjunction.

The quantifications illustrated are not suggested as replacements for the visual inspection of graphed data; they should rather be understood as complementary. Such complements are necessary for several reasons. First, visual and quantitative analyses can achieve different goals. Visual analysis is used to shape an inductive and dynamic approach to identifying the factors controlling the target behavior (Johnson & Cook, 2019; Ledford et al., 2019), or to conduct response-guided experimentation (Ferron et al., 2017). For such purposes, visual analysis enables the researcher to remain in close contact with the data (Fahmie & Hanley, 2008; Perone, 1999). Quantifications used for a summative purpose can complement this by providing objective and easily communicable results that can be aggregated across participants, avoiding subjectivity and potential confirmation bias in visual analysis (Laraway et al., 2019). Such quantification facilitates the analysis of multiple data sets, making it easier than inspecting each one separately (Kranak et al., 2021). In addition, quantifications can be used to integrate the results across studies via meta-analysis (Jenson et al., 2007; Onghena et al., 2018), which is important considering the need to examine the external validity of treatment results. The complementarity between visual and quantitative analyses can be illustrated by data analytic techniques such as ALIV (Manolov & Onghena, 2018), which was developed to quantify exactly the same aspect that is visually evaluated: the degree of separation between data paths. A separation or differentiation may be of such a size that it is easy to identify via visual inspection (Perone, 1999), but a quantification can still be useful for communicating and aggregating the results via meta-analysis of SCED data.

Quantifications Commonly Accompany Visual Analysis

When presenting visual analysis, it is common to refer to visual aids (e.g., trend lines, which are based on quantitative methods) and descriptive quantifications, such as means and overlap indices (Lane & Gast, 2014 ; Ninci, 2019 ). In addition, probabilities (such as the ones arising from a null hypothesis test) have also been suggested as tools for aiding visual analysts: see the dual criteria (Fisher et al., 2003 ), which are commonly recommended and tested in the context of visual analysis (Falligant et al., 2020 ; Lanovaz et al., 2017 ; Wolfe et al., 2018 ).

Why Quantifications Are Useful

Quantifications can help mitigate some of the potential problems associated with visual inspection, such as insufficient interrater agreement (Ninci et al., 2015) or the fact that the graphical features of the plot can affect the result of the visual inspection (Dart & Radley, 2017; Kinney, 2020; Radley et al., 2018). A quantitative analysis requires several decisions, which leads to “researcher degrees of freedom” (Hantula, 2019; Simmons et al., 2011) that can affect the results. However, once an appropriate quantitative method is chosen, it yields the same result regardless of how the data are graphed.

Some of the quantifications illustrated in this article (i.e., Manolov et al., 2020; Manolov & Tanious, 2020) refer to an issue that is critical for SCEDs: replication (Kennedy, 2005; Sidman, 1960; Wolery et al., 2010; see also the special issue of Perspectives on Behavior Science on the “replication crisis”: Hantula, 2019) and the consistency of results across replications (Ledford, 2018; Maggin et al., 2018). Considering that p-values in the classical null hypothesis significance testing approach do not provide information about the replicability of an effect (Branch, 2014; Killeen, 2005), we consider it important to highlight quantifications that emphasize the consistency of effects across replications.

Some Quantifications that are Easy to Understand and to Use

Applied researchers are likely to be more familiar with visual analysis and prefer avoiding the steep learning curve required for specialized skills such as advanced statistical analysis. However, most of the quantifications described in the current text are straightforward and intuitive. For instance, ALIV is simply a quantification of the distance between data paths, whereas ADISO is a quantification of the average difference between successive measurements. Likewise, a randomization test entails the calculation of a test statistic (e.g., the mean difference between conditions) for the actual alternation sequence, as compared with all possible alternation sequences that could have been obtained according to the randomization scheme. There is no need to assume a hypothetical sampling distribution or normally distributed data. Simple quantifications, like the ones illustrated here, are more likely to be used by applied researchers, who are typically more familiar with visual inspection of graphically depicted data. Moreover, the quantifications illustrated here are implemented in intuitive and user-friendly software that is available for free (e.g., https://tamalkd.shinyapps.io/scda/ and https://manolov.shinyapps.io/ATDesign/ ).
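The average-difference idea behind ADISO can be sketched with a few lines of code. This is a simplified sketch of the idea, not the published implementation; the data, the ordering, and the block boundaries are all hypothetical.

```python
# Sketch of the ADISO idea: average the B-minus-A difference within each
# block of adjacent measurements (illustrative data and ordering).
scores = [6, 8, 9, 7, 5, 7]
sequence = "ABABAB"          # hypothetical block-randomized ordering
block_ends = [2, 4, 6]       # last measurement occasion of each block

diffs = []
start = 0
for end in block_ends:
    block_scores = scores[start:end]
    block_labels = sequence[start:end]
    a = [s for lab, s in zip(block_labels, block_scores) if lab == "A"]
    b = [s for lab, s in zip(block_labels, block_scores) if lab == "B"]
    diffs.append(sum(b) / len(b) - sum(a) / len(a))
    start = end

adiso = sum(diffs) / len(diffs)                       # average within-block difference
superiority = sum(d > 0 for d in diffs) / len(diffs)  # share of blocks where B > A
```

The per-block differences also feed directly into the consistency assessments discussed later in the text.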

Open Access Software for Data Analysis

List of software.

The current section provides a list of software that can be used when analyzing ATD data. All of the software listed, except for the Microsoft Excel macro for randomization tests ( https://ex-prt.weebly.com/; Gafurov & Levin, 2020), consists of user-friendly, freely available websites that do not require the user to have any specific program installed.

  • Choosing an alternation sequence at random (i.e., designing the study) and performing a randomization test for data analysis (Heyvaert & Onghena, 2014 ; Levin et al., 2012 ; Onghena & Edgington, 1994 , 2005 ): https://tamalkd.shinyapps.io/scda and https://ex-prt.weebly.com/ .
  • Comparing data paths via ALIV (Manolov & Onghena, 2018 ; with the possibility of obtaining a p -value for ALIV on the basis of randomization test, Manolov, 2019 ) and also as a basis for the visual structured criterion (Lanovaz et al., 2019 ): https://manolov.shinyapps.io/ATDesign .
  • Comparing adjacent data points using ADISO (Manolov & Onghena, 2018 ): https://manolov.shinyapps.io/ATDesign .
  • Visual aid and objective rule (VAIOR; Manolov & Vannest, 2019 ) for complementing visual analysis, using Theil-Sen trend and a variability band: https://manolov.shinyapps.io/TrendMAD .
  • Assessment of consistency on the basis of variance partitioning (Manolov et al., 2020 ): https://manolov.shinyapps.io/ConsistencyRBD .
  • Assessment of consistency in relation to the modified Brinley plot—MAPESIM and MAPEDIFF (Manolov & Tanious, 2020 ): https://manolov.shinyapps.io/Brinley .

Data Files to Use

The structure of the required data file is the same for several instances of software: (1) the randomization test via https://tamalkd.shinyapps.io/scda; (2) the comparison involving actual and linearly interpolated values (ALIV) and ADISO (Manolov & Onghena, 2018: https://manolov.shinyapps.io/ATDesign ); and (3) VAIOR (Manolov & Vannest, 2019: https://manolov.shinyapps.io/TrendMAD ). In particular, a simple text file (.txt extension, from Notepad) is required with two columns, separated either by a tab or a comma. One column must contain the header “condition” and include on each row the letters A and B, marking the condition. The other column should be labeled “score” and include the values obtained at each measurement occasion. One data file is required for each alternation sequence (i.e., for each participant). For ADISO, in order to specify where each block ends (i.e., how to split the alternation sequence into blocks), an additional data file is required: a text file with a single line containing the last measurement occasion of each block, with the numbers separated by commas. For instance, for a design with seven blocks of two conditions, the additional file will contain the following text: 2, 4, 6, 8, 10, 12, 14. This is the specific set of points at which each block ends for a sequence with seven blocks: it is valid not only for the current data, but also for any other sequence that entails seven blocks. Similarly, if there are five blocks, the sequence will be 2, 4, 6, 8, 10 and if there are 20 blocks, the sequence will be 2, 4, 6, 8, 10, 12, 14, 16, 18, 20.
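As a concrete sketch, the two files described above might look as follows; the scores are illustrative, and the parsing code simply shows that the format can be read with standard tools.

```python
import csv
import io

# Sketch of the two-column data file described in the text
# (tab-separated; the values are illustrative).
file_contents = (
    "condition\tscore\n"
    "A\t6\n"
    "B\t8\n"
    "A\t9\n"
    "B\t7\n"
    "A\t5\n"
    "B\t7\n"
)

reader = csv.DictReader(io.StringIO(file_contents), delimiter="\t")
rows = list(reader)
conditions = [row["condition"] for row in rows]
scores = [int(row["score"]) for row in rows]

# For ADISO, a second one-line file gives the last occasion of each
# block, e.g., "2, 4, 6" for three blocks of two conditions.
block_ends = [int(x) for x in "2, 4, 6".split(",")]
```

The same comma-separated parsing applies to the ADISO blocks file regardless of the number of blocks.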

For the assessment of consistency via variance partitioning ( https://manolov.shinyapps.io/ConsistencyRBD ) and the quantifications related to the modified Brinley plot ( https://manolov.shinyapps.io/Brinley ), a different kind of data file is used. There is a column called “Tier,” which contains only the value 1, given that a single ATD sequence (i.e., a single individual) is to be represented, repeated as many times as there are measurements. A second column is called “Id” and it marks the block, repeating each consecutive value twice (e.g., 1, 1, 2, 2, 3, 3, if there are three blocks). The third column is called “Time” and it contains the values marking the measurement occasions (1, 2, up to the number of measurements). A fourth column is called “Score” and contains the measurements. A fifth and final column is called “Phase” and contains the values 0 and 1 for conditions A and B, respectively.
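The five columns just described can be generated programmatically. The following sketch builds them for a hypothetical single-individual sequence with three blocks; the scores and ordering are illustrative.

```python
# Sketch constructing the five-column file described in the text for a
# single ATD sequence with three blocks (illustrative values).
scores = [6, 8, 9, 7, 5, 7]
sequence = "ABABAB"
n = len(scores)
n_blocks = n // 2

tier = [1] * n                                                  # single individual
block_id = [b for b in range(1, n_blocks + 1) for _ in (0, 1)]  # 1,1,2,2,3,3
time = list(range(1, n + 1))                                    # measurement occasions
phase = [0 if lab == "A" else 1 for lab in sequence]            # A -> 0, B -> 1

header = "Tier\tId\tTime\tScore\tPhase"
lines = [header] + [
    f"{t}\t{i}\t{m}\t{s}\t{p}"
    for t, i, m, s, p in zip(tier, block_id, time, scores, phase)
]
data_file = "\n".join(lines)
```

Writing `data_file` to a .txt file yields the tab-separated layout the websites expect.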

In the Open Science Framework project ( https://osf.io/ks4p2 ) we have included the data for the illustrations, organized as described above. The data are available in two Microsoft Excel files; to use them, it is only necessary to copy the data from each worksheet and paste them into a new text (Notepad) file. The pasting creates a tab-separated file.

Use of the Software

The websites use point-and-click menus for loading the text files with the data and for obtaining the results. It is possible to modify the default display of the graphical representations by adding visual aids (for https://tamalkd.shinyapps.io/scda ) and by changing the minimum and maximum value of the Y-axis and the size of the plot (for the remaining websites from the list). The tabs within each website and the options to be chosen include self-explanatory descriptions.

Illustrations and Comparison of the Results

Selection of published data for the illustrations.

Two studies were selected for three reasons: (1) the studies describe procedures consistent with block randomization for the ATD; (2) the studies represent a variety of data patterns: some show clear differences (i.e., completely differentiated data paths that do not cross) and others show more subtle differences (i.e., data paths crossing to different degrees); and (3) the studies include a variety of data analysis techniques (Fletcher et al., 2010, use visual analysis with means and the number of sessions to achieve a criterion, whereas Sjolie et al., 2016, use Cohen’s d and a randomization test).

Only a selection of the possible results from the data analysis procedures described previously is presented here. Results from all the previously mentioned quantitative techniques, applied to each of the two data sets, can be obtained from the websites listed above, using the data files from the Open Science Framework project ( https://osf.io/ks4p2 ). The assessment of the presence, magnitude, and consistency of effects is summarized in Table 2.

Quantifications obtained for the data in the three illustrations. For the comparison involving actual and linearly interpolated values (ALIV) and the average difference between successive observations (ADISO), the calculation performed is A minus B. The ADISO superiority percentage refers to the superiority of B over A, except for Retention for Participant 1008 (superiority of A over B).

CEAB consistency of effects across blocks, MAPE mean absolute percentage error (A denotes condition A, B denotes condition B, DIFF denotes the effect or difference between conditions), VAIOR visual aid and objective rule, VSC visual structured criterion, NA calculation not available for the data set

ATD Data Reanalyzed

In Fletcher et al. (2010), a comparison was performed between TOUCHMATH, a multisensory mathematics program, and a number line, for three middle-school students (Ashley, Robert, and Ken) with moderate and multiple disabilities, in the context of solving single-digit mathematics problems. The data for the comparison phase, in which the two interventions are alternated, are presented in Fig. 1.


Data gathered by Fletcher et al. ( 2010 ) for Ashley (upper panel), Robert (middle panel), and Ken (lower panel). Condition A (number line): blue. Condition B (touch points): yellow. Plots created via https://manolov.shinyapps.io/ConsistencyRBD/

According to Fletcher et al. (2010, p. 454), all students

showed significant improvements using the ‘touch points’ method compared to the number line strategy to solve. . . . During the baseline phase, the students averaged 4% of the single-digit mathematics problems accurately, however, while in the ‘touch points’ phase the students averaged 92% of the problems correctly, compared to only 30% while using the number line strategy.

The authors also mention that each participant reached the criterion of 90% accuracy for three consecutive sessions faster for the touch points program.

Figure 2 represents the differences for each pair of conditions within a block. The closer the dots are to the red horizontal line, the more similar the differences between conditions in each block. Thus, the differences are most similar (i.e., most consistent) for Ken and most variable (i.e., least consistent) for Ashley. In particular, for Ken, most differences are exactly the same, except for the last two. For Robert, all differences are similar except one 0 difference. For Ashley, there is greater variability.


Differences between conditions for each block, for the Fletcher et al. ( 2010 ) data for Ashley (upper panel), Robert (middle panel), and Ken (lower panel). The red horizontal line is the mean difference for each participant. The vertical distance between the dots and the red horizontal line visualizes the consistency of the difference between conditions across blocks. Plots created via https://manolov.shinyapps.io/ConsistencyRBD/ , as presented by Manolov et al. ( 2020 ) in the context of the development of CEAB

In order to further study the results for Ashley, we quantify the degree of consistency for each condition in Fig. 3. This figure represents a modified Brinley plot, constructed as described in Blampied (2017) with the additional graphical aids described by Manolov and Tanious (2020). In particular, for ATDs, the coordinates of each data point are defined by a condition A value (X-axis) and the corresponding condition B value (Y-axis) from the same block of the ATD. Both the left and the right panel of Fig. 3 include the same data and thus the same configuration of data points. The left panel focuses on the condition A measurements, represented on the X-axis, and it represents the distance between each condition A value and the condition A mean via the horizontal dashed lines. In complementary fashion, the right panel focuses on the condition B measurements, represented on the Y-axis, and it represents the distance between each condition B value and the condition B mean via the vertical dashed lines. MAE, standing for mean absolute error (also called “mean absolute deviation”), is the average of these horizontal (left panel) or vertical (right panel) distances. Therefore, the longer these horizontal or vertical lines, the larger the value of MAE and, thus, the lower the consistency within each condition.


Consistency of data points for participant Ashley from the Fletcher et al. ( 2010 ) study. The left panel illustrates the consistency in Condition A (number line): the greater the horizontal distance between the points and the vertical line representing the condition A mean, the lower the consistency. The right panel illustrates the consistency in Condition B (touch points): the greater the vertical distance between the points and the horizontal line representing the condition B mean, the lower the consistency. Plots created via https://manolov.shinyapps.io/Brinley/ , as part of the MAPESIM quantification (Manolov & Tanious, 2020 )

In absolute terms (here, accuracy as a percentage), the MAE from the average level is similar for both conditions: MAE is equal to 14.91 for condition A (number line) and 10.41 for condition B (touch points). However, in relative terms (i.e., the quantification called MAPESIM [mean absolute percentage error for similar conditions]), this variability represents 42% of the mean for condition A (which is equal to 35.38 and thus 14.91/35.38 = 42.14%) and only 11% of the mean for condition B (which is equal to 91.54 and thus 10.41/91.54 = 11.38%), indicating greater consistency for the latter. This is an additional result that can be used to justify the conclusion of a difference between conditions for Ashley. Given that the data paths for Ashley do not cross, the greater variability in condition A can be detected from visual inspection, and MAPESIM serves as a quantitative complement.
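The MAPESIM logic can be sketched with hypothetical percentages (not the published measurements): the same absolute variability weighs more heavily relative to a smaller condition mean.

```python
# Sketch of the MAPESIM idea: mean absolute error from the condition
# mean, expressed relative to that mean (illustrative data).
def mae_and_mape(values):
    m = sum(values) / len(values)
    mae = sum(abs(v - m) for v in values) / len(values)
    return mae, mae / m

condition_a = [20, 30, 40, 50]   # hypothetical condition A percentages
condition_b = [85, 90, 95, 90]   # hypothetical condition B percentages

mae_a, mape_a = mae_and_mape(condition_a)
mae_b, mape_b = mae_and_mape(condition_b)
# Relative to its low mean, condition A's variability looms larger,
# so MAPESIM indicates lower consistency for condition A here.
```
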

Finally, given the greater variability of values in condition A (number line), we checked for evidence regarding whether the improvement observed in condition B (touch points) is sufficient. In Fig. 4, we apply VAIOR (Manolov & Vannest, 2019) to Ashley’s data. Despite the variability, there is no upward or downward overall trend in Condition A. A total of 46% (6 of 13) of the baseline data are beyond the variability band. According to the VAIOR criterion, at least twice this percentage of condition B values needs to be beyond the variability band in order to have an indication of intervention effect. Thus, at least 92% of the condition B data need to exceed the variability band. In fact, this is the case, as all condition B measurements are above the variability band. Considering the 100% superiority of one condition over the other, the visual structured criterion (Lanovaz et al., 2019) also indicates that the “touch points” condition leads to better results. In addition, a randomization test can be performed. In particular, using the mean difference as a test statistic and the website https://tamalkd.shinyapps.io/scda/ , we obtain that the difference between the means of the two conditions is 56.2. The randomization distribution contains 2^13 = 8,192 values, given that there are 13 blocks in the ATD, representing the number of possible alternation sequences using block randomization. The observed test statistic is the largest of all 8,192 values. Thus, the p-value is 1/8,192 = .000122.


Data for participant Ashley from the Fletcher et al. ( 2010 ) study. Theil-Sen trend fitted to Condition A (Number Line), plus a variability band defined by the median absolute deviation. Plots created via https://manolov.shinyapps.io/TrendMAD/ , as part of VAIOR (Manolov & Vannest, 2019 )

The analyses exemplified in this section demonstrate how to obtain a more thorough and detailed picture of differences between conditions and the consistency of effects, when the effect is clear (participant Ken) and when there is a lot of variability in one condition (Ashley). Further analyses may strengthen the conclusion regarding the difference between the conditions or reveal different characteristics of the data. Additional analyses for this data set can be accessed at https://osf.io/ks4p2 .

In Sjolie et al. (2016), a comparison is performed between two versions of speech therapy: with and without exposure to ultrasound visual feedback for postvocalic rhotics (/r/-colored vowels). The authors studied the effects of the two treatments on acquisition, retention, and generalization, hypothesizing that the ultrasound would facilitate acquisition but hinder retention and generalization. Four participants (ages 7–9) were studied. Focusing on some of the most interesting and challenging data patterns, Fig. 5 includes the acquisition data for Participant 1003 and the retention data for Participant 1008.


Data gathered by Sjolie et al. ( 2016 ) for Participant 1003 during acquisition (upper panel) and Participant 1008 during retention. Condition A (No Ultrasound): Blue. Condition B (Ultrasound): Yellow. Plots created via https://manolov.shinyapps.io/ConsistencyRBD/

Sjolie et al. ( 2016 ) report, for acquisition, that

Participant 1003 showed a generally consistent advantage for US sessions over NoUS sessions. Participant 1008 showed signs of acquisition, but no consistent advantage for either US sessions or NoUS sessions. Consistent with the graphical trend, Participant 1003 showed a significant advantage for US sessions over NoUS sessions in acquisition scores ( p = .039, d = 0.78); however, the remaining three subjects did not show a significant advantage for either treatment. (p. 69)

In order to provide a more in-depth analysis of the statistically significant result obtained via a randomization test, as reported by the original authors, we compared several different types of quantitative analyses to see whether they would yield a similar conclusion. For instance, the application of VAIOR (Fig. 6, left panel) indicates that 43% (3 of 7) of the measurements in the condition without ultrasound are outside the variability band constructed around the trend line for this condition. According to the VAIOR criterion for sufficient change, which requires doubling this percentage (Manolov & Vannest, 2019), at least 86% of the measurements of the condition with ultrasound should be outside the upper limit of the variability band. However, this is the case for only 57% (4 of 7) of the measurements.
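The doubling criterion applied here can be sketched numerically. The counts (3 of 7 baseline points outside the band, 4 of 7 intervention points beyond it) are the ones reported above; the construction of the band itself, from the Theil-Sen trend and a variability measure, is omitted for brevity.

```python
# Sketch of the VAIOR doubling criterion described in the text
# (counts from the Participant 1003 acquisition example).
pct_a_outside = 3 / 7                  # condition A points outside its own band
required = min(2 * pct_a_outside, 1.0) # criterion: at least double that percentage
pct_b_outside = 4 / 7                  # condition B points beyond the band

effect_indicated = pct_b_outside >= required  # 57% < 86%, so no indication
```
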


Data gathered by Sjolie et al. ( 2016 ). Left panel: acquisition for Participant 1003; Condition A (No ultrasound): Black triangles. Condition B (Ultrasound): Red, Yellow, and Green Dots. Right panel: retention for Participant 1008; Condition A (Ultrasound): Black triangles. Condition B (No ultrasound): Red, Yellow, and Green Dots. Plots created via https://manolov.shinyapps.io/ATDesign/ , as part of VAIOR (Manolov & Vannest, 2019 )

A different comparison can be performed by comparing data paths, rather than only the actually obtained measurements, using ALIV (Manolov & Onghena, 2018) and the visual structured criterion (Lanovaz et al., 2019). Figure 7 (upper panel) represents this comparison between data paths. With seven measurements per condition, there are 14 measurement occasions and 12 comparisons, which are delimited by the blue vertical lines. Both VSC and ALIV entail omitting the initial value for the ultrasound condition and the last value for the no ultrasound condition. The lines with arrows show a connection between a real data point from one condition and an interpolated point from the other condition. They always originate with the condition denoted as A. Green lines show where condition B (usually the active treatment) is better than condition A (usually the control). Comparing the data paths, it can be seen that the ultrasound condition is superior in 10 of these 12 comparisons. According to the visual structured criterion, one condition being superior to the other in only 10 of 12 comparisons is not sufficient evidence for superiority, as at least 11 of 12 is required, following the criteria derived by Lanovaz et al. (2019).
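The interpolation idea behind this path comparison can be sketched as follows. This is a minimal sketch, not the published implementation: the data are hypothetical, the differences are oriented as B relative to A, and endpoints without a bracketing pair in the other condition are skipped, which reproduces the counting above (14 occasions yield 12 comparisons; here, 6 occasions yield 4).

```python
# Sketch of the ALIV idea: compare each actual value with the linearly
# interpolated value of the other condition's data path at the same
# occasion (illustrative data, strictly alternating sequence).
scores = [6, 8, 9, 7, 5, 7]
sequence = "ABABAB"
occasions = range(1, len(scores) + 1)

points = {"A": [], "B": []}
for t, lab, s in zip(occasions, sequence, scores):
    points[lab].append((t, s))

def interpolate(path, t):
    """Linear interpolation on a data path at occasion t, if t is interior."""
    for (t0, s0), (t1, s1) in zip(path, path[1:]):
        if t0 <= t <= t1:
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
    return None  # t falls outside the path: no comparison possible

diffs = []
for lab, other in (("A", "B"), ("B", "A")):
    for t, s in points[lab]:
        interp = interpolate(points[other], t)
        if interp is not None:
            # Orient every difference as condition B minus condition A.
            diffs.append(s - interp if lab == "B" else interp - s)

aliv = sum(diffs) / len(diffs)   # average vertical separation of the paths
```
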


Data gathered by Sjolie et al. ( 2016 ). Condition A is No ultrasound, whereas Condition B is Ultrasound. Upper panel: acquisition for participant 1003; green marks values greater in Condition B, whereas red marks values greater in Condition A. Lower panel: retention for participant 1008; green marks values greater in Condition A, whereas red marks values greater in Condition B. Plots created via https://manolov.shinyapps.io/ATDesign/ , as part of ALIV (Manolov & Onghena, 2018 )

When we compute ADISO for the acquisition data from Participant 1003 (Fig. 8, upper panel), we see that the mean difference in favor of the ultrasound condition is 13% correct of all trained items, with the ultrasound condition being superior in 85% of the comparisons. Both quantifications appear as a subtitle in the upper panel of Fig. 8. Finally, to assess the consistency of effects, we can look at the color and the size of the arrows in the upper panel of Fig. 8: there is one red arrow (i.e., superiority of condition A) and the green arrows (i.e., superiority of condition B) are of different lengths. Thus, at least visually, according to Fig. 8, the effect does not seem to be very consistent. In addition, we can also inspect the modified Brinley plot (Fig. 9, left panel). This plot is slightly different from Fig. 3, in that a dashed diagonal line is added, parallel to the solid diagonal line (i.e., no difference) and representing the mean difference between the conditions. The consistency of effect is represented as the vertical distance between the red dots and the dashed diagonal line: the longer the distances, the lower the consistency. Overall, the degree of consistency of effect is quantified as an MAE (equal to 12.75) and as MAE relative to the mean difference (which is the MAPEDIFF quantification). Once again, the effect does not seem to be consistent, considering that the typical distance between the overall mean difference and the difference between conditions within each block is 97% of the overall mean difference (i.e., MAPEDIFF = 0.97). This may be the reason why the statistically significant result from the randomization test, reported by Sjolie et al. (2016), is not detected by VAIOR.
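The MAPEDIFF idea can be sketched with hypothetical block-by-block differences (not the published values): the quantification measures how far those differences stray from the overall mean difference, relative to that mean difference.

```python
# Sketch of the MAPEDIFF idea (illustrative block differences).
block_diffs = [10, 20, 30]   # hypothetical B-minus-A difference per block

mean_diff = sum(block_diffs) / len(block_diffs)
mae = sum(abs(d - mean_diff) for d in block_diffs) / len(block_diffs)
mapediff = mae / mean_diff   # near 0 = consistent effect; large = inconsistent
```

A MAPEDIFF close to 1 (as in the example in the text) means the typical block-level deviation is as large as the mean effect itself.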


Data gathered by Sjolie et al. ( 2016 ). Condition A is No ultrasound, whereas Condition B is Ultrasound. Upper panel: acquisition for participant 1003; green marks values greater in Condition B, whereas red marks values greater in Condition A. Lower panel: retention for participant 1008; green marks values greater in Condition A, whereas red marks values greater in Condition B. Plots created via https://manolov.shinyapps.io/ATDesign/ , as part of ADISO (Manolov & Onghena, 2018 )


Consistency of Effects for the Sjolie et al. ( 2016 ) study. The X-axis represents the measurements in condition A (No Ultrasound). The Y-axis represents the measurements in condition B (Ultrasound). Left panel: acquisition for participant 1003. Right panel: retention for participant 1008. The greater the vertical distance between the red dots and the dashed diagonal line, the lower the consistency of differences between conditions across blocks. Plots created via https://manolov.shinyapps.io/Brinley/ , as part of the MAPEDIFF quantification (Manolov & Tanious, 2020 )

For retention, Sjolie et al. (2016, p. 70) report “a negligible difference between US sessions and NoUS sessions. None of the participants showed a statistically significant advantage for one treatment over the other in retention scores.” It is noteworthy that for Participant 1008 the authors report d = −0.303 and a p-value of .297. Further analyses can reveal whether this lack of statistical significance hides a relevant difference in favor of the condition without ultrasound. It should be noted that in the right panel of Fig. 6, representing VAIOR, the condition with ultrasound is treated and depicted as condition A and the condition without ultrasound as condition B. This is opposite to the representation in the left panel of Fig. 6, but we proceeded in this way in order to explore whether there is any evidence for the superiority of the condition without ultrasound. The application of VAIOR reveals that 43% (3 of 7) of the measurements in the condition with ultrasound are outside the variability band constructed around the trend line for this condition. According to the VAIOR criterion (Manolov & Vannest, 2019), at least 86% of the measurements of the condition without ultrasound should be above that variability band. However, just as for acquisition, only 57% (4 of 7) of the intervention phase data points exceed the projected variability band. Using the visual structured criterion (Lanovaz et al., 2019) for comparing data paths, we see that the condition without ultrasound is superior in 8 of the 12 comparisons (as depicted in Fig. 7, bottom panel), which is insufficient evidence. Thus, the conclusion appears to be the same as for acquisition for Participant 1003.

However, when computing ADISO (Fig. 8, lower panel), we see that the mean difference in favor of the no-ultrasound condition is 5% of trained items correct, with the ultrasound condition being superior in only 42% of the comparisons, which is much less than the superiority of the ultrasound condition observed for acquisition for Participant 1003. Finally, the low degree of superiority for retention for Participant 1008 is well-aligned with the results regarding the consistency of the effect. Focusing on the modified Brinley plot represented in the right panel of Fig. 9, it can be seen that the differences between conditions in each block are relatively far from the overall mean difference. That is, the vertical distance between the dots and the dashed diagonal line is relatively large, compared to the mean difference. In particular, as indicated in the right panel of Fig. 9, the typical distance between the overall mean difference and the difference between conditions within each block is more than three times (342%) the overall mean difference.
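The consistency quantification described above can be sketched as follows: take the per-block differences between conditions, compute their typical (mean absolute) distance from the overall mean difference, and express that distance as a fraction of the overall mean. This is a minimal illustration of the idea, not the exact MAPEDIFF implementation of Manolov and Tanious (2020), and the block scores below are invented for illustration only.

```python
from statistics import mean

def consistency_ratio(cond_a, cond_b):
    """Typical (mean absolute) distance of the per-block differences
    from the overall mean difference, expressed as a fraction of that
    overall mean (assumed nonzero). Values well above 1 indicate an
    inconsistent effect, as in the 342% reported for Participant 1008."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    overall = mean(diffs)
    typical_distance = mean(abs(d - overall) for d in diffs)
    return typical_distance / abs(overall)

# Invented block scores: a small mean difference swamped by
# block-to-block fluctuation.
no_us = [60, 45, 70, 50, 65, 40, 55]
us = [50, 55, 60, 60, 50, 55, 45]
ratio = consistency_ratio(no_us, us)  # well above 1 (inconsistent)
```

A perfectly consistent effect (every block showing the same difference) yields a ratio of 0, whereas ratios above 1 indicate that the block-to-block variation exceeds the mean effect itself.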

Overall, the analyses performed here in addition to the ones reported by Sjolie et al. (2016) provide further information about the effectiveness of the two treatments (beyond a quantification expressed as a standardized mean difference) and the consistency of the effect (beyond a p-value). More complete results can be accessed at https://osf.io/ks4p2 .

We focused on ATDs, a form of SCEDs that has been the focus of several recent data-analytical developments. Several of these developments were reviewed and illustrated, with an emphasis on techniques that can be implemented by applied researchers with relatively minimal training in advanced quantitative methods. When using ATDs, several challenges need to be addressed. The specific design and the method for generating the alternation sequence of treatment conditions need to be correctly labeled and described in sufficient detail to enable replication. In terms of data analysis, randomizing the condition ordering in the design enables an analytical technique that allows for tentative causal inference, but the p-values need to be derived and interpreted correctly. These issues are discussed below.

Need for Transparent Reporting

Labeling the Design

Transparent reporting is necessary with regard to the design used to isolate the effects of the independent variable on the dependent variable, as reflected in the SCRIBE guidelines for SCEDs (Tate et al., 2016) and the CENT guidelines for N-of-1 trials in the health sciences (Vohra et al., 2015). To begin with, the name of the design should be correctly and consistently specified across studies, so that they can be located and included in systematic reviews and meta-analyses. Difficulties might arise because the same design is sometimes referred to by different names (e.g., as an ATD or a multielement design; Hammond & Gast, 2010; Wolery et al., 2018). Any tentative recommendation made in the current manuscript has to consider the traditions for data analysis in different fields. Thus, following Ledford et al. (2019), one option would be to reserve the term “ATD” for designs in which there is an intervention (or two different treatments are being compared), whereas the term “multielement design” could be used when the effect of contextual variables is being studied, such as in functional analysis of problem behavior.

The different variations of ATD (Onghena & Edgington, 1994, 2005) are not equivalent. Thus, it is important to label the type of ATD correctly so that applied researchers can analyze the data properly and readers can easily understand (and replicate) the analyses performed. When block randomization of conditions is used, the comparisons to be performed between adjacent conditions are more straightforward, because the presence of blocks makes it easier to apply ADISO and makes it possible to use only actually obtained measurements, without the need to interpolate as in ALIV. Moreover, the alternation sequences that can be generated using block randomization are not the same as the ones that can arise when using an ATD with restricted randomization. This has implications for the way in which statistical significance is determined (see the later section “Analytical Implications for Randomization Tests”). Further complications in reporting and data analysis arise from the use of combinations of designs (Ledford & Gast, 2018; Moeyaert et al., 2020), such as embedding an ATD within a multiple-baseline design or within a reversal design. The main suggestion that we are making here, in relation to ATDs in which the effect of one or more treatments is studied, is to state clearly how the alternation sequence is determined, by specifying (1) whether counterbalancing or randomization is used; and (2) whether blocks are used or a restriction is imposed on the number of consecutive administrations of the same condition (being explicit about this number). When randomization is used, the terms “ATD with block randomization” and “ATD with restricted randomization” should be used to reduce ambiguity.
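The distinction between the two randomization schemes can be made concrete in code. The sketch below generates an alternation sequence under block randomization (both conditions once per block, in random order) and under restricted randomization (a random ordering with a cap on consecutive administrations of the same condition, here implemented by simple rejection sampling). The function names and the choice of two conditions labeled "A" and "B" are illustrative assumptions, not part of any published implementation.

```python
import random

def block_randomized_sequence(n_blocks, rng=random):
    """ATD with block randomization: each block contains both
    conditions exactly once, in random order -> 2**n_blocks sequences."""
    seq = []
    for _ in range(n_blocks):
        pair = ["A", "B"]
        rng.shuffle(pair)
        seq.extend(pair)
    return seq

def restricted_randomized_sequence(n_per_condition, max_run=2, rng=random):
    """ATD with restricted randomization: a random ordering with at most
    `max_run` consecutive administrations of the same condition,
    obtained here by rejection sampling."""
    while True:
        seq = ["A"] * n_per_condition + ["B"] * n_per_condition
        rng.shuffle(seq)
        # reject if any window of max_run + 1 sessions is all one condition
        if all(len(set(seq[i:i + max_run + 1])) > 1
               for i in range(len(seq) - max_run)):
            return seq
```

Note that the two functions generate different sets of admissible sequences, which is exactly why the randomization distribution used at the analysis stage must match the scheme used at the design stage.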

Determining the Alternation Sequence

In the absence of transparent reporting, it may not be clear exactly what was done to determine the condition sequence (i.e., counterbalancing, randomization, or blocking), and any ambiguity interferes with replication attempts, the reanalysis of the data, and subsequent reviews of the published literature. In relation to randomization, Item 8 of the CENT guidelines requires reporting “[w]hether the order of treatment periods was randomised, with rationale, and method used to generate allocation sequence. When applicable, type of randomisation; details of any restrictions (such as pairs, blocking)” (Vohra et al., 2015, p. 4). In the SCRIBE guidelines, Item 8 requires the authors to “[s]tate whether randomization was used, and if so, describe the randomization method and the elements of the study that were randomized” (Tate et al., 2016, p. 140).

It is important not only to state how the alternation sequence was determined, but also to provide additional details. For instance, merely stating that counterbalancing was used (e.g., Russell & Reinecke, 2019; Thirumanickam et al., 2018) is often not sufficient to understand and replicate the procedure. Regarding ATDs with block randomization, the most straightforward option is to use this label for the design, or the term “randomized block design” (e.g., Sjolie et al., 2016), and/or to describe the procedure clearly. For example, Lloyd et al. (2018) explicitly refer to random assignment within successive pairs of observations, whereas Fletcher et al. (2010) somewhat more ambiguously state that the interventions were administered “semi-randomly” to counterbalance which treatment takes place first each day.

It is possible to further enrich the design by introducing both randomization and counterbalancing. For instance, Maas et al. (2019, p. 3167) state that the

[o]rder of conditions within each session was pseudorandomized as follows: The child rolled a die before the first weekly session to determine which condition would be presented first in that session; the following session would have the reverse order. Thus, the order of conditions was counterbalanced by week but randomized across weeks, and each condition was presented an equal number of times in the first and second half of a session (8/16 first, 8/16 second).

Analytical Implications for Randomization Tests

Randomization Scheme for Determining the Alternation Sequence

When randomization is used in the context of any SCED in general, and of an ATD in particular, it is important to describe clearly how the alternation sequence is generated and how the reference distribution for obtaining statistical significance is constructed. It is crucial that the random assignment procedure used for determining the alternation sequence is matched by the randomization performed for obtaining the statistical significance of the result (Edgington, 1980; Levin et al., 2019). For instance, if four days each include a morning and an afternoon session, and the two conditions take place each day in a randomly alternated order, this leads to 2^4 = 16 possible sequences; it is not equivalent to dividing eight measurement occasions into two groups of four, which leads to 8!/(4!4!) = 70 possible divisions (Kratochwill & Levin, 1980). The former is a randomized block design, whereas the latter is a completely randomized design (Onghena & Edgington, 2005). An apparent confusion between the two ways of determining the alternation sequence at random, when obtaining a p-value, is present in Hua et al. (2020). Thus, ensuring statistical-conclusion validity (Levin et al., 2019) requires both the presence of randomization in the design and the correspondence between what is done in the design stage and in the analytical stage in which the randomization distribution is constructed (Bulté & Onghena, 2008).
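The two reference-set sizes in the example above (16 versus 70) can be checked by direct enumeration. This is a minimal sketch; the variable names are illustrative.

```python
from itertools import combinations, product
from math import comb

# Randomized block design: the order of conditions A and B is
# randomized within each of 4 days -> 2**4 possible sequences.
block_sequences = list(product(["AB", "BA"], repeat=4))

# Completely randomized design: any 4 of the 8 sessions receive
# condition A -> 8!/(4!4!) possible divisions.
complete_divisions = list(combinations(range(8), 4))

n_block = len(block_sequences)        # 16
n_complete = len(complete_divisions)  # 70
assert n_complete == comb(8, 4)
```

Because the randomization test's p-value is computed over exactly this set of admissible assignments, conflating the two designs changes both the reference distribution and the smallest attainable p-value (1/16 versus 1/70).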

Statistical Inference

Incorporating randomization boosts internal validity and scientific credibility in any type of design, including SCEDs (Edgington, 1975; Kratochwill & Levin, 2010). Moreover, the use of randomization makes possible and valid the use of randomization tests, a kind of statistical test that makes no distributional assumptions and no assumptions about random sampling (Edgington & Onghena, 2007; Levin et al., 2019). The evidence provided by the application of a randomization test to an individual’s data is more closely related to the typical aims in the behavioral sciences (Craig & Fisher, 2019). Applied researchers need to be cautious only about the inflated risk of a Type I error when performing multiple statistical tests. Finally, statistical inference can be expressed as a confidence interval constructed around an effect size estimate, obtained by inverting the randomization test (Michiels et al., 2017).

A potential limitation of randomization tests is that some applied researchers may not be familiar with the correct interpretation of their p-value, but this also applies to other data-analytical techniques suggested in the SCED context. For instance, the conservative dual criterion fits a mean line and a trend line to the baseline data and extends them into the intervention phase for comparison (Fisher et al., 2003). The conservative dual criterion can be considered a visual aid, as suggested by its authors, but it actually entails obtaining a p-value (i.e., the probability of observing, only by chance, as many or more intervention points superior to both extended baseline lines as the number actually observed). In order to avoid repeating the misuses and misinterpretations of p-values (Branch, 2019; Cohen, 1990, 1994; Gigerenzer, 2004; Nickerson, 2000; Wicherts et al., 2016), it is important for applied researchers to know what a null hypothesis is (and is not), when a randomization test is used, and what the statistical inference refers to. In particular, a very small p-value indicates that the difference between the conditions (expressed as a difference in means, a difference between data paths compared via ALIV, or otherwise, according to the test statistic chosen) is not likely to be obtained only by chance (i.e., if there is no difference between conditions). The p-value is not a quantification of the reliability or the replicability of the results (Branch, 2014). In fact, p-values do not preclude replications or make them unnecessary, because they are not a tool for extrapolating the results to other participants.
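The p-value of a randomization test for a block-randomized ATD can be made concrete with a small sketch: under the null hypothesis of no difference, the two condition labels within each block are exchangeable, so the reference distribution is obtained by flipping the within-block labels in all 2^k ways. This is a minimal illustration with invented scores, not the ExPRT implementation or the R package of Bulté and Onghena (2008).

```python
from itertools import product
from statistics import mean

def randomization_p_value(pairs):
    """Exact two-sided randomization test for an ATD with block
    randomization. `pairs` holds the (condition A, condition B) scores
    per block; under H0 each within-block labeling is equally likely."""
    observed = mean(a - b for a, b in pairs)
    assignments = list(product([1, -1], repeat=len(pairs)))
    extreme = sum(
        1 for signs in assignments
        if abs(mean(s * (a - b) for s, (a, b) in zip(signs, pairs)))
        >= abs(observed) - 1e-12  # tolerance for float comparison
    )
    return extreme / len(assignments)

# Hypothetical block scores (condition A, condition B); not data
# from any of the studies discussed.
pairs = [(8, 5), (7, 4), (9, 6), (8, 4), (7, 5)]
p = randomization_p_value(pairs)  # 2/32 = 0.0625 for these data
```

The interpretation matches the one given above: p is the proportion of admissible label assignments yielding a mean difference at least as extreme as the observed one, nothing more.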

Limitations of the Quantitative Techniques Reviewed and Suggestions for Future Research

It is impossible to recommend a single optimal choice for graphing ATD data or for analyzing these data quantitatively. This is because different graphical representations and analytical techniques provide different types of information: presence or absence of an effect, degree of ordinal superiority, average difference between adjacent measurements, average difference between data paths, and statistical significance. All of these components can be considered together with broader social validity criteria (Horner et al., 2005; Kazdin, 1977) when deciding the degree to which one treatment is superior to another.

Acknowledgements

The authors thank Joelle Fingerhut for reviewing a version of the manuscript and providing feedback on formal and style issues related to the English language.

Availability of data and material

Code availability (software application or custom code).

Several freely available software applications are mentioned in the text, but the underlying code for creating them has not been publicly shared.

Declarations

The authors report no conflicts of interest. Furthermore, the authors have no financial interest in any of the websites mentioned in the manuscript, as they are free to use and the authors do not generate revenue from their use.

1 “Single-case designs” (e.g., What Works Clearinghouse, 2020), “single-case experimental designs” (e.g., Smith, 2012), “single-case research designs” (e.g., Maggin et al., 2018), or “single-subject research designs” (e.g., Hammond & Gast, 2010) are terms often used interchangeably. Another possible term is “within-subject designs” (Greenwald, 1976), referring to the fact that in most cases the comparison is performed within the same individual, although in a multiple-baseline design across participants there is also a comparison across participants (Ferron et al., 2014).

2 Given the absence of phases, immediacy and variability are likely to have a different meaning in the ATD context than in multiple-baseline and ABAB designs. Regarding immediacy, an effect should be immediately visible if it is to be detected, as each condition lasts for only one or two consecutive measurement occasions. Regarding variability, within each condition it refers to measurements that are not adjacent in time.

3 For instance, Wolfe and McCammon (2020) reviewed instructional practices for behavior analysts and found that instruction on statistical analyses was scarce and that most calculations involved only nonoverlap indices. Likewise, the difference between the second edition of the book by Riley-Tillman et al. (2020) and the first edition of 2009, in terms of summary measures and possibilities for meta-analyses, is the mention of a few nonoverlap indices, without reference to either the between-case standardized mean difference (Shadish et al., 2014) or multilevel models (Van den Noortgate & Onghena, 2003).

4 For phase designs, several A-B comparisons can be represented on the same modified Brinley plot, because each A-B comparison is a single dot. However, for an ATD, there are multiple dots for each sequence (i.e., one dot for each block). Therefore, having several ATDs on the same modified Brinley plot can make the graphical representation more difficult to interpret.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Barlow DH, Hayes SC. Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis. 1979;12(2):199–210. doi: 10.1901/jaba.1979.12-199.
  • Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single case experimental designs: Strategies for studying behavior change (3rd ed.). Pearson.
  • Blampied NM. Analyzing therapeutic change using modified Brinley plots: History, construction, and interpretation. Behavior Therapy. 2017;48(1):115–127. doi: 10.1016/j.beth.2016.09.002.
  • Branch M. Malignant side effects of null-hypothesis significance testing. Theory & Psychology. 2014;24(2):256–277. doi: 10.1177/0959354314525282.
  • Bulté I, Onghena P. An R package for single-case randomization tests. Behavior Research Methods. 2008;40(2):467–478. doi: 10.3758/BRM.40.2.467.
  • Cohen J. Things I have learned (so far). American Psychologist. 1990;45(12):1304–1312. doi: 10.1037/0003-066X.45.12.1304.
  • Cohen J. The Earth is round (p < .05). American Psychologist. 1994;49(12):997–1003. doi: 10.1037/0003-066X.49.12.997.
  • Cox, A., & Friedel, J. E. (2020). Toward an automation of functional analysis interpretation: A proof of concept. Behavior Modification. Advance online publication. 10.1177/0145445520969188
  • Craig AR, Fisher WW. Randomization tests as alternative analysis methods for behavior-analytic data. Journal of the Experimental Analysis of Behavior. 2019;111(2):309–328. doi: 10.1002/jeab.500.
  • Dart EH, Radley KC. The impact of ordinate scaling on the visual analysis of single-case data. Journal of School Psychology. 2017;63:105–118. doi: 10.1016/j.jsp.2017.03.008.
  • De TK, Michiels B, Tanious R, Onghena P. Handling missing data in randomization tests for single-case experiments: A simulation study. Behavior Research Methods. 2020;52(3):1355–1370. doi: 10.3758/s13428-019-01320-3.
  • Dugard, P., File, P., & Todman, J. (2012). Single-case and small-n experimental designs: A practical guide to randomization tests (2nd ed.). Routledge.
  • Edgington ES. Statistical inference from N = 1 experiments. Journal of Psychology. 1967;65(2):195–199. doi: 10.1080/00223980.1967.10544864.
  • Edgington ES. Randomization tests for one-subject operant experiments. Journal of Psychology. 1975;90(1):57–68. doi: 10.1080/00223980.1975.9923926.
  • Edgington ES. Validity of randomization tests for one-subject experiments. Journal of Educational Statistics. 1980;5(3):235–251. doi: 10.3102/10769986005003235.
  • Edgington ES. Randomized single-subject experimental designs. Behaviour Research & Therapy. 1996;34(7):567–574. doi: 10.1016/0005-7967(96)00012-5.
  • Edgington, E. S., & Onghena, P. (2007). Randomization tests (4th ed.). Chapman & Hall/CRC.
  • Fahmie TA, Hanley GP. Progressing toward data intimacy: A review of within-session data analysis. Journal of Applied Behavior Analysis. 2008;41(3):319–331. doi: 10.1901/jaba.2008.41-319.
  • Falligant JM, Kranak MP, Schmidt JD, Rooker GW. Correspondence between fail-safe k and dual-criteria methods: Analysis of data series stability. Perspectives on Behavior Science. 2020;43(2):303–319. doi: 10.1007/s40614-020-00255-x.
  • Ferron JM, Joo S-H, Levin JR. A Monte Carlo evaluation of masked visual analysis in response-guided versus fixed-criteria multiple-baseline designs. Journal of Applied Behavior Analysis. 2017;50(4):701–716. doi: 10.1002/jaba.410.
  • Ferron JM, Moeyaert M, Van den Noortgate W, Beretvas SN. Estimating causal effects from multiple-baseline studies: Implications for design and analysis. Psychological Methods. 2014;19(4):493–510. doi: 10.1037/a0037038.
  • Fisher WW, Kelley ME, Lomas JE. Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. Journal of Applied Behavior Analysis. 2003;36(3):387–406. doi: 10.1901/jaba.2003.36-387.
  • Fletcher, D., Boon, R. T., & Cihak, D. F. (2010). Effects of the TOUCHMATH program compared to a number line strategy to teach addition facts to middle school students with moderate intellectual disabilities. Education & Training in Autism & Developmental Disabilities, 45(3), 449–458. https://www.jstor.org/stable/23880117 . Accessed 3 May 2021.
  • Gafurov, B. S., & Levin, J. R. (2020). ExPRT-Excel® package of randomization tests: Statistical analyses of single-case intervention data (Version 4.1, March 2020). Retrieved from https://ex-prt.weebly.com/ . Accessed 3 May 2021.
  • Gigerenzer G. Mindless statistics. Journal of Socio-Economics. 2004;33(5):587–606. doi: 10.1016/j.socec.2004.09.033.
  • Greenwald AG. Within-subject designs: To use or not to use? Psychological Bulletin. 1976;83(2):314–320. doi: 10.1037/0033-2909.83.2.314.
  • Guyatt GH, Keller JL, Jaeschke R, Rosenbloom D, Adachi JD, Newhouse MT. The n-of-1 randomized controlled trial: Clinical usefulness. Our three-year experience. Annals of Internal Medicine. 1990;112(4):293–299. doi: 10.7326/0003-4819-112-4-293.
  • Hagopian LP, Fisher WW, Thompson RH, Owen-DeSchryver J, Iwata BA, Wacker DP. Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis. 1997;30(2):313–326. doi: 10.1901/jaba.1997.30-313.
  • Hall SS, Pollard JS, Monlux KD, Baker JM. Interpreting functional analysis outcomes using automated nonparametric statistical analysis. Journal of Applied Behavior Analysis. 2020;53(2):1177–1191. doi: 10.1002/jaba.689.
  • Hammond, D., & Gast, D. L. (2010). Descriptive analysis of single subject research designs: 1983–2007. Education & Training in Autism & Developmental Disabilities, 45(2), 187–202. https://www.jstor.org/stable/23879806 . Accessed 3 May 2021.
  • Hammond JL, Iwata BA, Rooker GW, Fritz JN, Bloom SE. Effects of fixed versus random condition sequencing during multielement functional analyses. Journal of Applied Behavior Analysis. 2013;46(1):22–30. doi: 10.1002/jaba.7.
  • Hantula DA. Editorial: Replication and reliability in behavior science and behavior analysis: A call for a conversation. Perspectives on Behavior Science. 2019;42(1):1–11. doi: 10.1007/s40614-019-00194-2.
  • Heyvaert M, Onghena P. Randomization tests for single-case experiments: State of the art, state of the science, and state of the application. Journal of Contextual Behavioral Science. 2014;3(1):51–64. doi: 10.1016/j.jcbs.2013.10.002.
  • Holcombe, A., Wolery, M., & Gast, D. L. (1994). Comparative single subject research: Description of designs and discussion of problems. Topics in Early Childhood Special Education, 14(1), 168–190. 10.1177/027112149401400111
  • Horner RH, Carr EG, Halle J, McGee G, Odom S, Wolery M. The use of single-subject research to identify evidence-based practice in special education. Exceptional Children. 2005;71(2):165–179. doi: 10.1177/001440290507100203.
  • Horner, R. J., & Odom, S. L. (2014). Constructing single-case research designs: Logic and options. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 27–51). American Psychological Association. 10.1037/14376-002
  • Howick, J., Chalmers, I., Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., Thornton, H., Goddard, O., & Hodgkinson, M. (2011). The 2011 Oxford CEBM Levels of Evidence. Oxford Centre for Evidence-Based Medicine. https://www.cebm.ox.ac.uk/resources/levels-of-evidence/ocebm-levels-of-evidence
  • Hua Y, Hinzman M, Yuan C, Balint Langel K. Comparing the effects of two reading interventions using a randomized alternating treatment design. Exceptional Children. 2020;86(4):355–373. doi: 10.1177/0014402919881357.
  • Iwata BA, Duncan BA, Zarcone JR, Lerman DC, Shore BA. A sequential, test-control methodology for conducting functional analyses of self-injurious behavior. Behavior Modification. 1994;18(3):289–306. doi: 10.1177/01454455940183003.
  • Jacobs KW. Replicability and randomization test logic in behavior analysis. Journal of the Experimental Analysis of Behavior. 2019;111(2):329–341. doi: 10.1002/jeab.501.
  • Jenson WR, Clark E, Kircher JC, Kristjansson SD. Statistical reform: Evidence-based practice, meta-analyses, and single subject designs. Psychology in the Schools. 2007;44(5):483–493. doi: 10.1002/pits.20240.
  • Johnson AH, Cook BG. Preregistration in single-case design research. Exceptional Children. 2019;86(1):95–112. doi: 10.1177/0014402919868529.
  • Kazdin AE. Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification. 1977;1(4):427–452. doi: 10.1177/014544557714001.
  • Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). Oxford University Press.
  • Kennedy, C. H. (2005). Single-case designs for educational research. Pearson.
  • Killeen PR. An alternative to null hypothesis statistical tests. Psychological Science. 2005;16(5):345–353. doi: 10.1111/j.0956-7976.2005.01538.x.
  • Kinney, C. E. L. (2020). A clarification of slope and scale. Behavior Modification. Advance online publication. 10.1177/0145445520953366
  • Kranak, M. P., Falligant, J. M., & Hausman, N. L. (2021). Application of automated nonparametric statistical analysis in clinical contexts. Journal of Applied Behavior Analysis, 54(2), 824–833. 10.1002/jaba.789
  • Kratochwill TR, Hitchcock JH, Horner RH, Levin JR, Odom SL, Rindskopf DM, Shadish WR. Single-case intervention research design standards. Remedial & Special Education. 2013;34(1):26–38. doi: 10.1177/0741932512452794.
  • Kratochwill TR, Levin JR. On the applicability of various data analysis procedures to the simultaneous and alternating treatment designs in behavior therapy research. Behavioral Assessment. 1980;2(4):353–360.
  • Kratochwill TR, Levin JR. Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods. 2010;15(2):124–144. doi: 10.1037/a0017736.
  • Krone T, Boessen R, Bijlsma S, van Stokkum R, Clabbers ND, Pasman WJ. The possibilities of the use of N-of-1 and do-it-yourself trials in nutritional research. PloS One. 2020;15(5):e0232680. doi: 10.1371/journal.pone.0232680.
  • Lane JD, Gast DL. Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological Rehabilitation. 2014;24(3–4):445–463. doi: 10.1080/09602011.2013.815636.
  • Lane JD, Ledford JR, Gast DL. Single-case experimental design: Current standards and applications in occupational therapy. American Journal of Occupational Therapy. 2017;71(2):7102300010p1–7102300010p9. doi: 10.5014/ajot.2017.022210.
  • Lanovaz M, Cardinal P, Francis M. Using a visual structured criterion for the analysis of alternating-treatment designs. Behavior Modification. 2019;43(1):115–131. doi: 10.1177/0145445517739278.
  • Lanovaz MJ, Huxley SC, Dufour MM. Using the dual-criteria methods to supplement visual inspection: An analysis of nonsimulated data. Journal of Applied Behavior Analysis. 2017;50(3):662–667. doi: 10.1002/jaba.394.
  • Laraway S, Snycerski S, Pradhan S, Huitema BE. An overview of scientific reproducibility: Consideration of relevant issues for behavior science/analysis. Perspectives on Behavior Science. 2019;42(1):33–57. doi: 10.1007/s40614-019-00193-3.
  • Ledford JR. No randomization? No problem: Experimental control and random assignment in single case research. American Journal of Evaluation. 2018;39(1):71–90. doi: 10.1177/1098214017723110.
  • Ledford JR, Barton EE, Severini KE, Zimmerman KN. A primer on single-case research designs: Contemporary use and analysis. American Journal on Intellectual & Developmental Disabilities. 2019;124(1):35–56. doi: 10.1352/1944-7558-124.1.35.
  • Ledford, J. R., & Gast, D. L. (2018). Combination and other designs. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 335–364). Routledge.
  • Levin JR, Ferron JM, Gafurov BS. Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types. Journal of School Psychology. 2017;63:13–34. doi: 10.1016/j.jsp.2017.02.003.
  • Levin, J. R., Ferron, J. M., & Gafurov, B. S. (2020). Investigation of single-case multiple-baseline randomization tests of trend and variability. Educational Psychology Review. Advance online publication. 10.1007/s10648-020-09549-7
  • Levin JR, Ferron JM, Kratochwill TR. Nonparametric statistical tests for single-case systematic and randomized ABAB…AB and alternating treatment intervention designs: New developments, new directions. Journal of School Psychology. 2012;50(5):599–624. doi: 10.1016/j.jsp.2012.05.001.
  • Levin JR, Kratochwill TR, Ferron JM. Randomization procedures in single-case intervention research contexts: (Some of) “the rest of the story.” Journal of the Experimental Analysis of Behavior. 2019;112(3):334–348. doi: 10.1002/jeab.558.
  • Lloyd BP, Finley CI, Weaver ES. Experimental analysis of stereotypy with applications of nonparametric statistical tests for alternating treatments designs. Developmental Neurorehabilitation. 2018;21(4):212–222. doi: 10.3109/17518423.2015.1091043.
  • Maas E, Gildersleeve-Neumann C, Jakielski K, Kovacs N, Stoeckel R, Vradelis H, Welsh M. Bang for your buck: A single-case experimental design study of practice amount and distribution in treatment for childhood apraxia of speech. Journal of Speech, Language, & Hearing Research. 2019;62(9):3160–3182. doi: 10.1044/2019_JSLHR-S-18-0212.
  • Maggin DM, Cook BG, Cook L. Using single-case research designs to examine the effects of interventions in special education. Learning Disabilities Research & Practice. 2018;33(4):182–191. doi: 10.1111/ldrp.12184.
  • Manolov R. A simulation study on two analytical techniques for alternating treatments designs. Behavior Modification. 2019;43(4):544–563. doi: 10.1177/0145445518777875.
  • Manolov R, Onghena P. Analyzing data from single-case alternating treatments designs. Psychological Methods. 2018;23(3):480–504. doi: 10.1037/met0000133.
  • Manolov, R., & Tanious, R. (2020). Assessing consistency in single-case data features using modified Brinley plots. Behavior Modification. Advance online publication. 10.1177/0145445520982969
  • Manolov, R., Tanious, R., De, T. K., & Onghena, P. (2020). Assessing consistency in single-case alternation designs. Behavior Modification. Advance online publication. 10.1177/0145445520923990
  • Manolov, R., & Vannest, K. (2019). A visual aid and objective rule encompassing the data features of visual analysis. Behavior Modification. Advance online publication. 10.1177/0145445519854323
  • Michiels B, Heyvaert M, Meulders A, Onghena P. Confidence intervals for single-case effect size measures based on randomization test inversion. Behavior Research Methods. 2017;49(1):363–381. doi: 10.3758/s13428-016-0714-4.
  • Michiels B, Onghena P. Randomized single-case AB phase designs: Prospects and pitfalls. Behavior Research Methods. 2019;51(6):2454–2476. doi: 10.3758/s13428-018-1084-x.
  • Moeyaert M, Akhmedjanova D, Ferron J, Beretvas SN, Van den Noortgate W. Effect size estimation for combined single-case experimental designs. Evidence-Based Communication Assessment & Intervention. 2020;14(1–2):28–51. doi: 10.1080/17489539.2020.1747146.
  • Moeyaert M, Ugille M, Ferron J, Beretvas SN, Van den Noortgate W. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-case experimental designs research. Behavior Modification. 2014;38(5):665–704. doi: 10.1177/0145445514535243.
  • Nickerson RS. Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods. 2000;5(2):241–301. doi: 10.1037/1082-989X.5.2.241.
  • Nikles, J., & Mitchell, G. (Eds.). (2015). The essential guide to N-of-1 trials in health. Springer.
  • Ninci, J. (2019). Single-case data analysis: A practitioner guide for accurate and reliable decisions. Behavior Modification. Advance online publication. 10.1177/0145445519867054
  • Ninci J, Vannest KJ, Willson V, Zhang N. Interrater agreement between visual analysts of single-case data: A meta-analysis. Behavior Modification. 2015; 39 (4):510–541. doi: 10.1177/0145445515581327. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Onghena, P. (2020). One by one: The design and analysis of replicated randomized single-case experiments. In R. van de Schoot & M. Miočević (Eds.), Small sample size solutions: A guide for applied researchers and practitioners (pp. 87–101). Routledge.
  • Onghena P, Edgington ES. Randomization tests for restricted alternating treatments designs. Behaviour Research & Therapy. 1994; 32 (7):783–786. doi: 10.1016/0005-7967(94)90036-1. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Onghena P, Edgington ES. Customization of pain treatments: Single-case design and analysis. Clinical Journal of Pain. 2005; 21 (1):56–68. doi: 10.1097/00002508-200501000-00007. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Onghena P, Michiels B, Jamshidi L, Moeyaert M, Van den Noortgate W. One by one: Accumulating evidence by using meta-analytical procedures for single-case experiments. Brain Impairment. 2018; 19 (1):33–58. doi: 10.1017/BrImp.2017.25. [ CrossRef ] [ Google Scholar ]
  • Perone M. Statistical inference in behavior analysis: Experimental control is better. The Behavior Analyst. 1999; 22 (2):109–116. doi: 10.1007/BF03391988. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Petursdottir AI, Carr JE. Applying the taxonomy of validity threats from mainstream research design to single-case experiments in applied behavior analysis. Behavior Analysis in Practice. 2018; 11 (3):228–240. doi: 10.1007/s40617-018-00294-6. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pustejovsky, J. E., Swan, D. M., & English, K. W. (2019). An examination of measurement procedures and characteristics of baseline outcome data in single-case research. Behavior Modification . Advance online publication. 10.1177/0145445519864264 [ PubMed ]
  • Radley KC, Dart EH, Wright SJ. The effect of data points per x- to y-axis ratio on visual analysts evaluation of single-case graphs. School Psychology Quarterly. 2018; 33 (2):314–322. doi: 10.1037/spq0000243. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Riley-Tillman, T. C., Burns, M. K., & Kilgus, S. P. (2020). Evaluating educational interventions: Single-case design for measuring response to intervention (2nd ed.). Guilford Press.
  • Russell SM, Reinecke D. Mand acquisition across different teaching methodologies. Behavioral Interventions. 2019; 34 (1):127–135. doi: 10.1002/bin.1643. [ CrossRef ] [ Google Scholar ]
  • Shadish WR, Hedges LV, Pustejovsky JE. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology. 2014; 52 (2):123–147. doi: 10.1016/j.jsp.2013.11.005. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shadish WR, Kyse EN, Rindskopf DM. Analyzing data from single-case designs using multilevel models: New applications and some agenda items for future research. Psychological Methods. 2013; 18 (3):385–405. doi: 10.1037/a0032964. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shadish WR, Sullivan KJ. Characteristics of single-case designs used to assess intervention effects in 2008. Behavior Research Methods. 2011; 43 (4):971–980. doi: 10.3758/s13428-011-0111-y. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sidman, M. (1960). Tactics of scientific research . Basic Books.
  • Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science. 2011; 22 (11):1359–1366. doi: 10.1177/0956797611417632. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sjolie GM, Leveque MC, Preston JL. Acquisition, retention, and generalization of rhotics with and without ultrasound visual feedback. Journal of Communication Disorders. 2016; 64 :62–77. doi: 10.1016/j.jcomdis.2016.10.003. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Smith JD. Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods. 2012; 17 (4):510–550. doi: 10.1037/a0029312. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Solmi F, Onghena P, Salmaso L, Bulté I. A permutation solution to test for treatment effects in alternation design single-case experiments. Communications in Statistics—Simulation & Computation. 2014; 43 (5):1094–1111. doi: 10.1080/03610918.2012.725295. [ CrossRef ] [ Google Scholar ]
  • Solomon BG. Violations of assumptions in school-based single-case data: Implications for the selection and interpretation of effect sizes. Behavior Modification. 2014; 38 (4):477–496. doi: 10.1177/0145445513510931. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tanious, R., & Onghena, P. (2020). A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Behavior Research Methods . Advance online publication. 10.3758/s13428-020-01502-4 [ PubMed ]
  • Tate RL, Perdices M, Rosenkoetter U, Shadish W, Vohra S, Barlow DH, Horner R, Kazdin A, Kratochwill TR, McDonald S, Sampson M, Shamseer L, Togher L, Albin R, Backman C, Douglas J, Evans JJ, Gast D, Manolov R, Mitchell G, et al. The Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 statement. Journal of School Psychology. 2016; 56 :133–142. doi: 10.1016/j.jsp.2016.04.001. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tate RL, Perdices M, Rosenkoetter U, Wakim D, Godbee K, Togher L, McDonald S. Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychological Rehabilitation. 2013; 23 (5):619–638. doi: 10.1080/09602011.2013.824383. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Thirumanickam A, Raghavendra P, McMillan JM, van Steenbrugge W. Effectiveness of video-based modelling to facilitate conversational turn taking of adolescents with autism spectrum disorder who use AAC. AAC: Augmentative & Alternative Communication. 2018; 34 (4):311–322. doi: 10.1080/07434618.2018.1523948. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Van den Noortgate W, Onghena P. Hierarchical linear models for the quantitative integration of effect sizes in single-case research. Behavior Research Methods, Instruments, & Computers. 2003; 35 (1):1–10. doi: 10.3758/BF03195492. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vannest KJ, Parker RI, Davis JL, Soares DA, Smith SL. The Theil–Sen slope for high-stakes decisions from progress monitoring. Behavioral Disorders. 2012; 37 (4):271–280. doi: 10.1177/019874291203700406. [ CrossRef ] [ Google Scholar ]
  • Vohra S, Shamseer L, Sampson M, Bukutu C, Schmid CH, Tate R, Nikles J, Zucker DR, Kravitz R, Guyatt G, Altman DG, Moher D. CONSORT extension for reporting N-of-1 trials (CENT) 2015 Statement. British Medical Journal. 2015; 350 :h1738. doi: 10.1136/bmj.h1738. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Weaver ES, Lloyd BP. Randomization tests for single case designs with rapidly alternating conditions: An analysis of p-values from published experiments. Perspectives on Behavior Science. 2019; 42 (3):617–645. doi: 10.1007/s40614-018-0165-6. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • What Works Clearinghouse. (2020). What works clearinghouse standards handbook, version 4.1 . U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation & Regional Assistance. https://ies.ed.gov/ncee/wwc/handbooks . Accessed 3 May 2021.
  • Wicherts JM, Veldkamp CL, Augusteijn HE, Bakker M, van Aert RC, Van Assen MA. Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p -hacking. Frontiers in Psychology. 2016; 7 :1–12. doi: 10.3389/fpsyg.2016.01832. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wilkinson L, The Task Force on Statistical Inference Statistical methods in psychology journals: Guidelines and explanations. American Psychologist. 1999; 54 (8):694–704. doi: 10.1037/0003-066X.54.8.594. [ CrossRef ] [ Google Scholar ]
  • Wolery M, Busick M, Reichow B, Barton EE. Comparison of overlap methods for quantitatively synthesizing single-subject data. Journal of Special Education. 2010; 44 (1):18–29. doi: 10.1177/0022466908328009. [ CrossRef ] [ Google Scholar ]
  • Wolery, M., Gast, D. L., & Ledford, J. R. (2018). Comparative designs. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 283–334). Routledge.
  • Wolfe, K., & McCammon, M. N. (2020). The analysis of single-case research data: Current instructional practices. Journal of Behavioral Education . Advance online publication. 10.1007/s10864-020-09403-4
  • Wolfe K, Seaman MA, Drasgow E, Sherlock P. An evaluation of the agreement between the conservative dual-criterion method and expert visual analysis. Journal of Applied Behavior Analysis. 2018; 51 (2):345–351. doi: 10.1002/jaba.453. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zucker DR, Ruthazer R, Schmid CH. Individual (N-of-1) trials can be combined to give population comparative treatment effect estimates: Methodologic considerations. Journal of Clinical Epidemiology. 2010; 63 (12):1312–1323. doi: 10.1016/j.jclinepi.2010.04.020. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

16 Best Types of Charts and Graphs for Data Visualization [+ Guide]

Jami Oetting

Published: June 08, 2023

There are more types of charts and graphs than ever before because there's more data. In fact, the volume of data we create, capture, copy, and consume is projected to almost double by 2025.


This makes data visualization essential for businesses. Different types of graphs and charts can help you:

  • Motivate your team to take action.
  • Impress stakeholders with goal progress.
  • Show your audience what you value as a business.

Data visualization builds trust and can organize diverse teams around new initiatives. Let's talk about the types of graphs and charts that you can use to grow your business.


Free Excel Graph Templates

Tired of struggling with spreadsheets? These free Microsoft Excel Graph Generator Templates can help.

  • Simple, customizable graph designs.
  • Data visualization tips & instructions.
  • Templates for two-, three-, four-, and five-variable graphs.


Different Types of Graphs for Data Visualization

1. Bar Graph

A bar graph should be used to avoid clutter when one data label is long or if you have more than 10 items to compare.

Types of graphs — example of a bar graph.

Best Use Cases for These Types of Graphs

Bar graphs can help you compare data between different groups or track changes over time. They are most useful when there are big changes or when you want to show how one group compares against other groups.

The example above compares the number of customers by business role. It makes it easy to see that individual contributors account for more than twice as many customers as any other role.

A bar graph also makes it easy to see which group of data is highest or most common.

For example, at the start of the pandemic, online businesses saw a big jump in traffic. So, if you want to look at monthly traffic for an online business, a bar graph would make it easy to see that jump.

Other use cases for bar graphs include:

  • Product comparisons.
  • Product usage.
  • Category comparisons.
  • Marketing traffic by month or year.
  • Marketing conversions.

Design Best Practices for Bar Graphs

  • Use consistent colors throughout the chart, selecting accent colors to highlight meaningful data points or changes over time.
  • Use horizontal labels to improve readability.
  • Start the y-axis at 0 to appropriately reflect the values in your graph.
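
The reason for starting the y-axis at 0 can be shown with a little arithmetic. The bar values below are hypothetical, just to illustrate the distortion a truncated baseline creates:

```python
# Two bars with hypothetical values: 95 and 100 (a ~5% real difference).
a, b = 95, 100

# With the axis starting at 0, the drawn bar heights match the data.
full_axis_ratio = b / a  # ~1.05: bar b looks about 5% taller, as it should.

# With the axis truncated to start at 90, only the part above 90 is drawn.
baseline = 90
truncated_ratio = (b - baseline) / (a - baseline)  # 2.0: b looks twice as tall.

print(round(full_axis_ratio, 3), truncated_ratio)
```

Truncating the baseline turns a 5% difference into an apparent 2x difference, which is why the zero baseline matters for bars.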

2. Line Graph

A line graph reveals trends or progress over time, and you can use it to show many different categories of data. You should use it when you chart a continuous data set.

Types of graphs — example of a line graph.

Line graphs help users track changes over short and long periods. Because of this, these types of graphs are good for seeing small changes.

Line graphs can help you compare changes for more than one group over the same period. They're also helpful for measuring how different groups relate to each other.

A business might use this graph to compare sales rates for different products or services over time.

These charts are also helpful for measuring service channel performance. For example, a line graph that tracks how many chats or emails your team responds to per month.

Design Best Practices for Line Graphs

  • Use solid lines only.
  • Don't plot more than four lines to avoid visual distractions.
  • Use the right height so the lines take up roughly 2/3 of the y-axis' height.
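
The 2/3 guideline can be applied mechanically when choosing the y-axis limit. A minimal sketch with hypothetical monthly values:

```python
# Hypothetical monthly values for one line on the graph.
values = [120, 135, 150, 160, 140, 155]

# If the tallest point should sit at roughly 2/3 of the axis height,
# set the axis maximum to the data maximum times 3/2.
y_max = max(values) * 3 / 2

# Check: the peak occupies two thirds of the axis.
peak_fraction = max(values) / y_max
print(y_max, round(peak_fraction, 3))
```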

3. Bullet Graph

A bullet graph reveals progress towards a goal, compares this to another measure, and provides context in the form of a rating or performance.

Types of graph — example of a bullet graph.

In the example above, the bullet graph shows the number of new customers against a set customer goal. Bullet graphs are great for comparing performance against goals like this.

These types of graphs can also help teams assess possible roadblocks because you can analyze data in a tight visual display.

For example, you could create a series of bullet graphs measuring performance against benchmarks or use a single bullet graph to visualize these KPIs against their goals:

  • Customer satisfaction.
  • Average order size.
  • New customers.

Seeing this data at a glance and alongside each other can help teams make quick decisions.
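
A bullet graph's core computation is just progress against a goal plus a qualitative band. A sketch with made-up KPI numbers and hypothetical band thresholds:

```python
# Hypothetical KPI: new customers this quarter vs. a goal of 200.
actual, goal = 170, 200

progress = actual / goal  # the bar's length relative to the target line

# Qualitative background bands, as a bullet graph would shade them
# (the 0.6 / 0.9 cutoffs are illustrative, not a standard).
def band(p):
    if p < 0.6:
        return "poor"
    if p < 0.9:
        return "good"
    return "excellent"

print(round(progress, 2), band(progress))
```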

Bullet graphs are one of the best ways to display year-over-year data analysis. You can also use bullet graphs to visualize:

  • Customer satisfaction scores.
  • Customer shopping habits.
  • Social media usage by platform.

Design Best Practices for Bullet Graphs

  • Use contrasting colors to highlight how the data is progressing.
  • Use one color in different shades to gauge progress.

Different Types of Charts for Data Visualization

To better understand these chart types and how you can use them, here's an overview of each:

1. Column Chart

Use a column chart to show a comparison among different items or to show a comparison of items over time. You could use this format to see the revenue per landing page or customers by close date.

Types of charts — example of a column chart.

Best Use Cases for This Type of Chart

You can use both column charts and bar graphs to display changes in data, but column charts are better suited to data sets that include negative values. The main difference, of course, is that column charts show information vertically while bar graphs show data horizontally.

For example, warehouses often track the number of accidents on the shop floor. When the number of incidents falls below the monthly average, a column chart can make that change easier to see in a presentation.

In the example above, this column chart measures the number of customers by close date. Column charts make it easy to see data changes over a period of time. This means that they have many use cases, including:

  • Customer survey data, like showing how many customers prefer a specific product or how much a customer uses a product each day.
  • Sales volume, like showing which services are the top sellers each month or the number of sales per week.
  • Profit and loss, showing where business investments are growing or falling.

Design Best Practices for Column Charts

2. Dual-Axis Chart

A dual-axis chart allows you to plot data using two y-axes and a shared x-axis. It combines three data sets: the values along the shared x-axis, one continuous data set, and one that is better suited to grouping by category. Use this chart to visualize a correlation, or the lack thereof, between these data sets.

 Types of charts — example of a dual-axis chart.

A dual-axis chart makes it easy to see relationships between different data sets. They can also help with comparing trends.

For example, the chart above shows how many new customers this company brings in each month. It also shows how much revenue those customers are bringing the company.

This makes it simple to see the connection between the number of customers and increased revenue.
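
The relationship a dual-axis chart surfaces (here, customers vs. revenue by month) can also be checked numerically before plotting. All figures below are invented:

```python
# Hypothetical monthly series sharing one x-axis.
months = ["Jan", "Feb", "Mar", "Apr"]
new_customers = [40, 55, 70, 90]        # left y-axis
revenue = [8000, 11550, 14000, 18900]   # right y-axis

# Revenue per new customer each month: a roughly flat ratio means the
# two series move together, which is what the paired axes make visible.
per_customer = [r / c for r, c in zip(revenue, new_customers)]
print([round(x) for x in per_customer])
```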

You can use dual-axis charts to compare:

  • Price and volume of your products.
  • Revenue and units sold.
  • Sales and profit margin.
  • Individual sales performance.

Design Best Practices for Dual-Axis Charts

  • Use the y-axis on the left side for the primary variable, since readers naturally look to the left first.
  • Use different graphing styles to illustrate the two data sets, as illustrated above.
  • Choose contrasting colors for the two data sets.

3. Area Chart

An area chart is basically a line chart, but the space between the x-axis and the line is filled with a color or pattern. It is useful for showing part-to-whole relations, like showing individual sales reps’ contributions to total sales for a year. It helps you analyze both overall and individual trend information.

Types of charts — example of an area chart.

Best Use Cases for These Types of Charts

Area charts help show changes over time. They work best for big differences between data sets and help visualize big trends.

For example, the chart above shows users by creation date and life cycle stage.

A line chart could show that there are more subscribers than marketing qualified leads. But this area chart emphasizes how much bigger the number of subscribers is than any other group.

These charts make the size of a group and how groups relate to each other more visually important than data changes over time.

Area graphs can help your business to:

  • Visualize which product categories or products within a category are most popular.
  • Show key performance indicator (KPI) goals vs. outcomes.
  • Spot and analyze industry trends.

Design Best Practices for Area Charts

  • Use transparent colors so information isn't obscured in the background.
  • Don't display more than four categories to avoid clutter.
  • Organize highly variable data at the top of the chart to make it easy to read.
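
The part-to-whole stacking behind an area chart is just a running sum of the series. A sketch with hypothetical sales-rep contributions:

```python
# Hypothetical monthly sales for three reps (the stacked series).
reps = {
    "Ana":   [10, 12, 15],
    "Ben":   [8, 9, 11],
    "Carla": [5, 7, 6],
}

# Each band's upper boundary is the cumulative total of the series so far.
n_months = 3
boundaries = []
running = [0] * n_months
for name, series in reps.items():
    running = [r + s for r, s in zip(running, series)]
    boundaries.append((name, running[:]))

# The top boundary equals total sales per month (the "whole").
totals = boundaries[-1][1]
print(totals)
```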

4. Stacked Bar Chart

Use this chart to compare many different items and show the composition of each item you’re comparing.

Types of charts — example of a stacked bar chart.

These graphs are helpful when a group starts in one column and moves to another over time.

For example, the difference between a marketing qualified lead (MQL) and a sales qualified lead (SQL) is sometimes hard to see. The chart above helps stakeholders see these two lead types from a single point of view — when a lead changes from MQL to SQL.

Stacked bar charts are excellent for marketing. They make it simple to add a lot of data on a single chart or to make a point with limited space.

These graphs can show multiple takeaways, so they're also super for quarterly meetings when you have a lot to say but not a lot of time to say it.

Stacked bar charts are also a smart option for planning or strategy meetings. This is because these charts can show a lot of information at once, but they also make it easy to focus on one stack at a time or move data as needed.

You can also use these charts to:

  • Show the frequency of survey responses.
  • Identify outliers in historical data.
  • Compare a part of a strategy to its performance as a whole.

Design Best Practices for Stacked Bar Graphs

  • Best used to illustrate part-to-whole relationships.
  • Use contrasting colors for greater clarity.
  • Make the chart scale large enough to view group sizes in relation to one another.

5. Mekko Chart

Also known as a Marimekko chart, this type of graph can compare values, measure each one's composition, and show data distribution across each one.

It's similar to a stacked bar, except the Mekko's x-axis can capture another dimension of your values — instead of time progression, like column charts often do. In the graphic below, the x-axis compares the cities to one another.

Types of charts — example of a Mekko chart.


You can use a Mekko chart to show growth, market share, or competitor analysis.

For example, the Mekko chart above shows the market share of asset managers grouped by location and the value of their assets. This chart clarifies which firms manage the most assets in different areas.

It's also easy to see which asset managers are the largest and how they relate to each other.

Mekko charts can seem more complex than other types of charts and graphs, so it's best to use these in situations where you want to emphasize scale or differences between groups of data.

Other use cases for Mekko charts include:

  • Detailed profit and loss statements.
  • Revenue by brand and region.
  • Product profitability.
  • Share of voice by industry or niche.

Design Best Practices for Mekko Charts

  • Vary your bar heights if the portion size is an important point of comparison.
  • Don't include too many composite values within each bar. Consider reevaluating your presentation if you have a lot of data.
  • Order your bars from left to right in such a way that exposes a relevant trend or message.
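
A Mekko chart's extra dimension is encoded in the column widths: each width is proportional to that category's total. A sketch with invented market-share data:

```python
# Hypothetical assets under management (in $B) by city and firm.
data = {
    "London":   {"FirmA": 30, "FirmB": 20},
    "New York": {"FirmA": 60, "FirmB": 40},
    "Tokyo":    {"FirmC": 25},
}

grand_total = sum(sum(firms.values()) for firms in data.values())

# Column width = city total / grand total; within a column, each firm's
# segment height would be its share of that city's total.
widths = {city: sum(firms.values()) / grand_total for city, firms in data.items()}
print({c: round(w, 3) for c, w in widths.items()})
```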

6. Pie Chart

A pie chart shows a static number and how categories represent part of a whole — the composition of something. A pie chart represents numbers in percentages, and the total sum of all segments needs to equal 100%.

Types of charts — example of a pie chart.

The image above shows another example of customers by role in the company.

The bar graph example shows you that there are more individual contributors than any other role. But this pie chart makes it clear that they make up over 50% of customer roles.

Pie charts make it easy to see a section in relation to the whole, so they are good for showing:

  • Customer personas in relation to all customers.
  • Revenue from your most popular products or product types in relation to all product sales.
  • Percent of total profit from different store locations.

Design Best Practices for Pie Charts

  • Don't illustrate too many categories to ensure differentiation between slices.
  • Ensure that the slice values add up to 100%.
  • Order slices according to their size.
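
The 100% rule is easy to enforce by deriving slice values as percentages of the raw counts. The customer numbers here are hypothetical:

```python
# Hypothetical customer counts by role.
roles = {"Individual contributor": 120, "Manager": 50, "Director": 30, "VP": 20}

total = sum(roles.values())
percentages = {role: 100 * count / total for role, count in roles.items()}

# The slices necessarily sum to 100%, and sorting them by size follows
# the "order slices according to their size" guideline.
ordered = sorted(percentages.items(), key=lambda kv: kv[1], reverse=True)
print(ordered[0][0], round(sum(percentages.values()), 6))
```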

7. Scatter Plot Chart

A scatter plot or scattergram chart will show the relationship between two different variables or reveal distribution trends.

Use this chart when there are many different data points, and you want to highlight similarities in the data set. This is useful when looking for outliers or understanding your data's distribution.

Types of charts — example of a scatter plot chart.

Scatter plots are helpful in situations where you have too much data to see a pattern quickly. They are best when you use them to show relationships between two large data sets.

In the example above, this chart shows how customer happiness relates to the time it takes for them to get a response.

This type of graph makes it easy to compare two data sets. Use cases might include:

  • Employment and manufacturing output.
  • Retail sales and inflation.
  • Visitor numbers and outdoor temperature.
  • Sales growth and tax laws.

Try to choose two data sets that already have a positive or negative relationship. That said, this type of graph can also make it easier to see data that falls outside of normal patterns.
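
Whether two data sets have the positive or negative relationship described above can be checked with a plain Pearson correlation before plotting. The pairs below are invented to mirror the response-time example:

```python
from statistics import mean, stdev

# Hypothetical pairs: response time (hours) vs. customer happiness (1-10).
response_time = [1, 2, 3, 5, 8, 12]
happiness = [9, 9, 8, 7, 5, 3]

def pearson(xs, ys):
    # Sample covariance divided by the product of sample standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(response_time, happiness)
print(round(r, 3))  # strongly negative: slower replies, unhappier customers
```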

Design Best Practices for Scatter Plots

  • Include more variables, like different sizes, to incorporate more data.
  • Start the y-axis at 0 to represent data accurately.
  • If you use trend lines, only use a maximum of two to make your plot easy to understand.

8. Bubble Chart

A bubble chart is similar to a scatter plot in that it can show distribution or relationship. There is a third data set shown by the size of the bubble or circle.

 Types of charts — example of a bubble chart.

In the example above, the number of hours spent online isn't just compared to the user's age, as it would be on a scatter plot chart.

Instead, you can also see how the gender of the user impacts time spent online.

This makes bubble charts useful for seeing the rise or fall of trends over time. It also lets you add another option when you're trying to understand relationships between different segments or categories.

For example, if you want to launch a new product, this chart could help you quickly see your new product's cost, risk, and value. This can help you focus your energies on a low-risk new product with a high potential return.

You can also use bubble charts for:

  • Top sales by month and location.
  • Customer satisfaction surveys.
  • Store performance tracking.
  • Marketing campaign reviews.

Design Best Practices for Bubble Charts

  • Scale bubbles according to area, not diameter.
  • Make sure labels are clear and visible.
  • Use circular shapes only.
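
"Scale bubbles according to area, not diameter" means the radius must grow with the square root of the value. A sketch:

```python
import math

# Hypothetical values to encode as bubble sizes.
values = [1, 4, 9]

# Area-true scaling: area is proportional to value, so radius is
# proportional to sqrt(value).
radii = [math.sqrt(v / math.pi) for v in values]
areas = [math.pi * r ** 2 for r in radii]

# A value 4x as large gets a bubble with 4x the area (not 4x the radius).
print(round(areas[1] / areas[0], 6), round(radii[1] / radii[0], 6))
```

Scaling the diameter instead would make a 4x value look 16x as big, which is the distortion this guideline avoids.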

9. Waterfall Chart

Use a waterfall chart to show how an initial value changes with intermediate values — either positive or negative — and results in a final value.

Use this chart to reveal the composition of a number. An example of this would be to showcase how different departments influence overall company revenue and lead to a specific profit number.

Types of charts — example of a waterfall chart.
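
The running total that a waterfall chart draws can be sketched directly: each bar floats between the cumulative value before and after its step. The department figures are invented:

```python
# Hypothetical contributions to profit, positive or negative, in $K.
steps = [("Starting revenue", 500), ("Product sales", 120),
         ("Refunds", -40), ("Operating costs", -180)]

total = 0
bars = []
for label, delta in steps:
    bars.append((label, total, total + delta))  # (label, bar bottom, bar top)
    total += delta

# `total` is the final value the last bar lands on.
print(total, bars[-1])
```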

10. Funnel Chart

A funnel chart shows how a value narrows as it moves through the sequential stages of a process. The most common use case for a funnel chart is the marketing or sales funnel. But there are many other ways to use this versatile chart.

If you have at least four stages of sequential data, this chart can help you easily see what inputs or outputs impact the final results.

For example, a funnel chart can help you see how to improve your buyer journey or shopping cart workflow. This is because it can help pinpoint major drop-off points.

Other stellar options for these types of charts include:

  • Deal pipelines.
  • Conversion and retention analysis.
  • Bottlenecks in manufacturing and other multi-step processes.
  • Marketing campaign performance.
  • Website conversion tracking.

Design Best Practices for Funnel Charts

  • Scale the size of each section to accurately reflect the size of the data set.
  • Use contrasting colors or one color in graduated hues, from darkest to lightest, as the size of the funnel decreases.
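
Pinpointing the "major drop-off points" a funnel chart reveals comes down to stage-to-stage conversion ratios. A sketch with made-up pipeline counts:

```python
# Hypothetical counts at each sequential funnel stage.
stages = [("Visits", 1000), ("Carts", 400), ("Checkouts", 300), ("Orders", 270)]

# Conversion rate between consecutive stages; the smallest is the bottleneck.
rates = [(stages[i + 1][0], stages[i + 1][1] / stages[i][1])
         for i in range(len(stages) - 1)]
bottleneck = min(rates, key=lambda kv: kv[1])
print(bottleneck)
```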

11. Heat Map

A heat map shows the relationship between two items and provides rating information, such as high to low or poor to excellent. This chart displays the rating information using varying colors or saturation.

 Types of charts — example of a heat map.

Best Use Cases for Heat Maps

In the example above, darker shades of green show where the majority of people agree.

With enough data, heat maps can make a viewpoint that might seem subjective more concrete. This makes it easier for a business to act on customer sentiment.

There are many uses for these types of charts. In fact, many tech companies use heat map tools to gauge user experience for apps, online tools, and website design.

Another common use for heat map graphs is location assessment. If you're trying to find the right location for your new store, these maps can give you an idea of what the area is like in ways that a visit can't communicate.

Heat maps can also help with spotting patterns, so they're good for analyzing trends that change quickly, like ad conversions. They can also help with:

  • Competitor research.
  • Customer sentiment.
  • Sales outreach.
  • Campaign impact.
  • Customer demographics.

Design Best Practices for Heat Maps

  • Use a basic and clear map outline to avoid distracting from the data.
  • Use a single color in varying shades to show changes in data.
  • Avoid using multiple patterns.
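
"One color in varying shades" amounts to bucketing each value into a shade by its position in the data range. A sketch with hypothetical agreement scores:

```python
# Hypothetical agreement scores (% of respondents who agree per cell).
scores = [12, 35, 58, 71, 94]
shades = ["lightest", "light", "medium", "dark", "darkest"]

def shade(value, lo=0, hi=100):
    # Map the value's position in [lo, hi] to one of the shade buckets.
    idx = int((value - lo) / (hi - lo) * len(shades))
    return shades[min(idx, len(shades) - 1)]

print([shade(s) for s in scores])
```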

12. Gantt Chart

The Gantt chart is a horizontal chart that dates back to 1917. This chart maps the different tasks completed over a period of time.

Gantt charting is one of the most essential tools for project managers. It brings all the completed and uncompleted tasks into one place and tracks the progress of each.

While the left side of the chart displays all the tasks, the right side shows the progress and schedule for each of these tasks.

This chart type allows you to:

  • Break projects into tasks.
  • Track the start and end of the tasks.
  • Set important events, meetings, and announcements.
  • Assign tasks to the team and individuals.
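
The tracking a Gantt chart does reduces to start/end pairs plus a completion fraction per task. A minimal sketch with invented tasks:

```python
from datetime import date

# Hypothetical project tasks: (name, start, end, fraction complete).
tasks = [
    ("Research", date(2023, 1, 2), date(2023, 1, 13), 1.0),
    ("Design", date(2023, 1, 9), date(2023, 1, 27), 0.6),
    ("Build", date(2023, 1, 23), date(2023, 2, 17), 0.0),
]

# The chart's horizontal span runs from the earliest start to the latest end.
project_start = min(t[1] for t in tasks)
project_end = max(t[2] for t in tasks)
duration_days = (project_end - project_start).days

# Overall progress, weighting each task by its length in days.
weights = [(t[2] - t[1]).days for t in tasks]
progress = sum(w * t[3] for w, t in zip(weights, tasks)) / sum(weights)
print(duration_days, round(progress, 3))
```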

Gantt Chart - product creation strategy


5 Questions to Ask When Deciding Which Type of Chart to Use

1. Do you want to compare values?

Charts and graphs are perfect for comparing one or many value sets, and they can easily show the low and high values in the data sets. To create a comparison chart, use these types of graphs:

  • Bar
  • Column
  • Bullet
  • Mekko
  • Scatter plot

2. Do you want to show the composition of something?

Use this type of chart to show how individual parts make up the whole of something, like the device type used for mobile visitors to your website or total sales broken down by sales rep.

To show composition, use these charts:

  • Stacked bar
  • Mekko
  • Pie
  • Waterfall
  • Area

3. Do you want to understand the distribution of your data?

Distribution charts help you to understand outliers, the normal tendency, and the range of information in your values.

Use these charts to show distribution:

  • Scatter plot
  • Bubble

4. Are you interested in analyzing trends in your data set?

If you want more information about how a data set performed during a specific time, there are specific chart types that do extremely well.

You should choose one of the following:

  • Line
  • Dual-axis
  • Column
  • Area

5. Do you want to better understand the relationship between value sets?

Relationship charts can show how one variable relates to one or many different variables. You could use this to show how something positively affects, has no effect, or negatively affects another variable.

When trying to establish the relationship between things, scatter plots and bubble charts are common choices.
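The five questions above amount to a lookup from analysis goal to chart family. This sketch is my own summary, and the chart lists in it are illustrative rather than exhaustive:

```python
# Hypothetical mapping from analysis goal to common chart choices.
# The lists here are abridged illustrations, not an authoritative catalog.
CHARTS_FOR_GOAL = {
    "comparison":   ["bar", "column", "scatter plot"],
    "composition":  ["stacked bar", "pie"],
    "distribution": ["histogram", "box plot"],
    "trend":        ["line", "dual-axis line"],
    "relationship": ["scatter plot", "bubble"],
}

def suggest_charts(goal: str) -> list:
    """Return candidate chart types for an analysis goal; fall back to a
    plain table when no chart family clearly fits."""
    return CHARTS_FOR_GOAL.get(goal.lower(), ["table"])
```

For example, `suggest_charts("trend")` points you at line-style charts, which matches question 4 above.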


Quantitative Data – Types, Methods and Examples

Quantitative Data

Definition:

Quantitative data refers to numerical data that can be measured or counted. This type of data is often used in scientific research and is typically collected through methods such as surveys, experiments, and statistical analysis.

Quantitative Data Types

There are two main types of quantitative data: discrete and continuous.

  • Discrete data: Discrete data refers to numerical values that can only take on specific, distinct values. This type of data is typically represented as whole numbers and cannot be broken down into smaller units. Examples of discrete data include the number of students in a class, the number of cars in a parking lot, and the number of children in a family.
  • Continuous data: Continuous data refers to numerical values that can take on any value within a certain range or interval. This type of data is typically represented as decimal or fractional values and can be broken down into smaller units. Examples of continuous data include measurements of height, weight, temperature, and time.
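One practical consequence of the distinction: discrete data is whole-valued. As a rough (hypothetical) heuristic, you can check whether every observation in a column is an integer, keeping in mind that real classification depends on what the variable measures, not just how the numbers look:

```python
def looks_discrete(values):
    """Heuristic sketch: data is 'discrete-like' if every value is a
    whole number. Measurements that happen to be whole (e.g. exactly
    170.0 cm) would still be continuous in principle."""
    return all(float(v).is_integer() for v in values)

students_per_class = [28, 31, 25]      # discrete: counts of things
heights_cm = [172.4, 165.0, 180.2]     # continuous: measurements
```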

Quantitative Data Collection Methods

There are several common methods for collecting quantitative data. Some of these methods include:

  • Surveys: Surveys involve asking a set of standardized questions to a large number of people. Surveys can be conducted in person, over the phone, via email or online, and can be used to collect data on a wide range of topics.
  • Experiments: Experiments involve manipulating one or more variables and observing the effects on a specific outcome. Experiments can be conducted in a controlled laboratory setting or in the real world.
  • Observational studies: Observational studies involve observing and collecting data on a specific phenomenon without intervening or manipulating any variables. Observational studies can be conducted in a natural setting or in a laboratory.
  • Secondary data analysis: Secondary data analysis involves using existing data that was collected for a different purpose to answer a new research question. This method can be cost-effective and efficient, but it is important to ensure that the data is appropriate for the research question being studied.
  • Physiological measures: Physiological measures involve collecting data on biological or physiological processes, such as heart rate, blood pressure, or brain activity.
  • Computerized tracking: Computerized tracking involves collecting data automatically from electronic sources, such as social media, online purchases, or website analytics.

Quantitative Data Analysis Methods

There are several methods for analyzing quantitative data, including:

  • Descriptive statistics: Descriptive statistics are used to summarize and describe the basic features of the data, such as the mean, median, mode, standard deviation, and range.
  • Inferential statistics: Inferential statistics are used to make generalizations about a population based on a sample of data. These methods include hypothesis testing, confidence intervals, and regression analysis.
  • Data visualization: Data visualization involves creating charts, graphs, and other visual representations of the data to help identify patterns and trends. Common types of data visualization include histograms, scatterplots, and bar charts.
  • Time series analysis: Time series analysis involves analyzing data that is collected over time to identify patterns and trends in the data.
  • Multivariate analysis: Multivariate analysis involves analyzing data with multiple variables to identify relationships between the variables.
  • Factor analysis: Factor analysis involves identifying underlying factors or dimensions that explain the variation in the data.
  • Cluster analysis: Cluster analysis involves identifying groups or clusters of observations that are similar to each other based on multiple variables.
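Several of the descriptive statistics named above are available directly in Python's standard-library `statistics` module. A small sketch on made-up observations:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up observations

mean   = statistics.mean(data)    # arithmetic average
median = statistics.median(data)  # middle value
mode   = statistics.mode(data)    # most frequent value
stdev  = statistics.pstdev(data)  # population standard deviation
spread = max(data) - min(data)    # range

print(mean, median, mode, stdev, spread)
```

For this data set the mean is 5.0, the median 4.5, the mode 4, the population standard deviation 2.0, and the range 7.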

Quantitative Data Formats

Quantitative data can be represented in different formats, depending on the nature of the data and the purpose of the analysis. Here are some common formats:

  • Tables: Tables are a common way to present quantitative data, particularly when the data involves multiple variables. Tables can be used to show the frequency or percentage of data in different categories or to display summary statistics.
  • Charts and graphs: Charts and graphs are useful for visualizing quantitative data and can be used to highlight patterns and trends in the data. Some common types of charts and graphs include line charts, bar charts, scatterplots, and pie charts.
  • Databases: Quantitative data can be stored in databases, which allow for easy sorting, filtering, and analysis of large amounts of data.
  • Spreadsheets: Spreadsheets can be used to organize and analyze quantitative data, particularly when the data is relatively small in size. Spreadsheets allow for calculations and data manipulation, as well as the creation of charts and graphs.
  • Statistical software: Statistical software, such as SPSS, R, and SAS, can be used to analyze quantitative data. These programs allow for more advanced statistical analyses and data modeling, as well as the creation of charts and graphs.
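As a sketch of the frequency-or-percentage table format, here is a tally of hypothetical survey responses using only the standard library:

```python
from collections import Counter

responses = ["yes", "no", "yes", "yes", "unsure"]  # hypothetical survey answers
counts = Counter(responses)
total = sum(counts.values())

# Category -> (frequency, percentage of total)
table = {answer: (n, round(100 * n / total, 1)) for answer, n in counts.items()}
# e.g. table["yes"] == (3, 60.0)
```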

Quantitative Data Gathering Guide

Here is a basic guide for gathering quantitative data:

  • Define the research question: The first step in gathering quantitative data is to clearly define the research question. This will help determine the type of data to be collected, the sample size, and the methods of data analysis.
  • Choose the data collection method: Select the appropriate method for collecting data based on the research question and available resources. This could include surveys, experiments, observational studies, or other methods.
  • Determine the sample size: Determine the appropriate sample size for the research question. This will depend on the level of precision needed and the variability of the population being studied.
  • Develop the data collection instrument: Develop a questionnaire or survey instrument that will be used to collect the data. The instrument should be designed to gather the specific information needed to answer the research question.
  • Pilot test the data collection instrument: Before collecting data from the entire sample, pilot test the instrument on a small group to identify any potential problems or issues.
  • Collect the data: Collect the data from the selected sample using the chosen data collection method.
  • Clean and organize the data: Organize the data into a format that can be easily analyzed. This may involve checking for missing data, outliers, or errors.
  • Analyze the data: Analyze the data using appropriate statistical methods. This may involve descriptive statistics, inferential statistics, or other types of analysis.
  • Interpret the results: Interpret the results of the analysis in the context of the research question. Identify any patterns, trends, or relationships in the data and draw conclusions based on the findings.
  • Communicate the findings: Communicate the findings of the analysis in a clear and concise manner, using appropriate tables, graphs, and other visual aids as necessary. The results should be presented in a way that is accessible to the intended audience.
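For the sample-size step, one standard approach when estimating a proportion is Cochran's formula, n = z²p(1−p)/e². The defaults below are the conventional choices (95% confidence, maximum-variability p = 0.5, 5% margin of error), offered here as an illustrative sketch rather than a one-size-fits-all rule:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample-size formula for estimating a proportion.
    z: z-score for the desired confidence level (1.96 ~ 95%)
    p: expected proportion (0.5 maximizes the required n)
    e: acceptable margin of error
    """
    return math.ceil(z * z * p * (1 - p) / (e * e))
```

With the defaults this gives 385 respondents, the figure commonly quoted for a 95% confidence level and a 5% margin of error.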

Examples of Quantitative Data

Here are some examples of quantitative data:

  • Height of a person (measured in inches or centimeters)
  • Weight of a person (measured in pounds or kilograms)
  • Temperature (measured in Fahrenheit or Celsius)
  • Age of a person (measured in years)
  • Number of cars sold in a month
  • Amount of rainfall in a specific area (measured in inches or millimeters)
  • Number of hours worked in a week
  • GPA (grade point average) of a student
  • Sales figures for a product
  • Time taken to complete a task
  • Distance traveled (measured in miles or kilometers)
  • Speed of an object (measured in miles per hour or kilometers per hour)
  • Number of people attending an event
  • Price of a product (measured in dollars or other currency)
  • Blood pressure (measured in millimeters of mercury)
  • Amount of sugar in a food item (measured in grams)
  • Test scores (measured on a numerical scale)
  • Number of website visitors per day
  • Stock prices (measured in dollars)
  • Crime rates (measured by the number of crimes per 100,000 people)

Applications of Quantitative Data

Quantitative data has a wide range of applications across various fields, including:

  • Scientific research: Quantitative data is used extensively in scientific research to test hypotheses and draw conclusions. For example, in biology, researchers might use quantitative data to measure the growth rate of cells or the effectiveness of a drug treatment.
  • Business and economics: Quantitative data is used to analyze business and economic trends, forecast future performance, and make data-driven decisions. For example, a company might use quantitative data to analyze sales figures and customer demographics to determine which products are most popular among which segments of their customer base.
  • Education: Quantitative data is used in education to measure student performance, evaluate teaching methods, and identify areas where improvement is needed. For example, a teacher might use quantitative data to track the progress of their students over the course of a semester and adjust their teaching methods accordingly.
  • Public policy: Quantitative data is used in public policy to evaluate the effectiveness of policies and programs, identify areas where improvement is needed, and develop evidence-based solutions. For example, a government agency might use quantitative data to evaluate the impact of a social welfare program on poverty rates.
  • Healthcare: Quantitative data is used in healthcare to evaluate the effectiveness of medical treatments, track the spread of diseases, and identify risk factors for various health conditions. For example, a doctor might use quantitative data to monitor the blood pressure levels of their patients over time and adjust their treatment plan accordingly.

Purpose of Quantitative Data

The purpose of quantitative data is to provide a numerical representation of a phenomenon or observation. Quantitative data is used to measure and describe the characteristics of a population or sample, and to test hypotheses and draw conclusions based on statistical analysis. Some of the key purposes of quantitative data include:

  • Measuring and describing: Quantitative data is used to measure and describe the characteristics of a population or sample, such as age, income, or education level. This allows researchers to better understand the population they are studying.
  • Testing hypotheses: Quantitative data is often used to test hypotheses and theories by collecting numerical data and analyzing it using statistical methods. This can help researchers determine whether there is a statistically significant relationship between variables or whether there is support for a particular theory.
  • Making predictions: Quantitative data can be used to make predictions about future events or trends based on past data. This is often done through statistical modeling or time series analysis.
  • Evaluating programs and policies: Quantitative data is often used to evaluate the effectiveness of programs and policies. This can help policymakers and program managers identify areas where improvements can be made and make evidence-based decisions about future programs and policies.

When to use Quantitative Data

Quantitative data is appropriate to use when you want to collect and analyze numerical data that can be measured and analyzed using statistical methods. Here are some situations where quantitative data is typically used:

  • When you want to measure a characteristic or behavior: If you want to measure something like the height or weight of a population or the number of people who smoke, you would use quantitative data to collect this information.
  • When you want to compare groups: If you want to compare two or more groups, such as comparing the effectiveness of two different medical treatments, you would use quantitative data to collect and analyze the data.
  • When you want to test a hypothesis: If you have a hypothesis or theory that you want to test, you would use quantitative data to collect data that can be analyzed statistically to determine whether your hypothesis is supported by the data.
  • When you want to make predictions: If you want to make predictions about future trends or events, such as predicting sales for a new product, you would use quantitative data to collect and analyze data from past trends to make your prediction.
  • When you want to evaluate a program or policy: If you want to evaluate the effectiveness of a program or policy, you would use quantitative data to collect data about the program or policy and analyze it statistically to determine whether it has had the intended effect.

Characteristics of Quantitative Data

Quantitative data is characterized by several key features, including:

  • Numerical values: Quantitative data consists of numerical values that can be measured and counted. These values are often expressed in terms of units, such as dollars, centimeters, or kilograms.
  • Continuous or discrete: Quantitative data can be either continuous or discrete. Continuous data can take on any value within a certain range, while discrete data can only take on certain values.
  • Objective: Quantitative data is objective, meaning that it is not influenced by personal biases or opinions. It is based on empirical evidence that can be measured and analyzed using statistical methods.
  • Large sample size: Quantitative data is often collected from a large sample size in order to ensure that the results are statistically significant and representative of the population being studied.
  • Statistical analysis: Quantitative data is typically analyzed using statistical methods to determine patterns, relationships, and other characteristics of the data. This allows researchers to make more objective conclusions based on empirical evidence.
  • Precision: Quantitative data is often very precise, with measurements taken to multiple decimal points or significant figures. This precision allows for more accurate analysis and interpretation of the data.

Advantages of Quantitative Data

Some advantages of quantitative data are:

  • Objectivity: Quantitative data is usually objective because it is based on measurable and observable variables. This means that different people who collect the same data will generally get the same results.
  • Precision: Quantitative data provides precise measurements of variables. This means that it is easier to make comparisons and draw conclusions from quantitative data.
  • Replicability: Since quantitative data is based on objective measurements, it is often easier to replicate research studies using the same or similar data.
  • Generalizability: Quantitative data allows researchers to generalize findings to a larger population. This is because quantitative data is often collected using random sampling methods, which help to ensure that the data is representative of the population being studied.
  • Statistical analysis: Quantitative data can be analyzed using statistical methods, which allows researchers to test hypotheses and draw conclusions about the relationships between variables.
  • Efficiency: Quantitative data can often be collected quickly and efficiently using surveys or other standardized instruments, which makes it a cost-effective way to gather large amounts of data.

Limitations of Quantitative Data

Some limitations of quantitative data are as follows:

  • Limited context: Quantitative data does not provide information about the context in which the data was collected. This can make it difficult to understand the meaning behind the numbers.
  • Limited depth: Quantitative data is often limited to predetermined variables and questions, which may not capture the complexity of the phenomenon being studied.
  • Difficulty in capturing qualitative aspects: Quantitative data is unable to capture the subjective experiences and qualitative aspects of human behavior, such as emotions, attitudes, and motivations.
  • Possibility of bias: The collection and interpretation of quantitative data can be influenced by biases, such as sampling bias, measurement bias, or researcher bias.
  • Simplification of complex phenomena: Quantitative data may oversimplify complex phenomena by reducing them to numerical measurements and statistical analyses.
  • Lack of flexibility: Quantitative data collection methods may not allow for changes or adaptations in the research process, which can limit the ability to respond to unexpected findings or new insights.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Enago Academy

How to Use Creative Data Visualization Techniques for Easy Comprehension of Qualitative Research

“A picture is worth a thousand words!” The adage holds true even when reporting research data. Studies with overwhelming data can be difficult or time-consuming for some readers to comprehend. While quantitative research data is comparatively easy to present with graphs, pie charts, and the like, researchers face an undeniable challenge when presenting qualitative research data. In this article, we elaborate on effectively presenting qualitative research using data visualization techniques.

What is Data Visualization?

Data visualization is the process of converting textual information into graphical and illustrative representations. It is imperative to think beyond numbers to get a holistic and comprehensive understanding of research data. Hence, this technique is adopted to help presenters communicate relevant research data in a way that’s easy for the viewer to interpret and draw conclusions.

What Is the Importance of Data Visualization in Qualitative Research?

According to the form in which it is collected and expressed, data is broadly divided into qualitative and quantitative. Quantitative data expresses size or quantity as countable numbers. Qualitative data, by contrast, cannot be expressed as numerical values; it describes subjects, places, things, events, activities, or concepts in non-numeric form.

What Are the Advantages of Good Data Visualization Techniques?

Excellent data visualization techniques have several benefits:

  • Human eyes are drawn to patterns and colors. In the age of Big Data, visualization is an asset for quickly and easily comprehending the large amounts of data a research study generates.
  • It enables viewers to recognize emerging trends and respond faster on the basis of what they see and assimilate.
  • Illustrations make it easier to identify correlated parameters.
  • It allows the presenter to narrate a story while helping the viewer understand the data and draw conclusions from it.
  • Because humans process visual images better than text, visualizations stay in viewers' memory longer.

Different Types of Data Visualization Techniques in Qualitative Research

Here are several data visualization techniques for presenting qualitative data so that research findings are easier to comprehend.

1. Word Clouds

  • A word cloud is a data visualization technique for visualizing one-word descriptions.
  • It is a single image composed of multiple words associated with a particular text or subject.
  • The size of each word indicates its importance or frequency in the data.
  • Wordle and Tagxedo are two widely used tools for creating word clouds.
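The size-by-frequency rule can be sketched in a few lines. The input text and the scaling constants below are arbitrary illustrations (real tools like Wordle or Tagxedo also handle layout, rotation, and collision):

```python
from collections import Counter
import re

text = "data drives decisions and data informs design"  # hypothetical input
words = re.findall(r"[a-z]+", text.lower())
freq = Counter(words)

# Font size proportional to frequency: base 12pt, +8pt per extra occurrence.
# The constants are an arbitrary choice for illustration, not a standard.
sizes = {word: 12 + 8 * (count - 1) for word, count in freq.items()}
```

Here "data" occurs twice, so it would render larger than every once-only word.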

2. Graphic Timelines

  • Graphic timelines present regular text-based timelines with pictorial illustrations, diagrams, photos, and other images.
  • They visually display a series of events in chronological order on a timescale.
  • Showcasing timelines graphically makes it easier to understand critical milestones in a study.

3. Icons Beside Descriptions

  • Rather than writing long descriptive paragraphs, placing relevant icons beside brief, concise points enables quick and easy comprehension.

4. Heat Map

  • A heat map displays differences in data through variations in color.
  • The intensity and frequency of the data are conveyed by these color codes.
  • A clear legend must accompany the heat map so that it can be interpreted correctly.
  • Heat maps also help identify trends in data.
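The color-coding idea reduces to binning each value into one of a few intensity levels. A text-mode sketch, where a ramp of characters stands in for a color scale (the ramp itself is an arbitrary assumption):

```python
SHADES = " .:*#"  # low -> high intensity; a text stand-in for a color scale

def heat_shade(value, lo, hi):
    """Map a value in [lo, hi] to one of the shade characters."""
    t = (value - lo) / (hi - lo)                       # normalize to [0, 1]
    index = min(int(t * len(SHADES)), len(SHADES) - 1) # clamp the top bin
    return SHADES[index]

grid = [[1, 5], [9, 10]]  # made-up 2x2 data
rendered = ["".join(heat_shade(v, 1, 10) for v in row) for row in grid]
# rendered == [" :", "##"]
```

This is also why the legend matters: without knowing which shade maps to which range, the picture cannot be read back into numbers.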

5. Mind Map

  • A mind map helps explain concepts and ideas linked to a central idea.
  • It allows visual structuring of ideas without overwhelming the viewer with large amounts of text.
  • Mind maps can also be used to present graphical abstracts.

Do’s and Don’ts of Data Visualization Techniques

It is not easy to visualize qualitative data in a way viewers can recognize and comprehend at a glance. However, well-visualized qualitative data can clearly convey the key points to readers and listeners in presentations.

Are you struggling with ways to display your qualitative data? Which data visualization techniques have you used before? Let us know about your experience in the comments section below!


Would it be ideal or suggested to use these techniques to display qualitative data in a thesis perhaps?

Using data visualization techniques in a qualitative research thesis can help convey your findings in a more engaging and comprehensible manner. Here’s a brief overview of how to incorporate data visualization in such a thesis:

Select Relevant Visualizations: Identify the types of data you have (e.g., textual, audio, visual) and the appropriate visualization techniques that can represent your qualitative data effectively. Common options include word clouds, charts, graphs, timelines, and thematic maps.

Data Preparation: Ensure your qualitative data is well-organized and coded appropriately. This might involve using qualitative analysis software like NVivo or Atlas.ti to tag and categorize data.

Create Visualizations: Generate visualizations that illustrate key themes, patterns, or trends within your qualitative data. For example:

  • Word clouds can highlight frequently occurring terms or concepts.
  • Bar charts or histograms can show the distribution of specific themes or categories.
  • Timeline visualizations can help display chronological trends.
  • Concept maps can illustrate the relationships between different concepts or ideas.

Integrate Visualizations into Your Thesis: Incorporate these visualizations within your thesis to complement your narrative. Place them strategically to support your arguments or findings. Include clear and concise captions and labels for each visualization, providing context and explaining their significance.

Interpretation: In the text of your thesis, interpret the visualizations. Explain what patterns or insights they reveal about your qualitative data. Offer meaningful insights and connections between the visuals and your research questions or hypotheses.

Maintain Consistency: Maintain a consistent style and formatting for your visualizations throughout the thesis. This ensures clarity and professionalism.

Ethical Considerations: If your qualitative research involves sensitive or personal data, consider ethical guidelines and privacy concerns when presenting visualizations. Anonymize or protect sensitive information as needed.

Review and Refinement: Before finalizing your thesis, review the visualizations for accuracy and clarity. Seek feedback from peers or advisors to ensure they effectively convey your qualitative findings.

Appendices: If you have a large number of visualizations or detailed data, consider placing some in appendices. This keeps the main body of your thesis uncluttered while providing interested readers with supplementary information.

Cite Sources: If you use specific software or tools to create your visualizations, acknowledge and cite them appropriately in your thesis.

Hope you find this helpful. Happy Learning!


IMAGES

  1. Quantitative Data: What it is, Types & Examples

    visual representation of quantitative data

  2. Visualizing Quantitative Data: Using Graphs and Charts ~GM Lectures

    visual representation of quantitative data

  3. What is Quantitative Data?

    visual representation of quantitative data

  4. The Visual Display of Quantitative Information

    visual representation of quantitative data

  5. Interpreting the Quantitative Data (Numbers) in Your Business

    visual representation of quantitative data

  6. Importance of Graphical Representation of Data

    visual representation of quantitative data

VIDEO

  1. AP Statistics Chapter 3: Displaying and Describing Quantitative Data (Lehman

  2. شرح مادة الاحصاء المحاضرة الثالثة "Graphical Representation of Quantitative Data"

  3. 07. Lecture 3.1 Tables for quantitative data recoding and visual binning 1

  4. Math 1030 Unit 5

  5. Stats Tutor Frequency Polygon definition and drawing

  6. Foundations of Data Visualisation

COMMENTS

  1. The Visual Display of Quantitative Information

    The Visual Display of Quantitative Information The classic book on statistical graphics, charts, tables. Theory and practice in the design of data graphics, 250 illustrations of the best (and a few of the worst) statistical graphics, with detailed analysis of how to display data for precise, effective, quick analysis.

  2. The Visual Display of Quantitative Information, 2nd Ed

    Editing and improving graphics. The data-ink ratio. Time-series, relational graphics, data maps, multivariate designs. Detection of graphical deception: design variation vs. data variation. Sources of deception. Aesthetics and data graphical displays. This is the second edition of The Visual Display of Quantitative Information.

  3. 1.3: Visual Representation of Data II

    The most important visual representation of quantitative data is a histogram. Histograms actually look a lot like a stem-and-leaf plot, except turned on its side and with the row of numbers turned into a vertical bar, like a bar graph. The height of each of these bars would be how many. Another way of saying that is that we would be making bars ...

  4. 11 Data Visualization Techniques for Every Use-Case with Examples

    The Power of Good Data Visualization. Data visualization involves the use of graphical representations of data, such as graphs, charts, and maps. Compared to descriptive statistics or tables, visuals provide a more effective way to analyze data, including identifying patterns, distributions, and correlations and spotting outliers in complex ...

  5. Data Display in Qualitative Research

    A core goal of quantitative data display is to provide "a visual one-to-one correspondence of number to graphical element" (Onwuegbuzie & Dickinson, 2008, p. 204). ... Visual representation of data is well facilitated by technology media, and it is expected that visual displays will become more prominent in qualitative research analysis. ...

  6. Data Visualization: Definition, Benefits, and Examples

    Data visualization is the representation of information and data using charts, graphs, maps, and other visual tools. These visualizations allow us to easily understand any patterns, trends, or outliers in a data set. Data visualization also presents data to the general public or specific audiences without technical knowledge in an accessible ...

  7. 17 Important Data Visualization Techniques

    Some data visualization tools, however, allow you to add interactivity to your map so the exact values are accessible. 15. Word Cloud. A word cloud, or tag cloud, is a visual representation of text data in which the size of the word is proportional to its frequency. The more often a specific word appears in a dataset, the larger it appears in ...
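
    The proportional sizing described above can be sketched with the standard library alone. The sample text and the point-size range below are hypothetical choices, not from the source:

```python
from collections import Counter

# In a word cloud, each word's display size is proportional to its
# frequency: the more often a word appears, the larger it is drawn.
text = "data data data chart chart graph"
freq = Counter(text.split())

min_pt, max_pt = 10, 40  # hypothetical font-size range in points
top = max(freq.values())
sizes = {word: min_pt + (max_pt - min_pt) * n / top
         for word, n in freq.items()}

print(sizes)  # {'data': 40.0, 'chart': 30.0, 'graph': 20.0}
```

    A real word-cloud tool additionally packs the scaled words into the available space, but the frequency-to-size mapping is the core idea.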

  8. What Is Data Visualization? Definition & Examples

    What is data visualization? Data visualization is the graphical representation of information and data. By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data. Additionally, it provides an excellent way for employees or business owners to present data to non-technical audiences ...

  9. Library Guides: Research Data Literacy 101: Data Visualization

    Data visualization - visual representations of quantitative data in a schematic form. Information visualization - interactive, visual representations of data to amplify understanding. Data is transformed into an interactive image. Concept visualization - methods to elaborate mostly qualitative concepts, ideas, plans and analysis.

  10. Visualizing Quantitative Data

    Definition. Quantitative data are data that can be measured on a numerical scale. Examples of such data are length, height, volume, speed, temperature, or cost. A quantitative variable can be transformed into a categorical variable by grouping, for example, weight can be divided into underweight, normal weight, and overweight.

  11. The Science of Visual Data Communication: What Works

    A summary of the power and limits of visual data processing. ... designers often prioritize the vertical and horizontal dimensions of two-dimensional space when depicting or organizing quantitative data. Faced with a single column of numbers in a spreadsheet, a visualization designer might depict those data vertically with position (in a bar or ...

  12. Data Visualization: How to Present Your Research Data Visually

    Researchers who need to communicate quantitative data have several options to present it visually through bar graphs, pie charts, histograms and even infographics. However, communicating research findings based on complex datasets is not always easy. This is where effective data visualization can greatly help readers.

  13. Ultimate Guide to Using Data Visualization in Your Presentation

    1. Collect your data. First things first: have all your information ready. Especially for long business presentations, there can be a lot of information to consider when working on your slides. Having it all organized and ready to use will make the whole process much easier to go through. 2.

  14. The Visual Display of Quantitative Information / Edition 2

    Edward Tufte is a professor at Yale University, where he teaches courses in statistical evidence and information design. His books include Visual Explanations, Envisioning Information, The Visual Display of Quantitative Information, Political Control of the Economy, Data Analysis for Politics and Policy, and Size and Democracy (with Robert A. Dahl). He is a fellow of the American Statistical ...

  15. Data and information visualization

    Data and information visualization (data viz/vis or info viz/vis) is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items.

  16. What is a data display? Definition, Types, & Examples

    Also known as data visualization, a data display is a visual representation of raw or processed data that aims to communicate a small number of insights about the behavior of an underlying table, which is otherwise difficult or impossible to understand with the naked eye. ... Since data is quantitative, applying data displays to qualitative ...

  17. What Is Data Visualization?

    Data visualization is the representation of data through use of common graphics, such as charts, plots, infographics and even animations. These visual displays of information communicate complex data relationships and data-driven insights in a way that is easy to understand. Data visualization can be utilized for a variety of purposes, and it ...

  18. Presentation of Quantitative Data: Data Visualization

    Now, let us create a chart from beginning to end for Fig. 2.6 —the Line chart for all products. The process we will use includes the following steps: (1) select a chart type, (2) identify the data to be charted including the x-axis, and (3) provide titles for the axes, series, and chart.
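
    The snippet's three steps describe chart creation in a spreadsheet, but the same workflow maps onto a plotting library. A hypothetical matplotlib sketch; the month labels and sales figures are illustrative, not from the source:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar"]  # hypothetical x-axis categories
sales = [120, 135, 150]         # hypothetical series values

fig, ax = plt.subplots()
ax.plot(months, sales, label="All products")  # (1) chart type: line
ax.set_xlabel("Month")                        # (2) x-axis data and label
ax.set_ylabel("Units sold")                   # (3) titles for the axes,
ax.set_title("Monthly sales")                 #     series, and chart
ax.legend()
fig.savefig("line_chart.png")
```

    The order mirrors the spreadsheet steps: pick the chart type first, then bind the data to the axes, then label everything.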

  19. Quantitative Techniques and Graphical Representations for Interpreting

    The visual representation of the data should always be inspected, and the individual values can be analyzed. The researchers can, and must, still seek the possible causes of specific outlier measurements according to their knowledge about the client, the context, and the target behavior. ... Visual and Quantitative Analyses should be Used in ...

  20. 16 Best Types of Charts and Graphs for Data Visualization [+ Guide]

    The data in the chart is nested in the form of rectangles and sub-rectangles. Each of these rectangles and sub-rectangles has different dimensions and plot colors, which are assigned with respect to the quantitative data. Best Use Cases for This Type of Chart. The treemap chart compares the different products in a category or sub-category.
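
    The area-proportional nesting described above can be sketched with a simple "slice" layout, the most basic treemap scheme: divide a unit square into strips whose areas match each category's share of the total. The product values below are hypothetical, and real treemaps usually use squarified layouts instead:

```python
# Slice layout: each category gets a vertical strip of a unit square
# whose width (and therefore area) is proportional to its value.
values = {"Product A": 50, "Product B": 30, "Product C": 20}
total = sum(values.values())

x = 0.0
rects = {}
for name, v in values.items():
    width = v / total
    rects[name] = (x, 0.0, width, 1.0)  # (x, y, width, height)
    x += width

print(rects)
```

    Sub-categories would be laid out the same way, recursively, inside their parent's rectangle.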

  21. Quantitative Data

    The purpose of quantitative data is to provide a numerical representation of a phenomenon or observation. Quantitative data is used to measure and describe the characteristics of a population or sample, and to test hypotheses and draw conclusions based on statistical analysis. Some of the key purposes of quantitative data include:

  22. Data representations

    A variety of data representations can be used to communicate qualitative (also called categorical) data. A table summarizes the data using rows and columns. Each column contains data for a single variable, and a basic table contains one column for the qualitative variable and one for the quantitative variable.

  23. 5 Creative Data Visualization Techniques for Qualitative Research

    Different Types of Data Visualization Techniques in Qualitative Research. Here are several data visualization techniques for presenting qualitative data for better comprehension of research data. 1. Word Clouds. A word cloud is a type of data visualization technique which helps in visualizing one-word descriptions.