How to create a perfect design hypothesis


A design hypothesis is a cornerstone of the UX and UI design process. It guides the entire process, defines research needs, and heavily influences the final outcome.


Doing any design work without a well-defined hypothesis is like driving a car without headlights. Although still possible, it forces you to go slower and dramatically increases the chances of unpleasant pitfalls.

The importance of a hypothesis in the design process


There are three main reasons why no discovery or design process should start without a well-defined and framed hypothesis. A good design hypothesis helps us:

  • Guide the research
  • Nail the solutions
  • Maximize learnings and enable iterative design


A design hypothesis guides research

A good hypothesis states not only the change we want to make but also the final objective and our current beliefs. It allows designers to assess how much actual evidence there is to support the hypothesis and focus their research and discovery efforts on areas they are least confident about.

Research for the sake of research brings waste. Research for the sake of validating specific hypotheses brings learnings.

A design hypothesis influences the design and solution

Design hypothesis gives much-needed context. It helps you:

  • Ideate the right solutions
  • Focus on the proper UX
  • Polish UI details

The more detailed and robust the design hypothesis, the more context you have to help you make the best design decisions.

A design hypothesis maximizes learnings and enables iterative design

If you design new features blindly, it’s hard to truly learn from the launch. Some metrics might go up; others might go down. So what?

With a well-defined design hypothesis, you can not only validate whether the design itself works but also better understand why and how to improve it in the future. This helps you iterate on your learnings.

Components of a good design hypothesis

I am not a fan of templatizing how a solid design hypothesis should look. There are various ways to approach it, and you should choose whatever works best for you. However, there are three essential elements you should include to ensure you get all the benefits of using design hypotheses mentioned earlier:

  • Design change
  • The objective
  • Underlying assumptions


Design change for your hypothesis

The fundamental part is the definition of what you are trying to do. If you are working on shortening the onboarding process, you might simply put “[…] we’d like to shorten the onboarding process […].”

The goal here is to give context to a wider audience and to be able to quickly reference what the design hypothesis concerns. Don’t fret too much about this part; simply boil the problem down to its essentials. What is frustrating your users?

The objective of your hypothesis

The objective is the “why” behind the change. What exactly are you trying to achieve with the planned design change? The objective serves a few purposes.


First, it’s a great sanity check. You’d be surprised how many designers propose various ideas, changes, and improvements without a clear goal. Changing the design just for the sake of changing the design is a no-no.

It also helps you step back and see if the change you are considering is the best approach. For instance, if you are considering shortening the onboarding to increase the percentage of users completing it, are there any other design changes you can think of to achieve the same goal? Maybe instead of shortening the onboarding, there’s a bigger opportunity in simply adjusting the copy? Defining clear objectives invites conversations about whether you focus on the right things.

Additionally, a clearly defined objective gives you a measure of success to evaluate the effectiveness of your solution. If you believed you could boost the completion rate by 40 percent but achieved only a 10 percent lift, then either the hypothesis was flawed (a good learning point for the future), or there’s still room for improvement.

Last but not least, a clear objective is essential for the next step: mapping underlying assumptions.

Mapping underlying assumptions in your hypothesis

Now that you know what you plan to do and which goal you are trying to achieve, it’s time for the most critical question.

Why do you believe the proposed design change will achieve the desired objective? Whether it’s because you heard some interesting insights during user interviews or spotted patterns in users’ behavioral data, note it down.


Even if you don’t have any strong justification and base your hypothesis on pure guesses (we all do that sometimes!), clearly name these beliefs. Listing out all your assumptions will help you:

  • Focus your discovery efforts on validating these assumptions to avoid late disappointments
  • Better analyze results post-launch to maximize your learnings

You’ll see exactly how in the examples of good design hypotheses below.

Examples of good design hypotheses

Let’s put it all into practice and see what a good design hypothesis might look like.

I’ll use two examples:

  • A simple design hypothesis
  • A robust design hypothesis

You should still formulate a design hypothesis if you are working on minor changes, such as changing the copy on buttons. But there’s also no point in spending hours formulating a perfect hypothesis for a fifteen-minute test. In these cases, I’d just use a simple one-sentence hypothesis.

Yet, suppose you are working on an extensive and critical initiative, such as redesigning the whole conversion funnel. In that case, you might want to put more effort into a more robust and detailed design hypothesis to guide your entire process.

Example 1: A simple design hypothesis

A simple example of a design hypothesis could be:

Moving the sign-up button to the top of the page will increase our conversion to registration by 10 percent, as most users don’t look at the bottom of the page.

Although it’s pretty straightforward, it still can help you in a few ways.

First of all, it helps prioritize experiments. If there is another small experiment in the backlog, but with the hypothesis that it’ll improve conversion to registration by 15 percent, it might influence the order of things you work on.

Impact assessments (where the 10 percent or 15 percent comes from) are another quite advanced topic, so I won’t cover it in detail, but in most cases, you can ask your product manager and/or data analyst for help.
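
To make that comparison concrete, here is a minimal sketch (in Python, with entirely invented traffic and conversion numbers) of how hypothesized lifts can be turned into expected extra registrations so that two small experiments can be ranked; in practice the inputs would come from your analytics and your product manager or data analyst.

    # All numbers are hypothetical and for illustration only.
    monthly_visitors = 50_000         # visitors who see the page each month
    baseline_conversion = 0.04        # current visit-to-registration rate

    experiments = {
        "Move the sign-up button to the top": 0.10,  # hypothesized relative lift
        "Rewrite the onboarding copy":        0.15,
    }

    # Rank experiments by the extra registrations they are expected to bring.
    for name, lift in sorted(experiments.items(), key=lambda kv: kv[1], reverse=True):
        extra = monthly_visitors * baseline_conversion * lift
        print(f"{name}: ~{extra:.0f} extra registrations per month")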

It also allows you to validate the hypothesis without even experimenting. If you guessed that people don’t look at the bottom of the page, you can check your analytics tools to see what the scroll rate is or check heatmaps.
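
For the scroll-rate check, a rough sketch like the one below would do; it assumes you can export each session’s maximum scroll depth from your analytics tool, and the sample data is invented.

    # Invented sample: maximum scroll depth per session, as a fraction of page height.
    max_scroll_depths = [0.35, 0.50, 0.95, 0.40, 1.00, 0.30, 0.65, 0.25, 0.90, 0.45]

    reached_bottom = sum(1 for depth in max_scroll_depths if depth >= 0.9)
    share = reached_bottom / len(max_scroll_depths)
    print(f"{share:.0%} of sessions scroll close to the bottom of the page")
    # A low share supports the "most users don't look at the bottom" assumption.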

Lastly, if your hypothesis fails (that is, the conversion rate doesn’t improve), you get valuable insights that can help you reassess other hypotheses based on the “most users don’t look at the bottom of the page” assumption.

Example 2: A robust design hypothesis

Now let’s take a look at a slightly more robust hypothesis. An example could be:

Shortening the number of screens during onboarding by half will boost our free trial to subscription conversion by 20 percent because:

  • Most users don’t complete the whole onboarding flow
  • Shorter onboarding will increase the onboarding completion rate
  • Focusing on the most important features will increase their adoption, which will lead to aha moments and better premium retention
  • Users will perceive our product as simpler and less complex

The most significant difference is our effort to map all relevant assumptions.

Listing out assumptions can help you test them out in isolation before committing to the initiative.

For example, if you believe most users don’t complete the onboarding flow, you can check self-serve tools or ask your PM for help to validate whether that’s true. If the data shows only 10 percent of users finish the onboarding, the hypothesis is stronger and more likely to be successful. If, on the other hand, most users do complete the whole onboarding, the idea suddenly becomes less promising.
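
As an illustration, a quick funnel calculation along these lines (step names and counts are made up) shows how you might check the completion-rate assumption before committing to the redesign:

    # Hypothetical event counts per onboarding step from a product analytics export.
    funnel = {
        "onboarding_started":    10_000,
        "step_2_completed":       7_400,
        "step_3_completed":       4_100,
        "onboarding_completed":   1_050,
    }

    started = funnel["onboarding_started"]
    for step, count in funnel.items():
        print(f"{step}: {count / started:.0%} of users who started")
    # If only ~10% finish, the "most users don't complete onboarding" assumption
    # holds and the shortening hypothesis looks more promising.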

The second advantage is the number of learnings you can get from the post-release analysis.

Say the change led to only a 10 percent increase in conversion instead of the expected 20 percent. Rather than blindly guessing why it didn’t meet expectations, you can see how each assumption turned out.

It might turn out that some users actually perceive the product as more complex (rather than less complex, as you assumed), as they have difficulty figuring out some functionalities that were skipped in the onboarding. Thus, they are less willing to convert.

Not only can it help you propose a second iteration of the experiment, but that learning will also help you greatly when working on other initiatives based on a similar assumption.

Closing thoughts

Ensuring everything you work on is based on a solid design hypothesis can greatly help you and your career.

It’ll guide your research and discovery in the right direction, enable better iterative design, maximize learning, and help you make better design decisions.

Some designers might think, “Hypotheses are the job of a product manager, not a designer.”

While that’s partly true, I believe designers should be proactive in working with hypotheses.

If none are set, define them yourself for the sake of your own success. Whether your designs succeed or, worse, flop, no one will care who did or didn’t set the hypotheses behind those decisions; you’ll be judged all the same.

If there’s a hypothesis set upfront, try to understand it, refine it, and challenge it if needed.

The most senior and sought-after product designers are not just pixel-pushers who do what they’re told; they play an active role in shaping the direction of the product as a whole. Becoming fluent in working with hypotheses is a significant step toward true seniority.


Creating a research hypothesis: How to formulate and test UX expectations

User Research

Mar 21, 2024


A research hypothesis helps guide your UX research with focused predictions you can test and learn from. Here’s how to formulate your own hypotheses.

Armin Tanovic


All great products were once just thoughts—the spark of an idea waiting to be turned into something tangible.

A research hypothesis in UX is very similar. It’s the starting point for your user research; the jumping off point for your product development initiatives.

Formulating a UX research hypothesis helps you guide your UX research project in the right direction, collect insights, and evaluate not only whether an idea is worth pursuing, but how to go after it.

In this article, we’ll cover what a research hypothesis is, how it's relevant to UX research, and the best formula to create your own hypothesis and put it to the test.


What defines a research hypothesis?

A research hypothesis is a statement or prediction that needs testing to be proven or disproven.

Let’s say you’ve got an inkling that making a change to a feature icon will increase the number of users that engage with it—with some minor adjustments, this theory becomes a research hypothesis: “ Adjusting Feature X’s icon will increase daily average users by 20% ”.

A research hypothesis is the starting point that guides user research . It takes your thought and turns it into something you can quantify and evaluate. In this case, you could conduct usability tests and user surveys, and run A/B tests to see if you’re right—or, just as importantly, wrong .

A good research hypothesis has three main features:

  • Specificity: A hypothesis should clearly define what variables you’re studying and what you expect an outcome to be, without ambiguity in its wording
  • Relevance: A research hypothesis should have significance for your research project by addressing a potential opportunity for improvement
  • Testability: Your research hypothesis must be able to be tested in some way such as empirical observation or data collection

What is the difference between a research hypothesis and a research question?

Research questions and research hypotheses are often treated as one and the same, but they’re not quite identical.

A research hypothesis acts as a prediction or educated guess of outcomes , while a research question poses a query on the subject you’re investigating. Put simply, a research hypothesis is a statement, whereas a research question is (you guessed it) a question.

For example, here’s a research hypothesis: “ Implementing a navigation bar on our dashboard will improve customer satisfaction scores by 10%. ”

This statement acts as a testable prediction. It doesn’t pose a question, it’s a prediction. Here’s what the same hypothesis would look like as a research question: “ Will integrating a navigation bar on our dashboard improve customer satisfaction scores? ”

The distinction is minor, and both are focused on uncovering the truth behind the topic, but they’re not quite the same.

Why do you use a research hypothesis in UX?

Research hypotheses in UX are used to establish the direction of a particular study, research project, or test. Formulating a hypothesis and testing it ensures the UX research you conduct is methodical, focused, and actionable. It aids every phase of your research process , acting as a north star that guides your efforts toward successful product development .

Typically, UX researchers will formulate a testable hypothesis to help them fulfill a broader objective, such as improving customer experience or product usability. They’ll then conduct user research to gain insights into their prediction and confirm or reject the hypothesis.

A proven or disproven hypothesis will tell if your prediction is right, and whether you should move forward with your proposed design—or if it's back to the drawing board.

Formulating a hypothesis can be helpful in anything from prototype testing to idea validation, and design iteration. Put simply, it’s one of the first steps in conducting user research.

Whether you’re in the initial stages of product discovery for a new product or a single feature, or conducting ongoing research, a strong hypothesis presents a clear purpose and angle for your research. It also helps you understand which user research methodology to use to get your answers.

What are the types of research hypotheses?

Not all hypotheses are built the same—there are different types with different objectives. Understanding the different types enables you to formulate a research hypothesis that outlines the angle you need to take to prove or disprove your predictions.

Here are some of the different types of hypotheses to keep in mind.

Null and alternative hypotheses

While a normal research hypothesis predicts that a specific outcome will occur based upon a certain change of variables, a null hypothesis predicts that no difference will occur when you introduce a new condition.

By that reasoning, a null hypothesis would be:

  • Adding a new CTA button to the top of our homepage will make no difference in conversions

Null hypotheses are useful because they help outline what your test or research study is trying to disprove, rather than prove, through a research hypothesis.

An alternative hypothesis states the exact opposite of a null hypothesis. It proposes that a certain change will occur when you introduce a new condition or variable. For example:

  • Adding a CTA button to the top of our homepage will cause a difference in conversion rates
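
One conventional way to decide between the null and alternative hypotheses is a two-proportion z-test on A/B test data. The sketch below is a generic illustration with invented visitor and conversion counts, not something the article prescribes:

    from math import sqrt
    from statistics import NormalDist

    # Invented A/B results: control (no top CTA) vs. variant (CTA added at the top).
    control_visitors, control_conversions = 4_800, 192   # 4.0% conversion
    variant_visitors, variant_conversions = 4_750, 238   # ~5.0% conversion

    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"control {p1:.1%}, variant {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
    # A small p-value (commonly < 0.05) rejects the null hypothesis of "no difference"
    # in favour of the alternative that the CTA changes the conversion rate.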

Simple hypotheses and complex hypotheses

A simple hypothesis is a prediction that includes only two variables in a cause-and-effect sequence, with one variable dependent on the other. It predicts that you'll achieve a particular outcome based on a certain condition. The outcome is known as the dependent variable and the change causing it is the independent variable .

For example, this is a simple hypothesis:

  • Including the search function on our mobile app will increase user retention

The expected outcome of increasing user retention is based on the condition of including a new search function. But, what happens when there are more than two factors at play?

We get what’s called a complex hypothesis. Instead of a simple condition and outcome, complex hypotheses include multiple results. This makes them a perfect research hypothesis type for framing complex studies or tracking multiple KPIs based on a single action.

Building upon our previous example, a complex research hypothesis could be:

  • Including the search function on our mobile app will increase user retention and boost conversions

Directional and non-directional hypotheses

Research hypotheses can also differ in the specificity of outcomes. Put simply, any hypothesis that has a specific outcome or direction based on the relationship of its variables is a directional hypothesis . That means that our previous example of a simple hypothesis is also a directional hypothesis.

Non-directional hypotheses don’t specify the outcome or difference the variables will see. They just state that a difference exists. Following our example above, here’s what a non-directional hypothesis would look like:

  • Including the search function on our mobile app will make a difference in user retention

In this non-directional hypothesis, the direction of difference (increase/decrease) hasn’t been specified, we’ve just noted that there will be a difference.

The type of hypothesis you write helps guide your research—let’s get into it.

How to write and test your UX research hypothesis

Now we’ve covered the types of research hypothesis examples, it’s time to get practical.

Creating your research hypothesis is the first step in conducting successful user research.

Here are the four steps for writing and testing a UX research hypothesis to help you make informed, data-backed decisions for product design and development.

1. Formulate your hypothesis

Start by writing out your hypothesis in a way that’s specific and relevant to a distinct aspect of your user or product experience. Meaning: your prediction should include a design choice followed by the outcome you’d expect—this is what you’re looking to validate or reject.

Your proposed research hypothesis should also be testable through user research data analysis. There’s little point in a hypothesis you can’t test!

Let’s say your focus is your product’s user interface—and how you can improve it to better meet customer needs. A research hypothesis in this instance might be:

  • Adding a settings tab to the navigation bar will improve usability

By writing out a research hypothesis in this way, you’re able to conduct relevant user research to prove or disprove your hypothesis. You can then use the results of your research—and the validation or rejection of your hypothesis—to decide whether or not you need to make changes to your product’s interface.

2. Identify variables and choose your research method

Once you’ve got your hypothesis, you need to map out how exactly you’ll test it. Consider what variables relate to your hypothesis. In our case, the independent variable is adding a settings tab to the navigation bar, and the outcome we want to measure is usability.

Once you’ve defined the relevant variables, you’re in a better position to decide on the best UX research method for the job. If you’re after metrics that signal improvement, you’ll want to select a method yielding quantifiable results—like usability testing . If your outcome is geared toward what users feel, then research methods for qualitative user insights, like user interviews , are the way to go.

3. Carry out your study

It’s go time. Now you’ve got your hypothesis, identified the relevant variables, and outlined your method for testing them, you’re ready to run your study. This step involves recruiting participants for your study and reaching out to them through relevant channels like email, live website testing , or social media.

Given our hypothesis, our best bet is to conduct A/B and usability tests with a prototype that includes the additional UI elements, then compare the usability metrics to see whether users find navigation easier with or without the settings button.

We can also follow up with UX surveys to get qualitative insights and ask users how they found the task, what they preferred about each design, and to see what additional customer insights we uncover.

💡 Want more insights from your usability tests? Maze Clips enables you to gather real-time recordings and reactions of users participating in usability tests .

4. Analyze your results and compare them to your hypothesis

By this point, you’ve neatly outlined a hypothesis, chosen a research method, and carried out your study. It’s now time to analyze your findings and evaluate whether they support or reject your hypothesis.

Look at the data you’ve collected and what it means. Given that we conducted usability testing, we’ll want to look to some key usability metrics for an indication of whether the additional settings button improves usability.

For example, with the usability task of ‘ In account settings, find your profile and change your username ’, we can conduct task analysis to compare the times spent on task and misclick rates of the new design, with those same metrics from the old design.
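
As a rough illustration of that comparison, the sketch below puts the old and new designs side by side on those two metrics; the per-participant numbers are invented, and in practice your testing tool typically reports these for you.

    from statistics import mean

    # Invented per-participant results for the task
    # "In account settings, find your profile and change your username".
    old_design = {"time_on_task_s": [48, 61, 55, 72, 66], "misclicks": [3, 4, 2, 5, 3]}
    new_design = {"time_on_task_s": [31, 29, 40, 35, 33], "misclicks": [1, 0, 2, 1, 1]}

    for metric in ("time_on_task_s", "misclicks"):
        print(f"{metric}: old = {mean(old_design[metric]):.1f}, "
              f"new = {mean(new_design[metric]):.1f}")
    # Lower averages for the new design support the hypothesis that the added
    # settings tab makes the task easier to complete.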

If you also conduct follow-up surveys or interviews, you can ask users directly about their experience and analyze their answers to gather additional qualitative data . Maze AI can handle the analysis automatically, but you can also manually read through responses to get an idea of what users think about the change.

By comparing the findings to your research hypothesis, you can identify whether your research accepts or rejects your hypothesis. If the majority of users struggled to find the settings page in usability tests of the old design but had a higher success rate with your new prototype, you’ve proved the hypothesis.

However, it's also crucial to acknowledge if the findings refute your hypothesis rather than prove it as true. Ruling something out is just as valuable as confirming a suspicion.

In either case, make sure to draw conclusions based on the relationship between the variables and store findings in your UX research repository . You can conduct deeper analysis with techniques like thematic analysis or affinity mapping .

UX research hypotheses: four best practices to guide your research

Knowing the big steps for formulating and testing a research hypothesis ensures that your next UX research project gives you focused, impactful results and insights. But, that’s only the tip of the research hypothesis iceberg. There are some best practices you’ll want to consider when using a hypothesis to test your UX design ideas.

Here are four research hypothesis best practices to help guide testing and make your UX research systematic and actionable.

Align your hypothesis to broader business and UX goals

Before you begin to formulate your hypothesis, be sure to pause and think about how it connects to broader goals in your UX strategy . This ensures that your efforts and predictions align with your overarching design and development goals.

For example, implementing a brand new navigation menu for current account holders might work for usability, but if the wider team is focused on boosting conversion rates for first-time site viewers, there might be a different research project to prioritize.

Create clear and actionable reports for stakeholders

Once you’ve conducted your testing and proved or disproved your hypothesis, UX reporting and analysis is the next step. You’ll need to present your findings to stakeholders in a way that's clear, concise, and actionable. If your hypothesis insights come in the form of metrics and statistics, then quantitative data visualization tools and reports will help stakeholders understand the significance of your study, while setting the stage for design changes and solutions.

If you went with a research method like user interviews, a narrative UX research report including key themes and findings, proposed solutions, and your original hypothesis will help inform your stakeholders on the best course of action.

Consider different user segments

While getting enough responses is crucial for proving or disproving your hypothesis, you’ll want to consider which users will give you the highest quality and most relevant responses. Remember to consider user personas—e.g., if you’re only introducing a change for premium users, exclude users who are on a free trial of your product from testing.

You can recruit and target specific user demographics with the Maze Panel —which enables you to search for and filter participants that meet your requirements. Doing so allows you to better understand how different users will respond to your hypothesis testing. It also helps you uncover specific needs or issues different users may have.

Involve stakeholders from the start

Before testing or even formulating a research hypothesis by yourself, ensure all your stakeholders are on board. Informing everyone of your plan to formulate and test your hypothesis does three things:

Firstly, it keeps your team in the loop . They’ll be able to inform you of any relevant insights, special considerations, or existing data they already have about your particular design change idea, or KPIs to consider that would benefit the wider team.

Secondly, informing stakeholders ensures seamless collaboration across multiple departments . Together, you’ll be able to fit your testing results into your overall CX strategy , ensuring alignment with business goals and broader objectives.

Finally, getting everyone involved enables them to contribute potential hypotheses to test . You’re not the only one with ideas about what changes could positively impact the user experience, and keeping everyone in the loop brings fresh ideas and perspectives to the table.

Test your UX research hypotheses with Maze

Formulating and testing out a research hypothesis is a great way to define the scope of your UX research project clearly. It helps keep research on track by providing a single statement to come back to and anchor your research in.

Whether you run usability tests or user interviews to assess your hypothesis—Maze's suite of advanced research methods enables you to get the in-depth user and customer insights you need.

Frequently asked questions about research hypothesis

What is the difference between a hypothesis and a problem statement in UX?

A problem statement identifies a specific issue in your design that you intend to solve; it typically includes a user persona, an issue they have, and a desired outcome they need. A research hypothesis, on the other hand, describes your prediction about how to solve that problem.

How many hypotheses should a UX research problem have?

Technically, there is no limit to the number of hypotheses you can have for a certain problem or study. However, you should limit it to one hypothesis per specific issue in UX research. This ensures that you can conduct focused testing and reach clear, actionable results.


5 steps to a hypothesis-driven design process

Mar 22, 2018

Say you’re starting a greenfield project, or you’re redesigning a legacy app. The product owner gives you some high-level goals. Lots of ideas and questions are in your mind, and you’re not sure where to start.

Hypothesis-driven design will help you navigate through an unknown space so you can come out at the end of the process with actionable next steps.

Ready? Let’s dive in.

Step 1: Start with questions and assumptions

On the first day of the project, you’re curious about all the different aspects of your product. “How could we increase engagement on the homepage?” “What features are important to our users?”


To reduce risk, I like to take some time to write down all the unanswered questions and assumptions. So grab some sticky notes and write all your questions down on the notes (one question per note).

I recommend that you use the How Might We technique from IDEO to phrase the questions and turn your assumptions into questions. It’ll help you frame the questions in a more open-ended way to avoid building the solution into the statement prematurely. For example, say you have an idea that you want to make riders feel more comfortable by showing them how many rides the driver has completed. You can rephrase the question as “How might we ensure riders feel comfortable when taking a ride,” and leave the solution for a later step.

“It’s easy to come up with design ideas, but it’s hard to solve the right problem.”

It’s even more valuable to have your team members participate in the question brainstorming session. Having diverse disciplines in the room always brings fresh perspectives and leads to a more productive conversation.

Step 2: Prioritize the questions and assumptions

Now that you have all the questions on sticky notes, organize them into groups to make it easier to review them. It’s especially helpful if you can do the activity with your team so you can have more input from everybody.

When it comes to choosing which question to tackle first, think about what would impact your product the most or what would bring the most value to your users.

If you have a big group, you can Dot Vote to prioritize the questions. Here’s how it works: Everyone has three dots, and each person gets to vote on what they think is the most important question to answer in order to build a successful product. It’s a common prioritization technique that’s also used in the Sprint book by Jake Knapp —he writes, “ The prioritization process isn’t perfect, but it leads to pretty good decisions and it happens fast. ”


Step 3: Turn them into hypotheses

After the prioritization, you now have a clear question in mind. It’s time to turn the question into a hypothesis. Think about how you would answer the question.

Let’s continue the previous ride-hailing service example. The question you have is “ How might we make people feel safe and comfortable when using the service? ”

Based on this question, the solutions can be:

  • Sharing the rider’s location with friends and family automatically
  • Displaying more information about the driver
  • Showing feedback from previous riders

Now you can combine the solution and question, and turn it into a hypothesis. A hypothesis is a framework that can help you clearly define the question and solution, and eliminate assumptions.

From Lean UX

We believe that [sharing more information about the driver’s experience and stories] for [the riders] will [make riders feel more comfortable and connected throughout the ride].

Step 4: Develop an experiment and test the hypothesis

Develop an experiment so you can test your hypothesis. Our test will follow the scientific method, so it’s subject to collecting empirical and measurable evidence in order to obtain new knowledge. In other words, it’s crucial to have a measurable outcome for the hypothesis so we can determine whether it has succeeded or failed.

There are different ways you can create an experiment, such as an interview, a survey, landing page validation, usability testing, etc. It could also be something that’s built into the software to get quantitative data from users. Write down what the experiment will be, and define the outcomes that determine whether the hypothesis is valid. A well-defined experiment can validate or invalidate the hypothesis.

In our example, we could define the experiment as “We will run X studies to show more information about a driver (number of rides, years of experience), and ask follow-up questions to identify the rider’s emotions associated with this ride (safe, fun, interesting, etc.). We will know the hypothesis is valid when more than 70% of participants identify the ride as safe or comfortable.”
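
Once the follow-up answers are coded, checking the 70 percent threshold is straightforward. The sketch below assumes each participant’s answer has been reduced to a single label; the responses themselves are invented.

    # Invented coded responses, one label per participant.
    responses = ["safe", "comfortable", "fun", "safe", "comfortable",
                 "comfortable", "safe", "interesting", "comfortable", "safe"]

    positive = {"safe", "comfortable"}
    share = sum(r in positive for r in responses) / len(responses)
    print(f"{share:.0%} of participants described the ride as safe or comfortable")
    print("hypothesis valid" if share > 0.70 else "hypothesis not supported yet")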

After defining the experiment, it’s time to get the design done. You don’t need to have every design detail thought through. You can focus on designing only what needs to be tested.

When the design is ready, you’re ready to run the test. Recruit the users you want to target, set a time frame, and put the design in front of them.

Step 5: Learn and build

You just learned that the result was positive and you’re excited to roll out the feature. That’s great! If the hypothesis failed, don’t worry—you’ll be able to gain some insights from that experiment. Now you have some new evidence that you can use to run your next experiment. In each experiment, you’ll learn something new about your product and your customers.

“Design is a never-ending process.”

What other information can you show to make riders feel safe and comfortable? That can be your next hypothesis. You now have a feature that’s ready to be built, and a new hypothesis to be tested.

Principles from The Lean Startup

We often assume that we understand our users and know what they want. It’s important to slow down and take a moment to understand the questions and assumptions we have about our product.

After testing each hypothesis, you’ll get a clearer path of what’s most important to the users and where you need to dig deeper. You’ll have a clear direction for what to do next.

by Sylvia Lai

Sylvia Lai helps startups and enterprises solve complex problems through design thinking and user-centered design methodologies at Pivotal Labs. She is the biggest advocate for the users; making sure their voices are heard is her number one priority. Outside of work, she loves mentoring other designers through one-on-one conversations. Connect with her through LinkedIn or Twitter.


Hypothesis statement


Introduction to hypothesis statements


Brainstorming solutions is similar to making a hypothesis or an educated guess about how to solve the problem.

In UX design, we write down possible solutions to the problem as hypothesis statements. A good hypothesis statement requires more effort than just a guess. In particular, your hypothesis statement may start with a question that can be further explored through background research.

How do you write hypothesis statements? Unlike problem statements, there's no standard formula for writing hypothesis statements. For starters, let's try what's called an if-then statement.

It looks like this: If (name an action), then (name an outcome).

That said, hypothesis statements don’t have a single standard formula. Instead of an if-then statement, you can also formulate your hypothesis statement in a more flexible way.

Essential characteristics of a hypothesis statement

To formulate a promising hypothesis, ask yourself the following questions:

  • Is the language clear and purposeful?
  • What is the relationship between your hypothesis and your research topic?
  • Is your hypothesis testable? If so, how?
  • What possible explanations would you like to explore?

You may need to come up with more than one hypothesis for a problem. That's okay! There will always be multiple solutions to your users' problems. Your job is to use your creativity and problem-solving skills to decide which solutions are best for each user you are designing for.

UX Research: Objectives, Assumptions, and Hypothesis

by Rick Dzekman

An often neglected step in UX research

Introduction

UX research should always be done for a clear purpose – otherwise you’re wasting both your time and the time of your participants. But many people who do UX research fail to properly articulate that purpose in their research objectives. A major issue is that the research objectives include assumptions that have not been properly defined.

When planning UX research you have some goal in mind:

  • For generative research it’s usually to find out something about users or customers that you previously did not know
  • For evaluative research it’s usually to identify any potential issues in a solution

As part of this goal you write down research objectives that help you achieve it. But many researchers (especially more junior ones) miss some key steps:

  • How will those research objectives help to reach that goal?
  • What assumptions have you made that are necessary for those objectives to reach that goal?
  • How does your research (questions, tasks, observations, etc.) help meet those objectives?
  • What kind of responses or observations do you need from your participants to meet those objectives?

Research objectives map to goals but that mapping requires assumptions. Each objective is broken down into sub-objectives which should lead to questions, tasks, or observations. The questions we ask in our research should map to some research objective and help reach the goal.

One approach people use is to write their objectives in the form of research hypotheses. There are a lot of problems with trying to validate a hypothesis with qualitative research, and sometimes even with quantitative research.

This article focuses largely on qualitative research: interviews, user tests, diary studies, ethnographic research, etc. With qualitative research in mind, let’s start by taking a look at a few examples of UX research hypotheses and how they may be problematic.

Research hypothesis

Example hypothesis: users want to be able to filter products by colour.

At first it may seem that there are a number of ways to test this hypothesis with qualitative research. For example we might:

  • Observe users shopping on sites with and without colour filters and see whether or not they use them
  • Ask users who are interested in our products about how they narrow down their choices
  • Run a diary study where participants document the ways they narrowed down their searches on various stores
  • Make a prototype with colour filters and see if participants use them unprompted

These approaches are all effective, but they do not and cannot prove or disprove our hypothesis. It’s not that the research methods are ineffective; it’s that the hypothesis itself is poorly expressed.

The first problem is that there are hidden assumptions made by this hypothesis. Presumably we would be doing this research to decide between a choice of possible filters we could implement. But there’s no obvious link between users wanting to filter by colour and a benefit from us implementing a colour filter. Users may say they want it but how will that actually benefit their experience?

The second problem with this hypothesis is that we’re asking a question about “users” in general. How many users would have to want colour filters before we could say that this hypothesis is true?

Example Hypothesis: Adding a colour filter would make it easier for users to find the right products

This is an obvious improvement to the first example but it still has problems. We could of course identify further assumptions but that will be true of pretty much any hypothesis. The problem again comes from speaking about users in general.

Perhaps if we add the ability to filter by colour it might make the possible filters crowded and make it more difficult for users who don’t need colour to find the filter that they do need. Perhaps there is a sample bias in our research participants that does not apply broadly to our user base.

It is difficult (though not impossible) to design research that could prove or disprove this hypothesis. Any such research would have to be quantitative in nature. And we would have to spend time mapping out what it means for something to be “easier” or what “the right products” are.

Example Hypothesis: Travelers book flights before they book their hotels

The problem with this hypothesis should now be obvious: what would it actually mean for this hypothesis to be proved or disproved? What portion of travelers would need to book their flights first for us to consider this true?

Example Hypothesis: Most users who come to our app know where and when they want to fly

This hypothesis is better because it talks about “most users” rather than users in general. “Most” would need to be better defined but at least this hypothesis is possible to prove or disprove.

We could address this hypothesis with quantitative research. If we found out that it was true we could focus our design around the primary use case or do further research about how to attract users at different stages of their journey.

However there is no clear way to prove or disprove this hypothesis with qualitative research. If the app has a million users and 15/20 research participants tell you that this is true, would your findings generalise to the entire user base? The margin of error on that finding is 20-25%, meaning that the true result could be closer to 50% or even 100% depending on how unlucky you are with your sample.
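
The rough size of that margin can be reproduced with a standard confidence-interval calculation. The sketch below uses the simple normal approximation (small-sample corrections such as the Wilson interval widen it further), which lands in the same ballpark as the 20-25% mentioned above:

    from math import sqrt

    agreed, n = 15, 20      # 15 of 20 participants agreed with the statement
    p_hat = agreed / n      # observed proportion: 0.75
    z = 1.96                # ~95% confidence level

    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    print(f"observed {p_hat:.0%}, margin of error roughly ±{margin:.0%}")
    print(f"plausible range for the user base: {p_hat - margin:.0%} to {p_hat + margin:.0%}")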

Example Hypothesis: Customers want their bank to help them build better savings habits

There are many things wrong with this hypothesis but we will focus on the hidden assumptions and the links to design decisions. Two big assumptions are that (1) it’s possible to find out what research participants want and (2) people’s wants should dictate what features or services to provide.

Research objectives

One of the biggest problems with using hypotheses is that they set the wrong expectations about what your research results are telling you. In Thinking, Fast and Slow, Daniel Kahneman points out that:

  • “extreme outcomes (both high and low) are more likely to be found in small than in large samples”
  • “the prominence of causal intuitions is a recurrent theme in this book because people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning”
  • “when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound”

Using a research hypothesis primes us to think that we have found some fundamental truth about user behaviour from our qualitative research. This leads to overconfidence about what the research is saying and to poor-quality research that could have been skipped in favour of simply making an assumption. To once again quote Kahneman: “you do not believe that these results apply to you because they correspond to nothing in your subjective experience”.

We can fix these problems by instead putting our focus on research objectives. We pay attention to the reason that we are doing the research and work to understand if the results we get could help us with our objectives.

This does not get us off the hook however because we can still create poor research objectives.

Let’s look back at one of our prior hypothesis examples and try to find effective research objectives instead.

Example objectives: deciding on filters

In thinking about the colour filter we might imagine that it fits into a larger project where we are trying to decide which filters we should implement. This is decidedly different from research aimed at deciding what order to implement filters in or understanding how they should work. In this case perhaps we have limited resources and just want to decide what to implement first.

A good approach would be quantitative research designed to produce some sort of ranking. But we should not dismiss qualitative research for this particular project – provided our assumptions are well defined.

Let’s consider this research objective: Understand how users might map their needs against the products that we offer. There are three key aspects to this objective:

  • “Understand” is a common form of research objective and is a way that qualitative research can discover things that we cannot find with quant. If we don’t yet understand some user attitude or behaviour we cannot quantify it. By focusing our objective on understanding we are looking at uncovering unknowns.
  • By using the word “might” we are not definitively stating that our research will reveal all of the ways that users think about their needs.
  • Our focus is on understanding the users’ mental models. Then we are not designing for what users say that they want and we aren’t even designing for existing behaviour. Instead we are designing for some underlying need.

The next step is to look at the assumptions that we are making. One assumption is that mental models are roughly the same between most people: even though different users may have different problems, for the most part people tend to think about solving them with the same mental machinery. As we do more research we might discover that this assumption is not true and there are distinctly different kinds of behaviours. Perhaps we know what those are in advance and we can recruit our research participants in a way that covers those distinct behaviours.

Another assumption is that if we understand our users’ mental models, we will be able to design a solution that will work for most people. There are of course more assumptions we could map, but this is a good start.

Now let’s look at another research objective: Understand why users choose particular filters. Again we are looking to understand something that we did not know before.

Perhaps we have some prior research that tells us what the biggest pain points are that our products solve. If we have an understanding of why certain filters are used we can think about how those motivations fit in with our existing knowledge.

Mapping objectives to our research plan

Our actual research will involve some form of asking questions and/or making observations. It’s important that we don’t simply forget about our research objectives and start writing questions. This leads to completing research and realising that you haven’t captured anything about some specific objective.

An important step is to explicitly write down all the assumptions that we are making in our research and to update those assumptions as we write our questions or instructions. These assumptions will help us frame our research plan and make sure that we are actually learning the things that we think we are learning. Consider even high level assumptions such as: a solution we design with these insights will lead to a better experience, or that a better experience is necessarily better for the user.

Once we have our main assumptions defined the next step is to break our research objective down further.

Breaking down our objectives

The best way to consider this breakdown is to think about what things we could learn that would contribute to meeting our research objective. Let’s consider one of the previous examples: Understand how users might map their needs against the products that we offer

We may have an assumption that users do in fact have some mental representation of their needs that aligns with the products they might purchase. An aspect of this research objective is to understand whether or not this is true. So two sub-objectives may be to (1) understand why users actually buy these sorts of products (if at all), and (2) understand how users go about choosing which product to buy.

Next, we might want to understand what our users’ needs actually are or, if we already have research about this, understand which particular needs apply to our research participants and why.

And finally we would want to understand what factors go into addressing a particular need. We may leave this open ended or even show participants attributes of the products and ask which ones address those needs and why.

Once we have a list of sub-objectives we could continue to drill down until we feel we’ve exhausted all the nuances. If we’re happy with our objectives the next step is to think about what responses (or observations) we would need in order to answer those objectives.

It’s still important that we ask open ended questions and see what our participants say unprompted. But we also don’t want our research to be so open that we never actually make any progress on our research objectives.

Reviewing our objectives and pilot studies

At the end it’s important to review every task, question, scenario, etc. and see which research objectives are being addressed. This is vital to make sure that your planning is worthwhile and that you haven’t missed anything.

If there’s time it’s also useful to run a pilot study and analyse the responses to see if they help to address your objectives.

Plan accordingly

It should be easy to see why research hypotheses are not suitable for most qualitative research. While it is possible to create a suitable hypothesis, doing so is more often than not going to lead to poor-quality research. This is because hypotheses create the impression that qualitative research can find things that generalise to the entire user base. In general this is not true for the sample sizes typically used in qualitative research, and it is also generally not the reason we do qualitative research in the first place.

Instead we should focus on producing effective research objectives and making sure every part of our research plan maps to a suitable objective.


A Simple Introduction to Lean UX

Lean UX is an incredibly useful technique when working on projects where the Agile development method is used. Traditional UX techniques often don't work when development is conducted in rapid bursts – there's not enough time to deliver UX in the same way. Fundamentally, Lean UX and other forms of UX all have the same goal in mind: delivering a great user experience. It's just that the way you work on a project is slightly different. So let's take a look at how that might work.

Lean UX – What is It?

Lean UX is focused on the experience under design and is less focused on deliverables than traditional UX. It requires a greater level of collaboration with the entire team. The core objective is to focus on obtaining feedback as early as possible so that it can be used to make quick decisions. The nature of Agile development is to work in rapid, iterative cycles and Lean UX mimics these cycles to ensure that data generated can be used in each iteration.


The Need for Assumptions in Lean UX

In traditional UX the project is built upon requirements capture and deliverables. The objective is to ensure that deliverables are as detailed as possible and respond adequately to the requirements that are laid down at the start of the project.

Lean UX is slightly different. You aren’t focused on detailed deliverables. You are looking to produce changes that improve the product in the here and now – essentially to mould the outcome for the better.

This works in practice by ditching "requirements" and using a "problem statement," which should lead to a set of assumptions that can be used to create hypotheses.

What is an assumption? An assumption is basically a statement of something that we think is true. Assumptions are designed to generate a common understanding around an idea so that everyone can get started. It is fully understood that assumptions may not be correct and may be changed during the project as a better understanding develops within the team.

Assumptions are normally generated on a workshop basis. You get the team together and state the problem and then allow the team to brainstorm their ideas for solving the problem. In the process you generate answers to certain questions that form your assumptions.

Typical questions might include:

  • Who are our users?
  • What is the product used for?
  • When is it used?
  • What situations is it used in?
  • What will be the most important functionality?
  • What's the biggest risk to product delivery?

There may be more than one answer to each question, which can leave us with more assumptions than it is practical to handle. If this is the case, the team can prioritize their assumptions quickly after generating them. In general you would prioritize your assumptions by the risk they represent (what are the consequences of this being badly wrong? The more severe the consequence, the higher the priority) and by the level of understanding of the issue at hand (the less you know, the higher the priority).
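One way to make this prioritization explicit is to give each assumption a rough risk score and an understanding score and sort by the combination. The sketch below assumes a simple 1–5 scale for both dimensions; the assumption statements and scores are made-up examples rather than a prescribed scoring system.

    # Hypothetical sketch: rank Lean UX assumptions by risk and by how little we
    # currently understand them. Statements and 1-5 scores are invented examples.
    from dataclasses import dataclass

    @dataclass
    class Assumption:
        statement: str
        risk: int           # 1-5: how severe are the consequences if we are wrong?
        understanding: int  # 1-5: how well do we already understand this?

        @property
        def priority(self) -> int:
            # Higher risk and lower understanding both push the assumption up the list.
            return self.risk + (6 - self.understanding)

    assumptions = [
        Assumption("Our users are mostly first-time buyers", risk=4, understanding=2),
        Assumption("The product is mainly used on mobile", risk=5, understanding=1),
        Assumption("Saving progress is the most important feature", risk=3, understanding=4),
    ]

    for a in sorted(assumptions, key=lambda a: a.priority, reverse=True):
        print(a.priority, a.statement)

The exact scale matters less than the conversation it forces: the team has to agree on which assumptions are both risky and poorly understood before committing to test them.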


Creating a Hypothesis in Lean UX

The hypotheses created in Lean UX are designed to test our assumptions. There’s a simple format that you can use to create your own hypotheses, quickly and easily.

An example:

We believe that enabling people to save their progress at any time is essential for smartphone users. This will achieve a higher level of sign-up completions. We will have demonstrated this when we can measure a 20% improvement on the current completion rate.

We state the belief and why it is important and who it is important to. Then we follow that with what we expect to achieve. Finally, we determine what evidence we would need to collect to prove that our belief was true.

If we find that there's no way to prove our hypothesis, we may be heading in the wrong direction because our outcomes are not clearly defined.
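To make the format concrete, here is a minimal sketch that captures the belief, the expected outcome, and the evidence criterion as a structured record and checks a measured completion rate against it. The class name, baseline rate, and measured rate are invented for illustration; only the wording of the belief mirrors the example above.

    # Hypothetical sketch of the belief / outcome / evidence format as a record.
    from dataclasses import dataclass

    @dataclass
    class LeanUXHypothesis:
        belief: str            # what we believe, and for whom it matters
        expected_outcome: str  # what we expect to achieve
        baseline_rate: float   # current sign-up completion rate
        target_uplift: float   # relative improvement we need to see (0.20 = 20%)

        def is_supported(self, measured_rate: float) -> bool:
            # True if the measured rate meets the stated evidence criterion.
            return measured_rate >= self.baseline_rate * (1 + self.target_uplift)

    h = LeanUXHypothesis(
        belief="Saving progress at any time is essential for smartphone users",
        expected_outcome="A higher level of sign-up completions",
        baseline_rate=0.35,
        target_uplift=0.20,
    )
    print(h.is_supported(measured_rate=0.44))  # True: 0.44 >= 0.35 * 1.2 = 0.42

Writing the hypothesis down this explicitly also surfaces the case described above: if you cannot fill in the evidence criterion, the outcome is not clearly defined yet.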

One of the big advantages of working like this is it removes much of the “I don’t think that’s a good idea” and political infighting from the UX design process. Every idea is going to be tested and the evidence criteria clearly determined. No evidence? Then it’s time to drop the idea and try something else.

If everyone can understand a hypothesis and the expectations from it, they tend to be happy to wait to see if it’s true rather than passionately debating their own subjective viewpoint.

The Minimum Viable Product and Lean UX

The Minimum Viable Product (MVP) is a core concept in Lean UX. The idea is to build the most basic version of the concept possible, test it, and, if it produces no valuable results, abandon it. The MVPs which show promise can then be incorporated into further design and development rounds without too much hassle.

This is a great way of maximizing your resources and one of the reasons that it works so well with Agile development – it allows for a lot of experimentation with no “sacred cows”.


User Research and Testing in Lean UX

User research and testing, by the very nature of Lean UX, are based on the same principles as those used in traditional UX environments. However, the approach tends to be "quick and dirty" – results need to be delivered before the next Agile Sprint starts, so there's much less focus on heavy-duty, meticulously documented outputs and more focus on raw data.

Responsibilities for research also tend to be spread more widely across the whole team so that there's no "bottleneck" created by having a single UX design resource trying to get the whole job done on tight timescales by themselves. This often gets development resources doing "hands-on" UX work and increases the level of understanding and support for UX work within the development team too.

This is a very high-level overview of Lean UX and, of course, there’s a lot more to it than you can cover in a short(ish) article. However, these basic concepts should enable you to start heading in the right direction when it comes to implementing Lean UX in your Agile environment.




Hypothesis Testing in the User Experience


It’s something we all have completed and if you have kids might see each year at the school science fair.

  • Does an expensive baseball travel farther than a cheaper one?
  • Which melts an ice block quicker, salt water or tap water?
  • Does changing the amount of vinegar affect the color when dying Easter eggs?

While the science project might be relegated to the halls of elementary schools or your fading childhood memory, it provides an important lesson for improving the user experience.

The science project provides us with a template for designing a better user experience. Form a clear hypothesis, identify metrics, and collect data to see if there is evidence to refute or confirm it. Hypothesis testing is at the heart of modern statistical thinking and a core part of the Lean methodology .

Instead of approaching design decisions with pure instinct and arguments in conference rooms, form a testable statement, invite users, define metrics, collect data and draw a conclusion.

  • Does requiring the user to double-enter an email address result in more valid email addresses?
  • Will labels on the top of form fields or the left of form fields reduce the time to complete the form?
  • Does requiring the last four digits of your Social Security Number improve application rates over asking for a full SSN?
  • Do users have more trust in the website if we include the McAfee security symbol or the Verisign symbol?
  • Do more users make purchases if the checkout button is blue or red?
  • Does a single long form generate higher form submissions than the division of the form on three smaller pages?
  • Will users find items faster using mega menu navigation or standard drop-down navigation?
  • Does the number of monthly invoices a small business sends affect which payment solution they prefer?
  • Do mobile users prefer to download an app to shop for furniture or use the website?

Each of the above questions is testable and represents a real example. It's best to have as specific a hypothesis as possible and isolate the variable of interest. Many of these hypotheses can be tested with a simple A/B test, unmoderated usability test, survey, or some combination of them all.

Even before you collect any data, there is an immediate benefit gained from forming hypotheses. It forces you and your team to think through the assumptions in your designs and business decisions. For example, many registration systems require users to enter their email address twice. If an email address is wrong, in many cases a company has no communication with a prospective customer.

Requiring two email fields would presumably reduce the number of mistyped email addresses. But just like legislation can have unintended consequences, so do rules in the user interface. Do users just copy and paste their email thus negating the double fields? If you then disable the pasting of email addresses into the field, does this lead to more form abandonment and less overall customers?

With a clear hypothesis to test, the next step involves identifying metrics that help quantify the experience. Like most tests, you can use a simple binary metric (yes/no, pass/fail, convert/didn't convert). For example, you could collect how many users registered using the double email vs. the single email form, how many submitted using the last four numbers of their SSN vs. the full SSN, and how many found an item with the mega menu vs. the standard menu.

Binary metrics are simple, but they usually can’t fully describe the experience. This is why we routinely collect multiple metrics, both performance and attitudinal. You can measure the time it takes users to submit alternate versions of the forms, or the time it takes to find items using different menus. Rating scales and forced ranking questions are good ways of measuring preferences for downloading apps or choosing a payment solution.

With a clear research hypothesis and some appropriate metrics, the next steps involve collecting data from the right users and analyzing the data statistically to test the hypothesis. Technically we rework our research hypothesis into what's called the Null Hypothesis, then look for evidence against the Null Hypothesis, usually in the form of the p-value. This is of course a much larger topic we cover in Quantifying the User Experience.
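As a rough illustration of that mechanic, the sketch below runs a two-proportion z-test on the single-field versus double-entry email form question from the list above. The visit and completion counts are invented, and in practice you would choose the test that matches your metric and sample size.

    # Minimal sketch: two-proportion z-test for the single-field vs double-entry
    # email form. Counts are invented; H0 is that both completion rates are equal.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(success_a, n_a, success_b, n_b):
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
        return z, p_value

    # Variant A: single email field; Variant B: double-entry email field.
    z, p = two_proportion_z_test(success_a=230, n_a=500, success_b=190, n_b=500)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the gap is not just noise

The statistics are the mechanical part; the hard work remains turning the fuzzy business question into the specific comparison the test evaluates.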

While the process of subjecting data to statistical analysis intimidates many designers and researchers (recalling those school memories again), remember that the hardest and most important part is working with a good testable hypothesis. It takes practice to convert fuzzy business questions into testable hypotheses. Once you’ve got that down, the rest is mechanics that we can help with.


What if we found ourselves building something that nobody wanted? In that case, what did it matter if we did it on time and on budget? —Eric Ries

Lean User Experience (Lean UX) is a team-based approach to building better products by focusing less on the theoretically ideal design and more on iterative learning, overall user experience, and customer outcomes.

Lean UX design extends the traditional UX role beyond merely executing design elements and anticipating how users might interact with a system. Instead, it encourages a far more comprehensive view of why a Feature exists, the functionality required to implement it, and the benefits it delivers. By getting immediate feedback to understand if the system will meet the fundamental business objectives, Lean UX provides a closed-loop method for defining and measuring value.

Generally, UX represents a user’s perceptions of a system—ease of use, utility, and the user interface’s (UI) effectiveness. UX design focuses on building systems that demonstrate a deep understanding of end users. It considers users’ needs and wants while making allowances for their context and limitations.

When using Agile methods, a common problem is how best to incorporate UX design into a rapid Iteration cycle, resulting in a full-stack implementation of the new functionality. When teams attempt to resolve complex and seemingly subjective user interactions while simultaneously trying to develop incremental deliverables, they can often churn through many designs, creating frustration with Agile.

Fortunately, the Lean UX movement addresses this using Agile development with Lean Startup implementation approaches. The mindset, principles, and practices of SAFe reflect this thinking. This process often begins with the SAFe Lean Startup Cycle described in the Epic article. It continues developing Features and Capabilities using the Lean UX process described here.

As a result, Agile Teams and Agile Release Trains (ARTs) can leverage a common strategy to generate rapid development, fast feedback, and a holistic user experience that delights users.

The Lean UX Process

In Lean UX [2], Gothelf and Seiden describe a model that has been adapted to SAFe.

Benefit Hypothesis

The Lean UX approach starts with a benefit hypothesis: Agile teams and UX designers accept that the right answer is unknowable up-front. Instead, teams apply Agile methods to avoid Big Design Up-front (BDUF), focusing on creating a hypothesis about the feature’s expected business result. Then they implement and test that hypothesis incrementally.

The SAFe Feature and Benefits matrix (FAB) can be used to describe the hypothesis as it moves through the Continuous Exploration aspect of the Continuous Delivery Pipeline (CDP):

  • Feature  – A short phrase giving a name and context
  • Benefit hypothesis – The proposed measurable benefit to the end-user or business

Note: Design Thinking practices suggest changing the order of the feature benefit hypothesis elements to identify the customer benefits first and then determine what features might satisfy their needs.

Outcomes are measured in the Release on Demand aspect of the CDP. This is best done using leading indicators (see Innovation Accounting in [1]) to evaluate how well the new feature meets its benefit hypothesis. For example, "We believe the administrator can add a new user in half the time it took before."
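As a hedged sketch of how such a leading indicator might be checked, the snippet below compares the median time to add a user before and after the change against the "half the time" criterion. The timing samples are invented for illustration and are not part of the SAFe guidance.

    # Hypothetical sketch: checking "the administrator can add a new user in half
    # the time it took before" against invented timing samples (seconds per task).
    from statistics import median

    baseline_times = [95, 110, 102, 88, 120, 105]   # before the new feature
    new_times = [48, 52, 44, 61, 50, 47]            # after the new feature

    before, after = median(baseline_times), median(new_times)
    print(f"median before: {before}s, after: {after}s")
    print("benefit hypothesis supported:", after <= before / 2)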

Collaborative Design

Traditionally, UX design has been an area of high specialization. People with a talent for design, a feel for user interaction, and specialty training are often entirely in charge of the design process. The goal was 'pixel perfect' early designs, done before the implementation. But this work was often done in silos by specialists who may or may not have known the most about the system and its context. Success was measured by how well the implemented user interface complied with the initial UX design. In Lean UX, this changes dramatically:

“Lean UX has no time for heroes. The entire concept of design as a hypothesis immediately dethrones notions of heroism; as a designer, you must expect that many of your ideas will fail in testing. Heroes don’t admit failure. But Lean UX designers embrace it as part of the process.” [2]

Continuous exploration takes the hypothesis and facilitates an ongoing and collaborative process that solicits input from a diverse group of stakeholders: Architects, Customers, Business Owners, Product Owners, and Agile Teams. This group further refines the problem and creates artifacts that clearly express the emerging understanding, including personas, empathy maps, and customer experience maps (see Design Thinking).

Agile teams are empowered to design and implement collaborative UX, significantly improving business outcomes and time-to-market. Another important goal is to deliver a consistent user experience across various system elements or channels (for example, mobile, web, kiosk) or even different products from the same company. Enabling this consistency requires balancing decentralized control with centralizing certain reusable design assets (following Principle #9 – Decentralize decision-making). For example, creating a design system [2] with a set of standards that contains whatever UI elements ARTs and Value Streams find helpful, including:

  • Editorial rules, style guides, voice and tone guidelines, naming conventions, standard terms, and abbreviations
  • Branding and corporate identity kits, color palettes, usage guidelines for copyrights, logos, trademarks, and other attributions
  • UI asset libraries, which include icons and other images, templates, standard layouts, and grids
  • UI widgets, which include the design of buttons and other similar elements

These centralized assets are integral to the Architectural Runway, which supports decentralized control while recognizing that some design elements must be centralized. After all, these decisions are infrequent, long-lasting, and provide significant economies of scale across both the user base and enterprise applications, as described in Principle #9.

Building a Minimum Marketable Feature

With a hypothesis and design, teams can implement the functionality as a Minimal Marketable Feature (MMF). The MMF should be the smallest amount of functionality that must be provided for a customer to recognize any value and for the teams to learn whether the benefit hypothesis is valid.

By creating an MMF, the ARTs apply SAFe Principle #4 – Build incrementally with a fast, integrated learning cycle to implement and evaluate the feature. Teams may preserve options with Set-Based Design  as they define the initial MMF.

In many cases, extremely lightweight and not even functional designs can help validate user requirements (e.g., paper prototypes, low-fidelity mockups, simulations, API stubs). In other cases, a vertical thread (full stack) of just a portion of an MMF may be necessary to test the architecture and get fast feedback at a System Demo. However, in some instances, the functionality may need to proceed to deployment and release, where application instrumentation and telemetry provide feedback data from production users.

MMFs are evaluated as part of deploying and releasing (where necessary). There are various ways to determine if the feature delivers the proper outcomes. These include:

  • Observation – Wherever possible, directly observe the actual usage of the system. It’s an opportunity to understand the user’s context and behaviors.
  • User surveys – A simple end-user questionnaire can obtain fast feedback when direct observation isn’t possible.
  • Usage analytics – Lean-Agile teams build analytics into their applications, which helps validate initial use and provides the application telemetry needed to support a Continuous Delivery model. Application telemetry offers constant operational and user feedback from the deployed system.
  • A/B testing – This is a form of statistical hypothesis testing that compares two samples, acknowledging that user preferences are unknowable in advance. Recognizing this is liberating, eliminating endless arguments between designers and developers (who likely won't use the system themselves). Teams follow Principle #3 – Assume variability; preserve options to keep design options open as long as possible. And wherever it's practical and economically feasible, they should implement multiple alternatives for critical user activities. Then they can test those other options with mockups, prototypes, or even full-stack implementations. In this latter case, differing versions may be deployed to multiple subsets of users, perhaps sequenced over time and measured via analytics.

In short, measurable results deliver the knowledge teams need to refactor, adjust, redesign—or even pivot to abandon a feature based solely on objective data and user feedback. Measurement creates a closed-loop Lean UX process that iterates toward a successful outcome, driven by evidence of whether a feature fulfills the hypothesis.

Implementing Lean UX in SAFe

Lean UX differs from the traditional, centralized approach to user experience design. The primary difference is how the hypothesis-driven aspects are evaluated by implementing the code, instrumenting where applicable, and gaining user feedback in a staging or production environment. Implementing new designs is primarily the responsibility of the Agile Teams, working in conjunction with Lean UX experts.

Of course, this shift, like many others with Lean-Agile development, can cause significant changes to the way teams and functions are organized, enabling a continuous flow of value. For more on coordinating and implementing Lean UX, specifically how to integrate Lean UX in the PI cycle, read the advanced topic article Lean UX and the PI Lifecycle.


Hypothesis Testing: Step-by-Step

  • Formulate hypotheses as a foundation for this method. The hypotheses can be statements from stakeholders or users, a research outcome, or even a possible Future Trend.
  • Conduct research to question the hypothesis. Depending on the size of the target group, it makes sense to conduct Surveys or perform User Interviews. Remember not to ask suggestive questions.
  • Record the results of your research and interpret the recordings to match them with your hypotheses (one possible record structure is sketched after this list).
  • Verify or disprove the hypothesis if possible. If you were not able to do so, the hypothesis might be phrased incorrectly. In either case you should continue to research around your hypotheses to bring them into a more detailed shape and be aware of changes in the future.
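A minimal sketch of such a record, keeping the statement, the persona it is assigned to, the testing method, the evidence, and the verification status together, might look like the following. All field values and the class name are invented examples rather than part of the original template.

    # Minimal, hypothetical sketch of a hypothesis-testing record: the statement,
    # the persona it is assigned to, the testing method, evidence, and status.
    from dataclasses import dataclass, field

    @dataclass
    class HypothesisRecord:
        statement: str
        persona: str
        method: str                       # e.g. "survey" or "user interview"
        evidence: list = field(default_factory=list)
        status: str = "untested"          # "verified", "disproved", or "inconclusive"

    record = HypothesisRecord(
        statement="First-time buyers abandon checkout because shipping costs appear too late",
        persona="First-time buyer",
        method="user interview",
    )
    record.evidence.append("4 of 6 participants mentioned surprise shipping costs unprompted")
    record.status = "verified"
    print(record)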


