
February 2024

What is a hypothesis in the context of development?


In this article

HADI cycles

How to form a clear and testable hypothesis?

Hypotheses ideation

Hypotheses prioritization

Evaluation of the test result

Imagine you are the owner of a task tracker application, and you have a goal to increase the product's revenue. You devise an additional paid feature: uploading all tasks to Excel. But how can you ascertain that people truly need this feature?

A product hypothesis is a testable assumption that specific actions will impact product metrics. Product development is built on hypotheses, which must therefore be validated by research, testing, and user feedback. In the context of product development, teams generate hypotheses: assumptions about market needs and user behavior. The hypothesis approach helps establish a strong link between features, UI/UX improvements, and users' needs. Hypothesis testing lets you validate an assumption without a massive investment in building new features, and without rolling a change out to the whole audience, where it could negatively affect user behavior.

HADI cycles

The HADI cycles methodology provides a structured approach to formulating, testing, and validating hypotheses. This approach prevents wasting money, time, and other resources on something that will not benefit you.

The HADI cycle is a hypothesis testing methodology that consists of four steps: formulating a hypothesis, taking action to implement it, collecting data on the results, and drawing conclusions from that data.

The HADI cycle consists of:

H - Hypothesis

Formulate a hypothesis, i.e., a change that can improve product performance. Example: "If we change the CTA text on the landing page, the conversion to registration will increase by 8%."

A - Action

Determine what actions need to be taken to validate the hypothesis. Example: for the hypothesis "Changing the CTA text on the landing page will increase the conversion to registration by 8%", we need to: 1) make the change on the website, and 2) set up the website and analytics for A/B testing.

D - Data 

Start collecting data on the indicators that will be affected by the change. Write out a list of the data to collect before and after the experiment, and record the current metrics so you know the baseline you started testing from. Example: for "Changing the CTA text on the landing page will increase the conversion to registration by 8%", ask: What was the landing page's traffic before and during the test? What was the registration conversion rate before the change? What was it after the change? What is the resulting increase in conversions?
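For illustration, here is a minimal Python sketch of this step: it records the metric before and after the change and computes the relative uplift (all numbers are made up for the example):

```python
# A minimal sketch of the "Data" step: record the success metric before
# and after the change and compute the relative uplift.

before = {"visitors": 12_000, "registrations": 960}   # baseline period
after = {"visitors": 11_800, "registrations": 1_100}  # test period

conversion_before = before["registrations"] / before["visitors"]
conversion_after = after["registrations"] / after["visitors"]
uplift = (conversion_after - conversion_before) / conversion_before

print(f"Conversion before: {conversion_before:.2%}")
print(f"Conversion after:  {conversion_after:.2%}")
print(f"Relative uplift:   {uplift:+.1%}")  # compare against the predicted +8%
```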

I - Insights

What was the result? Was the hypothesis confirmed or refuted? Can we trust the obtained results? Are they statistically significant?

If the hypothesis wins, we implement it and scale it. The faster you test hypotheses, the faster you grow. Remember: if a hypothesis is not confirmed, that is also a result.
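How do you check statistical significance in practice? For conversion metrics, one common approach is a two-proportion z-test. Below is a minimal, self-contained Python sketch; the numbers are illustrative, and in practice your analytics or experimentation platform would usually run this test for you:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two conversion rates.

    conv_a/n_a: conversions and visitors in the control group;
    conv_b/n_b: the same for the variant. Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return z, p_value

# Illustrative numbers: old CTA text (control) vs. new CTA text (variant).
z, p = two_proportion_z_test(conv_a=960, n_a=12_000, conv_b=1_100, n_b=11_800)
print(f"z = {z:.2f}, p-value = {p:.4f}")
print("Significant at the 5% level" if p < 0.05 else "Not significant")
```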

But the testing method depends on the source of the hypothesis, or rather on the confidence in it and the labor intensity of implementation. For example, if the hypothesis came from transcripts of sales managers' calls in which users explicitly asked for a new feature, confidence in it is higher, and a lighter-weight test may be enough. We will tell you how to choose a suitable method in the next article.

There are many templates that can give you a step-by-step guide to creating a hypothesis and forming a habit around it. We will share some tips in this article, and for a visual demonstration, we will use our Hypotheses Management Service.

By the way, our AI Pleadcop can help formulate a hypothesis based on best practices and industry benchmarks, whether you have spotted an anomaly in your analytics or are simply looking for product growth. [Free sign-up now]

How to form a clear and testable hypothesis?

As we mentioned earlier, the clearer the hypothesis is formulated, the more effective its testing will be.

Case: the registration questionnaire in the task tracker app requires, in addition to contact information, details about the user's company and team. Analyzing the metrics, the team found a high rate of questionnaire abandonment. And the question arose: how do we increase conversion to completed registrations?

When we formulate a problem, we already have one of the variables of the hypothesis: the metric [Success metric] that we will influence. In our case, it is registration conversion.

Then we hypothesize what changes might affect the metric, and formulate a second variable. For example: we will ask for information about the company and the team after registration, once the user has already performed several actions in the product [Description].

But it is important to realize that the relationship between the variables must be clear and logical. If it is not, the test results will be unreliable. Returning to our case study with registration in a task tracker, there are two things to avoid when determining the relationship between variables:

  • Weak link: the assumption that shortening the registration form will lead to increased purchases. This chain involves too many intermediate steps between the change and the metric.

  • Far-fetched connection: the assumption that more social media followers will lead to more registrations. However, there is no strong reason a social media follower would be interested in using your product.

Then, to check whether our assumption actually works, we measure the current value of the success metric and predict its growth once the change is introduced.

And our hypothesis goes like this: "If we remove the obligation to enter company and team information at the registration stage (variable 1, independent), the conversion to registration will increase by 20% (variable 2, dependent)." If the number of registrations increases after the experiment, the hypothesis is considered confirmed. If the number of registrations remains unchanged or changes only insignificantly, the hypothesis is considered failed.

Independent variables (= causes) are any changes you make to your product. Dependent variables (= effects) are metrics that you influence. 

To simplify the formulation of the hypothesis, we need to answer the following questions: 

  • [Funnel impact] What product problem are we solving, and at what stage of the funnel?

  • [Description] Exactly what changes do we want to make; what are the variables going to be?

  • [Success metric] Which product metric will be impacted?

  • [Current value] What is the current value of the metric we are influencing?

  • [Expected results] How do we think the success metric will change? 

AI Pleadcop can help answer each of these questions.   
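To make the template concrete, here is a minimal Python sketch of how such a hypothesis record could be represented. The field names mirror the questions above but are our own illustration, not Pleadcop's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One hypothesis record; field names mirror the questions above (illustrative)."""
    funnel_impact: str      # which problem we solve and at what funnel stage
    description: str        # the change we make (independent variable)
    success_metric: str     # the metric we expect to move (dependent variable)
    current_value: float    # metric value before the change
    expected_result: float  # predicted metric value after the change

registration_form = Hypothesis(
    funnel_impact="High questionnaire abandonment at the registration step",
    description="Ask for company and team details after registration, not during it",
    success_metric="Conversion to completed registration",
    current_value=0.42,    # illustrative baseline: 42%
    expected_result=0.50,  # predicted: roughly +20% relative growth
)
```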

Hypotheses ideation

We have figured out how to formulate a hypothesis correctly to test it and evaluate the results. But what kinds of hypotheses can there be, and where do they come from? 

Five types of hypotheses can be noted: 

Persona Hypothesis

Hypotheses about the characteristics, behavior, and needs of a product's target audience. Such hypotheses help to identify and confirm the key personas or user segments targeted by the product. 

Problem Hypothesis

Hypotheses about the pains or problems faced by the target audience. These hypotheses help identify the main problems users face and confirm whether the proposed problems resonate with the target audience.

Value Hypothesis

Hypotheses about the unique value proposition or benefits a product brings to users. They allow you to identify and validate the key value propositions that attract users to a product.

Usability Hypothesis

Hypotheses about the ease of use and overall user experience of a product. They address UI, UX, and CX aspects. Testing these hypotheses helps to ensure that the product is user-friendly and meets user expectations in terms of navigation and usability.

Growth Hypothesis

Hypotheses about strategies for user acquisition, retention, and overall growth. Working with growth hypotheses allows you to plan and adhere to a product development strategy.

All of these hypotheses can be generated with the help of:

  1. User research. User research helps you find new growth opportunities by improving the customer experience and customer value. Research can be conducted using a variety of methods, including studying user support calls, talking with sales managers, and running interviews and user surveys.

  2. Analytics and metrics. Exploring the data in your analytics tools, such as Amplitude, Mixpanel, or (at least) Google Analytics, can help you better understand user behavior. What is users' real journey, where is the bottleneck, and are there differences in behavior between customer profiles (cohorts)?

  3. Customer experience research. A deep dive into the customer experience will help you better understand and improve the user flow. Put yourself in the customer's shoes and walk the full journey, setting aside bias and staying open to criticism.

  4. Market and competitor research. Track and analyze key competitors' websites, products, and ad traffic to gain new insights.

Hypotheses prioritization

Obviously, team time is best spent on high-impact work, while low-confidence work should be avoided. We usually focus first on things that have high impact but require little effort, before moving on to things that have lower impact or require more effort. But sometimes there is no need to test every product hypothesis, especially if you already have plenty of evidence that it can work. For example, if you see that competitors are successfully applying a certain strategy, or that other products for your target audience are showing positive results, that may be sufficient reason to implement such a hypothesis without additional validation.

There is plenty of evidence that this kind of prioritization leads to better balance and helps teams stay focused. Hypothesis testing is not a one-time action but a cyclical process: after testing one hypothesis and drawing conclusions, you formulate the next one, starting a new testing cycle. This iterative approach continues throughout the product's lifecycle. And to sequence hypothesis testing for the greatest effect, you need to prioritize the hypotheses.

Some hypotheses will focus on the usability of functionality, while others will generate more revenue from a small group of users. Prioritization allows you to focus on the most effective ones first.

To easily calculate the priority of each hypothesis, the ICE method can be used to estimate the resources required and the effect of implementation. For this purpose, it is necessary to determine:

  • [Impact] - how much impact the change will have on the metric (from 1 to 5)

  • [Confidence in potential] - how confident you are in your impact score (10% to 100%)

  • [Efforts] - how easy it is to implement the change (from 1 to 5, where 5 is the easiest)

Priority [ICE Score] = Impact × Confidence × Efforts
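As a quick illustration, here is a minimal Python sketch that scores and ranks a hypothesis backlog with this formula (the backlog items and scores are made up):

```python
def ice_score(impact: int, confidence: float, efforts: int) -> float:
    """ICE priority: Impact (1-5) x Confidence (0.1-1.0) x Efforts (1-5, 5 = easiest)."""
    return impact * confidence * efforts

# Illustrative backlog: (name, impact, confidence, efforts)
backlog = [
    ("Move company/team questions after registration", 4, 0.8, 4),
    ("Add Excel export as a paid feature",             5, 0.5, 2),
    ("Rewrite the landing page CTA",                   2, 0.9, 5),
]

# Rank the backlog from the highest ICE score to the lowest.
for name, impact, confidence, efforts in sorted(
    backlog, key=lambda h: ice_score(*h[1:]), reverse=True
):
    print(f"{ice_score(impact, confidence, efforts):5.1f}  {name}")
```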

You can read more about prioritization methods in this article.

Evaluation of the test result

After the experiment is conducted, the data must be analyzed to evaluate the result. This is done by comparing the product data BEFORE the experiment with the data obtained AFTER it. It is important to examine whether the data is sufficient to make a correct decision (for how to determine whether the test results are sufficient, read the article "Experiment"). If not, you can extend the test or conduct another one. If the data is sufficient, we roll out the results to the whole product and implement the solution for all users.
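One common way to judge whether the data is sufficient is to estimate the required sample size up front. The sketch below uses the standard two-proportion sample size approximation (two-sided α = 0.05, power = 0.80); it is an illustration, not a substitute for your experimentation tool's calculator:

```python
from math import ceil, sqrt

def sample_size_per_variant(p_base: float, rel_uplift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect a relative uplift
    in a conversion rate (defaults: two-sided alpha = 0.05, power = 0.80)."""
    p1 = p_base
    p2 = p_base * (1 + rel_uplift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# E.g., an 8% baseline conversion and an expected +8% relative uplift:
print(sample_size_per_variant(0.08, 0.08), "visitors per variant")  # ~29,000
```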

It is important to remember that data integrity depends on how well the test was performed. Therefore, when testing and analyzing the results:

  • Collect and analyze data thoroughly. Ensure data is clean and up to date.

  • Conduct the right number of experiments with the product. It may take several tests to determine whether your hypothesis is correct. But do not keep experimenting endlessly in the hope of getting the desired result.

In Pleadcop, you can record the results of all experiments and get expert support from Pleadcop's AI on the correctness of the test. To do this, you need to enter:

  • [Actual duration] - how long the experiment lasted. This is important because if the period is too short, the metric may not have time to change.

  • [Number of users] - the number of users participating in the experiment. 

  • [Actual value] - the received value of the metric being tested. 

With this, you can evaluate the hypothesis and make a decision based on the data. Even if the hypothesis fails, you should not be upset: if you do not get the expected result, you can still improve your product, and the result shows you where you need to go.

In the end, the critical metric for early-stage companies is regular hypothesis validation. The more you test, the quicker you will find the secret to growing faster than your competitors. 
