In our last post in this series, we introduced A/B testing—a popular means of testing different design variables in order to gain greater insight into site visitor behaviour and, ultimately, develop effective long-term design practices. To recap, A/B testing involves showing two variants of a design element (copy, navigation, visuals, a call to action, and so on) to live users side by side, in order to establish which variant works better. Whether you’re thinking of adding a new header to your site or a new animated menu element to your app, A/B testing lets you test both options, acting as a check on the all-too-human impulses and groupthink that can lead to poor design.

Today, we’re going to look at the importance of hypotheses in A/B testing. As in any experiment, the hypothesis is a vital component: it gives you something concrete to measure over the course of the test, an assumption framed so as to draw out specific empirical consequences.

“Testing random ideas that are not based on well-thought-out hypotheses can waste your time, money, and website traffic. To develop a successful test hypothesis, you need to find problems and concerns that your customers struggle with when they are completing the conversion goal of your website,” argues Smriti Chawla.

Assuming you’ve determined your conversion goal and identified what you think is a problem or obstacle to that goal, you’re ready to draft a basic A/B hypothesis. For example:

If I replace element x with element y, then conversions will increase by z, due to element y’s cleaner minimalist design.
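
A hypothesis in this form is directly measurable. As a rough illustration, here’s a minimal sketch in plain Python of how you might check whether element y’s observed conversion rate genuinely beats element x’s; the numbers are hypothetical, and the two-proportion z-test is hand-rolled rather than taken from any particular analytics tool.

```python
# Minimal sketch: quantitatively checking a hypothesis of the form
# "replacing x with y will increase conversions". All figures are
# hypothetical, and the two-proportion z-test is hand-rolled.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, one-sided p-value) for H1: rate_b > rate_a."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal survival function.
    p_value = 0.5 * erfc(z / sqrt(2))
    return z, p_value

# Hypothetical results: 480/12,000 conversions on x, 560/12,000 on y.
z, p = two_proportion_z_test(480, 12_000, 560, 12_000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

If the p-value comes out small, the data is consistent with the predicted lift; if not, the hypothesis is disproved for now, which is still a result.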

A hypothesis explicitly links a design change with a conversion outcome, and can only be proved or disproved quantitatively, but it doesn’t end there. This VWO article outlines some ‘essential elements’ for a solid hypothesis:

  1. It aims to alter customer behaviour either positively or negatively.
  2. It focuses on deriving customer learning from tests.
  3. It is derived from at least some evidence.

With this in mind, let’s break up a hypothesis into its component parts: a variable, a result, and a rationale.
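
To make those parts concrete, here’s a minimal sketch of a hypothesis captured as structured data. The field names and example values are our own invention for illustration, not something prescribed by the VWO article.

```python
# A minimal sketch of an A/B hypothesis as structured data, keeping
# its three parts explicit. Field names and example values are
# hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class ABHypothesis:
    variable: str   # the element being changed
    result: str     # the measurable outcome you predict
    rationale: str  # the evidence-based reason you expect it

    def statement(self) -> str:
        return (f"If I change {self.variable}, then {self.result}, "
                f"because {self.rationale}.")

print(ABHypothesis(
    variable="the three-field signup form to a single email field",
    result="signups will increase",
    rationale="session recordings show users abandoning at field two",
).statement())
```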

The variable is the element in question, whether that’s a new navigation bar or a slightly different call to action. It is perhaps the most important part of any A/B test hypothesis, because it forces you to reckon with both the expected result and the underlying rationale. This is usually where analytics come in: they let you isolate poorly performing pages, or features that your users routinely overlook.
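
As a toy example of that kind of triage, the sketch below ranks pages by conversion rate and flags the weakest as candidates for testing; the traffic figures and the 2% threshold are invented for illustration.

```python
# Rough sketch of using an analytics export to surface candidate
# variables: pages where visitors rarely complete the goal.
# The figures and the 2% threshold are made up for illustration.
page_stats = {
    # page: (views, goal_completions)
    "/pricing":  (18_000, 950),
    "/features": (22_000, 240),
    "/signup":   (9_000, 610),
}

CONVERSION_FLOOR = 0.02  # flag pages converting below 2%

for page, (views, goals) in sorted(
        page_stats.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    rate = goals / views
    flag = "  <- candidate for testing" if rate < CONVERSION_FLOOR else ""
    print(f"{page:<12} {rate:6.2%}{flag}")
```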

The intended result is easy to establish: more conversions, more clicks, longer browsing time, or a reduced bounce rate. The hard part is the rationale. To arrive at a solid rationale for your test, you need to demonstrate the connection between the variable and your intended result. Qualitative and quantitative methodologies both come in useful here: customer interview data, surveys, heat maps, social media analytics, and user testing all provide evidence for how your variable is likely to affect your outcome.
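
To give a flavour of the heat-map evidence mentioned above, here’s a toy sketch that bins recorded click coordinates into a coarse grid, making it easy to see which regions of a page users actually touch; the click data is invented.

```python
# Toy sketch of heat-map-style evidence: bin click coordinates into
# a coarse grid to see which page regions get attention. The click
# data is invented for illustration.
from collections import Counter

CELL = 100  # grid cell size, in pixels

clicks = [(120, 80), (130, 95), (640, 410), (125, 88), (610, 402)]

# Snap each click to the top-left corner of its grid cell and tally.
heat = Counter(((x // CELL) * CELL, (y // CELL) * CELL) for x, y in clicks)

for (cx, cy), count in heat.most_common():
    print(f"cell at ({cx}, {cy}): {count} clicks")
```

A region your rationale claims users ignore should show up here as a cold cell; if it doesn’t, the rationale needs rethinking before you spend traffic on the test.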

Even with the strongest hypothesis, an A/B test won’t always be a success. What a good hypothesis and a properly conducted test do guarantee is that you come away knowing much more than you did before about the kinds of things your users engage with—in other words, an actionable result—so that next time you have even more weapons in your arsenal for rationalising your hypotheses, improving your testing, and arriving at a great design outcome.
