
A/Bout Testing

In this post we will go through all the steps to adopt A/B testing as a common practice on your project.

TL;DR Follow our checklist with all the insights gathered from doing A/B tests on our projects: A/B Testing Checklist & A/B Testing Checklist Repository

Why A/B Tests

Knowing what customers want can be a matter of life or death for many companies. From the most basic components of your site to major changes in the flow, it’s important to know: the type of communication that performs better, the images that please your target users the most, and what makes the customer follow the flow.

Any product can be tested, and testing brings heaps of value to your company. For a small product, this is possible by opening a communication channel with the users or running usability tests, for instance. However, not everything valuable will show up on the radar of these tests, and on top of that, this strategy scales poorly: imagine having to schedule exploratory meetings with hundreds, perhaps thousands of users and crunch all that data, added to the long list of features I’m sure you already have to implement. Then come the methodological questions: How do you evaluate which scenario performs better? And how do you make sure that your test scenario lines up with the real scenario? Things just got a little more complicated to deal with by yourself while developing, managing, or mentoring.

What are A/B tests

A/B tests are a remarkable way of getting this information, regardless of the kind of question you want to answer. They deliver on a simple premise, with crystal-clear goals. A simple definition of A/B testing is the practice of serving two versions of the same scenario to different segments of visitors at the same time and comparing which variation best achieves one specific goal. "A" is the control group, the current state of your system. "B" is the modified experience that tries to achieve a specific goal.

You can pursue one goal or many to help you enhance the experience, such as the following:

- Content Engagement: You want customers to engage with your product, evaluating and measuring every aspect of the content, giving both you and your clients a better perspective on the product.

- Conversion rates: Conversion metrics are unique to each website. For e-commerce, they may be the gross sales of the products, or revenue, while for a B2B, they may be the generation of qualified leads.

- Abandonment rates: This metric is very useful to e-commerce sites because it is directly associated with the use of virtual shopping carts. Cart abandonment is quite common; knowing the ratio of abandoned shopping carts to initiated transactions can be crucial to increasing conversions.

- Bounce Rates: Bounce rate is a measure of "stickiness": the percentage of visitors who enter the site and then leave rather than continuing to view other pages within the same site.

- Others: Other metrics can be used as well, like page views, session duration, clicks, share rate, churn rate, or any other success metric that may impact your business.
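Whatever the metric, the mechanics of the split are the same: each visitor is assigned to "A" or "B" and stays there. As a minimal sketch (in Python, with hypothetical function and experiment names, not any particular tool's API), variant assignment is often done by hashing a stable user identifier, so no assignment table needs to be stored:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant "A" or "B".

    Hashing user_id together with the experiment name gives a stable,
    roughly uniform value in [0, 1); users below `split` see "A".
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) / 16**32  # map the 128-bit hash to [0, 1)
    return "A" if bucket < split else "B"

# The same user always lands in the same variant of the same experiment:
assert assign_variant("user-42", "homepage-signup") == assign_variant("user-42", "homepage-signup")
```

Including the experiment name in the hash means the same user can land in different buckets across different experiments, which keeps experiments independent of each other.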

Through A/B testing you will be able to see which variant impacted your customers the most, and then alter your strategy to be more appealing to your target audience. For further examples, check this Basecamp post: [How we lost (and found) millions by not A/B testing](https://signalvnoise.com/posts/3945-how-we-lost-and-found-millions-by-not-ab-testing) and see how much monetary value they found in A/B testing.

Where on your product to do A/B tests?

Let’s take the Basecamp example from the link above. In 2014, there was an odd drop in sign-up rates on Basecamp’s website: a design iteration had removed the sign-up step from their homepage.

After that, metrics plunged. The issue could have been assessed and addressed within 2 weeks of A/B testing on the homepage, gathering information to make an informed decision about which homepage was best for their business. After bringing the sign-up back, rates spiked. Basecamp kept the sign-up on their homepage until recently; now they’ve changed it again, to a call-to-action button.

Landing pages are pages designed to convert visitors into leads, and testing different types of them is just one of the possible things you can do. You can A/B test all components of your website that might influence visitor behavior:

- Headlines

- Sub-headlines

- Paragraph Text

- Testimonials

- Call to Action Button

- Links

- Images

Let's take another example: Netflix is also a strong adopter of the experimenting philosophy. Every product change Netflix considers - yes, I'm surprised by that too - goes through a rigorous A/B testing process before becoming the default user experience. In their blog post Selecting the best artwork for videos through A/B testing they show a commitment to testing everything possible: even the images associated with films’ titles are A/B tested, sometimes resulting in 20% to 30% more views for that title.

Finishing up the round of examples, Facebook has recently carried out some experiments in its own app, testing minor decisions around the design and information architecture of the app's header, assuming that these things also need to be validated in the real world. More than 12 different icons were tested on the tab bar alone.

How to perform A/B tests

Although it’s relatively easy to set up an experiment, don't fall into adding experiments to all of your features. You need to be strategic about what you’ll test. Testing everything just for the sake of testing isn’t experimenting, nor will it validate anything. It's not possible to test every single possibility, and even if you could, testing every possibility would only make you good at A/B testing and delay the development of the right product.

From our research and experiences here at Vinta a well-structured flow to A/B testing is:

- Start with Research: Every project has room for improvement; look for those opportunities and discuss them with the team. Back these insights with data: use the analytics tools available, such as Google Analytics, heatmaps, and surveys, to collect data on visitor behavior and metrics. Use all this data to find out what is stopping visitors from converting. Remember: only about one in every seven A/B tests is a winning test; doing this step right can improve that statistic and save you some time.

- Formulate a Hypothesis: This is the time to build a well-defined hypothesis. Based on the insights from your research, build a hypothesis to increase conversions or achieve one specific goal. Here’s an awesome article about formulating a good hypothesis: A/B Test Hypothesis Definition, Tips and Best Practices

- Create a Variation: This is when you build the variation based on your hypothesis, to be A/B tested against the existing version. If you don't use a service to build experiments, this is the part where the dev team works to bring your variation to life.

- Testing: Kick off the test and wait for the stipulated time in order to achieve a statistically significant result. Unsure about how to define the experiment duration? Here’s a great way to do it: A/B split & multivariate test duration calculator

- Analyzing results and drawing conclusions: Analyze the test results keeping in mind monthly visitors, current conversion rate, the expected change in the conversion rate, weekday/weekend/holiday variations, seasonality of your traffic, sample size, and all other variables that can interfere with your results. In case of a positive outcome, deploy the winning variation. If the results remain inconclusive, draw insights from them and apply these in a subsequent test.
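To make the "statistically significant result" step concrete, here is a minimal sketch (in Python, with hypothetical conversion numbers) of a two-proportion z-test, one common frequentist way to check whether a variant's lift is likely real rather than noise:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 200/4000 conversions on A vs. 260/4000 on B
z, p = z_test(200, 4000, 260, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a common threshold is p < 0.05
```

This is only the final check; the duration calculator linked above is still needed beforehand, because stopping a test the moment p dips under 0.05 inflates your false-positive rate.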

Useful Tools to A/B test

We have talked about analytics tools before; they are important for any company. One of them is Google Analytics, which provides a way to analyze your customers’ behavior. If you don't have a web analytics package on your site, this is something you should set up right away. Sign up for Google Analytics and start collecting data to base decisions and tests on. You can't do A/B tests without baseline data.

When it comes to actually implementing the test experiments there are a lot of options available, such as: Optimizely, VWO, Adobe Target, Google Optimize, AB Tasty, and many others. Here at Vinta, we recommend the use of Google Optimize, which is easy to set up, freemium, and integrated with Google Analytics by default. Don't waste your time trying to build your own platform. From my experience, Optimize will fill 95% of your needs, and when it doesn't, you will be able to complement Optimize with an implementation of your own; they even provide a way for you to perform Server-side Experiments.

Google Optimize

As I said, we use Google Optimize because it's simple to set up and use. It’s integrated with Google Analytics, and the free plan already comes with the basic features needed by any application. In short, Optimize provides four types of experiments:

- A/B tests: The A/B test in its basic form: a randomized experiment using two or more variants of the same web page (A and B). Each variant is served at similar times so that its performance can be observed and measured independently of external factors.

- Multivariate Tests: A multivariate test (MVT) tests variants of two or more elements simultaneously to see which combination creates the best outcome. Instead of showing which page variant is most effective (as in an A/B test), an MVT identifies the most effective variant of each element and analyzes the interactions between them. This is very useful for optimizing multiple aspects of a landing page. The drawback of MVTs is that they can lead to unreliable results: with multiple variants, the volume of visitors is spread across them, resulting in longer tests and the possibility of not achieving the statistical reliability needed for decision-making.

- Redirect Tests: A redirect test is a type of A/B test that allows you to test different web pages against each other. In redirect tests, variants are identified by URL or path instead of traditional page components. They are useful when you want to test two very different landing pages, a complete redesign of a page, or two different flows.

- Personalizations: Useful in case you want to build a landing page for customers who’ll access the site from a specific campaign. Optimize allows you to build a completely new page without the need for a real deploy.
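The traffic-splitting drawback of multivariate tests mentioned above follows from simple arithmetic: the number of combinations multiplies with each element under test. A quick sketch in Python, with hypothetical page elements:

```python
from itertools import product

# Hypothetical page elements, each with a few candidate variants
headlines = ["Original", "Shorter", "Question"]
images = ["Hero photo", "Illustration"]
buttons = ["Sign up", "Try it free"]

# Every combination of elements is a bucket competing for the same traffic
combinations = list(product(headlines, images, buttons))
print(len(combinations))  # 3 * 2 * 2 = 12 combinations
```

With 12 combinations instead of 2 variants, each bucket receives roughly a sixth of the visitors it would get in a simple A/B test, which is why MVTs need far more traffic (or time) to reach significance.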

After the creation of the experiment, Optimize gives you all the tools to create variations, define traffic, goals and more:

- Create and edit variants: The specific changes to your web page are called variants. Optimize provides an editor through which you can create as many variants as you wish to test against your original page.

- Multiple user segmentation options: The selection of which users will participate in your test can be done in various ways: by URL, by source, by campaign, by Google Analytics audience, by geolocation, or even randomly.

- Goals: On Optimize, goals are also called objectives, and they are synced with your Google Analytics goals. They are the metrics used to measure your variants.

- Preview variants: Before publishing the experiments you can preview your changes to see if everything looks as planned.

- Consistency when showing experiments on the same device: Once a user accesses one of the variants, they will keep seeing the same variant for as long as the experiment is running.

Beyond creating and running experiments, Optimize also provides a default results analysis, using a Bayesian method to evaluate the performance of the experiment. Optimize reports are divided into sections: the first is a summary that displays the experiment status and an overview of the results; the second is the improvement overview card, which compares the performance of the original against your variant(s), with their percentage improvement toward the experiment’s objective(s); the third displays the performance of your variants against a chosen objective.
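For intuition on what a Bayesian analysis of an A/B test looks like, here is a minimal sketch in Python. It assumes flat Beta(1, 1) priors and hypothetical conversion counts (this is a textbook approximation, not Optimize's actual model), and estimates the probability that the variant's conversion rate beats the original's by sampling from each posterior:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=0):
    """Estimate P(rate_B > rate_A) with Beta(1 + conversions, 1 + misses) posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a  # count draws where B outperforms A
    return wins / samples

# Hypothetical data: 200/4000 conversions on the original vs. 260/4000 on the variant
print(prob_b_beats_a(200, 4000, 260, 4000))  # close to 1.0 means B is very likely better
```

One appeal of this framing is that "there is a 99% chance the variant is better" is a far more natural statement for stakeholders than a p-value.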

A/B testing is hard, although it doesn’t look like it. Since we at Vinta are huge fans of checklists, we’ve developed one that provides an easy, quick set of things we recommend keeping in mind when A/B testing. The checklist is linked to this GitHub repository, so feel free to contribute too.

Want to know more? There’s a free course on Udacity for you: A/B testing by Google

Also a big thanks to Amanda and Rob for reviewing this post!

Rebeca Sarai

Frontend and Backend developer, Python and modern JavaScript evangelist.