A/B tests can be used for a variety of product experiments. They’re sometimes called “split tests” because you split your traffic 50/50 between two versions of one piece of content that differ by a single variable.

By testing these two versions, you can analyze their performance so you can optimize content and increase ROI.

In this article, we'll share some of the basics of A/B testing for ads, as well as the things you'll need to consider when planning, executing, and analyzing an A/B test. If you'd like to skip to the how-to guides, follow these links:

 

A/B Testing: The Basics

What Can You Test?

A/B testing is very flexible, but here are some key content types you should consider testing:

  • Images - how do your readers respond to different versions of an image or design element? Test fonts and colors, images, graphics, and icons.
  • Calls to action - how do your readers respond to different calls to action? Which CTA generates more engagement?
  • Content - what content is your audience most receptive to? Test subject lines, headers and sub-headers, titles, copy, and more.
  • Placement - how do your readers engage with the content depending on where it's placed? For example, do featured stories get read more if they're linked at the bottom of an article, or to the side? Do different ad zones perform better than others?

 

When Shouldn’t You Run an A/B Test?

Because A/B testing is so flexible, sometimes you end up running tests when you shouldn’t. Here are some situations where A/B testing is not the best option:

  • If your audience is smaller than 1,000 readers or monthly website users, you should consider waiting to test. Having a sample size that is too small can skew your results, rendering them inconclusive or even invalid. Why is this? Because a small sample size isn’t necessarily representative of your overall audience.
  • You'll be wasting your time if you test a change that is a no-brainer; this includes industry best practices or clear standards. Read Indiegraf’s Guide to Newsletter Best Practices.
  • If it's broken, just fix it. You don't need to A/B test something that isn't working as intended. If your website has broken links or processes that dead-end or frustrate readers, don't bother with A/B testing. 
  • If you’re adding something your readers have asked for, you likely don’t need to run an A/B test.

 

 

Building an A/B Test

1: Define the Problem and Determine a Hypothesis

Before you set up and run your first A/B test, you will want to define a problem and then develop a hypothesis. Make sure your hypothesis is aligned with your publication’s business and editorial goals. A strong hypothesis should include three main parts: the variable, the desired result, and the rationale behind it.

For example, if you’re designing house ads, you might want to test the color of the CTA button. Your hypothesis could be, “If we change the button color on our house ad from green to yellow, then more people will click on it because the yellow button has higher contrast than the green one”.

 

2: Define Sample Size & Test Duration

Now that you know what you want to test, you need to determine how long to run the test and how many readers (aka your sample size) you need to reach for your results to be statistically significant. Calculating your test duration and sample size can be tricky; that's a lot of math! But don't worry - there are plenty of free calculators available that you can use (like this one here).
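
If you're curious what those calculators do under the hood, here is a rough sketch based on the standard two-proportion sample-size formula. Every input in it - the 2% baseline click rate, the lift you hope to detect, and the daily traffic figure - is an illustrative assumption, not a recommendation from this guide.

```python
# Sketch of the standard two-proportion sample-size formula that many free
# A/B test calculators are based on. All inputs are illustrative placeholders.
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, minimum_detectable_effect,
                            alpha=0.10, power=0.80):
    """Readers needed in EACH variant to detect an absolute lift of
    `minimum_detectable_effect` over `baseline_rate` (two-sided test)."""
    p1 = baseline_rate
    p2 = baseline_rate + minimum_detectable_effect
    z_alpha = norm.ppf(1 - alpha / 2)  # alpha = 0.10 -> 90% confidence level
    z_power = norm.ppf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: a 2% baseline click rate, hoping to detect a lift to 3%.
n_per_variant = sample_size_per_variant(0.02, 0.01)
daily_visitors = 400  # hypothetical traffic figure
days_needed = ceil(2 * n_per_variant / daily_visitors)
print(f"{n_per_variant} readers per variant, about {days_needed} days of traffic")
```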

What is Statistical Significance?

Investopedia defines statistical significance as a determination that a relationship between two or more variables is caused by something other than chance. It is used to provide evidence concerning the plausibility of the null hypothesis, which hypothesizes that there is nothing more than random chance at work in the data.

As we mentioned earlier, our recommendation is to wait until you have an audience size of at least 1,000 readers or monthly website users before you start to conduct A/B tests. While this isn't a firm rule, if your audience is smaller than this, you may have a difficult time getting enough responses to reach statistical significance.

There is no set duration for how long an A/B test has to run, but you should expect to commit a minimum of two weeks to each test. This can vary depending on how big the variation is. The smaller and less obvious the change, the longer you may need to run the test.

 

3: Analyze your results

Once you have finished running your A/B test, refer back to your hypothesis. Based on the results, did you prove or disprove it? If you ran your A/B test through an ESP or A/B testing software, check whether a winner has been declared. Many platforms do a basic analysis for you. Typically, a winner will be declared if these two conditions are met (a minimal check of both is sketched after the list below):

  • The A/B test has reached a significance level (or confidence level) of 90% or higher, and
  • The minimum test duration has passed
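
If your platform doesn't report this for you, here is a minimal sketch of that two-part check, assuming a two-proportion z-test (a common choice for click data). Every number in it is a made-up placeholder.

```python
# Minimal sketch of the "declare a winner" check described above, assuming a
# two-proportion z-test. All numbers are made-up placeholders.
from statsmodels.stats.proportion import proportions_ztest

clicks = [42, 28]           # challenger variant, control
impressions = [1000, 1000]  # readers shown each version
days_run = 15
MIN_DAYS = 14               # minimum test duration you committed to

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
confidence = 1 - p_value    # one common way tools report the significance level

winner_declared = confidence >= 0.90 and days_run >= MIN_DAYS
print(f"confidence: {confidence:.1%}, winner declared: {winner_declared}")
```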

You should also review the following metrics (if applicable):

  • Sample Size: how many users were included in your A/B test overall, and how large was each segment?
  • Impressions: how many users saw your A/B test?
  • Clicks and Click-Through Rate: of the users shown your A/B test, how many clicked on it? Are they more likely to click on one over the other?
  • Conversions and Conversion Rate: of the users shown each version, how many go on to convert into paying customers?
  • Bounce Rate: when a user lands on a page, do they go on to visit another page on your website, or do they leave?
  • Uplift: the difference in performance between the control and the challenger variation. For example, if one variation received 28 clicks and the other received 42, the uplift is 50% (see the sketch below).
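
For a concrete picture of how a few of these metrics come together, here is a short sketch. The click counts reuse the 28-vs-42 uplift example above; the impression and conversion figures are hypothetical.

```python
# Computing CTR, conversion rate, and uplift from raw counts. The click numbers
# match the uplift example above; impressions and conversions are hypothetical.
control = {"impressions": 1000, "clicks": 28, "conversions": 5}
variant = {"impressions": 1000, "clicks": 42, "conversions": 9}

def click_through_rate(v):
    return v["clicks"] / v["impressions"]

def conversion_rate(v):
    return v["conversions"] / v["impressions"]

# Uplift: relative improvement of the challenger over the control.
uplift = (click_through_rate(variant) - click_through_rate(control)) \
         / click_through_rate(control)   # (0.042 - 0.028) / 0.028 = 50%

print(f"control CTR {click_through_rate(control):.1%}, "
      f"variant CTR {click_through_rate(variant):.1%}")
print(f"control conversion rate {conversion_rate(control):.1%}, "
      f"variant conversion rate {conversion_rate(variant):.1%}")
print(f"uplift in CTR: {uplift:.0%}")
```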

Remember, there's no guarantee that your hypothesis will result in a winning test, no matter how well you research it.

Inconclusive Results

Sometimes A/B test results will come back as inconclusive - that’s ok! Don’t get discouraged; you can revise your hypothesis or your variant and try again. An inconclusive result can happen when the results of your test are too close to determine a clear winner. Make changes to your A/B test based on data from each experiment, and continue to test until you find your ideal outcome.

 

 

Key Takeaways

  • Pick a single variable to test; don't change more than one element between versions.
  • Align your hypothesis with your business goals. For example, if you're trying to increase reader revenue, then A/B testing different calls to action might be a great option.
  • Always run your variants simultaneously and ensure weighting, prioritization, etc. are all identical.
  • Record your results using a spreadsheet or an online tool - analyze your data to determine the success of each A/B test.
  • Don’t get discouraged if results are inconclusive at first. Revise your hypothesis and try again.
  • Make changes to your A/B test based on data from each experiment, and continue to test different variations to find your ideal results.

Learn More: