Marketers often use A/B testing – or split testing – to inform their marketing strategy and optimize their promotional content.
If the term is new to you, don’t worry, we’ve got you covered. In this post, we’ll be telling you all about A/B split testing, how you can use it, and the benefits it can bring to your business.
Let’s get started!
What is A/B split testing?
A/B split testing is the process of comparing two versions of a piece of marketing material that are identical except for a single element, in order to see which one performs better.
Both variations are prepared; one is presented to half of your audience, and the other is presented to the other half. Data is collected to see how both versions ‘performed’ by measuring certain pre-defined KPIs. The control version (version A) is compared to the challenger version (version B). If the challenger version performs better, it’s adopted.
Say, for example, you want to find out which call-to-action (CTA) converts better on a landing page. First, you’d create two versions of the landing page, with only the CTA changed. You’d then direct half of your visitors towards version A of the landing page, and the other half to version B. Whichever variation leads to the most conversions is likely the most effective and would probably be adopted.
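If you serve the page yourself, the 50/50 split can be as simple as hashing a visitor identifier. Here's a minimal Python sketch, assuming each visitor has a stable ID such as a cookie value (the `visitor_id` values are hypothetical); hashing ensures a returning visitor always sees the same version:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing the ID (e.g. a cookie value -- a hypothetical identifier
    here) means a returning visitor is always shown the same version.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: bucket a few (made-up) visitors.
for vid in ["visitor-1", "visitor-2", "visitor-3"]:
    print(vid, "->", assign_variant(vid))
```

Because the assignment is deterministic, you don't need to store which version each visitor saw; you can recompute it from the ID at any time.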
How A/B split testing is used
The above example of comparing two call-to-action variations is just one possible use case of split testing, but there are plenty more.
You can use A/B split testing on almost any marketing element. It might be on-site materials, like your website copy, images, titles, or brand colors, or off-site materials like your email subject lines or ads.
However, it would be unrealistic to test every single thing – that would take forever. Instead, A/B split testing is best used to compare the things that make the biggest difference. These are usually elements like:
- Article headlines
- Your CTAs
- Product descriptions
- Capture page copy
- Email subject lines
Benefits of A/B testing
There are lots of benefits to A/B testing. The most obvious one is that it tells you which words, phrases, visuals, and other elements have the greatest impact on your conversion rates – and higher conversion rates often translate into increased revenue.
For example, Performable ran a split test back in 2011 to compare how different colors impacted conversion rates. They created two versions of their home page, one with a red CTA button and one with a green CTA button, and compared how they performed. Based on 2,000 page visits, the red button led to a 21% increase in conversions.
But it’s not just about conversions. Split testing can also bring other business benefits.
For example, split testing can help you achieve better SEO on your website by letting you compare page loading speeds, a significant ranking factor: create two versions of a page, see which loads faster, and adopt that one.
Split testing also helps you avoid costly mistakes. For example, if a change to one of your product descriptions would lead to fewer conversions, split testing will tell you this beforehand, so that you can rule it out. This saves you money in the long run, even after factoring in the relatively low cost of running the split test.
A/B split testing process
Here’s a quick guide on how to carry out your own A/B split test, broken down into 5 easy steps.
1. Determine your goals and what variable you want to test
Start by working out exactly which element you want to test. The element should be relevant to the metric you’re trying to improve.
For example, if you’re working on improving email click-through rates, do you specifically want to test the subject line, the email copy, the CTA, the design, or something else entirely? Isolate the exact variable you want to test and keep all other variables the same.
Time is a variable too, which is why you should run both variations at the same time (unless, of course, you’re trying to find out the ideal time of day to send an email).
2. Look at existing data
Next, look at your existing data using an analytics tool to determine your starting point. Sticking with our above example, you might want to look at your current average email click-through rate. Once you know where you’re starting from, you’ll be able to determine whether the changed variable has had any impact.
3. Create your variant
Take your ‘control’ version, i.e. the version of the material that hasn’t been changed, and create another version of it. In the second version, change only the variable you selected in step 1.
For example, if you’re testing email subject lines, only change the subject line – leave everything else the same as the control version.
4. Design and execute the test
Next, design your test by determining when you’re going to send out the variations of your email and who you’re going to send them to. To get accurate results, send your control email to half of the recipients and the second version to the other half, selecting each half at random.
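The random split itself is straightforward. Here's a minimal Python sketch that shuffles a mailing list and cuts it in half; the recipient addresses and the fixed seed are illustrative assumptions, not part of any particular email tool:

```python
import random

def split_recipients(recipients, seed=42):
    """Randomly split a mailing list into two equal-sized halves.

    A fixed seed (an assumption for reproducibility here) means the
    same list always produces the same split.
    """
    shuffled = list(recipients)
    rng = random.Random(seed)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical mailing list of ten addresses.
mailing_list = [f"user{i}@example.com" for i in range(10)]
group_a, group_b = split_recipients(mailing_list)
```

Group A then receives the control email and group B the variant; shuffling before cutting is what keeps the assignment random rather than, say, alphabetical.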
Before you run the test, you’ll need to make sure you have the tools in place to track performance. In our example, the metric we’re interested in would be click-through rates.
5. Analyze the results
After the test is complete, look at the data to see which variation performed better, and by what margin. You can use this to inform future marketing campaigns, or repeat the process with a different variable.