A/B Testing: A Complete Guide With Examples
How do you know if the email you’re sending out next week will do the job and convert prospects into paying customers? You don’t.
The truth is, your guess is as good as mine.
Instead of relying on assumptions and letting opinions dictate your decision making, what if you could make a data-backed decision?
A/B testing, also known as split testing or bucket testing, is a method of comparing two versions of a marketing asset to find out which one is more effective at converting customers.
Most, if not all, of your A/B tests will center on three things: content, design, and timing.
A/B testing examples:
- Email subject line
- CTA (call to action) location in an email
- CTA location on a splash page
- The CTA itself
- Ex: Click Here, More Information, Continue Reading, Learn More, etc.
- Tone of copy
- Ex: Hard sell or soft sell?
- Hard sell – Purchase tickets now…
- Soft sell – You may want to consider purchasing tickets…
- Email template design (color, format, etc.)
- Splash page template design (color, format, etc.)
- Graphics in a paid ad
- Copy in a paid ad
- The time/day you send an email
- Ex: Monday at 6AM vs. Monday at 1PM
- Monday at 7AM vs. Thursday at 7AM
5 Steps to conduct your next A/B test
1. Choose the variable you will A/B test
You will quickly learn that there are multiple variables you want to test. It’s essential that you isolate only one variable and keep all other elements equal.
This means you change only the variable you are testing between version A and version B; every other variable and element should remain the same.
The variables you do not alter are your control variables. They might be images, copy, subject line, etc.
Why? If you change more than one variable at once, you won’t be able to tell which variable was responsible for converting customers.
For example, let’s say you’re interested in A/B testing the CTA location in your email promoting an upcoming sale.
Version A – CTA located at the top of the email
Version B – CTA located at the bottom of the email
It’s also important to note that there are scenarios where it might be best to test more than one variable at once; this is referred to as multivariate testing. If you’re not sure whether to run an A/B test or a multivariate test, I’d be happy to help. Feel free to email me at KimYangCreative@gmail.com for a second opinion – I respond within 24 hours.
2. Research
First, you’ll want to collect the primary data you already have on hand.
Using the CTA location in an email example, look at the previous emails you’ve sent to promote upcoming sales.
Specifically, you’ll want a good understanding of your average open rate, average click rate, and average conversion rate.
Your email service provider should have a dashboard of your analytics.
If you’re testing variables on a splash page, look to Google Analytics; your CMS may also provide an analytics dashboard.
You need this data so you can tell whether version A and version B are performing at, below, or above your average rates.
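If you’d rather crunch the baseline yourself, here’s a minimal sketch of that calculation in Python. Every figure below is made up for illustration; substitute the real numbers from your provider’s dashboard.

```python
# Hypothetical past campaigns: (emails delivered, opens, clicks, conversions).
past_campaigns = [
    (5000, 1100, 240, 52),
    (5200, 1250, 210, 48),
    (4800, 980, 190, 41),
]

delivered = sum(c[0] for c in past_campaigns)
opens = sum(c[1] for c in past_campaigns)
clicks = sum(c[2] for c in past_campaigns)
conversions = sum(c[3] for c in past_campaigns)

# Baseline rates, each calculated per delivered email.
print(f"Average open rate:       {opens / delivered:.1%}")
print(f"Average click rate:      {clicks / delivered:.1%}")
print(f"Average conversion rate: {conversions / delivered:.1%}")
```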
Lastly, you’ll want to collect secondary data.
Secondary data is available from other sources and makes it easier for you to carry out further research.
In the primary and secondary research section, I’ve listed reliable sources you can turn to.
3. Develop a hypothesis for your test
In other words, you want to have a clear idea of the results you’re looking for and your end goal.
Ex: Placing the CTA at the top of the email will increase conversions by 10% compared to placing the CTA at the bottom of the email.
If your hypothesis is proven correct, you will gain insight into the variables responsible for conversion, allowing you to replicate them in more effective marketing emails.
If you choose to run this A/B test more than once, which I would recommend, you’ll have even more evidence of why the location of your CTA in a sales email matters.
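One detail worth pinning down when you phrase your hypothesis: does “increase conversions by 10%” mean a relative lift or an absolute gain in percentage points? A quick sketch with a made-up 2% baseline shows how different the two targets are.

```python
# Hypothetical baseline: the bottom-CTA version converts at 2.0%.
baseline_rate = 0.02

# "Increase conversions by 10%" read as a relative lift:
relative_target = baseline_rate * 1.10   # 2.2%

# The same phrase read as a gain of 10 percentage points:
absolute_target = baseline_rate + 0.10   # 12.0%

print(f"Relative lift target: {relative_target:.1%}")
print(f"Absolute gain target: {absolute_target:.1%}")
```

Either reading is fine, as long as everyone analyzing the results uses the same one.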
4. Split your test group equally and randomly
There is no perfect “sample size.”
Most of the time, you’ll want to test your entire list; this provides a clear picture of how your email subscribers respond to CTA location.
However, you also need to consider the type of test you are running.
If you’re testing more extreme content and want to limit how many contacts, or which types of contacts, receive the email, test only a segment of your list.
Ideally, you would conduct an A/B test with an email list of 500-1,000 contacts.
Working with the audience you have, you will want to make sure that you’re sending version A and version B to the same number of people on your email list.
For example, if you send version A to only 30% of your email list and the remaining 70% receive version B, chances are version B will convert more contacts.
Because the two versions were not sent to an equal number of contacts, this would not be an accurate indication that version B has a higher conversion rate.
The group that received version B simply had more opportunities to convert.
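If you want to perform the split yourself rather than let your email platform handle it, here’s a minimal sketch in Python, assuming your contacts are a simple list of addresses (the addresses below are hypothetical):

```python
import random

def split_ab(contacts, seed=None):
    """Shuffle the contact list and divide it into two equal-sized groups."""
    shuffled = contacts[:]                 # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # randomize to avoid ordering bias
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical contact list for illustration.
contacts = [f"subscriber{i}@example.com" for i in range(1000)]
group_a, group_b = split_ab(contacts, seed=42)
print(len(group_a), len(group_b))  # 500 500
```

Shuffling before splitting matters: lists are often ordered by signup date, and taking the first half without shuffling would stack one group with your oldest subscribers.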
5. Measure results and analyze data
Once you’ve run your A/B test, it’s time to review your results.
Keep your hypothesis top of mind when analyzing:
- Open rate
- Click through rate
- Conversion rate
- Bounce rate
The open rate tells you who was most interested based on your email subject line, whereas the click-through rate tells you who wanted to continue to the next step of purchasing, or at least get more information.
Your conversion rate is the number of people who take your desired action and complete a goal, divided by the total number of visitors.
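To make the comparison concrete, here’s a minimal sketch that computes those rates for each version. The counts are hypothetical stand-ins for what you’d export from your email platform, and each rate here is calculated per delivered email.

```python
# Hypothetical results per version: delivered, opens, clicks, conversions.
results = {
    "A (CTA at top)": {"delivered": 500, "opens": 160, "clicks": 45, "conversions": 14},
    "B (CTA at bottom)": {"delivered": 500, "opens": 155, "clicks": 32, "conversions": 8},
}

for version, r in results.items():
    open_rate = r["opens"] / r["delivered"]
    click_rate = r["clicks"] / r["delivered"]
    conversion_rate = r["conversions"] / r["delivered"]
    print(f"{version}: open {open_rate:.1%}, click {click_rate:.1%}, "
          f"conversion {conversion_rate:.1%}")
```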
Once contacts have clicked through to your website, pay attention to your bounce rate. If you have a high bounce rate, it’s likely contacts aren’t as interested in the offer on your website or splash page.
It’s important that the message on your website or splash page is consistent with the message in your email.
A high bounce rate could also be the result of slow page load time. Walmart found that for every 1 second improvement in page load time, conversions increased by 2%.
After analyzing your data and findings, you will have a better idea of the marketing landscape you are currently operating in and which marketing tactics your audience finds most attractive.
Conclusion
A/B tests improve your bottom line. What does that mean? When you have real data and evidence proving that an input correlates with a desired outcome, you gain better insight, you can make better decisions, you can craft a more effective marketing strategy, and you can meet your customers and prospects where they are.