Do you remember those little lab notebooks you kept when you were a kid at school? You’d run experiments in class and carefully document the entire process.
First your teacher would ask a question. Then you’d do a bit of reading and come up with a hypothesis. Finally, you’d test the hypothesis by running an experiment with a control and a variable.
Well, guess what, you were A/B testing! That’s right, A/B testing isn’t a recent development. It’s been around for hundreds of years and is more commonly known as the scientific method. Unfortunately, somewhere along the way, you forgot about that process you learnt as a kid.
Today you’re either not A/B testing, or you are but you’re doing it wrong. It’s time to go back to the basics and look at some tips for A/B testing the right way.
Define your success metrics
There’s got to be a reason for running an experiment. You’re trying to find out what works and what doesn’t, so you need to define what success looks like.
It’s important to do this right up front. A common mistake is to run the experiment and then define success based on the results. That serves no purpose.
A success metric could be anything. Many people associate A/B testing with signups, but your objectives might be different. You might want to measure the number of times an article was shared, or open rates for an e-mail.
For starters, pick something that directly impacts your business. If scheduling a demo of your product brings in more revenue than signing up for a free trial, then the number of demo requests you receive is your success metric.
For example, Content Verve defined their success metric as opt-in rates for their e-book. This would generate qualified leads for their services. They created a simple landing page with information about the e-book, a cover photo, and some testimonials from industry experts.
Pick one variable
Another common mistake is to test multiple variables in the same experiment. This defeats the purpose of A/B testing because you won’t be able to tell which change caused the difference in results.
Back in school you had one control and one independent variable. The control was the baseline, and you’d change one variable and test that against the control.
The same process applies to websites. Your current page is the control, and you need to change one variable and test how that change affects your success metric.
Of course, the variable you test depends on the success metric. If success means higher e-mail open rates, it doesn’t make sense for your variable to be the color of the navigation links on your website. A better variable would be e-mail subject lines.
In the Content Verve example, their success metric was opt-in rates on the landing page. It only made sense to pick a variable to test from the landing page. They picked testimonial placement, but they could also test the image or the CTA in another experiment.
Here are some other elements you can test:
- Web or product copy
- Page layout
Create a hypothesis
Now that you’ve identified a success metric and a variable to test, you can create a hypothesis. The hypothesis is what you think will happen. It’s an assumption but the real purpose is to give the experiment a direction.
Let’s go back to e-mail open rates and subject lines. The idea is to change the subject line and see if it results in higher open rates. But what change should you make? Well, that depends on the hypothesis.
You could hypothesize that funnier subject lines lead to higher open rates. You’d then need to come up with some witty material and test it. Or you could hypothesize that subject lines starting with ‘How to’ get opened more often. In this case, you’d test ‘How to’ subject lines against your regular ones.
Back in the Content Verve experiment, the hypothesis was that placing testimonials higher on the page would increase opt-ins. They could go with a different hypothesis, like larger testimonial images increasing opt-ins, in a follow-up experiment.
Collect enough data
You’ll notice that when you start running your experiment, your results will change wildly in the first few days. On the first day option B will lead by 34% and the next day it will lag by 68%. What gives?
The problem is you don’t have enough data to make a decision. If 2 out of your first 5 visitors convert, that’s a conversion rate of 40%. If none out of the next 5 convert, your new rate is 20%.
Now let’s say you’ve already seen 1000 visitors and your conversion rate is 40%. If you get 5 more visitors and none convert, does that mean your new rate is 20%? Absolutely not! In fact, it’s almost the same at 39.8%.
The more data you collect, the more confident you can be that the difference you’re seeing is real and not random noise. Prematurely ending your test because one variation is ahead might lead you to the wrong conclusion. What looks like saving time could cost you a lot of money in the long run.
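To make the arithmetic above concrete, here’s a minimal sketch in Python (standard library only; the function names are made up for illustration) of the running conversion-rate calculation plus a two-proportion z-test, one common way to check whether the gap between two variations is statistically meaningful or just noise:

```python
import math

def conversion_rate(conversions, visitors):
    """Running conversion rate as a fraction of all visitors so far."""
    return conversions / visitors

def p_value(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided p-value from a two-proportion z-test.

    A small p-value (conventionally below 0.05) suggests the difference
    between the two conversion rates is unlikely to be random chance.
    """
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    # Pooled rate under the null hypothesis that there is no real difference
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled)
                        * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / std_err
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 5 extra non-converting visitors barely move a 1000-visitor rate:
print(conversion_rate(400, 1005))          # still roughly 0.40

# Early on, a big-looking gap can easily be noise (p stays high)...
print(p_value(40, 100, 50, 100))           # 40% vs 50% on 100 visitors each

# ...but the same gap on ten times the traffic almost certainly isn't.
print(p_value(400, 1000, 500, 1000))
```

The takeaway from the sketch is the same as the advice above: a 10-point lead on 100 visitors per variation can still be chance, while the identical lead on 1,000 visitors per variation is very unlikely to be, so let the test run until the numbers are large enough to trust.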
Never stop testing
As you can imagine, there’s no limit to the number of tests you can run. There’s always something you can improve on your site.
To make it more organized and ensure you don’t waste time on frivolous tests, identify the actions that visitors need to take on your site to maximize revenue. For example, you may have a funnel starting with an ad, then a landing page with a free product, followed by a free trial of your core offering, and finishing with a purchase.
At every stage of the funnel you can test multiple variables, and each test can have multiple hypotheses. You can run different tests at the same time as long as they are mutually exclusive. That means the outcome of one test doesn’t affect the other. This allows you to identify the variable that caused the difference in results in each test.
To better manage tests and keep track of results, use A/B testing software. Here are some popular tools:
Content Experiments – This used to be called Google Website Optimizer but now it’s part of the Google Analytics suite. It’s a free but basic tool to test success metrics on different pages. Not a bad option if you’re on a very tight budget and aren’t running complicated tests.
Optimizely – A powerful A/B testing tool that doesn’t require any coding. You install a script on your site and then use their interface to make changes by dragging and editing elements. You can start with a 30-day free trial and move on to a paid plan.
Visual Website Optimizer – Another powerful A/B testing tool that works the same way as Optimizely. They have a couple of extra features, like heat maps to help you identify where people are looking on your site. They have a 30-day free trial as well.
Unbounce – A landing page creator with A/B testing features. You wouldn’t use this to test changes on your main website, but if you need to create squeeze pages for your funnel, this is a good tool. Again, you don’t need any coding knowledge. Use their drag-and-drop interface to create pages and variations quickly.
Lead Pages – A lead generation suite that consists of a landing page creator and other dynamic elements to generate more leads. A/B testing and tracking is built into the suite and you can do everything without touching code.
All right, the theory class is over! It’s time to hit the lab and start experimenting. It’s never too early to create A/B tests. If you have an online presence, you have an opportunity to test.
In the next half hour, try to come up with at least one test and hypothesis. Then come back here and post your hypothesis in the comments section. Bookmark this page and, after you’ve run your test for a sufficient amount of time, come back again and let us know your results.