Improving conversion rates with better A/B tests

4 min read
Craig Kistler  •  Feb 11, 2016

Making impulsive changes to your product based on a gut feeling—or, worse, because something worked for your competitors—isn’t sustainable when it comes to conversion optimization.

The best way to find out why and where people are struggling with your product, pinpoint problems within your checkout flow, and systematically increase your conversion rates: A/B testing.

And when you start testing, you can’t just rely on best practices or things that worked for another company—conversion optimization with A/B testing requires a methodology and a repeatable process. Here’s how to do it.

“Running tons of usability and A/B tests at every stage of the process as well as using live products and prototypes is key.” From Inside Design: Wix.

Stop randomly picking tests

Dig into your analytics, collect feedback, and mash it together to form a data-driven hypothesis. Research, test, then analyze. Follow these steps:

Step 1: Gather data
There are 2 ways to find problems. I use both—most of the time paired together.

  1. Analytics
  2. Usability testing

Let’s quickly break each one down.

Analytics

When I want to optimize a website, I start inside analytics. It’s hard to ignore quantitative data.

To get started with this, look at where people are going on your website, what they’re doing, and where there could be potential problems.

Common questions that can be answered using analytics:

  1. Traffic
    • What traffic sources are converting well? Poorly? Why?
    • What device types are converting well? Poorly? Why?
    • What are the key pages? Bounce rates? Exit rates?
  2. Funnel
    • Where are the key drop-off points within your funnel?
    • Where are the key drop-off points within your forms?
  3. Goals
    • What’s the current conversion rate?
    • What’s been your best conversion rate?
  4. Site search
    • What phrases are people searching for?
    • What pages are people starting their search from?

Protip: Pages with high abandonment rates are a great place to start to identify conversion problems.
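
If your analytics tool can export raw visitor counts for each funnel step, a quick script makes those abandonment points easy to spot. Here’s a minimal sketch in Python; the step names and counts are hypothetical placeholders for your own export:

```python
# Sketch: surface the biggest drop-off points in a funnel.
# Step names and counts are hypothetical; substitute an export
# from your own analytics tool.

funnel = [
    ("Product page", 10_000),
    ("Shopping cart", 3_200),
    ("Checkout step 1", 1_100),
    ("Checkout step 2", 950),
    ("Confirmation", 780),
]

# Compare each step with the next to see where visitors abandon.
for (step, entered), (next_step, continued) in zip(funnel, funnel[1:]):
    drop_off = 1 - continued / entered
    print(f"{step} -> {next_step}: {drop_off:.1%} abandoned")

print(f"Overall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
```

The step with the highest abandonment rate is a strong candidate for the usability testing described next.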

After you have an idea of what your visitors are doing and where they’re going, your next step is to understand why. And usability testing can provide the answer.

Usability testing

Watching someone struggle to get through your website is one of the best ways to identify problem areas.

Now that you have data and a list of potential problem pages (funnel drop-off points, search data, etc.), the next step is to run a series of usability tests. Do usability testing before A/B testing to see points of friction where visitors are stuck, confused, or frustrated.

These areas of friction are a good place to start A/B testing. Jot them all down and save them for later.

3 things I’ll always do before any A/B test:

Browse flow: Can a visitor efficiently find a specific product? This shows how visitors choose to start (search or navigation), whether the navigation causes confusion, whether the product pages do an effective job, whether calls-to-action (CTAs) are clear, and so on.

Buy flow: When they’re ready to buy, does the flow from cart to confirmation give the visitor any problems? Is there confusion around shipping times? Too many CTAs? Are you asking too many questions?

Competitive test: Understanding what competitors are doing in your space is a great way to get ideas. I’ll run both a browse flow and buy flow test on 2-3 competitors.

At this point you should have a solid list of the problem areas on your site.

Analytics and case studies are good sources of hypothesis ideas, but you should also look at qualitative data, such as usability tests, to generate hypotheses.

Step 2: Create hypotheses

Now that you’ve identified some areas to improve, develop a working hypothesis. To have the best chance for a positive impact on your conversions, it’s important to have a strategic and well-formed hypothesis for your A/B test.

The most critical question to answer before starting any testing project: what do you hope to learn? Answer it, and you’ll be able to clearly identify your goals—and you’ll have a head start on how you’ll use the results.

Building a hypothesis doesn’t need to be hard. To get started, you need to understand 2 things:

  1. Your goal—what are you measuring? For example, this could be tied to conversion.
  2. The problem you’re trying to solve with the test

After you’ve outlined the problem, you can build your hypothesis.

Your hypothesis should include a proposed solution to the problem and the result you are hoping to achieve.

Let’s look at a quick example.

You run an ecommerce site. After looking through your analytics, you’ve noticed a large drop-off between the shopping cart page and the first step of the checkout flow.

You’ve run a couple of usability tests and noticed that potential customers are getting tripped up by the amount of information they must take in before moving forward.

You’re hoping to increase the number of people making it to the next step of the checkout process by reducing some of the information shown.  

Your hypothesis could read something like:

“By reducing the number of decisions a person has to make at this point in the process, I can increase the number of visitors who continue on to the next step.”
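
If it helps to keep hypotheses consistent across a team, you can force each one into a fixed structure so the problem, proposed change, and success metric are never left implicit. Here’s a minimal sketch in Python; the Hypothesis class and its fields are just an illustration, filled in from the example above:

```python
# Sketch: a fixed structure that makes every hypothesis name its
# problem, proposed change, and measurable outcome. The class and
# field names are illustrative, not from any testing tool.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    problem: str          # what analytics and usability tests showed
    proposed_change: str  # the solution you want to test
    metric: str           # the one KPI that decides the test
    expected_result: str  # the outcome that would prove the hypothesis

checkout_test = Hypothesis(
    problem="Large drop-off between the cart and checkout step 1",
    proposed_change="Reduce the information shown before checkout",
    metric="Cart-to-checkout continuation rate",
    expected_result="More visitors continue to the next step",
)
```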

Step 3: Prioritize

It’s impossible to test all of these hypotheses at once. It’s time to prioritize.

I’m sure there are plenty of ways to go about this, but I’ve found scoring ideas across 3 factors to be most useful.

1. Potential impact

How much impact will the change have on the business? For example, running a headline test on an About Us page probably won’t have much impact, but testing a headline on a product page carries more weight.

2. Importance

A good way to gauge the importance of a page: look at the value of the traffic coming to it. The shopping cart page, for example, is typically more important than the home page.

3. Ease

Ease can be looked at a couple of ways. Is there going to be a technical challenge in running the test? Or is there going to be a lot of internal fighting about a specific test idea? Either case will increase the difficulty of pulling off the test.

Start off by listing out each test idea.

Then for each idea give a score for each factor. Use a 1 to 5 scale, with 1 meaning low and 5 meaning high.

Multiply the three factors together to get a total score. The higher the score, the higher the priority.

Protip: If you have multiple stakeholders involved, have each stakeholder prioritize the testing list separately then average the scores.
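
To make the scoring concrete, here’s a minimal sketch that combines both ideas: each stakeholder scores every test idea from 1 to 5 on impact, importance, and ease; the three factors are multiplied; and the stakeholders’ totals are averaged. The test ideas and scores below are hypothetical:

```python
# Sketch: prioritize test ideas by multiplying 1-5 scores for
# impact, importance, and ease, then averaging across stakeholders.
from statistics import mean

# scores[idea] = one (impact, importance, ease) tuple per stakeholder
scores = {
    "Simplify cart-to-checkout step": [(5, 5, 3), (4, 5, 4)],
    "Rewrite product page headline":  [(3, 4, 5), (4, 3, 5)],
    "Test About Us page headline":    [(1, 1, 5), (2, 1, 5)],
}

def total(impact: int, importance: int, ease: int) -> int:
    # Multiplying (rather than adding) punishes ideas that are
    # weak on any single factor.
    return impact * importance * ease

ranked = sorted(
    ((mean(total(*s) for s in per_stakeholder), idea)
     for idea, per_stakeholder in scores.items()),
    reverse=True,
)

for score, idea in ranked:
    print(f"{score:6.1f}  {idea}")
```

Run the highest-scoring idea first, then work down the list.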

Step 4: Run your tests

The goal of this article isn’t to provide detailed instructions on running your tests. There are a ton of quality tools that make it fairly easy to run tests.
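
That said, it helps to understand what those tools are doing under the hood: at the core, each visitor is deterministically assigned to a variant so they see the same version on every visit. Here’s a minimal sketch of that idea; the experiment name and visitor IDs are made up, and real tools layer targeting, traffic ramping, and reporting on top:

```python
# Sketch: deterministic 50/50 assignment of visitors to variants.
# Hashing the visitor ID means the same visitor always lands in
# the same bucket, with no state to store.
import hashlib

def assign_variant(visitor_id: str, experiment: str = "checkout-info-test") -> str:
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

for visitor in ("user-101", "user-102", "user-103"):
    print(visitor, assign_variant(visitor))
```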

Step 5: Analyze and repeat the process

So far you’ve identified pages with problems, written and prioritized your hypotheses, and conducted some tests. The final step in the process: analyze the test data.

Judging a test comes down to one question: did the test prove or disprove your hypothesis?
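
Before you answer it, make sure the difference you’re seeing is statistically meaningful rather than noise. A common approach is a two-proportion z-test; here’s a minimal sketch using only the standard library, with hypothetical visitor and conversion counts:

```python
# Sketch: two-proportion z-test for an A/B test.
# The counts are hypothetical; plug in your own numbers.
from math import sqrt, erf

def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert z to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p = p_value(conv_a=780, n_a=10_000, conv_b=864, n_b=10_000)
print(f"p-value: {p:.3f}")
```

A p-value below your threshold (0.05 is typical) suggests the difference is real; anything above it means you can’t distinguish the result from chance.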

There are 3 outcomes:

1. The test failed

Most tests fail, but it doesn’t mean the test was worthless. There’s something to be learned from a failing test. This is why the hypothesis is so critical.

For example, if you run a sloppy A/B test that tests something like “We want to test a blue CTA button because XYZ company has one,” it’ll be difficult to learn from because the hypothesis is so general and nothing specific is being measured. However, if the hypothesis were tweaked to something like “Visitors appear to be struggling to find the current CTA. By changing only the button color from grey to blue (not size or placement), we’ll increase our current add-to-cart rate,” it’d allow you to go back and see if the test changed the specific KPI you’re measuring.

When you have a failed test, don’t immediately throw it away. Take a look at what caused the failure, reevaluate your original hypothesis, then test again.

2. The test didn’t do anything

This is similar to tests that fail—you should still learn something from flatlines.

This is a common result. If your variation wasn’t much different from the control, it may mean you’ve focused on something too small, like changing a single word in a product description or headline, instead of looking at something larger (Do visitors read my product description? What happens if I remove it?).

When this happens, go back to the list of problems you’ve noted and test the issues that’ll have a bigger impact.

3. The test won

Congratulations! Your hard work produced a winning test! Make the change to your website, and move on to the next problem from your list.

Protip: You should be learning something with every A/B test—winners, losers, and flatlines.

Conclusion

A/B testing is a continuous process designed to keep improving your site. Even if you believe your conversion rate is high, there are still areas of your site that can be improved. And when you have a solid framework, it’s much easier to make those improvements.

Relying on gut instinct and copying others is expensive and unsustainable. Having a solid framework allows you to produce repeatable, testable hypotheses that will lead to increased sales and an overall better experience for your customers.