What is A/B Testing?

An A/B test, also known as a split test, is an experiment that determines which of two or more variations of an online experience performs better by showing each version to users at random and analyzing the results. A/B testing demonstrates the effect of a potential change before it is rolled out, enabling data-driven decisions and helping ensure that changes have a positive impact.

Kyle Rush, VP of Engineering at Casper and former Head of Optimization at Optimizely, used A/B testing to increase donation conversions by 49% when he worked for the 2012 Obama for America campaign, helping the Digital team raise $250 million in 6 months. “If you aren’t testing, you don’t know how effective your changes are,” Rush says. “You might have correlation but no causation.”

A/B testing can do a lot more than prove how changes can impact your conversions in the short-term. “It helps you prioritize what to do in the future,” Rush says. “If you have 20 items on your product roadmap and you want a definitive answer as to what will move the needle, you need data. If there’s a feature that is very similar to a feature that did not work in testing, don’t go forward with it.”

“With A/B testing, you have so much more knowledge for what to roll out 3-to-4 years out.”

Used continuously and consistently, testing can improve your users’ overall experience, increasing your conversion rates both in the short term and over time.

Benefits of A/B Testing

1. Improved user engagement

Elements of a page, app, ad, or email that can be A/B tested include the headline or subject line, imagery, call-to-action (CTA) forms and language, layout, fonts, and colors, among others. Testing one change at a time shows which changes affected users’ behavior and which did not. Updating the experience with the “winning” changes improves the user experience in aggregate, ultimately optimizing it for success.

2. Improved content

Testing ad copy, for instance, requires a list of potential improvements to show users. The very process of creating, considering, and evaluating these lists winnows out ineffective language and makes the final versions better for users.

3. Reduced bounce rates

A/B testing points to the combination of elements that keeps visitors on a site or in an app longer. The more time visitors spend there, the more likely they are to discover the value of the content, ultimately leading to a conversion.

4. Increased conversion rates

A/B testing is the simplest and most effective means to determine the best content to convert visits into sign-ups and purchases. Knowing what works and what doesn’t helps convert more leads.

5. Higher conversion values

The learnings from A/B tests that succeed on one experience can be applied to additional experiences, including pages for higher-priced products and services. Better engagement on these pages can deliver similar lifts in conversions.

“We use A/B testing before making any major changes at Casper,” Rush says. “We A/B tested components of a new e-commerce experience before we launched it with our new, premium mattress. The premium mattress is a huge change to our business model, so testing was important to make sure the launch went well.”

6. Ease of analysis

Determining the winner and loser of an A/B test is straightforward: whichever version’s metrics come closer to its goals (time spent, conversions, and so on) wins.

“In the past A/B testing metrics were just raw numbers and you’d have to interpret them to make a decision,” Rush says. “Now services like Optimizely make the decisions for you with stats engines following best practices.”

And while testing services have evolved to handle the statistical analysis for users of any level of spreadsheet expertise, the math behind a comparison of two experiences is not especially complex. The clarity of these stats also undercuts the highest-paid person’s opinion (HiPPO), which might otherwise carry too much weight.
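To see what that analysis involves under the hood, here is a minimal sketch (not the method of any particular service) of a two-proportion z-test on hypothetical conversion counts; the function name and the numbers are illustrative only.

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Compare two conversion rates with a two-sided two-proportion z-test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally well
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 480 of 10,000 visitors converted on A, 560 of 10,000 on B
p_a, p_b, z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```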

7. Quick results

Even a relatively small sample size in an A/B test can provide significant, actionable results as to which changes are most engaging for users. This allows for short-order optimization of new sites, new apps, and low-converting pages.
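As a rough illustration of how far a modest sample can go, the sketch below uses a standard approximation to estimate the visitors needed per variant; the baseline rate and target lift are made-up numbers, not benchmarks.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative lift in conversion rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for a two-sided 95% test
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical: 5% baseline conversion rate, aiming to detect a 20% relative lift
print(sample_size_per_variant(0.05, 0.20))  # roughly 8,000 visitors per variant
```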

8. Everything is testable

Forms, images, and text are typical items for A/B testing and updating, but any element of a page or app can be tweaked and tested. Headline styling, CTA button colors, form length, etc., can all affect user engagement and conversion rates in ways that may never be known if they’re not tested. No idea need be rejected on a conference call; testing and metrics, not emotions, prove what works and what doesn’t.

9. Reduced risks

A/B testing helps you avoid committing to costly, time-intensive changes that testing proves ineffective. Major decisions can be well informed, avoiding mistakes that would otherwise tie up resources for minimal or negative gain.

“The most obvious way to use A/B testing is to use it to rule something out,” Rush says. “If you see that making a change could decrease conversions, don’t move forward with it.”

10. Reduced cart abandonment

For e-commerce, getting a user to follow through with checkout after clicking “buy” on an item is a significant challenge, as most potential customers abandon their carts before paying. A/B testing can help find the optimal combination of tweaks to the order pages that will get users to the finish.

“The user experience between checkout and entering a shipping address is the best place to focus on with A/B testing,” Rush says.

11. Increased sales

Any and all of the above-mentioned A/B testing benefits serve to increase sales volume. Beyond the initial boost from optimized changes, testing produces better user experiences, which in turn breed trust in the brand, creating loyal, repeat customers and, therefore, increased sales.

How do you perform an A/B test?

1. Go for big easy

Inculcate a culture of experimentation by A/B testing user experience elements that are easy to change but still have big potential impacts. Test changes to optimize your landing page headline and CTAs, adjusting the language to determine the best messaging. A headline change could net a 100% or better increase in conversions. With this quick win, you’ll be enabled and emboldened to do additional testing.

“Get your feet wet and see how it goes,” Rush says. “Keep doing it until you pick it up. The more you do it, the better you get at it. It’s not magic.”

2. Find your sore spots

When considering what to test, look at your sales funnel to determine where you’re losing potential conversions.

“Figure out what the conversion rate is at each stage of the funnel,” Rush says. “Focus on where the biggest drop-off occurs. Optimizing that will make for the biggest impact.”
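As a sketch of that funnel analysis, the snippet below computes stage-to-stage conversion rates for a hypothetical checkout funnel and flags the biggest drop-off; the stage names and visitor counts are invented for the example.

```python
# Hypothetical visitor counts at each stage of a checkout funnel
funnel = [
    ("landing page", 50_000),
    ("product page", 20_000),
    ("cart", 6_000),
    ("checkout", 2_400),
    ("purchase", 1_800),
]

# Conversion rate from each stage to the next; the stage with the lowest
# step conversion is the biggest drop-off and the prime candidate for testing.
for (stage, visitors), (next_stage, next_visitors) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_visitors / visitors:.1%}")

worst = min(zip(funnel, funnel[1:]), key=lambda step: step[1][1] / step[0][1])
print(f"Biggest drop-off: {worst[0][0]} -> {worst[1][0]}")
```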

3. Test changes only where changes are needed

If it ain’t broke, don’t test changing it.

“If a page has a conversion rate of 99%, you don’t have a problem,” Rush says. “But if you open it up to testing, people will find things to nitpick based on their preferences, such as the color. But testing for that is a waste of time.” 

4. Make A and B significantly different

A/B testing can validate a decision to make a change or to keep things as they are, but only if the proposed update is noticeably different from the original (though still within site style guidelines).

“Adding a comma isn’t going to make a difference,” Rush says. “Pick something that is completely different, not just another way to say the same thing. It should be high-contrast but still within brand. If it’s too much contrast, you can always dial it down.”

5. Get ideas from everyone

The hardest part of A/B testing is coming up with what to change and how to change it. When it comes to user experience, it’s important to think, and look, outside the box for ideas.

“High-level ideas for changes can come from anywhere,” Rush says. “Not from brainstorming sessions, which may be a waste of time. Ideas have come from my mom based on my asking her, ‘What do you think about this?’ She has a unique perspective, which can be helpful.”

6. Control for time

To prove causality, an A/B test needs control variables, the elements that are kept the same throughout the experiment. One variable to control for is time, i.e., the period during which the test is run. Don’t run tests sequentially: the period for an A/B test must be the same for both the “A” and “B” variants so that the user base seeing each version is comparable.
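One common way to keep both variants running over the same period is to bucket each visitor deterministically, for example by hashing a user ID; the sketch below illustrates that idea and is not a description of how any particular testing service assigns traffic.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-headline") -> str:
    """Bucket a visitor into 'A' or 'B' deterministically, so both variants run over
    the same period and a returning visitor always sees the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 100 < 50 else "B"  # 50/50 split

print(assign_variant("visitor-42"), assign_variant("visitor-42"))  # same variant both times
```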

7. Run tests in week-long increments

Unless you plan to serve different versions of your page or app on weekends, your A/B test should run long enough to capture at least a full week of traffic. This ensures accurate overall results that account for dips and spikes tied to the day of the week and time of day.
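As a back-of-the-envelope check, the sketch below estimates how long such a test should run once you know the sample you need per variant; the traffic figures are hypothetical.

```python
import math

def test_duration_in_weeks(sample_per_variant, daily_visitors, variants=2):
    """Days needed to reach the required sample, rounded up to whole weeks so the
    test covers every day of the week at least once."""
    days = math.ceil(variants * sample_per_variant / daily_visitors)
    return max(1, math.ceil(days / 7))

# Hypothetical: ~8,000 visitors needed per variant, ~1,500 visitors a day
print(test_duration_in_weeks(8_000, 1_500))  # 11 days, rounded up to 2 weeks
```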

8. Always be innovating

A/B testing can optimize your site or app through incremental improvements that yield quick, positive results. But don’t let short-term successes supersede real innovation, which requires risk-taking and possible failure but also has the greatest potential for even richer rewards.