Five Reasons A/B Testing Won’t Work

When Not to A/B Test

By Steven Shyne

We love experimentation. And the darling of them all is A/B Testing (or A/B/n Testing if you're a little fancy). But for some clients, we encourage them NOT to conduct A/B Testing – or any other controlled website experimentation, for that matter.

Yeah yeah, we know, this sounds a bit sensational, especially coming from a CX consultancy that supports brands with conversion optimization and experimentation services. However, we aren't in the business of running one-off, forever-running, or failed-from-the-start experiments. We want to help companies grow and win through strategic, best-in-class experimentation. Before you can hope for that, here are the five main hurdles your company or team will have to clear:

       1. Threshold Traffic - This is the easiest “hard stop” when it comes to testing. If your site doesn’t have enough traffic (and this usually means unique visitors), then A/B and Multivariate Testing probably isn’t right for you. How much traffic, you say? Well, it depends a little on some of the other answers below, but generally we tell our clients no less than 20K monthly unique visitors (MUVs). Of course, there are exceptions both ways – some may get away with less; others may require more.

       2. Threshold Conversions - Let’s say you do have traffic, but little to no conversions on your page. Well, that’s going to be a no for testing also. Know that conversions in this sense aren’t just ecommerce sales (although they could be); conversions as they pertain to testing are the actions we are measuring that are tied to a test goal. Regardless of traffic, certain pages like help articles, blog posts, or support pages where there’s no action (or no measurable on-page action) are usually poor candidates for testing.

       3. Lacking Time (or Patience) - If you are telling your team or agency that you only have a week to run a test, then testing might not be for you. We’re not saying you can’t make a win within a week, but sound testing requires some amount of time (read: patience). How much? Again, it depends, but typically our recommendation is letting a test run for a minimum of two business cycles (often two weeks) to ensure we are smoothing out the differences among weekday/weekend user behavior.

       4. Lacking Process - Let’s say you’ve jumped all the previously mentioned hurdles, but your idea of running tests is making changes on a whim, or worse yet, you’re calling tests a “win” before hitting statistical significance. In this case, we may need to have a real talk about real process. Defining your hypotheses is critically important to help you understand what you’re testing, why you’re testing it, and what to do at the end of your test. Plus, having a baseline understanding of statistics will help you make informed, data-backed decisions.

       5. Lacking Resources - Okay, so you’ve got it all – traffic, conversions, time to insights, and process... But then you find out that you don’t have a developer who can implement the change until next year. Shit - you just wasted time and resources for nothing, my friend. Yes, you’ve learned something, and yes, there are ways to “hack” the implementation of this win through a testing tool, but that’s not recommended for long-term changes.
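To put rough numbers on the traffic, conversion, and statistics hurdles above, here’s a quick back-of-the-napkin sketch using a standard two-proportion z-test sample-size approximation. The baseline conversion rate and lift below are made-up illustration values, not client benchmarks – the point is simply that small lifts on low-conversion pages demand far more visitors than people expect:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Rough per-variant sample size for a two-sided two-proportion z-test.

    Standard normal-approximation formula; treat the result as a
    ballpark figure, not a guarantee of significance.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical example: 3% baseline conversion rate, hoping to detect
# a 10% relative lift (3.0% -> 3.3%) at 95% confidence and 80% power.
n = sample_size_per_variant(0.03, 0.10)
print(f"~{round(n):,} visitors needed per variant")
```

Run the numbers on your own baseline rate and you’ll quickly see why 20K monthly unique visitors is a floor, not a target: detecting a modest lift on a low-converting page can require tens of thousands of visitors per variant, which also explains why a test often needs those two full business cycles to finish.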

So You’re Not Ready to Run A/B Tests (Yet). What CAN You Do?

After reviewing the above, if you tick some or all of the boxes, here are some things you can focus on instead of website experimentation:

       1. Traditional Analytics to see what people are doing and where, so you can identify parts of the website/app funnel that need further investigation

       2. User Analytics by way of heatmaps, session recordings, and other observational insights to help find bugs and breakdowns in your UX 

       3. Usability Testing to gather real feedback from real users about what they are doing on your site and why, so that you can make customer-informed updates

       4. Low/No-Risk Implementations like clearer copy, better product images, or improved shipping options, which may be worth pushing directly to the site rather than putting into a testing queue

       5. Traffic Acquisition like earned and paid media to pump up your traffic and conversion numbers, so you can circle back to A/B testing and optimize those journeys once you’ve hit the thresholds

If you’re reading this article and you’ve got the traffic, conversions, patience, and resources, but you lack a plan or don’t even know where to start, let’s have a chat: