Published on 9/16/2020
Whether you work in growth engineering/hacking, marketing, or simply want to get more insights into feature usage, AB testing can help you make data-driven decisions that are based on actual user behavior.
But AB testing tools are not one size fits all, and surprisingly (to me), these tools can be quite expensive.
About a year ago, I was tasked with replacing our homegrown AB testing framework, so I set out to evaluate several of the tools out there: Optimizely, Slice, VWO, and Convertize.
The pricing for these tools ranged from $150,000 a year (not a joke!) to $200 a month. But what you get in each platform is, of course, widely different.
I wanted to distill some of my findings into what makes a good AB testing platform that is worth the money.
The cheapest AB testing platform I found was $200 a month. I ran a test with it, and it was easy enough to set up.
A few days after I started the experiment, I got an alert saying we had a winner. My Product Manager thought it was too soon to have achieved statistical significance -- or stat sig, if you're cool... (Ok, no one says that.)
We checked the tool, and sure enough, it had no statistical significance data at all, which is something other tools do offer in their analysis.
A good platform should show you how many days it will take to reach stat sig based on your traffic (and how you're pacing towards it). If the tool you're evaluating doesn't provide that, it's not worth the money.
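To make that concrete, here's a rough sketch of the math those pacing estimates are based on. This is my own illustration (not any vendor's formula), using the standard two-proportion sample-size approximation with 95% confidence and 80% power; the function names and the example traffic numbers are made up:

```typescript
// Rough sample-size estimate for a two-variant AB test, using the normal
// approximation for comparing two conversion rates.
function requiredSampleSize(baseRate: number, targetRate: number): number {
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const zSum = zAlpha + zBeta;
  // Combined variance of the two Bernoulli conversion rates.
  const variance =
    baseRate * (1 - baseRate) + targetRate * (1 - targetRate);
  const effect = targetRate - baseRate;
  // Users needed *per variant* to reliably detect the lift.
  return Math.ceil((zSum * zSum * variance) / (effect * effect));
}

// Days to reach that sample size, with daily traffic split across 2 variants.
function daysToSignificance(perVariant: number, dailyVisitors: number): number {
  return Math.ceil((2 * perVariant) / dailyVisitors);
}

// Example: detecting a lift from 5% to 6% conversion at 1,000 visitors/day.
const n = requiredSampleSize(0.05, 0.06);
const days = daysToSignificance(n, 1000);
```

With those example numbers you'd need roughly 8,000+ users per variant, or a couple of weeks of traffic -- which is exactly why "we have a winner" after a few days should make you suspicious.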
This is a question you really want an answer to before you start looking for the perfect AB testing tool.
If you're looking to run simple experiments where you will change the color of a button, or change some text on a page, that is a very different need than if you want to radically change the look and feel of a page.
Are mostly non-technical team members going to be setting up and running the experiments, or are the experiments meant to be set up and run by an engineer?
If you're looking to run simple experiments that can be implemented with a WYSIWYG editor, and they're mostly meant to be set up and run by non-technical folks, Optimizely X is a great option.
If you're looking to run more complicated experiments, perhaps on a more full-stack basis, Optimizely Full Stack or Slice are good options.
That said, both of those options are quite costly. So if you're on a budget like I was, don't forget about Google Optimize! Which leads me to my next point.
When I set out to evaluate AB testing platforms, we were just getting ramped up again with AB tests, so we didn't have a well-planned roadmap of experiments.
The tools we had evaluated that we really liked were quite costly ($150k a year sticker shock for a startup is real).
Before spending that much per year on a platform, it made more sense for us to start with simple experiments on something we could get for free.
Enter Google Optimize.
We were already using Google Analytics for web analytics. Google Optimize is free and it integrates beautifully with Google Analytics.
Google Optimize lets you set up simple experiments with its WYSIWYG editor, and you can even tie experiments to any targets or goals you have set up on the Google Analytics side.
The mistake we made when trying to ramp up our AB testing efforts was jumping on something too quickly with too many unknowns.
I really recommend a more iterative approach where you start running small experiments with free tools, like Google Optimize, until you start getting more confidence in the experiments you're running.
This will give you a better idea of what to look for in a more robust tool, and what type of experiments you could run with them.
Netlify Split Testing is currently in Beta. It allows you to run a split test based on different Git branches. You can send data to Google Analytics or Segment.io (and if you don't use Segment.io, let me tell you, it's awesome. Just check it out.)
I'm also a fan of Heap Analytics custom events, which allow you to pretty much send any data you want along with user events, such as clicks or form submissions.
You could pair Heap with something like Netlify's Split Testing to set up a "custom" budget-friendly AB testing setup, where you could push experiment data to Heap using their track API.
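As a sketch of how that pairing could work: Netlify's split testing pins each visitor to a branch via the `nf_ab` cookie, and Heap's `heap.track` call accepts a custom event name plus arbitrary properties. The helper names below are my own, and this is a minimal illustration rather than a full setup:

```typescript
// Heap's client is loaded globally by its snippet; declare the one call we use.
declare const heap: {
  track: (event: string, properties?: Record<string, string>) => void;
};

// Pull the assigned branch out of a cookie header string. Netlify's split
// testing pins each visitor to a branch via the "nf_ab" cookie.
function branchFromCookies(cookieHeader: string): string | null {
  const match = cookieHeader.match(/(?:^|;\s*)nf_ab=([^;]+)/);
  return match ? decodeURIComponent(match[1]) : null;
}

// Send the assignment to Heap as a custom event, so conversions can later be
// broken down by branch in Heap's analysis views.
function reportExperimentBranch(
  experimentName: string,
  cookieHeader: string
): void {
  const branch = branchFromCookies(cookieHeader) || "unknown";
  heap.track("Experiment Viewed", { experiment: experimentName, branch });
}
```

You'd call something like `reportExperimentBranch("homepage-redesign", document.cookie)` once on page load; from there, any other Heap events (clicks, form submissions) can be segmented by which branch the visitor saw.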