Putting A/B Tests to the Test: 4 Pitfalls of Testing

Website testing can be one of the most fruitful website optimization tactics in your arsenal. But it can also be a huge waste of time. Below are 4 pitfalls I’ve fallen into at one time or another.

1. Testing “Everything”

A common axiom in website optimization is to “test everything.” The problem with this advice is that it ignores the reality we all find ourselves in: we have limited time and resources. If you have to choose between testing your checkout process and testing a different coloured add-to-cart button, which do you think will provide legitimate long-term value to your visitors?

Quite frankly, not everything is worth testing. Yes, you might eke out a slight improvement to your add-to-cart rate with a flashy new button, but how will that translate to profit and lifetime customer value? My advice is to skip the gimmicks and test stuff that matters.

2. Focusing Entirely on Conversion Rate

In the beginning I simply used Google Website Optimizer (GWO) to test conversion rate improvements. Then I began tracking a second variable alongside it: the average order value. I suddenly realized that Google lies. It tells me I have a winner, but when I factor in the average order, I don’t. In other words, one variation may have had a higher conversion rate, but the other had a higher average order value, resulting in a wash in revenue, or sometimes even the opposite result.
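To make that concrete, here’s a minimal sketch with made-up numbers showing how the variation that “wins” on conversion rate can still lose on revenue per visitor:

```python
# Hypothetical numbers for illustration only -- not from a real test.
visitors = 10_000  # visitors sent to each variation

# Variation A: higher conversion rate, lower average order value
a_conversion_rate = 0.040   # 4.0%
a_avg_order_value = 50.00

# Variation B: lower conversion rate, higher average order value
b_conversion_rate = 0.035   # 3.5%
b_avg_order_value = 60.00

a_revenue = visitors * a_conversion_rate * a_avg_order_value  # $20,000
b_revenue = visitors * b_conversion_rate * b_avg_order_value  # $21,000

print(f"A: ${a_revenue:,.2f}  B: ${b_revenue:,.2f}")
# A "wins" on conversion rate, yet B brings in more actual revenue.
```

A conversion-rate-only report would crown variation A; the bank account says otherwise.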

Google only shows conversion rate for one simple reason: adding another variable makes their product (GWO) much more difficult and time-consuming to use. When you test 2 things, conversion rate and average order value, a test that would normally take 2 weeks might take 2 months. Most webmasters don’t have that kind of patience.
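Part of the slowdown is statistical, not just a tooling issue: revenue per visitor is a far noisier metric than a simple convert/no-convert flag, so it takes more traffic to call a winner. Here’s a rough back-of-envelope sketch, with every number assumed purely for illustration:

```python
# Approximate visitors needed per variation for a two-sided test at
# 95% confidence and 80% power: n ~ 2 * (1.96 + 0.84)^2 * variance / delta^2
# All numbers below are assumptions for illustration only.

def sample_size(variance, delta):
    return 2 * (1.96 + 0.84) ** 2 * variance / delta ** 2

# Conversion rate test: baseline 4%, detecting a lift to 4.5%.
p = 0.04
print(round(sample_size(p * (1 - p), 0.005)))  # ~24,000 visitors

# Revenue-per-visitor test: same 4% conversion, $50 average order
# (so ~$2 revenue per visitor), order values spread with a $60
# standard deviation, detecting the equivalent 12.5% lift ($0.25).
aov_mean, aov_std = 50.0, 60.0
variance = p * (aov_std**2 + aov_mean**2) - (p * aov_mean) ** 2
print(round(sample_size(variance, 0.25)))      # ~60,000 visitors
```

In this toy example the revenue test needs roughly two and a half times the traffic for the same sensitivity, which is why a 2-week test can easily stretch into months.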

Ultimately, profit pays the bills, not conversion rate. Your bank or investors don’t care what GWO says; they care about profit. If you’re not currently testing with both average order value and conversion rate, work with your web developer on creating a more accurate testing environment.
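What that environment looks like will vary, but one approach is to compare revenue per visitor directly and let resampling tell you how confident to be. Here’s a minimal sketch, assuming you can export each variation’s order totals and visitor counts from your own analytics:

```python
import random

def bootstrap_b_win_rate(orders_a, orders_b, visitors_a, visitors_b,
                         iterations=1000):
    """Estimate how often variation B beats A on revenue per visitor.

    orders_a / orders_b are lists of order totals; visitors who
    didn't convert are counted as $0 orders.
    """
    sample_a = orders_a + [0.0] * (visitors_a - len(orders_a))
    sample_b = orders_b + [0.0] * (visitors_b - len(orders_b))

    b_wins = 0
    for _ in range(iterations):
        # Resample each variation's visitors with replacement and
        # compare mean revenue per visitor.
        mean_a = sum(random.choices(sample_a, k=len(sample_a))) / len(sample_a)
        mean_b = sum(random.choices(sample_b, k=len(sample_b))) / len(sample_b)
        if mean_b > mean_a:
            b_wins += 1
    return b_wins / iterations
```

If B wins in, say, 95%+ of the resamples, you have a winner on the metric that actually pays the bills; anywhere near 50% and the test is a wash, no matter what the conversion rate column says.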

3. Bias Towards One Variation

Let’s be honest. We all have a preference for which variation in our test we’d like to win. Most likely, one of them represents the status quo, and one represents weeks or months of hard work to bring to fruition. We always have a favourite. And this is the problem. Sometimes we test our new pet feature over and over, believing that in the end, our view will be justified.

Ask yourself, will you and your team be able to accept the results of this test? If not, then don’t waste your time testing.

4. Testing Short-Term Tweaks Without Regard to Long-Term Effects

I recently ran a test for a company I work with. The site had recently begun contributing a significant part of its revenue to a charity it partners with. As a result, they redesigned the homepage to clearly communicate the change. The new version of the homepage explained that each time their particular service was used, a significant portion of the profits was donated to the charity.

Within 2 weeks, the old version was proven the winner by a slight margin. There’s just one problem: that just wouldn’t do. This company was dedicated to communicating the social good their business was providing. They believed that, long term, this change would better set them apart in the minds of their customers.

Some companies just don’t test. Ever. They make a decision that fits their brand, and they stick with it. And there’s nothing wrong with that strategy. I personally land somewhere in the middle. Lots of things are worth testing, but I believe there are times when brands should stick with who they are, regardless of short-term test results.