
Why you aren’t growing through CRO


We’ve all been there before. Well, if you haven’t, I’ll admit that I have. We’ve made a tonne of improvements to a website through methodical, insight-driven testing, and yet when the winning variations are implemented there’s no dramatic improvement. In some instances, there’s no improvement whatsoever. It’s frustrating, but it happens.


Why does this occur?

Vanity CRO, that’s what.

The tests are working and you’re showing your boss these incredible uplifts. He’s happy with your performance and you’re feeling pretty smug. But three months down the line, questions start to be asked about why that “guaranteed” uplift you highlighted isn’t materialising. Is it seasonal? Has a new competitor entered the market? Or is it some other variable you’re desperately trying to understand?

I’ve broken down some top-level reasons why A/B tests may show good results in isolation yet, cumulatively, in a real-world environment where tests are not independent, deliver little or nothing.

1. You’re only testing tweaks

André Morys, CEO of optimisation agency Web Arts, says if you’re not affecting behaviour you’re not testing.

I would bundle ‘perception’ into this as well. We need to keep making the argument that if you’re not affecting perception or behaviour, you’re not testing. Too often we test to tweak, changing elements that affect neither of these two variables.

As a result, you might see a 2% uplift here or a 5% uplift there, but if you’re not affecting the user’s behavioural patterns (or the perception that ultimately drives behaviour), those wins can cumulatively provide little to no uplift in a real-world environment.

Ask yourself with each test: does this experiment affect my users’ perception or behaviour?

2. Lack of genuine user insight

“We can’t do effective A/B testing unless we’re testing the right things. This requires data interpretation. The gold lies in the interpretation.” (Brian Massey, Conversion Scientist)

Are you testing an impactful attribute or parameter based on a genuine user insight?

For example, if we find that buyers are 5x more likely to use the search bar, is that really because search increases the propensity to purchase? Or is it just because buyers who already know what they want, and are primed to purchase, use the search bar to go straight to the product? From the first assumption you might decide to test increasing the prominence of search over the primary navigation, even though the insight was never causal.

Both pre- and post-validation of any insight you generate is vital when testing your hypothesis. Continually asking “why” will ensure that you are backing up insight with data and accruing genuine user insight.

3. Poor quality hypothesis

If the insight is poor, the hypothesis is often false. It’s easy to hypothesise based on a false premise or a fallacious insight, only to learn from the failed test and then see a huge uplift in the following iteration.

Here’s a simple structure for testing hypotheses, developed by Craig Sullivan:

  1. Because we saw (data/feedback)
  2. We expect that (change) will cause (impact)
  3. We’ll measure this using (data metric)
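To make that concrete, a filled-in hypothesis might read something like this (the scenario is purely illustrative): because we saw in analytics and session replays that a large share of mobile visitors abandon at the delivery-cost step, we expect that showing delivery costs on the product page will reduce checkout abandonment; we’ll measure this using checkout completion rate.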

Brooks Bell talks about keeping hypotheses “tweet style”, within 140 characters, to keep them concise. However, it’s not the length so much as the insight that’s key to a quality hypothesis. Ask the question above: would the recommendation genuinely affect user perception or behaviour?

Your hypothesis should also work in reverse, i.e. if adding an element to a page increases conversions, then taking it away should decrease them.

You need to be aware of your own bias too. As conversion optimisers we have to remain as objective as possible, purging our hypotheses of all emotion. As much as I love a cool animation, using one to spice up an add-to-basket task doesn’t guarantee that users will view their basket more or that it will result in more conversions. It might, but not just because you like it.

Michael Aagaard talks about this in more detail in a cool little case study where he tested the anti-button copy “don’t click this”.

4. Lack of methodology

Without process we are but apes.

Conversion rate optimisation is a concept, just like a political belief or a management style, and every concept needs structure. Plan by asking: how are we going to approach x? How are we collecting and measuring data?

When you see descriptions of what conversion rate optimisation is, you’ll almost always see words like “structure”, “system” or “process”. We’re methodical and plan from start to finish.

To follow this approach, gather your tools, know how long it’s going to take, create a timeline, set the metrics that are important to you and know what your goals are.

5. The majority of the investment is in the tool

This isn’t self-promotion, I swear… however, we do preach the use of conversion optimisation agencies or dedicated teams when undertaking a CRO program, because a dedicated resource can focus predominantly on your conversion optimisation efforts.

Companies outsource for one of two reasons: capacity and/or expertise. Let me ask: how much time do you spend thinking about optimisation each day? (The most common answer I get is about 30 seconds.) An optimisation team thinks about it all day, every day, and has the focus and expertise to facilitate, champion and spearhead your efforts.

Here’s a situation we see all too often: companies spending a lot on the optimisation tool without considering the strategy or the resource needed to use it. The result is a failed attempt to optimise the user experience, and companies fall into what we call ‘popcorn testing’ (testing randomly, without thought or process).

6. Silo testing where 2 + 2 = 3

Silo testing means running several experiments independently of each other. What happens when those experiments are combined? Do they negate each other? Does one behaviour nullify or contradict another?

An example might be when you’ve implemented category boxes on the homepage and the experiment succeeds. Meanwhile, a separate experiment implementing a mega-menu sitewide also succeeds. But how is the behaviour of users who land on the homepage now affected? Would they click the category boxes or the mega-menu, and how does this affect their propensity to purchase? Have we caused a 2 + 2 = 3 scenario?
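To put illustrative numbers on it (these are hypothetical, not from a real test): if the category boxes win by 4% in isolation and the mega-menu wins by 3% in isolation, running both does not guarantee a 7% uplift. Homepage visitors who would have clicked the category boxes now split between two competing navigation routes, so the combined uplift could easily come out at 5%, or even lower than either change on its own.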

This is most common within tactical experiments rather than strategic and behavioural implementations.

If you’re still no closer to understanding why you’re not growing through CRO, ask yourself: is my A/B test broken?

CRO is an incredibly efficient process. Although there are many reasons why you aren’t growing through CRO, if you feel you aren’t getting the most from your efforts, get professional advice.

——————————
Thanks to David Mannheim for this guest post.

David Mannheim is owner of and Head of Optimisation at User Conversion, a conversion rate optimisation agency in Manchester.
