Conversion rate optimisation

Conversion rate optimisation (CRO) is the practice of increasing the proportion of visitors to a website who convert into customers or, more generally, take any desired action on a webpage.

Conversion rate optimisation involves several distinct parts and covers more than just increasing your website’s conversion rate. After all, conversion rate is just one metric; the CRO testing process allows you to improve not only the proportion of visitors who convert, but also customer value and retention.

Ultimately, a CRO strategy helps you drive revenue and efficiency and increase market share and customer satisfaction. The purpose of conversion rate optimisation is to make data-driven decisions, grounded in your customers’ behaviour, that generate growth.

A/B testing is the method by which those data-driven decisions are reached within your conversion rate optimisation strategy, and it always includes the following steps:

Our process

Data analysis

The first step is to gather insights from your data. Businesses collect information from website analytics, loyalty programmes, CRM systems, in-store purchases and on-site heat mapping. The aim is to determine which parts of the customer journey are working well and which aren’t.

From here you’ll need to add a layer of qualitative data to uncover what motivates or causes the behaviours you see in your quantitative data. This could range from customer service feedback and call recordings to moderated user research, with many other methods in between. By pulling all your data sources together, you’ll be able to triage insights that are supported by several sources, strengthening the evidence for an idea and revealing both the “what” and the “why” behind what you uncover.

Hypothesis development

You’ve worked out where you want to test based on the data, and what you want to test based on why visitors may not be taking the desired action, such as usability, trust, persuasiveness, perceived value or confusion, to name a few. Now you can create your hypothesis. A hypothesis is a statement proposing that changing X to Y will cause effect Z.

For example, suppose your data showed a significant drop-off among certain types of customer on your payment page, and your qualitative research suggested the reason was that customers were worried about the security of their data. Your hypothesis could be “by adding trust messaging and signals to the payment page, we hypothesise that more customers will complete their payment”.

Now you’ll need to prioritise your hypotheses, taking into account the projected impact, the resource required to build the test, and the traffic and time needed to run it long enough to produce sound results.
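As an illustration, one simple way to do this is a weighted score across those factors, in the spirit of common prioritisation frameworks such as PIE or ICE. The hypotheses, scores and weights in the sketch below are entirely hypothetical:

```python
# Hypothetical prioritisation sketch: score each hypothesis on projected
# impact, ease (the inverse of resource required) and confidence in the
# supporting data, then rank by a weighted average. Scores run 1-10.
HYPOTHESES = [
    {"name": "Trust signals on payment page", "impact": 8, "ease": 7, "confidence": 9},
    {"name": "Shorter sign-up form",          "impact": 6, "ease": 9, "confidence": 6},
    {"name": "New homepage hero image",       "impact": 7, "ease": 3, "confidence": 4},
]

WEIGHTS = {"impact": 0.5, "ease": 0.2, "confidence": 0.3}  # illustrative weights

def priority_score(hypothesis):
    return sum(hypothesis[factor] * weight for factor, weight in WEIGHTS.items())

for hypothesis in sorted(HYPOTHESES, key=priority_score, reverse=True):
    print(f"{priority_score(hypothesis):.1f}  {hypothesis['name']}")
```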

Creating the challenger

It’s time to get creative and use psychology techniques, compelling copy or design solutions to create your challenger, or “variant”. The challenger will be tested against the existing page or element (known as the control) in your A/B test.

There is a range of ways to develop solutions to your hypothesis, from innovation exercises to collaborative sketching workshops with a cross-section of your business, ideally involving people with expertise in design, copywriting, consumer psychology, neuromarketing and usability.

A/B testing

Setting up your A/B test and configuring tracking allows you to test your hypothesis solution (the challenger) against your existing concept (the control). One of the hardest parts of the process is choosing the right metrics to measure your test against.

With many testing tools, it’s possible to include offline conversion data in your tests. Make sure you’ve got the full picture: if a visitor sees one of your variations and then converts over the phone, ensure this is being tracked. You should also consider both micro and macro conversion metrics, as this will enhance what you learn from the test, such as changes in user behaviour that can inspire new hypotheses and shape further tests.
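As a rough sketch of what stitching offline conversions into the picture might look like, the snippet below joins hypothetical phone orders to the variant each customer saw; the field names are made up rather than taken from any particular tool’s export format:

```python
import pandas as pd

# Hypothetical exports: which variant each visitor saw (from the testing tool)
# and orders taken over the phone (from the call centre or CRM).
exposures = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "variant": ["control", "challenger", "challenger", "control"],
})
phone_orders = pd.DataFrame({
    "customer_id": [102, 104],
    "order_value": [59.00, 120.00],
})

# Left-join so every exposed visitor is kept, then count offline conversions
# and revenue per variant alongside the online conversions you already track.
merged = exposures.merge(phone_orders, on="customer_id", how="left")
offline_by_variant = merged.groupby("variant")["order_value"].agg(["count", "sum"])
print(offline_by_variant)
```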

Depending on the test, you may require some front-end or back-end development support. There is also a range of A/B testing tools for a variety of budgets, such as Optimizely, Qubit and VWO. Finally, you’ll need to stop the test at the right time to get statistically sound results.
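To make “statistically sound” a little more concrete, here is a minimal sketch of the kind of check most testing tools run for you: a two-proportion z-test comparing control and challenger conversion rates, with made-up numbers:

```python
from math import sqrt, erfc

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test comparing the conversion rates of two variants."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal distribution
    return rate_a, rate_b, z, p_value

# Made-up figures: control converted 412 of 10,000 visitors, challenger 487 of 10,000.
rate_a, rate_b, z, p_value = two_proportion_z_test(412, 10_000, 487, 10_000)
print(f"control {rate_a:.2%} vs challenger {rate_b:.2%}: z = {z:.2f}, p = {p_value:.4f}")
```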

Post-test analysis

Once a test has concluded, it is crucial that you examine the resulting data. If the test was successful, is there a follow-up test that could make a further improvement? If it failed, what does that tell you, and is there a follow-up test to run with this new insight?

Improving the number of people who purchase a product offers little value if it creates a corresponding increase in product returns; it’s essential to take a holistic view and look at more than your website conversion rate when conducting the post-test analysis.

Winners and losers

What to do if an A/B test fails?

Examine the data. Segment the results to better understand what failed and where. The challenger may have “won” for a particular segment or traffic source, for example, or performed poorly on a specific browser or OS, which could indicate a bug. However, be careful about sample sizes when you segment your data.
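As an illustration of that segmentation (and the sample-size caveat), here is a small sketch using hypothetical traffic-source data; the threshold and figures are invented for the example:

```python
import pandas as pd

# Hypothetical test results broken down by traffic source. In practice this
# would come straight out of your testing or analytics tool.
results = pd.DataFrame({
    "variant":   ["control", "challenger"] * 4,
    "source":    ["email", "email", "paid", "paid", "organic", "organic", "social", "social"],
    "visitors":  [5200, 5100, 3900, 4000, 2600, 2500, 180, 210],
    "converted": [260, 240, 170, 205, 120, 131, 9, 16],
})

MIN_VISITORS = 1000  # illustrative threshold below which a segment is too small to trust

results = results.assign(rate=results["converted"] / results["visitors"])
for source, segment in results.groupby("source"):
    caveat = "" if segment["visitors"].min() >= MIN_VISITORS else "  (sample too small)"
    rates = ", ".join(f"{row.variant} {row.rate:.2%}" for row in segment.itertuples())
    print(f"{source:8s} {rates}{caveat}")
```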

Also look into whether the test “failed” because of execution. For example, if you’ve added an element to a page, was the creative striking enough? Heat maps can help identify this, and some on-site survey tools allow you to attach test variables to the data they collect, giving you qualitative feedback on your test variations. Finally, look at the data for iterative changes that could potentially turn the negative result into a positive one.

Make sure you have documented the test result correctly so that when you come to plan future tests, you have all the research and results data recorded. This way your ‘negative’ result isn’t without some positives, as the learnings will feed into future experiments.

What to do if an A/B test wins?

If your test is successful, the first thing to do is start taking steps to put the winning challenger changes onto your live website. In the meantime, you can set the challenger to be shown to 100% of your site’s traffic within the CRO testing tool so that you reap the rewards straight away.

From this success, look to identify further opportunities for improvement by iterating on the winning test. As with a losing test, make sure you document the result correctly so that the research and results data are recorded and feed into future experiments.

Occasionally, a test will generate neither a positive nor a negative result and will come back ‘flat’. If this is the case, go back to your data to help understand why. Don’t treat a ‘flat’ result as a failure: after some analysis, it may well turn out to have had a positive effect elsewhere on your site or in your customer journey. Alternatively, it may simply have saved your team the time and resources of developing an idea that wouldn’t have had any impact. Once again, make sure you document the test result correctly.

Would it work for your business?

Any business that spends money, time or resources getting traffic to its website should also be investing effort in converting those potential customers.

It’s not just e-commerce sites that should be doing conversion rate optimisation; it applies equally to lead-generation websites and even online communities. To do A/B testing, though, you need a minimum of 2,000 conversions per month, so that you can run enough statistically significant tests to justify the resource required to test effectively.
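As a back-of-the-envelope illustration of why volume matters, the sketch below uses a standard sample-size approximation (95% confidence, 80% power) to estimate how many visitors per variant a test needs, and therefore how long it would take at a given traffic level; all the inputs are hypothetical:

```python
from math import ceil, sqrt

def visitors_per_variant(baseline_rate, relative_uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect the given relative
    uplift in a two-proportion test at 95% confidence and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs: 3% baseline conversion rate, looking for a 10% relative uplift,
# with 40,000 visitors a month split evenly across control and challenger.
needed = visitors_per_variant(0.03, 0.10)
months = needed / (40_000 / 2)
print(f"{needed} visitors per variant, roughly {months:.1f} months at this traffic level")
```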

Ready to get started?

Get in touch and let’s discuss what we can do for you.

