A/B Testing to Deliver Results
The concept of A/B testing is not new, but the conversation around it has shifted dramatically in recent years. Not long ago, UI/UX experts were laser-focused on conversion and on getting the most out of existing web traffic. Most A/B testing plans were driven by experiments to improve conversion with little regard for “intangible” components such as visitor engagement.
These strategies ignored the fact that without deeper insight into visitor engagement, it is difficult to improve conversion. While a significant number of UI/UX agencies still focus narrowly on traffic and conversion, there is now an emerging consensus that visitor engagement is a crucial factor in improving website conversion.
A/B Testing in Traditional Terms
In the simplest terms, A/B testing requires that you start with a webpage you want to improve (version A) and create a modified version of it (version B). You then run both pages simultaneously and see which one performs better. The idea is that half of the visitors are shown version A – otherwise known as the control page – and half are shown version B, the variation page. At the end of the test period, you compare the results for both pages and see which one performed better.
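The 50/50 split described above can be sketched in a few lines. This is a minimal illustration, not a production bucketing system: the visitor IDs and the hash-based assignment are assumptions made for the example.

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to the control (A) or
    variation (B) page by hashing their ID, so a returning visitor
    always sees the same version."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Over many visitors, roughly half land in each bucket.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
print(counts)
```

Hashing the visitor ID, rather than flipping a coin per page view, keeps each visitor's experience consistent for the life of the test.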
Current Gaps in A/B Testing
As discussed earlier, the idea is to develop a page that performs better than its earlier version. While there are several performance metrics that quantify “better” in terms of conversion, there are few established metrics for measuring user engagement. Ultimately, good A/B testing should lead to increased revenue, and improving customer engagement can amplify that increase.
How to Develop an Effective A/B Testing Plan
Spend time learning the client’s business
Understanding the nuances of a client’s business is the key to doing effective work for them, especially in A/B testing. Statistical results from A/B tests can provide a wealth of information about a website’s audience and how they interact with its content. Cross-reference those results with what you’ve learned from your client about their business.
Know exactly what you’re trying to measure
You can’t measure what you can’t define. It is as simple as that. Having a clear understanding of what is being measured is a good start, so work with your client to understand what they consider meaningful customer engagement. This may present a unique opportunity to educate your client about long-term ROI rather than short-term revenue, and it is how you differentiate customer engagement from short-term conversion. A client needs to be on board with your philosophy of treating the cause (engagement) rather than the symptoms (conversion).
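Once client and agency agree on what engagement means, it can be reduced to a concrete number. The sketch below computes a toy composite engagement score from hypothetical session data; the fields and weights are illustrative assumptions to be agreed with the client, not an industry standard.

```python
# Hypothetical session records: (pages_viewed, seconds_on_site, returned_within_30_days)
sessions = [
    (1, 20, False),
    (5, 240, True),
    (3, 90, False),
    (7, 600, True),
]

def engagement_score(pages: int, seconds: int, returned: bool) -> float:
    """Toy composite score: each page view counts 1 point, each minute
    on site counts half a point, and a return visit adds a 3-point
    bonus. The weights are illustrative assumptions only."""
    return pages * 1.0 + (seconds / 60) * 0.5 + (3.0 if returned else 0.0)

scores = [engagement_score(*s) for s in sessions]
avg = sum(scores) / len(scores)
print(f"average engagement score: {avg:.2f}")
```

Whatever the exact formula, writing it down forces the “meaningful engagement” conversation with the client to produce something measurable.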
Start With One Variable
Trying to accomplish too much too fast is a recipe for disaster. Chasing too many objectives at once is usually unrealistic given time and resource limitations. Keep things as simple as possible and begin by adjusting only one element of the subject page.
Be Decisive With A/B Testing
Too often, A/B testing projects get derailed because stakeholders sit on decisions for too long before taking action. Once a test reaches a clear result, act on it.
Never Stop Testing
It’s important to note that A/B testing is not a one-and-done activity but an ongoing process. Even if web pages are performing well, you might be able to get even more out of them with just a few simple changes.
Trust Data Not Gut Feeling
While gut feeling might win you a fantasy football league, A/B testing is all about making decisions based on statistics and data. Tools such as Google Analytics and Optimizely are extremely intuitive and should help you reach your decision points quickly.
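As a sketch of what trusting the data means in practice, the snippet below runs a standard two-proportion z-test on hypothetical conversion counts for the control and variation pages. The sample numbers are invented for illustration; tools like Google Analytics and Optimizely perform this kind of significance check for you.

```python
from math import sqrt, erf

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is version B's conversion rate
    significantly different from version A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 200/5000 conversions on A, 250/5000 on B.
z, p = conversion_z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Only call the test when the p-value clears a threshold agreed on up front (commonly 0.05); otherwise the difference between the pages may be noise rather than a real improvement.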