We live in an era of rapid technological change. Yet most of what enterprises and consumers experience as change is actually new use cases, designs, appearances, and experiences built on less cutting-edge technologies. The art lies in the intuitive, appealing, and adaptive application of recently articulated capabilities. The growing citizenry of the web expects visually appealing and self-explanatory interfaces. As we go into 2015, over 3 billion people, roughly 40% of the world's population, have internet access.
Given the large and still growing number of users from every age group, culture, language, and sub-culture, the challenge is to communicate effectively with impatient and diverse audiences facing endlessly proliferating choices. How effectively and adaptively this hurdle is cleared is emerging as the measure of success. In the struggle to maintain and grow influence over users, A/B testing is emerging as the data-driven and rigorous path forward. The largest and most sophisticated organizations have been doing this for a while, and there is every indication it has generated, and continues to generate, substantial returns from active users. As staff migrate and word spreads about the value of iterative insight into usage patterns, awareness of A/B testing is rising.
As data collection, storage, and analysis grow in popularity and decline in price, A/B testing is spreading. A large and growing community of companies is implementing, debating, and experimenting with new or expanded testing. Interestingly, the need and competitive pressure to test are rising without a corresponding adaptation to the very different needs, budgets, and goals of smaller institutions. This is the challenge of the future for many, and it has become a focus for a few exciting emerging start-ups; Optimizely, among others, comes to mind.
A/B testing uses statistics to analyze data on usage patterns across apps, webpages, fonts, layouts, graphics, and phrasing. The name comes from the standard statistical method of comparing a status quo to a proposed alternative: option A, usually the present choice, is compared with a different option B. Data is collected on both options, and how much can be collected depends heavily on the volume of page views, engagements, or interactions users have with each. This creates a number of serious problems that restrict smaller firms.
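To see why traffic volume is such a constraint, consider a rough sample-size calculation. The sketch below uses the standard two-proportion formula and purely illustrative numbers (a 5% baseline conversion rate, a hoped-for lift to 6%); it is not any particular vendor's methodology.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_a, p_b, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a shift in
    conversion rate from p_a to p_b at significance level alpha
    with the given statistical power (standard normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2)

# Illustrative assumption: detecting a 5% -> 6% conversion lift
n = sample_size_per_variant(0.05, 0.06)
```

For this small a lift, each variant needs thousands of visitors, so the full test requires well over ten thousand page views. A site with modest traffic may wait weeks or months for an answer a large site gets in a day, which is exactly the restriction on smaller firms described above.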
The statistical methods used draw on norms from econometrics and hypothesis testing. True to statistics, these methods rely on large quantities of data to reach statistical significance: the difference between a tentative hunch that one option is better and a confident decision to act on it. Reaching significance requires both a large sample and an agreed error tolerance. Whenever sampling is used to estimate a population, there is a real risk that results are artifacts of sampling error. Every sample differs from the population it seeks to represent, and those differences create inaccuracies and raise the prospect of false positive or false negative results. The bar in statistical testing is deliberately set high: conventionally, the risk of falsely rejecting the status quo, option A, is held at or below 5%. Statistical significance thus depends on the relative likelihood of the different types of error. These standard accuracy requirements make A/B testing slow and expensive for many, even as competitive and market conditions make it essential to test and adjust immediately.
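The comparison itself can be sketched with a two-proportion z-test, one common way to decide whether B's conversion rate differs from A's at a 5% error tolerance. The traffic figures here are hypothetical.

```python
import math
from statistics import NormalDist

def ab_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: given conversions and visitor counts for
    options A and B, return (p_value, significant) where significant
    means the observed difference clears the alpha error tolerance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_value, p_value < alpha

# Hypothetical data: A converts 500 of 10,000 visitors, B converts 600 of 10,000
p_value, significant = ab_significant(500, 10_000, 600, 10_000)
```

With 10,000 visitors per variant the 5% vs. 6% difference clears the bar; with a tenth of that traffic the same rates would not, which is the cost-of-data problem in miniature.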
We see A/B testing coming to many more firms, and firms striving to bridge this gap. For today, this is a huge growth opportunity, and it will have a shaping influence on many businesses.
Also featured in this edition is our news section, containing articles from Re/Code, Computerworld, and PYMNTS.