Dave Bell, Co-founder and CEO of Gummicube, offers advice on how to get the most out of A/B testing in the Apple and Google app stores.
A/B testing enables marketers and developers to further optimize their conversion potential on the app stores. Whether through Google Play Experiments or Apple's newer testing capabilities, developers can now run tests directly through the consoles of both app store platforms.
Despite this ease of access, native A/B testing capabilities still pose hurdles to testing app store assets in full. Google Play Experiments, Apple's Product Page Optimization, and Custom Product Pages can only give us part of the full picture. In the process of App Store Optimization (ASO), developers need the full picture to build more informed, data-backed growth strategies for long-term success in the app stores.
Know the limitations of each platform
Acknowledging the known limitations of each native testing platform is the first step toward creating a testing strategy that encompasses all conversion success factors. While native testing platforms provide valuable information, these limitations call on developers to look to other modes of A/B testing to address any gaps in their testing capabilities.
No two apps on the app store are completely alike – each has its specific target market, features to highlight, and unique functionalities. Thus, many developers still find that A/B testing on native platforms frequently misses the mark, falling short of the flexible, customizable testing capabilities their unique needs require.
Google Play Experiments
Google Play Experiments is the longest-running native A/B testing platform currently available to developers. Before Apple released its A/B testing platforms, some iOS developers used Google Play Experiments to conduct tests if no other A/B testing platform was available. While this often yielded inaccurate results, it provided developers with a rough benchmark of performance. Compared to Apple's newly released A/B testing platforms, Google Play Experiments still has the most robust native testing functions available for developers to use.
Developers can test the following assets on Google Play Experiments:

- App icon
- Feature graphic
- Screenshots
- Promo video
- Short description
- Full description
The list of app assets that can be A/B tested on Google Play Experiments is larger than Apple's. However, many developers still struggle to differentiate between paid and organic traffic when they run Google Play Experiments. That differentiation is necessary for understanding the effectiveness of conversion on the app stores: conflated traffic blurs the line between paid and organic performance. By combining A/B testing with ASO, traffic can be more easily attributed to paid and organic channels.
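Once traffic has been segmented, the standard way to judge an experiment's outcome is a significance test on the two conversion rates. Below is a minimal sketch using a two-proportion z-test – generic statistics rather than anything specific to either store's console, and the install counts are made up for illustration:

```python
from statistics import NormalDist

def conversion_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Example: 300 installs from 10,000 views vs 360 installs from 10,000 views
p_a, p_b, p_val = conversion_significance(300, 10_000, 360, 10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p_val:.3f}")
```

A p-value below the conventional 0.05 threshold suggests the difference is unlikely to be noise – though with conflated paid and organic traffic, even a "significant" result can mislead, which is why segmentation comes first.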
Apple’s Product Page Optimization (PPO)
When Apple released PPO, many developers expected testing capabilities that would mimic Google Play Experiments in scope and breadth of app asset testing. PPO gave iOS app developers the ability to test directly through App Store Connect, but developers faced limitations on what they could test compared to Google Play Experiments or external testing channels.
As of now, PPO lets us test the following assets:

- App icon
- Screenshots
- App preview videos
In addition to limited creative asset testing, user-facing metadata assets like the subtitle and app title cannot be tested. With such gaps in testing, developers often have to rely on Apple Search Ads campaigns to test the effectiveness of their remaining assets. This is costly, time-consuming, and results in a less streamlined testing strategy.
For multi-territory apps, PPO also poses a significant limitation. Since PPO only allows developers to test a single page at a time, tests for each international territory must run one after another. Otherwise, all currently running tests will stop, and the process must be repeated from scratch.
Apple’s Custom Product Pages (CPP)
CPP is a game-changer for user targeting. Developers no longer have to design an all-encompassing product page to capture every audience segment. With CPP, they can address each of their unique user groups with relevant metadata and creatives tailored to their needs and preferences. Developers can also drive external traffic to each CPP with the help of a custom URL that redirects the user to one of up to 35 CPPs.
Currently, CPP allows developers to customize the following assets:

- Promotional text
- Screenshots
- App preview videos
As you can see, CPPs aren't fully customizable. CPP also limits developers in segment selection. Developers must make strategic decisions about whom to target and how to target them with specific treatment changes, and CPPs will only capture metrics from users who have interacted with the external link.
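Because a CPP only receives the traffic sent to its link, constructing and tagging those links carefully matters. Apple identifies each custom product page with a `ppid` query parameter appended to the app's regular App Store URL. A small sketch – the app URL and page ID below are placeholders, not real identifiers:

```python
from urllib.parse import urlencode

# Placeholder identifiers -- substitute your real App Store URL and the
# ppid shown in App Store Connect for the custom product page.
APP_URL = "https://apps.apple.com/us/app/example-app/id0000000000"
CPP_ID = "00000000-1111-2222-3333-444444444444"

def custom_product_page_url(app_url, ppid):
    """Build a link that sends external traffic to one specific CPP.

    The ppid query parameter tells the App Store which custom
    product page to display instead of the default page.
    """
    return f"{app_url}?{urlencode({'ppid': ppid})}"

link = custom_product_page_url(APP_URL, CPP_ID)
print(link)
```

Each acquisition channel can then be pointed at its own CPP link, so the metrics App Store Connect reports per page map cleanly onto the audience segment that channel reaches.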
The customization of product pages is an exciting new dimension of the user experience that also helps developers target their users with more precision. However, reaching and segmenting these users can still seem out of reach with the current testing limitations in CPP and PPO. Some of the assets that drive visibility and conversion simply can't be tested – so what can be done to overcome these limitations?
Filling the gaps with external A/B testing
Ever since these limitations came to light, developers have used external A/B testing channels to supplement native testing capabilities.
In reality, there are multiple variables that affect app store performance, including the synergy of metadata, creatives, and external success factors like app ratings and reviews. All of these assets have a direct effect on the organic and paid performance of your app – especially conversion.
External testing mitigates the limitations of native A/B testing platforms with additional capabilities that give developers the bigger picture of conversion optimization.
External A/B testing with ASO technology such as Splitcube allows developers to test every metadata and creative app store asset without having to deploy directly through the app stores. Moreover, Splitcube gives developers the opportunity to manually assign traffic, conduct simultaneous multivariate testing, and perform tests that may not be possible on the stores, like pre-launch testing.
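Manual traffic assignment of this kind is typically implemented by hashing a stable user identifier into a bucket and mapping buckets onto weighted variants. The sketch below is a generic illustration of that technique, not Splitcube's actual API; the variant names and weights are invented:

```python
import hashlib

def assign_variant(user_id, variants):
    """Deterministically assign a user to a test variant.

    `variants` maps variant name -> traffic weight (weights are
    normalized, so they need not sum to 1). Hashing the user ID
    makes the assignment stable: the same user always sees the
    same variant across sessions.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) / float(16 ** len(digest))  # uniform in [0, 1)
    total = sum(variants.values())
    cumulative = 0.0
    for name, weight in variants.items():
        cumulative += weight / total
        if bucket < cumulative:
            return name
    return name  # guard against floating-point edge cases near 1.0

# Example: 50% of traffic to control, 25% to each screenshot variant
split = {"control": 50, "screenshots_v1": 25, "screenshots_v2": 25}
print(assign_variant("user-12345", split))
```

Keeping the assignment deterministic also makes multivariate tests cleaner: a user's bucket can be reused across several concurrent experiments without one test's randomness contaminating another's.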
Native A/B testing tools are powerful, but they only give developers a sliver of their full conversion potential. The factors that affect conversion are synergistic, so developers need to account for the testing limitations that impede their ability to test all app store assets effectively. Developers must understand that conversion optimization requires:

- Testing every metadata and creative asset, not just the subset each console supports
- Differentiating paid and organic traffic
- Accounting for the synergy between metadata, creatives, and external success factors like ratings and reviews
Adjusting for known limitations in testing is no easy feat. External A/B testing channels like Splitcube help developers account for the current discrepancies in their testing capabilities. With the help of an ASO company with over 12 years of experience, you can better understand the factors in conversion optimization and what strategies help drive growth.
About the Author
Dave Bell is Co-founder and CEO of Gummicube. Gummicube is a global leader in App Store Optimization with more than 11 years of experience optimizing and marketing apps. We offer the leading enterprise ASO technology and agency services, providing support to clients around the world. Our company is trusted by thousands of enterprise brands and leading startups including Microsoft, LinkedIn, Bethesda, SWEAT, GrubHub, McAfee and many others.