Beta software gets a bad rap. It is usually the first public release and comes as part of the final phase of testing, which means it isn’t always polished and bugs are common. It should be considered more than just a public bug bash, though. A beta serves another important purpose: it is your first chance as a developer to see whether the software does its job and meets the need it was designed to fill.
Releasing your application to a community of “beta users” lets you observe unscripted interaction with your software in a wide variety of real-world environments. Managing this kind of release (a “beta test”), though, and collecting the interaction data has traditionally been a logistical chore.
Say we have a mobile game with multiple levels. How should they be presented? The obvious answer is in order of increasing difficulty, but maybe it makes sense to alternate easy and hard levels so players don’t become discouraged. Perhaps they should be arranged in groups, each of which starts easy and grows more challenging as a player advances.
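The three ordering strategies above can be sketched concretely. This is an illustrative sketch only; the function names and group count are hypothetical, and the input is assumed to be a list of levels already sorted from easiest to hardest.

```python
# Three ways to order a difficulty-sorted list of levels, matching the
# strategies discussed above. Names and parameters are illustrative.

def order_sequential(levels):
    """Present levels strictly in order of increasing difficulty."""
    return list(levels)

def order_alternating(levels):
    """Alternate easy and hard: easiest, hardest, next-easiest, ..."""
    ordered = []
    lo, hi = 0, len(levels) - 1
    while lo <= hi:
        ordered.append(levels[lo])
        if lo != hi:
            ordered.append(levels[hi])
        lo += 1
        hi -= 1
    return ordered

def order_grouped(levels, num_groups=3):
    """Split into groups that each start easy and ramp up in difficulty."""
    # Striping a difficulty-sorted list means every group begins with a
    # relatively easy level and each later level in the group is harder.
    return [levels[i::num_groups] for i in range(num_groups)]

print(order_alternating([1, 2, 3, 4, 5]))   # → [1, 5, 2, 4, 3]
print(order_grouped([1, 2, 3, 4, 5, 6]))    # → [[1, 4], [2, 5], [3, 6]]
```

Each function is a candidate variation you might want to compare in a beta test, which is exactly the kind of question the rest of this article addresses.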
This is the kind of fundamental behavioral question you might try to answer with a beta test. Using traditional techniques, you would code one approach, release it to your beta users, and evaluate its success based on, for example, the number of levels completed or highest level completed. Then you would code the next approach, re-release the updated app, record the user response to the new version, and so on. Finally, you would select the most successful approach for your production release.
There are obvious disadvantages to performing a beta test this way. You must release a new version of your app for each variation you want to evaluate. Beta periods are longer because you must test variations sequentially (unless you manage several different groups of beta users). Test results may be influenced by users’ previous experience, since they won’t necessarily bring fresh eyes to each variation.
For mobile app developers, distributing software to a limited group of users may not even be possible. Once an app is published to an app store—often the only practical way to handle distribution—it is automatically available to everyone using that store. App updates must also go through the store, which means resubmitting a new version and potentially restarting the approval process.
Although everyone agrees that beta testing is important, most developers would also agree it is very time-consuming and difficult to execute.
Amazon’s A/B Testing Service is designed to run in-app experiments, which also makes it perfect for beta tests. A free, cross-platform service that supports iOS, Fire OS, and Android, it can handle multiple tests at once with up to five variations each. You can use it to make app adjustments on the fly without updating client-side code or redeploying your app.
The service also makes it easy to manage beta users, since it has built-in support for user segmentation. This means you can target your beta test to users matching criteria you specify, or even run multiple tests at once targeting different groups.
The Amazon Developer Console provides an online dashboard from which you can monitor and control all aspects of your beta tests.
We can use Amazon’s A/B Testing Service to address these issues and simplify the testing process. First, we create a project to represent our test.
Next, we identify which users will be included in the test by defining a special segment. Only beta users will see the variations; everyone else will see the default behavior (also called the control).
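Conceptually, a segment is a set of criteria matched against each user's attributes. The sketch below illustrates that idea only; the dimension names (`optedIntoBeta`, `platform`) are hypothetical placeholders, not the service's actual schema.

```python
# Illustrative sketch of segment matching: a user belongs to the
# segment only if every criterion matches their attributes.
# Attribute names here are hypothetical, not from Amazon's SDK.

def matches_segment(user, criteria):
    """Return True if every segment criterion matches the user's attributes."""
    return all(user.get(dim) == value for dim, value in criteria.items())

# A hypothetical "betaUsers" segment definition:
beta_users = {"optedIntoBeta": True, "platform": "android"}

player = {"optedIntoBeta": True, "platform": "android", "country": "US"}
print(matches_segment(player, beta_users))  # → True
```

Users outside the segment simply fall through to the default (control) behavior, so the rest of your audience is unaffected by the test.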
Finally, we create the actual A/B test we want to run and describe each variation in terms of the variables that connect it to our app. We will evaluate success based on the percentage of players who complete Level 5 after starting it.
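The success metric described above can be computed from recorded gameplay events. This sketch assumes hypothetical event names (`level5Start`, `level5Complete`) purely for illustration; they are not the service's actual event schema.

```python
# Sketch of the success metric: of the players who start Level 5,
# what fraction complete it. Event names are illustrative placeholders.

def level5_completion_rate(player_events):
    """player_events maps player id -> list of recorded event names."""
    started = {p for p, evts in player_events.items() if "level5Start" in evts}
    completed = {p for p in started if "level5Complete" in player_events[p]}
    return len(completed) / len(started) if started else 0.0

events = {
    "p1": ["level5Start", "level5Complete"],
    "p2": ["level5Start"],                    # started but never finished
    "p3": ["level4Complete"],                 # never reached Level 5
    "p4": ["level5Start", "level5Complete"],
}
print(level5_completion_rate(events))  # → 0.6666... (2 of 3 starters)
```

Computing the rate per variation, rather than overall, is what lets the test compare the two level designs head to head.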
The user segment we defined above (betaUsers) will be divided equally, with each subgroup seeing one of the variations we described. We can use the Amazon Developer Console to review the results of our test.
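A common way such services divide a segment evenly (and keep each player in the same variation across sessions) is to bucket users by a stable hash of their id. The sketch below shows that general technique; it is an assumption about the mechanism, not Amazon's actual internals.

```python
# Illustrative sketch of stable, even assignment of segment members to
# variations via hashing. This is a common bucketing technique, not a
# description of the service's real implementation.

import hashlib

def assign_variation(player_id, variations):
    """Deterministically map a player id to one of the variations."""
    digest = hashlib.sha256(player_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

variations = ["A", "B"]

# The same player always lands in the same bucket:
assert assign_variation("player-42", variations) == assign_variation("player-42", variations)

# Over many players, the split is close to 50/50:
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variation(f"player-{i}", variations)] += 1
print(counts)
```

Deterministic assignment matters for a beta test: a player who saw the alternating level order yesterday should not see the grouped order today, or the results for both variations become muddied.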
As the customers in the “betaUsers” segment play the game, data is uploaded to the service and tabulated within an hour. Once the confidence level reaches statistical significance (100% in the example above), you can decide how you’d like to react. You can pick the winning variation and make it live for all new users. In this case, we see that the existing Level 5 design (Variation A) outperforms the new Level 5 design (Variation B) we’re testing. Somehow we made the level worse, so we shouldn’t deploy this particular change to everyone.
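To give an intuition for where a confidence figure like that comes from, here is a sketch of a two-proportion z-test comparing completion rates between variations. This illustrates the general statistics behind such dashboards; it is an assumption, not Amazon's documented calculation, and the sample counts are invented for the example.

```python
# Sketch of a two-proportion z-test: how confident are we that two
# completion rates genuinely differ? Illustrative of the statistics
# only, not the service's exact method.

from math import erf, sqrt

def confidence(successes_a, trials_a, successes_b, trials_b):
    """Two-sided confidence (0..1) that the two rates differ."""
    pa = successes_a / trials_a
    pb = successes_b / trials_b
    pooled = (successes_a + successes_b) / (trials_a + trials_b)
    se = sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
    if se == 0:
        return 0.0
    z = abs(pa - pb) / se
    # Confidence = 1 - two-sided p-value under the normal approximation.
    return erf(z / sqrt(2))

# Hypothetical counts: Variation A (existing level) completes far more
# often than Variation B (the redesign), so confidence is near 1.
print(confidence(480, 600, 390, 600))
```

With a large, clear gap like this, the confidence is effectively 100%; with identical rates it is 0, which is why the service waits for significance before you are encouraged to act.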
Beta testing is an effective way to test different configurations in your app under real-world conditions, but it’s also a good opportunity to evaluate user satisfaction with your app’s basic operation. Using Amazon’s A/B Testing Service, you can easily compare different functional implementations of key features or behavior, allowing you to identify which approach will resonate most with your users.
To learn more about A/B Testing and how you can incorporate it into your app development, see Amazon’s developer website and the A/B Testing Service documentation: