Showing posts tagged with AB Testing

July 02, 2014

Peter Heinrich

Beta software gets a bad rap. It is usually the first public release and comes as part of the final phase of testing, which means it isn’t always polished and bugs are common. A beta should be considered more than just a public bug bash, though. It serves another important purpose: it is your first chance as a developer to see whether the software does its job and meets the need it was designed to address.

Making Beta Testing Easier

Releasing your application to a community of “beta users” lets you observe unscripted interaction with your software in a wide variety of real-world environments. Managing this kind of release (a “beta test”), though, and collecting the interaction data has traditionally been a logistical chore.

Say we have a mobile game with multiple levels. How should they be presented? The obvious answer is in order of increasing difficulty, but maybe it makes sense to alternate easy and hard levels so players don’t become discouraged. Perhaps they should be arranged in groups, each of which starts easy and grows more challenging as a player advances.

This is the kind of fundamental behavioral question you might try to answer with a beta test. Using traditional techniques, you would code one approach, release it to your beta users, and evaluate its success based on, for example, the number of levels completed or highest level completed. Then you would code the next approach, re-release the updated app, record the user response to the new version, and so on. Finally, you would select the most successful approach for your production release.

There are obvious disadvantages to performing a beta test this way. You must release a new version of your app for each variation you want to evaluate. Beta periods are longer because you must test variations sequentially (unless you manage several different groups of beta users). Test results may be influenced by users’ previous experience, since they won’t necessarily bring fresh eyes to each variation.

For mobile app developers, distributing software to a limited group of users may not even be possible. Once an app is published to an app store—often the only practical way to handle distribution—it is automatically available to everyone using that store. App updates must also go through the store, which means resubmitting a new version and potentially restarting the approval process.

Although everyone agrees that beta testing is important, most developers would also agree it is very time-consuming and difficult to execute.

A/B Testing Simplifies Everything

Amazon’s A/B Testing Service is designed to run in-app experiments, which also makes it perfect for beta tests. A free, cross-platform service that supports iOS, Fire OS, and Android, it can handle multiple tests at once with up to five variations each. You can use it to make app adjustments on the fly without updating client-side code or redeploying your app.

The service also makes it easy to manage beta users, since it has built-in support for user segmentation. This means you can target your beta test to users matching criteria you specify, or even run more than one test at once, each targeting a different group.

The Amazon Developer Console provides an online dashboard from which you can monitor and control all aspects of your beta tests.

Make the App Your Customers Want: Set Up an A/B Test

We can use Amazon’s A/B Testing Service to address these issues and simplify the testing process. First, we create a project to represent our test.

Next, we identify which users will be included in the test by defining a special segment. Only beta users will see the variations; everyone else will see the default behavior (also called the control).

Finally, we create the actual A/B test we want to run and describe each variation in terms of the variables that connect it to our app. We will evaluate success based on the percentage of players who complete Level 5 after starting it.

[Screenshot: A/B test configuration in the Developer Console]
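On the app side, recording these events could look something like the following sketch, using the CustomEvent calls shown in the May 2013 integration post below. The event names level5Started and level5Completed are illustrative; they must match the view and conversion events you configure for the test.

// Illustrative event names; they must match the view and conversion
// events configured for this test in the Developer Console.
public void onLevel5Started() {
    CustomEvent.create("level5Started").record();    // view: player starts Level 5
}

public void onLevel5Completed() {
    CustomEvent.create("level5Completed").record();  // conversion: player finishes Level 5
}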

The user segment we defined above (betaUsers) will be divided equally, with each subgroup seeing one of the variations we described. We can use the Amazon Developer Console to review the results of our test.

[Screenshot: A/B test results in the Developer Console]

As the customers in the “betaUsers” segment play the game, data is uploaded to the service and tabulated within an hour. Once the results reach statistical significance (100% confidence in the example above), you can decide how you’d like to react. You can pick the winning variation and make it live for all new users. In this situation, we see that the existing Level 5 design (Variation A) is better than the new Level 5 (Variation B) we’re testing. Somehow, we made the level worse, so we shouldn’t deploy this particular change to everyone.

Next Steps

Beta testing is an effective way to test different configurations in your app under real-world conditions, but it’s also a good opportunity to evaluate user satisfaction with your app’s basic operation. Using Amazon’s A/B Testing Service, you can easily compare different functional implementations of key features or behavior, allowing you to identify which approach will resonate most with your users.

To learn more about A/B Testing and how you can incorporate it into your app development, see Amazon’s developer website and the A/B Testing Service documentation.

-peter (@peterdotgames)

 

January 16, 2014

Peter Heinrich

A/B Testing is about using data to challenge assumptions and test new ideas. Watch this video to hear about the “happy accident” that inspired an important A/B test we hadn’t considered and how it led to an increase in retention and monetization for Air Patriots.

Created in-house at Amazon, Air Patriots is a plane-based tower defense game for iOS, Android, and Fire OS. The development team uses A/B Testing to experiment with new ideas, so I recently sat down with Senior Producer Russell Carroll and Game Development Engineer Julio Gorge to discuss how they used the service on Air Patriots. They described for me the design choices they tested, how the experiments were constructed, and what benefits they derived from the results.

Check out the conversation to learn how Russell and Julio’s experience on Air Patriots made them advocates for A/B Testing in every mobile app, especially those offering in-app purchase.

 

January 13, 2014

Peter Heinrich

Closely following the launch of the A/B Testing service for iOS, Android, and Fire OS apps, we have just released an update addressing one of our most popular feature requests. You can now track up to ten goals in a single A/B test, which means you can see how your experiment affects up to ten metrics at once. This is especially powerful when the metrics aren’t entirely independent and it would be difficult to create A/B tests to isolate them from each other. Let me illustrate with an example.

Say you have a mobile game that generates revenue using a combination of in-app purchasing (IAP) and mobile ads. You know that player engagement is the key to monetization, so you decide to test a hunch that more challenging levels will keep players in the game longer.

You create an A/B test project for your app, adding an experiment that allows you to adjust the overall difficulty of each level. Since you can have up to five variations for each test (see A/B/n testing for more information), you decide to measure player engagement when the game is much harder, slightly harder, slightly easier, and much easier than normal. “Normal” will be a variation of its own, called the Control.

In this case, you create a test variable called difficultyMultiplier, which your code can access and use to modify its behavior for each user. For the control group (60% of players in this example), difficultyMultiplier is 1.00, indicating no change from the default difficulty. The other groups see a slightly different value for difficultyMultiplier, depending on how hard the game should be for those players.
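Reading that variable in code might look like the sketch below, reusing the ABTest and VariationListener calls from the May 2013 integration post further down. The project name is illustrative, and parsing the string value to a double is an assumption, since the variable accessor shown in these posts is string-based.

// Fetch the variation allocated to this user and read the test variable.
// "Difficulty Project" is an illustrative project name; parsing the string
// value to a double is an assumption.
ABTest
    .getVariationsByProjectNames("Difficulty Project")
    .withVariationListener("Difficulty Project", new VariationListener() {
        public void onVariationAvailable(Variation variation) {
            difficultyMultiplier = Double.parseDouble(
                variation.getVariableAsString("difficultyMultiplier", "1.00"));
        }
    });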

To measure the effect of changing this variable, you define a view event and a conversion event, which your code records as they happen and reports to the A/B Testing service. For the purposes of this test, you consider it a view whenever a player starts a new game session. A conversion is registered if they play for five minutes or more. The A/B Testing service tabulates these events by variation and reports on the conversion rate for each group of users.
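Recording those events might look like this sketch; the event names and the sessionDurationMillis timer are illustrative.

// View: the player starts a new game session.
CustomEvent.create("sessionStarted").record();

// Conversion: the player stays engaged for five minutes or more.
// sessionDurationMillis and the event names are illustrative.
if (sessionDurationMillis >= 5 * 60 * 1000) {
    CustomEvent.create("sessionFiveMinutes").record();
}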

Say you run the experiment and discover your hunch was right: harder levels are played longer, leading to an increase in the average amount of time players engage with your game. The logical next step would be to ratchet up game difficulty. But what if improved engagement isn’t the whole story? Changing the difficulty may affect other metrics you care about, but you can’t always tell based on a single type of conversion event. For example, how does this change the way people share their progress on Facebook, a major customer acquisition channel? How does it impact ad click-thru rates? Does it impact how users rate the game? Setting multiple goals can help you detect such unintended consequences and choose the variation that delivers balanced results.

Now that the latest version of the A/B Testing service allows a single view event to be associated with up to ten different conversion events (goals), you can measure and compare the impact of each variation along more than one axis. Each goal can be maximized or minimized independently. For example, here you are trying to maximize game sessions, in-app purchases, ad clicks, and Facebook shares while minimizing one-star reviews, all in the same experiment.
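On the client, each goal is simply another conversion event recorded alongside the single view event, as in this hypothetical sketch (all event names are illustrative and must match the goals configured for the experiment):

// One view event, with each goal recorded as its own conversion event.
CustomEvent.create("gameSessionStarted").record();  // view

CustomEvent.create("iapPurchase").record();     // goal: maximize in-app purchases
CustomEvent.create("adClicked").record();       // goal: maximize ad clicks
CustomEvent.create("facebookShare").record();   // goal: maximize Facebook shares
CustomEvent.create("oneStarReview").record();   // goal: minimize one-star reviews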

When generating reports, the A/B Testing service includes the results for all goals associated with an experiment, organized by variation. The service highlights the “best” variation with respect to each goal, so you can tell at a glance which one resulted in the most game sessions, for example (Variation C), or maximized shares on Facebook (Variation A).

When goals overlap or depend on one another, as they do here, there may be no single variation that definitively “wins” every goal. A report like the one above, however, can help you make an educated choice, weighing the trade-offs of each alternative. In this case, Variation B looks like a good candidate since it succeeded in minimizing one-star reviews and came close to winning several other goals as well. When you look at the big picture, Variation B appears to have the best performance overall.

The orange checkmarks indicate which results achieved statistical significance—that is, where there are enough measurements to be confident that the observed change is actually due to the test variation. More details are available for each individual goal, so you can drill down on the ad clicks, for example, associated with each variation:

With the addition of up to ten goals for a single experiment, the A/B Testing service expands its flexibility and becomes an even more powerful tool for refining your app and optimizing it based on customer behavior. For more information on A/B testing, multiple goals, and how you can incorporate them into your mobile app or game, check out the online documentation.

 

December 02, 2013

Peter Heinrich

Join me Thursday, December 5th at 10:00am PST for a live webinar demonstrating Amazon’s A/B Testing service on iOS.

  • Learn the five reasons to include A/B testing in every iOS app you write from now on, and how your customers benefit when you adopt A/B testing methodology.
  • I’ll explain how to integrate the service into your project and add the appropriate hooks to record conversion events and enable data collection and analysis.
  • I’ll also demonstrate user segmentation and teach you how to use simple and complex filters to restrict your tests to just the users you define.

If you have ever wondered why A/B testing is good for your mobile apps and games, or simply been curious about how Amazon’s service works on iOS, sign up for the webinar today.

 

October 29, 2013

Mike Hines

Following up on the latest in a series of webinars covering Amazon devices, game services, and mobile applications, here’s a list of questions we collected during and after our presentation on the Amazon AB Testing API.

Q: A/B testing requires an internet connection at every launch, so even if I switch a variation to 100% for, say, Thanksgiving, it won’t apply to a user who isn’t connected to the internet. Is there any way to make the change stick?
A: If you query the server for a variant each time you use the app, the user will get the default value (“ABTest Default”) from this line:

newText = variation.getVariableAsString("varNewText", "ABTest Default");

If that default were set to “Thanksgiving”, you could be sure that you would get this value even if offline.

If you have a different, undesired value as default, you will need to wait until the user launches the app while online to effect the change. Even then, when offline, the user will see Default again.  To make sure this doesn’t happen, you can save the value once it’s been set. Then you can check to see if there is a connection before resetting the value from the server, and use the stored value if there is no connection. This way, the value can always be “Thanksgiving” (or whatever variant you select), even when the user is offline.
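A minimal sketch of that caching pattern on Android might look like the following; the preference key and the isOnline() connectivity helper are illustrative and not part of the A/B Testing SDK.

// Illustrative caching sketch: persist the last value the service delivered
// so offline launches reuse it instead of reverting to the default.
SharedPreferences prefs =
    getSharedPreferences("abTestCache", Context.MODE_PRIVATE);

String newText;
if (isOnline()) {  // isOnline() is a hypothetical connectivity check
    // Online: read the value from the variation and cache it for later.
    newText = variation.getVariableAsString("varNewText", "ABTest Default");
    prefs.edit().putString("varNewText", newText).apply();
} else {
    // Offline: fall back to the last cached value.
    newText = prefs.getString("varNewText", "ABTest Default");
}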

Q: If we’re using A/B Testing in our Google Play apps, does the Amazon Appstore app need to be installed on the device (for DRM validation), or is adding the Insights JAR file to our Google Play app enough?
A: The Amazon Appstore does not need to be installed for A/B Testing to work. Adding the Insights JAR file is sufficient.

Q: When I was A/B testing, I noticed delays in getting the variables in my app as well as in viewing the results in the portal. Why is there a delay? What prevents real-time measurements on the portal?
A: The call to the Insights service updates asynchronously, and results are not posted to the server in real time. It is reasonable to see some delay before the results appear in the portal. When offline, the service batches all collected data and saves it until the customer’s device connects with our service.

 

Don’t miss out on our next webinar event: 
Tips & Tricks: How To Test, Submit, and Earn Revenue with Your HTML5 Mobile Web Apps
on November 7th, 2013 @ 10:00 AM
Pre-register here!
 

 

May 14, 2013

Mike Hines

As developers, we’re occasionally (okay, maybe more than occasionally) stuck in the middle between designers who ‘know what works’ and executives who ‘know what they want.’ Even in smaller shops, it may not be clear which user experience will more often result in the desired behavior. Beyond simple usage tracking, testing two different options to determine which works better usually meant building two separate APKs and doing a lot of data mining and management to see which performed best.

Amazon has changed that with the release of the A/B Testing Service, which lets developers run experiments within a single APK. You define the variables to test in each of two variations, then decide what percentage of downloads will get each variation. The app collects data and allows you to make an informed decision about which variation to enable. The variations could be anything from the speed of the ball in a game to the message displayed while upselling an in-app purchase item like extra lives. The A/B Testing Service is easy to configure and integrate with your app, and it’s free for any developer distributing apps through the Amazon Mobile App Distribution Program for Android.

In this post, you will learn how to integrate A/B testing into your app. For our example, we will use the “Snake Game”. In the traditional game, the speed of the snake increases every time it is fed. We will run tests to figure out the optimal speed increment, ensuring that the game is neither too easy nor too hard and that the player stays engaged. In our case, a successful test would be if 70%–73% of players are able to feed the snake 20 times before it collides with the boundary or with itself. This will give us objective data on whether the increment is too large, too small, or just right.

Creating your test

Once you have identified your test, you can create an A/B test from the Mobile App Distribution page.

In our example, we will create a project called “Snake Speed Project” and an A/B test called snakeSpeedTest. We will use this to test out various increments in the speed of the snake until we find the optimal one.

To configure an A/B test you will need the following information:

  1. Name of the test
  2. Event name to count the number of views
  3. Event name to count the number of conversions
  4. Variable name for each variation
    1. Variation A
    2. Variation B
  5. Distribution percentage

In our example, the test would look like the screenshot below:

[Screenshot: A/B Testing setup form]

For more details on how to set up an A/B test, please visit the startup guide.

Integrating the API

Now that you have a test set up in the Mobile App Distribution page, you’re ready to integrate it into your application. For this, you will need to download the SDK.

After downloading the SDK you will need to integrate it into your project. For more information on how to set up your project, please visit Integrate the SDK.

To initialize the Amazon Insights SDK, you will need to obtain the following from the Mobile App Distribution page:

  1. Application key – which can be retrieved by going to your “My Apps” dashboard and selecting the app. The Application Key is one of the properties listed under General Information.
  2. Private key – which can be retrieved by going to the app’s A/B Testing page and clicking “View Private Key”.

You can now initialize the SDK using these two keys.

// Configure the Amazon Insights SDK
AmazonInsights
    .withApplicationKey(YOUR_APPLICATION_KEY)
    .withPrivateKey(YOUR_PRIVATE_KEY)
    .withContext(getApplicationContext())
    .initialize();

Now that your application is initialized, you can start receiving different variations for your test. In our case, it is the increment by which to increase the snake speed. 

// Get the variation allocated for the "Snake Speed Project" experiment and
// retrieve the speed variable.

ABTest
    .getVariationsByProjectNames("Snake Speed Project")
    .withVariationListener("Snake Speed Project", new VariationListener() {
        public void onVariationAvailable(Variation variation) {
            // Fall back to "1.0" (no speed change) if no variation is available.
            speedMultiplier = variation.getVariableAsString("speedMultiplier",
                "1.0");
            // ... increase speed.
        }
    });

After you have successfully retrieved the variation, you need to notify the Amazon A/B Testing Service of a successful view. You can do that by adding the following code. (Note that snakeSpeedIncremented is the same event we added in the A/B testing portal page for counting views.)

// record a view when the snake is fed for the first time
CustomEvent
    .create("snakeSpeedIncremented")
    .record();

Once the game ends, either by the snake colliding with the boundary or with itself, we check how many times it was fed. If the snake was fed at least 20 times, we record a successful conversion. (Note: snakeLevel20Reached is the same event we added in the A/B testing portal page for counting conversions.)

// record a conversion if the snake was fed at least 20 times.
if (noOfFeeds >= 20) {
    CustomEvent
        .create("snakeLevel20Reached")
        .record();
}

Once you have incorporated the SDK and deployed your app through the Amazon Mobile App Distribution Program, you can start collecting data.

In our case, we determined that 95% of players reached level 20 for both test increments, which suggests the gameplay was easier than our target. We ran additional rounds of tests with new increments and found that a 1.65 multiplier produced the optimal level of difficulty, with a conversion rate of around 71%. Refining the increments for new rounds of testing can be done right from the A/B test page; no new APK is required.

The Start your first A/B test guide tells you how you can start an A/B test, view results, and end a test.

As you can see, setting up and integrating Amazon’s A/B Testing Service is simple and straightforward. The best part is that it’s free for developers of the Amazon Mobile App Distribution Program.

December 05, 2012

Amazon Mobile App Distribution Program

Today, we announced a new, free A/B Testing service for developers like you who distribute their app or game through the Amazon Mobile App Distribution Program. This service was built to help you improve your customer retention and monetization through in-app experimentation. Amazon’s A/B Testing service is easy to integrate, simple to control, and is built on Amazon Web Services. This means you can be up and running in less than a day and trust that the service will scale with your app.

When we set out to build an A/B Testing service, we met with developers to learn what they needed most. We discovered that it was something very simple: to better understand customer needs and to be able to react to those needs quickly. Our A/B Testing service provides simple-to-integrate tools that enable you to continually create and run experiments, view how customers are reacting to these experiments, and release new, improved experiences without writing any more code or resubmitting your game or app.

 

The service’s benefits include:

  • Free to Use: our A/B Testing service is free to use for developers distributing their app or game through the Amazon Mobile App Distribution Program.
  • Easy Integration: early partners report that the SDK can be integrated and ready for release in less than a day.
  • Precise Control: set up experiments and monitor results from the familiar Mobile App Distribution Portal.
  • Painless Deployment: server-side logic allows you to quickly iterate tests and deploy new, improved experiences to customers without having to resubmit your APK or write any additional client-side code.
  • Effortless Scaling: built on Amazon Web Services, Amazon’s A/B Testing service lets you focus on building great games and apps instead of architecting scalable back-end services.

 

With Amazon’s A/B Testing service you no longer need to guess when deciding between different customer experiences. You can evaluate which in-game promotion drives better performance, which button design maximizes customer click-through, or which tutorial offers the highest conversion rate.

 

The Amazon A/B Testing service is currently available in beta. Learn more and get started with A/B Testing here.
