
Amazon Developer Blogs

Showing posts tagged with App Testing Service

October 07, 2014

Paul Cutsinger

It started with a 90-second test to see if your Android app is ready to launch in the Amazon Appstore.

Then we added support for Fire phone and screenshots captured on actual devices.

Now you can get device test results even if you don’t have a developer account. So grab your APK and get test results in 90 seconds.

Test your apps for Fire Tablets, Fire Phone and Appstore for Android in just a few minutes. 75% of existing apps and games we've tested require no changes before going live. You can find out whether your app has any of the common issues that can block publication on the Amazon Appstore. Our App Testing Service also gives you access to additional test results that show you how your app looks and performs on live devices. Start the test here.

@PaulCutsinger

January 13, 2014

Peter Heinrich

Closely following the launch of the A/B Testing service for iOS, Android, and Fire OS apps, we have just released an update addressing one of our most popular feature requests. You can now track up to ten goals in a single A/B test, which means you can see how your experiment affects up to ten metrics at once. This is especially powerful when the metrics aren’t entirely independent and it would be difficult to create A/B tests to isolate them from each other. Let me illustrate with an example.

Say you have a mobile game that generates revenue using a combination of in-app purchasing (IAP) and mobile ads. You know that player engagement is the key to monetization, so you decide to test a hunch that more challenging levels will keep players in the game longer.

You create an A/B test project for your app, adding an experiment that allows you to adjust the overall difficulty of each level. Since you can have up to five variations for each test (see A/B/n testing for more information), you decide to measure player engagement when the game is much harder, slightly harder, slightly easier, and much easier than normal. “Normal” will be a variation of its own, called the Control.

In this case, you create a test variable called difficultyMultiplier, which your code can access and use to modify its behavior for each user. For the control group (60% of players in this example), difficultyMultiplier is 1.00, indicating no change from the default difficulty. The other groups see a slightly different value for difficultyMultiplier, depending on how hard the game should be for those players.
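
In code, that could look roughly like the sketch below. This is a minimal illustration, not the A/B Testing SDK’s actual API: the Variation interface and getVariableAsDouble call are hypothetical stand-ins for however your integration exposes the assigned variation.

```java
// Minimal sketch (hypothetical types): reading difficultyMultiplier for the current player.
public class DifficultyTuner {

    /** Hypothetical stand-in for a variation returned by the A/B Testing service. */
    public interface Variation {
        double getVariableAsDouble(String name, double defaultValue);
    }

    private final Variation variation;

    public DifficultyTuner(Variation variation) {
        this.variation = variation;
    }

    /** Scales a level's base difficulty by the multiplier assigned to this player. */
    public double effectiveDifficulty(double baseDifficulty) {
        // Control group (60% of players) gets 1.00; the four test groups get
        // values such as 0.80, 0.90, 1.10, or 1.25 depending on the variation.
        double multiplier = variation.getVariableAsDouble("difficultyMultiplier", 1.00);
        return baseDifficulty * multiplier;
    }
}
```

Defaulting to 1.00 means a player simply falls back to control behavior if no variation can be fetched.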

To measure the effect of changing this variable, you define a view event and a conversion event, which your code records as they happen and reports to the A/B Testing service. For the purposes of this test, you consider it a view whenever a player starts a new game session. A conversion is registered if the player then plays for five minutes or more. The A/B Testing service tabulates these events by variation and reports on the conversion rate for each group of users.
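
As a sketch, recording those events might look like this; EventRecorder and its two methods are hypothetical placeholders for the calls your analytics or A/B client actually provides, not the service’s real method names.

```java
// Sketch of session tracking with a hypothetical EventRecorder interface.
public class SessionTracker {

    public interface EventRecorder {
        void recordViewEvent(String name);
        void recordConversionEvent(String name);
    }

    private static final long FIVE_MINUTES_MS = 5 * 60 * 1000L;

    private final EventRecorder recorder;
    private long sessionStartMs;
    private boolean converted;

    public SessionTracker(EventRecorder recorder) {
        this.recorder = recorder;
    }

    /** Call when the player starts a new game session. */
    public void onSessionStart(long nowMs) {
        sessionStartMs = nowMs;
        converted = false;
        recorder.recordViewEvent("gameSessionStarted");       // the "view" for this test
    }

    /** Call periodically (for example, once per second) while the session is running. */
    public void onTick(long nowMs) {
        if (!converted && nowMs - sessionStartMs >= FIVE_MINUTES_MS) {
            converted = true;
            recorder.recordConversionEvent("playedFiveMinutes");  // the conversion
        }
    }
}
```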

Say you run the experiment and discover your hunch was right: harder levels are played longer, leading to an increase in the average amount of time players engage with your game. The logical next step would be to ratchet up game difficulty. But what if improved engagement isn’t the whole story? Changing the difficulty may affect other metrics you care about, but you can’t always tell based on a single type of conversion event. For example, how does this change the way people share their progress on Facebook, a major customer acquisition channel? How does it impact ad click-through rates? Does it impact how users rate the game? Setting multiple goals can help you detect such unintended consequences and choose the variation that delivers balanced results.

Now that the latest version of the A/B Testing service allows a single view event to be associated with up to ten different conversion events (goals), you can measure and compare the impact of each variation along more than one axis. Each goal can be maximized or minimized independently. In this example, you might try to maximize game sessions, in-app purchases, ad clicks, and Facebook shares while minimizing one-star reviews, all in the same experiment.
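
In practice, each goal is just a differently named conversion event reported from the relevant place in your app, all tied to the same session view. Reusing the hypothetical EventRecorder from the earlier sketch, with made-up goal names:

```java
// Illustrative only: one conversion event name per goal, reported where it happens.
public class GoalReporter {

    private final SessionTracker.EventRecorder recorder;

    public GoalReporter(SessionTracker.EventRecorder recorder) {
        this.recorder = recorder;
    }

    public void onInAppPurchaseCompleted() { recorder.recordConversionEvent("iapCompleted"); }

    public void onAdClicked()              { recorder.recordConversionEvent("adClicked"); }

    public void onFacebookShare()          { recorder.recordConversionEvent("facebookShare"); }

    public void onOneStarReview()          { recorder.recordConversionEvent("oneStarReview"); } // goal to minimize
}
```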

When generating reports, the A/B Testing service includes the results for all goals associated with an experiment, organized by variation. The service highlights the “best” variation with respect to each goal, so you can tell at a glance which one resulted in the most game sessions, for example (Variation C), or maximized shares on Facebook (Variation A).

When goals overlap or depend on one another, as they do here, there may be no single variation that definitively “wins” every goal. Such a report, however, can help you make an educated choice, weighing the trade-offs of each alternative. In this case, Variation B looks like a good candidate since it succeeded in minimizing one-star reviews and came close to winning several other goals as well. When you look at the big picture, Variation B appears to have the best performance overall.

The orange checkmarks in the report indicate which results achieved statistical significance, that is, where there are enough measurements to be confident that the observed change is actually due to the test variation. More details are available for each individual goal, so you can drill down, for example, on the ad clicks associated with each variation.
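
The service performs this calculation for you; purely as an illustration of the kind of check involved, the sketch below runs a standard two-proportion z-test comparing a variation’s conversion rate against the control (a z value of 1.96 or more corresponds to roughly 95% confidence). It is not the service’s actual algorithm.

```java
// Rough illustration of a significance check: a two-proportion z-test.
public final class SignificanceCheck {

    /** Returns true if the difference in conversion rates is significant at ~95% confidence. */
    public static boolean isSignificant(long controlViews, long controlConversions,
                                        long variationViews, long variationConversions) {
        double controlRate   = (double) controlConversions / controlViews;
        double variationRate = (double) variationConversions / variationViews;
        // Pooled conversion rate under the null hypothesis of "no difference".
        double pooled = (double) (controlConversions + variationConversions)
                / (controlViews + variationViews);
        double standardError = Math.sqrt(pooled * (1 - pooled)
                * (1.0 / controlViews + 1.0 / variationViews));
        double z = Math.abs(variationRate - controlRate) / standardError;
        return z >= 1.96;
    }

    public static void main(String[] args) {
        // Example: 10,000 control views with 1,200 conversions vs. 2,000 variation views with 280.
        System.out.println(isSignificant(10_000, 1_200, 2_000, 280));   // prints true (z ≈ 2.5)
    }
}
```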

With the addition of up to ten goals for a single experiment, the A/B Testing service expands its flexibility and becomes an even more powerful tool for refining your app and optimizing it based on customer behavior. For more information on A/B testing, multiple goals, and how you can incorporate them into your mobile app or game, check out the online documentation.

 

November 07, 2013

Mike Hines

Now is one of the best times of year to submit your apps to the Amazon Appstore and have them published for Android phones and tablets, including the new Kindle Fire line of tablets. In 2012, we saw a 50% increase in the number of app downloads during Thanksgiving week compared to an average week. During ‘Digital Week’ in 2012, the week after Christmas, customers purchased and downloaded 600% more apps than in an average week.

As we’ve noted in earlier blog posts, it’s easy to get started as 75% of the Android tablet apps we’ve tested already work on Kindle Fire without any extra development. Amazon also has a tool that quickly ensures your apps have the best chance of passing both Amazon Appstore and Kindle Fire compatibility testing. Even if you’ve already got an app in the Amazon Appstore, you can use this service to check out any updates you plan on submitting.

The testing tool works fast and screens your apps for potential errors or incompatibilities. For example, you’ll learn:

  • If there are structural issues with your implementation of Amazon APIs
  • If you are using any libraries that might impact compatibility
  • If your app has features that are not supported by some Kindle Fire devices

If an issue is found, you will also get suggestions for fixing the problem. Note that the tool is not designed to replace debugging in your IDE; it won’t find null pointer exceptions or similar coding errors.

To get started, you can find the tool in the SDK & Tools area of the Developer Portal where there is now a link for the App Testing Service.

The App Testing Service detail page includes a brief description of the tool and a button that initiates the App Testing Service. Clicking that button brings you to the app testing page, which contains a control into which you can drag your .apk.

To give you a sense of the experience you can expect, I’ll walk you through the short process. I started with a small quotation app that I created and dragged it into the tool. The testing was complete in under a minute and my app passed. The tool then displayed a ‘Submit to Amazon Appstore’ button that I could use to start the app submission process.

Next, I tested a version of the same app that used Google In-App Billing instead of Amazon In-App Purchasing. The tool caught that error, correctly identified the issue, and offered suggestions for fixing my app.

Another test result flagged an error in an In-App Purchasing implementation.

Once the test is complete, you can find the results of this and all your previous tests in a table at the bottom of the app testing page. This lets you go back and revisit earlier issues and recommendations across all the apps you have tested.

So don’t miss out on getting in front of all those customers during the holiday season. Save time: go to https://developer.amazon.com/tya/welcome.html and make sure your apps are ready to submit to the Amazon Appstore.