Starting today, developers can test live Alexa skills with the same toolset used to test skills during development. As Alexa gets smarter and new capabilities are introduced, it’s always good to run ongoing regression tests after publishing your skill to ensure customers continue to have the great voice experience you have designed.
In the past, testing tools were only available in the developer console during skill development. Now you can debug skills and test after publishing your skill directly from the developer console, allowing you to deliver a consistent skill experience over time. With the ability to test live skills, you can simulate and reproduce any issues reported by customers. Live skill testing also allows you to set up automated test cases for production environments through the Skill Management API (SMAPI), which will report if there are any changes in expected skill behavior or the customer experience over time.
In this post, we walk you through three ways to test your live skill: in the developer console, through the Alexa Skills Kit (ASK) Command-Line Interface (CLI), and through the Simulation and Invocation SMAPI APIs.
The first place you can test live skills is in the developer console. In the “Test Tab,” select “Live” as your skill stage and begin to test your published skill.
After you select the live skill to test, you can debug customer-reported issues or investigate feedback using the Alexa simulator with the provided JSON responses.
If you need to dive deeper into a reported issue, the “Device Logs” option provides further insight into your skill, including information on the time it took a skill to respond and the list of directives being sent between requests. If you are a smart home developer, review the Device Change Reports for live debugging of smart home devices’ interaction with your published skill.
You can test only one skill stage at a time, either development or live: enabling testing for the live version of your skill disables testing for the development version. Once you enable live testing in the developer console, your live skill becomes available for testing in the developer console and on devices. After you make the “Live” selection, the testing website redirects to the live-stage URL and all session and testing information is reset, so save anything you want to preserve before proceeding. Switching stages also generates a new user ID, which you can use to simulate a first-time user experience. If you configured a specific AWS Lambda function to serve the live version of your skill, that function is also used while testing the live version.
To test a published skill with the simulation, invocation, or dialog ASK CLI commands, set “live” as the stage in the format below. Commands default to the “development” stage if no stage is specified.
The dialog command format is:

ask dialog [-s|--skill-id <skill-id>] [-l|--locale <locale>] [-g|--stage <stage>] [-r|--replay <file-path>] [-o|--output <file-path>]
When using the dialog CLI command with the ASK Toolkit for Visual Studio Code, you can now have a multi-turn conversation with your published skill from within the integrated development environment (IDE). This returns the JSON responses and debugging information in the output file for further investigation.
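As a quick illustration of the stage flag described above, here is a small Python sketch that assembles the argument list for an `ask dialog` session against the live stage. The skill ID is a placeholder, and `build_dialog_command` is a hypothetical helper, not part of the ASK CLI.

```python
# Hypothetical helper: build the argv for an `ask dialog` session against the
# live stage. The skill ID below is a placeholder, not a real skill.
def build_dialog_command(skill_id, locale="en-US", stage="live"):
    """Return the ask CLI argument list for an interactive dialog session."""
    return [
        "ask", "dialog",
        "--skill-id", skill_id,
        "--locale", locale,
        "--stage", stage,  # the CLI defaults to "development" when omitted
    ]

cmd = build_dialog_command("amzn1.ask.skill.example-id")
print(" ".join(cmd))
```

If the ASK CLI is installed and configured, you could hand the same list to `subprocess.run(cmd)` to start the session.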
The “enable-skill” CLI command now allows you to enable your live skill for testing, so you can switch between testing the development and live versions of your skill without having to leave the ASK CLI. This setting will persist and allow you to test your live skill in the Developer Console and devices.
For details on simulation, invocation, dialog, and other ASK CLI commands, visit the ASK CLI Command Reference documentation.
You can now use version 2 of the Simulation and Invocation SMAPI APIs to test your published skills. These APIs let you programmatically perform Alexa skill management tasks; in this case, they can simulate skill execution and invoke your HTTPS endpoint (Lambda or otherwise). Using these APIs, you can create automated tests that safeguard your published skill from regressions. With the Invocation API, you can test your skill endpoint in isolation to verify that expected JSON responses are returned and that your endpoint latency stays below an expected threshold. Similarly, the Simulation API allows you to perform end-to-end tests that ensure your skill’s interaction model and endpoint continue to work as expected.
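To make the Simulation API concrete, the sketch below builds a v2 simulation request for a live skill. The skill ID is a placeholder, and the endpoint path and payload fields are assumptions based on the SMAPI documentation, so verify them against the Skill Testing Operations reference before use.

```python
# Sketch of a SMAPI v2 Simulation API request for live-skill testing.
# The skill ID is a placeholder; the path and payload shape are assumptions
# drawn from the SMAPI docs and should be checked against the reference.
import json

SKILL_ID = "amzn1.ask.skill.example-id"  # placeholder skill ID
STAGE = "live"                           # test the published (live) stage

def simulation_request(utterance, locale="en-US"):
    """Build the request path and JSON body for a v2 simulation call."""
    path = f"/v2/skills/{SKILL_ID}/stages/{STAGE}/simulations"
    body = {
        "session": {"mode": "DEFAULT"},   # start a fresh session
        "input": {"content": utterance},  # the text a customer would say
        "device": {"locale": locale},
    }
    return path, body

path, body = simulation_request("open my example skill")
print(path)
print(json.dumps(body, indent=2))
```

You would POST this body to the path (with SMAPI authentication) and then poll the returned simulation ID for the result; the same pattern applies to the Invocation API with its own path and payload.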
For details on the Simulation and Invocation APIs, visit the Get Started with Skill Testing Operations page.
Testing your skill after it has been published allows you to proactively validate that customers are receiving a consistent experience. Whether you use the developer console, the ASK CLI, or the SMAPI APIs, you now have the ability to test your published skill.
Get started by creating automated regression tests that continually run against your published skill. Visit our testing documentation page to start testing your live skill today. To see examples of how to build unit tests for an Alexa skill, visit our GitHub repository.
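A regression test of the kind described above can be sketched as follows. Here `fetch_skill_response` is a stub standing in for a real Simulation or Invocation API call, and the latency budget is an assumed value, so the example is runnable on its own.

```python
# Minimal regression-test sketch. fetch_skill_response is a stub standing in
# for a call to the Simulation or Invocation API, so the example runs offline.
import time

LATENCY_THRESHOLD_MS = 1000  # assumed response-time budget for the endpoint

def fetch_skill_response(utterance):
    """Stub: a real test would call SMAPI here and return the skill response."""
    return {"outputSpeech": "Welcome to the example skill.",
            "shouldEndSession": False}

def test_launch_response():
    start = time.monotonic()
    response = fetch_skill_response("open my example skill")
    elapsed_ms = (time.monotonic() - start) * 1000
    # Assert both the expected content and the latency budget.
    assert "Welcome" in response["outputSpeech"]
    assert elapsed_ms < LATENCY_THRESHOLD_MS

test_launch_response()
print("regression check passed")
```

Scheduling a suite of checks like this against the live stage will surface changes in expected skill behavior before customers report them.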
For more resources on Alexa skill testing, check out the following blogs: