Add APL Support to Your Skill

To add APL support to your skill, your skill service must detect whether the customer's device supports APL, return a RenderDocument directive when it does, and listen for APL events. In addition, you must configure your skill to support the ALEXA_PRESENTATION_APL interface.

Configure your skill to support the ALEXA_PRESENTATION_APL interface

You can configure your skill to support the ALEXA_PRESENTATION_APL interface through the developer console or through ASK CLI.

Use the developer console

This section describes how to use the developer console to configure your skill to support Alexa Presentation Language.

The process to support the ALEXA_PRESENTATION_APL interface and to enable the Alexa.Presentation.APL.RenderDocument directive, which is the directive that displays APL content on the screen, is the same for a new skill and for an existing skill.

1. Open the developer console, and click Edit for the skill you want to configure.

2. Navigate to the Build > Custom > Interfaces page.

3. Enable the Alexa Presentation Language option. Click Build Model to re-build your interaction model.

4. In your skill service code, determine which interfaces the customer's device supports, so that your skill service can return responses with appropriately rendered content, including visual content when the device has a screen. To determine the supported interfaces, parse the value of event.context.System.device.supportedInterfaces in the Alexa request, as shown in the sketch after this list.

5. If the ALEXA_PRESENTATION_APL interface is supported, include the Alexa.Presentation.APL.RenderDocument directive in your skill responses to display content on the screen, just as you would include any other directive.

6. If the ALEXA_PRESENTATION_APL interface is supported, you can also use the Alexa.Presentation.APL.ExecuteCommands directive in your skill responses to execute APL commands. APL commands are messages that change the visual or audio presentation of an APL document on screen.
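The following is a minimal sketch of the check in step 4, written as a plain JavaScript helper; the function name supportsApl is illustrative, and event is the raw Alexa request JSON that your skill service receives. The later sections of this topic show how to return the RenderDocument directive itself.

// Returns true when the customer's device reports APL support.
// supportsApl is an illustrative name; event is the raw Alexa request JSON.
function supportsApl(event) {
    const device = event.context.System.device;
    // supportedInterfaces lists each interface the customer's device supports.
    return Boolean(device
        && device.supportedInterfaces
        && device.supportedInterfaces['Alexa.Presentation.APL']);
}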

Use ASK CLI

This section describes how to use ASK CLI, rather than the developer console, to configure your skill to support Alexa Presentation Language.

For general information about how to manage your skill through ASK CLI rather than the developer console, see Quick Start Alexa Skills Kit Command Line Interface.

If you use ASK CLI to manage your skill, then you must add ALEXA_PRESENTATION_APL to the list of supported interfaces in the skill manifest as follows.

1. Run this command to download your skill manifest (save the output to a file named skill.json): ask api get-skill -s amzn1.ask.skill.<skillId>. See Get skill subcommand.

2. Edit the skill.json file, which is your skill manifest, and add ALEXA_PRESENTATION_APL to the list of interfaces, as shown in the Sample skill manifest and in the excerpt after this list.

3. Run this command to deploy the changed skill manifest: ask api update-skill -s amzn1.ask.skill.<skillId> -f skill.json. See Update skill subcommand.

4. To ensure you have correctly deployed the changed skill manifest, perform step #1 again to download the skill manifest. Open the skill.json file in your editor to confirm that the interfaces list contains ALEXA_PRESENTATION_APL.
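The relevant portion of the manifest looks similar to the following trimmed excerpt; surrounding properties are omitted, and your manifest contains additional settings such as the publishing information and endpoint.

{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          {
            "type": "ALEXA_PRESENTATION_APL"
          }
        ]
      }
    }
  }
}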

APL and other display options

In general, do not use APL with other display options.

Do not use Display.RenderTemplate and Alexa.Presentation.APL.ExecuteCommands together in the same skill response, as that may have unexpected results.

Detect Alexa Presentation Language support in a customer device

The supportedInterfaces property lists each interface that the device supports. The interface used to send APL documents to the device and to receive APL events from Alexa is named Alexa.Presentation.APL, as shown in the following LaunchRequest excerpt. See Request and Response JSON Reference.

The sample LaunchRequest includes Alexa.Presentation.APL as a supported interface, which indicates that the customer's device has a screen that supports APL.
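The following trimmed excerpt shows only the device portion of the request context; a real LaunchRequest contains additional properties such as the session and request objects, the deviceId value is a placeholder, and the interface entry can carry additional properties such as the supported APL runtime version.

{
  "context": {
    "System": {
      "device": {
        "deviceId": "amzn1.ask.device.sample-device-id",
        "supportedInterfaces": {
          "Alexa.Presentation.APL": {}
        }
      }
    }
  }
}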

Add APL support with the Node.js 2.0 SDK

See APL Support for the Node.js 2.0 SDK in Your Skill.
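The linked topic covers the SDK approach in detail. As a rough sketch of the pattern, assuming the ASK SDK v2 for Node.js (ask-sdk-core) with a placeholder handler, token, speech text, and APL document, a handler can check the request envelope for APL support and add the directive through the response builder:

// Sketch of an ASK SDK v2 (Node.js) skill that adds the RenderDocument
// directive only when the device reports APL support. The handler, token,
// speech text, and APL document contents are placeholders.
const Alexa = require('ask-sdk-core');

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        const device = handlerInput.requestEnvelope.context.System.device;
        const supportsApl = Boolean(device
            && device.supportedInterfaces
            && device.supportedInterfaces['Alexa.Presentation.APL']);

        const responseBuilder = handlerInput.responseBuilder
            .speak('Welcome to the sample skill.');

        if (supportsApl) {
            // The directive object follows the same shape shown in the
            // JSON response example later in this topic.
            responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: 'welcomeToken',
                document: {
                    type: 'APL',
                    version: '1.4',
                    mainTemplate: {
                        items: [{ type: 'Text', text: 'Welcome to the sample skill.' }]
                    }
                }
            });
        }

        return responseBuilder.getResponse();
    }
};

exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(LaunchRequestHandler)
    .lambda();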

Add APL support to your skill without Node.js 2.0 SDK support

If you are creating an Alexa skill without using the Node.js 2.0 SDK, you must ensure that your skill service returns a RenderDocument directive. See Request and Response JSON Reference.

Your skill response should include the Alexa.Presentation.APL.RenderDocument directive, as shown in the following example:
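This is a trimmed sketch rather than a complete response; the token, speech text, and APL document are placeholders, and the full response format is described in the Request and Response JSON Reference.

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to the sample skill."
    },
    "directives": [
      {
        "type": "Alexa.Presentation.APL.RenderDocument",
        "token": "welcomeToken",
        "document": {
          "type": "APL",
          "version": "1.4",
          "mainTemplate": {
            "items": [
              {
                "type": "Text",
                "text": "Welcome to the sample skill."
              }
            ]
          }
        },
        "datasources": {}
      }
    ],
    "shouldEndSession": false
  }
}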

Listen for APL UserEvents from Alexa

To receive events raised by an APL document (for example, events generated by the SendEvent command), your skill must handle the Alexa.Presentation.APL.UserEvent request.

Sample Alexa.Presentation.APL.UserEvent skill request
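The following is a trimmed sketch of such a request; the request identifier, timestamp, token, arguments, and source values are placeholders, and the exact contents of arguments and source depend on the SendEvent command in your APL document. See Request and Response JSON Reference for the full format.

{
  "request": {
    "type": "Alexa.Presentation.APL.UserEvent",
    "requestId": "amzn1.echo-api.request.sample-request-id",
    "timestamp": "2023-01-01T00:00:00Z",
    "locale": "en-US",
    "token": "welcomeToken",
    "arguments": [
      "buttonPressed"
    ],
    "source": {
      "type": "TouchWrapper",
      "handler": "Press",
      "id": "sampleButton"
    }
  }
}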

Make APL resources available with CORS (Cross-Origin Resource Sharing)

You can create a skill that references APL resources hosted on an HTTPS endpoint. This endpoint must meet these requirements:

  • The endpoint must provide an SSL certificate signed by an Amazon-approved certificate authority. Many content hosting services provide this. For example, you could host your files at a service such as Amazon Simple Storage Service (Amazon S3) (an Amazon Web Services offering).
  • The endpoint must allow cross-origin resource sharing (CORS) for these resources.

To enable CORS, the resource server must set the Access-Control-Allow-Origin header in its responses. To restrict the resources to just Alexa, allow only origins that match *.amazon.com.

If your resources are in an Amazon S3 bucket, you can configure your bucket with the following CORS configuration:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>

For more about S3 and CORS, see Enabling Cross-Origin Resource Sharing.