
Add APL Support to Your Skill

To successfully add APL support to your skill, complete the tasks described in the following sections.

Configure your skill to support the ALEXA_PRESENTATION_APL interface

You can configure your skill to support the ALEXA_PRESENTATION_APL interface through the developer console or through ASK CLI.

Use the developer console

This section describes how to use the developer console to configure your skill to support Alexa Presentation Language.

The process to support the ALEXA_PRESENTATION_APL interface, and therefore to enable the Alexa.Presentation.APL.RenderDocument directive that displays APL content on a screen, is the same for new skills and existing skills.

1. Open the developer console, and click Edit for the skill you want to configure.

2. Navigate to the Build > Custom > Interfaces page.

3. Enable the Alexa Presentation Language option, and then click Build Model to rebuild your interaction model.

4. In your skill service code, check which interfaces the customer's device supports, so that your skill service can return appropriately rendered responses, including visual content where appropriate. To determine the supported interfaces, parse the value of event.context.System.device.supportedInterfaces in the Alexa request; this object lists the interfaces the customer's device supports. A sketch of this check follows these steps.
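For illustration, the check can be a simple lookup for the Alexa.Presentation.APL key in that object. The following TypeScript sketch assumes a raw request envelope with no SDK; the type and function names are illustrative and are not part of any Alexa library.

// Trimmed request envelope type: only the fields used by this check are declared.
interface RequestEnvelope {
  context: {
    System: {
      device?: {
        supportedInterfaces?: Record<string, unknown>;
      };
    };
  };
}

// Returns true when the requesting device declares APL support.
function supportsApl(requestEnvelope: RequestEnvelope): boolean {
  const supportedInterfaces =
    requestEnvelope.context.System.device?.supportedInterfaces ?? {};
  return 'Alexa.Presentation.APL' in supportedInterfaces;
}

Your skill service can then include the Alexa.Presentation.APL.RenderDocument directive only when supportsApl returns true, and fall back to a voice-only response otherwise.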

Use ASK CLI

This section describes how to use ASK CLI, rather than the developer console, to configure your skill to support Alexa Presentation Language.

For general information about how to manage your skill through ASK CLI rather than the developer console, see Quick Start Alexa Skills Kit Command Line Interface.

If you use ASK CLI to manage your skill, then you must add ALEXA_PRESENTATION_APL to the list of supported interfaces in the skill manifest as follows.

1. Run this command to download your skill manifest, and save the output to a file named skill.json: ask api get-skill -s amzn1.ask.skill.<skillId>. See Get skill subcommand.

2. Edit the skill.json file, which is your skill manifest, and add ALEXA_PRESENTATION_APL to the list of interfaces, as shown in the sample manifest excerpt after these steps.

3. Run this command to deploy the changed skill manifest: ask api update-skill -s amzn1.ask.skill.<skillId> -f skill.json. See Update skill subcommand.

4. To confirm that you deployed the changed skill manifest correctly, perform step 1 again to download the skill manifest. Open the skill.json file in your editor and confirm that the interfaces list contains an entry with the type ALEXA_PRESENTATION_APL.
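As referenced in step 2, a trimmed skill manifest excerpt with the APL interface added might look like the following; all other manifest fields are omitted for brevity.

{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          {
            "type": "ALEXA_PRESENTATION_APL"
          }
        ]
      }
    }
  }
}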

Detect Alexa Presentation Language support in a customer device

The supportedInterfaces property lists each interface that the device supports. The interface used to send APL documents and receive APL events from Alexa is named Alexa.Presentation.APL, as shown in the LaunchRequest example below. See Request and Response JSON Reference.

The following sample LaunchRequest excerpt includes Alexa.Presentation.APL as a supported interface, which indicates that the customer's device has a screen that supports APL.
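This is a trimmed sketch of such a request; the identifiers and timestamp are placeholders, and most envelope fields are omitted. For APL, the supportedInterfaces entry is an object that typically reports the maximum APL version the device supports.

{
  "version": "1.0",
  "context": {
    "System": {
      "device": {
        "deviceId": "amzn1.ask.device.EXAMPLE",
        "supportedInterfaces": {
          "Alexa.Presentation.APL": {
            "runtime": {
              "maxVersion": "1.4"
            }
          }
        }
      }
    }
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.EXAMPLE",
    "timestamp": "2023-01-01T00:00:00Z",
    "locale": "en-US"
  }
}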

Add APL support with the Node.js 2.0 SDK

See APL Support for the Node.js 2.0 SDK in Your Skill.

Add APL support to your skill without Node.js 2.0 SDK support

If you are creating an Alexa skill that supports the ALEXA_PRESENTATION_APL interface without using the Node.js 2.0 SDK, you must ensure that your skill service returns an Alexa.Presentation.APL.RenderDocument directive in its responses to display content on the screen as appropriate. See Request and Response JSON Reference.

If the ALEXA_PRESENTATION_APL interface is supported in your skill, you can use the Alexa.Presentation.APL.ExecuteCommands directive in your skill responses to execute APL commands. APL commands are messages that change the visual or audio presentation of an APL document on screen.
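For illustration, a sketch of an ExecuteCommands directive as it might appear in a response's directives array follows; the token must match the token of the APL document currently on screen, and the component ID is a placeholder that assumes the target Text component defines speech content for SpeakItem to read.

{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "launchToken",
  "commands": [
    {
      "type": "SpeakItem",
      "componentId": "helloTextComponent"
    }
  ]
}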

In general, do not use APL with other display options. Do not use Display.RenderTemplate and Alexa.Presentation.APL.ExecuteCommands together in the same skill response, as that may have unexpected results.

The following sample skill response includes an Alexa.Presentation.APL.RenderDocument directive.
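This is a trimmed sketch that assumes a minimal inline APL document with a single Text component; the token, speech, and text values are placeholders.

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to the example skill."
    },
    "directives": [
      {
        "type": "Alexa.Presentation.APL.RenderDocument",
        "token": "launchToken",
        "document": {
          "type": "APL",
          "version": "1.4",
          "mainTemplate": {
            "items": [
              {
                "type": "Text",
                "text": "Hello, world"
              }
            ]
          }
        },
        "datasources": {}
      }
    ],
    "shouldEndSession": false
  }
}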

Listen for APL UserEvents from Alexa

When listening for events raised by an APL document, a skill must listen for the Alexa.Presentation.APL.UserEvent request.

Sample Alexa.Presentation.APL.UserEvent skill request
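The following is a trimmed sketch of such a request, assuming a touch press that sends a single string argument; the identifiers and timestamp are placeholders, and the session and context objects are omitted.

{
  "version": "1.0",
  "request": {
    "type": "Alexa.Presentation.APL.UserEvent",
    "requestId": "amzn1.echo-api.request.EXAMPLE",
    "timestamp": "2023-01-01T00:00:00Z",
    "locale": "en-US",
    "token": "launchToken",
    "arguments": [
      "orderButtonPressed"
    ],
    "source": {
      "type": "TouchWrapper",
      "handler": "Press",
      "id": "orderButton"
    }
  }
}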

Make APL resources available with CORS (Cross-Origin Resource Sharing)

If your skill references APL resources hosted on an HTTPS endpoint, you must ensure that this endpoint meets these requirements:

  • The endpoint must provide an SSL certificate signed by an Amazon-approved certificate authority. Many content-hosting services provide this; for example, you could host your files on Amazon Simple Storage Service (Amazon S3), an Amazon Web Services offering.
  • The endpoint must allow cross-origin resource sharing (CORS) for the resources.

To enable CORS, the resource server must set the Access-Control-Allow-Origin header in its responses. To restrict the resources to just Alexa, limit the allowed origin to *.amazon.com.

If your resources are in an Amazon S3 bucket, you can configure your bucket with the following CORS configuration:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>

For more about S3 and CORS, see Enabling Cross-Origin Resource Sharing.