Understand Alexa Presentation Language (APL)
With Alexa Presentation Language (APL), you can create visual experiences to accompany your skill. Users can see and interact with your visual experiences on supported devices, such as the Echo Show, Fire TV, and some Fire tablets. You can include animations, graphics, images, slideshows, and video in your visual experience.
APL and skill flow
All custom skills use a request and response interface. Alexa sends your Lambda function or web service a request. Your skill handles this request and returns a response. APL works within this familiar framework:
- Your skill gets a normal LaunchRequest or IntentRequest to start an interaction with the user.
- In your skill response, you can return directives to tell the device to display your APL content and run commands. The Alexa.Presentation.APL interface defines these directives. The payload of the directive includes an APL document, which is a structure that defines how you want to display your content on the viewport. The payload also includes a data source, which provides the specific data to display. For more information about both of these concepts, see What do you build to use APL in a skill?
- Your skill can listen for requests triggered by user actions, such as when the user selects a button on the screen. The Alexa.Presentation.APL interface also defines these request types. You create handlers in your skill code to accept and process these requests, similar to the handlers you create for your intents. Note: An APL document can also trigger commands that change the display without sending a request to your skill. For example, you can define a button on the screen that triggers video playback or an animation directly. In this case, Alexa doesn't need to send a request to your skill and then wait for a response.
- As with any skill, users can make voice requests, which are sent to your skill as a normal IntentRequest. For a good user experience, let users interact with your skill through both voice and touch rather than forcing them into a single input mode.
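In JSON terms, a skill response that includes an APL directive might look like the following sketch. The response envelope is the standard custom skill format; the token is an arbitrary identifier you choose, and the document and datasources payloads are shown empty here as placeholders.

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to Quick Trivia! Are you ready to play?"
    },
    "directives": [
      {
        "type": "Alexa.Presentation.APL.RenderDocument",
        "token": "welcomeToken",
        "document": {},
        "datasources": {}
      }
    ],
    "shouldEndSession": false
  }
}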
For example, a user might have the following interaction with an APL skill:
The user's device is an Echo Show.
User: Alexa, open Quick Trivia
The Echo Show displays a welcome animation as Alexa begins speaking.
Alexa: Welcome to Quick Trivia! Are you ready to play?
User: Yes.
The Echo Show displays a scrolling list of trivia categories.
Alexa: OK, please choose the Quick Trivia category you want to play! You can say a category or select an item on the screen.
User: (User touches the "Animals" category item on the screen. Alternatively, the user could have responded with an utterance such as "Play the animals category.")
The device displays the question over a related background image.
Alexa: OK, we'll do questions about animals. Here's your first question: text of the trivia question… (Alexa reads the question text and highlights each line)
After reading the question, the screen updates to show a list of possible answers to choose from.
The user responds to the question either by voice or by touching options on the screen.
The following table shows how the APL directives and requests are used in this interaction.
User interaction | Skill requests/responses |
---|---|
User: Alexa, open Quick Trivia | Your skill gets a normal LaunchRequest. Your skill returns a response with the welcome speech and a RenderDocument directive containing the welcome document. |
The Echo Show displays a welcome animation as Alexa begins speaking. | The Echo Show renders the document provided in your response. This document displays a welcome animation. When Alexa finishes speaking, the device opens the microphone to listen for the user's response. |
User: Yes. | Your skill gets a normal IntentRequest. Your skill returns a response with the category prompt and a RenderDocument directive containing the category list document. |
The Echo Show displays a scrolling list of trivia categories. | The Echo Show renders the document provided in your response. This document displays a list of game categories. When Alexa finishes speaking, the device opens the microphone to listen for the user's response. |
User: … (User touches the "Animals" category item on the screen.) | Your skill gets a UserEvent request that identifies the selected category. Your skill returns a response with a RenderDocument directive containing the question document. |
The device displays the question over a related background image. | The Echo Show renders the document to display the question text, then runs the SpeakItem command provided in your response to read the question aloud and highlight each line. |
After reading the question, the screen updates to show a list of possible answers to choose from. The user can respond to the question either by voice or by touching options on the screen. | When the SpeakItem command finishes, the document updates the display to show the answer choices, and the device opens the microphone to listen for the user's response. |
APL support on different types of devices
You can use APL to display content on both devices with screens, such as the Echo Show and Fire tablets, and devices with alphanumeric clock displays, such as the Echo Dot with clock:
- You can use all APL features to display content on devices with screens. APL provides full support for user interaction and rich content such as images, video, and animation.
- You can use a smaller set of APL features to display content on devices with character displays, such as the Echo Dot with clock. For these devices, APL supports showing alphanumeric data on the display. These devices also support unique features, such as marquee text, timers, and countdowns. See Understand Alexa Presentation Language and Character Displays.
The APL concepts are the same regardless of the device you target. Amazon recommends that you support both of these device categories, as supporting both lets your skill reach more users.
Fire tablets display your APL content in both Show mode and Tablet mode.
- In Show mode, the tablet works like an Echo Show. The tablet stays locked in landscape mode. Users can invoke your skill and see your APL content.
- In Tablet mode, normal tablet functions remain available. When a user invokes your skill, the tablet displays your APL content scaled down to the hub viewport and locked to a portrait orientation.
What do you build to use APL in a skill?
To use APL in your skill, you work with APL documents, data sources, commands, and the APL directives and requests.
The following sections provide a high-level overview of these key APL concepts and where they fit into your skill-building process.
Documents
An APL document is a JSON structure that defines a template to display on the viewport. The document controls the overall structure and layout of the visual response. An APL document combines multiple parts:
- An APL component is a primitive UI element that displays on the viewport, such as a simple text box. Components are the basic building blocks for constructing a document.
- A layout combines components into a reusable template and gives it a name. You can then place the layout in your document by referencing its name. Referencing a name results in a more modular design that is easier to maintain.
- A style assigns a name to a set of visual characteristics, such as color and font size. You can then assign the style to a component. Use styles to keep your visual design consistent.
- A resource is a named constant you can use rather than hardcoding values. For example, you could create a resource called myRed that defines a particular shade of red, and then use that resource name to specify a color for a component (see the sketch after this list).
- A package bundles together all of the above elements so that you can use them across multiple APL documents. You import the package into your document.
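For instance, a document fragment that defines the hypothetical myRed resource and a style that uses it might look like the following sketch. The resource name, hex value, and style name are illustrative, not part of any standard package.

{
  "resources": [
    {
      "colors": {
        "myRed": "#e52222"
      }
    }
  ],
  "styles": {
    "questionTextStyle": {
      "values": [
        {
          "color": "@myRed",
          "fontSize": "40dp"
        }
      ]
    }
  }
}

A component then opts into the style by setting "style": "questionTextStyle".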
An APL document can be simple or complex. For example, a simple document might use just the Text component to display plain text. The following JSON displays the text "Hello World!" in the center of the viewport:
{
"type": "APL",
"version": "1.5",
"description": "A simple hello world APL document.",
"settings": {},
"theme": "dark",
"import": [],
"resources": [],
"styles": {},
"onMount": [],
"graphics": {},
"commands": {},
"layouts": {},
"mainTemplate": {
"parameters": [
"payload"
],
"items": [
{
"type": "Text",
"height": "100vh",
"textAlign": "center",
"textAlignVertical": "center",
"text": "Hello World!"
}
]
}
}
APL is designed to encourage modular documents and reuse. The Alexa Design System for APL provides a set of responsive components and responsive templates. These combine APL components, styles, and resources into modular, responsive layouts you can use to more quickly build your document and ensure it works well on different viewports. You can also build your own reusable layouts and bundle them into packages to use across multiple documents.
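For example, a document built with the AlexaHeadline responsive template from the alexa-layouts package might look like the following sketch. The package version shown is an assumption; check the alexa-layouts documentation for the current version.

{
  "type": "APL",
  "version": "1.5",
  "import": [
    {
      "name": "alexa-layouts",
      "version": "1.2.0"
    }
  ],
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "items": [
      {
        "type": "AlexaHeadline",
        "primaryText": "Hello World!",
        "secondaryText": "Powered by the Alexa Design System"
      }
    ]
  }
}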
For details about these concepts:
- APL Document
- Build Responsive APL Documents
- Alexa Design System for APL
- Layout
- Style Definition and Evaluation
- Resources
- Package
- Component
Data sources, data binding, and data-binding syntax
APL supports data binding, which lets your document retrieve data from a separate data source that you provide. Data binding lets you separate your presentation logic (the APL document) from your source data.
In the earlier "Hello World" document example, the text to display is hard-coded in the document. This hard-coded text is fine for a simple example. For easier maintenance, you should put the data in a separate data source, and then point to that data source from the document. In this case, the document would look like the following example.
{
"type": "APL",
"version": "1.5",
"description": "A simple hello world APL document with a data source.",
"theme": "dark",
"mainTemplate": {
"parameters": [
"helloworldData"
],
"items": [
{
"type": "Text",
"height": "100vh",
"textAlign": "center",
"textAlignVertical": "center",
"text": "${helloworldData.properties.helloText}"
}
]
}
}
The value of the Text component's text property now contains an expression set off with a dollar sign ($) and curly brackets ({ }). This syntax is a data-binding expression. In this example, the expression ${helloworldData.properties.helloText} tells the document to retrieve the text to display from the data source called helloworldData. The text to display is in the properties.helloText property of helloworldData:
{
"helloworldData": {
"properties": {
"helloText": "Hello World!"
}
}
}
For details about data sources and data binding, see the APL data source and data-binding syntax documentation.
Commands
You use APL commands to do both of the following:
- Change the visual experience during runtime. For instance, the SetValue command can change the value of a component's property, which then changes the appearance or behavior of the component.
- Communicate with your skill's Lambda function or web service during the interaction. To do this, you use the SendEvent command in your document. This command tells Alexa to send your skill a UserEvent request, which your code can handle just as it handles other types of requests, such as IntentRequest. For more details about skill requests and responses related to APL, see Skill directives and requests.
The following example shows the SetValue command configured to change the text property of the component with the ID buttonDescriptionText.
{
"type": "SetValue",
"componentId": "buttonDescriptionText",
"property": "text",
"value": "You pressed the 'Click me' button!"
}
There are multiple ways to run APL commands:
- Some components have event handler properties that can trigger a command. For example, the TouchWrapper component has an onPress property to specify a command to run when the user selects the component on the screen. Similarly, the AlexaButton responsive component has a primaryAction property that specifies the command to run when the user selects the button.
- The APL document itself has an onMount property to run a command when the document loads. This property is useful for creating welcome animations that play when the user launches your skill.
- You can use a skill directive to send an APL command from your skill code. Skill directives are discussed more in Skill directives and requests.
For example, you could use the primaryAction property on an AlexaButton to trigger the SetValue command shown earlier, as in the following sketch.
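This sketch assumes your document imports the alexa-layouts package, which provides the AlexaButton responsive component. The id and buttonText values are placeholders.

{
  "type": "AlexaButton",
  "id": "clickMeButton",
  "buttonText": "Click me",
  "primaryAction": [
    {
      "type": "SetValue",
      "componentId": "buttonDescriptionText",
      "property": "text",
      "value": "You pressed the 'Click me' button!"
    }
  ]
}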
For details about commands, see the APL command documentation.
Skill directives and requests
Your Lambda function or web service communicates APL-related information with the directives and requests defined in the Alexa.Presentation.APL interface:
- You send the Alexa.Presentation.APL.RenderDocument directive to tell the device to display APL content. Include both the document and the associated data source (if applicable) as part of the directive.
- You send the Alexa.Presentation.APL.ExecuteCommands directive to send commands to the device. These commands typically reference specific parts of the document. For example, the SpeakItem command tells the device to speak the text defined with a particular component (such as a Text component). See the sketch after this list.
- You can use the SendEvent command in your document to send an Alexa.Presentation.APL.UserEvent request. The request tells your skill about user actions that take place on the device, such as when the user touches a button. Your code should include handlers to accept and process these types of events.
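For example, an ExecuteCommands directive that reads the question text aloud might look like the following sketch. The token must match the token you provided in the RenderDocument directive that displayed the document, and questionText is a hypothetical component ID.

{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "questionToken",
  "commands": [
    {
      "type": "SpeakItem",
      "componentId": "questionText"
    }
  ]
}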
Use the directives and requests to build a user interface that works with both voice and touch. For example, the earlier trivia interaction example described how the user could choose their trivia category by touching the screen or by speaking the category. To accomplish this task, each category shown on the screen is configured with the SendEvent command, which sends a UserEvent request. The skill's interaction model would also have a ChooseCategoryIntent intent with utterances such as "play the {category} category". The skill code then has a handler that listens for either the Alexa.Presentation.APL.UserEvent request or the IntentRequest for ChooseCategoryIntent and responds by selecting that category and starting the game.
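As a sketch of how the touch path might work, each category list item could run a SendEvent command with arguments that identify the selection. The argument values shown here are hypothetical.

{
  "type": "SendEvent",
  "arguments": [
    "categorySelected",
    "animals"
  ]
}

Alexa then sends your skill a UserEvent request that carries those arguments (trimmed here to the relevant fields), so your handler can read the arguments array to determine the chosen category.

{
  "request": {
    "type": "Alexa.Presentation.APL.UserEvent",
    "token": "categoryListToken",
    "arguments": [
      "categorySelected",
      "animals"
    ]
  }
}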
For more about the directives and requests, see:
- Use APL with the ASK SDK v2
- Alexa.Presentation.APL Interface
- Custom Skill Request and Response JSON Reference
Conditional logic and responsive documents
APL is built around conditional logic. You can create an APL document that displays content in different ways depending on the viewport characteristics or other factors. For example, you could create a list that displays as a continuously scrolling vertical list on larger screens, but presents one list item per page on small screens.
Every APL component and command has an optional when property that must be true or false. This value determines whether the device displays the component or runs the command. The when property defaults to true when you don't provide a value.
To use the when property, write a data-binding expression that evaluates to true or false. For example, the following statement evaluates to true when the device is a small, landscape hub.
"when": "${@viewportProfile == @hubLandscapeSmall}"
As noted previously, data-binding expressions always take the form ${expression}. In the previous example, the constants viewportProfile and hubLandscapeSmall are resources provided as part of the alexa-viewport-profiles package. The "at" sign (@) is the standard syntax to reference a resource.
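To make these resources available, your document imports the package, as in the following fragment. The version number is an assumption; check the package documentation for the current version.

"import": [
  {
    "name": "alexa-viewport-profiles",
    "version": "1.1.0"
  }
]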
You can also use data-binding expressions to assign property values on components and commands conditionally. For example, the following expression returns the string "center" for small, round hubs, and "left" for all others.
${@viewportProfile == @hubRoundSmall ? 'center' : 'left'}
You could use this expression to conditionally set property values on a component, instead of using when to hide or show the entire component.
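For instance, a hypothetical Text component could center its text on a small, round hub and left-align it everywhere else:

{
  "type": "Text",
  "textAlign": "${@viewportProfile == @hubRoundSmall ? 'center' : 'left'}",
  "text": "Which animal sleeps standing up?"
}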
Conditional logic is a key ingredient when you write responsive APL documents. A responsive APL document can adjust to the characteristics of the viewport. Since users can invoke skills on devices with a wide variety of screen sizes, shapes, and aspect ratios, responsive APL is critical to creating a good user experience. For more details and recommendations, see Build Responsive APL Documents.
For more information about conditional logic in APL, see the APL data-binding syntax documentation.
High-level steps to implement APL in your skill
These steps assume you're familiar with building custom skills. To learn more about custom skills in general, start with Understand Custom Skills and Steps to Build a Custom Skill.
- Plan your visual design. See the APL sections in the Alexa Design Guide for inspiration and guidance.
- Build the APL document and any accompanying data source.
- You can create multiple documents to display different content at different points during the skill flow. In the earlier trivia interaction example, the trivia skill had three APL documents: one to present the welcome screen, one to present the list of trivia categories, and one to display the question text.
- To preview your document as you build, use the APL authoring tool in the developer console.
- To ensure that your content looks good on all the different devices that users might have, follow recommended best practices.
- Configure your skill to support the Alexa.Presentation.APL interface. For details, see Configure a Skill to Support APL.
- In your skill code, add the code to send the RenderDocument and ExecuteCommands directives to display your document when needed. For more details about these directives, see the Alexa.Presentation.APL Interface reference.
- In your skill code, create handlers for the UserEvent request. For more details about these requests, see the Alexa.Presentation.APL Interface reference.
- Test your skill in the simulator and with actual devices. Update the set of viewport profiles your skill supports, and test to ensure that your content looks good on all the different types of devices.
You can use the simulator to see how your content looks on different viewports that are similar to devices with screens, such as the Echo Show. The simulator doesn't include a viewport for character displays like the Echo Dot with clock, so you must use a device to test your content on a character display.
- Submit your skill for certification.