Display APL visuals through LaunchRequestHandler and HelpIntentHandler
Not all devices can react to a render directive. Consider an Echo Dot, for example, which doesn't have a screen: if you return a RenderDocument directive to it, your skill can return an error and the session might end unexpectedly.
To render an APL document, your skill must return a response that includes a render directive of type Alexa.Presentation.APL.RenderDocument. Before you add this directive, however, you must verify that the requesting device supports APL.
Before you create the response in your LaunchRequestHandler.handle() method, add a check to see whether the skill request payload contains the APL interface. The following example shows how to check the payload:
if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
    // Create render directive
}
The if-statement checks whether the APL interface is included in the request envelope. Add the render directive only after you know that the device supports APL.
To add a directive by using the Node.js SDK, call the following inside that if-statement:
handlerInput.responseBuilder.addDirective({…});
You then reference the APL document in your backend code and supply the datasources object.
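For example, assuming the APL document is saved as documents/InformationDocument.json in your skill's backend (the same path used later in this lesson), you can load it with a plain require so it can be passed as the directive's document attribute:

// Load the APL document JSON so the directive can reference it.
const informationDocument = require('./documents/InformationDocument.json');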
For the LaunchRequestHandler of the Tic-Tac-Toe skill and its Welcome screen, the if-statement and render directive look like the following:
if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
    handlerInput.responseBuilder.addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        document: informationDocument,
        datasources: {
            information: {
                backgroundImage: {
                    url: "https://d3j5530a0cofat.cloudfront.net/adlc/background.png",
                    colorOverlay: true
                },
                logoUrl: "https://d3j5530a0cofat.cloudfront.net/adlc/somelogo.png",
                textContent: {
                    primaryText: "Welcome to Tic-Tac-Toe!",
                    secondaryText: "",
                    hintText: "Try, \"Alexa, top row, left column\", or \"Alexa, help\""
                }
            }
        }
    });
}
The general process is the following:
- Set type to 'Alexa.Presentation.APL.RenderDocument'.
- Pass the APL document through the document attribute of the directive.
- Apply all the data sources to the datasources parameter of the directive as typical JSON.
- Apply the same general mechanism to all screens in your skill (see the helper sketch after this list).
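One way to avoid repeating this pattern in every handler is to wrap the check and the directive in a small helper. The sketch below is illustrative; the function name addScreen and its parameters are not part of the ASK SDK:

const Alexa = require('ask-sdk-core');

// Illustrative helper: attaches a RenderDocument directive only when the
// requesting device reports APL support. "document" is the imported APL JSON,
// and "information" is the screen-specific data source.
function addScreen(handlerInput, document, information) {
    if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
        handlerInput.responseBuilder.addDirective({
            type: 'Alexa.Presentation.APL.RenderDocument',
            document: document,
            datasources: { information }
        });
    }
}

Each handler can then call addScreen with its own text content before building and returning the response.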
Exercise 1
Update the HelpIntentHandler to display the Help screen by using the same APL document as above. Assume that index.js contains the following code:
const informationDocument = require('./documents/InformationDocument.json');
Use the following elements:
- Background image: https://d3j5530a0cofat.cloudfront.net/adlc/background.png
- Logo image: https://d3j5530a0cofat.cloudfront.net/adlc/somelogo.png
- Primary text: Say the row and then the column, "cross", "circle", or "board status"
- Secondary text: ""
- Footer text: Try "Alexa, middle row, center column" or "Alexa, circle"
InformationDocument.json
{
    "type": "APL",
    "version": "1.7",
    "theme": "dark",
    "import": [
        {
            "name": "alexa-layouts",
            "version": "1.4.0"
        }
    ],
    "mainTemplate": {
        "parameters": [
            "information"
        ],
        "items": [
            {
                "type": "AlexaHeadline",
                "id": "informationScreen",
                "backgroundImageSource": "${information.backgroundImage.url}",
                "backgroundColorOverlay": "${information.backgroundImage.colorOverlay}",
                "headerAttributionImage": "${information.logoUrl}",
                "primaryText": "${information.textContent.primaryText}",
                "secondaryText": "${information.textContent.secondaryText}",
                "footerHintText": "${information.textContent.hintText}"
            }
        ]
    }
}
HelpIntentHandler.js
const HelpIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.HelpIntent';
    },
    handle(handlerInput) {
        const speakOutput = `You can place your mark by saying the row, and then the column. Say cross or circle anytime to change your mark, or board status and I will tell you where all the marks are.`;
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};
Solution
const HelpIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.HelpIntent';
    },
    handle(handlerInput) {
        const speakOutput = `You can place your mark by saying the row, and then the column. Say cross or circle anytime to change your mark, or board status and I will tell you where all the marks are.`;
        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            handlerInput.responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                document: informationDocument,
                datasources: {
                    information: {
                        backgroundImage: {
                            url: "https://d3j5530a0cofat.cloudfront.net/adlc/background.png",
                            colorOverlay: true
                        },
                        logoUrl: "https://d3j5530a0cofat.cloudfront.net/adlc/somelogo.png",
                        textContent: {
                            primaryText: "Say the row and then the column, \"cross\", \"circle\", or \"board status\"",
                            secondaryText: "",
                            hintText: "Try \"Alexa, middle row, center column\" or \"Alexa, circle\""
                        }
                    }
                }
            });
        }
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};
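If your skill follows the standard ASK SDK setup, the updated handler is already registered with the skill builder in index.js, so nothing else changes. For reference, a typical registration looks like the following sketch; the handler list in your skill may differ:

const Alexa = require('ask-sdk-core');
const informationDocument = require('./documents/InformationDocument.json');

// Register the handlers with the skill builder (other handlers omitted here).
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        HelpIntentHandler
    )
    .lambda();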