Design the APL document and the screen's data sources structure
In this section, you'll bring the APL document JSON you have already authored into your backend code. You will store the JSON in your skill's repository, and then pull it into your index.js as a JavaScript object by using require(). (This is one of the easiest ways to do this.) To learn how to build and use your APL documents from the developer console instead, see the section Move everything into the developer console later in this lab.
If you name your APL document file InformationDocument.json, the following declaration in your index.js creates the object informationDocument to reference that APL document:
const informationDocument = require('./documents/InformationDocument.json');
Note: The ./documents/ part of the string parameter means that the JSON file is stored in a documents folder in the same location as your index.js.
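For example, assuming an Alexa-hosted skill whose backend code lives in a lambda folder (the folder name is an assumption; adjust it to your project), the repository layout would look like this:
lambda/
├── index.js
└── documents/
    └── InformationDocument.json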
To render APL content, your skill response must include an Alexa directive. When you create an Alexa skill and implement Alexa interfaces in your skill code, you send messages to, and receive messages from, Alexa. Directives are one kind of message: Alexa sends directives to your skill to request something on behalf of the user, and your skill returns directives in its responses. If your skill controls a device, for example, Alexa can send directives to control the device or to request state information about it.
The most important directive for rendering APL on a viewport is Alexa.Presentation.APL.RenderDocument. The RenderDocument directive includes two major fields: the APL document and the data sources.
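In your backend code, that directive is just a JavaScript object. Here is a minimal sketch of its shape, assuming the informationDocument object created earlier; the token value and the informationDatasources name are placeholders you will define yourself:
const renderDocumentDirective = {
    type: 'Alexa.Presentation.APL.RenderDocument',
    token: 'informationToken',           // any string that identifies this rendered document
    document: informationDocument,       // the APL document object loaded with require()
    datasources: informationDatasources  // the data sources object you will build next
};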
To get started, study the following APL document:
{
"type": "APL",
"version": "1.7",
"theme": "dark",
"import": [
{
"name": "alexa-layouts",
"version": "1.4.0"
}
],
"mainTemplate": {
"parameters": [
"information"
],
"items": [
{
"type": "AlexaHeadline",
"id": "informationScreen",
"backgroundImageSource": "${information.backgroundImage.url}",
"backgroundColorOverlay": "${information.backgroundImage.colorOverlay}",
"headerAttributionImage": "${information.logoUrl}",
"primaryText": "${information.textContent.primaryText}",
"secondaryText": "${information.textContent.secondaryText}",
"footerHintText": "${information.textContent.hintText}"
}
]
}
}
Nothing is new here, apart from the "secondaryText" property, which you will use to display the standings for the user and Alexa on the Win, Lose, and Draw screens. Notice that the values targeted by the data-binding expressions are not defined in the document itself; you will supply them later in your index.js.
Data binding in the backend
As a reminder of the concept and usage of data binding, compare the previous InformationDocument.json with the following data sources object that you will use in the LaunchRequestHandler to display the Welcome screen. Data binding works exactly the same way as you saw in Lab 2.
datasources: {
information: {
backgroundImage: {
url: "https://d3j5530a0cofat.cloudfront.net/adlc/background.png",
colorOverlay: true
},
logoUrl: "https://d3j5530a0cofat.cloudfront.net/adlc/somelogo.png",
textContent: {
primaryText: "Welcome to Tic-Tac-Toe!",
secondaryText: "",
hintText: "Try, \"Alexa, top row, left column\", or \"Alexa, help\""
}
}
}
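To see where this object fits, here is a minimal sketch of a LaunchRequestHandler that returns the Welcome screen. It is not the final handler from this lab: the welcomeDatasources name stands for the object shown above, the speech text is a placeholder, and the APL-support check uses the getSupportedInterfaces utility from ask-sdk-core:
const Alexa = require('ask-sdk-core');
const informationDocument = require('./documents/InformationDocument.json');

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle(handlerInput) {
        // Only send the RenderDocument directive to devices that support APL.
        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            handlerInput.responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: 'informationToken',       // placeholder token name
                document: informationDocument,
                datasources: welcomeDatasources  // the data sources object shown above
            });
        }
        return handlerInput.responseBuilder
            .speak('Welcome to Tic-Tac-Toe!')    // placeholder speech text
            .getResponse();
    }
};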
Remember that you will use the same APL document for five screens (including the Win, Lose, and Draw screens), all of which require a value for "secondaryText".
Note the role of the information parameter in the APL document's mainTemplate: it is the name through which the document references all of these attributes, and it matches the top-level information key of the data sources object.
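For example, when you later build the Win screen, only the text content of the data sources needs to change while the APL document stays the same. The strings below are illustrative placeholders, not the final copy used in this lab:
datasources: {
    information: {
        backgroundImage: {
            url: "https://d3j5530a0cofat.cloudfront.net/adlc/background.png",
            colorOverlay: true
        },
        logoUrl: "https://d3j5530a0cofat.cloudfront.net/adlc/somelogo.png",
        textContent: {
            primaryText: "You won this round!",     // placeholder text
            secondaryText: "You: 1, Alexa: 0",      // the standings fill secondaryText here
            hintText: "Try, \"Alexa, play again\""  // placeholder hint
        }
    }
}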