Collect Slots Turn-by-Turn
Step 1: Getting to "yes"
As you might recall from Module 2, the Cake Time skill has a script for the "new user" scenario, where Alexa asks:
"Welcome to Cake Time. I'll tell you a celebrity name and you try to guess the month and year they were born. See how many you can get! Would you like to play?"
To get the user from here to actually playing the game, you need to handle "yes." You need a "yes" intent in the voice user interface (VUI) and a "yes" handler in your back-end code.
Step 1 has four sub-steps:
- First, add a "yes" intent and enable Alexa Presentation Language (APL)
- Next, update the LaunchRequest handler with new text and a visual
- Next, add a YesIntent handler with very simple content
- Last, test to make sure it works so far
First, add a "yes" intent and enable Alexa Presentation Language (APL)
- In the developer console, click the Build tab, and in the left-hand menu, click Interaction Model, and then click Intents.
The Intents page lists all the intents that are present in the skill from the template. Most of them, like AMAZON.HelpIntent and AMAZON.StopIntent, are Amazon built-in intents. They're common enough that Amazon has pre-built them for you. Only the FallbackIntent, NavigateHomeIntent, StopIntent, HelpIntent, and CancelIntent are required, but they're already handled at a basic level for you in your index.js or lambda_function.py file.
- In the right-hand pane, under Intents, click the +Add Intent button. On the Add Intent page, you can create a custom intent or use an existing intent from the built-in library. You'll add a built-in intent in this step, because the word "yes" is so common.
- Next to Use an existing intent from Alexa's built-in library, select the radio button.
- In the search box underneath, type "yes" to find the intent.
- In the AMAZON.YesIntent row, click + Add Intent, then click view. Now you'll see AMAZON.YesIntent in your intents list.
- At the top of the page, click Save Model.
- In the left-hand menu, click Assets, and then click Interfaces.
- In the right-hand pane, scroll down to Alexa Presentation Language and click the toggle to turn on the interface; this expands the options to show all the available profiles.
- At the top of the page, click Save Interfaces, and then click Build Model.
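Under the hood, these console clicks edit your skill's interaction model JSON. If you're curious, open JSON Editor under Interaction Model and you'll find the new intent in the intents array as an entry like this (built-in intents need no sample utterances):

{
    "name": "AMAZON.YesIntent",
    "samples": []
}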
Next, update the LaunchRequest handler with new text and a visual
To start, let's add an Alexa Presentation Language (APL) document that we can use to show a simple visual on devices that have screens.
- In the Alexa Developer Console, click the Code tab.
- In the left file nav, under Skill Code, click the lambda folder to select it. Click New Folder, enter "lambda/documents" under Folder Path, and then click Create Folder.
- In the left file nav, under Skill Code, click the new documents folder to select it. Click New File, enter "lambda/documents/APL_simple.json" under File Path, and then click Create File. A new editor tab named APL_simple.json will open.
- Paste the following APL JSON document into this new tab named APL_simple.json and click Save.
{ "type": "APL", "version": "1.8", "license": "Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: LicenseRef-.amazon.com.-AmznSL-1.0\nLicensed under the Amazon Software License http://aws.amazon.com/asl/", "settings": {}, "theme": "dark", "import": [ { "name": "alexa-layouts", "version": "1.4.0" } ], "resources": [], "styles": {}, "onMount": [], "graphics": {}, "commands": {}, "layouts": {}, "mainTemplate": { "parameters": [ "payload" ], "items": [ { "type": "AlexaHeadline", "primaryText": "${payload.myData.Title}", "secondaryText": "${payload.myData.Subtitle}" } ] } }
- Next, update the speak output string you changed in Module 3 to follow the script. Click back into the index.js or lambda_function.py tab to continue editing the handlers. Locate the following string, which should be about 10 to 15 lines down from the top in Node.js, and about 30 lines down in Python, and looks like this.
const speakOutput = 'Hello! Welcome to Cake Time. That was a piece of cake! Bye!';
speak_output = "Hello! Welcome to Cake Time. That was a piece of cake! Bye!"
- In place of this string, copy and paste the following new text from the script.
const speakOutput =
`Welcome to Cake Time. I'll tell you a celebrity name and you try
to guess the month and year they were born. See how many you can get!
Would you like to play?`;
In Module 3, we used single- and double-quoted strings. This string uses backticks (`) instead of single or double quotes. The backtick (in the upper-left corner of most U.S. keyboards, sharing a key with the tilde, which looks like a squiggly line) defines a special kind of string: a template string. Template strings can have embedded variables and handle multiple lines, and they don't conflict with either type of quotation mark. You use a template string here because it makes this long string easier to read in your code.
speak_output = f"Welcome to Cake Time. " \
f"I'll tell you a celebrity name and you try " \
f"to guess the month and year they were born. " \
f"See how many you can get! " \
f"Would you like to play?"
In Module 3, we used single- and double-quoted strings. This string uses f-strings (f"..."). The f-string, introduced in Python 3.6, defines a special kind of string: a formatted string literal. F-strings can have embedded variables and, joined with line continuations as shown here, can span multiple lines, and they don't conflict with either type of quotation mark. This workshop will make use of f-strings throughout.
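For a quick illustration of the embedded-variable part (a throwaway example, not part of the Cake Time code), an f-string substitutes any expression inside curly braces:

celebrity = "a celebrity"   # hypothetical variable, just for illustration
speak_output = f"I'm thinking of {celebrity}. Guess the month and year they were born!"
print(speak_output)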
- After the code you just copied and pasted, replace this code.
return handlerInput.responseBuilder
    .speak(speakOutput)
    // .reprompt(speakOutput)
    .getResponse();
With the following code, by copying and pasting it into the index.js file.
//====================================================================
// Add a visual with Alexa Layouts
//====================================================================
// Import an Alexa Presentation Language (APL) template
const APL_simple = require('./documents/APL_simple.json');

// Check to make sure the device supports APL
if (
    Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)[
        'Alexa.Presentation.APL'
    ]
) {
    // add a directive to render our simple template
    handlerInput.responseBuilder.addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        document: APL_simple,
        datasources: {
            myData: {
                //====================================================================
                // Set a headline and subhead to display on the screen if there is one
                //====================================================================
                Title: 'Say "yes."',
                Subtitle: 'Play some Cake Time.',
            },
        },
    });
}

return handlerInput.responseBuilder
    .speak(speakOutput)
    .reprompt(speakOutput)
    .getResponse();
- After the code you just copied and pasted, replace this code.
return (
    handler_input.response_builder
        .speak(speak_output)
        # .ask(speak_output)
        .response
)
With the following code, by copying and pasting it into the lambda_function.py file.
#====================================================================
# Add a visual with Alexa Layouts
#====================================================================
# Import an Alexa Presentation Language (APL) template
with open("./documents/APL_simple.json") as apl_doc:
    apl_simple = json.load(apl_doc)

if ask_utils.get_supported_interfaces(
        handler_input).alexa_presentation_apl is not None:
    handler_input.response_builder.add_directive(
        RenderDocumentDirective(
            document=apl_simple,
            datasources={
                "myData": {
                    #====================================================================
                    # Set a headline and subhead to display on the screen if there is one
                    #====================================================================
                    "Title": 'Say "yes."',
                    "Subtitle": 'Play some Cake Time.',
                }
            }
        )
    )

return (
    handler_input.response_builder
        .speak(speak_output)
        .ask(speak_output)
        .response
)
- At the top of the lambda_function.py file, find the following code.
import logging
import ask_sdk_core.utils as ask_utils
In the line under it, copy and paste the following code.
import json
from ask_sdk_model.interfaces.alexa.presentation.apl import (
    RenderDocumentDirective)
This copy-and-paste action is how you'll add simple visuals to all of your exchanges in this workshop. The pasted code uses the Alexa Presentation Language (APL). For now, you don't have to know how this APL code works. You should just start to think in terms of both voice and visual cues for this workshop.
In addition to Alexa saying the scripted line, if you used a device with a screen, it would show something like this, using the Title and Subtitle properties of the myData object.
For this workshop, you'll focus on just the Title and Subtitle properties for each exchange. To add a visual, for now, you simply need to change those two strings in each handler.
Notice how, in this instance, the visual doesn't simply repeat the welcome message Alexa spoke. The visual reinforces the spoken message. If a user couldn't hear Alexa's response, they still know what to do to move on when they look at the screen.
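If you want to peek at the mechanics, here's the data binding in miniature (nothing new to paste; this just mirrors the directive you already added). The datasources object is handed to the document as the payload parameter declared in mainTemplate, so the two strings land in the ${payload.myData.Title} and ${payload.myData.Subtitle} bindings of APL_simple.json.

# The same datasources shape you pasted above, shown on its own.
# "myData" is the name the APL document expects under payload.
datasources = {
    "myData": {
        "Title": 'Say "yes."',               # fills ${payload.myData.Title}
        "Subtitle": "Play some Cake Time.",  # fills ${payload.myData.Subtitle}
    }
}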
Next, add a YesIntent handler with very simple content
The next thing to do, of course, is to add code to handle when the customer says "yes" to agree to play. The easiest way to add the code is to copy another handler.
- Copy the lines of code that define LaunchRequestHandler and paste them a couple of lines after the original LaunchRequestHandler definition, but before the beginning of HelloWorldIntentHandler.
- Change the name on the handler you pasted in from LaunchRequestHandler to PlayGameHandler. Why don't you name it YesIntentHandler? Two reasons (the sketch after this list illustrates the second):
  - The names of the handler functions don't have to match the names of the intents they handle; matching names just helps keep track of what handles what.
  - You may have customers saying "yes" to different things, and you may want different handlers for it depending on the context. In the next module, which is about adding memory, we'll show you how to handle that.
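As a preview of that second reason, here is a minimal sketch (hypothetical, not part of this module's code) of two can_handle checks that both match AMAZON.YesIntent but fire in different contexts. The session attribute game_state is an illustrative name this workshop hasn't introduced yet:

import ask_sdk_core.utils as ask_utils

def can_handle_start_game(handler_input):
    # "Yes" to the welcome question: no game underway yet
    attrs = handler_input.attributes_manager.session_attributes
    return (ask_utils.is_intent_name("AMAZON.YesIntent")(handler_input)
            and attrs.get("game_state") != "PLAYING")

def can_handle_next_round(handler_input):
    # "Yes" to "want to try another?": a game is already underway
    attrs = handler_input.attributes_manager.session_attributes
    return (ask_utils.is_intent_name("AMAZON.YesIntent")(handler_input)
            and attrs.get("game_state") == "PLAYING")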
- Adjust the canHandle() (Node.js) or can_handle() (Python) method by finding this code, at about line 55 (Node.js) or 75 (Python).
canHandle(handlerInput) {
    return (
        Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest'
    );
},
def can_handle(self, handler_input):
    # type: (HandlerInput) -> bool
    return ask_utils.is_request_type("LaunchRequest")(handler_input)
- Replace the code at about line 55 (Node.js) or 75 (Python) by copying and pasting in the following code.
canHandle(handlerInput) {
    return (
        Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest' &&
        Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent'
    );
},
def can_handle(self, handler_input):
    # type: (HandlerInput) -> bool
    return (
        ask_utils.is_request_type("IntentRequest")(handler_input)
        and ask_utils.is_intent_name("AMAZON.YesIntent")(handler_input)
    )
The method must always return a Boolean value: true or false.
- This new code checks whether the request type is an IntentRequest instead of a LaunchRequest, and then checks whether the intent name is AMAZON.YesIntent.
Note: The Alexa.getRequestType and Alexa.getIntentName methods from the Node.js SDK, and ask_utils.is_request_type and ask_utils.is_intent_name from the Python SDK, get those values for you. If the check evaluates to true, then the handle() method will run.
Now, consider changing the strings a little, just so you know you got it right when you test it in Step 2 of this module.
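If the doubled parentheses in the Python version look odd: is_request_type and is_intent_name don't return a Boolean themselves; each returns a predicate function, which you then call with handler_input. A rough sketch of the idea (the SDK's real implementation differs in its details):

def is_intent_name(name):
    # Build and return a checker for one specific intent name
    def predicate(handler_input):
        request = handler_input.request_envelope.request
        return (request.object_type == "IntentRequest"
                and request.intent.name == name)
    return predicate

# Usage: the first call builds the check, the second call runs it.
# is_intent_name("AMAZON.YesIntent")(handler_input)  # -> True or False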
- In the following code, update the speak output, title, and subtitle strings. You'll change the strings again after you test the code, so feel free to make the text anything you like.
handle(handlerInput) {
    //====================================================================
    // Set your speech output
    //====================================================================
    const speakOutput = 'Welcome to the yes handler.';

    //====================================================================
    // Add a visual with Alexa Layouts
    //====================================================================
    // Import an Alexa Presentation Language (APL) template
    const APL_simple = require('./documents/APL_simple.json');

    // Check to make sure the device supports APL
    if (
        Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)[
            'Alexa.Presentation.APL'
        ]
    ) {
        // add a directive to render the simple template
        handlerInput.responseBuilder.addDirective({
            type: 'Alexa.Presentation.APL.RenderDocument',
            document: APL_simple,
            datasources: {
                myData: {
                    //====================================================================
                    // Set a headline and subhead to display on the screen if there is one
                    //====================================================================
                    Title: 'You said "yes"!!!',
                    Subtitle: 'You have made me so happy!',
                },
            },
        });
    }

    //====================================================================
    // Send the response back to Alexa
    //====================================================================
    return handlerInput.responseBuilder
        .speak(speakOutput)
        .reprompt(speakOutput)
        .getResponse();
}
def handle(self, handler_input):
    # type: (HandlerInput) -> Response
    speak_output = f"Welcome to the Yes handler."

    #====================================================================
    # Add a visual with Alexa Layouts
    #====================================================================
    # Import an Alexa Presentation Language (APL) template
    with open("./documents/APL_simple.json") as apl_doc:
        apl_simple = json.load(apl_doc)

    if ask_utils.get_supported_interfaces(
            handler_input).alexa_presentation_apl is not None:
        handler_input.response_builder.add_directive(
            RenderDocumentDirective(
                document=apl_simple,
                datasources={
                    "myData": {
                        #====================================================================
                        # Set a headline and subhead to display on the screen if there is one
                        #====================================================================
                        "Title": 'You said "yes"!!!',
                        "Subtitle": 'You have made me so happy!',
                    }
                }
            )
        )

    return (
        handler_input.response_builder
            .speak(speak_output)
            .ask(speak_output)
            .response
    )
Now that you added a new handler, you need to register it. That means you need to add the new handler to the list of handlers that the code will test when a request comes in.
- At the bottom of the index.js or lambda_function.py file, find the list of handlers that looks like the following code.
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        PlayGameHandler,
        HelloWorldIntentHandler,
        HelpIntentHandler,
        CancelAndStopIntentHandler,
        FallbackIntentHandler,
        SessionEndedRequestHandler,
        IntentReflectorHandler)
    .addErrorHandlers(
        ErrorHandler)
    .withCustomUserAgent('sample/hello-world/v1.2')
    .lambda();
sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(PlayGameHandler())
sb.add_request_handler(HelloWorldIntentHandler())
sb.add_request_handler(HelpIntentHandler())
sb.add_request_handler(CancelOrStopIntentHandler())
sb.add_request_handler(FallbackIntentHandler())
sb.add_request_handler(SessionEndedRequestHandler())
sb.add_request_handler(IntentReflectorHandler()) # make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers
sb.add_exception_handler(CatchAllExceptionHandler())
lambda_handler = sb.lambda_handler()
Notice that PlayGameHandler is placed after LaunchRequestHandler.
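The order matters because the SDK walks this list and gives the request to the first handler whose canHandle() (Node.js) or can_handle() (Python) returns true. A minimal sketch of that first-match dispatch (not the SDK's actual code):

def dispatch(handlers, handler_input):
    # Try handlers in registration order; first match wins
    for handler in handlers:
        if handler.can_handle(handler_input):
            return handler.handle(handler_input)
    raise RuntimeError("No registered handler can handle this request")

That first-match rule is also why IntentReflectorHandler stays last: it matches any IntentRequest, so any handler registered after it would never run.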
- In your own code, place PlayGameHandler after LaunchRequestHandler.
- In the upper-right corner of the Code tab page, click Save, and then click Deploy to deploy your code.
Last, test your skill to make sure your code works so far
- Click the Test tab, and then select the checkbox next to Device Display so that the visuals you added will show.
- To start testing, in the Alexa Simulator box on the left-hand side of the Test tab page, type "open cake time" (or "open [other invocation name you set]"), and then press ENTER. Or, click and hold the microphone icon, and then say, "Open Cake Time" (or "Open [other invocation name you set]").
You should receive the new welcome message you scripted.
- In the right-hand portion of the Test tab page, scroll down to see a simulated visual, like the one shown previously in this workshop, that says, "Say 'yes.'"
- In the Alexa Simulator box, type or say, "Yes."
This response should run the PlayGameHandler, with the simple "Welcome to the yes handler." voice response and the enthusiastic text in the visual response.
If you encounter warnings or errors in your code, the following tips might help you resolve your issues.
Tip 1: Look for warnings and errors in the Code tab
The code editor shows a yellow caution symbol or a red X next to lines of code that have issues. Hover your mouse over them and you'll get information on what the editor thinks is wrong.
Tip 2: Check CloudWatch Logs for errors
On the Code tab, in the row of icons at the top of the page, click CloudWatch Logs. This action takes you to the AWS CloudWatch service. On the CloudWatch page, you can view the logs of your recent runs. The logs are in reverse chronological order, so the most recent log is on top. Click the link for the most recent log, and you'll see diagnostic information from the last run of your skill that could help you identify the error.
Tip 3: Retrace your steps
Errors commonly occur when you go too fast and miss a step. Did you add the AMAZON.YesIntent in your interaction model, and then save and build your model? Did you remember to change the name of the new handler you created from LaunchRequestHandler to PlayGameHandler? When you had two identical handlers right after you performed the copy and paste, did you apply some of your changes to one handler and some to the other by accident? Did you remember to save and deploy your latest code, and then wait for the deployment to finish before you began to test?