The arrival of Echo Show has opened up a whole new world of possibilities for voice-first interactions. Echo Show's high-quality screen combines with Alexa skills to deliver a completely new way for customers to interact across voice and graphical user interfaces.
This means that you can now reimagine how you present visual elements to users when there is a screen available. It also means you'll want to create skills that serve up a tailored experience on each device.
For example, it wouldn't make sense for Alexa to say, "Select an item on the screen for more information" if your skill is invoked from an Echo Dot. And while not every device has a screen, users of devices that do will quickly adapt and come to expect that your skill supports one.
At this point you're probably asking yourself, "How do I detect if my skill has access to a screen?" It's quite easy. You only need to check whether this.event.context.System.device.supportedInterfaces.Display is present.
You may also be wondering, "Isn't that going to make my code reek of 'bad smells'?"
Yes, it may prove clunky to continue using the old method of emitting :tell, :speak, :tellWithCard, :speakWithCard, and so on to make Alexa speak, prompt, and display information on the screen.
You'll be happy to know that with the latest version of the Node.js SDK, there is an entirely new way to build the JSON response that your skill returns. Please give a round of applause for the response object!
The response object represents the JSON response that your skill returns to Alexa. Since it's an object, you can build it dynamically in any order that you like. This is great for intents where you may or may not need to prompt for input based on whether you've already captured the necessary slot from your user. Furthermore, you no longer have to stuff all your response-related items into variables and then pass them to this.emit('your event to emit', a, really, long, list, of, params, I, dont, know, what, they, all, mean, this, is, a, sign, of, bad, design, this, code, reeks);.
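To illustrate why building the response as an object helps, here is a minimal sketch of a handler that decides at runtime whether to prompt for a missing slot. The stub only mimics the SDK's chainable response object for the sake of a self-contained example; makeResponseStub, handleColorIntent, and the Color slot are hypothetical names, not part of the SDK.

```javascript
// Stub that mimics the SDK's chainable response object (illustration only).
function makeResponseStub() {
  const calls = [];
  const response = {
    speak(text) { calls.push(['speak', text]); return response; },
    listen(text) { calls.push(['listen', text]); return response; },
  };
  return { response, calls };
}

// Hypothetical intent handler: prompt for the missing "Color" slot,
// otherwise confirm what was captured and end the session.
function handleColorIntent(ctx) {
  const slot = ctx.event.request.intent.slots.Color;
  if (slot && slot.value) {
    // Slot already captured: just speak; no need to listen.
    ctx.response.speak('Got it, your favorite color is ' + slot.value + '.');
  } else {
    // Slot missing: ask and keep the session open for a reply.
    const question = 'What is your favorite color?';
    ctx.response.speak(question).listen(question);
  }
  ctx.emit(':responseReady');
}
```

With the slot filled, only speak() is called; without it, speak() and listen() are chained before :responseReady is emitted — no long positional parameter list either way.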
To make Alexa speak using the response object, call this.response.speak(speechOutput). To listen for input, call this.response.listen(reprompt). To render a card, use this.response.cardRenderer(cardTitle, cardText, cardImage). Once you are ready to send the response back to the device, emit :responseReady.
Let's take a look at the code.
// speak
this.response.speak('Hello, World!');
this.emit(':responseReady');

// ask a question with a reprompt
let question = "What's your favorite color?";
let reprompt = question + ' Choose a color: you can say blue or green.';
this.response.speak(question).listen(reprompt);
this.emit(':responseReady');

// speak and show a card
let outputSpeech = "Like always, it's sunny in Seattle. I'm only happy when it rains.";
let cardTitle = 'Seattle Forecast';
let cardBody = "It's 76 degrees with clear skies. Tonight's forecast is clear skies with a low of 67 degrees.";
let cardImage = 'url to an image';
this.response.cardRenderer(cardTitle, cardBody, cardImage);
this.response.speak(outputSpeech);
this.emit(':responseReady');

// if you want to add a prompt to the above, replace the speak() call
// with the following before calling this.emit(':responseReady')
let reprompt = 'Would you like to know the 5 day forecast?';
this.response.speak(outputSpeech + ' ' + reprompt);
this.response.listen(reprompt);
this.emit(':responseReady');
Calling speak() makes Alexa speak, and listen() makes Alexa listen for utterances. The argument to listen() is the reprompt that Alexa speaks if the user is silent for 8 seconds. Since you have total control over when you emit :responseReady, you can call speak(), listen(), cardRenderer(), and so on in any order. You can also chain them! This gives you so much more flexibility, which is just what you need to tailor your response to the device on which your skill was invoked.
All the examples below will result in the same JSON response that's sent to the device.
// chained, in one order
this.response.listen(reprompt).speak(question).cardRenderer(CARD_TITLE, question);

// chained, in another order
this.response.speak(question).listen(reprompt).cardRenderer(CARD_TITLE, question);

// card first, then a chained speak and listen
this.response.cardRenderer(CARD_TITLE, question);
this.response.speak(question).listen(reprompt);

// each call as a separate statement
this.response.speak(question);
this.response.listen(reprompt);
this.response.cardRenderer(CARD_TITLE, question);
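For reference, the response portion of the JSON that each of those snippets produces would look roughly like the sketch below. The exact SSML wrapping and field values depend on your strings; this example assumes question and reprompt hold the color question and its reprompt, and that CARD_TITLE names a simple card.

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak> What's your favorite color? </speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak> Choose a color: you can say blue or green. </speak>"
      }
    },
    "card": {
      "type": "Simple",
      "title": "CARD_TITLE",
      "content": "What's your favorite color?"
    },
    "shouldEndSession": false
  }
}
```

Because listen() was called, shouldEndSession is false and the session stays open for the user's reply.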
Use the handy helper function below to detect whether there's a screen available to your skill:
function supportsDisplay() {
  var hasDisplay =
    this.event.context &&
    this.event.context.System &&
    this.event.context.System.device &&
    this.event.context.System.device.supportedInterfaces &&
    this.event.context.System.device.supportedInterfaces.Display;
  return hasDisplay;
}
// call supportsDisplay.call(this) from within an intent
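Here is a sketch of using that helper to tailor a prompt to the device. The helper is repeated so the snippet is self-contained, with a Boolean() wrap so it returns true/false rather than the raw Display object; forecastPrompt and the two sample contexts are hypothetical, shaped like the request envelope a device with a screen (or a screenless one) would send.

```javascript
// The helper from above, wrapped in Boolean() to return true/false.
function supportsDisplay() {
  return Boolean(
    this.event.context &&
    this.event.context.System &&
    this.event.context.System.device &&
    this.event.context.System.device.supportedInterfaces &&
    this.event.context.System.device.supportedInterfaces.Display
  );
}

// Hypothetical prompt builder: screen-aware wording vs. voice-only wording.
function forecastPrompt() {
  if (supportsDisplay.call(this)) {
    return 'Select a day on the screen for more information.';
  }
  return 'Say a day of the week to hear its forecast.';
}

// Minimal sample contexts for illustration.
const echoShowLike = { event: { context: { System: { device: {
  supportedInterfaces: { Display: {} } } } } } };
const echoDotLike = { event: { context: { System: { device: {
  supportedInterfaces: { AudioPlayer: {} } } } } } };
```

Inside a real intent handler you would call supportsDisplay.call(this) just as the comment above suggests, and branch your speak()/cardRenderer() calls accordingly.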
You may be asking, "How do I render something and put it up on the Echo Show's display?" You can find the answer in this blog post, Building for Echo Show: Choosing the Right Display Template. For more information about display detection, please check the cookbook entry, Detecting if your skill has access to a display. The post, the helper function, and the response object will come in handy when you tailor the behavior of your skill based on the device on which it was invoked.
I liken it to sports commentary, which differs between radio and television. Radio announcers have to be more descriptive to help the listener picture what's happening on the field, whereas TV announcers see the game play out along with the viewer, so they don't need to be as exhaustive. However, just as with your Alexa skill, there is no guarantee that the viewer is watching the TV intently, so they still have to paint some mental picture of the game. You might want to keep that in mind while you're building your skills.
Now that tailoring your responses based on the device is easier than ever, I can't wait to see how your skills adapt to optimize the customer experience on each Alexa-enabled device.
Learn how to build a standout skill your customers will love. Attend our upcoming webinar to learn the qualities of the top-performing skills in the Alexa Skills Store. Register now for the US session or the UK/Germany session. Then use the new lessons learned to publish a new skill in September and earn Alexa developer perks.