August 20, 2018
Amit Jotwani
We recently released version two of the Alexa Skills Kit (ASK) Software Development Kit (SDK) for Node.js, which introduced a fresh approach to handling requests and responses (among many other enhancements to the SDK). In our new code deep dive series, we’ll show you the ins and outs of the new SDK by providing a step-by-step code overview of the new features. We will also build a listening retention memory game skill from scratch, bit by bit, as we progress through the series.
In our first code deep dive, we covered the basic concepts of the new ASK SDK for Node.js, like `canHandle()`, `handle()`, and Response Builder. In the second deep dive, we looked at how to capture customer input using slots and session attributes for persistence. We will use the final code from the last deep dive as the starting point for today's post on display directives.
As a skill builder, you can choose whether to support a particular interface, such as screen display. For the best customer experience, however, you should plan a conditional workflow so that customers using devices without a screen, like Amazon Echo or Echo Plus, get an optimized experience, and so do customers accessing your skill from an Echo Show, Echo Spot, or Fire TV Cube. Even if the screen experience is not the focus of your skill, you should still think about how visual components could enhance your skill on devices with screens.
This post builds on the last couple of deep dives, providing a step-by-step walkthrough for delighting your customers with a screen experience for your skill.
In this walkthrough we will show you how to:

- Check whether the requesting device has a display screen
- Build a display response with the SDK's display helpers
- Add a conditional display workflow to each of your handlers
Here's an example of what those screens may look like on a square and a round display:
Let’s get started.
First, for your skill to be able to serve on display devices, you need to enable it through the Alexa Developer Console, as shown below.
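If you manage your skill with the ASK CLI rather than the console, the same setting lives in the skill manifest (skill.json), where the Display interface is declared with the `RENDER_TEMPLATE` type. A minimal fragment (other manifest fields omitted):

```json
{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          { "type": "RENDER_TEMPLATE" }
        ]
      }
    }
  }
}
```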
Let's first write the code to check whether the requesting device has a display screen. We will do that with a helper function, which we can call from any of our handlers whenever needed. We will call this supportsDisplay().
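A minimal sketch of such a helper, shown here as `supportsDisplay()` to match the name the handler code uses later in the post. It simply inspects the supported-interfaces list that Alexa includes in every request envelope:

```javascript
// Helper: returns true when the requesting device declares the
// Display interface in its supported-interfaces list.
function supportsDisplay(handlerInput) {
  const device = handlerInput.requestEnvelope.context.System.device;
  return Boolean(
    device &&
    device.supportedInterfaces &&
    device.supportedInterfaces.Display
  );
}
```

Because the check reads only from the request envelope, it is safe to call from any handler without side effects.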
Next, let's write a helper function that we can call from our handlers after confirming that the requesting device has a display screen our skill can use. We will call this getDisplay().
To generate our display response, we use the Alexa SDK's built-in ImageHelper() and RichTextContentHelper() helpers. You can learn more about display directives here.
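The SDK's ImageHelper() and RichTextContentHelper() ultimately build plain objects, so the shape of `getDisplay()` can be sketched without any dependencies. The snippet below constructs the same Display.RenderTemplate directive by hand; the BodyTemplate2 layout, token, and parameter names are illustrative choices, not the exact ones from the original post:

```javascript
// Dependency-free sketch of getDisplay(): builds the plain-object
// Display.RenderTemplate directive that the SDK helpers produce.
function getDisplay(title, primaryText, imageUrl) {
  return {
    type: 'Display.RenderTemplate',
    template: {
      type: 'BodyTemplate2',        // text beside an image
      token: 'memory-game-display', // arbitrary identifier for this template
      backButton: 'HIDDEN',
      title: title,
      image: { sources: [{ url: imageUrl }] },
      textContent: {
        primaryText: { type: 'RichText', text: primaryText },
      },
    },
  };
}
```

The returned object can then be attached to a response with the response builder's `addRenderTemplateDirective()` (which takes the inner `template` object).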
In our LaunchRequestHandler, we first use the supportsDisplay() function to check if the device supports a display, and then create our response accordingly. If the requesting device does support a display, we use our getDisplay() function to generate our response, which includes a display screen.
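The branching pattern can be sketched as follows. This is a minimal, self-contained illustration, not the exact handler from the post: the welcome text and template contents are placeholders, and the display check is repeated inline so the snippet stands on its own:

```javascript
// Helper repeated here so the sketch is self-contained.
function supportsDisplay(handlerInput) {
  const device = handlerInput.requestEnvelope.context.System.device;
  return Boolean(device && device.supportedInterfaces &&
                 device.supportedInterfaces.Display);
}

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speech = 'Welcome to the memory game. Are you ready to play?';
    const builder = handlerInput.responseBuilder
      .speak(speech)
      .reprompt(speech);
    if (supportsDisplay(handlerInput)) {
      // Attach the template only when the device can render it;
      // voice-only devices would reject a Display directive.
      builder.addRenderTemplateDirective({
        type: 'BodyTemplate2',
        title: 'Memory Game',
        textContent: { primaryText: { type: 'RichText', text: speech } },
      });
    }
    return builder.getResponse();
  },
};
```

On a voice-only device the `if` branch is skipped entirely, so the same handler serves both audiences with one code path.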
Same treatment as LaunchRequestHandler – we check if the device supports display, and then respond accordingly.
Same treatment as LaunchRequestHandler and StoryHandler – we check if the device supports display, and then respond accordingly.
Same treatment as LaunchRequestHandler, StoryHandler, and AnswerHandler – we check if the device supports display, and then respond accordingly.
If you would like to build this skill with us throughout the series, follow the steps below to kick-start your skill: