Building Skills for Echo Spot Customers in India

Sohan Maheshwar Apr 24, 2018

Today we announced that Echo Spot is now shipping to customers in India. Echo Spot combines the power of voice with a visual display in a compact design to deliver magical voice experiences for customers. A custom skill for Echo Spot can include an interactive touch display in its response, in addition to standard voice interactions.

For skill developers, voice-enabled devices with a screen create unique opportunities to reimagine what voice experiences can do. In this post, we show how you can build engaging voice-first skills for Echo Spot.

How to Detect a Device Display

Customers interact with a skill through different utterances and actions depending on whether or not they can see a screen while using it. Your skill service code should detect whether the device has a display and support both types of interactions.

Here’s an example where we detect whether a device has a display and then generate the graphical user interface (GUI) using one of the body templates in the Alexa Skills Kit. First, for your skill to work on devices with displays, you need to enable the Display interface through the developer console, as shown below.

(Screenshot: enabling the Display interface in the Alexa developer console)
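
If you manage your skill with the ASK CLI rather than the console, the same setting lives in the skill manifest. The snippet below is a minimal sketch of the relevant portion of skill.json, assuming the standard manifest layout; the rest of the manifest is omitted:

{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          { "type": "RENDER_TEMPLATE" }
        ]
      }
    }
  }
}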

The JSON request that your skill receives includes all the information you need to determine whether the device has a screen and whether it supports other interfaces, such as AudioPlayer and VideoApp. Let’s look closely at the JSON requests received from two Alexa devices: Echo (no display) and Echo Spot (display).

(Screenshot: the context.System.device.supportedInterfaces node in requests from Echo and Echo Spot)
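
Since the detection logic below keys off a single node, here is an illustrative, trimmed excerpt of the request context (not a complete request); the part that matters is supportedInterfaces under context.System.device:

// Echo Spot -- the Display node is present (values shown are illustrative)
"context": {
  "System": {
    "device": {
      "supportedInterfaces": {
        "AudioPlayer": {},
        "Display": { "templateVersion": "1.0", "markupVersion": "1.0" }
      }
    }
  }
}

// Echo (no screen) -- the Display node is simply absent
"context": {
  "System": {
    "device": {
      "supportedInterfaces": {
        "AudioPlayer": {}
      }
    }
  }
}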

Step 1: Include this helper function in your skill code to detect whether the device has a display. As you can see from the JSON above, we need to check whether the “Display” node exists within the “supportedInterfaces” node of the request we receive. Here’s a helper function that does that for you:

// returns true if the skill is running on a device with a display
function supportsDisplay() {
  var hasDisplay =
    this.event.context &&
    this.event.context.System &&
    this.event.context.System.device &&
    this.event.context.System.device.supportedInterfaces &&
    this.event.context.System.device.supportedInterfaces.Display;
  return hasDisplay;
}

Step 2: Call the helper function from within your intent handler to check whether the device has a display.

suggestPizza: function (){

  // supportsDisplay reads this.event, so we invoke it with .call(this) to hand it the handler's context
  if (supportsDisplay.call(this)) {
    // device has a display
  }
  else {
    // device does not have a display
  }

}

Step 3: Respond differently (display vs. no-display)

Generally speaking, customers will respond to a skill with different utterances and actions depending on whether or not they see a screen while using the skill. Now that your skill can detect whether a device has a display, your skill service code should reflect this difference and support both types of interactions.

Here’s an example where, after detecting that the device has a display, we generate the GUI using one of the body templates provided by the Alexa Skills Kit.

const Alexa = require('alexa-sdk');

const makePlainText = Alexa.utils.TextUtils.makePlainText;
const makeRichText = Alexa.utils.TextUtils.makeRichText;
const makeImage = Alexa.utils.ImageUtils.makeImage;

suggestPizza: function (){
  var speechOutput;
  var title = 'Veggie Delite';
  var description = 'We suggest the Veggie Delite pizza which has Golden Corn, Black Olives, Capsicum and a lot of cheese. Yum!';
  var imageURL = 'https://i.imgur.com/rpcYKDD.jpg';

  // supportsDisplay reads this.event, so we invoke it with .call(this) to hand it the handler's context
  if (supportsDisplay.call(this)) { // device has a display: build a display directive plus the speech output
    speechOutput = description;

    // building the display directive from BodyTemplate1
    const builder = new Alexa.templateBuilders.BodyTemplate1Builder();
    const template = builder.setTitle(title)
                            .setBackgroundImage(makeImage(imageURL))
                            .setTextContent(makeRichText(description), null, null)
                            .build();

    this.response.renderTemplate(template);
  }
  else { // device does not have a display, so respond with speech only
    speechOutput = "Here's your " + title + ". " + description;
  }

  this.response.speak(speechOutput);
  this.emit(':responseReady');
}
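
For completeness, the suggestPizza function above is written as a member of a v1 handlers object. A minimal sketch of the surrounding entry point, assuming the standard alexa-sdk setup (handler and intent names here are illustrative), looks like this:

// Minimal alexa-sdk (v1) entry point -- assumes suggestPizza is a member of this handlers object
const handlers = {
  suggestPizza: function () {
    // ... the handler shown above ...
  }
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};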

Build Multimodal Skills with Version 2 of the ASK Software Development Kit for Node.js

We recently announced the launch of v2 of our Node.js SDK. The updated SDK improves existing features and adds new ones to help you build skills faster and reduce complexity in your code. The new SDK also includes built-in support for display directives, which makes building multimodal skills much easier. Here is how you can use the v2 SDK:

Step 1: Include this helper function to detect if your device has a display.

// returns true if the skill is running on a device with a display
function supportsDisplay(handlerInput) {
  var hasDisplay =
    handlerInput.requestEnvelope.context &&
    handlerInput.requestEnvelope.context.System &&
    handlerInput.requestEnvelope.context.System.device &&
    handlerInput.requestEnvelope.context.System.device.supportedInterfaces &&
    handlerInput.requestEnvelope.context.System.device.supportedInterfaces.Display;
  return hasDisplay;
}

Step 2: Call the supportsDisplay() helper function from within your handler's handle function to construct the response.

const SuggestPizzaHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'LaunchRequest'
      || (request.type === 'IntentRequest'
        && request.intent.name === 'SuggestPizza');
  },
  handle(handlerInput) {
    if (supportsDisplay(handlerInput)) {
      // device has a display
    }
  }
};

Step 3: If the device has a display, generate the GUI by calling addRenderTemplateDirective with one of the body templates provided.

const Alexa = require('ask-sdk-core');

const SuggestPizzaHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'LaunchRequest'
      || (request.type === 'IntentRequest'
        && request.intent.name === 'SuggestPizza');
  },
  handle(handlerInput) {
    const speechOutput = "We suggest the Veggie Delite pizza which has Golden Corn, Black Olives, Capsicum and a lot of cheese. Yum!";

    if (supportsDisplay(handlerInput)) {
      // build the image and text content for the display directive
      const myImage = new Alexa.ImageHelper()
        .addImageInstance('https://i.imgur.com/rpcYKDD.jpg')
        .getImage();

      const primaryText = new Alexa.RichTextContentHelper()
        .withPrimaryText(speechOutput)
        .getTextContent();

      // render BodyTemplate1 on devices with a display
      handlerInput.responseBuilder.addRenderTemplateDirective({
        type: 'BodyTemplate1',
        token: 'string',
        backButton: 'HIDDEN',
        backgroundImage: myImage,
        title: 'Pizza Suggest',
        textContent: primaryText,
      });
    }

    return handlerInput.responseBuilder
      .speak(speechOutput)
      .withSimpleCard('Pizza Suggest', speechOutput)
      .getResponse();
  },
};
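
Finally, as with any v2 skill, the handler has to be registered with the skill builder. A minimal sketch of the entry point, assuming the standard ask-sdk-core Lambda setup, looks like this:

// Minimal ASK SDK v2 entry point -- registers the SuggestPizzaHandler shown above,
// using the same ask-sdk-core require as in the previous snippet
exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(SuggestPizzaHandler)
  .lambda();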

Testing Your Skill on Echo Spot

You can test your skill on your Echo Spot device (provided the device is registered to the same Amazon account as your developer account), or you can use the new Echo Spot simulator on the Test page of the Alexa Skills Kit developer console.

More Resources

Check out some additional resources for designing voice-first skills for devices with screens.

Webinar: Designing Multimodal Skills for Alexa

Learn to design skills that shine across all Alexa-enabled devices including Echo Spot. Join our upcoming webinar to learn how to add imagery, video, and formatted text content. Register now to reserve your spot.