Lab 4: Add a Multimodal Response to Your Skill

Welcome to lab 4. In this lab, you'll learn how to add multimodality to your skill by using APL (Alexa Presentation Language).

Time required: 30 minutes

What you'll learn

  • How to add a multimodal response
  • How to use a multimodal response in your dialogs
  • How to make changes in your skill backend to support APL
  • (Optional) How to use APL templates on welcome and out-of-domain responses


In lab 3, you built an Alexa Conversations skill that supports voice-only responses. When the requested flight is found, the skill returns a voice-only response that contains the flight details.

In this lab, you will learn how to extend the response by adding visuals. We will add an APL template to the flight search result to enable multimodality on APL-supported devices, such as the Echo Show.

Step 1: Enable APL support in the skill manifest

First, we need to enable APL support in the skill manifest so that our skill can support multimodality. To do so, we need to make changes in the "skill.json" file under the "skill-package" directory.

We need to add an "interfaces" entry under "manifest.apis.custom".

Your skill.json file will look as indicated below.
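The snippet itself isn't shown here; as a sketch, the relevant part of skill.json with the documented ALEXA_PRESENTATION_APL interface type would look like this (any other fields in your manifest stay unchanged):

```json
{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          {
            "type": "ALEXA_PRESENTATION_APL"
          }
        ]
      }
    }
  }
}
```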

Step 2: Add an APL template

Similar to the APL-A templates (voice responses) we created in the previous lab, we will add an APL template to support the visual responses.

  • First, create a new folder named "display" under the "skill-package/response" directory. This folder will contain the APL files that your skill uses.
  • In order to create an APL file to support flight search results, we will create a new folder under "display" called "FlightSearchResponseVisual".
  • Next, create a "document.json" file under the newly created "FlightSearchResponseVisual" folder and copy the code below:
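The full document.json isn't reproduced here; as a minimal sketch of the shape an APL document takes (the version value and the simple Text item are placeholders, not the lab's actual SimpleText layout, which binds all of the display fields shown in Step 4):

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Text",
        "text": "${payload.flightResponse.display.primaryText}"
      }
    ]
  }
}
```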

Step 3: Use APL template in FlightSearch dialog

First, we need to import the APL templates into our skill. To do that, add the following import statement:


import displays.*

This will import all APL templates under the display folder.

We want to return the flight details to the APL template we created. To do that, we need to extend our payload with a new type:


// FlightDetails object we will return from our API
type FlightDetails {
    DATE date
    NUMBER cost
    US_CITY arrivalCity
    US_CITY departureCity
    TIME time
    Airline airline
    Display display
}

// Display object we will return from our API
type Display {
    String headerTitle
    String headerSubtitle
    String primaryText
    String secondaryText
    String textAlignment
    String titleText
}

As you can see above, we used the String primitive type in the Display type we created. We also need to import the String type into our ACDL file. To do so, use the following import statement:

import com.amazon.alexa.schema.String

Now, we need to make the change in our dialog. Our last response currently looks like this:

           response = MultiModalResponse {
             apla = FlightSearchResponsePrompt
           },
           act = Notify {
             success = true,
             actionName = FlightFinder
           },
           payload = FlightDetailsPayload {
             flightResponse = flightResult
           }

We will need to add our APL template to the MultiModalResponse. After adding the APL, your response will look like:

            response = MultiModalResponse {
              apla = FlightSearchResponsePrompt,
              apl = FlightSearchResponseVisual
            },
            act = Notify {
              success = true,
              actionName = FlightFinder
            },
            payload = FlightDetailsPayload {
              flightResponse = flightResult
            }

Here is a snapshot of your ACDL file:

Step 4: Update your skill backend to support APL response

We added our response to our dialog, but this response requires some inputs, such as the header title, primary and secondary text, title text, and text alignment. We defined the payload data in the APL file as follows:

"mainTemplate": {
       "parameters": [
       "item": [
               "type": "SimpleText",
               "backgroundImageSource": "",
               "footerHintText": "${payload.flightResponse.display.hintText}",
               "foregroundImageLocation": "${payload.flightResponse.display.foregroundImageLocation}",
               "foregroundImageSource": "${payload.flightResponse.display.foregroundImageSource}",
               "headerAttributionImage": "",
               "headerTitle": "${payload.flightResponse.display.headerTitle}",
               "headerSubtitle": "${payload.flightResponse.display.headerSubtitle}",
               "primaryText": "${payload.flightResponse.display.primaryText}",
               "secondaryText": "${payload.flightResponse.display.secondaryText}",
               "textAlignment": "${payload.flightResponse.display.textAlignment}",
               "titleText": "${payload.flightResponse.display.titleText}"

We need to send the payload from our API to the APL template for the data it requires.

  • Let's start by expanding our database (flight-data.json) to support airport names and arrival and departure times for each location. We will show this data on the screen.

  • Now, we can extend our response for the APL template we created. Let's open the index.js file, create an object called "display" in the flight response, and set its parameters.

const headerTitle = "Flight Search";
const textAlignment = "start";
let response = "";
let primaryText = "";
let secondaryText = "";
let titleText = "";

if (flightData.cost == "") {
    primaryText = `Sorry, I couldn't find any flights from ${util.capitalizeFirstLetter(departure)} to ${util.capitalizeFirstLetter(arrival)}.`;
} else {
    primaryText = flightData.airline;
    secondaryText = `<b>Passengers: </b>1 Adult<br><b>Seat: </b> Main Cabin<br><b>Departure Time: </b> ${flightData.departureTime} <br><b>Arrival Time: </b> ${flightData.arrivalTime} <br> <b>Total Cost: </b>$${flightData.cost}`;
    titleText = `${flightData.departureAirport} to ${flightData.arrivalAirport}`;
}

response = {
    arrivalCity: arrival,
    departureCity: departure,
    date: date,
    time: flightData.time,
    cost: flightData.cost,
    airline: flightData.airline,
    display: {
      headerTitle: headerTitle,
      headerSubtitle: "",
      primaryText: primaryText,
      secondaryText: secondaryText,
      textAlignment: textAlignment,
      titleText: titleText
    }
};


Our API handler will look as follows:
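The handler code itself isn't reproduced here; as a hedged sketch of how an Alexa Conversations API handler typically looks in the ASK SDK v2 (the names FlightFinderApiHandler and findFlight are assumptions for illustration, and the stub lookup stands in for the lab's flight-data.json query):

```javascript
// Stand-in for the lab's flight-data.json lookup (assumed helper).
function findFlight(departure, arrival, date) {
    return { airline: 'Example Air', cost: '100', time: '10:00 AM' };
}

const FlightFinderApiHandler = {
    canHandle(handlerInput) {
        // Alexa Conversations invokes backend APIs via Dialog.API.Invoked requests.
        const request = handlerInput.requestEnvelope.request;
        return request.type === 'Dialog.API.Invoked'
            && request.apiRequest.name === 'FlightFinder';
    },
    handle(handlerInput) {
        const args = handlerInput.requestEnvelope.request.apiRequest.arguments;
        const flightResult = findFlight(args.departureCity, args.arrivalCity, args.date);
        // The object passed to withApiResponse becomes the ACDL payload
        // (flightResponse), including the display fields built above.
        return handlerInput.responseBuilder
            .withApiResponse(flightResult)
            .withShouldEndSession(false)
            .getResponse();
    }
};
```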

Here is a snapshot of index.js:

Step 5: Deploy your changes and test

We are ready to deploy our changes!

  1. Open the terminal and navigate to the main directory of the Flight Search skill.
  2. Compile the code.


askx compile
  3. Deploy the code.


askx deploy
  4. Once the deployment is complete, you can start testing.

(Optional) How to use APL templates on "welcome" and "out-of-domain" responses

We added the first multimodal response to our skill, and you were able to test it. This optional section shows how to add multimodal responses to skill-level responses, such as the welcome prompt.

  1. We need to create APL templates for the "welcome" and "out-of-domain" responses. To do that, let's create two new folders under skill-package/response/display called "WelcomeResponseVisual" and "OutOfDomainResponseVisual".

  2. Now, create a "document.json" file under each folder we created.

  • We will create a "welcome" prompt as below:
Welcome prompt APL

WelcomeResponseVisual/document.json APL code will be as follows:
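The document itself isn't reproduced here; as a minimal sketch of what a welcome document could look like, using the real AlexaHeadline responsive template from alexa-layouts (version numbers and text values are placeholders, not the lab's exact code):

```json
{
  "type": "APL",
  "version": "1.8",
  "import": [
    { "name": "alexa-layouts", "version": "1.4.0" }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "AlexaHeadline",
        "primaryText": "Welcome to Flight Search",
        "secondaryText": "Ask me to find a flight"
      }
    ]
  }
}
```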

  • "Our "out-of-domain" visual response will look as indicated below:
Out-of-domain prompt APL

APL code for OutOfDomainResponseVisual/document.json will be as below:
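Again, the document itself isn't reproduced here; a minimal sketch, analogous to the welcome document and with placeholder text values:

```json
{
  "type": "APL",
  "version": "1.8",
  "import": [
    { "name": "alexa-layouts", "version": "1.4.0" }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "AlexaHeadline",
        "primaryText": "Sorry, I didn't understand that",
        "secondaryText": "Try asking me to find a flight"
      }
    ]
  }
}
```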

  3. We need to use the skill action to set the skill-wide assets. You can learn more about skill actions here.
// Multimodal response for Welcome
multiModalWelcome = MultiModalResponse {
  apl = WelcomeResponseVisual,
  apla = AlexaConversationsWelcome
}

// Multimodal response for Out of domain
multimodalOutOfDomain = MultiModalResponse {
  apl = OutOfDomainResponseVisual,
  apla = AlexaConversationsOutOfDomain
}

// Skill action to set the skill-wide assets
mySkill = skill(
  locales = [Locale.en_US],
  dialogs = [FlightSearch],
  skillLevelResponses = SkillLevelResponses {
    welcome = multiModalWelcome,
    out_of_domain = multimodalOutOfDomain,
    bye = AlexaConversationsBye,
    reqmore = AlexaConversationsRequestMore,
    provide_help = AlexaConversationsProvideHelp
  }
)
Your ACDL file will look like the following snapshot:

  4. Compile the code.


askx compile
  5. Deploy the code.


askx deploy
  6. Once the deployment is complete, you can start testing.


Congratulations! You are now equipped to develop an Alexa Conversations skill by using ACDL and have learned how your skill can support multimodality.


If your skill isn't working or you're getting a syntax error, download the code from the GitHub repository.
