AVS Display Cards for Low-Resolution Screens


Display Cards for Alexa provide visuals to support verbal responses. Devices with low-resolution screens (e.g., dot-matrix screens) are not required to provide a GUI response for all domains. Additionally, see the guidance below for specific cases where a low-resolution screen can display selected pieces of the returned JSON rather than all of the content. If your implementation is on a low-res screen, we expect that:

  1. The GUI response will appear as soon as Alexa begins responding or media begins playing, and the contents will match the Alexa response or media.

  2. The GUI response will be consistent with your device’s visual style.

GUI Responses

Data is currently returned for a subset of Alexa responses, including general knowledge, lists (to-do lists, shopping lists, and calendar lists), weather, and music or other audio requests. The JSON returned for each template is shown below.

BodyTemplate1

BodyTemplate1 is used for Q&A, Wikipedia queries, and third-party Skill requests that do not contain a photo. Sample utterances that would invoke BodyTemplate1 include:

  1. “How deep is the ocean?”
  2. “What is the definition of ‘paradox’?”
  3. "What is the Karman line?"
  4. "What is bike polo?"

Most low-res screens are not expected to display a GUI response to these questions, as the responses are generally verbose.

JSON


{
  "directive": {
    "header": {
      "namespace": "TemplateRuntime",
      "name": "RenderTemplate"
    },
    "payload": {
      "token": "{{STRING}}",
      "type": "BodyTemplate1",
      "title": {
        "mainTitle": "Who is Usain Bolt?",
        "subTitle": "Wikipedia"
      },
      "skillIcon": {
        "sources": [
          {
            "url": "https://example.com/smallSkillIcon.png",
            "size": "small"
          }
        ]
      },
      "textField": "Usain St Leo Bolt, OJ, CD born 21..."
    }
  }
}
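As a sketch of how a device might consume this directive, the Python below pulls out the fields a low-res screen could show (title, subtitle, and text). The helper name is hypothetical, and the skillIcon is ignored since many low-res screens cannot render it; device implementations are typically written in other languages (e.g., C++ with the AVS Device SDK), so treat this purely as an illustration of the payload shape.

```python
import json

def extract_body_template1(directive_json):
    """Hypothetical helper: pull the displayable fields from a
    BodyTemplate1 RenderTemplate directive. skillIcon is skipped,
    since many low-res screens cannot render images."""
    payload = json.loads(directive_json)["directive"]["payload"]
    return {
        "mainTitle": payload["title"]["mainTitle"],
        "subTitle": payload["title"]["subTitle"],
        "textField": payload["textField"],
    }

# Abbreviated version of the directive shown above:
directive = """
{
  "directive": {
    "header": {"namespace": "TemplateRuntime", "name": "RenderTemplate"},
    "payload": {
      "token": "token-123",
      "type": "BodyTemplate1",
      "title": {"mainTitle": "Who is Usain Bolt?", "subTitle": "Wikipedia"},
      "textField": "Usain St Leo Bolt, OJ, CD born 21..."
    }
  }
}
"""
fields = extract_body_template1(directive)
print(fields["mainTitle"])  # Who is Usain Bolt?
```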

BodyTemplate2

BodyTemplate2, like BodyTemplate1, is used for Q&A, Wikipedia queries, and third-party Skill requests but, unlike BodyTemplate1, it also returns an image. Sample utterances that would invoke BodyTemplate2 include:

  1. “Who is Usain Bolt?”
  2. “What is 5 miles in kilometers?”
  3. “Who wrote To Kill a Mockingbird?”
  4. “Where is New Mexico?”

As with BodyTemplate1, most low-res screens are not expected to display a GUI response to these questions: the responses are generally verbose, and low-res screens might not be able to render the image.

JSON


{
  "directive": {
    "header": {
      "namespace": "TemplateRuntime",
      "name": "RenderTemplate"
    },
    "payload": {
      "token": "{{STRING}}",
      "type": "BodyTemplate2",
      "title": {
        "mainTitle": "Who is Usain Bolt?",
        "subTitle": "Wikipedia"
      },
      "skillIcon": {
        "sources": [
          {
            "url": "https://example.com/smallSkillIcon.png",
            "size": "small"
          }
        ]
      },
      "textField": "Usain St Leo Bolt, OJ, CD Born 21 August...",
      "image": {
        "contentDescription": "Image with two sources.",
        "sources": [
          {
            "url": "https://example.com/smallUsainBolt.jpg",
            "size": "small"
          },
          {
            "url": "https://example.com/largeUsainBolt.jpg",
            "size": "large",
            "widthPixels": 1200,
            "heightPixels": 800
          }
        ]
      }
    }
  }
}
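When an image object carries multiple sources, a device should pick the one that best matches its screen. The sketch below, in Python for illustration, prefers a requested size and falls back to the first listed source; the fallback policy is an assumption, not something the payload specifies.

```python
def pick_image_source(image, preferred="small"):
    """Return the URL of the source matching the preferred size.
    Falls back to the first source (an assumed policy), or None
    if there are no sources at all."""
    sources = image.get("sources", [])
    for src in sources:
        if src.get("size") == preferred:
            return src["url"]
    return sources[0]["url"] if sources else None

# The image object from the BodyTemplate2 sample above:
image = {
    "contentDescription": "Image with two sources.",
    "sources": [
        {"url": "https://example.com/smallUsainBolt.jpg", "size": "small"},
        {"url": "https://example.com/largeUsainBolt.jpg", "size": "large",
         "widthPixels": 1200, "heightPixels": 800},
    ],
}
print(pick_image_source(image))  # https://example.com/smallUsainBolt.jpg
```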

ListTemplate1

ListTemplate1 is used to display items in a list, such as a calendar or shopping list. Sample utterances that would invoke ListTemplate1 include:

  1. “What’s on my to do list?”
  2. “Add eggs to my shopping list.”
  3. “When is my next event?”
  4. “Add “Lunch with Jayla” to my calendar.”

JSON


{
  "directive": {
    "header": {
      "namespace": "TemplateRuntime",
      "name": "RenderTemplate"
    },
    "payload": {
      "token": "{{STRING}}",
      "type": "ListTemplate1",
      "title": {
        "mainTitle": "Title",
        "subTitle": "Subtitle"
      },
      "skillIcon": {
        "contentDescription": "Source for the skill icon.",
        "sources": [
          {
            "url": "https://example.com/smallSkillIcon.jpg",
            "size": "small"
          }
        ]
      },
      "listItems": [
        {
          "leftTextField": "1.",
          "rightTextField": "Alfa"
        },
        {
          "leftTextField": "2.",
          "rightTextField": "Bravo"
        },
        {
          ...
        }
      ]
    }
  }
}
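Lists map naturally onto character displays: each listItem pairs a left text field (typically an ordinal) with a right text field (the item). The Python sketch below formats the payload as fixed-width lines; the 16-character width and truncation behavior are assumptions for a hypothetical display, not part of the template.

```python
def render_list_lines(payload, width=16):
    """Format ListTemplate1 items as fixed-width lines for a
    character display. The width and simple truncation are
    illustrative assumptions."""
    lines = [payload["title"]["mainTitle"][:width]]
    for item in payload["listItems"]:
        left = item.get("leftTextField", "")
        right = item.get("rightTextField", "")
        lines.append(f"{left} {right}"[:width])
    return lines

# A payload shaped like the ListTemplate1 sample above:
payload = {
    "type": "ListTemplate1",
    "title": {"mainTitle": "Shopping List", "subTitle": ""},
    "listItems": [
        {"leftTextField": "1.", "rightTextField": "Alfa"},
        {"leftTextField": "2.", "rightTextField": "Bravo"},
    ],
}
for line in render_list_lines(payload):
    print(line)
```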

WeatherTemplate

WeatherTemplate is used with all weather-related utterances, such as:

  1. “What’s the weather?”
  2. “Will it rain today?”
  3. “What’s the weather in [location]?”

Weather utterances are good candidates for low-resolution screens: displaying just the most important information, currentWeather and currentWeatherIcon, concisely answers the user’s question.

JSON


{
  "directive": {
    "header": {
      "namespace": "TemplateRuntime",
      "name": "RenderTemplate"
    },
    "payload": {
      "token": "{{STRING}}",
      "type": "WeatherTemplate",
      "title": {
        "mainTitle": "San Francisco",
        "subTitle": "Friday, October 31"
      },
      "skillIcon": null,
      "currentWeather": "75°",
      "description": "Mostly cloudy and more humid with a couple of showers and ...",
      "currentWeatherIcon": {
        "contentDescription": "Weather image sources.",
        "sources": [
          {
            "url": "https://example.com/mediumPartlyCloudy.jpg",
            "size": "medium"
          }
        ]
      },
      "highTemperature": {
        "value": "76°",
        "arrow": {
          "contentDescription": "Up arrow sources.",
          "sources": [
            {
              "url": "https://example.com/mediumUpArrow.jpg",
              "size": "medium"
            }
          ]
        }
      },
      "lowTemperature": {
        "value": "45°",
        "arrow": {
          "contentDescription": "Down arrow sources.",
          "sources": [
            {
              "url": "https://example.com/mediumDownArrow.jpg",
              "size": "medium"
            }
          ]
        }
      },
      "weatherForecast": [
        {
          "image": {
            "contentDescription": "Partly cloudy...",
            "sources": [
              {
                "url": "https://example.com/smallChanceOfRain.jpg",
                "size": "small"
              }
            ]
          },
          "day": "Sat",
          "date": "Oct 22",
          "highTemperature": "71°",
          "lowTemperature": "55°"
        },
        {
          ...
        }
      ]
    }
  }
}
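Following the guidance above, a low-res device can reduce this payload to the current conditions and icon. The Python sketch below does exactly that; the helper name is hypothetical, and it takes the first icon source regardless of size for simplicity.

```python
def weather_summary(payload):
    """Hypothetical helper: reduce a WeatherTemplate payload to
    the fields highlighted for low-res screens. Takes the first
    icon source for simplicity."""
    icon = payload.get("currentWeatherIcon") or {}
    sources = icon.get("sources", [])
    return {
        "location": payload["title"]["mainTitle"],
        "currentWeather": payload["currentWeather"],
        "iconUrl": sources[0]["url"] if sources else None,
    }

# Abbreviated version of the WeatherTemplate payload above:
payload = {
    "type": "WeatherTemplate",
    "title": {"mainTitle": "San Francisco", "subTitle": "Friday, October 31"},
    "currentWeather": "75°",
    "currentWeatherIcon": {
        "contentDescription": "Weather image sources.",
        "sources": [
            {"url": "https://example.com/mediumPartlyCloudy.jpg",
             "size": "medium"}
        ],
    },
}
print(weather_summary(payload)["currentWeather"])  # 75°
```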

NowPlaying Cards

NowPlaying is used for media requests. Sample utterances that would invoke NowPlaying include:

  1. “Play jazz music”
  2. “Play Smoke & Retribution”
  3. “Play Freakonomics on iHeartRadio”
  4. “Play a country station from [third-party music provider]”

The GUI response should update for each new song to always match what is currently playing, except in instances where no metadata is returned (in the case of an error). If a full metadata string does not fit onto the screen at once, you may scroll the text to display it in its entirety.
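One way to scroll long metadata on a fixed-width display is a simple marquee: step a fixed-width window across the string, wrapping around with a small gap. The Python sketch below generates the frames; frame timing, width, and gap are all assumptions for a hypothetical display.

```python
def marquee_frames(text, width, gap=3):
    """Return successive fixed-width windows over `text` so a long
    metadata string can scroll across a narrow display. `gap`
    blank columns separate the wrapped-around copy. Width and gap
    are illustrative choices, not AVS requirements."""
    if len(text) <= width:
        return [text]  # fits as-is; no scrolling needed
    padded = text + " " * gap
    # Doubling `padded` lets each window wrap past the end cleanly.
    return [(padded + padded)[i:i + width] for i in range(len(padded))]

frames = marquee_frames("Smoke & Retribution", width=8)
print(frames[1])  # moke & R
```

A device would display one frame per tick (say, every 250 ms) until the full string has scrolled past.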

If the user pauses media playback (“Alexa Stop/Play/Cancel”) and initiates no new interaction with Alexa, the metadata should remain for one minute and then be dismissed.
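The one-minute dismissal can be tracked with a simple timestamp check polled from the device’s render loop. The class and method names below are hypothetical; the injectable clock exists only so the behavior can be exercised without waiting a real minute.

```python
import time

DISMISS_AFTER_S = 60  # metadata lingers one minute after pause

class NowPlayingCard:
    """Sketch of the dismissal rule above: poll should_dismiss()
    from the device's render loop. Names are hypothetical."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._paused_at = None

    def on_pause(self):
        self._paused_at = self._clock()

    def on_new_interaction(self):
        # The user re-engaged with Alexa; keep (or replace) the card.
        self._paused_at = None

    def should_dismiss(self):
        return (self._paused_at is not None
                and self._clock() - self._paused_at >= DISMISS_AFTER_S)

# Simulated clock so the timing can be checked instantly:
now = [0.0]
card = NowPlayingCard(clock=lambda: now[0])
card.on_pause()
now[0] = 59.0
print(card.should_dismiss())  # False
now[0] = 60.0
print(card.should_dismiss())  # True
```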

VUI Playback Commands

The customer can control media playback using either the GUI or VUI. VUI playback commands include:

  • Play
  • Stop
  • Cancel
  • Pause
  • Resume
  • Next
  • Previous
  • Rewind
  • Start Over
  • Louder
  • Softer
  • Set Volume
  • Mute
  • Unmute
  • Shuffle
  • Restart
  • Get details
  • Who is this?

JSON: Amazon Prime Music

The name key-value pair may come back as “Amazon Music”, “Prime Music”, or “Prime Station”. If Prime is not enabled, name will be “Digital Music Store”.


{
  "directive": {
    "header": {
      "namespace": "TemplateRuntime",
      "name": "RenderPlayerInfo",
      "messageId": "{{STRING}}",
      "dialogRequestId": "{{STRING}}"
    },
    "payload": {
      "audioItemId": "{{STRING}}",
      "content": {
        "title": "{{STRING}}",
        "titleSubtext1": "{{STRING}}",
        "titleSubtext2": "{{STRING}}",
        "header": "{{STRING}}",
        "headerSubtext1": "{{STRING}}",
        "mediaLengthInMilliseconds": {{LONG}},
        "art": {{IMAGE_STRUCTURE}},
        "provider": {
          "name": "{{STRING}}",
          "logo": {{IMAGE_STRUCTURE}}
        }
      },
      "controls": [
        // This array includes all controls that must be
        // rendered on-screen.
        {
          "type": "{{STRING}}",
          "name": "{{STRING}}",
          "enabled": {{BOOLEAN}},
          "selected": {{BOOLEAN}}
        },
        {
          "type": "{{STRING}}",
          "name": "{{STRING}}",
          "enabled": {{BOOLEAN}},
          "selected": {{BOOLEAN}}
        },
        {
          ...
        }
      ]
    }
  }
}
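A GUI needs to turn the controls array into button state. The Python sketch below maps it to a dictionary keyed by control name; the helper name and the specific control names in the sample payload (PREVIOUS, PLAY_PAUSE, NEXT) are illustrative stand-ins for whatever strings the directive actually delivers.

```python
def control_states(payload):
    """Map the RenderPlayerInfo `controls` array to a dict the GUI
    can use to enable/disable its transport buttons. Helper name
    is hypothetical."""
    return {
        c["name"]: {"enabled": c["enabled"], "selected": c["selected"]}
        for c in payload.get("controls", [])
    }

# Abbreviated payload with illustrative control names:
payload = {
    "audioItemId": "item-1",
    "content": {"title": "Smoke & Retribution"},
    "controls": [
        {"type": "BUTTON", "name": "PREVIOUS",
         "enabled": True, "selected": False},
        {"type": "BUTTON", "name": "PLAY_PAUSE",
         "enabled": True, "selected": True},
        {"type": "BUTTON", "name": "NEXT",
         "enabled": False, "selected": False},
    ],
}
states = control_states(payload)
print(states["NEXT"]["enabled"])  # False
```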

Interruption Scenarios

If the user interrupts Alexa during TTS playback (via tap or wake word), the GUI response should update as soon as Alexa begins speaking—either displaying the new GUI response or removing the GUI, depending on what matches the new TTS.

The Alexa attention states should be displayed clearly and promptly whenever the user interrupts Alexa.

NowPlaying Interruptions

If Alexa is playing music and a user interrupts, the music attenuates immediately while Alexa enters the listening state. (Audio playback may be silenced entirely if attenuation is technically prohibitive.) If the user:

  • Requests new media, then the Alexa audio and GUI response update to match the new request.
  • Provokes a response that is not new media, then the music stays attenuated for the duration of the Alexa response and resumes at the regular volume once the response finishes.
  • Provokes an error from Alexa, then the music resumes at its original volume.
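The ducking behavior above can be summarized as a small state-to-volume mapping. In the Python sketch below, the volume levels and state names are illustrative assumptions, not values specified by AVS; the fallback to full mute covers devices where attenuation is technically prohibitive.

```python
# Illustrative levels; AVS does not mandate specific values.
NORMAL = 1.0
ATTENUATED = 0.2

def playback_volume(alexa_state, supports_attenuation=True):
    """Return the music volume for a given Alexa attention state.
    While Alexa is listening, thinking, or speaking, music ducks
    (or mutes entirely if the device cannot attenuate). State
    names here are illustrative."""
    if alexa_state in ("listening", "thinking", "speaking"):
        return ATTENUATED if supports_attenuation else 0.0
    return NORMAL  # idle: resume at the original volume

print(playback_volume("listening"))        # 0.2
print(playback_volume("idle"))             # 1.0
print(playback_volume("speaking", False))  # 0.0
```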

Last updated: Nov 27, 2023