APL Accessibility Guide


When you build a skill that uses Alexa Presentation Language (APL), make sure your visuals are accessible to all users, regardless of any temporary, permanent, or situational disabilities.

What's an accessible APL skill?

An accessible APL skill provides a good experience for users with different disabilities and limitations. A custom skill that lets users navigate by voice, by touch, and through the screen reader interface is accessible to more users:

  • Users with visual impairments can interact with the skill using their voice, the screen reader, or both.
  • Users with hearing impairments can read the content on the screen and respond by touch.
  • Users with speech impairments can listen to spoken content and respond by touch.
  • Users with mobility impairments, who might have difficulty touching the screen, can interact with the skill using their voice.

For custom skill accessibility design considerations and more about different types of disabilities, see Make Your Skill Accessible to All.

How APL supports accessibility

The following sections summarize how a skill with APL visuals supports accessibility.

Custom voice interaction model

Every custom skill defines a custom voice interaction model. This model defines the words users say when making requests to your skill. Exposing all of your skill's functionality through voice intents makes your skill accessible, both to users with disabilities and users who might use your skill on a device that doesn't have a screen.
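For example, a skill that walks users through recipes might expose its "start cooking" functionality with an intent like the following sketch (the invocation name, intent name, and utterances here are illustrative, not part of any built-in model):

  {
    "interactionModel": {
      "languageModel": {
        "invocationName": "recipe helper",
        "intents": [
          {
            "name": "StartRecipeIntent",
            "samples": [
              "start the recipe",
              "begin cooking",
              "walk me through the recipe"
            ]
          }
        ]
      }
    }
  }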

For details about building the interaction model for your skill, see the following:

Screen reader support for APL visuals

APL supports screen readers that describe the items on the screen. A screen reader improves accessibility for users with visual impairments by letting the user interact with the visual display directly. When you build your APL visuals, you provide information about the components on the screen. The screen reader uses this information to describe those components to the user.
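For example, you can give a component an accessibilityLabel that the screen reader speaks in place of its visual content, and a role that tells the screen reader how the component behaves (the label text and layout in this sketch are illustrative):

  {
    "type": "TouchWrapper",
    "role": "button",
    "accessibilityLabel": "Start the recipe",
    "item": {
      "type": "Text",
      "text": "Start"
    }
  }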

Echo Show devices include the VoiceView screen reader.

Ensuring your skill works well with the screen reader is a key step in creating an accessible skill. For details about building APL documents that support the screen reader, see the following:

Voice control for interactive objects in your visuals

APL supports voice-enabling interactive items, such as buttons and links. Voice-enabled items improve accessibility for users who find it difficult to select items by touching the screen, because the user can interact with the screen through utterances instead. For example, a user could say "select the start button" to indicate that they want to "tap" a button on the screen.

To voice-enable your visual, create custom intents and utterances that correspond to the interactive objects in your visual. Write intent handlers to process these requests. For details about creating custom intents and writing request handlers, see the following:
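For example, to support "select the start button," your interaction model might include a custom intent along these lines (the intent name and utterances are hypothetical; your handler would run the same logic as a touch event on the button):

  {
    "name": "SelectStartButtonIntent",
    "samples": [
      "select the start button",
      "press the start button",
      "start button"
    ]
  }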

Voice control for lists

When your visual displays a list of items, Alexa can read out all the items on the list. Reading out list items improves accessibility for users with visual impairments and for those who are too far away from the device to read the items.

If your list items are also interactive objects, users can select items on the list by voice with utterances such as "select the third one". Letting users select items by voice improves accessibility for users who find it difficult to select items by touching the screen.

Use the APL SpeakList command and a transformer to speak the items on a list. The SpeakList command can speak text, SSML, or audio provided by APL for audio. For details and examples of reading list items with SpeakList, see the following:
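For example, a command that reads out the first three items of a list might look like this sketch (it assumes your document contains a list component with the id recipeList; the id and values are illustrative):

  {
    "type": "SpeakList",
    "componentId": "recipeList",
    "start": 0,
    "count": 3,
    "minimumDwellTime": 500,
    "align": "center"
  }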

Use the AMAZON.SelectIntent built-in intent to let users select list items by voice. The IntentRequest sent to your skill includes information about the item the user requested. For details and examples of selecting list items, see the following:
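For example, for the utterance "select the third one," the IntentRequest your skill receives includes the ordinal position in a slot, along these lines (a trimmed, illustrative sketch of the request payload):

  {
    "type": "IntentRequest",
    "intent": {
      "name": "AMAZON.SelectIntent",
      "slots": {
        "ListPosition": {
          "name": "ListPosition",
          "value": "3"
        }
      }
    }
  }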

Scroll content by voice

When your APL visual displays scrolling content, such as text or a list of items, you can use built-in scrolling intents to let users scroll the content with utterances such as "scroll up." Letting users scroll content by voice improves accessibility for users who might have difficulty swiping on the screen, and for users who are further away from the device when interacting with the list.

To enable voice scrolling, include an id on the scrolling component, such as a Sequence or ScrollView (see the sketch after this list). Then, add the built-in intents to your interaction model:

  • AMAZON.ScrollDownIntent
  • AMAZON.ScrollUpIntent
  • AMAZON.PageUpIntent
  • AMAZON.PageDownIntent
  • AMAZON.MoreIntent

Alexa handles these intents for you, so you don't need to write your own intent handlers.
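For example, a scrolling list only needs an id for the scrolling intents to target it, as in this sketch (the id, data binding, and layout are illustrative):

  {
    "type": "Sequence",
    "id": "recipeList",
    "height": "100%",
    "data": "${payload.recipes}",
    "items": {
      "type": "Text",
      "text": "${data.name}"
    }
  }

With the intents above in your interaction model, "Alexa, scroll down" scrolls this Sequence without any handler code in your skill.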

For details about the scrolling intents, see the following:

Speech and text synchronization

When your skill displays longer portions of text that Alexa reads out, you can synchronize the spoken text with the text on the screen. The device highlights the text as Alexa speaks it, creating a "karaoke" effect. Reading the text out loud while highlighting the words on the screen improves accessibility for users with hearing impairments and for those who want to listen to spoken text while following along on the screen.
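At a high level, the synchronization uses the APL SpeakItem command with line highlighting, along these lines (a partial sketch; it assumes a Text component with the id storyText whose speech property is bound to the output of a transformer such as textToSpeech):

  {
    "type": "SpeakItem",
    "componentId": "storyText",
    "highlightMode": "line",
    "align": "center"
  }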

For details about speech synchronization, see Synchronize Spoken Text with Text on the Screen.

Closed captions for video content

When your skill plays video content using the Video component, you can include captions in a text track file. The video player displays these captions during video playback if the device has video closed captions enabled.

You provide these captions in the textTrack property on the source object that identifies the video. For details, see Video component: textTrack property. For details about how users enable closed captioning on Echo Show devices, see Turn On Captioning on Echo Devices with a Screen.
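For example, a video source with one caption track might look like this sketch (the URLs are placeholders; confirm the exact shape of the text track object and the supported caption file formats in the Video component reference):

  {
    "type": "Video",
    "source": [
      {
        "url": "https://example.com/video/recipe-intro.mp4",
        "textTrack": [
          {
            "url": "https://example.com/video/recipe-intro-captions.srt",
            "type": "caption"
          }
        ]
      }
    ]
  }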

Test your skill for accessibility

As you build your skill, test it for accessibility.

  • Test every visual you display with the VoiceView screen reader.
    • Use a device to test your skill with VoiceView. The screen reader isn't available in the simulator.
    • Verify that the screen reader describes all items displayed on the screen.
    • Test all interactive components, such as buttons and links. Verify that the screen reader describes these items and that you can activate them when the screen reader is enabled.
  • Make sure that you can select all buttons, links, and other interactive elements with voice requests.
  • Make sure that you can scroll any content both by touch and by voice.

For more about testing skills, see Test and Debug Your Skill.



Last updated: Nov 28, 2023