Build visually rich experiences using APL

(beta)

About this learning series

Devices with screens provide an opportunity to add visual experiences to your voice-only skills. While Alexa is inherently voice first, adding visuals greatly enriches the customer experience. Hundreds of millions of Alexa-enabled devices are now in customers' hands, with many more Alexa Built-in devices from third-party manufacturers, and this number is growing.

This learning series will help you learn how to build multimodal interactions to make your voice-only skill more engaging and easier to use. You will learn fundamentals such as the various elements of a multimodal skill, the request and response lifecycle, and Alexa Presentation Language (APL), the language used to create visuals for devices with screens. After learning the foundations, you will embark on a practical hands-on journey where you can practice what you learn within our state-of-the-art e-learning solution. You will write code and receive real-time feedback to help you progressively advance your skills.

Get started today.

Total duration: 4-5 hrs


*By clicking “Launch course”, you will leave the Amazon Developer Portal and be taken to our partner portal alexa.sana.ai. Information collected by or on behalf of Amazon about your use of alexa.sana.ai will be subject to the applicable Amazon Privacy Notice.

Prerequisites

Learners taking this course should have at least one of the following:

  • Familiarity with developing Alexa skills
  • Experience publishing Alexa skills
     

What will you learn?

Alexa Presentation Language (APL) provides you with features that you can use to add visuals to your voice-only skills. In this curriculum, you will learn the fundamentals of APL and how to build multimodal Alexa skills.

Course objectives

After completing this course, you will be able to:

  1. Describe how APL enables you to enrich the user experience
  2. Describe the different ways a user can interact with APL
  3. Explain the APL visual response and describe how to update the visuals on the screen using commands
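To make the third objective concrete, here is a minimal sketch of an ExecuteCommands directive that updates a visual already on screen. The token and componentId values ("welcomeToken", "helloText") are illustrative assumptions, not names from the course:

```json
{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "welcomeToken",
  "commands": [
    {
      "type": "SetValue",
      "componentId": "helloText",
      "property": "text",
      "value": "Hello again!"
    }
  ]
}
```

The token must match the token used when the APL document was rendered; the SetValue command then changes the named property of the target component.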

APL introduces a new dimension to customer engagement by pairing complementary visuals with your voice responses. When developing a multimodal skill, you’ll need to think about the visual display in addition to voice so that customers can interact with the skill in multiple ways.

In the previous course, you learned about APL concepts. Knowledge about how customers interact with APL and the APL lifecycle will help you make the right design choices. This course will leverage your knowledge of APL concepts to help you get started with creating a basic multimodal skill.

Prerequisites

Before you start this course, we recommend that you have the following knowledge/skills:

  • An understanding of foundational APL concepts for building an enriching user experience
  • Familiarity with JSON and Node.js

Course objectives

After completing this course, you will be able to:

1. Create a visual layout using APL

2. Use Node.js to test whether a device supports APL

3. Bind data for the welcome screen

4. Respond to a voice intent

In-course exercise 

In this course, you will create the welcome screen for a game. Tasks include:

1. Displaying text using APL

2. Testing whether a display supports APL

3. Binding data displayed in a visual

4. Displaying customer intent on screen
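Task 2 above can be sketched in plain Node.js against the raw Alexa request envelope. The ASK SDK offers a helper for this check, but the version below runs standalone; the envelope shapes are illustrative examples of the request format, not course code:

```javascript
// Returns true when the requesting device advertises APL support in its
// request envelope under context.System.device.supportedInterfaces.
function supportsApl(requestEnvelope) {
  const device =
    (((requestEnvelope || {}).context || {}).System || {}).device || {};
  const supportedInterfaces = device.supportedInterfaces || {};
  return supportedInterfaces['Alexa.Presentation.APL'] !== undefined;
}

// An APL-capable device includes the interface in its envelope:
const aplEnvelope = {
  context: {
    System: {
      device: { supportedInterfaces: { 'Alexa.Presentation.APL': {} } }
    }
  }
};

console.log(supportsApl(aplEnvelope));                 // true
console.log(supportsApl({ context: { System: {} } })); // false
```

Guarding your RenderDocument directive behind a check like this prevents errors on headless devices such as a standard Echo.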

Devices today have varying viewports. If your skill adapts to different display types, customers can use it across various devices, which in turn improves their experience with your skill.

In the previous course, you learned how to create a basic multimodal skill. Ensuring your skill renders appropriately on different display types can be a daunting task. APL provides you with features to easily address this challenge. This course will enable you to make your skill responsive to Alexa devices with screens.

Prerequisites

Before you start this course, we recommend that you have the following knowledge/skills:

  • Ability to create a skill that displays visuals on a device with a screen

Course objectives

After completing this course, you will be able to:

1. Create a display using responsive APL components

2. Create responsive APL documents

In-course exercise 

In this course, you will convert your skill to be responsive to a device’s display capabilities. Tasks include:

1. Create a display using responsive APL components

2. Create a custom responsive APL document
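As an illustration of the kind of document these tasks produce, here is a minimal sketch of a responsive APL document built on the AlexaHeadline responsive template from the alexa-layouts package. The datasource names (payload, welcomeData) are assumptions for the example, not names from the course:

```json
{
  "type": "APL",
  "version": "2023.2",
  "import": [
    { "name": "alexa-layouts", "version": "1.7.0" }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "AlexaHeadline",
        "primaryText": "${payload.welcomeData.title}",
        "secondaryText": "${payload.welcomeData.subtitle}"
      }
    ]
  }
}
```

Because AlexaHeadline is a responsive template, it adapts its typography and spacing to the device's viewport profile without any per-device logic in your document.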

Most Alexa-enabled devices with a display also support touch or pointer input. To create a holistic experience for the user, your skill should be able to respond to touch or pointer input.

In the last course, you learned how to make your skill responsive to different display types and specifications. Touch, or pointing and selecting, is an intuitive form of user interaction in displays that support touch or pointer inputs. To ensure the user is able to interact with your skill intuitively, you will need to enable your skill to work with touch inputs. 

Throughout this course, you will create an APL document that lets the user tap up and down buttons to select the number they think is the correct guess. The user can then tap the submit button to send the guess to the skill’s backend for validation.
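A minimal sketch of how such a layout might look, assuming a bind variable named guess and an event name guessSubmitted (both illustrative, as is the simplified vectorSource path): the up button uses SetValue to change the bound value, and the submit button uses SendEvent to notify the backend.

```json
{
  "type": "Container",
  "id": "root",
  "bind": [
    { "name": "guess", "value": 0, "type": "number" }
  ],
  "items": [
    {
      "type": "AlexaIconButton",
      "accessibilityLabel": "Increase guess",
      "vectorSource": "M0,8 L8,0 L16,8",
      "primaryAction": [
        {
          "type": "SetValue",
          "componentId": "root",
          "property": "guess",
          "value": "${guess + 1}"
        }
      ]
    },
    {
      "type": "Text",
      "text": "${guess}"
    },
    {
      "type": "AlexaButton",
      "buttonText": "Submit",
      "primaryAction": [
        {
          "type": "SendEvent",
          "arguments": ["guessSubmitted", "${guess}"]
        }
      ]
    }
  ]
}
```

Because the Text component is data-bound to guess, it re-renders automatically whenever SetValue changes the binding; a matching down button would use "${guess - 1}".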

Prerequisites

Before you start this course, we recommend that you have the following knowledge/skills:

  • Ability to create a skill that uses responsive components to display visuals

Why is this course important?

Many Alexa-enabled devices support visual content displayed on screen. These devices also accept touch or pointer input, and supporting that input makes your skill more engaging for users.

Course objectives

After completing this course, you will be able to:

  • Create an APL document that responds to touch input using AlexaIconButton and AlexaButton
  • Modify data-bound variables based on button interaction
  • Send an event to your skill's backend and capture it
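The last objective can be sketched in Node.js. When the submit button runs SendEvent, the skill backend receives an Alexa.Presentation.APL.UserEvent request whose arguments array carries the values from the document. The event name guessSubmitted and the envelope shape below are illustrative assumptions:

```javascript
// Extracts the submitted guess from an APL UserEvent request, or returns
// null if the request is not the expected event.
function extractGuess(requestEnvelope) {
  const request = (requestEnvelope || {}).request || {};
  if (request.type !== 'Alexa.Presentation.APL.UserEvent') return null;
  const [eventName, guess] = request.arguments || [];
  return eventName === 'guessSubmitted' ? guess : null;
}

// A UserEvent envelope as the backend would receive it after SendEvent:
const userEvent = {
  request: {
    type: 'Alexa.Presentation.APL.UserEvent',
    arguments: ['guessSubmitted', 7]
  }
};

console.log(extractGuess(userEvent)); // 7
```

In a real handler, you would register this logic for requests of type Alexa.Presentation.APL.UserEvent and then validate the guess against the game state.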