Build Interactive, Multimodal Experiences with Alexa Presentation Language (APL) 1.4

Arunjeet Singh Jul 22, 2020

We are excited to announce the next version of the Alexa Presentation Language (APL), which makes it easier to build interactive visual experiences with new capabilities and improved tooling. APL 1.4 lets you add editable text boxes, drag-and-drop UI controls, and back navigation so customers can return to previous screens. You can use new Alexa responsive components and templates to quickly add visuals that enhance the voice experience, and you can live-preview your APL documents in the authoring tool. We have also released a major update to the Alexa Skills Kit (ASK) toolkit for Visual Studio Code (VS Code) that adds APL rendering and local debugging. Learn more about APL 1.4 in our technical documentation.

New APL Features

APL 1.4 supports user gestures and new components you can incorporate into your multimodal skills.

  • New Gestures: Take advantage of new drag-and-drop UI controls such as single-finger drag, swipe to delete, and long-press gestures (see the gesture sketch after this list).
  • Editable Text Boxes: Add editable text boxes that let customers enter a text response via touch or a TV remote (see the EditText sketch after this list).
  • Grid Layout Support: Use the GridSequence component to add a list of text and images to a fixed grid layout that scrolls in a single direction, vertically or horizontally (see the GridSequence sketch after this list). You can also take advantage of the pre-built AlexaGridList responsive template, which uses GridSequence.
  • Back Navigation: Add back navigation so customers can easily return to previous screens in your skill with touch. Back navigation requires requesting the backstack extension from the APL extension framework (see the backstack sketch after this list).
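For example, here is a minimal sketch of a TouchWrapper that registers the new SwipeAway and LongPress gestures. The text, the red Frame revealed behind the item, and the SendEvent arguments are placeholders you would replace with your own layout and handlers.

```json
{
  "type": "APL",
  "version": "1.4",
  "mainTemplate": {
    "items": [
      {
        "type": "TouchWrapper",
        "width": "100%",
        "height": "100dp",
        "item": {
          "type": "Text",
          "text": "Swipe left to delete, or press and hold for options"
        },
        "gestures": [
          {
            "type": "SwipeAway",
            "direction": "left",
            "action": "reveal",
            "items": {
              "type": "Frame",
              "width": "100%",
              "height": "100%",
              "backgroundColor": "red"
            },
            "onSwipeDone": [
              { "type": "SendEvent", "arguments": ["itemDeleted"] }
            ]
          },
          {
            "type": "LongPress",
            "onLongPressEnd": [
              { "type": "SendEvent", "arguments": ["itemOptions"] }
            ]
          }
        ]
      }
    ]
  }
}
```

The SendEvent commands arrive at your skill as Alexa.Presentation.APL.UserEvent requests, so your request handler can delete the item or show its options.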
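The new EditText component captures typed input. In this minimal sketch, onSubmit reports the entered text back to the skill; the hint text is just an example, and the ${event.source.value} binding reflects our reading of the EditText event context, so confirm it against the component reference.

```json
{
  "type": "APL",
  "version": "1.4",
  "mainTemplate": {
    "items": [
      {
        "type": "EditText",
        "width": "80%",
        "hint": "Enter a nickname",
        "keyboardType": "normal",
        "onSubmit": [
          { "type": "SendEvent", "arguments": ["${event.source.value}"] }
        ]
      }
    ]
  }
}
```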
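Here is a minimal GridSequence sketch that binds a datasource to a two-column, vertically scrolling grid. The payload parameter, the gridData.items path, and the child dimensions are illustrative values, not required names.

```json
{
  "type": "APL",
  "version": "1.4",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "GridSequence",
        "width": "100%",
        "height": "100%",
        "scrollDirection": "vertical",
        "childWidths": ["50%", "50%"],
        "childHeight": "200dp",
        "data": "${payload.gridData.items}",
        "items": [
          {
            "type": "Text",
            "text": "${data.title}",
            "textAlign": "center"
          }
        ]
      }
    ]
  }
}
```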
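And here is a sketch of the back-navigation pattern: the document requests the backstack extension, assigns itself a backstack ID, and issues the extension's GoBack command from a back button that only renders when the extension is available. The URI, settings block, and command name follow the backstack extension documentation as we understand it; the backstackId value is a placeholder, so double-check the details in the extensions reference.

```json
{
  "type": "APL",
  "version": "1.4",
  "extensions": [
    {
      "name": "Back",
      "uri": "aplext:backstack:10"
    }
  ],
  "settings": {
    "Back": {
      "backstackId": "detailScreen"
    }
  },
  "mainTemplate": {
    "items": [
      {
        "type": "TouchWrapper",
        "when": "${environment.extension.Back}",
        "item": { "type": "Text", "text": "< Back" },
        "onPress": [
          { "type": "Back:GoBack" }
        ]
      }
    ]
  }
}
```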

New Alexa Responsive Components & Templates

Responsive Components:

  • AlexaProgressBar: Display a linear progress bar so customers understand ongoing activity.
  • AlexaProgressBarRadial: Display a circular progress bar so customers understand ongoing activity on Echo Spot devices.
  • AlexaProgressDots: Display animated dots so customers understand an action is in progress.
  • AlexaSlider: Display an interactive progress bar that customers can drag back and forth to change settings (see the usage sketch after this list).
  • AlexaSliderRadial: Display an interactive circular progress bar that customers can interact with on Echo Spot devices.
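To use any of these components, import the alexa-layouts package and add the component like any other APL layout. The sketch below is illustrative: we believe alexa-layouts 1.2.0 is the version that introduces these components, and parameter names such as progressValue, totalValue, and onUpCommand should be confirmed against the alexa-layouts reference for your version.

```json
{
  "type": "APL",
  "version": "1.4",
  "import": [
    {
      "name": "alexa-layouts",
      "version": "1.2.0"
    }
  ],
  "mainTemplate": {
    "items": [
      {
        "type": "Container",
        "items": [
          {
            "type": "AlexaProgressBar",
            "progressValue": 30,
            "totalValue": 100
          },
          {
            "type": "AlexaSlider",
            "progressValue": 30,
            "totalValue": 100,
            "onUpCommand": [
              { "type": "SendEvent", "arguments": ["sliderReleased"] }
            ]
          }
        ]
      }
    ]
  }
}
```

The slider's onUpCommand is a natural place to report the chosen value back to your skill; check the AlexaSlider documentation for the exact event bindings that expose the slider position.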

Responsive Templates:

  • AlexaDetail: Display text alongside an image about a specific entity such as a person, place, or thing. Includes four variants: Generic, Movies and TV, Location, and Recipe.
  • AlexaGridList: Display a list of images and text in a grid. Configure the list's appearance, such as showing dividers and numbering the items (see the sketch after this list).
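As a sketch of the template route, the document below imports alexa-layouts and passes a datasource to AlexaGridList. Treat the parameter names and the datasource path as illustrative; they mirror the other Alexa list templates, and the AlexaGridList reference documents the full parameter set (dividers, ordinals, and item fields such as primaryText and imageSource).

```json
{
  "type": "APL",
  "version": "1.4",
  "import": [
    { "name": "alexa-layouts", "version": "1.2.0" }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "AlexaGridList",
        "headerTitle": "Today's specials",
        "listItems": "${payload.gridListData.items}"
      }
    ]
  }
}
```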

Improved APL Tooling Options

Preview Mode in Authoring Tool
You can now use preview mode to preview touch events, commands, video, and other aspects of your APL documents in the APL authoring tool. With today's launch of APL for audio (beta), you can preview your APL for audio documents in preview mode as well.

Screenshot of APL authoring tool in preview mode

Alexa Skills Kit (ASK) Toolkit for VS Code
You can now build, edit, and preview your APL documents from within your local IDE with the Alexa Skills Kit (ASK) toolkit for VS Code. Starting today, you can add visuals to your skills without leaving your favorite IDE, with built-in features such as code snippets, validation, instant preview, and the ability to download and save APL documents.

Screenshot of the ASK toolkit for VS Code previewing an APL document

JSX for APL
JSX for APL is an experimental, JSX-based framework that enables you to author APL documents using JSX and React, together with the ASK SDK v2 for Node.js. With JSX for APL, you can leverage your existing knowledge of web technologies to add rich visual experiences to your skills. You can also share your components and reuse others' components on npm or GitHub.

Multimodal Responses Developer Preview
The multimodal responses developer preview enables you to more easily design and implement audio and visual responses in your skills. Specifically, you'll be able to link audio and visual responses, use a simplified workflow to navigate between the audio and visual authoring tools, link multiple audio responses to a single visual response, and render a unique runtime multimodal response ID for an integrated multimodal response. You can sign up for the developer preview here.

Get Started Today

Adding visuals and touch can enhance voice experiences and make skills even more engaging and interactive for customers. As a reminder, you can take advantage of many different APL features to create visually rich Alexa experiences. For example, you can use the AnimateItem command to animate the position, scale, rotation, or opacity of any APL component or layout, as in the sketch below. You can also combine animation with Alexa Vector Graphics to create new, visually engaging experiences.
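As a hedged sketch, the AnimateItem command below fades a component in while sliding it into place; the componentId, duration, and transform values are placeholders.

```json
{
  "type": "AnimateItem",
  "componentId": "heroImage",
  "duration": 1000,
  "easing": "ease-in-out",
  "value": [
    {
      "property": "opacity",
      "from": 0,
      "to": 1
    },
    {
      "property": "transform",
      "from": [ { "translateX": "100dp" } ],
      "to": [ { "translateX": 0 } ]
    }
  ]
}
```

You could run a command like this from a component's onMount handler or send it from your skill with the Alexa.Presentation.APL.ExecuteCommands directive.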

Get started today and learn more here. Please reach out to Amazon product manager Arun (@aruntalkstech) on Twitter if you have any questions.
