We are excited to announce Alexa Presentation Language (APL) 1.4, which enables you to build interactive visual experiences with new capabilities and improved tooling. APL 1.4 lets you add editable text boxes, drag-and-drop UI controls, and back navigation so customers can return to previous screens. You can use new Alexa responsive components and templates to quickly add visuals that enhance the voice experience, and live-preview your APL documents in the authoring tool. We have also released a major update to the Alexa Skills Kit (ASK) toolkit for Visual Studio Code (VS Code) that adds APL rendering and local debugging. Learn more about APL 1.4 in our technical documentation.
APL 1.4 supports user gestures and new components you can incorporate into your multimodal skills.
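As a minimal sketch, here is what an APL 1.4 document using the new editable text box (the `EditText` component) might look like. The component `id` of `nameInput` and the `SendEvent` arguments are illustrative, not prescribed:

```json
{
  "type": "APL",
  "version": "1.4",
  "mainTemplate": {
    "items": [
      {
        "type": "EditText",
        "id": "nameInput",
        "hint": "Enter your name",
        "onSubmit": [
          {
            "type": "SendEvent",
            "arguments": ["${event.source.value}"]
          }
        ]
      }
    ]
  }
}
```

When the customer submits the text, the `SendEvent` command sends the entered value back to your skill as a `UserEvent` request, which your skill code can then handle.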
Responsive Components:
Responsive Templates:
Preview Mode in Authoring Tool
You can now use preview mode in the APL authoring tool to preview touch events, commands, video, and other aspects of your APL documents. With today's launch of APL for audio (beta), you can preview your APL for audio documents in preview mode as well.
Alexa Skills Kit (ASK) Toolkit for VS Code
You can now build, edit, and preview APL documents from within your local IDE using the Alexa Skills Kit (ASK) toolkit for VS Code. Starting today, you can add visuals to your skills without leaving your favorite IDE, with built-in features such as code snippets, validation, instant preview, and the ability to download and save APL documents.
JSX for APL
JSX for APL is an experimental, JSX-based framework that enables you to author APL documents using JSX and React alongside ASK SDK v2. With JSX for APL, you can apply your existing knowledge of web technologies to add rich visual experiences to your skills. You can also share your components, and reuse components published by others, on npm or GitHub.
Multimodal Responses Developer Preview
Multimodal responses enable you to design and implement audio and visual responses in your skills more easily. Specifically, you'll be able to link audio and visual responses, use a simplified workflow to navigate between the audio and visual authoring tools, link multiple audio responses to a single visual response, and render a unique runtime multimodal response ID for an integrated multimodal response. You can sign up for the developer preview here.
Adding visuals and touch can enhance voice experiences and make skills even more engaging and interactive for customers. As a reminder, you can take advantage of many different APL features to create visually rich Alexa experiences. For example, you can use the AnimateItem command to animate the position, scale, rotation, or opacity of any APL component or layout. You can also combine animation with Alexa Vector Graphics to create new visually engaging experiences.
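For example, an `AnimateItem` command that fades a component in while sliding it into place might look like the following sketch; the `componentId` of `logo` is a hypothetical placeholder for a component defined elsewhere in your document:

```json
{
  "type": "AnimateItem",
  "easing": "ease-in-out",
  "duration": 1000,
  "componentId": "logo",
  "value": [
    {
      "property": "opacity",
      "from": 0,
      "to": 1
    },
    {
      "property": "transform",
      "from": [{ "translateX": 200 }],
      "to": [{ "translateX": 0 }]
    }
  ]
}
```

Over one second, this animates the target component's opacity from 0 to 1 while translating it 200 pixels along the x-axis back to its resting position.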
Get started today and learn more here. Please reach out to Amazon Product Manager Arun (@aruntalkstech on Twitter) if you have any questions.