Today, we introduced new products and services that bring ambient intelligence to life. Ambient intelligence unlocks new experiences for customers and enables devices and services to work seamlessly together. These new products join our existing lineup of ambient devices, and you can be at the forefront of building new experiences with them today.
Today’s announcement of Echo Show 15 provides even more opportunities for you to create immersive, proactive, and personalized experiences for your customers. Echo Show 15 is an entirely new type of Echo Show that helps keep families organized, connected, and entertained. It's a 15.6" display that can be wall mounted or placed on a counter stand in landscape or portrait orientation. With Echo Show 15, we’re also introducing widgets, so customers can add information right to the home screen of their device, giving them more opportunities to interact with your content.
Echo Show 15 is powered by the next-generation Amazon AZ2 Neural Edge processor, which is capable of processing speech recognition locally, on the edge, like AZ1, and also adds the ability to process computer vision (CV) workloads in parallel. This means the CV algorithms that once required the immense compute power of the cloud can now be processed entirely on the edge.
Earlier this year at Alexa Live, we announced Alexa Presentation Language Widgets for Echo Show devices. Widgets are a great new way for customers to interact with content from the home screen of their device, including rich, customizable, glanceable, self-updating views of skill content. Imagine a customer being able to “check an item off a list” or tap a widget to be taken straight to the latest content in your skill.
Some developers have already created Widgets for Echo Show 15. For example, The Daily Show widget surfaces a new clip from the show every day, and Domino’s and Blue Apron will offer deals for customers. You can use APL documents to create Widgets just as you would with your multi-modal skill responses. Apply for the developer preview here.
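Because widgets are built from standard APL documents, a basic widget looks much like any other APL response. Here is a minimal sketch: the structure is ordinary APL, but the text, the `payload.widgetData.title` binding, and the `SendEvent` argument are illustrative placeholders, and the widget-specific capabilities are detailed in the developer preview documentation.

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "TouchWrapper",
        "width": "100%",
        "height": "100%",
        "onPress": [
          {
            "type": "SendEvent",
            "arguments": ["openLatestContent"]
          }
        ],
        "items": [
          {
            "type": "Text",
            "text": "Latest: ${payload.widgetData.title}",
            "textAlign": "center",
            "textAlignVertical": "center"
          }
        ]
      }
    ]
  }
}
```

Wrapping the content in a TouchWrapper is what makes the whole widget tappable, so a single press can take the customer straight into your skill.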
Customers already enjoy portrait mode experiences on Fire tablets (Fire 7, Fire HD 8, and Fire HD 10) and certain partner devices. Multimodal skills – built with Alexa Presentation Language – see more than 3x the number of monthly active users compared to voice-only skills on multimodal devices. And when developers implement APL features such as APL video, they get nearly double the customer engagement of a voice-only skill on multimodal devices.
In the coming days we’re launching two new viewport profiles optimized for the Echo Show 15: hubLandscapeExtraLarge and hubPortraitMedium. These new profiles enable you to create custom layouts for new screen sizes and adapt responses as devices flip between portrait and landscape orientations.
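Even before targeting the new profiles by name, you can branch a document's layout on orientation using the standard `viewport` bindings. A minimal sketch (the layout content is illustrative): `mainTemplate` selects the first item whose `when` condition evaluates to true, so the portrait Container acts as the fallback.

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "when": "${viewport.width > viewport.height}",
        "type": "Container",
        "direction": "row",
        "items": [
          { "type": "Text", "text": "Landscape layout" }
        ]
      },
      {
        "type": "Container",
        "direction": "column",
        "items": [
          { "type": "Text", "text": "Portrait layout" }
        ]
      }
    ]
  }
}
```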
In addition, in APL 1.8 we’re improving support for loading images, videos, and vector graphics. Image and Vector Graphic components now have onLoad/onFail events, and Video components now support onTrackReady, onTrackFail, and onTrackUpdate events. These events allow APL commands to be run when any sources load, fail to load, or change status. For example, if an image fails to load because the URL is invalid, the component invokes the new onFail handler which could be configured to run SetValue to change the source to a different placeholder image.
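That placeholder-fallback pattern might look like the following in an APL document. The structure follows the behavior described above; the image URLs are placeholders, and on component event handlers such as `onFail`, the `SetValue` command targets the component that raised the event.

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Image",
        "width": "100%",
        "height": "100%",
        "source": "https://example.com/hero.png",
        "onFail": [
          {
            "type": "SetValue",
            "property": "source",
            "value": "https://example.com/placeholder.png"
          }
        ]
      }
    ]
  }
}
```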
Join us for a special Alexa Tech Talk hosted by our own Jeff Blankenburg this Thursday, September 30, 2021, for a deeper look at the new Echo Show 15, widgets, APL 1.8, and more. Register today. You can also learn more about APL 1.8 and how to optimize your multimodal experiences to get ready for Echo Show 15’s release later this year.