New ways to upgrade your visual experience with Alexa Presentation Language (APL) 2023.1

Racheal Chimbghandah Feb 09, 2023

We are excited to announce the first Alexa Presentation Language release of the year: APL 2023.1. This release includes several features you can use to enhance your multimodal experiences: closed captions, additional image blend modes, selector syntax support, and speech mark support. Developers can start building new documents, or add these features to existing documents, with the APL Authoring Tool.

Make your videos more accessible to customers who are deaf or hard of hearing
The APL Video component now supports closed captions. We added a new textTrack property to the source object of the Video component to hold data about the captions for the video being played. Closed captions can be authored in a number of formats, and we have launched with SubRip Subtitle (SRT), one of the most common. Later in the year, we will add support for additional closed captioning formats (e.g., WebVTT, CTA-708), as well as closed captioning for custom audio sources and APL for Audio (APL-A). Stay tuned for these updates. You can learn more about how to make your APL skill more accessible.

Example: textTrack property:

{
    "type": "Video",
    "autoplay": true,
    "source": [
        {
            "description": "intro",
            "url": "https://my.server/video-1.mp4",
            "textTrack": [
                {
                    "description": "video 1 caption",
                    "type": "caption",
                    "url": "https://my.server/captions-1.srt"
                }
            ]
        },
        {
            "description": "main",
            "url": "https://my.server/video-2.mp4",
            "textTrack": [
                {
                    "description": "video 2 caption",
                    "type": "caption",
                    "url": "https://my.server/captions-2.srt"
                }
            ]
        }
    ]
}

Explore new ways to blend images

APL now supports new image filter blend modes. The Blend filter merges two images from the sources array of an APL Image component and appends the resulting image to the end of the array. We have added support for three Porter-Duff operations: source-in, source-out, and source-atop.

source-in: Displays the source image only where it overlaps the destination image; the destination image itself is transparent.
source-out: Displays the source image only where it does not overlap the destination image; the part of the source image inside the destination image is transparent.
source-atop: Displays the source image on top of the destination image, only within the bounds of the destination image.

Sample filter:

{
    "type": "Image",
    "sources": [
        "https://my.server/redcircle.png",
        "https://my.server/bluesquare.png"
    ],
    "filters": [
        {
            "type": "Blend",
            "mode": "source-atop"
        }
    ]
}

More flexible ways to target commands 

We have added new selector syntax support to APL commands. Selectors give APL developers more flexibility when targeting components. Previously, you could only target a component by its id. With selectors, you can also target components with relative references: for example, you can target the parent or child of a component, or find a component based on its type. To target the parent of the component whose id is 'FOO', use the syntax FOO:parent(1). You can also use selector syntax to target bind variables defined at the root of the document.
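As an illustration, the sketch below (the component id and property are hypothetical) uses the FOO:parent(1) selector from above as the componentId of a SetValue command, so the command acts on the parent of the component with id 'FOO' rather than on 'FOO' itself:

{
    "type": "SetValue",
    "componentId": "FOO:parent(1)",
    "property": "opacity",
    "value": 0.5
}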

Synchronize your visual effects with audio
Speech marks allow you to synchronize visual effects (e.g., animations) with ongoing audio. We have added onSpeechMark as a base component event handler, which passes speech mark hits into the APL document. The onSpeechMark handler can run commands when the audio for the specified speech reaches the position defined by a speech mark.
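As a sketch of the idea (the component id, the speech data binding, and the property being animated are all illustrative assumptions, not part of this release's documentation), an onSpeechMark handler attached to a component can run commands such as SetValue each time a speech mark is reached during playback:

{
    "type": "Text",
    "id": "karaokeLine",
    "speech": "${payload.data.speech}",
    "onSpeechMark": [
        {
            "type": "SetValue",
            "componentId": "karaokeLine",
            "property": "color",
            "value": "blue"
        }
    ]
}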

Let’s connect!

Please take a moment to take our developer survey and let us know how we can better help you. If you haven’t already, also check out the Alexa Community Slack to meet other multimodal developers, ask questions, or share helpful tips. You can also find us on Twitter at @pkarthikr, @austinvach, and @smrudula.

On February 28, 2023, developers will be able to chat live with APL Solutions Architects during office hours. Look out for details in the community Slack over the coming days.
