What's New in APL 2023.1

Use speech marks to synchronize commands with speech

All components now have the onSpeechMark event handler. This handler defines the commands to run when playback of the audio specified in the speech property reaches a speech mark. The handler runs while the SpeakItem command plays the component's speech audio, provided that audio contains speech marks.

The handler recognizes four different types of speech marks. Use these types in conditional logic to synchronize animations or other commands with the speech.

Type       Description

sentence   Indicates a sentence element in the input text.

word       Indicates a word element in the text.

viseme     Describes the face and mouth movements corresponding to each phoneme being spoken.

ssml       Describes a <mark> element from the SSML input text.

For example, you can use word speech marks to define the commands to run every time Alexa reads a specific word.
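As a sketch of the idea, a Text component could update itself each time a word mark fires. The event property names used here (event.markType, event.value) are assumptions for illustration; check the onSpeechMark documentation for the exact event shape:

    {
      "type": "Text",
      "id": "captionText",
      "speech": "${payload.data.properties.speech}",
      "onSpeechMark": [
        {
          "when": "${event.markType == 'word'}",
          "type": "SetValue",
          "property": "text",
          "value": "${event.value}"
        }
      ]
    }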

For details and examples, see onSpeechMark.

Improve accessibility with video closed captions

When your skill plays video content using the Video component, you can include captions in a text track file. The video player displays these captions during video playback if the device has video closed captions enabled.

You provide these captions in the textTrack property on the source object that identifies the video. For details, see Video component: textTrack property. For details about how users enable closed captioning on Echo Show devices, see Turn On Captioning on Echo Devices with a Screen.
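For example, a Video source with a caption track might look like the following sketch. The URLs are placeholders, and this assumes each textTrack entry takes a url and a type of "caption" as described in the textTrack documentation:

    {
      "type": "Video",
      "width": "100%",
      "height": "100%",
      "source": [
        {
          "url": "https://example.com/video.mp4",
          "textTrack": [
            {
              "url": "https://example.com/captions.vtt",
              "type": "caption"
            }
          ]
        }
      ]
    }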

New selector syntax for flexibility when targeting commands

Several APL commands act on components. The componentId property on these commands identifies the component to target. In earlier versions of APL, you could set the componentId property on a command only to an identifier that you defined with the id component property.

A new selector syntax provides more flexibility when identifying the component to target with a command. In addition to using the component id, you can target commands with relative references. For example, you can create a selector that targets the parent or child of a component, or finds the component based on its type. You can also use this syntax to target bind variables defined at the root of the document.

For details about the new selector syntax, see Selector.
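For instance, a SetValue command might target a child of a container by combining an id with a modifier. This is an illustrative sketch; see Selector for the exact grammar and the supported modifiers:

    {
      "type": "SetValue",
      "componentId": "myContainer:child(1)",
      "property": "opacity",
      "value": 0.5
    }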

New modes for the Blend filter

The Blend filter now supports the following Porter-Duff operations:

  • source-atop
  • source-in
  • source-out

For details, see Blend.
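As a sketch, an Image could combine two sources with one of the new modes. The property names follow the Blend filter documentation, but the source and destination indexes here are illustrative:

    {
      "type": "Image",
      "source": [
        "https://example.com/destination.png",
        "https://example.com/source.png"
      ],
      "filters": [
        {
          "type": "Blend",
          "mode": "source-atop",
          "source": 1,
          "destination": 0
        }
      ]
    }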

Work with APL versions

In your APL document, set the version to "2023.1".

A user can invoke your skill on older devices that don't support the latest version of APL. When working with features introduced in a specific version of APL, provide an alternative experience for devices running earlier versions of APL. The environment.aplVersion property in the data-binding context returns the version of APL on the device. This property returns null for APL 1.0 and the actual version for 1.1 or later. Use this property in when statements to create conditional blocks based on version.

For example, this renders a VectorGraphic on a device with APL 1.1 or later, and an image for APL 1.0:

    [
        {
            "type": "Image",
            "when": "${environment.aplVersion == null}",
            "width": 100,
            "height": 100,
            "source": "https://example.com/alternate/image/for/older/versions.jpg"
        },
        {
            "type": "VectorGraphic",
            "when": "${environment.aplVersion != null}",
            "source": "lightbulb",
            "width": 100,
            "height": 100,
            "scale": "best-fit"
        }
    ]

The APL version is also available in requests sent to your skill, in the following property:

  • context.System.device.supportedInterfaces['Alexa.Presentation.APL'].runtime.maxVersion
This example shows a request from a device that supports APL 2023.1. Note that the session, request, and viewport properties are truncated for brevity.

    {
      "version": "1.0",
      "session": {},
      "context": {
        "System": {
          "application": {
            "applicationId": "amzn1.ask.skill.1"
          },
          "user": {
            "userId": "amzn1.ask.account.1"
          },
          "device": {
            "deviceId": "amzn1.ask.device.1",
            "supportedInterfaces": {
              "Alexa.Presentation.APL": {
                "runtime": {
                  "maxVersion": "2023.1"
                }
              }
            }
          },
          "apiEndpoint": "https://api.amazonalexa.com",
          "apiAccessToken": "eyJ..."
        },
        "Viewport": {}
      },
      "request": {}
    }
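When handling the request in skill code, read this value defensively, because devices without APL omit the Alexa.Presentation.APL interface entirely. A minimal sketch in Python (the function name is illustrative, not part of the ASK SDK):

```python
from typing import Optional


def get_apl_max_version(request_envelope: dict) -> Optional[str]:
    """Return the maximum APL version the device supports, or None.

    Walks context.System.device.supportedInterfaces looking for the
    Alexa.Presentation.APL interface and its runtime.maxVersion value.
    """
    interfaces = (
        request_envelope.get("context", {})
        .get("System", {})
        .get("device", {})
        .get("supportedInterfaces", {})
    )
    apl = interfaces.get("Alexa.Presentation.APL")
    if not apl:
        # Device does not declare APL support.
        return None
    return apl.get("runtime", {}).get("maxVersion")
```

A None result means the device either lacks APL entirely or predates runtime version reporting, so fall back to a voice-only or simpler visual response in that case.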

Last updated on 2023-02-07
