What's New in APL 2023.1

April 2023 widgets and tooling updates

The April 2023 launch includes support for widgets and updates to the responsive components and templates.


Alexa widgets are now available to all custom skills. A widget displays quick, essential information related to a skill and lets users perform quick actions without leaving the current screen context or asking for updates.

To get started with widgets, see About Widgets and Alexa Presentation Language (APL) and Add a Widget to Your Skill.

Responsive components and templates

Version 1.7.0 of the alexa-layouts package provides several updates to the responsive components and templates.

Several components and templates support the new widget viewports. See the "Compatibility" section for each component or template to determine the supported viewports. The following topics list all the components and templates and indicate the viewports they support:

The package includes the following two new responsive components:

The following components each have a componentSlot property. Use the componentSlot property to display a component, such as an AlexaCheckbox or AlexaButton, at the end of a list item in a text list.

You can configure the AlexaTextList template to let users rearrange the list items. Users can move an item up or down, and the template automatically sends your skill a UserEvent request with the revised list. For details, see Let users change the order of the list items.

The following components now support new properties for accessibility:

APL version

The current APL version number hasn't changed and remains at version 2023.1.

APL 2023.1 (February 7, 2023)

Use speech marks to synchronize commands with speech

All components now have the onSpeechMark event handler. This handler defines the commands to run when the audio specified in the speech property encounters a speech mark. The handler runs during the SpeakItem command when the speech property on the component is set to audio that contains speech marks.

The handler recognizes four different types of speech marks. Use these types in conditional logic to synchronize animations or other commands with the speech.

Type      Description

sentence  Indicates a sentence element in the input text.

word      Indicates a word element in the text.

viseme    Describes the face and mouth movements corresponding to each phoneme being spoken.

ssml      Describes a <mark> element from the SSML input text.

For example, you can use word speech marks to define the commands to run each time Alexa reads a specific word.
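As a sketch, a component that updates a text field as each word is spoken might look like the following. The event property names markType and markValue are assumptions here; confirm the exact names in the onSpeechMark reference.

```json
{
  "type": "Text",
  "id": "statusText",
  "text": "Tap to hear the story",
  "speech": "${payload.storyData.properties.storySpeech}",
  "onSpeechMark": [
    {
      "when": "${event.markType == 'word'}",
      "type": "SetValue",
      "componentId": "statusText",
      "property": "text",
      "value": "${event.markValue}"
    }
  ]
}
```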

For details and examples, see onSpeechMark.

Improve accessibility with video closed captions

When your skill plays video content using the Video component, you can include captions in a text track file. The video player displays these captions during video playback if the device has video closed captions enabled.

You provide these captions in the textTrack property on the source object that identifies the video. For details, see Video component: textTrack property. For details about how users enable closed captioning on Echo Show devices, see Turn On Captioning on Echo Devices with a Screen.
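A minimal sketch of a Video component with a caption track follows. The WebVTT file format and the "caption" value for type are assumptions; check the textTrack property reference for the exact schema.

```json
{
  "type": "Video",
  "height": "100%",
  "width": "100%",
  "autoplay": true,
  "source": [
    {
      "url": "https://example.com/videos/intro.mp4",
      "textTrack": [
        {
          "url": "https://example.com/captions/intro-en.vtt",
          "type": "caption"
        }
      ]
    }
  ]
}
```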

New selector syntax for flexibility when targeting commands

Several APL commands act on components. The componentId property on these commands identifies the component to target with the command. In earlier versions of APL, you could set the componentId property on a command only to an identifier that you defined with the id component property.

A new selector syntax provides more flexibility when identifying the component to target with a command. In addition to using the component id, you can target commands with relative references. For example, you can create a selector that targets the parent or child of a component, or finds the component based on its type. You can also use this syntax to target bind variables defined at the root of the document.

For details about the new selector syntax, see Selector.
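For example, a command might target the first child of a container by combining an id with a relative modifier. The exact selector spelling below (:child(0)) is an assumption based on the Selector reference; verify it before use.

```json
{
  "type": "SetValue",
  "componentId": "myList:child(0)",
  "property": "opacity",
  "value": 0.5
}
```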

New modes for the Blend filter

The Blend filter now supports the following Porter-Duff operations:

  • source-atop
  • source-in
  • source-out

For details, see Blend.
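As an illustration, a Blend filter combining two image sources might look like the following. The source and destination index properties shown here are assumptions, so check the Blend reference for the exact property set.

```json
{
  "type": "Image",
  "width": 200,
  "height": 200,
  "source": [
    "https://example.com/images/background.png",
    "https://example.com/images/badge.png"
  ],
  "filters": [
    {
      "type": "Blend",
      "mode": "source-atop",
      "source": -1,
      "destination": -2
    }
  ]
}
```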

Work with APL versions

In your APL document, set the version to "2023.1".
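For example, the top of a document that uses 2023.1 features:

```json
{
  "type": "APL",
  "version": "2023.1",
  "theme": "dark",
  "mainTemplate": {
    "items": []
  }
}
```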

A user can invoke your skill on older devices that don't support the latest version of APL. When working with features introduced in a specific version of APL, provide an alternative experience for devices running earlier versions of APL. The environment.aplVersion property in the data-binding context returns the version of APL on the device. This property returns null for APL 1.0 and the actual version for 1.1 or later. Use this property in when statements to create conditional blocks based on version.

For example, this renders a VectorGraphic on a device with APL 1.1 or later, and an image for APL 1.0:

    "type": "Image",
    "when": "${environment.aplVersion == null}",
    "width": 100,
    "height": 100,
    "source": "https://example.com/alternate/image/for/older/versions.jpg"
    "type": "VectorGraphic",
    "when": "${environment.aplVersion != null}",
    "source": "lightbulb",
    "width": 100,
    "height": 100,
    "scale": "best-fit"

The APL version is also available in requests sent to your skill, in the context.System.device.supportedInterfaces property, under Alexa.Presentation.APL.runtime.maxVersion.

This example shows a request from a device that supports APL 2023.1. Note that the session, request, and viewport properties are omitted for brevity.

  "version": "1.0",
  "session": {},
  "context": {
    "System": {
      "application": {
        "applicationId": "amzn1.ask.skill.1"
      "user": {
        "userId": "amzn1.ask.account.1"
      "device": {
        "deviceId": "amzn1.ask.device.1",
        "supportedInterfaces": {
          "Alexa.Presentation.APL": {
            "runtime": {
              "maxVersion": "2023.1"
      "apiEndpoint": "https://api.amazonalexa.com",
      "apiAccessToken": "eyJ..."
    "Viewport": {}
  "request": {}

Last updated: May 09, 2023