Alexa.Presentation.APLA Interface Reference


The Alexa.Presentation.APLA interface provides directives for rendering an audio response defined in an APL for audio (APLA) document.

RenderDocument directive

Instructs the device to play the audio response defined in the specified document. You can optionally provide one or more data sources to bind content to the document.

You can return the APL for audio RenderDocument directive as part of your standard response or as part of a reprompt. For details, see the Response format.

The following example passes a full document to play as an audio response. With the datasources shown, user.name is "John", so the first Speech item's when condition evaluates to false and the Selector plays the second item, "Hi John!":

{
  "type": "Alexa.Presentation.APLA.RenderDocument",
  "token": "developer-provided-string",
  "document": {
    "version": "0.91",
    "type": "APLA",
    "mainTemplate": {
      "parameters": [
        "payload"
      ],
      "item": {
        "type": "Selector",
        "items": [
          {
            "type": "Speech",
            "when": "${payload.user.name == ''}",
            "content": "Hello!"
          },
          {
            "type": "Speech",
            "content": "Hi ${payload.user.name}!"
          }
        ]
      }
    }
  },
  "datasources": {
    "user": {
      "name": "John"
    }
  }
}
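
As a sketch, you can return this directive from a request handler with the Alexa Skills Kit SDK for Node.js (v2), the same SDK used in the example later on this page. The handler and file names below are illustrative, and the document JSON is assumed to be saved in a separate file.

// Minimal sketch: return the full-document RenderDocument directive.
// Assumes the ASK SDK for Node.js (v2); './launch-response.json' is an
// illustrative file name that holds the APLA document shown above.
const aplaDocument = require('./launch-response.json');

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .addDirective({
        "type": "Alexa.Presentation.APLA.RenderDocument",
        "token": "developer-provided-string",
        "document": aplaDocument,
        "datasources": {
          "user": {
            "name": "John"
          }
        }
      })
      .getResponse();
  }
};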

Properties

Name Description Type Required

datasources
Map of data source objects to provide data to the document. Use this to separate your template from the data. See Data-binding Evaluation and Data-binding Syntax.
Object
No

document
An object representing the APL document to convert to an audio response and play on the user's device. When document.type is "APLA", document must contain the complete JSON document. When document.type is "Link", document must contain the src property with the URL link to the document.
Object
Yes

document.src
The URL that identifies the document in the authoring tool. For APLA, the link has the following syntax: doc://alexa/apla/documents/<RESPONSE-NAME>. Replace <RESPONSE-NAME> with the name you used when saving the document. Don't include this property when document.type is APLA.
doc://alexa/apla/documents/<RESPONSE-NAME>
No

document.type
Indicates the type of document to send. Set to "APLA" when document contains the full document object. Set to "Link" when document.src contains a document link.
APLA | Link
No

token
A unique identifier for the presentation. Each document is considered an independent presentation. This is used to associate future events and directives with the appropriate presentation.
String
No

type
Always Alexa.Presentation.APLA.RenderDocument.
String
Yes

You can save an APLA document in the authoring tool and then use a link to that document in the RenderDocument directive. This means you don't need to export the JSON for your document and copy it into your code.

A link to an APLA document in the authoring tool has the following syntax:

doc://alexa/apla/documents/<RESPONSE-NAME>

The <RESPONSE-NAME> is the name you used when saving the document in the authoring tool.

The following example uses the src property to specify a linked document for the audio response:

{
  "type": "Alexa.Presentation.APLA.RenderDocument",
  "token": "developer-provided-string",
  "document": {
    "type": "Link",
    "src": "doc://alexa/apla/documents/<RESPONSE-NAME>"
  },
  "datasources": {
    "user": {
      "name": "John"
    }
  }
}
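
As a sketch, the equivalent directive can be added with the ASK SDK for Node.js (v2). Replace <RESPONSE-NAME> with the name of your saved document; the surrounding handler code is omitted.

// Sketch: send a linked APLA document with the ASK SDK for Node.js (v2).
// <RESPONSE-NAME> is a placeholder for the document name saved in the
// authoring tool.
return handlerInput.responseBuilder
  .addDirective({
    "type": "Alexa.Presentation.APLA.RenderDocument",
    "token": "developer-provided-string",
    "document": {
      "type": "Link",
      "src": "doc://alexa/apla/documents/<RESPONSE-NAME>"
    },
    "datasources": {
      "user": {
        "name": "John"
      }
    }
  })
  .getResponse();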

RuntimeError request

Sent to notify the skill about any errors that happened during APL for audio processing. This request is for notification only; the skill can't return a response to a RuntimeError request.

Name Description Type
errors An array of error objects representing the reported errors. Array
token A unique identifier for the presentation. Identifies the APL for audio document previously sent to the device with RenderDocument. String
type Always Alexa.Presentation.APLA.RuntimeError. String

The following example shows a RuntimeError request that reports a single error:

{
  "type": "Alexa.Presentation.APLA.RuntimeError",
  "token": "developer-provided-string",
  "errors": [
    {
      "type": "RENDER_ERROR",
      "reason": "UNKNOWN_ERROR",
      "message": "A human-readable description of the error."
    }
  ]
}

errors

An array of error objects representing the errors that occurred. Each error object has the following structure:

{
  "type": "Polymorphic error type indicator.",
  "reason": "Describes the type of error which occurred.",
  "message": "A human-readable description of the error."
}
Property Type Description
message String A human-readable description of the error.
reason String Describes the reason for the error that occurred.
type String Polymorphic error type indicator.

type

Polymorphic error type indicator. Each error type can have type-specific parameters.

Error type Description
RENDER_ERROR Errors related to the cloud-based audio mixing service.
LINK_ERROR Errors related to linked documents.

reason

Provides an error code indicating the reason for the error. The following generic values are available for any type of error.

Error code Description
UNKNOWN_ERROR Unexpected issue of unknown origin.
INTERNAL_SERVER_ERROR Unexpected issue in the Alexa service.

In addition to the generic values, LINK_ERROR can return the error codes shown in the following table.

Error code Description
NOT_FOUND_ERROR The linked document was not found. This typically means that the document.src provided is invalid or that you need to rebuild the skill to make the document available.
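
Because the skill can't respond to a RuntimeError request, a typical handler simply logs the reported errors. The following is a minimal sketch with the ASK SDK for Node.js (v2); the handler name is illustrative.

// Sketch: notification-only handler for APLA runtime errors.
// Logs each reported error and returns an empty response.
const APLARuntimeErrorHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'Alexa.Presentation.APLA.RuntimeError';
  },
  handle(handlerInput) {
    const { token, errors } = handlerInput.requestEnvelope.request;
    errors.forEach((error) => {
      console.log(`APLA error for ${token}: ${error.type} ${error.reason}: ${error.message}`);
    });
    return handlerInput.responseBuilder.getResponse();
  }
};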

Combine RenderDocument with outputSpeech and reprompt

APL for audio works alongside the existing outputSpeech and reprompt properties.

When your response includes both outputSpeech and the RenderDocument directive, the device renders the outputSpeech first, followed by the audio defined in the APL document. During an interaction in which Alexa listens for the user to respond (shouldEndSession is false), the reprompt plays if the user doesn't respond within a few seconds.

The following example illustrates a response that includes outputSpeech, reprompt, and RenderDocument. The document content is omitted for brevity.


This code example uses the Alexa Skills Kit SDK for Node.js (v2).

return handlerInput.responseBuilder
  .speak("hello")
  .addDirective({
    "type": "Alexa.Presentation.APLA.RenderDocument",
    "token": "launch_a",
    "document": {
      "version": "0.91",
      "type": "APLA",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "item": {}
      }
    }
  })
  .reprompt("This is the re-prompt.")
  .getResponse();
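
To play APL for audio as part of the reprompt instead, recent versions of ask-sdk-core add an addDirectiveToReprompt method to the response builder. The following sketch assumes a version that includes it; the token and document link are illustrative.

// Sketch: attach a RenderDocument directive to the reprompt.
// Assumes an ask-sdk-core version that provides addDirectiveToReprompt.
return handlerInput.responseBuilder
  .speak("hello")
  .addDirectiveToReprompt({
    "type": "Alexa.Presentation.APLA.RenderDocument",
    "token": "reprompt_a",
    "document": {
      "type": "Link",
      "src": "doc://alexa/apla/documents/<RESPONSE-NAME>"
    }
  })
  .getResponse();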


Last updated: Nov 28, 2023