Build Documents in the Developer Console
Use tools in the developer console to build, preview, and test an Alexa Presentation Language (APL) document. An APL document is a JSON structure that defines a template for a skill response. The document can define either a visual response or an audio response.
Use the Multimodal Response Builder
The Multimodal Response Builder provides a guided experience to create a visual response in three steps:
- Select from a set of templates designed to look good across a broad range of devices.
- Customize the response by specifying the content to display within the template.
- Preview the response both in the developer console and on a device.
After you finish customizing the response, the Multimodal Response Builder generates a code example you can copy into a request handler in your skill to display the response. Rebuild the interaction model for your skill and then test your skill on a device or with the developer console simulator.
You can create a new document in the Multimodal Response Builder to get started, and then edit the document in the full authoring tool if you want to do more complex customizations.
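The generated code example typically takes the form of a request handler you paste into your skill code. The following is a minimal sketch, assuming the ASK SDK v2 for Node.js; the handler name, speech text, token, and document link are placeholders, not output of the Response Builder.

```javascript
// Hypothetical sketch of the kind of handler snippet the Response Builder
// generates: a request handler that attaches the saved document to the
// skill response. The document link below is a placeholder, not a real ID.
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Welcome!')
      .addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: 'launchToken',
        document: {
          type: 'Link',
          src: 'doc://alexa/apl/documents/MyDocument' // placeholder link
        },
        datasources: {}
      })
      .getResponse();
  }
};

module.exports = { LaunchRequestHandler };
```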
To open the Multimodal Response Builder
- In the developer console, open the skill for which you want to create this document.
- In the left-hand navigation, click Multimodal Responses.
- Click Create with Response Builder.
- Step through the pages to complete the three steps:
  - Select a template. The set of templates includes responsive templates as well as other visual designs.
  - Customize.
  - Preview and test.
Use the authoring tool
The authoring tool provides a complete authoring environment for APL. You can start from scratch or start from an existing template. You can also import Lottie animations and scalable vector graphics (SVG).
In the authoring tool, you have access to all APL features as you develop your template. The authoring tool is more powerful than the Multimodal Response Builder, but it also requires more knowledge about how APL works.
For details about creating and editing APL documents, see Create and Edit an APL Document.
For the language reference for APL, see the following:
- Visual responses: APL for Screen Devices Reference
- Audio responses: APL for Audio Reference
Import a Lottie animation
Import animations in Lottie format into the authoring tool to use in your APL documents. Lottie is a JSON animation format that you can export from Adobe After Effects.
For details about importing a Lottie animation to use in your APL document, see Import a Lottie Animation.
Import a Scalable Vector Graphic (SVG)
Use the APL authoring tool to convert Scalable Vector Graphics (SVG) files into Alexa Vector Graphics (AVG) objects to use in your APL documents.
The SVG format is an XML-based markup language for describing vector graphics. AVG is a parameterized subset of SVG. You can display an AVG-defined graphic in your document with the `VectorGraphic` component.
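To illustrate the relationship, the following is a minimal sketch of an APL document, shown as a JavaScript object literal: the converted AVG object lives under the document's `graphics` property and is referenced by name from a `VectorGraphic` component. The graphic name, path data, and colors are made-up example values, not output of the import tool.

```javascript
// Sketch of where an imported SVG ends up: an AVG object under "graphics",
// displayed by a VectorGraphic component that references it by name.
const aplDocument = {
  type: 'APL',
  version: '2023.2',
  graphics: {
    // Example AVG object such as the SVG import might produce (simplified)
    circleIcon: {
      type: 'AVG',
      version: '1.2',
      width: 24,
      height: 24,
      items: [
        {
          type: 'path',
          pathData: 'M2,12 a10,10 0 1,1 20,0 a10,10 0 1,1 -20,0', // a circle
          fill: 'blue'
        }
      ]
    }
  },
  mainTemplate: {
    items: [
      {
        type: 'VectorGraphic',
        source: 'circleIcon', // refers to the AVG object by name
        width: '100%',
        height: '100%',
        scale: 'best-fit'
      }
    ]
  }
};

module.exports = { aplDocument };
```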
For details about importing an SVG, see Import a Scalable Vector Graphic (SVG) (Beta).
Import and export an APL document
You can export an APL document in JSON format. Export creates a JSON file with the document and data source. You can import an exported document to create a new document.
For details about import and export, see Import and Export APL Documents.
Preview an APL document
Use the authoring tool to preview how your document renders to your users.
For a visual response, you can see how the document looks on devices of different sizes. You can preview tap events, commands, video, and other aspects of the document. For an audio response, you can listen to the audio clip generated by the document.
For details about previewing and testing your document in the authoring tool, see Preview an APL Document.
Integrate your APL document into your skill response
To use your APL document in your skill, include the `RenderDocument` directive in the response your skill sends to Alexa. For a visual response, Alexa displays the document on the screen. For an audio response, Alexa plays the audio generated from the document.
When you send the `RenderDocument` directive, you must provide the document you want to display or play. Provide the document in one of the following ways:
- Save the visual response in the authoring tool and pass a link to the document to the `RenderDocument` directive. Your document remains associated with your skill in the developer console, and changes you make in the tools are reflected in the skill response. For details, see Link to an APL document saved in the developer console (Alexa.Presentation.APL).

  Tip: Use the Integrate with Skill option in the developer console to copy a code snippet with the `RenderDocument` directive and a link to the document. The Integrate with Skill option is available on all APL documents saved in the developer console.
- Save the audio response in the authoring tool and pass a link to the document to the `RenderDocument` directive. Your document remains associated with your skill in the developer console, and changes you make in the tools are reflected in the skill response. For details, see Link to an APLA document saved in the authoring tool (Alexa.Presentation.APLA).
- Copy the JSON for the document into your skill code and pass the full JSON for the document to the `RenderDocument` directive. When you make changes to the document, you must update your skill code.
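The two forms of the directive payload can be sketched as follows, using JavaScript object literals. This is a minimal sketch: the token, document name in the link, and the inline document content are placeholder values, and the `doc://` link shown stands in for the link the developer console gives you.

```javascript
// Sketch of the two ways to supply a document to RenderDocument.

// 1. Link to a document saved in the developer console.
//    The document name in the src is a placeholder.
const linkedDirective = {
  type: 'Alexa.Presentation.APL.RenderDocument',
  token: 'helloToken',
  document: {
    type: 'Link',
    src: 'doc://alexa/apl/documents/HelloWorldDocument' // placeholder link
  },
  datasources: {}
};

// 2. Full JSON for the document pasted directly into skill code.
//    Changing the document means updating this object in your code.
const inlineDirective = {
  type: 'Alexa.Presentation.APL.RenderDocument',
  token: 'helloToken',
  document: {
    type: 'APL',
    version: '2023.2',
    mainTemplate: {
      items: [{ type: 'Text', text: 'Hello, world' }]
    }
  },
  datasources: {}
};

module.exports = { linkedDirective, inlineDirective };
```

Either object is passed to the response in the same way (for example, via `addDirective` in the ASK SDK); only the `document` property differs.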
For details about exporting the JSON for your document, see Import and Export APL Documents.
For details about `RenderDocument`, see the following:
- Visual response: Alexa.Presentation.APL Interface Reference.
- Audio response: Alexa.Presentation.APLA Interface Reference.
For examples that show how to send `RenderDocument` with the Alexa Skills Kit (ASK) SDK, see Use Alexa Presentation Language with the ASK SDK v2.
Last updated: Jan 03, 2022