TouchWrapper(s) Implementation and Best Practices with the Alexa Presentation Language (APL)

Gaetano Ursomanno Jul 27, 2020

When designing a multimodal experience for an Alexa skill, the first thing that may come to mind is how to let our users interact with the touch screen on screen-enabled Echo devices.

This blog post will explain the core APL component that allows this kind of interaction, alongside some best practices and real-world applications.

By the end of this blog post, we will be able to handle the following scenarios within our skill:

1. Reacting to screen touch without sending data to the backend (Sound effect selector):

[Image: sound effect selector demo]

2. Reacting to screen touch and sending data to the backend (Quiz game):

[Image: quiz game screen touch reaction demo]

Prerequisites:

  • An Amazon developer account. You can create one here
  • Knowledge of the APL Authoring Tool
  • A screen-enabled Alexa device like the Amazon Echo Show. If you don't have one, please follow the optional "Render the document from a skill" step

Let’s start with the basics: how can I make portions of the screen “touchable” within my skill?

The APL component that allows us to do this is called TouchWrapper. It literally "wraps" a child component to make it respond to touch events: if we set a TouchWrapper as the parent of a Text component, its surface automatically becomes touchable. Example hierarchy:

[Image: TouchWrapper and Text hierarchy example]
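
For reference, here is a minimal sketch of that hierarchy as APL JSON (the text value is just a placeholder):

{
    "type": "TouchWrapper",
    "item": {
        "type": "Text",
        "text": "Touch me!"
    }
}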

How can I handle the event when my users touch the screen?

The TouchWrapper has a property called onPress in which we should declare the command to execute when the component is pressed. 

Implementing Scenario 1 - A sound effect selector

This scenario requires you to understand the concept of data-binding, transformers, and how the SpeakItem command works. You can find these explained in this blog post.

For our sound effect selector project, we will add a 100vw/100vh Container as the base of our layout, and each of our "buttons" will be represented by a TouchWrapper, a Frame, and a Text.

The component hierarchy should then look like this: 

[Image: sound effect selector component hierarchy]
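
As a sketch, the corresponding APL JSON could look like the following (the Frame color is illustrative; we will set the Text properties in the next step):

{
    "type": "Container",
    "width": "100vw",
    "height": "100vh",
    "items": [
        {
            "type": "TouchWrapper",
            "item": {
                "type": "Frame",
                "backgroundColor": "#1976d2",
                "item": {
                    "type": "Text"
                }
            }
        }
    ]
}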

Select the Text component and set its properties as follows:

Property    Value
--------    -------------------------
id          "myTouchableText"
text        "Sound Effect 1"
speech      "<URL to your audio file>"
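
Translated into the APL document, the Text component would then look like this (the audio URL is a placeholder):

{
    "type": "Text",
    "id": "myTouchableText",
    "text": "Sound Effect 1",
    "speech": "<URL to your audio file>"
}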

Why are we setting the speech property to the URL of the audio effect we are going to play?

This is important because the SpeakItem command that we are going to fire requires the speech property of its target (myTouchableText) to contain an audio URL or the output of a transformer.

Now, select the TouchWrapper and set its onPress property to the following:

{
   "type": "SpeakItem",  
   "componentId": "myTouchableText"
}

Done!

Now, every time the button is pressed, the SpeakItem command will fire and the sound effect will play.

Please note that the approach above implements only one button; this is just to explain the concept behind a TouchWrapper.

If you are looking to have multiple buttons for different sound effects, as shown in the introduction, I would suggest binding data to the main Container so that multiple buttons are generated according to the size of the input array, as sketched below.

You can find more information about data-binding in this blog post.
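
As a rough sketch of that approach (the data array, ids, and URLs below are purely illustrative), each entry of the array bound to the Container generates one touchable button:

{
    "type": "Container",
    "width": "100vw",
    "height": "100vh",
    "data": [
        { "label": "Sound Effect 1", "url": "<URL to audio file 1>" },
        { "label": "Sound Effect 2", "url": "<URL to audio file 2>" }
    ],
    "items": [
        {
            "type": "TouchWrapper",
            "onPress": {
                "type": "SpeakItem",
                "componentId": "soundText${index}"
            },
            "item": {
                "type": "Text",
                "id": "soundText${index}",
                "text": "${data.label}",
                "speech": "${data.url}"
            }
        }
    ]
}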

Implementing Scenario 2 – The quiz game

In this scenario we will build the interface of a quiz game, with a question at the top and four buttons at the bottom. Again, our buttons will be represented by a TouchWrapper, a Frame, and a Text.

The layout here is made of two Container(s), taking up respectively 70% and 30% of the screen real estate. The bigger Container hosts the Text containing the question (questionText), and the smaller Container holds four TouchWrapper(s). The component hierarchy should look like this:

[Image: quiz game component hierarchy]
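
As a skeleton of that layout (showing only two of the four buttons for brevity; sizes and text are illustrative, and the question is taken from the UserEvent example further down):

{
    "type": "Container",
    "width": "100vw",
    "height": "100vh",
    "items": [
        {
            "type": "Container",
            "height": "70vh",
            "items": [
                {
                    "type": "Text",
                    "id": "questionText",
                    "text": "What is the APL component that lets you react to the user's touch?"
                }
            ]
        },
        {
            "type": "Container",
            "height": "30vh",
            "direction": "row",
            "items": [
                {
                    "type": "TouchWrapper",
                    "item": {
                        "type": "Frame",
                        "item": { "type": "Text", "text": "Answer 1" }
                    }
                },
                {
                    "type": "TouchWrapper",
                    "item": {
                        "type": "Frame",
                        "item": { "type": "Text", "text": "Answer 2" }
                    }
                }
            ]
        }
    ]
}

The remaining two buttons follow the same TouchWrapper/Frame/Text pattern.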

Pressing each button will send different information to our backend, which will evaluate whether the answer is correct.

How does a TouchWrapper send information to the backend?

This is done by implementing the SendEvent command within the TouchWrapper's onPress property.

Select the TouchWrapper representing one of the four buttons and set its onPress property as follows:

{
    "type": "SendEvent",
    "arguments": ["answer_1"],
    "components": ["questionText"]
}

Repeat this step for all the buttons, changing just the “arguments” value accordingly.

Now, touching the area will send an Alexa.Presentation.APL.UserEvent to our backend, which should look like this:

{
    "type": "Alexa.Presentation.APL.UserEvent",
    "requestId": "amzn1.echo-api.request.xxx ",
    "timestamp": "2020-06-23T00:00:00Z",
    "locale": "en-US",
    "arguments": [
        "answer_1"
    ],
    "components": {
        "questionText": "What is the APL component that lets you react to the user's touch?"
    },
    "source": {
        "type": "TouchWrapper",
        "handler": "Press",
        "id": "",
        "value": false
    },
    "token": "documentToken"
}

As we can see, the arguments[] array contains the value we set in the SendEvent command within our TouchWrapper: “answer_1” if we press the first button.

The event also carries the value of “questionText”, which represents the question currently displayed on the screen. It will be useful for our backend logic below.

So how can I intercept this event from my code?

Easy! Just declare a handler that reacts to the Alexa.Presentation.APL.UserEvent event.

Node.js:

const sendEventHandler = {
    canHandle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        // listening for an APL UserEvent carrying at least one argument
        return request.type === 'Alexa.Presentation.APL.UserEvent' && request.arguments.length > 0;
    },
    handle(handlerInput) {
        // logging the incoming UserEvent
        console.log("APL UserEvent sent to the skill: " + JSON.stringify(handlerInput.requestEnvelope.request));

        // getting the answer (arguments[] declared within the SendEvent command)
        const answer = handlerInput.requestEnvelope.request.arguments[0];

        // getting the question displayed (components declared within the SendEvent command)
        const question = handlerInput.requestEnvelope.request.components.questionText;

        // add the logic here to check if "answer_1" is the right answer
        const response = checkAnswer(answer, question);

        return handlerInput.responseBuilder
            .speak(`Your answer is ${response}`)
            .getResponse();
    }
};

Python:

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_model.interfaces.alexa.presentation.apl import UserEvent
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response


class SendEventHandler(AbstractRequestHandler):
    """APL UserEvent handler (TouchWrapper)"""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        request = handler_input.request_envelope.request
        if isinstance(request, UserEvent):
            # return true for userEvent request with at least 1 argument
            return len(request.arguments) > 0

        return False

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        request = handler_input.request_envelope.request  # type: UserEvent

        # Logging the incoming UserEvent
        print("APL UserEvent sent to skill: {}".format(request))

        # getting the answer. (arguments[] declared within the SendEvent command)
        answer = request.arguments[0]

        # getting the question displayed (components declared within the SendEvent command);
        # the SDK deserializes components as a dict, so we use key access
        question = request.components['questionText']

        # add the logic here to check if "answer_1" is the right answer
        response = check_answer(answer, question)

        return handler_input.response_builder.speak(
            "Your answer is {}".format(response)).response

Done! Now we are able to understand which answer our users provided and notify them accordingly.

(Optional) Render the document from a skill (Developer portal or a device)

Make sure to export the document from the Authoring Tool by pressing the download button in the upper-right corner, and make the file available to our backend.

From the endpoint code, send the Alexa.Presentation.APL.RenderDocument directive referencing the file you just downloaded:

Node.js:

// from the LaunchRequest handler:
const speakOutput = 'Here is your layout!';
const aplDocument = require('./myLayout.json');

return handlerInput.responseBuilder
    .speak(speakOutput)
    .addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: 'documentToken',
        document: aplDocument.document,
        datasources: aplDocument.datasources,
    })
    .getResponse();

Python:

import json
from ask_sdk_model.interfaces.alexa.presentation.apl import RenderDocumentDirective

# function declaration:
def _load_apl_document(file_path):
    # type: (str) -> dict
    """Load the APL JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)

# from the LaunchRequest handler:
speak_output = 'Here is your layout!'
apl_document = _load_apl_document('./myLayout.json')

handler_input.response_builder.speak(speak_output).add_directive(
    RenderDocumentDirective(
        token='documentToken',
        document=apl_document['document'],
        datasources=apl_document['datasources']
    )
)
return handler_input.response_builder.response


Next steps

  • Tweak the document! Check out all the properties of the TouchWrapper, Frame, and Text components
  • The list of sound effects doesn’t fit the screen? Try making it scrollable with a Sequence (see the sketch after this list)
  • Want to build more? Have a look at all the APL components and commands
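
As a starting point for the Sequence suggestion above, here is a minimal sketch (the data values are illustrative):

{
    "type": "Sequence",
    "width": "100vw",
    "height": "100vh",
    "data": ["Sound Effect 1", "Sound Effect 2", "Sound Effect 3"],
    "items": [
        {
            "type": "TouchWrapper",
            "item": {
                "type": "Text",
                "text": "${data}"
            }
        }
    ]
}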

About & Links

If you are looking for additional support, post your question on the Alexa Developer Forums, or contact us. Also feel free to reach out to me on Twitter at @ugaetano_.

Related Articles

Understanding data-binding, transformers, commands, and custom components - How to build a slideshow project with Alexa Presentation Language (APL)
