The Alexa Web API for Games allows you to build rich and immersive voice-enabled games using the web technologies and tooling of your choice. With this flexibility comes a large number of ways to approach development and debugging. This blog will go over four different strategies you can use for debugging and monitoring your Alexa game skill.
One benefit of using the Alexa Web API for Games is that you can make use of familiar web development workflows. Even though the Alexa client-side library is not available in a browser, you can still develop and debug the non-Alexa parts of the web application's presentation with a little bit of code separation. Abstract away the Alexa parts so that you can use a local web server and rapidly iterate on other parts of your presentation, like your WebGL canvas rendering, HTML element styling, and the logic that controls the view. For some games, you can even debug the core game logic, if your game is driven from the HTML side. Here is an approach I take to separate out my logic and start the game up with useful data. First, let's look at how you might want to initialize the Alexa library:
const mockData = require('./mockStartupData.json');
...
// 1. Kick off the game loop and do any non-Alexa setup for the local experience.
initGame();
// 2. Initialize the Alexa client.
Alexa.create({version: '1.0'})
    .then((args) => {
        const {
            alexa,
            message
        } = args;
        alexaClient = alexa;
        alexaLoaded = true;
        ...
        // 3. Initialize the game with any data you need from the Alexa startup data payload.
        setupGame(message);
        // 4. Initialize your Alexa callbacks, like speech.onStarted and skill.onMessage.
        ...
    })
    .catch(error => {
        console.log(JSON.stringify(error));
        ...
        alexaClient = null;
        // 5. Same as above, but note that we pass in the mock data.
        setupGame(mockData);
    });
The code above does a few things: it kicks off the local game loop, initializes the Alexa client, sets up the game with the startup message payload, and registers the Alexa callbacks. If the Alexa client fails to initialize, for example when running in a regular browser, it falls back to setting up the game with mock data instead.
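The contents of mockStartupData.json depend entirely on what your own skill backend sends in its startup payload, so the sketch below is purely hypothetical; every field name here is a placeholder for whatever your setupGame function expects.

{
    "playerName": "LocalTester",
    "score": 0,
    "level": 3,
    "inventory": ["torch", "map"]
}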
Now that the Alexa logic is abstracted away, the rest of your code can test different starting scenarios just by adjusting the parameters in the mock data file. You can test most scenarios that do not require voice input in your local browser alone. To start debugging, you will need to spin up a local web server. I've written in more detail about using the npm package http-server here, but any local server will do.
http-server . -p 8080 -o /dist/ -c-1
Now, open up your favorite browser and navigate to your localhost. In my case, that is:
http://127.0.0.1:8080/dist/
To see a full working example of this setup, check out the README in the webapp directory of our sample repository.
This misses the end-to-end experience as it would run on an Alexa-enabled device, but it excels at rapid iteration on the non-Alexa parts of the experience. While you will not be able to use voice commands, nor communicate with your skill back end, this approach will get you pretty far with developing your game and is useful in combination with the next couple of strategies.
Building a debug overlay for the web application allows you to see JavaScript console logs on the screen of a running device. Because you are launching a web application, you can write to the HTML document from your local JavaScript code. To start, create a div to write logs to and show it on top of everything else. Open your main HTML page and add this to the body:
<div class="hud scrollable" id="debugInfo">Debug</div>
There are two classes on this which need some CSS. Add the following to the page styling:
.scrollable {
    overflow-y: auto;
}

.hud {
    position: absolute;
    z-index: 100;
    display: block;
}
The hud class overlays any element tagged with it on top of the other HTML elements in your game, such as the canvas (assuming their z-index is less than 100). The scrollable class simply makes the div element scroll when it is full. The last CSS rule targets this element directly.
#debugInfo {
width: 100%;
background-color: rgba(0,0,0,0.05);
top: 0%;
text-align: left;
white-space: pre-wrap;
}
Now, you have to write your logs to the screen. In your JavaScript code, you can grab a reference using:
var debugElement = document.getElementById("debugInfo");
To write to the element, you can create a text node, then append it to the div. For instance, here’s a startup failure added to the screen:
debugElement.appendChild(document.createTextNode("\n" + JSON.stringify(errMessage)));
Inserting a line break character, combined with pre-wrap for the white-space property, makes sure the logs stay readable. Now you can log anything to the screen using this div, which sits on top of the rest of your elements.
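To avoid sprinkling DOM calls throughout your game code, you could wrap this pattern in a small helper and gate it behind a flag so it is easy to switch off later. This is just a suggested sketch; the debugLog name and DEBUG_OVERLAY flag are my own conventions, not part of any Alexa API.

const DEBUG_OVERLAY = true; // Flip to false before updating your live skill.
const debugElement = document.getElementById("debugInfo");

function debugLog(message) {
    if (!DEBUG_OVERLAY) {
        return;
    }
    console.log(message);
    // Append each entry on its own line; the pre-wrap style preserves the line breaks.
    debugElement.appendChild(document.createTextNode("\n" + JSON.stringify(message)));
}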
This is a useful approach for debugging since you can follow this strategy on every device that supports the Alexa Web API for Games without any special setup. However, it is not meant for the final application, since it will both cover the screen with logs and unnecessarily write to the console, so remember to turn this code off before you update the live stage of your skill. Additionally, you won't get as much detail as you would with an attached debugger.
For this option, you will need the following: a Fire TV device that supports the Alexa Web API for Games, a computer with the Chrome browser, and the Android Debug Bridge (ADB).
Attaching a local instance of the Chrome browser to the device lets you use the full suite of debugging tools Chrome offers. By doing this you can:
1) Monitor web requests in real time using the Network tab.
2) Look for errors, warnings, and general logs in real time using the Console tab.
3) Set breakpoints to debug your code. Since you're attached to the actual device and not a simulation or proxy, you can debug the real experience on real hardware using your local browser tools.
To accomplish this, you'll need to install the Android Debug Bridge (ADB), a command line tool that lets you communicate with an Android-based device over a local network or USB. This is necessary in order to leverage your local Chrome debug tools.
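As a rough sketch of what the connection in step 1 below can look like over the local network (the IP address is a placeholder, and ADB debugging must first be enabled in the Fire TV's developer options; see the linked instructions):

# Connect to the Fire TV over the local network (replace with your device's IP address).
adb connect 192.168.1.100:5555
# Verify that the device shows up as connected.
adb devices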
1) Connect your FireTV to your computer over ADB. Check out the full set of instructions here.
2) Connect the FireTV to your monitor or television.
3) Start your Alexa skill.
4) Open the Chrome browser and navigate to chrome://inspect. Once the WebView has started on the device, you can click "inspect" under the attached device (WebView in com.amazon.csm.htmlruntime ...).
The view that pops up should be an approximation of what is running on the device with the local browser tools needed to debug. This is only an approximation because some things may not render locally on your debugger view, such as the WebGL canvas.
This approach gives you the full power of your local browser tooling on actual hardware with integrations to your back end skill code. While this approach works well on FireTV, you will need to follow one of the other methods in this blog to debug your code on Echo Show devices.
This option can be used in live Alexa skills to help debug as well as monitor potential client-side issues in your web application across all devices. The alexa.skill.sendMessage function can be used to write arbitrary logs to your back end, where they can be stored in a cloud-side logger like Amazon CloudWatch. This example uses an AWS Lambda function with Amazon CloudWatch and Node.js, but any service and logging method will work.
The most basic form is to send a message to your skill back end through a wrapper function in your client-side JavaScript.
First, write the console log wrapper:
/**
 * Logs a message locally and, if the Alexa client is initialized, pushes the message to the Lambda to log.
 * @param {*} payload
 */
function cloudLog(payload) {
    console.log(payload);
    if (alexaLoaded) {
        alexaClient.skill.sendMessage({
            intent: "log",
            log: payload
        });
    }
}
The message has two properties which we defined: intent and log. You'll need to pass the log message as the payload. Now, you can replace all instances of console.log() with this new cloudLog function. Here is the original code:
console.log("Here is a log");
Which changes to:
cloudLog("my message");
Now, you’ll have to handle this in the skill code. Recall that the sendMessage API will send a request of the form Alexa.Presentation.HTML.Message to your back end. You’ll need to add a new handler to respond to it:
/**
* Simple handler for logging messages sent from the webapp
*/
const WebAppCloudLogger = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === "Alexa.Presentation.HTML.Message"
&& getMessageIntent(handlerInput.requestEnvelope) === 'log';
},
handle(handlerInput) {
const messageToLog = handlerInput.requestEnvelope.request.message.log;
console.log(messageToLog);
return handlerInput.responseBuilder
.getResponse();
}
}
Where the getMessageIntent() helper function looks like:
function getMessageIntent(requestEnvelope) {
const requestMessage = requestEnvelope.request.message;
if(requestMessage) {
if(requestMessage.intent) {
return requestMessage.intent;
}
}
return null; // Otherwise no intent found in the message body
}
This runs the handler only when the intent property in the message request payload is 'log'. The handler itself simply extracts the payload and logs it to Amazon CloudWatch.
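For this to take effect, the handler also has to be registered with your skill builder alongside your other handlers. Here is a minimal sketch, assuming the standard ASK SDK v2 for Node.js setup; LaunchRequestHandler and SessionEndedRequestHandler are just illustrative stand-ins for your existing handlers.

exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        WebAppCloudLogger,
        SessionEndedRequestHandler
    )
    .lambda();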
This particular method has a few issues and should be used sparingly. It does not account for or handle the rate limits on the sendMessage JavaScript API. If you implement this approach and use the logger liberally while debugging, you will quickly become rate limited. This may even cause concurrent Lambda executions to start, making debugging even harder by splitting your logs across multiple CloudWatch log streams! We can make this better by batching our logs.
To get around this issue, you'll need to implement a different message sender wrapper that batches messages and sends them on a set cadence of every half second. This will keep you within the limit of 2 messages per second.
First, make a class called messageSender. The specifics of how you do this will depend on how you manage your web application JavaScript code. Inside here, we will create a few methods. First, init:
init(alexa) {
alexaClient = alexa;
}
This method is simple and just stores a reference to the Alexa client object. You'll call it from the success block of your Alexa initialization:
Alexa.create({version: '1.0'})
    .then((args) => {
        const {
            alexa,
            message
        } = args;
        alexaClient = alexa;
        ...
        // Initialize the messageSender class.
        messageSender.init(alexaClient);
        ...
    });
Next, add in an update function to be called from the local game loop:
const MESSAGE_CADENCE_MS = 500; // How often we send the batched message.

update(deltaTime) {
    if (currentTime >= MESSAGE_CADENCE_MS) {
        // Reset the time tracker.
        currentTime = currentTime - MESSAGE_CADENCE_MS;
        // Send the queued messages, then clear the queue.
        this.flushMessageQueue(messageQueue);
        messageQueue = [];
    } else {
        currentTime += deltaTime;
    }
},
The update method takes one parameter, deltaTime, which represents the amount of time that has passed since the last frame. Even if your game is running at a specific frame rate, you should not rely on those frames being equally spaced apart for game logic. The method relies on a flushMessageQueue function, which sends the internally held message queue (named messageQueue, above):
flushMessageQueue(queue) {
    if (queue.length <= 0) {
        return Promise.resolve("No messages in queue.");
    }
    const messagePromise = new Promise((resolve, reject) => {
        alexaClient.skill.sendMessage({
            intent: "log",
            messageQueue: queue
        },
        function(messageSendResponse) {
            console.log(messageSendResponse.statusCode);
            switch (messageSendResponse.statusCode) {
                case 500:
                case 429:
                    // TODO: check messageSendResponse.rateLimit.timeUntilResetMs and timeUntilNextRequestMs.
                    // Use these fields for smart retries, and handle 429 separately from 500 when this happens.
                    console.error(messageSendResponse.reason);
                    reject(messageSendResponse.reason);
                    break;
                case 200:
                default:
                    resolve("Successfully called Alexa skill.");
            }
        });
    });
    return messagePromise;
}
This is mostly similar to the previous examples, except that it adds some stub code for handling the various status codes and wraps the call in a promise for asynchronous execution. This way you won't block the render loop and cause stuttering in the experience. It also skips sending a request if the queue is empty.
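To tie this together, here is a hedged sketch of how update might be driven from a requestAnimationFrame game loop. The gameLoop and updateGame names are illustrative, not part of the sample or any Alexa API.

let lastTimestamp = null;

function gameLoop(timestamp) {
    // requestAnimationFrame passes a high-resolution timestamp in milliseconds.
    const deltaTime = lastTimestamp === null ? 0 : timestamp - lastTimestamp;
    lastTimestamp = timestamp;

    updateGame(deltaTime);           // Your own game logic.
    messageSender.update(deltaTime); // Flushes queued logs roughly every 500 ms.

    window.requestAnimationFrame(gameLoop);
}

window.requestAnimationFrame(gameLoop);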
From here, there are a few different options for how to expose the messageQueue and integrate this into your code. You could route all Alexa sendMessage requests through it, or use it only for logs. I decided to use it only for logs in the sample and exposed three methods: error, warn, and info. Here is one example:
warn(payload) {
//Print locally.
console.log(payload);
//Push a new log onto the end of the queue.
this.pushMessage(payload, "warn");
},
pushMessage(payload, level) {
messageQueue.push({
level: level,
log: payload
});
}
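Calling it from game code then looks something like this (the message text is just an example):

messageSender.warn("Texture failed to load; falling back to the default material.");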
These methods replace the cloudLog function from above, so now I can log freely without worrying about hitting the rate limit. To match the new message payload, adjust the skill-side message handler:
handle(handlerInput) {
const {
messageQueue
} = handlerInput.requestEnvelope.request.message;
messageQueue.forEach(message => {
const {
level,
log
} = message;
switch (level) {
case "error":
console.error(log);
break;
case "warn":
console.warn(log);
break;
case "info":
console.log(log);
break;
}
});
return handlerInput.responseBuilder
.getResponse();
}
Since it’s sending a list of messages, you’ll want to iterate over them and log them at the appropriate level. This lets you view and monitor these logs in the cloud while the application is running at scale. By logging to the cloud, you can now send errors and warnings and set up monitoring (using Amazon CloudWatch monitoring tools) on these to detect problems. This also lets you add arbitrary messages to help debugging on devices without access to the debugger. To see this all in action, check out the My Cactus simulation game on GitHub or check out just the code for the messageSender class.
NOTE: If you are using Amazon CloudWatch, consider using the CloudWatch Embedded Metric Format for generating CloudWatch metrics from your game data.
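As a rough, hypothetical sketch of that format (the namespace, dimension, and metric names below are placeholders), a single EMF log line emitted from the Lambda handler could look like:

// CloudWatch parses this structured log line into a metric automatically.
console.log(JSON.stringify({
    _aws: {
        Timestamp: Date.now(),
        CloudWatchMetrics: [{
            Namespace: "MyGameWebApp",          // Placeholder namespace.
            Dimensions: [["LogLevel"]],
            Metrics: [{ Name: "WebAppLogCount", Unit: "Count" }]
        }]
    },
    LogLevel: "warn",                  // Dimension value.
    WebAppLogCount: 1,                 // Metric value.
    Message: "Texture failed to load"  // Extra context stored with the log, not a metric.
}));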
I hope this has given you some ideas of different strategies you can use to help create a game using the Alexa Web API for Games. Start with local debugging to rapidly iterate on your experience, then use an on-screen debugger to see logs across all Alexa Web API for Games devices. Dive deeper with local browser tools running on a FireTV device, and then monitor your production issues across devices with a cloud-side logger. Let me know if you use another way to debug your Alexa Web API powered games @JoeMoCode on Twitter.