When building skills, it's incredibly important to deliver high-quality experiences. Through frequent testing and troubleshooting, you can ensure you're offering consistently great experiences to customers. But as with any technology, you may encounter unexpected issues that arise during the testing process. For example, when testing your Alexa skill, you may be faced with the message, “There was a problem with the requested skill’s response.” In this case, your objective is to track down the cause of the response error, but many times you won't know where to start. That's where troubleshooting comes in. We've identified a few tips for helping you troubleshoot your custom skill's back end with one objective in mind: to help you resolve issues faster and deliver higher-quality skills. In our experience, these tips can save hours in debugging time.
Before we dive into each tip, let's break down this message: “There was a problem with the requested skill's response.” Alexa communicates with the skill service via a request-response mechanism using HTTP over SSL/TLS. When a customer interacts with an Alexa skill, your service receives a POST request containing a JSON body. The request body contains the parameters necessary for the service to perform its logic and generate a JSON-formatted response. This response must comply with the response format the Alexa service expects. When the response JSON does not comply with this format, Alexa returns the message above.
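For reference, here is a minimal sketch of the kind of JSON body Alexa expects back from your service (the exact properties and values here are illustrative and vary with the request type). When you use the ASK SDK, the responseBuilder generates this JSON for you:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to my skill!"
    },
    "shouldEndSession": false
  }
}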
As the skill developer, your objective is to track down why the skill service returns a null or invalid response to Alexa. To make your troubleshooting journey a success, we suggest the following troubleshooting approaches. Note: The code samples used in this post are based on the Alexa Skills Kit (ASK) Software Development Kit (SDK) for Node.js, but the same mechanisms apply to the Java and Python ASK SDKs.
Logs are your eyes on the field. They will allow you to track down any erratic behavior.
To get more visibility into what's happening on your back end, you need to log at least three things on each turn: the incoming request, the outgoing response, and any error or exception thrown by your code.
To do so, the Alexa Skills Kit SDK provides you with two powerful features to easily output those logs: request and response interceptors, and error handlers.
You can see Interceptors as hooks where you insert a piece of code in one central location to be executed before or after a RequestHandler execution (i.e. any time a request hits your skill). For example, you can leverage interceptors to connect to your database, get localized strings and log incoming requests or outgoing responses. For logging purposes, you can do this by simply including a console.log() statement.
Here’s how you would do that using interceptors:
Step 1: Set Up the Request and Response Interceptors
/**
 * Request Interceptor to log the request sent by Alexa
 */
const LogRequestInterceptor = {
  process(handlerInput) {
    // Log Request
    console.log("==== REQUEST ======");
    console.log(JSON.stringify(handlerInput.requestEnvelope, null, 2));
  }
};

/**
 * Response Interceptor to log the response made to Alexa
 */
const LogResponseInterceptor = {
  process(handlerInput, response) {
    // Log Response
    console.log("==== RESPONSE ======");
    console.log(JSON.stringify(response, null, 2));
  }
};
Step 2: Register the Interceptors
exports.handler = skillBuilder
  .addRequestHandlers(...)
  .addRequestInterceptors(LogRequestInterceptor)
  .addResponseInterceptors(LogResponseInterceptor)
  .lambda();
Error handlers are similar to request handlers, but are instead responsible for handling one or more types of errors. They are invoked by the SDK when an unhandled error is thrown during the course of request processing. You can think of ErrorHandlers as one big try/catch block surrounding all the RequestHandlers you have defined.
You can define a global ErrorHandler for every exception happening in your code, or you can provide multiple ErrorHandlers depending on the type of error thrown.
Here’s how you would include a global ErrorHandler:
Step 1: Set Up the Error Handler
/**
 * Handler to catch exceptions thrown by any RequestHandler
 * and respond back to Alexa
 */
const GlobalErrorHandler = {
  canHandle(handlerInput, error) {
    // Handle all types of exceptions
    // Note: to handle only certain types of errors, inspect the error object (e.g. its name or message)
    return true;
  },
  handle(handlerInput, error) {
    // Log Error
    console.log("==== ERROR ======");
    console.log(error);
    // Respond back to Alexa
    const speechText = "I'm sorry, I didn't catch that. Could you rephrase?";
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText)
      .getResponse();
  },
};
Step 2: Register the Error Handler
exports.handler = skillBuilder
  .addRequestHandlers(...)
  .addErrorHandlers(GlobalErrorHandler)
  .addRequestInterceptors(LogRequestInterceptor)
  .addResponseInterceptors(LogResponseInterceptor)
  .lambda();
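If you choose to provide multiple error handlers, a more specific handler could look like the minimal sketch below. The error name NetworkError is purely hypothetical; the sketch assumes your own code throws errors with that name. Since handlers are evaluated in the order they are registered, register specific handlers before the global one:
/**
 * Handler to catch a specific type of error, identified here
 * by the hypothetical error name 'NetworkError'
 */
const NetworkErrorHandler = {
  canHandle(handlerInput, error) {
    // Only handle errors thrown with this specific name
    return error.name === 'NetworkError';
  },
  handle(handlerInput, error) {
    // Log Error
    console.log("==== NETWORK ERROR ======");
    console.log(error);
    // Respond back to Alexa
    const speechText = "I'm having trouble reaching my data source. Please try again later.";
    return handlerInput.responseBuilder
      .speak(speechText)
      .getResponse();
  },
};

// Registration order matters:
// .addErrorHandlers(NetworkErrorHandler, GlobalErrorHandler)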
If you are running your service on an Alexa-hosted backend, or on an AWS-hosted backend using AWS Lambda, your logs will be available in Amazon CloudWatch. You can access them from the AWS Console.
For an AWS-hosted backend using AWS Lambda only, you can also access your CloudWatch logs from the Alexa Skills Kit Command Line Interface (ASK CLI) using the ask lambda log command.
For example, the command below retrieves all logs from the last hour:
$ ask lambda log --function {_YOUR_LAMBDA_FUNCTION_NAME_} --start-time 1hago
The Alexa service sends your service different types of requests, depending on how users engage with your skill by voice. You must accept and respond appropriately to every type of request. The full list of requests to be handled can be found here. The rule of thumb is to create one RequestHandler for each intent (either built-in or custom) defined in your interaction model.
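For illustration, a RequestHandler for a hypothetical custom intent named HelloWorldIntent (assumed to be defined in your interaction model) could look like this minimal sketch:
/**
 * Handler for the hypothetical custom intent 'HelloWorldIntent'
 */
const HelloWorldIntentHandler = {
  canHandle(handlerInput) {
    // Claim only IntentRequests for this specific intent
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'HelloWorldIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Hello world!')
      .getResponse();
  },
};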
You might consider adding a final RequestHandler in the request chain to return a common response to Alexa for all types of requests you don't handle individually, as in the sketch below. Be mindful in this situation: the valid response for each request type varies from one request to another! Not all responses can include standard properties such as outputSpeech, card, or reprompt, and some requests don't expect a response at all.
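As a minimal sketch (not the only way to do this), such a catch-all handler could look like the following. It must be registered last so that it only receives requests no other handler claimed, and it only speaks when the request type allows a spoken response:
/**
 * Catch-all handler for request types not handled individually.
 * Register it last in the request handler chain.
 */
const UnhandledRequestHandler = {
  canHandle(handlerInput) {
    // Claim anything that reached this point in the chain
    return true;
  },
  handle(handlerInput) {
    const { request } = handlerInput.requestEnvelope;
    console.log(`Unhandled request type: ${request.type}`);
    if (request.type === 'LaunchRequest' || request.type === 'IntentRequest') {
      // These request types allow a standard spoken response
      return handlerInput.responseBuilder
        .speak("I'm not sure how to help with that.")
        .getResponse();
    }
    // For other request types, play it safe and return an empty response
    return handlerInput.responseBuilder.getResponse();
  },
};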
One type of request that is often forgotten is SessionEndedRequest. It is sent by Alexa to your service when the current skill session ends for any reason other than your code closing the session. Usually, you will receive this type of request when a user does not respond to your skill. Alexa expects an empty response (no response) from your service; if you respond with anything else, this will lead to an exception.
/**
 * Handler for the SessionEndedRequest sent by Alexa.
 * Sent when the current skill session ends for any reason other
 * than your code closing the session.
 */
const SessionEndedRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest';
  },
  handle(handlerInput) {
    // Log the reason why the session was ended
    const reason = handlerInput.requestEnvelope.request.reason;
    console.log("==== SESSION ENDED WITH REASON ======");
    console.log(reason);
    // Respond back to Alexa with an empty response
    return handlerInput.responseBuilder.getResponse();
  },
};
Another typical example where you might struggle with the response format is when implementing the AudioPlayer interface. This interface provides directives and requests for streaming audio and monitoring playback progression. Most requests from this interface (4 out of 5) only allow a response containing an AudioPlayer directive. If you respond with an outputSpeech, card, or reprompt, this will lead to an exception.
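For illustration, here is a minimal sketch of a handler for AudioPlayer.PlaybackNearlyFinished that responds with only an AudioPlayer directive to enqueue the next stream (the URL and tokens are placeholders):
/**
 * Handler for AudioPlayer.PlaybackNearlyFinished.
 * The response may only contain AudioPlayer directives:
 * no outputSpeech, card, or reprompt.
 */
const PlaybackNearlyFinishedHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'AudioPlayer.PlaybackNearlyFinished';
  },
  handle(handlerInput) {
    // Token of the stream that is about to finish
    const previousToken = handlerInput.requestEnvelope.request.token;
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective(
        'ENQUEUE',                              // playBehavior
        'https://example.com/next-track.mp3',   // placeholder stream URL
        'next-track-token',                     // placeholder token for the new stream
        0,                                      // offset in milliseconds
        previousToken                           // expectedPreviousToken
      )
      .getResponse();
  },
};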
For reference, here are the different types of requests where you can fall into such a situation:
Types of requests where your service cannot return a response (it must send an empty response):
Types of requests whose response cannot include standard properties such as outputSpeech, card, or reprompt, nor any directives from other interfaces, such as a Dialog directive:
Each Alexa-enabled device has different characteristics. For example, not every device has a screen, and your service must adapt its response accordingly. It would not make sense to send a response with visuals if your skill is invoked from an Amazon Echo Dot. The Alexa service lets you know which interfaces the device supports through the context.System.device.supportedInterfaces property of the request.
In your service, you should detect whether a specific interface is available before actually using it in your response.
Here is the typical way you would test the availability of an interface in your back end:
// Generic function to check interface availability on calling device
function supportsInterface(handlerInput, interfaceName) {
  const interfaces = ((((
    handlerInput.requestEnvelope.context || {})
    .System || {})
    .device || {})
    .supportedInterfaces || {});
  return interfaces[interfaceName] !== null && interfaces[interfaceName] !== undefined;
}

// Check for AudioPlayer Interface availability on calling device
function supportsAudioPlayer(handlerInput) {
  return supportsInterface(handlerInput, 'AudioPlayer');
}

// Check for APL Interface availability on calling device
function supportsAPL(handlerInput) {
  return supportsInterface(handlerInput, 'Alexa.Presentation.APL');
}

// Check for Display Interface availability on calling device
function supportsDisplay(handlerInput) {
  return supportsInterface(handlerInput, 'Display');
}
/**
 * Sample LaunchRequest handler to illustrate
 * interface support detection on the calling device
 */
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    // Typical voice response
    const speechText = '...';
    const repromptText = '...';
    const builder = handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(repromptText);
    // Does the device have a screen?
    if (supportsAPL(handlerInput)) {
      // If so, add the APL directive
      builder.addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        version: '1.0',
        document: require('./documents/template.json'),
        datasources: require('./datasources/datasource.json')
      });
    }
    return builder.getResponse();
  },
};
Once you have enabled your skill for testing from the Alexa Developer Console, it will be available on all devices registered to the same Amazon Alexa account, and you will be able to put yourself in the user's shoes. This is where the Amazon Alexa app will help you get more insight into the interaction.
The Amazon Alexa App provides a complete history of a user's interactions, regardless of whether the correct intent was triggered. This can be useful for seeing exactly how Alexa interpreted an utterance and for tracking down problems in your voice interface. To see the history, open the Amazon Alexa App and go to the main menu. From the menu, navigate to Settings > Alexa Account > History.
Cards are visual responses that describe or enhance the voice interaction. To send a card to the Alexa app, you include the home card in the response your service sends back to Alexa. To see the cards on desktop web browsers, navigate directly to Home; on Fire OS, Android, and iOS, open the Amazon Alexa app and, from the menu, navigate to Activity.
There are four types of cards you have control over: Simple cards (plain text), Standard cards (text plus an image), LinkAccount cards, and AskForPermissionsConsent cards.
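With the ASK SDK, the simplest way to attach a card you control is through the response builder. A minimal sketch:
// Minimal sketch: attach a Simple card to a spoken response
return handlerInput.responseBuilder
  .speak('Here is your card.')
  .withSimpleCard('My Card Title', 'Some useful card content.')
  .getResponse();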
During the early stages of development, you may want to send back Simple cards from all LaunchRequest and IntentRequest requests with basic debugging information (such as the name of the intent that was triggered) to help you see how your service is working. This is typically done in a ResponseInterceptor, where you capture the generated response and update the card properties to include information from the incoming request:
const CardDebuggerResponseInterceptor = {
  process(handlerInput, response) {
    const { request } = handlerInput.requestEnvelope;
    // check whether a card can be added
    if (Constants.DEBUG // <-- constant defined in your code
      && response
      && (request.type === 'LaunchRequest'
        || request.type === 'IntentRequest')) {
      // LaunchRequest and IntentRequest always carry a session
      const { applicationId } = handlerInput.requestEnvelope.session.application;
      // clear previous card if any
      response.card = undefined;
      // generate new card data
      const cardTitle = `Skill ID : ${applicationId}`;
      let cardContent = `Locale : ${request.locale}\n`;
      cardContent += `Request ID : ${request.requestId}\n`;
      cardContent += `Request Type : ${request.type}\n`;
      if (request.type === 'IntentRequest') {
        // add intent name
        cardContent += `Intent Name : ${request.intent.name}\n`;
        // add slots if any
        const { slots } = request.intent;
        if (slots) {
          cardContent += 'Slots : \n ***************\n';
          Object.keys(slots).forEach((item) => {
            cardContent += `* Name : ${slots[item].name}\n`;
            cardContent += `* Value : ${slots[item].value}\n`;
            cardContent += `***************\n`;
          });
        }
      }
      // set new response card with request information
      response.card = {
        type: 'Simple',
        title: cardTitle,
        content: cardContent
      };
    }
  }
};
The Simple card generated from the above ResponseInterceptor sample code will look like:
In addition to the cards you control, Alexa also sends back cards in response to errors communicating with your service. For example, when Alexa responds with the message “There was a problem with the requested skill’s response,” the card below is sent to your Amazon Alexa app with the requestId responsible for the failure. Since you have implemented Tip 1, you can search your logs for this specific request and troubleshoot the problem.
You can find additional error cards and possible causes in our documentation.
The next time you hear the prompt “There was a problem with the requested skill’s response” from Alexa, these troubleshooting techniques may save you hours of debugging. Beyond troubleshooting, these tips are also best practices to apply during every skill's development phase. To summarize, remember the three key troubleshooting tips: log your requests, responses, and errors; respond appropriately to every type of request and to the capabilities of the calling device; and use the Amazon Alexa app's history and cards to get more insight into each interaction.
We can't wait to see what you build next. If you have questions, find me on Twitter @bnachawati.