Test Your Utterances as You Build Your Model


Test your utterances with the utterance profiler as you build your interaction model. You can enter utterances and see how they resolve to the intents and slots. When an utterance does not invoke the right intent, you can update your sample utterances and retest, all before writing any code for your skill.

Prerequisites

You can use the utterance profiler once you have defined and built an interaction model.

The utterance profiler does not call an endpoint, so you do not need to develop the service for your skill to test your model.
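
The interaction model you build in the console is stored as JSON, which you can view in the JSON Editor. As a point of reference, here is a minimal sketch of the language model portion, expressed as a Python dict and using the PlanMyTrip example from later on this page; the invocation name, slot types, and sample utterances are illustrative assumptions, not required values.

```python
# Minimal sketch of the "languageModel" section of an interaction model,
# expressed as a Python dict that mirrors the JSON shown in the console's
# JSON Editor. Invocation name, slot types, and sample utterances are
# illustrative assumptions for the PlanMyTrip example used later on this page.
import json

language_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "plan my trip",
            "intents": [
                {
                    "name": "PlanMyTrip",
                    "slots": [
                        {"name": "fromCity", "type": "AMAZON.US_CITY"},
                        {"name": "toCity", "type": "AMAZON.US_CITY"},
                        {"name": "travelDate", "type": "AMAZON.DATE"},
                    ],
                    "samples": [
                        "i am going on a trip on {travelDate}",
                        "i want to travel from {fromCity} to {toCity} {travelDate}",
                        "plan a trip from {fromCity} to {toCity}",
                    ],
                },
                {"name": "AMAZON.CancelIntent", "samples": []},
                {"name": "AMAZON.HelpIntent", "samples": []},
                {"name": "AMAZON.StopIntent", "samples": []},
            ],
        }
    }
}

print(json.dumps(language_model, indent=2))
```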

Test an utterance

  1. From any section of the Build page in the developer console, click the Evaluate Model button in the upper-right corner.
  2. Select the Utterance Profiler tab.
  3. Enter the utterance to test and click Submit.

Review the utterance profiler results

After you submit an utterance, the selected intent and other considered intents are displayed in the bottom part of the profiler. When the selected intent has a dialog model, the utterance profiler displays the relevant prompt from the dialog model and you can continue the dialog. When the selected intent does not have a dialog model, the "session" ends and no prompts are displayed.

Selected intent
Displays the intent that would be sent to your skill for this utterance. If no intent was selected, this displays "N/A". If this intent is not what you expected, update the sample utterances in your model.
Other considered intents
Displays other intents that Alexa considered, but did not select, for the utterance. If the intent you expected to invoke is displayed here, you likely need to update the sample utterances to remove the ambiguity.

Each intent in the results displays the following information:

Intent
Name of the selected or considered intent, as specified in your interaction model. If Alexa could not match the utterance with any of your intents, this displays "N/A". A check box next to the intent name indicates whether the multi-turn dialog confirmed or denied the entire intent. This applies when the selected intent has a dialog model and is configured to require confirmation for the entire intent.
Slots
Lists each slot for the intent and shows the corresponding slot values identified from the utterance. A check box next to the slot name indicates whether the multi-turn dialog confirmed or denied the slot. This applies when the slot has a dialog model and is configured to require confirmation for the slot.
If the slot is configured to collect multiple values, the values are separated with commas. For details about collecting multiple values in a slot, see Collect Multiple Values in a Slot.
Next Dialog Act
Displays the dialog act for Alexa's next prompt to the user. This applies when the intent has a dialog model. The dialog act is the dialog step that Alexa attempts to complete next, using the prompts defined in your dialog model. Possible dialog acts are ElicitSlot, ConfirmSlot, and ConfirmIntent.
Figure: Utterance profiler for an intent with both intent and slot (`fromCity`) confirmation enabled.
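
To illustrate where these prompts and confirmations come from, here is a rough sketch of the dialog and prompts sections of an interaction model with both intent confirmation and `fromCity` slot confirmation enabled, as in the figure above. It is again expressed as a Python dict that mirrors the console's JSON Editor; the prompt IDs and wording are assumptions.

```python
# Rough sketch of the "dialog" and "prompts" sections that enable intent
# confirmation and fromCity slot confirmation for PlanMyTrip. Expressed as a
# Python dict mirroring the console's JSON Editor; prompt IDs and prompt
# wording are illustrative assumptions. toCity and travelDate would be
# configured similarly.
dialog_model = {
    "dialog": {
        "intents": [
            {
                "name": "PlanMyTrip",
                "confirmationRequired": True,  # ConfirmIntent at the end of the dialog
                "prompts": {"confirmation": "Confirm.Intent.PlanMyTrip"},
                "slots": [
                    {
                        "name": "fromCity",
                        "type": "AMAZON.US_CITY",
                        "elicitationRequired": True,   # ElicitSlot when the value is missing
                        "confirmationRequired": True,  # ConfirmSlot after the value is filled
                        "prompts": {
                            "elicitation": "Elicit.Slot.fromCity",
                            "confirmation": "Confirm.Slot.fromCity",
                        },
                    }
                ],
            }
        ]
    },
    "prompts": [
        {
            "id": "Elicit.Slot.fromCity",
            "variations": [{"type": "PlainText", "value": "What city are you leaving from?"}],
        },
        {
            "id": "Confirm.Slot.fromCity",
            "variations": [{"type": "PlainText", "value": "You said {fromCity}, right?"}],
        },
        {
            "id": "Confirm.Intent.PlanMyTrip",
            "variations": [
                {
                    "type": "PlainText",
                    "value": "OK, I am planning your trip from {fromCity} to {toCity} on {travelDate}. Is that correct?",
                }
            ],
        },
    ],
}
```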

Format for test utterances

When you enter a test utterance, you can use either written form or spoken form. For example, you can use numerals ("5") or write out numbers ("five"). For more examples, see the rules for custom slot type values.

Test a multi-turn dialog

When your skill has a dialog model, you can use the utterance profiler to test the flow of the multi-turn conversation. Enter an utterance that invokes an intent with a dialog model. The profiler behaves as though the skill delegates the dialog to Alexa, which lets Alexa determine the next step of the dialog. The profiler displays the prompt that Alexa would use to get more information from the user, and you can enter a response. You can continue the back-and-forth conversation until the dialog is considered complete.
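
For context, the delegation the profiler models corresponds, in a deployed skill, to returning the Dialog.Delegate directive (or enabling auto delegation) on each turn. A minimal sketch with the ASK SDK for Python might look like the following; the handler name is an assumption.

```python
# Sketch of a handler that delegates every turn of the PlanMyTrip dialog to
# Alexa until the dialog is complete. Handler name is an assumption.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model.dialog import DelegateDirective
from ask_sdk_model.dialog_state import DialogState


class PlanMyTripInProgressHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        request = handler_input.request_envelope.request
        return (is_intent_name("PlanMyTrip")(handler_input)
                and request.dialog_state != DialogState.COMPLETED)

    def handle(self, handler_input):
        # Returning Dialog.Delegate lets Alexa elicit and confirm slots
        # using the prompts defined in the dialog model.
        current_intent = handler_input.request_envelope.request.intent
        return (handler_input.response_builder
                .add_directive(DelegateDirective(updated_intent=current_intent))
                .response)
```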

As noted earlier, Next Dialog Act indicates the next dialog action that Alexa takes. For example, suppose your skill has a PlanMyTrip intent with three required slots: fromCity, toCity, and travelDate. You could test the following dialog:

User: Alexa, tell Plan my Trip that I'm going on a trip on Friday

Alexa prompt: What city are you leaving from? (Dialog act: ElicitSlot)
User: Seattle

Alexa prompt: What city are you going to? (Dialog act: ElicitSlot)
User: Chicago

Alexa prompt: OK, I am planning your trip from Seattle to Chicago on January 18, 2019. Is that correct? (Dialog act: ConfirmIntent)
User: Yes.

The dialog is now complete, so the utterance test ends. The Selected Intent now shows PlanMyTrip with the three slots filled in.
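
For comparison, a deployed skill, which the profiler never calls, would receive an IntentRequest roughly shaped like the following once this dialog completes; the field values shown are assumptions based on the example dialog above.

```python
# Approximate shape of the IntentRequest a deployed skill would receive once
# the PlanMyTrip dialog above completes. The utterance profiler never calls
# your endpoint; this only shows how the filled slots map to the request.
# Values are assumptions based on the example dialog.
completed_intent_request = {
    "type": "IntentRequest",
    "dialogState": "COMPLETED",
    "intent": {
        "name": "PlanMyTrip",
        "confirmationStatus": "CONFIRMED",  # the user answered "Yes" to ConfirmIntent
        "slots": {
            "fromCity": {"name": "fromCity", "value": "Seattle", "confirmationStatus": "NONE"},
            "toCity": {"name": "toCity", "value": "Chicago", "confirmationStatus": "NONE"},
            "travelDate": {"name": "travelDate", "value": "2019-01-18", "confirmationStatus": "NONE"},
        },
    },
}
```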

Limitations when testing dialogs

The flow when you test a multi-turn dialog is equivalent to simple dialog scenarios in which you delegate every turn of the dialog to Alexa rather than handling the dialog in your code. Because the utterance profiler does not call an endpoint, you cannot test more complex dialogs in which the skill code would make run-time decisions or use the other Dialog directives (Dialog.ElicitSlot, Dialog.ConfirmSlot, or Dialog.ConfirmIntent).
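
As an example of what falls outside the profiler, a handler that makes a run-time decision and sends Dialog.ElicitSlot, sketched below with the ASK SDK for Python, cannot be exercised this way; the handler name and the validation rule are assumptions.

```python
# Sketch of the kind of run-time dialog decision the utterance profiler cannot
# exercise: the skill code inspects slot values and re-elicits one with
# Dialog.ElicitSlot. Handler name and the validation rule are assumptions.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model.dialog import ElicitSlotDirective


class PlanMyTripValidationHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("PlanMyTrip")(handler_input)

    def handle(self, handler_input):
        intent = handler_input.request_envelope.request.intent
        from_city = intent.slots.get("fromCity") if intent.slots else None
        to_city = intent.slots.get("toCity") if intent.slots else None

        # Run-time decision: if both cities are filled but identical,
        # ask for the destination again instead of delegating to Alexa.
        if (from_city and to_city and from_city.value
                and from_city.value == to_city.value):
            return (handler_input.response_builder
                    .speak("You are already in " + from_city.value
                           + ". Which city do you want to travel to?")
                    .add_directive(ElicitSlotDirective(slot_to_elicit="toCity"))
                    .response)

        # ...otherwise continue the dialog (for example, by delegating)...
        return handler_input.response_builder.response
```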

In addition, the utterance profiler does not support testing slot validation. Prompts are displayed only for slot elicitation, slot confirmation, and intent confirmation.


Last updated: Nov 28, 2023