Home > Alexa > Alexa Skills Kit

How to Build a City Guide for Alexa


We are all fans of the places we’ve lived. Maybe it was your childhood home, the town where you went to college, or a city you visited and want to share with others. In any of those cases, we have a new skill template to help you share your favorite places! This template uses AWS Lambda, the Alexa Skills Kit (ASK), and the ASK SDK, in addition to the New York Times Article Search API for news. We provide the business logic, error handling, and help functions for your skill; you just need to provide the data and credentials.

For this example, we will be creating a skill for the city of Seattle, Washington. The user of this skill will be able to ask things like:

  • “Alexa, ask Seattle Guide what there is to do.”
  • “Alexa, ask Seattle Guide about the Space Needle.”
  • “Alexa, ask Seattle Guide for the news.”

You will be able to use your own city in the sample provided, so that users can learn to love your location as much as you do! This might also be a good opportunity to combine the knowledge from this template with our Calendar Reader sample, so that you can provide information about the events in your town, as well as the places!

After completing this tutorial, you’ll know how to do the following:

  • Create a city guide skill - This tutorial will walk Alexa skills developers through all the required steps involved in creating a skill that shares information about a city, and can search for news about that location.
  • Understand the basics of VUI design - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing, and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit it for certification. For guidance on designing a voice experience with Alexa, you can also watch this video.
  • Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.
  • Manage state in an Alexa skill - Depending on the user’s choices, we can handle intents differently.
  • Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending your skill through the certification process so it can be enabled by any Alexa user. You may even be eligible for some Alexa swag!
  • Interact with the New York Times Article Search API.
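To preview the state-management idea from the list above, here is a short sketch in plain JavaScript. It mimics the per-state handler maps used by the Node.js Alexa SDK, but it is an illustrative mock, not the template’s actual code; the state names and response strings are hypothetical.

```javascript
// Hypothetical sketch of state-based intent routing (names are illustrative,
// not the template's actual code). The idea: register a separate handler map
// per state, so the same intent can mean different things in different states.
const STATES = { START: '_START', TOPFIVE: '_TOPFIVE' };

const handlers = {
  [STATES.START]: {
    'AMAZON.YesIntent': () => 'Here is an overview of Seattle.',
    'AMAZON.NoIntent':  () => 'Okay, goodbye!'
  },
  [STATES.TOPFIVE]: {
    // While reading the top-five list, "yes" means "tell me about the next one".
    'AMAZON.YesIntent': () => 'Attraction number one is the Space Needle.',
    'AMAZON.NoIntent':  () => 'Okay, back to the main menu.'
  }
};

function dispatch(state, intentName) {
  const stateHandlers = handlers[state] || {};
  const handler = stateHandlers[intentName];
  return handler ? handler() : "Sorry, I didn't get that.";
}

console.log(dispatch(STATES.START, 'AMAZON.YesIntent'));
console.log(dispatch(STATES.TOPFIVE, 'AMAZON.YesIntent'));
```

Note how `AMAZON.YesIntent` produces a different response depending on the current state; that is the core of state management in a skill.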

Let’s Get Started

Step 1. Setting up Your Alexa Skill in the Developer Portal

Skills are managed through the Amazon Developer Portal. You’ll link the AWS Lambda function you create (in a later step) to a skill defined in the Developer Portal.

  1. Navigate to the Amazon Developer Portal. Sign in or create a free account (upper right). You might see a different page if you have already registered, or our page may have changed. If you see a similar menu and the ability to create an account or sign in, you are in the right place.

  2. Once signed in, navigate to Alexa and select “Getting Started” under Alexa Skills Kit.

  3. Here is where you will define and manage your skill. Select “Add a New Skill.”

  4. There are several choices to make on this page, so we will cover each one individually.
    1. Choose the language you want to start with. You can go back and add all of this information for each language later, but for this tutorial, we are working with “English (U.S.).”
    2. Make sure the radio button for the Custom Interaction Model is selected for “Skill Type”.
    3. Add the name of the skill. Give your skill a name that is simple and memorable, like “Seattle Guide.” The name will be the one that shows up in the Alexa App (and now at amazon.com/skills) when users are looking for new skills. (Obviously, don’t use Seattle Guide. Use a name that describes the city you plan to use for your skill.)
    4. Add the invocation name. This is what your users will actually say to start using your skill. We recommend using only two or three words, because your users will have to say this every time they want to interact with your skill.
    5. Under “Global Fields,” select “no” for Audio Player, as our skill won’t be playing any audio.
    6. Select Next.

  5. Next, we need to define our skill’s interaction model. Let’s begin with the intent schema. In the context of Alexa, an intent represents an action that fulfills a user’s spoken request.

  6. Review the Intent Schema below. This is written in JSON and provides the information needed to map the intents we want to handle programmatically. Copy this from the intent schema in the GitHub repository here.

    Below you will see a collection of intents that we expect our users to indicate by voice. They can ask for an overview of your city, they can ask about the Top Five attractions (in addition to asking for more information about those attractions), and they can ask for the news for your city. Intents can optionally have arguments called slots.

    Slots are predefined data types that we expect the user to provide. This is not a closed list (like an enum), so you must anticipate that you will receive values that are not in your slot value list. For example, you could say “tell me about attraction number two,” and it would be able to return a specific number to our skill’s code. This data also becomes training data for Alexa’s Natural Language Understanding (NLU) engine. You will see how this works more clearly when we define our sample utterances below.

    For the getMoreInfoIntent, the user will be providing a number, like “Tell me about attraction number one.” For more on the use of built-in intents, go here.

      "intents": [
        { "intent": "getOverview", "slots": [] },
        { "intent": "getTopFiveIntent", "slots": [] },
        { "intent": "getAttractionIntent", "slots": [] },
        { "intent": "getMoreInfoIntent", "slots": [{ "name": "attraction", "type": "AMAZON.NUMBER" }] },
        { "intent": "getNewsIntent", "slots": [] },
        { "intent": "AMAZON.YesIntent", "slots": [] },
        { "intent": "AMAZON.NoIntent", "slots": [] },
        { "intent": "AMAZON.HelpIntent", "slots": [] },
        { "intent": "AMAZON.RepeatIntent", "slots": [] }

    You can see that we have defined four built-in intents: Yes, No, Help, and Repeat. Built-in intents cover common commands that users will give, so we don’t have to design them from scratch.
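To see how the schema connects to code, here is a hedged sketch of the routing a skill’s Lambda function performs when an intent request arrives. The intent names match the schema above, but the `handleRequest` helper, the event shape shown, and the response strings are illustrative, not the template’s actual implementation.

```javascript
// Illustrative sketch (not the template's code): route an incoming Alexa
// IntentRequest to a response, keyed on the intent names from the schema.
function handleRequest(event) {
  const intentName = event.request.intent.name;
  switch (intentName) {
    case 'getOverview':
      return 'Seattle is the largest city in the Pacific Northwest.';
    case 'getTopFiveIntent':
      return 'Here are the top five attractions...';
    case 'AMAZON.HelpIntent':
      return 'You can ask for an overview, the top five attractions, or the news.';
    default:
      return "Sorry, I didn't understand that.";
  }
}

// A minimal request envelope, shaped like the JSON Alexa sends to your skill.
const sampleEvent = {
  request: { type: 'IntentRequest', intent: { name: 'getOverview', slots: {} } }
};
console.log(handleRequest(sampleEvent));
```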

  7. The next step is to build the utterance list. This is meant to be a thorough, well-thought-out list of the ways users will try to interact with your skill. You don’t have to get every possible phrase, but it is important to cover a variety of utterances so that the Natural Language Understanding (NLU) engine can best interpret your user’s intent.

  8. Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Providing these different phrases in your sample utterances will help improve voice recognition for the abilities you add to Alexa. It is important to include as wide a range of representative samples as you can: all the phrases you can think of that users might actually say (though do not include samples that users will never speak). Alexa also attempts to generalize based on the samples you provide to interpret spoken phrases that differ in minor ways from the samples specified.

    Now it is time to add the Utterances. Copy/paste the sample utterances from GitHub. An example of utterances is listed below.

    getOverview tell me about Seattle
    getTopFiveIntent tell me top five things to do
    getTopFiveIntent what are the top five things to do
    getTopFiveIntent what I should see
    getAttractionIntent tell me what to do
    getAttractionIntent give me an attraction
    getMoreInfoIntent tell me more about {attraction}
    getMoreInfoIntent open attraction {attraction}
    getMoreInfoIntent open number {attraction}
    getNewsIntent get me the news
    getNewsIntent tell me the news

    As you can see in the example above, we are using our custom intents with phrases that our users might use to interact with our skill. Each example is a different way that a user might ask for that intent. getMoreInfoIntent expects an AMAZON.NUMBER slot, so we have specified this in our utterances with {attraction}. (More information on slots can be found here.)
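As a sketch of the slot handling described above, here is how a skill might read and validate the {attraction} slot. Because AMAZON.NUMBER delivers its value as a string and slot values are not a closed list, the code must tolerate missing or out-of-range values. The helper name and list length here are illustrative assumptions, not the template’s actual code.

```javascript
// Illustrative helper (not from the template): read and validate the
// {attraction} slot from an intent. AMAZON.NUMBER arrives as a string
// (e.g. "2"), and users may say numbers outside your attraction list.
function getAttractionIndex(intent, listLength) {
  const slot = intent.slots && intent.slots.attraction;
  const value = slot && slot.value;            // e.g. "2", or undefined
  const n = parseInt(value, 10);
  if (Number.isNaN(n) || n < 1 || n > listLength) {
    return null;                               // re-prompt the user instead
  }
  return n - 1;                                // zero-based index into the list
}

const intent = {
  name: 'getMoreInfoIntent',
  slots: { attraction: { name: 'attraction', value: '2' } }
};
console.log(getAttractionIndex(intent, 5)); // → 1
```

Returning `null` for anything unexpected lets the skill ask the user to repeat, which matters because the NLU engine can pass along values outside your anticipated list.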

  9. Select Save. You should see the interaction model being built (this might take a minute or two). If you select Next instead, your changes will be saved and you will go directly to the Configuration screen.

Select “Next” to configure the AWS Lambda function that will host the logic for our skill.