
How to Build a Calendar Reader for Alexa


To introduce another way to help you build useful and meaningful skills for Alexa quickly, we’ve launched a calendar reader skill template. This new Alexa skill template makes it easy for developers to create a skill such as an “Event Calendar” or “Community Calendar.” The template leverages AWS Lambda, the Alexa Skills Kit (ASK), and the Alexa SDK for Node.js, while providing the business logic, use cases, error handling, and help functions for your skill.

For this tutorial, we'll be working with the calendar from Stanford University. The user of this skill will be able to ask things like:

  • "What is happening tonight?"
  • "What events are going on next Monday?"
  • "Tell me more about the second event."

You will be able to plug your own public calendar feed (an .ICS file) into the sample provided, so that you can interact with your calendar in the same way. This could be useful for small businesses, community leaders, event planners, realtors, or anyone that wants to share a calendar with their audience.
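The template’s actual feed-reading code lives in the sample on GitHub; as a rough illustration of what an .ics feed contains and how a skill can pull event data out of it, here is a minimal sketch. The event text below is hypothetical, and a production skill should use a proper iCalendar library (such as node-ical) to handle line folding, time zones, and recurring events:

```javascript
// Minimal sketch (not the template's parser): collect the SUMMARY line
// of each VEVENT block in raw iCalendar (.ics) text.
function listEventSummaries(icsText) {
  const summaries = [];
  let inEvent = false;
  for (const line of icsText.split(/\r?\n/)) {
    if (line === 'BEGIN:VEVENT') inEvent = true;
    else if (line === 'END:VEVENT') inEvent = false;
    else if (inEvent && line.startsWith('SUMMARY:')) {
      summaries.push(line.slice('SUMMARY:'.length));
    }
  }
  return summaries;
}

// Hypothetical two-event feed, in the shape a public calendar exports.
const sample = [
  'BEGIN:VCALENDAR',
  'BEGIN:VEVENT',
  'DTSTART:20161007T190000Z',
  'SUMMARY:Guest Lecture: Voice Interfaces',
  'END:VEVENT',
  'BEGIN:VEVENT',
  'DTSTART:20161008T010000Z',
  'SUMMARY:Campus Concert',
  'END:VEVENT',
  'END:VCALENDAR'
].join('\n');

console.log(listEventSummaries(sample));
// [ 'Guest Lecture: Voice Interfaces', 'Campus Concert' ]
```

A skill would fetch the feed over HTTPS inside its Lambda function and then filter the parsed events by the date the user asked about.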

Using the Alexa Skills Kit, you can build an application that can receive and respond to voice requests made on the Alexa service. In this tutorial, you’ll build a web service to handle requests from Alexa and map this service to a skill in the Amazon Developer Portal, making it available on your device and to all Alexa users after certification.

After completing this tutorial, you’ll know how to do the following:

  • Create a calendar reader skill - This tutorial will walk first-time Alexa skills developers through all the required steps involved in creating a skill that reads calendar data, called “Stanford Calendar”.
  • Understand the basics of VUI design - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing, and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit for certification. For guidance on designing a voice experience with Alexa you can also watch this video.
  • Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.
  • Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending your skill through the certification process so it can be enabled by any Alexa user.

Let’s Get Started

Step 1. Setting up Your Alexa Skill in the Developer Portal

Skills are managed through the Amazon Developer Portal. You’ll link a skill defined in the Developer Portal to the AWS Lambda function that hosts your skill’s logic, which you will configure later in this tutorial.

  1. Navigate to the Amazon Developer Portal. Sign in or create a free account (upper right). The page may look slightly different if you have already registered or if the site has changed; as long as you see a similar menu with the ability to create an account or sign in, you are in the right place.

  2. Once signed in, navigate to Alexa and select “Getting Started” under Alexa Skills Kit.

  3. Here is where you will define and manage your skill. Select “Add a New Skill”

  4. There are several choices to make on this page, so we will cover each one individually.
    1. Choose the language you want to start with. You can go back and add all of this information for each language later, but for this tutorial, we are working with “English (U.S.)”
    2. Make sure the radio button for the Custom Interaction Model is selected for “Skill Type”.
    3. Add the name of the skill. Give your skill a name that is simple and memorable, like “Stanford Calendar.” The name will be the one that shows up in the Alexa App when users are looking for new skills. (Obviously, don't use Stanford Calendar. Use a name that describes the calendar you plan to read. For ideas, check out iCalShare for a huge list of user created calendars.)
    4. Add the invocation name. This is what your users will actually say to start using your skill. As with the skill name, use one or two words, because your users will have to say this every time they want to interact with your skill.
    5. Under “Global Fields,” select “no” for Audio Player, as our skill won’t be playing any audio.
    6. Select Next.

  5. Next, we need to define our skill’s interaction model. Let’s begin with the intent schema. In the context of Alexa, an intent represents an action that fulfills a user’s spoken request.

  6. Review the Intent Schema below. This is written in JSON and provides the information needed to map the intents we want to handle programmatically. Copy this from the intent schema in the GitHub repository here.

    Below you will see a collection of built-in intents to simplify handling common user tasks, and then two additional custom intents for querying our calendar source. Intents can optionally have arguments called slots. For our two custom intents, “searchIntent” and “eventIntent,” we will use these slots to define the data type that we are expecting the user to provide.

    Slots are predefined data types that we expect the user to provide. This helps resolve data to a standardized format (like an enum). For example, you could say "next Monday," and it would be able to return a specific date. This data also becomes training data for Alexa's Natural Language Understanding (NLU) engine.

    For our searchIntent, we expect the user to provide a date, like “October 7th.” For the eventIntent, the user will be providing a number, like “Tell me about event #1.” For more on the use of built-in intents, go here.

    {
      "intents": [
        { "intent": "AMAZON.HelpIntent", "slots": [] },
        { "intent": "AMAZON.StopIntent", "slots": [] },
        { "intent": "AMAZON.RepeatIntent", "slots": [] },
        { "intent": "AMAZON.CancelIntent", "slots": [] },
        { "intent": "AMAZON.YesIntent", "slots": [] },
        { "intent": "AMAZON.NoIntent", "slots": [] },
        { "intent": "searchIntent",
          "slots": [{ "name": "date", "type": "AMAZON.DATE" }] },
        { "intent": "eventIntent",
          "slots": [{ "name": "number", "type": "AMAZON.NUMBER" }] }
      ]
    }
    

    You can see that we have defined six different built-in intents: Help, Stop, Repeat, Cancel, Yes, and No. Our two custom intents, searchIntent and eventIntent, each have a slot defined for them. This means that we expect a specific data type from the user when they use these intents. You will see how this works more clearly when we define our sample utterances below.
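To make the slot mechanics concrete, here is a hedged sketch (not the template’s code) of the kind of JSON envelope the Alexa service sends for an IntentRequest, and how a handler might read the resolved slot value. The payload contents and handler function are illustrative assumptions only; note how a spoken phrase like “next Monday” arrives already resolved to an ISO date in the AMAZON.DATE slot:

```javascript
// Hypothetical, trimmed IntentRequest payload for our searchIntent.
const event = {
  request: {
    type: 'IntentRequest',
    intent: {
      name: 'searchIntent',
      slots: {
        date: { name: 'date', value: '2016-10-07' } // resolved AMAZON.DATE
      }
    }
  }
};

// Illustrative handler: branch on the intent name and read slot values.
function handleRequest(evt) {
  const intent = evt.request.intent;
  if (intent.name === 'searchIntent') {
    return 'Searching events for ' + intent.slots.date.value;
  }
  if (intent.name === 'eventIntent') {
    return 'Looking up event number ' + intent.slots.number.value;
  }
  return 'Sorry, I did not understand that.';
}

console.log(handleRequest(event)); // "Searching events for 2016-10-07"
```

In the actual skill, the Alexa SDK for Node.js dispatches requests like this to handlers registered per intent name, so you rarely read the raw envelope yourself.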

  7. The next step is to build the utterance list. This is meant to be a thorough, well-thought-out list of the ways users will try to interact with your skill. You don't have to get every possible phrase, but it is important to cover a variety of ways so that the NLU engine can best understand your user's intent.

  8. Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Providing these different phrases in your sample utterances will help improve voice recognition for the abilities you add to Alexa. It is important to include as wide a range of representative samples as you can: all the phrases that you can think of that are possible in use (though do not include samples that users will never speak). Alexa also attempts to generalize based on the samples you provide to interpret spoken phrases that differ in minor ways from the samples specified.

    Now it is time to add the Utterances. Copy/paste the sample utterances from GitHub. An example of utterances is listed below.

    searchIntent get me stuff happening {date}
    searchIntent get me events for {date}
    searchIntent whats on {date}
    searchIntent whats happening {date}
    searchIntent tell me whats happening {date}
    searchIntent what is happening {date}
    searchIntent what is happening on {date}
    searchIntent what events are happening {date}
    searchIntent what events are happening on {date}
    
    eventIntent tell me about event {number}
    eventIntent whats event {number}
    eventIntent number {number}
    

    As you can see in the example above, we are using our two custom intents: searchIntent and eventIntent. Each example is a different way that a user might ask for that intent. searchIntent expects an AMAZON.DATE slot, and eventIntent expects an AMAZON.NUMBER slot. (More information on slots can be found here.)

  9. Select Save. You should see the interaction model being built (this might take a minute or two). If you select Next, your changes will be saved and you will go directly to the Configuration screen.

Select “Next” to configure the AWS Lambda function that will host the logic for our skill.