*Event is at capacity*
Building Compelling Voice Experiences with Alexa
The Amazon Alexa team has collaborated with Big Nerd Ranch, known globally for its highly effective immersive application development bootcamps and app development services, to develop deep technical training courses for the Alexa Skills Kit (ASK). This free event will consist of a welcome address followed by six educational modules that will dive into building voice user interfaces and skills using the Alexa Skills Kit. Enjoy breakfast, lunch, and an evening happy hour with fellow Alexa developers and members of the Alexa team.
Attendees are encouraged to come back the following day (Tuesday, October 11) for the Hello Alexa bootcamp, where you will refine your skill-building technique and build your second and third skills. Please note that space for these events is limited. Registration is required, and your attendance will be confirmed in an email sent within 48 hours of completing the registration forms.
Date: Monday, October 10, 2016
Time: 8:30 a.m. – 6:00 p.m. BST
Location: Look for the location in your confirmation email.
The event is at capacity; please join us for Office Hours or another Alexa-hosted event.
In this module, we introduce the Alexa Skills Kit platform and teach you how to create skills, which are voice-driven applications for Alexa. We will build and deploy a basic skill called “Greeter” that says hello to users when they invoke it with the words we specify. We'll get an overview of the skill interface and skill service components that make up a basic skill, and learn how to configure them into a working skill. We'll then build the skill and deploy it to Amazon Web Services, so that it responds to a user's words with a greeting on any Amazon Echo or Alexa-enabled device. Finally, we'll test the behavior of the deployed skill.
- Using the Alexa Skill Prototype to Implement an Intent
- Components of a Skill
- Skill Interface & Service Configuration
- Understanding the Skill Request Cycle
- Defining the Interaction Model
- Sample Utterances and Intent Schema Definition
- Configuring AWS Lambda
- Registering a Skill with Alexa
- Defining the Invocation Name
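To make the pieces above concrete, here is a minimal sketch of the Greeter skill service written as a raw AWS Lambda handler (no framework). The request and response shapes follow the Alexa Skills Kit JSON format; the intent name `GreeterIntent` and the greeting text are assumptions for illustration, not the course's actual code.

```javascript
'use strict';

// Build the JSON response envelope that the skill interface expects.
function buildResponse(speechText) {
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speechText },
      shouldEndSession: true
    }
  };
}

// Route on the request type sent by the skill interface.
// In Lambda this function would be exported as exports.handler.
function handler(event, context, callback) {
  const type = event.request.type;
  if (type === 'LaunchRequest' ||
      (type === 'IntentRequest' && event.request.intent.name === 'GreeterIntent')) {
    callback(null, buildResponse('Hello! Welcome to the Greeter skill.'));
  } else {
    callback(null, buildResponse("Sorry, I didn't understand that."));
  }
}
```

The same handler serves both the launch request ("Alexa, open Greeter") and the greeting intent, which keeps a one-response skill like this compact.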
Amazon Node.js official examples: https://github.com/amzn/alexa-skills-kit-js
AWS Lambda: https://aws.amazon.com/lambda/
Slots and Slot Types
With the Greeter skill you learned about the Alexa skill architecture and interface configuration. Now we’ll build on that foundation with a more feature-rich skill called Airport Info. Airport Info will make requests to the Federal Aviation Administration’s JSON-backed web service and tell users whether there is a delay at an airport they specify.
We will see several new features of the skill interaction model that let us build more sophisticated skills. The Airport Info skill will also show how to test locally against an intent and slot using the alexa-app-server module, along with common patterns for querying third-party JSON web APIs in response to a user request.
- Developing Locally with alexa-app-server and alexa-app NPM Modules
- Generating Utterances with the alexa-app Module
- Creating and Registering Custom Slot Types
- Web Service Interaction and Asynchronous Requests
- Asking vs Telling Interactions
- Displaying Data on Cards
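The asking-versus-telling distinction above can be sketched directly against the raw request JSON (the course itself uses the alexa-app module; this framework-free version just shows the mechanics). The intent name `AirportStatusIntent` and the slot name `AIRPORTCODE` are illustrative assumptions, and the FAA call is stubbed out.

```javascript
'use strict';

// Build a response; an "ask" keeps the session open, a "tell" closes it.
function buildSpeech(text, shouldEndSession) {
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: text },
      shouldEndSession: shouldEndSession
    }
  };
}

function handleAirportStatus(event) {
  const slots = event.request.intent.slots || {};
  const code = slots.AIRPORTCODE && slots.AIRPORTCODE.value;
  if (!code) {
    // "Ask" interaction: the slot was not filled, so re-prompt
    // and keep the session open for the user's answer.
    return buildSpeech('Which airport would you like to check?', false);
  }
  // "Tell" interaction: answer and close the session. A real skill
  // would query the FAA status service here before responding.
  return buildSpeech('Checking delay status for ' + code + '.', true);
}
```

Because the FAA lookup is asynchronous in the real skill, the tell branch would build its response inside the web request's callback rather than returning synchronously as it does here.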
FAA Services: http://services.faa.gov/
Invocation Name guidelines: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/c…
Sessions and Voice User Interfaces
In the last module with Airport Info, you learned about the slots and slot types features of the interaction model. Slots and slot types expand the possibilities of what a skill can offer by allowing users to provide spoken variables to your skill’s intent handlers.
In this module, we’ll learn about the user sessions feature, which allows a skill to break more complicated data requirements into a series of steps spanning multiple requests to the skill service. We’ll also learn about Amazon’s voice user interface requirements; following them is important for getting a skill certified for public availability in the Alexa app. Lastly, we’ll introduce cards, graphical user interface elements that a skill can send to the Alexa app.
We'll learn by doing: we'll build a game called the Madlib Builder skill that exercises the sessions and cards features and illustrates good voice design, covering best practices for each along the way.
- Understanding Voice User Interface design
- Implementing Contextual Help
- Multistep Skills
- Amazon's Built-in Intents
- Stream Management
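A multistep flow like Madlib Builder rests on session attributes, which round-trip between the skill service and Alexa on every request. The sketch below collects one word per request until a (hypothetical) prompt list is filled, then finishes with a card; the slot name `WORD` and the prompt list are assumptions for illustration.

```javascript
'use strict';

const PROMPTS = ['noun', 'verb', 'adjective']; // illustrative word list

function handleWordIntent(event) {
  // Session attributes persist across requests within one session.
  const attrs = (event.session && event.session.attributes) || { words: [] };
  const word = event.request.intent.slots.WORD.value;
  attrs.words.push(word);

  if (attrs.words.length < PROMPTS.length) {
    // More words needed: ask again and echo the attributes back
    // so the collected words survive to the next request.
    return {
      version: '1.0',
      sessionAttributes: attrs,
      response: {
        outputSpeech: {
          type: 'PlainText',
          text: 'Give me a ' + PROMPTS[attrs.words.length] + '.'
        },
        shouldEndSession: false
      }
    };
  }
  // All words collected: finish with a card shown in the Alexa app.
  const summary = attrs.words.join(', ');
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: 'Your madlib used: ' + summary },
      card: { type: 'Simple', title: 'Madlib Builder', content: summary },
      shouldEndSession: true
    }
  };
}
```

Note that the skill service itself stays stateless: everything it needs to resume the game arrives in `event.session.attributes`, so the session ends as soon as the service stops echoing the attributes back.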
Voice User Interface Design Handbook: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/a…
Providing Home Cards: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/p…
Data Persistence with DynamoDB
In this module, we will discuss how to link a skill to a database so that it can save an unfinished madlib for later use in another session. Amazon DynamoDB is an easy way to read and write data from an AWS Lambda function skill such as the Madlib Builder skill from the previous module. DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability; it requires no schema and can be used as a key-value or document store. We'll extend Madlib Builder to support saving and loading a madlib so that users can return to their previous work at a later time.
- AWS DynamoDB Overview
- Database Persistence with DynamoDB
- Database Management on AWS
- Extending the Madlib Builder Service to support Saving/Loading
AWS DynamoDB: https://aws.amazon.com/dynamodb/
Account Linking
In the previous module, we learned about data persistence with Amazon DynamoDB. In this module, we'll learn about the account linking feature, which lets customers connect their Alexa account to an existing account on an external service. Account linking allows your skill to securely authenticate a user with popular websites and services such as Twitter, Facebook, Amazon, and many others. Once the user has granted your skill access to the external account, the skill can perform operations with that account on the user's behalf. We'll see how account linking works by adding social features to the Airport Info skill we created earlier. Users will be directed to log in to their Twitter accounts; once authorized through the account linking process, the skill will be able to post the results of the FAA web service request to the user's Twitter timeline.
- Understanding Account Linking
- How to Configure an Account Linking Flow for your Skill
- OAuth 1.0 and 2.0 Differences
- Integrating with an OAuth 1.0a Based Service
- Extending Airport Info to Support Social Features
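From the skill service's side, the visible part of account linking is the access token that Alexa attaches to each request once linking completes. The sketch below checks for it and falls back to a `LinkAccount` card when no account is linked; the intent logic is illustrative, and the OAuth 1.0a-signed Twitter call itself is omitted.

```javascript
'use strict';

function handleTweetIntent(event) {
  const user = (event.session && event.session.user) || {};
  if (!user.accessToken) {
    // No linked account: return the special LinkAccount card, which
    // prompts the user to complete linking in the Alexa app.
    return {
      version: '1.0',
      response: {
        outputSpeech: {
          type: 'PlainText',
          text: 'Please link your Twitter account in the Alexa app.'
        },
        card: { type: 'LinkAccount' },
        shouldEndSession: true
      }
    };
  }
  // Token present: a real skill would now call the external API on
  // the user's behalf using user.accessToken.
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: 'Posting your airport status now.' },
      shouldEndSession: true
    }
  };
}
```

Checking for the token on every request matters because users can unlink the account at any time, so the skill cannot assume the token from a previous session is still present.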
Twitter API: https://dev.twitter.com/overview/documentation
Linking an Alexa user with a user in your system: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/linking-an-alexa-user-with-a-user-in-your-system
Certification and Testing
In the last module, you learned how to enable account linking functionality, allowing your skill to leverage a user's existing accounts on OAuth-based sites and services like Amazon, Facebook, Twitter, and many others. In this module, we’ll cover publishing a skill and the Amazon certification process. Completing the certification process allows our skill to be offered to the public in the Alexa app.
We’ll take a look at common skill development mistakes and pitfalls that prevent certification and we’ll also review certification guidelines for ensuring your skill is accepted. We'll also explore adding automated unit tests to improve the quality and reliability of the skill’s behavior.
- Skill Submission Steps
- The Skill Certification Process
- Certification Requirements
- Common Pitfalls for Certification
- Unit Testing a Skill Locally
Submitting a skill for certification: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/publishing-an-alexa-skill