Alexa Blogs

Showing posts by David Isbitski

October 13, 2016

David Isbitski

The beta is now closed. Sign up to be notified when the List Skill API is publicly available.

Today we announced a limited participation beta for the List Skill API, a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. Developers can now teach Alexa how to interface with their list applications so that customers can simply say, “Alexa, add bananas to my Shopping List” or “Alexa, add ‘Go for a Jog’ to my To-do list.” The List Skill API taps into Amazon’s standardized language model so you don’t have to build a voice interaction model to handle customer requests. You create skills that connect your applications directly to Alexa’s Shopping and To-do list capabilities so that customers can add or review items on their lists—without lifting a finger.

How it works

The List Skill API has a bi-directional interface that ensures lists are updated across all channels. That means the API notifies developers when a customer tells Alexa to add something to their list or makes a change to an existing item. Alexa understands the user’s speech request, converts it to a To-do or Shopping item, and sends you a notification with the new item that was added to the list. The List Skill API also updates the lists for Alexa when users make changes to their lists online or in your mobile application.
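As a rough illustration of the notification flow described above, consider a handler that applies an incoming list change to your own backing store. This is a hypothetical sketch: the beta payload shape is not public, so the field names (action, listName, item) are illustrative assumptions, not the actual API.

```javascript
// Hypothetical sketch of applying a List Skill notification to your own
// list store. Field names (action, listName, item) are assumptions for
// illustration only; the real beta payload may differ.
function handleListEvent(notification, store) {
  if (notification.action === 'itemAdded') {
    // Create the list on first use, then append the new item.
    var list = store[notification.listName] || (store[notification.listName] = []);
    list.push(notification.item);
  }
  return store;
}
```

When the customer says "Alexa, add bananas to my Shopping List," your service would receive a notification like `{ action: 'itemAdded', listName: 'Shopping', item: 'bananas' }` and mirror the change into its own data.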

Customers are increasingly using voice interfaces as a hands-free way to manage their lives. By using Alexa’s built-in Shopping and To-do lists to keep track of items to buy and things to do, customers on millions of Alexa-enabled devices only have to ask, and managing their lists often becomes a daily habit. By integrating with the List Skill API, you make it easier for your existing customers to keep track of important tasks and shopping items in the home, and introduce your brand to a new group of Alexa customers.

Here's what developers are saying

Today we announced that Any.do and Todoist created the first skills using the List Skill API. 

 “We’ve been huge fans of Alexa for a long time. Once the opportunity to work with Alexa in a deep way presented itself, we were extremely happy to push it forward,” says Omer Perchik, the Founder and CEO of Any.do. “The work with the new Alexa List Skill API was simple and straightforward, and our experience as a beta participant was smooth due to the support from Amazon.”

“At Todoist, we're very excited about the potential of AI and AI-powered services. Amazon’s Alexa is one of the earliest and best examples of making this technology useful in people's everyday lives,” says Doist founder and CEO Amir Salihefendic. “That's why we're thrilled to have collaborated with the Amazon team as part of their limited participation beta for the Alexa List Skill API. We’re sure our customers will find Alexa extremely helpful in staying organized and productive, and we're looking forward to working with Amazon to make the Todoist skill even more useful as Alexa continues to evolve and get smarter.”

Get started now

Going forward, we’re excited to open the List Skill API to more developers as part of our limited participation beta.

For more information about getting started with the Alexa Skills Kit and to apply to participate in the List Skill API beta, check out the following additional assets:

About the List Skill API
Alexa Dev Chat Podcast
Alexa Training with Big Nerd Ranch
Alexa Skills Kit (ASK)
Alexa Developer Forums

-Dave (@TheDaveDev)

October 03, 2016

David Isbitski

Today we are introducing the Flash Briefing Skill API, a new addition to the Alexa Skills Kit that enables developers to add feeds to Flash Briefing on Alexa, which delivers pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.

The Flash Briefing Skill API is free to use. Get Started Now >

Creating Your Skill with the Flash Briefing Skill API

To get started, you’ll configure a JSON or RSS feed and submit descriptive information about your skill in the portal. This can be done through the following steps:

1.  Register for a free Amazon Developer Account if you have not done so already, and navigate to the Alexa Skills Kit box in the Alexa menu of the developer portal.

2.  Click on Add a New Skill

3.  Select Flash Briefing Skill API, fill out a name and then click Next.

4.  Unlike custom skills, the interaction model for Flash Briefing skills is generated for you automatically; simply hit Next.

5.  Now we will need to define our Content Feed(s). Your Flash Briefing Skill can include one or more defined feeds. Then, click on the Add new feed button.

6.  You will then enter information about your content feed, including its name, how often it will be updated, the content type (audio or text), the genre, an icon, and the URL where you are hosting the feed.

7.  Repeat these steps for each feed you wish to include in the skill. The first feed you add is automatically marked as the default feed. If you add more feeds, you can choose which feed is the default by selecting it in the Default column.

8.  Click Next when you are finished adding feeds and are ready to test your skill.

For additional information, check out the Steps to Create a Flash Briefing Skill page.
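For a text feed, a compatible JSON feed item looks roughly like the following. The field values are placeholders; consult the Flash Briefing feed documentation for the exact schema.

```json
{
  "uid": "urn:uuid:1335c695-cfb8-4ebb-abbd-80da344efa6b",
  "updateDate": "2016-10-03T00:00:00.0Z",
  "titleText": "Example briefing title",
  "mainText": "The text Alexa will read aloud for this update.",
  "redirectionUrl": "https://example.com/briefings/today"
}
```

Audio feeds instead reference a hosted audio file via a streamUrl field, and Alexa plays the recording rather than reading mainText.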

[Read More]

September 07, 2016

David Isbitski

When creating your own Alexa skill, there may be times when you would like to change the way Alexa speaks. Perhaps she isn’t pronouncing a word correctly, her inflections are too serious, or you need to include a short audio clip. Speech Synthesis Markup Language, or SSML, is a standardized markup language that provides a way to mark up text to change how speech is synthesized. Numerous SSML tags are currently supported by the Alexa Skills Kit, including: speak, p, s, break, say-as, phoneme, w and audio.

This 20-minute video will walk you through adding SSML support to your Alexa skill and shows exactly how to pause Alexa’s speech, change how she pronounces a word and how to create and embed your own audio tags.
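For example, a response using several of these tags might look like the following (the audio URL is a placeholder):

```xml
<speak>
  <p>Welcome back.</p>
  <break time="500ms"/>
  <s>Your gate is <say-as interpret-as="spell-out">B12</say-as>.</s>
  <s>Some say <phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme>.</s>
  <audio src="https://example.com/chime.mp3"/>
</speak>
```

The break tag inserts a pause, say-as controls how a token is read, phoneme overrides pronunciation, and audio embeds a short clip in the response.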

For more information about getting started with Alexa and SSML, check out the following:

Speech Synthesis Markup Language (SSML) Reference
Alexa Dev Chat Podcast
Intro to Alexa Skills On Demand
Voice Design 101 On Demand
Alexa Skills Kit (ASK)
Alexa Developer Forums

-Dave (@TheDaveDev)

August 24, 2016

David Isbitski

Before today, the Alexa Skills Kit enabled short audio via SSML audio tags on your skill responses. Today we are excited to announce that we have now added streaming audio support for Alexa skills, including playback controls. This means you can easily create skills that play back audio content like podcasts, news stories, and live streams.

New AudioPlayer and PlaybackController interfaces provide directives and requests for streaming audio and monitoring playback progression. With this new feature, your skill can send audio directives to start and stop the playback. The Alexa service can provide your skill with information about the audio playback’s state, such as when the track is nearly finished, or when playback starts and stops. Alexa can also now send requests in response to hardware buttons, such as those on a remote control.

Enabling Audio Playback Support in Your Skill

To enable audio playback support in your skill you simply need to turn the Audio Player functionality on and handle the new audio Intents. Navigate to the Alexa developer portal and do the following:

  • On the Skill Information page in the developer portal, set the Audio Player option to Yes.
     
  • Include the required built-in intents for pausing and resuming audio in your intent schema and implement them in some way:
    • AMAZON.PauseIntent
    • AMAZON.ResumeIntent
       
  • Call the AudioPlayer.Play Directive from one of your Intents to start the Audio Playback
     
  • Handle AudioPlayer and PlaybackController Requests and optionally respond
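The AudioPlayer.Play directive is returned as part of your skill’s JSON response. A minimal builder, sketched with a placeholder stream URL and token:

```javascript
// Sketch of a skill response carrying an AudioPlayer.Play directive.
// The stream URL and token passed in are placeholders.
function buildPlayResponse(streamUrl, token) {
  return {
    version: '1.0',
    response: {
      directives: [{
        type: 'AudioPlayer.Play',
        playBehavior: 'REPLACE_ALL',      // start this stream immediately
        audioItem: {
          stream: {
            url: streamUrl,               // must be an HTTPS URL
            token: token,                 // opaque identifier for this stream
            offsetInMilliseconds: 0       // start playback from the beginning
          }
        }
      }],
      shouldEndSession: true
    }
  };
}
```

Your intent handler returns this object, and Alexa begins streaming the audio; later PlaybackController requests reference the same token.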

In addition to the required built-in intents, your skill should gracefully handle the following additional built-in intents:

  • AMAZON.CancelIntent
  • AMAZON.LoopOffIntent
  • AMAZON.LoopOnIntent
  • AMAZON.NextIntent
  • AMAZON.PreviousIntent
  • AMAZON.RepeatIntent
  • AMAZON.ShuffleOffIntent
  • AMAZON.ShuffleOnIntent
  • AMAZON.StartOverIntent

Note: Users can invoke these built-in intents without using your skill’s invocation name. For example, while in a podcast skill you create, a user could say “Alexa, Next” and your skill would play the next episode.

If your skill is currently playing audio, or was the skill most recently playing audio, these intents are automatically sent to your skill. Your code needs to expect them and not return an error. If any of these intents does not apply to your skill, handle it in an appropriate way in your code. For instance, you could return a response with text-to-speech indicating that the command is not relevant to the skill. The specific message depends on the skill and whether the intent is one that might make sense at some point, for example:

  • For a podcast skill, the AMAZON.ShuffleOnIntent intent might return the message: “I can’t shuffle a podcast.”
  • For version 1.0 of a music skill that doesn’t yet support playlists and shuffling, the AMAZON.ShuffleOnIntent intent might return: “Sorry, I can’t shuffle music yet.”


Note: If your skill uses the AudioPlayer directives, you cannot extend the above built-in intents with your own sample utterances.
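A handler for an inapplicable built-in intent can simply return a short text-to-speech response, as in this sketch for the podcast example above:

```javascript
// Sketch of declining a built-in intent that does not apply to the skill.
// The response shape is the standard plain-text skill response; the
// speech text follows the podcast example above.
function handleShuffleOn() {
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: "I can't shuffle a podcast." },
      shouldEndSession: false
    }
  };
}
```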

[Read More]

July 26, 2016

David Isbitski

Today, we’re excited to announce the Amazon Alexa session track at AWS re:Invent 2016, the largest gathering of the global Amazon developer community. AWS re:Invent provides an opportunity to connect with peers and technology experts, engage in hands-on labs and bootcamps, and learn about new technologies and how to improve productivity, network security, and application performance, all while keeping infrastructure costs low. AWS re:Invent runs November 28 through December 2, 2016.

The Alexa track at AWS re:Invent will dive deep into the technology behind the Alexa Skills Kit and the Alexa Voice Service, with a special focus on using AWS Services to enable voice experiences. We’ll cover AWS Lambda, DynamoDB, CloudFormation, Cognito, Elastic Beanstalk and more. You’ll hear from senior evangelists and engineers and learn best practices from early Alexa developers. Here’s an early peek at the Alexa sessions.

ALX 201: How Capital One Built a Voice Experience for Banking

Tuesday, November 29, 2016

10:00 AM - 11:00 AM

Introductory

As we add thousands of skills to Alexa, our developers have uncovered some basic and more complex tips for building better skills. Whether you are new to Alexa skill development or if you have created skills that are live today, this session will help you understand how to create better voice experiences. Last year, Capital One joined Alexa on stage at re:Invent to talk about their experience building an Alexa skill. Hear from them one year later to learn from the challenges that they had to overcome and the results they are seeing from their skill.

ALX 202: How Amazon is Enabling the Future of Automotive

Thursday, December 1, 2016

11:30 AM - 12:30 PM

Introductory

The experience in the auto industry is changing. For both the driver and the car manufacturer, a whole new frontier is on the near horizon. What do you do with your time while the car is driving itself? How do I have a consistent experience while driving shared or borrowed cars? How do I stay safer and more aware in the ever increasing complexity of traffic, schedules, calls, messages and tweets? In this session we will discuss how the auto industry is facing new challenges and how the use of Amazon Alexa, IoT, Logistics services and the AWS Cloud is transforming the Mobility experience of the (very near) future.

ALX 301: Alexa in the Enterprise: How JPL Leverages Alexa to Further Space Exploration with Internet of Things

Wednesday, November 30, 2016

5:00 PM - 6:00 PM

Advanced

The Jet Propulsion Laboratory designs and creates some of the most advanced space robotics ever imagined.  JPL IT is now innovating to help streamline how JPLers will work in the future in order to design, build, operate, and support these spacecraft. They hope to dramatically improve JPLers' workflows and make their work easier for them by enabling simple voice conversations with the room and the equipment across the entire enterprise.

What could this look like? Imagine just talking with the conference room to configure it. What if you could kick off advanced queries across AWS services and kick off AWS Kinesis tasks by simply speaking the commands? What if the laboratory could speak to you and warn you about anomalies or notify you of trends across your AWS infrastructure? What if you could control rovers by having a conversation with them and ask them questions? In this session, JPL will demonstrate how they leveraged AWS Lambda, DynamoDB and CloudWatch in their prototypes of these use cases and more.  They will also discuss some of the technical challenges they are overcoming, including how to deploy and manage consumer devices such as the Amazon Echo across the enterprise, and give lessons learned.  Join them as they use Alexa to query JPL databases, control conference room equipment and lights, and even drive a rover on stage, all with nothing but the power of voice!

ALX 302: Build a Serverless Back End for Your Alexa-Based Voice Interactions

Thursday, December 1, 2016

5:00 PM - 6:00 PM

Advanced

Learn how to develop voice-based serverless back ends for Alexa Voice Service (AVS) and Alexa devices using the Alexa Skills Kit (ASK), which allows you to add new voice-based interactions to Alexa. We’ll code a new skill, implemented by a serverless backend leveraging AWS services such as Amazon Cognito, AWS Lambda, and Amazon DynamoDB. Often, your skill needs to authenticate your users, link them back to your backend systems, and persist state between user invocations. User authentication is performed by leveraging OAuth-compatible identity systems. Running such a system on your back end requires undifferentiated heavy lifting or boilerplate code. We’ll leverage Login with Amazon as the identity provider instead, allowing you to focus on your application implementation and not on the low-level user management parts. At the end of this session, you’ll be able to develop your own Alexa skills and use Amazon and AWS services to minimize the required backend infrastructure. This session shows you how to deploy your Alexa skill code on a serverless infrastructure, leverage AWS Lambda, use Amazon Cognito and Login with Amazon to authenticate users, and leverage Amazon DynamoDB as a fully managed NoSQL data store.

ALX 303: Building a Smarter Home with Alexa

Thursday, December 1, 2016

1:00 PM - 2:00 PM

Advanced

This session introduces the beta process, the Smart Home Skill API, and how to quickly and easily set up a smart home so you can begin using Alexa to control lighting, blinds, and small appliances. We begin by going over what devices you can buy and share and some common best practices when enabling these devices in your home or office. We also demonstrate how to enable these devices and connect them with Alexa. We show you how to create groups and manage your home with your voice, as well as some tips and tricks for managing your home when you are away. This session explains how to use the Smart Home Skill API to create a custom skill to manage your smart home devices as well as lessons learned from dozens of customers and partners. Alexa smart home partner Ecobee joins us to talk about their experience in the Smart Home Skill API beta program.

ALX 304: Tips and Tricks on Bringing Alexa to Your Products

Friday, December 2, 2016

9:30 AM - 10:30 AM

Advanced

Ever wonder what it takes to add the power of Alexa to your own products?  Are you curious about what Alexa partners have learned on their way to a successful product launch?  In this session you will learn about the top tips and tricks on how to go from VUI newbie to an Alexa-enabled product launch.  Key concepts around hardware selection, enabling far field voice interaction, building a robust Alexa Voice Service (AVS) client and more will be discussed along with customer and partner examples on how to plan for and avoid common challenges in product design, development and delivery. 

ALX 305: From VUI to QA: Building a Voice-Based Adventure Game for Alexa

Friday, December 2, 2016

11:00 AM - 12:00 PM

Advanced

Hitting the submit button to publish your skill is similar to sending your child to their first day of school. You want it to be set up for a successful launch day and for many days thereafter. Learn how to set your skill up for success from Andy Huntwork, Alexa Principal Engineer and one of the creators of the popular Alexa skill "The Magic Door." You will learn the most common reasons why skills fail and also some of the more unique use cases. The purpose of this session is to help you build better skills by knowing what to look out for and what you can test for before submitting. In this session, you will learn what most developers do wrong, how to successfully test and QA your skill, how to set your skill up for successful certification, and the process of how a skill gets certified.  

MAC 202: Deep Learning in Alexa

Introductory

Neural networks have a long and rich history in automatic speech recognition. In this talk, we present a brief primer on the origin of deep learning in spoken language, and then explore today’s world of Alexa. Alexa is the AWS service that understands spoken language and powers Amazon Echo. Alexa relies heavily on machine learning and deep neural networks for speech recognition, text-to-speech, language understanding, and more. We also discuss the Alexa Skills Kit, which lets any developer teach Alexa new skills.

We encourage you to check back because we’ll have more content announcements in the coming months.

Hope to see you there! Haven’t signed up yet? Register now.

-Dave (@TheDaveDev)

 

July 19, 2016

David Isbitski

Today we’re happy to announce the new alexa-sdk for Node.js to help you build skills faster and with less complexity. Creating an Alexa skill using the Alexa Skills Kit, Node.js and AWS Lambda has become one of the most popular ways we see skills created today. The event-driven, non-blocking I/O model of Node.js is well suited for an Alexa skill, and Node.js has one of the largest ecosystems of open source libraries in the world. Plus, AWS Lambda is free for the first one million calls per month, which can support skill hosting for most developers. And you don’t need to manage any SSL certificates when using AWS Lambda (since the Alexa Skills Kit is a trusted trigger).

While setting up an Alexa skill using AWS Lambda, Node.js and the Alexa Skills Kit has been a simple process, the amount of code you have had to write has not been small. We have seen a large amount of time spent in Alexa skills on handling session attributes, skill state persistence, response building and behavior modeling. With that in mind, the Alexa team set out to build an Alexa Skills Kit SDK specifically for Node.js that helps you avoid common hang-ups and focus on your skill’s logic instead of boilerplate code.

Enabling Faster Alexa Skill Development with the Alexa Skills Kit for Node.js (alexa-sdk)

With the new alexa-sdk, our goal is to help you build skills faster while allowing you to avoid unneeded complexity. Today, we are launching the SDK with the following capabilities:

  • Hosted as an NPM package, allowing simple deployment to any Node.js environment
  • Ability to build Alexa responses using built-in events
  • Helper events for new sessions and unhandled events that can act as ‘catch-all’ handlers
  • Helper functions to build state-machine based Intent handling
    • This makes it possible to define different event handlers based on the current state of the skill
  • Simple configuration to enable attribute persistence with DynamoDB
  • All speech output is automatically wrapped as SSML
  • Lambda event and context objects are fully available via this.event and this.context
  • Ability to override built-in functions, giving you more flexibility in how you manage state or build responses (for example, saving state attributes to AWS S3)
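As a rough illustration of the state-machine intent handling the SDK enables, here is a hypothetical miniature dispatcher. This is not the alexa-sdk API itself, just a sketch of the pattern: handlers are grouped by skill state, and the current state selects which handler runs.

```javascript
// Hypothetical miniature of state-based intent dispatch, the pattern the
// alexa-sdk's helper functions provide. Not the SDK's actual API.
function createSkill(stateHandlers) {
  return {
    state: '',                                   // current skill state
    dispatch: function (intentName) {
      var handlers = stateHandlers[this.state] || {};
      var handler = handlers[intentName] || handlers['Unhandled'] ||
                    function () { return 'Sorry, I did not get that.'; };
      return handler.call(this);                 // handlers can read/write this.state
    }
  };
}

var skill = createSkill({
  '': {
    'LaunchIntent': function () {
      this.state = '_GAME';                      // move into the game state
      return 'Welcome! Say start.';
    }
  },
  '_GAME': {
    'AnswerIntent': function () { return 'Correct!'; }
  }
});
```

The real SDK layers response building, SSML wrapping and DynamoDB persistence on top of this idea, so your handlers stay focused on skill logic.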
[Read More]

June 29, 2016

David Isbitski

Today, we are launching a bi-weekly podcast focused exclusively on the Alexa developer community and the Amazon teams building Alexa technology. Each episode will be 20-30 minutes long and air twice a month. We’ll discuss various aspects of Alexa, including the Alexa Skills Kit, Alexa Voice Service, natural language understanding, voice recognition, and firsthand experiences directly from developers like you.

To kick it off, our first episode is a chat between myself and Charlie Kindel, director of Alexa Smart Home at Amazon. Charlie and I go into the details behind the launch of the Smart Home Skill API and some of the decisions the team had to make along the way. I also had the opportunity to learn about Charlie’s experience  in smart home and his thoughts on how he sees it evolving over time.

Check out the first episode.

-Dave (@TheDaveDev)

June 28, 2016

David Isbitski

Alexa, Amazon’s cloud-based voice service, powers voice experiences on millions of devices, including Amazon Echo and Echo Dot, Amazon Tap, Amazon Fire TV devices, and devices like Triby that use the Alexa Voice Service. One year ago, Amazon opened up Alexa to developers, enabling you to build Alexa skills with the Alexa Skills Kit and integrate Alexa into your own products with the Alexa Voice Service. Today, tens of thousands of developers are building skills for Alexa, and there are over 1,400 skills for Alexa – including Lyft and Honeywell, which were added today.

A New Experience for Discovering Skills
Today, we announced new ways for customers to discover and use your Alexa skills, including a new voice-enablement feature and a completely redesigned Alexa app. Customers can now quickly search, discover and use your skills. Starting today, customers can browse Alexa skills by categories such as “Smart Home” and “Lifestyle” in the Alexa app, apply additional search filters, and access their previously enabled skills via the “Your Skills” section.

[Read More]

May 20, 2016

David Isbitski

When creating a custom Alexa skill, you will need to provide an invocation name that users will use to invoke and interact with your skill. The invocation name does not need to be the same as your skill’s name, but it must meet certain criteria to ensure a positive user experience. The invocation name you provide should clearly identify your skill’s capabilities, be memorable, and be accurately recognized by Alexa herself.

Invoking Your Custom Skill

Your service gets called when customers use your invocation name, such as “Alexa, ask dungeon dice for a d20.” In this example, users invoke the custom Alexa skill by using the Invocation Name ‘dungeon dice’ along with a supported phrase for requesting the service.

You can change your invocation name at any time while developing a skill. You cannot change the invocation name after a skill is certified and published.

Note that the invocation name is only needed for custom skills. If you are using the Smart Home Skill API, users do not need to use an invocation name for the skill. For more about the different types of skills you can create, see Understanding the Different Types of Skills.

It is also important to think about how the rest of the invocation phrase will sound when using your invocation name. There are three ways users can invoke your skill, and a good invocation name works well in all of these contexts:

  • Invoking the skill with a particular request. For example, “Alexa, Ask Daily Horoscopes for Gemini.”
  • Invoking the skill without a particular request, using a defined phrase such as “open” or “start.” For example, “Alexa, open Daily Horoscopes.”
  • Invoking the skill using just the invocation name and nothing else: “Alexa, Daily Horoscopes.”

Here are some additional examples of the supported phrases for requesting an Alexa skill. For a complete list of all launch phrases, see Understanding How Users Invoke Custom Skills.

  • <invocation name>: “Alexa, Daily Horoscopes”
  • Ask <invocation name>: “Alexa, Ask Daily Horoscopes”
  • Begin <invocation name>: “Alexa, Begin Trivia Master”
  • Do <invocation name>: “Alexa, Do Trivia Master”
  • Launch <invocation name>: “Alexa, Launch Car Fu”
  • Load <invocation name>: “Alexa, Load Daily Horoscopes”
  • Open <invocation name>: “Alexa, Open Daily Horoscopes”
  • Play <invocation name>: “Alexa, Play Trivia Master”
  • Play the game <invocation name>: “Alexa, Play the game Trivia Master”
  • Resume <invocation name>: “Alexa, Resume Trivia Master”
  • Run <invocation name>: “Alexa, Run Daily Horoscopes”
  • Start <invocation name>: “Alexa, Start Daily Horoscopes”
  • Start playing <invocation name>: “Alexa, Start playing Trivia Master”
  • Start playing the game <invocation name>: “Alexa, Start playing the game Trivia Master”
  • Talk to <invocation name>: “Alexa, Talk to Daily Horoscopes”
  • Tell <invocation name>: “Alexa, Tell Daily Horoscopes”
  • Use <invocation name>: “Alexa, Use Daily Horoscopes”

New Invocation Name Requirements

In order to simplify the process of choosing acceptable invocation names, we are providing new guidance. You’ll need to meet the following requirements to pass certification starting May 25.

  1. The skill invocation name must not infringe upon the intellectual property rights of an entity or person.
  2. One-word invocation names are not allowed, unless the invocation name is unique to your brand/intellectual property.
  3. Invocation names which are names of people or places (for example, “molly,” “seattle”) are not allowed, unless they contain other words in addition to the name (for example, “molly’s horoscope”).
  4. Two-word invocation names are not allowed if one of the words is a definite article (“the”), indefinite article (“a,” “an”) or preposition (“for,” “to,” “of”). For example, “a bicycle,” “an espresso,” “to amuse,” “for fun.”
  5. The invocation name must not contain any of the Alexa skill launch phrases and connecting words. Launch phrase examples include “launch,” “ask,” “tell,” “load,” and “begin.” Connecting word examples include “to,” “from,” “by,” “if,” “and,” “whether.” See Understanding How Users Invoke Custom Skills for a complete list of skill launch phrases and connecting words.
  6. The invocation name must not contain the wake words “Alexa,” “Amazon,” “Echo,” or the words “skill” or “app.”
  7. The invocation name must contain only lower-case alphabetic characters, spaces between words, possessive apostrophes (for example, “sam’s science trivia”), or periods used in abbreviations (for example, “a. b. c.”). Other characters like numbers must be spelled out. For example, “twenty one.” The name must be easy to pronounce correctly and be phonetically distinct to avoid being misinterpreted as other similar sounding words. 
  8. The invocation name must not create confusion with existing Alexa features. If your invocation name overlaps with common Alexa commands, users may get confused by Alexa's response and not enable your skill. For example, if your invocation name is too similar to the built-in "weather" command, Alexa may sometimes respond with your skill and sometimes respond with the built-in weather feature, providing an inconsistent user experience.
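To make a few of the requirements above concrete, here is an illustrative (and unofficial) checker. It covers only rules 2, 5, 6 and part of 7; the word lists and regular expressions are simplified assumptions, not the certification team’s actual logic.

```javascript
// Illustrative, unofficial check of a few invocation name requirements:
// no one-word names (rule 2), no launch phrases or wake words (rules 5-6),
// and lowercase alphabetic words with possessives or spelled abbreviations
// (a simplified slice of rule 7). Word lists here are abbreviated.
var FORBIDDEN = ['alexa', 'amazon', 'echo', 'skill', 'app',
                 'launch', 'ask', 'tell', 'load', 'begin'];

function isAcceptableInvocationName(name) {
  var words = name.trim().split(/\s+/);
  if (words.length < 2) return false;                // rule 2: one word not allowed
  return words.every(function (w) {
    if (FORBIDDEN.indexOf(w) !== -1) return false;   // rules 5 and 6
    // lowercase word, optional possessive, or spelled abbreviation like "a. b. c."
    return /^[a-z]+('s)?$/.test(w) || /^([a-z]\.)+$/.test(w);
  });
}
```

For example, “daily horoscopes” and “sam's science trivia” pass, while “horoscope” (one word) and “alexa facts” (wake word) fail.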

The following recommendations are not required for certification, but will provide your users with a better experience and are highly recommended:

  • The skill invocation name should be specific to the functionality of the skill, unless the invocation name is unique to your brand or intellectual property (for example, “uber,” “dominos”). One way to achieve relevance is to qualify the invocation name with something that describes the skill’s functionality or something relevant to your company or developer name. For example, “boston transit,” “cricket trivia,” “math tutor,” “magic eight ball,” “baby stats,” “tim’s jokes.”
  • The invocation name should also fit smoothly with at least one of the Alexa skill launch phrases (for example, “launch,” “ask,” “tell,” “load,” “begin”) to allow customers to naturally invoke the skill.

Finally, plan on spending some time testing your invocation name once you have an initial version of your service up and running. When testing with an Alexa-enabled device, you can see how Alexa interpreted your invocation name by reviewing the history in the Amazon Alexa App (in the app, navigate to Settings and then History).

For more guidance on creating a Custom Skill for Alexa, check out the following additional assets:

Voice Design Handbook

Understanding How Users Invoke Custom Skills

Steps to Build a Custom Skill

Voice Design Best Practices

-Dave (@TheDaveDev)

May 13, 2016

David Isbitski

By Juan Pablo Claude, software developer at Big Nerd Ranch

Editor’s note: This is part six of the Big Nerd Ranch series. Check out parts five, four, three, two, and one.

One of the greatest features of Alexa is that it functions as a personal assistant you can interact with without having to physically touch the device. This allows you to get information or accomplish tasks while you are, for example, baking a cake. One task you might accomplish in such a sticky situation is posting a tweet about your baking adventures.

From an Alexa developer’s point of view, posting a tweet is a fairly sophisticated operation, because the skill needs to authenticate with the user’s Twitter account on the web and then get authorization to access the API in order to post.

From a convenience and security point of view, it would be a terrible idea for the skill to ask for the user’s credentials verbally every time access to the Twitter API is needed. Furthermore, an Alexa-enabled device does not have a way to store these credentials locally, so another approach must be used.

Fortunately, the Alexa Skills Kit features account linking, which lets you access user accounts on other services, Twitter among them, using the OAuth protocol. In this post, we will use account linking and OAuth to grant delegated authority to our Airport Info skill so that it can post an airport’s flight status to a user’s Twitter account. Delegated authority means that the Airport Info skill will be granted permission to post to the user’s Twitter account without ever having access to the actual account credentials.

Note that Alexa uses the OAuth 2.0 protocol, and some services like Twitter still use version 1.0. The differences in the implementation are not great. Essentially, dealing with OAuth 1.0 requires an additional token request step that will be handled in this exercise by a separate web application.
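Once account linking is configured, the user’s OAuth access token arrives with each skill request at session.user.accessToken, and a skill can prompt the user to link their account by returning a LinkAccount card. A minimal sketch (the speech text and control flow are illustrative; the Twitter call itself is elided):

```javascript
// Sketch: check for a linked account before acting on the user's behalf.
// session.user.accessToken and the LinkAccount card are part of the
// standard skill request/response model; the speech text is illustrative.
function handleTweetIntent(event) {
  var token = event.session.user.accessToken;
  if (!token) {
    // No linked account yet: ask the user to link it in the Alexa app.
    return {
      version: '1.0',
      response: {
        outputSpeech: { type: 'PlainText',
                        text: 'Please link your Twitter account in the Alexa app.' },
        card: { type: 'LinkAccount' },
        shouldEndSession: true
      }
    };
  }
  // With a token present, the skill would call the Twitter API here.
  return { version: '1.0', response: { shouldEndSession: true } };
}
```

The skill never sees the user’s password; it only ever holds the delegated token.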

Registering Airport Info as a Twitter App

If you haven’t already built an Alexa Skill, check out our previous posts on building Airport Info to get started.

The first step in enabling Twitter delegated authority to the Airport Info skill is to let Twitter know that the skill exists. We must register Airport Info as a Twitter App, so that Twitter knows the skill will later ask for authorization to post on a user’s behalf. To accomplish this, first log in to your Twitter account and visit the Twitter Apps page.

[Read More]

April 27, 2016

David Isbitski

By Juan Pablo Claude, software developer at Big Nerd Ranch

Editor’s note: This is part five of the Big Nerd Ranch series. Check out parts four, three, two, and one.

If you are reading this post, it is likely that you have finished writing a shiny new Alexa skill and you are ready to submit it to Amazon for review and publication. In this post, we’ll guide you through the submission process and help you get your skill published as quickly as possible.

Haven’t written your skill yet? Read on to learn about Amazon’s guidelines so that you can have a rapid and successful skill review.

What to Keep in Mind When Designing and Submitting an Alexa Skill for Review

If you want to have your own skill available to Alexa users, you will need to submit your skill to the Alexa Team for certification.

That means that you, as a skill developer, need to follow Amazon’s content and security policies if you wish to have your skill certified for distribution. Amazon offers an official checklist for skill submission, along with policy guidelines and security requirements.

As you might expect, skills with obscene, offensive or illegal content or purposes are terminally frowned upon. What you might not expect is that the content policies do not allow skills targeted to children, as they may compromise a child’s online safety. This is a less evident restriction you should consider when a new skill idea hits you.

Security for the server-side part of your skill is also an important consideration, and it may be tricky if you decide to host the skill yourself outside of AWS Lambda. In that case, your server will need to comply with Amazon’s security requirements. As an example, any certificates for your skill service need to be issued by an Amazon-approved certificate authority.

The good news is that if you host your skill services as Amazon Web Services Lambda functions as we have done in the Developing Alexa Skills blog series, all major security requirements are automatically satisfied.

[Read More]

April 15, 2016

David Isbitski

By Josh Skeen, software developer at Big Nerd Ranch

This is part four of the Big Nerd Ranch series. Click here for part three.

By now, we’ve made a lot of progress in building our Airport Info skill. We tested the model and verified that the skill service behaves as expected. Then we tested the skill in the simulator and on an Alexa-enabled device. In this post, we’ll implement persistence in a new skill so that users will be able to access information saved from their previous interactions.

We'll go over how to write Alexa skill data to persistent storage, which is useful when a skill session times out or when the interaction cycle completes and the data would otherwise be lost. You can see this at work in skills like the 7-Minute Workout skill, which lets users keep track of and resume existing workouts, or in The Wayne Investigation, where users can resume a previous game.
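The pattern boils down to serializing skill state keyed by the Alexa userId. As a minimal sketch of the save/load shape (a real skill would back this with a database such as DynamoDB; an in-memory Map stands in here, and the function names are illustrative):

```javascript
// In-memory stand-in for a persistent store keyed by Alexa userId.
const store = new Map();

function saveState(userId, state) {
  // Serialize so stored data survives round-trips unchanged.
  store.set(userId, JSON.stringify(state));
}

function loadState(userId) {
  const raw = store.get(userId);
  return raw ? JSON.parse(raw) : null;
}
```

With this in place, a skill's intent handler can call `loadState` at session start to offer a resume, and `saveState` before the session ends.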

[Read More]

April 11, 2016

David Isbitski

The Smart Home Skill API is a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. With this new API you can teach Alexa how to control your own cloud-controlled lighting and thermostat devices. For example, customers can simply say, “Alexa, turn on the kitchen lights” or “Alexa, turn up the heat downstairs” and Alexa will communicate directly with your Smart Home device. Smart home skills are created in the same developer portal as existing custom skills and follow a similar process.

Creating a Smart Home Skill

To create your smart home skill, you’ll first configure your skill using a new Smart Home Skill API flow in the developer portal. Ensure you have selected the Smart Home Skill API skill type, enter a Name for your skill and then simply click Next.

Unlike custom skills, smart home skills already have an existing interaction model for you. This means you won’t have to define the intent schema and sample utterances like you would in a custom skill. Click Next to move to the Configuration tab.

[Read More]

April 05, 2016

David Isbitski

Today we are introducing the Smart Home Skill API, a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. Developers can now teach Alexa how to control their cloud-controlled lighting and thermostat devices so customers can simply say, “Alexa, turn on the kitchen lights” or “Alexa, turn up the heat.” You no longer need to build a voice interaction model to handle customer requests. This work is now done for you when you use the Smart Home Skill API. You create skills that connect your devices directly to our lighting and thermostat capabilities so that customers can control their lights, switches, smart plugs or thermostats—without lifting a finger.

We first introduced the Smart Home Skill API as a beta called the Alexa Lighting API in August 2015. As part of the beta program, we worked with companies including Nest, Ecobee, Sensi, Samsung SmartThings, and Wink in order to gather developer feedback, while extending Alexa’s smart home capabilities to work with their devices.

It’s easy and free for developers to use the Smart Home Skill API to connect Alexa to hubs and devices for both public and personal use. Get Started Now >

Creating Your Skill with the Smart Home Skill API

When you create a custom skill, you build the voice interaction model. When using the Smart Home Skill API, you tap into Amazon’s standardized language model, so you skip the step of creating an interaction model. Alexa understands the user’s speech, converts it to a device directive and sends that directive to the skill adapter that you build in AWS Lambda.

[Read More]

April 01, 2016

David Isbitski

By Josh Skeen, software developer at Big Nerd Ranch

This is part three of the Big Nerd Ranch series. Click here for part one and part two.

Now that we have tested the model for our Airport Info Alexa Skill and verified that the skill service behaves as expected, it's time to move from the local development environment to staging, where we’ll be able to test the skill in the simulator and on an Alexa-enabled device.

What's Next to Deploy an Alexa Skill

To deploy our Alexa skill to the staging environment, we first need to register the skill with the skill interface, then configure the skill interface's interaction model. We'll also need to configure an AWS Lambda instance that will run the skill service we developed locally.

The Alexa skill interface is what’s responsible for resolving utterances (words a user spoke) to intents (events our skill service receives) so that Alexa can correctly respond to what a user has asked. For example, when we ask our Airport Info skill to give status information for the airport code of Atlanta, Georgia (ATL), the skill interface determines that the AirportInfo intent matches the words that were spoken aloud, and that ATL is the airport code a user would like information about.
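That mapping is declared in the skill interface's interaction model: an intent schema plus sample utterances. A sketch of what Airport Info's might look like follows; the custom slot type name is illustrative:

```json
{
  "intents": [
    {
      "intent": "AirportInfo",
      "slots": [
        { "name": "AIRPORTCODE", "type": "FAACODES" }
      ]
    }
  ]
}
```

Each line of the sample utterances file then pairs the intent name with a phrasing, e.g. `AirportInfo flight status for {AIRPORTCODE}`, so the skill interface can resolve "flight status for ATL" to the AirportInfo intent with AIRPORTCODE set to ATL.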

Here's the journey from a user's spoken words to Alexa's response: the user speaks to an Alexa-enabled device, the skill interface resolves the utterance to an intent, the skill service handles that intent, and Alexa speaks the resulting response back to the user.

In our post on implementing Alexa intents, we simulated the skill interface with alexa-app-server so that we could test our skill locally. We sent a mock event to the skill service from alexa-app-server by selecting IntentRequest with an intent value of airportInfo and an AIRPORTCODE of ATL in the Alexa Tester interface.
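The mock event alexa-app-server sends is an ordinary Alexa IntentRequest. Abbreviated to the fields that matter here, it looks roughly like this:

```json
{
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "airportInfo",
      "slots": {
        "AIRPORTCODE": { "name": "AIRPORTCODE", "value": "ATL" }
      }
    }
  }
}
```

Because the deployed skill interface emits the same structure, a skill service tested against this mock needs no changes when it moves to staging.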

By comparison, in a deployed skill, the skill interface lives on Amazon's servers and works with users’ utterances that are sent from Alexa to the skill service.

[Read More]
