Alexa Blogs

October 14, 2016

Thom Kephart

We participate in a number of events across the globe throughout the year – and we’d love to see you at the next one.

To stay tuned to the latest events near you, check out our new events page. There you’ll be able to find information about hackathons where you can get hands-on education and build Alexa skills, conferences and presentations where you can join the conversation and meet Alexa team members, as well as community-run meetups where you can connect with fellow developers.

Bookmark the events page today, register for one near you, and we’ll see you there.

 

October 13, 2016

David Isbitski

The beta is now closed. Sign up to be notified when the List Skill API is publicly available.

Today we announced a limited participation beta for the List Skill API, a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. Developers can now teach Alexa how to interface with their list applications so that customers can simply say, “Alexa, add bananas to my Shopping List” or “Alexa, add ‘Go for a Jog’ to my To-do list.” The List Skill API taps into Amazon’s standardized language model so you don’t have to build a voice interaction model to handle customer requests. You create skills that connect your applications directly to Alexa’s Shopping and To-do list capabilities so that customers can add or review items on their lists—without lifting a finger.

How it works

The List Skill API has a bi-directional interface that ensures lists are updated across all channels. That means the API notifies developers when a customer tells Alexa to add something to their list or makes a change to an existing item. Alexa understands the user’s speech request, converts it to a To-do or Shopping item, and sends you a notification with the new item that was added to the list. The List Skill API also updates the lists for Alexa when users make changes to their lists online or in your mobile application.

Customers are increasingly using voice interfaces as a hands-free way to manage their lives. By using Alexa’s built-in Shopping and To-do lists to keep track of items to buy and things to do, customers on millions of Alexa-enabled devices only have to "ask" and it's at their command, often becoming a daily habit. By integrating with the List Skill API, you will make it easier for your existing customers to keep track of their important tasks and shopping items in the home, and introduce your brand to a new group of Alexa customers.

Here's what developers are saying

Today we announced that Any.do and Todoist created the first skills using the List Skill API. 

“We’ve been huge fans of Alexa for a long time. Once the opportunity to work with Alexa in a deep way presented itself, we were extremely happy to push it forward,” says Omer Perchik, the Founder and CEO of Any.do. “The work with the new Alexa List Skill API was simple and straightforward, and our experience as a beta participant was smooth due to the support from Amazon.”

“At Todoist, we're very excited about the potential of AI and AI-powered services. Amazon’s Alexa is one of the earliest and best examples of making this technology useful in people's everyday lives,” says Doist founder and CEO Amir Salihefendic. “That's why we're thrilled to have collaborated with the Amazon team as part of their limited participation beta for the Alexa List Skill API. We’re sure our customers will find Alexa extremely helpful in staying organized and productive, and we're looking forward to working with Amazon to make the Todoist skill even more useful as Alexa continues to evolve and get smarter.”

Get started now

Going forward, we’re excited to open the List Skill API to more developers as part of our limited participation beta.

For more information about getting started with the Alexa Skills Kit and to apply to participate in the List Skill API beta, check out the following additional assets:

About the List Skill API
Alexa Dev Chat Podcast
Alexa Training with Big Nerd Ranch
Alexa Skills Kit (ASK)
Alexa Developer Forums

-Dave (@TheDaveDev)

October 12, 2016

Zoey Collier

Brian Donohue, New Jersey-born software engineer and former CEO of Instapaper, wasn't an immediate Alexa fan. In fact, his first reaction to the 2014 announcement of the Amazon Echo was "That's cool, but why would I buy one?"

All that changed over the course of one whirlwind weekend in March 2016. Almost overnight, Brian went from near-indifference to being one of the most active developers in the Alexa community. Today he’s recognized as an Alexa Champion and a master organizer of Alexa meetups.

We sat down with Brian to find out how Alexa changed his entire view of voice technology... and why he wanted to share his excitement with other Alexa developers.

An overnight Alexa convert

Brian has led Instapaper for the last two and a half years. Its former owner, Betaworks, always encouraged employees—including Brian—to check out and innovate with new technology. Brian has built apps for Google Glass and other devices, just because the company had them lying around the office.

When the company bought an Echo device in March, Brian had to take another look. He took it home one Friday night and decided to try building a skill using the Alexa Skills Kit (ASK). He selected something simple, inspirational and personal to him. The skill—which later became Nonsmoker—keeps track of when you stopped smoking and tells you how long it's been since your final cigarette.

The first version took Brian half a day to create. It was full of hardcoded values, but it was empowering. Then, in playing with this and other Alexa skills, Brian recognized something exciting. A fundamental technology shift was staring right at him. When he returned the Echo to the office on Monday, he was hooked.

“Interacting with Alexa around my apartment showed me the real value proposition of voice technology,” says Brian. “I realized it’s magical. I think it’s the first time in my life that I’d interacted with technology without using my hands.”

Bringing NYC Alexa users together

Brian wanted immediate and more active involvement in Alexa development. The following day he was searching meetup.com for Alexa user gatherings in New York City. He found none, so Brian did what always came naturally. He did it himself.

His goal was to find 20 or so interested people before going to the effort of creating a meetup. The demand was far greater than he expected. By the third week of March, he was hosting 70 people at the first-ever NYC Amazon Alexa Meetup, right in the Betaworks conference room.

After a short presentation about Echo, Tap and Dot, Brian did the rest of the program solo. He created a step-by-step tutorial with slides, a presentation and code snippets, all to explain how to create a simple Alexa skill. He walked attendees through the program, then let them test and demo their skills on his own Echo, in front of the class.

“A lot of them weren’t developers, but they could cut and paste code,” says Brian. “About half completed the skill, and some even customized the output a bit.” Brian helped one add a random number generator, so her skill could simulate rolling a pair of dice.

[Read More]

October 11, 2016

Zoey Collier

In 2012, a “Down Under” team from Melbourne, Australia recognized LED lighting had finally reached a tipping point. LED technology was the most efficient way to create light, and affordable enough to pique consumers’ interest in bringing colored lighting to the home. And LIFX was born.

John Cameron, vice president, says LIFX launched as a successful Kickstarter campaign. From its crowd-funded beginnings, it has grown into a leading producer and seller of smart LED light bulbs. With headquarters in Melbourne and Silicon Valley, its bulbs brighten households in 80 countries around the globe.

Cameron says LIFX makes the world’s brightest, most efficient and versatile Wi-Fi LED light bulbs. The bulbs fit standard light sockets, are dimmable and can emit 1,000 shades of white light. The color model adds 16 million colors to accommodate a customer’s every mood.

From smartphone apps to brilliant voice control

Until 2015, LIFX customers controlled their smart bulbs using smartphone apps. Customers could turn them on or off by name, dim or brighten them, and select the color of light. They could also group the devices to control an entire room of lights at once. Advanced features let customers create schedules, custom color themes, even romantic flickering candle effects.

Without the phone, though, customers had no control.

Like Amazon, the LIFX team knew the future of customer interfaces lay in voice control. “We’re always looking for ways to let customers control [their lights] without hauling out their phone,” said Cameron. “When Alexa came along, it took everybody by storm.”

“That drove us to join Amazon's beta program for the Alexa Skills Kit (ASK),” says Daniel Hall, LIFX’s lead cloud engineer. Hall says the ASK documentation and APIs were easy to understand, making it possible for them to implement the first version of the LIFX skill in just two weeks. By the end of March 2015, LIFX had certified the skill and was ready to publish. The skill let customers control their lights just by saying “Alexa, tell ‘Life-ex’ to…”

Since the LIFX skill launch, ASK has added custom slots, a simpler and more accurate way of conveying customer-defined names for bulbs and groups of bulbs. Hall says custom slots are something LIFX would be interested in implementing in the future.
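For illustration, a custom slot type pairs a developer-defined list of sample values (entered in the developer portal) with a slot in a skill's intent schema, roughly along these lines (the intent and type names here are hypothetical, not LIFX's actual schema):

{
  "intents": [
    {
      "intent": "SetLightStateIntent",
      "slots": [
        { "name": "LightName", "type": "LIGHT_NAME" },
        { "name": "LightState", "type": "LIGHT_STATE" }
      ]
    }
  ]
}

The values a customer might say for LIGHT_NAME ("desk lamp," "kitchen strip," and so on) are supplied as the custom slot type's value list.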

[Read More]

October 07, 2016

Dean Bryen

If you’ve already created your first Alexa Skill, you may be using local environments, the AWS CLI, and other DevOps processes. This blog post is for advanced developers who want to level up skill creation by adding some automation, version control, and repeatability to skill deployments.

In this post we're going to programmatically create our skill backend using AWS CloudFormation. CloudFormation is an AWS service that enables you to describe your AWS resources as a JSON file; these JSON files can later be ‘executed’ to stand up and tear down your AWS environments. This gives us a number of benefits, including version control and repeatability. You can read more about AWS CloudFormation in general over in the AWS developer docs here. To put this into context, when looking at the Alexa Skills Kit architecture below, the resources in the red box are what we will be creating within our CloudFormation template.

 

The Template

The CloudFormation template is a JSON object that describes our infrastructure. This will consist of three components.

Parameters - Where we define the input parameters we want to inject into our template, such as ‘function-name’.

Resources - The AWS resources that make up our skill backend, such as the Lambda function.

Outputs - Any information that we would like to retrieve from the resources created in our CloudFormation stack, such as the Lambda function ARN.

The template that we will create in this tutorial can be used as a starting point to create the backend for any of your Alexa skills.
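As a rough sketch of how those three sections fit together (the resource names, runtime, and inline code below are placeholders, not the tutorial's finished template):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "FunctionName": {
      "Type": "String",
      "Description": "Name for the Lambda function backing the skill"
    },
    "ExecutionRoleArn": {
      "Type": "String",
      "Description": "ARN of the IAM role the function runs as"
    }
  },
  "Resources": {
    "SkillFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": { "Ref": "FunctionName" },
        "Handler": "index.handler",
        "Runtime": "nodejs4.3",
        "Role": { "Ref": "ExecutionRoleArn" },
        "Code": {
          "ZipFile": "exports.handler = function (event, context) { context.succeed(); };"
        }
      }
    }
  },
  "Outputs": {
    "SkillFunctionArn": {
      "Description": "ARN of the Lambda function backing the skill",
      "Value": { "Fn::GetAtt": ["SkillFunction", "Arn"] }
    }
  }
}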

[Read More]

October 06, 2016

Ted Karczewski

What makes the Amazon Echo so appealing is the fact that customers can control smart home devices, access news and weather reports, stream music, and even hear a few jokes just by asking Alexa. It’s simple and intuitive.

We’re excited to announce an important Alexa Voice Service (AVS) API update that now enables you to build voice-activated products that respond to the “Alexa” wake word. The update includes new hands-free speech recognition capabilities and a “cloud endpointing” feature that automatically detects end-of-user speech in the cloud. Best of all, these capabilities are available through the existing v20160207 API—no upgrades needed.

You can learn more about various use cases in our designing for AVS documentation.

Get Started with Our New Raspberry Pi Project

To help you get started quickly, we are releasing a new hands-free Raspberry Pi prototyping project with third-party wake word engines from Sensory and KITT.AI. Build your own wake word enabled, Amazon Alexa prototype in under an hour by visiting the Alexa GitHub.

And don’t forget to share your finished projects on Twitter using #avsDevs. AVS Evangelist Amit Jotwani and team will be highlighting our favorite projects, as well as publishing featured developer interviews, on the Alexa Blog. You can find Amit on Twitter here: @amit.

Learn more about the Alexa Voice Service, its features, and design use cases. See below for more information on Alexa and the growing family of Alexa-enabled products and services:

Alexa Developer Resources
Alexa Voice Service (AVS)
Alexa Skills Kit (ASK)
The Alexa Fund
AVS Developer Forums
Alexa on a Raspberry Pi

Alexa-Enabled Devices
Triby
CoWatch
Pebble Core
Nucleus

Amazon Alexa Devices
Amazon Echo
Amazon Echo Dot
Amazon Tap
Amazon Fire TV
Amazon Fire TV Stick

Have questions? We are here to help. Visit us on the AVS Forum or Alexa GitHub to speak with one of our experts.

AVS is coming soon to the UK and Germany. Read the full announcement here.

 

October 05, 2016

Liz Myers

Now that Alexa is multi-lingual, it’s a new day in Alexa skill making. Not only can you publish to customers around the globe, you can do so from a single code base.

In this article, we’ll review two concepts: 1) separating content from logic and 2) using the locale attribute to serve the right content to the right users.

Getting Organized

As an example, I’ve made a new skill: Classical Guitar Facts (using this template), which has content in both English and German. Although one might assume that I could get away with US English in the UK, differences in spelling and word choice will show up in the cards within the Alexa app, and this is not the best user experience. So, we’ll create content files in three separate folders, one per language, as shown below.
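That layout ends up looking like this (the folder names match the locale codes used later in the post):

content/
    en-US/us-facts.js
    en-GB/gb-facts.js
    de-DE/de-facts.js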

Create the Content Files

Moving the content out of the index.js file means that I’ve copied the FACTS array into a separate file and saved the file as de-facts.js, gb-facts.js, and us-facts.js respectively. Remember that the last item in the FACTS array does not have a comma after it. Also remember the last line of this file, module.exports = FACTS; otherwise the calling file (index.js) won’t be able to find it.

var FACTS = [
    "The strings of guitars are often called gut strings because…",
    "…",
    "…"
];
module.exports = FACTS;

Calling External Content

At the top of the index.js file, we need to declare the FACTS variable:

var FACTS = [ ];

so that we can call it later like this:

FACTS = require('./content/en-US/us-facts.js');

Of course, we can substitute en-US/us-facts.js with en-GB/gb-facts.js and de-DE/de-facts.js when needed. Now we’re well organized to swap separate content files based on language – but how do we know which language is calling our service?
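A minimal sketch of that check, assuming the standard locale property on the incoming Alexa request (the handler wiring here is illustrative; the content paths are the ones shown above):

var FACTS = [];

exports.handler = function (event, context) {
    // Alexa requests carry a locale such as "en-US", "en-GB", or "de-DE".
    var locale = event.request.locale;

    if (locale === 'de-DE') {
        FACTS = require('./content/de-DE/de-facts.js');
    } else if (locale === 'en-GB') {
        FACTS = require('./content/en-GB/gb-facts.js');
    } else {
        // Fall back to US English for any other locale.
        FACTS = require('./content/en-US/us-facts.js');
    }

    // ...hand the request off to the skill logic as usual...
};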

[Read More]

October 04, 2016

Jen Gilbert

Today’s guest blog post is from Monica Houston, who leads the Hackster Live program at Hackster. Hackster is dedicated to advancing the art of voice user experience through education.

Even though it’s a sunny Saturday morning, men, women, and perhaps a few teens filter into a big room, laptops in hand, ready to build Alexa skills. They’re here to change the future of voice user experience.

Hackster, the community for open source hardware, has run 12 events with Amazon Alexa this year and 13 more are in the planning stages. All 25 events are organized by Hackster Ambassadors, a group of women and men hand-picked from Hackster’s community for their leadership skills, friendliness, and talent for creating projects.

Hackster Ambassadors pour their time and energy into helping to evangelize Alexa. Ambassador Dan Nagle of Huntsville, Alabama, created a website where you can find Hackster + Alexa events by city. Ambassador Paul Langdon set up a helpful GitHub page where you can see skills that were published at the event he ran in Hartford. He also volunteered his time and knowledge to run a series of “office hours” to help people develop their skills.

While Hackster provides venues and catering for these events and Hackster Ambassadors spread the word to their communities, Amazon sends a Solution Architect to teach participants how to build skills for Alexa and answer questions.

Amazon Solutions Architects go above and beyond to help people submit their skills for certification. Not only do they answer questions on Hackster’s developer slack channel, they also have hosted virtual “office hours,” run webinars, and conducted two “slackathons” with Hackster’s community.

Although the 25 Alexa events are being held in US cities, Hackster Live is a global program with 30 international Ambassadors. Hackster shipped Amazon Echos to our Ambassadors in South America, Asia, Africa, and Europe. Virtual events like slackathons and webinars run by Solutions Architects make it possible for people from around the world to learn skill building and add to the conversation.

[Read More]

October 03, 2016

David Isbitski

Today we are introducing the Flash Briefing Skill API, a new addition to the Alexa Skills Kit, which enables developers to add feeds to Flash Briefing on Alexa, delivering pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.

The Flash Briefing Skill API is free to use. Get Started Now >

Creating Your Skill with the Flash Briefing Skill API

To get started, you’ll configure a JSON or RSS feed and submit descriptive information about your skill in the portal. This can be done through the following steps:

1.  Register for a free Amazon Developer Account if you have not done so already, then navigate to the Alexa Skills Kit box in the Alexa menu here.

2.  Click on Add a New Skill

3.  Select Flash Briefing Skill API, fill out a name and then click Next.

4.  Unlike custom skills, the interaction model for Flash Briefing skills will be generated for you automatically; simply hit Next.

5.  Now we will need to define our Content Feed(s). Your Flash Briefing Skill can include one or more defined feeds.



Then, click on the Add new feed button.

6.  You will then enter information about your content feed, including its name, how often the feed will be updated, the content type (audio or text), the genre, an icon, and the URL where you are hosting the feed.

7.  Repeat these steps for each feed you wish to include in the skill. The first feed you add will automatically be marked as the default feed. If you add more feeds, you can choose which feed is the default by selecting it in the Default column.

8.  Click Next when you are finished adding feeds and are ready to test your skill.

For additional information, check out the Steps to Create a Flash Briefing Skill page here.
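To give a sense of what the hosted feed contains, a single text-feed item looks roughly like this (field names per the Flash Briefing feed documentation; the values shown are placeholders):

{
  "uid": "urn:uuid:00000000-0000-0000-0000-000000000000",
  "updateDate": "2016-10-03T09:00:00.0Z",
  "titleText": "Example briefing title",
  "mainText": "The text Alexa reads aloud for a text feed item.",
  "redirectionUrl": "https://example.com/full-story"
}

Audio feeds reference a hosted MP3 via a stream URL instead of supplying text to be read aloud.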

[Read More]

October 03, 2016

Zoey Collier

In the latest headlines from KIRO7:

[stirring theme music begins] Hello from KIRO7 in Seattle. I’m Michelle Millman…

And I’m John Knicely. Here are the top stories we’re following on this Friday.

A car erupted in flames around 5:30 this morning on northbound I-5. This was just south of downtown and caused a major traffic backup, but you can get around it by…

This might sound like a local daybreak newscast blaring from the TV in the kitchen or the bedroom, as you rush around trying to get ready for work – but it isn’t.

It’s actually an Alexa Flash Briefing skill. Flash Briefing streams today’s top news stories to your Alexa-enabled device on demand. To hear the most current news stories from whatever sources you choose, just say “Alexa, play my flash briefing” or “Alexa, what’s the news?”

The particular Flash Briefing skill in question, though, is rather distinctive. With all its realism and personality, you might be fooled into thinking it’s an actual news desk, complete with bantering anchors, a perky weather forecast, and the day’s top local headlines.

That’s because it is—and that’s what sets KIRO7 apart from the rest.

How Flash Briefing works

Using the Alexa app, you can select different skills for your Flash Briefing from a number of different news sources. These include big-name outlets like NPR, CNN, NBC, Bloomberg, The Wall Street Journal, and more. These all give you snapshots of global news. Now more and more local stations are creating their own Flash Briefing skills for Alexa.

The Flash Briefing Skill API, a new addition to the Alexa Skills Kit, enables developers to add feeds to Alexa’s Flash Briefing, which delivers pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.

Setting a higher bar for listener engagement

If you’ve activated Flash Briefing before, you know that several content providers leverage Alexa to read text in her normal voice. That’s because most skills in Flash Briefing repurpose content that is already available in an RSS-style feed. They plug the same text into the feed for Alexa to ingest.

Jake Milstein, news director for KIRO7, said KIRO7 was one of the first local news channels to create a Flash Briefing. While Alexa has a wonderful reading voice, the KIRO7 team wanted to do something a bit more personal for its listeners. Working with the Alexa team, they discovered they could upload MP3 files as an alternative to text. Instead of reading from canned text files, Alexa would play the audio files.

Milstein said using real people’s voices was an obvious choice, because “We have such great personalities here at KIRO7.” The station tested various formats, but eventually settled on using two of its morning news anchors. Christine Borrmann, KIRO7 Producer, says, “We tinkered with the format until Michelle and John just started talking about the news in a very conversational way. Then we added a little music in the background. It felt right.”

KIRO7 started out with a single daily feed but now has three. The morning anchors, Michelle Millman and John Knicely, record the first ’cast around 4 a.m. and the second shortly after their live broadcast at 8 a.m. Other news anchors record the third feed in late afternoon, so it captures the evening news topics. Each ’cast is roughly two minutes long and ends by encouraging listeners to consume more KIRO7 content through the app on Amazon Fire TV.

Alexa, the news never sounded so good

The whole KIRO7 team is proud to be the first local news station to produce a studio-quality audio experience in a Flash Briefing; the KIRO7 skill launched alongside several established networks with national scale.

Early feedback on Facebook showed KIRO7 listeners loved the skill and wanted even more. Now that Flash Briefings are skills, though, the KIRO7 team can start collecting its own reviews and star-ratings.

Milstein says it is important that KIRO7 stay at the forefront of delivering Seattle-area news the way people want to get their news. “Having our content broadcast on Alexa-enabled devices and available on Amazon Fire TV is something we're really proud of. For sure, as Amazon develops more exciting ways to deliver the news, we'll be there.”

 


Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

September 30, 2016

Michael Palermo

Today we are happy to announce support for scenes, a new feature in Alexa skills developed with the Smart Home Skill API. With scenes, customers can issue a single voice command to an Alexa-enabled device such as the Amazon Echo or Echo Dot to set a predefined configuration of one or more devices to a desired state. For example, a customer could say, “Alexa, turn on bedtime,” resulting in specific lights turning off, a bedroom light changing color to a low-lit orange hue, a ceiling fan turning on, and the temperature changing to an ideal setting for sleep.

At first glance, scenes might appear similar to the groups feature found in the Smart Home section of the Alexa app, as both allow control over multiple devices with one voice command. However, scenes differ from groups in the following ways:

  • Scenes allow each device configured within it to be set to a desired state, whereas groups are stateless and simply turn devices on or off.
  • Scenes are configured by customers through a device manufacturer’s app, whereas groups are configured in the Alexa app.
  • Scenes only contain devices managed by the device manufacturer’s app, whereas groups can contain any device discovered in the Alexa app.

With scenes, customers have an alternative to groups for controlling multiple devices. Customers may already have scenes configured in device manufacturer apps such as those provided by Control4, Crestron, Insteon, Lutron Caseta, SmartThings, or Wink. Prior to today, these scenes were invoked by using the device manufacturer’s app. Now customers can find these scenes listed as devices in their Alexa app after requesting device discovery, and can control them via voice interaction.

How Scenes Work

Figure 1: Scene control process


Once a customer has configured a scene through the device manufacturer’s app and asks Alexa to discover devices, the scene name will appear in the device list in the Alexa app. Consider what happens from a developer perspective when a voice command is made to turn a scene on.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on bedtime.”
  2. Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed that includes the ‘TurnOnRequest’ name in the directive header and, in the directive payload, the appliance ID corresponding to the friendly name of the scene “bedtime.”
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token to determine the customer’s account making the request. A call is made to device cloud API to turn on the scene matching the appliance ID for the associated customer.
  5. The device cloud (likely owned by the device maker) receives a request from the skill adapter, and communicates to a device hub or controller to turn on the scene preconfigured by the customer.
  6. The device hub sets the desired state of each device configured by the customer. Note in this “bedtime” example, turning on a scene may result in turning off a light, since this could be the desired state of that device for the scene.
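For illustration, the directive the skill adapter receives in steps 3 and 4 looks roughly like this (the values are placeholders; consult the Smart Home Skill API reference for the exact schema):

{
  "header": {
    "namespace": "Alexa.ConnectedHome.Control",
    "name": "TurnOnRequest",
    "payloadVersion": "2",
    "messageId": "a-unique-message-id"
  },
  "payload": {
    "accessToken": "customer-access-token",
    "appliance": {
      "applianceId": "appliance-id-for-the-bedtime-scene"
    }
  }
}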
[Read More]

September 29, 2016

Ashwin Ram

Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI. Human conversation requires the ability to understand the meaning of spoken language, relate that meaning to the context of the conversation, create a shared understanding and world view between the parties, model discourse and plan conversational moves, maintain semantic and logical coherence across turns, and to generate natural speech.

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate in the Alexa Prize (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to converse coherently and engagingly with humans on popular topics, sustaining a fun, high-quality conversation for 20 minutes.

Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Alexa users will experience truly novel, engaging conversational interactions.

Up to ten teams of students will be selected to receive a $100,000 research grant as a stipend, Alexa-enabled devices, free AWS services to support their development efforts, and support from the Alexa Skills Kit (ASK) team. Additional teams not eligible for funding may be invited to participate. University teams can submit their applications between September 29 and October 28, 2016, here. The competition will officially start on November 14, 2016 and run until November 2017, concluding with an award ceremony to be held at AWS re:Invent in Las Vegas, NV.

As we say at Amazon, this is Day 1 for conversational AI. We are excited to see where you will go next, and to be your partners in this journey. Good luck to all of the teams.

Learn more about Alexa Prize.

September 28, 2016

Michael Palermo

Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn how to respond to control directives in code to turn devices on or off, set temperature, and set percentages.

When you build a skill with the Smart Home Skill API, the ultimate goal is to control a device. That control can include turning a device on or off, setting a temperature, or setting a percentage, such as when you’re dimming a light bulb. This post will cover the general process of device control and teach the fundamentals by demonstrating control of the ‘on’ or ‘off’ state in code using Node.js.

This technical walkthrough is a continuation of a series of smart home skill posts focused on development. Please read and follow the instructions found below to reach parity.

How Device Control Works

Figure 1: Device control process


Once a customer has properly installed, configured, and discovered all smart home devices, verbal control commands can be issued to an Alexa-enabled device, such as the Amazon Echo. Consider what happens from a developer perspective when a control command is made, such as turning on a light.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on desk light.”
  2. Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed and contains, among other things, the ‘TurnOnRequest’ name in the directive header and the appliance ID matching the friendly name “desk light” in the payload.
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token to determine the customer’s account making the request. A call is made to device cloud API to turn on the device matching the appliance ID for the associated customer.
  5. The device cloud (likely owned by the device maker) receives a request from the skill adapter, and communicates in its own fashion to the device identified by appliance ID to turn on.
  6. The device (in this example, a desk light) turns on.
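A minimal sketch of the skill adapter for step 4, assuming the header and payload names shown above and a hypothetical device-cloud client module (deviceCloud is a placeholder for your own device cloud API, not part of ASK):

// Hypothetical skill adapter sketch (Node.js on AWS Lambda).
var deviceCloud = require('./device-cloud'); // placeholder for your device cloud client

exports.handler = function (event, context, callback) {
    var header = event.header;

    if (header.namespace === 'Alexa.ConnectedHome.Control' &&
        header.name === 'TurnOnRequest') {

        var accessToken = event.payload.accessToken;
        var applianceId = event.payload.appliance.applianceId;

        // Ask the device cloud to turn on the matching device for this customer.
        deviceCloud.turnOn(accessToken, applianceId, function (err) {
            if (err) {
                return callback(err);
            }
            callback(null, {
                header: {
                    namespace: 'Alexa.ConnectedHome.Control',
                    name: 'TurnOnConfirmation',
                    payloadVersion: '2',
                    messageId: 'a-new-unique-message-id' // generate a fresh ID in practice
                },
                payload: {}
            });
        });
    }
};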
[Read More]

September 28, 2016

Zoey Collier

Need a ride? Lyft is an on-demand transportation platform that lets you book a ride in minutes. It’s as easy as opening the Lyft app and tapping a button; a driver arrives to get you where you need to go. Now, they’ve made it even easier. Simply say, “Alexa, ask Lyft to get me a ride to work.”

A culture of hackathons and rapid innovations

Roy Williams, the Lyft engineer who built the Alexa skill, said it started with a company hackathon.

Lyft has a long-standing culture of hackathons. Each quarter, the San Francisco company invites employees to experiment with new ideas. The story goes that Lyft itself was born at such a hackathon, with someone’s idea for an “instant” ride service.

“It took about three weeks to go from the original prototype to a finished app,” Williams said. Lyft has been going strong ever since.

Alexa: Yet another innovation for Lyft

That wasn’t the last innovation to spring from a Lyft hackathon.

Williams said he purchased an Amazon Echo during the 2015 Black Friday sale. He immediately knew he wanted to create an Alexa skill to let Echo users order a “lyft.” Williams dove into the Alexa Skills Kit (ASK) documentation, and he started building his prototype at the January hackathon. It was a hit.

Beyond the prototype, Williams estimates the project took three weeks of solid engineering time. The team spent one week working on the core functionality, including adding some workflow to their own API. They spent another week working through edge cases and complex decision trees, so the skill would never leave a user confused or at a dead end. Finally, they spent another week on testing and analytics before releasing it for an internal beta with 30 users.

Williams says ASK is very comprehensive, and because it is JSON-based, it makes testing easy. He admits having to add some edge testing to account for cases like asking Lyft for “a banana to work.” (Bananas are a favorite test fruit during certification.) In the end, he knew Lyft had a high-quality skill with near-one hundred percent test coverage.

Amazon published the final Lyft skill in July.

Why Alexa?

Megan Robershotte is a member of Lyft’s partner marketing team. She explained the Alexa skill fit well with the company’s primary goal: to get people to take their first ride with Lyft.

[Read More]

September 27, 2016

Nathan Grice

In this post, Nathan Grice, Alexa Smart Home Solutions Architect, shows you how to reduce skill development time by debugging your skill code in a local environment. Learn how to step through your code line by line while preserving the roles and AWS services, like DynamoDB, that the skill uses when running in AWS Lambda. Share your thoughts and feedback in this forum thread.

Amazon Alexa and the Alexa Skills Kit (ASK) are enabling developers to create voice-first interactions for applications and services. In this article, we will cover how to set up a local development environment using the Amazon Web Services (AWS) SDK for Node.js.

By following this tutorial, you’ll be able to invoke your AWS Lambda code as if it were called by the Alexa service. This will also allow you to interact with any other AWS services you may have added to your skill logic, such as Amazon DynamoDB. By the end of this post, you will be able to execute and debug all of your Alexa skill’s Lambda code from your local development environment.

Using the aws-sdk, you should also be able to call any dependent services in AWS as if the skill code were executing in AWS Lambda by leveraging AWS roles. This way, you can be sure your code is working before deploying into AWS and hopefully decrease the cycle time for applying new changes. For example, suppose you want to persist something about users in a DynamoDB table; previously, the only way to do this was to run your code in Lambda. After this tutorial, you should be able to write to the remote DynamoDB table from your local environment.

First, let’s take a look at why you would want to streamline this process. The first time I developed a skill, I was not using an integrated development environment and almost all debugging information was obtained through log statements. This presents quite a few challenges from a developer’s point of view.

  1. Extra cycle time for adding functionality and logging to analyze the state of the program at any given moment.
  2. Uploading the new code to AWS Lambda is a manual process.
  3. Testing the code was cumbersome, whether by manually constructing an event in AWS and persisting it as a test event, using the developer console, or invoking my skill on my own Echo or Alexa-enabled device.
  4. Analyzing Amazon CloudWatch logs was taking too long to effectively iterate on features.

I wanted a better way to execute and debug my code, without losing any functionality just because it runs in a local environment.

In the next section we will look at how to set up a local environment to debug your AWS Lambda code using Node.js, Microsoft's open-source Visual Studio Code editor, and the aws-sdk npm package. This tutorial will cover setting this up using Node.js, but the AWS SDK is available for Python and Java as well.
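As a rough illustration of where this ends up, a small local runner along these lines (the file names and the captured sample event are assumptions; adapt them to your own skill) lets you invoke the same handler Lambda would call:

// run-local.js - hypothetical local runner for the skill's Lambda handler.
var AWS = require('aws-sdk');

// Point the aws-sdk at the region where your skill's resources live.
AWS.config.update({ region: 'us-east-1' });

var skill = require('./index.js');                 // the same file you deploy to Lambda
var sampleEvent = require('./sample-event.json');  // a captured Alexa request

// A minimal stand-in for the Lambda context object.
var fakeContext = {
    succeed: function (response) {
        console.log('Skill response:', JSON.stringify(response, null, 2));
    },
    fail: function (error) {
        console.error('Skill error:', error);
    }
};

skill.handler(sampleEvent, fakeContext);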

Setting up your environment

Install Node.js

Install Node.js via the available installer. The installation is fast and easy; just follow the prompts. For the purposes of this tutorial, I am on OS X, so I selected v4.5.0 LTS. There are versions available for Windows and Linux as well.


Install Microsoft Visual Studio Code

Repeat the process with Microsoft's Visual Studio Code. For the purposes of this tutorial, I am using Visual Studio Code, but other editors should work as well.

[Read More]
