Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn how to respond to control directives in code to turn devices on or off, set temperature, and set percentages.
When you build a skill with the Smart Home Skill API, the ultimate goal is to control a device. That control can include turning a device on or off, setting a temperature, or setting a percentage, such as when you’re dimming a light bulb. This post will cover the general process of device control and teach the fundamentals by demonstrating control of the ‘on’ or ‘off’ state in code using Node.js.
This technical walkthrough continues our series of smart home skill development posts. If you haven't followed the earlier posts, please read and complete the instructions below first so your code is at the same starting point.
Figure 1: Device control process
Once a customer has properly installed, configured, and discovered all smart home devices, verbal control commands can be issued to an Alexa-enabled device, such as the Amazon Echo. Consider what happens from a developer perspective when a control command is made, such as turning on a light.
Let’s examine each step above in more detail.
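Before walking through the steps, here is a minimal sketch of what the skill adapter side of this process looks like in Node.js. It assumes the 2016 Smart Home Skill API (payload version 2), where control directives arrive in the `Alexa.ConnectedHome.Control` namespace; the appliance ID and the echoed message ID are simplifications for illustration.

```javascript
'use strict';

// Sketch of a skill adapter handling an on/off control directive.
function handleControl(event) {
  const header = event.header;
  if (header.namespace !== 'Alexa.ConnectedHome.Control') {
    throw new Error('Unsupported namespace: ' + header.namespace);
  }

  // Each request maps to a confirmation, e.g. TurnOnRequest -> TurnOnConfirmation.
  const confirmation = header.name.replace('Request', 'Confirmation');

  // A real adapter would call the device cloud here to switch the appliance,
  // using event.payload.appliance.applianceId and the OAuth access token.
  return {
    header: {
      namespace: 'Alexa.ConnectedHome.Control',
      name: confirmation,
      payloadVersion: '2',
      messageId: header.messageId // simplified; a real adapter generates a new ID
    },
    payload: {}
  };
}

// Example directive, shaped the way the Alexa service delivers it to AWS Lambda:
const response = handleControl({
  header: {
    namespace: 'Alexa.ConnectedHome.Control',
    name: 'TurnOnRequest',
    payloadVersion: '2',
    messageId: 'msg-001'
  },
  payload: { appliance: { applianceId: 'light-01' }, accessToken: '...' }
});
console.log(response.header.name); // TurnOnConfirmation
```

The same pattern extends to `TurnOffRequest`, `SetTargetTemperatureRequest`, and `SetPercentageRequest`: the adapter inspects the directive name, performs the device-cloud call, and answers with the matching confirmation.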
Need a ride? Lyft is an on-demand transportation platform that lets you book a ride in minutes. It’s as easy as opening up the Lyft app, tapping a button and a driver arrives to get you where you need to go. Now, they’ve made it even easier. Simply say, “Alexa, ask Lyft to get me a ride to work.”
Roy Williams, the Lyft engineer who built the Alexa skill, said it started with a company hackathon.
Lyft has a long culture of hackathons. Each quarter, the San Francisco company invites employees to experiment with new ideas. The story goes that Lyft itself was born at such a hackathon, with someone’s idea for an “instant” ride service.
“It took about three weeks to go from the original prototype to a finished app,” Williams said. Lyft has been going strong ever since.
That wasn’t the last innovation to spring from a Lyft hackathon.
Williams said he purchased an Amazon Echo during the 2015 Black Friday sale. He immediately knew he wanted to create an Alexa skill to let Echo users order a “lyft.” Williams dove into the Alexa Skills Kit (ASK) documentation, and he started building his prototype at the January hackathon. It was a hit.
Beyond the prototype, Williams estimates the project took three weeks of solid engineering time. The team spent one week working on the core functionality, including adding some workflow to their own API. It spent another week working through edge cases and complex decision trees, so the skill would never leave a user confused or at a dead-end. Finally, they spent another week on testing and analytics, before releasing it for an internal beta with 30 users.
Williams says ASK is very comprehensive, and because it is JSON-based, it makes testing easy. He admits having to add some edge testing to account for cases like asking Lyft for “a banana to work.” (Bananas are a favorite test fruit during certification.) In the end, he knew Lyft had a high-quality skill with near-one hundred percent test coverage.
Amazon published the final Lyft skill in July.
Megan Robershotte is a member of Lyft’s partner marketing team. She explained the Alexa skill fit well with the company’s primary goal: to get people to take their first ride with Lyft.
In this post, Nathan Grice, Alexa Smart Home Solutions Architect, shows you how to reduce skill development time by debugging your skill code in a local environment. Learn how to step through your code line by line while preserving the roles and AWS services, like DynamoDB, that the skill uses when running in AWS Lambda. Share your thoughts and feedback in this forum thread.
Amazon Alexa and the Alexa Skills Kit (ASK) are enabling developers to create voice-first interactions for applications and services. In this article, we will cover how to set up a local development environment using the Amazon Web Services (AWS) SDK for Node.js.
By following this tutorial, you’ll be able to invoke your AWS Lambda code as if it were called by the Alexa service. This will also allow you to interact with any other AWS services you may have added to your skill logic, such as Amazon DynamoDB. By the end of this post, you will be able to execute and debug all of your Alexa skill’s Lambda code from your local development environment.
Using the aws-sdk, you should also be able to call any dependent services in AWS as if the skill code were executing in AWS Lambda by leveraging AWS Roles. This way, you can be sure your code works before deploying to AWS, and hopefully decrease the cycle time for applying new changes. For example, suppose you want to persist something about users in a DynamoDB table; previously, the only way to do this was to run your code in Lambda. After this tutorial, you will be able to write to the remote DynamoDB table from your local environment.
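To make the DynamoDB scenario concrete, here is a minimal sketch. The table name and item attributes are hypothetical; the point is that the exact same request parameters work locally (given configured credentials) and in Lambda.

```javascript
'use strict';

// Hypothetical table and attributes for persisting per-user skill state.
const params = {
  TableName: 'AlexaUserState',
  Item: {
    userId: 'amzn1.ask.account.EXAMPLE',
    lastIntent: 'GetStatusIntent'
  }
};

// With the aws-sdk npm package installed and an AWS profile or role configured
// locally, the same call that runs inside Lambda runs on your machine:
//
//   const AWS = require('aws-sdk');
//   const doc = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });
//   doc.put(params, (err, data) => { if (err) console.error(err); });

console.log(params.TableName); // AlexaUserState
```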
First, let’s take a look at why you would want to streamline this process. The first time I developed a skill, I was not using an integrated development environment and almost all debugging information was obtained through log statements. This presents quite a few challenges from a developer’s point of view.
I wanted a better way to execute and debug my code, but not lose any of the functionality of being constrained to a local environment.
In the next section we will look at how to set up a local environment to debug your AWS Lambda code using Node.js, Microsoft’s open-source Visual Studio Code editor, and the aws-sdk npm package. This tutorial covers the setup using Node.js, but the AWS SDK is available for Python and Java as well.
Install Node.js via the available installer. The installation is fast and easy; just follow the prompts. For the purposes of this tutorial, I am on OS X, so I selected v4.5.0 LTS. There are versions available for Windows and Linux as well.
Repeat the process with Visual Studio Code. For the purposes of this tutorial, I am using Microsoft’s Visual Studio Code, but other editors should work as well.
Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn the process of device discovery and how to support it in code for your smart home skill.
Developing a smart home skill is different from building a custom skill. One of the main differences is the dependency on devices to control. The device might be a light bulb, thermostat, hub, or any other device that can be controlled via a cloud-based API. Or maybe you created an innovative IoT gadget and you want to make it discoverable by an Alexa-enabled device. In this post, you will learn how the process of device discovery works, and how you can support discovery in your custom skill adapter communicating with the Smart Home Skill API.
To meet prerequisites and set the context for the technical information in this post, start by reading the five steps before developing a smart home skill and set up your initial code to support skill adapter directive communications. This post is the next in that series and provides the foundation for the code samples that follow.
To appreciate the role of device discovery, consider how a customer is involved in the process. The following steps assume a consumer has an Alexa-enabled device, such as the Echo or Echo Dot, already set up.
Once the first step is completed, the customer can control the smart home device, typically through an app provided by the device maker, which is a graphical user interface that manages device and owner information in its own device cloud. The account created in the first step is the same account used in the second step, when the consumer enables the associated smart home skill. This explains why account linking is mandatory for skills created with the Smart Home Skill API.
But what happens in the third step, when the consumer makes a device discovery request? Does the Alexa service actually search for devices emitting some signal within the home? Does it query everything it can reach on the local Wi-Fi network? The answer to both questions is no. Although there are a couple of exceptions to enable early support of popular products such as Philips Hue and Belkin WeMo, the process described next is what is supported today and moving forward.
Figure 1: Device discovery process
When a customer requests that devices be discovered, the Alexa service identifies all the smart home skills associated with the consumer’s account and makes a discover request to each one, as seen here.
Let’s examine each step above in more detail. Notice the first step is the same as the last step we covered when considering the customer’s perspective, so this is a deeper dive into what happens next. Also observe in Figure 1 that no communications occur directly between the Amazon Echo and the smart home device.
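The discover request each skill adapter receives can be answered with a sketch like the following, assuming the 2016 Smart Home Skill API (payload version 2) and the `Alexa.ConnectedHome.Discovery` namespace. The appliance shown is hypothetical; a real adapter would query the device cloud using the customer’s access token.

```javascript
'use strict';

// Sketch of a DiscoverAppliancesRequest handler in a skill adapter.
function handleDiscovery(event) {
  return {
    header: {
      namespace: 'Alexa.ConnectedHome.Discovery',
      name: 'DiscoverAppliancesResponse',
      payloadVersion: '2',
      messageId: event.header.messageId // simplified for illustration
    },
    payload: {
      discoveredAppliances: [{
        applianceId: 'light-01',               // unique within your device cloud
        manufacturerName: 'ExampleMaker',
        modelName: 'Smart Bulb',
        version: '1.0',
        friendlyName: 'Kitchen Light',          // what the customer says aloud
        friendlyDescription: 'Dimmable smart bulb',
        isReachable: true,
        actions: ['turnOn', 'turnOff', 'setPercentage'],
        additionalApplianceDetails: {}
      }]
    }
  };
}

const discovery = handleDiscovery({
  header: {
    namespace: 'Alexa.ConnectedHome.Discovery',
    name: 'DiscoverAppliancesRequest',
    payloadVersion: '2',
    messageId: 'msg-discover-001'
  },
  payload: { accessToken: '...' }
});
console.log(discovery.payload.discoveredAppliances.length); // 1
```

The `friendlyName` is the name customers use in utterances like “Alexa, turn on the kitchen light,” so it should be short and speakable.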
Last month, we announced the launch of Nucleus, the smart home intercom that’s always getting smarter with Alexa. Designed to bring families closer together, Nucleus makes two-way video conferencing between rooms, homes, and mobile devices instantaneous. Following the successful launch of Nucleus on Amazon.com and in hundreds of Lowe’s home improvement stores throughout the US, we’re excited to announce that Alexa Fund has led a $5.6 million Series A investment round in Nucleus, with additional participation from BoxGroup, Greylock Partners, FF Angel (Founders Fund), Foxconn, and SV Angel.
“It’s incredible to receive this level of support in such a short period of time,” said Jonathan Frankel, co-founder and CEO of Nucleus. “It speaks to the importance of our shared vision: Bringing families closer together through intuitive and intelligent interfaces. Amazon has been a stand-out supporter since day one and recognizes the value Nucleus is bringing to families nationwide, and the rapid market traction we’re seeing within our growing community.”
The Alexa Fund provides up to $100 million in venture capital funding to fuel voice technology innovation. We believe experiences designed around the human voice are a more natural way for people to interface with technology. Nucleus combines ease-of-use and the Alexa Voice Service (AVS) to create an intuitive voice experience where customers can stream music, access custom Alexa skills, and more just by asking Alexa. Nucleus joins past Alexa Fund recipients Luma, Sutro, Invoxia, Musaic, Rachio, Scout Alarm, Garageio, Toymail, Dragon Innovation, MARA, Mojio, TrackR, KITT.AI, DefinedCrowd, and Ring.
Nucleus is the first touchscreen device to incorporate AVS, making it easy for customers to stream music, control smart home products such as SmartThings, Insteon and Wink, and access the library of 3,000 Alexa skills. Read more about how Nucleus and the Alexa Voice Service (AVS) worked together to bring the company’s smart video intercom system to life in this morning’s featured developer spotlight interview.
In early 2014, Jonathan Frankel started renovating a house in Philadelphia. With three kids and multiple floors, he wanted an intercom system, but was frustrated with the persistence of old technology. He found that home intercoms hadn’t changed much in the last 30 years; they were still expensive and difficult to install. What’s more, intercom systems had failed to keep up with today’s modern families who are spread across geographies and constantly on the move.
Frankel, now CEO of Nucleus, wanted to bring families closer together. He wanted to build a device that could bridge generations and let his mom video chat with his children with a simple touch. He wanted to visit with his family over dinner, even while away on business. Whenever, whoever, and wherever they may be, he wanted to talk to them—room-to-room, home-to-home, or mobile-to-home.
Now his vision has come to life. Nucleus, the first smart home intercom with video calling and the voice capabilities of Alexa, is delighting customers with easy access to music, news, weather, to-do lists, and even smart home controls.
Amazon created the Alexa Voice Service (AVS) to make it easier for developers to add voice-powered experiences to their products and services. That proved advantageous for Nucleus.
According to Isaac Levy, chief technology officer at Nucleus, hands-free interaction was part of the Nucleus vision from the beginning. They prototyped early Nucleus units with various voice recognition solutions, including open source. When they heard about the commercial availability of AVS, they knew their search was over.
“We knew right away that AVS would be a great fit, and we wanted to incorporate it into our product,” Levy said. “It’s one thing to have basic voice recognition. But being able to unlock everything Alexa can do—weather, sports, flash briefings, all those custom skills…it’s like waking up a genie in our device. AVS helped Nucleus create an even more compelling customer experience.”
Levy says AVS allowed his team to develop a more full-featured Nucleus with capabilities the company hadn’t developed on its own. For example, natural language understanding (NLU) is built into the Alexa service, providing developers with an intelligent and intuitive voice interface that’s always getting smarter. This saved Nucleus many years of development work.
Today, we’re excited to announce a new, free video course on Alexa development by A Cloud Guru, a pioneering serverless education company in the cloud space. Instructed by Ryan Kroonenburg, an Amazon Web Services (AWS) Community Hero, the “Alexa development for absolute beginners” course allows beginner developers and non-developers to learn how to build skills for Alexa, the voice service that powers Amazon Echo.
Here is what you can expect to learn in this two-hour course in 12 lessons:
“All in all, it's a great course and it’s even accessible to non-developers, mums and dads who haven’t used Alexa or Amazon Web Services before! We made this available to the general public and give them an everyday use case for AWS Lambda, Amazon DynamoDB, and S3. We can’t wait to see what people build for Alexa.” – Ryan Kroonenburg, instructor and founder of A Cloud Guru.
Watch the course for free today.
A Cloud Guru also offers an extended version of the course. Cloud Solution Engineer Nick Triantafillou will teach you how to build your own Alexa device with a Raspberry Pi, a MicroSD card, a speaker, a USB microphone, and the Alexa Voice Service. Learn how to make Alexa rap like Eminem, read Shakespeare, use iambic pentameter and rhyming couplets, and more. This five-hour video course in 47 lessons also covers additional skill templates available on GitHub to customize and build new capabilities for Alexa.
Watch the extended course.
The Internet of Voice Challenge on Hackster.io has officially come to a close. Our spirits are high after seeing the heights of creativity, the quality of code, and the compelling narratives of the 101 entrants. Simply put, we are impressed with how developers connected Alexa with Raspberry Pi.
After careful deliberation, we are announcing the winners!
The cold efficiency of a pitching machine is a great way to learn to hit a ball, but it’s so impersonal. Instead, Robot Roxie is powered by Alexa and lets you ask for the next pitch.
Watch Robot Roxie in action.
2nd Place: Voice-Controlled K’nex Car by Austin Wilson
This developer revived his old builder set and decided it was more fun to control it with his voice. Watch the Alexa-enabled K’nex buggy show off some of its moves.
When Belkin International launched its WeMo line of connected devices in 2012, it wasn’t its first foray into consumer electronics. Belkin has been around for 30 years, transforming its business from cabling to connectivity, wireless networking, and eventually into home automation.
According to CJ Pipkin, Belkin’s national account manager for WeMo, the farther the company delved into wireless networking, the more it realized people wanted to remote-control devices of all kinds around the home. So Belkin transformed its Zensi energy-monitoring devices into what became WeMo—a line of smart, remote-controlled and remotely monitored switches.
“We built a smart ecosystem of connected devices as early as anyone in the industry,” Pipkin says.
Belkin makes a variety of devices, but high-quality switches dominate its WeMo home automation lineup:
But since Amazon Echo and Alexa came on the scene, Belkin’s way of thinking has completely changed. The company realized one household user—the techiest one—had previously dominated WeMo usage. With Alexa, though, anyone can operate a connected device with ease.
Tom Hudson, software product manager for WeMo, says smartphones were a natural way to control home devices at first, especially lighting. They are handy for configuring set-it-and-forget-it automations to respond to specific events. For more immediate actions, though, voice actuation is so much better. “It’s a lot easier to just say, ‘Turn that light on’ than it is to pull out your phone, find and load up the app, then locate and tap the right command.”
We teamed up with hack.guides() to bring you a Tutorial Contest in June. Hack.guides() is a community of developers focused on creating tutorials to help educate and share technical knowledge. The purpose of the contest was to provide developers the opportunity to share knowledge, help other developers, contribute articles to an open-source project, and win a prize along the way.
Today we’re excited to announce the winner of the hack.guides() tutorial contest.
Alexa developer “piratemrs” built a tutorial that outlines how to build a working, voice-controlled device that can feed your pet fish while you are away. The tutorial helps developers learn three broad technical areas: hardware, AWS, and Alexa.
Both cloud and hardware technologies were integrated to build this project. The tutorial starts with a lesson on how to add external circuits and motors (servos) to a Raspberry Pi computer. Next, the tutorial steps through how to create an AWS Lambda function and Alexa skill. Finally, the skill and Raspberry Pi system are tied together via a configuration guide using the AWS IoT service. At the end, piratemrs says “Alexa, ask fish tank to feed the fish” and a custom Alexa skill activates a small motor to shake some food into the fish tank.
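The Lambda side of that last link can be sketched in Node.js. The topic name and message format below are hypothetical stand-ins for whatever the tutorial configures; the idea is that the skill’s Lambda function publishes a command to AWS IoT, and the Raspberry Pi, subscribed to that topic, runs the servo.

```javascript
'use strict';

// Hypothetical topic and command message for the fish feeder.
const feedCommand = {
  topic: 'fishtank/feed',
  payload: JSON.stringify({ action: 'feed', portions: 1 })
};

// With the aws-sdk installed, the Lambda function could publish the command
// to AWS IoT, where the Raspberry Pi is subscribed (endpoint is a placeholder):
//
//   const AWS = require('aws-sdk');
//   const iot = new AWS.IotData({ endpoint: 'YOUR_IOT_ENDPOINT' });
//   iot.publish({ topic: feedCommand.topic, payload: feedCommand.payload },
//               (err) => { if (err) console.error(err); });

console.log(JSON.parse(feedCommand.payload).action); // feed
```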
The tutorial does a great job of breaking down components into separate sections and includes YouTube videos to show the results of testing each piece of the solution. Watch the videos and focus on testing and understanding each component of the solution before moving on.
Read the full tutorial to learn how you can build your own voice-controlled system to feed your fish, control your fish tank lights remotely, and more.
We’d like to thank all the participants who created Alexa tutorials for this contest. The high quality of submissions made selecting a winner a difficult decision. Tutorial submissions were scored using the contest rules provided by hack.guides(), including writing style, communication ability, effective use of technologies/APIs, and overall quality. Here are some honorable mentions.
This tutorial shows you how to design, build, and test an Alexa skill that implements an adventure game. If you are an experienced Node.js developer, but new to Alexa, you will appreciate the thorough breakdown of the ASK functionality and recommended project structure. Read more
This tutorial shows you how to navigate the Amazon developer screens and create your first Alexa skill. If you are a novice developer, you will appreciate the clear screenshots and fun animated GIFs that appear throughout the text. Read more.
To get started, we’ve created easy-to-use skill templates that show new developers the end-to-end process of building an Alexa skill. Visit our trivia game, fact skill, how-to skill, flash cards skill and user guide skill tutorials.
Or check out these Alexa developer resources:
Amazon is happy to announce that Alexa, Echo, and the all-new Echo Dot are now available for customers in the UK and Germany. Developers and hardware makers around the world can create Alexa skills for UK and German customers with the Alexa Skills Kit (ASK) today or integrate Alexa into their hardware with the Alexa Voice Service (AVS) starting in early 2017. Popular European brands have already announced they’re building Alexa skills, including JustEat, the BBC, The Guardian, Jamie Oliver, MyTaxi, Hive, Netatmo, National Rail and Deutsche Bahn. There are over 3,000 skills for Alexa in the US, and now developers can extend their experiences to more customers in Europe. If you publish a skill for the UK or Germany by October 31, 2016, you’ll receive a free, limited edition Alexa t-shirt.
Today we also introduced an all-new version of the groundbreaking Echo Dot for under $50, so you can add Alexa to any room in your home. Both Amazon Echo and Echo Dot are voice-controlled speakers designed entirely around your voice—they’re always ready, hands-free, and fast. Alexa is the brain behind Echo and Echo Dot—just ask, and she’ll answer questions, play music, read the news, set timers and alarms, recite your calendar, check sports scores, control lights around your home, and much more. With far-field voice control, Echo and Echo Dot can do all this from across the room. Echo and Echo Dot will start shipping in the UK in the coming weeks. In Germany, Echo and Echo Dot are available by invitation for customers who want to help shape Alexa as she evolves—the devices will start shipping next month.
It’s easy to get started. Explore our simple tutorials to learn how to build a skill quickly: trivia, flash cards, instructions, facts, decision tree and game helper. If you want to build a multi-language Alexa skill read our technical documentation to learn how to create a skill in all language models (US English, UK English, and German). If you’re already an Alexa developer, you can enhance your existing skill by extending it to support both UK and DE language models.
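One practical pattern for supporting multiple language models from a single code base is to branch on the locale, which arrives on every request as `event.request.locale` (for example, `en-GB` or `de-DE`). The strings and fallback below are a hypothetical sketch, not a required structure.

```javascript
'use strict';

// Hypothetical per-locale strings for a skill shipped in the US, UK, and Germany.
const GREETINGS = {
  'en-US': 'Hello!',
  'en-GB': 'Hello!',
  'de-DE': 'Hallo!'
};

// Look up the greeting for the request's locale, falling back to US English.
function greetingFor(locale) {
  return GREETINGS[locale] || GREETINGS['en-US'];
}

console.log(greetingFor('de-DE')); // Hallo!
```

The same lookup works for any localized content: prompts, reprompts, and card text can all key off the request locale.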
Join us at an Alexa event or in our webinars and office hours in the coming weeks. These sessions are an opportunity for you to have your questions answered by an Alexa Evangelist or Alexa Solutions Architect.
We have scheduled three introductory live webinars.
We host ASK the Expert sessions to help answer your questions. Join the next one for live Q&A with an Alexa Evangelist.
Technical staff from the Alexa team will be speaking at a number of upcoming events in the UK and Germany. Come join us to get hands-on training, learn about voice design and meet other local developers.
We are offering a free Alexa Dev t-shirt to developers who publish an Alexa skill between September 14, 2016 and October 31, 2016. There are custom, limited edition designs for the UK and Germany. Quantities are limited. See terms and conditions.
Today, we announced that Amazon Echo and Alexa is coming to the UK and Germany. With the announcement comes two new language models: English (UK) and German. You can start developing for these new languages today.
In this tutorial, you’ll build a web service to handle notifications from Alexa and map this service to a skill in the Amazon Developer Portal, making it available on your device and to all Alexa users upon certification.
After completing this tutorial, you’ll know how to do the following:
Skills are managed through the Amazon Developer Portal. You’ll link the Lambda function you created above to a skill defined in the Developer Portal.
1. Navigate to the Amazon Developer Portal. Sign in or create a free account (upper right). The page may look different if you have already registered, or it may have changed since this was written. If you see a similar menu with the ability to create an account or sign in, you are in the right place.
2. Once signed in, navigate to Alexa and select "Getting Started" under Alexa Skills Kit.
Do you develop in Amazon Web Services (AWS), have an Echo, and want the latest service availability details without having to open your laptop and scroll through dozens of green checkmarks? A home-schooled student named Kira Hammond has the solution with her newly-released CloudStatus Alexa skill.
CloudStatus summarizes the info on the AWS Service Health Dashboard, both current issues and recent problems. On a challenging day, Alexa’s conversation might start out like this:
“Hello! 3 out of 11 AWS regions are experiencing service issues—Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved—Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”
Interested? Listen to a recording of an example session, or try it for yourself: say, “Alexa, enable the CloudStatus skill.”
Kira wrote CloudStatus with AWS Lambda, using Amazon EC2 to build Python modules for Requests and LXML. The modules download and parse the AWS status page to provide the desired data. The Python packages and the skill’s code files are zipped and uploaded to AWS Lambda.
Kira created this skill because her father, Eric Hammond, an AWS Community Hero and Internet startup technologist, wanted a simpler, easier way to access the service availability info himself. He figured having Kira create the skill would enable her to learn about retrieving and parsing web pages in Python—and being a good parent, he wanted to foster her creativity. And Kira is very enthusiastic about the creative process of development. “Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy. Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!”
When creating your own Alexa skill, there may be times when you would like to change the way Alexa speaks. Perhaps she isn’t pronouncing a word correctly, maybe her inflections are too serious, or you may need to include a short audio clip. Speech Synthesis Markup Language, or SSML, is a standardized markup language that provides a way to mark up text to change how speech is synthesized. Numerous SSML tags are currently supported by the Alexa Skills Kit, including speak, p, s, break, say-as, phoneme, w, and audio.
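In a skill response, SSML is supplied by setting the output speech type to "SSML" instead of "PlainText". Here is a small sketch assembling an SSML document with a couple of the supported tags; the forecast wording is just an illustration.

```javascript
'use strict';

// A sample SSML body using the break and say-as tags supported by ASK.
const ssml = [
  '<speak>',
  '  Here is the forecast. <break time="500ms"/>',
  '  The temperature is <say-as interpret-as="cardinal">72</say-as> degrees.',
  '</speak>'
].join('\n');

// In the skill's JSON response, SSML goes in outputSpeech with type "SSML":
const outputSpeech = { type: 'SSML', ssml: ssml };
console.log(outputSpeech.type); // SSML
```

Note that the entire body must be wrapped in a single `<speak>` element for Alexa to render it.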
This 20-minute video walks you through adding SSML support to your Alexa skill and shows exactly how to pause Alexa’s speech, change how she pronounces a word, and create and embed your own audio tags.
For more information about getting started with Alexa and SSML, check out the following:
In April 2016, developer Aaron Roberts put the finishing touches on Alarm.com’s custom Alexa Skill. That wrapped up almost three months of development and internal and beta testing. All that testing led to a smooth certification process.
Rebecca Davenport, Director of Product Management at Alarm.com, says the Alarm.com skill controls more than just home security. It also controls almost every other device that’s part of the company’s home automation ecosystem. That includes security equipment, door locks, garage doors, video cameras, lights and thermostats.
The company's founders recognized the limitations of traditional landline-based alarm systems. Phone wires can be tampered with and are unreliable, and customers often forgot to arm their systems. The company saw a unique opportunity to let customers arm and disarm the system and know what’s happening at their home from anywhere.
Alarm.com enhanced its offering with its first mobile app. At the same time, it started expanding its core platform beyond security into home automation and video. Today over two million Alarm.com customers control their smart home devices from their phones, tablets, TV, and more.
When Amazon Echo and Alexa debuted, Alarm.com saw another huge opportunity. With the launch of the Alexa Skills Kit (ASK), the company knew voice technology’s time had come. “We had voice technology on our radar,” Davenport says. “Voice control is a compelling way for customers to interact with their devices from within their homes.”
The software team didn’t start developing a custom Alexa skill right away. Instead, Roberts started his own early exploration and prototype during the ASK beta. When the integration project got the green light, he was ready.
Roberts said using the ASK API was straightforward. He found mapping the API responses to Alarm.com’s existing web services was the simplest part of the project. As for the rest, he recalls the major components:
The team members brainstormed all the ways they thought users would request a command. Like many developers new to voice applications, they found customers don’t always say what you expect.