Today’s guest post comes from Michael Garcia, EMEA Solutions Architect at AWS. In this post, we'll discuss how you can voice-control any physical device using Alexa.
Amazon Echo and the Alexa Skills Kit (ASK) are enabling developers to create new experiences with voice-enabled applications. Voice is a natural interface for interacting with the physical world around us. The new Smart Home Skill API enables you to quickly create Alexa skills to control connected devices for the home, like lights and thermostats, from the cloud. What about controlling other types of devices from the cloud?
That is what the Internet of Things (IoT) is all about. Today we are going to see how you can connect and control any device using the Amazon Web Services (AWS) platform and the Alexa Skills Kit. We’ll start with some basics around AWS IoT, a managed service that enables you to securely connect your devices to the AWS platform. We’ll create a representation of our physical device, and then we’ll see how to create a new skill to voice-control that device from the cloud. If this is the first time you are creating an Alexa skill, I highly recommend you build a trivia skill or create a fact skill. Both blog posts provide step-by-step tutorials so you can build a skill in under an hour and learn the end-to-end process of creating a skill with AWS Lambda.
If you already have a physical device and want to connect it to AWS IoT, consult the AWS IoT quickstart documentation to get started using the AWS SDKs and sending data to the cloud.
To start, we’ll use a very simple industrial use case to make things concrete. Imagine that you’re a developer who needs to build a skill so that an operator in an industrial facility can control a water pump remotely by voice. To achieve that, we will focus on the Alexa Skills Kit and simulate the physical device (the water pump) so everyone can perform the steps described below. We assume the reader has prior knowledge of the AWS platform; to get up to speed, feel free to visit the AWS training section.
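The "representation of our physical device" that AWS IoT maintains is a thing shadow: the skill writes a desired state to the shadow, and the real pump later reports its actual state back. Here is a minimal sketch of that pattern, assuming a thing named `waterPump` and a `pumpState` attribute (both names are illustrative, not from this post):

```javascript
// Build the shadow update document that asks AWS IoT to record a new
// desired state for the pump. The physical device subscribes to its
// shadow's delta topic and reacts when desired differs from reported.
function buildDesiredState(pumpOn) {
  return {
    state: {
      desired: { pumpState: pumpOn ? 'ON' : 'OFF' }
    }
  };
}

// With the aws-iot-device-sdk for Node.js, the skill's Lambda function
// would send this document with something along the lines of:
//   thingShadow.update('waterPump', buildDesiredState(true));
```

The key design point is that the skill never talks to the pump directly; it only updates the shadow, which works even when the device is temporarily offline.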
We will provide you with a glimpse of how to use Alexa and the AWS platform so you can create your own voice-enabled IoT application later.
AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT, your applications can keep track of and communicate with all your devices, all the time – even when they aren’t connected.
AWS IoT makes it easy to use other AWS services with built-in integration so you can build value-added IoT applications that gather, process, analyze and act on data generated by connected devices, without having to manage any infrastructure.
Let’s start by logging into the AWS Console on the IoT page. By default, this selects the ‘us-east-1’ AWS region; we recommend you stick with that region for this article.
Alexa is the cloud-based voice service that powers Amazon Echo. Companies can add new skills to Alexa using the Alexa Skills Kit. The Alexa Fund is a $100M investment fund to fuel innovation in voice technology. Both were announced in June 2015.
After the birth of their first son, Joel Wetzel and his wife decided to start running as a way to get out of the house and get healthy. He soon grew tired of squinting at a watch screen on dark mornings or pulling his mobile phone out of his pocket to check his times or adjust settings. He saw a way to combine his passion for voice design with a love of a healthy lifestyle. Joel is the CEO and Founder of MARA, an intelligent, voice-based running assistant that provides performance data and training information during exercise, serving as a virtual running coach or personal trainer. MARA launched as a mobile app on iOS in May 2014 and Joel continues to expand MARA’s reach to new technologies.
Joel has been interested in voice interaction since childhood. He was fascinated by HAL from 2001: A Space Odyssey, KITT on Knight Rider, and Data and the shipboard computer of the Starship Enterprise on Star Trek. It was all science fiction back then, but fast forward thirty years and Joel realized that it was something he could help make happen. MARA is a next-generation assistant for smart running. The name of both the app and the assistant itself, MARA uses cutting-edge voice recognition to proactively coach runners to reach new personal bests. As a personality, MARA provides motivation, encouragement and even competition. With the MARA app, runners can talk to her using their earbuds, ask questions about their speed, pace, location, duration or the weather, ask for music, and track run progress over time.
“At MARA, our goal is to push digital interaction beyond mere voice commands - to craft conversations, experiences, and personalities,” said Joel. “We were obviously delighted to be selected by the Alexa Fund because our goals are very similar. We want to see voice interactions become pervasive.”
Last year, we introduced a Developer Preview of Alexa Voice Service (AVS) to hobbyists and device makers to help them integrate Alexa into their connected devices and apps, and then a few weeks back, we released an implementation of an Alexa enabled Raspberry Pi on GitHub. We couldn’t be happier with the response we received from the developer community.
Meet Triby – a new connected family-friendly kitchen device that magnetically sticks to the fridge and can play music, make calls, display messages, and is voice activated.
Built by Invoxia, Triby is one of the first ‘Alexa-enabled’ devices built with AVS, which means that you can do almost everything with Alexa on Triby that you can do with Alexa on Echo.
You address Alexa through Triby using the “Alexa” wake word, just as you would on Echo. Simply say “Alexa, play Adele” and Triby can play Adele from Prime Music, “Alexa, add milk to my list” and Triby will add it to your shopping list, or “Alexa, turn off the kitchen lights” and Triby becomes a way to access and control the smart home.
“Voice recognition capabilities transform the way we interact with music, content and services. Amazon made it available to the world with its first range of Alexa-enabled devices. Now with a diversified Alexa-enabled device offering, more people can enjoy the Alexa experience. We are excited to be at the forefront of many third party devices to integrate the Alexa Voice Service with Triby. It has great communication features, the ability to hear you from across the room while being portable and an always-on display. We can't wait to equip millions of kitchens with it!" says Sebastien de le Bastie, Invoxia’s Managing Director.
Learn More about Alexa on Triby.
If you are a device maker, service provider or application developer interested in adding rich and intuitive experiences to your products, AVS is the right choice for you! Get Started
For more information on Alexa-enabled devices and getting started with Alexa, check out the following resources:
Have Questions? We are here to help! Visit us on the AVS Forum to discuss specific questions with one of our experts.
By Juan Pablo Claude, software developer at Big Nerd Ranch
If you are reading this post, it is likely that you have finished writing a shiny new Alexa skill and you are ready to submit it to Amazon for review and publication. In this post, we’ll guide you through the submission process and help you get your skill published as quickly as possible.
Haven’t written your skill yet? Read on to learn about Amazon’s guidelines so that you can have a rapid and successful skill review.
If you want to have your own skill available to Alexa users, you will need to submit your skill to the Alexa Team for certification.
That means that you, as a skill developer, need to follow Amazon’s content and security policies if you wish to have your skill certified for distribution. Amazon offers an official checklist for skill submission, along with policy guidelines and security requirements.
As you might expect, skills with obscene, offensive or illegal content or purposes are terminally frowned upon. What you might not expect is that the content policies do not allow skills targeted to children, as they may compromise a child’s online safety. This is a less evident restriction you should consider when a new skill idea hits you.
Security for the server-side part of your skill is also an important consideration, and it may be tricky if you decide to host the skill yourself outside of AWS Lambda. In that case, your server will need to comply with Amazon’s security requirements. As an example, any certificates for your skill service need to be issued by an Amazon-approved certificate authority.
The good news is that if you host your skill services as Amazon Web Services Lambda functions as we have done in the Developing Alexa Skills blog series, all major security requirements are automatically satisfied.
We are very excited to introduce you to CoWatch - the world’s first ‘Alexa-enabled’ smartwatch built using the Alexa Voice Service API. Boasting a modern watch design and a high-res touch screen, CoWatch is a companion smartwatch device with built-in Wi-Fi and Bluetooth, and is the first wearable/smartwatch built on top of the Cronologics OS platform.
By Josh Skeen, software developer at Big Nerd Ranch
This is part four of the Big Nerd Ranch series. Click here for part three.
By now, we’ve made a lot of progress in building our Airport Info skill. We tested the model and verified that the skill service behaves as expected. Then we tested the skill in the simulator and on an Alexa-enabled device. In this post, we’ll implement persistence in a new skill so that users will be able to access information saved from their previous interactions.
We'll go over how to write Alexa skill data to storage, which is useful in cases where the skill would time out or when the interaction cycle is complete. You can see this at work in skills like the 7-Minute Workout skill, which allows users to keep track of and resume existing workouts, or when users want to resume a previous game in The Wayne Investigation.
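The pattern described above can be sketched in a few lines. This is a hedged illustration of the persistence idea, not the Airport Info skill's actual code: working state lives in session attributes, the skill writes it to a table (for example, DynamoDB) keyed by user ID when the session ends, and merges it back in when the user returns. Table and attribute names are assumptions:

```javascript
// Parameters for a DynamoDB put -- an object like this would be passed
// to AWS.DynamoDB.DocumentClient#put to save the user's state when the
// session ends or times out.
function buildSaveParams(userId, attributes) {
  return {
    TableName: 'SkillUserState',            // illustrative table name
    Item: { userId: userId, state: attributes }
  };
}

// When a returning user starts a new session, merge the saved record
// back in, letting anything the new session has already set win.
function resumeState(savedState, sessionAttributes) {
  return Object.assign({}, savedState || {}, sessionAttributes);
}
```

Keeping the save/load shapes in small pure functions like these also makes the persistence logic easy to unit-test without a live database.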
Like many industries today, the financial services sector is looking to become more customer-centric—to provide faster, easier, and more secure ways for consumers and businesses to buy goods and services online.
UK-based Lloyds Banking Group is no different. Committed to becoming a world-class, customer-centric digital bank, Lloyds is actively exploring biometrics, including voice recognition. According to Marc Lien, Director of Innovation and Digital Development, the use of speech is exciting not only because it’s convenient, but also because it can empower the 360,000 people registered as blind or partially sighted in the UK.
As Lien says, “Some of our customers cannot enjoy the full benefits of online banking. Understanding how we can break down accessibility barriers is another way in which we are working towards becoming the best bank for customers.”
To that end, Lloyds has created a proof of concept for Alexa, writing test cases for logging in, requesting account balances as well as account details, and asking for help from Lloyds. Watch this video to see the skill in action.
The skill isn’t live, because Alexa-enabled devices and Alexa Skills Kit are not yet available in the UK. But, as Lien explains, “By being at the forefront of exploring technologies we can keep pace with the evolving expectations of our customers. This also means that we can future-proof our products and services by considering how technologies may develop.”
To learn more about how they are developing this proof of concept for Alexa, read their blog. Look for more to come from Lloyds.
Great news—we've made this month’s t-shirt even more collectible. To recognize your accomplishment of publishing one of the first 1,000 Alexa skills, we’ve added a new badge to the April t-shirt. Simply come up with an idea for a skill, create your next (or first) Alexa skill, and publish it by April 30.
Not sure where to start? Our trivia and fact skill templates make it easy to create a simple skill for Alexa. Both templates and step-by-step guides leverage AWS Lambda and the Alexa Skills Kit, while providing the business logic, use cases, error handling and help functions for your skill.
Don't miss out. Build and publish your Alexa skill by April 30 to score your free Alexa dev t-shirt. Terms and conditions apply.
Hackster is a developer community dedicated to learning hardware, and its members have shared some pretty amazing projects using Alexa. Now Hackster has announced the Alexa Skill Contest to give developers like you a chance to connect your favorite hardware, IoT platform, and everyday life using Alexa.
Natural user interfaces, such as those based on speech, represent the next major disruption in computing. Alexa provides you with an opportunity to take advantage of this new form of interaction. Alexa, the voice service that powers Amazon Echo, provides capabilities, or skills, that enable customers to interact with devices in a more intuitive way using voice. You can build skills using the Alexa Skills Kit.
We’re excited to see what you create with the Alexa Skills Kit. Submit your great skill ideas for our Alexa Skill Contest – extra points when your skill is published by May 30, 2016.
To get started, check out the details of the contest. Here are a few other resources to help you get started quickly:
I’m curious to see what you’ll build. Keep in touch, @PaulCutsinger.
The Smart Home Skill API is a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. With this new API you can teach Alexa how to control your own cloud-controlled lighting and thermostat devices. For example, customers can simply say, “Alexa, turn on the kitchen lights” or “Alexa, turn up the heat downstairs” and Alexa will communicate directly with your Smart Home device. Smart home skills are created in the same developer portal as existing custom skills and follow a similar process.
To create your smart home skill, you’ll first configure your skill using the new Smart Home Skill API flow in the developer portal. Ensure you have selected the Smart Home Skill API skill type, enter a name for your skill, and then simply click Next.
Unlike custom skills, smart home skills already have an existing interaction model for you. This means you won’t have to define the intent schema and sample utterances like you would in a custom skill. Click Next to move to the Configuration tab.
Editor’s note: This tutorial was updated with the new skill submission flow in April 2016.
Programming for the Alexa platform is a new paradigm for everyone. Creating a solid Voice User Interface (VUI), understanding the Alexa platform and how to interact with it, and certifying your skill all need to be mastered in addition to actually programming your skill in Node.js, Python, Java or whatever your favorite language may be.
This post walks the first-time Alexa skills developer through the steps involved in creating a solid skill that can actually be submitted for certification. Understanding the scope of what is involved, while using a cut-and-paste approach to the programming required, should enable you to grasp the parts involved and how they all fit together. Nothing is better for learning a thing than actually doing a thing, so let’s get started!
We are going to take a reference skill called ‘Reindeer Games’, a trivia game popular on the Alexa platform, and adapt it by creating a trivia game of your own to submit for certification. The framework has all of the business logic, use cases, error handling and help functions already implemented – you just need to plug in your own questions and answers and edit a couple of lines of script.
Important: Follow the instructions below, which step you through setting up the framework trivia game, ‘Reindeer Games’ – be sure you have this working before you move on to adapting it to your set of questions.
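To give a feel for the "plug in your own questions" step, here is a hedged sketch of the shape the trivia template expects: an array of objects, each mapping a question string to its candidate answers. In the template's convention the correct answer is listed first and the framework shuffles the choices at runtime; the questions below are placeholders, not from Reindeer Games:

```javascript
// Illustrative question list in the trivia-template format: each entry
// maps one question to an array of answers, correct answer first.
const questions = [
  {
    'What is the capital of France?': [
      'Paris',        // correct answer goes first
      'Lyon',
      'Marseille',
      'Nice'
    ]
  },
  {
    'How many continents are there?': ['Seven', 'Five', 'Six', 'Four']
  }
];
```

Swapping in your own entries, updating the skill name, and re-deploying is all that is needed to turn the framework into a brand-new trivia skill.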
We’ve been talking about how to get started with Alexa using the Alexa Skills Kit page, and sample skills, such as the Color Expert, using AWS Lambda functions. This article will show you how to set up a deploy script so that you can manage your code and modules locally, and easily publish changes to AWS Lambda via the command line interface (CLI).
The AWS Lambda console provides a set of management screens for developers to create and configure custom functions, including functions that implement Alexa skills features. Short, simple functions that require only a single block of code can be opened for editing within the Lambda console. However, this online code editor is disabled if you have uploaded a multi-file project. In this case you need to organize source files and any required modules in a folder on your laptop, and then zip these into a package and manually upload them to the Lambda console.
We’ll use the AWS CLI to implement DevOps-style automation so we can avoid repeating these steps manually during an iterative development process.
Today we are introducing the Smart Home Skill API, a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. Developers can now teach Alexa how to control their cloud-controlled lighting and thermostat devices so customers can simply say, “Alexa, turn on the kitchen lights” or “Alexa, turn up the heat.” You no longer need to build a voice interaction model to handle customer requests. This work is now done for you when you use the Smart Home Skill API. You create skills that connect your devices directly to our lighting and thermostat capabilities so that customers can control their lights, switches, smart plugs or thermostats—without lifting a finger.
We first introduced the Smart Home Skill API as a beta called the Alexa Lighting API in August 2015. As part of the beta program, we worked with companies including Nest, Ecobee, Sensi, Samsung SmartThings, and Wink in order to gather developer feedback, while extending Alexa’s smart home capabilities to work with their devices.
It’s easy and free for developers to use the Smart Home Skill API to connect Alexa to hubs and devices for both public and personal use. Get Started Now >
When you create a custom skill, you build the voice interaction model. When using the Smart Home Skill API, you tap into Amazon’s standardized language model, so you skip the step of creating an interaction model. Alexa understands the user’s speech, converts it to a device directive, and sends that directive to the skill adapter that you build in AWS Lambda.
Editor’s Note: Due to popular demand, we have extended the promotion period for the Envato Tuts+ offer for one month. Your skill will be eligible for this exciting promotion if you get it certified by May 31st, 2016. See terms and conditions.
Today, I’m excited to announce a limited-offer with Envato Tuts+ for the Alexa developer community. Envato Tuts+ is an e-learning platform that teaches creative and technical skills by providing free how-to tutorials, video courses and e-books to millions worldwide.
To thank you for adding new skills to Alexa, we are offering three free months of an Envato Tuts+ subscription to the first 500 developers who get an Alexa skill certified and fill out this form by May 31, 2016.
If you’re just getting started with the Alexa Skills Kit, Envato Tuts+ has published a new step-by-step tutorial that will make it easy and fast to build a trivia quiz for Amazon Echo or any Alexa-enabled device. No experience with Alexa development tools required. This template can be used by non-programmers as well as beginners and intermediate developers. You just need to come up with a trivia idea, plug in your questions, and edit a few lines of script. It is a valuable way to quickly learn the end-to-end process of building and publishing an Alexa skill.
By Josh Skeen, software developer at Big Nerd Ranch
Now that we have tested the model for our Airport Info Alexa Skill and verified that the skill service behaves as expected, it's time to move from the local development environment to staging, where we’ll be able to test the skill in the simulator and on an Alexa-enabled device.
To deploy our Alexa skill to the staging environment, we first need to register the skill with the skill interface, then configure the skill interface's interaction model. We'll also need to configure an AWS Lambda instance that will run the skill service we developed locally.
The Alexa skill interface is what’s responsible for resolving utterances (words a user spoke) to intents (events our skill service receives) so that Alexa can correctly respond to what a user has asked. For example, when we ask our Airport Info skill to give status information for the airport code of Atlanta, Georgia (ATL), the skill interface determines that the AirportInfo intent matches the words that were spoken aloud, and that ATL is the airport code a user would like information about.
Here's what the journey from a user's spoken words to Alexa's response looks like:
In our post on implementing Alexa intents, we simulated the skill interface with alexa-app-server so that we could test our skill locally. We sent a mock event to the skill service from alexa-app-server by selecting IntentRequest with an intent value of airportInfo and an AIRPORTCODE of ATL in the Alexa Tester interface.
By comparison, in a deployed skill, the skill interface lives on Amazon's servers and works with users’ utterances that are sent from Alexa to the skill service.
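The interaction model we configure for the skill interface has two halves: an intent schema and sample utterances. Based on the intent and slot names mentioned in this post, the Airport Info schema would look roughly like the following; the slot type name is an assumption for illustration, not necessarily the tutorial's exact file:

```json
{
  "intents": [
    {
      "intent": "airportInfo",
      "slots": [
        { "name": "AIRPORTCODE", "type": "FAACODES" }
      ]
    }
  ]
}
```

Sample utterances then map spoken phrases to that intent, e.g. `airportInfo airport status for {AIRPORTCODE}`, which is how "status for ATL" resolves to the airportInfo intent with AIRPORTCODE bound to ATL.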