Alexa Blogs



Showing posts tagged with How to

December 02, 2016

Marion Desmazieres

The name of Harrison Kinsley may not ring a bell but if you’re into Python programming you’ve probably heard the name “Sentdex”. With over 125,000 subscribers to his YouTube channel and about 800 free tutorials on his associated website, Harrison has become a reference for learning materials on Python programming.

Today, we’re excited to share a new Alexa skills tutorial for Python programmers, available for free with companion video screencasts to follow along. This three-part tutorial series provides the instructions and code snippets to build an Alexa skill in Python that goes to the World News subreddit, a popular feed on the news aggregator Reddit, and reads the latest headlines. To follow along, you will need an Alexa-enabled device, ngrok or an HTTPS-enabled server, and an Amazon Developer account.

In this tutorial, you can expect to learn:

Get started with the Alexa tutorial series here. For more Python tutorials, head to Harrison’s website.

Happy coding!


Learn more

Check out these Alexa developer resources:


November 10, 2016

Sebastien Stormacq

This new technical tutorial by Sebastien Stormacq, Sr. Solutions Architect for Amazon Alexa, will show you how to use Amazon API Gateway and configure it to act as an HTTP proxy, sitting between Alexa and your OAuth server.

Have you ever developed an Alexa skill that uses account linking? Do you remember the first time you clicked the “Link Account” button and feared the result? I bet you first saw the dreadful error message: “Unable to Link your skill”. Sometimes figuring out what caused an error is like searching for a needle in a haystack: you have no clue where to start.

Most of the errors that I have seen when working with developers fall into two categories:

  • Configuration errors inside the Alexa developer console. These are the easy ones to catch: just compare your configuration with a working one, such as the one described in this blog post.
  • Errors at the OAuth server level. These most often happen when you are developing your own OAuth server and it is not fully compliant with the OAuth 2.0 specification.

When you have access to the OAuth server logs, debugging the error message you see in the Alexa App is relatively easy. You just enable full HTTP trace on the server side and search for the error or the misconfiguration on the server. Full HTTP trace includes the full HTTP headers, query string and body passed by the Alexa service to your server.

With a bit of experience, catching an OAuth error in an HTTP trace takes only a few minutes.

The problem is that most developers we work with have no access to the OAuth servers or the server logs. Either they are using a third-party OAuth server (Login with Amazon, Login with Facebook, Login with Google, and the like), or they are working in a large enterprise where another team operates the OAuth server. Meeting that team to ask for a change in logging level, or requesting access to the logs, can take weeks, or may not be possible at all.

This article explains how to set up an HTTP proxy between the Alexa skill service and your OAuth server to capture and log all HTTP traffic. By analyzing the logs, you can inspect the HTTP URLs, query strings, headers and full bodies exchanged. Setting up such a proxy normally requires infrastructure to host it: a networked server with a runtime to deploy your code, and so on. This is unnecessary heavy lifting that Amazon Web Services can take care of.

We will use Amazon API Gateway instead and will configure it to act as an HTTP Proxy, sitting between Amazon’s Alexa Skill Service and your OAuth server.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services.

API Gateway HTTP Proxy Integration mode is a feature of API Gateway that was launched on September 20, 2016. You can read the announcement post by AWS Director of Evangelism Jeff Barr if you want to learn more about it.

The diagram below shows where API Gateway, with HTTP Proxy Integration, fits in the OAuth Architecture.
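As a concrete sketch of what that configuration ends up looking like, here is a minimal Swagger (OpenAPI 2.0) fragment describing an API Gateway greedy proxy resource. The OAuth server hostname is a placeholder; a real deployment would also enable stage logging so the traffic gets captured in CloudWatch.

```json
{
  "swagger": "2.0",
  "info": { "title": "oauth-debug-proxy", "version": "1.0" },
  "paths": {
    "/{proxy+}": {
      "x-amazon-apigateway-any-method": {
        "parameters": [
          { "name": "proxy", "in": "path", "required": true, "type": "string" }
        ],
        "responses": {},
        "x-amazon-apigateway-integration": {
          "type": "http_proxy",
          "httpMethod": "ANY",
          "uri": "https://oauth.example.com/{proxy}",
          "requestParameters": {
            "integration.request.path.proxy": "method.request.path.proxy"
          }
        }
      }
    }
  }
}
```

With this in place, you point the skill's authorization and token URLs at the API Gateway stage URL instead of the OAuth server, and every request and response flows through the proxy unmodified.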

High level steps to create such a configuration are:

[Read More]

October 28, 2016

Dean Bryen

We recently announced support for Alexa in two new languages, English (UK) and German. In order to easily add all three supported languages to your skills, we have updated the Alexa SDK for Node.js. We’ve also updated our Fact, Trivia and How To skill samples to include support for all three languages using the new SDK feature. You can find these updated samples over at the Alexa GitHub.

Fact – This template helps you create a skill similar to “Fact of the Day,” “Joke of the Day,” etc. You just need to come up with a fact idea (like “Food Facts”) and then plug your fact list into the sample provided.

Trivia – With this template you can create your own trivia skill. You just need to come up with the content idea (like “Santa Claus Trivia”) and plug in your content to the sample provided.

How To – This skill enables you to parameterize what the user says and map it to a content catalog. For example, a user might say "Alexa, Ask Aromatherapy for a recipe for focus" and Alexa would map the word "focus" to the correct oil combination in the content catalog.
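That mapping can be sketched with a plain lookup table; the catalog contents and names below are invented for illustration.

```javascript
// Hypothetical content catalog for the aromatherapy example: the spoken
// slot value (e.g. "focus") is the key into the catalog. All recipes here
// are invented for illustration.
var oilCatalog = {
    "focus": "two drops of rosemary and two drops of peppermint",
    "sleep": "four drops of lavender and two drops of chamomile"
};

// Normalize the raw slot value before the lookup, since users may say
// "Focus" or "FOCUS".
function recipeFor(slotValue) {
    var key = (slotValue || "").trim().toLowerCase();
    return oilCatalog[key] || null;
}
```

A `null` result is the cue for the skill to re-prompt the user with the catalog entries it does know about.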

If you are not familiar with the existing SDK or have not previously created a skill, you can reference the fact skill tutorial or read the SDK Getting Started Guide before continuing.

How it works

Let’s take a look at the new version of the fact skill, and walk through the added multi-language support. You can find the entire skill code here.

The resource object

The first thing that you will notice is that we now define a resource object when configuring the Alexa SDK. We do this by adding this line within our skill handler:
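If you are following the published alexa-sdk fact sample, that line is `alexa.resources = languageStrings;` (treat that as an assumption here). A minimal sketch of the resource object follows, with an illustrative `t` helper standing in for the SDK's own `this.t()` lookup; the skill names are invented.

```javascript
// Resource object: one entry per supported locale; "translation" holds the
// skill's strings (this mirrors the layout used by the alexa-sdk samples).
var languageStrings = {
    "en-US": { translation: { SKILL_NAME: "American Space Facts" } },
    "en-GB": { translation: { SKILL_NAME: "British Space Facts" } },
    "de-DE": { translation: { SKILL_NAME: "Weltraumwissen" } }
};

// Illustrative only: the SDK performs this lookup for you once you set
// alexa.resources = languageStrings and call this.t('SKILL_NAME').
function t(locale, key) {
    var bundle = languageStrings[locale] || languageStrings["en-US"];
    return bundle.translation[key];
}
```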

[Read More]

October 28, 2016

Jen Gilbert

Today’s guest post is from Joel Evans from Mobiquity, a professional services firm trusted by hundreds of leading brands to create compelling digital engagements for customers across all channels. Joel writes about how Mobiquity built a portable voice controlled drone for under $500 using Amazon Alexa.

As Mobiquity’s innovation evangelist, I regularly give presentations and tech sessions for clients and at tradeshows on emerging technology and how to integrate it into a company’s offerings. I usually show off live demos and videos of emerging tech during these presentations, and one video, in particular, features a flying drone controlled via Alexa. Obviously, a flying object commanded by voice is an attention getter, so this led me to thinking that maybe I could do a live demo of the drone actually flying.

While there have been a number of articles that detail how to build your own voice-controlled drone, the challenge remains the same: how do you make it mobile, since most solutions require you to be tethered to a home network?

I posed the challenge of building a portable voice-controlled drone to our resident drone expert and head of architecture, Dom Profico. Dom has been playing with drones since they were called Unmanned Aerial Vehicles (UAVs) and has a knack for making things talk to each other, even when they aren’t designed to do so.

Dom accepted my challenge and even upped the ante. He was convinced he could build the portable drone and accomplish the task for under $500. To make the magic happen, he chose to use a Raspberry Pi 2 as the main device, a Bebop Drone, and an Amazon Echo Dot.

[Read More]

October 27, 2016

Jeff Blankenburg

To introduce another way to help you build useful and meaningful skills for Alexa quickly, we’ve launched a calendar reader skill template. This new Alexa skill template makes it easy for developers to create a skill like an “Event Calendar,” or “Community Calendar,” etc. The template leverages AWS Lambda, the Alexa Skills Kit (ASK), and the Alexa SDK for Node.js, while providing the business logic, use cases, error handling and help functions for your skill.

For this tutorial, we'll be working with the calendar from Stanford University. The user of this skill will be able to ask things like:

  • "What is happening tonight?"
  • "What events are going on next Monday?"
  • "Tell me more about the second event."

You will be able to plug your own public calendar feed (an .ICS file) into the sample provided, so that you can interact with your calendar in the same way. This could be useful for small businesses, community leaders, event planners, realtors, or anyone that wants to share a calendar with their audience.

Using the Alexa Skills Kit, you can build an application that can receive and respond to voice requests made on the Alexa service. In this tutorial, you’ll build a web service to handle requests from Alexa and map this service to a skill in the Amazon Developer Portal, making it available on your device and to all Alexa users after certification.

After completing this tutorial, you'll know how to do the following:

  • Create a calendar reader skill - This tutorial will walk first-time Alexa skills developers through all the required steps involved in creating a skill that reads calendar data, called "Stanford Calendar".
  • Understand the basics of VUI design - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing, and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit for certification. For guidance on designing a voice experience with Alexa you can also watch this video.
  • Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.
  • Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending it through the certification process so it can be enabled by any Alexa user.
  • Interact with a calendar (.ics file) using voice commands.
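As a rough sketch of the core idea, here is a minimal extraction of event titles from an .ics feed; a production skill would use a full iCalendar parser, and the feed below is invented.

```javascript
// An invented, minimal .ics feed; real feeds use the same line-oriented
// format with CRLF line endings.
var ics = [
    "BEGIN:VCALENDAR",
    "BEGIN:VEVENT",
    "DTSTART:20161027T190000Z",
    "SUMMARY:Guest lecture: voice interfaces",
    "END:VEVENT",
    "BEGIN:VEVENT",
    "DTSTART:20161028T010000Z",
    "SUMMARY:Homecoming concert",
    "END:VEVENT",
    "END:VCALENDAR"
].join("\r\n");

// Pull out every SUMMARY line; this only handles simple, well-formed feeds
// (no line folding or escaped characters).
function getSummaries(icsText) {
    var summaries = [];
    icsText.split(/\r?\n/).forEach(function (line) {
        if (line.indexOf("SUMMARY:") === 0) {
            summaries.push(line.slice("SUMMARY:".length));
        }
    });
    return summaries;
}
```

The skill would read these summaries back in response to "What is happening tonight?", after filtering on the DTSTART timestamps.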

Get started and build your first—or next—Alexa skill today.

Special Offer: Free T-Shirts

All published skills will receive an Alexa dev t-shirt. Quantities are limited. See Terms and Conditions.

Check out These Other Developer Resources



October 21, 2016

Jen Rapp

As an Alexa developer, you have the ability to provide Alexa skill cards that contain text and/or images (see Including a Card in Your Skill's Response). There are two main types of cards:

  • Simple Card - contains a title and text body.
  • Standard Card - contains title, text body, and one image.
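In the skill's JSON response those two card types look roughly like this (values invented; per the ASK response format, a Simple card carries `content` while a Standard card carries `text` plus an `image` object):

```javascript
// Invented recipe card values; only the object shapes matter here.
var simpleCard = {
    type: "Simple",
    title: "Chicken Parmesan",
    content: "Prep time: 20 min\nCook time: 40 min"
};

var standardCard = {
    type: "Standard",
    title: "Chicken Parmesan",
    text: "Prep time: 20 min\nCook time: 40 min\nIngredients: 8",
    image: {
        smallImageUrl: "https://example.com/chicken-small.png",
        largeImageUrl: "https://example.com/chicken-large.png"
    }
};
```

Either object goes in the `card` field of the skill's response body, alongside `outputSpeech`.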

Customers interacting with your skill can then view these cards via the Alexa app or on Fire TV. While voice experiences allow customers to break from their screens, graphical interfaces complement the voice experience and can enhance how users engage with your skill.

In our new guide, Best Practices for Skill Card Design, you can learn how to best present information on cards for easy consumption by customers. Skill cards contain the same information (image and text) everywhere they appear, but have differing layouts depending on the access point, the Alexa app or Fire TV.  

To drive engagement with your Alexa skill, we’ve compiled the top 10 tips for effective Alexa skill card design.

Tip #1: Use cards to add information or details to the voice experience

Cards do not replace the voice experience; instead, they deliver value-added content. Customers should not need to rely on the cards to enjoy your voice experience, and cards should never be required to use an Alexa skill. Instead, they should be used to provide additional information.

For example, imagine a customer asks for a recipe and you want to share details of the recipe. The skill card could add additional context by providing the recipe category, recipe description, cook time, prep time, and number of ingredients,  while Alexa may simply say, “Try chicken parmesan accented by a homemade tomato sauce.”

Tip #2: Show users what they can do with guidance and sample utterances

Cards can be a great way to get a lost user back on track, or enable self-service to show users what they can do. Give enough detail for the user to move forward when lost – without going overboard. Suggest sample utterances when they need help, or when AMAZON.HelpIntent is triggered. Always keep the utterances relevant and in context of the current situation. For example, don't suggest an utterance on how to check your previous scores when the user is in the middle of the game.

Tip #3: Keep it short, informative, and clear

Structure the copy for cards in brief, informative sentences or lines of text and avoid unstructured product details. Don’t rely on large blocks of text and keep details to a minimum so that users can quickly evaluate the card at a glance. For example, show a stock symbol and the current stock quote instead of a full sentence describing the change, which is more difficult to quickly grasp.

Tip #4: Use line breaks

Use line breaks (\n) to help format individual lines of addresses, product details or information. Again, this makes it easier to quickly scan for key information. However, don’t double line break when separating parts of a street address.

Tip #5: Keep URL links short and memorable

Since URLs in cards are not clickable links, don’t show a bare URL as the only way to direct users to other sites. Instead, provide clear direction on how to get to more information (e.g., “Go to and head to ‘My Account’”). While we don’t encourage the use of URLs in cards, if you do include them, make them easy for the user to consume and remember.

Tip #6: Make it consumable at a glance

A general guideline for card content is to keep it short and easy to read. Cards should provide quick bits of content that users can consume at a glance. Providing images is a helpful way to quickly convey key information (e.g., images of a cheese pizza vs. a pepperoni pizza are instantaneously distinguishable). The card shouldn’t repeat everything that Alexa says; instead, include just the key information (e.g., a bulleted list of product details vs. the full description).

[Read More]

October 07, 2016

Dean Bryen

If you’ve already created your first Alexa Skill, you may be using local environments, the AWS CLI, and other DevOps processes. This blog post is for advanced developers who want to level up skill creation by adding some automation, version control, and repeatability to skill deployments.

In this post we're going to programmatically create our skill backend using AWS CloudFormation. CloudFormation is an AWS service that enables you to describe your AWS resources as a JSON file; these JSON files can later be executed to stand up and tear down your AWS environments. This gives us a number of benefits, including version control and repeatability. You can read more about AWS CloudFormation in the AWS developer docs here. To put this into context, when looking at the Alexa Skills Kit architecture below, the resources in the red box are what we will be creating within our CloudFormation template.


The Template

The CloudFormation template is a JSON object that describes our infrastructure. This will consist of three components.

Parameters – Where we define the input parameters we want to inject into our template, such as ‘function-name’.

Resources – The AWS resources that make up our skill backend, such as the Lambda function.

Outputs – Any information that we would like to retrieve from the resources created in our CloudFormation stack, such as the Lambda function ARN.

The template that we will create in this tutorial can be used as a starting point to create the backend for any of your Alexa skills.
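A skeleton of such a template might look like the following. The runtime, names, and inline code are illustrative; a real skill would point `Code` at an S3 bundle instead of a one-line stub.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "FunctionName": { "Type": "String" }
  },
  "Resources": {
    "SkillFunctionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
          }]
        },
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        ]
      }
    },
    "SkillFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": { "Ref": "FunctionName" },
        "Runtime": "nodejs4.3",
        "Handler": "index.handler",
        "Role": { "Fn::GetAtt": ["SkillFunctionRole", "Arn"] },
        "Code": { "ZipFile": "exports.handler = function(e, c, cb) { cb(null, {}); };" }
      }
    },
    "AlexaPermission": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:InvokeFunction",
        "FunctionName": { "Ref": "SkillFunction" },
        "Principal": "alexa-appkit.amazon.com"
      }
    }
  },
  "Outputs": {
    "FunctionArn": { "Value": { "Fn::GetAtt": ["SkillFunction", "Arn"] } }
  }
}
```

The `AWS::Lambda::Permission` resource is what allows the Alexa service to invoke the function; the output ARN is what you paste into the developer console when configuring the skill.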

[Read More]

October 05, 2016

Liz Myers

Now that Alexa is multi-lingual, it’s a new day in Alexa skill making. Not only can you publish to customers around the globe, you can do so from a single code base.

In this article, we’ll review two concepts: 1) separating content from logic and 2) using the locale attribute to serve the right content to the right users.

Getting Organized

As an example, I’ve made a new skill: Classical Guitar Facts (using this template), which has content in both English and German. Although one might assume that I could get away with US English in the UK, differences in spelling and word choice will show up in the cards within the Alexa app, and this is not the best user experience. So, we’ll create content files in three separate folders, one per language, as shown below.

Create the Content Files

Moving the content out of the index.js file means that I’ve copied the FACTS array into a separate file and saved the file as de-facts.js, gb-facts.js, and us-facts.js respectively. Remember that the last item in the FACTS array does not have a comma at the end. Also remember the last line of this file, “module.exports = FACTS;”, otherwise the calling file (index.js) won’t be able to find it.

var FACTS = [
    "The strings of guitars are often called gut strings because…",
    "…",
    "…"
];

module.exports = FACTS;

Calling External Content

At the top of the index.js file, we need to declare the FACTS variable:

var FACTS = [ ];

so that we can call it later like this:

FACTS = require('./content/en-US/us-facts.js');

Of course, we can substitute en-US/us-facts.js with en-GB/gb-facts.js and de-DE/de-facts.js when needed. Now we’re well organized to swap separate content files based on language – but how do we know which language is calling our service?
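One simple way to sketch that decision, assuming the folder layout above and the `locale` field that arrives on each Alexa request (e.g. `en-US`, `en-GB`, `de-DE`):

```javascript
// Map the request locale to the matching content file; fall back to
// US English for any locale we haven't localized yet.
var contentFiles = {
    "en-US": "./content/en-US/us-facts.js",
    "en-GB": "./content/en-GB/gb-facts.js",
    "de-DE": "./content/de-DE/de-facts.js"
};

function contentFileFor(locale) {
    return contentFiles[locale] || contentFiles["en-US"];
}

// Inside the skill handler, the lookup becomes:
//   FACTS = require(contentFileFor(event.request.locale));
```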

[Read More]

October 03, 2016

David Isbitski

Today we are introducing the Flash Briefing Skill API, a new addition to the Alexa Skills Kit that enables developers to add feeds to Flash Briefing on Alexa, delivering pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.

The Flash Briefing Skill API is free to use. Get Started Now >

Creating Your Skill with the Flash Briefing Skill API

To get started, you’ll configure a JSON or RSS feed and submit descriptive information about your skill in the portal. This can be done through the following steps:

 1.  Register for a free Amazon Developer Account if you have not already done so, and navigate to the Alexa Skills Kit box in the Alexa menu here.

2.  Click on Add a New Skill

3.  Select Flash Briefing Skill API, fill out a name and then click Next.

4.  Unlike custom skills, the interaction model for Flash Briefing skills is generated automatically for you, so simply click Next.

5.  Now we will need to define our Content Feed(s). Your Flash Briefing Skill can include one or more defined feeds.

Then, click on the Add new feed button.

6.  You will then enter information about your content feed including name, how often the feed will be updated, the content type (audio or text), the genre, an icon as well as the URL for where you are hosting the feed.

7.  Repeat these steps for each feed you wish to include in the skill. The first feed you add will automatically be marked as the default feed. If you add more feeds, you can choose which feed is the default by selecting it in the Default column.

8.  Click Next when you are finished adding feeds and are ready to test your skill.
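For a text feed, a single JSON feed item looks roughly like this (values invented; audio items would carry a `streamUrl` for the hosted audio instead of relying on `mainText`):

```json
{
  "uid": "urn:uuid:11111111-2222-3333-4444-555555555555",
  "updateDate": "2016-10-03T14:00:00.0Z",
  "titleText": "Example headline",
  "mainText": "Text that Alexa reads aloud for a text feed item.",
  "redirectionUrl": "https://example.com/full-story"
}
```

The `uid` must be unique per item and `updateDate` tells Alexa which items are new since the customer last listened.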

For additional information check out the Steps to Create a Flash Briefing Skill page here.

[Read More]

September 30, 2016

Michael Palermo

Today we are happy to announce support for scenes, a new feature in Alexa skills developed with the Smart Home Skill API. With scenes, customers can issue a single voice command to an Alexa-enabled device such as the Amazon Echo or Echo Dot to set a predefined configuration of one or more devices to a desired state. For example, a customer could say, “Alexa, turn on bedtime,” resulting in specific lights turning off, a bedroom light changing color to a low-lit orange hue, a ceiling fan turning on, and the temperature changing to an ideal setting for sleep.

At first glance scenes might appear similar to the groups feature found in the Smart Home section of the Alexa app as both allow control over multiple devices with one voice command. However, scenes differ from groups in the following ways:

  • Scenes allow each device configured within it to be set to a desired state, whereas groups are stateless and simply turn devices on or off.
  • Scenes are configured by customers through a device manufacturer’s app, whereas groups are configured in the Alexa app.
  • Scenes only contain devices managed by the device manufacturer’s app, whereas groups can contain any device discovered in the Alexa app.

Scenes give customers an alternative to groups for controlling multiple devices. Customers may already have scenes configured in device manufacturer apps such as those provided by Control4, Crestron, Insteon, Lutron Caseta, SmartThings, or Wink. Prior to today, these scenes were invoked by using the device manufacturer’s app. Now customers can find these scenes listed as devices in their Alexa app after requesting device discovery, and can control them via voice interaction.

How Scenes Work

Figure 1: Scene control process

Once a customer has configured a scene through the device manufacturer’s app and requests a device discovery to Alexa, the scene name will appear in the device list in the Alexa app. Consider what happens from a developer perspective, when a voice command is made to turn a scene on.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on bedtime.”
  2. Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed including the ‘TurnOnRequest’ name in the directive header and the appliance ID (located in directive payload) corresponding to the friendly name of the scene “bedtime.”
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token to determine the customer’s account making the request. A call is made to device cloud API to turn on the scene matching the appliance ID for the associated customer.
  5. The device cloud (likely owned by the device maker) receives a request from the skill adapter, and communicates to a device hub or controller to turn on the scene preconfigured by the customer.
  6. The device hub sets the desired state of each device configured by the customer. Note in this “bedtime” example, turning on a scene may result in turning off a light, since this could be the desired state of that device for the scene.
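Steps 2–3 above can be sketched as the directive the skill adapter receives; the IDs and token values below are invented, and the shape follows the v2 Smart Home payload format.

```javascript
// Invented sketch of the directive delivered to the skill adapter when
// the customer says "Alexa, turn on bedtime".
var directive = {
    header: {
        namespace: "Alexa.ConnectedHome.Control",
        name: "TurnOnRequest",
        payloadVersion: "2",
        messageId: "example-message-id"
    },
    payload: {
        // Identifies the customer's account with the device cloud.
        accessToken: "example-oauth-token",
        // The appliance ID the Alexa service resolved from the friendly
        // name "bedtime" during discovery.
        appliance: {
            applianceId: "scene-bedtime-001"
        }
    }
};
```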
[Read More]

September 28, 2016

Michael Palermo

Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn how to respond to control directives in code to turn devices on or off, set temperature, and set percentages.

When you build a skill with the Smart Home Skill API, the ultimate goal is to control a device. That control can include turning a device on or off, setting a temperature, or setting a percentage, such as when you’re dimming a light bulb. This post will cover the general process of device control and teach the fundamentals by demonstrating control of the ‘on’ or ‘off’ state in code using Node.js.

This technical walkthrough is a continuation of a series of smart home skill posts focused on development. Please read and follow the instructions found below to reach parity.

How Device Control Works

Figure 1: Device control process

Once a customer has properly installed, configured, and discovered all smart home devices, verbal control commands can be issued to an Alexa-enabled device, such as the Amazon Echo. Consider what happens from a developer perspective when a control command is made, such as turning on a light.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on desk light.”
  2. Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed and contains, among other things, the ‘TurnOnRequest’ name in the directive header and the appliance ID matching the friendly name “desk light” in the payload.
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token to determine the customer’s account making the request. A call is made to device cloud API to turn on the device matching the appliance ID for the associated customer.
  5. The device cloud (likely owned by the device maker) receives a request from the skill adapter, and communicates in its own fashion to the device identified by appliance ID to turn on.
  6. The device (in this example, a desk light) turns on.
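A minimal Node.js sketch of steps 4–6 from the skill adapter's side follows; the device-cloud call is stubbed out, and in Lambda you would export the function as `exports.handler`.

```javascript
// Minimal sketch of a smart home skill adapter handling TurnOnRequest.
// The actual device-cloud call is stubbed out with a comment.
function handler(event, context, callback) {
    var header = event.header;
    if (header.namespace === "Alexa.ConnectedHome.Control" &&
        header.name === "TurnOnRequest") {

        // A real adapter calls the device cloud here, using
        // event.payload.accessToken to identify the customer and
        // event.payload.appliance.applianceId to identify the device.

        callback(null, {
            header: {
                namespace: "Alexa.ConnectedHome.Control",
                name: "TurnOnConfirmation",
                payloadVersion: "2",
                messageId: header.messageId
            },
            payload: {}
        });
    } else {
        callback(new Error("Unsupported directive: " + header.name));
    }
}
```

The confirmation name mirrors the request name: a `TurnOnRequest` is answered with a `TurnOnConfirmation` carrying the same message ID.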
[Read More]

September 27, 2016

Nathan Grice

In this post, Nathan Grice, Alexa Smart Home Solutions Architect, shows you how to reduce skill development time by debugging your skill code in a local environment. Learn how to step through your code line by line while preserving the roles and AWS services, like DynamoDB, used by the skill when running in AWS Lambda. Share your thoughts and feedback in this forum thread.

Amazon Alexa and the Alexa Skills Kit (ASK) enable developers to create voice-first interactions for applications and services. In this article, we will cover how to set up a local development environment using the Amazon Web Services (AWS) SDK for Node.js.

By following this tutorial, you’ll be able to invoke your AWS Lambda code as if it were called by the Alexa service. This will also allow you to interact with any other AWS services you may have added to your skill logic, such as Amazon DynamoDB. By the end of this post, you will be able to execute and debug all of your Alexa skill’s Lambda code from your local development environment.

Using the aws-sdk, you should also be able to call any dependent services in AWS as if the skill code were executing in AWS Lambda, by leveraging AWS roles. This way, you can be sure your code is working before deploying to AWS, which should decrease the cycle time for applying new changes. For example, suppose you want to persist something about users in a DynamoDB table, and until now the only way to test this was to run your code in Lambda. After this tutorial, you should be able to write to the remote DynamoDB table from your local environment.
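The core trick can be sketched in a few lines: call the exported handler directly with a hand-built event and a fake context, just as Lambda would. The handler below is an invented stand-in; for a real skill you would `require('./index').handler` instead.

```javascript
// Stand-in for your skill's exported handler; swap in
// require('./index').handler for a real skill.
function handler(event, context, callback) {
    callback(null, { spoken: "Hello, " + event.name });
}

// A minimal fake Lambda context; add whatever fields your code reads.
var fakeContext = {
    functionName: "my-skill-local",
    succeed: function (result) { console.log("succeed:", result); },
    fail: function (err) { console.error("fail:", err); }
};

// A hand-built sample event, like the test events you would configure
// in the Lambda console.
var sampleEvent = { name: "local developer" };

handler(sampleEvent, fakeContext, function (err, result) {
    if (err) { console.error(err); return; }
    console.log(result.spoken); // prints "Hello, local developer"
});
```

Run this under a debugger (such as the one built into Visual Studio Code) and you can set breakpoints anywhere in the skill logic instead of reading CloudWatch logs.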

First, let’s take a look at why you would want to streamline this process. The first time I developed a skill, I was not using an integrated development environment and almost all debugging information was obtained through log statements. This presents quite a few challenges from a developer’s point of view.

  1. Extra cycle time for adding functionality and logging to analyze the state of the program at any given moment.
  2. Uploading the new code to AWS Lambda is manual.
  3. Testing the code using various methods was cumbersome, including manually constructing an event in AWS, persisting as a test event, using the developer console, or by invoking my skill on my own Echo or Alexa-enabled device.
  4. Analyzing Amazon CloudWatch logs was taking too long to effectively iterate features.

I wanted a better way to execute and debug my code, but not lose any of the functionality of being constrained to a local environment. 

In the next section we will look at how to set up a local environment to debug your AWS Lambda code using Node.js, Microsoft's open-source Visual Studio Code editor, and the aws-sdk npm package. This tutorial covers the setup using Node.js, but the AWS SDK is available for Python and Java as well.

Setting up your environment

Install Node.js

Install Node.js via the available installer. The installation is fast and easy; just follow the prompts. For the purposes of this tutorial, I am on OS X, so I selected v4.5.0 LTS. There are versions available for Windows and Linux as well.



Install Microsoft Visual Studio Code

Repeat the process with Microsoft's Visual Studio Code. This tutorial uses Visual Studio Code, but other editors should work as well.

[Read More]

September 27, 2016

Michael Palermo

Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn the process of device discovery and how to support it in code for your smart home skill.

Developing a smart home skill is different from building a custom skill. One of the main differences is the dependency on devices to control. The device might be a light bulb, thermostat, hub, or any other device that can be controlled via a cloud-based API. Or maybe you created an innovative IoT gadget and you want to make it discoverable by an Alexa-enabled device. In this post, you will learn how the process of device discovery works, and how you can support discovery in your custom skill adapter communicating with the Smart Home Skill API.

To meet prerequisites and set the context of the technical information in this post, start by reading the five steps before developing a smart home skill and set up your initial code to support skill adapter directive communications. This post will be the next in the series of these posts and provides the foundation for code samples to follow.

Understanding the Customer’s Perspective of Device Discovery

To appreciate the role of device discovery, consider how a customer is involved in the process. The following steps assume a consumer has an Alexa-enabled device, such as the Echo or Echo Dot, already set up.

  1. Customer physically installs a smart home device and follows instructions, likely requiring the customer to create an account to log into an app or web site used to associate and control the device.
  2. Customer opens Alexa app and enables the smart home skill associated with device. Customer is immediately prompted to sign in with credentials used in previous step.
  3. Customer either selects the ‘discover devices’ link in the Smart Home section of the Alexa app, or verbally commands it by saying “Alexa, discover devices.”

Once the first step is completed, the customer is able to control the smart home device, typically through an app provided by the device maker, which is a graphical user interface that manages device and owner information in its own device cloud. The account created in the first step is the same account used in the second step when the consumer enables the associated smart home skill. This explains why account linking is mandatory for skills created with the Smart Home Skill API.

But what happens in the third step when the consumer makes a device discovery request? Does it actually scan for devices emitting some signal within the home? Is it querying everything it can within the local WiFi network? The answer to both questions is no. Although there are a couple of exceptions to enable early support of popular products such as Philips Hue and Belkin WeMo, the process described next is what is supported today and moving forward.

How Device Discovery Works

Figure 1: Device discovery process

When a request is made by the customer for devices to be discovered, the Alexa service identifies all the smart home skills associated with the consumer’s account, and makes a discover request to each one, as seen here.

Let’s examine each step above in more detail. Notice the first step is the same as the last step we covered when considering the customer’s perspective, so this is a deeper dive as to what happens next. Also observe in Figure 1 that no communications occur directly between the Amazon Echo and the smart home device.
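The response the skill adapter sends back for such a request can be sketched as follows (device values invented; the shape follows the v2 Discovery payload with its `discoveredAppliances` array):

```javascript
// Invented sketch of a discovery response: one appliance per device the
// customer's account can control through the device cloud.
var discoveryResponse = {
    header: {
        namespace: "Alexa.ConnectedHome.Discovery",
        name: "DiscoverAppliancesResponse",
        payloadVersion: "2",
        messageId: "example-message-id"
    },
    payload: {
        discoveredAppliances: [{
            applianceId: "desk-light-001",
            manufacturerName: "ExampleMaker",
            modelName: "Light-1",
            version: "1.0",
            friendlyName: "desk light",
            friendlyDescription: "Desk light connected via the ExampleMaker cloud",
            isReachable: true,
            actions: ["turnOn", "turnOff"]
        }]
    }
};
```

The `friendlyName` is what the customer will say ("Alexa, turn on desk light"), and the `actions` list tells Alexa which control directives the adapter supports for that appliance.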


September 20, 2016

Marion Desmazieres

Today, we’re excited to announce a new, free video course on Alexa development by A Cloud Guru, a pioneering serverless education company in the cloud space. Instructed by Ryan Kroonenburg, an Amazon Web Services (AWS) Community Hero, the “Alexa development for absolute beginners” course allows beginner developers and non-developers to learn how to build skills for Alexa, the voice service that powers Amazon Echo.

Here is what you can expect to learn in this two-hour course in 12 lessons:

  • This beginner guide to Alexa will walk you through setting up an AWS account, registering for a free Amazon Developer account, and then building and customizing two Alexa skills with templates available on GitHub.
  • The course also shows Mac users how to use the interactive story tool to create amazing interactive stories.
  • Finally, you will learn how to create your own mp3 files, where you narrate, and how to add background music and sound effects. You will see how to convert mp3 files to an Alexa-friendly format, put them on Amazon S3, and then reference them in the graphical user interface (GUI) using Speech Synthesis Markup Language (SSML).
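As a rough illustration of that last step, the sketch below builds the `outputSpeech` portion of a skill response that plays an S3-hosted MP3 via SSML's `<audio>` tag. The bucket and file name are made up, and the file must already be in Alexa's supported format (MP3 at a 16000 Hz sample rate, converted with a tool such as ffmpeg, per the SSML audio reference).

```python
def build_ssml_response(audio_url, followup_text):
    """Return the outputSpeech portion of an Alexa skill response
    that plays an audio clip, then speaks a follow-up line."""
    ssml = '<speak><audio src="{}"/> {}</speak>'.format(audio_url, followup_text)
    return {
        "outputSpeech": {
            "type": "SSML",  # plain-text responses use type "PlainText"
            "ssml": ssml,
        },
        "shouldEndSession": True,
    }

# Hypothetical S3 URL; Alexa requires the audio source to be served over HTTPS.
speech = build_ssml_response(
    "https://s3.amazonaws.com/example-bucket/intro.mp3",
    "Welcome back to the story.",
)
```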

“All in all, it's a great course and it’s even accessible to non-developers, mums and dads who haven’t used Alexa or Amazon Web Services before! We made this available to the general public and give them an everyday use case for AWS Lambda, Amazon DynamoDB, and S3. We can’t wait to see what people build for Alexa.” – Ryan Kroonenburg, instructor and founder of A Cloud Guru.

Watch the course for free today.

Dive Deeper with Alexa Development

A Cloud Guru also offers an extended version of the course. Cloud Solution Engineer Nick Triantafillou will teach you how to build your own Alexa device with a Raspberry Pi, a MicroSD card, a speaker, a USB microphone, and Alexa Voice Service. Learn how to make Alexa rap like Eminem, read Shakespeare, use iambic pentameter and rhyming couplets, and more. This five-hour video course in 47 lessons also covers additional skill templates available on GitHub to customize and build new capabilities for Alexa.

Watch the extended course.

Check out these Alexa developer resources:



September 15, 2016

Robert McCauley

We teamed up with hack.guides() to bring you a Tutorial Contest in June. Hack.guides() is a community of developers focused on creating tutorials to help educate and share technical knowledge. The purpose of the contest was to provide developers the opportunity to share knowledge, help other developers, contribute articles to an open-source project, and win a prize along the way.

Today we’re excited to announce the winner of the hack.guides() tutorial contest.

Winner: Control your fish tank from anywhere in the world with Alexa voice control

Alexa developer “piratemrs” built a tutorial that outlines how to build a working, voice-controlled device that can feed pet fish while you are away. The tutorial helps developers learn three broad technical areas: hardware, AWS, and Alexa.

Both cloud and hardware technologies were integrated to build this project. The tutorial starts with a lesson on how to add external circuits and motors (servos) to a Raspberry Pi computer. Next, the tutorial steps through how to create an AWS Lambda function and Alexa skill. Finally, the skill and Raspberry Pi system are tied together via a configuration guide using the AWS IoT service. At the end, piratemrs says “Alexa, ask fish tank to feed the fish” and a custom Alexa skill activates a small motor to shake some food into the fish tank. 
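That final wiring step can be pictured with a short sketch of the Lambda side: build a command message and publish it to an AWS IoT topic that the Raspberry Pi subscribes to. The topic name and payload shape here are illustrative, not taken from the tutorial.

```python
import json

FEED_TOPIC = "fishtank/commands"  # hypothetical MQTT topic the Pi subscribes to

def build_command(action, portions=1):
    """Serialize a device command as the JSON payload for the MQTT message."""
    return json.dumps({"action": action, "portions": portions})

def publish_feed_command():
    # Deferred import so the command builder is usable without AWS credentials;
    # "iot-data" is the boto3 client for publishing to AWS IoT topics.
    import boto3
    client = boto3.client("iot-data")
    client.publish(topic=FEED_TOPIC, qos=0, payload=build_command("feed"))
```

On the Raspberry Pi, an MQTT subscriber on the same topic would parse the JSON and pulse the servo, which is the role the AWS IoT configuration guide in the tutorial fills.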

The tutorial does a great job of breaking down components into separate sections and includes YouTube videos to show the results of testing each piece of the solution. Watch the videos and focus on testing and understanding each component of the solution before moving on.

Read the full tutorial to learn how you can build your own voice-controlled system to feed your fish, control your fish tank lights remotely, and more.

Honorable mentions

We’d like to thank all the participants who created Alexa tutorials for this contest. The high quality of submissions made selecting a winner a difficult decision. Tutorial submissions were scored using the contest rules provided by hack.guides(), including writing style, communication ability, effective use of technologies/APIs, and overall quality. Here are some honorable mentions.

Alexa, run this JavaScript app

This tutorial shows you how to design, build, and test an Alexa skill that implements an adventure game. If you are an experienced Node.js developer, but new to Alexa, you will appreciate the thorough breakdown of the ASK functionality and recommended project structure. Read more

Build your first Alexa skill

This tutorial shows you how to navigate the Amazon developer screens and create your first Alexa skill. If you are a novice developer, you will appreciate the clear screenshots and fun animated GIFs that appear throughout the text. Read more.

Get Started with the Alexa Skills Kit

To get started, we’ve created easy-to-use skill templates that show new developers the end-to-end process of building an Alexa skill. Visit our trivia game, fact skill, how-to skill, flash cards skill, and user guide skill tutorials.

Or check out these Alexa developer resources:


