Alexa Blogs

August 24, 2016

David Isbitski

Before today, the Alexa Skills Kit supported short audio clips via SSML audio tags in your skill responses. Today we are excited to announce that we have added streaming audio support for Alexa skills, including playback controls. This means you can easily create skills that play back audio content like podcasts, news stories, and live streams.

New AudioPlayer and PlaybackController interfaces provide directives and requests for streaming audio and monitoring playback progression. With this new feature, your skill can send audio directives to start and stop the playback. The Alexa service can provide your skill with information about the audio playback’s state, such as when the track is nearly finished, or when playback starts and stops. Alexa can also now send requests in response to hardware buttons, such as those on a remote control.

Enabling Audio Playback Support in Your Skill

To enable audio playback support in your skill, turn on the Audio Player functionality and handle the new audio intents. Navigate to the Alexa developer portal and do the following:

  • On the Skill Information page in the developer portal, set the Audio Player option to Yes.
     
  • Include the required built-in intents for pausing and resuming audio in your intent schema and implement handlers for them:
    • AMAZON.PauseIntent
    • AMAZON.ResumeIntent

  • Return the AudioPlayer.Play directive from one of your intents to start audio playback

  • Handle AudioPlayer and PlaybackController requests and respond where appropriate (a minimal handler sketch follows this list)
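
For illustration, here is a minimal sketch of a raw AWS Lambda handler (Node.js) that returns an AudioPlayer.Play directive when a hypothetical PlayPodcastIntent arrives; the intent name, stream URL, and token are placeholders, not values from a real skill:

  // Minimal sketch of a raw Lambda handler that starts audio playback.
  // PlayPodcastIntent, the stream URL, and the token below are placeholders.
  exports.handler = function (event, context) {
    if (event.request.type === 'IntentRequest' &&
        event.request.intent.name === 'PlayPodcastIntent') {
      context.succeed({
        version: '1.0',
        response: {
          outputSpeech: { type: 'PlainText', text: 'Playing the latest episode.' },
          shouldEndSession: true,
          directives: [{
            type: 'AudioPlayer.Play',
            playBehavior: 'REPLACE_ALL',            // start a new playback queue
            audioItem: {
              stream: {
                url: 'https://example.com/episodes/latest.mp3',  // must be an HTTPS URL
                token: 'episode-latest',                         // identifies this stream in later requests
                offsetInMilliseconds: 0
              }
            }
          }]
        }
      });
    } else {
      context.succeed({ version: '1.0', response: { shouldEndSession: true } });
    }
  };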

In addition to the required built-in intents, your skill should gracefully handle the following additional built-in intents:
 

  • AMAZON.CancelIntent
  • AMAZON.LoopOffIntent
  • AMAZON.LoopOnIntent
  • AMAZON.NextIntent
  • AMAZON.PreviousIntent
  • AMAZON.RepeatIntent
  • AMAZON.ShuffleOffIntent
  • AMAZON.ShuffleOnIntent
  • AMAZON.StartOverIntent

Note: Users can invoke these built-in intents without using your skill’s invocation name. For example, while in a podcast skill you create, a user could say “Alexa, next” and your skill would play the next episode.

If your skill is currently playing audio, or was the skill most recently playing audio, these intents are automatically sent to your skill. Your code needs to expect them and not return an error. If any of these intents does not apply to your skill, handle it in an appropriate way in your code. For instance, you could return a response with text-to-speech indicating that the command is not relevant to the skill. The specific message depends on the skill and on whether the intent is one that might make sense at some point. For example (a minimal handler sketch follows these examples):
 

  • For a podcast skill, the AMAZON.ShuffleOnIntent intent might return the message: “I can’t shuffle a podcast.”
  • For version 1.0 of a music skill that doesn’t yet support playlists and shuffling, the AMAZON.ShuffleOnIntent intent might return: “Sorry, I can’t shuffle music yet.”
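
As an illustration of that kind of graceful response, a podcast skill’s handler for AMAZON.ShuffleOnIntent might look like the following sketch (the wording is just an example):

  // Sketch: respond gracefully to a built-in intent the skill cannot support.
  function handleShuffleOnIntent(event, context) {
    context.succeed({
      version: '1.0',
      response: {
        outputSpeech: { type: 'PlainText', text: "I can't shuffle a podcast." },
        shouldEndSession: true
      }
    });
  }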


Note: If your skill uses the AudioPlayer directives, you cannot extend the above built-in intents with your own sample utterances.

[Read More]

August 22, 2016

Thom Kephart

In the coming weeks, we’ll be participating in a variety of events and we’d love to meet you. Get hands-on experience building an Alexa skill at a hackathon, attend a presentation at smart home events, join the conversation at select conferences, or connect with fellow developers at a local meetup.

Hackathons

Led by Alexa Solutions Architects and Developer Evangelists, hackathons are a great way to get the hands-on experience of building and testing an Alexa skill. 

Galvanize Skill Building Workshop | August 25, 6:30-8:30 p.m. PT
Seattle, WA

The Galvanize workshop is intended for software and hardware developers interested in voice control, home automation, and personal assistant technology. We will walk through the development of a new Alexa skill and incorporate it into a consumer-facing device.

/hack San Francisco | August 27, 10:00 a.m. PT – August 28, 5:30 p.m. PT
San Francisco, California

/hack (slash hack) is the premier hackathon by hackers, for hackers. Three hundred hackers will compete in the 24-hour hackathon held in San Francisco. Hackers at all levels – working professionals, college students, or even high school students – can learn from students, CTOs, architects, and more.

Amazon Alexa Virtual Hackathon Decision Tree | August 31, 11:00 a.m. PT – 12:00 p.m. PT
Virtual Webinar

In this hour-long webinar we will build an Alexa skill using the Decision Tree skill template. The template makes it easy for developers and non-developers to create skills that ask you a series of questions and then give you an answer. This is a great starter for simple adventure games and magazine style quizzes like ‘what kind of job is good for me?’ We will use AWS Lambda and the Alexa Skills Kit, and provide built-in business logic, use cases, error handling, and help functions for your new skill. Simply come up with the idea before we begin and we will help you build it.

TechCrunch Disrupt San Francisco Hackathon | September 10, 12:30 p.m. PT – September 11, 2:00 p.m. PT
San Francisco, California

Preceding the Disrupt Conference is Hackathon weekend, where developers and engineers descend from all over the world to take part in a 24-hour hacking endurance test. Teams join forces to build a new product and present it on the Disrupt stage to a panel of expert judges and an audience of tens of thousands.

Code District LA Bootcamp | September 13, 6:00 p.m. PT
Torrance, California

This free workshop is intended for anyone interested in learning how to program voice-controlled devices. Join Solutions Architect Liz Myers to learn about Alexa skills development.

[Read More]

August 18, 2016

Michael Palermo

Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. Learn what steps you should take before developing with the Smart Home Skill API.

Before creating a skill, you must first determine if you want to build a custom skill or a smart home skill. Most skills are built using the Alexa Skills Kit (ASK) and are broadly known as custom skills. However, if the end goal is to enable a skill to have voice control over a device or appliance in the home, you will want to develop a smart home skill using the Smart Home Skill API.

Smart home skills differ from custom skills in these ways:

  • With custom skills, you must build a voice interaction model to handle customer requests. Smart home skills use Amazon’s standardized language model so you don’t have to build the voice interaction model. As such, your customers don’t need to remember your skill name or a specific invocation phrase. Furthermore, customers who already control devices using Alexa with other smart home skills already know how to use your smart home skill, and will also enjoy a consistent, first class experience with your devices.
  • The smart home skill adapter must be hosted in an AWS Lambda function, whereas custom skills can be hosted in either AWS Lambda or another cloud-based hosting service.
  • You must implement account linking using OAuth 2.0 for smart home skills, while this is optional for custom skills.
  • Since smart home skills target connected devices in the home, you must have access to a cloud-based API to handle device discovery and control.

With the above differences in mind, creating a smart home skill may require more upfront work than creating a custom skill. However, with a smart home skill you don’t need to define and maintain a language model yourself, and as Alexa improves its language model overall, you get the improvements for free.

Meeting the Smart Home Skill API Prerequisites, Step by Step

Follow these five steps to confirm you have what you need and to meet the Smart Home Skill API prerequisites before developing a smart home skill:
 

  1. Choose an OAuth 2.0 provider for account linking
  2. Initialize creation of the smart home skill
  3. Create an AWS Lambda function (a minimal handler sketch follows this list)
  4. Configure the smart home skill
  5. Finalize account linking and confirm
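
To give a sense of what step 3 involves, here is a purely illustrative sketch of the skill adapter Lambda function (Node.js). It routes smart home requests by namespace; the device data is a placeholder and real payloads must follow the Smart Home Skill API reference:

  // Illustrative sketch of a smart home skill adapter hosted in AWS Lambda.
  exports.handler = function (event, context) {
    var namespace = event.header.namespace;

    if (namespace === 'Alexa.ConnectedHome.Discovery') {
      // Tell Alexa which devices this skill adapter controls.
      context.succeed({
        header: {
          namespace: 'Alexa.ConnectedHome.Discovery',
          name: 'DiscoverAppliancesResponse',
          payloadVersion: '2',
          messageId: event.header.messageId   // placeholder; generate a fresh ID in real code
        },
        payload: { discoveredAppliances: [ /* devices returned by your device cloud */ ] }
      });
    } else if (namespace === 'Alexa.ConnectedHome.Control') {
      // Call your cloud-based device API here, then confirm the action back to Alexa.
      context.succeed({
        header: {
          namespace: 'Alexa.ConnectedHome.Control',
          name: 'TurnOnConfirmation',
          payloadVersion: '2',
          messageId: event.header.messageId   // placeholder
        },
        payload: {}
      });
    } else {
      context.fail('Unsupported namespace: ' + namespace);
    }
  };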
     

Step 1: Choose OAuth 2.0 Provider for Account Linking

Sebastien Stormacq recently authored an excellent blog post providing step-by-step guidance on implementing account linking for custom skills. While much of the information in that post applies here too, there are some differences in implementation details. This post provides complete guidance for setting up account linking for smart home skills, with some admitted overlap with Sebastien’s post. For a good overview of OAuth 2.0 and the available options, you may want to read the first two sections of his post and then resume here.

If you have already created or chosen a specific OAuth 2.0 provider, you can proceed to the next step. If not, the remainder of this step will show how to fulfill this requirement by using Login with Amazon (LWA).

First, you need to create an LWA security profile. Here’s how:
 

  • Connect to https://developer.amazon.com and authenticate with your Amazon credentials.
  • Click on “Apps & Services”, then “Login with Amazon”
  • Click on “Create a New Security Profile”

 

Figure 1 : Access Login With Amazon.

Fill in all three required fields to create your security profile and click “Save”. For the purposes of this post, the privacy URL points to Amazon’s privacy policy. Make sure to replace it with a link to your own data privacy policy.

[Read More]

August 17, 2016

Marion Desmazieres

We are excited to launch a recognition program that honors the most engaged developers and contributors in the community. These individuals are educating and inspiring other developers in the community online and offline. They are actively and independently sharing their passion and knowledge of Alexa with the community. We’re proud to call them our “Alexa Champions”.

Today we recognize the initial group of ten Alexa Champions and showcase their contributions to the Alexa community in a dedicated gallery. We thank them for all the knowledge they have shared with others and for the tools they have created to make it easier for developers to use the Alexa Skills Kit (ASK) and Alexa Voice Service (AVS).

Meet the Alexa Champions

Join me in welcoming the Alexa Champions:

  • April Hamilton was one of the first developers to join the private beta of the Alexa Skills Kit and to get skills certified in 2015. She curates the LoveMyEcho.com blog daily and shares tips and tricks with developers in her weekly ASK Dev Tuesday series. Learn more about April.
  • Brian Donohue started a local meetup group for Alexa enthusiasts and developers in New York which now counts over 400 members. For the first event, he created a “Hello world” template to show attendees how to build their first Alexa skill. Learn more about Brian.
  • Eric Olson a.k.a. Galactoise is one of the most active contributors in the Alexa forums with over 280 reputation points. He co-created the Alexa Skills Kit Responder that lets you mock skill responses to your Echo and gives you the ability to rapidly validate your content. Learn more about Eric.
  • John Wheeler is the creator of Flask-Ask, an Alexa Skills Kit Framework for Python that enables rapid skill development. He also created AlexaTutorial.com, a resource for leveling-up quickly with Flask-Ask and the Alexa Skills Kit. Learn more about John.
  • Mark Carpenter has been publishing the ASK Dev Weekly newsletter since September 2015. He was the architect of the Alexa Project curriculum that is offered to Bloc bootcamp students. He publishes the Alexa Skill of the Day apps which surface one exemplary Alexa skill each day. Learn more about Mark.
  • Matt Kruse created the alexa-app framework and the alexa-app-server container for hosting javascript-based skills. He also published open-source code on GitHub for integration with IFTTT and a “find my iPhone” skill using the find-my-iphone module. Learn more about Matt.
  • Rick Wargo released the alexa-skill-template, a Node.js development environment for Alexa skills authored in JavaScript and hosted locally for testing and in AWS Lambda for production, with support for DynamoDB. He’s an active participant in other open source Alexa projects. Learn more about Rick.
  • Sam Machin got started with Alexa at the BattleHack world finals in November 2015. He published several tutorials on GitHub to help teach others how to turn a RaspberryPi or a CHIP into an Alexa client with the Alexa Voice Service. His alexaweb project was the inspiration for Echosim.io. Learn more about Sam.
  • Steven Arkonovich was an Alexa enthusiast from the very beginning, writing Alexa skills before there even was ASK. He developed a Ruby framework for quickly creating Alexa skills as web services. He is one of the most active contributors in the Alexa forums. Read more about Steven.
  • Walter Quesada created a video course for Pluralsight that teaches the foundations of developing voice-enabled skills for Echo and building custom Alexa skills in C# and ASP.NET Web API. He also talked about Alexa skill development at numerous tech events. Learn more about Walter.

Get involved

There are many ways you can share educational and inspiring content about AVS and ASK with the Alexa community through your own blog or newsletter, open-source development tools, tutorials, videos or podcasts and social media. You can also organize local meetup groups for like-minded Alexa enthusiasts and developers.

[Read More]

August 17, 2016

Zoey Collier

When April L. Hamilton first saw the Amazon Echo in 2014, she knew it was the future. As an Amazon Prime member, April had an early preview of the new device. She immediately knew Echo would be the theme of her next website and blog.

Amazon Echo users and Alexa skill developers will likely know April from her website, www.lovemyecho.com. The site has become a collection of Alexa developer resources, “how-to” articles, news features, and even downloadable Bingo-cards (more on that later).

How does someone become a noted authority on a new, rapidly-evolving technology like Alexa, in such a short time? We sat down with April to find out.

An early adopter knows what she wants

The internet of things enthralled both the technologist and app developer in April. As a blogger, she wanted in on the ground floor with something that had real potential. When Amazon offered its Prime members the chance to pre-order Echo, she knew the internet of things had finally arrived.

“I saw Amazon getting on board with the release of Echo, and I said this is it. Amazon is one of the only companies with the vision, consumer knowledge, tech resources, and dedication to really make it happen.”

Next, she had to figure out how to set up a consumer blog for something so new. April knew Echo would excite consumers and developers alike, and they’d have plenty of questions. With her programming and writing background, she wanted to be the one to answer them.

First a developer, then a blogger

April knew it would be tough writing with authority about something so cutting-edge, but she wasn’t afraid to learn. In fact, April says her prime motivator was the sheer joy of learning about a new technology. So she signed up for Amazon’s Developer Day in early 2015 to get some hands-on experience with the device.

She likened the thrill she felt to when she developed apps for the first smartphones and tablets—but with a twist. “I was a mobile app developer before. Echo needed a unique type of ‘app’. So I thought, what better way to learn about it than to develop skills for it?”

She wondered what skill she could create that consumers would enjoy. Beyond that, she wanted to build an Alexa skill that would intrigue her colleagues, so they too could see the potential of Amazon Echo.

[Read More]

August 10, 2016

Amit Jotwani

Danny Dong, co-founder and CEO of iMCO, wondered what life would be like if we didn’t have to rely on screens to get the information we want and need. After witnessing countless people checking their phones while on the go, he began to redefine how people carry technology with them every day.

He set out to create the CoWatch, the world’s first Amazon Alexa-integrated smartwatch, to help break the habit of constantly checking a screen for updates. Now, users can ask for the latest weather reports, check the time, and set reminders with their voice and Alexa.

Learn how Dong and his partners worked with Alexa Voice Service (AVS) to enable the CoWatch to access a growing number of Alexa skills through the Alexa Skills Kit.

The idea

By early 2016, Dong had a working smartwatch prototype. It had a beautiful design, but it needed an OS to bring it to life. That’s when Dong contacted Leor Stern, co-founder and CEO of Cronologics Corporation.

Stern’s team, assembled from Android and Google veterans, had built a new software platform for smart wearables that helps amplify and support smartphone features. “We built Cronologics platform to unlock the true potential of the smartwatch. We don’t see [a smartwatch] as a tiny, limited extension of a smartphone. It’s an independent device with smart features of its own, not dependent on a smartphone.”

Shortly after Dong partnered with the Cronologics team, Stern began discussions with Amazon about how to bring the power of Alexa to the smartwatch. Dong knew he wanted to create a completely new type of smartwatch—one that gives users a choice of how they consume information, either on a screen or by voice.

“We’d been talking to Amazon for a few months, bouncing around ideas like how cool it would be to have Alexa on a watch,” Stern said. “We were also talking with iMCO about the CoWatch when AVS came out. It fit beautifully into our plans.”

The Cronologics platform makes it easy to scroll through messages with a few taps, no phone required. And with Alexa, customers can ask for the latest scores and weather reports, or set reminders and alarms. Now, almost anything you can do with Alexa on an Echo, you can do with CoWatch.

[Read More]

August 04, 2016

Amit Jotwani

We are excited to announce a new addition to the Alexa family—Nucleus.

Nucleus is an Alexa-enabled connected home intercom system designed to bring families closer together by giving people the ability to make room-to-room, home-to-home and mobile-to-home calls. The average family today is spread across geographies and constantly caught up in daily responsibilities. Nucleus aims to redefine family communication by making it instantaneous. Now you can quickly video chat with grandma on her Nucleus from your smartphone as you’re leaving the office, or never miss family dinner when you’re traveling away from home. 

The Alexa integration on Nucleus makes it easy to check the latest weather report or add items to your shopping list from anywhere in the house. You can talk to Alexa hands-free through Nucleus using the “Alexa” wake word or tap-to-talk using the button on the screen. Simply say, “Alexa, play Adele” or “Alexa, add milk to my list.” You also have access to a growing number of Alexa skills, built by developers using the Alexa Skills Kit, including smart home controls through SmartThings, Insteon and Wink.

You can purchase Nucleus on Amazon.com.

Getting Started with AVS

Developing the next breakthrough consumer tech product? Learn how AVS can help you add rich and intuitive Alexa-enabled experiences to your connected devices, services or applications.

[Read More]

August 03, 2016

Sebastien Stormacq

Discover how to use account linking with Login with Amazon to seamlessly integrate your Alexa skills with third-party applications. Get step-by-step instructions from Sebastien Stormacq, Sr. Solutions Architect at Amazon.

How Account Linking Enhances Alexa Skills

Some skills require the ability to connect the identity of an Alexa end user with a user in another system, such as Twitter, Facebook, Amazon, and many others. For example, suppose you own a web-based service “Car-Fu” that lets users order taxis. It would be very convenient for people to access Car-Fu by voice (“Alexa, ask Car-Fu to order a taxi”).

To accomplish that, you’d use a process called account linking, which provides a secure way for Alexa skills to connect with third-party systems requiring authentication.

Skills that use the Smart Home Skill API must use account linking (with the authorization code grant flow) to connect the Alexa user with their device cloud account. Custom skills can use account linking if desired. However, if your custom skill merely needs to keep track of a user to save attributes between sessions, you do not need to use account linking.

There are many ways you can use account linking to enhance your Alexa skills. For example:

  • You can map the linked user’s profile to an existing user in your user database, using the email address as the key. This allows you to create a contextual skill that behaves according to your user’s preferences and history (see the sketch after this list).
  • You can decide what authorizations this user will have in your system.
  • You can use services such as Amazon Cognito to acquire an AWS Access Key and Secret Key to interact with AWS Services such as Amazon DynamoDB.
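
As a sketch of the first idea, a custom skill’s Lambda function can use the access token that Alexa passes along after account linking to look up the user’s Login with Amazon profile (error handling omitted for brevity):

  // Sketch: fetch the linked user's LWA profile from a custom skill's request.
  var https = require('https');

  function fetchAmazonProfile(event, callback) {
    var token = event.session.user.accessToken;   // present only once the user has linked accounts
    if (!token) {
      return callback(new Error('Account is not linked yet'));
    }
    https.get('https://api.amazon.com/user/profile?access_token=' + encodeURIComponent(token),
      function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
          var profile = JSON.parse(body);   // contains user_id, name, and email
          callback(null, profile);          // e.g., use profile.email as the key into your user database
        });
      });
  }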

The Basics of Account Linking with the Login with Amazon (LWA) Service

Account linking leverages OAuth 2.0, an open protocol that provides a simple, standards-based method for web, mobile, and desktop applications to request user authorization from remote servers.

As a skill developer, you could set up and configure your own OAuth server and identity management system. At many large companies, an OAuth server is probably already available and identity management procedures are already in place. At smaller companies, however, this would require you to build, operate, and maintain your own complex system to manage user identities, passwords, and profiles in a secure and scalable way.

Many organizations rely instead on well-known identity providers, available on the internet. These are sites where nearly everyone has an account, such as Facebook, Google, Twitter, and Amazon. The service that acts as a public-facing identity provider for Amazon is Login with Amazon.

When using OAuth, you delegate user authentication to a third-party Identity Provider (IDP). As illustrated below, the user is redirected to the IDP web site. User authentication happens according to the IDP’s policies (username and password, one-time password, biometric, etc.), and upon successful authentication, the IDP generates an implicit grant (aka bearer token) or an authorization code grant.

The bearer token is the token you’ll use for accessing information and services. An authorization code, on the other hand, can only be used to request a bearer token; this exchange usually happens on the backend, between your application server and the IDP service. While an implicit grant is often faster and simpler for developers to request, an authorization code grant is generally considered more secure, and some IDPs may require it for sensitive information or services. A code grant also allows the bearer token to be refreshed automatically after a given expiry, which is set according to the IDP’s policy. With an implicit grant, the user has to manually re-authenticate when the bearer token expires, which, depending on the token’s lifespan, can cause friction for account linking in applications.

After authentication is complete and a valid token is received, your application is responsible for managing authorization based on the customer's profile.

Figure 1 : OAuth data flow

Account Linking, Step by Step

Follow these steps to configure your Alexa skills with account linking and Login with Amazon.

[Read More]

August 02, 2016

Robert Jamison

Today, we're pleased to make a tool with source code available to allow you to graphically design interactive adventure games for Alexa. Interactive adventure games represent a new category of skill that allows customers to engage with stories using their voice. With these skills, you can showcase original content or build compelling companion experiences to existing books, movies and games. For example, in The Wayne Investigation skill (4.7 stars, 48 reviews), you’re transported to Gotham City a few days after the murder of Bruce Wayne’s parents. You play the part of a detective, investigating the crime and interrogating interesting characters, with Alexa guiding you through multiple virtual rooms, giving you choices, and helping you find important clues. The Magic Door, an original adventure series for Alexa, enables you to tell Alexa what choices to make as you navigate a forest, a garden or an ancient temple. Learn more about game skills on Alexa.

The tool provides an easy-to-use front end that lets you instantly deploy code for your story, or use the generated code as a starting point for more complex projects. It was written in Node.js by Thomas Yuill, a designer and engineer on the Amazon Advertising team. The tool is available now as a GitHub project: https://github.com/alexa/interactive-adventure-game-tool

If you want to get started quickly, you can use our Trivia or Decision Tree skill templates, which make it easy for developers or non-developers to create game skills similar to “European Vacation Recommender” or “Astronomy Trivia.” The templates leverage AWS Lambda and the Alexa Skills Kit (ASK) while providing the business logic, use cases, error handling, and help functions for your skill. You just need to come up with a decision tree-based idea or trivia game, plug in your questions, and edit the sample provided (we walk you through how it’s done). It’s a valuable way to quickly learn the end-to-end process for building and publishing an Alexa skill.

[Read More]

August 02, 2016

Emily Roberts

For inspiration on developing innovative Alexa skills, check out the Wayne Investigation, a skill developed by Warner Bros. to promote the recently released Batman v Superman: Dawn of Justice feature film. In this audio-only, interactive adventure game, you’re transported to Gotham City a few days after the murder of Bruce Wayne’s parents. You play the part of a detective, investigating the crime and interrogating interesting characters, with Alexa guiding you through multiple virtual rooms, giving you choices, and helping you find important clues.

The game, created using the Alexa Skills Kit, is a collaboration between Amazon, Warner Bros., head writers at DC Comics, and Cruel & Unusual Films (the production house run by Batman v Superman’s director Zack Snyder and executive producers Debbie Snyder and Wes Coller). With these companies behind the game and its affiliation with a superhero film franchise, it’s not surprising that The Wayne Investigation was a big hit.

But it’s become enormously popular in its own right. Launched on March 1, this was the first Alexa skill to combine Alexa technology with produced audio assets—namely, compelling music and sound effects—and the response has been extraordinary. During its first week, The Wayne Investigation was engaged 7x more (per weekly average) than all other skills combined. Currently The Wayne Investigation rates in the top 5% of skills (earning 4.8 out of 5 stars) and is the #1 skill for both total time spent engaging with the skill and average time spent per user.

The team scripted the experience by building it around a gaming map with directions and actions in each room. Once the script was finalized, they used a decision tree model to translate the experience into code, which is hosted in AWS. From three starting actions, users can make up to 37 decisions, each taking the user down paths that lead to new and iconic Gotham characters and locations before completing the game. An efficient (and lucky) walkthrough of the Wayne Investigation takes 5 to 10 minutes, but fans who want to explore every nook and cranny can spend as long as 40 minutes in this Gotham City.
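
A decision tree like this maps naturally onto a very simple data structure. The sketch below is purely illustrative (it is not The Wayne Investigation’s actual code): each node pairs a prompt with the choices that lead to the next nodes.

  // Illustrative decision-tree structure for a voice adventure (not the actual skill code).
  var scenes = {
    'wayne-manor': {
      prompt: 'You are standing in Wayne Manor. Do you go to the study or the garden?',
      choices: { 'study': 'study', 'garden': 'garden' }
    },
    'study': {
      prompt: 'A letter lies on the desk. Do you read it or leave?',
      choices: { 'read it': 'letter', 'leave': 'wayne-manor' }
    }
    // ...more scenes, each reachable from an earlier choice
  };

  // Given the current scene and the player's spoken choice, find the next scene.
  function nextScene(currentId, choice) {
    var scene = scenes[currentId];
    return scene && scene.choices[choice.toLowerCase()];
  }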

An added benefit of creating the Wayne Investigation skill is that it led to the creation of a tool that allows developers to graphically design interactive adventure games. Today, we’re pleased to announce that we’ve made a tool with source code available to make it easier for the Alexa community to create similar games.

To experience the skill, simply enable it in your Alexa companion app and then say, “Alexa, open the Wayne Investigation.”

 

August 02, 2016

Noelle LaCharite

We are excited to introduce a new way to help you quickly build useful and meaningful skills for Alexa. The new Decision Tree skill template makes it easy for developers and non-developers to create skills that ask you a series of questions and then give you an answer. This is a great starter for simple adventure games and magazine style quizzes like ‘what kind of job is good for me’.  This template leverages AWS Lambda and the Alexa Skills Kit, and provides built-in business logic, use cases, error handling, and help functions for your new skill. Simply come up with the idea, plug in your decision tree content, and edit the sample provided. Follow this tutorial and we'll show you how it's done.

Using the Alexa Skills Kit, you can build an application that can receive and respond to voice requests made to Alexa. In this tutorial, you’ll build a web service to handle requests from Alexa and map this service to a skill in the Amazon Developer Portal, making it available on your Echo, Alexa-enabled device, or Echosim.io for testing, and to all Alexa users after publication.
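
As a rough sketch (not the template itself), the web service behind a skill is simply a function that receives JSON requests from Alexa and returns JSON responses, routed by request type:

  // Minimal sketch of a Lambda entry point for a custom skill.
  exports.handler = function (event, context) {
    var type = event.request.type;

    if (type === 'LaunchRequest') {
      respond(context, 'Welcome. Answer a few questions and I will suggest an answer.', false);
    } else if (type === 'IntentRequest') {
      // Route to your per-intent logic here; the Decision Tree template supplies this for you.
      respond(context, 'Let us begin. Question one...', false);
    } else if (type === 'SessionEndedRequest') {
      context.succeed();   // no response is returned for session-ended requests
    }
  };

  function respond(context, text, endSession) {
    context.succeed({
      version: '1.0',
      response: {
        outputSpeech: { type: 'PlainText', text: text },
        shouldEndSession: endSession
      }
    });
  }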

When finished, you'll know how to:

  • Create a skill - This tutorial will walk Alexa developers through all the required steps involved in creating a skill. No previous experience required.
  • Design a Voice User Interface - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit for certification. For guidance on designing a voice experience with Alexa you can also watch this video.
  • Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.
  • Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending your skill through the publication process to make it available for any Alexa user to enable.

You will also need an AWS account and an Amazon Developer account. To get a refresher on how to do this, or if you are new to skill development, you can visit our training page to review our past tutorials.

[Read More]

August 01, 2016

Paul Cutsinger

Today’s guest blog post is from Troy Petrunoff, content strategist at AngelHack. Amazon works with companies like AngelHack who are dedicated to advancing the art of voice user experience through hackathons.

This year Amazon Alexa teamed up with AngelHack, the pioneers of global hackathons, for their ninth Global Hackathon Series. Since 2011, the series has exposed over 100,000 developers from around the world to new technologies from sponsors ranging from small startups to large corporations. Amazon Alexa joined the fun this year at nine AngelHack events, sending Solutions Architects and Amazon Echo devices to give talented developers, designers, and entrepreneurs the chance to learn about Alexa technology. Thirty-two teams incorporated Alexa technology into their projects.

At three of the nine events Amazon Alexa sponsored, the grand prize winners won using Alexa. Winning the AngelHack Grand Prize earned these teams an exclusive invite into the AngelHack HACKcelerator program. AngelHack’s invite-only HACKcelerator program connects ambitious developers with thought leaders and experienced entrepreneurs to help them become more versatile, entrepreneurial, and successful. The program gives developers of promising hackathon projects the opportunity to listen and talk to some of the biggest players in the Silicon Valley tech scene on a weekly basis, while providing them with the resources to successfully transition their hackathon project into a viable startup with early traction.

In addition to the grand prize, the Amazon Alexa team offered a challenge at each AngelHack event: best voice user experience using Amazon Alexa. In addition to the three grand prize winning teams, two Alexa Challenge winners will also receive an invite into the HACKcelerator program. Teams participating in the HACKcelerator will be provided with mentorship and other resources to prepare them for the Global Demo Day in San Francisco.

[Read More]

July 27, 2016

Zoey Collier

Earlier this year, Paul Cutsinger, Evangelist at Amazon Alexa, joined a team of developers and designers from Capital One at SXSW in Austin to launch the new Capital One skill for Alexa. The launch of the new skill garnered national attention, as Capital One was the first company to give customers the ability to interact with their credit card and bank accounts through Alexa-enabled devices. This week at the Amazon Developer Education Conference in NYC, Capital One announced another industry first by expanding the skill to enable its customers to access their auto and home loan accounts through Alexa.

"The Capital One skill for Alexa is all part of our efforts to help our customers manage their money on their terms – anytime and anywhere," said Ken Dodelin, Vice President, Digital Product Management, Capital One. “Now, you can access in real time all of your Capital One accounts—from credit cards to bank accounts to home and auto loans—using nothing but your voice with the Capital One skill.”

The skill is one of the top-rated Alexa skills (4.5/5 stars, with 47 reviews). It enables Capital One customers to stay on top of their credit card, auto loan, mortgage, and home equity accounts by checking their balance, reviewing recent transactions, or making payments, as well as to get real-time access to checking and savings account information to understand their available funds.

“Capital One has a state-of-the-art technology platform that allows us to quickly leverage emerging technologies, like Alexa,” said Scott Totman, Vice President of Digital Products Engineering, Capital One. “We were excited about the opportunity to provide a secure, convenient, and hands-free experience for our customers.”

Building the Skill

To bring the new skill to life, the Capital One team – comprised of engineers, designers, and product managers – kicked off a two-phase development process.

“Last summer a few developers started experimenting with Echo devices, and, ultimately, combined efforts to scope out a single feature: fetching a customer’s credit card balance. That exercise quickly familiarized the team with the Alexa Skills Kit (ASK) and helped them determine the level of effort required to produce a full public offering,” said Totman. “The second phase kicked off in October and involved defining and building the initial set of skill capabilities, based on customer interviews and empathy based user research. Less than six months later we launched the first version of the Capital One skill for Alexa.”

The team also spent a lot of time finding the right balance between customers’ need for both convenience and security. In the end, Capital One worked with Amazon to strike the right balance and gave customers the option of adding a four-digit pin in order to access the skill and provide an additional layer of security. The pin can be changed or removed at the customer’s discretion.

“The Alexa Skills Kit is very straightforward. However, it is evolving quickly, so developers need to pay close attention to online documentation, webinars, and other learning opportunities in order to stay on top of new features and capabilities as they are released,” Totman said.

Finding the Right Voice

“We dedicated a lot of time to getting the conversation right from the start,” said Totman. “This meant we not only had to anticipate the questions customers were going to ask, but also how they were going to ask them.”

This was a really interesting challenge for Capital One’s design team.  In order to make the skill feel like a personalized conversation, the team had to identify exactly where and how to inject personality and humor, while carefully considering customers’ priorities and the language they use to discuss finances.

“A lot goes into making sure our customers get what they expect from our personality, as well as what they expect from Alexa’s personality. That becomes especially visible when injecting humor, because what looks great on paper doesn’t always transition to the nuance of voice inflection, cadence, or the context of banking,” said Stephanie Hay, head of Capital One’s content strategy team. “But that’s the joy of design > build > iterate in a co-creation method; product, design, and engineering design the conversation together, hear Alexa say it, react, iterate, test it with actual customers, iterate further, and then get it to a point we all feel excited about.”

Looking Ahead

Capital One’s Alexa skill represents just the starting lineup of features. Capital One’s team continues to test, learn, and explore new features by focusing on customer needs and continually refining the experience.

“As customers become more familiar using voice technologies, we anticipate growing demand for feature capabilities, as well as increased expectations regarding the sophistication of the conversation.” Totman said. “With voice technologies, we get to learn firsthand how customers are attempting to talk to us, which allows us to continually refine the conversation.”

“The possibilities with the Alexa Skills Kit are nearly endless, but I advise developers to be very thoughtful about the value of their skill,” said Totman. “Leveraging voice-activated technology is only worthwhile if you can clearly define how your solution will go above and beyond your existing digital offerings.”  

Stay tuned for part two to learn how Capital One built their Alexa skill and added new capabilities.


Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

July 26, 2016

David Isbitski

Today, we’re excited to announce the Amazon Alexa session track at AWS re:Invent 2016, the largest gathering of the global Amazon developer community. AWS re:Invent provides an opportunity to connect with peers and technology experts, engage in hands-on labs and bootcamps, and learn about new technologies and how to improve productivity, network security, and application performance, all while keeping infrastructure costs low. AWS re:Invent runs November 28 through December 2, 2016.

The Alexa track at AWS re:Invent will dive deep into the technology behind the Alexa Skills Kit and the Alexa Voice Service, with a special focus on using AWS Services to enable voice experiences. We’ll cover AWS Lambda, DynamoDB, CloudFormation, Cognito, Elastic Beanstalk and more. You’ll hear from senior evangelists and engineers and learn best practices from early Alexa developers. Here’s an early peek at the Alexa sessions.

Each session below is listed with its title, time, level, and description.

ALX 201: How Capital One Built a Voice Experience for Banking

Tuesday, November 29, 2016

10:00 AM - 11:00 AM

Introductory

As we add thousands of skills to Alexa, our developers have uncovered some basic and more complex tips for building better skills. Whether you are new to Alexa skill development or if you have created skills that are live today, this session will help you understand how to create better voice experiences. Last year, Capital One joined Alexa on stage at re:Invent to talk about their experience building an Alexa skill. Hear from them one year later to learn from the challenges that they had to overcome and the results they are seeing from their skill.

ALX 202: How Amazon is Enabling the Future of Automotive

Thursday, December 1, 2016

11:30 AM - 12:30 PM

Introductory

The experience in the auto industry is changing. For both the driver and the car manufacturer, a whole new frontier is on the near horizon. What do you do with your time while the car is driving itself? How do I have a consistent experience while driving shared or borrowed cars? How do I stay safer and more aware in the ever increasing complexity of traffic, schedules, calls, messages and tweets? In this session we will discuss how the auto industry is facing new challenges and how the use of Amazon Alexa, IoT, Logistics services and the AWS Cloud is transforming the Mobility experience of the (very near) future.

ALX 301: Alexa in the Enterprise: How JPL Leverages Alexa to Further Space Exploration with Internet of Things

Wednesday, November 30, 2016

5:00 PM - 6:00 PM

Advanced

The Jet Propulsion Laboratory designs and creates some of the most advanced space robotics ever imagined.  JPL IT is now innovating to help streamline how JPLers will work in the future in order to design, build, operate, and support these spacecraft. They hope to dramatically improve JPLers' workflows and make their work easier for them by enabling simple voice conversations with the room and the equipment across the entire enterprise.

What could this look like? Imagine just talking with the conference room to configure it. What if you could kick off advanced queries across AWS services and kick off AWS Kinesis tasks by simply speaking the commands? What if the laboratory could speak to you and warn you about anomalies or notify you of trends across your AWS infrastructure? What if you could control rovers by having a conversation with them and ask them questions? In this session, JPL will demonstrate how they leveraged AWS Lambda, DynamoDB and CloudWatch in their prototypes of these use cases and more.  They will also discuss some of the technical challenges they are overcoming, including how to deploy and manage consumer devices such as the Amazon Echo across the enterprise, and give lessons learned.  Join them as they use Alexa to query JPL databases, control conference room equipment and lights, and even drive a rover on stage, all with nothing but the power of voice!

ALX 302: Build a Serverless Back End for Your Alexa-Based Voice Interactions

Thursday, December 1, 2016

5:00 PM - 6:00 PM

Advanced

Learn how to develop voice-based serverless back ends for Alexa Voice Service (AVS) and Alexa devices using the Alexa Skills Kit (ASK), which allows you to add new voice-based interactions to Alexa. We’ll code a new skill, implemented by a serverless backend leveraging AWS services such as Amazon Cognito, AWS Lambda, and Amazon DynamoDB. Often, your skill needs to authenticate your users and link them back to your backend systems and to persist state between user invocations. User authentication is performed by leveraging OAuth compatible identity systems. Running such a system on your back end requires undifferentiated heavy lifting or boilerplate code. We’ll leverage Login with Amazon as the identity provider instead, allowing you to focus on your application implementation and not on the low-level user management parts. At the end of this session, you’ll be able to develop your own Alexa skills and use Amazon and AWS services to minimize the required backend infrastructure. This session shows you how to deploy your Alexa skill code on a serverless infrastructure, leverage AWS Lambda, use Amazon Cognito and Login with Amazon to authenticate users, and leverage AWS DynamoDB as a fully managed NoSQL data store.

ALX 303: Building a Smarter Home with Alexa

Thursday, December 1, 2016

1:00 PM - 2:00 PM

Advanced

This session introduces the beta process, the Smart Home Skill API, and how to quickly and easily set up a smart home so you can begin using Alexa to control lighting, blinds, and small appliances. We begin by going over what devices you can buy and share and some common best practices when enabling these devices in your home or office. We also demonstrate how to enable these devices and connect them with Alexa. We show you how to create groups and manage your home with your voice, as well as some tips and tricks for managing your home when you are away. This session explains how to use the Smart Home Skill API to create a custom skill to manage your smart home devices as well as lessons learned from dozens of customers and partners. Alexa smart home partner Ecobee joins us to talk about their experience in the Smart Home Skill API beta program.

ALX 304: Tips and Tricks on Bringing Alexa to Your Products

Friday, December 2, 2016

9:30 AM - 10:30 AM

Advanced

Ever wonder what it takes to add the power of Alexa to your own products?  Are you curious about what Alexa partners have learned on their way to a successful product launch?  In this session you will learn about the top tips and tricks on how to go from VUI newbie to an Alexa-enabled product launch.  Key concepts around hardware selection, enabling far field voice interaction, building a robust Alexa Voice Service (AVS) client and more will be discussed along with customer and partner examples on how to plan for and avoid common challenges in product design, development and delivery. 

ALX 305: From VUI to QA: Building a Voice-Based Adventure Game for Alexa

Friday, December 2, 2016

11:00 AM - 12:00 PM

Advanced

Hitting the submit button to publish your skill is similar to sending your child to their first day of school. You want it to be set up for a successful launch day and for many days thereafter. Learn how to set your skill up for success from Andy Huntwork, Alexa Principal Engineer and one of the creators of the popular Alexa skill "The Magic Door." You will learn the most common reasons why skills fail and also some of the more unique use cases. The purpose of this session is to help you build better skills by knowing what to look out for and what you can test for before submitting. In this session, you will learn what most developers do wrong, how to successfully test and QA your skill, how to set your skill up for successful certification, and the process of how a skill gets certified.  

MAC 202: Deep Learning in Alexa

Introductory

Neural networks have a long and rich history in automatic speech recognition. In this talk, we present a brief primer on the origin of deep learning in spoken language, and then explore today’s world of Alexa. Alexa is the AWS service that understands spoken language and powers Amazon Echo. Alexa relies heavily on machine learning and deep neural networks for speech recognition, text-to-speech, language understanding, and more. We also discuss the Alexa Skills Kit, which lets any developer teach Alexa new skills.

We encourage you to check back because we’ll have more content announcements in the coming months.

Hope to see you there! Haven’t signed up yet? Register now.

-Dave (@TheDaveDev)

 

July 22, 2016

Zoey Collier

In our first post, we shared why Discovery decided to build an Alexa skill and what requirements they outlined as they thought through what the voice experience should look like. In this post, we’ll share how they built and tested their Alexa skill and their tips for other Alexa developers.

Building and Testing the Shark Week Skill

When Stephen Garlick, Lead Development and Operations Engineer at Discovery Channel, took the lead in developing the Alexa skill, it was a chance to learn how to design a new experience for customers. He had no prior experience with AWS Lambda and Alexa Skills Kit (ASK). To start, he spent some time digging into online technical documentation and code samples provided on the Alexa Github repo. This helped him gain a deeper understanding of how to build the foundation of the Alexa skill and handle basic tasks.

By using AWS Lambda and ASK, Stephen and team were able to keep things simple and quickly deploy the code without needing to set up additional infrastructure to support the skill. Additionally, they were easily able to extend the Node.js skill without having to create a skill from scratch.

Initially, Discovery used Alexa to respond with facts; later, they decided to customize the voice experience by using MP3 playback. To accomplish this, Stephen used the SSML support for MP3 playback and Amazon S3 with CloudFront for hosting the files reliably. Each MP3 was less than 90 seconds in length, encoded at 48 kbps, and adhered to MPEG version 2 specifications. All the resources were created and deployed using the AWS CloudFormation service.
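
As a sketch of this approach (the file URL is a placeholder), a recorded fact can be returned as SSML that plays the MP3 and then lets Alexa continue in her own voice:

  // Sketch: return a recorded fact via SSML audio playback.
  // The MP3 must be served over HTTPS (e.g., S3 behind CloudFront) and meet the
  // limits mentioned above (under 90 seconds, 48 kbps, MPEG version 2).
  var response = {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'SSML',
        ssml: '<speak><audio src="https://d1234.cloudfront.net/facts/fact-07.mp3"/> ' +
              'Would you like to hear another fact?</speak>'
      },
      shouldEndSession: false
    }
  };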

For the countdown feature, Stephen pulled the moment.js dependency into the Node.js skill to help simplify some time-based calculations. The countdown now combines MP3 playback for everything except the actual time remaining, which is spoken by Alexa.
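
The sort of calculation moment.js simplifies looks roughly like this sketch; the kickoff date below is a placeholder, not Discovery’s actual value:

  // Sketch of a moment.js countdown calculation (placeholder kickoff date).
  var moment = require('moment');

  function countdownText() {
    var kickoff = moment('2016-06-26T21:00:00-04:00');   // hypothetical premiere time
    var now = moment();

    if (now.isAfter(kickoff)) {
      return 'Shark Week has already started!';
    }
    var remaining = moment.duration(kickoff.diff(now));
    return Math.floor(remaining.asDays()) + ' days and ' + remaining.hours() +
           ' hours until Shark Week.';
  }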

To test the skill, they used the skill test pane in the Alexa developer portal. The testing tool made it easy to quickly test various scenarios without an Alexa-enabled device. Once the skill was operating as expected (and desired) in the test pane, Stephen asked other people to test the Shark Week skill on Alexa-enabled devices. This allowed them to collect additional feedback and iterate accordingly.

Overall, the entire process of learning these new technologies, coding, and building the skill took no more than 12 hours. This included a few iterations of the Alexa skill as well.

Five Tips for Other Alexa Developers

Tip #1: Make The Skill As Human As Possible: Initially, Discovery had the Alexa voice state each of the randomized facts. In an attempt to assist with the pronunciation, they spelled a few of the words and numbers phonetically. However, in doing so, the cards displayed in the Alexa app weren't correct. It quickly became apparent that a recorded reading of each fact eliminated the pronunciation issues, enabled proper spelling of facts for the cards in the Alexa app, and made the entire experience more personal.

Tip #2: Plan for Time Sensitive Coding: If you’re building time-specific functionality (e.g., a countdown timer to a specific time), make sure you think about what happens when that time arrives. The team at Discovery was able to account for the Shark Week kickoff by providing three different countdown messages based on the time in each specific time zone. The first was the countdown lead-in, the second was a message indicating that Shark Week had already started, and the third indicated that Shark Week had concluded and that the Shark Week website provides other shark-related information year-round.

Tip #3: Control for Volume: If you're using a combination of recordings and Alexa powered speech, make sure the volume levels are consistent throughout the experience.

Tip #4: Be Creative with Your Intent Schema and Utterances: People think, act, and speak differently. Therefore, it’s important that you account for as many different utterances as possible. For example, after you ask for a Shark Week fact, the skill will ask if you would like to hear another. Just a few of Discovery’s “no” utterances include “no,” “nope,” “no thanks,” “no thank you,” “not really,” “definitely not,” “no way,” “nah,” “negative,” “no sir,” “maybe another time,” and many more. It’s better to be as inclusive as possible, rather than having Alexa unable to understand.
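
For instance, if a skill defines its own “no” intent rather than relying on the built-in AMAZON.NoIntent, its sample utterances file might include entries like the following (the intent name and list are hypothetical):

  NoIntent no
  NoIntent nope
  NoIntent no thanks
  NoIntent no thank you
  NoIntent not really
  NoIntent definitely not
  NoIntent no way
  NoIntent nah
  NoIntent maybe another time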

Tip #5: Take Chances: Push your limits and think big when it comes to building your Alexa skill. Discovery started the project with a broad scope in mind and was able to quickly iterate and resubmit the skill for certification.

 


Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.
