Today we are happy to announce support for thermostat query, a new feature for Alexa skills developed using the Smart Home Skill API. The feature is now available in the US, with support for the UK and Germany coming soon. With thermostat query, customers can issue a voice command to an Alexa-enabled device, such as the Amazon Echo or Echo Dot, and hear Alexa speak the response. For example, a customer with a single thermostat could say, “Alexa, what is the temperature in the house?” and Alexa would respond with the current inside temperature. This complements the existing thermostat commands that already let customers set a target temperature.
This new feature simplifies development by enabling this kind of voice interaction directly from the Smart Home Skill API. In the past, smart home skill developers had to create two skills (one smart home skill, plus a custom skill for querying data) to provide this overall experience.
When Amazon first introduced the Echo, Nick Schwab was intrigued. He’d always loved voice commands in his car, but he wasn’t sure he wanted to buy another gadget just yet. Then the Echo Dot came out, and once again, Nick couldn’t resist a good deal. He ordered his own Dot and dug into the Alexa Skills Kit (ASK). Right away, he started working on Bargain Buddy, an Alexa skill to spare him a daily web search for deals.
Two days after Bargain Buddy was certified, Nick received his Echo Dot in the mail—his first Alexa device. That’s right: he developed, tested, and released his first Alexa skill before he even had his first Echo Dot.
That was early in 2016. These days, Nick has become a force to be reckoned with in the Alexa developer community.
A few months ago we shared a free video course on Alexa development by A Cloud Guru, a pioneering serverless education company. Today, we’re excited to announce a new advanced course on Alexa skill building instructed by Alexa Champion Oscar Merry for A Cloud Guru. As the co-founder and head of technology at Opearlo, a voice design agency, Oscar has extensive experience with the Alexa Skills Kit (ASK). He has worked with the technology since November 2015, designing and building skills for clients across a number of industries and use cases. He’s also been giving back to the community and sharing his ASK knowledge by running the London Alexa Devs meetup since July 2016.
In this Advanced Alexa Skills Kit course, Oscar gets you started with the ASK SDK for Node.js and shares a practical project that any meetup organizer can implement to use Alexa as their event assistant.
We are happy to announce a new Amazon Alexa Skills contest with Devpost, the developer-focused job search and hackathon company. We are challenging developers and designers to create unique new skills that make Alexa smarter. To compete for over $40,000 in prizes, you will need to create an original Alexa skill. This is our most open-ended challenge yet. Will you turn Alexa into a concierge, sous chef, fitness coach, personal shopper, or DJ? You decide. The challenge starts now – sign up!
Speech Synthesis Markup Language (SSML) is a standardized markup language for controlling how text is synthesized into speech. The Alexa Skills Kit already supports numerous SSML tags, including: audio, break, p, phoneme, s, say-as, speak, and w.
Alexa now understands SSML speechcons, special words and phrases that Alexa pronounces more expressively. Speechcons can be used in English (US) skills by wrapping the speechcon you would like to use in a <say-as interpret-as="interjection"> tag.
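As a quick sketch, here is one way a custom skill could build its response around a speechcon. The say-as tag is from the announcement above; the helper name and surrounding response shape follow the standard custom-skill JSON response format:

```javascript
// Sketch: wrap a speechcon in <say-as interpret-as="interjection">
// inside an SSML outputSpeech response for a custom skill.
function buildSpeechconResponse(speechcon, remainder) {
  return {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'SSML',
        ssml: '<speak>' +
          '<say-as interpret-as="interjection">' + speechcon + '</say-as> ' +
          remainder +
        '</speak>'
      },
      shouldEndSession: true
    }
  };
}
```

For example, `buildSpeechconResponse('bingo', 'You got the right answer.')` produces SSML that Alexa reads with an expressive "bingo!" before the normal sentence.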
Today we’re excited to announce that the Alexa Voice Service (AVS) is now available for developers building voice-enabled products for the UK and Germany. AVS localization provides you with language and region-specific services to expand your audience and delight new customers. With a few lines of code, you can upgrade any Alexa-enabled product to access localized languages and skills in the UK, Germany, and US. Now your end customers can speak with Alexa and receive responses and content in their preferred language and region.
Today we are announcing the expansion of the Alexa Voice Service (AVS) to the UK and Germany. This update enables device manufacturers to reach and delight even more customers with localized language models for the cloud-based automatic speech recognition (ASR), natural language understanding (NLU), and text-to-speech (TTS) engines, along with region-specific skills and content. Your AVS device can now converse in German, get a Flash Briefing from Sky Sports, or call for an Uber in London.
Follow the steps in this blog to prepare your product for use in the UK and Germany.
In September 2016, we announced that Amazon Echo, Echo Dot and Alexa were coming to the UK and Germany. Since then, developers have created hundreds of great new skills for customers in these countries. Today we’re excited to announce that the community website Echosim.io has made it even easier for you to build and test your skills by adding new language models for English (UK) and German.
The Amazon Alexa team is excited to support betaworks, a startup studio and seed-stage VC firm based in New York, on its new initiative, voicecamp, an accelerator program focused on voice-based computing.
Accelerate your voice-powered startup with voicecamp
Betaworks’ first accelerator program, botcamp, brought together eight founding teams working on conversational interfaces and chatbots. Now with voicecamp, betaworks wants to support early-stage companies at the forefront of conversational software. Voicecamp’s announcement on January 11, 2017 was covered by TechCrunch and VentureBeat, as well as other media outlets.
Today a speaker is not just a speaker—it’s a connected device we use to play music when we want, where we want, and from whatever service we prefer. New cloud-based, streaming media providers have given us greater control over our music libraries, creating magical and memorable experiences every time we hit “play”.
More than ever, convenience matters, and that’s where Linkplay excels.
Linkplay is a turnkey Wi-Fi audio solution provider that works with device manufacturers to build connected speaker products at various price points and for a growing number of use cases. Whether you want to build a high-end product for the home or a durable portable speaker for camping trips, Linkplay offers customizable product solutions, complete with the latest technologies, to meet your needs.
Today we launched a new page on the Alexa portal designed to help organizations, from small businesses to global brands, connect with the agencies, tools and analytics providers that specialize in creating and managing Alexa voice experiences. We’ve had the privilege of working with many experienced companies creating innovative skills for recognizable brands. We’ve also heard from companies that are interested in building skills, but need more expertise in designing a voice user interface or don’t have the internal resources to do the work themselves.
In November 2016, we collaborated with Capital One to accelerate the pace of voice technology innovation with a $10,000 Alexa skill contest for AWS re:Invent attendees. In the contest, we challenged attendees to build innovative voice experiences using the Alexa Skills Kit. Individuals or teams of up to four competed to create a unique skill that a customer could use every day.
Being a college student is a juggling act. That’s why the inaugural Hack-the-Dorm with Amazon Alexa contest, in collaboration with MindSumo, challenged students to build a new voice controlled Alexa skill to help make life easier and better on campus. A big thank you to the teams of students who submitted their creative and useful skills for the dorm using the Alexa Skills Kit.
The winners are ...
Just Eat has grown a lot since its humble beginnings in a Danish basement in 2001. Now headquartered in London, Just Eat is listed on the London Stock Exchange and is the world’s leading marketplace for online food ordering and delivery. Its goal, simply put, is to revolutionize the way people find, order and enjoy food.
Just Eat is making good on that mission. Today, it connects more than 62,000 restaurants across 100 cuisines in 15 countries, with an audience of over 15 million people.
Craig Pugsley is a principal designer in Just Eat’s Product Research team. He says the UK has a long tradition of delivery and takeout meals. Just Eat’s apps let diners explore exciting new cuisines at nearby restaurants. With menus for over 27,000 restaurants in the UK alone, it’s easy to find a new favorite flavor anytime.
Research quickly showed Pugsley’s team that diners tend to order their favorites again and again. So when Amazon brought Echo and Alexa to the UK, Just Eat saw a new opportunity. The Just Eat Alexa skill would make reordering a tasty new fave even easier, with just a few words:
“Alexa, tell Just Eat to re-order Dim sum.”
No phone calls. No fumbling for a smartphone app. And no digging out credit card details. Just quick delivery of your favorite comfort food.
Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn why you should use a custom slot type instead of the soon-to-be-deprecated AMAZON.LITERAL.
We’ve been listening to your feedback, Alexa feature requests, and questions. As such, this post provides details around the deprecation of literal slots. By the time you finish reading this post, you will see it doesn’t matter that the LITERAL slot type is going away, because it already has a better replacement: custom slot types.
Before we get into the details, it’s clear that the community needs more time to experiment with custom slot types and to make the transition. So, we’re moving the LITERAL deprecation date for US skills to February 6, 2017. From the beginning, custom slot types (and not LITERAL) have been the solution in the UK and Germany.
Slots let you build interaction models and pass phrases from the user to your skill. Amazon provides a set of built-in slot types that cover common things like numbers, names, and dates. Custom slot types go beyond these to enable support for the scenarios that you’ve chosen to build. They are a superset of the LITERAL slot type that we’re deprecating.
This post will describe how to support three common LITERAL scenarios we’ve seen.
Imagine a situation where you want to gather information from users that you don’t know when you build your interaction model. Examples include lists of wines, items in a game, names of cities, nicknames, and so on. You could build a custom slot with all the values that you do know, but how do you handle the values that you don’t or can’t know?
First, be sure to check the list of built-in slot types. You may find something that we’ve already built for you like first names, city names, last names, dates, numbers, and many more.
When you create a custom slot type, a key concept to understand is that the values you provide are training data for Alexa’s natural language processing (NLP). They are NOT a strict enum or array that limits what the user can say. This has two implications: 1) words and phrases not in your slot values may still be passed to your skill, and 2) your code must perform any validation you require when the value it receives is unknown.
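In practice, that means validating the slot value server-side before acting on it. Here is a minimal sketch; the nickname list and helper name are illustrative, not part of any API:

```javascript
// Sketch: because custom slot values are NLP training data, not an enum,
// Alexa may pass a value outside your list. Validate before acting on it.
const KNOWN_NICKNAMES = ['buddy', 'champ', 'sport', 'ace'];

function resolveNickname(slotValue) {
  if (!slotValue) {
    // Slot was not filled at all — re-prompt the user.
    return { valid: false, reason: 'missing' };
  }
  const normalized = slotValue.trim().toLowerCase();
  if (KNOWN_NICKNAMES.indexOf(normalized) !== -1) {
    return { valid: true, value: normalized };
  }
  // Unknown value: ask the user again rather than failing silently.
  return { valid: false, reason: 'unknown', value: normalized };
}
```

A handler would call `resolveNickname(intent.slots.Nickname.value)` and branch on `valid`, re-prompting when the spoken phrase falls outside the known list.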
Let’s look at an example of how to build support for something like nicknames. The custom slot type is named NICKNAMES. The custom slot values are shown here:
Figure 1 : Custom slot for NICKNAMES
The intent schema uses NICKNAMES instead of AMAZON.LITERAL.
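As a sketch, an intent schema that swaps AMAZON.LITERAL for the custom type could look like the following (the intent and slot names here are illustrative):

```json
{
  "intents": [
    {
      "intent": "SetNicknameIntent",
      "slots": [
        { "name": "Nickname", "type": "NICKNAMES" }
      ]
    }
  ]
}
```

At runtime, whatever the user says in the Nickname position arrives in `intent.slots.Nickname.value`, whether or not it appeared in your slot value list — which is exactly why the validation step above matters.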