Alexa Blogs

Showing posts tagged with Announcements

February 15, 2017

Glenn Cameron

Alexa Developer Contest

We are happy to announce a new Amazon Alexa Skills contest with DevPost, the developer-focused job search and hackathon company. We are challenging developers and designers to create unique new skills that make Alexa smarter. To compete for over $40,000 in prizes, you will need to create an original Alexa skill. This is our most open-ended challenge yet. Will you turn Alexa into a concierge, sous chef, fitness coach, personal shopper, or DJ? You decide. The challenge starts now – sign up!

[Read More]

February 10, 2017

David Isbitski

Speech Synthesis Markup Language (SSML) is a standardized markup language that gives you control over how text is rendered into synthesized speech. The Alexa Skills Kit already supports numerous SSML tags, including: audio, break, p, phoneme, s, say-as, speak, and w.

Alexa now understands SSML Speechcons, which are special words and phrases that are pronounced more expressively by Alexa. Speechcons can be used in English (US) skills by adding a <say-as interpret-as="interjection"> tag around the speechcon you would like to use.
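
As a minimal illustration, a skill's SSML response might wrap a speechcon such as "boing" in the interjection tag (the surrounding sentences here are made up for this example):

    <speak>
        Here is your answer.
        <say-as interpret-as="interjection">boing</say-as>
        <break time="1s"/>
        Would you like to try again?
    </speak>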

[Read More]

February 07, 2017

Ted Karczewski

Today we’re excited to announce that the Alexa Voice Service (AVS) is now available for developers building voice-enabled products for the UK and Germany. AVS localization provides you with language and region-specific services to expand your audience and delight new customers. With a few lines of code, you can upgrade any Alexa-enabled product to access localized languages and skills in the UK, Germany, and US. Now your end customers can speak with Alexa and receive responses and content in their preferred language and region.

[Read More]

February 07, 2017

Gagan Luthra

Today we are announcing the expansion of the Alexa Voice Service (AVS) to the UK and Germany. This update enables device manufacturers to reach and delight even more customers with localized language models for the cloud-based automatic speech recognition (ASR), natural language understanding (NLU), and text-to-speech (TTS) engines, as well as region-specific skills and content. Your AVS device can now converse in German, get a Flash Briefing from Sky Sports, or call for an Uber in London.

Follow the steps in this blog to prepare your product for use in the UK and Germany.

[Read More]

February 07, 2017

Glenn Cameron

In September 2016, we announced that Amazon Echo, Echo Dot and Alexa were coming to the UK and Germany. Since then, developers have created hundreds of great new skills for customers in these countries. Today we’re excited to announce that the community website Echosim.io has made it even easier for you to build and test your skills by adding new language models for English (UK) and German.

[Read More]

February 06, 2017

Jen Gilbert

The Amazon Alexa team is excited to support betaworks, a startup studio and seed-stage VC company based in New York, on its new initiative, voicecamp, an accelerator program focused on voice-based computing.

Accelerate your voice-powered startup with voicecamp

Betaworks’ first accelerator program, botcamp, brought together eight founding teams working on conversational interfaces and chatbots. Now with voicecamp, betaworks wants to support early-stage companies at the forefront of conversational software. Voicecamp’s announcement on January 11, 2017, was covered by TechCrunch and VentureBeat, as well as other media outlets.

[Read More]

February 02, 2017

Michael Francisco

Today we launched a new page on the Alexa portal designed to help organizations, from small businesses to global brands, connect with the agencies, tools and analytics providers that specialize in creating and managing Alexa voice experiences. We’ve had the privilege of working with many experienced companies creating innovative skills for recognizable brands. We’ve also heard from companies that are interested in building skills, but need more expertise in designing a voice user interface or don’t have the internal resources to do the work themselves. [Read More]

February 01, 2017

Jen Gilbert

In November 2016 we collaborated with Capital One to accelerate the pace of voice technology innovation with a $10,000 Alexa skill contest for AWS re:Invent attendees. In the contest we challenged attendees to build innovative voice experiences using the Alexa Skills Kit. Individuals or teams of up to four competed to create a unique skill that a customer could use every day. [Read More]

January 31, 2017

Bertrand Vacherot

Being a college student is a juggling act. That’s why the inaugural Hack-the-Dorm with Amazon Alexa contest, in collaboration with MindSumo, challenged students to build a new voice-controlled Alexa skill to help make life easier and better on campus. A big thank you to the teams of students who submitted their creative and useful skills for the dorm using the Alexa Skills Kit.

The winners are ... 

[Read More]

January 25, 2017

Michael Palermo

Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn why you should use a custom slot type instead of the soon-to-be-deprecated AMAZON.LITERAL.

We’ve been listening to your feedback on Alexa feature requests and questions. As such, this post provides details about the deprecation of literal slots. By the time you finish reading this post, you will see it doesn’t matter that the LITERAL slot type is going away, because it already has a better replacement: custom slot types.

Before we get into the details, it’s clear that the community needs more time to experiment with custom slot types and to make the transition. So, we’re moving the LITERAL deprecation date for US skills to February 6, 2017. From the beginning, custom slot types (and not LITERAL) have been the solution in the UK and Germany.

Slots let you build interaction models and pass phrases from the user to your skill. Amazon provides a set of built-in slot types that cover common things like numbers, names, and dates. Custom slot types go beyond these to enable support for the scenarios that you’ve chosen to build. They are a superset of the LITERAL slot type that we’re deprecating.

This post will describe how to support three common LITERAL scenarios we’ve seen.

Scenario 1: I want to collect an arbitrary word or phrase from what the user said.

Imagine a situation where you want to gather information from users that you don’t know when you build your interaction model. Examples include things like lists of wines, items in a game, names of cities, nicknames, etc. It’s clear that you could build a custom slot with all the values that you do know, but how do you handle the values that you don’t or can’t know?

First, be sure to check the list of built-in slot types. You may find something that we’ve already built for you like first names, city names, last names, dates, numbers, and many more.

When you create a custom slot type, a key concept to understand is that the values you provide are training data for Alexa’s NLP (natural language processing). They are NOT a strict enum or array that limits what the user can say. This has two implications: 1) words and phrases not in your slot values will still be passed to you, and 2) your code needs to perform any validation you require if what’s said is unknown.
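
As a minimal sketch of that validation step (the nickname list, slot name, and handler shape below are assumptions for this example, not part of the Alexa Skills Kit):

    # Sketch: validating a custom slot value inside a skill's request handler.
    # KNOWN_NICKNAMES and handle_set_nickname are illustrative names, not ASK APIs.
    KNOWN_NICKNAMES = {"ace", "champ", "buddy", "skipper"}

    def handle_set_nickname(intent):
        # The slot value arrives as free text; it is not restricted to the
        # sample values you listed when defining the custom slot type.
        slot = intent.get("slots", {}).get("Nickname", {})
        value = (slot.get("value") or "").strip().lower()

        if not value:
            return "Sorry, I didn't catch a nickname. Please try again."
        if value not in KNOWN_NICKNAMES:
            # Unrecognized values still reach your code, so decide how to respond.
            return "I haven't heard the nickname {} before, but I'll remember it.".format(value)
        return "Got it, I'll call you {}.".format(value)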

Let’s look at an example of how to build support for something like nicknames. The custom slot type is named NICKNAMES. The custom slot values are shown here: 

Figure 1: Custom slot values for NICKNAMES

The intent schema uses NICKNAMES instead of AMAZON.LITERAL.
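
A minimal intent schema along these lines might look like the following (the intent and slot names are illustrative; only the NICKNAMES slot type comes from the example above):

    {
      "intents": [
        {
          "intent": "SetNicknameIntent",
          "slots": [
            {
              "name": "Nickname",
              "type": "NICKNAMES"
            }
          ]
        }
      ]
    }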

[Read More]

January 19, 2017

Ted Karczewski

Meet the Sugr Cube, a portable minimalist Wi-Fi speaker system now with Amazon Alexa.

With a growing number of smart products found in the home today, form factor design has become just as important as the technology found under the hood. Customers want smart home products that make their lives easier, but not at the expense of personal taste and design aesthetic. That’s where Sugr, a global innovator in high-definition Wi-Fi audio systems, comes in.

Customers can stream music from Spotify (via Spotify Connect), Amazon Music, iHeartRadio, TuneIn, and more just by tapping and asking. The speaker also responds to unique gestures and movements: customers can play music with simple touch commands, with rocking motions, or with the companion mobile application.

[Read More]

January 13, 2017

Thom Kephart

We just wrapped up an exciting week at the 2017 Consumer Electronics Show (CES). You may have read some of the buzz around voice and Alexa at CES this year. Companies and developers expanded Alexa skills and integrations with dozens of announcements and demonstrations at CES. Developers like you are helping to build the future of a voice-driven world. Thanks to your work, Alexa now has over 7,000 skills and a growing number of integrations on third-party devices. [Read More]

January 10, 2017

Glenn Cameron

In October of last year we worked with Hackster.io to launch the Amazon Alexa API Mashup Contest, challenging developers to connect their favorite public APIs to Alexa. Developers submitted 163 projects that connected Alexa to the APIs of companies like Slack, Medium, Yelp, and many others.

Special thanks to everyone who competed in this contest. We were impressed by the creativity, quality, and high number of entries. We encourage you to browse through the projects. Each one comes with source code and documentation that might be a helpful reference when you code your next Alexa skill.

[Read More]

January 06, 2017

Ted Karczewski

Today at CES, Nucleus demonstrated new Display Cards for Alexa that offer complementary visual content alongside voice responses from the Alexa Voice Service.

For screen-based devices like the Amazon Fire TV with Alexa, Display Cards for Alexa offer the ability to render a “Now Playing” interface for music and books, as well as graphical information for weather, to-do and shopping lists, calendar updates, and Alexa skills when a user engages with Alexa.

[Read More]

January 06, 2017

Ted Karczewski

Volkswagen unveiled its plans to integrate with Amazon Alexa at CES yesterday during a moderated press event and booth tour.

The company will collaborate with Amazon to bring Alexa into the car through a voice integration on the head unit. In addition to the in-car Alexa integration, Volkswagen also announced a new Alexa skill that will help car owners get information about their vehicle from inside the home using an Amazon Echo, Echo Dot, or any Alexa device.

[Read More]
