Amazon Developer Blogs

Showing posts tagged with Alexa research

June 13, 2019

Stan Peshterliev

A new approach to "active learning", or automatically selecting training examples for machine learning, improves the performance of a natural-language-understanding system by 7% to 9%.
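
The post itself does not include code, but a minimal sketch of one common active-learning strategy, uncertainty sampling with a scikit-learn classifier, illustrates the basic idea of automatically choosing which unlabeled examples to annotate next. The data, model, and batch size below are illustrative assumptions, not the system described in the post.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_uncertain(model, unlabeled_X, batch_size=10):
    """Pick the unlabeled examples the model is least confident about."""
    probs = model.predict_proba(unlabeled_X)   # class probabilities per example
    confidence = probs.max(axis=1)             # probability of the top class
    return np.argsort(confidence)[:batch_size] # least confident first

# Toy loop: train on a small labeled set, then query the most uncertain pool points.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 5))
y_labeled = np.arange(20) % 2                  # guaranteed to contain both classes
X_pool = rng.normal(size=(200, 5))             # stand-in for unannotated utterances

model = LogisticRegression().fit(X_labeled, y_labeled)
query_idx = select_uncertain(model, X_pool)
print("Indices to send for annotation:", query_idx)
```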

[Read More]

June 11, 2019

Young-Bum Kim

A natural-language-understanding system that includes both a generic model for a language and several locale-specific models improves accuracy by an average of 59% over individual locale-specific models.
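
How the generic and locale-specific models are actually combined is detailed in the post; the sketch below only illustrates one simple way such an ensemble could work, interpolating intent scores from a shared per-language model with those from a locale model. The function name, scores, and interpolation weight are assumptions for illustration.

```python
from typing import Dict

def combine_intent_scores(generic: Dict[str, float],
                          locale: Dict[str, float],
                          locale_weight: float = 0.6) -> Dict[str, float]:
    """Interpolate intent scores from a generic language model and a locale model.

    Hypothetical helper: the real combination strategy is described in the post.
    """
    intents = set(generic) | set(locale)
    return {i: locale_weight * locale.get(i, 0.0)
               + (1 - locale_weight) * generic.get(i, 0.0)
            for i in intents}

# Example: the generic English model and a locale model disagree on a query.
generic_scores = {"PlayMusic": 0.55, "GetSportsUpdate": 0.45}
locale_scores  = {"PlayMusic": 0.20, "GetSportsUpdate": 0.80}
print(combine_intent_scores(generic_scores, locale_scores))
```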

[Read More]

June 06, 2019

Arpit Gupta

A new Alexa system adopts a simpler, more scalable solution to the problem of "reference resolution", or tracking references through several rounds of dialogue, by overwriting referring words with their referents.
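
The summary describes rewriting a follow-up utterance so that referring words are replaced by their referents from earlier turns. The toy, rule-based sketch below shows only that rewriting step; the system in the post is learned, and the hand-built entity map here is a deliberate simplification.

```python
def rewrite_with_referents(utterance: str, salient_entities: dict) -> str:
    """Replace referring words ("it", "there", ...) with entities from earlier turns.

    salient_entities maps a referring word to the phrase it most likely refers to,
    e.g. collected from previous dialogue turns. This is a toy stand-in for the
    learned rewriting model described in the post.
    """
    tokens = utterance.split()
    rewritten = [salient_entities.get(tok.lower().strip("?.,"), tok) for tok in tokens]
    return " ".join(rewritten)

# Turn 1: "How is the weather in Seattle?"  Turn 2 refers back with "there".
context = {"there": "in Seattle"}
print(rewrite_with_referents("Will it rain there tomorrow?", context))
# -> "Will it rain in Seattle tomorrow?"
```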

[Read More]

June 05, 2019

Alexa Science Team

At Amazon's re:MARS conference, Alexa VP and head scientist Rohit Prasad announced new AI technology that will enable Alexa to engage in multiturn dialogs that span skills, shifting the cognitive burden of complex task coordination from the customer to Alexa.

[Read More]

May 21, 2019

Viktor Rozgic

The combination of an autoencoder, which is trained to output the same data it takes as input, and adversarial training, which pits two neural networks against each other, confers modest performance gains but opens the door to extensive training with unannotated data. 
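
As a reference point for the first ingredient, a minimal autoencoder, a network trained to reproduce its own input, can be written in a few lines of PyTorch. The dimensions and training loop below are illustrative assumptions, and the adversarial component discussed in the post is omitted.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress the input to a small bottleneck, then reconstruct it."""
    def __init__(self, in_dim=40, hidden=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train to reconstruct unannotated feature vectors -- no labels required.
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

features = torch.randn(256, 40)  # stand-in for unannotated input features
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), features)  # output should match the input
    loss.backward()
    optimizer.step()
```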

[Read More]

May 16, 2019

Ming Sun

Text normalization is the process of converting particular words in a sentence into a standard format so that software can handle them. Breaking inputs into component parts and factoring in syntactic information reduces the error rate of a neural text normalization system by 98%.
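
To make the task concrete, the toy rule-based normalizer below shows the kind of written-to-spoken mapping involved, spelling out digits and expanding a few abbreviations. It is not the neural system from the post, and the rules are illustrative only.

```python
ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "kg": "kilograms"}
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def normalize(text: str) -> str:
    """Spell out digits and expand a few abbreviations (toy rules only)."""
    tokens = []
    for tok in text.lower().split():
        if tok in ABBREVIATIONS:
            tokens.append(ABBREVIATIONS[tok])
        elif tok.isdigit():
            tokens.append(" ".join(DIGITS[int(d)] for d in tok))
        else:
            tokens.append(tok)
    return " ".join(tokens)

print(normalize("Dr. Smith lives at 221 Baker St."))
# -> "doctor smith lives at two two one baker street"
```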

[Read More]

May 13, 2019

Judith Gaspers

Cross-lingual transfer learning improves the performance of a named-entity recognizer by up to 7.4% compared with a system trained from scratch in the new language.

[Read More]

May 03, 2019

Young-Bum Kim

In the past year, we’ve introduced name-free skill interaction for Alexa, which allows customers to invoke skills without mentioning them by name. Y. B. Kim explains how we add new skills to the skill selection model without causing "catastrophic forgetting."
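
The post explains how new skills are added without "catastrophic forgetting", i.e. without degrading accuracy on previously supported skills. One common mitigation, not necessarily the one the post describes, is to rehearse a sample of old training data while fine-tuning on the new skill. The sketch below assumes a generic PyTorch classifier and datasets; all names and hyperparameters are illustrative.

```python
import torch
from torch.utils.data import DataLoader, ConcatDataset

def finetune_with_rehearsal(model, new_data, old_data, old_fraction=0.2, epochs=3):
    """Fine-tune on a new skill while replaying a sample of old-skill examples.

    Mixing in old data is one standard way to reduce catastrophic forgetting;
    the post describes the approach actually used for Alexa's skill selection model.
    """
    n_old = int(len(old_data) * old_fraction)
    old_sample, _ = torch.utils.data.random_split(
        old_data, [n_old, len(old_data) - n_old])
    loader = DataLoader(ConcatDataset([new_data, old_sample]),
                        batch_size=32, shuffle=True)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model
```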

[Read More]

May 02, 2019

Rahul Goel

Transfer learning and a copy mechanism improve the performance of a "semantic parser".

[Read More]

April 25, 2019

Jakub Lachowicz

The ability to build new text-to-speech models with relatively little speaker-specific training data could enable a wide variety of customizable speaker styles.

[Read More]