Amazon Developer Blogs

Showing posts tagged with Alexa Research

October 31, 2018

Young-Bum Kim

Amazon researcher Young-Bum Kim describes some new modifications to the machine learning model that selects the one skill out of thousands best suited to a particular customer request.

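The post covers the full model; as a rough illustration of the core idea — encode the utterance once, then score it against an embedding for every candidate skill in a single matrix multiply — here is a minimal sketch. All names and dimensions below (`SkillShortlister`, the LSTM encoder, the skill-embedding table) are assumptions for illustration, not Amazon's implementation.

```python
# Hypothetical sketch of a "shortlister" for large-scale skill selection:
# a shared utterance encoder scored against learned per-skill embeddings.
# Names and sizes are illustrative; this is not Amazon's production model.
import torch
import torch.nn as nn

class SkillShortlister(nn.Module):
    def __init__(self, vocab_size, num_skills, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One learned vector per skill; scoring is a single matrix multiply,
        # which is what makes ranking thousands of skills cheap.
        self.skill_embeddings = nn.Embedding(num_skills, 2 * hidden_dim)

    def forward(self, token_ids):
        x = self.embed(token_ids)                    # (batch, seq, embed_dim)
        _, (h, _) = self.encoder(x)                  # h: (2, batch, hidden_dim)
        utterance = torch.cat([h[0], h[1]], dim=-1)  # (batch, 2 * hidden_dim)
        # Dot-product score of the utterance against every skill at once.
        return utterance @ self.skill_embeddings.weight.T  # (batch, num_skills)

model = SkillShortlister(vocab_size=10_000, num_skills=4_000)
scores = model(torch.randint(0, 10_000, (1, 12)))
top_skills = scores.topk(5).indices  # candidates for a downstream reranker
```

A typical design would pass only these top few candidates to a more expensive reranking stage, keeping that step off the full catalog.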

October 25, 2018

Larry Hardesty

At this year's Conference on Empirical Methods in Natural Language Processing (EMNLP), Amazon researchers are cohosting what they hope will be the first in a series of annual workshops that will both catalyze and publicize research on automatic fact verification.


October 04, 2018

Jun Yang

Using an Echo's microphone array to perform sound-source localization could provide Alexa with useful information about a customer's physical context and enable the use of claps, taps, or snaps as control signals.

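As a concrete illustration of what sound-source localization from a microphone array involves, here is a minimal sketch of GCC-PHAT, a classic method for estimating the time difference of arrival (TDOA) between two microphones, from which a direction angle follows. The sample rate, microphone spacing, and two-channel setup are toy assumptions; an Echo uses more microphones and more sophisticated processing.

```python
# Minimal GCC-PHAT sketch: estimate the inter-microphone delay, then
# convert it to an arrival angle. Parameters are illustrative assumptions.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref`, in seconds."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    # Cross-power spectrum normalized by its magnitude (the "phase
    # transform"), which makes the correlation peak robust to reverberation.
    r = SIG * np.conj(REF)
    cc = np.fft.irfft(r / (np.abs(r) + 1e-12), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)

fs, d, c = 16_000, 0.08, 343.0   # sample rate, 8 cm mic spacing, speed of sound
x = np.random.randn(fs)          # 1 s of noise as a stand-in source signal
mic0, mic1 = x, np.roll(x, 3)    # second mic hears the source 3 samples later
tau = gcc_phat(mic1, mic0, fs, max_tau=d / c)
angle = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"TDOA = {tau * 1e6:.0f} us, direction ~ {angle:.0f} degrees")
```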

October 02, 2018

Chieh-Chi Kao

Alexa scientists describe two different approaches to the problem of audio event detection, the research topic that led to the new Alexa Guard home security feature. 

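To make the problem concrete: a common baseline for audio event detection is a small convolutional network classifying fixed-length log-mel spectrogram windows. The sketch below is a generic, assumed setup — the label set, window size, and architecture are illustrative, not the models from the post.

```python
# Hedged sketch of audio event detection: a small CNN over log-mel
# spectrogram windows. Labels and architecture are illustrative assumptions.
import torch
import torch.nn as nn

EVENTS = ["glass_break", "smoke_alarm", "background"]  # assumed label set

class EventDetector(nn.Module):
    def __init__(self, n_classes=len(EVENTS)):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool over both time and frequency
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, logmel):         # (batch, 1, n_mels, n_frames)
        return self.fc(self.conv(logmel).flatten(1))

window = torch.randn(1, 1, 64, 100)    # one ~1 s log-mel window (dummy data)
probs = EventDetector()(window).softmax(dim=-1)
```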

September 28, 2018

Vishal Naik

Last week, Amazon announced a redesigned Echo Show and the Alexa Presentation Language, which lets third-party developers build “multimodal” skills that coordinate voice and graphics. Vishal Naik explains the science behind a multimodal system that uses on-screen data to disambiguate voice requests.

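The flavor of on-screen disambiguation can be shown with a deliberately simple sketch: match the spoken request against the items currently on the display, either by ordinal reference ("the second one") or by string similarity to an item's title. Real systems learn this matching; everything here is an illustrative assumption.

```python
# Illustrative sketch (not Amazon's system) of resolving a voice request
# against on-screen candidates by ordinal position or title similarity.
from difflib import SequenceMatcher

ORDINALS = {"first": 0, "second": 1, "third": 2, "fourth": 3, "fifth": 4}

def resolve(utterance, on_screen_items):
    words = utterance.lower().split()
    # 1) Ordinal reference: "play the second one"
    for word in words:
        if word in ORDINALS and ORDINALS[word] < len(on_screen_items):
            return on_screen_items[ORDINALS[word]]
    # 2) Title match: pick the on-screen item most similar to the utterance.
    def similarity(item):
        return SequenceMatcher(None, utterance.lower(), item.lower()).ratio()
    return max(on_screen_items, key=similarity)

items = ["Interstellar", "Inception", "Arrival"]
print(resolve("play the second one", items))   # -> Inception
print(resolve("play arrival", items))          # -> Arrival
```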

September 26, 2018

Zeynab Raeesy

Zeynab Raeesy describes the science behind Alexa's newly announced whisper mode, which enables Alexa to respond to customers' whispers by whispering back.

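Detecting a whisper is, at heart, a binary classification over acoustic frames — whispered speech lacks voicing, which shows up in the spectral features. Below is a minimal sketch of one plausible setup, an LSTM over log filter-bank energy frames; the exact features and architecture are assumptions, not the production system.

```python
# Minimal sketch of whisper detection as a binary sequence classifier over
# acoustic frames. Features and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class WhisperDetector(nn.Module):
    def __init__(self, n_features=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                   # (batch, n_frames, n_features)
        _, (h, _) = self.lstm(frames)
        return torch.sigmoid(self.head(h[-1]))   # P(utterance is whispered)

utterance = torch.randn(1, 200, 64)              # ~2 s of filter-bank frames
if WhisperDetector()(utterance).item() > 0.5:
    print("respond in whisper mode")
```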

September 04, 2018

Young-Bum Kim

Classifying "domains", or topics of conversation, is so central to natural-language understanding that some systems have modules that just look for out-of-domain utterances. Amazon researchers show that training out-of-domain classifiers together with domain recognizers improves their performance.

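The joint-training idea can be sketched as standard multi-task learning: a shared encoder feeds both a domain classifier and a binary out-of-domain detector, and one combined loss updates all of it. Dimensions, architecture, and the unweighted loss sum below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of joint domain + out-of-domain (OOD) training: a shared encoder
# with two heads and a combined loss. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class JointDomainOOD(nn.Module):
    def __init__(self, vocab_size, n_domains, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.domain_head = nn.Linear(dim, n_domains)  # which domain?
        self.ood_head = nn.Linear(dim, 1)             # in-domain at all?

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        return self.domain_head(h[-1]), self.ood_head(h[-1]).squeeze(-1)

model = JointDomainOOD(vocab_size=10_000, n_domains=20)
tokens = torch.randint(0, 10_000, (4, 12))
domain_logits, ood_logits = model(tokens)
domain_labels = torch.randint(0, 20, (4,))
ood_labels = torch.zeros(4)                    # 0 = in-domain, 1 = OOD
# Joint loss: the shared encoder learns from both objectives at once.
loss = (nn.functional.cross_entropy(domain_logits, domain_labels)
        + nn.functional.binary_cross_entropy_with_logits(ood_logits, ood_labels))
loss.backward()
```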

August 31, 2018

Viktor Rozgic

At Interspeech 2018, Amazon scientists report a new system that learns to distinguish sounds produced by media players, such as TVs and radios, from the sounds of ordinary household activity, further reducing false positives from Alexa's speech recognizers.

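One way such a media-versus-live classifier pays off downstream is as a gate on wake-word detections. The glue logic below is purely illustrative — the interface and threshold are assumptions, not how Alexa actually wires it together.

```python
# Illustrative glue logic only (not Amazon's): once a classifier can score how
# likely the current audio comes from a media player rather than live household
# sound, that score can gate downstream decisions to cut false accepts.
MEDIA_THRESHOLD = 0.8  # assumed operating point

def accept_wakeword(wakeword_score, media_prob, wake_threshold=0.5):
    """Accept a wake-word detection only if the audio is unlikely to be media."""
    if media_prob >= MEDIA_THRESHOLD:
        return False               # probably a TV or radio, not the customer
    return wakeword_score >= wake_threshold

print(accept_wakeword(wakeword_score=0.9, media_prob=0.95))  # False
print(accept_wakeword(wakeword_score=0.9, media_prob=0.1))   # True
```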

August 29, 2018

Larry Hardesty

Amazon papers at Interspeech 2018 examine ways that interaction histories can improve customers' Alexa experiences, from "collaborative filtering" to infer customers' tastes to methods that use the immediate "context" of a customer's requests to increase the accuracy of speech understanding systems.

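As a refresher on the collaborative-filtering piece, here is a compact matrix-factorization sketch: learn low-dimensional customer and item vectors whose dot products reconstruct observed interactions, then read predicted affinities off the unobserved cells. The data and hyperparameters are toy assumptions.

```python
# Toy collaborative filtering by matrix factorization. Data, rank, and
# learning rate are illustrative assumptions, not anything from the papers.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 4, 0],      # rows: customers, cols: items
              [4, 0, 1],      # 0 = unobserved interaction
              [0, 2, 5]], dtype=float)
observed = R > 0
k, lr, reg = 2, 0.05, 0.01
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # customer factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(2_000):                            # gradient descent on
    err = (R - U @ V.T) * observed                # observed entries only
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

predictions = U @ V.T   # predicted affinity, including unobserved cells
print(predictions.round(1))
```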

August 27, 2018

Arpit Gupta

Alexa AI machine-learning scientist Chetan Naik and speech scientist Arpit Gupta describe their Interspeech paper on "contextual slot carryover," a crucial element of Alexa's ability to conduct natural, multiround conversations.

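To see what "slot carryover" means in practice, consider a rule-based sketch: when the current turn leaves a required slot unfilled, a candidate value is carried over from the previous turn. The paper's system scores such candidates with a learned model; the version below only shows the data flow, and the intent/slot schema is an assumption.

```python
# Illustrative slot-carryover sketch across dialogue turns. Real systems
# score carryover candidates with a learned model; this rule-based version
# only demonstrates the data flow. The schema is an assumption.
REQUIRED_SLOTS = {
    "GetWeather": {"city"},
    "FindRestaurants": {"city", "cuisine"},
}

def carry_over(prev_slots, current_intent, current_slots):
    """Fill missing required slots from the previous turn's slots."""
    resolved = dict(current_slots)
    for slot in REQUIRED_SLOTS.get(current_intent, set()):
        if slot not in resolved and slot in prev_slots:
            resolved[slot] = prev_slots[slot]   # carried-over candidate
    return resolved

# Turn 1: "What's the weather in Seattle?"  -> {"city": "Seattle"}
# Turn 2: "Find me some Thai restaurants."  -> city should carry over.
print(carry_over({"city": "Seattle"}, "FindRestaurants", {"cuisine": "thai"}))
# {'cuisine': 'thai', 'city': 'Seattle'}
```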