Science is critical to how Alexa is revolutionizing daily conveniences, from playing music and controlling your smart home to getting information and much more, just by using your voice. Our scientists and engineers develop foundational AI technologies that let anyone build intelligent conversational interfaces for any device, application, language, or environment. We build machine learning algorithms, services, and data-driven models for key components such as wake word detection, automatic speech recognition, natural language understanding, contextual reasoning, dialog management, question answering, and text-to-speech, all of which contribute to the magic that is Alexa.
We believe in hiring and developing world-class talent in science and engineering, and in building multi-disciplinary teams with clear charters and goals. These teams employ our working-backwards method to identify key long-term problems to solve on behalf of our customers, and a staged approach to ensure rapid progress toward those goals. The combination of world-class elastic computing resources available via AWS, large-scale heterogeneous data resources, and the team’s years of experience building and deploying machine learning algorithms is key to innovation at scale.
Our research focuses on delivering magical experiences for our customers through the groundbreaking Echo family of devices and the third-party devices available everywhere. As a result, our conversational AI inventions have a direct impact on the lives of millions of people. We also contribute to the advancement of conversational AI through engagements with the academic community, via funded research and Grand Challenges such as the Alexa Prize. Moreover, we encourage the publication of research that will contribute to the future of AI.
Research published recently by Alexa conversational-AI scientists includes:
Oct. 28, 2019 - Cross-lingual transfer learning, which uses machine learning models trained in one language to bootstrap models in another, benefits from algorithms that select high-value training data in the source language.
Oct. 17, 2019 - The open challenge for the Fact Extraction and Verification (FEVER) workshop at EMNLP involved devising adversarial examples that would stump fact verification systems trained on the FEVER data set.
Oct. 1, 2019 - Recorded in the lab during simulated dinner parties, a new data set should aid the development of systems for separating speech signals in reverberant rooms with multiple speakers.
Sep. 16, 2019 - Treating a conversation as a text, and dialogue state tracking as answering questions about the text, enables an 11.75% improvement in accuracy over the best-performing prior system.
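The last item above frames a conversation as a text passage and dialogue state tracking as answering questions about that passage. The published system is a trained reading-comprehension model; the sketch below only illustrates the framing, with an invented keyword heuristic standing in for the QA model (the slot names, cue words, and function names are all assumptions for illustration, not the paper's method):

```python
# Toy sketch of dialogue state tracking (DST) framed as question answering:
# the conversation is the "text", and each slot becomes a question whose
# answer is read off the text. NOTE: a real system would use a trained
# reading-comprehension model; this keyword lookup merely stands in for it.

def answer_from_dialogue(dialogue, cue_words):
    """Return the token that follows the first cue word found, else None."""
    tokens = [t.strip(".,!?") for t in dialogue.lower().split()]
    for i, tok in enumerate(tokens[:-1]):
        if tok in cue_words:
            return tokens[i + 1]
    return None

# Hypothetical slots, each phrased as a question; the cue words are a
# crude proxy for what a QA model would learn to attend to.
SLOT_QUESTIONS = {
    "restaurant-food": ["serving", "serves"],  # "What cuisine does the user want?"
    "restaurant-area": ["near", "around"],     # "Which part of town?"
}

def track_state(dialogue):
    """Pose every slot question against the dialogue text; keep answered slots."""
    state = {}
    for slot, cues in SLOT_QUESTIONS.items():
        answer = answer_from_dialogue(dialogue, cues)
        if answer is not None:
            state[slot] = answer
    return state

dialogue = "User: find me a restaurant serving italian food near cambridge"
print(track_state(dialogue))
# {'restaurant-food': 'italian', 'restaurant-area': 'cambridge'}
```

The appeal of this framing is that slots never seen during training can, in principle, be handled simply by phrasing them as new questions, which is what makes the QA formulation attractive for open-ended dialogue.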
It’s about the opportunity to have impact at scale. There are many roles to explore across science, engineering, and data-driven modeling, spanning every facet of machine learning. Below are examples of peers who are delivering tomorrow’s conversational AI experiences today. You can also review our global job opportunities across the many teams that deliver Alexa experiences, or check out the job opportunities in each city listed below.