Earlier today, Rohit Prasad, vice president and head scientist, Alexa Machine Learning, and I had the pleasure of announcing the winner of the inaugural Alexa Prize, a competition for university students dedicated to accelerating the field of conversational artificial intelligence (AI). Congratulations to team Sounding Board, an inspiring group of students from the University of Washington, whose socialbot earned an average score of 3.17 on a 5-point scale from our panel of independent judges and achieved an average conversation duration of 10 minutes, 22 seconds. As the winner of our inaugural competition, team Sounding Board earned our $500,000 first-place prize, which will be shared among the students.
We also had the privilege of honoring and surprising our other finalists on stage. Our runner-up was team Alquist from Czech Technical University in Prague. We presented them with a $100,000 prize for their efforts. We also awarded our third-place winner, team What’s Up Bot from Heriot-Watt University in Edinburgh, Scotland, with a $50,000 prize.
At the start of the competition, teams were chosen based on several criteria, including potential scientific contribution to the field, the technical merit of their approaches, the novelty of their ideas, and their ability to execute against their plan. Fifteen teams then qualified for our semifinals. Each team’s socialbot continued to improve during this phase of the competition; in fact, average customer ratings improved nearly 15 percent.
Our three finalists were then selected: two based on the highest average ratings by Alexa customers, and one wildcard selected by Amazon. Alexa customers continued to interact with and provide feedback to the three finalists for two months before our panel of independent judges selected a winner. In fact, since June, Alexa Prize has consistently been among the top 10 skills by usage among all third-party skills in our Alexa Skills Store. Throughout the competition, our customers had millions of interactions with the socialbots, totaling more than 40,000 hours of conversation.
Judging a conversational AI competition is hard because conversation is inherently subjective; there isn’t a clear right or wrong response at each turn in a dialog, nor a precise definition of what makes a conversation “coherent” or “engaging”. We invested significant thought and effort into structuring the finals to make them both deep and insightful from a science perspective and fair and unbiased from a judging perspective. The finals were held at our Seattle Day 1 headquarters building over two days in November, and involved professional conversationalists who interacted with the socialbots, as well as five professional judges.
To say that this inaugural competition was a success would be an understatement. Amazon is incredibly grateful to all the student teams who used our Alexa Skills Kit and the AWS cloud to create socialbots for Alexa that took a great step forward in conversing coherently and engagingly with humans on popular topics and news events.
Every team involved helped us advance speech science on several dimensions, from significantly advancing the custom language model (LM) we developed for the competition, to creating numerous natural language understanding components that addressed conversational AI challenges which arise when a conversation can be on any topic and the content of the dialog can change rapidly. Moreover, the teams made important advances related to dialog management and response generation and selection. For example, several teams created ensemble approaches to dialog modeling by employing hierarchical architectures that included a main dialog manager (DM) and multiple, smaller DMs dedicated to specific tasks, topics, or conversational contexts. For generating and selecting responses, several Alexa Prize teams developed novel hybrid approaches that combined generative models with variants of sequence-to-sequence approaches. Other teams utilized a reinforcement learning approach that balanced satisfying the customer immediately against the long-term reward of selecting a particular response.
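To make the hierarchical idea concrete, here is a minimal, hypothetical sketch of such an architecture: a main dialog manager routes each utterance to topic-specific sub-DMs and scores candidate responses by mixing an immediate-satisfaction estimate with a long-term reward estimate. All names and the keyword-based routing are illustrative assumptions, not any team's actual system.

```python
# Illustrative sketch only: a main DM over topic-specific sub-DMs, with a
# response selector that trades off immediate vs. long-term reward.
# Class names, routing logic, and scores are hypothetical.

class TopicDM:
    """A small dialog manager that handles a single topic."""
    def __init__(self, topic, keywords):
        self.topic = topic
        self.keywords = keywords

    def matches(self, utterance):
        # Crude keyword routing stands in for a real topic classifier.
        return any(k in utterance.lower() for k in self.keywords)

    def candidates(self, utterance):
        # A real sub-DM would use retrieval or a generative model here.
        return [f"Let's talk about {self.topic}. What interests you most?"]


class MainDM:
    """Routes each utterance to matching sub-DMs and selects a response."""
    def __init__(self, sub_dms, fallback="Tell me more about that."):
        self.sub_dms = sub_dms
        self.fallback = fallback

    def respond(self, utterance, score_fn):
        candidates = []
        for dm in self.sub_dms:
            if dm.matches(utterance):
                candidates.extend(dm.candidates(utterance))
        if not candidates:
            return self.fallback
        # Pick the highest-scoring candidate response.
        return max(candidates, key=score_fn)


def combined_score(immediate, long_term, alpha=0.6):
    # Weighted mix of immediate satisfaction and estimated long-term
    # reward, in the spirit of the reinforcement-learning approach.
    return alpha * immediate + (1 - alpha) * long_term


if __name__ == "__main__":
    dms = [TopicDM("movies", ["film", "movie"]),
           TopicDM("sports", ["game", "team"])]
    main = MainDM(dms)
    print(main.respond("I saw a great movie yesterday", score_fn=len))
    print(main.respond("hmm", score_fn=len))
```

In a full system, `score_fn` would wrap learned estimators rather than a trivial length heuristic, and the fallback would itself be a general-chat sub-DM.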
These advances and many others are described in the first annual Proceedings of the Alexa Prize that includes papers from 14 Alexa Prize teams.
While we’ve certainly come a long way from Joseph Weizenbaum’s early work on ELIZA back in the ’60s, it’s still Day 1 for conversational AI. Each day we’re delighting customers as they engage with Alexa, but the work by our scientists and engineers to make conversations with her more natural, simple, and fun continues.
And so does our competition…
The student teams in this year’s competition surprised and delighted us, but none was able to meet our Grand Challenge of maintaining a coherent and engaging conversation for 20 minutes. A $1 million research grant prize would have been awarded to the winning team’s university, if the winning team’s socialbot had met this challenge.
So today I am pleased to announce the 2018 Alexa Prize competition. Applications from university teams will open on December 4, 2017, and close on January 8, 2018. Last year, we received more than 100 applications from university teams across 22 countries, and we certainly hope the number of entrants will grow this year. We encourage teams from universities worldwide to check out alexaprize.com to learn more about our competition. More details will be posted on December 4th.
Rohit, I, and everyone involved with Alexa Prize can’t wait for next year’s competition to begin. It’s been a rewarding initial year – for us and for students and faculty – as we work together to create the future of conversational AI.
Ashwin Ram is senior manager, AI Science, Alexa Machine Learning, and leads the Alexa Prize. You can follow Ashwin on Twitter @ashwinram