Alexa Blogs

Showing posts tagged with Announcements

November 14, 2016

Ashwin Ram

On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. We received applications from leading universities across 22 countries. Each application was carefully reviewed by senior Amazon personnel against a rigorous set of criteria covering scientific contribution, technical merit, novelty, and ability to execute. Teams of scientists, engineers, user experience designers, and product managers read, evaluated, discussed, argued, and finally selected the ten teams who would be invited to participate in the competition. Wait, make that twelve; we received so many good applications from graduate and undergraduate students that we decided to sponsor two additional teams.

Today, we’re excited to announce the 12 teams selected to compete with an Amazon sponsorship. In alphabetical order, they are:

  • Carnegie Mellon University: CMU Magnus
  • Carnegie Mellon University: TBD
  • Czech Technical University, Prague: eClub Prague
  • Heriot-Watt University, UK: WattSocialBot
  • Princeton University: Princeton Alexa
  • Rensselaer Polytechnic Institute: BAKAbot
  • University of California, Berkeley: Machine Learning @ Berkeley
  • University of California, Santa Cruz: SlugBots
  • University of Edinburgh, UK: Edina
  • University of Montreal, Canada: MILA Team
  • University of Trento, Italy: Roving Minds
  • University of Washington, Seattle: HuskyBot

These teams will each receive a $100,000 research grant as a stipend, Alexa-enabled devices, free Amazon Web Services (AWS) services to support their development efforts, access to new Alexa Skills Kit (ASK) APIs, and support from the Alexa team. Teams invited to participate without sponsorship will be announced on December 12, 2016.


November 04, 2016

David Isbitski

Today, we unveiled a new way for customers to browse the breadth of the Alexa skills catalog and discover new Alexa skills on Amazon.com. See the experience.

Your Skill is Now on Amazon.com

Now every Alexa skill will have an Amazon.com detail page. Amazon.com detail pages improve discovery, so customers can quickly find skills on Amazon, and enable developers to link customers directly to their skill with a single click. This is the first time that we are offering a pre-login discovery experience for Alexa skills. Before now, customers needed to log in to the Alexa app on their mobile device or browser. Developers can also improve organic discovery by search engines by optimizing their skill detail pages.


Easily Link Directly to Your Skill Detail Page

You can now link directly to your skill’s page on Amazon.com. On the page, customers can take actions like enabling or disabling the skill and linking their accounts. For the first time, you can drive customers directly to your skill detail page to increase discovery and engagement for your own skill. To link directly to your skill, simply navigate to your skill’s page and grab the URL from your browser.


October 31, 2016

Ted Karczewski

People love that they can dim their lights, turn up the heat, and more just by asking Alexa on their Amazon Echo. Now Belkin Wemo has launched new capabilities through the existing Alexa Voice Service (AVS) API, bringing the same smart home voice controls available on the Echo to all third-party products with Alexa. Best of all, your customers can enable the Wemo skill on your device today—no additional development work needed.

Because Alexa is cloud-based, it’s always getting smarter with new capabilities, services, and a growing library of third-party skills from the Alexa Skills Kit (ASK). As an AVS developer, your product gains access to these growing capabilities through regular API updates, feature launches, and custom skills built by our active developer community.

More About Wemo

Belkin makes a variety of high-quality Wemo switches that consumers use to control a number of devices in the home, from floor lamps and ceiling bulbs to fans and home audio speakers. The switches are perfect for beginners and early adopters alike, and now with third-party integration across the family of Amazon and third-party devices with Alexa, your users can have even greater control of their smart homes without lifting a finger. Read more about how Wemo is building a smart ecosystem of connected devices for the home.

Belkin Wemo joins other Amazon Alexa Smart Home partners, such as Philips Hue, SmartThings, Insteon, and Wink, in enabling voice control in third-party devices with Alexa.

Learn more about the Alexa Voice Service, its features, and design use cases.

Have questions? We are here to help. Visit us on the AVS Forum or Alexa GitHub to speak with one of our experts.

AVS is coming soon to the UK and Germany. Read the full announcement here.

October 28, 2016

Dean Bryen

We recently announced support for Alexa in two new languages, English (UK) and German. In order to easily add all three supported languages to your skills, we have updated the Alexa SDK for Node.js. We’ve also updated our Fact, Trivia and How To skill samples to include support for all three languages using the new SDK feature. You can find these updated samples over at the Alexa GitHub.

Fact – This template helps you create a skill similar to “Fact of the Day” or “Joke of the Day.” You just need to come up with a fact idea (like “Food Facts”) and plug your fact list into the sample provided.

Trivia – With this template you can create your own trivia skill. You just need to come up with a content idea (like “Santa Claus Trivia”) and plug your content into the sample provided.

How To – This template enables you to parameterize what the user says and map it to a content catalog. For example, a user might say "Alexa, ask Aromatherapy for a recipe for focus" and Alexa would map the word "focus" to the correct oil combination in the content catalog.
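
To make that slot-to-catalog mapping concrete, here is a minimal sketch in the style of the Alexa SDK for Node.js. The intent name, the Item slot, and the recipes object are illustrative placeholders rather than the sample's actual identifiers.

```javascript
'use strict';
const Alexa = require('alexa-sdk');

// Hypothetical content catalog: spoken keywords mapped to responses.
const recipes = {
    focus: 'three drops of rosemary and two drops of peppermint',
    sleep: 'four drops of lavender and one drop of chamomile'
};

const handlers = {
    'RecipeIntent': function () {
        // Read the word the user spoke (e.g. "focus") from the Item slot.
        const item = (this.event.request.intent.slots.Item.value || '').toLowerCase();
        const recipe = recipes[item];
        if (recipe) {
            this.emit(':tell', 'For ' + item + ', try ' + recipe + '.');
        } else {
            this.emit(':tell', 'Sorry, I do not have a recipe for ' + item + ' yet.');
        }
    }
};

exports.handler = function (event, context, callback) {
    const alexa = Alexa.handler(event, context);
    alexa.registerHandlers(handlers);
    alexa.execute();
};
```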

If you are not familiar with the existing SDK or have not previously created a skill, you can reference the fact skill tutorial or read the SDK Getting Started Guide before continuing.

How it works

Let’s take a look at the new version of the fact skill, and walk through the added multi-language support. You can find the entire skill code here.

The resource object

The first thing that you will notice is that we now define a resource object when configuring the Alexa SDK. We do this by adding this line within our skill handler:

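In the published sample, that line is `alexa.resources = languageStrings;`, which attaches the resource object to the SDK handler. A minimal sketch with abbreviated placeholder strings (the complete version is in the updated fact skill sample on the Alexa GitHub) looks roughly like this:

```javascript
'use strict';
const Alexa = require('alexa-sdk');

// Per-locale resource bundles; the SDK picks the right one from the request's locale.
const languageStrings = {
    'en-US': { translation: { SKILL_NAME: 'Space Facts', GET_FACT_MESSAGE: "Here's your fact: ",
                              FACTS: ['A year on Mercury is just 88 days long.'] } },
    'en-GB': { translation: { SKILL_NAME: 'Space Facts', GET_FACT_MESSAGE: "Here's your fact: ",
                              FACTS: ['A year on Mercury is just 88 days long.'] } },
    'de-DE': { translation: { SKILL_NAME: 'Weltraumwissen', GET_FACT_MESSAGE: 'Hier ist dein Fakt: ',
                              FACTS: ['Ein Jahr auf dem Merkur dauert nur 88 Tage.'] } }
};

const handlers = {
    'GetNewFactIntent': function () {
        // this.t() resolves a key against the locale-appropriate bundle above.
        const facts = this.t('FACTS');
        const fact = facts[Math.floor(Math.random() * facts.length)];
        this.emit(':tellWithCard', this.t('GET_FACT_MESSAGE') + fact, this.t('SKILL_NAME'), fact);
    }
};

exports.handler = function (event, context, callback) {
    const alexa = Alexa.handler(event, context);
    alexa.resources = languageStrings;   // the line that enables multi-language support
    alexa.registerHandlers(handlers);
    alexa.execute();
};
```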

October 25, 2016

Marion Desmazieres

The Alexa team is excited to be collaborating with Udacity on a new Artificial Intelligence Nanodegree program. Udacity is a leading provider of cutting-edge online learning, with a focus on in-demand skills in innovative fields such as Machine Learning, Self-Driving Cars, Virtual Reality, and Artificial Intelligence.  

“The Alexa team is dedicated to accelerating the field of conversational artificial intelligence. Udacity’s new nanodegree for AI engineers is aligned with our vision to advance the industry. We’re excited for students to learn about our work at Amazon and to build new skills for Alexa as part of the course.”

– Rohit Prasad, VP & Head Scientist, Alexa

Learn more about the Artificial Intelligence Nanodegree program in this guest post by Christopher Watkins, Senior Writer at Udacity.

Few topics today are as compelling as artificial intelligence. From ethicists to artists, physicians to statisticians, roboticists to linguists, everyone is talking about it, and there is virtually no field that stands apart from its influence. That said, there is still so much we don’t know about the future of artificial intelligence. But, that is honestly part of the excitement!

What we DO know is that world-class, affordable AI education is still very hard to come by, which means unless something changes, and unless new learning opportunities emerge, the field will suffer from a lack of diverse, global talent.

Fortunately, something IS changing. We are so excited to announce the newest offering from Udacity, the Artificial Intelligence Nanodegree program!

“This is truly a global effort, with global potential. We believe AI will serve everyone best if it’s built by a diverse range of people.” —Sebastian Thrun (Founder, Udacity)

With the launch of this program, virtually anyone on the planet with an Internet connection (and the relevant background and skills) will be able to study to become an AI engineer. If AI is the future of computer science—and it is—then our goal is to ensure that everyone who wishes to be a part of this future can do so. We want to see every aspiring AI engineer find a job and advance their career in this extraordinary field.

Apply to the Artificial Intelligence Nanodegree program today!

Collaborating With Industry Leaders

To help achieve these goals, we are collaborating with an amazing roster of industry-leading companies, including Amazon Alexa, IBM Watson, and Didi Chuxing. In order to provide our students with the highest quality, most cutting-edge curriculum possible, we are building the Artificial Intelligence Nanodegree program in close partnership with IBM Watson. To support the career goals of our students, we have also established hiring partnerships with both IBM Watson and Didi Chuxing.

Amazon Alexa is the voice service that powers Amazon Echo and enables people to interact with the world around them in a more intuitive way using only their voice. Through a series of free, self-service, public APIs, developers, companies, and hobbyists can integrate Alexa into their products and services, and build new skills for Alexa, creating a seamless way for people to interact with technology on a daily basis.  


October 24, 2016

Glenn Cameron

We are happy to announce the Amazon Alexa API Mashup Contest, our newest challenge with Hackster.io. To compete, you’ll build a compelling new voice experience by connecting your favorite public API to Alexa, the brain behind millions of Alexa-enabled devices, including Amazon Echo. The contest will award prizes for the most creative and most useful API mashups.

Create great skills that report on ski conditions, connect to local businesses, or even read recent messages from your Slack channel. If you have an idea for something that should be powered by voice, build the Alexa skill to make it happen. APIs used in the contest should be public. If you are not sure where to start, you can check out this list of public APIs on GitHub.

Need Real-World Examples?

  • Ask Twitter for trends.
  • Ask Automatic if you need gas.
  • Ask Hurricane Center what are the current storms.
  • Ask Area Code where is eight six zero.
  • Ask Uber to request a ride.

How to Win

Submit your API mashup projects to the Alexa API Mashup Contest on Hackster for a chance to win. You don't need an Echo (or any other hardware) to participate. Plus, if you place in the contest, we’ll give you an Echo (and a bunch of other stuff!).

We’re looking for the most creative and most useful API mashups. A great contest submission will tell a great story, have a target audience in mind, and make people smile.

There will be three winners in each of two categories: 1) the most creative API mashup and 2) the most useful API mashup.

  • First place will get a trophy, an Amazon Echo, an Echo Dot, an Amazon Tap, and a $1,500 gift card.
  • Second place will get a trophy, an Amazon Echo, and a $1,000 gift card.
  • Third place will get a trophy, an Amazon Echo, and a $500 gift card.

The first 50 people to publish skills in both Alexa and the Hackster contest page (other than winners of this contest) will receive a $100 gift card. And everyone who publishes an Alexa skill can get a limited edition Alexa developer t-shirt.

Get started by visiting Hackster.io and signing up to participate.

About the Alexa Skills Kit

The Alexa Skills Kit (ASK) enables developers to easily build capabilities, called skills, for Alexa. ASK includes self-service APIs, documentation, templates, and code samples to get developers on a rapid road to publishing their Alexa skills. For the Amazon Alexa API Mashup Contest, we will award prizes to the developers who create the most creative and the most useful API mashups using ASK components.

October 21, 2016

David Isbitski

Today, we’re excited to announce that Alexa VP and Head Scientist Rohit Prasad will present a State of the Union on Alexa and recent advances in conversational AI at AWS re:Invent 2016. The Alexa team will also offer six hands-on workshops to teach developers how to build voice experiences. AWS re:Invent 2016 is the largest gathering of the global Amazon developer community and runs November 28 through December 2, 2016.

AWS re:Invent registered attendees can now reserve spots in sessions and workshops online. You can register for Alexa sessions now.

State of the Union: Alexa and Recent Advances in Conversational AI

Alexa VP and Head Scientist Rohit Prasad will present the state of the union for Amazon Alexa at AWS re:Invent 2016. He’ll address advances in spoken language understanding and machine learning in Alexa, and share how Amazon thinks about building the next generation of user experiences. Learn how Amazon is using machine learning and cloud computing to help fuel innovation in AI, making Alexa smarter every day. The session is on Wednesday, November 30, 2016 from 1-2 pm.

Get Hands On: Learn to Build Alexa Products and Experiences in Alexa Workshops

Today we also announced that the Alexa team will run six workshops to teach developers how to build Alexa experiences with the Alexa Skills Kit and the Alexa Voice Service.

Workshop: Creating Voice Experiences with Alexa Skills: From Idea to Testing in Two Hours (3 sessions)
This workshop teaches you how to build your first voice skill with Alexa. You bring a skill idea and we’ll show you how to bring it to life. The workshop walks you through building an Alexa skill, including setting up Node.js, implementing an intent, deploying to AWS Lambda, and registering and testing a skill. You’ll walk out of the workshop with a working prototype of your skill idea.

Workshop: Build an Alexa-Enabled Product with Raspberry Pi (3 sessions)
Fascinated by Alexa, and want to build your own device with Alexa built in? This workshop will walk you through how to build your first Alexa-powered device step by step, using a Raspberry Pi. No experience with Raspberry Pi or the Alexa Voice Service is required. We will provide you with a Raspberry Pi and the software required to build this project, and at the end of the workshop you will be able to walk out with a working prototype of Alexa on a Pi. Please bring a Wi-Fi-capable laptop.

Alexa Technical Sessions

The Alexa track at AWS re:Invent will dive deep into the technology behind the Alexa Skills Kit and the Alexa Voice Service, with a special focus on using AWS Services to enable voice experiences. We’ll cover AWS Lambda, DynamoDB, CloudFormation, Cognito, Elastic Beanstalk and more. You’ll hear from senior engineers, solution architects and Alexa evangelists and learn best practices from early Alexa developers.


October 17, 2016

Ted Karczewski

People love that they can dim their lights, turn up the heat, and more just by asking Alexa on their Amazon Echo. Now Philips Hue has launched a new cloud-based Alexa skill, bringing the same smart home voice controls available on the Echo to all Alexa-enabled third-party products built with the Alexa Voice Service API. Best of all, your customers can enable the new Hue skill today—no additional development work needed.

Because Alexa is cloud-based, it’s always getting smarter with new capabilities, services, and a growing library of third-party skills from the Alexa Skills Kit (ASK). As an AVS developer, your Alexa-enabled product gains access to these growing capabilities through regular API updates, feature launches, and custom skills built by our active developer community.

Now with Philips Hue capabilities, your end users can voice control all their favorite smart home devices just by asking your Alexa-enabled product. You can test the new Philips Hue skill for yourself by building your own Amazon Alexa prototype and trying these sample utterances:

  • Alexa, turn on the kitchen light.
  • Alexa, dim the living room lights to 20%.                                                

End users can enable the new Philips Hue skill in the “Smart Home” section of the Amazon Alexa app.

More About Philips Hue

Philips Hue offers customizable, wireless LED lighting that can be controlled by voice across the family of Amazon Alexa products. Now with third-party integration, your users will be able to turn on and off their lights, change lighting color, and more from any room in the house just by asking your Alexa-enabled third-party product. The new Philips Hue skill also includes support for Scenes, allowing Alexa customers to voice control Philips Hue devices assigned to various rooms in the house.

Whether end users have an Echo in the kitchen or an Alexa-enabled product in the living room, they can now voice control Philips Hue products from more Alexa-enabled devices across their home. Learn more about the Smart Home Skill API and how to build your own smart home skill.


October 14, 2016

Thom Kephart

We participate in a number of events across the globe throughout the year – and we’d love to see you at the next one.

To stay tuned to the latest events near you, check out our new events page. There you’ll be able to find information about hackathons where you can get hands-on education and build Alexa skills, conferences and presentations where you can join the conversation and meet Alexa team members, as well as community-run meetups where you can connect with fellow developers.

Bookmark the events page today, register for one near you, and we’ll see you there.


October 13, 2016

David Isbitski

The beta is now closed. Sign up to be notified when the List Skill API is publicly available.

Today we announced a limited participation beta for the List Skill API, a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. Developers can now teach Alexa how to interface with their list applications so that customers can simply say, “Alexa, add bananas to my Shopping List” or “Alexa, add ‘Go for a Jog’ to my To-do list.” The List Skill API taps into Amazon’s standardized language model so you don’t have to build a voice interaction model to handle customer requests. You create skills that connect your applications directly to Alexa’s Shopping and To-do list capabilities so that customers can add or review items on their lists—without lifting a finger.

How it works

The List Skill API has a bi-directional interface that ensures lists are updated across all channels. That means the API notifies developers when a customer tells Alexa to add something to their list or makes a change to an existing item. Alexa understands the user’s speech request, converts it to a To-do or Shopping item, and sends you a notification with the new item that was added to the list. The List Skill API also updates the lists for Alexa when users make changes to their lists online or in your mobile application.
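
The beta's exact event schema isn't reproduced in this post, so the sketch below is purely illustrative: the ItemsCreated event name, the payload fields, and the syncToPartnerList helper are hypothetical stand-ins meant only to show the notification-driven, bi-directional flow described above.

```javascript
'use strict';

// Hypothetical AWS Lambda handler for List Skill API notifications.
// Event names and payload fields below are placeholders, not the beta API's actual schema.
exports.handler = (event, context, callback) => {
    if (event.type === 'ItemsCreated') {
        // Alexa tells us a customer added an item by voice, e.g. "bananas" on the Shopping list.
        const { listId, items } = event.payload;
        syncToPartnerList(listId, items)
            .then(() => callback(null, { status: 'ok' }))
            .catch(callback);
    } else {
        callback(null, { status: 'ignored' });
    }
};

// Placeholder for pushing the change into the developer's own list application,
// keeping both sides of the bi-directional interface in sync.
function syncToPartnerList(listId, items) {
    return Promise.resolve();
}
```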

Customers are increasingly using voice interfaces as a hands-free way to manage their lives. By using Alexa’s built-in Shopping and To-do lists to keep track of items to buy and things to do, customers on millions of Alexa-enabled devices only have to ask, and managing their lists often becomes a daily habit. By integrating with the List Skill API, you will make it easier for your existing customers to keep track of their important tasks and shopping items in the home, and introduce your brand to a new group of Alexa customers.

Here's what developers are saying

Today we announced that Any.do and Todoist created the first skills using the List Skill API. 

“We’ve been huge fans of Alexa for a long time. Once the opportunity to work with Alexa in a deep way presented itself, we were extremely happy to push it forward,” says Omer Perchik, the Founder and CEO of Any.do. “The work with the new Alexa List Skill API was simple and straightforward, and our experience as a beta participant was smooth due to the support from Amazon.”

“At Todoist, we're very excited about the potential of AI and AI-powered services. Amazon’s Alexa is one of the earliest and best examples of making this technology useful in people's everyday lives,” says Doist founder and CEO Amir Salihefendic. “That's why we're thrilled to have collaborated with the Amazon team as part of their limited participation beta for the Alexa List Skill API. We’re sure our customers will find Alexa extremely helpful in staying organized and productive, and we're looking forward to working with Amazon to make the Todoist skill even more useful as Alexa continues to evolve and get smarter.”

Get started now

Going forward, we’re excited to open the List Skill API to more developers as part of our limited participation beta.

For more information about getting started with the Alexa Skills Kit and to apply to participate in the List Skill API beta, check out the following additional assets:

About the List Skill API
Alexa Dev Chat Podcast
Alexa Training with Big Nerd Ranch
Alexa Skills Kit (ASK)
Alexa Developer Forums

-Dave (@TheDaveDev)

October 06, 2016

Ted Karczewski

What makes the Amazon Echo so appealing is the fact that customers can control smart home devices, access news and weather reports, stream music, and even hear a few jokes just by asking Alexa. It’s simple and intuitive.

We’re excited to announce an important Alexa Voice Service (AVS) API update that now enables you to build voice-activated products that respond to the “Alexa” wake word. The update includes new hands-free speech recognition capabilities and a “cloud endpointing” feature that automatically detects end-of-user speech in the cloud. Best of all, these capabilities are available through the existing v20160207 API—no upgrades needed.
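
For context, hands-free capture on AVS is signaled through the speech-recognition event's profile. A rough sketch of the JSON portion of such an event is shown below; the IDs are placeholders, and the exact fields should be confirmed against the v20160207 API reference.

```json
{
  "event": {
    "header": {
      "namespace": "SpeechRecognizer",
      "name": "Recognize",
      "messageId": "example-message-id",
      "dialogRequestId": "example-dialog-request-id"
    },
    "payload": {
      "profile": "NEAR_FIELD",
      "format": "AUDIO_L16_RATE_16000_CHANNELS_1"
    }
  }
}
```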

You can learn more about various use cases in our designing for AVS documentation.

Get Started with Our New Raspberry Pi Project

To help you get started quickly, we are releasing a new hands-free Raspberry Pi prototyping project with third-party wake word engines from Sensory and KITT.AI. Build your own wake word enabled, Amazon Alexa prototype in under an hour by visiting the Alexa GitHub.

And don’t forget to share your finished projects on Twitter using #avsDevs. AVS Evangelist Amit Jotwani and team will be highlighting our favorite projects, as well as publishing featured developer interviews, on the Alexa Blog. You can find Amit on Twitter here: @amit.

Learn more about the Alexa Voice Service, its features, and design use cases. See below for more information on Alexa and the growing family of Alexa-enabled products and services:

Alexa Developer Resources
Alexa Voice Service (AVS)
Alexa Skills Kit (ASK)
The Alexa Fund
AVS Developer Forums
Alexa on a Raspberry Pi

Alexa-Enabled Devices
Triby
CoWatch
Pebble Core
Nucleus

Amazon Alexa Devices
Amazon Echo
Amazon Echo Dot
Amazon Tap
Amazon Fire TV
Amazon Fire TV Stick

Have questions? We are here to help. Visit us on the AVS Forum or Alexa GitHub to speak with one of our experts.

AVS is coming soon to the UK and Germany. Read the full announcement here.


October 03, 2016

David Isbitski

Today we are introducing the Flash Briefing Skill API, a new addition to the Alexa Skills Kit that enables developers to add feeds to Flash Briefing on Alexa, which delivers pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask, “Alexa, what’s my Flash Briefing?” to hear your content.

The Flash Briefing Skill API is free to use. Get Started Now >

Creating Your Skill with the Flash Briefing Skill API

To get started, you’ll configure a JSON or RSS feed and submit descriptive information about your skill in the portal. This can be done through the following steps:

1. Register for a free Amazon Developer Account if you have not done so already, and navigate to the Alexa Skills Kit box in the Alexa menu here.

2. Click Add a New Skill.

3. Select Flash Briefing Skill API, fill out a name, and then click Next.

4. Unlike custom skills, the interaction model for Flash Briefing skills is generated for you automatically; simply click Next.

5. Now define your content feed(s). Your Flash Briefing skill can include one or more defined feeds. Then, click the Add new feed button.

6. Enter information about your content feed, including its name, how often the feed will be updated, the content type (audio or text), the genre, an icon, and the URL where you are hosting the feed (a sample feed sketch follows this list).

7. Repeat these steps for each feed you wish to include in the skill. The first feed you add will automatically be marked as the default feed. If you add more feeds, you can choose which one is the default by selecting it in the Default column.

8. Click Next when you are finished adding feeds and are ready to test your skill.
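
For reference, a text-type JSON feed compatible with Flash Briefing is a small document hosted at the URL you enter in step 6. The sketch below shows commonly used fields with placeholder values; check the Flash Briefing skill documentation for the exact schema (audio feeds point to a hosted audio stream instead of main text).

```json
{
  "uid": "urn:example:flash-briefing:item-1",
  "updateDate": "2016-10-03T09:00:00.0Z",
  "titleText": "Example briefing title",
  "mainText": "The text Alexa reads aloud for this briefing item.",
  "redirectionUrl": "https://www.example.com/briefings/1"
}
```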

For additional information check out the Steps to Create a Flash Briefing Skill page here.


September 30, 2016

Michael Palermo

Today we are happy to announce support for scenes, a new feature in Alexa skills developed with the Smart Home Skill API. With scenes, customers can issue a single voice command to an Alexa-enabled device such as the Amazon Echo or Echo Dot to set a predefined configuration of one or more devices to a desired state. For example, a customer could say, “Alexa, turn on bedtime,” resulting in specific lights turning off, a bedroom light changing to a dim orange hue, a ceiling fan turning on, and the temperature changing to an ideal setting for sleep.

At first glance scenes might appear similar to the groups feature found in the Smart Home section of the Alexa app as both allow control over multiple devices with one voice command. However, scenes differ from groups in the following ways:

  • Scenes allow each device configured within it to be set to a desired state, whereas groups are stateless and simply turn devices on or off.
  • Scenes are configured by customers through a device manufacturer’s app, whereas groups are configured in the Alexa app.
  • Scenes only contain devices managed by the device manufacturer’s app, whereas groups can contain any device discovered in the Alexa app.

With scenes, customers have an alternative to groups for controlling multiple devices. Customers may already have scenes configured in device manufacturer apps such as those provided by Control4, Crestron, Insteon, Lutron Caseta, SmartThings, or Wink. Prior to today, these scenes were invoked using the device manufacturer’s app. Now, after requesting device discovery, customers can find these scenes listed as devices in their Alexa app and control them by voice.

How Scenes Work

Figure 1: Scene control process


Once a customer has configured a scene through the device manufacturer’s app and asks Alexa to discover devices, the scene name will appear in the device list in the Alexa app. Consider what happens from a developer’s perspective when a voice command is made to turn a scene on.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on bedtime.”
  2. Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed including the ‘TurnOnRequest’ name in the directive header and the appliance ID (located in directive payload) corresponding to the friendly name of the scene “bedtime.”
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token identifying the customer’s account making the request. A call is made to the device cloud API to turn on the scene matching the appliance ID for the associated customer (see the sketch after this list).
  5. The device cloud (likely owned by the device maker) receives a request from the skill adapter, and communicates to a device hub or controller to turn on the scene preconfigured by the customer.
  6. The device hub sets the desired state of each device configured by the customer. Note in this “bedtime” example, turning on a scene may result in turning off a light, since this could be the desired state of that device for the scene.
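
To make steps 3 and 4 more concrete, here is a rough sketch of a skill adapter in AWS Lambda handling a scene directive. The appliance ID, token, and device-cloud call are placeholders, and the exact directive envelope should be checked against the Smart Home Skill API reference.

```javascript
'use strict';

// Sketch of a Smart Home skill adapter handling step 4 for a scene.
exports.handler = (directive, context, callback) => {
    const { header, payload } = directive;

    if (header.namespace === 'Alexa.ConnectedHome.Control' && header.name === 'TurnOnRequest') {
        // payload.accessToken identifies the customer; payload.appliance.applianceId is the
        // scene (e.g. "bedtime") that Alexa matched from the friendly name the customer spoke.
        activateSceneInDeviceCloud(payload.accessToken, payload.appliance.applianceId)
            .then(() => callback(null, {
                header: {
                    namespace: 'Alexa.ConnectedHome.Control',
                    name: 'TurnOnConfirmation',
                    payloadVersion: '2',
                    messageId: 'example-response-id'   // a fresh ID for the response
                },
                payload: {}
            }))
            .catch(callback);
    } else {
        callback(new Error('Unsupported directive: ' + header.name));
    }
};

// Placeholder for the call to the device cloud (step 5) that triggers the preconfigured scene.
function activateSceneInDeviceCloud(accessToken, applianceId) {
    return Promise.resolve();
}
```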

September 29, 2016

Ashwin Ram

Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI. Human conversation requires the ability to understand the meaning of spoken language, relate that meaning to the context of the conversation, create a shared understanding and world view between the parties, model discourse and plan conversational moves, maintain semantic and logical coherence across turns, and generate natural speech.

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate in the Alexa Prize (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to converse coherently and engagingly with humans, holding a fun, high-quality conversation on popular topics for 20 minutes.

Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Alexa users will experience truly novel, engaging conversational interactions.

Up to ten teams of students will be selected to receive a $100,000 research grant as a stipend, Alexa-enabled devices, free AWS services to support their development efforts, and support from the Alexa Skills Kit (ASK) team. Additional teams not eligible for funding may be invited to participate. University teams can submit their applications between September 29 and October 28, 2016, here. The competition will officially start on November 14, 2016 and run until November 2017, concluding with an award ceremony to be held at AWS re:Invent in Las Vegas, NV.

As we say at Amazon, this is Day 1 for conversational AI. We are excited to see where you will go next, and to be your partners in this journey. Good luck to all of the teams.

Learn more about Alexa Prize.

September 21, 2016

Ted Karczewski

Last month, we announced the launch of Nucleus, the smart home intercom that’s always getting smarter with Alexa. Designed to bring families closer together, Nucleus makes two-way video conferencing between rooms, homes, and mobile devices instantaneous. Following the successful launch of Nucleus on Amazon.com and in hundreds of Lowe’s home improvement stores throughout the US, we’re excited to announce that the Alexa Fund has led a $5.6 million Series A investment round in Nucleus, with additional participation from BoxGroup, Greylock Partners, FF Angel (Founders Fund), Foxconn, and SV Angel.

“It’s incredible to receive this level of support in such a short period of time,” said Jonathan Frankel, co-founder and CEO of Nucleus. “It speaks to the importance of our shared vision: Bringing families closer together through intuitive and intelligent interfaces. Amazon has been a stand-out supporter since day one and recognizes the value Nucleus is bringing to families nationwide, and the rapid market traction we’re seeing within our growing community.”

The Alexa Fund provides up to $100 million in venture capital funding to fuel voice technology innovation. We believe experiences designed around the human voice are a more natural way for people to interface with technology. Nucleus combines ease-of-use and the Alexa Voice Service (AVS) to create an intuitive voice experience where customers can stream music, access custom Alexa skills, and more just by asking Alexa. Nucleus joins past Alexa Fund recipients Luma, Sutro, Invoxia, Musaic, Rachio, Scout Alarm, Garageio, Toymail, Dragon Innovation, MARA, Mojio, TrackR, KITT.AI, DefinedCrowd, and Ring.

Nucleus is the first touchscreen device to incorporate AVS, making it easy for customers to stream music, control smart home products such as SmartThings, Insteon and Wink, and access the library of 3,000 Alexa skills. Read more about how Nucleus and the Alexa Voice Service (AVS) worked together to bring the company’s smart video intercom system to life in this morning’s featured developer spotlight interview.

Nucleus is available for purchase on Amazon.com.
Build your own skill for Alexa and the growing family of Alexa-enabled devices with the Alexa Skills Kit.

[Read More]
