In 2012, a “Down Under” team from Melbourne, Australia recognized LED lighting had finally reached a tipping point. LED technology was the most efficient way to create light, and affordable enough to pique consumers’ interest in bringing colored lighting to the home. And LIFX was born.
John Cameron, vice president, says LIFX launched as a successful Kickstarter campaign. From its crowd-funded beginnings, it has grown into a leading producer and seller of smart LED light bulbs. With headquarters in Melbourne and Silicon Valley, its bulbs brighten households in 80 countries around the globe.
Cameron says LIFX makes the world’s brightest, most efficient and versatile Wi-Fi LED light bulbs. The bulbs fit standard light sockets, are dimmable and can emit 1,000 shades of white light. The color model adds 16 million colors to accommodate a customer’s every mood.
Until 2015, LIFX customers controlled their smart bulbs using smartphone apps. Customers could turn them on or off by name, dim or brighten them, and select the color of light. They could also group the devices to control an entire room of lights at once. Advanced features let customers create schedules, custom color themes, even romantic flickering candle effects.
Without the phone, though, customers had no control.
Like Amazon, the LIFX team knew the future of customer interfaces lay in voice control. “We’re always looking for ways to let customers control [their lights] without hauling out their phone,” said Cameron. “When Alexa came along, it took everybody by storm.”
“That drove us to join Amazon's beta program for the Alexa Skills Kit (ASK),” says Daniel Hall, LIFX’s lead cloud engineer. Hall says the ASK documentation and APIs were easy to understand, making it possible for them to implement the first version of the LIFX skill in just two weeks. By the end of March 2015, LIFX had certified the skill and was ready to publish. The skill let customers control their lights just by saying “Alexa, tell ‘Life-ex’ to…”
Since the LIFX skill launch, ASK has added custom slots, a simpler and more accurate way of conveying customer-defined names for bulbs and groups of bulbs. Hall says custom slots are something LIFX would be interested in implementing in the future.
If you’ve already created your first Alexa Skill, you may be using local environments, the AWS CLI, and other DevOps processes. This blog post is for advanced developers who want to level up skill creation by adding some automation, version control, and repeatability to skill deployments.
In this post we're going to programmatically create our skill backend using AWS CloudFormation. CloudFormation is an AWS service that enables you to describe your AWS resources as a JSON file; these JSON files can later be ‘executed’ to stand up and tear down your AWS environments. This gives us a number of benefits, including version control and repeatability. You can read more about AWS CloudFormation in general in the AWS developer docs. To put this into context, when looking at the Alexa Skills Kit architecture below, the resources in the red box are what we will create within our CloudFormation template.
The CloudFormation template is a JSON object that describes our infrastructure. It consists of three components.
Parameters – the input parameters we want to inject into our template, such as ‘function-name’.
Resources – the AWS resources that make up our skill backend, such as the Lambda function.
Outputs – any information we would like to retrieve from the resources created in our CloudFormation stack, such as the Lambda function ARN.
The template that we will create in this tutorial can be used as a starting point to create the backend for any of your Alexa skills.
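As a sketch of where we’re headed, a minimal template containing all three components might look like the following. The parameter names, runtime version, and inline handler are placeholder choices you would replace with your own skill code and IAM role.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Backend resources for an Alexa skill",
  "Parameters": {
    "FunctionName": {
      "Type": "String",
      "Description": "Name of the Lambda function that backs the skill"
    },
    "LambdaRoleArn": {
      "Type": "String",
      "Description": "ARN of an existing IAM role the function will assume"
    }
  },
  "Resources": {
    "SkillFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": { "Ref": "FunctionName" },
        "Handler": "index.handler",
        "Runtime": "nodejs4.3",
        "Role": { "Ref": "LambdaRoleArn" },
        "Code": {
          "ZipFile": "exports.handler = function(event, context, callback) { callback(null, {}); };"
        }
      }
    }
  },
  "Outputs": {
    "SkillFunctionArn": {
      "Value": { "Fn::GetAtt": ["SkillFunction", "Arn"] },
      "Description": "ARN to paste into the skill's service endpoint configuration"
    }
  }
}

You could then create the stack with the AWS CLI, for example: aws cloudformation create-stack --stack-name my-skill-backend --template-body file://skill-backend.json --parameters ParameterKey=FunctionName,ParameterValue=mySkillFunction ParameterKey=LambdaRoleArn,ParameterValue=<your-role-arn>, and read the function ARN from the stack’s outputs.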
What makes the Amazon Echo so appealing is the fact that customers can control smart home devices, access news and weather reports, stream music, and even hear a few jokes just by asking Alexa. It’s simple and intuitive.
We’re excited to announce an important Alexa Voice Service (AVS) API update that now enables you to build voice-activated products that respond to the “Alexa” wake word. The update includes new hands-free speech recognition capabilities and a “cloud endpointing” feature that automatically detects end-of-user speech in the cloud. Best of all, these capabilities are available through the existing v20160207 API—no upgrades needed.
You can learn more about various use cases in our designing for AVS documentation.
To help you get started quickly, we are releasing a new hands-free Raspberry Pi prototyping project with third-party wake word engines from Sensory and KITT.AI. Build your own wake word enabled, Amazon Alexa prototype in under an hour by visiting the Alexa GitHub.
And don’t forget to share your finished projects on Twitter using #avsDevs. AVS Evangelist Amit Jotwani and team will be highlighting our favorite projects, as well as publishing featured developer interviews, on the Alexa Blog. You can find Amit on Twitter here: @amit.
Learn more about the Alexa Voice Service, its features, and design use cases. See below for more information on Alexa and the growing family of Alexa-enabled products and services:
AVS is coming soon to the UK and Germany. Read the full announcement here.
In this article, we’ll review two concepts: 1) separating content from logic and 2) using the locale attribute to serve the right content to the right users.
As an example, I’ve made a new skill: Classical Guitar Facts (using this template), which has content in both English and German. Although one might assume that I could get away with US English in the UK, differences in spelling and word choice will show up in the cards within the Alexa app, and this is not the best user experience. So, we’ll create content files in three separate folders, one per language, as shown below.
Moving the content out of index.js means that I’ve copied the FACTS array into separate files, saved as de-facts.js, gb-facts.js, and us-facts.js respectively. Remember that the last item in the FACTS array does not have a comma at the end. Also, remember to include the last line of each file, “module.exports = FACTS;”; otherwise the calling file (index.js) won’t be able to find the array.
var FACTS = [
    "The strings of guitars are often called gut strings because…",
    "…",
    "…"
];
module.exports = FACTS;
At the top of the index.js file, we need to declare the FACTS variable:
var FACTS = [ ];
so that we can call it later like this:
FACTS = require('./content/en-US/us-facts.js');
Of course, we can replace en-US/us-facts.js with en-GB/gb-facts.js or de-DE/de-facts.js when needed. Now we’re well organized to swap separate content files based on language – but how do we know which language is calling our service?
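One way to answer that question, sketched below: the incoming request carries a locale attribute (for example “en-US”, “en-GB”, or “de-DE”), which we can use to require the matching content file. The property path shown assumes the raw Lambda event; adjust it if a framework wraps the request for you.

var FACTS = [];

exports.handler = function (event, context, callback) {
    // The request's locale tells us which language the customer's device is set to.
    var locale = (event.request && event.request.locale) || 'en-US';

    if (locale === 'de-DE') {
        FACTS = require('./content/de-DE/de-facts.js');
    } else if (locale === 'en-GB') {
        FACTS = require('./content/en-GB/gb-facts.js');
    } else {
        FACTS = require('./content/en-US/us-facts.js');
    }

    // ...hand the request off to the rest of the skill logic, which reads from FACTS...
};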
Today’s guest blog post is from Monica Houston, who leads the Hackster Live program at Hackster. Hackster is dedicated to advancing the art of voice user experience through education.
Even though it’s a sunny Saturday morning, men, women, and perhaps a few teens filter into a big room, laptops in hand, ready to build Alexa skills. They’re here to change the future of voice user experience.
Hackster, the community for open source hardware, has run 12 events with Amazon Alexa this year and 13 more are in the planning stages. All 25 events are organized by Hackster Ambassadors, a group of women and men hand-picked from Hackster’s community for their leadership skills, friendliness, and talent for creating projects.
Hackster Ambassadors pour their time and energy into helping to evangelize Alexa. Ambassador Dan Nagle of Huntsville, Alabama, created a website where you can find Hackster + Alexa events by city. Ambassador Paul Langdon set up a helpful GitHub page where you can see skills that were published at the event he ran in Hartford. He also volunteered his time and knowledge to run a series of “office hours” to help people develop their skills.
While Hackster provides venues and catering for these events and Hackster Ambassadors spread the word to their communities, Amazon sends a Solution Architect to teach participants how to build skills for Alexa and answer questions.
Amazon Solutions Architects go above and beyond to help people submit their skills for certification. Not only do they answer questions on Hackster’s developer slack channel, they also have hosted virtual “office hours,” run webinars, and conducted two “slackathons” with Hackster’s community.
Although the 25 Alexa events are being held in US cities, Hackster Live is a global program with 30 international Ambassadors. Hackster shipped Amazon Echos to our Ambassadors in South America, Asia, Africa, and Europe. Virtual events like slackathons and webinars run by Solutions Architects make it possible for people from around the world to learn skill building and add to the conversation.
Today we are introducing the Flash Briefing Skill API, a new addition to the Alexa Skills Kit that enables developers to add feeds to Flash Briefing on Alexa, which delivers pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.
The Flash Briefing Skill API is free to use. Get Started Now >
To get started, you’ll configure a JSON or RSS feed and submit descriptive information about your skill in the portal. This can be done through the following steps:
1. Sign in to the Amazon Developer Portal and navigate to the Alexa Skills Kit.
2. Click on Add a New Skill.
3. Select Flash Briefing Skill API, fill out a name and then click Next.
4. Unlike with custom skills, the interaction model for Flash Briefing skills is generated for you automatically; simply hit Next.
5. Now we will need to define our Content Feed(s). Your Flash Briefing Skill can include one or more defined feeds.
Then, click on the Add new feed button.
6. You will then enter information about your content feed, including its name, how often the feed will be updated, the content type (audio or text), the genre, an icon, and the URL where you are hosting the feed.
7. Repeat these steps for each feed you wish to include in the skill. The first feed you add will automatically be marked as the default feed. If you add more feeds, you can choose which feed is the default by selecting it in the Default column.
8. Click Next when you are finished adding feeds and are ready to test your skill.
For additional information, check out the Steps to Create a Flash Briefing Skill page.
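For reference, a single text (TTS) item in a JSON feed generally looks something like the sketch below; the values are placeholders, and the exact field requirements are spelled out on that page. Audio items additionally supply a streamUrl pointing to the hosted MP3 instead of having Alexa read the mainText.

{
  "uid": "urn:uuid:00000000-0000-0000-0000-000000000001",
  "updateDate": "2016-10-10T09:00:00.0Z",
  "titleText": "Example headline",
  "mainText": "The text Alexa will read aloud for this briefing item.",
  "redirectionUrl": "https://www.example.com/full-story"
}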
In the latest headlines from KIRO7:
[stirring theme music begins] Hello from KIRO7 in Seattle. I’m Michelle Millman…
And I’m John Knicely. Here are the top stories we’re following on this Friday.
A car erupted in flames around 5:30 this morning on northbound I-5. This was just south of downtown and caused a major traffic backup, but you can get around it by…
This might sound like a local daybreak newscast blaring from the TV in the kitchen or the bedroom, as you rush around trying to get ready for work – but it isn’t.
It’s actually an Alexa Flash Briefing skill. Flash Briefing streams today’s top news stories to your Alexa-enabled device on demand. To hear the most current news stories from whatever sources you choose, just say “Alexa, play my flash briefing” or “Alexa, what’s the news?”
The particular Flash Briefing skill in question, though, stands apart. With all its realism and personality, you might be fooled into thinking it’s an actual news desk, complete with bantering anchors, a perky weather forecast, and the day’s top local headlines.
That’s because it is—and that’s what sets KIRO7 apart from the rest.
Using the Alexa app, you can select different skills for your Flash Briefing from a number of different news sources. These include big-name outlets like NPR, CNN, NBC, Bloomberg, The Wall Street Journal, and more. These all give you snapshots of global news. Now more and more local stations are creating their own Flash Briefing skills for Alexa.
The Flash Briefing Skill API, a new addition to the Alexa Skills Kit, enables developers to add feeds to Alexa’s Flash Briefing, which delivers pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.
If you’ve activated Flash Briefing before, you know that several content providers leverage Alexa to read text in her normal voice. That’s because most skills in Flash Briefing repurpose content that is already available in an RSS-style feed. They plug the same text into the feed for Alexa to ingest.
Jake Milstein, news director for KIRO7, said KIRO7 was one of the first local news channels to create a Flash Briefing. While Alexa has a wonderful reading voice, the KIRO7 team wanted to do something a bit more personal for its listeners. Working with the Alexa team, they discovered they could upload MP3 files as an alternative to text. Instead of reading from canned text files, Alexa would play the audio files.
Milstein said using real people’s voices was an obvious choice, because “We have such great personalities here at KIRO7.” The station tested various formats, but eventually settled on using two of its morning news anchors. Christine Borrmann, KIRO7 Producer, says, “We tinkered with the format until Michelle and John just started talking about the news in a very conversational way. Then we added a little music in the background. It felt right.”
KIRO7 started out with a single daily feed but now has three. The morning anchors, Michelle Millman and John Knicely, record the first ‘cast around 4 a.m. and the second shortly after their live broadcast at 8 a.m. Other news anchors record the third feed in late afternoon, so it captures the evening news topics. Each ‘cast’ is roughly two minutes long and ends by encouraging listeners to consume more KIRO7 content through the app on Amazon FireTV.
The whole KIRO7 team is proud to be the first local news station to produce a studio-quality audio experience in a Flash Briefing; the KIRO7 skill launched alongside several established networks with national scale.
Early feedback on Facebook showed KIRO7 listeners loved the skill and wanted even more. Now that Flash Briefings are skills, though, the KIRO7 team can start collecting its own reviews and star-ratings.
Milstein says it is important that KIRO7 stay at the forefront of delivering Seattle-area news the way people want to get their news. “Having our content broadcast on Alexa-enabled devices and available on Amazon Fire TV is something we're really proud of. For sure, as Amazon develops more exciting ways to deliver the news, we'll be there.”
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
Today we are happy to announce support for scenes, a new feature in Alexa skills developed with the Smart Home Skill API. With scenes, customers can issue a single voice command to an Alexa-enabled device such as the Amazon Echo or Echo Dot to set a predefined configuration of one or more devices to a desired state. For example, a customer could say, “Alexa, turn on bedtime,” resulting in specific lights turning off, a bedroom light changing color to a low-lit orange hue, a ceiling fan turning on, and the temperature changing to an ideal setting for sleep.
At first glance scenes might appear similar to the groups feature found in the Smart Home section of the Alexa app as both allow control over multiple devices with one voice command. However, scenes differ from groups in the following ways:
With scenes, customers have another option besides groups for controlling multiple devices. Customers may already have scenes configured in device manufacturer apps such as those provided by Control4, Crestron, Insteon, Lutron Caseta, SmartThings, or Wink. Prior to today, these scenes were invoked using the device manufacturer’s app. Now customers can find these scenes listed as devices in their Alexa app after requesting device discovery, and can control them via voice interaction.
Figure 1: Scene control process
Once a customer has configured a scene through the device manufacturer’s app and asks Alexa to discover devices, the scene name will appear in the device list in the Alexa app. Consider what happens from a developer perspective when a voice command is made to turn a scene on.
Let’s examine each step above in more detail.
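Before walking through those steps, it helps to see how a scene surfaces to a skill adapter: during discovery it is reported much like any other appliance. Below is a rough sketch of such an entry; the identifiers are placeholders, and the applianceTypes value reflects the Smart Home Skill API’s convention for marking scenes.

{
  "applianceId": "scene-bedtime-001",
  "manufacturerName": "Example Hub Co.",
  "modelName": "Scene",
  "version": "1.0",
  "friendlyName": "bedtime",
  "friendlyDescription": "Bedtime scene configured in the Example Hub app",
  "isReachable": true,
  "applianceTypes": ["SCENE_TRIGGER"],
  "actions": ["turnOn"],
  "additionalApplianceDetails": {}
}

When the customer says “Alexa, turn on bedtime,” the skill adapter then receives an ordinary TurnOnRequest naming this applianceId, just as it would for a single device.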
Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI. Human conversation requires the ability to understand the meaning of spoken language, relate that meaning to the context of the conversation, create a shared understanding and world view between the parties, model discourse and plan conversational moves, maintain semantic and logical coherence across turns, and to generate natural speech.
Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate in the Alexa Prize (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to engage in a fun, high-quality conversation with humans on popular topics for 20 minutes.
Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Alexa users will experience truly novel, engaging conversational interactions.
Up to ten teams of students will be selected to receive a $100,000 research grant as a stipend, Alexa-enabled devices, free AWS services to support their development efforts, and support from the Alexa Skills Kit (ASK) team. Additional teams not eligible for funding may be invited to participate. University teams can submit their applications between September 29 and October 28, 2016, here. The competition will officially start on November 14, 2016 and run until November 2017, concluding with an award ceremony to be held at AWS re:Invent in Las Vegas, NV.
As we say at Amazon, this is Day 1 for conversational AI. We are excited to see where you will go next, and to be your partners in this journey. Good luck to all of the teams.
Learn more about Alexa Prize.
Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn how to respond to control directives in code to turn devices on or off, set temperature, and set percentages.
When you build a skill with the Smart Home Skill API, the ultimate goal is to control a device. That control can include turning a device on or off, setting a temperature, or setting a percentage, such as when you’re dimming a light bulb. This post will cover the general process of device control and teach the fundamentals by demonstrating control of the ‘on’ or ‘off’ state in code using Node.js.
This technical walkthrough is a continuation of a series of smart home skill posts focused on development. Please read and follow the instructions in the earlier posts to reach parity.
Figure 1: Device control process
Once a customer has properly installed, configured, and discovered all smart home devices, verbal control commands can be issued to an Alexa-enabled device, such as the Amazon Echo. Consider what happens from a developer perspective when a control command is made, such as turning on a light.
Let’s examine each step above in more detail.
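As a preview of where the walkthrough ends up, here is a minimal sketch of the ‘on’/‘off’ portion of a skill adapter’s Lambda handler in Node.js. The namespace and message names follow the Smart Home Skill API’s control messages; switchDevice is a placeholder for whatever call your own device cloud exposes.

// Minimal on/off skill adapter handler (sketch).
exports.handler = function (event, context, callback) {
    var header = event.header;

    if (header.namespace !== 'Alexa.ConnectedHome.Control') {
        return callback(new Error('Unsupported namespace: ' + header.namespace));
    }

    var applianceId = event.payload.appliance.applianceId;
    var turnOn = (header.name === 'TurnOnRequest');

    // switchDevice stands in for the call into your own device cloud.
    switchDevice(applianceId, turnOn, function (err) {
        if (err) { return callback(err); }
        callback(null, {
            header: {
                namespace: 'Alexa.ConnectedHome.Control',
                name: turnOn ? 'TurnOnConfirmation' : 'TurnOffConfirmation',
                payloadVersion: '2',
                messageId: header.messageId // in practice, generate a fresh unique id
            },
            payload: {}
        });
    });
};

// Placeholder for your device cloud call.
function switchDevice(applianceId, turnOn, done) { done(null); }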
Need a ride? Lyft is an on-demand transportation platform that lets you book a ride in minutes. It’s as easy as opening up the Lyft app, tapping a button and a driver arrives to get you where you need to go. Now, they’ve made it even easier. Simply say, “Alexa, ask Lyft to get me a ride to work.”
Roy Williams, the Lyft engineer who built the Alexa skill, said it started with a company hackathon.
Lyft has a long-standing tradition of hackathons. Each quarter, the San Francisco company invites employees to experiment with new ideas. The story goes that Lyft itself was born at such a hackathon, with someone’s idea for an “instant” ride service.
“It took about three weeks to go from the original prototype to a finished app,” Williams said. Lyft has been going strong ever since.
That wasn’t the last innovation to spring from a Lyft hackathon.
Williams said he purchased an Amazon Echo during the 2015 Black Friday sale. He immediately knew he wanted to create an Alexa skill to let Echo users order a “lyft.” Williams dove into the Alexa Skills Kit (ASK) documentation, and he started building his prototype at the January hackathon. It was a hit.
Beyond the prototype, Williams estimates the project took three weeks of solid engineering time. The team spent one week working on the core functionality, including adding some workflow to their own API. It spent another week working through edge cases and complex decision trees, so the skill would never leave a user confused or at a dead-end. Finally, they spent another week on testing and analytics, before releasing it for an internal beta with 30 users.
Williams says ASK is very comprehensive, and because it is JSON-based, it makes testing easy. He admits having to add some edge testing to account for cases like asking Lyft for “a banana to work.” (Bananas are a favorite test fruit during certification.) In the end, he knew Lyft had a high-quality skill with near-one hundred percent test coverage.
Amazon published the final Lyft skill in July.
Megan Robershotte is a member of Lyft’s partner marketing team. She explained the Alexa skill fit well with the company’s primary goal: to get people to take their first ride with Lyft.
In this post, Nathan Grice, Alexa Smart Home Solutions Architect, shows you how to reduce skill development time by debugging your skill code in a local environment. Learn how to step through your code line by line while preserving roles and AWS services, like DynamoDB, used in the skill when running in AWS Lambda. Share your thoughts and feedback in this forum thread.
Amazon Alexa and the Alexa Skills Kit (ASK) are enabling developers to create voice-first interactions for applications and services. In this article, we will cover how to set up a local development environment using the Amazon Web Services (AWS) SDK for Node.js.
By following this tutorial, you’ll be able to invoke your AWS Lambda code as if it were called by the Alexa service. This will also allow you to interact with any other AWS services you may have added to your skill logic, such as Amazon DynamoDB. By the end of this post, you will be able to execute and debug all of your Alexa skill’s Lambda code from your local development environment.
Using the aws-sdk, you should also be able to call any dependent services in AWS as if the skill code were executing in AWS Lambda by leveraging AWS roles. This way, you can be sure your code is working before deploying into AWS and hopefully decrease the cycle time for applying new changes. For example, suppose you want to persist something about users in a DynamoDB table, and previously the only way to do this was to run your code in Lambda. After this tutorial, you will be able to write to the remote DynamoDB table from your local environment.
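As a quick illustration of that last point, the sketch below writes to a remote DynamoDB table from a local script using the aws-sdk. The region, table name, and item are placeholders; it assumes your local AWS credentials (or an assumed role) grant access to the table.

// local-dynamo-test.js: write to a remote table from your local machine (sketch).
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });   // placeholder region

var docClient = new AWS.DynamoDB.DocumentClient();

docClient.put({
    TableName: 'MySkillUserTable',            // placeholder table name
    Item: { userId: 'test-user', lastIntent: 'LaunchRequest' }
}, function (err) {
    if (err) { console.error('Write failed:', err); }
    else { console.log('Write succeeded.'); }
});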
First, let’s take a look at why you would want to streamline this process. The first time I developed a skill, I was not using an integrated development environment, and almost all debugging information was obtained through log statements. This presented quite a few challenges from a developer’s point of view.
I wanted a better way to execute and debug my code without losing any functionality by being constrained to a local environment.
In the next section we will look at how to set up a local environment to debug your AWS Lambda code using Node.js, Microsoft’s open-source Visual Studio Code editor, and the aws-sdk npm package. This tutorial will cover setting this up using Node.js, but the AWS SDK is available for Python and Java as well.
Install Node.js via the available installer. The installation is fast and easy; just follow the prompts. For the purposes of this tutorial, I am on OS X, so I selected v4.5.0 LTS. There are versions available for Windows and Linux as well.
Repeat the process with Visual Studio Code. For the purposes of this tutorial, I am using Microsoft’s Visual Studio Code, but other editors should work as well.
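With the tools installed, the basic trick is a small driver script that requires your skill’s handler and calls it with a captured Alexa request. A minimal sketch, assuming your handler lives in index.js and a sample request has been saved as sample-event.json (both names are placeholders):

// run-local.js: invoke the Lambda handler locally (sketch).
var handler = require('./index').handler;
var event = require('./sample-event.json');   // a request captured from the Alexa service or the testing simulator

var context = {
    succeed: function (result) { console.log('succeed:', JSON.stringify(result, null, 2)); },
    fail: function (error) { console.error('fail:', error); }
};

handler(event, context, function (err, result) {
    if (err) { console.error(err); }
    else { console.log(JSON.stringify(result, null, 2)); }
});

Set a breakpoint in Visual Studio Code and launch this file with the Node.js debugger to step through the skill code line by line.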
Today’s post comes from J. Michael Palermo IV, Sr. Evangelist at Amazon Alexa. You will learn the process of device discovery and how to support it in code for your smart home skill.
Developing a smart home skill is different from building a custom skill. One of the main differences is the dependency on devices to control. The device might be a light bulb, thermostat, hub, or other device that can be controlled via a cloud-based API. Or maybe you created an innovative IoT gadget and you want to make it discoverable by an Alexa-enabled device. In this post, you will learn how the process of device discovery works, and how you can support discovery in your custom skill adapter communicating with the Smart Home Skill API.
To meet the prerequisites and set the context for the technical information in this post, start by reading the five steps before developing a smart home skill and set up your initial code to support skill adapter directive communications. This post is the next in that series and provides the foundation for the code samples that follow.
To appreciate the role of device discovery, consider how a customer is involved in the process. The following steps assume a consumer has an Alexa-enabled device, such as the Echo or Echo Dot, already set up.
Once the first step is completed, the customer is able to control the smart home device, typically through an app provided by the device maker, which is a graphical user interface that manages device and owner information in its own device cloud. The account created in the first step is the same account used in the second step when the consumer enables the associated smart home skill. This explains why account linking is mandatory for skills created with the Smart Home Skill API.
But what happens in the third step when the consumer makes a device discovery request? Does Alexa actually scan for devices emitting some signal within the home? Does it query everything it can on the local Wi-Fi network? The answer to both questions is no. Although there are a couple of exceptions to enable early support of popular products such as Philips Hue and Belkin WeMo, the process described next is what is supported today and moving forward.
Figure 1: Device discovery process
When a request is made by the customer for devices to be discovered, the Alexa service identifies all the smart home skills associated with the consumer’s account and makes a discover request to each one, as shown in Figure 1.
Let’s examine each step above in more detail. Notice that the first step is the same as the last step we covered when considering the customer’s perspective, so this is a deeper dive into what happens next. Also observe in Figure 1 that no communications occur directly between the Amazon Echo and the smart home device.
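To make the discover request concrete, here is a rough sketch of a skill adapter handling it and returning a single bulb. The appliance values are placeholders; a production adapter would look devices up in its own device cloud using the access token carried in the request payload.

// DiscoverAppliancesRequest handler in the skill adapter (sketch).
exports.handler = function (event, context, callback) {
    var header = event.header;

    if (header.namespace === 'Alexa.ConnectedHome.Discovery' &&
        header.name === 'DiscoverAppliancesRequest') {

        // In a real adapter, query your device cloud with event.payload.accessToken.
        var appliances = [{
            applianceId: 'bulb-001',
            manufacturerName: 'Example Lighting Co.',
            modelName: 'Smart Bulb',
            version: '1.0',
            friendlyName: 'Bedroom Light',
            friendlyDescription: 'Smart bulb connected via the Example cloud',
            isReachable: true,
            actions: ['turnOn', 'turnOff'],
            additionalApplianceDetails: {}
        }];

        callback(null, {
            header: {
                namespace: 'Alexa.ConnectedHome.Discovery',
                name: 'DiscoverAppliancesResponse',
                payloadVersion: '2',
                messageId: header.messageId // in practice, generate a fresh unique id
            },
            payload: { discoveredAppliances: appliances }
        });
    }
};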
Last month, we announced the launch of Nucleus, the smart home intercom that’s always getting smarter with Alexa. Designed to bring families closer together, Nucleus makes two-way video conferencing between rooms, homes, and mobile devices instantaneous. Following the successful launch of Nucleus on Amazon.com and in hundreds of Lowe’s home improvement stores throughout the US, we’re excited to announce that the Alexa Fund has led a $5.6 million Series A investment round in Nucleus, with additional participation from BoxGroup, Greylock Partners, FF Angel (Founders Fund), Foxconn, and SV Angel.
“It’s incredible to receive this level of support in such a short period of time,” said Jonathan Frankel, co-founder and CEO of Nucleus. “It speaks to the importance of our shared vision: Bringing families closer together through intuitive and intelligent interfaces. Amazon has been a stand-out supporter since day one and recognizes the value Nucleus is bringing to families nationwide, and the rapid market traction we’re seeing within our growing community.”
The Alexa Fund provides up to $100 million in venture capital funding to fuel voice technology innovation. We believe experiences designed around the human voice are a more natural way for people to interface with technology. Nucleus combines ease-of-use and the Alexa Voice Service (AVS) to create an intuitive voice experience where customers can stream music, access custom Alexa skills, and more just by asking Alexa. Nucleus joins past Alexa Fund recipients Luma, Sutro, Invoxia, Musaic, Rachio, Scout Alarm, Garageio, Toymail, Dragon Innovation, MARA, Mojio, TrackR, KITT.AI, DefinedCrowd, and Ring.
Nucleus is the first touchscreen device to incorporate AVS, making it easy for customers to stream music, control smart home products such as SmartThings, Insteon and Wink, and access the library of 3,000 Alexa skills. Read more about how Nucleus and the Alexa Voice Service (AVS) worked together to bring the company’s smart video intercom system to life in this morning’s featured developer spotlight interview.
In early 2014, Jonathan Frankel started renovating a house in Philadelphia. With three kids and multiple floors, he wanted an intercom system, but was frustrated with the persistence of old technology. He found that home intercoms hadn’t changed much in the last 30 years; they were still expensive and difficult to install. What’s more, intercom systems had failed to keep up with today’s modern families who are spread across geographies and constantly on the move.
Frankel, now CEO of Nucleus, wanted to bring families closer together. He wanted to build a device that could bridge generations and let his mom video chat with his children with a simple touch. He wanted to visit with his family over dinner, even while away on business. Whenever, whoever, and wherever they may be, he wanted to talk to them—room-to-room, home-to-home, or mobile-to-home.
Now his vision has come to life. Nucleus, the first smart home intercom with video calling, and with the voice capabilities of Alexa, is delighting customers with easy access to music, news, weather, to-do lists, and even smart home controls.
Amazon created the Alexa Voice Service (AVS) to make it easier for developers to add voice-powered experiences to their products and services. That proved advantageous for Nucleus.
According to Isaac Levy, chief technology officer at Nucleus, hands-free interaction was part of the Nucleus vision from the beginning. They prototyped early Nucleus units with various voice recognition solutions, including open source. When they heard about the commercial availability of AVS, they knew their search was over.
“We knew right away that AVS would be a great fit, and we wanted to incorporate it into our product,” Levy said. “It’s one thing to have basic voice recognition. But being able to unlock everything Alexa can do—weather, sports, flash briefings, all those custom skills…it’s like waking up a genie in our device. AVS helped Nucleus create an even more compelling customer experience.”
Levy says AVS allowed his team to develop a more full-featured Nucleus with capabilities the company hadn’t developed on its own. For example, natural language understanding (NLU) is built into the Alexa service, providing developers with an intelligent and intuitive voice interface that’s always getting smarter. This saved Nucleus many years of development work.