Alexa Blogs


Showing posts tagged with Alexa

October 21, 2016

Jen Rapp

As an Alexa developer, you have the ability to provide Alexa skill cards that contain text and/or images (see Including a Card in Your Skill's Response). There are two main types of cards:

  • Simple Card - contains a title and text body.
  • Standard Card - contains title, text body, and one image.

Customers interacting with your skill can then view these cards in the Alexa app or on Fire TV. While voice experiences allow customers to break from their screens, a graphical interface can complement and enhance the experience users have with your skill.

In our new guide, Best Practices for Skill Card Design, you can learn how to best present information on cards for easy consumption by customers. Skill cards contain the same information (image and text) everywhere they appear, but their layout differs depending on the access point: the Alexa app or Fire TV.

To drive engagement with your Alexa skill, we’ve compiled the top 10 tips for effective Alexa skill card design.

Tip #1: Use cards to add information and detail to the voice experience

Cards do not replace the voice experience; instead, they deliver value-added content. Customers should not need to rely on cards to enjoy your voice experience, and cards should never be required to use an Alexa skill. They should be used to provide additional information.

For example, imagine a customer asks for a recipe and you want to share its details. The skill card could add context by providing the recipe category, recipe description, cook time, prep time, and number of ingredients, while Alexa may simply say, “Try chicken parmesan accented by a homemade tomato sauce.”
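A Standard card for this example might look like the following response snippet (a minimal sketch; the title, text, and image URLs are illustrative placeholders):

{
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            "text": "Try chicken parmesan accented by a homemade tomato sauce."
        },
        "card": {
            "type": "Standard",
            "title": "Chicken Parmesan",
            "text": "Category: Italian\nPrep time: 20 minutes\nCook time: 40 minutes\nIngredients: 9",
            "image": {
                "smallImageUrl": "https://example.com/chicken-parm-small.png",
                "largeImageUrl": "https://example.com/chicken-parm-large.png"
            }
        },
        "shouldEndSession": true
    }
}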

Tip #2: Show users what they can do with guidance and sample utterances

Cards can be a great way to get a lost user back on track, or to enable self-service by showing users what they can do. Give enough detail for the user to move forward when lost – without going overboard. Suggest sample utterances when users need help, or when AMAZON.HelpIntent is triggered. Always keep the utterances relevant and in the context of the current situation. For example, don't suggest an utterance for checking previous scores when the user is in the middle of a game.

Tip #3: Keep it short, informative, and clear

Structure the copy for cards in brief, informative sentences or lines of text and avoid unstructured product details. Don’t rely on large blocks of text and keep details to a minimum so that users can quickly evaluate the card at a glance. For example, show a stock symbol and the current stock quote instead of a full sentence describing the change, which is more difficult to quickly grasp.

Tip #4: Use line breaks

Use line breaks (\n) to help format individual lines of addresses, product details, or other information. Again, this makes it easier to quickly scan for key information. However, don't double line break when separating parts of a street address.
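For example, a store-locator card might build its text body like this (a sketch; the values are illustrative):

// Single line breaks between the parts of the address keep the card scannable.
var cardText = "Gifts Galore\n123 Main Street\nSeattle, WA 98101\nOpen 9am to 6pm";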

Tip #5: Keep URL links short and memorable

Since URLs in cards are not clickable links, don't rely on a URL alone to direct users to other sites. Instead, provide clear direction on how to get to more information (e.g., “Go to giftsgalore.com and head to ‘My Account’”). While we don’t encourage the use of URLs in cards, if you do include them, make them easy for the user to consume and remember.

Tip #6: Make it consumable at a glance

A general guideline for card content is to keep it short and easy to read. Cards should provide quick bits of content that users can consume at a glance. Providing images is a helpful way to quickly convey key information (e.g., images of a cheese pizza vs. a pepperoni pizza are instantaneously distinguishable). The card shouldn’t include everything that Alexa says; instead, it should present just the key information (e.g., a bulleted list of product details vs. the full description).

[Read More]

October 19, 2016

Zoey Collier

Landon Borders, Director of Connected Devices at Big Ass Solutions, still chuckles when he tells customers how the company got its name. Founder Carey Smith started his company back in 1999, naming it HVLS Fan Company. Its mission was to produce a line of high-volume, low-speed (HVLS) industrial fans. HVLS Fan Company sold fans up to 24 feet in diameter for warehouses and fabrication mills.

“People would always say to him ‘Wow, that’s a big-ass fan.’ They wanted more information, but they never knew how to reach us,” says Borders. So the founder listed the company in the phone book twice, both as HVLS Fan Company and Big Ass Fans. Guess which phone rang more often? “In essence, our customers named the company.”

Today the parent company is Big Ass Solutions. It still owns Big Ass Fans. It also builds Big Ass Lights and Haiku Home, a line of smart residential lighting and fans. Now with an Alexa skill, the company’s customers can control their devices using only their voice.

Creating the world’s first smart fan

Haiku Home is where Alexa comes into the picture.

Big Ass Fans (BAF) is a direct-sales company. As such, it gets constant and direct feedback about customers' satisfaction and product applications. BAF found people were using its industrial-grade products in interesting commercial and home applications. It saw an exciting new opportunity. So in 2012, BAF purchased a unique motor technology, allowing it to create a sleek, low-profile residential fan.

That was just the starting point for BAF’s line of home products. The next year, BAF introduced Haiku with SenseME, the world’s first smart fan.

What’s a smart fan? Borders says it first has to have cutting-edge technology. Haiku Home fans include embedded motion, temperature and humidity sensors. A microprocessor uses that data to adjust the fan and light kits to the user's tastes. The device also has to be connected, so it includes a Wi-Fi radio.

The smart fan joins Alexa’s Smart Home

The microprocessor and Wi-Fi radio make the SenseME fan a true IoT device. Customers use a smartphone app to configure the fan’s set-it-and-go preferences. But after that, why should you need an app?

Borders remembers discussions in early 2015 centered on people getting tired of smartphone apps. Apps were a good starting point, but the company found some users didn’t want to control their fan with their smartphone. BAF felt voice was definitely the user interface of the future. When they saw Amazon heavily investing in the technology, they knew what the next step would be.

They would let customers control their fans and lights simply by talking to Alexa.

[Read More]

October 17, 2016

Ted Karczewski

People love that they can dim their lights, turn up the heat, and more just by asking Alexa on their Amazon Echo. Now Philips Hue has launched a new cloud-based Alexa skill, bringing the same smart home voice controls available on the Echo to all Alexa-enabled third-party products built with the Alexa Voice Service API. Best of all, your customers can enable the new Hue skill today—no additional development work needed.

Because Alexa is cloud-based, it’s always getting smarter with new capabilities, services, and a growing library of third-party skills from the Alexa Skills Kit (ASK). As an AVS developer, your Alexa-enabled product gains access to these growing capabilities through regular API updates, feature launches, and custom skills built by our active developer community.

Now with Philips Hue capabilities, your end users can voice control all their favorite smart home devices just by asking your Alexa-enabled product. You can test the new Philips Hue skill for yourself by building your own Amazon Alexa prototype and trying these sample utterances:

  • Alexa, turn on the kitchen light.
  • Alexa, dim the living room lights to 20%.

End users can enable the new Philips Hue skill in the “Smart Home” section on the Amazon Alexa app.

More About Philips Hue

Philips Hue offers customizable, wireless LED lighting that can be controlled by voice across the family of Amazon Alexa products. Now with third-party integration, your users will be able to turn on and off their lights, change lighting color, and more from any room in the house just by asking your Alexa-enabled third-party product. The new Philips Hue skill also includes support for Scenes, allowing Alexa customers to voice control Philips Hue devices assigned to various rooms in the house.

Whether end users have an Echo in the kitchen or an Alexa-enabled product in the living room, they can now voice control Philips Hue products from more Alexa-enabled devices across their home. Learn more about the Smart Home Skill API and how to build your own smart home skill.

[Read More]

October 14, 2016

Thom Kephart

We participate in a number of events across the globe throughout the year – and we’d love to see you at the next one.

To stay tuned to the latest events near you, check out our new events page. There you’ll find information about hackathons where you can get hands-on education and build Alexa skills, conferences and presentations where you can join the conversation and meet Alexa team members, and community-run meetups where you can connect with fellow developers.

Bookmark the events page today, register for one near you, and we’ll see you there.

 

October 13, 2016

David Isbitski

The beta is now closed. Sign up to be notified when the List Skill API is publicly available.

Today we announced a limited participation beta for the List Skill API, a new addition to the Alexa Skills Kit, which enables developers to add capabilities, called skills, to Alexa. Developers can now teach Alexa how to interface with their list applications so that customers can simply say, “Alexa, add bananas to my Shopping List” or “Alexa, add ‘Go for a Jog’ to my To-do list.” The List Skill API taps into Amazon’s standardized language model so you don’t have to build a voice interaction model to handle customer requests. You create skills that connect your applications directly to Alexa’s Shopping and To-do list capabilities so that customers can add or review items on their lists—without lifting a finger.

How it works

The List Skill API has a bi-directional interface that ensures lists are updated across all channels. That means the API notifies developers when a customer tells Alexa to add something to their list or makes a change to an existing item. Alexa understands the user’s speech request, converts it to a To-do or Shopping item, and sends you a notification with the new item that was added to the list. The List Skill API also updates the lists for Alexa when users make changes to their lists online or in your mobile application.
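To make the flow concrete, here is a minimal sketch of a Lambda handler receiving such a notification. Note that the beta documentation is not reproduced here, so the event type and payload fields below are illustrative assumptions only, not the documented interface:

// A sketch of a skill backend receiving a list-change notification.
// NOTE: the request type and body fields are assumptions for illustration.
exports.handler = function (event, context, callback) {
    if (event.request.type === 'AlexaHouseholdListEvent.ItemsCreated') {
        var listId = event.request.body.listId;         // which list changed
        var itemIds = event.request.body.listItemIds;   // the items the customer added
        // Sync the new items into your own application's list store here.
        console.log('Items added to list ' + listId + ': ' + itemIds.join(', '));
    }
    callback(null, {});
};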

Customers are increasingly using voice interfaces as a hands-free way to manage their lives. By using Alexa’s built-in Shopping and To-do lists to keep track of items to buy and things to do, customers on millions of Alexa-enabled devices only have to ask, and for many, managing lists this way becomes a daily habit. By integrating with the List Skill API, you will make it easier for your existing customers to keep track of their important tasks and shopping items in the home, and introduce your brand to a new group of Alexa customers.

Here's what developers are saying

Today we announced that Any.do and Todoist created the first skills using the List Skill API. 

“We’ve been huge fans of Alexa for a long time. Once the opportunity to work with Alexa in a deep way presented itself, we were extremely happy to push it forward,” says Omer Perchik, the Founder and CEO of Any.do. “The work with the new Alexa List Skill API was simple and straightforward, and our experience as a beta participant was smooth due to the support from Amazon.”

“At Todoist, we're very excited about the potential of AI and AI-powered services. Amazon’s Alexa is one of the earliest and best examples of making this technology useful in people's everyday lives,” says Doist founder and CEO Amir Salihefendic. “That's why we're thrilled to have collaborated with the Amazon team as part of their limited participation beta for the Alexa List Skill API. We’re sure our customers will find Alexa extremely helpful in staying organized and productive, and we're looking forward to working with Amazon to make the Todoist skill even more useful as Alexa continues to evolve and get smarter.”

Get started now

Going forward, we’re excited to open the List Skill API to more developers as part of our limited participation beta.

For more information about getting started with the Alexa Skills Kit and to apply to participate in the List Skill API beta, check out the following additional assets:

About the List Skill API
Alexa Dev Chat Podcast
Alexa Training with Big Nerd Ranch
Alexa Skills Kit (ASK)
Alexa Developer Forums

-Dave (@TheDaveDev)

October 12, 2016

Zoey Collier

Brian Donohue, New Jersey-born software engineer and former CEO of Instapaper, wasn't an immediate Alexa fan. In fact, his first reaction to the 2014 announcement of the Amazon Echo was "That's cool, but why would I buy one?"

All that changed over the course of one whirlwind weekend in March 2016. Almost overnight, Brian went from almost indifferent to being one of the most active developers in the Alexa community. Today he’s recognized as an Alexa Champion and a master organizer of Alexa meetups.

We sat down with Brian to find out how Alexa changed his entire view of voice technology... and why he wanted to share his excitement with other Alexa developers.

An overnight Alexa convert

Brian has led Instapaper for the last two and a half years. Its former owner, Betaworks, always encouraged employees—including Brian—to experiment and innovate with new technology. Brian has built apps for Google Glass and other devices, just because the company had them lying around the office.

When the company bought an Echo device in March, Brian had to take another look. He took it home one Friday night and decided to try building a skill using the Alexa Skills Kit (ASK). He selected something simple, inspirational and personal to him. The skill—which later became Nonsmoker—keeps track of when you stopped smoking and tells you how long it's been since your final cigarette.

The first version took Brian half a day to create. It was full of hardcoded values, but it was empowering. Then, in playing with this and other Alexa skills, Brian recognized something exciting. A fundamental technology shift was staring right at him. When he returned the Echo to the office on Monday, he was hooked.

“Interacting with Alexa around my apartment showed me the real value proposition of voice technology,” says Brian. “I realized it’s magical. I think it’s the first time in my life that I’d interacted with technology without using my hands.”

Bringing NYC Alexa users together

Brian wanted immediate and more active involvement in Alexa development. The following day he was searching meetup.com for Alexa user gatherings in New York City. He found none, so Brian did what always came naturally. He did it himself.

His goal was to find 20 or so interested people before going to the effort of creating a meetup. The demand was far greater than he expected. By the third week of March, he was hosting 70 people at the first-ever NYC Amazon Alexa Meetup, right in the Betaworks conference room.

After a short presentation about Echo, Tap and Dot, Brian did the rest of the program solo. He created a step-by-step tutorial with slides, a presentation and code snippets, all to explain how to create a simple Alexa skill. He walked attendees through the program, then let them test and demo their skills on his own Echo, in front of the class.

“A lot of them weren’t developers, but they could cut and paste code,” says Brian. “About half completed the skill, and some even customized the output a bit.” Brian helped one add a random number generator, so her skill could simulate rolling a pair of dice.
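That kind of customization takes only a few lines of JavaScript; a sketch of the dice roll:

// Roll two six-sided dice and build the speech output.
var die1 = Math.floor(Math.random() * 6) + 1;
var die2 = Math.floor(Math.random() * 6) + 1;
var speechOutput = 'You rolled a ' + die1 + ' and a ' + die2 + '.';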

[Read More]

October 11, 2016

Zoey Collier

In 2012, a “Down Under” team from Melbourne, Australia, recognized that LED lighting had finally reached a tipping point. LED technology was the most efficient way to create light, and affordable enough to pique consumers’ interest in bringing colored lighting to the home. And LIFX was born.

John Cameron, vice president, says LIFX launched as a successful Kickstarter campaign. From its crowd-funded beginnings, it has grown into a leading producer and seller of smart LED light bulbs. With headquarters in Melbourne and Silicon Valley, its bulbs brighten households in 80 countries around the globe.

Cameron says LIFX makes the world’s brightest, most efficient and versatile Wi-Fi LED light bulbs. The bulbs fit standard light sockets, are dimmable and can emit 1,000 shades of white light. The color model adds 16 million colors to accommodate a customer’s every mood.

From smartphone apps to brilliant voice control

Until 2015, LIFX customers controlled their smart bulbs using smartphone apps. Customers could turn them on or off by name, dim or brighten them, and select the color of light. They could also group the devices to control an entire room of lights at once. Advanced features let customers create schedules, custom color themes, even romantic flickering candle effects.

Without the phone, though, customers had no control.

Like Amazon, the LIFX team knew the future of customer interfaces lay in voice control. “We’re always looking for ways to let customers control [their lights] without hauling out their phone,” said Cameron. “When Alexa came along, it took everybody by storm.”

“That drove us to join Amazon's beta program for the Alexa Skills Kit (ASK),” says Daniel Hall, LIFX’s lead cloud engineer. Hall says the ASK documentation and APIs were easy to understand, making it possible to implement the first version of the LIFX skill in just two weeks. By the end of March 2015, LIFX had certified the skill and was ready to publish. The skill let customers control their lights just by saying “Alexa, tell ‘Life-ex’ to…”

Since the LIFX skill launched, ASK has added custom slots, a simpler and more accurate way of conveying customer-defined names for bulbs and groups of bulbs. Hall says custom slots are something LIFX would be interested in implementing in the future.
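In the Alexa developer portal, a custom slot type is a named list of expected values that a skill's intents can reference. A sketch of what an intent schema using one might look like (the intent and type names here are illustrative, not LIFX's actual model):

{
    "intents": [
        {
            "intent": "SetGroupStateIntent",
            "slots": [
                { "name": "GroupName", "type": "LIST_OF_GROUPS" },
                { "name": "State", "type": "LIST_OF_STATES" }
            ]
        }
    ]
}

The LIST_OF_GROUPS type would then be populated in the portal with the customer-facing values, such as “bedroom” or “kitchen lights.”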

[Read More]

October 07, 2016

Dean Bryen

If you’ve already created your first Alexa Skill, you may be using local environments, the AWS CLI, and other DevOps processes. This blog post is for advanced developers who want to level up skill creation by adding some automation, version control, and repeatability to skill deployments.

In this post we're going to programmatically create our skill backend using AWS CloudFormation. CloudFormation is an AWS service that enables you to describe your AWS resources as a JSON file; these JSON files can later be ‘executed’ to stand up and tear down your AWS environments. This gives us a number of benefits, including version control and repeatability. You can read more about AWS CloudFormation in general over in the AWS developer docs here. To put this into context, when looking at the Alexa Skills Kit architecture, the resources in the red box of the diagram are what we will create within our CloudFormation template.

 

The Template

The CloudFormation template is a JSON object that describes our infrastructure. This will consist of three components.

Parameters – where we define the input parameters we want to inject into our template, such as ‘function-name’.

Resources – the AWS resources that make up our skill backend, such as the Lambda function.

Outputs – any information that we would like to retrieve from the resources created in our CloudFormation stack, such as the Lambda function ARN.

The template that we will create in this tutorial can be used as a starting point to create the backend for any of your Alexa skills.
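As a concrete starting point, here is a minimal sketch of such a template (the parameter name, runtime, and inline placeholder code are illustrative; a real skill would point Code at a deployment package):

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Backend resources for an Alexa skill",
    "Parameters": {
        "FunctionName": {
            "Type": "String",
            "Description": "Name for the skill's Lambda function"
        }
    },
    "Resources": {
        "SkillExecutionRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": { "Service": ["lambda.amazonaws.com"] },
                        "Action": ["sts:AssumeRole"]
                    }]
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
                ]
            }
        },
        "SkillFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "FunctionName": { "Ref": "FunctionName" },
                "Runtime": "nodejs4.3",
                "Handler": "index.handler",
                "Role": { "Fn::GetAtt": ["SkillExecutionRole", "Arn"] },
                "Code": { "ZipFile": "exports.handler = function(event, context) { context.succeed(); };" }
            }
        }
    },
    "Outputs": {
        "SkillFunctionArn": {
            "Description": "The Lambda function ARN to reference from your skill configuration",
            "Value": { "Fn::GetAtt": ["SkillFunction", "Arn"] }
        }
    }
}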

[Read More]

October 06, 2016

Ted Karczewski

What makes the Amazon Echo so appealing is the fact that customers can control smart home devices, access news and weather reports, stream music, and even hear a few jokes just by asking Alexa. It’s simple and intuitive.

We’re excited to announce an important Alexa Voice Service (AVS) API update that now enables you to build voice-activated products that respond to the “Alexa” wake word. The update includes new hands-free speech recognition capabilities and a “cloud endpointing” feature that automatically detects end-of-user speech in the cloud. Best of all, these capabilities are available through the existing v20160207 API—no upgrades needed.

You can learn more about various use cases in our designing for AVS documentation.

Get Started with Our New Raspberry Pi Project

To help you get started quickly, we are releasing a new hands-free Raspberry Pi prototyping project with third-party wake word engines from Sensory and KITT.AI. Build your own wake word enabled, Amazon Alexa prototype in under an hour by visiting the Alexa GitHub.

And don’t forget to share your finished projects on Twitter using #avsDevs. AVS Evangelist Amit Jotwani and team will be highlighting our favorite projects, as well as publishing featured developer interviews, on the Alexa Blog. You can find Amit on Twitter here: @amit.

Learn more about the Alexa Voice Service, its features, and design use cases. See below for more information on Alexa and the growing family of Alexa-enabled products and services:

Alexa Developer Resources
Alexa Voice Service (AVS)
Alexa Skills Kit (ASK)
The Alexa Fund
AVS Developer Forums
Alexa on a Raspberry Pi

Alexa-Enabled Devices
Triby
CoWatch
Pebble Core
Nucleus

Amazon Alexa Devices
Amazon Echo
Amazon Echo Dot
Amazon Tap
Amazon Fire TV
Amazon Fire TV Stick

Have questions? We are here to help. Visit us on the AVS Forum or Alexa GitHub to speak with one of our experts.

AVS is coming soon to the UK and Germany. Read the full announcement here.

October 05, 2016

Liz Myers

Now that Alexa is multi-lingual, it’s a new day in Alexa skill making. Not only can you publish to customers around the globe, you can do so from a single code base.

In this article, we’ll review two concepts: 1) separating content from logic and 2) using the locale attribute to serve the right content to the right users.

Getting Organized

As an example, I’ve made a new skill: Classical Guitar Facts (using this template), which has content in English and German. Although one might assume that I could get away with US English in the UK, differences in spelling and word choice will show up in the cards within the Alexa app, and this is not the best user experience. So, we’ll create content files in three separate folders, one per locale: en-US, en-GB, and de-DE.

Create the Content Files

Moving the content out of the index.js file means I’ve copied the FACTS array into a separate file and saved it as de-facts.js, gb-facts.js, and us-facts.js respectively. Remember that the last item in the FACTS array does not have a comma after it. Also remember the last line of the file, module.exports = FACTS; otherwise the calling file (index.js) won’t be able to find it.

var FACTS = [
    "The strings of guitars are often called gut strings because…",
    "…",
    "…"
];
module.exports = FACTS;

Calling External Content

At the top of the index.js file, we need to declare the FACTS variable:

var FACTS = [ ];

so that we can call it later like this:

FACTS = require('./content/en-US/us-facts.js');

Of course, we can substitute en-US/us-facts.js with en-GB/gb-facts.js and de-DE/de-facts.js when needed. Now we’re well organized to swap separate content files based on language – but how do we know which language is calling our service?
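The request itself tells us: every incoming request carries a locale attribute (e.g., en-US, en-GB, de-DE). A minimal sketch of switching content on it, assuming the folder layout above:

var FACTS;
exports.handler = function (event, context) {
    // The locale attribute arrives on every request, e.g. "en-US".
    var locale = event.request.locale || 'en-US';
    if (locale === 'de-DE') {
        FACTS = require('./content/de-DE/de-facts.js');
    } else if (locale === 'en-GB') {
        FACTS = require('./content/en-GB/gb-facts.js');
    } else {
        FACTS = require('./content/en-US/us-facts.js');
    }
    // ...continue with the usual skill logic...
};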

[Read More]

October 04, 2016

Jen Gilbert

Today’s guest blog post is from Monica Houston, who leads the Hackster Live program at Hackster. Hackster is dedicated to advancing the art of voice user experience through education.

Even though it’s a sunny Saturday morning, men, women, and perhaps a few teens filter into a big room, laptops in hand, ready to build Alexa skills. They’re here to change the future of voice user experience.

Hackster, the community for open source hardware, has run 12 events with Amazon Alexa this year and 13 more are in the planning stages. All 25 events are organized by Hackster Ambassadors, a group of women and men hand-picked from Hackster’s community for their leadership skills, friendliness, and talent for creating projects.

Hackster Ambassadors pour their time and energy into helping evangelize Alexa. Ambassador Dan Nagle of Huntsville, Alabama, created a website where you can find Hackster + Alexa events by city. Ambassador Paul Langdon set up a helpful GitHub page where you can see skills that were published at the event he ran in Hartford. He also volunteered his time and knowledge to run a series of “office hours” to help people develop their skills.

While Hackster provides venues and catering for these events and Hackster Ambassadors spread the word to their communities, Amazon sends a Solution Architect to teach participants how to build skills for Alexa and answer questions.

Amazon Solutions Architects go above and beyond to help people submit their skills for certification. Not only do they answer questions on Hackster’s developer Slack channel, they have also hosted virtual “office hours,” run webinars, and conducted two “slackathons” with Hackster’s community.

Although the 25 Alexa events are being held in US cities, Hackster Live is a global program with 30 international Ambassadors. Hackster shipped Amazon Echos to our Ambassadors in South America, Asia, Africa, and Europe. Virtual events like slackathons and webinars run by Solutions Architects make it possible for people from around the world to learn skill building and add to the conversation.

[Read More]

October 03, 2016

David Isbitski

Today we are introducing the Flash Briefing Skill API, a new addition to the Alexa Skills Kit, which enables developers to add feeds to Flash Briefing on Alexa, delivering pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.

The Flash Briefing Skill API is free to use. Get Started Now >

Creating Your Skill with the Flash Briefing Skill API

To get started, you’ll configure a JSON or RSS feed and submit descriptive information about your skill in the portal. This can be done through the following steps:

1. Register for a free Amazon Developer Account if you have not already done so, and navigate to the Alexa Skills Kit box in the Alexa menu here.

2. Click on Add a New Skill.

3. Select Flash Briefing Skill API, fill out a name, and then click Next.

4. Unlike custom skills, the interaction model for Flash Briefing skills is generated for you automatically; simply hit Next.

5. Define your content feed(s). Your Flash Briefing skill can include one or more feeds. Click the Add new feed button to create one.

6. Enter information about your content feed, including its name, how often it will be updated, the content type (audio or text), the genre, an icon, and the URL where you are hosting the feed.

7. Repeat these steps for each feed you wish to include in the skill. The first feed you add is automatically marked as the default feed. If you add more feeds, you can choose which feed is the default by selecting it in the Default column.

8. Click Next when you are finished adding feeds and are ready to test your skill.
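For text feeds, each item in the JSON feed carries a handful of fields that Alexa reads on the customer’s behalf. A minimal sketch of a single text item (the values here are illustrative placeholders):

{
    "uid": "urn:uuid:00000000-0000-0000-0000-000000000001",
    "updateDate": "2016-10-03T09:00:00.0Z",
    "titleText": "Example briefing headline",
    "mainText": "The text that Alexa will read aloud for this update.",
    "redirectionUrl": "https://www.example.com/full-story"
}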

For additional information check out the Steps to Create a Flash Briefing Skill page here.

[Read More]

October 03, 2016

Zoey Collier

In the latest headlines from KIRO7:

[stirring theme music begins] Hello from KIRO7 in Seattle. I’m Michelle Millman…

And I’m John Knicely. Here are the top stories we’re following on this Friday.

A car erupted in flames around 5:30 this morning on northbound I-5. This was just south of downtown and caused a major traffic backup, but you can get around it by…

This might sound like a local daybreak newscast blaring from the TV in the kitchen or the bedroom, as you rush around trying to get ready for work – but it isn’t.

It’s actually an Alexa Flash Briefing skill. Flash Briefing streams today’s top news stories to your Alexa-enabled device on demand. To hear the most current news stories from whatever sources you choose, just say “Alexa, play my flash briefing” or “Alexa, what’s the news?”

The particular Flash Briefing skill in question, though, is rather unique. With all its realism and personality, you might be fooled into thinking it’s an actual news desk, complete with bantering anchors, perky weather forecast, and the day’s top local headlines.

That’s because it is—and that’s what sets KIRO7 apart from the rest.

How Flash Briefing works

Using the Alexa app, you can select different skills for your Flash Briefing from a number of different news sources. These include big-name outlets like NPR, CNN, NBC, Bloomberg, The Wall Street Journal, and more. These all give you snapshots of global news. Now more and more local stations are creating their own Flash Briefing skills for Alexa.

The Flash Briefing Skill API, a new addition to the Alexa Skills Kit, enables developers to add feeds to Alexa’s Flash Briefing, which delivers pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure your compatible RSS feed and build skills that connect directly to Flash Briefing so that customers can simply ask “Alexa, what’s my Flash Briefing” to hear your content.

Setting a higher bar for listener engagement

If you’ve activated Flash Briefing before, you know that several content providers leverage Alexa to read text in her normal voice. That’s because most skills in Flash Briefing repurpose content that is already available in an RSS-style feed. They plug the same text into the feed for Alexa to ingest.

Jake Milstein, news director for KIRO7, said KIRO7 was one of the first local news channels to create a Flash Briefing. While Alexa has a wonderful reading voice, the KIRO7 team wanted to do something a bit more personal for its listeners. Working with the Alexa team, they discovered they could upload MP3 files as an alternative to text. Instead of reading from canned text files, Alexa would play the audio files.

Milstein said using real people’s voices was an obvious choice, because “We have such great personalities here at KIRO7.” The station tested various formats, but eventually settled on using two of its morning news anchors. Christine Borrmann, KIRO7 Producer, says, “We tinkered with the format until Michelle and John just started talking about the news in a very conversational way. Then we added a little music in the background. It felt right.”

KIRO7 started out with a single daily feed but now has three. The morning anchors, Michelle Millman and John Knicely, record the first ’cast around 4 a.m. and the second shortly after their live broadcast at 8 a.m. Other news anchors record the third feed in late afternoon, so it captures the evening news topics. Each ’cast is roughly two minutes long and ends by encouraging listeners to consume more KIRO7 content through the app on Amazon Fire TV.

Alexa, the news never sounded so good

The whole KIRO7 team is proud to be the first local news station to produce a studio-quality audio experience in a Flash Briefing, and the KIRO7 skill launched alongside several established networks with national scale.

Early feedback on Facebook showed KIRO7 listeners loved the skill and wanted even more. Now that Flash Briefings are skills, though, the KIRO7 team can start collecting its own reviews and star-ratings.

Milstein says it is important that KIRO7 stay at the forefront of delivering Seattle-area news the way people want to get their news. “Having our content broadcast on Alexa-enabled devices and available on Amazon Fire TV is something we're really proud of. For sure, as Amazon develops more exciting ways to deliver the news, we'll be there.”

 


Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

September 30, 2016

Michael Palermo

Today we are happy to announce support for scenes, a new feature in Alexa skills developed with the Smart Home Skill API. With scenes, customers can issue a single voice command to an Alexa-enabled device such as the Amazon Echo or Echo Dot to set a predefined configuration of one or more devices to a desired state. For example, a customer could say, “Alexa, turn on bedtime,” resulting in specific lights turning off, a bedroom light changing color to a low-lit orange hue, a ceiling fan turning on, and the temperature changing to an ideal setting for sleep.

At first glance, scenes might appear similar to the groups feature found in the Smart Home section of the Alexa app, as both allow control over multiple devices with one voice command. However, scenes differ from groups in the following ways:

  • Scenes allow each device configured within it to be set to a desired state, whereas groups are stateless and simply turn devices on or off.
  • Scenes are configured by customers through a device manufacturer’s app, whereas groups are configured in the Alexa app.
  • Scenes only contain devices managed by the device manufacturer’s app, whereas groups can contain any device discovered in the Alexa app.

With scenes, customers have an alternative to groups for controlling multiple devices. Customers may already have scenes configured in device manufacturer apps such as those provided by Control4, Crestron, Insteon, Lutron Caseta, SmartThings, or Wink. Prior to today, these scenes were invoked by using the device manufacturer’s app. Now, after requesting device discovery, customers can find these scenes listed as devices in their Alexa app and control them via voice.

How Scenes Work

Figure 1: Scene control process


Once a customer has configured a scene through the device manufacturer’s app and requests a device discovery from Alexa, the scene name will appear in the device list in the Alexa app. Consider what happens, from a developer's perspective, when a voice command is made to turn a scene on.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on bedtime.”
  2. The Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed, including the ‘TurnOnRequest’ name in the directive header and the appliance ID (located in the directive payload) corresponding to the friendly name of the scene “bedtime” (see the sketch after this list).
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token used to determine the customer account making the request. A call is made to the device cloud API to turn on the scene matching the appliance ID for the associated customer.
  5. The device cloud (likely owned by the device maker) receives the request from the skill adapter and communicates with a device hub or controller to turn on the scene preconfigured by the customer.
  6. The device hub sets the desired state of each device configured by the customer. Note that in this “bedtime” example, turning on a scene may result in turning off a light, since this could be the desired state of that device for the scene.
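The directive from step 3 might look like the following (a sketch; the message ID, token, and appliance ID are placeholders):

{
    "header": {
        "messageId": "01ebf625-0b89-4c4d-b3aa-32340e894688",
        "name": "TurnOnRequest",
        "namespace": "Alexa.ConnectedHome.Control",
        "payloadVersion": "2"
    },
    "payload": {
        "accessToken": "[OAuth bearer token]",
        "appliance": {
            "applianceId": "scene-bedtime-001",
            "additionalApplianceDetails": {}
        }
    }
}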
[Read More]

September 29, 2016

Ashwin Ram

Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI. Human conversation requires the ability to understand the meaning of spoken language, relate that meaning to the context of the conversation, create a shared understanding and world view between the parties, model discourse and plan conversational moves, maintain semantic and logical coherence across turns, and generate natural speech.

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate in the Alexa Prize (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to engage in a fun, high-quality conversation, coherently and engagingly, with humans on popular topics for 20 minutes.

Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Alexa users will experience truly novel, engaging conversational interactions.

Up to ten teams of students will be selected to receive a $100,000 research grant as a stipend, Alexa-enabled devices, free AWS services to support their development efforts, and support from the Alexa Skills Kit (ASK) team. Additional teams not eligible for funding may be invited to participate. University teams can submit their applications between September 29 and October 28, 2016, here. The competition will officially start on November 14, 2016 and run until November 2017, concluding with an award ceremony to be held at AWS re:Invent in Las Vegas, NV.

As we say at Amazon, this is Day 1 for conversational AI. We are excited to see where you will go next, and to be your partners in this journey. Good luck to all of the teams.

Learn more about Alexa Prize.
