Alexa Blogs

Showing posts tagged with Marketing

October 17, 2016

Ted Karczewski

People love that they can dim their lights, turn up the heat, and more just by asking Alexa on their Amazon Echo. Now Philips Hue has launched a new cloud-based Alexa skill, making the same smart home voice controls found on the Echo available on all Alexa-enabled third-party products through the Alexa Voice Service API. Best of all, your customers can enable the new Hue skill today—no additional development work needed.

Because Alexa is cloud-based, it’s always getting smarter with new capabilities, services, and a growing library of third-party skills from the Alexa Skills Kit (ASK). As an AVS developer, your Alexa-enabled product gains access to these growing capabilities through regular API updates, feature launches, and custom skills built by our active developer community.

Now with Philips Hue capabilities, your end users can voice control all their favorite smart home devices just by asking your Alexa-enabled product. You can test the new Philips Hue skill for yourself by building your own Amazon Alexa prototype and trying these sample utterances:

  • Alexa, turn on the kitchen light.
  • Alexa, dim the living room lights to 20%.                                                

End users can enable the new Philips Hue skill in the “Smart Home” section on the Amazon Alexa app.

More About Philips Hue

Philips Hue offers customizable, wireless LED lighting that can be controlled by voice across the family of Amazon Alexa products. Now with third-party integration, your users will be able to turn on and off their lights, change lighting color, and more from any room in the house just by asking your Alexa-enabled third-party product. The new Philips Hue skill also includes support for Scenes, allowing Alexa customers to voice control Philips Hue devices assigned to various rooms in the house.

Whether end users have an Echo in the kitchen or an Alexa-enabled product in the living room, they can now voice control Philips Hue products from more Alexa-enabled devices across their home. Learn more about the Smart Home Skill API and how to build your own smart home skill.


October 06, 2016

Ted Karczewski

What makes the Amazon Echo so appealing is the fact that customers can control smart home devices, access news and weather reports, stream music, and even hear a few jokes just by asking Alexa. It’s simple and intuitive.

We’re excited to announce an important Alexa Voice Service (AVS) API update that now enables you to build voice-activated products that respond to the “Alexa” wake word. The update includes new hands-free speech recognition capabilities and a “cloud endpointing” feature that automatically detects end-of-user speech in the cloud. Best of all, these capabilities are available through the existing v20160207 API—no upgrades needed.

You can learn more about various use cases in our designing for AVS documentation.

Get Started with Our New Raspberry Pi Project

To help you get started quickly, we are releasing a new hands-free Raspberry Pi prototyping project with third-party wake word engines from Sensory and KITT.AI. Build your own wake-word-enabled Amazon Alexa prototype in under an hour by visiting the Alexa GitHub.

And don’t forget to share your finished projects on Twitter using #avsDevs. AVS Evangelist Amit Jotwani and team will be highlighting our favorite projects, as well as publishing featured developer interviews, on the Alexa Blog. You can find Amit on Twitter here: @amit.

Learn more about the Alexa Voice Service, its features, and design use cases. See below for more information on Alexa and the growing family of Alexa-enabled products and services:

Alexa Developer Resources
Alexa Voice Service (AVS)
Alexa Skills Kit (ASK)
The Alexa Fund
AVS Developer Forums
Alexa on a Raspberry Pi

Alexa-Enabled Devices
Triby
CoWatch
Pebble Core
Nucleus

Amazon Alexa Devices
Amazon Echo
Amazon Echo Dot
Amazon Tap
Amazon Fire TV
Amazon Fire TV Stick

Have questions? We are here to help. Visit us on the AVS Forum or Alexa GitHub to speak with one of our experts.

AVS is coming soon to the UK and Germany. Read the full announcement here.

 

October 04, 2016

Jen Gilbert

Today’s guest blog post is from Monica Houston, who leads the Hackster Live program at Hackster. Hackster is dedicated to advancing the art of voice user experience through education.

Even though it’s a sunny Saturday morning, men, women, and perhaps a few teens filter into a big room, laptops in hand, ready to build Alexa skills. They’re here to change the future of voice user experience.

Hackster, the community for open source hardware, has run 12 events with Amazon Alexa this year and 13 more are in the planning stages. All 25 events are organized by Hackster Ambassadors, a group of women and men hand-picked from Hackster’s community for their leadership skills, friendliness, and talent for creating projects.

Hackster Ambassadors pour their time and energy into helping to evangelize Alexa. Ambassador Dan Nagle of Huntsville, Alabama, created a website where you can find Hackster + Alexa events by city. Ambassador Paul Langdon set up a helpful GitHub page where you can see skills that were published at the event he ran in Hartford. He also volunteered his time and knowledge to run a series of “office hours” to help people develop their skills.

While Hackster provides venues and catering for these events and Hackster Ambassadors spread the word to their communities, Amazon sends a Solution Architect to teach participants how to build skills for Alexa and answer questions.

Amazon Solutions Architects go above and beyond to help people submit their skills for certification. Not only do they answer questions on Hackster’s developer Slack channel, they have also hosted virtual “office hours,” run webinars, and conducted two “slackathons” with Hackster’s community.

Although the 25 Alexa events are being held in US cities, Hackster Live is a global program with 30 international Ambassadors. Hackster shipped Amazon Echos to our Ambassadors in South America, Asia, Africa, and Europe. Virtual events like slackathons and webinars run by Solutions Architects make it possible for people from around the world to learn skill building and add to the conversation.


September 30, 2016

Michael Palermo

Today we are happy to announce support for scenes, a new feature in Alexa skills developed with the Smart Home Skill API. With scenes, customers can issue a single voice command to an Alexa-enabled device such as the Amazon Echo or Echo Dot to set a predefined configuration of one or more devices to a desired state. For example, a customer could say, “Alexa, turn on bedtime,” resulting in specific lights turning off, a bedroom light changing to a low-lit orange hue, a ceiling fan turning on, and the temperature changing to an ideal setting for sleep.

At first glance scenes might appear similar to the groups feature found in the Smart Home section of the Alexa app as both allow control over multiple devices with one voice command. However, scenes differ from groups in the following ways:

  • Scenes allow each device configured within it to be set to a desired state, whereas groups are stateless and simply turn devices on or off.
  • Scenes are configured by customers through a device manufacturer’s app, whereas groups are configured in the Alexa app.
  • Scenes only contain devices managed by the device manufacturer’s app, whereas groups can contain any device discovered in the Alexa app.

Scenes give customers an alternative to groups for controlling multiple devices. Customers may already have scenes configured in device manufacturer apps such as those provided by Control4, Crestron, Insteon, Lutron Caseta, SmartThings, or Wink. Prior to today, these scenes could be invoked only through the device manufacturer’s app. Now, after requesting device discovery, customers will find these scenes listed as devices in their Alexa app and can control them by voice.

How Scenes Work

Figure 1: Scene control process


Once a customer has configured a scene through the device manufacturer’s app and asks Alexa to discover devices, the scene name will appear in the device list in the Alexa app. Consider what happens, from a developer’s perspective, when a voice command turns a scene on.

Let’s examine each step above in more detail.

  1. Customer says, “Alexa, turn on bedtime.”
  2. Alexa service receives the request and routes this intent to the Smart Home Skill API.
  3. A directive is composed with ‘TurnOnRequest’ as the name in the directive header and, in the payload, the appliance ID corresponding to the friendly name of the scene “bedtime.”
  4. The skill adapter hosted in AWS Lambda receives the directive. Included in the directive is an access token identifying the customer account making the request. A call is made to the device cloud API to turn on the scene matching the appliance ID for the associated customer.
  5. The device cloud (likely owned by the device maker) receives a request from the skill adapter, and communicates to a device hub or controller to turn on the scene preconfigured by the customer.
  6. The device hub sets the desired state of each device configured by the customer. Note in this “bedtime” example, turning on a scene may result in turning off a light, since this could be the desired state of that device for the scene.

September 29, 2016

Ashwin Ram

Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI. Human conversation requires the ability to understand the meaning of spoken language, relate that meaning to the context of the conversation, create a shared understanding and world view between the parties, model discourse and plan conversational moves, maintain semantic and logical coherence across turns, and to generate natural speech.

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to converse coherently and engagingly with humans on popular topics for 20 minutes.

Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Alexa users will experience truly novel, engaging conversational interactions.

Up to ten teams of students will be selected to receive a $100,000 research grant as a stipend, Alexa-enabled devices, free AWS services to support their development efforts, and support from the Alexa Skills Kit (ASK) team. Additional teams not eligible for funding may be invited to participate. University teams can submit their applications between September 29 and October 28, 2016, here. The competition will officially start on November 14, 2016 and run until November 2017, concluding with an award ceremony to be held at AWS re:Invent in Las Vegas, NV.

As we say at Amazon, this is Day 1 for conversational AI. We are excited to see where you will go next, and to be your partners in this journey. Good luck to all of the teams.

Learn more about Alexa Prize.

August 24, 2016

David Isbitski

Before today, the Alexa Skills Kit supported only short audio clips via SSML audio tags in your skill responses. Today we are excited to announce that we have added streaming audio support for Alexa skills, including playback controls. This means you can easily create skills that play back audio content like podcasts, news stories, and live streams.

New AudioPlayer and PlaybackController interfaces provide directives and requests for streaming audio and monitoring playback progression. With this new feature, your skill can send audio directives to start and stop the playback. The Alexa service can provide your skill with information about the audio playback’s state, such as when the track is nearly finished, or when playback starts and stops. Alexa can also now send requests in response to hardware buttons, such as those on a remote control.

Enabling Audio Playback Support in Your Skill

To enable audio playback support in your skill, turn on the Audio Player functionality and handle the new audio intents. Navigate to the Alexa developer portal and do the following:

  • On the Skill Information page in the developer portal, set the Audio Player option to Yes.

  • Include the required built-in intents for pausing and resuming audio in your intent schema, and implement them:
    • AMAZON.PauseIntent
    • AMAZON.ResumeIntent

  • Call the AudioPlayer.Play directive from one of your intents to start audio playback.

  • Handle AudioPlayer and PlaybackController requests, and optionally respond.
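A response that starts playback might look like the following sketch. The stream URL and token are hypothetical placeholders; a real skill would supply its own hosted HTTPS stream.

```javascript
// Sketch of a skill response that issues an AudioPlayer.Play directive.
// The stream URL and token passed in are hypothetical placeholders.
function buildPlayResponse(streamUrl, token) {
  return {
    version: '1.0',
    response: {
      shouldEndSession: true,
      directives: [{
        type: 'AudioPlayer.Play',
        playBehavior: 'REPLACE_ALL',     // start this stream immediately
        audioItem: {
          stream: {
            url: streamUrl,              // must be an HTTPS endpoint
            token: token,                // identifies the stream in later requests
            offsetInMilliseconds: 0      // start from the beginning
          }
        }
      }]
    }
  };
}
```

The token you choose comes back to you in later AudioPlayer requests, which is how the skill knows which stream a playback event refers to.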

In addition to the required built-in intents, your skill should gracefully handle the following additional built-in intents:
 

  • AMAZON.CancelIntent
  • AMAZON.LoopOffIntent
  • AMAZON.LoopOnIntent
  • AMAZON.NextIntent
  • AMAZON.PreviousIntent
  • AMAZON.RepeatIntent
  • AMAZON.ShuffleOffIntent
  • AMAZON.ShuffleOnIntent
  • AMAZON.StartOverIntent

Note: Users can invoke these built-in intents without using your skill’s invocation name. For example, while in a podcast skill you create, a user could say “Alexa, next” and your skill would play the next episode.

If your skill is currently playing audio, or was the skill most recently playing audio, these intents are automatically sent to your skill. Your code needs to expect them and not return an error. If any of these intents does not apply to your skill, handle it in an appropriate way in your code. For instance, you could return a response with text-to-speech indicating that the command is not relevant to the skill. The specific message depends on the skill and whether the intent is one that might make sense at some point, for example:
 

  • For a podcast skill, the AMAZON.ShuffleOnIntent intent might return the message: “I can’t shuffle a podcast.”
  • For version 1.0 of a music skill that doesn’t yet support playlists and shuffling, the AMAZON.ShuffleOnIntent intent might return: “Sorry, I can’t shuffle music yet.”


Note: If your skill uses the AudioPlayer directives, you cannot extend the above built-in intents with your own sample utterances.
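One way to organize this handling is a dispatch table that maps each built-in intent to either a playback action or a polite text-to-speech fallback. This is an illustrative sketch; the messages and return shape are examples, not required wording or a prescribed API.

```javascript
// Illustrative dispatch for built-in playback intents a podcast skill
// might receive; the speech messages are examples, not required wording.
const intentHandlers = {
  'AMAZON.PauseIntent':     () => ({ directive: 'AudioPlayer.Stop' }),
  'AMAZON.ResumeIntent':    () => ({ directive: 'AudioPlayer.Play' }),
  'AMAZON.ShuffleOnIntent': () => ({ speech: "I can't shuffle a podcast." }),
  'AMAZON.LoopOnIntent':    () => ({ speech: "Looping isn't supported yet." })
};

function handleIntent(intentName) {
  const handler = intentHandlers[intentName];
  // Built-in intents that don't apply get a fallback response, never an error.
  return handler ? handler() : { speech: "Sorry, I can't do that here." };
}
```

The key point from the note above is the fallback branch: every built-in intent must produce some well-formed response, even when it makes no sense for your content.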


August 22, 2016

Thom Kephart

In the coming weeks, we’ll be participating in a variety of events and we’d love to meet you. Get hands-on experience building an Alexa skill at a hackathon, attend a presentation at smart home events, join the conversation at select conferences, or connect with fellow developers at a local meetup.

Hackathons

Led by Alexa Solutions Architects and Developer Evangelists, hackathons are a great way to get the hands-on experience of building and testing an Alexa skill. 

Galvanize Skill Building Workshop | August 25, 6:30-8:30 p.m. PT
Seattle, WA

The Galvanize workshop is intended for software and hardware developers interested in voice control, home automation, and personal assistant technology. We will walk through the development of a new Alexa skill and incorporate it into a consumer-facing device.

/hack San Francisco | August 27, 10:00 a.m. PT – August 28, 5:30 p.m. PT
San Francisco, California

/hack (slash hack) is the premier hackathon by hackers, for hackers. Three hundred hackers will compete in the 24-hour hackathon held in San Francisco. Hackers at all levels – working professional, college student, or even high school student – can learn from students, CTOs, architects, and more.

Amazon Alexa Virtual Hackathon Decision Tree | August 31, 11:00 a.m. PT – 12:00 p.m. PT
Virtual Webinar

In this hour-long webinar we will build an Alexa skill using the Decision Tree skill template. The template makes it easy for developers and non-developers to create skills that ask you a series of questions and then give you an answer. This is a great starter for simple adventure games and magazine-style quizzes like ‘What kind of job is good for me?’ We will use AWS Lambda and the Alexa Skills Kit, and provide built-in business logic, use cases, error handling, and help functions for your new skill. Simply come up with the idea before we begin and we will help you build it.

TechCrunch Disrupt San Francisco Hackathon | September 10, 12:30 p.m. PT – September 11, 2:00 p.m. PT
San Francisco, California

Preceding the Disrupt Conference is Hackathon weekend, where developers and engineers descend from all over the world to take part in a 24-hour hacking endurance test. Teams join forces to build a new product and present it on the Disrupt stage to a panel of expert judges and an audience of tens of thousands.

Code District LA Bootcamp | September 13, 6:00 p.m. PT
Torrance, California

This free workshop is intended for anyone interested in learning how to program voice-controlled devices. Join Solutions Architect Liz Myers to learn about Alexa skills development.


August 02, 2016

Emily Roberts

For inspiration on developing innovative Alexa skills, check out The Wayne Investigation, a skill developed by Warner Bros. to promote the recently released Batman v Superman: Dawn of Justice feature film. In this audio-only, interactive adventure game, you’re transported to Gotham City a few days after the murder of Bruce Wayne’s parents. You play the part of a detective, investigating the crime and interrogating interesting characters, with Alexa guiding you through multiple virtual rooms, giving you choices, and helping you find important clues.

The game, created using the Alexa Skills Kit, is a collaboration between Amazon, Warner Bros., head writers at DC Comics, and Cruel & Unusual Films (the production house run by Batman v Superman’s director Zack Snyder and executive producers Debbie Snyder and Wes Coller). With these companies behind the game and its affiliation with a superhero film franchise, it’s not surprising that The Wayne Investigation was a big hit.

But it’s become enormously popular of its own accord. Launched on March 1, this was the first Alexa skill to combine Alexa technology with produced audio assets—namely, compelling music and sound effects—and the response has been extraordinary. During its first week, The Wayne Investigation was engaged 7x more (per weekly average) than all other skills combined. Currently The Wayne Investigation rates in the top 5% of skills (earning 4.8 out of 5 stars) and is the #1 skill for both total time spent engaging with the skill and average time spent per user.

The team scripted the experience by building it around a gaming map with directions and actions in each room. Once the script was finalized, they used a decision tree model to translate the experience into code, which is hosted in AWS. From three starting actions, users can make up to 37 decisions, each taking the user down paths that lead to new and iconic Gotham characters and locations before completing the game. An efficient (and lucky) walkthrough of The Wayne Investigation takes 5 to 10 minutes, but fans who want to explore every nook and cranny can spend as long as 40 minutes in this Gotham City.

An added benefit of creating The Wayne Investigation skill is that it led to the creation of a tool that allows developers to graphically design interactive adventure games. Today, we’re pleased to announce that we’ve made a tool with source code available to make it easier for the Alexa community to create similar games.

To experience the skill, simply enable it in your Alexa companion app and then say, “Alexa, open the Wayne Investigation.”

 

August 01, 2016

Paul Cutsinger

Today’s guest blog post is from Troy Petrunoff, content strategist at AngelHack. Amazon works with companies like AngelHack who are dedicated to advancing the art of voice user experience through hackathons.

This year Amazon Alexa teamed up with AngelHack, the pioneers of global hackathons, for their ninth Global Hackathon Series. Since 2011, the series has exposed over 100,000 developers from around the world to new technologies from sponsors ranging from small startups to large corporations. Amazon Alexa joined the fun this year at nine AngelHack events, sending Solutions Architects and Amazon Echo devices to give talented developers, designers, and entrepreneurs the chance to learn about Alexa technology. Thirty-two teams incorporated Alexa technology into their projects.

Of the nine events Amazon Alexa sponsored, three of the grand prize winners won using Alexa. Winning the AngelHack Grand Prize earned these teams an exclusive invite to the AngelHack HACKcelerator program. The invite-only program connects ambitious developers with thought leaders and experienced entrepreneurs to help them become more versatile, entrepreneurial, and successful. It gives developers of promising hackathon projects the opportunity to listen and talk to some of the biggest players in the Silicon Valley tech scene on a weekly basis, while providing the resources to successfully transition their hackathon project into a viable startup with early traction.

In addition to the grand prize, the Amazon Alexa team offered a challenge at each AngelHack event: best voice user experience using Amazon Alexa. In addition to the three grand prize winning teams, two Alexa Challenge winners will also receive an invite to the HACKcelerator program. Participating HACKcelerator teams will be provided with mentorship and other resources to prepare them for Global Demo Day in San Francisco.


July 27, 2016

Zoey Collier

Earlier this year, Paul Cutsinger, Evangelist at Amazon Alexa, joined a team of developers and designers from Capital One at SXSW in Austin to launch the new Capital One skill for Alexa. The launch of the new skill garnered national attention, as Capital One was the first company to give customers the ability to interact with their credit card and bank accounts through Alexa-enabled devices. This week at the Amazon Developer Education Conference in NYC, Capital One announced another industry first by expanding the skill to enable its customers to access their auto and home loan accounts through Alexa.

"The Capital One skill for Alexa is all part of our efforts to help our customers manage their money on their terms – anytime and anywhere," said Ken Dodelin, Vice President, Digital Product Management, Capital One. “Now, you can access in real time all of your Capital One accounts—from credit cards to bank accounts to home and auto loans—using nothing but your voice with the Capital One skill.”

The skill is one of the top-rated Alexa skills, earning 4.5 out of 5 stars across 47 reviews. It enables Capital One customers to stay on top of their credit card, auto loan, mortgage, and home equity accounts by checking their balance, reviewing recent transactions, or making payments, as well as getting real-time access to checking and savings account information to understand their available funds.

“Capital One has a state-of-the-art technology platform that allows us to quickly leverage emerging technologies, like Alexa,” said Scott Totman, Vice President of Digital Products Engineering at Capital One. “We were excited about the opportunity to provide a secure, convenient, and hands-free experience for our customers.”

Building the Skill

To bring the new skill to life, the Capital One team – comprised of engineers, designers, and product managers – kicked off a two-phase development process.

“Last summer a few developers started experimenting with Echo devices, and, ultimately, combined efforts to scope out a single feature: fetching a customer’s credit card balance. That exercise quickly familiarized the team with the Alexa Skills Kit (ASK) and helped them determine the level of effort required to produce a full public offering,” said Totman. “The second phase kicked off in October and involved defining and building the initial set of skill capabilities, based on customer interviews and empathy based user research. Less than six months later we launched the first version of the Capital One skill for Alexa.”

The team also spent a lot of time finding the right balance between customers’ need for both convenience and security. In the end, Capital One worked with Amazon to strike the right balance and gave customers the option of adding a four-digit pin in order to access the skill and provide an additional layer of security. The pin can be changed or removed at the customer’s discretion.

“The Alexa Skills Kit is very straightforward. However, it is evolving quickly, so developers need to pay close attention to online documentation, webinars, and other learning opportunities in order to stay on top of new features and capabilities as they are released,” Totman said.

Finding the Right Voice

“We dedicated a lot of time to getting the conversation right from the start,” said Totman. “This meant we not only had to anticipate the questions customers were going to ask, but also how they were going to ask them.”

This was a really interesting challenge for Capital One’s design team. To make the skill feel like a personalized conversation, the team had to identify exactly where and how to inject personality and humor, while carefully considering customers’ priorities and the language they use to discuss finances.

“A lot goes into making sure our customers get what they expect from our personality, as well as what they expect from Alexa’s personality. That becomes especially visible when injecting humor, because what looks great on paper doesn’t always transition to the nuance of voice inflection, cadence, or the context of banking,” said Stephanie Hay, head of Capital One’s content strategy team. “But that’s the joy of design > build > iterate in a co-creation method; product, design, and engineering design the conversation together, hear Alexa say it, react, iterate, test it with actual customers, iterate further, and then get it to a point we all feel excited about.”

Looking Ahead

Capital One’s Alexa skill represents just the starting lineup of features. Capital One’s team continues to test, learn, and explore new features by focusing on customer needs and continually refining the experience.

“As customers become more familiar with using voice technologies, we anticipate growing demand for feature capabilities, as well as increased expectations regarding the sophistication of the conversation,” Totman said. “With voice technologies, we get to learn firsthand how customers are attempting to talk to us, which allows us to continually refine the conversation.”

“The possibilities with the Alexa Skills Kit are nearly endless, but I advise developers to be very thoughtful about the value of their skill,” said Totman. “Leveraging voice-activated technology is only worthwhile if you can clearly define how your solution will go above and beyond your existing digital offerings.”  

Stay tuned for part two to learn how Capital One built their Alexa skill and added new capabilities.


Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

July 22, 2016

Zoey Collier

In our first post, we shared why Discovery decided to build an Alexa skill and what requirements they outlined as they thought through what the voice experience should look like. In this post, we’ll share how they built and tested their Alexa skill and their tips for other Alexa developers.

Building and Testing the Shark Week Skill

When Stephen Garlick, Lead Development and Operations Engineer at Discovery Channel, took the lead in developing the Alexa skill, it was a chance to learn how to design a new experience for customers. He had no prior experience with AWS Lambda and the Alexa Skills Kit (ASK). To start, he spent some time digging into online technical documentation and code samples provided on the Alexa GitHub repo. This helped him gain a deeper understanding of how to build the foundation of the Alexa skill and handle basic tasks.

By using AWS Lambda and ASK, Stephen and team were able to keep things simple and quickly deploy the code without needing to set up additional infrastructure to support the skill. Additionally, they were able to easily extend the Node.js skill without having to create a skill from scratch.

Initially, Discovery used Alexa’s own voice to deliver the facts; later, they decided to customize the experience using MP3 playback. To accomplish this, Stephen used the SSML support for MP3 playback, with Amazon S3 and CloudFront hosting the files reliably. Each MP3 was less than 90 seconds long, encoded at 48 kbps, and adhered to MPEG version 2 specifications. All the resources were created and deployed using the AWS CloudFormation service.
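An SSML response that plays a hosted MP3 can be sketched as below. The CloudFront URL is a hypothetical placeholder, and the response shape is a simplified sketch of a custom skill's output speech, not Discovery's actual code.

```javascript
// Sketch of building an SSML response that plays a hosted MP3 clip.
// The URL passed in would point at a CloudFront-hosted file (placeholder here).
function buildSsmlAudioResponse(mp3Url) {
  return {
    outputSpeech: {
      type: 'SSML',
      // The <audio> tag tells Alexa to play the clip instead of speaking.
      ssml: '<speak><audio src="' + mp3Url + '"/></speak>'
    },
    shouldEndSession: true
  };
}
```

Hosting the clips behind CloudFront, as the team did, keeps the HTTPS delivery fast and reliable regardless of where the customer's device is.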

For the countdown feature, Stephen pulled the moment.js dependency into the Node.js skill to help simplify some time-based calculations. The countdown now combines MP3 playback for everything except the actual time, which is spoken by Alexa.
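The countdown arithmetic itself is simple enough to sketch. The team used moment.js; the version below uses plain Date arithmetic to stay dependency-free, and the dates are illustrative rather than Discovery's actual kickoff times.

```javascript
// Countdown sketch: days/hours/minutes remaining until a kickoff time.
// Plain Date arithmetic is used here instead of moment.js to keep the
// example self-contained; the kickoff date is illustrative.
function countdownParts(now, kickoff) {
  const diffMs = kickoff.getTime() - now.getTime();
  if (diffMs <= 0) {
    return null; // event already started; play the "already started" message
  }
  const totalMinutes = Math.floor(diffMs / 60000);
  return {
    days: Math.floor(totalMinutes / 1440),
    hours: Math.floor((totalMinutes % 1440) / 60),
    minutes: totalMinutes % 60
  };
}
```

A skill would hand these parts to Alexa to speak, while the surrounding lead-in and sign-off play as prerecorded MP3s.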

To test the skill, they used the skill test pane within the Alexa app. The testing tool made it easy to quickly test various scenarios without an Alexa-enabled device. Once the skill was operating as expected (and desired) in the test pane, Stephen asked other people to test the Shark Week skill on Alexa-enabled devices. This allowed them to collect additional feedback and iterate accordingly.

Overall, the entire process of learning these new technologies, coding, and building the skill took no more than 12 hours. This included a few iterations of the Alexa skill as well.

Five Tips for Other Alexa Developers

Tip #1: Make The Skill As Human As Possible: Initially, Discovery had the Alexa voice state each of the randomized facts. In an attempt to assist with the pronunciation, they spelled a few of the words and numbers phonetically. However, in doing so, the cards displayed in the Alexa app weren't correct. It quickly became apparent that a recorded reading of each fact eliminated the pronunciation issues, enabled proper spelling of facts for the cards in the Alexa app, and made the entire experience more personal.

Tip #2: Plan for Time Sensitive Coding: If you're building time-specific functionality (e.g., a countdown timer to a specific time), make sure you think about what happens when the specific time arrives. The team at Discovery was able to account for the Shark Week kickoff by providing three different countdown messages based on time in each specific time zone. The first was the countdown lead-in, the second was a message indicating that Shark Week already started, and the third indicated that Shark Week had concluded and that the Shark Week website provides other shark-related information year-round.

Tip #3: Control for Volume: If you're using a combination of recordings and Alexa powered speech, make sure the volume levels are consistent throughout the experience.

Tip #4: Be Creative with Your Intent Schema and Utterances: People think, act, and speak differently. Therefore, it's important that you account for as many different utterances as possible. For example, after you ask for a Shark Week fact, the skill will ask if you would like to hear another. Just a few of Discovery’s "no" utterances include "no," "nope," "no thanks," "no thank you," "not really," "definitely not," "no way," "nah," "negative," "no sir," "maybe another time," and many more. It's better to be as inclusive as possible, rather than leaving Alexa unable to understand the response.
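In a custom skill of this era, those variations would live in the sample utterances file, one phrase per line mapped to an intent. A hypothetical sketch (the intent name is invented for illustration):

```
NoIntent no
NoIntent nope
NoIntent no thanks
NoIntent no thank you
NoIntent not really
NoIntent definitely not
NoIntent no way
NoIntent nah
NoIntent negative
NoIntent maybe another time
```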

Tip #5: Take Chances: Push your limits and think big when it comes to building your Alexa skill. Discovery started the project with a broad scope in mind and was able to quickly iterate and resubmit the skill for certification.

Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.

Get Started with Alexa Skills Kit

Are you ready to build your first (or next) Alexa skill? Build a custom skill or use one of our easy tutorials to get started quickly.

July 20, 2016

Michael Palermo

Hello, my name is Michael Palermo, and I recently joined the Alexa team as the first dedicated evangelist for smart home. When friends and acquaintances ask what I do, they often look puzzled before I get past my title. Inevitably, I get questions like: What is a “smart home”? Who or what is Alexa? Why are you called an evangelist?

In this post, I’ll answer a lot of these questions. Granted, you may already be familiar with some of the topics, but stay tuned as I will also provide additional insights as to why it might matter to you.

What is a “Smart Home”?

The term “smart home” or “Connected Home” (CoHo) refers to a residence consisting of one or more smart products which enhance the living experience with benefits such as convenience, control, and optimization of resources. A product is deemed “smart” when it is capable of communicating with other smart products and/or a user interface to manage it.

What is Alexa?

From a consumer perspective, a more familiar brand name is Echo. Alexa is the voice service that powers Echo and other similar devices like Amazon Tap and Echo Dot. With Alexa, developers can build new voice experiences with the Alexa Skills Kit (ASK) or by adding voice to connected devices with Alexa Voice Service (AVS).

[Read More]

July 14, 2016

Zoey Collier

Craig Johnson, president of Emerson’s Residential Solutions business, claims it was inevitable. “Thermostats are no longer just passive HVAC controllers hanging on your wall. The convergence of wireless and mobile technologies allowed us to develop a thermostat that allows better temperature control, programmability and scheduling, as well as remote access.”

Even before Amazon’s Smart Home Skill API was publicly released, Johnson was excited about smart home. “Prior to Smart Home, Emerson had a fully functional mobile app and internet portal our customers could use to control their Sensi thermostat remotely. But integration of Alexa is a natural extension of that remote access and remote functionality.”

In February 2016, Johnson’s software development manager, Joe Mahari, jumped on board the Smart Home beta program. In just four weeks’ time—and by the time Amazon officially launched the Smart Home Skill API—Mahari’s team had built and tested its Sensi Smart Home skill and passed certification.

The Smart Home Skill API converts a voice command, such as “Alexa, increase my first floor by 2 degrees,” to directives (JSON messages). The directive includes:

  • the action (“increase”)
  • the device ID (representing the thermostat named “first floor”)
  • any options (such as “2 degrees”), and
  • the device owner’s authentication information.

The API then sends the directive to the handler methods implemented in the Sensi skill.
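For the 2016-era Smart Home Skill API, an "increase by 2 degrees" directive took roughly this JSON shape; the appliance ID and token below are placeholders, not Emerson's actual values:

```json
{
  "header": {
    "namespace": "Alexa.ConnectedHome.Control",
    "name": "IncrementTargetTemperatureRequest",
    "payloadVersion": "2"
  },
  "payload": {
    "accessToken": "<device owner's OAuth token>",
    "appliance": { "applianceId": "first-floor-thermostat" },
    "deltaTemperature": { "value": 2.0 }
  }
}
```

The `accessToken` carries the device owner's authentication, and the `applianceId` identifies which discovered thermostat the user named.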

According to Mahari, Emerson implemented three main directives. Examples of these are:

  • Alexa, set my first floor to 75 degrees (where “first floor” specifies which thermostat)
  • Alexa, increase my thermostat by 2 degrees (Alexa will ask which thermostat)
  • Alexa, decrease my second floor by 3 degrees
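A minimal sketch of how a skill's handler might route those three directive types; the function and field names here are hypothetical, not Emerson's implementation:

```javascript
// Route a Smart Home directive to the matching thermostat operation.
// `thermostat` stands in for whatever object talks to the device cloud.
function handleDirective(directive, thermostat) {
  switch (directive.name) {
    case "SetTargetTemperatureRequest":
      thermostat.target = directive.payload.targetTemperature.value;
      break;
    case "IncrementTargetTemperatureRequest":
      thermostat.target += directive.payload.deltaTemperature.value;
      break;
    case "DecrementTargetTemperatureRequest":
      thermostat.target -= directive.payload.deltaTemperature.value;
      break;
    default:
      throw new Error("Unsupported directive: " + directive.name);
  }
  return thermostat.target;
}
```

Each branch maps one spoken pattern (set, increase, decrease) to a single thermostat operation, which is what keeps the skill's surface area small.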

The Emerson team agrees the skill and API were well packaged and supported, end-to-end. “Amazon defined the use case very crisply,” said Johnson. “We received a deck of scenarios to achieve, plus integrated logging, systems’ checks and documentation. These were essential to our success.”

Mahari says it was invaluable that the Amazon team connected with them daily. “For example, we had some concerns about how to increase or decrease the temperature during auto-schedules. But working directly with the Alexa team, we figured out how to make it work.”

So, if working with Amazon’s support and the API itself went so smoothly, what were some challenges the Emerson team faced over the four-week project?

[Read More]

July 05, 2016

Marion Desmazieres

A year ago we launched the Alexa Skills Kit to allow developers to build new voice capabilities, called skills, for Alexa. Since then, we’ve seen many Alexa developers start independent meetup groups in their local communities. The purpose of these groups is to network with other Alexa enthusiasts, share Alexa skill development knowledge, and build great voice user experiences.

We’ve curated a list of upcoming community-run Alexa meetups and local groups you can join. Thank you to the community leaders who volunteer their time to organize these local events and continue to contribute to the vibrant Alexa developer community.

Attend a July Meetup Event

Find an event near you, sign up, and meet fellow Alexa developers in your city:

  • San Diego, CA – Tuesday, July 5  | 7 p.m. – 8 p.m. PT | Sign up
  • Boston, MA – Wednesday, July 6 | 6 p.m. – 8 p.m. ET |  Sign up (Virtual Kickoff)
  • Grand Rapids, MI – Monday, July 11 | 5:30 p.m. - 7:30 p.m. ET | Sign up
  • San Francisco, CA – Monday, July 18 | 6:30 p.m. - 8:30 p.m. PT | Sign up
  • Appleton, WI – Wednesday, July 20 | 6:00 p.m. CT | Sign up
  • New York, NY – Wednesday, July 20 | 6:30 p.m. - 9:30 p.m. ET | Sign up

Join an Alexa Meetup Group

So far, we’ve heard about more than ten Alexa-focused meetup groups run by the community. Did you create an Alexa meetup group that is not listed below? Let us know by tweeting @AlexaDevs.

[Read More]

June 29, 2016

David Isbitski

Today, we are launching a bi-weekly podcast focused exclusively on the Alexa developer community and the Amazon teams building Alexa technology. Each episode will run 20-30 minutes. We’ll discuss various aspects of Alexa, including the Alexa Skills Kit, Alexa Voice Service, natural language understanding, voice recognition, and firsthand experiences directly from developers like you.

To kick it off, our first episode is a chat between myself and Charlie Kindel, director of Alexa Smart Home at Amazon. Charlie and I go into the details behind the launch of the Smart Home Skill API and some of the decisions the team had to make along the way. I also had the opportunity to learn about Charlie’s experience in smart home and his thoughts on how he sees it evolving over time.

Check out the first episode.

-Dave (@TheDaveDev)
