Alexa Blogs

December 20, 2016

Jeff Blankenburg

We all have our favorite places. It may be your childhood hometown, an exotic place you visited, or even your college town. Whatever the reason, we all have spots we love and want to tell others about, and that’s exactly what this new skill template helps you do.

This new template uses AWS Lambda, the Alexa Skills Kit (ASK), and the Alexa SDK for Node.js, in addition to the New York Times Search API for news. We provide the business logic, error handling, and help functions for your skill; you just provide the data and credentials.

For this example, we will create a skill for the city of Seattle, Washington. The user of this skill will be able to ask things like:

  • “Alexa, ask Seattle Guide what there is to do.”
  • “Alexa, ask Seattle Guide about the Space Needle.”
  • “Alexa, ask Seattle Guide for the news.”

You will be able to use your own city in the sample provided, so that users can learn to love your location as much as you do. This might also be a good opportunity to combine the knowledge from this template with our Calendar Reader sample, so that you can provide information about the events in your town, as well as the best places to visit.
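Before stepping through the tutorial, it helps to see the shape of the code. Here is a minimal sketch in the template’s Node.js style, assuming the alexa-sdk module; the intent name, slot, and responses are illustrative stand-ins rather than the template’s actual source.

// Minimal sketch of a city guide handler using the alexa-sdk Node.js
// module. Intent and slot names here are illustrative only.
const Alexa = require('alexa-sdk');

const handlers = {
  'GetAttractionIntent': function () {
    // e.g. "Alexa, ask Seattle Guide about the Space Needle"
    const slots = this.event.request.intent.slots;
    const attraction = (slots.Attraction && slots.Attraction.value) || 'Space Needle';
    this.emit(':tell', 'The ' + attraction + ' is one of my favorite spots in Seattle.');
  },
  'AMAZON.HelpIntent': function () {
    this.emit(':ask', 'You can ask what there is to do, or ask about a specific place.');
  }
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};

The real template wraps this pattern with the business logic, error handling, and help functions described above; you swap in your own city’s data.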

After completing this tutorial, you’ll know how to do the following:

  • Create a city guide skill - This tutorial will walk Alexa skills developers through all the required steps involved in creating a skill that shares information about a city, and can search for news about that location.
  • Understand the basics of VUI design - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing, and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit for certification. For guidance on designing a voice experience with Alexa you can also watch this video.
  • Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.
  • Manage state in an Alexa skill - Depending on the user’s choices, we can handle intents differently.
  • Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending your skill through the certification process so it can be enabled by any Alexa user. You may even be eligible for some Alexa swag!
  • Interact with the New York Times Search API used for the skill’s news feature.

Get started and build your first—or next—Alexa skill today.

Special Offer: Free Hoodies

All published skills will receive an Alexa dev hoodie. Quantities are limited. See Terms and Conditions.

[Read More]

December 16, 2016

Zoey Collier

Today's guest post comes from Jim Kresge from Capital One Engineering.

In March 2016, Capital One became the first company to offer its customers a way to interact with their financial accounts through Alexa devices. With the Capital One skill for Alexa, customers can access in real time all of their Capital One accounts -- from credit cards to bank accounts, to home and auto loans. The skill is highly rated on the Alexa app, with 4/5 stars.

The Capital One team has continued to update the skill since launch, including a recent update to the skill called “How much did I spend?” With the update, Capital One customers can access their recent spending history at more than 2,000 merchants.  Customers who have enabled the skill can now ask Alexa about their spending for the past six months--by day, month, or a specific date range--through questions posed in natural language such as:  

Q:  Alexa, ask Capital One, how much did I spend last weekend?
A:  Between December 9th and December 11th, you spent a total of $90.25 on your Venture Card.

Q:  Alexa, ask Capital One, how much did I spend at Starbucks last month?
A:  Between November 1st and November 30th, you spent a total of $43.00 at Starbucks on your Quicksilver Card. 

Q:  Alexa, ask Capital One, how much did I spend at Amazon between December 1 and December 15?
A:  Between December 1st and December 15th, you spent a total of $463.00 at Amazon on your Quicksilver Card.

Building the skill was a collaborative effort among the product development, engineering, and design teams at Capital One. I have the privilege of representing the great work of the entire team in this post by giving a behind-the-scenes look at how the Capital One skill was built.

A Beta is Born

In summer 2015, a group of engineers at Capital One recognized the potential to develop a skill for accessing financial accounts using Amazon Echo. We got together for a hackathon, worked our way through several possibilities, and began building the skill. The Beta version included a server-side account linking mechanism that we built ourselves. We were able to use an enhanced beta version of the Capital One mobile app to provide the account linking interface and created some AWS infrastructure to support it. We then demoed the Beta at the AWS re:Invent conference in October 2015.

Evolving the Beta

Having proved out the Beta version of the skill, we became really driven and focused on building the first skill for Alexa that would enable people to interact with their financial accounts.

We began working on a production version in December 2015, with the goal of delivering a product by March 2016. Working in an iterative design model, we found that coding the skill for Capital One financial accounts was relatively straightforward. But, as with anything game-changing, we realized that what we were attempting involved some things no one had done before. First, we were attempting to integrate sensitive data with Alexa, which no company with a skill on Alexa had done yet. It was also the first time we had built a conversational UI. And the Alexa platform was still maturing and evolving as we were building the skill, which meant that we needed to be flexible in quickly making adjustments to code.

We started with the premise that, in the first iteration, Capital One credit card and bank customers could ask Alexa for things like their current account balance, their recent transactions, and when their next bill is due.

Data security is always top of mind for us, as was creating an experience for customers that was friction-free and simple.

With Amazon, we worked through possible solutions within the Alexa infrastructure to build in a security layer that ensures data integrity while still providing a simple, hands-free experience. In addition to using OAuth to securely link accounts, we added a security solution that involves an in-channel spoken “personal key.”  As users set up the Capital One skill and pair their accounts using OAuth, Alexa asks the user if they would like to add a “personal key,” a 4-digit personal identification code.
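As a purely hypothetical illustration (not Capital One’s actual implementation), a skill can gate sensitive intents on both the OAuth access token from account linking and the spoken personal key, roughly like this:

// Hypothetical sketch, not Capital One's code: require a linked account
// and a valid spoken 4-digit personal key before reading out a balance.
const Alexa = require('alexa-sdk');

// Hypothetical helper: verify the spoken key against the linked account.
function isValidPersonalKey(accessToken, spokenKey) {
  return typeof spokenKey === 'string' && spokenKey.length === 4; // placeholder check
}

const handlers = {
  'AccountBalanceIntent': function () {
    const accessToken = this.event.session.user.accessToken;
    const slots = this.event.request.intent.slots;
    const spokenKey = slots.PersonalKey && slots.PersonalKey.value;

    if (!accessToken) {
      // No linked account yet: prompt the user via the Alexa app.
      this.emit(':tellWithLinkAccountCard', 'Please link your account in the Alexa app.');
    } else if (!isValidPersonalKey(accessToken, spokenKey)) {
      this.emit(':ask', 'What is your personal key?', 'Please say your four digit personal key.');
    } else {
      this.emit(':tell', 'Your balance is two hundred dollars.'); // placeholder response
    }
  }
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};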

In addition to wanting users to be able to control access to their account information, we wanted the language Alexa uses in her conversations with customers to be warm and humorous at times. We learned a lot through testing and are using that feedback as we fine tune tone and wording along the way.

Some Creative Technical Work

We built the Capital One skill using Node.js, and we use AWS to host our skill and internal APIs to get customer account information. The basic engineering work is straightforward, and the Amazon developer portal documentation makes it easy to learn. Here are a few of the creative technical solutions we added on top of that basic engineering work to help us move fast with high quality:

The Capital One utterance compiler

We created a tool that automatically generates an expansive set of utterances from just a few input parameters. This allows us to avoid maintaining a huge list of individual utterances for our skill. For example, in our "AccountBalance" intent, we have many ways of asking for the balance on an account. To this already long list we then added account types (e.g., checking, savings) and, after that, product names (e.g., Venture credit card, Quicksilver credit card). Our list of utterances for that intent is now huge when you incorporate all the different ways customers can ask for their balance across account types and product names. Our utterance compiler makes it simple to generate and maintain all these utterances, as sketched below.
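The compiler itself is internal to Capital One, but the idea is easy to sketch: expand a few phrase templates against the account types and product names. The code below is a hypothetical reconstruction in Node.js, not their tool.

// Hypothetical utterance compiler sketch: expand phrase templates
// against account types and product names to generate the sample
// utterances for the AccountBalance intent.
const templates = [
  "what's my {account} balance",
  'how much money is in my {account}',
  'what is the balance on my {account}'
];
const accounts = ['checking account', 'savings account', 'Venture card', 'Quicksilver card'];

const utterances = [];
for (const template of templates) {
  for (const account of accounts) {
    utterances.push('AccountBalance ' + template.replace('{account}', account));
  }
}

// Twelve generated utterances from three templates and four accounts.
utterances.forEach(line => console.log(line));

Adding a new product name then means adding one array entry and regenerating, instead of hand-editing dozens of utterance lines.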

[Read More]

December 15, 2016

Jen Gilbert

Guest Blog post by Lauren Marinaro, Director of Smart Cities and Developer Engagement, ReadWriteHack. ReadWrite, a leading tech editorial platform for IoT and the Connected World, works to connect IoT thought leaders, influencers, and innovators in meaningful ways, including hackathons.

This year, Amazon Alexa teamed up with ReadWrite for two major hackathons — the IoT for Cities Hackathon at IoT World and the Industrial IoT Hackathon at SEMICON West. Each one connected over 100 developers with the latest IoT technology to create innovative, life-changing products over the course of two days.

"The IoT for Cities Hackathon is a place where developers can innovate around technologies that are actually making a difference in people's lives. We are excited to be part of these kinds of initiatives, as developers are constantly showing us new and valuable ways to use Alexa,” said Paul Cutsinger, Head of Alexa Voice Design Education.

And Amazon Alexa APIs were used in five out of eight of the winning solutions at SEMICON West and seven out of nine of the winning solutions at IoT World, including the Grand Prize.

What is it about Amazon Alexa’s voice service that makes it a favorite among IoT developers?

As we move towards a more connected and streamlined world, we expect more seamless interactions with our devices. For instance, if the person sitting next to you dropped to the ground and needed emergency help, wouldn’t you be able to act faster if the Automated External Defibrillator (AED) on the wall was smart and could talk you through saving that person’s life — all while calling the Emergency Response team for you in the background?

That’s what Team Ciklum built, winning the Grand Prize at the IoT for Cities Hackathon at IoT World. They also incorporated three other products from GE, Pitney Bowes, and Cisco to create the ultimate Smart AED. But what stood out in their demo was Amazon Alexa’s voice-activated, life-saving support in a situation where seconds can mean the difference between life and death.

At the Industrial IoT Hackathon at SEMICON West, Team EcoByte took the Grand Prize by creating a pollution awareness service that provides interactive environmental information to enable enhanced well-being. The main selling point: it’s interactive, voice-activated, and hands-free, thanks to Amazon Alexa.

In a hackathon environment, where you typically have little time to create something, the opportunity to actually demo your project can determine if you win or lose.

Developers are not only competing for the top prize, they’re competing for the attention of sponsors, influencers, and decision-makers. A hackathon is a chance to get your hands on the latest technology, prove your skills, and show you can take complex IoT products and platforms and create something connected, useful, and marketable.

Alexa gives competitors a chance to create something quickly (check out their easy-to-maneuver skills here) and have something to demo, even as a beginner coder. It really helps that Amazon’s team has used the Alexa Skills Kit to build skills on their own. Great Alexa evangelists, like Noelle LaCharite, have created capabilities of their own, such as an in-home voice-activated robot bartender.

Voice command is the interface of the future. Leading developers have figured this out, and that is probably a big reason why over two-thirds of the IoT solutions created for our hackathons incorporate Amazon Alexa’s APIs.

To meet with Amazon Alexa Evangelists and Solutions Architects and start creating your own Smart City projects using the Alexa Skills Kit, be sure to sign up for the Smart Cities Hackathon at CES in Las Vegas, January 7th and 8th. Sign up here.

[Read More]

December 13, 2016

Marion Desmazieres

Today, we’re excited to announce a new, free video series on Alexa development by Coding Dojo, a pioneer in the coding bootcamp space that offers in-person and online classes. These Coding Dojo YouTube videos will help aspiring and established Python coders learn about building skills for Alexa, the voice service that powers Amazon Echo.

Here is what you can expect to learn in Coding Dojo's Alexa Skill Training series:

  • The videos will introduce Alexa-enabled devices like Echo and talk about the Alexa Skills Kit, a collection of self-service APIs, tools, documentation and code samples that make it fast and easy for you to add skills to Alexa.
  • The video instructor will take you through the process of creating an Alexa skill built in Python using an AWS Lambda function as the backend to handle the skill's requests. You will learn the steps to create a Coding Dojo skill that can tell you about the coding bootcamp and their instructors.
  • The videos will cover how to configure a skill in the Amazon developer portal, including setting up the interaction model, intent schema, and sample utterances, and how to test the skill (a minimal schema sketch follows this list).
  • A code walkthrough takes a closer look at the code that allows your Alexa skill and Lambda function to interact.
  • Finally, the video training will walk you through creating your own backend using Flask-Ask, a Python framework and Flask extension created by John Wheeler, an Alexa Champion. You will also learn how ngrok can allow you to test your skill locally. The series will end with an overview of AWS Elastic Beanstalk and its advantages.
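For a rough idea of what that portal configuration involves, here is a hypothetical intent schema and matching sample utterances in the format the developer portal expects; the intent and utterances are illustrative, and the actual Coding Dojo interaction model is shown in the videos.

{
  "intents": [
    { "intent": "AboutBootcampIntent" },
    { "intent": "AMAZON.HelpIntent" },
    { "intent": "AMAZON.StopIntent" }
  ]
}

And the matching sample utterances:

AboutBootcampIntent tell me about the bootcamp
AboutBootcampIntent what is coding dojo
AboutBootcampIntent who are the instructors
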
“At Coding Dojo we want to give people hands-on experience building apps and programs for popular technologies in order to help them further their careers,” said Richard Wang, CEO at Coding Dojo. “The new videos will give both novice and existing developers invaluable project experience for their resumes and portfolios. With a number of our graduates already working at Amazon, we're hopeful that these types of real world projects will help more of our students get the opportunity to work on exciting new technology like Alexa.”

Watch the Alexa video series for free on YouTube today.

Learn more about Alexa with Coding Dojo

In addition to the videos, Coding Dojo announced a new in-person and online class, as well as an Alexa hackathon that will train Python developers to create skills. The Alexa skill building class is available as a module in the Python stack at Coding Dojo’s 14-week onsite and 20-week online coding bootcamp. Finally, Coding Dojo will host an Alexa skills hackathon led by Amazon Alexa employees on February 20, 2017 in San Jose. Anyone interested in participating should contact Coding Dojo's San Jose campus.

Check out the full announcement by Coding Dojo here.

December 09, 2016

Ted Karczewski

Voice is one of the most natural ways we can control and interact with the technology we use every day. From the kitchen to the beach, customers have told us that they love the ability to simply ask Alexa to stream their favorite songs, check weather and news reports, and access thousands of Alexa skills.

Today we’re excited to announce the Alexa Fund has added Vesper, developer of the world’s most advanced acoustic sensors, to the Alexa family by contributing to its Series A funding. The $15 million round was led by Boston-based venture capital firm Accomplice, and also includes investment support from Hyperlane, Miraenano Tech, and other undisclosed investors.

Vesper’s MEMS microphones use a patented piezoelectric design to prevent dust, water, solder flux vapors, and more from impacting performance, presenting a unique opportunity for manufacturers to build products for a variety of environments and use cases. Vesper’s latest product, VM1010, is the only wake-on-sound MEMS microphone on the market, introducing the possibility of always-listening devices at next-to-zero power draw.

According to Marwan Boustany, senior technology analyst, MEMS and Sensors, at IHS Markit, “MEMS microphones are growing so quickly because voice interaction is becoming ubiquitous. Microphones such as Vesper’s, which improve ruggedness and performance and support well-matched microphone arrays, may well accelerate many more use cases where voice is the standard user interface.”

As part of its commitment to invest up to $100 million in companies fueling voice technology innovation, the Alexa Fund is constantly looking for startups enabling new and exciting voice-activated capabilities for their customers. Vesper’s technology opens the door to adding Alexa to new device types, such as portable electronics where environmental resistance is an important attribute or where in-home devices require far-field applications for truly hands-free experiences.

Are you the next Alexa Fund business?

The Alexa Fund builds on Amazon’s track record of helping innovative individuals grow ideas into successful products and businesses. Amazon helps accelerate ideas by offering unique benefits to development teams, such as early access to Alexa capabilities and enhanced marketing support across channels.

Learn more about the Alexa Fund and its portfolio of companies on the Amazon Developer Portal.

December 08, 2016

Ted Karczewski

We’re excited to announce the Conexant AudioSmart™ 2-Mic Development Kit for Amazon AVS, a commercial-grade reference solution that streamlines the design and implementation of audio front end systems. This solution works with our updated Java sample client for Raspberry Pi, which also includes music certification enhancements. This kit features Conexant’s AudioSmart™ CX20921 Voice Input Processor with a dual microphone board and Sensory’s TrulyHandsfree™ wake word engine tuned to “Alexa”. 

Learn more about Conexant’s AudioSmart™ 2-Mic Development Kit for Amazon AVS

“Conexant’s AudioSmart 2-Mic Development Kit for Amazon AVS unlocks serious voice capture capabilities, allowing developers to achieve a far better AVS user experience through voice processing technologies that overcome acoustic and distance challenges,” said Steve Rabuchin, Vice President Amazon Alexa. “Utilizing Conexant’s AVS solutions will help third-party manufacturers quickly innovate with Alexa.” 

[Read More]

December 07, 2016

David Isbitski

Earlier in the year, we introduced built-ins with 15 different intents (such as Stop, Cancel, Help, Yes, No) and 10 slot types (such as Date, Number, City, etc.) that made it easier for developers to create voice interactions.  Today, the US preview of our new Alexa Skills Kit (ASK) built-in library is available to developers. This expands the library to hundreds more slots and intents covering new domains including books, video and local businesses. We chose these based on feedback from our developer community, as well as our own learnings with Alexa over the past year.

When you’re building a skill, it’s challenging to think of all the different ways your customers might ask the same question or express the same idea – all of which your skill would ideally need to understand. The new built-in intents and slots reduce your heavy lifting by providing a pre-built model. For example, just including the intent signature “AMAZON.SearchAction<object@LocalBusiness[phoneNumber]>” makes your skill understand a customer’s request for phone numbers for local businesses.

Customer usage and your feedback are important for us to improve the accuracy of the library, which will increase over the course of the preview. To provide feedback during this developer preview or submit your questions, visit our Alexa Skills Kit developer forums, create a question, and use the “built-in library” topic. We appreciate your help!

Getting Started

The built-in intent library gives you access to built-in intents that fall into categories, such as the weather forecast which I will walk through below (check out the full list of categories here). You can use these intents to add functionality to your skill without providing any sample utterances. Using one of these new built-in intents in your skill is similar to using a standard built-in intent like AMAZON.HelpIntent:

  1. Add the intent name to your intent schema.
  2. Implement a handler for the intent in your code.

The differences are:

  • Intents in the library are named according to a structure using actions, entities, and properties. Understanding this naming convention can help you understand the purpose and use of each intent.
  • Intents in the library also have slots for providing additional information from the user’s utterance. The slots are provided automatically, so you do not define them in the intent schema. In contrast, the standard built-in intents like AMAZON.HelpIntent cannot use slots.

Our weather example would have an intent schema like this:

{
  "intents": [
    {
      "intent": "AMAZON.SearchAction<object@WeatherForecast>"
    }
  ]
}

Although no slots are defined in the above schema, an utterance like “what’s the weather today in Seattle” would send your skill a request with slots containing today’s date and the city “Seattle.”

These intents are designed around a set of actions, entities, and properties. The name of each intent combines these elements into an intent signature. In the above example the action is SearchAction, its property is object, and the entity is WeatherForecast.
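To handle that request in a Node.js skill, the handler can read the automatically provided slots. The sketch below assumes the alexa-sdk module, and the slot names (object.location.addressLocality, object.startDate) are illustrative assumptions; inspect the request JSON your skill actually receives to confirm them.

// Hedged sketch: handling the built-in weather search intent with the
// alexa-sdk Node.js module. Slot names are assumptions for illustration.
const Alexa = require('alexa-sdk');

const handlers = {
  'AMAZON.SearchAction<object@WeatherForecast>': function () {
    const slots = this.event.request.intent.slots || {};
    const city = slots['object.location.addressLocality'] && slots['object.location.addressLocality'].value;
    const date = slots['object.startDate'] && slots['object.startDate'].value;
    this.emit(':tell', 'Looking up the forecast for ' + (city || 'your area') + ' on ' + (date || 'today') + '.');
  }
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};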

[Read More]

December 06, 2016

Zoey Collier

On November 18, the first episode of The Grand Tour series marked the most-watched premiere in Amazon’s video streaming service’s history. British car enthusiasts Jeremy Clarkson, Richard Hammond, and James May returned to the screen for an all-new series of globetrotting adventures. Each episode takes Amazon Prime Video viewers to another exotic location.

For Amazon Alexa users, watching The Grand Tour is only half the fun. Prior to the series premiere, Amazon debuted a companion skill built by PullString on the Alexa Store, available to its US and UK customers.

Each Thursday, prior to the show’s Friday airtime, The Grand Tour skill provides a new clue about what to watch for in the upcoming video episode. On Saturday, if viewers are truly “on the tour” and answer three trivia questions correctly, they’ll unlock exclusive video content.

The fun aside, what makes the skill unique is another first: the PullString Platform on which it was developed.

Developing conversational experiences with Alexa

Mike Houlahan, head of PullString’s enterprise partner program, explains that Oren Jacob and Martin Reddy co-founded the company in 2011. The two Pixar Animation veterans’ vision was to build lasting emotional connections between characters and audiences using two-way computer conversations. Noting the absence of professional toolsets for building conversational experiences between a character and its audience, they set about filling that gap.

The PullString Platform is an all-in-one environment that lets developers and authors create award-winning conversational experiences, like the Lt. Reyes chatbot from Call of Duty and Hello Barbie.

Now, the company makes the power of the PullString Platform available to Alexa developers. “We are very excited to launch The Grand Tour skill,” Houlahan said. “We are simultaneously announcing the availability of PullString for the Alexa Developer Community to build their own Alexa skills.”

The PullString Platform includes:

  • A professional conversation authoring and debugging environment
  • A conversational AI engine to interpret and drive the interaction
  • Text message and bot conversation support
  • A platform to host the experience
  • Direct publishing to the Alexa environment

Learn more about the PullString Platform.

Creating The Grand Tour skill

With the PullString Platform, a creative writer can prototype, develop, test and deploy an entire skill without writing a single line of code. That’s just what Danielle Frimer did.

Frimer is the creative writer who scripted the voice user interface (VUI) for The Grand Tour Alexa skill using PullString. She worked with Amazon Prime Video to get the show’s actors into the recording booth to record dialog, and she put it all together using the PullString Platform.

“I am not a developer in any way,” says Frimer. “With the platform, I could focus my attention on the creative aspects of it—the lines, the flow of things, the overall design—not on the underlying nuts and bolts of it.”

The skill’s design mimics the flow of The Grand Tour’s episode rollout. The voice interaction, of course, is peppered with the recorded dialog, making the experience even more engaging.

Frimer says PullString’s templates and documentation give developers a quick-start on different types of conversation projects. In all cases, it relieves both authors and developers of the complicated logic involved with a complex VUI model.  

[Read More]

December 02, 2016

Marion Desmazieres

The name Harrison Kinsley may not ring a bell, but if you’re into Python programming you’ve probably heard the name “Sentdex”. With over 125,000 subscribers to his YouTube channel and about 800 free tutorials on his associated website, Harrison has become a go-to reference for learning materials on Python programming.

Today, we’re excited to share a new Alexa skills tutorial for Python programmers, available for free on PythonProgramming.net with companion video screencasts to follow along. This three-part tutorial series provides the instructions and code snippets to build an Alexa skill in Python that goes to the World News subreddit, a popular feed on the news aggregator Reddit, and reads the latest headlines. To follow along, you will need an Alexa-enabled device, ngrok or an HTTPS-enabled server, and an Amazon Developer account.

Get started with the Alexa tutorial series here. For more Python tutorials, head to Harrison’s website.

Happy coding!

Marion

Learn more

Check out these Alexa developer resources:

 

December 02, 2016

Zoey Collier

Tushar Chugh is a graduate student at the Robotics Institute at Carnegie Mellon University (CMU). There he studies the latest in robotics, particularly how computer vision devices perceive the world around them.

One of his favorite projects was a robot named Andy. Besides having arms, Andy could discern colors and understand spatial arrangement. Andy could also respond to voice commands, like “pick up the red block and place it on top of the blue block.” Andy’s speech recognition, a CMU framework, was about to change.

When Amazon came to give some lectures at CMU, they had a raffle drawing. Chugh won the drawing and took home a new Amazon Echo as a prize. Over three days and nights without sleep, he completely integrated Andy and Alexa using the Alexa Skills Kit (ASK).

When he saw Hackster’s 2016 Internet of Voice challenge, he knew he had to enter. And in August 2016, Chugh’s Smart Cap won the prize for the Best Alexa Skills Kit with Raspberry Pi category.

The inspiration and genesis of Smart Cap

According to Chugh, there are about 285 million visually-impaired people in the world. In 2012, he worked on a project to help the visually impaired navigate inside a building. His device, a belt with embedded sensing tiles, won a couple of prizes, including a Wall Street Journal Technology Innovation Award. It was ahead of its time, though, and it wasn’t yet practical to develop the technology into a commercial product.

A lot can change in four years, including Chugh’s discovery of Alexa. Besides dabbling with Alexa and Andy the robot, he has also worked with Microsoft Cognitive Services for image recognition. Chugh now saw a chance to bring a new and better “seeing device” to light.

“When I saw Alexa, I thought we can extend it and integrate [Alexa] as a separate component,” says Chugh. “I talked with a couple of organizations for the blind in India, and they agreed this kind of system would be very, very useful. That was my main motivation.”

Chugh says the hardware for the Smart Cap is basic. He used a Raspberry Pi (RPi), a battery pack, a camera and a cap on which to mount it. As for the software, it included:

  • Alexa Skills Kit (ASK)
  • Amazon Web Services (AWS)
  • DynamoDB
  • Microsoft Cognitive Services (MSCS)
  • Custom Python code running on the RPi to interface with the camera and MSCS

The goal was straightforward. A visually-impaired user could ask Alexa what is in front of them. Alexa would vocalize the scene, allowing the person to navigate safely wherever he or she may be.

How the Smart Cap works

How do the pieces all fit together?

Chugh says there are two distinct parts.

First, the image capture and analysis:

  • As the Smart Cap wearer walks down the street, a Python script on the RPi directs the camera to take pictures every two seconds.
  • Another program sends the image to MSCS using RPi’s WiFi / phone connection.
  • MSCS returns a text description of the image with relevant keywords.
  • The description is stored on RPi, then sent via AWS to be stored on DynamoDB.

Now comes the Alexa skill:

  • The wearer says “Alexa, ask Smart Cap to describe the scene” or “Alexa, ask Smart Cap what is in front of me”.
  • The skill uses AWS Lambda to retrieve and parse the latest value from DynamoDB (a rough sketch follows this list).
  • Alexa responds with the description and keywords via the speaker or bone conduction headphones.
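As a hedged sketch of the Lambda side (the table, key, and attribute names below are assumptions for illustration, not Chugh’s actual code), the skill handler might look like this in Node.js:

// Sketch: fetch the most recent scene description that the Pi stored in
// DynamoDB and speak it back. Names are illustrative assumptions.
const Alexa = require('alexa-sdk');
const AWS = require('aws-sdk');

const dynamo = new AWS.DynamoDB.DocumentClient();

const handlers = {
  'DescribeSceneIntent': function () {
    const params = { TableName: 'SmartCapScenes', Key: { deviceId: 'cap-1' } };
    dynamo.get(params, (err, data) => {
      if (err || !data.Item) {
        this.emit(':tell', 'Sorry, I could not reach the camera right now.');
      } else {
        this.emit(':tell', data.Item.description);
      }
    });
  }
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};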
[Read More]

December 01, 2016

Ted Karczewski

The home is rapidly evolving thanks to the proliferation of connected devices and advancements in voice recognition technology. Together, new smart home products and voice control services are giving customers greater control over their homes.

Amazon and Intel see a tremendous opportunity to bring the benefits of a personal voice experience to millions of new consumers and are collaborating to encourage developers and device manufacturers to extend natural voice interaction to more products via Amazon Alexa.

The collaboration will enable partners to build new devices with Alexa using an Intel-based smart speaker form factor reference design coming in Q1 2017, as well as make it easier to create skills that work with the Intel-based Smart Home Hub.

Enabling Product Development with Intel and Amazon Alexa

Intel is working with Amazon to deliver smart speaker form factor reference designs (FFRD) with Alexa that make it easier for device manufacturers to build products with high-performance, far-field voice interaction. The first FFRD will be available starting in Q1 2017 and will offer device makers:

  • Accelerated development of Alexa voice-enabled smart speakers on Intel architecture.
  • Voice as the primary interface, allowing Alexa skills developers to build capabilities that reach even more end users.
  • The speakers and microphone arrays expected from smart speakers, plus the home radios that support the standards needed for PAN connectivity in the home (Wi-Fi, Zigbee, Z-Wave, and Bluetooth), with enough extensibility to add video capabilities and environment sensors for an all-in-one customer experience.
  • An SDK that enables developers to add voice and video capabilities to connected products.

The FFRD combines Intel’s platform technology advancements with Amazon’s ever-smarter Alexa Voice Service to accelerate innovation among device manufacturers and the developers building new skills for all Alexa-enabled products. 

[Read More]

December 01, 2016

Ted Karczewski

We are excited to announce a new addition to the Alexa family—JAM Voice.

JAM Voice is a portable speaker system with Alexa that serves as a complete hub for music and information. It’s a Wi-Fi and Bluetooth-connected speaker that features touch-activated integration with the Alexa Voice Service (AVS), giving customers the ability to push a button and just ask Alexa to play music, check the weather, get the news, or even order a pizza.

Whether entertaining a group of friends or relaxing after work, the JAM Voice system can play music from one or many rooms in the house. You can pair multiple speakers when connected to Wi-Fi, streaming music from Amazon Music, iHeartRadio, and TuneIn just by asking Alexa. The Alexa integration also makes it easy for customers to access thousands of third-party skills, built using the Alexa Skills Kit, including smart home controls through Philips Hue, Belkin Wemo, SmartThings, Insteon, and Wink.

Buy JAM Voice on Amazon.com now.

Getting Started with AVS

Alexa is always getting smarter with new capabilities and services through machine learning. Your product also gains access to new capabilities with Alexa through API updates, feature launches, and custom skills. Learn how AVS can add rich voice-powered experiences to your connected products, and read how some of our partners below have integrated with Alexa already:

Have questions? We’re here to help. Visit us on the AVS Forum or Alexa GitHub to speak with one of our experts.

AVS is coming soon to the UK and Germany. Read the full announcement here.

 

November 30, 2016

David Isbitski

Update December 7, 2016: Today we announced the US preview of our new Alexa Skills Kit (ASK) built-in library is available to developers. Learn more >

A year and a half ago, we released the Alexa Skills Kit, and we’ve seen developers are eager to build skills and learn to build voice experiences. Developers like yourself have published over 5,000 skills, up from just over 100 at the beginning of the year. These skills are available on millions of Alexa-enabled devices in the US, UK and Germany.  

Introducing the Alexa Skills Kit Built-in Library

Today we announced that we will roll out a library of hundreds of new intents and slots as part of the Alexa Skills Kit in developer preview in the coming weeks (US only). These new slots and intents are the product of a year of learnings in Alexa’s natural language understanding (NLU) that help Alexa better understand and reply to requests. With the new built-in library, we have combined those learnings with the most common requests we have seen from the developer community to offer hundreds of built-ins for use in your own skills. This is just a start, and we will continue to expand the set of built-in functionality and improve its accuracy as we get feedback from all of you.

What are Built-Ins?

With built-in intents and slot types, you do not need to provide sample utterances to train Alexa to understand customers’ intents for your skill. We introduced the concept of built-ins earlier in the year, beginning with 15 intents (such as Stop, Cancel, Help, Yes, No) and 10 slot types (such as Date, Number, City). We are now introducing a new built-in library that provides hundreds of built-in intents and slots for developers as part of the Alexa Skills Kit. The syntax for these built-ins is designed to make integrating these capabilities into your custom skills simple.

For example, let’s imagine a custom skill that allows someone to ask for the temperature in a location for the next three days. If we wanted to build this skill previously, we would have to create an interaction model that included a combination of built-in and custom intents for handling how someone would ask the question. Most likely this would include built-in slot types for city and state, a built-in slot type for the number of days, and then a lot of sample utterances to ensure Alexa was accurately understanding the question each time. We also would need to do server side type validation to ensure we were being passed the specific type of data we were looking for.

With the new built-in intents library, weather becomes an object that Alexa knows a lot about, both weather itself and its attributes, but also how a person may ask for the weather. Our interaction model now can be done with no sample utterances and a single intent! We call this new type of interaction an Intent Signature and it includes actions, entities and properties. There are numerous Intent Signatures available for use in your Alexa skills across all sorts of categories.

Stay tuned to learn more about the built-in library. For more information about getting started with the Alexa Skills Kit, check out the following:

Alexa Skills Kit (ASK)
Alexa Dev Chat Podcast
Alexa Training with Big Nerd Ranch
Alexa Developer Forums

-Dave (@TheDaveDev)

November 30, 2016

Douglas Booms

For over a year, the Alexa Fund has been investing in promising startups that are fueling speech technology advances and voice application innovation. Today, we are excited to announce the Alexa Accelerator, powered by Techstars. The accelerator is the first of its kind at Amazon and another initiative from the Alexa Fund to champion builders, developers, and entrepreneurs innovating with voice technology.

The Alexa Accelerator, powered by Techstars, will focus on Alexa domains, including, but not limited to, such areas as connected home, wearables and hearables, enterprise, communications, connected car, health and wellness, and enabling technologies.  It will also give us the opportunity to work with exciting startups from around the world, and along the way, we expect to see many new applications and innovations we haven’t even imagined.

The Alexa Accelerator, powered by Techstars, will begin accepting applications to the program in January 2017. Following a rigorous review process, 10-12 selected finalists will come together in Seattle to take part in an intensive 13-week program, during which they’ll be matched with mentors from Amazon and Techstars to develop their technologies and business models. The program begins in July 2017 and comes to an eventful close in October 2017 with the Alexa Accelerator, powered by Techstars Demo Day, giving the companies the opportunity to showcase their products and meet with investors.

The Alexa Accelerator, powered by Techstars, is part of the fast-growing Alexa Fund, which now includes 22 investments in companies that range from early- to late-stage. To share more about the accelerator, program leaders from the Alexa Fund and Techstars will host information sessions in places like Seattle, San Francisco, New York City, Boston, London, Berlin, and Tel Aviv over the coming months. Visit the Alexa Accelerator, powered by Techstars page to learn more and stay tuned for updates.

 

November 30, 2016

Ted Karczewski

Consumers want greater control over their homes—the ability to manage not only smart products like their lights and thermostats, but also the services that provide them with connectivity and original content. For service providers, this means tapping into a network that empowers customers with a growing number of capabilities and products built for managing the entire home through voice.

Today, Technicolor and Amazon announced a new collaboration that brings Amazon Web Services and Amazon Alexa together with next generation home gateways from Technicolor that will allow service providers to develop new services for connected homes faster than ever.

Technicolor is a leader in digital innovation for the media and entertainment industry that works with cable, telco and satellite operators to bring bandwidth-intensive experiences into the home. The company sees opportunities for network service providers to build a bridge between cloud and edge technology to introduce new revenue-generating services while making home networks easier to access, manage and configure through voice activated commands.

Technicolor will use AWS for home gateway applications, leverage AWS IoT and Greengrass, integrate Alexa into its new gateway products to allow users to configure network settings and more just by asking, and incorporate Amazon’s Alpine System-on-chip into its new family of gateway products.

What this Means for Network Service Providers and Consumers

The collaboration has clear benefits for both service providers and consumers. NSPs will gain access to a broad developer community through AWS that is constantly building new applications for the home, improving overall quality of service, customer experience, and cost control, while addressing privacy concerns by giving consumers greater control over their own information. For consumers, Technicolor’s new services and gateway products will offer the ability to live and interact with connected home services in an easy, natural manner: through voice. This in turn will drive demand for new, more meaningful applications, giving developers an incentive to keep raising the bar when working with NSPs.

[Read More]
