Alexa Blogs Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2018-09-18T18:29:31+00:00 Apache Roller /blogs/alexa/post/5f7dcae8-7254-4604-b1b3-1fca0c986412/ask-sdk-for-python-now-generally-available ASK SDK for Python Now Generally Available BJ Haberkorn 2018-09-18T18:29:31+00:00 2018-09-18T18:29:31+00:00 <p><img src="" /></p> <p>Use the ASK SDK for Python to simplify development of the backend cloud service for your Alexa skill.</p> <p><img src="" /></p> <p>Today, we are happy to announce the general availability of the Alexa Skills Kit (ASK) SDK for Python. We've incorporated much of your feedback since <a href="">the beta release of the SDK</a> in June, and have added new samples to help you get started quickly. You can use the ASK SDKs—now available for Python, Node.js, and Java—to simplify development of the back-end cloud service for your Alexa skill.</p> <h2>Write Less Boilerplate Code</h2> <p style="margin-left:0in; margin-right:0in">As we shared when we launched the beta version of the SDK, our goal is to reduce the amount of code you need to write to process Alexa requests and responses and to handle other common skill tasks. You can use the following key features:</p> <ul> <li><strong>Request Handling</strong>. Request handling in the SDK makes it easy for you to invoke the right code when Alexa sends you a request. You can write a single handler for multiple Alexa intents, or invoke different handlers based on nearly any request attribute. The ASK SDK for Python also introduces flexible handler registration, allowing you to use either decorators or traditional class-based implementations of handler features.</li> <li><strong>Response Building</strong>. You can deliver responses to your customers that include text-to-speech, audio and video streams, and cards and other visual elements. Customers will receive one or more of these elements depending on what device they are using. 
Using the SDK, you can build responses that include all of these elements.</li> <li><strong>Attribute Management.</strong> You can store and retrieve information at different scopes using attributes in the SDK. Attributes allow you to keep track of what happened so far, and to use this information to determine what happens next. You can define attributes that persist for a single request, for a single customer session, or for the lifetime of your skill.</li> <li><strong>Alexa API Calls.</strong> You can call nearly any Alexa API from within your skill logic using service clients in the SDK. The service clients automatically inject relevant endpoint and authentication-token information on your behalf.</li> </ul> <h2>Bootstrap Your Next Alexa Skill Project with Six New Samples</h2> <p>You can take advantage of six new samples:</p> <ul> <li><strong><a href="">Simple Facts</a></strong><strong> </strong>- Learn the ins and outs of the ASK SDK for Python with this simple fact skill.</li> <li><strong><a href="">City Guide </a></strong>- Demonstrates how to call an API from within your skill to provide dynamic content to your customers.</li> <li><strong><a href="">Fact In-Skill Purchases</a></strong><strong> </strong>- Learn how to monetize your skill with both one-time purchases and subscriptions.</li> <li><strong><a href="">Pet Match </a></strong>- Easily prompt for and parse multiple values from customers with <a href="">dialog management</a> and <a href="">entity resolution</a>.</li> <li><a href=""><strong>How-To </strong></a>- Teach your skill how to be multi-lingual using Python's internationalization library.</li> <li><strong><a href="">Quiz Game </a></strong>- Configure your skill to support displays on Alexa-enabled devices with a screen.</li> </ul> <h2>Build Your First Alexa Skill with Python and Tell Us What You Think</h2> <p>Visit the <a href="">alexa-skills-kit-sdk-for-python</a> repository on GitHub to find everything you need, including the samples described 
above. Try it today, and tell us what you think. Create a GitHub issue on the repository to provide feature requests and feedback on issues you encounter. We can’t wait to see what you build.</p> /blogs/alexa/post/54e7b354-837d-4dc1-a33c-b6cf9e09a7a5/introducing-alexa-gadgets-toolkit-create-echo-connected-accessories-that-deliver-customer-delight Introducing Alexa Gadgets Toolkit: Create Fun and Delightful Echo-Connected Accessories Karen Yue 2018-09-18T14:40:14+00:00 2018-09-18T14:40:14+00:00 <p><a href="" target="_blank"><img alt="Alexa Gadgets Toolkit" src="" style="height:480px; width:1908px" /></a></p> <p>We’re excited to announce the availability of the <a href="" target="_blank">Alexa Gadgets Toolkit</a>&nbsp;(Beta), allowing you to build your very own Alexa Gadgets — fun and delightful accessories that pair to compatible Echo devices via Bluetooth. Alexa Gadgets extend Alexa’s capabilities to new modalities with motors, lights, sound chips, and more.</p> <p>With the Alexa Gadgets Toolkit, you can build on what customers already love about Alexa, whether it’s responding with requested information from across the room, setting reminders, or&nbsp;playing music. Now, you can extend these same capabilities to gadget devices, and manifest&nbsp;customer interactions with Alexa in a variety of physical forms. For example, create a disco ball that fills the room with sparkling light when a customer asks Alexa a question, or&nbsp;a robot that lip syncs to things Alexa says. The possibilities for delighting customers are infinite.
Early adopters including <a href="" target="_blank">Hasbro</a>, <a href="" target="_blank">WowWee Group Limited</a>, <a href="" target="_blank">Gemmy Industries</a>, <a href="" target="_blank">Baby Plus<sup>&reg;</sup></a>, <a href="" target="_blank">TOMY International</a>, <a href="" target="_blank">Novalia</a>, and <a href="" target="_blank">eKids (an affiliate of iHome)</a>, are already working to deliver entertaining experiences using the Alexa Gadgets Toolkit.</p> <p>The Alexa Gadgets Toolkit offers self-service APIs, including <a href="" target="_blank">Gadget Interfaces</a> that expose metadata of Alexa’s capabilities on compatible Echo devices. It also includes <a href="" target="_blank">technical documentation</a> and <a href="" target="_blank">sample code</a> that facilitate direct pairing and connectivity, communication, and over-the-air (OTA) updates between your gadget and its paired Echo device. You can start building today without advanced processors, microphone and audio processing, or device cloud management.</p> <p>With the Alexa Gadgets Toolkit and technical documentation, you can build an Alexa Gadget that uses one or more of the available Gadget Interfaces.
The list of available interfaces will continue to grow, but for now, here are a few ideas to get you started:</p> <ul> <li><strong>Wake Word Detection</strong>: Respond when the wake word is detected, such as a cuckoo clock that pops its head out whenever a customer says, “Alexa”</li> <li><strong>Speech:</strong> Sync movement to text-to-speech, such as a robot that lip syncs as Alexa&nbsp;reads the local weather report</li> <li><strong>Notifications</strong>: Respond to notifications, such as a flag that rises each time a notification is received</li> <li><strong>Timers</strong>: Respond when a timer has expired, such as an outdoor gong that chimes when a backyard playtime timer has concluded</li> <li><strong>Alarms</strong>: Respond when an alarm has been triggered, such as a switch that releases dog food each time an alarm goes off</li> <li><strong>Reminders:</strong> Respond when a pre-set reminder has gone off, such as a pill box that plays a short tune and flashes an array of colors when it’s time for daily vitamins</li> <li><strong>Music (Coming Soon):</strong> Create visual performances with music, such as a hula girl that sways her hips when songs are&nbsp;playing on Amazon Music</li> </ul> <p>The first products to take advantage of the Alexa Gadgets Toolkit will be available to consumers later this year, including a variety of dancing plush animatronics and an updated Big Mouth Billy Bass from Gemmy Industries. Each uses the available Gadget Interfaces to deliver amusing reactions as customers interact with Alexa.</p> <h2>More to Come: Alexa Gadgets for Kids</h2> <p>Later this year, we will unlock additional opportunities for you to extend the fun and entertainment of Alexa Gadgets for the whole family to enjoy, including the little ones.
For the first time, developers will be able to create gadgets for kids that are accompanied by compatible kid skills.</p> <p>Several global play and entertainment companies have been working on Alexa Gadgets concepts for kids. <a href="" target="_blank">You can sign up to be notified</a> when these products are available. Here’s what our developers are saying, along with a sneak peek of upcoming products:</p> <ul> <li><strong>Hasbro</strong>:&nbsp;“Our mission to create the world’s best play experiences requires us to reach audiences where they are today and where they’re going in the future. We’re excited about the possibilities to bring our brands and characters to life through Alexa Gadgets, and see great potential for more immersive play with this technology,” says Brian Chapman, Senior Vice President and Global Head of Design and Development, Hasbro.</li> <li><strong>TOMY International</strong>: “Our objective is to develop products that make families’ lives easier, while also incorporating technologies that promote convenience. We are thrilled to work with Alexa Gadgets to offer parents the benefits of hands-free, voice-activated assistance with our products as they navigate through their children’s first years,”&nbsp;says Vincent D’Alleva, Chief Brand and Commercial Officer,&nbsp;TOMY International.</li> <li><strong>WowWee Group Limited</strong>: BRUSHBOTS are smart toothbrushes that encourage the correct dentist-approved brushing technique. Through lights, sound effects and gameplay, BRUSHBOTS sense and measure motion and timing to help kids not only brush correctly – but have fun doing it! “BRUSHBOTS’ proprietary sensing technology transforms brushing into an entertainment experience for kids everywhere.
With Alexa's unique capabilities, the gameplay possibilities for BRUSHBOTS are limitless,&quot; says Davin Sufer, Chief Technology Officer,&nbsp;WowWee Group Limited.</li> <li><strong>BabyPlus<sup>&reg;</sup>: </strong>Waddles the Smart Duck<sup>&trade;</sup> lends a helping hand to parents when they need it most, from bath-time to nap-time. Parents will love the convenience of hands-free control in the nursery and streaming music from any room. “Waddles was inspired by new parents who need versatile products for every stage of their parenting journey. We knew we needed a universally beloved figure and the rubber duck was an obvious choice and the perfect vessel for the next generation of nursery smart gadgets. And now, we are excited to bring the fun to Alexa,” says Matt MacBeth, CTO, BabyPlus.</li> <li><strong>Novalia</strong>: <a href="" target="_blank">Touchscapes</a> is an interactive touch-sensitive table mat that helps your child learn through immersive play. “We have dreamed of creating experiences with Alexa for years. Through Alexa Gadgets, this magical soundscape and immersive experience is finally possible for our products!” says Kate Stone, Founder and CEO,&nbsp;<a href="" target="_blank">Novalia</a>, an Alexa Accelerator company.</li> </ul> <h2>Keep in Touch: Sign Up for Alexa Gadgets Updates</h2> <p>The Alexa Gadgets Toolkit is currently available in the US, UK, and Germany. <a href="" target="_blank">Start building your own Alexa Gadget</a> and <a href="" target="_blank">sign up to receive the latest updates</a> on the Alexa Gadgets Toolkit and upcoming features.
We can’t wait to see what you build!</p> /blogs/alexa/post/4a46da08-d1b8-4d8e-9277-055307a9bf4a/alexa-skill-recipe-update-call-and-get-data-from-external-apis Alexa Skill Recipe Update: Call and Get Data from External APIs Jennifer King 2018-09-17T14:00:00+00:00 2018-09-17T17:55:31+00:00 <p><img alt="" src="" style="height:240px; width:954px" /></p> <p>With the new Software Development Kit (SDK) for Node.js, there are new methods to be aware of for making HTTP calls to external APIs. This post compares the process when using version one versus version two of the SDK for Node.js.</p> <p><img alt="" src="" style="height:240px; width:954px" /></p> <p>In a previous Alexa skill recipe, we shared how you can <a href="">call HTTP requests to get data from an external API</a>, enabling your skill to fetch meaningful data from a remote source. Since then, we’ve released <a href="">version two of the Alexa Skills Kit Software Development Kit (SDK) for Node.js</a>. With the new SDK, there are new methods to be aware of for making HTTP calls to external APIs. This post details the process of making external API calls in version two of the SDK.</p> <p>Whereas you may be familiar with achieving this through callbacks in version one of the SDK, we actually approach this in version two through the use of promises. With version two, you cannot send a response to Alexa through a callback. 
Below, we elaborate a little further.</p> <h2>Understanding How Version Two Handles Responses</h2> <p>Below, we have an example handler for a response using version two of the SDK:</p> <pre> <code class="language-javascript">const HelloWorldIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      &amp;&amp; handlerInput.requestEnvelope.request.intent.name === 'HelloWorldIntent';
  },
  handle(handlerInput) {
    const speechText = 'Hello World!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  },
};</code></pre> <p>As shown above, we <strong>need</strong> to return our object after we have added everything required, so we cannot call another function and rely on a callback (like we would have done in version one). This is because we’ve lost access to that specific scope and we wouldn’t really be returning our object anywhere, except for inside the new scope after the callback has been handled. With that in mind, we need a new approach to understand how to call external APIs in version two of the SDK.</p> <h2>Understanding Await, Async, and Promises</h2> <p>Using these operators, functions, and objects, we can call external APIs in version two of the SDK without the need for a callback – they are part of JavaScript, and we are able to make use of them in our Node.js code.</p> <p>We’ll only touch on these concepts, but to summarise:</p> <ul> <li><u>async</u>: If you add <strong>async</strong> before a function declaration, you are committing that the function <strong>will</strong> return a promise.</li> <li><u>await</u>: Only inside <strong>async</strong> functions, using <strong>await</strong> will make your code wait until that promise has settled and returned a result</li> <li><u>Promise</u>: As the name suggests, this is literally a promise that an object may (or may not) produce a value at some point in the future</li> </ul> <p>So, how do we use these to call external APIs?
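<p>Before wiring these into a skill, here is a minimal, self-contained sketch of the three concepts in plain JavaScript. The fake delay and the placeholder joke value are stand-ins for a real HTTP call, not part of any Alexa API:</p>

```javascript
// A promise that settles later, standing in for a real HTTP call.
function pretendHttpGet() {
  return new Promise((resolve) => {
    setTimeout(() => resolve({ value: { joke: 'placeholder joke' } }), 10);
  });
}

// `async` commits this function to returning a promise;
// `await` pauses it until pretendHttpGet() has settled.
async function buildSpeech() {
  const response = await pretendHttpGet();
  return 'Here is what I got back: ' + response.value.joke;
}

// The returned promise delivers the finished speech text.
buildSpeech().then((speech) => console.log(speech));
```

<p>The same shape appears in the skill handler pattern: an async handle function awaits the network call, then uses the result to build the response.</p>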
We are now going to literally halt our code until some other code (in our case, the actual HTTP call) has completed and resolved. Once complete, we get our data and add it to our Alexa response object.</p> <h2>Example Approach for HTTP Calls in Version Two</h2> <p><strong>1. Again, we initialise the https module, which is a default part of Node.js:</strong></p> <pre> <code class="language-javascript">var https = require('https');</code></pre> <p><strong>2. Write a function that makes the request and sends back the server's response through a promise (in this case, from a Chuck Norris joke API):</strong></p> <pre> <code class="language-javascript">function httpGet() {
  return new Promise((resolve, reject) =&gt; {
    var options = {
      host: '',
      port: 443,
      path: '/jokes/random',
      method: 'GET',
    };
    const request = https.request(options, (response) =&gt; {
      response.setEncoding('utf8');
      let returnData = '';
      response.on('data', (chunk) =&gt; {
        returnData += chunk;
      });
      response.on('end', () =&gt; {
        resolve(JSON.parse(returnData));
      });
    });
    request.on('error', (error) =&gt; {
      reject(error);
    });
    request.end();
  });
}</code></pre> <p><strong>3. Call the function and respond to the user with the formatted data:</strong></p> <pre> <code class="language-javascript">const GetJokeHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      &amp;&amp; request.intent.name === 'GetJokeIntent';
  },
  async handle(handlerInput) {
    const response = await httpGet();
    console.log(response);
    return handlerInput.responseBuilder
      .speak(&quot;Okay. Here is what I got back from my request. &quot; + response.value.joke)
      .reprompt(&quot;What would you like?&quot;)
      .getResponse();
  },
};</code></pre> <p>As you can see in step 1, we initialise the HTTP module once again – nothing has changed here.</p> <p>For step 2, you’ll notice that our httpGet function is similar in the sense that it passes the relevant options to the call and parses the data it gets back, but it is now enclosed inside a <strong>promise</strong>. Because of that commitment, we handle two outcomes inside the promise: <strong>resolve</strong> if everything is successful, where we parse our data and send it back, and <strong>reject</strong> when something goes wrong and we error out.</p> <p>On to step 3: we create our version two handler and make sure to add <strong>async</strong> to our handle function because, as mentioned before, we are saying that this code <strong>will</strong> return a promise at some point. Because of this declaration, we can then add <strong>await</strong> to our httpGet call, which halts the code until our external call has completed and resolved. Once we have our data, we then just send it back to the user as normal. (If the promise is rejected, the error can be caught by wrapping the await call in a try/catch block and returning a graceful error response.)</p> <h2>Learn More and Get Started</h2> <p>You can find these concepts and more in the <a href="">Pet Match</a> sample in the official Alexa GitHub, and if you haven’t yet checked out the new SDK, see the official repository over <a href="">here</a> for more info and documentation.
Reach out directly to me <a href="" target="_blank">@jamielliottg</a>, post on the <a href="" target="_blank">forums</a>, or <a href="">get in touch</a> with questions about how this works!</p> <h2>Related Content</h2> <ul> <li><a href="">Making HTTP Requests to Get Data from an External API (V1)</a></li> <li><a href="">Amazon Alexa Voice Design Guide</a></li> <li><a href="">More Alexa Skill Recipes</a></li> </ul> <h2>Make Money by Creating Engaging Skills Customers Love</h2> <p>When you create delightful skills with compelling content, customers win. You can make money through Alexa skills using <a href="">in-skill purchasing</a> or <a href="">Amazon Pay for Alexa Skills</a>. You can also make money for eligible skills that drive some of the highest customer engagement with <a href="">Alexa Developer Rewards</a>. Learn more about how you can <a href="">make money with Alexa skills</a>.</p> /blogs/alexa/post/0c8ecdd9-2570-4131-8988-e9a618e6895d/echo-buttons-contest-offers-25-000-in-prizes-for-best-game-skill Echo Buttons Contest Offers $25,000 in Prizes for Best Game Skill Karen Yue 2018-09-14T21:10:11+00:00 2018-09-14T21:10:11+00:00 <p><strong><a href="" target="_blank"><img alt="Echo Buttons Game Skills Contest" src="" style="height:480px; width:1908px" /></a></strong></p> <p>We’re thrilled to announce the <a href="" target="_blank">Echo Buttons Game Skills Contest</a>, an opportunity for US developers to publish a game skill for Echo Buttons for a chance to win their share of Gift Cards* and prizes totaling $25,000. With Echo Buttons, developers can build even more engaging gaming experiences that bring families and friends together.
Each button illuminates and can be pressed to trigger a variety of play experiences powered by Alexa. The grand prize winner will take home a $5,000 Gift Card&nbsp;and will get their skill promoted by Amazon.</p> <p>Here is the full prize breakdown:</p> <ul> <li><strong>Grand Prize - Best Echo Buttons Game:</strong> One grand prize winner will receive a $5,000 Gift Card.</li> <li><strong>Finalists - Best Echo Buttons Game: </strong>Four finalist winners will each receive a $2,500 Gift Card.</li> <li><strong>Best Echo Buttons Kids Game:</strong>&nbsp;One winner will receive a $2,000 Gift Card. To be eligible, you must indicate that your skill will be directed to children under the age of 13 before you submit.<a href="" target="_blank"> Learn more.</a></li> <li><strong>Best Echo Buttons Action Game:</strong>&nbsp;One winner will receive a $1,500 Gift Card.</li> <li><strong>Best Echo Buttons Trivia Game:</strong>&nbsp;One&nbsp;winner will receive a $1,500 Gift Card.</li> <li><strong>First 150 Echo Buttons Games Published</strong>: Receive a 2-pack&nbsp;of Echo Buttons, with a bonus $100 Gift Card for the first 25&nbsp;games published.</li> </ul> <p>These prizes are to be split among team members, or hoarded if the winning skill was built by just one person. You have 10&nbsp;weeks to build and submit your skill for certification and provide your submission package by November 25. On December 19, we will announce the grand prize, finalist, and bonus prize winners.&nbsp;For the official contest rules, please see <a href="" target="_blank">here</a>.</p> <h2>What Are We Looking For?</h2> <p>The Echo Buttons Game Skills Contest invites developers to build an Echo Buttons game skill that educates and entertains Alexa customers.
We’re looking for skills that bring friends and family back for game night.</p> <p>Your skill will be judged on:</p> <ul> <li>Quality, creativity, and originality of your idea</li> <li>Implementation of your idea, including ease of use and whether it performs as expected</li> <li>Potential impact, including its ability to educate and entertain with Echo Buttons</li> <li>Accompanying submission package, including the skill description and ID, video demo, and other materials</li> </ul> <h2>Need Inspiration?</h2> <p>Explore some of the most engaging Echo Buttons game skills to date, including: <a href="">Bandit Buttons</a>, <a href="">Hanagram</a>, <a href="">Trivial Pursuit Tap</a>, and <a href="">more</a>.</p> <h2>Start Today!</h2> <p><a href="">Register for the Echo Buttons Game Skills Contest now</a>. We look forward to seeing what educational, entertaining, and engaging skills you build for Echo Buttons!</p> <p><em>*The <a href="">Amazon.com</a> Gift Card may only be used for purchases of eligible goods on Amazon.com or certain of its affiliated websites. The Amazon.com Gift Card cannot be redeemed for the purchase of another Amazon.com Gift Card. Except as required by law, the Amazon.com Gift Card cannot be transferred for value or redeemed for cash. To redeem or view an Amazon.com Gift Card balance, visit “Your Account” on Amazon.com. Amazon is not responsible if any Amazon.com Gift Card is lost, stolen, destroyed or used without permission. If the Amazon.com Gift Card is lost or stolen, it will not be replaced. See <a href="">Amazon.com</a> for complete terms and conditions. The Amazon.com Gift Card is issued by ACI Gift Cards LLC, a Washington limited liability company.
No expiration date or service fees.</em></p> /blogs/alexa/post/b5dafee1-1559-45cf-a82f-f1952eeb5c76/5-tips-for-using-intent-history-to-enhance-your-alexa-skill 5 Tips for Using Intent History to Enhance Your Alexa Skill Jennifer King 2018-09-14T14:00:00+00:00 2018-09-14T14:58:58+00:00 <p><img alt="" src="" style="height:240px; width:954px" /></p> <p><a href="">Intent History</a> is a powerful tool that introduces a new dimension of insight into how customers are engaging with your Alexa skill. If 10 or more customers have interacted with your skill in a single day, Intent History will provide aggregated and anonymized details about what customers have said and how these requests were handled by your skill’s interaction model. Not only does this information help you understand your customer base and the things they most frequently say to your skill, but you can also use Intent History insights to improve your skill’s interaction model and provide even better user experiences.</p> <p>You can navigate to your skill’s Intent History under the Build tab in the Alexa Developer Console.
The example below shows the type of data you might see for a custom smart home skill that lets customers control the light, temperature, and security settings in their homes.</p> <p><img alt="" src="" style="height:562px; width:1062px" /></p> <p>Each column provides information about the most frequent requests made to this skill:</p> <ul> <li>UTTERANCE provides a list of utterances customers have said while interacting with your skill</li> <li>CONFIDENCE indicates the degree of confidence with which an utterance was matched to an intent</li> <li>RESOLVED INTENT shows the intent in your skill’s interaction model that an utterance was matched to</li> <li>RESOLVED SLOTS lists slots that were filled by the utterance</li> <li>DIALOG ACT indicates whether the utterance was part of a dialog such as slot elicitation, slot confirmation, or intent confirmation</li> <li>INTERACTION TYPE indicates whether the utterance was a modal or one-shot request (Modal requests are said after the skill has been opened. One-shot requests open the skill and make a request in the same utterance)</li> </ul> <p>In the image above, we can see that customers have said “turn on the upstairs lights” to this home automation skill. To the right, we can see this utterance was matched with HIGH confidence to LightsOnIntent. Under the RESOLVED SLOTS column, it shows that the “Location” slot was filled by this utterance. The empty field under DIALOG ACT indicates this utterance was not part of a dialog. The final column, INTERACTION TYPE, shows that this utterance was interpreted as a Modal request. Find more information about the data available in your Intent History <a href="">here</a>.</p> <p>In addition to the Intent History <a href="">technical documentation</a>, here are five tips for using the UTTERANCE, CONFIDENCE, and RESOLVED INTENT fields to improve your skill’s interaction model.</p> <h2>1.
Identify Actionable Requests Missing from Your Skill’s Interaction Model</h2> <p>Before publishing a skill, you should generate robust sample utterances to support different ways customers might engage with your skill. However, it is not always possible to predict the different ways customers might phrase their commands while talking to your skill. You can use Intent History to identify phrases that are not yet present in the skill’s interaction model but would be beneficial to add.</p> <p>To get started, look for utterances in your Intent History that were matched to incorrect intents, or appear to have been matched with only LOW or MEDIUM confidence. It is likely that these utterances are missing from your skill’s interaction model. Compare these utterances to the sample utterances in your skill’s interaction model to confirm whether or not they are missing. Consider adding any missing utterances to the relevant intents in your skill. This will help ensure they are matched to the proper intent in the future. For example, you can see the user utterance “dim the lights” was matched to LightsOnIntent with MEDIUM confidence:</p> <p><img alt="" src="" style="height:161px; width:1068px" /></p> <p>In order to fulfill the user’s request, the utterance “dim the lights” should have been routed to the skill’s TurnLightsDownIntent. However, this utterance was not originally included as a sample utterance for that intent. To ensure “dim the lights” is matched to the proper intent with HIGH confidence in the future, this utterance should be added as a sample utterance to TurnLightsDownIntent.</p> <p><strong>In addition to adding specific utterances that appear in your history, consider adding slight variations of those utterances for increased coverage</strong>. For instance, the smart home skill in the example above allows customers to specify which room they would like to adjust the light settings for. 
The {Location} slot supports room values such as “dining room,” “master bedroom,” etc. Because we see “dim the lights” in the skill’s Intent History, we can predict customers will say things like “dim the dining room lights” or “dim the lights in the master bedroom.” To provide coverage for these variants, add utterances such as the following to TurnLightsDownIntent:</p> <p style="margin-left:.5in; margin-right:0in"><em>dim the lights</em></p> <p style="margin-left:.5in; margin-right:0in"><em>dim the {Location} lights</em></p> <p style="margin-left:.5in; margin-right:0in"><em>dim the lights in {Location}</em></p> <p>Note that some of the utterances in your Intent History may prompt broader considerations about your skill’s conversational design. For example, we want to consider the best way for our skill to respond to a request like “dim the lights,” which does not contain a {Location} value. In this case, we might want to implement a feature like Slot Elicitation to prompt customers about which<em> </em>room they want to dim the lights in. More information about Slot Elicitation is available <a href="">here</a>.</p> <h2>2. Identify Carrier Phrase Patterns That Can Be Added To All Intents</h2> <p><em>Carrier phrases</em> are short, generic phrases that customers often say at the beginning or end of their requests to Alexa. Typically, carrier phrases do not impact the overall meaning of a request, but are said naturally, as customers speak to Alexa in casual, conversational ways. They include phrases like, “can you,” “tell me,” or “please,” which might precede a wide range of requests made to custom skills.</p> <p>Use Intent History to identify carrier phrases that customers are saying while engaging with your skill. 
For instance, in the example below, you can see that an utterance starting with “can you” is being matched to AMAZON.FallbackIntent:</p> <p><img alt="" src="" style="height:307px; width:1104px" /></p> <p>Because the utterance “can you raise the temperature three degrees” does not have a matching sample utterance in the skill’s model, it is incorrectly being matched to AMAZON.FallbackIntent. As we discussed in tip 1, we want to improve accuracy for this particular request by adding the matching sample utterance to the relevant intent. In this case, we want to add the following utterance to RaiseTemperatureIntent:</p> <p style="margin-left:.5in"><em>can you raise the temperature {Number} degrees</em></p> <p>However, if this is the only sample utterance in the skill’s interaction model that begins with the phrase “can you,” <em>all </em>requests that start with “can you” will likely be matched to RaiseTemperatureIntent. For instance, the utterance “can you turn the lights on” may be incorrectly matched to RaiseTemperatureIntent. To avoid all “can you” utterances being matched to RaiseTemperatureIntent, we want to add the pattern of “<em>can you + &lt;request&gt;”</em> to all intents in this skill that could support this pattern. For example, we might want to add the following sample utterances to these intents:</p> <table border="1" cellpadding="1" cellspacing="1" style="height:157px; width:611px"> <thead> <tr> <th scope="col">Intent</th> <th scope="col">Sample Utterance</th> </tr> </thead> <tbody> <tr> <td><em>RaiseTemperatureIntent</em></td> <td>can you raise the temperature {Number} degrees</td> </tr> <tr> <td><em>LowerTemperatureIntent</em></td> <td>can you lower the temperature {Number} degrees</td> </tr> <tr> <td><em>LightsOnIntent</em></td> <td>can you turn the lights on</td> </tr> <tr> <td><em>LightsOffIntent</em></td> <td>can you turn the lights off</td> </tr> </tbody> </table> <h2>3. 
Consider Variations in Human Speech to Interpret Confusing Utterances</h2> <p>The utterances in your Intent History rely on <a href="">automatic speech recognition</a> to convert spoken words into text and enable Alexa to respond. Although Alexa’s ability to recognize human speech is always improving, utterances in your Intent History may sometimes contain misheard words, improperly tokenized forms, or disfluencies in user speech.</p> <p>Do not incorporate utterances <em>with recognition errors</em> into your interaction model as they appear in your Intent History. Adding utterances containing these errors will increase the likelihood of those errors being recognized in the future, degrading the overall accuracy of your model over time.</p> <p>Instead of including utterances in the exact form they appear in your history, use your judgment based on how people speak and what your skill can do to interpret what your customers intended to say. Add the utterances as customers likely meant to say them, not necessarily as they appear in your history. For instance, the example below shows that the utterance “can you turn the lights on the bedroom” went to AMAZON.FallbackIntent.</p> <p><img alt="" src="" /></p> <p>Based on our intuitions of how people speak, we know it’s likely that the customer actually intended to say “can you turn the lights on <em>in</em> the bedroom.” This is the form we want to add as a sample utterance to LightsOnIntent in order to improve the accuracy of this request in the future.
Note that simply adding “can you turn the lights on the bedroom” as it appears in the Intent History will actually make the model more likely to recognize the utterance incorrectly without “in.”</p> <table border="1" cellpadding="1" cellspacing="1" style="height:87px; width:717px"> <thead> <tr> <th scope="col">Utterance in Intent History</th> <th scope="col">Utterance Said by User</th> <th scope="col">Sample Utterance to Add</th> </tr> </thead> <tbody> <tr> <td>can you turn the lights on the bedroom</td> <td>can you turn the lights on <em>in</em> the bedroom</td> <td>can you turn the lights on <em>in</em> the {Location}</td> </tr> </tbody> </table> <h2>4. Avoid Adding “Out-of-Domain Requests” to Your Interaction Model</h2> <p>Sometimes requests that were not intended for your skill will appear in your Intent History. For instance, customers might say “volume up” without realizing that device volume controls cannot be accessed from within a skill. Instead of building out your model to accommodate all non-skill-directed requests in your history, ensure your skill has a graceful backend response for these types of requests when they are routed to <a href="">FallbackIntent</a> or other catchall intents.</p> <h2>5. Prioritize Higher-Frequency Utterances</h2> <p>Your Intent History does not reveal the exact number of times each utterance was said to your skill. However, utterances appearing at the top of your history occur with higher frequency. Focus on improving the experience for these utterances since they represent more common patterns that will have the highest impact for customers of your skill.
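The graceful catch-all response recommended in tip 4 can be sketched as a simple router over the incoming request JSON. This is illustrative only: the helper names and response wording are our own, and the intent names echo the smart-home examples above.

```python
def speak(text):
    """Minimal plain-text Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }

def handle_request(request):
    """Route an incoming Alexa request dict; out-of-domain utterances that
    land on AMAZON.FallbackIntent get a helpful, non-dead-end reply."""
    if request.get("type") != "IntentRequest":
        return speak("Welcome! You can say, turn the lights on.")
    intent = request["intent"]["name"]
    if intent == "AMAZON.FallbackIntent":
        # Don't just apologize: remind the customer what the skill CAN do.
        return speak("Sorry, I can't help with that. "
                     "I can control your lights and thermostat. "
                     "For example, say: dim the living room lights.")
    # Placeholder for the skill's real intent handlers.
    return speak(f"Handling {intent}.")
```

Keeping the session open and suggesting a valid request turns an out-of-domain miss into a recoverable moment instead of a dead end.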
Note that utterances appearing at the bottom of the list may have only been said a few times to your skill, and it may not be necessary to modify your interaction model to support less frequent utterances.</p> <p><img alt="" src="" /></p> <p style="margin-left:0in">By following these tips, you can use your Intent History to improve the accuracy of your skill’s interaction model and provide a better experience for your customers. After adding new sample utterances to your skill, remember to submit your updated model for certification. Once the updates are approved, your new model will be live.</p> <h2 style="margin-left:0in">Related Content</h2> <p style="margin-left:0in">Check out the following links for more information on how to use Intent History:</p> <ul> <li><a href="">Discover How Customers Engage With Your Alexa Skill Using Intent History</a></li> <li><a href="">Review the Intent History for a Custom Skill</a></li> <li><a href="">Intent History and AMAZON.FallbackIntent</a></li> </ul> /blogs/alexa/post/2a32d792-d471-4136-8262-79962a2b4d72/cpu-memory-and-storage-for-alexa-built-in-devices Sizing Up CPU, Memory, and Storage for Your Alexa Built-in Device Ted Karczewski 2018-09-13T18:00:00+00:00 2018-09-13T18:00:00+00:00 <p><img alt="" src="" /></p> <p>In this blog post, we provide examples of existing AVS device solutions that can be used as a guide for sizing up CPU, memory, and storage for a headless voice-forward device with microphone(s) and speaker(s).</p> <p><img alt="" src="" /></p> <p>Alexa offers your customers a new way to interface with technology – a convenient UI that enables them to plan their day, stream media, and access news and information. 
If you’re planning on building a device with the <a href="" target="_blank">Alexa Voice Service (AVS)</a>, you’ll want to make sure you have the right amount of central processing unit (CPU) power, memory, and flash storage to ensure your product brings a delightful hands-free Alexa experience to your customers.</p> <p>In this blog post, we provide examples of existing AVS device solutions that can be used as a guide for sizing up CPU, memory, and storage for a headless voice-forward device with microphone(s) and speaker(s). Please note this blog does not cover CPU or memory requirements for screen-based devices, tap-to-talk Alexa implementations, smart home use cases, or <a href="" target="_blank">Alexa Calling and Messaging</a>.</p> <h2>Sizing Up CPU</h2> <p>Sizing an embedded system processor is a combination of science and art. A common but outdated convention is to use <a href="" target="_blank">Dhrystone MIPS</a> (Million Instructions Per Second), or DMIPS, as a measure of processor performance relative to the 1970s-era DEC VAX 11/780 minicomputer. DMIPS are generally reported as <a href="" target="_blank">DMIPS/MHz</a>, the typical MIPS of a processor at a given clock speed. The Dhrystone benchmark suffers from several shortcomings: performance metrics can vary considerably for the same hardware because of different compilers, compiler optimization settings that optimize away large portions of the test code, and <a href="" target="_blank">wait-state delays</a> for reading from memory. Other benchmarks suffer similarly.
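As a quick illustration of the DMIPS/MHz convention described above (the ratings below are made-up round numbers for the sake of arithmetic, not measurements of any real part):

```python
def dmips(dmips_per_mhz, clock_mhz):
    """Convert a DMIPS/MHz rating into total DMIPS at a given clock speed."""
    return dmips_per_mhz * clock_mhz

# A hypothetical core rated at 2.0 DMIPS/MHz running at 1 GHz (1000 MHz):
print(dmips(2.0, 1000))  # 2000.0 total DMIPS
# The same rating at a 500 MHz clock delivers half the throughput:
print(dmips(2.0, 500))   # 1000.0 total DMIPS
```

The arithmetic is trivial by design; the caveat in the post is that the rating itself is what varies with compiler, optimization settings, and memory wait states.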
At the end of the day, your real-world application is the final judge of actual performance.</p> <h2>CPU and Memory Work Together</h2> <p>An Alexa client application requires host processor cycles for tasks such as wake word detection, data compression and decompression. It also requires memory to buffer outbound and inbound audio streams for Text-To-Speech (TTS) and music playback. Compiler <a href="" target="_blank">options for code size optimization</a> and techniques for <a href="" target="_blank">code compaction</a>, which generate smaller executables that fit more easily in the limited memory of embedded systems, are commonly used to keep costs to a minimum. These constraints impose a need to consider both CPU and memory when developing and optimizing embedded systems software.</p> <p>Programming styles on large versus small computer systems can also vary and affect required processing power and memory. The adage that compute cycles are cheaper than human programming cycles generally applies to large computer systems but does not necessarily translate well to small embedded systems. While it’s true that <a href="" target="_blank">System on Chip</a> (SoC) or <a href="" target="_blank">System On Module</a> (SOM) capabilities continue to increase while costs fall, competitive markets and tight margins impose a need for close scrutiny of the overall cost of the system, especially when millions of units are being produced. Techniques such as <a href="" target="_blank">code profiling</a> help isolate the portions of a program that use more CPU or memory. Focusing optimization on these areas is a first step in reducing the overhead of software components and ultimately enabling a lower-cost hardware solution.
The challenge lies in knowing all of the contributors. Another challenge is knowing whether other components running on the system, some periodic, were also considered. Software involving complex and unpredictable interactions makes this even more challenging.</p> <p>Headroom provides a measurement of what’s left, given all else. Measuring the headroom of unused processor cycles, memory, and storage accounts for all components running on the system. Also, by starting with the <a href="" target="_blank">AVS Device SDK sample application (SampleApp)</a> and measuring the associated headroom, you’ll be able to determine how much is left for your application’s features and customizations. Available tools on Linux for measuring CPU, memory, and storage include top, df, and inspection of /proc/meminfo. Be sure to subtract out the CPU and memory usage of top itself when using it.</p> <h2>Sizing It All Up</h2> <p>The amount of CPU, memory, and storage can vary substantially for different processor architectures and operating systems. Optimization techniques play a huge role in reducing required system resource capacity. Table 1 below shows examples of processor, memory, and flash storage headroom values in Alexa applications for Alexa conversation (voice responses, Flash Briefings, weather) and streaming media use cases. The vendor column indicates whether the device originates with a processor vendor (V), <a href="" target="_blank">AVS Systems Integrator</a> (SI), or an <a href="" target="_blank">AVS Development Kit</a> (Dev Kit).</p> <p style="text-align:center"><img alt="" src="" /></p> <p style="text-align:center"><strong>Table 1 - Headroom on Alexa Built-in Devices</strong></p> <p>Each solution has varying degrees of optimization as illustrated in the Level of Optimization (LoO) column.
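The memory side of the Linux headroom inspection mentioned above (top, df, /proc/meminfo) can be sketched in a few lines of Python. On an actual device the text would come straight from `open("/proc/meminfo").read()`; the abbreviated sample and function name below are our own.

```python
def mem_headroom_kb(meminfo_text):
    """Parse /proc/meminfo-style text and report memory headroom in kB.
    Prefers MemAvailable (present on kernels >= 3.14, and a better estimate
    of usable memory); falls back to MemFree on older kernels."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return fields.get("MemAvailable", fields.get("MemFree", 0))

# Abbreviated sample; on a device, read the real /proc/meminfo instead.
sample = """MemTotal:         512000 kB
MemFree:           80000 kB
MemAvailable:     210000 kB"""
print(mem_headroom_kb(sample))  # 210000
```

Measuring this with the SampleApp running gives the headroom left for your own features, as described above.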
It also shows, for illustration purposes, an example of a highly optimized solution on a considerably less powerful ARM Cortex-R4 microcontroller, where the Alexa wake word engine runs exclusively on the digital signal processor front-end of the device to offload the host processor.</p> <p>Table 2 and Figure 1 below illustrate how effectively the various solutions utilize the host processor and memory.</p> <p style="text-align:center"><img alt="" src="" /></p> <p style="text-align:center"><strong>Table 2 - Normalized Headroom on Alexa Built-in&nbsp;Devices</strong></p> <p>The data for each of the processors were normalized to a common frequency of 1GHz MCPS and 512MB RAM for comparison. A gain multiplier was introduced to factor in a hypothetical gain that could be achieved by exchanging machine cycles for larger RAM utilization. The low-end ARM Cortex-R4 was left out to limit the comparison to solutions that implement the Alexa wake word on the host processor.</p> <p style="text-align:center"><img alt="" src="" /></p> <p style="text-align:center"><strong>Table 3 - Effective Cycle Usage of Alexa Built-in&nbsp;Devices</strong></p> <p>The data demonstrates that solutions provided by AVS Systems Integrators generally make the most effective use of system resources on the device, thereby enabling a lower-cost solution and lower mass-production costs. Note that the data may vary over time and are only a snapshot of present Systems Integrators’ performance.
Working with Systems Integrators also provides the benefits of acoustic expertise, Audio Front-End (AFE), pre-validation and manufacturing testing support, and OTA updates for new features and security, as well as the peace of mind of knowing you’ll have a device that passes the high-quality bar.</p> <p>If you plan on rolling out your own device with Alexa built-in and have the needed in-house expertise and resources, please review the provided LoO data to determine the solution architecture that best fits your product’s needs and the capabilities of your developer resources. Be sure to leave a 20%+ margin in available CPU after considering the Alexa client and your product’s software to incorporate future Alexa features and functionality. For Cortex-A7 and more powerful processors, be sure to use at least 512 MB of flash for more robust Over-The-Air (OTA) updates, as the extra storage will allow for an active and inactive set of partitions with safe fallback in the event of an incomplete update.</p> <p>Cost and the need for low-power voice-forward consumer devices are the driving forces for using lower-powered processors and less memory. However, this must be balanced with time-to-market and the capability to support new Alexa features, especially for the first release of a novel product. Users expect feature parity across voice-forward devices. The good news is that working with Systems Integrators can get your product to market faster and lower the overall cost with a more effective solution.</p> <h2>New to AVS?</h2> <p>AVS makes it easy to integrate Alexa directly into your products and bring voice-forward experiences to customers. Through AVS, you can add a new natural user interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills.
<a href="" target="_blank">Get started with AVS</a>.</p> /blogs/alexa/post/4ac489b5-8635-48ac-984c-bdfbb9044191/avs-sis-for-set-top-box AVS Introduces the First Systems Integrator Solutions for the Set-Top Box Rachel Bennett 2018-09-13T15:30:00+00:00 2018-09-13T18:19:42+00:00 <p><img alt="SIs_for_STB_blog_image(1).png" src="" /></p> <p>Today, we are introducing qualified solutions for set-top boxes (STBs) with Alexa built-in from <a href="" target="_blank">AVS Systems Integrators (SIs)</a> ARRIS, Cisco Infinite Video Platform, DiscVision, and Technicolor.&nbsp;</p> <p><img alt="SIs_for_STB_blog_image(1).png" src="" /></p> <p>Video content service providers and their consumers recognize the compelling and natural use cases for voice interaction with the set-top box. Today, we are introducing qualified solutions for set-top boxes (STBs) with Alexa built-in from <a href="" target="_blank">AVS Systems Integrators (SIs)</a> ARRIS, Cisco Infinite Video Platform, DiscVision, and Technicolor.&nbsp;</p> <p>The first solutions for the STB category, from four new AVS Systems Integrators, enable device makers to leverage more complete designs that are already pre-integrated with the <a href="" target="_blank">Alexa Voice Service</a>:</p> <ul> <li><strong>ARRIS </strong>supports service providers in enabling Alexa built-in solutions. As part of its residential portfolio, ARRIS’&nbsp;STBs, Wi-Fi extenders, and IoT solutions represent existing platforms for integrating Alexa to fulfill the consumer demand for smart voice service, connected home, and advanced IoT technologies.</li> <li><strong>Cisco Infinite Video Platform (IVP)</strong> is a cloud service for Pay-TV operators to secure, distribute, and monetize premium video experiences on all devices. IVP enables new IP, Hybrid IP, and Over-the-Top services with multi-screen experiences.
Cisco IVP’s cloud delivery makes it easy to keep subscribers up to date with the latest Alexa innovations.</li> <li><strong>DiscVision</strong> enables integration of Alexa into visual devices including TVs, STBs, and media players. DiscVision’s SDK is designed to reduce Alexa integration time and cost. It can be combined with DiscVision’s pre-compiled AVS library for target platforms, client systems, and white-labeled skills.</li> <li><strong>Technicolor</strong>'s latest generation of set-top boxes has Alexa built-in, enabling network service provider customers to meet consumer demand for a smarter home environment. The set-top box now becomes a hub delivering new features and capabilities that can be monetized by operators&nbsp;by leveraging the same infrastructure used to deliver entertainment experiences.</li> </ul> <p>“Set-top boxes provide a seamless way to integrate Alexa into the television experience in many homes,” said Priya Abani, Director of Amazon Alexa. “These new AVS systems integrators are expected to increase the speed at which developers can launch new video devices with Alexa built-in, and broaden the variety of devices customers can use to interact with Alexa.”</p> <p>To learn more about Systems Integrators with solutions qualified by Amazon, go to our <a href="" target="_blank">Systems Integrators page</a> on the AVS Developer Portal.</p> <h2>New to AVS?</h2> <p>AVS makes it easy to integrate Alexa directly into your products and bring voice-forward experiences to customers. Using the Alexa Voice Service, you can add a new natural voice interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills.
<a href="" target="_blank">Get Started</a>.</p> /blogs/alexa/post/7da88ba2-8091-460b-a034-a2d3257c0106/with-in-skill-purchasing-gal-shenar-sets-his-growing-voice-business-up-for-long-term-success With In-Skill Purchasing, Gal Shenar Sets His Growing Voice Business Up for Long-Term Success Jennifer King 2018-09-13T14:50:21+00:00 2018-09-13T14:50:21+00:00 <p><img alt="" src="" /></p> <p>With over 30 published skills and 200,000 active monthly users, there’s no doubt Gal Shenar has <a href="">cracked the code</a> for building highly engaging Alexa skills. When Amazon introduced <a href="">in-skill purchasing</a>, Shenar saw even greater opportunity to monetize his skills.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>With over 30 published skills and 200,000 active monthly users, there’s no doubt Gal Shenar has <a href="">cracked the code</a> for building highly engaging Alexa skills. When Amazon introduced <a href="">in-skill purchasing</a>, Shenar saw even greater opportunity to monetize his skills.</p> <p>“I wanted to explore more business models and really see what's possible on Alexa,” says Shenar. “In-skill purchasing provides a seamless process that expands the possibilities of voice, what I can provide to my customers, and what I can earn.”</p> <p>Shenar has added in-skill purchasing for premium content to two of his most popular game skills, <a href="" target="_blank">Escape the Room</a> and <a href="" target="_blank">Escape the Airplane</a>, to deliver more engaging experiences to customers. So far, those customers are more than happy to pay for his optional “hint” packs. For Escape the Airplane, Shenar is seeing conversion rates as high as 34%. And 8% of Escape the Room players who are offered the premium content have opted in to purchase the packs.</p> <p>“Monetization drives growth in any industry,” says Shenar. 
“It’s great to have in-skill purchasing because I'm able to have complete control over my skills, my business, and how my investment brings in money.”</p> <h2>Enhancing the Voice-First Gaming Experience</h2> <p>As a web developer by day, Shenar was initially intrigued by the possibilities with Alexa. His curiosity paid off quickly: he received $25,000 in <a href="">Alexa Developer Rewards</a> within six months of publishing his first skill. The rewards allowed him to invest more time building more complex skills and to hire others to create even higher quality content and audio.</p> <p>Now Shenar has a thriving voice business called <a href="" target="_blank">Stoked Skills</a> and a growing catalog of skills that runs the gamut of popular categories, from surf weather reporting to meditation and workout skills.</p> <p>But Shenar knew the only way to build a sustainable voice business was to continually offer customers more in terms of content and engaging new features. With Escape the Room and Escape the Airplane, which are voice-first variations of popular “escape rooms” around the world, the skills detect if players get stuck on a puzzle and then offer “hint” packs for purchase.</p> <p>“You can play the game for free, but if you get stuck, you can get the premium, tailored experience of receiving contextual hints during the game,” says Shenar. “You can ask for a hint to help you along without ruining the puzzles you're trying to solve.”</p> <p>Both skills have a 4.6-star rating in the Alexa Skills Store. By offering premium content for highly engaging skills like these, Shenar gives customers more of what they love.&nbsp;</p> <p>“I'm really excited to keep seeing all the positive feedback,” says Shenar.
“The conversion rates that I'm seeing on my skills of people who purchase after hearing the upsell are much higher than what you’d expect on mobile.”</p> <h2>Offering Premium Content without the “Hard Sell”</h2> <p>A well-known marketing mantra is “no one likes to be sold.” The same holds true when offering your in-skill products, according to Shenar. To be effective, the offer has to be a welcome one, and one that doesn’t mar the customer experience with a hard sales pitch. That’s important to Shenar, whose Escape the Room skill has over 1,000 reviews and a 4.6-star rating.</p> <p>“I've seen reviews of parents playing with their families and couples who have a date night and play the games together,” says Shenar. “Seeing reviews like this has kept me excited about offering more content and features.”</p> <p>Rather than trying to push a sale on customers, Shenar instead approaches it as offering them a premium experience. While the hint packs enhance the skill, customers can continue to play and enjoy the games even if they don’t purchase them.</p> <p>Shenar suggests a few tips for making your in-skill purchasing experience a delightful one. First, make sure your skill is one that customers love and want to spend time with. Understand your audience and how it reacts to your skill, then imagine what those customers want more of and would be willing to buy.</p> <p>He also suggests offering a sample of the premium content, such as a complimentary “hint” for Escape the Room, before customers make the purchase. When customers see they can get more of what they already love, they’re more likely to be happy to pay for the premium content.&nbsp;</p> <p>“People are more likely to spend money on your skill if they're coming back to it already, so start with a skill that people can incorporate into their daily lives,” says Shenar.
“When you can make the experience better for a couple of dollars, people might be willing to spend money on that.”</p> <h2>Embracing the Many Opportunities That Come with Monetization</h2> <p>According to Shenar, in-skill purchasing gives him the ability to deliver more engaging experiences to customers and provide an additional revenue stream for his voice business. But with that revenue comes an added benefit: control over how much and how fast your business can grow.</p> <p>“Monetization gives me control over how much money I can bring in,” says Shenar. “I hope to use in-skill purchasing and other Alexa monetization capabilities to keep growing my business and reaching more customers.”</p> <h2>Related Content</h2> <ul> <li><a href="">Sell Premium Content to Enrich Your Skill Experience</a></li> <li><a href="">Earn Money with Alexa Developer Rewards</a></li> <li><a href="">Guide: Make Money with Alexa Skills</a></li> <li><a href="">In-Skill Purchasing Takes Volley’s Thriving Voice Business to the Next Level</a></li> </ul> /blogs/alexa/post/d3a6a9f3-4cdd-4e2e-a7b9-107e459c42cb/introducing-the-newest-alexa-champions Introducing the Newest Alexa Champions Glenn Cameron 2018-09-12T22:57:31+00:00 2018-09-12T23:49:41+00:00 <p><img alt="" src="" /></p> <p>Today, we are excited to introduce 12 new developers to the Alexa Champions program, which is a recognition program that honors the most engaged developers and contributors to the Alexa community.</p> <p><img alt="" src="" style="height:451px; width:1501px" /></p> <p>Today, we are excited to introduce 12 new developers to the Alexa Champions program, which is a recognition program that honors the most engaged developers and contributors to the Alexa community. Champions are chosen by first being nominated by an Alexa team member. Then their contributions are stack ranked against other nominees, and the top nominees get invited to the program. 
Champions have the opportunity to participate in private beta programs, direct communication lines to Alexa evangelists, special opportunities for their skills to be featured, and more.</p> <p>It’s a privilege to reveal our newest group of Alexa Champions. Their contributions span three continents and collectively they have published innovative skills, developed open-source projects, and shared their knowledge around the world about how to build with Alexa. Visit the Champions gallery to read about their contributions to the voice community.</p> <h2>Meet the Newest Alexa Champions</h2> <p><a href="" target="_self"><strong><img alt="Adva Levin" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Adva Levin</strong></a> is a voice-first pioneer and head of Pretzel Labs, which is a voice studio that designs playful and educational Alexa skills for kids.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Heather Luna" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Heather Luna</strong></a> is a technology industry veteran and speaker. She uses her own experiences with Alexa to support, encourage, and engage both new and experienced developers.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Hidetaka Okamoto" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Hidetaka Okamoto</strong></a> is a developer, speaker, organizer, and Alexa group moderator in Japan. 
He has helped develop products that make cutting-edge use of AWS and Alexa.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Jess Williams" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Jess Williams</strong></a> is CEO and voice designer of Opearlo, a studio that produces Alexa skills in a wide range of categories. She is also a Voice Design Tutor for an online learning platform, and regularly speaks at conferences on her learnings working in voice.&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="John Gillilan" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>John Gillilan</strong></a> is a developer and creative producer with extensive experience in music and audio. He continues to explore where voice design intersects with digital culture.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Kesha Williams" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Kesha Williams</strong></a> is an award-winning software engineer and has mentored thousands of developers around the world. She is a lecturer, innovation team leader, and role model for women in voice design.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Liam Sorta" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Liam Sorta</strong></a> is the director of the development studio, which produces a VR game using Alexa as an interfacing tool. 
He also moderates a 6,000-strong online community for game developers.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Ralf Eggert" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Ralf Eggert</strong></a> is an experienced web developer and founder of Travello, an agency focused on building travel communities and web apps. Since 2017, his company has also been dedicated to voice design with Alexa.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Tim Kahle" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Tim Kahle</strong></a> is a digital media marketing expert and co-founder of 169 Labs, one of the first voice design agencies in Germany. He is also a voice-first conference organizer and podcast host.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Tom Hewitson" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Tom Hewitson</strong></a> is a conversation designer, speaker, and founder of voice games studio</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Tomoharu Ito" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Tomoharu Ito</strong></a> is a Japanese developer, event organizer, and mentor in the Netherlands. 
He is committed to promoting voice design using Alexa among the Japanese-language AWS community.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong><img alt="Travis Teague" src="" style="float:left; height:150px; margin-left:10px; margin-right:10px; width:150px" /></strong></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="" target="_self"><strong>Travis Teague</strong></a> is CTO of a Long-Range Low-Power IoT solutions provider. He is also a mentor, organizer, and moderator for the Alexa group in Houston, Texas.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>View all the current Alexa Champions in the <a href="" target="_self">Champion Gallery</a>.</p> <h2>Build Your First Skill, Get a T-Shirt</h2> <p>Bring your big idea to life with Alexa and earn perks through our&nbsp;<a href="">milestone-based developer promotion</a>. US developers, publish your first Alexa skill and earn a custom Alexa developer t-shirt. If you're not in the US, check out our promotions in&nbsp;<a href="">Canada</a>, the&nbsp;<a href="" target="_blank">UK</a>,&nbsp;<a href="" target="_blank">Germany</a>,&nbsp;<a href="">Japan</a>,&nbsp;<a href="">France</a>,&nbsp;<a href="">Australia</a>, and&nbsp;<a href="" target="_blank">India</a>.&nbsp;<a href="">Learn more</a>&nbsp;about our promotion and start building today.</p> /blogs/alexa/post/f041b41e-6fce-49dc-903e-2f9d4e8abbfd/alexa-champion-interview-ito New Alexa Champion: An Interview with Tomoharu Ito Motoko Onitsuka 2018-09-12T21:00:00+00:00 2018-09-13T04:08:21+00:00 <p><img alt="" src="" style="height:480px; width:1910px" /></p> <p>This time, <a href="">12 new champions were announced</a> worldwide for the <a href="">Alexa Champions</a> program, which honors developers who have made major contributions to the Alexa developer community, and for the first time two of them are from Japan. We spoke with one of the champions, Tomoharu Ito, about his activities so far and his plans for the future.</p> <p><img alt="" src="" style="height:480px; width:1910px" /></p> <p>This time, <a href="">12 new champions were announced</a> worldwide for the <a href="">Alexa Champions</a> program, which honors developers who have made major contributions to the Alexa developer community, and for the first time two of them are from Japan. Here we present interviews with each champion about their activities so far and what lies ahead.</p>
<h2>Interview with Tomoharu Ito</h2> <p>― How did you first come across Amazon Alexa?</p> <p>About two and a half years ago, I took over the Kobe chapter of JAWS-UG, the AWS user community in Japan, and chose AWS Lambda as the theme of our first study session. During that session, someone told me, "AWS Lambda is just functions. If you're going to use Lambda, there's something more interesting," and introduced me to Alexa. It hadn't launched in Japan yet, but in the AWS Virginia region you could already use AWS Lambda as an event trigger for Alexa. There was also a GitHub repository for building a prototype Alexa device (AVS) on a Raspberry Pi, so I worked through it, built one myself, and was completely hooked.</p> <p>― What did you find so interesting?</p> <p>At first, Alexa wouldn't understand a thing I said. I suspect my Japanese-accented English was to blame, but even so, sometimes she answered properly and sometimes she came back with "I don't understand." That complexity, or rather that ambiguity, was fascinating. I had never communicated with a device in such a loose, casual way before. And because it's AI, it naturally gets smarter as it learns; it was fun to notice its English intonation changing from day to day. As a developer, I was struck by how easily you could build a device yourself, and above all I was shocked by how human the interface felt. I came to hold a groundless conviction that this was going to be genuinely interesting, and I threw myself into it, thinking society would be more fun if it spread.</p> <p>― So even the moments when it failed to answer properly were part of the appeal?</p> <p>As a programmer, you basically spend your career fighting to eliminate ambiguity, so my guiding value had always been that a system ultimately has to work close to 100% of the time. With voice, though, I realized there is a world where being wrong is fine, where it is simply expected. Voice is inherently ambiguous: you exchange words amid that ambiguity and still return an answer, and the accuracy keeps improving. Including the remarkable speed at which it evolves, to the point where you grow genuinely fond of Alexa, I felt an appeal unlike any earlier system: machines and humans drawing closer together.</p> <p>― In VUI (voice user interface) development, some parts are ambiguous and some must not be, right?</p> <p>I think a VUI is more context-dependent than conventional software. In a medical setting, for example, mistakes are absolutely unacceptable, so if you introduce a VUI there, you have to choose phrasing that is as direct as possible and conveys the right thing in a short utterance. Just as a nurse call button summons a nurse with a single press, you need both accuracy and agility. By contrast, there are situations that tolerate ambiguity, like small talk or listening to music. Chatting with Alexa and saying "play something that fits my mood" can stay ambiguous, and it may even be better to deliberately design in more conversational turns so people can talk more. Thinking about exchanges that come close to these human-to-human situations is great fun.</p> <p>― You have built two skills in the US and one in Japan so far. What has that development taught you?</p> <p>On one project I worked on, we got feedback that content that sounded perfectly natural in English felt sluggish when the same material was rendered in Japanese. That tuning is always a struggle. It depends heavily on the situation, and on human perception, so writing SSML for each language that stays faithful to the persona can be harder than the backend work. Japanese, by the nature of the language, inevitably runs long, and I feel it tends to be spoken more slowly and distinctly than English. But long exchanges with Alexa are tiring to listen to, so keeping a skill as casual as possible is a theme I carry through every project. I want to break away from stiff, formal Japanese, for instance by using friendly endings such as "-da yo" and "-ja nai?" instead of polite keigo where the situation allows.</p> <p>― Perhaps something like Alexa will even change how people communicate with each other in the future.</p> <p>I recently heard from a company that has started introducing Alexa that it may be energizing communication among employees. They placed an Alexa in each of two rooms, and employees clock in and out and manage tasks by speaking to it. As employees talk to Alexa, colleagues who happen to be nearby start talking to one another too, and the conversation takes off, or so they tell me. It's an interesting example of Alexa acting as a go-between for people.</p> <p>― What would you like to see realized in Alexa development going forward?</p> <p>Three things. The first is translation. Today, one device means one language, but imagine a device that answers people from different countries in each of their own languages. Last year I actually built a prototype AVS device on a Raspberry Pi that replies in English when you speak to it in Japanese; it felt like vaulting over the language barrier, and it was a wonderful experience. The second is the medical field: for example, Alexa accurately hearing and answering people who have trouble speaking because of illness or disability. If Alexa could learn to understand not only ordinary speech but also speech that is hard to catch, I think it would genuinely help communication. I hope for a world where people with visual or hearing impairments can more easily exercise their abilities through Alexa. The third is children and the elderly. For the elderly in particular, I recently read that AI speakers are effective as conversation partners. Beyond that, if people who have missed out on convenient services because PCs and gadgets were too hard to use could reach those services simply by voice, their lives would be richer; connecting them with hospitals and doctors might even become possible. For people who have been cut off from life online because they can't learn to operate devices, or who resist new technology, the interface becomes their own voice, so I think the potential is enormous.</p> <p>― You live in the Netherlands. How do you expect Alexa to be used there?</p> <p>In the Netherlands, I expect adoption to accelerate from the angle of operational efficiency. In IT services, the Dutch have a mindset of thoroughly removing manual labor in pursuit of efficiency. Echo devices haven't gone on sale here yet, but plenty of people bring them back from Germany, where they are already available, and magazines run smart speaker features, so people are clearly paying attention. Some companies are moving early to adopt voice, and you see the occasional mini hackathon.</p> <p>― What has your experience been working as a programmer in the Netherlands?</p> <p>My plan to move here began about three years ago, when my partner found information suggesting the Netherlands might be an easy country to relocate to. I was a company employee at the time and rather skeptical, but I later went freelance, and a trip to South Korea for AWS community activities made me think it didn't have to be Japan. After various twists and turns, I have lived in the Netherlands since July 2017. My clients are all Japanese companies, and the work itself has gone without a hitch; going forward, I'd like to add Dutch clients as well. I had long felt that working in a mix of countries and cultures, rather than only with other Japanese, makes synergy easier to find: jumble values together and new ideas emerge. Since moving, that feeling has only grown stronger. Through community activity, talking with people from many countries and holding events that cross borders, I have changed too, and I want to keep collaborating with all kinds of people.</p> <p>― How do you want to develop your community activities as an Alexa Champion?</p> <p>I'm currently involved in running three communities.</p> <p>In Japan, I founded <a href="">AAJUG (Amazon Alexa Japan User Group)</a>, a community for Alexa users and developers that brings the two together to energize Alexa.</p> <p>I also continue to run <a href="">JAWS-UG Kobe</a>, the community that gave me my start with Alexa. Ever since I discovered Alexa, we have kept taking it up as the theme of our study sessions, to the point that people sometimes call Kobe "the holy land of Alexa."</p> <p>And finally, there is the AWS community I newly founded in the Netherlands, the <a href="">JAWS-UG Netherlands Branch</a>. About 35 people came to the first meetup, and we held a second one in June. Rather than a community fixated on one region, I want activities that cross regional boundaries, such as a collaboration between the Kobe and Netherlands communities, so we can share technology that excites us.</p> <p>There are three communities, but when I took over JAWS-UG Kobe, the members and I settled on something like a founding principle: "share the things about AWS that excite us." That policy is the same in every community, and in the Netherlands too, I hope to keep spreading the excitement we have found with Alexa.</p> <p style="text-align:right">(Interview conducted: June 2018)</p> <p>Tomoharu Ito's <a href="">Alexa Champion page</a> (in English)</p> <p>Amazon Alexa Japan User Group (AAJUG): <a href=""></a></p> <p>JAWS-UG KOBE: <a href=""></a></p> <p>JAWS-UG Netherlands Branch: <a href=""></a></p>