<h1>Join the Alexa Team at AWS re:Invent 2019</h1> <p><em>June Lee, 2019-10-17</em></p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_reInvent-2019_954x240.png._CB450544124_.png" style="height:480px; width:1908px" /></p> <p>We’re excited to invite you to join the Alexa team at AWS re:Invent 2019, December 2-6 in Las Vegas, Nevada. AWS re:Invent is a learning conference hosted by Amazon Web Services (AWS) for the global cloud computing community. The event will feature keynote announcements, training and certification opportunities, access to more than 2,500 technical sessions, a partner expo, after-hours events, and so much more.</p> <h2>Meet Us at the Alexa Booth</h2> <p>Join us at the Alexa re:Invent booth located at the entrance of the Main Expo (Hall B) from Monday, December 2 to Thursday, December 5. Check out our newest Echo devices and learn what you can build with Alexa to help serve your customers more naturally with voice anywhere they go. 
You can also take this opportunity to meet our team and ask any questions.</p> <h2>More Than 50 Alexa Sessions and Workshops</h2> <p>This year, there will be <a href="https://www.portal.reinvent.awsevents.com/connect/search.ww?trk=null#loadSearch-searchPhrase=&amp;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;p=&amp;i(19577)=32821">more than 50 Alexa sessions</a> at re:Invent, including hands-on workshops, interactive chalk talks, and technical breakouts. Join sessions to dive deep into the technology across Alexa Skills Kit, Alexa Voice Service, and Alexa’s smart home capabilities, and learn how you can build compelling voice experiences. Session topics include designing great multimodal experiences, creating conversational voice interfaces, integrating in-skill purchasing, building Alexa-enabled devices for the connected world, and more. You’ll hear from senior leaders, technical evangelists, product team members, and engineers who will share the latest and greatest practices for building with voice.</p> <h2>What You’ll Learn</h2> <p>Wondering what you’ll learn in our <a href="https://www.portal.reinvent.awsevents.com/connect/search.ww?trk=direct,www.google.com#loadSearch-searchPhrase=&amp;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;sessionTypeID=2&amp;p=&amp;i(19577)=32821">breakout sessions</a>, <a href="https://www.portal.reinvent.awsevents.com/connect/search.ww?trk=direct,www.google.com#loadSearch-searchPhrase=&amp;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;sessionTypeID=2623&amp;p=&amp;i(19577)=32821">chalk talks</a>, and <a href="https://www.portal.reinvent.awsevents.com/connect/search.ww?trk=direct,www.google.com#loadSearch-searchPhrase=&amp;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;sessionTypeID=2523&amp;p=&amp;i(19577)=32821">hands-on workshops</a>? 
Here’s a preview of just a few of the sessions offered:</p> <ul> <li><strong>Alexa, What Can I Do Now?</strong><br /> Every year, the Alexa Skills Kit (ASK) grows in capabilities and features. In this state of the union, we discuss the latest trends in conversational artificial intelligence, highlight some of the most innovative skills, and provide an overview of everything that has been released in the past year for ASK.</li> <li><strong>Improving Customer Retention for Your Alexa Skill</strong><br /> Retaining your customers is both an art and a science. In this session, you will discover the mechanisms you can use to keep your customers coming back for more. You will have the opportunity to ask questions and discuss ideas with fellow skill developers.</li> <li><strong>Building Robots That Respond to Voice</strong><br /> Learn how you can control robots with your voice through the Alexa Skills Kit. In this workshop, we will live-code with the Robot Operating System (ROS) and AWS RoboMaker to build an intelligent robotic application that responds to our voice commands and navigates autonomously in a simulated environment.</li> <li><strong>How to Build Alexa Skills with AWS Database &amp; Storage Services</strong><br /> Learn how to leverage AWS database and storage services effectively in your Alexa skill.</li> </ul> <h2>See You at re:Invent</h2> <p>re:Invent is a great place to meet and interact with the Alexa team and a global community of builders. Visit the <a href="https://reinvent.awsevents.com/">AWS re:Invent</a> website to register now. 
Then, start building your event schedule by reviewing Alexa sessions in the event catalog and <a href="https://www.portal.reinvent.awsevents.com/connect/publicDashboard.ww">reserving a seat</a> in available sessions.</p> <p>We can’t wait to see you in Vegas!</p> <h1>The FEVER Data Set: What Doesn’t Kill It Will Make It Stronger</h1> <p><em>Larry Hardesty, 2019-10-17</em></p> <p>The open challenge for the <em>F</em>act <em>E</em>xtraction and <em>Ver</em>ification (FEVER) workshop at EMNLP involved devising adversarial examples that would stump fact verification systems trained on the FEVER data set.</p> <p><sup><em>Arpit Mittal cowrote this post with Christos Christodoulopoulos</em></sup></p> <p>This year at the Conference on Empirical Methods in Natural-Language Processing (EMNLP), we will cohost the <a href="http://fever.ai/workshop.html" target="_blank">Second Workshop</a> on Fact Extraction and Verification — or FEVER — which will explore techniques for automatically assessing the veracity of factual assertions online.</p> <p>Fact verification is an important part of Alexa’s question-answering service, enabling Alexa to validate the answers she provides and to justify those answers with evidence. 
The Alexa team’s interest in fact verification is widely shared, as is evidenced by a host of recent challenges, papers, and conferences — including the <a href="https://truthandtrustonline.com/" target="_blank">Truth and Trust Online</a> conference.</p> <p>The workshop originated from a public data set — the FEVER data set — that <a href="https://developer.amazon.com/blogs/alexa/post/786939bb-3fe9-4e64-8c2a-d9794315f5c2/amazon-and-university-of-sheffield-researchers-make-large-scale-fact-extraction-and-verification-dataset-publicly-available" target="_blank">we created</a> together with colleagues at the University of Sheffield. The data set contains 185,000 factual assertions, both true and false, which are correlated with Wikipedia excerpts that either substantiate or refute them.</p> <p>Like the first workshop, the second will feature invited talks from leaders in the field, papers on a range of topics related to fact verification, and presentations by contestants in an open, FEVER-based competition announced the previous spring.</p> <p>In the first FEVER competition, contestants used the FEVER data set to train machine learning systems to verify facts. The systems were evaluated according to their FEVER scores, which measure both the accuracy of their truth assessments and the quality of the supporting evidence they supply.&nbsp;</p> <p>This year’s FEVER competition was designed to help augment the FEVER data set through the well-studied machine learning technique of adversarial example generation. The technique has long been a staple of computer vision research but has recently gained ground in natural-language-processing research; Stanford University’s <a href="https://rajpurkar.github.io/SQuAD-explorer/" target="_blank">SQuAD dataset</a> is one prominent example.</p> <p>Contestants were invited to produce test cases — either algorithmically or manually — that would elicit mistaken responses from fact verification systems trained on FEVER data. 
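</p>

<p>The FEVER score that these systems are judged by can be sketched in a few lines. The sketch below is an illustration only, not the official scorer: a claim counts as correct when the predicted label matches the gold label and, for supported or refuted claims, the predicted evidence contains at least one complete gold evidence set. The data shapes are assumptions made for this sketch.</p>

```python
# Illustrative sketch of FEVER scoring (not the official scorer).
# A claim is scored correct only if the predicted label matches AND,
# for supported/refuted claims, the predicted evidence contains at
# least one complete gold evidence set.

def fever_correct(pred_label, pred_evidence, gold_label, gold_evidence_sets):
    if pred_label != gold_label:
        return False
    if gold_label == "NOT ENOUGH INFO":  # no evidence required for NEI claims
        return True
    predicted = set(pred_evidence)
    return any(set(gold_set).issubset(predicted) for gold_set in gold_evidence_sets)

def fever_score(predictions):
    """FEVER score = fraction of claims scored correct under the rule above."""
    return sum(fever_correct(*p) for p in predictions) / len(predictions)
```

<p>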
Our hope was that, by identifying characteristics of the error-inducing test cases, we would learn new ways to augment the FEVER data, so that the resulting systems would be both more accurate and more resilient.<br /> <br /> <img alt="Adversarial_example.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Adversarial_example.png._CB450497414_.png?t=true" style="display:block; height:246px; margin-left:auto; margin-right:auto; width:300px" /></p> <p style="text-align:center"><sup><em>Two examples of adversarial assertions designed to confound a system trained on an assertion (the </em>original refuted instance<em>) in the original FEVER data set, together with supporting evidence drawn from Wikipedia.</em></sup></p> <p>At the first FEVER workshop, <a href="http://fever.ai/2018/task.html" target="_blank">we reported</a> the performance of 23 teams that participated in the first challenge. The top four finishers allowed us to create versions of their systems that we could host online, so that participants in the second FEVER challenge could attack them at will.</p> <p>Since the first workshop, however, another 39 teams have submitted fact verification systems trained on FEVER data, pushing the top FEVER score from 64% <a href="https://competitions.codalab.org/competitions/18814#results" target="_blank">up to 70%</a>. Three of those teams also submitted hostable versions of their systems, bringing the total number of targets for the second challenge to seven. Following the taxonomy of the <a href="https://builditbreakit.org/" target="_blank">Build It, Break It, Fix It</a> contest model, we call the designers of target systems “Builders”.</p> <p>Three “Breaker” teams submitted adversarial examples. One of these — the Columbia University Natural-Language Processing group, or CUNLP — was also a Builder. 
CUNLP submitted 501 algorithmically generated adversarial examples; TMLab, from the Samsung R&amp;D Institute Poland, submitted 79 examples, most of which were algorithmically generated but a few of which were manual; and NbAuzDrLqg, from the University of Massachusetts Amherst Center for Intelligent Information Retrieval, submitted 102 manually generated examples.</p> <p>Only texts that look like valid assertions require verification, so we discounted adversarial examples if they were semantically or syntactically incoherent or if they could not be substantiated or refuted by Wikipedia data. On that basis, we created a weighted FEVER score called the resilience score, which we used to evaluate the Breakers’ submissions.</p> <p>We tested all three sets of adversarial examples — plus an in-house baseline consisting of 498 algorithmically generated examples — against all seven target systems. The average resilience of the Builder models was 28.5%, whereas their average FEVER score on the original data set was 58.3%. This demonstrates that the adversarial examples were indeed exposing omissions in the original data set.</p> <p>TMLab’s examples were the most potent, producing more errors per example than either of the others. They were generated using a <a href="https://arxiv.org/pdf/1910.00337.pdf" target="_blank">variation</a> of the <a href="https://openai.com/blog/better-language-models/" target="_blank">GPT-2</a> language model, which (like all language models) was designed to predict the next word in a sequence of words on the basis of those that preceded it.</p> <p>The CUNLP researchers used their successful adversarial examples as templates for generating additional training data. The idea was that if the model was re-trained on the type of data that tended to stump it, it would learn how to handle that data. CUNLP thus became not only a Builder and a Breaker but also our one “Fixer”. 
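</p>

<p>The resilience evaluation described above can be sketched as follows. This is a simplified reading, and the exact weighting used in the challenge may differ: adversarial examples judged incoherent or unverifiable are discounted, and a target system is scored only on the remaining valid attacks.</p>

```python
# Sketch of the resilience score (simplified): adversarial examples judged
# incoherent or unverifiable against Wikipedia are discounted, and the target
# ("Builder") system is scored only on the remaining valid examples.

def resilience(examples, system_is_correct):
    """examples: iterable of (claim, is_valid) pairs.
    system_is_correct: claim -> bool (did the target handle it correctly?)."""
    valid = [claim for claim, is_valid in examples if is_valid]
    if not valid:
        return 0.0  # no valid attacks to score against
    return sum(system_is_correct(c) for c in valid) / len(valid)
```

<p>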
After re-training, the CUNLP system became 11% more resilient to adversarial examples, and its FEVER score on the original task also increased, by 2%.</p> <p>In addition to presentations by Builders and Breakers, the workshop will also feature two oral paper presentations and 10 posters. The papers cover a range of topics: some are theoretical explorations of what it means to verify an assertion, drawing on work in areas such as stance detection, argumentation theory, and psychology; others are more-concrete experiments with natural-language-processing and search systems.</p> <p>The <a href="http://fever.ai/workshop.html" target="_blank">invited speakers</a> include William Wang of the University of California, Santa Barbara; Emine Yilmaz of University College London, an Amazon scholar; Hoifung Poon of Microsoft Research; Sameer Singh of the University of California, Irvine; and David Corney of Full Fact.</p> <p>The problem of fact verification is far from solved. That’s why we’re excited to be cohosting this Second Workshop and pleased to see the wide adoption of the FEVER data set and the FEVER score and the contributions they’re making to continuing progress in the field.</p> <p><em>Christos Christodoulopoulos is an applied scientist, and Arpit Mittal is a senior machine learning scientist, both in the Alexa Information Domain group.</em></p> <p><a href="https://developer.amazon.com/alexa/science" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Related</strong>:</p> <ul> <li><a href="http://fever.ai/index.html" target="_blank">Workshop web page</a></li> <li><a href="https://arxiv.org/pdf/1803.05355.pdf" target="_blank">Original FEVER paper</a></li> <li><a href="http://fever.ai/data.html" target="_blank">FEVER dataset</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/dfcb764c-3191-433c-ab9c-400ec37c0f5e/teaching-computers-to-answer-complex-questions" target="_blank">Teaching Computers to Answer Complex Questions</a></li> <li><a 
href="https://developer.amazon.com/blogs/alexa/post/82b2fb27-9f3f-4c02-9b3a-252b78dc992e/bringing-the-power-of-neural-networks-to-the-problem-of-search" target="_blank">Bringing the Power of Neural Networks to the Problem of Search</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/eaed4d64-c7af-4587-8241-7a006db9b19b/amazon-helps-launch-workshop-on-automatic-fact-verification" target="_blank">Amazon Helps Launch Workshop on Automatic Fact Verification</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/d1b3f12f-165d-41d4-a61b-cf36d17a8926/public-release-of-fever-dataset-quickly-begins-to-pay-dividends" target="_blank">Public Release of Fact-Checking Dataset Quickly Begins to Pay Dividends</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/786939bb-3fe9-4e64-8c2a-d9794315f5c2/amazon-and-university-of-sheffield-researchers-make-large-scale-fact-extraction-and-verification-dataset-publicly-available" target="_blank">Amazon and University of Sheffield Researchers Make Large-Scale Fact Extraction and Verification Dataset Publicly Available</a></li> </ul> <h1>New Alexa Skills Training Course: How to Design for In-Skill Purchasing</h1> <p><em>Ben Grossman, 2019-10-16</em></p> <p><a href="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_ISP_design-announcement_954x240.png._CB450217554_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_ISP_design-announcement_954x240.png._CB450217554_.png" /></a></p> <p>We’re excited to introduce our new Alexa Skills course, <a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing" target="_blank">How to Design for In-Skill Purchasing</a>. This free course outlines tips and best practices for designing a great monetized Alexa skill experience.</p> <h2><strong>Optimize Your Voice Experience for In-Skill Purchasing</strong></h2> <p>To monetize your Alexa skills effectively, you need to design an experience that inspires your customers to keep coming back. While a portion of the experience depends on the technical implementation (code, information architecture, APIs, etc.), it can only go as far as your voice interaction design. So we created a design-focused course to help you design a skill with in-skill purchasing. 
You’ll learn what makes great premium content, when to make offers, how to write offers, how to handle transitions to and from the Amazon purchase flow, and how to provide access to purchases.<br /> <br /> By completing this course, you’ll be equipped with the knowledge to design and optimize your skill for in-skill purchasing.</p> <h2>Course Components</h2> <ul> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-1" target="_blank">Introducing Our Use Case</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-2" target="_blank">Offer the Right Premium Content</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-3" target="_blank">Make an Offer at the Right Time</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-4" target="_blank">Write Effective Upsells</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-5" target="_blank">Make a Smooth Handoff</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-6" target="_blank">Provide Access to Purchases</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing/module-7" target="_blank">Wrapping Up &amp; Resources</a></li> </ul> <p>Whether you’ve previously built a skill with in-skill purchasing or you’re just starting out, we consider this course a milestone along your Alexa skills educational journey. You should be able to read through everything in about an hour. 
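</p>

<p>For context on what the “smooth handoff” to the purchase flow involves on the code side: a skill starts the purchase flow by returning a Connections.SendRequest directive. Below is a minimal sketch of that response; the product ID, upsell message, and token values are placeholders for this illustration.</p>

```python
# Sketch of a skill response that hands off to the Amazon purchase flow.
# The productId, upsellMessage, and token values are placeholders.
upsell_response = {
    "version": "1.0",
    "response": {
        "directives": [
            {
                "type": "Connections.SendRequest",
                "name": "Upsell",
                "payload": {
                    "InSkillProduct": {"productId": "amzn1.adg.product.PLACEHOLDER"},
                    "upsellMessage": (
                        "The premium pack adds daily puzzles. Want to learn more?"
                    ),
                },
                "token": "correlationToken",
            }
        ]
    },
}
```

<p>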
Keep in mind that it’s self-paced, so you don’t need to do it all at once. In fact, we recommend completing a section or two, pausing to reflect or experiment, and then coming back later to continue your learning. This course will also be a great resource to have open in a tab while designing your next monetized skill experience.</p> <h2>Get Started with How to Design for In-Skill Purchasing</h2> <p>The self-paced course is free and available for anyone ready to build Alexa skills. <a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing" target="_blank">Click here</a> to get started.<br /> <br /> Be sure to check out our ongoing multi-part blog series on designing skills with in-skill purchasing, which contextualizes many of the themes introduced throughout the course with real-world examples:</p> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/33cce9ae-97a9-4b33-9652-3a0ea76ec5ef/designing-skills-for-in-skill-purchasing-part-1-scope-it-right" target="_blank">Designing Skills for In-Skill Purchasing, Part 1: Scope It Right</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/95b86f3c-c0c3-4c65-807f-9b82dcc8d04c/designing-skills-for-in-skill-purchasing-part-2-surface-upsells" target="_blank">Designing Skills for In-Skill Purchasing, Part 2: Surface Upsells</a></li> </ul> <h2>More Resources to Enhance Your Alexa Skills</h2> <p>Once you’ve completed this course, we recommend you continue your learning by checking out these additional training materials:</p> <ul> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/learn/build-a-business/in-skill-purchasing" target="_blank">Introductory Guide</a>: Learn more about in-skill purchasing</li> <li><a href="https://github.com/alexa/skill-sample-nodejs-premium-hello-world" target="_blank">Premium Hello World Skill</a>: Learn to implement in-skill purchasing with this simple skill sample and tutorial</li> <li><a 
href="https://github.com/alexa/skill-sample-nodejs-fact-in-skill-purchases" target="_blank">Premium Fact Skill</a>: Learn to implement one-time purchases, subscriptions, and consumables together in a fact skill</li> <li><a href="https://developer.amazon.com/docs/in-skill-purchase/isp-overview.html" target="_blank">Technical documentation</a>: Learn how in-skill purchasing works at a low level</li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/cake-walk" target="_blank">Cake Walk: Build an Engaging Alexa Skill</a>: Learn how to get started building engaging experiences.</li> <li><a href="http://alexa.design/cdw" target="_blank">Designing for Conversation Course</a>: Learn how to design more dynamic and conversational experiences.</li> <li><a href="https://developer.amazon.com/docs/alexa-design/get-started.html" target="_blank">Alexa Design Guide</a>: Learn the principles of situational voice design so that you can create voice-first skills that are natural and user-centric.</li> </ul> <h1>Hear It from a Skill Builder: How to Make Your Skill Stand Out with Sonic Branding and “Earcons”</h1> <p><em>Michelle Wallace, 2019-10-15</em></p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_steven-arkonovich_954x240.png._CB450292241_.png" /></p> <p>Hear from Alexa Champion Steven Arkonovich about how he used sonic branding and “earcons” to help his skill, Big Sky, stand out.</p> <p><em>Today’s guest blog post is from</em> <a href="https://developer.amazon.com/en-US/alexa/champions/steven-arkonovich"><em>Steven 
Arkonovich</em></a><em>,</em> <em>an Alexa Champion and professor of philosophy and humanities at Reed College in Portland, Oregon. Steven was an Alexa enthusiast from the very beginning, actively writing Alexa applications before there even was an API. He has since developed a Ruby framework for quickly creating Alexa skills as web services. An avid audiophile, Steven’s excited about the possibilities that Alexa opens up to interact with digital media.</em></p> <h2>Exploring a Sonic Branding Opportunity</h2> <p>My skill <a href="https://www.amazon.com/Philosophical-Creations-Big-Sky/dp/B01GU4MBM4" target="_blank">Big Sky</a> is the most reviewed weather skill on Alexa (4,711 reviews) and has been featured on CNET, Wired, and TechCrunch. But it’s not the only weather skill on Alexa. And, of course, it lives alongside the native Alexa weather experience. From the start, I realized that I needed to do something to separate the skill from the rest of the pack, and give users a quick way to know that they’ve reached the content they were looking for. Using <a href="https://developer.amazon.com/blogs/alexa/post/6ad6020e-1ef4-4366-b59d-7411db4903c3/steven-arkonovich-enhances-voice-first-alexa-skills-with-visuals-and-touch-using-the-alexa-presentation-language">Alexa Presentation Language (APL)</a>, I created a distinctive look for Big Sky that quickly distinguishes it from other weather skills. But the majority of users invoke the skill on a voice-only device, so I knew I also needed a way to make it stand out using <em>sound alone.</em></p> <p>To get started, I reached out to Eric Seay at <a href="https://auxnyc.com/" target="_blank">Audio UX in NY</a> to explore opportunities for <em>sonic branding.</em> Essentially, sonic branding is the use of audio to distinguish your brand. Together, Eric and I started working on creating a distinctive sound for Big Sky. 
The main goal was to assure users that they were getting a genuine Big Sky report, as opposed to the native experience. But the sound had to do more than that. The audio aesthetic needed to be clear and concise to reflect the accuracy of the skill, while also maintaining a warm tone to showcase a sense of helpfulness. Ideally, it would create an emotional connection to the skill.</p> <p>After working through several possibilities, the Audio UX team landed on something that accomplished all the goals. The result was the Big Sky audio logo—and a set of “earcons” that extended the audio logo.</p> <p><iframe allowfullscreen="" frameborder="0" height="180" src="//www.youtube.com/embed/vO__7z-UYCA?rel=0" width="320"></iframe></p> <h2>Earcons: Conveying Information Instantly</h2> <p>Having introduced the audio logo, Eric also came up with the idea of extending the simple logo into a set of “earcons” for the Big Sky experience. Earcons are, as the name suggests, an audio version of the more familiar icons. Icons convey meaningful information visually. It’s why your computer hard drive icons look like hard drives, and the trash icon looks like a trash can. Earcons do the same thing, but with sound: think of the “crumpling paper” sound your computer makes when you empty the trash. That’s an earcon.</p> <p>Big Sky has implemented five distinct earcons to alert the user to current weather conditions. There are sounds for rain, snow, wind, fog, and clear skies. The earcons are modifications of the main Big Sky audio logo, with the distinctive weather condition sounds layered on top of the basic audio logo. Here, for example, is the “rain” earcon:</p> <p><iframe allowfullscreen="" frameborder="0" height="180" src="//www.youtube.com/embed/up5QD8RPs78?rel=0" width="320"></iframe></p> <p>This is a multifunctional and modular audio logo that also serves as an earcon to convey important information. 
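</p>

<p>Mechanically, a skill plays an earcon like this by prefixing its outgoing speech with SSML’s audio tag. A minimal sketch follows; the clip URL is a placeholder, and Alexa expects short MP3 clips served over HTTPS.</p>

```python
# Sketch: prepend an earcon to a spoken weather report using SSML.
# The audio URL is a placeholder for a real, HTTPS-hosted MP3 clip.
RAIN_EARCON_URL = "https://example.com/audio/rain-earcon.mp3"

def ssml_with_earcon(report_text, earcon_url=RAIN_EARCON_URL):
    return f'<speak><audio src="{earcon_url}"/> {report_text}</speak>'
```

<p>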
When users invoke the skill, it simultaneously lets them know they’ve reached Big Sky, drives familiarity, and cues them into the current weather conditions. It’s useful, delightful, and really sets the Big Sky weather experience apart.</p> <h2>Using Sonic Branding in Your Own Skill</h2> <p>The most engaging Alexa skills take advantage of the full range of modalities available to Alexa: visuals, touch, and sound. But don’t limit “sound” to just spoken output. Consider adding some sort of audio logo to your skills as well. Audio logos can:</p> <ul> <li>Distinguish your skill experience among competing skills</li> <li>Provide consistency for your product across platforms</li> <li>Drive skill familiarity</li> <li>Create an emotional connection to your skill</li> </ul> <p>Interested in enhancing your own skill with an audio logo or other element of sonic branding? I’ll leave you with a few tips:</p> <ul> <li>You don't have to actually be a brand to develop sonic branding.</li> <li>Take the time to develop your audio identity just as you would develop your skill.</li> <li>Identify the most important audio moments in your skill that benefit the user. Skill launch is a great moment to let the user know immediately that they are transitioning into a new experience.</li> <li>Make the audio moments count, and avoid creating a “noisy” skill by filling it up with unnecessary sounds.</li> <li>If your skill requires a bit of time to respond, you can deliver the audio logo using a <a href="https://developer.amazon.com/docs/custom-skills/send-the-user-a-progressive-response.html">progressive response</a>. 
The user will get immediate feedback that they’ve reached your skill, and you get a second or two to ready your response.</li> </ul> <h1>Tools for Generating Synthetic Data Helped Bootstrap Alexa’s New-Language Releases</h1> <p><em>Larry Hardesty, 2019-10-11</em></p> <p>Synthetic-data generators provided initial training data for natural-language-understanding models in Hindi, U.S. Spanish, and Brazilian Portuguese.</p> <p>In the past few weeks, Amazon announced versions of Alexa in three new languages: Hindi, U.S. Spanish, and Brazilian Portuguese.</p> <p>Like all new-language launches, these addressed the problem of how to bootstrap the machine learning models that interpret customer requests, without the ability to learn from customer interactions. At a high level, the solution is to use synthetic data. These three locales were the first to benefit from two new in-house tools, developed by the Alexa AI team, that produce higher-quality synthetic data more efficiently.</p> <p>Each new locale has its own speech recognition model, which converts an acoustic speech signal into text. But interpreting that text — determining what the customer wants Alexa to do — is the job of Alexa’s natural-language-understanding (NLU) systems.</p> <p>When a new-language version of Alexa is under development, training data for its NLU systems is scarce. 
Alexa feature teams will propose some canonical examples of customer requests in the new language, which we refer to as “golden utterances”; training data from existing locales can be translated by machine translation systems; crowd workers may be recruited to generate sample texts; and some data may come from <a href="https://www.amazon.com/Amazon-Cleo/dp/B01N5QDE0Y" target="_blank">Cleo</a>, an Alexa skill that allows multilingual customers to help train new-language models by responding to voice prompts with open-form utterances.</p> <p>Even when data from all these sources is available, however, it’s sometimes not enough to train a reliable NLU model. The new bootstrapping tools, from Alexa AI’s Applied Modeling and Data Science group, treat the available sample utterances as templates and generate new data by combining and varying those templates.</p> <p>One of the tools, which uses a technique called grammar induction, analyzes a handful of golden utterances to learn general syntactic and semantic patterns. From those patterns, it produces a series of rewrite expressions that can generate thousands of new, similar sentences. The other tool, guided resampling, generates new sentences by recombining words and phrases from examples in the available data. Guided resampling concentrates on optimizing the volume and distribution of sentence types, to maximize the accuracy of the resulting NLU models.</p> <h3><strong>Rules of Grammar</strong></h3> <p>Grammars have been a tool in Alexa’s NLU toolkit since well before the first Echo device shipped. A grammar is a set of rewrite rules for varying basic template sentences through word insertions, deletions, and substitutions.</p> <p>Below is a very simple grammar, which models requests to play either pop or rock music, with or without the modifiers “more” and “some”. 
Below the rules of the grammar is a diagram of a computational system (a finite-state transducer, or FST) that implements them.</p> <p><img alt="grammar_2.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/grammar_2.png._CB450023968_.png?t=true" style="display:block; height:176px; margin-left:auto; margin-right:auto; width:400px" /><br /> <img alt="FST.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/FST.png._CB451792449_.png?t=true" style="display:block; height:89px; margin-left:auto; margin-right:auto; width:600px" /></p> <p style="text-align:center">&nbsp;<sup><em>A toy grammar, which can model requests to play pop or rock music, with or without the modifiers “some” or “more”, and a diagram of the resulting finite-state transducer. The question mark indicates that the </em>some_more<em> variable is optional.</em></sup></p> <p>Given a list of, say, 50 golden utterances, a computational linguist could probably generate a representative grammar in a day, and it could be operationalized by the end of the following day. With the Applied Modeling and Data Science (AMDS) group’s grammar induction tool, that whole process takes seconds.</p> <p>AMDS research scientists Ge Yu and Chris Hench and language engineer Zac Smith experimented with a neural network that learned to produce grammars from golden utterances. But they found that an alternative approach, called Bayesian model merging, offered similar performance with advantages in reproducibility and iteration speed.</p> <p>The resulting system identifies linguistic patterns in lists of golden utterances and uses them to generate candidate rules for varying sentence templates. 
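To make the expansion step concrete, the toy grammar above can be expanded mechanically into training sentences. The sketch below is purely illustrative (the Grammar type, toyGrammar, and expand are hypothetical names for this post, not part of the AMDS tool):

```typescript
// Hypothetical sketch of the toy grammar as rewrite rules.
// Names (Grammar, toyGrammar, expand) are illustrative only.
type Grammar = { [symbol: string]: string[][] };

const toyGrammar: Grammar = {
  // play (some_more)? (pop|rock) music
  PlayMusic: [
    ["play", "Genre", "music"],
    ["play", "SomeMore", "Genre", "music"],
  ],
  SomeMore: [["some"], ["more"]],
  Genre: [["pop"], ["rock"]],
};

// Recursively expand a symbol into every sentence the grammar can generate.
function expand(grammar: Grammar, symbol: string): string[] {
  if (!(symbol in grammar)) {
    return [symbol]; // terminal word
  }
  const results: string[] = [];
  for (const rule of grammar[symbol]) {
    // Expand each element of the rule, then take the cross product.
    let partials: string[] = [""];
    for (const part of rule) {
      const expansions = expand(grammar, part);
      const next: string[] = [];
      for (const prefix of partials) {
        for (const e of expansions) {
          next.push(prefix ? `${prefix} ${e}` : e);
        }
      }
      partials = next;
    }
    results.push(...partials);
  }
  return results;
}

// expand(toyGrammar, "PlayMusic") yields six sentences:
// "play pop music", "play rock music", "play some pop music",
// "play some rock music", "play more pop music", "play more rock music"
```

From a seed of 50 or 60 golden utterances, rules like these compound quickly, which is how a 100-odd-rule grammar can produce several thousand sentences.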
For instance, if two words (say, “pop” and “rock”) consistently occur in similar syntactic positions, but the phrasing around them varies, then one candidate rule will be that (in some defined contexts)&nbsp;“pop” and “rock” are interchangeable.</p> <p>After exhaustively listing candidate rules, the system uses Bayesian probability to calculate which rule accounts for the most variance in the sample data, without overgeneralizing or introducing inconsistencies. That rule becomes an eligible variable in further iterations of the process, which recursively repeats until the grammar is optimized.</p> <p>Crucially, the tool’s method for creating substitution rules allows it to take advantage of existing catalogues of frequently occurring terms or phrases. If, for instance, the golden utterances were sports related, and the grammar induction tool determined that the words “Celtics” and “Lakers” were interchangeable, it would also conclude that they were interchangeable with “Warriors”, “Spurs”, “Knicks”, and all the other names of NBA teams in a standard catalogue used by a variety of Alexa services.</p> <p>From a list of 50 or 60 golden utterances, the grammar induction tool might extract 100-odd rules that can generate several thousand sentences of training data, all in a matter of seconds.</p> <h3><strong>Safe Swaps</strong></h3> <p>The guided-resampling tool also uses catalogues and existing examples to augment training data. Suppose that the available data contains the sentences “play Camila Cabello” and “can you play a song by Justin Bieber?”, which have been annotated to indicate that “Camila Cabello” and “Justin Bieber” are of the type <em>ArtistName</em>. 
In NLU parlance, <em>ArtistName</em> is a <em>slot type</em>, and “Camila Cabello” and “Justin Bieber” are <em>slot values</em>.</p> <p>The guided-resampling tool generates additional training examples by swapping out slot values — producing, for instance, “play Justin Bieber” and “can you play a song by Camila Cabello?” Adding the vast Amazon Music databases of artist names and song titles to the mix produces many additional thousands of training sentences.</p> <p>Blindly swapping slot values can lead to unintended consequences, so which slot values can be safely swapped? For example, in the sentences “play jazz music” and “read detective books”, both “jazz” and “detective” would be labeled with the slot type <em>GenreName</em>. But customers are unlikely to ask Alexa to play “detective music”, and unnatural training data would degrade the performance of the resulting NLU model.&nbsp;</p> <p>AMDS’s Olga Golovneva, a research scientist, and Christopher DiPersio, a language engineer, used the Jaccard index — which measures the overlap between two sets — to evaluate pairwise similarity between slot contents in different types of requests. On that basis, they defined a threshold for valid slot mixing.</p> <h3><strong>Quantifying Complexity</strong></h3> <p>As there are many different ways to request music, another vital question is how many variations of each template to generate in order to produce realistic training data. One answer is simply to follow the data distributions from languages that Alexa already supports.&nbsp;</p> <p>Comparing distributions of sentence types across languages requires representing customer requests in a more abstract form. We can encode a sentence like “play Camila Cabello” according to the word pattern <em>other + ArtistName</em>, where <em>other</em> represents the verb “play”, and <em>ArtistName</em> represents “Camila Cabello”. 
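The Jaccard-based slot-mixing check described above can be sketched in a few lines. Everything below is illustrative: the value sets and the 0.5 threshold are invented for the example, and the threshold Golovneva and DiPersio actually chose is not given in this post:

```typescript
// Hypothetical sketch of Jaccard-based slot compatibility.
// The sample value sets and MIX_THRESHOLD are invented for illustration.
function jaccard<T>(a: Set<T>, b: Set<T>): number {
  const aArr = Array.from(a);
  const intersection = aArr.filter((x) => b.has(x)).length;
  const union = new Set(aArr.concat(Array.from(b))).size;
  return union === 0 ? 0 : intersection / union;
}

// Slot values observed for GenreName in two different request types.
const musicGenres = new Set(["jazz", "rock", "pop", "classical"]);
const bookGenres = new Set(["detective", "romance", "classical", "fantasy"]);

// Only mix slot values across request types when the overlap is high enough.
const MIX_THRESHOLD = 0.5; // illustrative, not the published value
const canMix = jaccard(musicGenres, bookGenres) >= MIX_THRESHOLD;
// Overlap here is 1 shared value out of 7 distinct values (about 0.14),
// so canMix is false: "detective" is never swapped into "play ... music".
```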
For “play ‘Havana’ by Camila Cabello”, the pattern would be <em>other + SongName + other + ArtistName</em>. To abstract away from syntactic differences between languages, we can condense this pattern further to <em>other + ArtistName + SongName</em>, which represents only the semantic concepts included in the request.&nbsp;</p> <p>Given this level of abstraction, Golovneva and DiPersio investigated several alternative techniques for determining the semantic distributions of synthetic data.&nbsp;</p> <p>Using Shannon entropy, which is a measure of uncertainty, Golovneva and DiPersio calculated the complexity of semantic sentence patterns, focusing on slots and their combinations. Entropy for semantic slots takes into consideration how many different values each slot might have, as well as how frequent each slot is in the data set overall. For example, the slot <em>SongName</em> occurs very frequently in music requests, and its potential values (different song titles) number in the millions; in contrast, <em>GenreName</em> also occurs frequently in music requests, but its set of possible values (music genres) is fairly small.&nbsp;</p> <p>Customer requests to Alexa often include multiple slots (such as “play ‘Vogue’|<em>SongName</em> by Madonna|<em>ArtistName</em>” or “set a daily|<em>RecurrenceType</em> reminder to {walk the dog}|<em>ReminderContent</em> for {seven a. m.}|<em>Time</em>”), which increases the pattern complexity further.&nbsp;</p> <p>In their experiments, Golovneva and DiPersio used the entropy measures from slot distributions in the data and the complexity of slot combinations to determine the optimal distribution of semantic patterns in synthetic training data. This results in proportionally larger training sets for more complex patterns than for less complex ones. 
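As a rough sketch of that entropy idea (the value-frequency counts below are made up for illustration, and the actual AMDS formulation also weighs slot combinations):

```typescript
// Hypothetical sketch: Shannon entropy over a slot's value distribution.
// A slot with many, evenly used values (like SongName) scores higher than
// a slot with a small value set (like GenreName).
function shannonEntropy(counts: number[]): number {
  const total = counts.reduce((sum, c) => sum + c, 0);
  return counts
    .map((c) => c / total)
    .filter((p) => p > 0)
    .reduce((h, p) => h - p * Math.log2(p), 0);
}

// Illustrative value-frequency counts for two slots.
const songNameCounts = [3, 2, 4, 1, 2, 3, 1, 4]; // many distinct titles
const genreNameCounts = [12, 8];                 // "pop" and "rock" only

const songEntropy = shannonEntropy(songNameCounts);   // about 2.85 bits
const genreEntropy = shannonEntropy(genreNameCounts); // about 0.97 bits
// Higher-entropy slots and slot combinations motivate proportionally more
// synthetic examples in the generated training set.
```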
NLU models trained on such data sets achieved higher performance than those trained on datasets which merely “borrowed” slot distributions from existing languages.</p> <p>Alexa is always getting smarter, and these and other innovations from AMDS researchers help ensure the best experience possible when Alexa launches in a new locale.</p> <p><em>Janet Slifka, a senior manager for research science in Alexa AI’s Natural Understanding group, leads the Applied Modeling and Data Science team.</em></p> <p><a href="https://developer.amazon.com/alexa/science" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Acknowledgments</strong>: Ge Yu, Chris Hench, Zac Smith, Olga Golovneva, Christopher DiPersio, Karolina Owczarzak, Sreekar Bhaviripudi, Andrew Turner</p> <p><strong>Related</strong>:</p> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/32f8381b-1b30-4f42-bbcd-4dfad6605eb5/active-learning-algorithmically-selecting-training-data-to-improve-alexa-s-natural-language-understanding" target="_blank">Active Learning: Algorithmically Selecting Training Data to Improve Alexa’s Natural-Language Understanding</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/11b51c0b-9794-48bf-81c1-ecadf63fede3/adapting-alexa-to-regional-language-variations" target="_blank">Adapting Alexa to Regional Language Variations</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/45276f8f-83e7-4446-855f-0bb0d5019f07/training-a-machine-learning-model-in-english-improves-its-performance-in-japanese" target="_blank">Training a Machine Learning Model in English Improves Its Performance in Japanese</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/306022ab-dcf6-473d-9144-03f478c31579/how-we-add-new-skills-to-alexa-s-name-free-skill-selector" target="_blank">How We Add New Skills to Alexa’s Name-Free Skill Selector</a></li> <li><a 
href="https://developer.amazon.com/blogs/alexa/post/0535b1ff-d810-4933-a197-841bfb3fa894/cross-lingual-transfer-learning-for-bootstrapping-ai-systems-reduces-new-language-data-requirements" target="_blank">Cross-Lingual Transfer Learning for Bootstrapping AI Systems Reduces New-Language Data Requirements</a></li> </ul> /blogs/alexa/post/fec54390-8005-4e0d-9df8-48b0194e8d02/what-s-new-in-the-alexa-skills-kit-september-2019-release-roundup What's New in the Alexa Skills Kit: September 2019 Release Roundup Leo Ohannesian 2019-10-11T00:09:57+00:00 2019-10-11T00:09:57+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Intent-history_blog.png._CB460678784_.png?t=true" style="height:480px; width:1908px" /></p> <p>In this roundup video we share details about the new things released for skill developers last month, including the Web API for Games (Preview), Alexa-hosted Skills Python Support, the NLU Evaluation tool, and all of the announcements from our September Event.</p> <p><em><strong>Editor's Note: </strong>Our monthly release roundup series showcases the latest in Alexa Skills Kit developer tools and features that can make your skills easier to manage, simpler to deploy, and more engaging for your customers. Build with these new capabilities to enable your Alexa skills to drive brand or revenue objectives.</em></p> <p>In this roundup video we share details about the new things released for skill developers last month, including the Web API for Games (Preview), Alexa-hosted Skills Python Support, the NLU Evaluation tool, and all of the announcements from our September Event. 
Check out the entire video for more information from Alexa evangelists and code samples.</p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/XQL1hEWqHRA" width="640"></iframe></p> <h2>1.&nbsp;Read developer news related to the September event</h2> <p>We are excited to announce new developer tools to accompany a brand new line-up of Alexa devices. Now you can deliver skills in more places, build experiences for existing and new audiences, and reach more customers in their preferred languages.&nbsp;<a href="https://developer.amazon.com/blogs/alexa/post/e4e576c1-21e1-45c1-8086-70a50b78e172/announcing-new-alexa-skills-kit-capabilities-supporting-the-latest-echo-devices-more-languages-and-personalization" target="_blank">Check out the announcement here</a>.</p> <p>&nbsp;</p> <ul> </ul> <h2>2. Publish visually rich, interactive skills with Alexa Presentation Language (APL), now generally available</h2> <p>Alexa Presentation Language (APL) is now generally available. We will continue to add new features, tools, and resources to APL over the coming months. <a href="https://developer.amazon.com/blogs/alexa/post/f01a503d-e3a4-4ef8-a2ef-9372c8570033/alexa-presentation-language-now-generally-available-build-multimodal-experiences-that-come-alive-with-animation" target="_blank">Check out the announcement</a> or&nbsp;<a href="https://developer.amazon.com/docs/alexa-presentation-language/understand-apl.html" target="_blank">read about it in our docs</a>.&nbsp;</p> <p>&nbsp;</p> <h2>3. 
Publish Spanish skills in the US and Make Money with In-Skill Purchasing (ISP) and Alexa Developer Rewards</h2> <p>In the coming weeks, you’ll be able to publish Spanish skills in the US and make money with in-skill purchasing (ISP) and Alexa Developer Rewards.&nbsp;<a href="https://developer.amazon.com/blogs/alexa/post/56d1e4a5-ce06-49be-a6ef-75d49763223b/spanish-skills-are-going-live-in-the-us-with-in-skill-purchasing-isp-and-alexa-developer-rewards" target="_blank">Check out the announcement here</a>&nbsp;and <a href="https://developer.amazon.com/es/docs/custom-skills/develop-skills-in-multiple-languages.html" target="_blank">the technical documentation here.&nbsp;</a></p> <p>&nbsp;</p> <h2>4. Build Games with Web Technologies (Preview)</h2> <p>The Alexa Web API for Games (Developer Preview) introduces new web technologies and tools to create visually rich and interactive voice-controlled game experiences.&nbsp;<a href="https://developer.amazon.com/blogs/alexa/post/e4e576c1-21e1-45c1-8086-70a50b78e172/announcing-new-alexa-skills-kit-capabilities-supporting-the-latest-echo-devices-more-languages-and-personalization" target="_blank">Read the announcement here</a> or <a href="https://build.amazonalexadev.com/AlexaWebAPIforGames.html" target="_blank">sign up for the preview</a>.&nbsp;</p> <p>&nbsp;</p> <h2>5. Use Alexa voice profiles (Preview) to personalize your&nbsp;content for your customers</h2> <p>Soon your skill will be able to deliver customized information based on who is speaking. Learn more and sign up for the Developer Preview:&nbsp;<a href="https://developer.amazon.com/blogs/alexa/post/2d754e03-e754-4454-9cb5-927472473c1f/announcing-personalized-alexa-skill-experiences-developer-preview" target="_blank">Read the announcement here</a>.</p> <p>&nbsp;</p> <h2>6. 
The&nbsp;Alexa Education Skill API (Preview) allows you to easily create voice interfaces for Education Technology Applications</h2> <p>With the Alexa Education Skill API (Developer Preview), integrating ed-tech systems such as Learning Management Systems (LMS), Student Information Systems (SIS), Classroom Management, and massive open online course (MOOC) platforms is quick and easy.&nbsp;Parents and students 13 and older can get information about their school and assignments directly from Alexa without the added step of opening a skill by asking&nbsp;“Alexa, how is Kaylee doing in school?” or “Alexa, what is my homework tonight?”. <a href="https://developer.amazon.com/blogs/alexa/post/92af8bc8-d076-4df4-9121-d2e968fea00a/the-alexa-education-skill-api-preview-allows-you-to-easily-create-voice-interfaces-for-education-technology-applications" target="_blank">Read about it here</a>.&nbsp;</p> <p>&nbsp;</p> <h2>7. LEGO MINDSTORMS Voice Challenge: Powered by Alexa — Your Chance to Win Up to $100,000 in Prizes</h2> <p>We are thrilled to announce LEGO MINDSTORMS Voice Challenge: Powered by Alexa – an opportunity for Alexa developers, LEGO MINDSTORMS enthusiasts, and creators around the world to explore and build the future of voice-based experiences through construction and robotics play. Enter for your chance to win one of over one hundred prizes worth up to $100,000.<a href="https://developer.amazon.com/blogs/alexa/post/d1ece4c7-7d33-43da-8b98-c42d3edb6f85/lego-mindstorms-voice-challenge-offers-a-chance-to-win-up-to-100-000-in-prizes" target="_blank"> Read the announcement here.&nbsp;</a></p> <p>&nbsp;</p> <h2>8. Populate custom slot values with a URL reference to an existing catalog</h2> <p>We are excited to announce the launch of reference-based catalog management features (SMAPI and CLI) for managing custom slots. Using this feature, developers can now create slot types that ingest values from an external data source with a URL reference to the catalog. 
For example, a recipe skill developer will now be able to pull a list of ingredients from their existing catalog instead of having to enter each individual ingredient and keep both data sources in sync.&nbsp;<a href="https://developer.amazon.com/docs/smapi/Reference-based-catalog-management-for-custom-slots-CLI.html" target="_blank">Read the technical documentation.&nbsp;</a></p> <p>&nbsp;</p> <h2>9. Develop Alexa-hosted skills in Python</h2> <p>Python developers can now get started quickly with Alexa skills. Alexa-hosted skills now support Python from both the Alexa Developer Console and the ASK CLI.&nbsp;<a href="https://developer.amazon.com/docs/hosted-skills/build-a-skill-end-to-end-using-an-alexa-hosted-skill.html" target="_blank">Read the tech docs.&nbsp;</a></p> <p>&nbsp;</p> <h2>10.&nbsp;Batch test your skill model with the NLU Evaluation Tool</h2> <p>The NLU Evaluation Tool helps you avoid overtraining your skill’s NLU model by identifying which&nbsp;utterances will improve accuracy if added to your&nbsp;Interaction Model. You can also use it to create and run regression tests on your skill’s NLU model and to measure your model’s accuracy against anonymized frequent live utterances. <a href="https://developer.amazon.com/blogs/alexa/post/a5b37f34-83c8-4274-b576-073a21dfdb7a/build-test-and-tune-your-skills-with-three-new-tools1" target="_blank">Read the blog here.&nbsp;</a><a href="https://developer.amazon.com/docs/custom-skills/batch-test-your-nlu-model.html" target="_blank">Read the tech docs here.&nbsp;</a></p> <p>&nbsp;</p> <p>As always, we can't wait to see what you build. 
As a reminder, learn how to get the most out of the tech docs by visiting the <a href="https://developer.amazon.com/docs/ask-overviews/latest-tips-documentation.html" target="_blank">Latest Tips page.</a></p> /blogs/alexa/post/a3044117-24ac-44a1-8452-fc2f42a84108/skill-flow-builder-tips-and-tricks-use-extensions-to-level-up-your-narrative-driven-games Skill Flow Builder Tips and Tricks: Use Extensions to Level Up Your Narrative-Driven Games June Lee 2019-10-09T18:10:56+00:00 2019-10-09T19:25:50+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_SkillFlowBuilder_954x240.png._CB440696291_.png" /></p> <p>Here are some tips and tricks for using Skill Flow Builder (SFB), a tool for visually designing and building story-based game skills which makes it easier for content creators to create skills without needing a large development team.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_SkillFlowBuilder_954x240.png._CB440696291_.png" /> In July, we released <a href="https://developer.amazon.com/docs/custom-skills/understand-the-skill-flow-builder.html">Skill Flow Builder</a> (SFB), a tool for visually designing and building story-based game skills which makes it easier for content creators to create skills without needing a large development team. If you’re new to SFB, you can check out <a href="https://developer.amazon.com/blogs/alexa/post/83c61d4e-ab3f-443e-bf71-75b5354bdc9e/skill-flow-builder">our introductory blog post</a> for an overview of the SFB Editor and how to get started. This blog post assumes you have built at least one skill in SFB and is intended for intermediate to advanced users.</p> <p>For most experiences, the SFB Editor and features are enough to build a rich experience with dynamic responses that keep users returning. But what happens when you hit the extent of SFB’s base features? Maybe it’s some complex math. 
Maybe you need to keep track of inventory or divert logic to a mini game. When you’ve exhausted SFB’s built-in features, it’s time to build an extension. Luckily, SFB makes extension building easy.</p> <h2>When to Create an Extension</h2> <p>So you’ve created a robust story-based game using SFB. Your players can travel across the world and fight deadly beasts. They can collect key items for progression and make pivotal plot decisions. At first, the inventory is basic—only a few items to keep track of—but as your story grows, the inventory grows with it. Users may grow frustrated when they’re offered the wrong item at the wrong time. A user who’s in the middle of combat and extremely low on health won’t want to search to find their health potion. They’ll want it offered to them automatically. Selecting the right composition of items to suggest to a player starts to require more and more conditional statements in SFB’s Editor. This is the point when an extension becomes an asset.</p> <p>At their core, extensions are simply functions that your SFB story may leverage at any time. If your conditional statements start requiring more than three comparisons or your list of flagged items grows from a manageable 15 to 50, it’s time to look into creating an extension. If it takes 10 lines of logic to do what feels like basic math, it might be time for an extension.</p> <p>There are three types of extensions: DriverExtension, InstructionExtension, and ImporterExtension. You can learn more about the syntax and functionality of these extension types in the <a href="https://developer.amazon.com/docs/custom-skills/set-up-the-skill-flow-builder-as-a-developer.html#custom">SFB documentation</a>. For the purposes of this blog, we’re going to focus on the extension type you’ll use the most: InstructionExtension.</p> <p>An InstructionExtension is composed of functions called by the story files as the story is running. 
Some use cases for the InstructionExtension include:</p> <ol> <li>Complex math on a variable, such as exponentiation and remainder division</li> <li>Inventory management</li> <li>Store/catalog management</li> <li>Iterating over time-dependent components</li> <li>Mass setting or unsetting variables</li> <li>Calling external APIs that do not cause the skill session to exit</li> </ol> <p>So what are some ways you might use an InstructionExtension in your own game skills? Let’s dive into some examples. We’ll start with a simple example to get you familiar with the layout of extensions and then move on to a separate advanced example that combines multiple extension types.</p> <h2>Basic Example: Enable/Disable Cheat Codes with Environment Variables</h2> <p>Over time, your SFB story may grow to become a game that can’t be completed in a short amount of time. You may want to be able to easily jump around through the game and automatically set variables as you go. However, you don’t want this functionality to be available to live users. In this example, we’ll use an InstructionExtension to detect which version of the skill the player is accessing and then restrict access to content.</p> <p>To make restricting access easy, we’ll set an environment variable in Lambda with a key of VERSION and possible values of dev or prod. Since this variable is not accessible by SFB automatically, we need to inject that information into the story.</p> <h3>Build the InstructionExtension</h3> <p>When you create a new SFB story, it includes SampleCustomExtension.ts in the code/extensions folder. 
For ease, we’ll add our environment setter to SampleCustomExtension.ts.</p> <p>First, replace the code in your SampleCustomExtension.ts file with the following:</p> <pre> <code>import { InstructionExtension, InstructionExtensionParameter } from '@alexa-games/sfb-f';

/**
 * Custom InstructionExtension
 */
export class SampleCustomExtension implements InstructionExtension {
    public async setEnvironment(param: InstructionExtensionParameter): Promise&lt;void&gt; {
        console.info(&quot;Player environment is: &quot;, process.env.VERSION);
        param.storyState.environmentType = process.env.VERSION ? process.env.VERSION : &quot;dev&quot;;
    }
}</code></pre> <h3>Call the Extension from the Story Files</h3> <p>Now that we have an extension, we need to access it from the story files. To prevent production/live skill users from accessing the cheat codes, we can use a simple IF statement to restrict access to a reusable scene called cheat_codes. In this example, if the skill is using the version of “dev” and the user says “cheat,” then it’ll route to the cheat code. Otherwise, the story goes back to the previous scene.</p> <p>Add the following code to your story.abc file. If you already have @global_append, then you should extend that section with the call to setEnvironment and the environmentType check.</p> <pre> <code>@global_append
*then
    setEnvironment
    if environmentType == 'dev' {
        &lt;-&gt; cheat_codes
    }

@cheat_codes
*then
    hear cheat {
        -&gt; cheat
    }
    hear cheat more {
        -&gt; cheat_more
    }
    &gt;&gt; RETURN</code></pre> <p>&nbsp;</p> <h2>Advanced Example: Get User Consumables from the Monetization API</h2> <p>In this example, we’re going to do a simple get request to the Monetization Service Client to determine if a consumable is purchasable. 
Since monetization is not available in every locale, this allows us to avoid presenting an upsell to users who can’t or shouldn’t be offered the consumable.</p> <p>Before we get started, make sure you’re familiar with setting up in-skill purchasing (ISP) for a skill and the requirements for consumables. You can read more about in-skill purchasing in the <a href="https://developer.amazon.com/docs/in-skill-purchase/isp-overview.html">documentation</a>.</p> <p>Unfortunately, the InstructionExtension can’t access <strong>handlerInput</strong> and the monetization service requires the user’s locale from handlerInput. However, the DriverExtension can access the request object from Alexa before it reaches the SFB logic. The InstructionExtension allows us to send data back and forth to the story files while the DriverExtension can communicate with external services. Luckily, in SFB you can combine any of the extension types together into a single extension file, so you can use both at the same time.</p> <p>You can view the full Typescript file for this extension in the <a href="https://github.com/alexa/alexa-cookbook/blob/master/code-snippets/skill-flow-builder/ISPExtension.ts">Alexa Cookbook code snippets</a>.</p> <h3>Create the File for the Custom Extension</h3> <p>Just like you did in the basic example, you'll need to create a file to hold your extension code. Unlike that example, though, you also need to import DriverExtension and DriverExtensionParameter. Next, to combine two extension types, you just need to implement the additional types in the class. For our ISP extension, you’ll implement <strong>InstructionExtension</strong> and <strong>DriverExtension</strong>.<br /> First, add a new file to the extensions folder in your SFB project and name it ISPExtension.ts. 
Once you have your file ready, add the following code to ISPExtension.ts to create the framework for the extension.</p> <pre> <code class="language-javascript">import {
    InstructionExtension,
    DriverExtension,
    InstructionExtensionParameter,
    DriverExtensionParameter,
} from &quot;@alexa-games/sfb-f&quot;;

export class ISPExtension implements InstructionExtension, DriverExtension {
}</code></pre> <h3>Build the DriverExtension</h3> <p>The DriverExtension is similar to the request and response interceptors available in the Alexa Skills Kit SDK. The logic is executed before the request reaches SFB and/or before the response is sent to the user. This makes the DriverExtension great for cleaning up data or doing additional logic on story content. A DriverExtension requires both a <strong>pre</strong> and a <strong>post</strong> function, but either of these can be left empty. In this case, we only need the pre function to get the <strong>handlerInput</strong> object.</p> <p>Add the following code inside the ISPExtension class you created in the previous step to add the pre and post functionality:</p> <pre> <code class="language-javascript">private handlerInput: any;

async pre(param: DriverExtensionParameter) {
    this.handlerInput = param.userInputHelper.getHandlerInput();
    return;
}

async post(param: DriverExtensionParameter) {
    // Used for post-processing; not needed this time
}</code></pre> <h3>Build the InstructionExtension</h3> <p>Now that we have the handlerInput, we can send requests to the Monetization Service Client and also access the user’s locale. The next step is to add two functions: one to check purchasable status and one to check the number of consumables purchased. Additionally, there is a separate function for making the request to the Monetization Service Client.</p> <p><strong>Purchasable</strong></p> <p>The sole goal of this extension is to be easily callable from the story files. 
The function for “purchasable” sets the type of request the skill is making; in this case, the type is “purchasable.” We’ll then use a variable from the storyState, <strong>monetizationPurchasable</strong>, to flag whether the item is available. <strong>storyState</strong> is passed back and forth from the story files and contains details about the user such as the current point in the story and any variables that have been added or flagged over time.</p> <p>Once purchasable and the request type (workflowType) are set, the function simply triggers a call to the Monetization Service Client via the getMonetizationData function.</p> <p>Add the following code below the pre and post code you added earlier:</p> <pre> <code class="language-javascript">public async purchasable(param: InstructionExtensionParameter): Promise&lt;void&gt; {
    param.instructionParameters.workflowType = &quot;purchasable&quot;;
    param.storyState.monetizationPurchasable = false;
    param.storyState = await this.getMonetizationData(param);
    return;
}</code></pre> <p><strong>Consumable</strong></p> <p>The function for consumable is intended to retrieve the amount of a consumable that’s been purchased and is available for a user. All this basic function needs to do is set the workflowType of “consumable.”</p> <p>Add the following code for consumable below the purchasable function. 
This function just sets the workflowType and allows the consumable checks to be called separately from the purchasable checks.</p> <pre> <code class="language-javascript">public async consumable(param: InstructionExtensionParameter): Promise&lt;void&gt; {
    param.instructionParameters.workflowType = &quot;consumable&quot;;
    param.storyState = await this.getMonetizationData(param);
    return;
}</code></pre> <p><strong>getMonetizationData()</strong></p> <p>While purchasable and consumable are vanity calls to make the monetization checks easily referable from the story files, the getMonetizationData function does all of the work for calling the Monetization Service Client. The structure is almost identical to standard Node.js calls to the client, with some added references to storyState for the amount of the consumable that has been purchased.</p> <p>The following code does additional checks to verify if the consumable amount is out of sync with what is being stored by the skill. Add this section to ISPExtension.ts below the consumable function you added in the previous step:</p> <pre> <code class="language-javascript">private async getMonetizationData(
    param: InstructionExtensionParameter
): Promise&lt;any&gt; {
    const product = param.instructionParameters.item; // Supplied from the story file
    if (!product) {
        throw new Error(`[AlexaMonetizationExtension Syntax Error] monetized item=[${product}] not provided.`);
    }
    const ms: any = this.handlerInput.serviceClientFactory.getMonetizationServiceClient();
    const locale: string = this.handlerInput.requestEnvelope.request.locale;
    const isp: any = await ms.getInSkillProducts(locale).then((res: any) =&gt; {
        if (res.inSkillProducts.length &gt; 0) {
            const matches = res.inSkillProducts.filter(
                (record: any) =&gt; record.referenceName === product
            );
            return matches[0];
        }
    });

    // Return product information based on the user request
    if (param.instructionParameters.workflowType === &quot;purchasable&quot;) {
        if (isp &amp;&amp; isp.purchasable === &quot;PURCHASABLE&quot;) {
            // Easily indicate within the story that the item is purchasable
            console.info(&quot;Item is purchasable: &quot;, isp.name);
            param.storyState.monetizationPurchasable = true;
        } else {
            console.info(&quot;Item cannot be purchased: &quot;, product);
        }
    } else if (param.instructionParameters.workflowType === &quot;consumable&quot;) {
        if (isp &amp;&amp; isp.activeEntitlementCount) {
            const itemAmount: number = parseInt(isp.activeEntitlementCount, 10);
            // Set the purchased and consumed session variables to keep track during the game
            param.storyState[`${product}Purchased`] = itemAmount;
            if (itemAmount) {
                if (!param.storyState[`${product}Consumed`]) {
                    param.storyState[`${product}Consumed`] = 0;
                }
                if (param.storyState[`${product}Consumed`] &gt; itemAmount) {
                    // User shows as having used more of the consumable than purchased
                    param.storyState[`${product}Consumed`] = itemAmount;
                }
            }
            param.storyState.monetizationPurchasable = true;
        } else {
            console.info(&quot;Item is not available: &quot;, product);
            param.storyState[`${product}Consumed`] = 0;
            param.storyState[`${product}Purchased`] = 0;
            param.storyState[product] = 0;
            param.storyState.monetizationPurchasable = false;
        }
    }
    return param.storyState;
}</code></pre> <h3>Call the Extension from the Story Files</h3> <p>We have an extension and we have some basic parameters for checking the state of a consumable. Now let’s call it from a scene in the story. For the sake of this example, we’re making a redundant check that the item is purchasable to demonstrate how each function works. In practice, you can just use the consumable function since it already checks whether an ISP item is purchasable.</p> <p>Add the following code for the reusable @check_item scene to your story.abc file. 
To test the code, you can follow the Basic example and call @check_item from @global_append.</p> <pre> <code>@check_item
    *then
        purchasable item='coffee'
        if monetizationPurchasable {
            consumable item='coffee'
            // Reset the amount of the consumable that is available to use
            set coffee to coffeePurchased
            decrease coffee by coffeeConsumed
            -&gt; has_item_scene
        }
        if !monetizationPurchasable {
            -&gt; no_buy_scene
        }
</code></pre> <p>Now, if you release your skill in a locale that doesn’t support monetization, you can avoid sending users an upsell dialog by first checking whether the item is available. You can also keep the available amount of a consumable up to date as the user progresses through the skill.</p> <p>This may seem like a complex extension, but at its core all we’ve done is take an API call and add some story variables to it.</p> <h3>Conclusion</h3> <p>Extensions are a great tool for passing story variables back and forth without having to do complex SFB logic within the story files themselves. We went through a basic example of accessing data not readily available to the story files, and then a more advanced example of calling external APIs with SFB. Extensions allow you to add more robust logic to your story-based games and take your adventures from simple narratives to leveling adventures.
You can now take this knowledge and add combat modules, character progression, and get those health potions to the right players when they need them.</p> <p>We’re always excited to hear about your extensions, so feel free to share your creations with us on Twitter!</p> /blogs/alexa/post/a5b37f34-83c8-4274-b576-073a21dfdb7a/build-test-and-tune-your-skills-with-three-new-tools1 Build, Test, and Tune Your Skills with Three New Tools Leo Ohannesian 2019-10-09T17:35:13+00:00 2019-10-10T00:24:11+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/SkillBuilder.png._CB451694446_.png" /></p> <p>We’re excited to announce the General Availability of two tools which focus on your voice model’s accuracy: the Natural Language Understanding (NLU) Evaluation Tool and Utterance Conflict Detection. We are also excited to announce that you can now build your own quality and usage reporting with the Get Metrics API, now in Beta. These tools help complete the suite of Alexa skill testing and analytics tools that aid in creating and validating your voice model prior to publishing your skill, detecting possible issues when your skill is live, and refining your skill over time.<br /> <br /> The NLU Evaluation Tool helps you batch-test utterances and compare how they are interpreted by your skill’s NLU model against your expectations. The tool has three use cases:</p> <ol> <li>Prevent overtraining NLU models: overtraining your NLU model with too many sample utterances and slot values can reduce accuracy.
Instead of adding exhaustive sample utterances to your interaction model, you can now run NLU evaluations with utterances you expect users to say. If any utterance resolves to the wrong intent and/or slot, you can improve the accuracy of your skill’s NLU model by adding only those utterances as new training data (by creating new sample utterances and/or slots).</li> <li>Regression tests - you can create regression tests and run them after adding new features to your skills to ensure your customer experience stays intact.</li> <li>Accuracy measurements - you can measure the accuracy of your skill’s NLU model by running an NLU evaluation with anonymized frequent live utterances surfaced in Intent History (production data), and then measure the impact on accuracy of any changes you make to your NLU model.</li> </ol> <p><br /> Utterance Conflict Detection helps you detect utterances that are accidentally mapped to multiple intents, which reduces the accuracy of your Alexa skill’s Natural Language Understanding (NLU) model. This tool runs automatically on each model build and can be used prior to publishing the first version of your skill, or as you add intents and slots over time - preventing you from building models with unintended conflicts.<br /> <br /> Finally, with the Get Metrics API (Beta) you can immediately benefit from the ability to analyze key metrics, like unique customers, in your preferred tools for analysis or aggregation. For example, you can now connect to AWS CloudWatch and create monitors, alarms, and dashboards in order to stay on top of changes that may impact customer engagement.<br /> <br /> With these three additions to the ASK toolset, let’s recap the overall suite of testing and feedback tools you have available and where they fall in the overall skill development lifecycle.
The skill development lifecycle can be separated into three general steps that come after your design phase (see situational design): building, testing, and tuning.<br /> <br /> <strong>Build Your Dialog Model</strong><br /> As you define your intents, slots, and dialogs from the ground up per your <a href="https://developer.amazon.com/docs/alexa-design/get-started.html" target="_blank">situational design definition</a>, you will want to test how utterances fall into your model. This is where the <a href="https://developer.amazon.com/docs/custom-skills/test-utterances-and-improve-your-interaction-model.html" target="_blank">utterance profiler</a> is useful. You can enter utterances to see how they resolve to your intents and slots. When an utterance does not invoke the right intent or slot, you can update your sample utterances or slots and retest, all before writing any code for your skill. You should set up a <a href="https://developer.amazon.com/docs/custom-skills/standard-built-in-intents.html#fallback" target="_blank">fallback intent</a> for requests that your skill does not know how to interpret, otherwise known as unhandled requests. Now, as you build out your voice model, you can use <a href="https://developer.amazon.com/docs/custom-skills/find-utterance-conflicts-in-your-model.html" target="_blank">utterance conflict detection</a> to ensure that there aren’t conflicting utterances in your VUI. Utterance conflict detection will identify utterances (and slots) that map to more than one intent.
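To make that check concrete, here is a minimal sketch of the idea in plain JavaScript. The interaction-model shape and the findUtteranceConflicts helper are illustrative assumptions, not the console's actual implementation (which also accounts for slot types):

```javascript
// Sketch of the idea behind utterance conflict detection: flag any sample
// utterance that appears under more than one intent. Simplified model shape;
// the real build-time check also considers slot types.
function findUtteranceConflicts(intents) {
  const seen = new Map(); // normalized utterance -> Set of intent names
  for (const intent of intents) {
    for (const utterance of intent.samples) {
      const key = utterance.trim().toLowerCase();
      if (!seen.has(key)) seen.set(key, new Set());
      seen.get(key).add(intent.name);
    }
  }
  // Keep only utterances claimed by two or more intents
  return [...seen.entries()]
    .filter(([, intentNames]) => intentNames.size > 1)
    .map(([utterance, intentNames]) => ({ utterance, intents: [...intentNames] }));
}

const conflicts = findUtteranceConflicts([
  { name: 'PlayGameIntent', samples: ['start the game', 'play trivia'] },
  { name: 'ResumeGameIntent', samples: ['play trivia', 'resume my game'] },
]);
console.log(conflicts); // [{ utterance: 'play trivia', intents: ['PlayGameIntent', 'ResumeGameIntent'] }]
```

An ambiguous sample like “play trivia” above is exactly the kind of conflict the build-time tool surfaces so you can reword one of the intents.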
Surfacing conflicting utterances will help you detect areas where your skill’s NLU model could break and cause an unintended customer experience.<br /> <br /> <strong>Test Before Go-Live</strong><br /> As you approach voice model readiness, you will want to test using the built-in <a href="https://developer.amazon.com/docs/devconsole/test-your-skill.html#test-simulator" target="_blank">Alexa Simulator</a>. You can also <a href="https://developer.amazon.com/docs/devconsole/test-your-skill.html#h2_register" target="_blank">test-distribute to your Alexa device</a> or move on to <a href="https://developer.amazon.com/docs/custom-skills/skills-beta-testing-for-alexa-skills.html" target="_blank">beta testing</a>. As your voice model solidifies, you can start using the <a href="https://developer.amazon.com/docs/custom-skills/batch-test-your-nlu-model.html" target="_blank">NLU Evaluation Tool</a> to batch-test utterances and see how they fit into your voice model. You will need to define a set of utterances mapped to the intents and slots you expect to be sent to your skill. You can then run an NLU evaluation and, depending on the results, add to your slots and intents to improve the accuracy of your skill. Before going live, you will want to both <a href="https://developer.amazon.com/docs/custom-skills/functional-testing-for-a-custom-skill.html" target="_blank">functionally test</a> and <a href="https://developer.amazon.com/docs/custom-skills/test-and-debug-a-custom-skill.html" target="_blank">debug your skill</a>.<br /> <br /> <strong>Tune Over Time</strong><br /> The skill development journey has only begun when you go live. You can use interaction path analysis to begin to understand your customers’ journey through your skill and where possible bottlenecks are.
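In spirit, an NLU evaluation is bookkeeping over a batch of annotated utterances: resolve each one and compare the result against the expected intent. A rough sketch, where the annotation shape and the evaluateAnnotations helper are assumptions for illustration and the toy model stands in for Alexa's real NLU:

```javascript
// Sketch of what a batch NLU evaluation computes: run each annotated utterance
// through a resolver and compare the resolved intent against the expectation.
// resolveUtterance is a stand-in; in the real tool, Alexa's NLU does this step.
function evaluateAnnotations(annotations, resolveUtterance) {
  const failures = [];
  for (const { utterance, expectedIntent } of annotations) {
    const actualIntent = resolveUtterance(utterance);
    if (actualIntent !== expectedIntent) {
      failures.push({ utterance, expectedIntent, actualIntent });
    }
  }
  const accuracy = (annotations.length - failures.length) / annotations.length;
  return { accuracy, failures };
}

// Toy stand-in model for demonstration only
const toyModel = (u) => (u.includes('hint') ? 'BuyHintIntent' : 'AnswerIntent');
const report = evaluateAnnotations(
  [
    { utterance: 'give me a hint', expectedIntent: 'BuyHintIntent' },
    { utterance: 'the answer is seattle', expectedIntent: 'AnswerIntent' },
    { utterance: 'buy more hints', expectedIntent: 'BuyHintIntent' },
    { utterance: 'skip this question', expectedIntent: 'SkipIntent' },
  ],
  toyModel
);
console.log(report.accuracy); // 0.75 — the 'skip' utterance resolved incorrectly
```

Re-running the same annotation set after a model change is exactly the regression-test use case described above: the accuracy number tells you whether the change helped or hurt.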
<a href="https://developer.amazon.com/blogs/alexa/post/f2ef2a55-b465-4580-a9fc-2c0a9be49f00/gain-interaction-insights-using-new-analytics-in-the-ask-developer-console" target="_blank">Interaction path analysis</a> shows aggregate skill usage patterns in a visual format, including which intents your customers use and in what order. This enables you to verify whether customers are using the skill as expected, and to identify interactions where customers become blocked or commonly exit the skill. You can use insights gained from interaction path analysis to make your flow more natural, fix errors, and address unmet customer needs.<br /> <br /> The <a href="https://developer.amazon.com/docs/custom-skills/review-intent-history-devconsole.html" target="_blank">Intent History page</a> of the developer console displays aggregated, anonymized frequent live utterances and the intents they resolved to. You can use this to learn how users interact with your skill and identify improvements you may want to make to your interaction model. The Intent History page displays the frequent utterances in two tabs: <a href="https://developer.amazon.com/docs/custom-skills/review-intent-history-devconsole.html#review-and-resolve" target="_blank">Unresolved Utterances</a>, which did not successfully map to an intent, and Resolved Utterances, which mapped successfully to an intent and slot. This lets you review the utterances, update your interaction model to account for phrases that were not routed correctly, and mark utterances as <em>resolved</em>. For example, suppose you see a particular utterance that was sent to <code>AMAZON.FallbackIntent</code>, but it is actually a phrase that should trigger one of your custom intents. You can map that utterance directly to that intent and update your interaction model right from the Intent History page. Conversely, you could extend your voice model if you find that an utterance landing on the fallback intent represents a feature worth adding to your skill.
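As a sketch of how that triage could look outside the console, the snippet below flags frequent fallback utterances as candidates to map to a real intent. The row shape and the fallbackCandidates helper are hypothetical, loosely modeled on what the Intent History page displays:

```javascript
// Hypothetical triage of Intent History-style rows: surface frequent
// utterances that landed on AMAZON.FallbackIntent as candidates to map to a
// custom intent. The row shape is an assumption for illustration only.
function fallbackCandidates(rows, minCount) {
  return rows
    .filter((r) => r.resolvedIntent === 'AMAZON.FallbackIntent' && r.count >= minCount)
    .sort((a, b) => b.count - a.count) // most frequent first
    .map((r) => r.utterance);
}

const candidates = fallbackCandidates(
  [
    { utterance: 'gimme a clue', resolvedIntent: 'AMAZON.FallbackIntent', count: 41 },
    { utterance: 'play trivia', resolvedIntent: 'PlayGameIntent', count: 530 },
    { utterance: 'blah blah', resolvedIntent: 'AMAZON.FallbackIntent', count: 2 },
  ],
  10
);
console.log(candidates); // ['gimme a clue']
```

A frequency floor like minCount keeps one-off noise (“blah blah”) out of the list so you only review phrases real customers repeat.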
As mentioned above, you can also use the utterances surfaced in Intent History to run an <a href="https://developer.amazon.com/docs/custom-skills/batch-test-your-nlu-model.html" target="_blank">NLU evaluation</a> and generate an accuracy indicator for your skill. You can re-run the test after making changes to your skill model to measure the overall impact on your skill experience, otherwise known as a regression test.<br /> <br /> Access to skill metrics was <a href="https://developer.amazon.com/docs/devconsole/measure-skill-usage.html" target="_blank">previously restricted to pre-configured dashboards displaying static metrics in the developer console</a>. Static metrics are insightful but fall short when you need to automate mechanisms that guarantee operational continuity. In contrast, with the <a href="https://developer.amazon.com/docs/smapi/metrics-api.html" target="_blank">Get Metrics API (Beta)</a>, you can feed live metrics into your preferred analysis tools to pinpoint changes in your skill’s performance and behavior. You can now compute your own aggregated metrics or create automation that feeds that data into a monitoring system like <a href="https://aws.amazon.com/cloudwatch/" target="_blank">AWS CloudWatch</a>, where you can create alarms or trigger changes in your skill based on certain inputs. For example, you can track how new customers are interacting with your skill and set up alarms to detect when indicators of a bad user experience surface, such as customers landing on <code>AMAZON.FallbackIntent</code> at a higher rate than normal. The Get Metrics API (Beta) also works across multiple skills, so you can now set up aggregated reporting for your entire skill portfolio without switching back and forth to the developer console.<br /> <br /> With the new Get Metrics API, you can save time and increase visibility into the key insights that we provide in order to optimize skill engagement.
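As one illustration of that kind of automation, the sketch below computes a fallback rate from per-day counts and decides whether to raise an alarm (for example, before publishing a metric to AWS CloudWatch). The datapoint shape and the fallbackAlarm helper are simplified assumptions, not the Get Metrics API's actual response format:

```javascript
// Simplified sketch: given per-day counts pulled via the Get Metrics API
// (shape assumed for illustration), compute the overall fallback rate and
// decide whether to alarm when it exceeds a threshold.
function fallbackAlarm(datapoints, threshold) {
  const totals = datapoints.reduce(
    (acc, d) => ({
      fallback: acc.fallback + d.fallbackCount,
      all: acc.all + d.totalIntents,
    }),
    { fallback: 0, all: 0 }
  );
  const rate = totals.all === 0 ? 0 : totals.fallback / totals.all;
  return { rate, alarm: rate > threshold };
}

const result = fallbackAlarm(
  [
    { day: '2019-10-07', fallbackCount: 12, totalIntents: 400 },
    { day: '2019-10-08', fallbackCount: 48, totalIntents: 600 },
  ],
  0.05 // alarm when more than 5% of intents are fallbacks
);
console.log(result); // { rate: 0.06, alarm: true }
```

In a real pipeline you would fetch the counts on a schedule and forward the computed rate to your monitoring system; the threshold itself is a product decision.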
The Get Metrics API is available for skill builders in all locales and currently supports the Custom skill model, the pre-built Flash Briefing model, and the Smart Home Skill API.<br /> <br /> <strong>Start Optimizing Today</strong><br /> Begin working with the three new tools to create an optimal customer experience. Start by reading our technical documentation on the <a href="https://developer.amazon.com/docs/custom-skills/batch-test-your-nlu-model.html" target="_blank">NLU Evaluation Tool</a>, <a href="https://developer.amazon.com/docs/custom-skills/find-utterance-conflicts-in-your-model.html" target="_blank">Utterance Conflict Detection</a>, and the <a href="https://developer.amazon.com/docs/smapi/metrics-api.html" target="_blank">Get Metrics API (Beta)</a> today!</p> /blogs/alexa/post/95b86f3c-c0c3-4c65-807f-9b82dcc8d04c/designing-skills-for-in-skill-purchasing-part-2-surface-upsells Designing skills for In-Skill Purchasing, Part 2: Surface Upsells Ben Grossman 2019-10-08T23:52:43+00:00 2019-10-08T23:52:43+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/ISP_Tech_Blog_Header._CB496589928_.png" style="height:240px; width:954px" /></p> <p>Welcome to Part 2 of our series on designing skills with in-skill purchasing (ISP)!
In <a href="https://developer.amazon.com/blogs/alexa/post/33cce9ae-97a9-4b33-9652-3a0ea76ec5ef/designing-skills-for-in-skill-purchasing-part-1-scope-it-right" target="_blank">Part 1</a>, we discussed the different types of ISPs and scoped a hypothetical trivia skill – Seattle Super Trivia – for three kinds of purchases (a subscription, some one-time purchases, and a consumable). Now that we’ve established some best practices for <em>what</em> to offer, it’s time to decide <em>how</em> we’ll offer these products to our customers – specifically, when we’ll tell them about what they can buy. In Part 3, we’ll discuss how to write an effective upsell message, but first we need to decide when those messages will appear.<br /> <br /> Unlike apps, skills don’t have a screen to remind a customer of purchases that are available: there’s no buy button in a skill, no modal windows, and no popups. A skill that offers in-skill purchases is most effective when it follows some of the best practices one might experience in a live sales conversation: the skill needs to engage the customer, get to know them, build trust and excitement, then ask for the sale at the right time and follow up in a way that isn’t intrusive or annoying.<br /> <br /> No one likes a pushy car salesman who constantly tries to sell customers a car that isn't right for them. Your skill should avoid pressuring your customer to buy something.
Don't offer them the tricked-out 2020 armored sports car with all the upgrades before you find out that the affordable, no-frills daily driver was the best fit for them: they'll have already walked away from your lot (your skill).</p> <h2>Engage the Customer First</h2> <p>Customers receiving upsell messages during a conversation with Alexa have only the length of the upsell message and an eight-second open mic to understand what you’re offering and how it benefits their experience, then make a decision to say “yes” or “no.” That’s a heavy cognitive load for them to carry. A customer can’t make this decision without first becoming acquainted with your skill.<br /> <br /> The best way to do this is to give the customer what they asked for first, wherever possible. If they asked your skill when the game is on, don’t make them sit through a subscription offer to listen to live games before answering their simple question (or worse, never answering at all). The customer should experience the benefit of your skill first. For example, the <a href="https://www.amazon.com/TuneIn-Live/dp/B075SMLSDS/ref=sr_1_1?keywords=tunein+live&amp;qid=1570571164&amp;s=digital-skills&amp;sr=1-1" target="_blank">TuneIn Live</a> skill will answer basic questions about your favorite sports team’s schedule before informing you that you’ll need a subscription to listen. In our hypothetical trivia game – Seattle Super Trivia – this means we’re going to let the customer play their first game without mentioning in-skill purchases.</p> <p>Here is an example of what <strong>NOT</strong> to do:<br /> <iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/VNtNbrvg6UU" width="640"></iframe></p> <h2>Offers Should Be Simple</h2> <p>Upsell messages ask the customer to process a lot of information, and we’ll want to reduce their cognitive load wherever we can. When a customer is confused, it’s just easier for them to say “no” to whatever is being asked of them.
This means reducing the number of steps it takes to purchase and, for skills that offer more than one ISP, only offering one product at a time. For Seattle Super Trivia, that means we won’t try to sell the player a subscription, a one-time purchase, and hints all at the same time.<br /> <br /> Let's take a look at an example of what <strong>NOT</strong> to do:<br /> <br /> “Welcome to Seattle Super Trivia. It’s OK, you’ve probably never heard of it. Think you know Seattle? Think again, transplant. I’m about to school you! Before we get started, you can get a subscription to Seattle Super Trivia to get more trivia questions every day, purchase one of 50 mega-trivia packs, or stock up on hints before you do trivia battle. What would you like to do? Subscribe? Stock up on hints? Or get a pack?”<br /> <br /> That was overwhelming. We threw a lot of information at the customer before they even got a chance to understand what the skill does. What are you likely to say to that prompt?</p> <p><strong>Complex:</strong></p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/Zcp9cOWooZM" width="640"></iframe></p> <p><strong>Simple:</strong></p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/JbB8lCKKa94" width="640"></iframe></p> <h2>Be Relevant to Be Trusted</h2> <p>Offer an in-skill purchase when your customer will be most inclined to say “yes.” If you have multiple ISPs like our trivia skill, you’ll want to consider which to offer a customer first. For our trivia skill, we can offer a pack first, since it’s an inexpensive, low-commitment purchase for those who’ve only played a couple of times.
If that customer returns to the skill and finishes our daily game of five questions several days in a row, that’s an indicator it’s a good time to offer them the subscription.<br /> <br /> While the scope of your skill might affect when a customer is most likely to agree to a purchase, some milestones in their experience at which to consider surfacing an upsell include:</p> <ul> <li>A customer has run out of free content</li> <li>A customer has reached a point in the experience where it may be enhanced - but not blocked - by making a specific kind of purchase</li> <li>A customer has asked directly for content that requires payment</li> <li>A customer returns to the skill for the first time since additional paid features were added</li> <li>A customer has engaged with the skill successfully several times after declining the first upsell offer</li> </ul> <p>Some inopportune times to surface an upsell include:</p> <ul> <li>The first time the customer uses your skill</li> <li>The customer has asked the skill to stop, quit, or exit</li> <li>The customer has not recently had a positive interaction with the skill, such as after hitting several errors or not completing their primary task</li> <li>The customer has recently purchased from the skill and hasn’t used their purchase</li> <li>The customer has recently declined to purchase from the skill</li> </ul> <h2>Don’t Mislead, Confuse, or Disrupt</h2> <p>Don’t pull a “bait and switch” on your customers. They should clearly understand what kind of content they can access for free, and what content is “premium.” At the end of a round of Seattle Super Trivia, we’re not going to ask our player if they want to play more trivia (of course they do!) and then tell them they can’t do that without a purchase:<br /> <br /> Skill: Thanks for playing today’s round of Seattle Super Trivia.
Want to play another round?<br /> Customer: Yes<br /> Skill: Sorry, you’ll need a subscription to Seattle Super Trivia to play the daily bonus round. Would you like to learn how to subscribe?<br /> Customer: No!</p> <p>That was frustrating, wasn't it? Make sure you avoid doing this at all costs. Now let's take a moment to see what that would look like between two human conversational partners.</p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/y7U6h8K3jK8" width="640"></iframe></p> <p>From now on, my poor player is going to be wondering when I’ll drop landmines like that again, nervous that every time they agree to an option, I’ll try to upsell them.<br /> <br /> In the spirit of the discussion above about reducing friction and not trying to sell to someone we haven’t yet successfully interacted with (or haven’t even “met”), we don’t want to disrupt the experience customers were expecting when they invoked our skill. In Seattle Super Trivia, we won’t stop customers with an upsell after they agree to start their game. Let's make sure we <strong>AVOID</strong> the following:<br /> <br /> “Ok, let’s continue your Seattle Superstars Trivia Pack. Before we get started, did you know we also offer 49 more trivia packs about history, animals, and music? Want to learn about more packs?”</p> <h2>Don't Upsell Too Often… or Too Little</h2> <p>When a customer declines to make a purchase, we’ll have to decide when to offer a purchase again. Customers shouldn’t be shut out from purchases forever after their first “no,” but we’ll have to be contextual in the way we decide to make an offer again. Whatever upsell “trigger” we set, we’ll need to validate that mechanism. For example:</p> <ul> <li>If a skill offers a resource as a consumable that will be relatively scarce, a trigger to upsell when the balance of such items is low is likely to deliver the message too often.
Since Seattle Super Trivia only gives one hint a day for free, and offers a purchase of only three hints, we don’t want to set an upsell to trigger every time the customer only has one hint left: They’d be hearing upsells too often!</li> <li>If a skill upsell is set to be delivered every time the player is out of free content, they may hear the upsell as the welcome message every time they open the skill. Once a player finishes the Seattle Super Trivia daily game, we’ll want to offer the upsell again if they come back the same day, but if they decline that upsell, they shouldn’t hear that message over and over each time they open the skill until the next daily game is available.</li> <li>If a skill will surface an upsell after every trivia game the customer completes, the skill will deliver an annoying number of upsell messages if each trivia game is only three questions, while the skill may not deliver enough upsells if there are 100 questions per game.</li> </ul> <p>Typical upsell intervals used by developers who have implemented ISPs include:</p> <ul> <li>Once every 24-72 hours</li> <li>After three consecutive days of play</li> <li>Once every set number of sessions</li> </ul> <p>Consider a different, longer interval if a customer has already declined an upsell from your skill.<br /> <br /> Now that we’ve decided what circumstances will trigger an upsell message and where, it’s time to write the message itself. But what do we write? How do we convince customers in just a breath’s worth of dialog? 
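As a rough sketch before we move on, the trigger-validation guidance above could be encoded as a simple gate over session state. All field names here (declinedLastUpsell, lastUpsellAt, and so on) are hypothetical session attributes, not part of any Alexa SDK:

```javascript
// Hypothetical upsell gate encoding the timing guidance above. The state
// fields are illustrative session attributes, not an SDK API.
const DAY_MS = 24 * 60 * 60 * 1000;

function shouldOfferUpsell(state, now) {
  // Inopportune moments: exiting, recent errors, or an unused purchase
  if (state.exiting || state.recentErrors > 0 || state.hasUnusedPurchase) {
    return false;
  }
  // Back off longer (72 hours instead of 24) after a decline
  const interval = state.declinedLastUpsell ? 3 * DAY_MS : DAY_MS;
  if (state.lastUpsellAt !== null && now - state.lastUpsellAt < interval) {
    return false;
  }
  // Only offer when the moment is relevant, e.g. free content has run out
  return state.outOfFreeContent;
}

const now = Date.parse('2019-10-10T12:00:00Z');
const base = { exiting: false, recentErrors: 0, hasUnusedPurchase: false, outOfFreeContent: true };

// Declined two days ago: the 72-hour back-off still applies
const tooSoon = shouldOfferUpsell({ ...base, declinedLastUpsell: true, lastUpsellAt: now - 2 * DAY_MS }, now);
// Never upsold before and out of free content: a reasonable moment
const readyNow = shouldOfferUpsell({ ...base, declinedLastUpsell: false, lastUpsellAt: null }, now);
console.log(tooSoon, readyNow); // false true
```

Whatever gate you pick, log how often it fires in production; that is the validation step described above.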
Stay tuned for Part 3 of this series, where we’ll discuss how to write an effective upsell!<br /> &nbsp;</p> /blogs/alexa/post/0574e4b3-3b0a-4d89-b153-f501d1b15d31/sakananojikan-interview An Interview with the Developers of サカナノジカン, Winner of the Kids Category at the Alexa Developer Skill Awards 2019 Chisato Hiroki 2019-10-08T04:00:07+00:00 2019-10-08T04:00:07+00:00 <p>國川雅司, 南島康一, and 小川麟太郎, the developers of <a href="https://www.amazon.co.jp/dp/B07W1J54Q9/ref=as_at?tag=rsaf-22&amp;linkCode=as2&amp;">サカナノジカン</a> (literally, “Fish Time”), winner of the Kids category at the Alexa Developer Skill Awards 2019, are a team that formed when its members met at the Skill Awards hackathon held on June 9. We asked them about their experience with the Skill Awards and the craft that went into the skill’s development.</p> <p><strong>國川雅司</strong> (cloud technology engineer)</p> <p><strong>南島康一</strong> (server-side engineer, 株式会社カラーズ)</p> <p><strong>小川麟太郎</strong> (creative planner / art director, Dentsu Digital)</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/Sakananojikan_Blog._CB1570501154_.jpg" /></p> <p>&nbsp;</p> <p><strong>― Why did you decide to take part in the hackathon?</strong></p> <p>國川: I joined simply because it sounded fun. I went in half hoping I might meet a designer there.</p> <p>南島: I had never taken part in a hackathon before, but the Alexa hackathon had a low barrier to entry, so I decided to take the plunge. I was also interested in the Skill Awards, so I figured it could be two birds with one stone.</p> <p>小川: In my day job I do research and planning around VUI, and I wanted to learn more about how to make the most of it. I had also never had the chance to sit down with engineers to think through and experiment with what Alexa can actually do. I wanted to join the hackathon and explore how a project could become more interesting while freely reshaping it.</p> <p>&nbsp;</p> <p><strong>― How did you form your team at the hackathon?</strong></p> <p>國川: I’m not very good at that part... (laughs)</p> <p>小川: I saw 國川-san’s idea, found it interesting, and approached him myself. The hackathon’s format - everyone votes on individual ideas and then teams match up freely - worked well, I think.</p> <p>南島: I had been thinking I might just build something on my own, but since I had come all the way to the hackathon, I saw the idea, decided to give it a try, and joined the team.</p> <p>&nbsp;</p> <p><strong>― Did you have the idea for the skill from the start?</strong></p> <p>國川: It began when I was looking at the Echo Spot’s screen and thought it looked like a goldfish bowl - I wanted to build a skill that treated the screen as an aquarium. We decided to make it a kids skill during the hackathon itself.</p> 
<p>南島: At first we were thinking mainly about monetization, but since kids skills can’t use in-skill purchasing, we weighed monetization against going for the kids category and decided to go with kids.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/IMG_6676._CB1570501850_.jpg" style="display:block; height:420px; margin-left:auto; margin-right:auto; width:560px" /></p> <p>&nbsp;</p> <p><strong>― What did you do to make the skill more fun?</strong></p> <p>國川: We focused on the fish’s state changing as time passes. It gets hungry, the tank gets dirty, the fish wants to play, and so on - we designed it so there is something new to discover every time you open the skill. We also added a playful touch: after ten days, the fish grows into an adult.</p> <p>小川: For the fish designs, there are about twelve base images, plus color variations, videos, stills, and rejected drafts - we produced quite a lot.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/IMG_6669._CB1570501850_.jpg" style="display:block; height:431px; margin-left:auto; margin-right:auto; width:574px" /></p> <p>&nbsp;</p> <p><strong>― How did you design the VUI?</strong></p> <p>小川: Early on we organized things with a flowchart, focusing only on sorting out the intents and the flow, without any spoken-language phrasing or variations. After that, we defined the personality of Alexa as the speaker: what kind of person is talking to the child, and rules for what the speaker does and doesn’t say. We then wrote out the conversations scenario by scenario, fleshed them out, and created separate dialogue for the first launch, for daily repeat use, and so on.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/IMG_6678._CB451518903_.jpg" style="display:block; height:660px; margin-left:auto; margin-right:auto; width:574px" /></p> <p>&nbsp;</p> <p><strong>― How were you able to apply your everyday work to your role in this skill’s development?</strong></p> <p>國川: I normally work on the infrastructure side and never get to work with designers, so everything that came from the design side felt fresh. The designer would polish an idea I proposed, and I would then build it out again. I was impressed by how much they refined the original idea.</p> <p>&nbsp;</p> <p><strong>― After the hackathon, how did you proceed with development?</strong></p> <p>小川: We created a concept board at a very early stage. It was great to have a single sheet summarizing the world of サカナノジカン, how we wanted it to be used, and who we wanted to use it. On that one sheet we captured the skill’s flow and the fish’s reactions when you care for it, which made it clear what each of us had to do to realize it. It also let us plan ahead - for example, seeing that we would have to cut certain steps to make the deadline.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/IMG_6660._CB1570501850_.jpg" style="display:block; height:507px; margin-left:auto; margin-right:auto; width:380px" /></p> <p>&nbsp;</p> <p><strong>― Why did you go with pixel art?</strong></p> <p>小川: Partly it was a matter of workload, but the biggest reason was that we set the target audience at three to four years old. For children that age to clearly recognize the creature as a living thing while still enjoying the colors, we felt that stylizing it as pixel art, rather than aiming for a realistic look, would let them immerse themselves in the skill’s world. When we submitted, time constraints meant we went with blue only, but in the end we shipped three colors - red, blue, and yellow - and each fish grows up looking different. We pictured a world where children could enjoy that individuality: “mine is round,” “mine is pointy.”</p> <p>國川: When I first saw the pixel art, I went “whoa” (laughs) - it had changed quite a bit since the hackathon.</p> <p><img alt="https://qiita-user-contents.imgix.net/https%3A%2F%2Fqiita-image-store.s3.ap-northeast-1.amazonaws.com%2F0%2F308709%2Ff8236d86-4ed2-9b81-7b4c-fb5a0154edee.png?ixlib=rb-1.2.2&amp;auto=compress%2Cformat&amp;gif-q=60&amp;s=4c711614ad1b263bd2c9d525bb96ba66" src="https://qiita-user-contents.imgix.net/https%3A%2F%2Fqiita-image-store.s3.ap-northeast-1.amazonaws.com%2F0%2F308709%2Ff8236d86-4ed2-9b81-7b4c-fb5a0154edee.png?ixlib=rb-1.2.2&amp;auto=compress%2Cformat&amp;gif-q=60&amp;s=4c711614ad1b263bd2c9d525bb96ba66" style="display:block; height:309px; margin-left:auto; margin-right:auto; width:529px" /></p> <p>小川: For the artwork, I showed everyone the original drawings, made several patterns - what if it’s for a boy? a girl? a child who likes unusual things? - and we decided by talking it over together.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/IMG_6673._CB1570502035_.jpg" style="display:block; height:390px; margin-left:auto; margin-right:auto; width:521px" /></p> <p>&nbsp;</p> <p><strong>― What was good about taking part in the hackathon?</strong></p> <p>小川: It was my first hackathon, so I was quite nervous, but the biggest gain was arriving at a core idea I could never have come up with on my own. Pushing one another along and keeping to a schedule while developing in the short gaps between work was a really good experience.</p> <p>南島: Meeting the team. There were new discoveries, and it made me like hackathons themselves. I also got to meet people outside of work and experience making something together. Because it sits halfway between business and hobby, I could create comfortably, enjoying just the right amount of tension.</p> <p>國川: 
Meeting friends who love Alexa. I had the idea itself, but on my own I could never have designed the fish, so I’m glad I found these teammates.</p> <p>&nbsp;</p> <p><strong>― What were the difficult parts of development?</strong></p> <p>南島: There were things we wanted in the design that we couldn’t realize as-is. Specifically, we couldn’t have the fish and seaweed sway while Alexa is idle, without a wake word. * For more development details, see 南島-san’s <a href="https://qiita.com/ikegam1/items/2cb254afef9d87420150">blog</a>.</p> <p>&nbsp;</p> <p><strong>― What features are you hoping for as a developer going forward?</strong></p> <p>南島: I’d like to be able to run animations while waiting for the user to speak.</p> <p>小川: Being able to configure the idle state yourself would have a big impact. It would even enable uses like displaying art. If a skill could keep running, I think it would become even more appealing. It would also be nice if <a href="https://developer.amazon.com/ja/docs/custom-skills/speech-synthesis-markup-language-ssml-reference.html">SSML</a> tuning could sound a bit more natural.</p> <p>南島: I’d like <a href="https://developer.amazon.com/ja/docs/alexa-presentation-language/apl-avg-format.html">AVG</a> to be easier to use.</p> <p>&nbsp;</p> <p><strong>― Did anything change for you after taking part in the Skill Awards?</strong></p> <p>國川: There aren’t many opportunities to have people outside your company share their experience and knowledge, and this time I was able to grow a great deal as an individual. Receiving technical advice and working alongside a designer was a valuable experience, and I could create the piece amid an enjoyable kind of tension.</p> <p>小川: It gave me the motivation to do more work in this area and to bring in all kinds of people.</p> <p>南島: At work, too, I’ve come to sense the possibilities of what might be done with smart speakers. At the same time, since this project was outside of work, I could give it my all without any profit-and-loss calculations, in a good sense - that was another plus.</p> <p>&nbsp;</p> <p><strong>― What are your plans for the future?</strong></p> <p>南島: If we forced サカナノジカン into monetization, it could warp the concept, so we have no intention of forcing in-skill purchasing into it.</p> <p>小川: We’re thinking about what this team can do next, and we’re brainstorming new ideas online.</p> <p>&nbsp;</p> <p><strong>― What advice would you give people joining future awards or hackathons?</strong></p> <p>南島: I think our success factor was teamwork. The three-person team composition was also a good fit.</p> <p>南島, 小川: There’s nothing to lose by joining a hackathon, so just go. There was a huge amount to learn, and you’re certain to get something out of it.</p> <p>國川, 南島, 小川: Each of us respecting the others while carrying the concept through to the end - that was the key to our win.</p> <p>&nbsp;</p> <p><strong>■ Be sure to also watch the introduction video created by the サカナノジカン team ⇒ <a href="https://www.youtube.com/watch?reload=9&amp;v=nYXrQD8EBxA&amp;feature=youtu.be">Watch the video</a></strong></p>