Alexa Blogs Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2019-10-18T21:21:36+00:00 Apache Roller /blogs/alexa/post/36126a1a-c8aa-4b8b-b8d6-c8e2ac5e2a83/designing-skills-for-in-skill-purchasing-part-3-writing-effective-upsells Designing Skills for In-Skill Purchasing, Part 3: Writing Effective Upsells Ben Grossman 2019-10-18T21:21:36+00:00 2019-10-18T21:21:36+00:00 <p>Welcome to Part 3 of our series on designing skills with in-skill purchasing (ISP)! In <a href="">Part 2</a>, we discussed design best practices for deciding where you might consider (or not consider) asking customers to purchase content, whether via subscription, one-time purchase, or consumable. You can learn more about those ISP types in <a href="">Part 1</a>.<br /> <br /> Now that we’ve considered the most (and least) ideal locations and intervals for surfacing upsells to our customers, how should we ask them to purchase? What will we say? As I’ve stated before, there are no popup windows, buy buttons, or any other way for a customer to passively consume information about purchases. Your skill will have a short amount of time to tell the customer clearly what an ISP offers – the value proposition – and ask if they’re interested before routing them to the Amazon purchase flow.<br /> <br /> Our value proposition should be relevant to the customer with specific and transparent language spoken in a friendly, conversational tone … and be quick about it!
We may be interrupting the customer’s experience briefly, but we shouldn’t <em>dis</em>rupt them.</p> <h2><strong>Be Relevant to Be Trusted.</strong></h2> <p>Customers abandon skills that surface too many irrelevant or uninteresting offers for in-skill purchases. Conversely, skills may build rapport with customers by offering exactly what they need, when they need it, and avoiding offers they don’t need or aren't likely to find interesting. We want to begin by offering the customer something they're likely to want or need, appropriate to their context.<br /> <br /> In our hypothetical trivia skill – Seattle Super Trivia – we wouldn't want to try to sell the player hints (as a consumable) when they haven't yet used the free hint we give each player daily. We should never offer an in-skill purchase a customer doesn’t need, has already purchased, or that is outsized to their needs. On the other hand, we're more likely to interest the customer in a 50-question Seattle Sports Superstars trivia pack (a one-time purchase) if they've just aced our free daily 5-question trivia that was all about Seattle sports.<br /> <br /> Some instances in which we may want to be careful about offering our customers something they don’t need include:</p> <ul> <li>Offering a one-time-purchase content pack the customer already owns.
For this reason, randomized product upsells are not very effective and quickly become annoying.</li> <li>Offering a purchase when the customer already has one (or many) they haven't used yet.</li> <li>Offering the most expensive type of consumable content to customers who haven't been using the skill very long or successfully.</li> <li>Offering an ISP outsized to what the customer may need to finish their experience successfully, such as offering a large “bundle” of hints to a trivia game with only one question left in the round.</li> </ul> <h2><strong>Include a Clear Value Proposition.</strong></h2> <p>In other words, tell the customer exactly what kind of new or enhanced experience their purchase will buy. Why would they be interested? How much content is there? How will their expected experience change? Be specific, but concise. You should consider these questions as your litmus test for writing a good upsell.<br /> <br /> Now that we know what questions we should answer while crafting our upsell, let’s take a look at one that we created for Seattle Super Trivia, mentioned above:<br /> <br /> Congrats! You finished today’s Seattle Super Trivia challenge featuring your favorite Seattle sports stars with a score of 4 out of 5. Since you’re such a rabid fan of the greens and blues, you might want to try the Seattle Sports Superstars pack with 50 more questions to test your mettle as the 12th man. Want to learn how to get it?<br /> <br /> Let’s take a moment to dissect this. From the very first sentence, you can immediately tell that we’re offering this upsell after our customer has completed a game and done quite well. This is a great place to offer a pack of new questions since they’ve just finished a game, scored well, and probably want more. We’ve chosen a pack to offer that matches the theme of the day since the player seems knowledgeable about it.
We’ve also included the number of questions in the pack.<br /> <br /> Writing a great upsell becomes even more critical for subscription skills that will offer a free trial to fulfill the free content requirement (you can learn more about the free trial requirement in Part 1: Scoping a Skill). A customer won’t get the chance to access the experience for free without saying “yes” to that upsell message, so we’ll have to tell them more about it. We’ve decided to give our Seattle Super Trivia player a few free questions daily so this won’t be a concern, but what if we offered a free trial model instead? We’ll need to write an upsell that answers our litmus test. Let’s start by looking at a weak upsell for our free trial.<br /> <br /> Customer: “Alexa, play Seattle Super Trivia”<br /> Alexa: “Sorry, you need a subscription to Seattle Super Trivia to start playing. New members get it free for 30 days. Want to learn how to subscribe?”<br /> <br /> Right from the beginning we committed a cardinal design sin. We start off with an apology and then ask our customer to subscribe. Now let’s take a look at a better upsell that answers our litmus test questions.<br /> <br /> Customer: “Alexa, play Seattle Super Trivia”<br /> Alexa: “You can start playing with a subscription to Seattle Super Trivia. Unlock ten daily challenge questions, half-price mega-trivia question packs, and extra free hints every month. New members get it free for 30 days. Want to learn how to subscribe?”<br /> <br /> This upsell is much better. Our customers now understand that purchasing a subscription grants access to ten daily questions, discounts, and extra free hints, and that they can try the skill free for 30 days.<br /> <br /> Let’s take a look at a few different upsells in a real conversation. Some have stronger value propositions than others.
Which one would you be most likely to respond to?</p> <p><strong>Strong Value Proposition:</strong></p> <p><em>[Video example]</em></p> <p><strong>Weak Value Proposition:</strong></p> <p><em>[Video example]</em></p> <p><strong>No Value Proposition:</strong></p> <p><em>[Video example]</em></p> <h2><strong>Build Trust</strong></h2> <p>To increase your chances of converting your upsells into a sale, it’s important to build trust. We’ve demonstrated some ways to build trust by offering a quality free experience and surfacing upsells for relevant products at carefully chosen moments. Building trust also means avoiding skill-development and marketing jargon, as well as formal, non-conversational language. Terms and phrases such as those below are a red flag that the skill’s upsell messages may sound unrelatable, confusing, or untrustworthy to the customer:</p> <ul> <li>Purchase, in-skill purchase, monetization</li> <li>Bundle, pack, expansion (without any accompanying descriptors)</li> <li>Menu, options</li> <li>Content, premium content</li> <li>Access</li> </ul> <p>You’ll notice we haven’t included pricing in any of our upsell examples. That’s because pricing is delivered by the Amazon purchase flow, which handles final purchase confirmation after the skill routes the customer there (we will discuss this in Part 4: Handle Transactions &amp; Transitions). Skills that include prices in their ISP upsells will fail certification.</p> <h2><strong>Be Transparent. No “Landmines”.</strong></h2> <p>Since a customer is purchasing a product they cannot see or touch, upsells and product descriptions must help the customer imagine the product and ensure they have all the details they’ll need to confidently agree to the purchase.
That begins with introducing our premium content clearly. We shouldn’t offer paid options in the same list as other free content, hoping the customer will select the premium option so we can deliver our upsell. That sets our customers up for a “landmine” type of experience. They’ll lose trust in our skill if they never know when they’ll get an upsell message instead of the content they thought they had selected.<br /> <br /> How long would you continue to use Seattle Super Trivia if this happened?<br /> Customer: “Alexa, play Seattle Super Trivia”<br /> Alexa: “Welcome back to Seattle Super Trivia. You can play the daily challenge, or start the Seattle Sports Heroes pack. Which would you like?”<br /> Customer: “Sports heroes!”<br /> Alexa: “The Seattle Sports Heroes pack is available for purchase. It has 50 sports questions to stump fans of the Sonics, Seahawks, Storm, and more. Want to learn how to get it?”<br /> <br /> Our messaging should be <em>pro</em>active, not <em>re</em>active. A more effective approach for offering this pack to our customer would be:<br /> Customer: “Alexa, play Seattle Super Trivia”<br /> Alexa: “Welcome back to Seattle Super Trivia. You already knocked it out of the park on today’s trivia challenge. I can’t wait to see how you do tomorrow. In the meantime, would you like to learn how to go into extra innings with the Seattle Sports Heroes pack?”<br /> <br /> You may have also noticed by now that none of our examples explicitly ask the customer if they’d like to buy something or subscribe. That’s because we don’t want to mislead them into thinking a purchase will be made immediately if they say “yes,” because it will not. Since the Amazon purchase flow will ask for final confirmation after giving the price, it will ask “Do you want to buy it?” or “Should I start your subscription?” If we use “buy” or “subscribe” language in the skill, a customer is likely to say “no,” since they haven’t heard the price yet!</p> <h2><strong>Ask for the Sale.
One Sale.</strong></h2> <p>Upsells are most effective when they present one option as a simple question with a yes-or-no answer. We don’t have much time to explain one product to a customer, much less two. Either-or prompts can be confusing. If the skill offers multiple ISP options, surface only the one most relevant to the customer.<br /> <br /> Let’s take a look at the difference between an upsell that offers one ISP versus several.</p> <p><strong>Do Sell One Relevant Product:</strong></p> <p><em>[Video example]</em></p> <p><strong>Do NOT Sell Everything at Once:</strong></p> <p><em>[Video example]</em></p> <p>Passively telling customers about purchases and not asking for the sale is even less effective at converting customers to a sale. Doing so requires the customer to remember an utterance for later without being prompted. This kind of unsolicited message often feels disruptive to customers, too.<br /> <br /> <strong>Don’t:</strong> “Before we start the daily challenge, did you know that you can buy extra Seattle Super Trivia hints to help you through those tough rounds? Just ask me to buy hints at any time during your game. Let’s get started …”</p> <h2><strong>Strike the Right Tone.</strong></h2> <p>Customers have expectations about Alexa’s personality and tone that they’ve learned over time. Customers may respond more positively to messages that are consistent with their expectations of Alexa and the experiences they’ve previously had in your skill. Customers lose trust in skills that badger, threaten, or nag them about purchases.<br /> <br /> Messages should be positive and delightful and remind customers of the product’s value before and after a purchase is made.<br /> <br /> <strong>Do:</strong> “Want to bring on the rain? Er, I mean more trivia? Seattle needs a hero like you.
A Seattle Sports Heroes pack with 50 more questions, that is! Want to learn how to get it?”<br /> <br /> Celebrate the benefit after the purchase, offering an immediate opportunity to use it where possible.<br /> <br /> <strong>Don’t:</strong> Your purchase is complete. <em>&lt;ends skill session&gt;</em><br /> <br /> <strong>Do:</strong> “Bring on the rain–er … trivia! Thanks for buying the Seattle Sports Heroes trivia pack! Want to start playing it?”<br /> <br /> Avoid taking on a nagging tone, and avoid implicit or explicit threats. Friendly hints over nagging are a more trustworthy means of communication.<br /> <br /> <strong>Don’t:</strong> <em>“If you stock up on hints before you start the round, you could avoid a mistake that could hurt your rankings. You are currently in danger of dropping in rank. Want to buy more hints?”</em><br /> <br /> <strong>Do: </strong><em>“Recovering from a small mistake during this round could help you reach the next level. You might consider an insurance policy. Want to stock up on hints?” </em><br /> <br /> <strong>Positive vs. Negative Tone Examples:</strong></p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe><br /> <br /> Now that we’ve learned how to write dialog that gets customers interested in your in-skill offerings, it’s time to think about what happens next. A customer will interact with an Amazon purchase flow to complete the purchase, and our skill dialog will need to transition to and from that flow gracefully. 
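</p>

<p>The handoff itself is triggered from skill code. Below is a minimal sketch of how a skill might construct the upsell directive that routes the customer to the Amazon purchase flow, using the Alexa Skills Kit <code>Connections.SendRequest</code> interface. The product ID, message, and token here are hypothetical placeholders:</p>

```typescript
// Shape of the directive that hands the customer off to the Amazon
// purchase flow (per the ASK in-skill purchasing interface).
interface UpsellDirective {
  type: "Connections.SendRequest";
  name: "Upsell";
  payload: {
    InSkillProduct: { productId: string };
    upsellMessage: string; // the upsell copy, ending in a yes/no question
  };
  token: string; // returned with the purchase result so the skill can resume
}

// Build the directive for a single, relevant product.
function buildUpsell(productId: string, message: string, token: string): UpsellDirective {
  return {
    type: "Connections.SendRequest",
    name: "Upsell",
    payload: {
      InSkillProduct: { productId },
      upsellMessage: message,
    },
    token,
  };
}

// Hypothetical product ID for the Seattle Sports Heroes pack.
const directive = buildUpsell(
  "amzn1.adg.product.sports-heroes-pack",
  "Would you like to learn how to go into extra innings with the Seattle Sports Heroes pack?",
  "upsell-after-daily-challenge"
);
```

<p>In an ASK SDK handler, an object like this is attached to the response (for example, via <code>responseBuilder.addDirective(...)</code>); Alexa speaks the upsell message, and a “yes” routes the customer into the purchase flow, which handles pricing and final confirmation.</p>

<p>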
Stay tuned for Part 4 of this series, where we’ll discuss how to handle transactions and transitions.</p> /blogs/alexa/post/1be1a55a-9317-4148-8917-ff725ffbf7cd/adam-savage-sharing-collaboration-are-keys-to-the-future-for-builders Adam Savage: Sharing, Collaboration Are Keys to the Future for Builders June Lee 2019-10-18T16:18:57+00:00 2019-10-18T16:18:57+00:00 <p>We’re still in the early days of voice technology, and the Alexa developer community is shaping the future of voice as we speak. Together, we’ve hit many milestones along the way, including the <a href="">latest 100,000 skills celebration</a>, thanks to your imagination, innovation, and hard work.</p> <p>I recently sat down with Adam Savage, and he shared his recommendations on how to approach the Alexa skill development process to successfully deliver a meaningful experience for customers. Adam, well-known for his 14 seasons as a star of the popular Mythbusters television show, is an expert builder himself. He has almost forty years’ experience in building, and his newest television show, Savage Builds, emphasizes collaboration as he pulls in others to help him build outrageous projects. His recent book, <a href="">Every Tool’s a Hammer: Life is What You Make It</a>, explores his own building obsession, as well as lessons on creativity and finding inspiration.</p> <p>During our chat, I asked Adam about his ideas on creativity in building and how they apply to voice development. As you can imagine, Adam had unique insight into the process and what works.
We discussed four key ideas builders should keep in mind as they begin their projects, the mindset they should embrace, and why the future of building lies in collaboration instead of working alone.</p> <h2>Every Builder Contributes a Unique Point of View</h2> <p>Every builder begins each project with their own unique vision. Because each developer has their own point of view, says Adam, “They’re going to contribute something interesting” to the project. This is the first step to solving the problem they’re working on.</p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <h2>Forget About “Fail Fast”—the Future Is “Iterate Fast”</h2> <p>The idea of “fail fast” is catchy, but it doesn’t capture what we really meant. What a builder really needs to do is “iterate fast.” Adam says it’s not about achieving perfection in your design, but about figuring out what works or what doesn’t, and moving on to the next step as quickly as possible. A successful builder doesn’t dwell on the outcome they want, but works toward the outcome the project wants.</p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <h2>Sharing Ideas Drives a Stronger Builder Community</h2> <p>The best projects aren’t built in isolation. Adam stressed that sharing is extremely important in the creative community, though it seems counterintuitive in our culture. The biggest dividends come from sharing ideas, sharing credit, and sharing encouragement. Adam says, “When we share with others, we are widening the circle of collaboration.”</p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <h2>Storytelling: The Ultimate Act of Collaboration</h2> <p>Humans are the only animals that tell stories, and it’s one of our strongest tools for collaboration. Adam says when we listen to each other and hear each other’s experiences, we have a chance to see from another’s point of view. 
Our strength lies in collaborating with others, both in development and in our lives.</p> <h2>Enter the Alexa Developer Community Contest Now to Give and Receive Feedback on Alexa Skills</h2> <p>It was a pleasure talking with Adam. If Adam’s enthusiasm to innovate and collaborate with the growing Alexa developer community inspired you, participate in the <a href="">Alexa Developer Community Contest</a> to give and receive feedback on Alexa skills. We have $7,000 in total prizes, including participation prizes and bonus prizes. Hope you can join us!</p> /blogs/alexa/post/d54c8cab-bb83-45db-af97-f53743daf7c3/join-the-alexa-team-at-aws-re-invent-2019 Join the Alexa team at AWS re:Invent 2019 June Lee 2019-10-17T16:25:10+00:00 2019-10-17T20:30:34+00:00 <p>We’re excited to invite you to join the Alexa team at AWS re:Invent 2019, December 2-6 in Las Vegas, Nevada. AWS re:Invent is a learning conference hosted by Amazon Web Services (AWS) for the global cloud computing community. The event will feature keynote announcements, training and certification opportunities, access to more than 2,500 technical sessions, a partner expo, after-hours events, and so much more.</p> <h2>Meet Us at the Alexa Booth</h2> <p>Join us at the Alexa re:Invent booth located at the entrance of the Main Expo (Hall B) from Monday, December 2 to Thursday, December 5.
Check out our newest Echo devices and learn what you can build with Alexa to help serve your customers more naturally with voice anywhere they go. You can also take this opportunity to meet our team and ask any questions.</p> <h2>More Than 50 Alexa Sessions and Workshops</h2> <p>This year, there will be <a href=";searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;p=&amp;i(19577)=32821">more than 50 Alexa sessions</a> at re:Invent, including hands-on workshops, interactive chalk talks, and technical breakouts. Join sessions to dive deep into the technology across Alexa Skills Kit, Alexa Voice Service, and Alexa’s smart home capabilities, and learn how you can build compelling voice experiences. Session topics include designing great multimodal experiences, creating conversational voice interfaces, integrating in-skill purchasing, building Alexa-enabled devices for the connected world, and more. You’ll hear from senior leaders, technical evangelists, product team members, and engineers who will share the latest and greatest practices for building with voice.</p> <h2>What You’ll Learn</h2> <p>Wondering what you’ll learn in our <a href=",;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;sessionTypeID=2&amp;p=&amp;i(19577)=32821">breakout sessions</a>, <a href=",;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;sessionTypeID=2623&amp;p=&amp;i(19577)=32821">chalk talks</a>, and <a href=",;searchType=session&amp;tc=0&amp;sortBy=abbreviationSort&amp;sessionTypeID=2523&amp;p=&amp;i(19577)=32821">hands-on workshops</a>? Here’s a preview of just a few of the sessions offered:</p> <ul> <li><strong>Alexa, What Can I Do Now?</strong><br /> Every year, the Alexa Skills Kit (ASK) grows in capabilities and features. 
In this state of the union, we discuss the latest trends in conversational artificial intelligence, highlight some of the most innovative skills, and provide an overview of everything that has been released in the past year for ASK.</li> </ul> <ul> <li><strong>Improving Customer Retention for Your Alexa Skill</strong><br /> Retaining your customers is both an art and a science. In this session, you will discover the mechanisms you can use to keep your customers coming back for more. You will have the opportunity to ask questions, and discuss ideas among fellow skill developers.</li> </ul> <ul> <li><strong>Building Robots That Respond to Voice</strong><br /> Learn how you can control robots with your voice through the Alexa Skills Kit. In this workshop, we will live-code with the Robot Operating System (ROS) and AWS RoboMaker to build an intelligent robotic application that responds to our voice commands and navigates autonomously in a simulated environment.</li> </ul> <ul> <li><strong>How to Build Alexa Skills with AWS Database &amp; Storage Services</strong><br /> Learn how to leverage AWS database and storage services effectively in your Alexa skill.</li> </ul> <h2>See You at re:Invent</h2> <p>re:Invent is a great place to meet and interact with the Alexa team and a global community of builders. Visit the <a href="">AWS re:Invent</a> website to register now. 
Then, start building your event schedule by reviewing Alexa sessions in the event catalog and <a href="">reserving a seat</a> in available sessions.</p> <p>We can’t wait to see you in Vegas!</p> /blogs/alexa/post/3e98845a-2070-406e-9ea7-bb5d49ed096e/the-fever-data-set-what-doesn-t-kill-it-will-make-it-stronger1 The FEVER Data Set: What Doesn’t Kill It Will Make It Stronger Larry Hardesty 2019-10-17T13:00:00+00:00 2019-10-17T15:26:07+00:00 <p>The open challenge for the&nbsp;<em>F</em>act&nbsp;<em>E</em>xtraction and&nbsp;<em>Ver</em>ification (FEVER)&nbsp;workshop at EMNLP involved devising adversarial examples that would stump fact verification systems trained on the FEVER data set.</p> <p><sup><em>Arpit Mittal cowrote this post with Christos Christodoulopoulos</em></sup></p> <p>This year at the Conference on Empirical Methods in Natural-Language Processing (EMNLP), we will cohost the <a href="" target="_blank">Second Workshop</a> on Fact Extraction and Verification — or FEVER — which will explore techniques for automatically assessing the veracity of factual assertions online.</p> <p>Fact verification is an important part of Alexa’s question-answering service, enabling Alexa to validate the answers she provides and to justify those answers with evidence. The Alexa team’s interest in fact verification is widely shared, as is evidenced by a host of recent challenges, papers, and conferences — including the <a href="" target="_blank">Truth and Trust Online</a> conference.</p> <p>The workshop originated from a public data set — the FEVER data set — that <a href="" target="_blank">we created</a> together with colleagues at the University of Sheffield.
The data set contains 185,000 factual assertions, both true and false, which are correlated with Wikipedia excerpts that either substantiate or refute them.</p> <p>Like the first workshop, the second will feature invited talks from leaders in the field, papers on a range of topics related to fact verification, and presentations by contestants in an open, FEVER-based competition announced the previous spring.</p> <p>In the first FEVER competition, contestants used the FEVER data set to train machine learning systems to verify facts. The systems were evaluated according to their FEVER scores, which measure both the accuracy of their truth assessments and the quality of the supporting evidence they supply.&nbsp;</p> <p>This year’s FEVER competition was designed to help augment the FEVER data set through the well-studied machine learning technique of adversarial example generation. The technique has long been a staple of computer vision research but has recently gained ground in natural-language-processing research; Stanford University’s <a href="" target="_blank">SQuAD dataset</a> is one prominent example.</p> <p>Contestants were invited to produce test cases — either algorithmically or manually — that would elicit mistaken responses from fact verification systems trained on FEVER data. 
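</p>

<p>To make the target metric concrete, here is a simplified, illustrative sketch of a FEVER-style score: a prediction counts only when its label is correct and its submitted evidence fully covers at least one gold evidence set. The official scorer has additional rules (such as limits on the number of evidence sentences), so treat this as an approximation, and the data below as toy values:</p>

```typescript
interface Prediction {
  label: string;                // e.g. "SUPPORTS", "REFUTES", "NOT ENOUGH INFO"
  goldLabel: string;
  evidence: string[];           // predicted evidence sentence IDs
  goldEvidenceSets: string[][]; // each inner array is one sufficient gold set
}

// A prediction scores only if its label is right AND, for verifiable
// claims, its evidence contains every sentence of at least one gold set.
function feverScore(preds: Prediction[]): number {
  const correct = preds.filter((p) => {
    if (p.label !== p.goldLabel) return false;
    if (p.goldEvidenceSets.length === 0) return true; // no evidence required
    const found = new Set(p.evidence);
    return p.goldEvidenceSets.some((set) => set.every((s) => found.has(s)));
  });
  return correct.length / preds.length;
}

// Toy example: one fully correct prediction, and one with the right label
// but incomplete evidence, for a score of 0.5.
const score = feverScore([
  { label: "SUPPORTS", goldLabel: "SUPPORTS", evidence: ["Page_A:3"], goldEvidenceSets: [["Page_A:3"]] },
  { label: "REFUTES", goldLabel: "REFUTES", evidence: ["Page_B:1"], goldEvidenceSets: [["Page_B:1", "Page_B:2"]] },
]);
```

<p>An adversarial example succeeds when it drives down a score of this kind on systems trained on the original data.</p>

<p>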
Our hope was that, by identifying characteristics of the error-inducing test cases, we would learn new ways to augment the FEVER data, so that the resulting systems would be both more accurate and more resilient.<br /> <br /> <img alt="Adversarial_example.png" src="" style="display:block; height:246px; margin-left:auto; margin-right:auto; width:300px" /></p> <p style="text-align:center"><em><sup>Two examples of adversarial assertions designed to confound a system trained on an assertion (the original refuted instance) in the original FEVER data set, together with supporting evidence drawn from Wikipedia.</sup></em></p> <p>At the first FEVER workshop, <a href="" target="_blank">we reported</a> the performance of 23 teams that participated in the first challenge. The top four finishers allowed us to create versions of their systems that we could host online, so that participants in the second FEVER challenge could attack them at will.</p> <p>Since the first workshop, however, another 39 teams have submitted fact verification systems trained on FEVER data, pushing the top FEVER score from 64% <a href="" target="_blank">up to 70%</a>. Three of those teams also submitted hostable versions of their systems, bringing the total number of targets for the second challenge to seven. Following the taxonomy of the <a href="" target="_blank">Build It, Break It, Fix It</a> contest model, we call the designers of target systems “Builders”.</p> <p>Three “Breaker” teams submitted adversarial examples. One of these — the Columbia University Natural-Language Processing group, or CUNLP — was also a Builder.
CUNLP submitted 501 algorithmically generated adversarial examples; TMLab, from the Samsung R&amp;D Institute Poland, submitted 79 examples, most of which were algorithmically generated but a few of which were manual; and NbAuzDrLqg, from the University of Massachusetts Amherst Center for Intelligent Information Retrieval, submitted 102 manually generated examples.</p> <p>Only texts that look like valid assertions require verification, so we discounted adversarial examples if they were semantically or syntactically incoherent or if they could not be substantiated or refuted by Wikipedia data. On that basis, we created a weighted FEVER score called the resilience score, which we used to evaluate the Breakers’ submissions.</p> <p>We tested all three sets of adversarial examples — plus an in-house baseline consisting of 498 algorithmically generated examples — against all seven target systems. The average resilience of the Builder models was 28.5%, whereas their average FEVER score on the original data set was 58.3%. This demonstrates that the adversarial examples were indeed exposing omissions in the original data set.</p> <p>TMLab’s examples were the most potent, producing more errors per example than either of the others. They were generated using a <a href="" target="_blank">variation</a> of the <a href="" target="_blank">GPT-2</a> language model, which (like all language models) was designed to predict the next word in a sequence of words on the basis of those that preceded it.</p> <p>The CUNLP researchers used their successful adversarial examples as templates for generating additional training data. The idea was that if the model was re-trained on the type of data that tended to stump it, it would learn how to handle that data. CUNLP thus became not only a Builder and a Breaker but also our one “Fixer”.
After re-training, the CUNLP system became 11% more resilient to adversarial examples, and its FEVER score on the original task also increased, by 2%.</p> <p>In addition to presentations by Builders and Breakers, the workshop will also feature two oral paper presentations and 10 posters. The papers cover a range of topics: some are theoretical explorations of what it means to verify an assertion, drawing on work in areas such as stance detection, argumentation theory, and psychology; others are more-concrete experiments with natural-language-processing and search systems.</p> <p>The <a href="" target="_blank">invited speakers</a> include William Wang of the University of California, Santa Barbara; Emine Yilmaz of University College London, an Amazon scholar; Hoifung Poon of Microsoft Research; Sameer Singh of the University of California, Irvine; and David Corney of Full Fact.</p> <p>The problem of fact verification is far from solved. That’s why we’re excited to be cohosting this Second Workshop and pleased to see the wide adoption of the FEVER data set and the FEVER score and the contributions they’re making to continuing progress in the field.</p> <p><em>Christos Christodoulopoulos is an applied scientist, and Arpit Mittal is a senior machine learning scientist, both in the Alexa Information Domain group.</em></p> <p><a href="" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Related</strong>:</p> <ul> <li><a href="" target="_blank">Workshop web page</a></li> <li><a href="" target="_blank">Original FEVER paper</a></li> <li><a href="" target="_blank">FEVER dataset</a></li> <li><a href="" target="_blank">Teaching Computers to Answer Complex Questions</a></li> <li><a href="" target="_blank">Bringing the Power of Neural Networks to the Problem of Search</a></li> <li><a href="" target="_blank">Amazon Helps Launch Workshop on Automatic Fact Verification</a></li> <li><a href="" target="_blank">Public Release of Fact-Checking Dataset Quickly Begins to 
Pay Dividends</a></li> <li><a href="" target="_blank">Amazon and University of Sheffield Researchers Make Large-Scale Fact Extraction and Verification Dataset Publicly Available</a></li> </ul> /blogs/alexa/post/016ccae4-3849-4e1a-8b49-2c5bd5723490/new-alexa-skills-training-course-how-to-design-for-in-skill-purchasing New Alexa Skills Training Course: How to Design for In-Skill Purchasing Ben Grossman 2019-10-16T22:18:45+00:00 2019-10-16T22:18:45+00:00 <p>We’re excited to introduce our new Alexa Skills course, <a href="" target="_blank">How to Design for In-Skill Purchasing</a>. This free course outlines tips and best practices for designing a great monetized Alexa skill experience.</p> <h2><strong>Optimize Your Voice Experience for In-Skill Purchasing</strong></h2> <p>In order to effectively monetize your Alexa skills, you need to design an experience that inspires your customers to continue using your skill over and over. While a portion of the experience depends on the technical implementation (code, information architecture, APIs, etc.), it can only be as good as your voice interaction design. So we created a design-focused course to help you design a skill with in-skill purchasing.
You’ll learn what makes great premium content, when to make offers, how to write offers, how to handle transitions to and from the Amazon Purchase flow, and how to provide access to purchases.<br /> <br /> By completing this course, you’ll be equipped with the knowledge to design and optimize your skill for in-skill purchasing.</p> <p><strong>Course Components</strong></p> <ul> <li><a href="" target="_blank">Introducing Our Use Case</a></li> <li><a href="" target="_blank">Offer the Right Premium Content</a></li> <li><a href="" target="_blank">Make an Offer at the Right Time</a></li> <li><a href="" target="_blank">Write Effective Upsells</a></li> <li><a href="" target="_blank">Make a Smooth Handoff</a></li> <li><a href="" target="_blank">Provide Access to Purchases</a></li> <li><a href="" target="_blank">Wrapping Up &amp; Resources</a></li> </ul> <p>Whether you’ve previously built a skill with in-skill purchasing or you’re just starting out, we consider this course to be a milestone along your Alexa skills educational journey. You should be able to speed read through everything in about an hour. Keep in mind that it’s self-paced and you don’t need to do it all at once. In fact, we recommend completing a section or two, pausing to reflect or experiment and then coming back later to continue your learning. This course will also be a great resource to have open in a tab while designing your next monetized skill experience.</p> <h2>Get Started with How to Design for In-Skill Purchasing</h2> <p>The self-paced course is free and available for anyone ready to build Alexa skills. 
<a href="" target="_blank">Click here</a> to get started.<br /> <br /> Be sure to check out our ongoing multi-part blog series on designing skills with in-skill purchasing, which contextualizes many of the themes introduced throughout the course with real-world examples:</p> <ul> <li><a href="" target="_blank">Designing Skills for In-Skill Purchasing, Part 1: Scope It Right</a></li> <li><a href="" target="_blank">Designing Skills for In-Skill Purchasing, Part 2: Surface Upsells</a></li> </ul> <h2>More Resources to Enhance Your Alexa Skills</h2> <p>Once you’ve completed this course, we recommend you continue your learning by checking out these additional training materials:</p> <ul> <li><a href="" target="_blank">Introductory Guide</a>: Learn more about in-skill purchasing</li> <li><a href="" target="_blank">Premium Hello World Skill</a>: Learn to implement in-skill purchasing with this simple skill sample and tutorial</li> <li><a href="" target="_blank">Premium Fact Skill</a>: Learn to implement one-time purchases, subscriptions, and consumables together in a fact skill</li> <li><a href="" target="_blank">Technical documentation</a>: Learn how in-skill purchasing works at a low level</li> <li><a href="" target="_blank">Cake Walk: Build an Engaging Alexa Skill</a>: Learn how to get started building engaging experiences.</li> <li><a href="" target="_blank">Designing for Conversation Course</a>: Learn how to design more dynamic and conversational experiences.</li> <li><a href="" target="_blank">Alexa Design Guide</a>: Learn the principles of situational voice design so that you can create voice-first skills that are natural and user-centric.</li> </ul> /blogs/alexa/post/a466dbf7-d9d0-463f-99da-6e632f5352e9/hear-it-from-a-skill-builder-how-to-make-your-skill-stand-out-with-sonic-branding-and-earcons Hear It from a Skill Builder: How to Make Your Skill Stand Out with Sonic Branding and “Earcons” Michelle Wallace
2019-10-15T18:51:35+00:00 2019-10-15T18:51:35+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>Hear from Alexa Champion Steven Arkonovich about how he used sonic branding and “earcons” to help his skill, Big Sky, stand out.</p> <p><img alt="" src="" /></p> <p><em>Today’s guest blog post is from</em> <a href=""><em>Steven Arkonovich</em></a><em>,</em> <em>an Alexa Champion and professor of philosophy and humanities at Reed College in Portland, Oregon. Steven was an Alexa enthusiast from the very beginning, actively writing Alexa applications before there even was an API. He has since developed a Ruby framework for quickly creating Alexa skills as web services. An avid audiophile, Steven’s excited about the possibilities that Alexa opens up for interacting with digital media.</em></p> <h2>Exploring a Sonic Branding Opportunity</h2> <p>My skill <a href="" target="_blank">Big Sky</a> is the most reviewed weather skill on Alexa (4,711 reviews) and has been featured on CNET, Wired, and TechCrunch. But it’s not the only weather skill on Alexa. And, of course, it lives alongside the native Alexa weather experience. From the start, I realized that I needed to do something to separate the skill from the rest of the pack, and give users a quick way to know that they’ve reached the content they were looking for. Using <a href="">Alexa Presentation Language (APL)</a>, I created a distinctive look for Big Sky that quickly distinguishes it from other weather skills. But the majority of users invoke the skill on a voice-only device, so I knew I also needed a way to make it stand out using <em>sound alone.</em></p> <p>To get started, I reached out to Eric Seay at <a href="" target="_blank">Audio UX in NY</a> to explore opportunities for <em>sonic branding.</em> Essentially, sonic branding is the use of audio to distinguish your brand. Together, Eric and I started working on creating a distinctive sound for Big Sky.
The main goal was to assure users that they were getting a genuine Big Sky report, as opposed to the native experience. But the sound had to do more than that. The audio aesthetic needed to be clear and concise to reflect the accuracy of the skill, while also maintaining a warm tone to showcase a sense of helpfulness. Ideally, it would create an emotional connection to the skill.</p> <p>After working through several possibilities, the Audio UX team landed on something that accomplished all the goals. The result was the Big Sky audio logo—and a set of “earcons” that extended the audio logo.</p> <p>&nbsp;</p> <p><iframe allowfullscreen="" frameborder="0" height="180" src="//" width="320"></iframe></p> <p>&nbsp;</p> <h2>Earcons: Conveying Information Instantly</h2> <p>Having introduced the audio logo, Eric also came up with the idea of extending the simple logo into a set of “earcons” for the Big Sky experience. Earcons are, as the name suggests, an audio version of the more familiar icons. Icons convey meaningful information visually. It’s why your computer hard drive icons look like hard drives, and the trash icon looks like a trash can. Earcons do the same thing, but with sound: think of the “crumpling paper” sound your computer makes when you empty the trash. That’s an earcon.</p> <p>Big Sky has implemented five distinct earcons to alert the user of current weather conditions. There are sounds for rain, snow, wind, fog, and clear skies. The earcons are modifications of the main Big Sky audio logo, with the distinctive weather condition sounds layered on top of the basic audio logo. Here, for example, is the “rain” earcon:</p> <p>&nbsp;</p> <p><iframe allowfullscreen="" frameborder="0" height="180" src="//" width="320"></iframe></p> <p>&nbsp;</p> <p>This is a multifunctional and modular audio logo that also serves as an earcon to convey important information. 
When users invoke the skill, it simultaneously lets them know they’ve reached Big Sky, drives familiarity, and cues them into the current weather conditions. It’s useful, delightful, and really sets the Big Sky weather experience apart.</p> <h2>Using Sonic Branding in Your Own Skill</h2> <p>The most engaging Alexa skills take advantage of the full range of modalities available to Alexa: visuals, touch, and sound. But don’t limit “sound” to just spoken output. Consider adding some sort of audio logo to your skills as well. Audio logos can:</p> <ul> <li>Distinguish your skill experience among competing skills</li> <li>Provide consistency for your product across platforms</li> <li>Drive skill familiarity</li> <li>Create an emotional connection to your skill</li> </ul> <p>Interested in enhancing your own skill with an audio logo or other element of sonic branding? I’ll leave you with a few tips:</p> <ul> <li>You don't have to actually be a brand to develop sonic branding.</li> <li>Take the time to develop your audio identity like you would develop your skill.</li> <li>Identify the most important audio moments in your skill to benefit the user. Skill launch is a great moment to immediately let the user know that they are transitioning into a new experience.</li> <li>Make the audio moments count, and avoid creating a “noisy” skill by filling it up with unnecessary sounds.</li> <li>If your skill requires a bit of time to respond, you can deliver the audio logo using <a href="">progressive response</a>.
The user will get immediate feedback that they’ve reached your skill, and you get a second or two to ready your response.</li> </ul> /blogs/alexa/post/67b9b4a5-9ed8-4621-b717-36e07a393348/tools-for-generating-synthetic-data-helped-bootstrap-alexa-s-new-language-releases Tools for Generating Synthetic Data Helped Bootstrap Alexa’s New-Language Releases Larry Hardesty 2019-10-11T13:00:00+00:00 2019-10-14T13:30:40+00:00 <p>Synthetic-data generators provided initial training data for natural-language-understanding models in Hindi, U.S. Spanish, and Brazilian Portuguese.</p> <p>In the past few weeks, Amazon announced versions of Alexa in three new languages: Hindi, U.S. Spanish, and Brazilian Portuguese.&nbsp;</p> <p>Like all new-language launches, these addressed the problem of how to bootstrap the machine learning models that interpret customer requests, without the ability to learn from customer interactions. At a high level, the solution is to use synthetic data. These three locales were the first to benefit from two new in-house tools, developed by the Alexa AI team, that produce higher-quality synthetic data more efficiently.</p> <p>Each new locale has its own speech recognition model, which converts an acoustic speech signal into text. But interpreting that text — determining what the customer wants Alexa to do — is the job of Alexa’s natural-language-understanding (NLU) systems.</p> <p>When a new-language version of Alexa is under development, training data for its NLU systems is scarce. 
Alexa feature teams will propose some canonical examples of customer requests in the new language, which we refer to as “golden utterances”; training data from existing locales can be translated by machine translation systems; crowd workers may be recruited to generate sample texts; and some data may come from <a href="" target="_blank">Cleo</a>, an Alexa skill that allows multilingual customers to help train new-language models by responding to voice prompts with open-form utterances.</p> <p>Even when data from all these sources is available, however, it’s sometimes not enough to train a reliable NLU model. The new bootstrapping tools, from Alexa AI’s Applied Modeling and Data Science group, treat the available sample utterances as templates and generate new data by combining and varying those templates.</p> <p>One of the tools, which uses a technique called grammar induction, analyzes a handful of golden utterances to learn general syntactic and semantic patterns. From those patterns, it produces a series of rewrite expressions that can generate thousands of new, similar sentences. The other tool, guided resampling, generates new sentences by recombining words and phrases from examples in the available data. Guided resampling concentrates on optimizing the volume and distribution of sentence types, to maximize the accuracy of the resulting NLU models.</p> <h3><strong>Rules of Grammar</strong></h3> <p>Grammars have been a tool in Alexa’s NLU toolkit since well before the first Echo device shipped. A grammar is a set of rewrite rules for varying basic template sentences through word insertions, deletions, and substitutions.</p> <p>Below is a very simple grammar, which models requests to play either pop or rock music, with or without the modifiers “more” and “some”. 
Below the rules of the grammar is a diagram of a computational system (a finite-state transducer, or FST) that implements them.</p> <p><img alt="grammar_2.png" src="" style="display:block; height:176px; margin-left:auto; margin-right:auto; width:400px" /><br /> <img alt="FST.png" src="" style="display:block; height:89px; margin-left:auto; margin-right:auto; width:600px" /></p> <p style="text-align:center">&nbsp;<sup><em>A toy grammar, which can model requests to play pop or rock music, with or without the modifiers “some” or “more”, and a diagram of the resulting finite-state transducer. The question mark indicates that the </em>some_more<em> variable is optional.</em></sup></p> <p>Given a list of, say, 50 golden utterances, a computational linguist could probably generate a representative grammar in a day, and it could be operationalized by the end of the following day. With the Applied Modeling and Data Science (AMDS) group’s grammar induction tool, that whole process takes seconds.</p> <p>AMDS research scientists Ge Yu and Chris Hench and language engineer Zac Smith experimented with a neural network that learned to produce grammars from golden utterances. But they found that an alternative approach, called Bayesian model merging, offered similar performance with advantages in reproducibility and iteration speed.</p> <p>The resulting system identifies linguistic patterns in lists of golden utterances and uses them to generate candidate rules for varying sentence templates. For instance, if two words (say, “pop” and “rock”) consistently occur in similar syntactic positions, but the phrasing around them varies, then one candidate rule will be that (in some defined contexts)&nbsp;“pop” and “rock” are interchangeable.</p> <p>After exhaustively listing candidate rules, the system uses Bayesian probability to calculate which rule accounts for the most variance in the sample data, without overgeneralizing or introducing inconsistencies. 
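As a concrete illustration of what such rewrite rules produce, here is a minimal TypeScript sketch that expands the toy grammar above into all of its sentences. The data structures are simplified stand-ins for illustration, not the AMDS tool’s actual internals:

```typescript
// A toy grammar: each non-terminal maps to its list of alternatives.
// The empty string encodes the optional slot (the "?" in the FST diagram).
const grammar: { [nonTerminal: string]: string[] } = {
  SENTENCE: ["play SOME_MORE GENRE music"],
  SOME_MORE: ["", "some", "more"],
  GENRE: ["pop", "rock"],
};

// Recursively substitute non-terminals until only terminal words remain.
function expand(template: string): string[] {
  for (const nonTerminal of Object.keys(grammar)) {
    if (template.includes(nonTerminal)) {
      const results: string[] = [];
      for (const alternative of grammar[nonTerminal]) {
        results.push(...expand(template.replace(nonTerminal, alternative)));
      }
      return results;
    }
  }
  // No non-terminals left: collapse the extra space an empty option leaves.
  return [template.replace(/\s+/g, " ").trim()];
}

const sentences = expand("SENTENCE");
console.log(sentences); // 3 modifier choices x 2 genres = 6 sentences
```

A production grammar adds catalogue-backed substitutions on top of this, but the core generation step is exactly this kind of recursive rewriting.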
That rule becomes an eligible variable in further iterations of the process, which recursively repeats until the grammar is optimized.</p> <p>Crucially, the tool’s method for creating substitution rules allows it to take advantage of existing catalogues of frequently occurring terms or phrases. If, for instance, the golden utterances were sports related, and the grammar induction tool determined that the words “Celtics” and “Lakers” were interchangeable, it would also conclude that they were interchangeable with “Warriors”, “Spurs”, “Knicks”, and all the other names of NBA teams in a standard catalogue used by a variety of Alexa services.</p> <p>From a list of 50 or 60 golden utterances, the grammar induction tool might extract 100-odd rules that can generate several thousand sentences of training data, all in a matter of seconds.</p> <h3><strong>Safe Swaps</strong></h3> <p>The guided-resampling tool also uses catalogues and existing examples to augment training data. Suppose that the available data contains the sentences “play Camila Cabello” and “can you play a song by Justin Bieber?”, which have been annotated to indicate that “Camila Cabello” and “Justin Bieber” are of the type <em>ArtistName</em>. In NLU parlance, <em>ArtistName</em> is a <em>slot type</em>, and “Camila Cabello” and “Justin Bieber” are <em>slot values</em>.</p> <p>The guided-resampling tool generates additional training examples by swapping out slot values — producing, for instance, “play Justin Bieber” and “can you play a song by Camila Cabello?” Adding the vast Amazon Music databases of artist names and song titles to the mix produces many additional thousands of training sentences.</p> <p>Blindly swapping slot values can lead to unintended consequences, so which slot values can be safely swapped? For example, in the sentences “play jazz music” and “read detective books”, both “jazz” and “detective” would be labeled with the slot type <em>GenreName</em>. 
But customers are unlikely to ask Alexa to play “detective music”, and unnatural training data would degrade the performance of the resulting NLU model.&nbsp;</p> <p>AMDS’s Olga Golovneva, a research scientist, and Christopher DiPersio, a language engineer, used the Jaccard index — which measures the overlap between two sets — to evaluate pairwise similarity between slot contents in different types of requests. On that basis, they defined a threshold for valid slot mixing.</p> <h3><strong>Quantifying Complexity</strong></h3> <p>As there are many different ways to request music, another vital question is how many variations of each template to generate in order to produce realistic training data. One answer is simply to follow the data distributions from languages that Alexa already supports.&nbsp;</p> <p>Comparing distributions of sentence types across languages requires representing customer requests in a more abstract form. We can encode a sentence like “play Camila Cabello” according to the word pattern <em>other + ArtistName</em>, where <em>other</em> represents the verb “play”, and <em>ArtistName</em> represents “Camila Cabello”. For “play ‘Havana’ by Camila Cabello”, the pattern would be <em>other + SongName + other + ArtistName</em>. To abstract away from syntactic differences between languages, we can condense this pattern further to <em>other + ArtistName + SongName</em>, which represents only the semantic concepts included in the request.&nbsp;</p> <p>Given this level of abstraction, Golovneva and DiPersio investigated several alternative techniques for determining the semantic distributions of synthetic data.&nbsp;</p> <p>Using Shannon entropy, which is a measure of uncertainty, Golovneva and DiPersio calculated the complexity of semantic sentence patterns, focusing on slots and their combinations. 
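The Jaccard-based slot-mixing check above can be sketched in a few lines of TypeScript. The toy value sets and the 0.5 threshold below are illustrative assumptions, not the production data or threshold:

```typescript
// Observed GenreName values in two different request types (toy data).
const musicGenres = ["jazz", "rock", "pop"];
const bookGenres = ["detective", "romance", "jazz"];

// Jaccard index: |A ∩ B| / |A ∪ B|, the overlap between two value sets.
function jaccard(a: string[], b: string[]): number {
  const intersection = a.filter((x) => b.indexOf(x) !== -1);
  const unionSize = a.length + b.filter((x) => a.indexOf(x) === -1).length;
  return unionSize === 0 ? 0 : intersection.length / unionSize;
}

// Only swap values between two slots when their observed values overlap
// enough; the threshold here is an assumption for illustration.
const THRESHOLD = 0.5;
const canMix = jaccard(musicGenres, bookGenres) >= THRESHOLD;

// Resampling: substitute catalogue values into annotated templates.
const templates = ["play {ArtistName}", "can you play a song by {ArtistName}"];
const artists = ["Camila Cabello", "Justin Bieber"];
const generated: string[] = [];
for (const template of templates) {
  for (const artist of artists) {
    generated.push(template.replace("{ArtistName}", artist));
  }
}
```

Here the music and book genre sets share only “jazz”, so their Jaccard index (0.2) falls below the threshold and no “detective music”-style sentences would be generated.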
Entropy for semantic slots takes into consideration how many different values each slot might have, as well as how frequent each slot is in the data set overall. For example, the slot <em>SongName</em> occurs very frequently in music requests, and its potential values (different song titles) number in the millions; in contrast, <em>GenreName</em> also occurs frequently in music requests, but its set of possible values (music genres) is fairly small.&nbsp;</p> <p>Customer requests to Alexa often include multiple slots (such as “play ‘Vogue’|<em>SongName</em> by Madonna|<em>ArtistName</em>” or “set a daily|<em>RecurrenceType</em> reminder to {walk the dog}|<em>ReminderContent</em> for {seven a. m.}|<em>Time</em>”), which increases the pattern complexity further.&nbsp;</p> <p>In their experiments, Golovneva and DiPersio used the entropy measures from slot distributions in the data and the complexity of slot combinations to determine the optimal distribution of semantic patterns in synthetic training data. This results in proportionally larger training sets for more complex patterns than for less complex ones. 
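The entropy calculation that drives this weighting can be sketched as follows; the counts are toy numbers for illustration only:

```typescript
// Shannon entropy H = -sum p(x) * log2 p(x), computed over how often
// each slot value (or pattern) occurs in the data.
function entropy(counts: number[]): number {
  let total = 0;
  for (const c of counts) { total += c; }
  let h = 0;
  for (const c of counts) {
    if (c > 0) {
      const p = c / total;
      h -= p * Math.log2(p);
    }
  }
  return h;
}

// A slot with many equally likely values (like SongName) has high entropy...
const songEntropy = entropy([25, 25, 25, 25]); // uniform over 4 titles = 2 bits
// ...while a slot dominated by a few values (like GenreName) has lower
// entropy, so its patterns warrant proportionally less synthetic data.
const genreEntropy = entropy([70, 20, 10]); // about 1.16 bits
```

The uniform distribution yields the maximum 2 bits, while the skewed one yields about 1.16 bits, so the higher-entropy slot’s patterns would receive proportionally more synthetic variations.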
NLU models trained on such data sets achieved higher performance than those trained on data sets that merely “borrowed” slot distributions from existing languages.</p> <p>Alexa is always getting smarter, and these and other innovations from AMDS researchers help ensure the best experience possible when Alexa launches in a new locale.</p> <p><em>Janet Slifka, a senior manager for research science in Alexa AI’s Natural Understanding group, leads the Applied Modeling and Data Science team.</em></p> <p><a href="" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Acknowledgments</strong>: Ge Yu, Chris Hench, Zac Smith, Olga Golovneva, Christopher DiPersio, Karolina Owczarzak, Sreekar Bhaviripudi, Andrew Turner</p> <p><strong>Related</strong>:</p> <ul> <li><a href="" target="_blank">Active Learning: Algorithmically Selecting Training Data to Improve Alexa’s Natural-Language Understanding</a></li> <li><a href="" target="_blank">Adapting Alexa to Regional Language Variations</a></li> <li><a href="" target="_blank">Training a Machine Learning Model in English Improves Its Performance in Japanese</a></li> <li><a href="" target="_blank">How We Add New Skills to Alexa’s Name-Free Skill Selector</a></li> <li><a href="" target="_blank">Cross-Lingual Transfer Learning for Bootstrapping AI Systems Reduces New-Language Data Requirements</a></li> </ul> /blogs/alexa/post/fec54390-8005-4e0d-9df8-48b0194e8d02/what-s-new-in-the-alexa-skills-kit-september-2019-release-roundup What's New in the Alexa Skills Kit: September 2019 Release Roundup Leo Ohannesian 2019-10-11T00:09:57+00:00 2019-10-11T00:09:57+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>In this roundup video we share details about the new things released for skill developers last month, including the Web API for Games (Preview), Alexa-hosted Skills Python Support, the NLU Evaluation Tool, and all of the announcements from our September Event.</p> <p><em><strong>Editor's Note:
</strong>Our monthly release roundup series showcases the latest in Alexa Skills Kit developer tools and features that can make your skills easier to manage, simpler to deploy, and more engaging for your customers. Build with these new capabilities to enable your Alexa skills to drive brand or revenue objectives.</em></p> <p>In this roundup video we share details about the new things released for skill developers last month, including the Web API for Games (Preview), Alexa-hosted Skills Python Support, the NLU Evaluation Tool, and all of the announcements from our September Event. Check out the entire video for more information from Alexa evangelists and code samples.</p> <p><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <h2>1.&nbsp;Read developer news related to the September event</h2> <p>We are excited to announce new developer tools to accompany a brand new line-up of Alexa devices. Now you can deliver skills in more places, build experiences for existing and new audiences, and reach more customers in their preferred languages.&nbsp;<a href="" target="_blank">Check out the announcement here</a>.</p> <p>&nbsp;</p> <ul> </ul> <h2>2. Publish visually rich, interactive skills with Alexa Presentation Language (APL), now generally available</h2> <p>Alexa Presentation Language (APL) is now generally available. We will continue to add new features, tools, and resources to APL over the coming months. <a href="" target="_blank">Check out the announcement</a> or&nbsp;<a href="" target="_blank">read about it in our docs</a>.&nbsp;</p> <p>&nbsp;</p> <h2>3.
Publish Spanish skills in the US and Make Money with In-Skill Purchasing (ISP) and Alexa Developer Rewards</h2> <p>In the coming weeks, you’ll be able to publish Spanish skills in the US and make money with in-skill purchasing (ISP) and Alexa Developer Rewards.&nbsp;<a href="https://" target="_blank">Check out the announcement here</a>&nbsp;and <a href="http://" target="_blank">the technical documentation here.&nbsp;</a></p> <p>&nbsp;</p> <h2>4. Build Games with Web Technologies (Preview)</h2> <p>The Alexa Web API for Games (Developer Preview) introduces new web technologies and tools to create visually rich and interactive voice-controlled game experiences.&nbsp;<a href="" target="_blank">Read the announcement here</a> or <a href="" target="_blank">sign up for the preview</a>.&nbsp;</p> <p>&nbsp;</p> <h2>5. Use Alexa voice profiles (Preview) to personalize your&nbsp;content for your customers</h2> <p>Soon your skill will be able to deliver customized information based on who is speaking. Learn more and sign up for the Developer Preview here.&nbsp;<a href="" target="_blank">Read the announcement here</a>.</p> <p>&nbsp;</p> <ul> </ul> <h2>6. The&nbsp;Alexa Education Skill API (Preview) allows you to easily create voice interfaces for Education Technology Applications</h2> <p>With the Alexa Education Skill API (Developer Preview), integrating ed-tech systems such as Learning Management Systems (LMS), Student Information Systems (SIS), and Classroom Management and massive open online course (MOOC) platforms is quick and easy.&nbsp;Parents and students 13 and older can get information about their school and assignments directly from Alexa, without the added step of opening a skill, by asking&nbsp;“Alexa, how is Kaylee doing in school?” or “Alexa, what is my homework tonight?”. <a href="" target="_blank">Read about it here</a>.&nbsp;</p> <p>&nbsp;</p> <h2>7.
LEGO MINDSTORMS Voice Challenge: Powered by Alexa — Your Chance to Win Up to $100,000 in Prizes</h2> <p>We are thrilled to announce LEGO MINDSTORMS Voice Challenge: Powered by Alexa – an opportunity for Alexa developers, LEGO MINDSTORMS enthusiasts, and creators around the world to explore and build the future of voice-based experiences through construction and robotics play. Enter for your chance to win one of more than one hundred prizes worth up to $100,000.<a href="" target="_blank"> Read the announcement here.&nbsp;</a></p> <p>&nbsp;</p> <h2>8. Populate custom slot values with URL reference to an existing catalog</h2> <p>We are excited to announce the launch of reference-based catalog management features (SMAPI and CLI) for managing custom slots. Using this feature, developers can now create slot types that ingest values from an external data source with a URL reference to the catalog. For example, a recipe skill developer will now be able to pull a list of ingredients from their existing catalog instead of having to enter each individual ingredient and keep both data sources in sync.&nbsp;<a href="" target="_blank">Read the technical documentation.&nbsp;</a></p> <p>&nbsp;</p> <h2>9. Develop Alexa-hosted skills in Python</h2> <p>Python developers can now get started quickly with Alexa skills. Alexa-hosted skills now support Python from both the Alexa Developer Console and the ASK CLI.&nbsp;<a href="http://" target="_blank">Read the tech docs.&nbsp;</a></p> <p>&nbsp;</p> <h2>10.&nbsp;Batch test your skill model with the NLU Evaluation Tool</h2> <p>The NLU Evaluation Tool helps you avoid overtraining your skill’s NLU model: you can identify which&nbsp;utterances will improve accuracy if added to your&nbsp;Interaction Model, create and run regression tests on your skill’s NLU model, and measure the accuracy of your skill’s NLU model with anonymized frequent live utterances.
<a href="" target="_blank">Read the blog here.&nbsp;</a><a href="" target="_blank">Read the tech docs here.&nbsp;</a></p> <p>&nbsp;</p> <p>As always, we can't wait to see what you build. As a reminder, learn how to get the most out of the tech docs by visiting the <a href="" target="_blank">Latest Tips page.</a></p> /blogs/alexa/post/a3044117-24ac-44a1-8452-fc2f42a84108/skill-flow-builder-tips-and-tricks-use-extensions-to-level-up-your-narrative-driven-games Skill Flow Builder Tips and Tricks: Use Extensions to Level Up Your Narrative-Driven Games June Lee 2019-10-09T18:10:56+00:00 2019-10-09T19:25:50+00:00 <p><img alt="" src="" /></p> <p>Here are some tips and tricks for using Skill Flow Builder (SFB), a tool for visually designing and building story-based game skills which makes it easier for content creators to create skills without needing a large development team.</p> <p><img alt="" src="" /> In July, we released <a href="">Skill Flow Builder</a> (SFB), a tool for visually designing and building story-based game skills which makes it easier for content creators to create skills without needing a large development team. If you’re new to SFB, you can check out <a href="">our introductory blog post</a> for an overview of the SFB Editor and how to get started. This blog post assumes you have built at least one skill in SFB and is intended for intermediate to advanced users.</p> <p>For most experiences, the SFB Editor and features are enough to build a rich experience with dynamic responses that keep users returning. But what happens when you hit the extent of SFB’s base features? Maybe it’s some complex math. Maybe you need to keep track of inventory or divert logic to a mini game. When the plethora of SFB features run out, it’s time to build an extension. Luckily, SFB makes extension building easy.</p> <h2>When to Create an Extension</h2> <p>So you’ve created a robust story-based game using SFB. Your players can travel across the world and fight deadly beasts. 
They can collect key items for progression and make pivotal plot decisions. At first, the inventory is basic—only a few items to keep track of—but as your story grows, the inventory grows with it. Users may grow frustrated when they’re offered the wrong item at the wrong time. A user who’s in the middle of combat and extremely low on health won’t want to search to find their health potion. They’ll want it offered to them automatically. Selecting the right composition of items to suggest to a player starts to require more and more conditional statements in SFB’s Editor. This is the point when an extension becomes an asset.</p> <p>At their core, extensions are simply functions that your SFB story may leverage at any time. If your conditional statements start requiring more than three comparisons or your list of flagged items grows from a manageable 15 to 50, it’s time to look into creating an extension. If it takes 10 lines of logic to do what feels like basic math, it might be time for an extension.</p> <p>There are three types of extensions: DriverExtension, InstructionExtension, and ImporterExtension. You can learn more about the syntax and functionality of these extension types in the <a href="">SFB documentation</a>. For the purposes of this blog, we’re going to focus on the extension type you’ll use the most: InstructionExtension.</p> <p>An InstructionExtension is composed of functions called by the story files as the story is running. Some use cases for the InstructionExtension include:</p> <ol> <li>Complex math on a variable, such as exponents and remainder division</li> <li>Inventory management</li> <li>Store/catalog management</li> <li>Iterating over time-dependent components</li> <li>Mass setting or unsetting variables</li> <li>Calling external APIs that do not cause the skill session to exit</li> </ol> <p>So what are some ways you might use an InstructionExtension in your own game skills? Let’s dive into some examples.
We’ll start with a simple example to get you familiar with the layout of extensions and then move on to a separate advanced example that combines multiple extension types.</p> <h2>Basic Example: Enable/Disable Cheat Codes with Environment Variables</h2> <p>Over time, your SFB story may grow to become a game that can’t be completed in a short amount of time. You may want to be able to easily jump around through the game and automatically set variables as you go. However, you don’t want this functionality to be available to live users. In this example, we’ll use an InstructionExtension to process which version of the skill the player is accessing and then restrict access to content.</p> <p>To make restricting access easy, we’ll set an environment variable in Lambda with a key of VERSION and possible values of dev or prod. Since this is a variable that is not accessible by SFB automatically, we need to inject that information into the story.</p> <h3>Build the InstructionExtension</h3> <p>When you create a new SFB story, it includes SampleCustomExtension.ts in the code/extensions folder. For ease, we’ll add our environment setter to SampleCustomExtension.ts.</p> <p>First, replace the code in your SampleCustomExtension.ts file with the following:</p> <pre> <code>import { InstructionExtension, InstructionExtensionParameter } from '@alexa-games/sfb-f'; /** * Custom InstructionExtension */ export class SampleCustomExtension implements InstructionExtension { public async setEnvironment(param: InstructionExtensionParameter): Promise&lt;void&gt; { console.log(&quot;Player environment is: &quot;, process.env.VERSION); param.storyState.environmentType = process.env.VERSION ? process.env.VERSION : &quot;dev&quot;; } }</code></pre> <h3>Call the Extension from the Story Files</h3> <p>Now that we have an extension, we need to access it from the story files.
To prevent production/live skill users from accessing the cheat codes, we can use a simple IF statement to restrict access to a reusable scene called cheat_codes. In this example, if the skill version is 'dev' and the user says 'cheat,' the story routes to the cheat codes. Otherwise, the story goes back to the previous scene.</p> <p>Add the following code to your file. If you already have @global_append, then you should extend that section with the call to setEnvironment and the environmentType check.</p> <pre> <code>@global_append *then setEnvironment if environmentType == 'dev' { &lt;-&gt; cheat_codes } @cheat_codes *then hear cheat { -&gt; cheat } hear cheat more { -&gt; cheat_more } &gt;&gt; RETURN </code></pre> <h2>Advanced Example: Get User Consumables from the Monetization API</h2> <p>In this example, we’re going to do a simple get request to the Monetization Service Client to determine if a consumable is purchasable. Since monetization is not available in every locale, this allows us to avoid presenting an upsell to users who can’t or shouldn’t be offered the consumable.</p> <p>Before we get started, make sure you’re familiar with setting up in-skill purchasing (ISP) for a skill and the requirements for consumables. You can read more about in-skill purchasing in the <a href="">documentation</a>.</p> <p>Unfortunately, the InstructionExtension can’t access <strong>handlerInput</strong>, and the monetization service requires the user’s locale from handlerInput. However, the DriverExtension can access the request object from Alexa before it reaches the SFB logic. The InstructionExtension allows us to send data back and forth to the story files, while the DriverExtension can communicate with external services. 
Luckily, in SFB you can combine any of the extension types into a single extension file, so you can use both at the same time.</p> <p>You can view the full TypeScript file for this extension in the <a href="">Alexa Cookbook code snippets</a>.</p> <h3>Create the File for the Custom Extension</h3> <p>Just like in the basic example, you'll need to create a file to hold your extension code. Unlike that example, though, you also need to import DriverExtension and DriverExtensionParameter. To combine two extension types, you just implement the additional types in the class. For our ISP extension, you’ll implement <strong>InstructionExtension</strong> and <strong>DriverExtension</strong>.<br /> First, add a new file to the extensions folder in your SFB project and name it ISPExtension.ts. Once you have your file ready, add the following code to ISPExtension.ts to create the framework for the extension.</p> <pre> <code class="language-javascript">import { InstructionExtension, DriverExtension, InstructionExtensionParameter, DriverExtensionParameter, } from &quot;@alexa-games/sfb-f&quot;; export class ISPExtension implements InstructionExtension, DriverExtension { }</code></pre> <h3>Build the DriverExtension</h3> <p>The DriverExtension is similar to the request and response interceptors available in the Alexa Skills Kit SDK. The logic is executed before the request reaches SFB and/or before the response is sent to the user. This makes the DriverExtension great for cleaning up data or doing additional logic on story content. A DriverExtension requires both a <strong>pre</strong> and a <strong>post</strong> function, but either of these can be left empty. 
In this case, we only need the pre function to get the <strong>handlerInput</strong> object.</p> <p>Add the following code inside the ISPExtension class you created in the previous step to add pre and post functionality:</p> <pre> <code class="language-javascript">private handlerInput: any; async pre(param: DriverExtensionParameter) { this.handlerInput = param.userInputHelper.getHandlerInput(); return; } async post(param: DriverExtensionParameter) { // Use for post processing, not needed this time }</code></pre> <h3>Build the InstructionExtension</h3> <p>Now that we have the handlerInput, we can send requests to the Monetization Service Client and also access the user’s locale. The next step is to add two functions: one to check purchasable status and one to check the number of consumables purchased. Additionally, there is a separate function for making the request to the Monetization Service Client.</p> <p><strong>Purchasable</strong></p> <p>The sole goal of this extension is to be easily callable from the story files. The function for “purchasable” sets the type of request the skill is making; in this case, the type is “purchasable.” We’ll then use a variable from the storyState, <strong>monetizationPurchasable</strong>, to flag whether the item is available. 
<strong>storyState</strong> is passed back and forth from the story files and contains details about the user, such as their current point in the story and any variables that have been added or flagged over time.</p> <p>Once purchasable and request type (workflowType) are set, the function simply triggers a call to the Monetization Service Client via the getMonetizationData function.</p> <p>Add the following code below the pre and post code you added earlier:</p> <pre> <code class="language-javascript">public async purchasable(param: InstructionExtensionParameter): Promise&lt;void&gt; { param.instructionParameters.workflowType = &quot;purchasable&quot;; param.storyState.monetizationPurchasable = false; param.storyState = await this.getMonetizationData(param); return; } </code></pre> <p><strong>Consumable</strong></p> <p>The function for consumable is intended to retrieve the amount of a consumable that’s been purchased and is available for a user. All this basic function needs to do is set the workflowType of “consumable.”</p> <p>Add the following code for consumable below the purchasable function. This function just sets the workflowType and allows the consumable checks to be called separately from purchasable checks.</p> <pre> <code class="language-javascript">public async consumable(param: InstructionExtensionParameter): Promise&lt;void&gt; { param.instructionParameters.workflowType = &quot;consumable&quot;; param.storyState = await this.getMonetizationData(param); return; } </code></pre> <p><strong>getMonetizationData()</strong></p> <p>While purchasable and consumable are convenience wrappers that make the monetization checks easily referable from the story files, the getMonetizationData function does all of the work of calling the Monetization Service Client. 
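At the heart of that work is a simple lookup: getInSkillProducts returns a list of product records, which gets filtered down to the one whose referenceName matches the item named in the story file. Stripped of the service call, that lookup looks like the following sketch (the record shape here is trimmed to the fields we use; the real API response carries more):</p>

```typescript
// Trimmed-down product record -- the real API response carries more fields.
interface ProductRecord {
  referenceName: string;
  purchasable: string; // e.g. "PURCHASABLE" or "NOT_PURCHASABLE"
  activeEntitlementCount?: number;
}

// Find the product matching the story file's `item` parameter.
// filter() returns an array, so take its first element (or undefined).
function findProduct(
  products: ProductRecord[],
  referenceName: string
): ProductRecord | undefined {
  return products.filter((record) => record.referenceName === referenceName)[0];
}
```

<p>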
The structure is almost identical to standard Node.js calls to the client, with some added references to storyState for the amount of the consumable that has been purchased.</p> <p>The following code also checks whether the consumable amount is out of sync with what is being stored by the skill. Add this section to ISPExtension.ts below the consumable function you added in the previous step:</p> <pre> <code class="language-javascript">private async getMonetizationData( param: InstructionExtensionParameter ): Promise&lt;any&gt; { const product = param.instructionParameters.item; // Supplied from the story file if (!product) { throw new Error(`[AlexaMonetizationExtension Syntax Error] monetized item=[${product}] not provided.`); } const ms: any = this.handlerInput.serviceClientFactory.getMonetizationServiceClient(); const locale: string = this.handlerInput.requestEnvelope.request.locale; const isp: any = await ms.getInSkillProducts(locale).then((res: any) =&gt; { if (res.inSkillProducts.length &gt; 0) { let item = res.inSkillProducts.filter( (record: any) =&gt; record.referenceName === product ); return item[0]; // filter() returns an array; take the matching record } }); // Return product information based on user request if (param.instructionParameters.workflowType === &quot;purchasable&quot;) { if (isp &amp;&amp; isp[&quot;purchasable&quot;] === &quot;PURCHASABLE&quot;) { console.log(&quot;Item is purchasable: &quot;, product); // Easily indicate within the story that the item is purchasable param.storyState.monetizationPurchasable = true; } else { console.log(&quot;Item cannot be purchased: &quot;, product); } } else if (param.instructionParameters.workflowType === &quot;consumable&quot;) { if (isp &amp;&amp; isp.activeEntitlementCount) { let itemAmount: number = parseInt(isp.activeEntitlementCount); param.storyState[`${product}Purchased`] = itemAmount; // Set the purchased and consumed session variables to keep track during the game if (itemAmount) { if (!param.storyState[`${product}Consumed`]) { param.storyState[`${product}Consumed`] = 0; } if (param.storyState[`${product}Consumed`] &gt; itemAmount) { // User shows as using more of the consumable than purchased param.storyState[`${product}Consumed`] = itemAmount; } } param.storyState.monetizationPurchasable = true; } else { console.log(&quot;Item is not available: &quot;, product); param.storyState[`${product}Consumed`] = 0; param.storyState[`${product}Purchased`] = 0; param.storyState[`${product}`] = 0; param.storyState.monetizationPurchasable = false; } } return param.storyState; } </code></pre> <h3>Call the Extension from the Story Files</h3> <p>We have an extension and we have some basic parameters for checking the state of a consumable. Now let’s call it from a scene in the story. For the sake of this example, we make a redundant check that the item is purchasable to demonstrate how each function works. In practice, you can just use the consumable function, since it already checks whether an ISP item is purchasable.</p> <p>Add the following code for the reusable @check_item scene to your file. To test the code, you can follow the basic example and call @check_item from @global_append.</p> <pre> <code>@check_item *then purchasable item='coffee' if monetizationPurchasable { consumable item='coffee' // Reset the amount of the consumable that is available to use set coffee to coffeePurchased decrease coffee by coffeeConsumed -&gt; has_item_scene } if !monetizationPurchasable { -&gt; no_buy_scene } </code></pre> <p>Now, if you release your skill in a locale that doesn’t support monetization, you can avoid sending users an upsell dialog by first checking if the item is available. You can also keep the amount of a consumable that is available up to date as the user progresses through the skill.</p> <p>This may seem like a complex extension, but at the core all we’ve done is take an API call and add some additional story variables to it.</p> <h3>Conclusion</h3> <p>Extensions are a great tool for passing story variables back and forth without having to do complex SFB logic within the story files themselves. 
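For instance, the coffee bookkeeping that @check_item performs with its set and decrease instructions reduces to a couple of lines inside an extension. Here is a simplified, self-contained sketch of that arithmetic (not part of the extension above):</p>

```typescript
// Return how many uses of a consumable remain, clamping the consumed
// count into the valid range -- the same bookkeeping @check_item does
// with its `set` and `decrease` instructions.
function remainingConsumable(purchased: number, consumed: number): number {
  const clamped = Math.min(Math.max(consumed, 0), purchased);
  return purchased - clamped;
}
```

<p>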
We went through a basic example to access data not readily available to the story files and then a more advanced example of how to call external APIs with SFB. Extensions allow you to add more robust logic to your story-based games and take them from simple narratives to full role-playing adventures. You can now take this knowledge and add combat modules, character progression, and get those health potions to the right players when they need them.</p> <p>We’re always excited to hear about your extensions, so feel free to share your creations with us on Twitter!</p> /blogs/alexa/post/a5b37f34-83c8-4274-b576-073a21dfdb7a/build-test-and-tune-your-skills-with-three-new-tools1 Build, Test, and Tune Your Skills with Three New Tools Leo Ohannesian 2019-10-09T17:35:13+00:00 2019-10-10T00:24:11+00:00 <p>We’re excited to announce the General Availability of the Natural Language Understanding (NLU) Evaluation Tool and Utterance Conflict Detection. We are also excited to announce the Get Metrics API, now in Beta.</p> <p>We’re excited to announce the General Availability of two tools which focus on your voice model’s accuracy: the Natural Language Understanding (NLU) Evaluation Tool and Utterance Conflict Detection. We are also excited to announce that you can now build your own quality and usage reporting with the Get Metrics API, now in Beta. These tools help complete the suite of Alexa skill testing and analytics tools that aid in creating and validating your voice model prior to publishing your skill, detecting possible issues when your skill is live, and refining your skill over time.<br /> <br /> The NLU Evaluation Tool helps you batch test utterances and compare how they are interpreted by your skill’s NLU model against your expectations. 
The tool has three use cases:</p> <ol> <li>Preventing overtrained NLU models: overtraining your NLU model with too many sample utterances and slot values can reduce accuracy. Instead of adding exhaustive sample utterances to your interaction model, you can now run NLU Evaluations with utterances you expect users to say. If any utterance resolves to the wrong intent and/or slot, you can improve the accuracy of your skill’s NLU model by adding only those utterances as new training data (by creating new sample utterances and/or slots).</li> <li>Regression tests: you can create regression tests and run them after adding new features to your skills to ensure your customer experience stays intact.</li> <li>Accuracy measurements: you can measure the accuracy of your skill’s NLU model by running an NLU Evaluation with anonymized frequent live utterances surfaced in Intent History (production data), and then measure the impact on accuracy of any changes you make to your NLU model.</li> </ol> <p>Utterance Conflict Detection helps you detect utterances that are accidentally mapped to multiple intents, which reduces the accuracy of your Alexa skill’s Natural Language Understanding (NLU) model. This tool runs automatically on each model build and can be used prior to publishing the first version of your skill or as you add intents and slots over time, preventing you from building models with unintended conflicts.<br /> <br /> Finally, with the Get Metrics API (Beta) you can immediately benefit from the ability to analyze key metrics, like unique customers, in your preferred tools for analysis or aggregation. 
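Under the hood, a metrics query is an authenticated GET request against the Alexa developer API. As a rough illustration, the helper below assembles such a request URL; the endpoint path and query parameter names here are assumptions for the sake of the example, so verify them against the Get Metrics API (Beta) documentation before relying on them.</p>

```typescript
// Assemble a hypothetical Get Metrics API request URL. The path and
// parameter names are illustrative assumptions -- check the Beta docs.
function buildMetricsUrl(
  skillId: string,
  metric: string, // e.g. "uniqueCustomers"
  startTime: string, // ISO 8601 timestamp
  endTime: string,
  period: string // bucket size, e.g. "P1D" for daily
): string {
  const query = new URLSearchParams({
    startTime,
    endTime,
    period,
    metric,
    stage: "live",
    skillType: "custom",
  });
  return `https://api.amazonalexa.com/v1/skills/${skillId}/metrics?${query}`;
}
```

<p>The response can then be fed into whatever analysis or aggregation tool you prefer. 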
For example, you can now connect to AWS CloudWatch and create monitors, alarms, and dashboards in order to stay on top of changes that may impact customer engagement.<br /> <br /> With these three additions to the ASK toolset, let’s recap the overall suite of testing and feedback tools you have available and where they fall in the skill development lifecycle. The skill development lifecycle can be separated into three general steps that come after your design phase (see situational design): building, testing, and tuning.<br /> <br /> <strong>Build Your Dialog Model</strong><br /> As you define your intents, slots, and dialogs from the ground up per your <a href="" target="_blank">situational design definition,</a> you will want to test how utterances fall into your model. This is where the <a href="" target="_blank">utterance profiler</a> is useful. You can enter utterances to see how they resolve to your intents and slots. When an utterance does not invoke the right intent or slot, you can update your sample utterances or slots and retest, all before writing any code for your skill. You should set up a <a href="" target="_blank">fallback intent</a> for requests that your skill does not know how to interpret, otherwise known as unhandled requests. As you build out your voice model, you can use <a href="" target="_blank">utterance conflict detection</a> to ensure that there aren’t conflicting utterances in your VUI. Utterance conflict detection will identify utterances (and slots) that map to more than one intent. 
Flagging conflicting utterances helps you detect areas where your skill’s NLU model could break and cause an unintended customer experience.<br /> <br /> <strong>Test Before Go-Live</strong><br /> As you approach voice model readiness, you will want to test using the built-in <a href="" target="_blank">Alexa Simulator.</a> You can also <a href="" target="_blank">test-distribute to your Alexa device</a> or opt <a href="" target="_blank">for beta testing. </a>As your voice model solidifies, you can start using the <a href="" target="_blank">NLU Evaluation Tool</a> to batch test utterances and how they fit into your voice model. You will need to define a set of utterances mapped to the intents and slots you expect to be sent to your skill. You can then run an NLU Evaluation and, depending on the results, add to your slots and intents to improve the accuracy of your skill. Before going live, you will want to both <a href="" target="_blank">functionally test</a> and <a href="" target="_blank">debug your skill. </a><br /> <br /> <strong>Tune Over Time</strong><br /> The skill development journey has only begun when you go live. You can use interaction path analysis to understand your customers’ journey through your skill and where possible bottlenecks are. <a href="" target="_blank">Interaction path analysis</a> shows aggregate skill usage patterns in a visual format, including which intents your customers use, and in what order. This enables you to verify whether customers are using the skill as expected, and to identify interactions where customers become blocked or commonly exit the skill. You can use insights gained from interaction path analysis to make your flow more natural, fix errors, and address unmet customer needs.<br /> <br /> The <a href="" target="_blank">Intent History page</a> of the developer console displays aggregated, anonymized frequent live utterances and the resolved intents. 
You can use this to learn how users interact with your skill and identify improvements you may want to make to your interaction model. The Intent History page displays the frequent utterances in two tabs: <a href="" target="_blank">Unresolved Utterances</a>, which did not successfully map to an intent, and Resolved Utterances, which mapped successfully to an intent and slot. This lets you review the utterances, update your interaction model to account for phrases that were not routed correctly, and mark utterances as <em>resolved</em>. For example, suppose you see a particular utterance that was sent to <code>AMAZON.FallbackIntent</code>, but it is actually a phrase that should trigger one of your custom intents. You can map that utterance directly to that intent and update your interaction model right from the Intent History page. Conversely, if an utterance landing in the fallback intent suggests a feature worth supporting, you can extend your voice model to cover it. As mentioned above, you can also use the utterances surfaced in Intent History to run an <a href="" target="_blank">NLU Evaluation</a> and generate an accuracy indicator for your skill. You can also re-run the test after making changes to your skill model to measure the overall impact on your skill experience, otherwise known as a regression test.<br /> <br /> Access to skill metrics was <a href="" target="_blank">previously restricted to pre-configured dashboards displaying static metrics in the developer console</a>. Static metrics are insightful but fall short when you need to automate mechanisms that guarantee operational continuity. In contrast, with the <a href="" target="_blank">Get Metrics API (Beta)</a>, you can pipe live metrics into your preferred analysis tools to pinpoint changes in your skill's performance and behavior. 
You can now compute your own aggregated metrics or create automation that feeds that data into a monitoring system like<a href="" target="_blank"> AWS CloudWatch</a>, where you can create alarms or trigger changes in your skill based on certain inputs. For example, you can track how new customers are interacting with your skill and set up alarms that surface indicators of a bad user experience, like customers landing on the <code>AMAZON.FallbackIntent</code> at a higher rate than normal. The Get Metrics API (Beta) also works across multiple skills, so you can now set up aggregated reporting for your entire skill catalog without switching back and forth to the developer console.<br /> <br /> With the new Get Metrics API, you can save time and increase visibility into the key insights we provide in order to optimize skill engagement. The Get Metrics API is available for skill builders in all locales and currently supports the Custom skill model, the pre-built Flash Briefing model, and the Smart Home Skill API.<br /> <br /> <strong>Start Optimizing Today</strong><br /> Begin working with the three new tools to create an optimal customer experience. Start by reading our technical documentation on the <a href="" target="_blank">NLU Evaluation Tool</a>, <a href="" target="_blank">Utterance Conflict Detection</a>, and the <a href="" target="_blank">Get Metrics API (Beta)</a> today!</p>
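<p>To make the CloudWatch hand-off above concrete, here is a sketch that shapes a <code>PutMetricData</code> payload for a fallback-rate metric. The namespace and metric name are made-up choices for this example; the payload shape follows CloudWatch's PutMetricData API, which you would invoke through the AWS SDK.</p>

```typescript
interface MetricDatum {
  MetricName: string;
  Value: number;
  Unit: string;
  Dimensions: { Name: string; Value: string }[];
}

// Shape a CloudWatch PutMetricData payload reporting the percentage of
// utterances that landed on AMAZON.FallbackIntent. Namespace and metric
// name are illustrative, not prescribed, values.
function fallbackRateDatum(
  skillId: string,
  fallbackCount: number,
  totalUtterances: number
): { Namespace: string; MetricData: MetricDatum[] } {
  const rate = totalUtterances > 0 ? (fallbackCount / totalUtterances) * 100 : 0;
  return {
    Namespace: "AlexaSkills",
    MetricData: [
      {
        MetricName: "FallbackIntentRate",
        Value: rate,
        Unit: "Percent",
        Dimensions: [{ Name: "SkillId", Value: skillId }],
      },
    ],
  };
}
```

<p>Passing an object like this to the AWS SDK's <code>putMetricData</code> call lets you alarm when the fallback rate climbs above a threshold you choose.</p>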