Alexa Developer Blogs

New Alexa Research on Task-Oriented Dialogue Systems
Larry Hardesty, 2019-09-04 <p>Universal&nbsp;dialogue-act tagging scheme, hybrid slot-tracking system promise to improve dialogue state tracking.</p> <p>Earlier this year, at Amazon’s re:MARS conference, Alexa head scientist Rohit Prasad <a href="https://developer.amazon.com/blogs/alexa/post/9615b190-9c95-452c-b04d-0a29f6a96dd1/amazon-unveils-novel-alexa-dialog-modeling-for-natural-cross-skill-conversations" target="_blank">unveiled</a> Alexa Conversations, a new service that allows Alexa skill developers to more easily integrate conversational elements into their skills.</p> <p>The announcement is an indicator of the next stage in Alexa’s evolution: more-natural, dialogue-based engagements that enable Alexa to aggregate data and refine requests to better meet customer needs.</p> <p>At this year’s Interspeech, our group has a <a href="https://arxiv.org/pdf/1907.00883" target="_blank">pair</a> of <a href="https://arxiv.org/pdf/1907.03020" target="_blank">papers</a> that describe some of the ways that we are continuing to improve Alexa’s task-oriented dialogue systems, whose goal is to identify and fulfill customer requests.&nbsp;</p> <p>Two central functions of task-oriented dialogue systems are language understanding and dialogue state tracking, or determining the goal of a conversation and gauging progress toward it. Language understanding involves classifying utterances as <em>dialogue acts</em>, such as requesting, informing, denying, repeating, confirming, and so on. 
State tracking involves tracking the status of <em>slots</em>, or entities mentioned during the dialogue; a restaurant-finding service, for instance, might include slots like Cuisine_Type, Restaurant_Location, Price_Range, and so on. Each of these functions is the subject of one of our Interspeech papers.</p> <p>One of the goals of state tracking is to assign values to slot types. If, for instance, a user requests a reservation at an Indian restaurant on the south side of a city, the state tracker might fill the slots Cuisine_Type and Restaurant_Location with the values “Indian” and “south”.</p> <p>If all the Indian restaurants on the south side are booked, the customer might expand the search area to the central part of the city. The Cuisine_Type slot would keep the value “Indian”, but “Restaurant_Location” would get updated to “center”.</p> <p style="text-align:center"><img alt="State_tracking.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/State_tracking.png._CB438077172_.png?t=true" style="display:block; height:190px; margin-left:auto; margin-right:auto; width:750px" />&nbsp;<br /> <em><sup>An example of dialogue state tracking. Blue arrows indicate slots whose values are updated across turns of dialogue.</sup></em></p> <p>Traditionally, state trackers have been machine learning systems that produce probability distributions over all the possible values for a particular slot. After each dialogue turn, for instance, the state tracker might assign different probabilities to the possible Restaurant_Location values north, south, east, west, and center.</p> <p>This approach, however, runs into obvious problems in real-world applications like Alexa. With Alexa’s music service, for instance, the slot Song_Name could have millions of valid values. 
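</p>

<p>The turn-by-turn updates described above boil down to maintaining a map from slot names to their current values, with each turn's predictions overwriting only the slots the user has changed. A minimal sketch in Python, using the slot names from the example above (an illustration, not Alexa's actual implementation):</p>

```python
# Minimal dialogue-state-tracker sketch (illustrative only).
# The state is a map from slot name to current best value; each turn's
# predictions overwrite changed slots and leave the rest intact.

def update_state(state, turn_predictions):
    """Merge one dialogue turn's slot predictions into the running state."""
    new_state = dict(state)
    new_state.update(turn_predictions)
    return new_state

state = {}
# Turn 1: "Find me an Indian restaurant on the south side."
state = update_state(state, {"Cuisine_Type": "Indian", "Restaurant_Location": "south"})
# Turn 2: "Try the center of the city instead."  (user widens the search)
state = update_state(state, {"Restaurant_Location": "center"})
print(state)  # {'Cuisine_Type': 'Indian', 'Restaurant_Location': 'center'}
```

<p>A probabilistic tracker does the same bookkeeping but stores a distribution over every possible value of each slot rather than a single best guess.</p>

<p>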
Calculating a distribution over all of them would be prohibitively time consuming.</p> <p>In <a href="https://arxiv.org/pdf/1811.12891.pdf" target="_blank">work</a> we presented last year, we described a state tracker that selects candidate slots and slot values from the dialogue history, so it doesn’t need to compute millions of probabilities at each turn. But while this approach scales more effectively, it tends to yield less accurate results.</p> <p>In our Interspeech paper, we report a new <a href="https://arxiv.org/pdf/1907.00883" target="_blank">hybrid system</a> that we trained to decide between the two approaches — the full-distribution or the historical-candidate approach — based on the input.&nbsp;</p> <p>We tested it on a data set that featured 37 slots, nine of which could take on more than 100 values each. For eight of those slots, the model that extracted candidates from the conversational context yielded better results. For 27 of the remaining 28 slots, the model that produced full distributions fared better.</p> <p>Allowing the system to decide on the fly which approach to adopt yielded a 24% improvement in state-tracking accuracy versus the previous state-of-the-art system.</p> <p>Our paper on dialogue acts is geared toward automatically annotating conversations between human beings. After all, people have been talking to each other far longer than they’ve been talking to machines, and their conversations would be a plentiful source of data for training state trackers. Our three-step plan is to (1) use existing data sets with labeled dialogue acts to train a classifier; (2) use that classifier to label a large body of human-human interactions; (3) use the labeled human-human dialogues to train dialogue policies.</p> <p>The problem with the first step is that existing data sets use different dialogue act tags: one, for instance, might use the tag “welcome”, while another uses “greet”; one might use the tag “recommend”, while another uses “offer”. 
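</p>

<p>Reconciling such schemes amounts to mapping each data set's tags onto one shared inventory before training. A minimal sketch in Python, using the example tags above (the choice of canonical names here is illustrative, not the paper's actual scheme):</p>

```python
# Illustrative tag normalization: map dataset-specific dialogue-act tags
# onto a single shared tag set before training a classifier.
UNIVERSAL_TAGS = {
    "welcome": "greet",    # dataset A's name for a greeting
    "greet": "greet",      # dataset B's name for the same act
    "recommend": "offer",  # dataset A's name for a suggestion
    "offer": "offer",      # dataset B's name for the same act
}

def normalize(tagged_utterances):
    """Re-tag (utterance, tag) pairs with the shared scheme; unknown tags pass through."""
    return [(utt, UNIVERSAL_TAGS.get(tag, tag)) for utt, tag in tagged_utterances]

dataset_a = [("Hello there!", "welcome"), ("Try the Indian place.", "recommend")]
print(normalize(dataset_a))  # [('Hello there!', 'greet'), ('Try the Indian place.', 'offer')]
```

<p>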
Much of our Interspeech paper on dialogue acts concerns the development of a <a href="https://arxiv.org/pdf/1907.03020" target="_blank">universal tagging scheme</a>.</p> <p>We began by manually aligning tags from three different human-machine data sets. Using a single, reconciled set of tags, we then re-tagged the data in all three sets. Next, we trained a classifier on each of the three data sets and used it to predict tags in the other two. This enabled us to identify cases in which our provisional scheme failed to generalize across data sets.</p> <p>On the basis of those results, we made several further modifications to the tagging scheme, sometimes combining separate tags into one, sometimes splitting single tags into two. This enabled us to squeeze out another 1% to 3% in tag prediction accuracy.</p> <p>To test the applicability of our universal scheme, we used a set of annotated human-human interaction data. First, we trained a dialogue act classifier on our three human-machine data sets, re-tagged according to our universal scheme. Then we stripped the tags out of the human-human data set and used the classifier to re-annotate it. Finally, we used both the re-tagged human-human data and the original human-human data to train two new classifiers.</p> <p>We found that we required about 1,700 hand-annotated examples from the original data set to produce a dialogue act classifier that was as accurate as one trained on our machine-annotated data. 
In other words, we got for free what had previously required human annotators to manually tag 1,700 rounds of dialogue.</p> <p><em>Dilek Hakkani-T&uuml;r is a senior principal scientist in the Alexa AI group.</em></p> <p><strong>Papers</strong>:</p> <p>“<a href="https://arxiv.org/pdf/1907.00883" target="_blank">HyST: A Hybrid Approach for Flexible and Accurate Dialogue State Tracking</a>”<br /> “<a href="https://arxiv.org/pdf/1907.03020" target="_blank">Towards Universal Dialogue Act Tagging for Task-Oriented Dialogues</a>”</p> <p><a href="https://developer.amazon.com/alexa/science" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Acknowledgments</strong>: Rahul Goel, Shachi Paul, Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel</p> <p><strong>Related</strong>:</p> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/b4b33a98-c931-4129-b96a-b2034db2137c/who-s-on-first-how-alexa-is-learning-to-resolve-referring-terms" target="_blank">Who’s on First? 
How Alexa Is Learning to Resolve Referring Terms</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/ddad5b81-a557-423c-aae4-c55c6715f4cf/teaching-alexa-to-follow-conversations" target="_blank">Teaching Alexa to Follow Conversations</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/9615b190-9c95-452c-b04d-0a29f6a96dd1/amazon-unveils-novel-alexa-dialog-modeling-for-natural-cross-skill-conversations" target="_blank">Amazon Unveils Novel Alexa Dialog Modeling for Natural, Cross-Skill Conversations</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/bd05a237-bfbb-402c-99a5-67b6c5607e73/innovations-from-the-2018-alexa-prize" target="_blank">Innovations from the 2018 Alexa Prize</a></li> </ul>

From Student to Voice Business Owner in One Year: Ilarna’s Bet on the Voice Industry
Emma Martensson, 2019-09-02 <p><img alt="Ilarna’s Bet on the Voice Industry" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_ilarna-casestudy.png._CB437581266_.png?t=true" /></p> <p><a href="https://www.linkedin.com/in/ilarnanche/" target="_blank">Ilarna Nche</a> is only 23 years old but has already started her own voice studio. She comes from Cambridge, UK, and graduated last year with a Multimedia Technology and Design degree from the University of Kent. 
“My interest in voice began when I purchased an Amazon Echo Dot for my mother at Christmas in 2016 and I was incredibly fascinated by its features,” Ilarna says. Following that Christmas, she purchased her own Amazon Echo Dot and began playing with Alexa skills, such as ‘<a href="https://www.amazon.co.uk/Musicplode-Media-Ltd-Beat-Intro/dp/B07G4LSLBL" target="_blank">Beat the Intro</a>’. She was surprised to discover these skills were made by third-party developers and wanted to try to make one herself. “I was excited at the thought that other people could be using skills I have created on their Alexa devices,” she says. At the time, Ilarna had no experience in voice; her skillset revolved around web and mobile development.</p> <p>After about four months of developing skills, Ilarna received an email saying that she had earned <a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/rewards" target="_blank">developer rewards</a>. “It gave me the drive to continue developing skills,” she says. The developer rewards motivated her to continue updating and improving her skills, ultimately providing a better user experience, which now opens the door for her to earn money and build a business.</p> <p>Notable early achievements by Ilarna include being a finalist in the <a href="https://alexatechforgood.devpost.com/" target="_blank">Alexa Skills Challenge: Tech for Good</a> with ‘<a href="https://www.amazon.co.uk/Adassa-Innovations-Agent-Do-Good/dp/B07GRV4BKL" target="_blank">Agent Do Good</a>’, and winning in the <a href="https://alexakidskills.devpost.com/" target="_blank">Alexa Skills Challenge: Kids</a> with ‘<a href="https://www.amazon.co.uk/Adassa-Innovations-Music-Bop-Adventures/dp/B0748DNRPQ" target="_blank">Music Bop Adventures</a>’, which to date is still one of her most popular skills. The ‘Music Bop Adventures’ skill came about when Ilarna decided she wanted to create an activity to encourage kids to stay off their screens. 
“My mother runs a childcare service and this was a brilliant way to test the skill,” she says. The skill encourages physical activity and incorporates unique audio, and has been appreciated by parents and kids alike:</p> <p><em>“I have two daughters - 10 and 3, and they love this. Easy enough for the three year old to follow and kept them busy and active while I was cooking tea. Please add more, love the concept! Imagination, listening, and action, brilliant!” – </em><a href="https://www.amazon.co.uk/gp/profile/amzn1.account.AGB5J44JOM444RQNJNOECS3RVHPQ/ref=cm_cr_arp_d_gw_btm?ie=UTF8" target="_blank">britishusa</a></p> <p>Today, Ilarna has created 30 skills and runs her own voice business, <a href="https://adassainnovations.com/" target="_blank">Adassa Innovations</a>. She used the prize money from the Alexa Skills Challenges to start her business, investing in software and equipment and renovating her garage. The garage is now her office, and this is where she creates all her Alexa skills. With the ability to monetise skills and earn more money, she sees a future in voice. Ilarna thinks voice is becoming a fast-growing industry, and is excited to play in this space, as it is something she has become very passionate about.</p> <h2>Use Imagination and Listen to Alexa Users for Skill Ideas</h2> <p>“Most of the skill ideas I come up with come to me in the strangest places,” she says, “like for example in the shower.” Ilarna keeps all the ideas, realistic or not, in a notebook and hopes to one day bring them to life. 
For every idea she comes up with, she browses the <a href="https://www.amazon.co.uk/b/?ie=UTF8&amp;node=10068517031" target="_blank">Alexa Skills Store</a> to see if it has already been done because, she says, “It is all about making original, unique skills in a marketplace where you are competing against thousands of other skills.” She also does research by reading reviews, on other developers’ skills as much as her own, to get customer feedback. Although she thinks it is impossible to please everyone, her view is that she can use bad reviews to improve her skills and continue learning.</p> <p>On average, it takes Ilarna 1-2 weeks to create a skill. The more skills she creates, the more templates and resources she makes for herself, and the development time shortens accordingly. She recommends that other developers gain experience by building several skills whilst continuously improving the ones already published. Ilarna follows a checklist whereby she alternates between developing a new skill and updating an existing one.</p> <h2>Build Great Free Skills, Then Earn Money with In-Skill Purchasing</h2> <p>Ilarna has been keen to experiment with <a href="https://developer.amazon.com/docs/in-skill-purchase/isp-overview.html" target="_blank">in-skill purchasing (ISP)</a>, adopting it in several of her skills. “I focus on how to make the free version a great experience, and this in turn helps convert users into paying customers,” she says. A skill that is engaging and encourages the customers to come back is her foundation for any ISP skill. She also emphasizes the importance of content, saying “content is a big plus” and that “investing time and effort in adding more and better content provides a better customer experience and in return users come back to your skill.”</p> <p>At the moment, Ilarna is testing different in-skill products, from subscriptions to consumables and one-time purchases. 
“Consumables seem to be the most popular form of in-skill purchasing in my skills so far,” she says.</p> <p>She continues, “For each skill I develop, I aim to provide a different ISP experience.” Ilarna also adapts her upsell timing based on the type of in-skill product. For instance, she has added in-skill products consisting of additional gameplays for users who have run out of the original content, which means they can continue to enjoy playing. For these types of ISPs, which provide access to additional premium content, she upsells occasionally after customers have finished their game. If the in-skill product adds a form of aid to help customers win the game, she upsells occasionally when the customers are unsure or stuck.</p> <p>With her skill ‘<a href="https://www.amazon.co.uk/Adassa-Riddle-Time/dp/B075VBJL52" target="_blank">Riddle Time</a>’, she originally had one consumable (life pack), but she noticed that customers wanted to be able to get another riddle if they got the answer wrong. Based on this, she added a second consumable (unlock pack), which is offered when users give the wrong answer. The unlock pack has since seen an average offer-to-purchase conversion rate of 30.5%, reaching as high as 86.7% in one week.</p> <p>“It is a fantastic feeling seeing customers buying and enjoying the products you have worked hard on,” she says, and continues, “It is also great having the opportunity to earn money.”</p> <h2>Voice is Being Integrated into Everyday Life, Everywhere</h2> <p>Whilst she has the coding down, an area Ilarna thinks she needs to improve upon is marketing. Right now she relies solely on organic customers and reviews. Her next step is to promote her skills on social media to create more customer awareness of her skills.</p> <p>Another thing she is excited to learn more about is <a href="https://developer.amazon.com/docs/alexa-design/apl.html" target="_blank">Alexa Presentation Language (APL)</a>, version 1.1. 
Moving forward, Ilarna will integrate ISP and APL into all her skills. She notes it is important to make sure her ISP skills also provide a high-quality screen experience. Her latest skill, Horse Race, is an example of how she optimised the skill with ISP and APL. She added touch wrappers and imagery that complement the skill well and, as a result, offer a better user experience with premium content.</p> <p>Looking forward, she thinks more developers should be excited about voice. “It is a fast growing industry, which is being integrated into everyday life, everywhere. It is in your home, your car and on the go,” she says.</p> <p>A few years ago, Ilarna did not expect to be where she is today. She is excited and says, “You never know what you can achieve until you try.” We can expect to see more from Ilarna in the future, as her journey in voice has just begun.&nbsp;</p> <h2>Related Content</h2> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/d01924c2-3d93-4b98-8c87-6d9c2484135f/building-alexa-skills-while-seeing-the-world" target="_blank">Hugo’s Move from Digital Nomad to Full Time Alexa Skills Developer</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/07fe7ede-025d-4f8d-b25c-b27a238f51d5/how-vocala-is-creating-a-growing-voice-business" target="_blank">How Vocala is Creating a Growing Voice Business</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/67edf9f0-1ec6-4261-ad6b-46cf36d87fbb/voice-agency-say-it-now-ceo-discusses-reaping-big-rewards-from-the-evolving-voice-industry" target="_blank">Voice Agency 'Say It Now' CEO Discusses Reaping Big Rewards from the Voice Industry</a></li> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/make-money/in-skill-purchasing" target="_blank">Make Money with In-Skill Purchasing</a></li> <li><a href="https://developer.amazon.com/alexa-skills-kit/make-money/in-skill-purchasing" target="_blank">Sell Premium Content to Enrich Your Skill Experience</a></li> 
<li><a href="https://build.amazonalexadev.com/tips_on_promoting_your_alexa_skill" target="_blank">Tips on Promoting Your Alexa Skill</a></li> </ul> <h2>Grow Your Voice Business with Monetized Alexa Skills</h2> <p>With in-skill purchasing (ISP), you can sell premium content to enrich your Alexa skill experience. ISP supports one-time purchases for entitlements that unlock access to features or content in your skill, subscriptions that offer access to premium features or content for a period of time, and consumables which can be purchased and depleted. You define your premium offering and price, and we handle the voice-first purchasing flow. If you add ISP to your skill, you may be eligible to earn a voucher for the <a href="https://aws.amazon.com/certification/certified-alexa-skill-builder-specialty/" target="_blank">AWS Certified Alexa Skill Builder</a> exam through the <a href="https://developer.amazon.com/en-gb/alexa-skills-kit/alexa-developer-skill-promotion?ref=tsm_1_LINKEDIN_COMPANY_s__2332297150&amp;linkId=67863388#?&amp;sc_category=Owned&amp;sc_channel=SM&amp;sc_campaign=EUPromotion&amp;sc_publisher=LI&amp;sc_content=Promotion&amp;sc_funnel=Publish&amp;sc_country=EU&amp;sc_medium=Owned_SM_EUPromotion_LI_Promotion_Publish_EU_EUDevs&amp;sc_segment=EUDevs" target="_blank">EU Perks Program</a>. 
<a href="https://build.amazonalexadev.com/alexa-skill-monetization-guide-ww.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=vod-webinar&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_vod-webinar_Convert_WW_visitors_makemoney-page_CTA-graphic&amp;sc_segment=visitors&amp;sc_place=makemoney-page&amp;sc_trackingcode=CTA-graphic" target="_blank">Download our introductory guide</a> to learn more.</p>

Join the ASK CLI Beta, Now Open Source
Leo Ohannesian, 2019-08-30 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/CLI_opensource.png._CB437924042_.png" style="height:240px; width:954px" /></p> <p>We’re excited to announce the open source release of the new <a href="https://github.com/alexa-labs/ask-cli">ASK CLI on GitHub</a>. As an open source tool, anyone can now contribute new features and improvements to the ASK CLI. The new version of the CLI will also have support for AWS CloudFormation, so you can now manage your entire skill infrastructure from a single file.</p> <h2>Contribute to the new ASK CLI</h2> <p>Starting today, the source code for the new CLI is available on GitHub for anyone to contribute. For now, we are releasing the new version of the CLI in beta. 
Learn more about how to get started and tell us what you think on <a href="https://github.com/alexa-labs/ask-cli" target="_blank">GitHub</a>.</p> <h2>Deploy your skill’s infrastructure with AWS CloudFormation</h2> <p>In addition to many under-the-hood improvements, the new ASK CLI now makes it easier than ever for you to manage and deploy your skill’s infrastructure using <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html" target="_blank">AWS CloudFormation</a>. With CloudFormation, you can define and deploy your skill with just a single file, enabling you to easily version, share, and scale your infrastructure as code.<br /> <br /> When you create your first skill with the beta version of the CLI, you are now given the option of using CloudFormation to deploy your skill. This will provide you with a starter CloudFormation template that includes the most common components for building a great skill experience, including an AWS Lambda function and an S3 bucket. 
If you are more experienced with CloudFormation, you can also provision SageMaker, Personalize, or any other AWS resource supported by CloudFormation with ease.<br /> <br /> Learn more about how to get started with the ASK CLI on GitHub at <a href="https://github.com/alexa-labs/ask-cli" target="_blank">https://github.com/alexa-labs/ask-cli</a>.</p>

Improved Response Structure for Client Errors in the CLI and SMAPI
Leo Ohannesian, 2019-08-30 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/errorcode.png._CB437924491_.png" style="height:243px; width:974px" /></p> <p>Today, we are excited to announce improvements to the error responses you receive for skill-development-related operations in the CLI and SMAPI, such as create skill, build model, deploy skill, update skill, and publish skill. Error responses now carry more standardized metadata, are more detailed, and are more actionable, saving you time in debugging.</p> <h2>Error Responses Are Now Standardized and Actionable</h2> <p>Previously, error messages contained disparate metadata, which led to inconsistency across the various failed operations. Error responses are now standardized around three mandatory keys: code, message, and validationDetails. The validationDetails object is further broken down into keys such as originalInstance and allowedDataTypes. With these changes, errors are more structured, consistent, and understandable. 
Below is a before-and-after example of an error response for an invalid data type in the skill manifest’s publishing information:</p> <p><strong><u>Before</u></strong></p> <pre> <code>{
  &quot;error&quot;: {
    &quot;message&quot;: &quot;Invalid data type in skill manifest.&quot;
  }
}
</code></pre> <p><strong><u>After</u></strong></p> <pre> <code>{
  &quot;error&quot;: {
    &quot;code&quot;: &quot;INVALID_DATA_TYPE&quot;,
    &quot;message&quot;: &quot;Instance at property path \&quot;$.manifest.publishingInformation.distributionCountries\&quot; of type \&quot;string\&quot; does not match any allowed primitive types [\&quot;array\&quot;].&quot;,
    &quot;validationDetails&quot;: {
      &quot;originalInstance&quot;: {
        &quot;propertyPath&quot;: &quot;$.manifest.publishingInformation.distributionCountries&quot;,
        &quot;dataType&quot;: &quot;string&quot;
      },
      &quot;allowedDataTypes&quot;: [
        &quot;array&quot;
      ]
    }
  }
}
</code></pre> <p>The new error code structure is live on the CLI and SMAPI today. To <a href="https://developer.amazon.com/docs/smapi/error-codes.html">further understand the new error code structure, read up about it in our docs</a>.</p>

New ASK SDK Support for Express.js and Jinja
Leo Ohannesian, 2019-08-30 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/SDK(1).png._CB437925595_.png" style="height:480px; width:1908px" /></p> <p>We’re excited to announce updates to our Alexa Skills Kit (ASK) SDKs for Node.js and Python that make it easier to build skills and integrate skills into your existing infrastructure. 
Today, we’re adding Express.js support to the Node.js SDK and Jinja template support to the Python SDK.</p> <p>We are also pleased to announce the general availability of Django and Flask support for the Python SDK, which we originally released in beta in April.</p> <h2>Use Your Express.js App to Build Skills</h2> <p>If you develop your skill in Node.js, you can now easily integrate the ASK SDK for Node.js into your new or existing web services built with Express.js. With just a few lines of code, you can now easily secure communications between your Express.js web app and Alexa.</p> <p>Support for Express.js is now available with the latest version of the <a href="https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs" target="_blank">Alexa Skills Kit SDK for Node.js on GitHub</a>. Take a look at our <a href="https://developer.amazon.com/docs/alexa-skills-kit-sdk-for-nodejs/host-web-service.html" target="_blank">examples and documentation</a> to get started.</p> <h2>Use Jinja Templates in Your Python SDK Skills</h2> <p>Python skill developers can simplify and better manage their Alexa JSON responses by using Jinja templates, now supported in the Python SDK. In addition to creating Alexa responses in Python code, you can now have greater control over the resulting JSON using Jinja templates.<br /> <br /> Jinja support is now available in the latest version of the <a href="https://github.com/alexa/alexa-skills-kit-sdk-for-python" target="_blank">ASK SDK for Python on GitHub</a>. 
We also have <a href="https://developer.amazon.com/docs/alexa-skills-kit-sdk-for-python/build-responses.html" target="_blank">documentation and examples available</a> to help you get started.</p> <h2>Django and Flask Support Now Generally Available</h2> <p>We <a href="https://developer.amazon.com/blogs/alexa/post/ee8a8ee4-6a9f-45d7-8205-05222701a5b4/ask-sdks-are-now-easier-to-use-and-integrate" target="_blank">released Django and Flask support for the Python SDK</a> in beta in April, and today we are pleased to announce general availability for all Python skill developers.</p> <p>Get started with our <a href="https://alexa-skills-kit-python-sdk.readthedocs.io/en/latest/WEBSERVICE_SUPPORT.html" target="_blank">documentation</a> to see how you can add skills support to your Django and Flask apps with just a few lines of code using the <a href="https://github.com/alexa/alexa-skills-kit-sdk-for-python" target="_blank">Alexa Skills Kit SDK for Python</a>.</p>

Celebrating the Alexa Champions, Skill Developers and Visionaries Named as Leaders by voicebot.ai’s VOICE 2019
Michelle Wallace, 2019-08-29 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_voicebot-leaders_2_954x240@2x.png._CB437943476_.png" style="height:480px; width:1908px" /></p> <p>Earlier this month, <a href="http://voicebot.ai/" target="_blank">voicebot.ai</a> announced their <a href="https://voicebot.ai/the-top-44-leaders-in-voice/" target="_blank">Top 44 Leaders in Voice 2019</a>. As a company focused on voice as the primary interface, voicebot.ai is a leading source for news, research, and commentary related to voice technology.</p> <p>With so many deserving people in this growing field, we were thrilled that the VOICE 2019 judges chose to honor many of the Alexa Champions and members of the Alexa developer community, not to mention some of Amazon’s own Alexa team members.</p> <p>Join us in congratulating all the leaders in voice. Thanks to their innovation, dedication, and hard work, voice continues to grow as a leading technology in the world of user interfaces.</p> <h2>Alexa Developers Recognized as Top Leaders in Voice</h2> <p>One of the main drivers behind the growth in voice is the community of talented, innovative developers who create the Alexa skills that delight customers. Among those honored at VOICE 2019 are several members of the Alexa development community, including a number of Alexa Champions.</p> <p><a href="https://developer.amazon.com/alexa/champions/adva-levin"><strong>Adva Levin</strong></a>, named a top Design &amp; Product Pro, was the Grand Prize winner of the first <a href="https://developer.amazon.com/blogs/alexa/post/346b7b68-7c04-47e5-b558-80276bf483da/kids-court-adva-levin-spotlight">Alexa Skills Challenge: Kids</a> with her Alexa skill, <a href="https://www.amazon.com/Pretzel-Labs-kids-court/dp/B078H9R4P3/" target="_blank">Kid’s Court</a>. 
A recognized <a href="https://developer.amazon.com/alexa/champions/adva-levin">Alexa Champion</a>, she is the founder of Pretzel Labs, a studio that creates voice-first games and learning skills for children and families.</p> <p>“It's an honor to be on the Leaders in Voice list,” said Levin. “I'm fascinated to see how new breakthroughs in voice will enable us to create more personalized, inclusive and memorable experiences for people worldwide.”</p> <p><a href="https://developer.amazon.com/alexa/champions/jessica-williams"><strong>Jess Williams</strong></a> was named a top Design &amp; Product Pro. She’s also a co-founder of Opearlo, a company that builds games and productivity skills. A developer of several popular Alexa skills, she’s an Amazon <a href="https://developer.amazon.com/alexa/champions/jessica-williams">Alexa Champion</a> for her skill, <a href="https://www.amazon.com/www-asklifebot-com-Panda-Rescue/dp/B078LL5ZL3/" target="_blank">Panda Rescue</a>, which won <a href="https://developer.amazon.com/blogs/alexa/post/ef569fe1-5222-4afa-bf2d-a08a8e999b8c/jess-williams-and-oscar-merry-build-an-award-winning-alexa-kid-skill-designed-for-echo-show">Best Skill Designed for an Echo Show in the Alexa Skills Challenge: Kids</a>. She was also nominated for the 2019 Alexa Award Executive of the Year.</p> <p>“It’s incredible to see how many businesses are using voice now compared to before,” said Williams. “It's amazing to be listed as one of the top 44 people working in voice, especially when you consider who else is on the list.”</p> <p><a href="https://developer.amazon.com/blogs/alexa/post/156b211e-355f-4bc8-b1dc-fde19d9acaad/in-skill-purchasing-takes-volley-s-thriving-voice-business-to-the-next-level"><strong>Max Child</strong></a>, a top Design &amp; Product Pro, co-founded Volley, a company that develops games for Alexa.
His Alexa skill <a href="https://www.amazon.com/Volley-Inc-Song-Quiz/dp/B06XWGR7XZ/" target="_blank">Song Quiz</a> was nominated for a 2019 Webby Award in the Games &amp; Entertainment category, and is recognized as a top earner in the <a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/rewards">Alexa Developer Rewards</a> program.</p> <p>“I'm honored to make the list alongside such a brilliant group of founders and designers,” said Child. “The voice space has been dynamic and exciting for the last few years, and I'm sure we have many exciting years ahead.”</p> <p><a href="https://developer.amazon.com/blogs/alexa/post/06318b51-f3e3-4aef-8cd3-d3e8ec5e0d06/tom-hewitson-builds-a-thriving-business-with-voice-credits-alexa-developer-rewards-for-growth"><strong>Tom Hewitson</strong></a><strong>,</strong> named a top Design &amp; Product Pro, is the CEO of labworks.io and a recognized <a href="https://developer.amazon.com/alexa/champions/tom-hewitson">Alexa Champion</a>. He’s developed some of the most popular Alexa games including True or False, Trivia Hero, and Would You Rather. His skill, <a href="https://www.amazon.com/labworks-io-ltd-True-or-False/dp/B073VNDBGC/ref=sr_1_1?keywords=true+or+false&amp;qid=1565974622&amp;s=digital-skills&amp;sr=1-1" target="_blank">True or False</a>, won a <a href="https://voicebot.ai/2019/05/06/the-2019-webby-award-winning-voice-applications/" target="_blank">2019 Webby Award</a> in the Education &amp; Reference and People’s choice categories.</p> <p><a href="https://developer.amazon.com/alexa/champions/mark-tucker"><strong>Mark Tucker</strong></a> of Shazami Design, named a top Design &amp; Product Pro in Voice, was one of the first <a href="https://developer.amazon.com/alexa/champions">Amazon Alexa Champions</a> for his early work as an advocate among voice developers and end users. During his time as the principal architect at VoiceXP, he developed more than 30 Alexa skills. 
Tucker recently launched the open source Speech Markdown, which helps developers, designers, and content authors with text to speech formatting.</p> <p><a href="https://developer.amazon.com/blogs/alexa/post/Tx2TM2QAL89ND1Q/earplay-an-interactive-audio-only-storyteller-made-even-better-by-alexa"><strong>Jonathon Myers</strong></a>, named a top Technologist, cofounded the company Earplay in 2013 as a way to create and distribute voice operated interactive story experiences. His Alexa skill <a href="https://www.amazon.com/Mr-Robot-Daily-Five-Nine/dp/B076K7QL3V/ref=sr_1_1?keywords=mr.+robot+daily+five%2Fnine&amp;qid=1565929056&amp;s=digital-skills&amp;sr=1-1" target="_blank">Mr. Robot Daily Five/Nine</a> won <a href="https://developer.amazon.com/blogs/alexa/post/dd91e4fc-01ff-4e14-b7e4-b5773adddaa3/congrats-to-the-webby-winning-alexa-skills-explore-the-skills-voted-as-the-best-voice-experiences-in-2019">2019 Webby Awards</a> in the categories of Games &amp; Entertainment, Best Writing, and People’s Voice.</p> <h2>voicebot.ai Invites an Alexa Champion to be a Judge</h2> <p>To compile this list of Leaders in Voice, voicebot.ai put together a panel of judges who themselves have made significant contributions to the voice industry. Among them was <a href="https://developer.amazon.com/blogs/alexa/post/04bb2b8b-0a94-443b-a378-2de443779a64/invoked-apps-adds-in-skill-purchasing-to-fuel-growth-and-financial-success">Nick Schwab</a>, a recognized <a href="https://developer.amazon.com/alexa/champions/nick-schwab">Alexa Champion</a> and founder of his voice company, Invoked Apps. Schwab developed <a href="https://www.amazon.com/Nick-Schwab-Ambient-Noise-Sounds/dp/B01LXQXW3G" target="_blank">Rain Sounds</a>, the first Alexa skill for ambient noise. Nick continues to publish successful Alexa skills and is a recognized leader in the voice-first community.</p> <p>“Each of the judges brought a unique perspective to the panel, and I was honored to participate,” said Schwab. 
“There are so many people doing amazing work and progressing the voice industry. I can't wait to see what comes next!”</p> <h2>Amazon’s Own Also Recognized as Visionaries in Voice</h2> <p>In addition to the Alexa developers, many of Amazon’s own were included in voicebot.ai’s list.</p> <p>Among those recognized as innovators driving the future of voice were <strong>Jeff Bezos</strong>, Amazon Chairman, CEO and President, <strong>Dave Isbitski</strong>, the chief evangelist of Alexa and Echo,<strong> Rohit Prasad</strong>, Vice President and Head Scientist of Alexa Artificial Intelligence,<strong> Max Amordeluso</strong>, lead evangelist of Amazon Alexa in Europe, and <strong>Paul Cutsinger</strong>, the head of Amazon Alexa Voice Design Education.</p> <h2>Get Started Building for Voice</h2> <p>So, here’s to all the Leaders in VOICE 2019. We’re looking forward to meeting the new leaders in voice over the next year, and we can’t wait to see who will take the top honors for VOICE 2020.</p> <p>Join the growing community of Alexa developers who are innovating with voice. Visit our website to learn about the <a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit">Alexa Skills Kit</a> and start building for voice today.</p> /blogs/alexa/post/9c5174ef-12e3-47a3-9e09-a27d1efbc604/how-to-make-neural-language-models-practical-for-speech-recognition How to Make Neural Language Models Practical for Speech Recognition Larry Hardesty 2019-08-29T13:00:00+00:00 2019-08-29T13:48:29+00:00 <p>Techniques include weighting training samples from out-of-domain data sets and noise contrastive estimation, which turns the calculation of massive probability distributions into simple binary decisions.</p> <p>An automatic-speech-recognition system — such as Alexa’s — converts speech into text, and one of its key components is its language model.
Given a sequence of words, the language model computes the probability that any given word is the next one.&nbsp;</p> <p>For instance, a language model would predict that a sentence that begins “Toni Morrison won the Nobel” is more likely to conclude “Prize” than “dries”. Language models can thus help decide between competing interpretations of the same acoustic information.</p> <p>Conventional language models are <em>n</em>-gram based, meaning that they model the probability of the next word given the past <em>n</em>-1 words. (<em>N</em> is typically around four.) But this approach can miss longer-range dependencies between words: for instance, sentences beginning with the phrase “Toni Morrison” may have a high probability of including the phrase “Nobel Prize”, even if the two phrases are more than four words apart.&nbsp;</p> <p>Recurrent neural networks can learn such long-range dependencies, and they represent words as points in a continuous space, which makes it easier to factor in similarities between words. But they’re difficult to integrate into real-time speech recognition systems. In addition, although they outperform conventional <em>n</em>-gram-based language models, they have trouble incorporating data from multiple data sets, which is often necessary, as data can be scarce in any given application context.</p> <p>In a <a href="https://arxiv.org/pdf/1907.01677.pdf" target="_blank">paper</a> we’re presenting at Interspeech, my colleagues and I describe a battery of techniques we used to make neural language models practical for real-time speech recognition. In tests comparing our neural model to a conventional model, we found that it reduced the word recognition error rate by 6.2%.</p> <p>In our experiments, we investigated the scenario in which in-domain data is scarce, so it has to be supplemented with data from other domains. This can be tricky. 
After all, language models are highly <a href="https://developer.amazon.com/blogs/alexa/post/f2c93a90-5539-4386-aefb-2342f9b1cc4c/new-approach-to-language-modeling-reduces-speech-recognition-errors-by-up-to-15" target="_blank">context dependent</a>: the probabilities of the phrases “Red Sox” and “red sauce”, for instance, would be very different in the sports domain and the recipe domain.</p> <p>To ensure that we were using data from other domains effectively, we first built conventional <em>n</em>-gram language models for our in-domain and out-of-domain training data sets. These models were combined linearly to minimize <em>perplexity</em> — a measure of how well a probability distribution predicts a sample — on in-domain data. On the basis of this, we assigned each data set a score that measured its relevance to the in-domain data.</p> <p>That score then determined the likelihood that a sample from a given data set would be selected for the supplementary data set. If, for instance, the in-domain data set had a relevance score of 75%, and a second out-of-domain data set had a score of 25%, then the first set would contribute three times as many samples to the training data as the second.</p> <p><img alt="NLM_weighting_(1).jpg" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/NLM_weighting_(1).jpg._CB437600316_.jpg?t=true" style="display:block; height:166px; margin-left:auto; margin-right:auto; width:400px" /></p> <p style="text-align:center"><em><sup>We select training examples for our neural language model from both in-domain and out-of-domain data sets, according to probabilities (alpha, beta, etc.) 
that reflect the data sets’ relevance to the in-domain set.</sup></em></p> <p>We combined this novel data-sampling strategy with transfer learning, by initializing our model’s parameters on out-of-domain data.</p> <p>This addressed the data scarcity problem, but we still had to contend with the challenge of integrating neural networks into the speech recognizer without adding much additional latency to the system. Our approach was one that’s common in neural-language-model research: we begin by passing incoming data through a speech recognizer with a conventional <em>n</em>-gram language model, then refine the first model’s hypotheses using the neural model.</p> <p>The risk with this approach is that the <em>n</em>-gram model will reject hypotheses that the more powerful neural model would find reason to consider. To lower that risk, once we had built our neural model, we used it to generate synthetic data, which provided supplementary training data for the first-pass model. This brought the two models into better alignment.</p> <p>Taking advantage of the two-pass approach also required some changes to the way we trained our neural model. Typically, a neural language model that had been fed a sequence of words would compute the probability that every word in its vocabulary should be the next word. Once you factor in names like Toni Morrison’s, a speech recognition application could easily have a vocabulary of a million words, and computing a million separate probabilities for every input would be time consuming.</p> <p>The first-pass model, however, should have winnowed the number of possibilities down to just a handful, and we wanted our neural model to consider only those possibilities. So we trained it using a technique called noise contrastive estimation.</p> <p>When a recurrent language model is fed a sentence, it processes the words of the sentence one at a time. 
That’s how it learns to model dependencies across sequences of words.</p> <p>With noise contrastive estimation, the model is trained on pairs of words, not individual words. One of the paired words is the true target — the word that actually occurs at that position in the input sentence. The other word is randomly selected, and the model must learn to tell the difference. At training time, this turns the task of computing, say, a million probabilities into a simple binary choice. Moreover, at inference time, the model can directly estimate the probability of the target word, without normalizing over all of the other words in the vocabulary, thus drastically reducing the computational cost.&nbsp;</p> <p>Finally, to increase the efficiency of our neural model still further, we quantize its weights. A neural network consists of simple processing nodes, each of which receives data from several other nodes and passes data to several more. Connections between nodes have associated weights, which indicate how big a role the outputs of one node play in the computation performed by the next.</p> <p>Quantization is the process of considering the full range of values that a particular variable can take on and splitting it into a fixed number of intervals. All the values within a given interval are then approximated by a single number.</p> <p>Quantization makes the neural language model more efficient at run time. 
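The interval-based scheme described above can be sketched in a few lines of plain Python. This is a simplified illustration, not the implementation used in the paper: the interval count and the choice of interval midpoints as representative values are assumptions.

```python
def quantize(weights, num_intervals=256):
    """Approximate each weight by a single representative value.

    Splits the full range of the weights into `num_intervals` equal
    intervals and replaces every value inside an interval with that
    interval's midpoint.
    """
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / num_intervals
    quantized = []
    for w in weights:
        # Interval index, clipped so that `hi` itself falls into the
        # last interval rather than one past the end.
        idx = min(int((w - lo) / step), num_intervals - 1)
        quantized.append(lo + (idx + 0.5) * step)
    return quantized

# With 4 intervals, nearby weights collapse onto the same value.
print(quantize([-1.0, -0.3, 0.01, 0.02, 0.9], num_intervals=4))
```

In practice a quantized model would store only the low-bit interval indices, plus `lo` and `step` for reconstruction, which is where the run-time savings come from.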
In our experiments, the addition of the neural model increased processing time by no more than 65 milliseconds in 50% of cases and no more than 285 milliseconds in 90% of cases, while reducing the error rate by 6.2%.</p> <p><em>Anirudh Raju is a speech scientist in the Alexa Speech group.</em></p> <p><a href="https://arxiv.org/pdf/1907.01677.pdf" target="_blank"><strong>Paper</strong></a>: “Scalable Multi Corpora Neural Language Models for ASR”</p> <p><a href="https://developer.amazon.com/alexa/science" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Acknowledgments</strong>: Denis Filimonov, Gautam Tiwari, Guitang Lan, Ariya Rastrow</p> <p><strong>Related</strong>:</p> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/9e8392c6-5476-4a34-a2d8-c4e479677954/new-speech-recognition-experiments-demonstrate-how-machine-learning-can-scale" target="_blank">New Speech Recognition Experiments Demonstrate How Machine Learning Can Scale</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/f2c93a90-5539-4386-aefb-2342f9b1cc4c/new-approach-to-language-modeling-reduces-speech-recognition-errors-by-up-to-15" target="_blank">New Approach to Language Modeling Reduces Speech Recognition Errors by Up to 15%</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/ab5bd7dd-ffa4-4607-b5b9-3c87f2bca5d1/contextual-clues-can-help-improve-alexa-s-speech-recognizers" target="_blank">Contextual Clues Can Help Improve Alexa’s Speech Recognizers</a></li> </ul> /blogs/alexa/post/d6ce23bc-cbe3-4658-9df1-a88e56ae5657/daniel-mittendorfs-voice-first-start-up-nutzt-in-skill-purchasing Daniel Mittendorf’s Voice-First Start-Up Uses In-Skill Purchasing Kristin Fritsche 2019-08-29T08:00:00+00:00 2019-08-29T08:00:00+00:00 <p><img alt="Daniel Mittendorf’s Voice-First Start-Up Uses In-Skill Purchasing" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/ASMXXX-DE-Developer-Mittendorf-BlogPost.jpg._CB439394488_.jpg?t=true"
/></p> <p>When Daniel started working with Alexa in 2017, he quickly came up with a skill idea that would make his life easier. With the newly available In-Skill Purchasing (ISP) feature, the developer is now also adding premium content to his German-language skills and sharing his experience with the community. ISP can be used in all kinds of skills, whether in a simple quiz skill or in an interactive game skill such as <a href="https://www.amazon.de/gp/product/B07QC11Q84?ref=cm_sw_em_r_as_dp_9wtankulpCEIq" target="_blank">&quot;Mein Haustier&quot;</a> (&quot;My Pet&quot;).
Implementing ISP is not difficult: it can be done directly in the <a href="https://developer.amazon.com/docs/in-skill-purchase/create-isp-dev-console.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">Developer Console</a> or via the <a href="https://developer.amazon.com/docs/in-skill-purchase/use-the-cli-to-manage-in-skill-products.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevsE_DEDevs&amp;sc_segment=DEDevs" target="_blank">Command Line Interface</a>.</p> <p>Daniel still remembers his first skill well: “I was standing in the kitchen and wanted to follow a soccer match while cooking, but I had no TV there and didn't know whether the match was being broadcast on the radio somehow. That's when I had the idea of offering this service through an Alexa skill,” recalls Daniel Mittendorf, founder of the voice agency <a href="https://digivoice.io/" target="_blank">digivoice.io</a>. Daniel tinkered away at his first skill, “<a href="https://www.amazon.de/gp/product/B078T1ZXPN?ref=cm_sw_em_r_as_dp_CQaagryIqVjdF" target="_blank">Streamplayer</a>”, and was one of the first German developers to optimize a skill for the Echo Show and Echo Spot.</p> <h2>From Developer to Start-Up Founder</h2> <p>A lot has happened since then. Daniel's first skill was an immediate success and was used enthusiastically by customers. Daniel went on to develop more skills in the areas of ambient sounds and games. “I then realized that the field of voice technology could be an interesting line of business.
Many companies also want to offer their services through Alexa or raise awareness of their brand,” Daniel observed, and continues: “Before that, I was a project manager at an online retailer. I had a secure job, but I then decided to make use of the trend and go independent with my own agency, digivoice.io.” In January 2019 he launched his voice-first agency, which now offers a full service for voice experiences, from planning and development through to maintenance.</p> <h2>Earning Money with In-Skill Purchasing (ISP)</h2> <p>Recently, <a href="https://developer.amazon.com/blogs/alexa/post/6704c2d4-eac1-4a97-b631-57d2a43a1cd3/developers-in-the-uk-and-germany-can-now-monetize-alexa-skills-with-in-skill-purchasing-isp?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs">In-Skill Purchasing became available in Germany and Austria</a> as well. It allows developers to monetize their skills by selling digital items and services. “This makes Alexa even more interesting for companies and developers. It is no longer just about winning new users and increasing brand awareness, but also about earning money. That opens up entirely new possibilities for developers,” Daniel is certain.&nbsp;</p> <p>The first step for every skill developer should be to become familiar with the particularities of voice design.
Developing for a voice experience <a href="https://build.amazonalexadev.com/vui-vs-gui-guide-de.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">is very different from developing for a screen-based application</a>. “You have to think differently about voice applications; you primarily have the voice to guide and entertain the user,” Daniel explains.</p> <p>His advice: “Essentially, the same development requirements apply to a commercial skill as to any other skill. You first have to think about what makes sense from the user's point of view and adds value, is entertaining, or is simply fun. Before I start developing, I write down the entire interaction (the voice interface) and run through it with others to see whether my conversation flow is understandable and natural.”</p> <p>Daniel found the implementation of ISP easy, though: “The documentation is easy to understand, and there are blog posts with best practices; that was no problem.” The next important step was finding the right use case: “I thought carefully about what would be suitable for a premium skill and picked a special project.
I wanted to develop a kind of Tamagotchi for Alexa.” Daniel had already made first attempts at this use case with his skill “<a href="https://www.amazon.de/gp/product/B07HZ5VYKW?ref=cm_sw_em_r_as_dp_AruT5yIQ2jIaT" target="_blank">Seehundstation Norddeich</a>”, in which you raise a seal pup and then release it into the wild.&nbsp;</p> <p>With “<a href="https://www.amazon.de/gp/product/B07QC11Q84?ref=cm_sw_em_r_as_dp_gX1NJpjGQzECq" target="_blank">Mein Haustier</a>” he expanded this idea and developed his first ISP skill. The skill lets users take in their own virtual pet, which they have to care for over 21 days, from feeding it to taking it to the vet. Since every skill with premium content must also offer free content, Daniel provides two pets free of charge in his skill; additional animals, such as a dragon or a unicorn, can be acquired via a one-time purchase in the skill. Users can also buy consumables such as coins, food, and toys.</p> <h2>ISP Best Practices</h2> <p>“With ‘Mein Haustier’ it was important to me to build in many different activities so the player doesn't get bored,” Daniel reports. “For example, I used the <a href="https://aws.amazon.com/de/polly/" target="_blank">Amazon Polly</a> voices to make the skill interaction more varied. I developed with the <a href="https://developer.amazon.com/docs/alexa-design/apl.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">Alexa Presentation Language (APL)</a> so that users can also see their pets on different screen formats.
I also put a lot of time into the wording of the texts so that they are engaging enough to entertain users.”</p> <p>In general, it is important to integrate in-skill purchases organically into the flow of the game and not interrupt it unnecessarily: “It makes sense to place the so-called upsell where it doesn't disturb the flow but adds value. But you don't need to be too hesitant, either. The best approach is to try out what users respond well to and adjust the upsell frequency accordingly. The Developer Console gives you a good overview of the data,” Daniel advises, and continues: “The premium services can be a real highlight. For example, I built an upsell into the pet shop, where the user can buy the dragon or the unicorn.”</p> <p>To keep things exciting even with repeated use, Daniel updates his skills regularly. The “Mein Haustier” skill also contains a leaderboard on which users can move up or down. “For me as an entrepreneur, in-skill purchases are a way to profit financially from my success with Alexa skills. I control which content I offer and how often I place upsells in the skill, so I can directly influence my success.”</p> <p>Daniel develops his skills with the <a href="https://developer.amazon.com/docs/in-skill-purchase/use-the-cli-to-manage-in-skill-products.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">ASK Command Line Interface</a> and hosts the code on <a href="https://aws.amazon.com/de/lambda/" target="_blank">AWS Lambda</a>. “For me, the most important topics are updates and gathering feedback.
Talk to real users, look at your reviews, and keep adding new content so the skill stays exciting. Marketing is also important: I share the skills on Twitter and in Facebook groups, and I once set up a prize draw to promote one of the skills,” Daniel explains.</p> <p>In his current project, Daniel is developing the skill “Fit f&uuml;r&acute;s Abi” for the well-known publisher Westermann; it will shortly be available in the Alexa Skills Store as well.</p> <h2>Resources</h2> <ul> <li><a href="https://build.amazonalexadev.com/alexa-skill-monetization-guide-de.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">Guide: Earn Money with Alexa Skills</a></li> <li><a href="https://build.amazonalexadev.com/How_to_Gaming_ISP_StarterKit_German.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">Guide: Build Voice Games for Alexa</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/ec72e61c-652d-4f0b-934b-802143ce2c61/code-deep-dive-implementing-in-skill-purchasing-for-entitlements-with-node-js?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs">Code Deep Dive: Implementing In-Skill Purchasing for Entitlements with Node.js</a></li> <li><a 
href="https://developer.amazon.com/docs/in-skill-purchase/add-isps-to-a-skill.html?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs" target="_blank">Add ISP to Your Skill Code</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/156b211e-355f-4bc8-b1dc-fde19d9acaad/in-skill-purchasing-takes-volley-s-thriving-voice-business-to-the-next-level?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs">In-Skill Purchasing Takes Volley’s Thriving Voice Business to the Next Level</a></li> </ul> <h2>Build for Alexa, Get Goodies</h2> <p>Take part in our developer promotion: publish skills by November 30, 2019, and secure your goodies. Publish your first Alexa skill and get an <a href="https://www.amazon.de/Echo-Dot-3-Gen-Intelligenter-Lautsprecher-mit-Alexa-Anthrazit-Stoff/dp/B07PHPXHQS/ref=sr_1_1?__mk_de_DE=%C3%85M%C3%85%C5%BD%C3%95%C3%91&amp;keywords=echo+dot&amp;qid=1564132122&amp;s=gateway&amp;sr=8-1" target="_blank">Amazon Echo Dot</a>. Already know your way around? Use the Alexa Presentation Language (APL) in your skill, reach 150 users during the promotion period, and qualify for an <a href="https://www.amazon.de/dp/B0793FBLGZ/ref=cm_sw_em_r_mt_dp_U_4aWqDbGF7S6JF" target="_blank">Echo Show</a>.
If you offer in-skill purchases in your skill and reach 150 users during the promotion period, you qualify for a voucher for the <a href="https://aws.amazon.com/certification/certified-alexa-skill-builder-specialty/" target="_blank">AWS Certified Alexa Skill Builder Training</a>. You’ll find tips and the terms of participation on our website. <a href="https://developer.amazon.com/en-gb/alexa-skills-kit/alexa-developer-skill-promotion?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Discover_DE_DEDevs&amp;sc_segment=DEDevs">Learn more</a></p> <p>&nbsp;</p> /blogs/alexa/post/77c8f0b9-e9ee-48a9-813f-86cf7bf86747/setup-your-local-environment-for-debugging-an-alexa-skill Set Up Your Local Environment for Debugging an Alexa Skill Leo Ohannesian 2019-08-29T01:28:34+00:00 2019-08-30T17:03:37+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/debugging.png._CB437613271_.png" style="height:240px; width:954px" /></p> <p>If you’re hosting your Alexa skill on AWS Lambda, debugging can be time-consuming and require log parsing. In this blog post, we’ll demonstrate how you can speed up your development process by setting up a local debugging workflow with our provided scripts and other proxy solutions.
With local debugging, you can debug your skill in the same environment you develop in, enabling you to iterate on and deliver improvements to your skill much more quickly.</p> <h2>Set up Your Local Debugging Workflow</h2> <p>If this is your first time building a skill, learn how to get started with our <a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/courses/cake-walk" target="_blank">beginner training course</a>.<br /> <br /> If you’ve already started building your skill and are using Alexa-hosted skills, follow the steps below in <em>Set up Your Local Debugging Workflow with the ASK CLI</em> instead.</p> <h3>1) Download the debug run script</h3> <p>To invoke your skill code in your local environment, you’ll need to use our debug run script. <a href="https://raw.githubusercontent.com/alexa/alexa-cookbook/master/tools/LocalDebugger/nodejs/local-debugger.js" target="_blank">Download a copy of the script for Node.js</a> and place it in the root of your skill’s project directory.</p> <h3>2) Forward Alexa requests to your skill</h3> <p>Alexa typically sends requests to skill code that is hosted on a service such as AWS Lambda. To debug your skill in a local environment, however, you’ll need to route Alexa requests to your local machine. To achieve this, we’ll be using a proxy service. There are many proxy options available; in this blog post, we’ll be using a third-party service called ngrok, which you can <a href="https://ngrok.com/download" target="_blank">download directly from their website</a>.<br /> <br /> Once you’ve downloaded ngrok, start a connection to the ngrok proxy on an open port.
We've chosen to use port 3001 for this example.</p> <pre> $ ./ngrok http 3001</pre> <p><strong>NOTE: </strong>Unless you’ve registered for one of ngrok’s paid plans, sessions will expire after 8 hours and you will need to restart the ngrok process above.<br /> <br /> Next, copy the <code>https</code> forwarding address provided in the ngrok output.</p> <pre>
ngrok by @inconshreveable                                (Ctrl+C to quit)

Session Status                online
Session Expires               7 hours, 59 minutes
Update                        update available (version 2.3.34, Ctrl-U to update)
Version                       2.3.29
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://abc123.ngrok.io -&gt; http://localhost:3001
Forwarding                    <strong>https://abc123.ngrok.io</strong> -&gt; http://localhost:3001
Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
</pre> <p>Copy and paste the URL into the <strong>Default Region</strong> field under <strong>Endpoint</strong> within the <strong>Build</strong> tab. Ensure the SSL certificate type is set to “My development endpoint is a sub-domain of a domain that has a wildcard certificate”.<br /> <img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Screen_Shot_2019-08-28_at_11.34.25_AM.png._CB437615901_.png" /></p> <p>Once updated, be sure to save your changes by clicking <strong>Save Endpoints</strong>:<br /> <img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Screen_Shot_2019-08-28_at_11.42.44_AM.png._CB437616081_.png" /><br /> Alexa’s requests will now be forwarded through ngrok to your local environment.</p> <h3>3) Start your debugger</h3> <p>To start debugging your skill with <a href="https://code.visualstudio.com" target="_blank">VSCode</a>, you’ll need to add a launch configuration to your skill project. You can do this from the menu by selecting <strong>Debug &gt; Add Configuration...</strong> Copy and paste the configuration below into your launch configuration.
Be sure to update the <code>program</code> path to the <code>local-debugger.js</code> file and the <code>skillEntryFile</code> to your Lambda handler file:</p> <pre>
{
  &quot;version&quot;: &quot;0.2.0&quot;,
  &quot;configurations&quot;: [
    {
      &quot;type&quot;: &quot;node&quot;,
      &quot;request&quot;: &quot;launch&quot;,
      &quot;name&quot;: &quot;Launch Program&quot;,
      &quot;program&quot;: &quot;${workspaceRoot}/local-debugger.js&quot;,
      &quot;args&quot;: [
        &quot;--portNumber&quot;, &quot;3001&quot;,
        &quot;--skillEntryFile&quot;, &quot;Path/To/index.js&quot;,
        &quot;--lambdaHandler&quot;, &quot;handler&quot;
      ]
    }
  ]
}
</pre> <p>You can now set breakpoints throughout your skill code. When you’re ready to start debugging, start the debugger from the menu by selecting <strong>Debug &gt; Start Debugging</strong>.</p> <h3>4) Invoke your skill and debug</h3> <p>To invoke your skill, head over to the developer console and select the <strong>Test</strong> tab. Enable your skill for testing by setting the test environment to <strong>Development</strong> if you haven’t already.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Screen_Shot_2019-08-28_at_11.36.52_AM.png._CB437615681_.png" /></p> <p>All requests sent from the Alexa simulator will be forwarded to your local environment, triggering any breakpoints you’ve set.<br /> <br /> <strong>NOTE: </strong>The Alexa service expects a response within 8 seconds.
You’ll need to repeat your dialog if a response is not provided within this 8-second window.</p> <h2>Set up Your Local Debugging Workflow with the ASK CLI</h2> <p>If this is your first time creating a skill using the ASK CLI, review our <a href="https://github.com/alexa/skill-sample-nodejs-hello-world/blob/master/instructions/cli.md" target="_blank">beginner training guide for the CLI</a>.</p> <h3>1) Download the debug run script</h3> <p>If you’re hosting your skill with Alexa-hosted skills, you’ll first need to use the ASK CLI to clone your skill down to your local environment. Your skill’s ID can be found in the developer console under <strong>Endpoint</strong>.<br /> <img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Screen_Shot_2019-08-28_at_12.04.24_AM.png._CB437615683_.png" /></p> <pre> $ ask clone -s &lt;SKILL_ID&gt;</pre> <p>If you’re creating a skill for the very first time using the CLI, just run the following command and follow the on-screen prompts:</p> <pre> $ ask new </pre> <p>Then, download a copy of the <code>local-debugger.js</code> script to the root of your skill project.</p> <pre> $ curl <a href="https://raw.githubusercontent.com/alexa/alexa-cookbook/master/tools/LocalDebugger/nodejs/local-debugger.js">https://raw.githubusercontent.com/alexa/alexa-cookbook/master/tools/LocalDebugger/nodejs/local-debugger.js</a> &gt; Path/To/MySkillProject/local-debugger.js</pre> <h3>2) Forward Alexa requests to your skill</h3> <p>We’ll be using ngrok as we did in the previous workflow.
You can follow the same steps outlined above in <em>2) Forward Alexa Requests to Your Skill</em> to download and set up ngrok.</p> <pre> $ ./ngrok http 3001</pre> <p><strong>NOTE: </strong>Unless you’ve registered for one of ngrok’s paid plans, sessions will expire after 8 hours and you will need to restart the ngrok process above.<br /> <br /> Next, copy the <code>https</code> forwarding address provided in the ngrok output.</p> <pre>
ngrok by @inconshreveable                                (Ctrl+C to quit)

Session Status                online
Session Expires               7 hours, 59 minutes
Update                        update available (version 2.3.34, Ctrl-U to update)
Version                       2.3.29
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://abc123.ngrok.io -&gt; http://localhost:3001
Forwarding                    <strong>https://abc123.ngrok.io</strong> -&gt; http://localhost:3001
Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
</pre> <p>To forward requests from Alexa through ngrok to your local machine, you’ll need to add a <code>uri</code> attribute with your ngrok URL, and an <code>sslCertificateType</code> attribute set to <code>Wildcard</code>, in your <code>skill.json</code> file:</p> <pre>
{
  &quot;manifest&quot;: {
    &quot;publishingInformation&quot;: { ... },
    &quot;apis&quot;: {
      &quot;custom&quot;: {
        &quot;endpoint&quot;: {
          &quot;sourceDir&quot;: &quot;lambda/custom&quot;,
          <strong>&quot;uri&quot;: &quot;https://abc123.ngrok.io&quot;,</strong>
          <strong>&quot;sslCertificateType&quot;: &quot;Wildcard&quot;</strong>
        }
      }
    },
    &quot;manifestVersion&quot;: &quot;1.0&quot;
  }
}
</pre> <p>Next, deploy your skill to Alexa:</p> <pre> $ ask deploy --target skill</pre> <p>If you’re creating this skill for the first time, make sure your skill’s model has been built.
This can also be done from the developer console.</p> <pre> $ ask deploy --target model</pre> <h3>3) Start your debugger</h3> <p>Using VSCode, you can follow the previous steps and configuration outlined above in <em>Set up Your Local Debugging Workflow</em>.</p> <h3>4) Invoke your skill and debug</h3> <p>You will first need to enable your skill for testing if you haven’t done so before:</p> <pre> $ ask api enable-skill -s &lt;SKILL_ID&gt; </pre> <p>Finally, you can start a dialog with Alexa using the ASK CLI’s <code>dialog</code> command or from the developer console as outlined above:</p> <pre> $ ask dialog --locale en-US</pre> <h3>5) Revert to Alexa-hosted Lambda</h3> <p>If you’re using Alexa-hosted skills, you’ll need to revert your endpoint back to its original value before publishing your skill. This can be done from the <strong>Code</strong> tab in the developer console.<br /> <img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Screen_Shot_2019-08-28_at_11.38.50_AM.png._CB437615709_.png" /></p> <h2>Conclusion</h2> <p>By incorporating a local debugging workflow into our skill development process, we’ve significantly reduced the amount of time required to debug our skills. We no longer need to wait for code to deploy in order to test our changes, or parse output logs to find out where things went wrong.
We can now debug in the same environment we write our skills, allowing us to iterate much more quickly and focus our efforts on building even better skill experiences.</p> <h2>Related Content</h2> <ol> <li><a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit/courses/cake-walk" target="_blank">Cakewalk Alexa Skill Course</a></li> <li><a href="https://developer.amazon.com/docs/alexa-skills-kit-sdk-for-nodejs/overview.html" target="_blank">Alexa Skills Kit for Node.js SDK Documentation </a></li> <li><a href="https://developer.amazon.com/docs/smapi/quick-start-alexa-skills-kit-command-line-interface.html" target="_blank">Quick Start ASK CLI</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/73fe6778-eb22-45e3-a1a5-a444c8b91c2e/how-to-debug-your-alexa-skill-using-dynamodb-with-the-new-alexa-skills-kit-sdk-helpers-in-python" target="_blank">How to Debug Your Alexa Skill Using DynamoDB with the New Alexa Skills Kit SDK Helpers in Python</a></li> </ol> /blogs/alexa/post/f8251953-17ab-44df-b504-5af79af9684a/use-app-to-app-account-linking-and-new-skill-activation-api-to-simplify-skill-setup Use App-to-App Account Linking and New Skill Activation API to Simplify Skill Setup BJ Haberkorn 2019-08-27T17:03:27+00:00 2019-08-27T17:03:27+00:00 <p><img alt="blog_app-linking_954x240.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_app-linking_954x240.png._CB437417502_.png?t=true" /></p> <p>Now you can use <a href="https://developer.amazon.com/docs/account-linking/app-to-app-account-linking.html">App-to-App account linking</a> and the <a href="https://developer.amazon.com/docs/account-linking/skill-activation-api.html">Alexa Skill Activation API</a> to let customers link their Alexa account and enable your skill from within your mobile application.</p> <p>&nbsp;</p> <p><img alt="blog_app-linking_954x240.png" 
src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_app-linking_954x240.png._CB437417502_.png?t=true" /></p> <p>We are excited to announce <a href="https://developer.amazon.com/docs/account-linking/app-to-app-account-linking.html" target="_blank">App-to-App account linking</a> and the <a href="https://developer.amazon.com/docs/account-linking/skill-activation-api.html" target="_blank">Alexa Skill Activation API</a>, which let your customers link their Alexa account and enable your skill from within your mobile application. This reduces the number of steps required for customers to start using your skill, and can eliminate the need for them to re-enter their account credentials in either of the apps. As a result, customers can enjoy your Alexa skill more quickly and easily.</p> <h2>Two Methods to Reduce Account Linking Friction</h2> <p>Previously, customers could only link their Alexa account to their account with your service from the Alexa app or at <a href="https://alexa.amazon.com/" target="_blank">https://alexa.amazon.com/</a>. After opening the Alexa app or navigating to the Alexa website, customers were required to enter their credentials for your service. Some customers dropped off at the beginning of the process, for example, if they did not want to leave your application, and some dropped off in the middle, for example, if they did not remember their credentials.</p> <p>Now, customers can link their account with your service to their Alexa account and enable your skill with a few touches starting in your app. You can prompt users to link their account with Alexa when they set up a new service or device, or surface the option at various places in your experience to improve discovery of your Alexa skill. When a customer chooses to link their account from your app on iOS, the Alexa app launches and asks the user to acknowledge the account linking request.
After acknowledging the request, the user is returned to your app. If the customer is signed into the Alexa app, they do not have to remember their account credentials for either your app or the Alexa app to link accounts. The images below show an example user flow.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaSkillsKit/AcctLink_Happy_Path.png._CB437416230_.png" /></p> <p>This two-touch flow is available today on iOS, and only if the customer has the Alexa app installed. If the customer is using Android or does not have the Alexa iOS app installed, you can still simplify account linking using Login with Amazon (LWA). In this scenario, you can open LWA in an in-app browser window as shown below, allowing the customer to enter their Amazon credentials, authenticate, and confirm the account linking request. We recommend you implement the LWA flow as the primary approach on Android, and as the fallback in your iOS application. We hope to add support for the Alexa app flow to Android in the future.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaSkillsKit/LWA_Acct_Link.png._CB437416442_.png" /></p> <p>In either case, once the user acknowledges the account linking and is returned to your app, you can use the <a href="https://developer.amazon.com/docs/account-linking/skill-activation-api.html" target="_blank">Alexa Skill Activation API</a> to enable the skill for the user and complete account linking.</p> <h2>Learn More and Get Started!</h2> <p><strong>iRobot</strong>, <strong>Tuya Smart</strong>, <strong>Smart Life</strong>, and <strong>Sensi</strong> have already added the App-to-App Account Linking feature to their apps, and <strong>Pandora</strong>, <strong>Wyze</strong>, <strong>TP-LINK Kasa</strong>, and <strong>IKEA</strong> are adding it now.
“Our users would ask why they have to complete setup and account linking separately using the Alexa app when they have already set up their device in our app. With this feature, our users will now be more delighted and engaged with our skill,” says Chris Jones, CTO of iRobot. “With App-to-App Account Linking and the new Skill Activation API, our users can more easily discover the Pandora Skill on Alexa, connect their accounts faster, and start playing their favorite music on their Alexa-enabled devices with just a few taps,” says Tony Calzaretta, VP of Listener Product and Product Design at Pandora. “Pandora and Amazon have a long history of working together to create great listening experiences for our users, and this is no exception.”</p> <p>You can find complete information on how to configure App-to-App Account Linking in our <a href="https://developer.amazon.com/docs/account-linking/app-to-app-account-linking.html" target="_blank">documentation</a>. The documentation also includes code snippets for each of the steps. If you currently have an app or website for your users, we recommend you integrate App-to-App Account Linking to allow users to link accounts seamlessly from within your app or website. Users can also continue to enable your skill and link accounts in the Alexa app. For more information about account linking, check out the following resources:</p> <ul> <li><a href="https://developer.amazon.com/docs/account-linking/understand-account-linking.html" target="_blank">Account Linking Overview</a></li> <li><a href="https://github.com/alexa/skill-sample-nodejs-linked-profile" target="_blank">Sample Account Linking skill</a> using Amazon Cognito</li> </ul> <p>&nbsp;</p>