Alexa Developer Blogs

<h1>Alexa Pioneers: Teen Dev Austin Wilson Uses Alexa to Drive Cars, Spaceships, and His Future</h1> <p><em>Jennifer King | 2018-01-22</em></p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_austin._CB487873575_.png" /></p> <p>Teen developer Austin Wilson from Rocky River, Ohio, likes to dive deep and learn how things work. While searching the web last year for information on Raspberry Pi, he was inspired to build a voice-powered car using the Alexa Skills Kit.</p> <p style="margin-left:0in; margin-right:0in; text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/hwiENSBQrt4" width="640"></iframe></p> <p style="margin-left:0in; margin-right:0in"><em>Editor’s Note: This is an installment of our new series, </em><a href="https://developer.amazon.com/alexa-skills-kit/case-studies/alexa-pioneers"><em>Alexa Pioneers</em></a><em>, which highlights people who are pushing the boundaries of what’s possible with voice. Follow the series to get inspired, then join the pioneers to create your own magical experiences for voice.</em></p> <p style="margin-left:0in; margin-right:0in">Teen developer Austin Wilson likes to dive deep and learn how things work. 
While searching the web last year for information on Raspberry Pi, he stumbled upon Hackster’s <a href="https://developer.amazon.com/blogs/post/Tx2HZ84OSVXXUZF/Introducing-the-Internet-of-Voice-Challenge-with-Hackster.io-and-Raspberry-Pi">Internet of Voice Challenge with Raspberry Pi</a>.</p> <p style="margin-left:0in; margin-right:0in">&quot;I thought, ‘Why not learn Alexa, do something cool with Raspberry Pi, and try my hand at this competition?’&quot; he says.</p> <p style="margin-left:0in; margin-right:0in">The high school senior in Rocky River, Ohio, was inspired to dive in and let his imagination run free. He built a model car using K’NEX components, a Raspberry Pi, an Arduino board, and servos to control the wheels. Then he added voice control using the <a href="https://developer.amazon.com/alexa-skills-kit">Alexa Skills Kit</a> (ASK). The result: Wilson can use his voice to move the car in four directions and change the color of its LED lights.</p> <p style="margin-left:0in; margin-right:0in">“I spent an entire weekend working on it with barely any sleep,” he says. “Now I’m driving my car with my voice.”</p> <p style="margin-left:0in; margin-right:0in">Wilson’s Alexa-powered car won second place in the contest. But his fascination with voice didn’t stop there.</p> <p style="margin-left:0in; margin-right:0in">“Everywhere I look, I see things that can really use a voice interface,” says Wilson.</p> <p style="margin-left:0in; margin-right:0in">Wilson decided to see if he could turn Alexa into a virtual cockpit assistant for his favorite video game, Elite: Dangerous.</p> <p style="margin-left:0in; margin-right:0in">“I was still learning about Alexa,” says Wilson. “I thought, ‘Here’s two of my favorite things, let’s put them together.’”</p> <p style="margin-left:0in; margin-right:0in">Wilson created the Elite: Dangerous Ship Assistant, using the game’s API to access game events and data on the thousands of star systems the pilot could explore. 
Each time Wilson finished one function, he was inspired to do more.</p> <p style="margin-left:0in; margin-right:0in">“I kept having moments where I wanted to do more, to keep going,” he says.</p> <p style="margin-left:0in; margin-right:0in">Eventually, Wilson could control many of the ship’s functions with Alexa.</p> <p style="margin-left:0in; margin-right:0in">Wilson is also teaching others about Alexa—even his fellow students in high school.</p> <p style="margin-left:0in; margin-right:0in">When his school hosted Hour of Code, Wilson was assigned to demonstrate how to build an Alexa skill. After only an hour, most of his class was able to build and modify a “Hello World” skill.</p> <p style="margin-left:0in; margin-right:0in">“It was a really cool process learning how to develop for Alexa and then being able to tell other people how to do it,” says Wilson.</p> <p style="margin-left:0in; margin-right:0in">Wilson now wants to see just how far he can take voice technology.</p> <p style="margin-left:0in; margin-right:0in">“I am definitely inspired. Going forward, I can see myself doing a lot more advanced things with Alexa,” he says. 
“I can't wait to see what else I can create.”</p> <p style="margin-left:0in; margin-right:0in"><em>Tell us about what you’re building for voice with Alexa. </em><a href="https://twitter.com/alexadevs" target="_blank"><em>Tweet us</em></a><em> using the hashtag </em><a href="https://twitter.com/search?q=%23alexapioneers&amp;src=typd" target="_blank"><em>#AlexaPioneers</em></a><em>.</em></p>

<h1>CES 2018: Your Alexa Inventions Will Make Customers’ Lives Easier</h1> <p><em>Jennifer King | 2018-01-19</em></p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/CES_header._CB487605445_.jpg" style="height:240px; width:954px" /></p> <p>At CES 2018, we saw your hard work come to life, bringing Alexa to new categories and enabling voice control on additional products. Innovators shared plans to add Alexa to PCs, cars, headphones, and more. Check out the recap to get inspired, then start building your voice-first creation.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/92L6NgoFt7g" width="640"></iframe></p> <p>Since 2014, Amazon Alexa has delighted customers and inspired hundreds of thousands of developers to build with voice. What started as an idea to make customers’ lives easier has now become a way for businesses and voice pioneers to solve real problems…and entertain their customers along the way.</p> <p>In 2017, tens of millions of new <a href="https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/b/ref=nav_shopall_1_ods_ha_echo_cp?ie=UTF8&amp;node=9818047011" target="_blank">Amazon Echo</a> customers discovered the magic of hands-free voice control by trying Alexa for the first time. 
The rising popularity of voice also inspired many of you to integrate Alexa into your own devices and launch innovative Alexa-enabled products. In just the last year, millions of customers have also talked to Alexa through <a href="https://www.amazon.com/alexa-enabled" target="_blank">third-party Alexa-enabled products</a> from brands like Sonos, ecobee, Logitech, Anker, and more.</p> <p>At CES 2018, we saw your hard work come to life, bringing Alexa to new categories and enabling voice control on products, from light switches and smoke alarms to microwaves and automobiles. You can now ask Alexa for directions to the movies or ask Alexa to make popcorn and turn on Prime Video at home. We’re excited to see what you’ll build next.</p> <h2>Device Makers Continue to Invent New Products with the Alexa Voice Service</h2> <p>A larger variety of Alexa-enabled products means even greater choice for customers. To date, brands have launched over 50 products with the <a href="https://developer.amazon.com/alexa-voice-service">Alexa Voice Service (AVS)</a> built-in, and dozens more were announced at CES 2018, pushing voice-forward design into new areas. Now, it’s easy for customers to add Alexa to every room in the home, at the office, in the car, and on the go. Here are a few of the latest announcements from CES:</p> <p><a href="https://www.amazon.com/b?node=17549366011" target="_blank">Alexa for PC</a> will bring the cloud-based voice service to PCs with the Windows operating system, and will soon be available on computers from HP, Lenovo, Acer, and ASUS, all powered by Intel technology.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/rTokD8oArNc" width="640"></iframe></p> <p>Jabra, Cleer, and Beyerdynamics all announced new headphones with access to Alexa, helping customers bring everything they love about Alexa on the go. 
We recently introduced the <a href="https://developer.amazon.com/blogs/alexa/post/564685cf-0b1b-4fe4-824e-2ce1e88e3e78/amazon-alexa-mobile-accessories-a-new-alexa-enabled-product-category-with-dev-tools-coming-soon">Alexa Mobile Accessories Kit</a>, which makes it even easier to build portable Alexa-enabled devices and will be used by companies like Bose, iHome, and Bowers and Wilkins.</p> <p>And Sound United unveiled the first sound bar with Alexa built-in under the Polk brand, elevating home entertainment for customers.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/m1-UkBq6FAU" width="640"></iframe></p> <p>Alexa is also coming to the car. At CES, <a href="http://pressroom.toyota.com/releases/toyota+introduces+amazon+alexa+vehicles.htm" target="_blank">Toyota</a>, <a href="http://shop.panasonic.com/about-us-latest-news-press-releases/01082018-CES-Amazon.html" target="_blank">Panasonic</a>, <a href="https://www.elektrobit.com/newsroom/elektrobit-eb-announces-will-among-first-amazon-alexa-automotive-software-integrators/" target="_blank">Elektrobit</a>, Faurecia, and <a href="https://www.byton.com/media-kit/files/Press%20Release%20-%202018.01.07%20en.pdf" target="_blank">Byton</a> all announced that they will be building in-car Alexa experiences for customers. These brands join companies like BMW and Ford to bring Alexa to the road. Device makers Garmin and Anker also introduced new Alexa-enabled products that help customers bring Alexa into the vehicle of their choice.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/O2jzNrZ6evk" width="640"></iframe></p> <p>We continue to work with leading technology providers to bring simpler ways to build Alexa-enabled products, which ultimately offers more choices to customers. 
Qualcomm, Amlogic, and Allwinner announced the industry’s first qualified ARM-based <a href="https://developer.amazon.com/blogs/alexa/post/ba17fd33-6510-45d6-b682-ee9ed9ef589c/single-soc-dev-kits-for-avs">single-SoC solutions for AVS</a> that provide simplified architecture, high performance, and low total bill of materials cost. We introduced the <a href="https://developer.amazon.com/blogs/alexa/post/80facfd2-1176-4c4f-94ac-4c5c781011ca/amazon-alexa-premium-far-field-voice-development-kit">Alexa Premium Far-Field Voice Development Kit</a> that offers the best of Amazon far-field voice capture technology from Amazon Echo and Echo Show.</p> <p>In the 18 months since the first AVS device launched, tens of thousands of you have invested resources in creating amazing, new Alexa-enabled products. Because of your dedication and creativity in building with voice and Alexa, customers can now reach Alexa through smart speakers, sound bars, thermostats, smart phones, home appliances, PCs, TVs, wearables, hearables, in-car accessories, head-unit integrations, light switches, mirrors, smoke detectors, set-top boxes, and more.</p> <h2>Smart Home Developers Connect the Modern Home with Alexa</h2> <p>Customers tell us all the time that the ability to control their smart homes with Alexa is delightful and that it simplifies their lives. Now, turning on the lights, adjusting the thermostat, or voice-controlling the TV is as easy as just asking Alexa.</p> <p>In 2017, we introduced enhancements to the <a href="https://developer.amazon.com/alexa/smart-home">Alexa Smart Home Skill API</a> that made it possible for you to build skills to control smart home cameras, entertainment devices such as smart TVs, and cooking appliances. 
Customers have already connected tens of millions of smart home devices to Alexa, including lighting, door locks, entertainment devices, thermostats, smart home cameras, and more.</p> <p>At CES 2018, Sony, TiVo, and Hisense unveiled new Alexa smart home skills that enable customers to voice-control the TV experience. Home appliance makers such as Whirlpool, Delta, LG, and Haier added new skills to help people control all aspects of their home, from TVs and microwaves to air conditioning units and faucets. In all, <a href="https://developer.amazon.com/alexa/smart-home/compatible">more than 4,000 smart home devices</a> from over 1,200 brands can be controlled with Alexa.</p> <p>Voice is the unifying force in smart home technology, and we’re motivated by your creativity. Bringing new hands-free Alexa experiences to the home is helping customers see – now more than ever – that the smart home is becoming simple and convenient.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/O-4tWbRJANU" width="640"></iframe></p> <h2>Alexa Skill Developers Enrich Catalog, Delight Customers</h2> <p>Alexa is not only being integrated into more devices; the cloud-based voice service is also growing smarter every day. In 2017, we launched more than <a href="https://developer.amazon.com/blogs/alexa/post/829a615b-301f-407c-96e7-6956fb988570/2017-alexa-skills-kit-year-in-review-more-than-100-new-products-programs-features-and-tools">100 new products, programs, features, and tools</a> in the Alexa Skills Kit to help developers give voice to their vision. And you’ve taken our tools to create more than 30,000 skills, more than doubling our skill catalog in just a year. 
These innovations include voice-first, multimodal skills that leverage our <a href="https://developer.amazon.com/docs/custom-skills/build-skills-for-echo-show.html">display templates</a> to marry the magic of voice with a screen to enable new scenarios on Echo Show and Echo Spot. Your work has made things faster, easier, and more delightful with voice—and customers have taken notice. Skill engagement has grown by nearly 75% since last January. Four out of five Alexa customers have used Alexa to engage with your skills.</p> <h2>Developers Can Make Alexa More Fun</h2> <p>Customers can now look to Alexa to provide fun for the whole family through engaging, voice-first games. During the holidays, we introduced <a href="https://www.amazon.com/Echo-Buttons-Alexa-Gadget-Pack/dp/B072C4KCQH/ref=sr_1_1?ie=UTF8&amp;qid=1516079670&amp;sr=8-1&amp;keywords=echo+buttons" target="_blank">Echo Buttons</a>, the first product from a new category called <a href="https://developer.amazon.com/alexa/alexa-gadgets">Alexa Gadgets</a>, that offer customers a whole new way to play games with Alexa. Powered by the Gadgets APIs, Alexa Gadgets are Alexa-connected products designed to enhance voice interactions with Echo devices. Customers were excited about Echo Buttons as stocking stuffers, and Buttons sold out just in time for the holidays.</p> <p>Through the <a href="https://developer.amazon.com/alexa/alexa-gadgets/gadgets-skill-api">Gadgets Skill API</a> and the <a href="https://developer.amazon.com/alexa/alexa-gadgets/alexa-gadgets-sdk">Gadgets SDK</a>, developers can build fun experiences and products that turn an Echo device into a hub for interactive play. At CES 2018, we shared more detail about the Gadgets SDK and announced the <a href="https://developer.amazon.com/alexa/alexa-gadgets/dev-kit">Alexa Gadgets Dev Kit</a>. Currently available through an invite-only program, the Alexa Gadgets Dev Kit is designed to help device makers easily create their own Alexa Gadgets. 
<a href="https://developer.amazon.com/alexa/alexa-gadgets/dev-kit">Sign up to learn more</a>.</p> <h2>Build with Alexa in 2018</h2> <p>Hundreds of thousands of developers around the world are building Alexa experiences with the Alexa Voice Service and Alexa Skills Kit. Where the technology goes next will be up to the pioneers who continue to innovate and delight their customers. <strong><em>What will you create this year?</em></strong></p> <p>Learn how to integrate Alexa into your product with the <a href="https://developer.amazon.com/alexa-voice-service">Alexa Voice Service</a>, teach Alexa new capabilities with the <a href="https://developer.amazon.com/alexa-skills-kit">Alexa Skills Kit</a>, or connect your device to Alexa with the <a href="https://developer.amazon.com/alexa/connected-devices">Smart Home Skill API and Gadgets SDK</a>.</p>

<h1>Alexa Skill Teardown: Understanding Entity Resolution with the Pet Match Skill</h1> <p><em>Jennifer King | 2018-01-16</em></p> <p>When building conversational user interfaces, it’s important to think about continued engagement and high-quality interactions. In this two-part blog series, we tear down a skill called Pet Match to show you the ins and outs of dialog management and entity resolution.</p> <p>In a <a href="https://developer.amazon.com/blogs/alexa/post/555d00d6-66b4-4f0b-8974-2021cd9a1630/alexa-skill-teardown-decoding-dialog-management-with-pet-match">previous post</a>, we detailed how a skill called Pet Match uses <strong>Dialog Management</strong> to handle a multi-turn sequence to collect slots from the user. Here we explain how Pet Match uses <strong>Entity Resolution</strong> to strengthen the <strong>interaction model</strong> to understand more complicated responses from the user. 
We recently covered both of these concepts in a webinar on <a href="https://register.gotowebinar.com/register/3772133411211777283" target="_blank">advanced voice design techniques</a>, during which we shared some best practices for applying advanced features to enable customers to engage in multi-turn conversations with Alexa skills. Check out the webinar recording and <a href="https://github.com/alexa/skill-sample-nodejs-petmatch/" target="_blank">download the source code</a> to dive deep on dialog management and entity resolution.</p> <p>Today, we'll focus on <strong>Entity Resolution</strong>, which enables you to add synonyms to your slot values and validate that a user actually said one of them. The Alexa service will then handle resolving the synonyms to your slot values. This simplifies your code, since you don't have to write any code to match the synonym the user said to your slot value.</p> <p>Pet Match finds the perfect pet for the user by asking a series of questions designed to fill the <strong>size</strong>, <strong>temperament</strong>, and <strong>energy</strong> slots. Pet Match only supports dogs, but the same principles can be applied to match the user with cats, birds, turtles, books, movies, video games, or whatever you'd like. Once the three required slots are collected, the values are passed to an API through an HTTP GET request. The API returns the match in a JSON payload, which the skill unpacks and converts into speech output.</p> <p>Pet Match's <strong>size</strong> slot has 4 values: <strong>tiny</strong>, <strong>small</strong>, <strong>medium</strong>, and <strong>large</strong>. But what happens if the user says <strong>huge</strong>? We're going to miss their size preference. We could add <strong>huge</strong> to our slot values, but our Pet Match API only supports the original 4 values. If we pass <strong>huge</strong> to the API, it won't find a match. 
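The HTTP GET described above amounts to serializing the three collected slot values into a query string. A minimal sketch, assuming a hypothetical endpoint URL and parameter names (these are illustrative, not the real Pet Match API):

```javascript
// Build the query URL for a hypothetical pet-match endpoint from the
// three collected slot values. Host and parameter names are assumptions.
function buildPetMatchUrl(size, temperament, energy) {
  const params = [
    `size=${encodeURIComponent(size)}`,
    `temperament=${encodeURIComponent(temperament)}`,
    `energy=${encodeURIComponent(energy)}`,
  ].join('&');
  return `https://example.com/petmatch?${params}`;
}
```

The skill would issue a GET against this URL and unpack the match from the returned JSON payload.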
We could update the Pet Match API to support <strong>huge</strong>, but that won't scale well. Also, consider what happens if we don't own the API and can't modify it.</p> <p style="margin-left:0in; margin-right:0in">Furthermore, APIs are not designed for conversation; they are designed for communication between computer systems. For example, an API that provides Major League Baseball box scores may require a three-letter code instead of the full team name. It would be unnatural to force the user to say the three-letter code, so we can map the team name as a synonym of the three-letter code: the user says the team name, but our backend uses the three-letter code to make the API call. For Pet Match, we can map <strong>huge</strong> to <strong>large</strong>. That way we can still pass <strong>large</strong> to the Pet Match API while still capturing the synonym the user said.</p> <p style="margin-left:0in; margin-right:0in">While it's possible to create our own synonym maps in our back-end code, the Alexa Skills Kit (ASK) provides everything we need to resolve synonyms with <strong>Entity Resolution</strong>. Take a look at Pet Match's <a href="https://github.com/alexa/skill-sample-nodejs-petmatch/blob/master/casestudy/interaction-model-part2.json">Interaction Model</a> and look for <strong>sizeType</strong>.</p> <pre> <code>{
  &quot;name&quot;: &quot;sizeType&quot;,
  &quot;values&quot;: [
    {
      &quot;id&quot;: null,
      &quot;name&quot;: {
        &quot;value&quot;: &quot;large&quot;,
        &quot;synonyms&quot;: [
          &quot;huge&quot;,
          &quot;truck&quot;,
          &quot;gigantic&quot;,
          &quot;eat me out of house&quot;,
          &quot;scary big&quot;,
          &quot;ginormous&quot;,
          &quot;ride&quot;,
          &quot;waist height&quot;
        ]
      }
    },
    ...
}
</code></pre> <p>As you can see, each <strong>value</strong> has a list of <strong>synonyms</strong>. The <strong>large</strong> slot value has been mapped to <strong>huge</strong>, so if the user says <strong>huge</strong> the value will resolve to <strong>large</strong>. <strong>Entity Resolution</strong> is not limited to single-word phrases. 
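For contrast, this is roughly the hand-rolled synonym map mentioned above that we would otherwise maintain in our back-end code; Entity Resolution moves this bookkeeping into the interaction model. (A hypothetical sketch, not code from the Pet Match sample.)

```javascript
// Hand-rolled synonym resolution for the size slot: the mapping that
// Entity Resolution otherwise does for us. Multi-word synonyms work
// the same way as single words.
const sizeSynonyms = {
  'huge': 'large',
  'gigantic': 'large',
  'ginormous': 'large',
  'eat me out of house': 'large',
  'scary big': 'large',
};

function resolveSize(spoken) {
  // Fall back to the raw value when it is already a canonical size.
  return sizeSynonyms[spoken] || spoken;
}
```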
You're able to combine several words to make more complex synonyms such as &quot;eat me out of house&quot; and &quot;scary big.&quot; This allows the user to speak with your skill in a more natural way. If the user says, &quot;I want a dog that will eat me out of house,&quot; the skill will resolve the value to <strong>large</strong> and pass it to the Pet Match API. Let's take a look at the JSON that is sent to our backend when we have a match. In this case the user said, &quot;eat me out of house.&quot;</p> <pre> <code>&quot;size&quot;: {
  &quot;name&quot;: &quot;size&quot;,
  &quot;value&quot;: &quot;eat me out of house&quot;,
  &quot;resolutions&quot;: {
    &quot;resolutionsPerAuthority&quot;: [
      {
        &quot;authority&quot;: &quot;amzn1.er-authority.echo-sdk.[skill-id].sizeType&quot;,
        &quot;status&quot;: {
          &quot;code&quot;: &quot;ER_SUCCESS_MATCH&quot;
        },
        &quot;values&quot;: [
          {
            &quot;value&quot;: {
              &quot;name&quot;: &quot;large&quot;,
              &quot;id&quot;: &quot;afacdb0a401ccdf6b48551bbc00e8a74&quot;
            }
          }
        ]
      }
    ]
  },
  ...
</code></pre> <p>The value &quot;eat me out of house&quot; is the synonym, and the value or values it resolved to are contained in the <strong>resolutions</strong> object. Upon a match we get an <strong>ER_SUCCESS_MATCH</strong> status code. To access the resolved value programmatically, you can call <code>this.event.request.intent.slots.size.resolutions.resolutionsPerAuthority[0].values[0].value.name</code>.</p> <h2>Simplification</h2> <p>The object is pretty complex, and Pet Match really only needs the synonym and the resolved value. To simplify the object, use the getSlotValues function. It will return a simplified object of the form:</p> <pre> <code>{
  &quot;SlotName&quot;: {
    &quot;synonym&quot;: '',
    &quot;resolved&quot;: '',
    &quot;isValidated&quot;: false
  },
  ... 
} </code></pre> <p><strong>SlotName</strong> is the name of the slot, <strong>synonym</strong> is what the user said, <strong>resolved</strong> is the value that <strong>synonym</strong> resolved to, and <strong>isValidated</strong> is true when the status is <strong>ER_SUCCESS_MATCH</strong>. For example, if the user filled the <strong>energy</strong> slot with &quot;play fetch with,&quot; the resulting object would look like:</p> <pre> <code>{
  ...
  &quot;energy&quot;: {
    &quot;synonym&quot;: &quot;play fetch with&quot;,
    &quot;resolved&quot;: &quot;high&quot;,
    &quot;isValidated&quot;: true
  },
  ...
}
</code></pre> <p>In code we could access the <strong>synonym</strong> and <strong>resolved</strong> values by simply doing:</p> <pre> <code>let isTestingWithSimulator = false;
let filledSlots = delegateSlotCollection.call(this, isTestingWithSimulator);
let slotValues = getSlotValues(filledSlots);
console.log('Energy - Synonym: &quot;', slotValues.energy.synonym, '&quot; Resolved: &quot;', slotValues.energy.resolved, '&quot;');
</code></pre> <p>For Pet Match, we resolve the slot values down to their <strong>resolved</strong> values and send them to the Pet Match API through an HTTP GET request to perform the match. Leveraging <strong>Entity Resolution</strong> this way lets us easily map each synonym to the API key value and makes interacting with the skill more natural, since the user can say things like &quot;plays tug of war&quot; and have it resolve to <strong>medium</strong>.</p> <h2>Disambiguation</h2> <p>Pet Match also combines <strong>Dialog Management</strong> and <strong>Entity Resolution</strong> to disambiguate <strong>synonyms</strong> that have resolved to more than one value. Pet Match's <strong>size</strong> slot has 4 values: tiny, small, medium, and large.</p> <p>The <strong>synonym</strong> &quot;little&quot; has been mapped to both <strong>tiny</strong> and <strong>small</strong>. 
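As an aside, the getSlotValues helper described in the Simplification section ships with the Pet Match sample linked earlier; a minimal sketch of how such a flattening helper might work against the request JSON shape shown above (an assumption for illustration, not the sample's exact code):

```javascript
// Flatten each filled slot's Entity Resolution structure into the
// simplified { synonym, resolved, isValidated } shape.
function getSlotValues(filledSlots) {
  const slotValues = {};
  Object.keys(filledSlots).forEach((name) => {
    const slot = filledSlots[name];
    const authority = slot.resolutions &&
      slot.resolutions.resolutionsPerAuthority &&
      slot.resolutions.resolutionsPerAuthority[0];
    if (authority && authority.status.code === 'ER_SUCCESS_MATCH') {
      slotValues[name] = {
        synonym: slot.value,
        resolved: authority.values[0].value.name,
        isValidated: true,
      };
    } else {
      // No match: pass the raw value through, unvalidated.
      slotValues[name] = {
        synonym: slot.value,
        resolved: slot.value,
        isValidated: false,
      };
    }
  });
  return slotValues;
}
```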
If the user says, &quot;I want a little dog,&quot; the JSON sent to our skill would look like:</p> <pre> <code>&quot;size&quot;: {
  &quot;name&quot;: &quot;size&quot;,
  &quot;value&quot;: &quot;little&quot;,
  &quot;resolutions&quot;: {
    &quot;resolutionsPerAuthority&quot;: [
      {
        &quot;authority&quot;: &quot;amzn1.er-authority.echo-sdk.[skill-id].sizeType&quot;,
        &quot;status&quot;: {
          &quot;code&quot;: &quot;ER_SUCCESS_MATCH&quot;
        },
        &quot;values&quot;: [
          {
            &quot;value&quot;: {
              &quot;name&quot;: &quot;small&quot;,
              &quot;id&quot;: &quot;eb5c1399a871211c7e7ed732d15e3a8b&quot;
            }
          },
          {
            &quot;value&quot;: {
              &quot;name&quot;: &quot;tiny&quot;,
              &quot;id&quot;: &quot;d60cadf1a41c651e1f0ade50136bad43&quot;
            }
          }
        ]
      }
    ]
  },
  ...
</code></pre> <p>In this case we still get an <strong>ER_SUCCESS_MATCH</strong>, but our <strong>values</strong> array now has two items in it. We can detect whether a <strong>synonym</strong> has resolved to more than one value by checking whether the length of the array is greater than 1:</p> <pre> <code>if (this.event.request.intent.slots.size.resolutions.resolutionsPerAuthority[0].values.length &gt; 1) {
  // then we need to disambiguate
}
</code></pre> <p>If the array length is greater than 1, Pet Match disambiguates the slot by using <strong>Dialog Management</strong> to re-elicit the slot. Let's take a look at the disambiguateSlot function:</p> <pre> <code>function disambiguateSlot() {
  let currentIntent = this.event.request.intent;
  Object.keys(this.event.request.intent.slots).forEach(function(slotName) {
    let currentSlot = this.event.request.intent.slots[slotName];
    let slotValue = slotHasValue(this.event.request, currentSlot.name);
    if (currentSlot.confirmationStatus !== 'CONFIRMED' &amp;&amp;
        currentSlot.resolutions &amp;&amp; currentSlot.resolutions.resolutionsPerAuthority[0]) {
      if (currentSlot.resolutions.resolutionsPerAuthority[0].status.code == 'ER_SUCCESS_MATCH') {
        // If there's more than one value, that means we have a synonym
        // that mapped to more than one value, so we need to ask the user
        // for clarification. For example, if the user said &quot;mini dog&quot;,
        // and &quot;mini&quot; is a synonym for both &quot;small&quot; and &quot;tiny&quot;, then ask
        // &quot;Did you want a small or tiny dog?&quot; to get the user to tell
        // you specifically which one they meant.
        if (currentSlot.resolutions.resolutionsPerAuthority[0].values.length &gt; 1) {
          let prompt = 'Which would you like';
          let size = currentSlot.resolutions.resolutionsPerAuthority[0].values.length;
          currentSlot.resolutions.resolutionsPerAuthority[0].values.forEach(function(element, index, arr) {
            prompt += ` ${(index == size - 1) ? ' or' : ' '} ${element.value.name}`;
          });
          prompt += '?';
          let reprompt = prompt;
          // The value they provided resolved to more than one thing, so
          // build up our prompts and then emit elicitSlot.
          this.emit(':elicitSlot', currentSlot.name, prompt, reprompt);
        }
      } else if (currentSlot.resolutions.resolutionsPerAuthority[0].status.code == 'ER_SUCCESS_NO_MATCH') {
        // Here is where you'll want to add instrumentation to your code
        // so you can capture synonyms that you haven't defined.
        console.log(&quot;NO MATCH FOR: &quot;, currentSlot.name, &quot; value: &quot;, currentSlot.value);
        if (REQUIRED_SLOTS.indexOf(currentSlot.name) &gt; -1) {
          let prompt = &quot;What &quot; + currentSlot.name + &quot; are you looking for&quot;;
          this.emit(':elicitSlot', currentSlot.name, prompt, prompt);
        }
      }
    }
  }, this);
}
</code></pre> <p>The function loops through all the slots and checks whether each one needs to be disambiguated. If so, it builds up the prompt:</p> <pre> <code>let prompt = 'Which would you like';
let size = currentSlot.resolutions.resolutionsPerAuthority[0].values.length;
currentSlot.resolutions.resolutionsPerAuthority[0].values.forEach(function(element, index, arr) {
  prompt += ` ${(index == size - 1) ? ' or' : ' '} ${element.value.name}`;
});
prompt += '?';
let reprompt = prompt;
</code></pre> <p>If the user said &quot;little&quot; for the <strong>size</strong> slot, the above snippet will create a prompt and a reprompt that say, &quot;Which would you like small or tiny?&quot; To have <strong>Dialog Management</strong> reprompt the user for the slot, we then need to emit <strong>:elicitSlot</strong> with <code>this.emit(':elicitSlot', currentSlot.name, prompt, reprompt);</code>, where <strong>currentSlot.name</strong> is <strong>size</strong>.</p> <h2>Slot Validation</h2> <p>There are times when the user may give an answer that doesn't fit your paradigm. For example, what if the user asks Pet Match for a dragon or a unicorn? As much as I would love to own and care for a dragon, sadly they don't exist. With <strong>Entity Resolution</strong> we can add a <strong>mythical_creatures</strong> slot value to the <strong>pet</strong> slot type and add all the mythical creatures that we want to capture as <strong>synonyms</strong>. After adding mythical creatures to the <strong>petType</strong>, the JSON should appear as below:</p> <pre> <code>...
{
  &quot;name&quot;: &quot;petType&quot;,
  &quot;values&quot;: [
    {
      &quot;id&quot;: null,
      &quot;name&quot;: {
        &quot;value&quot;: &quot;dog&quot;,
        &quot;synonyms&quot;: [
          &quot;puppy&quot;,
          &quot;doggie&quot;,
          &quot;canine&quot;,
          &quot;canis familiaris&quot;,
          &quot;canis&quot;
        ]
      }
    },
    {
      &quot;id&quot;: null,
      &quot;name&quot;: {
        &quot;value&quot;: &quot;mythical_creatures&quot;,
        &quot;synonyms&quot;: [
          &quot;dragon&quot;,
          &quot;unicorn&quot;
        ]
      }
    }
  ]
},
...
</code></pre> <p>In the Pet Match code, we check slotValues.pet.resolved, and if it's equal to <strong>mythical_creatures</strong> we stop <strong>Dialog Management</strong> and return a random funny response, for example, &quot;Ah yes, dragons are majestic creatures; however, owning one is outlawed.&quot; This adds character to our skill. 
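In code, the mythical-creature check described above might be sketched like this (the response list and function shape are assumptions for illustration, not the skill's actual handler):

```javascript
// When the pet slot resolves to mythical_creatures, skip the Pet Match
// API call and pick a playful response instead. The list is illustrative.
const MYTHICAL_RESPONSES = [
  'Ah yes, dragons are majestic creatures; however, owning one is outlawed.',
  'A unicorn would make a fine companion, if only they existed.',
];

function mythicalCreatureResponse(slotValues) {
  if (slotValues.pet && slotValues.pet.resolved === 'mythical_creatures') {
    const index = Math.floor(Math.random() * MYTHICAL_RESPONSES.length);
    return MYTHICAL_RESPONSES[index];
  }
  return null; // Not mythical: continue dialog management and call the API.
}
```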
The user can have fun interacting with the skill to see how many different ways the skill will respond. In the case where we get a <strong>mythical_creature</strong>, we don't even call the Pet Match API, because we know it's something the API can't match.</p> <h2>No Match</h2> <p>One thing to note is that <strong>synonyms</strong> are not enumerations. There are cases where the user says something that isn't one of your <strong>synonyms</strong>; the slot still captures what was said, but because it's not in the list of <strong>synonyms</strong> there is nothing to resolve it to. In this case, the status code returned is <strong>ER_SUCCESS_NO_MATCH</strong>. For example, what if the user said &quot;pizza&quot; for <strong>size</strong>? The JSON sent to our skill's service would look like:</p> <pre> <code>...
&quot;size&quot;: {
  &quot;name&quot;: &quot;size&quot;,
  &quot;value&quot;: &quot;pizza&quot;,
  &quot;resolutions&quot;: {
    &quot;resolutionsPerAuthority&quot;: [
      {
        &quot;authority&quot;: &quot;amzn1.er-authority.echo-sdk.amzn1.ask.skill.[skill-id].sizeType&quot;,
        &quot;status&quot;: {
          &quot;code&quot;: &quot;ER_SUCCESS_NO_MATCH&quot;
        }
      }
    ]
  },
  &quot;confirmationStatus&quot;: &quot;NONE&quot;
}
...
</code></pre> <p>Pizza isn't a valid size, so we should ignore it and re-elicit the slot. Upon an <strong>ER_SUCCESS_NO_MATCH</strong>, you may also want to capture the slot name and value so you can discover potential new synonyms. For example, what if the user says &quot;itty bitty&quot; for size?</p> <pre> <code>...
&quot;size&quot;: {
  &quot;name&quot;: &quot;size&quot;,
  &quot;value&quot;: &quot;itty bitty&quot;,
  &quot;resolutions&quot;: {
    &quot;resolutionsPerAuthority&quot;: [
      {
        &quot;authority&quot;: &quot;amzn1.er-authority.echo-sdk.amzn1.ask.skill.[skill-id].sizeType&quot;,
        &quot;status&quot;: {
          &quot;code&quot;: &quot;ER_SUCCESS_NO_MATCH&quot;
        }
      }
    ]
  },
  &quot;confirmationStatus&quot;: &quot;NONE&quot;
}
... 
</code></pre> <p>In this case, we want to know that the user said &quot;itty bitty&quot; because it's a valid size, so we should update the interaction model; that way, the next time a user says &quot;itty bitty,&quot; your skill will understand it. The disambiguateSlot function does a check for <strong>ER_SUCCESS_NO_MATCH</strong>.</p> <pre> <code>else if (currentSlot.resolutions.resolutionsPerAuthority[0].status.code == 'ER_SUCCESS_NO_MATCH') { // Here is where you'll want to add instrumentation to your code // so you can capture synonyms that you haven't defined. console.log(&quot;NO MATCH FOR: &quot;, currentSlot.name, &quot; value: &quot;, currentSlot.value); if (REQUIRED_SLOTS.indexOf(currentSlot.name) &gt; -1) { let prompt = &quot;What &quot; + currentSlot.name + &quot; are you looking for&quot;; this.emit(':elicitSlot', currentSlot.name, prompt, prompt); } } </code></pre> <p>Notice that after logging the no match, the slot is re-elicited if it is a required slot.</p> <h2>More Resources on Entity Resolution</h2> <p>Pet Match leverages <strong>Entity Resolution</strong> to map <strong>synonyms</strong> in three specific ways:</p> <ul> <li>Natural Conversation: It allows the user to say things like &quot;plays tug of war&quot; instead of &quot;medium energy&quot;</li> <li>API Key Mapping: It facilitates mapping natural language into the API key we pass to the Pet Match API</li> <li>Slot Validation: It identifies when a user asks to be matched with an animal that doesn't exist</li> </ul> <p>Here are some additional resources to help you as you start using entity resolution to build more engaging skills:</p> <ul> <li><a href="https://github.com/alexa/skill-sample-nodejs-petmatch/blob/master/instructions/2-entity-resolution.md">Pet Match Entity Resolution Tutorial</a></li> <li><a href="https://developer.amazon.com/docs/custom-skills/define-synonyms-and-ids-for-slot-type-values-entity-resolution.html">Technical Documentation for Entity Resolution</a></li> <li><a 
href="https://developer.amazon.com/blogs/alexa/post/5882651c-6377-4bc7-bfd7-0fd661d95abc/entity-resolution-in-skill-builder">Blog: Entity Resolution in Skill Builder</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/5de2b24d-d932-4c6f-950d-d09d8ffdf4d4/entity-resolution-and-slot-validation">Blog: Entity Resolution and Slot Validation</a></li> </ul> <h2>Build Engaging Skills, Earn Money with Alexa Developer Rewards</h2> <p>Every month, developers can earn money for eligible skills that drive some of the highest customer engagement. Developers can increase their level of skill engagement and potentially earn more by improving their skill, building more skills, and making their skills available in the US, UK, and Germany. <a href="https://developer.amazon.com/alexa-skills-kit/rewards">Learn more</a> about our rewards program and start building today.</p> /blogs/alexa/post/762fee8e-9576-4c70-88c5-418ac809413f/resource-roundup-top-alexa-tips-and-tutorials-for-building-skills-with-visual-components Resource Roundup: Top Alexa Tips and Tutorials for Building Skills with Visual Components Jennifer King 2018-01-15T16:30:20+00:00 2018-01-15T16:30:20+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_green._CB489587576_.png" style="height:241px; width:954px" /></p> <p>Our final post in the resource roundup series features top tips and tutorials for building multimodal skills for Echo Show and Echo Spot.</p> <p><em>Editor’s Note: This is the final installment of a new series that showcases the top developer tips, tutorials, and educational resources to help you build incredible Alexa skills. 
<a href="https://developer.amazon.com/blogs/alexa/tag/Resources">Follow the series</a> to learn, get inspired, and start building your own voice experiences for Alexa. </em></p> <p>With the release of <a href="https://developer.amazon.com/echo-show">Echo Show and more recently Echo Spot</a>, developers have even more opportunities to design captivating voice-first experiences. By combining the power of voice with a visual display, you can use Alexa to deliver a completely new way for customers to interact across voice and graphical user interfaces.</p> <p>All Alexa skills are automatically available on Echo Show and Echo Spot. While your skills will work out of the box, we know that you may want to create visual experiences to complement your skill. Whether you’re building skills for Echo Show and Echo Spot for the first time or optimizing an existing skill for these devices, use these resources to guide you.</p> <h2>Building for Echo Show and Echo Spot: VUI &amp; GUI Best Practices</h2> <p>For skill developers, voice-enabled devices with a screen create unique opportunities to reimagine voice innovations. With Echo Show and Echo Spot, you can now design interactions that combine a <a href="https://developer.amazon.com/alexa-skills-kit/vui">voice user interface (VUI)</a> and a graphical user interface (GUI) for an enhanced customer experience. Check out <a href="https://developer.amazon.com/blogs/alexa/post/05a2ea89-2118-4dcb-a8df-af3d8ac623a8/building-for-echo-show-and-echo-spot-vui-gui-best-practices">this post</a> for best practices for designing multimodal interactions, and start building your voice-first skills that shine on all devices with Alexa.</p> <h2>7 Tips for Creating Great Multimodal Experiences</h2> <p>The visual displays of Echo Show and Echo Spot are designed to enhance the voice experience, not replace it. 
This means you can now complement your skill interactions with imagery, blocks of text, video, list navigation and selection, touch input, and more. With all of these options available, it’s important to design the right voice-first experience for your customer before you build. Review our <a href="https://developer.amazon.com/blogs/alexa/post/a7f25291-5418-4488-a6e3-fa531e49545c/7-tips-for-creating-great-multimodal-experiences-for-echo-show">best practices for creating great multimodal experiences</a> for tips you can apply to your voice design process for Echo Show and Echo Spot.</p> <h2>Choosing the Right Templates</h2> <p>Designing skills for Echo Show and Echo Spot raises the need to think about new design patterns. With the Echo Show release we covered best practices for choosing the right <a href="https://developer.amazon.com/blogs/alexa/post/982c9134-fbf6-4465-a105-5f5c4b4774f6/building-for-echo-show-choosing-the-right-template">display template</a> and the right <a href="https://developer.amazon.com/blogs/alexa/post/b877f8df-e842-431e-aefa-65db00d67281/designing-skills-for-echo-show-choosing-the-right-list-template">list template</a>. The <a href="https://developer.amazon.com/docs/custom-skills/display-interface-reference.html">templates for Echo Show</a> will scale for Echo Spot, making it easier to quickly design visual experiences across devices. However, there are some notable differences in how you should use the templates due to the different sizes and shapes of the devices. 
<a href="https://developer.amazon.com/blogs/alexa/post/75d3115c-b95a-4387-aa58-b0fd06734675/design-alexa-skills-for-echo-spot-7-tips-to-get-started">Reference this post</a> for more insight on choosing the right templates to streamline your voice experience.</p> <h2>Voice Design Guide</h2> <p>We’ve updated the <a href="https://developer.amazon.com/designing-for-voice/what-alexa-says/#choose-the-right-template-on-echo-show">Amazon Alexa Voice Design Guide</a> with new design best practices and guidelines to teach you how to build compelling experiences across voice and graphical user interfaces. This includes using imagery, video, and formatted text in the new visual templates. The guide is also a great resource for voice design basics, providing a thorough overview of the design process, conversational voice experiences, and how to apply fundamental Alexa concepts.</p> <h2>Tips for a Successful Alexa Skills Certification</h2> <p>As you build new skills or optimize your skills for Echo Show and Echo Spot, incorporating visual elements introduces new layers to the skill certification process. <a href="https://developer.amazon.com/blogs/alexa/post/50eae7c6-28b5-483e-ae01-277b121b9768/tips-for-successful-echo-show-alexa-skills-certification">Follow our tips</a> to ensure you have a smooth certification process for Alexa skills intended for Echo Show or Echo Spot.</p> <p>Thanks for tuning into our series in which we shared our <a href="https://developer.amazon.com/blogs/alexa/tag/Resources">top resources</a> for new skill builders, advanced skills, smart home skills, kid skills, and skills for devices with screens. Apply these best practices and tutorials as you’re building for voice with Alexa.</p> <h2>Build Engaging Skills, Earn Money with Alexa Developer Rewards</h2> <p>Now you can get paid when you build eligible skills that drive some of the highest customer engagement. 
Increase your level of skill engagement and potentially earn more by improving your skill, building more skills, and making your skills available in the US, UK, and Germany. <a href="https://developer.amazon.com/alexa-skills-kit/rewards">Learn more</a> about our rewards program and start building today.</p> /blogs/alexa/post/555d00d6-66b4-4f0b-8974-2021cd9a1630/alexa-skill-teardown-decoding-dialog-management-with-pet-match Alexa Skill Teardown: Decoding Dialog Management with the Pet Match Skill Jennifer King 2018-01-12T16:13:48+00:00 2018-01-12T21:38:21+00:00 <p>When building conversational user interfaces, it’s important to think about continued engagement and high-quality interactions. In this 2-part blog series, we tear down a skill called Pet Match to show you the ins and outs of dialog management and entity resolution.</p> <p>When building conversational user interfaces, it’s important to think about continued engagement and high-quality interactions. During a recent <a href="https://register.gotowebinar.com/register/3772133411211777283" target="_blank">webinar on advanced voice design techniques</a>, we shared some best practices for applying advanced features like dialog management, entity resolution, and memory to enable customers to engage in multi-turn conversations with Alexa skills.</p> <p>We used a skill we developed called Pet Match to demonstrate these concepts. Watch the <a href="https://register.gotowebinar.com/register/3772133411211777283" target="_blank">on-demand webinar</a> to see the skill in action. You can also tune into this 2-part blog series where we tear down Pet Match to show you the ins and outs of dialog management and entity resolution. Our first post dives into the dialog management aspects of the skill.</p> <p>Pet Match uses <strong>Dialog Management</strong>, which <strong>delegates</strong> slot collection to <strong>Alexa</strong>. 
Pet Match finds the perfect pet for the user by asking a series of questions designed to fill the <strong>size</strong>, <strong>temperament</strong>, and <strong>energy</strong> slots, as long as the user is looking for a dog. For example, the user can say, &quot;Alexa, tell Pet Match I want a family dog that is high energy.&quot; However, the same process can be applied to match the user with cats, birds, turtles, books, movies, video games, or whatever you'd like to match the user with. Once the three required slots are collected, the values are passed to an API through an HTTP GET request. The API returns the match in a JSON payload, which the skill unpacks and converts into speech output.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/petmatch_overview_graphic_1._CB489325235_.png" /></p> <p>Through <strong>Dialog Management</strong>, Pet Match gains the flexibility of collecting the slots all at once in a one-shot utterance, or one or many slots in a multi-turn sequence, without writing any code to manage slot elicitation. <strong>Dialog Management</strong> does provide a way to hook into it so you can override the default behavior. This blog post will refer to several blocks of code in Pet Match's <strong>dialog model</strong> and <strong>AWS Lambda Function</strong>. The entire codebase is available on <a href="http://alexa.design/petmatch" target="_blank">github.com</a>.</p> <p>Pet Match's <strong>PetMatchIntent</strong> uses <strong>Dialog Management</strong>. 
If you look at the <strong>Intent Slots</strong> for <strong>PetMatchIntent</strong>, you'll notice that <strong>size</strong>, <strong>temperament</strong>, and <strong>energy</strong> are marked <strong>REQ</strong>, which indicates that they are required slots.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/petmatch_intent_slots_graphic_2._CB489325197_.png" /></p> <p>In the <a href="https://raw.githubusercontent.com/alexa/skill-sample-nodejs-petmatch/master/casestudy/interaction-model-part2.json" target="_blank">interaction-model</a> you should see the following:</p> <pre> <code>&quot;prompts&quot;: [ { &quot;id&quot;: &quot;Elicit.Intent-PetMatchIntent.IntentSlot-size&quot;, &quot;variations&quot;: [ { &quot;type&quot;: &quot;PlainText&quot;, &quot;value&quot;: &quot;There are dogs that are tiny, small, medium, and large. Which would you like?&quot; }, { &quot;type&quot;: &quot;PlainText&quot;, &quot;value&quot;: &quot;What size of a dog would you like?&quot; } ] }] </code></pre> <p>The id <strong>Elicit.Intent-PetMatchIntent.IntentSlot-size</strong> indicates that Alexa should handle elicitation of the <strong>size</strong> slot. The <strong>variations</strong> array is a set of prompts that Alexa will use to elicit the size slot from the user. Be sure to take a look at the <strong>temperament</strong> and <strong>energy</strong> slots as well.</p> <p>In the slots array, for the <strong>size</strong> slot you should see:</p> <pre> <code>&quot;slots&quot;: [ //... 
{ &quot;name&quot;: &quot;size&quot;, &quot;type&quot;: &quot;sizeType&quot;, &quot;samples&quot;: [ &quot;{I_Want} {article} {size} {pet}&quot;, &quot;{I_Want} {size}&quot;, &quot;{comparison} than a {size}&quot;, &quot;the {size}&quot;, &quot;Something i can {size}&quot;, &quot;{size} size&quot;, &quot;{I_Want} {article} {size} {pet} that {energy}&quot;, &quot;{I_Want} {article} {size} {temperament} {pet}&quot;, &quot;{I_Want} {article} {size} {temperament} to {energy}&quot;, &quot; {temperament} {pet}&quot;, &quot;{energy} energy&quot;, &quot;{size}&quot; ] }, // ... </code></pre> <p>The <strong>size</strong> slot contains an array called <strong>samples</strong>, which holds the <strong>utterances</strong> that the user will say to fill the slot. You may have noticed that we have defined some additional non-required slots called <strong>I_Want</strong>, <strong>article</strong>, <strong>pet</strong>, and <strong>comparison</strong>. These slots will be filled if provided, but <strong>Dialog Management</strong> will not prompt for them because they are optional. One great <strong>Dialog Management</strong> feature is the ability to provide slots in addition to the one being prompted. This enables the <strong>PetMatchIntent</strong> to capture additional slots when prompting the user for <strong>size</strong>.</p> <p>For example, if Pet Match prompts the user for the <strong>size</strong> slot by asking, &quot;What size of a dog would you like?&quot; and the user responds, &quot;I want a large guard dog,&quot; the {I_Want} {article} {size} {temperament} {pet} utterance will allow <strong>PetMatchIntent</strong> to capture the <strong>temperament</strong> in addition to the <strong>size</strong>. Likewise, if the user responded with, &quot;I want a guard dog,&quot; the {I_Want} {article} {size} {temperament} {pet} utterance will capture the <strong>temperament</strong> even though the user was prompted for <strong>size</strong>. 
The skill would then have to reprompt for the <strong>size</strong> slot because it's still empty, but we didn't lose the <strong>temperament</strong>.</p> <p>Notice that at this point we haven't written any additional code to be able to handle the variations of conversation. We simply provided training data through our <strong>slot-level</strong> prompts and utterances, and Alexa figured out how to handle the input and deliver it to our skill in a meaningful way.</p> <p>Now that you have an understanding of how the front-end works, let's take a look at the <a href="https://raw.githubusercontent.com/alexa/skill-sample-nodejs-petmatch/master/lambda/custom/index.js">back-end</a>.</p> <pre> <code>'PetMatchIntent' : function () { // delegate to Alexa to collect all the required slots let isTestingWithSimulator = true; //autofill slots when using simulator, dialog management is only supported with a device let filledSlots = delegateSlotCollection.call(this, isTestingWithSimulator); // Code has been truncated for brevity } </code></pre> <p>From the <strong>PetMatchIntent</strong> we are delegating slot elicitation to Alexa via delegateSlotCollection.call(this, isTestingWithSimulator). <strong>Dialog Management</strong> has three states: <strong>STARTED</strong>, <strong>IN_PROGRESS</strong>, and <strong>COMPLETED</strong>. The delegateSlotCollection function checks the present dialog state, this.event.request.dialogState, and returns the slots when <strong>COMPLETED</strong>. At any state in <strong>Dialog Management</strong> we can fill the slots with default data, override the prompts, ask the user to confirm a slot, or re-elicit a slot. In a separate post, we'll cover in detail how to plug into dialog management to disambiguate a slot value that resolved to more than one synonym using <strong>Entity Resolution</strong>. 
The disambiguateSlot function is where we identify ambiguous slot values and re-elicit the slot.</p> <pre> <code>function delegateSlotCollection(shouldFillSlotsWithTestData) { console.log(&quot;in delegateSlotCollection&quot;); console.log(&quot;current dialogState: &quot; + this.event.request.dialogState); // This will fill any empty slots with canned data provided in defaultData // and mark dialogState COMPLETED. // USE ONLY FOR TESTING IN THE SIMULATOR. if (shouldFillSlotsWithTestData) { let filledSlots = fillSlotsWithTestData.call(this, defaultData); this.event.request.dialogState = &quot;COMPLETED&quot;; } // optionally pre-fill slots: update the intent object with slot values // for which you have defaults, then return Dialog.Delegate with this // updated intent in the updatedIntent property var updatedIntent = this.event.request.intent; if (this.event.request.dialogState === &quot;STARTED&quot;) { console.log(&quot;in STARTED&quot;); console.log(JSON.stringify(this.event)); disambiguateSlot.call(this); console.log(&quot;disambiguated: &quot; + JSON.stringify(this.event)); return this.emit(&quot;:delegate&quot;, updatedIntent); } else if (this.event.request.dialogState !== &quot;COMPLETED&quot;) { console.log(&quot;in not completed&quot;); //console.log(JSON.stringify(this.event)); disambiguateSlot.call(this); return this.emit(&quot;:delegate&quot;, updatedIntent); } else { console.log(&quot;in completed&quot;); //console.log(&quot;returning: &quot;+ JSON.stringify(this.event.request.intent)); // Dialog is now complete and all required slots should be filled, // so call your normal intent handler. return this.event.request.intent.slots; } } </code></pre> <p>Once this.event.request.dialogState is <strong>COMPLETED</strong>, all of the required slots have been collected and delegateSlotCollection returns them. 
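The raw slots object that comes back can then be flattened into synonym/resolved pairs. Here is a minimal sketch of what a getSlotValues-style helper might look like; the object shapes and the preference for the Entity Resolution match are assumptions based on this post, so see the Pet Match repo for the real implementation.

```javascript
// Sketch of a getSlotValues-style helper (assumed shapes): flatten the raw
// intent slots into { name: { synonym, resolved } } pairs, preferring the
// Entity Resolution match when the status code is ER_SUCCESS_MATCH.
function getSlotValues(filledSlots) {
  const slotValues = {};
  Object.keys(filledSlots).forEach(name => {
    const slot = filledSlots[name];
    // Default both fields to what the user actually said.
    slotValues[name] = { synonym: slot.value, resolved: slot.value };
    const authority = slot.resolutions &&
      slot.resolutions.resolutionsPerAuthority &&
      slot.resolutions.resolutionsPerAuthority[0];
    if (authority && authority.status.code === 'ER_SUCCESS_MATCH') {
      // e.g. the synonym "tug of war" resolves to the canonical value "medium".
      slotValues[name].resolved = authority.values[0].value.name;
    }
  });
  return slotValues;
}
```

With this shape, the rest of the skill only ever reads slotValues.size.resolved and friends, regardless of which synonym the user happened to say.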
Then we simplify the slots object by passing filledSlots to getSlotValues, which returns the mapping of synonyms to resolved values. This will be covered in more detail in a separate post.</p> <p>Once the object has been simplified, we generate the HTTP GET request to the Pet Match API, which returns the match based upon the <strong>size</strong>, <strong>temperament</strong>, and <strong>energy</strong> slots.</p> <pre> <code>'PetMatchIntent' : function () { // delegate to Alexa to collect all the required slots let isTestingWithSimulator = true; //autofill slots when using simulator, dialog management is only supported with a device let filledSlots = delegateSlotCollection.call(this, isTestingWithSimulator); let slotValues = getSlotValues(filledSlots); // simplify the slots object let petMatchOptions = buildPetMatchOptions(slotValues); httpGet(petMatchOptions).then( response =&gt; { if( response.result.length &gt; 0 ) { this.response.speak(&quot;So a &quot; + slotValues.size.resolved + &quot; &quot; + slotValues.temperament.resolved + &quot; &quot; + slotValues.energy.resolved + &quot; energy dog sounds good for you. Consider a &quot; + response.result[0].breed); } else { this.response.speak(&quot;I'm sorry I could not find a match for a &quot; + slotValues.size.resolved + &quot; &quot; + slotValues.temperament.resolved + &quot; &quot; + slotValues.energy.resolved + &quot; dog&quot;); } }).then(() =&gt; { // after we get a result, have Alexa speak. 
this.emit(':responseReady'); }); </code></pre> <p>httpGet returns a promise, and in the then clause we pass an anonymous function that builds the speech response.</p> <p>The helper functions delegateSlotCollection, getSlotValues, and httpGet have been written in a way that allows you to paste them directly into your code so you can easily add <strong>Dialog Management</strong> to your skill.</p> <p>Now that you have a deeper understanding of <strong>Dialog Management</strong>, read the developer documentation on the Dialog Interface Reference.</p> <p>Also consider how you would use <strong>Dialog Management</strong> in your existing and future skills. It’s perfect for multi-turn interactions between the user and your skill and greatly reduces the amount of complex logic you would otherwise have to write on your own to keep track of the data that you have versus what you need, as well as capturing additional <strong>slots</strong>.</p> <p>Stay tuned for the next deep dive on Pet Match's implementation of <strong>Entity Resolution</strong>, which will help make the interaction between the user and your skill more natural in addition to simplifying the normalization of input.</p> <h2>More Resources on Dialog Management</h2> <ul> <li><a href="https://github.com/alexa/skill-sample-nodejs-petmatch/blob/master/instructions/1-build-and-customize.md" target="_blank">Pet Match Dialog Management Tutorial</a></li> <li><a href="https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html" target="_blank">Technical Documentation for Dialog Management</a></li> <li><a href="https://github.com/alexa/alexa-cookbook/tree/master/handling-responses/dialog-directive-delegate" target="_blank">Plan My Trip Dialog Management Tutorial</a></li> </ul> <h2>Build Engaging Skills, Earn Money with Alexa Developer Rewards</h2> <p>Every month, developers can earn money for eligible skills that drive some of the highest customer engagement. 
Developers can increase their level of skill engagement and potentially earn more by improving their skill, building more skills, and making their skills available in the US, UK, and Germany. <a href="https://developer.amazon.com/alexa-skills-kit/rewards">Learn more</a> about our rewards program and start building today.</p> /blogs/alexa/post/05a2ea89-2118-4dcb-a8df-af3d8ac623a8/building-for-echo-show-and-echo-spot-vui-gui-best-practices Building for Echo Show and Echo Spot: VUI &amp; GUI Best Practices Jennifer King 2018-01-11T15:00:00+00:00 2018-01-11T15:57:05+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/rook_best_practices._CB489078548_.png" style="height:240px; width:954px" /></p> <p>Here are some tips for designing multimodal, voice-first experiences that prove engaging across all Alexa-enabled devices.</p> <p style="margin-left:0in; margin-right:0in"><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/rook_best_practices._CB489078548_.png" /></p> <p style="margin-left:0in; margin-right:0in">With the Echo family of devices now including devices with screens like <a href="https://developer.amazon.com/echo-show">Echo Show and Echo Spot</a>, Alexa skill builders have to consider their graphical user interface (GUI) in addition to their <a href="https://developer.amazon.com/alexa-skills-kit/vui">voice user interface (VUI)</a> during the voice design process. Here are some tips for designing multimodal, voice-first experiences that prove engaging across all Alexa-enabled devices.</p> <h2>1. Create a Voice-First Experience with Visuals</h2> <p style="margin-left:0in; margin-right:0in">Voice needs to be the primary interaction method with Alexa, even when designing for devices with screens. Consider the display as a way to enhance your skill. 
Design your voice interaction first, then think about how you can enhance the conversation with visuals.</p> <p style="margin-left:0in; margin-right:0in">Be sure to keep your VUI consistent across all devices to avoid unnecessary development work. Your customers rely on your skill to deliver a consistent voice experience. The interaction model for your skill on a voice-only device should be the same as on a multimodal device. Create an experience that avoids display-centric commands like “touch the screen” or “click here.”</p> <p style="margin-left:0in; margin-right:0in">It is good practice to account for what customers might say when interacting with a display. If they are looking at an Echo Spot screen, their interaction with the voice component may be different from that of a user who is looking away. For example, to return to a previous response in a skill, a user might say “Back” or “Up.” If so, what should the behavior be for the latter case, if any? Plan how you want the user to interact with voice in your skill, but also how they may interact with the visual components.</p> <h2>2. Choose the Right Templates to Streamline Your Designs</h2> <p style="margin-left:0in; margin-right:0in">The <a href="https://developer.amazon.com/docs/custom-skills/display-interface-reference.html">templates for Echo Show</a> are consistent with Echo Spot, which makes it easy to quickly design visual experiences that will work across devices.</p> <p style="margin-left:0in; margin-right:0in">There are some notable differences in how you should use the templates due to the different sizes and shapes of the devices. 
The same fundamental principles per template still apply:</p> <p style="margin-left:0in; margin-right:0in"><em>Body Template 1</em></p> <p style="margin-left:0in; margin-right:0in">Use this template to present information in long blocks of text or full-width images.</p> <p style="margin-left:0in; margin-right:0in"><a href="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt1-scroll"><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt1-scroll" style="height:302px; width:766px" /></a></p> <p style="margin-left:0in; margin-right:0in"><em>Body Templates 2 and 3</em></p> <p style="margin-left:0in; margin-right:0in">Use these templates to present information on a specific entity with a lot of detail. This screen typically follows selecting an item from a list, or appears when a user’s request yields only one item. Note: Hints can be displayed on Echo Show, but not on Echo Spot.</p> <p style="margin-left:0in; margin-right:0in"><a href="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt-2._TTH_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt-2._TTH_.png" style="height:305px; width:775px" /></a></p> <p style="margin-left:0in; margin-right:0in"><em>Body Template 6</em></p> <p style="margin-left:0in; margin-right:0in">This template is used as an introductory, title, or header screen.</p> <p style="margin-left:0in; margin-right:0in"><a href="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt-6._TTH_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt-6._TTH_.png" style="height:308px; width:774px" /></a></p> <p style="margin-left:0in; margin-right:0in"><em>Body Template 7</em></p> <p style="margin-left:0in; margin-right:0in">Use this template to display a large image, or to present video or audio.</p> <p style="margin-left:0in; margin-right:0in"><a 
href="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt-7-full._TTH_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/bt-7-full._TTH_.png" style="height:305px; width:780px" /></a></p> <p style="margin-left:0in; margin-right:0in"><em>List Templates</em></p> <p style="margin-left:0in; margin-right:0in">Your list templates can display multiple choices or items to a user. List items should be selectable via both voice and touch.</p> <p style="margin-left:0in; margin-right:0in"><em>List Template 1</em> should be used for lists where images are not the primary content because the content will be relatively small on Echo Spot.</p> <p style="margin-left:0in; margin-right:0in"><a href="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/list-1-all._TTH_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/list-1-all._TTH_.png" style="height:308px; width:781px" /></a></p> <p style="margin-left:0in; margin-right:0in"><em>List template 2</em> should be used for lists where images are the primary content. Note that for Echo Spot, only one item will be visible at a time.</p> <p style="margin-left:0in; margin-right:0in"><a href="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/list-2-square._TTH_.png"><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/ask-customskills/list-2-square._TTH_.png" style="height:321px; width:800px" /></a></p> <p style="margin-left:0in; margin-right:0in">Finally, regardless of the templates you choose to use, remember that you are building for both Echo Show and Echo Spot. You cannot design for a specific device. The templates make multimodal development easier and faster. Design once, and the content will translate appropriately for the device.</p> <h2>3. 
Use Body Content and Graphics to Complement Voice</h2> <p style="margin-left:0in; margin-right:0in">When designing for multimodal devices, it is important that your content is easy to consume. Consider brevity, arrangement, and pacing when you are writing your dialogue and designing your visuals.</p> <p style="margin-left:0in; margin-right:0in">There are some important technical design principles to consider with your visual components for both Echo Show and Echo Spot:</p> <ul> <li><strong>Links: </strong>Do not nest action links within list items. These will be difficult to select by voice and will yield unpredictable results with touch.</li> <li><strong>Text alignment: </strong>Use the new text alignment attributes to selectively align important text. Avoid using line breaks to vertically align text within a TextView. Note that modifying the alignment will change it on all form factors.</li> <li><strong>Font size: </strong>Use font size overrides sparingly. Default font sizes have been set for all templates to allow for maximum legibility at the recommended distances.</li> <li><strong>Markup: </strong>Use markup (such as bold and underline) in meaningful ways to enhance the way your content displays on devices.</li> <li><strong>Actions: </strong>Action links should not be underlined and need to be accessible by voice.</li> <li><strong>Hints: </strong>Use the header text and hint directives appropriately instead of relying on the content of your background images. Note that text hints will not appear on Echo Spot, so incorporate them into your VUI as needed.</li> <li><strong>Images:</strong> Images should be used to make for a more delightful and colorful experience. They should not be sized specifically for Echo Show or Echo Spot as that’s not scalable for larger and smaller form factors. 
Use images that look great on all multimodal devices.</li> </ul> <h2>Get Started with the Voice Design Guide</h2> <p>We’ve updated the <a href="https://alexa.design/guide">Amazon Alexa Voice Design Guide</a> with additional design practices and guidelines to help you build with the new Echo Show capabilities and the new Echo Spot visual templates. <a href="http://alexa.design/guide" target="_blank">Visit the guide to get started.</a></p> <h2>More Resources</h2> <ul> <li><a href="https://developer.amazon.com/docs/custom-skills/build-skills-for-echo-show.html">Build Skills for Echo Show and Echo Spot</a></li> <li><a href="https://www.amazon.com/dp/B073SQYXTW/ref=fs_ods_rk">Learn More About Echo Spot </a></li> <li><a href="https://developer.amazon.com/docs/custom-skills/best-practices-for-echo-show-skills.html">Best Practices for Designing Skills with a Screen</a></li> <li><a href="https://developer.amazon.com/docs/custom-skills/display-interface-reference.html">Display Interface Reference</a></li> </ul> <h2>Build Engaging Skills, Earn Money with Alexa Developer Rewards</h2> <p>Every month, developers can earn money for eligible skills that drive some of the highest customer engagement. Developers can increase their level of skill engagement and potentially earn more by improving their skills, building more skills, and making their skills available in the US, UK, and Germany.
<a href="https://developer.amazon.com/alexa-skills-kit/rewards">Learn more</a> about our rewards program and start building today.</p> /blogs/alexa/post/294825b1-47bb-4f2d-943e-1b6217d94c3b/alexa-pioneers-with-voice-jon-myers-has-found-a-much-bigger-audience-than-ever-thought-possible Alexa Pioneers: With Voice, Jon Myers Has Found a ‘Much Bigger Audience Than Ever Thought Possible’ Jennifer King 2018-01-10T17:00:00+00:00 2018-01-10T17:03:54+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_jon._CB489077540_.png" style="height:350px; width:954px" /></p> <p>It took just a weekend for Jon Myers and his team to build a prototype skill using Earplay’s library of stories. Since then, the team has refined its voice user interface, turning Alexa into a truly interactive storyteller.</p> <p>&nbsp;</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/H0cF8kXxswU" width="640"></iframe></p> <p><em>Editor’s Note: This is an installment of our new series, </em><a href="https://developer.amazon.com/alexa-skills-kit/case-studies/alexa-pioneers"><em>Alexa Pioneers</em></a><em>, which highlights people who are pushing the boundaries of what’s possible with voice. Follow the series to get inspired, then join the pioneers to create your own magical experiences for voice. </em></p> <p>As a former playwright, Myers is passionate about telling stories, especially ones that allow listener participation. He co-founded <a href="https://www.earplay.com/">Earplay</a> to create what he calls “the world’s first interactive radio dramas.”</p> <p>“Imagine if you’re listening to an audio book or a radio play. The difference is you’re a character,” he says. “You use your voice to path one way or the other by saying this or that. That’s what an ear play is.”</p> <p>Earplay began with a mobile-first approach. 
Then Myers learned about Amazon Alexa and immediately saw the potential of voice.</p> <p>“I remember going to the office and telling my team, ‘We are going to make a pivot here and focus on Alexa.’” says Myers. “They all tried it and said, ‘Yes, this is exactly what we should be doing.’”</p> <p>It took just a weekend for Myers and his team to build a prototype skill using Earplay’s library of stories. Since then, the team has refined its voice user interface, turning Alexa into a truly interactive storyteller. Now, says Myers, there are “thousands and thousands of people” who use the Earplay skill for Alexa.</p> <p>“As a result of the shift to voice, we've reached a much, much broader audience than we ever thought was possible,” says Myers. “That is a much bigger future than anything that I originally expected.”</p> <p>Myers believes we are on the cusp of a voice revolution. He calls it “the next big thing.”</p> <p>“I'm so passionate about voice because I believe it's the future of interaction,” he says. “I think it's the future of entertainment and storytelling as well. And who doesn’t want to be a part of that future?”</p> <p><em>Tell us about what you’re building for voice with Alexa.
</em><a href="https://twitter.com/alexadevs" target="_blank"><em>Tweet us</em></a><em> using the hashtag </em><a href="https://twitter.com/search?q=%23alexapioneers&amp;src=typd" target="_blank"><em>#AlexaPioneers</em></a><em>.</em></p> /blogs/alexa/post/9f852a38-3a44-48bd-b78f-22050269d7c7/hamaridokoro Common Pitfalls When Creating an Alexa Developer Account Toshimi Hatanaka 2018-01-10T07:03:47+00:00 2018-01-18T03:50:20+00:00 <p>When you develop Alexa skills with the Alexa Skills Kit, a common problem is that the skill you built does not appear in the skill list of the Alexa app, so you cannot test it on an Echo device. You can avoid this problem by creating your developer account with the correct steps. This article explains what to watch out for when creating a developer account.</p> <p>To develop Alexa skills with the Alexa Skills Kit, you need to create an Amazon developer account.</p> <p>Normally, we recommend creating your developer account on the developer portal (developer.amazon.com) with the account name (email address) you usually use on the Japanese Amazon shopping site (amazon.co.jp). If that is really not an option, you will need to create a new Amazon account.</p> <p>However, this is where you need to be careful.</p> <h1>Creating a new Amazon Developer account from the developer portal will always get you stuck!</h1> <p>First, go to the developer portal (<a href="http://developer.amazon.com/" target="_blank">http://developer.amazon.com</a>) and click Sign In.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/DevPortalTop._CB489158533_.png" style="border-style:solid; border-width:1px; height:526px; width:800px" /></p> <p>The sign-in screen appears.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/DevPortalLogin._CB489158538_.png" style="border-style:solid; border-width:1px; height:499px; width:500px" /></p> <p>Here you will see a button labeled <strong>&quot;Create your Amazon Developer account&quot;</strong>. If you think, &quot;Great, I can create a developer account right here!&quot; and casually create a new Developer account, you are guaranteed to get stuck.</p> <p>In fact, if you click this button, fill in the required fields, and create a developer account, an amazon.com account is created on the back end. You can verify this yourself: you will actually be able to log in to amazon.com.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/amazon_com._CB489158529_.png" style="border-style:solid; border-width:1px; height:482px;
width:800px" /></p> <p>If you sign in to the developer portal in this state and start developing an Alexa skill, the screens are in Japanese and it looks as though you are building a skill for Japan, but you are actually building a US skill in Japanese.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/DevPortal_Testing._CB489158533_.png" style="border-style:solid; border-width:1px; height:912px; width:800px" /></p> <p>If you keep developing without realizing this, then when you finally try to test on a Japanese Echo device, the skill you worked so hard on never appears in the list of enabled skills in the Alexa app (<a href="http://alexa.amazon.co.jp/" target="_blank">http://alexa.amazon.co.jp</a>). In other words, you cannot test your skill with a Japanese Echo device.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/alexa_amazon_co_jp_skilllist._CB489158542_.png" style="border-style:solid; border-width:1px; height:441px; width:800px" /></p> <p>So will the skill appear if you sign in to alexa.amazon.com instead? You can sign in, but you will be asked to set up a US Echo device. In other words, unless you own a US Echo device, you cannot reach the enabled-skills screen at all. A dead end, and a painful one!</p> <hr /> <h1>So what should you do?</h1> <p>If you have already ended up in this situation, what can you do?</p> <h2>Solution 1: Change the email address of the account that was created on Amazon.com.</h2> <p>Sign in to developer.amazon.com and click the <strong>Settings</strong> menu.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/DevPortalSetting._TTH_.png" style="border-style:solid; border-width:1px; height:169px; width:381px" /></p> <p>When the <strong>My Account</strong> screen opens, click the <strong>Edit</strong> button.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/ChangeEmailAddress._TTH_.png" style="border-style:solid; border-width:1px; height:363px; width:750px" /></p> <p>On the <strong>Sign-In &amp; Security</strong> screen, click the Edit button next to the email address.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/ChangeEmailAddress2._TTH_.png" style="border-style:solid; border-width:1px; height:553px; width:569px" /></p> <p>Change the email address.</p> <p><img alt="
src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/ChangeEmailAddress3._TTH_.png" style="border-style:solid; border-width:1px; height:851px; width:595px" /></p> <p>Sign out of developer.amazon.com, then sign in again. This time, <strong>sign in</strong> with the email address you were using before the change.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/DevPortalSignIn._TTH_.png" style="border-style:solid; border-width:1px; height:474px; width:630px" /></p> <p>The account registration screen should appear again. This time a new account for Japanese devices is about to be created.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/AccountRegistration._CB489158542_.png" style="border-style:solid; border-width:1px; height:542px; width:799px" /></p> <p>Set &quot;Country/Region&quot; to &quot;Japan&quot;, fill in the other required fields, and then click the <strong>Save and Continue</strong> &gt; <strong>Accept and Continue</strong> buttons.</p> <p>With that, you can finally build skills for Japanese Echo devices.<br /> Unfortunately, there is no feature for importing skills created under the US account into the Japanese account. You will have to sign in with the changed email address, copy all of your code, sign back in with the Japanese account and paste it in, or else rebuild everything from scratch.</p> <p>Once you have rebuilt the skill this way, skills that can be tested on Japanese devices finally appear, as shown here.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/alexa_amazon_co_jp_skilllist2._CB489158542_.png" style="border-style:solid; border-width:1px; height:507px; width:800px" /></p> <h2>Solution 2: Create a new account on Amazon.co.jp.</h2> <p>Go to <a href="http://amazon.co.jp/" target="_blank">http://amazon.co.jp</a>, click <strong>Account &amp; Lists</strong>, and in the pull-down menu that appears, click the link to register a new account.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/Amazon_co_jp_register._CB489158537_.png" style="border-style:solid; border-width:1px; height:232px; width:500px" /></p> <p>Fill in the required fields and create the account. No credit card information is needed just to create the account (you will be asked for it when you buy something with this account).</p> <p><img alt="
src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/CreateJPAccount._CB489158542_.png" style="height:642px; width:429px" /></p> <p>Log in to the developer portal with the email address and password of the account you just created. (Caution: do not click the <strong>&quot;Create your Amazon Developer account&quot;</strong> button here and create yet another new account, or you will run into the same trouble again.)</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/toshimin/devpotral_login_with_JP_account._CB489158542_.png" style="height:430px; width:500px" /></p> <p>The account information entry screen then appears. Set &quot;Country/Region&quot; to Japan again and fill in the other required fields. Click the <strong>Save and Continue</strong> and <strong>Accept and Continue</strong> buttons, and a Japan account will be created correctly.</p> <h1>Conclusion</h1> <ul> <li>Never create a new Developer account from the Amazon developer portal.</li> <li>Before you start developing skills for Japan, check whether the account you intend to use for skill development can log in to amazon.com. If it can, change its email address on the developer portal right away, then create a Developer account again with the original email address.</li> <li>If all else fails, cut your losses early and start over by creating a new amazon.co.jp account.</li> </ul> /blogs/alexa/post/fc74ee4c-b80f-4eff-85e8-76e0b45a5f7d/resource-roundup-top-alexa-tips-and-tutorials-for-smart-home-skill-builders Resource Roundup: Top Alexa Tips and Tutorials for Smart Home Skill Builders Jennifer King 2018-01-09T16:00:05+00:00 2018-01-09T16:01:00+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_purple._CB488767354_.png" style="height:240px; width:954px" /></p> <p>Our next installment of the resource roundup series features top tutorials and documentation we released last year for smart home skills. Reference these resources as you start building new smart home experiences for your customers.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_purple._CB488767354_.png" /></p> <p><em>Editor’s Note: This is an installment of a new series that showcases the top developer tips, tutorials, and educational resources to help you build incredible Alexa skills.
<a href="https://developer.amazon.com/blogs/alexa/tag/Resources">Follow the series</a> to learn, get inspired, and start building your own voice experiences for Alexa. </em></p> <p>Over the last few months we’ve announced several exciting new ways for developers to create seamless voice-first experiences for Alexa smart home customers. We released <a href="https://developer.amazon.com/blogs/alexa/post/0a55ae8a-1f39-411f-a3ca-6a19be80b2f3/now-available-routines-alexa-enabled-groups-and-smart-home-device-state-in-the-amazon-alexa-app">Routines and Groups</a> to give customers additional ways to control their smart home products, and we added <a href="https://developer.amazon.com/blogs/alexa/post/181a2237-8bb8-4dcb-badd-f70d14884c9f/introducing-smart-home-camera-control-with-alexa">smart home camera control</a> with Alexa. Additionally, we updated the <a href="https://developer.amazon.com/blogs/alexa/post/bd91b3e6-9799-445e-9cfc-7e3e2d78980f/alexa-delivers-rich-easy-to-use-new-features-for-smart-home-consumers-and-developers">Alexa Smart Home Skill API</a> to make it easier for you to support these and other smart home experiences with Alexa.</p> <p>Our next installment of the resource roundup series features top tutorials and documentation we released last year to help you leverage these updates as you start building new smart home experiences for your customers.</p> <h2>Fundamentals of the Updated Smart Home Skill API</h2> <p>Tune in to this <a href="https://register.gotowebinar.com/register/3830432815784122114" target="_blank">on-demand webinar</a> for an overview of the updated Alexa Smart Home Skill API. You’ll learn the basics of the API and how to add voice control to your smart devices.
We’ll cover new functionality and important considerations for migrating existing smart home skills to incorporate new capabilities.</p> <h2>How to Migrate Your Alexa Skill to the Updated Smart Home Skill API</h2> <p>The updated Smart Home Skill API makes it easier for you to support smart home experiences with Alexa. For those of you who already have smart home skills, it’s easy to migrate existing functionality to the updated API. Check out <a href="https://developer.amazon.com/blogs/alexa/post/87c5cf67-1f27-42d3-a07a-e4f559674392/how-to-migrate-your-alexa-skill-to-the-updated-smart-home-skill-api">this tutorial</a> and the <a href="https://developer.amazon.com/docs/smarthome/smart-home-skill-migration-guide.html">smart home migration guide</a> to learn how you can ensure a seamless migration for existing customers.</p> <h2>Best Practices for Supporting the Alexa App</h2> <p>The Smart Home Skill API enables you to provide the most up-to-date information on the state of a smart home device to Alexa, and that information is in turn provided to the customer in the Alexa app.</p> <p><a href="https://developer.amazon.com/docs/smarthome/best-practices-for-the-alexa-app.html">Read our documentation</a> to learn how smart home skills display in the Alexa app and review best practices to enable the best customer experience with the Alexa app.</p> <h2>Certify Your Device with Works with Amazon Alexa</h2> <p>Device makers building smart home skills also have the opportunity to qualify for the <a href="https://developer.amazon.com/alexa/works-with-amazon-alexa">Works with Amazon Alexa</a> certification program. This program ensures that customers have an intuitive, hassle-free experience when shopping for a smart home device. When your products are certified, they can carry the Works with Amazon Alexa badge, appear in the Alexa Smart Home store, and be considered for additional co-marketing opportunities. 
Check out the <a href="https://developer.amazon.com/docs/smarthome/certify-your-device-with-works-with-amazon-alexa.html">technical documentation</a> on how to obtain Works with Amazon Alexa certification.</p> <p><em>Check out the <a href="https://developer.amazon.com/blogs/alexa/tag/Resources">previous posts</a> in our series featuring resources for building new skills, advanced skills, kid skills, and skills for devices with screens. Then, </em><em>tell us about what you’re building for voice with Alexa. </em><a href="https://twitter.com/alexadevs" target="_blank"><em>Tweet us</em></a><em> using the hashtag </em><a href="https://twitter.com/search?q=%23alexapioneers&amp;src=typd" target="_blank"><em>#AlexaPioneers</em></a><em>.</em></p> /blogs/alexa/post/ba17fd33-6510-45d6-b682-ee9ed9ef589c/single-soc-dev-kits-for-avs Introducing Single-Chip Solutions for Building Alexa-Enabled Products Rachel Bennett 2018-01-08T23:00:00+00:00 2018-01-08T23:00:00+00:00 <p><img alt="SOC-blog-2x.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaVoiceService/SOC-blog-2x._CB488640343_.png?t=true" /></p> <p>Today, we launch the first single system-on-chip (SoC) development kits for the Alexa Voice Service (AVS).&nbsp;These new System Development Kits for AVS are ideal for OEMs and ODMs looking for a complete system SoC solution, including the audio front end (AFE), to build Alexa-enabled products.</p> <p><img alt="SOC-blog-2x.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaVoiceService/SOC-blog-2x._CB488640343_.png?t=true" /></p> <p style="margin-left:0in; margin-right:0in; text-align:justify">Today, we launch the first single system-on-chip (SoC) development kits for the <a href="https://developer.amazon.com/alexa-voice-service" target="_blank">Alexa Voice Service (AVS)</a>.
We are excited to broaden the portfolio of development kits for AVS to include complete single-chip solutions that provide a simplified architecture, high performance, and low total BOM cost. These new System Development Kits for AVS are ideal for OEMs and ODMs looking for a complete solution to build Alexa-enabled products.</p> <h2 style="text-align:justify">Why Single-Chip SoC Solutions</h2> <p style="margin-left:0in; margin-right:0in; text-align:justify">A typical system designed for AVS requires a microphone and speaker with three processing blocks: an audio front end (AFE) to clean up the input speech, a wake word engine (WWE) to recognize the voice command for “Alexa,” and an AVS client to send utterances to and receive directives from the cloud.</p> <p style="margin-left:0in; margin-right:0in; text-align:justify">Existing <em>Audio Front End Development Kits for AVS</em> utilize a dedicated digital signal processor (DSP) to implement the AFE processing block, while leveraging a Raspberry Pi to host the WWE and AVS client for prototyping purposes. This provides a modular solution to easily integrate voice capture abilities into connected products. OEMs can use a qualified solution to easily add Alexa voice capabilities, and choose the production-ready SoC they prefer for their final product.</p> <p style="margin-left:0in; margin-right:0in; text-align:justify"><em>System Development Kits for AVS</em> offer a complete, end-to-end system reference design for building AVS products. This category includes AFE development kits that are also available in qualified bundles with production-ready SoCs. The <em>new</em> <em>System Development Kits for AVS</em> that we’re introducing today have SoCs that perform all three processing block functions on a single chip. They allow OEMs to leverage existing SoC relationships and existing infrastructure, while minimizing the impact on their design cycles. 
The single-chip SoC development kits in this category do not require an external DSP.</p> <p style="margin-left:12pt; text-align:justify"><em>“By qualifying these first single SoC solutions, we’re expanding the portfolio of development kits for AVS in an important way for OEMs and ODMs building Alexa-enabled products,” said Priya Abani, Director, Amazon Alexa. “These dev kits reduce the cost of integrating Alexa, as well as accelerate time to market due to a near-production-ready architecture that streamlines technology integration and accelerates product development.”</em></p> <h2 style="text-align:justify">Introducing Single-Chip SoC Development Kits from Qualcomm, Amlogic, and Allwinner</h2> <p style="margin-left:0in; margin-right:0in; text-align:justify">The first single-chip SoC solutions qualified by Amazon are development kits from Qualcomm, Amlogic, and Allwinner:</p> <ul> <li style="text-align:justify"><strong><a href="https://developer.amazon.com/alexa-voice-service/dev-kits/qualcomm-6-mic" target="_blank">Qualcomm's Smart Audio 6-Mic Development Kit for Amazon AVS</a></strong> is based on the Qualcomm<sup>&reg;</sup> Smart Audio platform. It supports the development of Alexa-enabled table-top smart home devices and premium smart speakers. The kit includes Qualcomm’s Snapdragon™ application processor and system software; Fluence IoT technology for beamforming, echo cancellation, noise suppression, barge-in support, and far-field voice capture in high-noise environments; integrated audio decoding and post-processing algorithms to support premium sound quality; WWE tuned to “Alexa”; and AllPlay™ multi-room networking capabilities.</li> <li style="text-align:justify"><strong><a href="https://developer.amazon.com/alexa-voice-service/dev-kits/amlogic-6-mic" target="_blank">Amlogic's A113X1 Far-Field Development Kit for Amazon AVS</a></strong> is designed for commercial developers integrating Alexa into smart speakers, smart home hubs, and other devices.
The development kit comes with a board that is configurable for 2-mic linear, 4-mic triangular, and 6-mic circular arrays, making it a flexible solution for a range of hands-free and far-field applications. The SoC supports advanced algorithms for echo cancellation, beamforming, and noise reduction, as well as audio post-processing effects. The A113X SoC boasts high performance with low power consumption, enabling application headroom for cost-sensitive OEM designs with the option for battery-powered applications.</li> <li style="text-align:justify"><strong><a href="https://developer.amazon.com/alexa-voice-service/dev-kits/allwinner-3-mic" target="_blank">Allwinner's SoC-Only 3-Mic Far-Field Development Kit for Amazon AVS</a></strong> is designed for OEMs and ODMs integrating Alexa into devices including smart speakers, table-top smart home devices with displays, home appliances, and robotic gadgets. The development kit offers far-field performance with a 3-mic array, and uses a single quad-core SoC architecture that does not require an external DSP – the combination of which delivers a compelling and cost-effective solution for integrating Alexa into finished products. The SoC supports advanced algorithms for audio echo cancellation, beamforming, and noise reduction, and provides application headroom for OEM designs with the option for battery-operated designs.</li> </ul> <p style="margin-left:0in; margin-right:0in; text-align:justify">Our vision at Amazon is to bring Alexa to customers everywhere: in their homes, at the office, on the go, and everywhere in between. Voice will be woven into our daily lives and will extend beyond our Echo family of devices.
As Alexa continues to be directly integrated into the brands customers know and love, we want to make it incredibly easy for developers and manufacturers to build Alexa into their products.</p> <p style="margin-left:0in; margin-right:0in; text-align:justify">We continue to work with leading technology providers to bring simpler ways to build Alexa-enabled products, which ultimately offers customers more choice. See our full portfolio of qualified <a href="https://developer.amazon.com/alexa-voice-service/dev-kits/" target="_blank">Development Kits for AVS</a>.</p> <h2 style="text-align:justify">Getting Started with AVS</h2> <p style="margin-left:0in; margin-right:0in; text-align:justify">The Alexa service is always getting smarter, with new features and improved natural language understanding and accuracy. As a developer, your product also gains access to new capabilities with Alexa through API updates, feature launches, and custom skills. Learn how AVS can add rich voice-powered experiences to your connected products on the <a href="https://developer.amazon.com/alexa-voice-service" target="_blank">AVS Developer Portal</a>.</p> <p style="text-align:justify">Have questions? We’re here to help. Visit us on the <a href="https://forums.developer.amazon.com/spaces/38/index.html" target="_blank">AVS Forum</a> or <a href="https://github.com/alexa/alexa-avs-sample-app" target="_blank">Alexa GitHub</a> to speak with one of our experts.</p>
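<p>The three processing blocks described in the AVS post above (AFE, WWE, and AVS client) can be pictured as a simple pipeline. The sketch below is illustrative only: every function name in it is hypothetical, and a real device implements these blocks with the vendor's DSP firmware and the AVS client libraries, not Python.</p>

```python
# Illustrative sketch of the three processing blocks a typical AVS
# device chains together. All names here are hypothetical stand-ins
# for vendor DSP firmware and the AVS client software.

def audio_front_end(frames):
    """AFE: clean up the captured audio. Real AFEs do echo
    cancellation, beamforming, and noise reduction; modeled here
    as simply trimming each frame."""
    return [f.strip() for f in frames]

def wake_word_engine(frames, wake_word="alexa"):
    """WWE: scan the cleaned audio for the wake word and return
    everything spoken after it, or None if it never occurs."""
    for i, frame in enumerate(frames):
        if frame.lower() == wake_word:
            return frames[i + 1:]
    return None

def avs_client(utterance_frames):
    """AVS client: on a real device this streams the utterance to
    the Alexa Voice Service and handles the directives that come
    back; here it just returns the utterance that would be sent."""
    return " ".join(utterance_frames)

def handle_audio(frames):
    cleaned = audio_front_end(frames)
    utterance = wake_word_engine(cleaned)
    if utterance is None:
        return None  # no wake word: nothing leaves the device
    return avs_client(utterance)

print(handle_audio([" Alexa ", "what", "time", "is", "it"]))
# -> what time is it
```

<p>The point of the sketch is the ordering: audio is cleaned first, nothing is sent to the cloud unless the wake word is detected, and only the utterance after the wake word reaches the AVS client. The single-chip SoC kits introduced above run all three of these blocks on one chip instead of splitting the AFE onto a separate DSP.</p>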