Alexa Blogs Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2018-12-14T23:02:33+00:00 Apache Roller /blogs/alexa/post/173575ce-3303-493a-a2e1-22ec81e7525d/test-your-live-alexa-skills-to-maintain-a-consistent-customer-experience-over-time Test Your Live Alexa Skills to Maintain a Consistent Customer Experience Over Time Leo Ohannesian 2018-12-14T23:02:33+00:00 2018-12-14T23:02:33+00:00 <p>Starting today, developers can test live Alexa skills with the same toolset used to test skills during development. As Alexa gets smarter and new capabilities are introduced, it is good practice to run ongoing regression tests after publishing your skill to ensure customers continue to have the great voice experience you designed.</p> <p>In the past, testing tools were available in the developer console only during skill development. Now you can debug and test your skill after publishing, directly from the developer console, allowing you to deliver a consistent skill experience over time. With the ability to test live skills, you can simulate and reproduce any issues reported by customers. Live skill testing also allows you to set up automated test cases for production environments through the Skill Management API (SMAPI), which will report any changes in expected skill behavior or the customer experience over time.</p> <p>In this post, we walk you through three ways to test your live skill: in the developer console, through the Alexa Skills Kit (ASK) Command-Line Interface (CLI), and through the Simulation and Invocation SMAPI APIs.</p> <h2>Live Skill Testing in the Developer Console</h2> <p>The first place you can test live skills is in the developer console.
In the “Test” tab, select “Live” as your skill stage and begin to test your published skill.</p> <p>After you select the live skill to test, you can debug customer-reported issues or investigate feedback using the Alexa simulator with the provided JSON responses.</p> <p>If you need to dive deeper into a reported issue, the “Device Logs” option provides further insight into your skill, including the time it took the skill to respond and the list of directives sent with each request. If you are a smart home developer, review the <a href="" target="_blank">Device Change Reports</a> for live debugging of smart home devices’ interaction with your published skill.</p> <h2>Set “Live” as Your Testing Stage in the Developer Console</h2> <p>You can only test one skill stage at a time, either development or live. In other words, enabling your skill for testing while live disables testing of the development version. Once you enable live testing in the developer console, your live skill will be available for testing in the developer console and on devices. After you make the “Live” selection, the testing website will redirect to the live stage URL and all session and testing information will be reset. If you have any information you want to preserve, save it before proceeding. This activity also generates a new User ID, so it can be used to simulate a first-time user experience. If you set a specific AWS Lambda function to operate the live version of your skill, that function will also be used while testing the live version.</p> <h2>Live Skill Testing with the Simulation, Dialog, and Invocation ASK CLI Commands</h2> <p>To test a published skill with the simulation, invocation, or dialog ASK CLI commands, simply set “live” as the stage in the format below.
Commands will default to the “development” stage if no stage is specified.</p> <p style="margin-left:0in; margin-right:0in"><code>dialog</code> command format:</p> <pre> <code>ask dialog [-s|--skill-id &lt;skill-id&gt;] [-l|--locale &lt;locale&gt;] [-g|--stage &lt;stage&gt;] [-r|--replay &lt;file-path&gt;] [-o|--output &lt;file-path&gt;]</code></pre> <p>When using the dialog CLI command with the ASK Toolkit for Visual Studio Code, you can now have a multi-turn conversation with your published skill from within the integrated development environment (IDE). This will return the JSON responses and debugging information in the output file for further investigation.</p> <p>The “enable-skill” CLI command now allows you to enable your live skill for testing, so you can switch between testing the development and live versions of your skill without having to leave the ASK CLI. This setting persists and allows you to test your live skill in the developer console and on devices.</p> <p>For details on simulation, invocation, dialog, and other ASK CLI commands, visit the <a href="" target="_blank">ASK CLI Command Reference</a> documentation.</p> <h2>Live Skill Testing with the Simulation and Invocation SMAPI APIs</h2> <p>You can now use version 2 of the <a href="" target="_blank">Simulation</a> and <a href="" target="_blank">Invocation</a> SMAPI APIs to test your published skills. These APIs let you perform Alexa skill management tasks programmatically; in this case, they can simulate skill execution and invoke your HTTPS endpoint (Lambda or otherwise). Using these APIs, you can create automated tests that safeguard your published skill from regressions. With the Invocation API, you can test your skill endpoint in isolation to verify that expected JSON responses are returned and that your endpoint latency stays below an expected threshold.
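As an example, an automated latency check against the Invocation API can be sketched in Python. This is a minimal sketch, not an official SDK: the SMAPI host, the v2 invocations path, and the authorization header format shown here are assumptions to verify against the SMAPI reference, and `check_latency` accepts any zero-argument callable so the threshold logic can be exercised without network access.

```python
import json
import time
import urllib.request

SMAPI_BASE = "https://api.amazonalexa.com"  # SMAPI host (assumed; verify in the docs)

def invocation_url(skill_id, stage="live"):
    """Build the v2 Invocation API URL for a skill stage."""
    return f"{SMAPI_BASE}/v2/skills/{skill_id}/stages/{stage}/invocations"

def check_latency(invoke, threshold_ms=1000.0):
    """Time one endpoint invocation and flag latency regressions.

    `invoke` is any zero-argument callable returning the parsed JSON
    response; keeping it injectable makes this easy to unit-test."""
    start = time.monotonic()
    response = invoke()
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return {"response": response,
            "latency_ms": elapsed_ms,
            "within_threshold": elapsed_ms <= threshold_ms}

def smapi_invoke(skill_id, access_token, request_body, stage="live"):
    """POST a skill request JSON to the Invocation API.

    Requires a valid Login with Amazon access token; not exercised here."""
    req = urllib.request.Request(
        invocation_url(skill_id, stage),
        data=json.dumps(request_body).encode("utf-8"),
        headers={"Authorization": access_token,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In a CI job, you would pass `lambda: smapi_invoke(skill_id, token, body)` to `check_latency` and fail the build when `within_threshold` is false.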
Similarly, the Simulation API allows you to perform end-to-end tests that ensure your skill’s interaction model and endpoint continue to work as expected.</p> <p>For details on the Simulation and Invocation APIs, visit the <a href="" target="_blank">Get Started with Skill Testing Operations</a> page.</p> <h2>Continually Test Your Alexa Skills to Deliver a Consistent User Experience</h2> <p>Testing your skill after it has been published allows you to proactively validate that customers are receiving a consistent experience. Whether you use the developer console, the ASK CLI, or the SMAPI APIs, you now have the ability to test your published skill.</p> <p>Get started by creating automated regression tests that continually run against your published skill. Visit our <a href="" target="_blank">testing documentation</a> page to start testing your live skill today. To see examples of how to build unit tests for an Alexa skill, visit our <a href="" target="_blank">GitHub repository</a>.</p> <h2>Related Resources</h2> <p>For more resources on Alexa skill testing, check out the following blogs:</p> <ul> <li><a href="" target="_blank">Building Engaging Alexa Skills: Why Testing and Automation Matter</a></li> <li><a href="" target="_blank">Now It’s Easier to Test Your Alexa Skill for a Great Customer Experience on Echo Spot</a></li> <li><a href="" target="_blank">Unit Testing: Creating Functional Alexa Skills</a></li> <li><a href="" target="_blank">How to Test Your In-Skill Products for a Great Customer Experience: 10 Test Cases</a></li> <li><a href="">Things Every Alexa Skill Should Do: Beta Testing</a></li> </ul> /blogs/alexa/post/a5a66a87-3e1a-41d8-808d-e9853d615aaf/deliver-better-user-experiences-faster-with-new-built-in-slot-types-and-an-intent Deliver Better User Experiences Faster with New Built-in Slot Types and an Intent Drew Meyer 2018-12-14T20:03:22+00:00 2018-12-14T20:03:22+00:00 <p>Today, six new built-in
slot types, along with the new <strong>AMAZON.NavigateHomeIntent</strong>, are available in most Alexa locales. These can help you improve dialog models and the customer experience on devices with screens.</p> <p>Built-in slot types and intents help you build Alexa skills faster and deliver better user experiences by reducing the number of sample utterances you need to provide. The six new slot types are all in public beta and the intent is generally available, so you can build and publish new skills, or rebuild and republish existing skills, with these new features in the Alexa Developer Console today.</p> <h2>Simplify and Improve Your Interaction Model with 6 New Built-In Slot Types (Beta)</h2> <p>Built-in slot types are defined by Amazon for many common skill use cases to save development time and provide a more consistent skill experience. For example, the Alexa Skills Kit (ASK) includes built-in slot types for capturing dates, times, and numbers so you don't have to provide sample values.
We define the representative list of values for each built-in slot type and improve it over time, eliminating the need for you to do this work.<br /> <br /> Starting today, you can use the following five list slot types in all locales supported by Alexa:</p> <p style="margin-left:40px"><strong>AMAZON.Actor,</strong> which captures names of screen actors, such as “Kevin Bacon,” “Bruce Lee,” “Rachel Maddow,” or “G&eacute;rard Depardieu.”</p> <p style="margin-left:40px"><strong>AMAZON.Animal,</strong> which captures animal names, such as “cat,” “hippopotamus,” and “giraffe.”</p> <p style="margin-left:40px"><strong>AMAZON.Airline,</strong> which captures the names of airlines, such as “Alaska,” “Delta,” and “Singapore Airlines.”</p> <p style="margin-left:40px"><strong>AMAZON.Airport,</strong> which captures IATA airport codes or names, such as “SEA,” “Sea-Tac,” or “Seattle Tacoma International Airport.”</p> <p style="margin-left:40px"><strong>AMAZON.Person,</strong> which captures full names of popular real and fictional people, such as “Barack Obama,” “Bruce Wayne,” and “Voltaire.”</p> <p>You may now also use this sixth slot type in all locales except en_IN (support expected in 2019):</p> <p style="margin-left:40px"><strong>AMAZON.StreetName,</strong> which captures street names, such as “Downing St.,” “Melrose Place,” “Evergreen Terrace,” and “E. Thomas Street.”</p> <p>You can find more information on these new list slot types in the <a href="" target="_blank"><u>Slot Type Reference</u></a> documentation.</p> <h2>Take Your Customers “Home” More Easily with a New Standard Built-in Intent</h2> <p>We've also updated the developer console in all locales with a new, standard built-in intent for Alexa-enabled devices with screens called <strong>AMAZON.NavigateHomeIntent</strong>.
This provides a consistent way for you to help customers on multimodal devices (such as Amazon Fire TV or Echo Show) exit a skill and return to the device's home screen. This intent is available for all new skills and automatically applied to all existing multimodal skills.<br /> <br /> You can find more information on this new built-in intent in the <a href="" target="_blank"><u>Standard Built-in Intents documentation</u></a>.</p> <h2>Learn More and Get Started Today</h2> <p>To learn how to use built-in intents and slots in your interaction model, read about <a href="" target="_blank"><u>creating intents, utterances, and slots</u></a>.</p> <h2>Related Resources</h2> <ul> <li><a href="" target="_blank"><u>Validate Slot Values</u></a></li> <li><u><a href="" target="_blank">Build Advanced Alexa Skills with Dialog Management</a></u></li> <li><u><a href="" target="_blank">Guide: Advanced Skill Building with Dialog Management</a></u></li> </ul> /blogs/alexa/post/f23d0796-6c98-40d4-b499-e198b347a998/location-services-launch Enhance Your Customer Experience with Real-Time Location Services for Alexa Skills June Lee 2018-12-13T18:09:10+00:00 2018-12-13T20:41:33+00:00 <p>We are excited to announce that you can now add location services to your Alexa skills to provide customers with real-time responses based on their location. Your skill can ask the customer’s permission to use the real-time location of their Alexa-enabled device, only at the time of their request, in order to provide key functionality for the skill or to enhance the customer experience. For example, a map skill could use location services to provide directions when a customer asks where they can find the nearest coffee shop.
Location services are supported on the Alexa app and Echo Auto at launch, and will expand to other devices in the future.</p> <p>The <a href=";ie=UTF8&amp;qid=1544724156&amp;sr=1-1&amp;keywords=snap+travel?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_detail=forum&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_forum_Discover_WW_unknown&amp;sc_segment=unknown">SnapTravel skill</a> uses location services to help customers quickly get relevant recommendations based on their real-time location.</p> <p>“Adding location services to the SnapTravel Alexa skill allows us to customize the voice experience based on customers’ location,” says Hussein Fazal, CEO of SnapTravel. “By integrating this feature into our skill, SnapTravel customers can now get hotel recommendations close to them even faster.”</p> <h2>How Location Services Work for Alexa Skills</h2> <p>To enable location services, the customer must grant permission for Alexa to share their location with your skill, and their Alexa-enabled device must have location sharing turned on. When a customer enables a skill that requests to use location services, the customer will be prompted in the Alexa app to consent to the location data being shared with the skill. Customers can visit the Alexa Privacy Settings page at any time to manage their skill permissions. Once a customer grants permission, your skill will be able to request the current device location in a geo-coordinate format.</p> <h2>Get Started with Location Services for Alexa Skills</h2> <p>To get started with location services, you’ll need to:</p> <ul> <li>Step 1–Enable Location Services permissions</li> </ul> <p>Go to the developer console, sign in, and click Skills at the upper right. Then click Edit to open your skill.
Select Build &gt; Permissions and enable the Location Services permission.</p> <ul> <li>Step 2–Modify your skill service logic to handle location services data</li> </ul> <p>Location data is delivered as a JSON object in the request sent to your web service or AWS Lambda function. The API passes location information only at the time of the request, and rewrites the location with every new customer request. To determine whether the customer's device can share location, check whether the <em>context.System.device.supportedInterfaces</em> object has a geolocation field. If the customer’s device does not have geolocation capabilities, such as a stationary Echo device, your skill should provide a customer experience that does not require the customer’s real-time location.</p> <ul> <li>Step 3–Test your skill to ensure location services work as expected</li> </ul> <p>Once your skill logic is written to handle real-time locations, you can start testing on a mobile device using the Alexa app.</p> <p>Location services for Alexa skills is available in all locales supported by Alexa. We plan to support additional Alexa-enabled portable devices over time and will share updates via <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">technical documentation</a>.
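The Step 2 capability check can be sketched in Python as below. The `supportedInterfaces` path comes from the description above; the field names under `context.Geolocation` are assumptions to verify against the geolocation documentation.

```python
def supports_geolocation(request_envelope: dict) -> bool:
    """True if the requesting device can share location (the
    supportedInterfaces check described in Step 2)."""
    interfaces = (request_envelope.get("context", {})
                  .get("System", {})
                  .get("device", {})
                  .get("supportedInterfaces", {}))
    return "Geolocation" in interfaces

def current_coordinate(request_envelope: dict):
    """Return (latitude, longitude) from the request's geolocation
    context, or None when no location was shared with this request.
    The coordinate field names here are assumptions; check the docs."""
    coord = (request_envelope.get("context", {})
             .get("Geolocation", {})
             .get("coordinate"))
    if not coord:
        return None
    return (coord["latitudeInDegrees"], coord["longitudeInDegrees"])
```

When `supports_geolocation` returns `False` (for example, on a stationary Echo device), the handler should fall back to an experience that does not need the customer's real-time location, as recommended above.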
To get started, check out the skills that have already enabled location services such as <a href=";ie=UTF8&amp;qid=1544722916&amp;sr=1-2&amp;keywords=gasbuddy?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">Gas Buddy</a>, <a href=";ie=UTF8&amp;qid=1544722999&amp;sr=1-1&amp;keywords=parkwhiz?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">Parkwhiz</a>, and <a href=";ie=UTF8&amp;qid=1544723031&amp;sr=1-1&amp;keywords=big+sky?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">Big Sky</a> and read our <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">documentation</a>. As always, please share your feedback on the <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=locationservices&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_detail=forum&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_locationservices_launch_Content_forum_Discover_WW_unknown&amp;sc_segment=unknown">Alexa developer forum</a> and let us know what you think. 
Make sure to add the &quot;location services&quot; topic to your post.</p> /blogs/alexa/post/b53a01b4-ae70-44d0-baee-593c550733ed/recordings-and-resources-the-best-of-alexa-at-aws-re-invent-2018 Recordings and Resources: The Best of Alexa at AWS re:Invent 2018 Jennifer King 2018-12-13T15:00:00+00:00 2018-12-13T15:00:00+00:00 <p>We recently wrapped up <a href="">AWS re:Invent 2018</a>, where the Alexa team engaged with the largest gathering of global Amazon developers. With over 100 technical sessions, this was Alexa’s biggest AWS re:Invent ever. Our <a href="">favorite sessions</a> included hands-on training workshops, interactive chalk talks, breakout sessions, and builder sessions to help developers dive deep into various voice design topics.</p> <p>If you weren’t able to attend all the Alexa sessions you wanted this year, or you were following AWS re:Invent activities at home, we’re excited to share that the presentation slides and recordings for breakout sessions are now available.</p> <p>Check out the videos below to watch our top Alexa sessions from AWS re:Invent. We also share resources for each session to help you start applying the concepts right away.</p> <h2>Alexa Everywhere, a Year in Review</h2> <p>Chief Alexa Evangelist Dave Isbitski (<a href="" target="_blank">@thedavedev</a>) gets you up to speed on the current voice-first movement and conversational AI trends. He also shares demonstrations of some of the latest Alexa features and devices.
Learn about the new Alexa Skills Kit (ASK) multimodal framework, the Alexa Presentation Language (APL), Alexa skill fulfillment and consumables for customers, and some of the latest device offerings utilizing the Alexa Voice Service (AVS) and the new Alexa Gadgets Toolkit.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href="" target="_blank">slides</a></li> <li>Learn about <a href="">Alexa-hosted skills</a> to build, edit, and publish a skill without leaving the Alexa Developer Console</li> <li>Learn more about the <a href="">Alexa Presentation Language</a>, a new design language that allows you to build interactive, multimodal Alexa skills and customize them for different Alexa-enabled devices</li> <li>Explore the <a href="">AVS Device Software Development Kit</a> (SDK), which commercial device makers use to integrate Alexa directly into connected products</li> <li>Create fun and delightful Echo-connected accessories with the <a href="">Alexa Gadgets Toolkit</a></li> </ul> <h2>Make Money with Alexa Skills</h2> <p>Alexa Evangelist Jeff Blankenburg (<a href="" target="_blank">@jeffblankenburg</a>) talks about how you can leverage Alexa in-skill purchasing (ISP), Amazon Pay, and developer reporting tools to help unlock premium digital content in a custom voice experience.
You will discover how in-skill purchasing gives developers the flexibility to offer, and customers to purchase, consumables, subscriptions, and one-time entitlements.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href="" target="_blank">slides</a></li> <li>Get started with <a href="">in-skill purchasing</a></li> <li>Check out our ISP <a href="" target="_blank">sample code on GitHub</a> to learn how to leverage in-skill purchasing to build a one-time purchase and a subscription in an Alexa skill</li> <li>Get an overview of the <a href="">different types of in-skill products</a> you can offer in your Alexa skill</li> </ul> <h2>Learn from the Field: Best Practices for Making Money with Alexa Skills</h2> <p>Watch as Alexa Product Manager Neelam Saboo and Max Child, founder of Volley and creator of Alexa skills Yes Sire and Song Quiz, walk you through the process of designing and adding in-skill purchasing to your skills. Max shares Volley’s in-skill purchasing journey, lessons learned, and best practices.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href="" target="_blank">slides</a></li> <li>Read the case study to learn <a href="">how in-skill purchasing is taking Volley’s thriving voice business to the next level</a></li> </ul> <h2>Use Alexa to Reach Millions of New Customers by Developing for Multiple Screens</h2> <p>This session introduces the Alexa Presentation Language (APL), a new design language that makes it easy to develop interactive voice and touch experiences that are portable to any Alexa-enabled device with a screen.
Watch as Arunjeet Singh, senior product manager for Alexa skills, and Alexa Champion <a href="">Steven Arkonovich</a>, who developed the popular skill <a href="">Big Sky</a>, share how Arkonovich used APL to make a voice-first skill visually rich and even more engaging.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href="" target="_blank">slides</a></li> <li>Learn more about the <a href="">Alexa Presentation Language</a>, a new design language that allows you to build interactive, multimodal Alexa skills and customize them for different Alexa-enabled devices</li> <li>Check out our <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">APL code samples on GitHub</a></li> <li>Watch our on-demand webinar <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=certification&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_certification_Content_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">Getting Started with the Alexa Presentation Language</a> to learn the basics of APL</li> <li>Watch our on-demand webinar on <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">Advanced Template Building with the Alexa Presentation Language</a> to watch an Alexa evangelist build an APL skill live using advanced techniques</li> <li>Check out these <a href="">10 tips for designing Alexa skills with visual responses</a></li>
<li>Read the case study to learn <a href="">how Steven Arkonovich enhanced his voice-first skill</a> with visuals and touch using APL</li> </ul> <h2>Smart Home Skill API: Connect Any Device to Alexa &amp; Control Any Feature</h2> <p>In this session, Rick Carragher, director of Alexa Smart Home, and Mark Aiken, principal software engineer, walk you through the updates to the Smart Home Skill API, featuring new capability interfaces you can use as building blocks to connect any device to Alexa. You will also learn how to create Alexa skills that contain multiple interaction models to provide a seamless customer experience.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href="" target="_blank">slides</a></li> <li>Learn more about <a href="">Alexa smart home skills</a></li> <li>Learn about <a href="">which types of devices are supported</a> with the Smart Home Skill API</li> </ul> <h2>Three Lessons from “Escape the Room” on Making Money with Your Alexa Skills</h2> <p>Games push the boundaries of technology, and the lessons they teach apply to a wide range of use cases.
In this session, Gal Shenar, founder of Stoked Skills, and Paul Cutsinger (<a href="">@paulcutsinger</a>), head of developer education at Alexa, talk about designing and implementing in-skill purchasing for successful game skills like “Escape the Room” and “Escape the Airplane.”</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href=";v=&amp;b=&amp;from_search=1" target="_blank">slides</a></li> <li>Read the case study to learn how <a href="">Gal Shenar is using in-skill purchasing</a> to set his growing voice business up for long-term success</li> <li>Get started with <a href="">in-skill purchasing</a></li> <li>Check out our ISP <a href="" target="_blank">sample code on GitHub</a> to learn how to leverage in-skill purchasing to build a one-time purchase and a subscription in an Alexa skill</li> <li>Get an overview of the <a href="">different types of in-skill products</a> you can offer in your Alexa skill</li> </ul> <h2>Alexa Skill Developer Tools: Build Better Skills Faster</h2> <p>Dylan Zwick (<a href="" target="_blank">@pulselabs_cpo</a>), chief product officer at Pulse Labs, joins Paul Cutsinger to share the suite of Alexa developer tools you can use to increase productivity when coding, deploying, testing, debugging, and collaborating with others on your skill.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href=";v=&amp;b=&amp;from_search=1" target="_blank">slides</a></li> <li>Check out this <a href="">roundup of developer tools</a></li> <li>Check out our blog on <a href="">how to test your in-skill products for a great customer experience</a></li> </ul> <h2>Voice Assistants Beyond Smart Speakers: Integrate Alexa into Your Unique Product</h2> <p>Alexa Voice Service Solutions Architect Donn 
Morrill takes you through a deep dive into the system architecture for voice-enabled products with &quot;Alexa Built-In.&quot; Watch to learn how to choose the right hardware and software tools, ensure a great customer experience with test and certification guidelines, and leverage qualified solution providers to get your products (from smart speakers to headphones, screen-based devices to smart home, and more) to market faster.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related Resources:</p> <ul> <li>Get the presentation <a href=";v=&amp;b=&amp;from_search=1" target="_blank">slides</a></li> <li>Learn more about the <a href="">Alexa Presentation Language</a>, a new design language that allows you to build interactive, multimodal Alexa skills and customize them for different Alexa-enabled devices</li> <li>Explore the <a href="">AVS Device Software Development Kit</a> (SDK), which commercial device makers use to integrate Alexa directly into connected products</li> <li>Select qualified hardware solutions for your unique product at our <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=TopNav&amp;sc_publisher=website&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=US&amp;sc_medium=Owned_WB_TopNav_website_Content_Discover_US_newdev&amp;sc_segment=newdev">Alexa for Device Makers</a> page</li> <li>Get started today by building your first Alexa Built-in prototype on Raspberry Pi at our <a href="">AVS tutorials</a> page</li> </ul> <h2>Keep Learning with the Alexa Team</h2> <p>Stay in touch with the Alexa team on <a href="" target="_blank">Twitch</a> to keep learning about voice-design trends and skill-building best practices. Connect with Alexa on <a href="" target="_blank">Twitter</a>, <a href="" target="_blank">Facebook</a>, and <a href="" target="_blank">LinkedIn</a>.
To get the latest Alexa developer news delivered straight to your inbox, <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_reInvent&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=newsletter&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_reInvent_ASK_Content_newsletter_Convert_WW_ask-reinvent_reinventpage&amp;sc_segment=ask-reinvent&amp;sc_place=reinventpage" target="_blank">subscribe to our newsletter</a>.&nbsp;</p> <h2>Build Skills, Earn Developer Perks</h2> <p>Bring your big idea to life with Alexa and earn perks through our <a href="">milestone-based developer promotion</a>. US developers, publish your first Alexa skill and earn an Alexa developer t-shirt. Publish a skill for Alexa-enabled devices with screens and earn an Echo Spot. Publish a skill using the Gadgets Skill API and earn a 2-pack of Echo Buttons. If you're not in the US, check out our promotions in <a href="">Canada</a>, the <a href="" target="_blank">UK</a>, <a href="" target="_blank">Germany</a>, <a href="">Japan</a>, <a href="">France</a>, <a href="">Australia</a>, and <a href="" target="_blank">India</a>. <a href="">Learn more</a> about our promotion and start building today.</p> /blogs/alexa/post/7064802d-1f63-4be1-aa78-8a65bc1016b4/alexa-arm-my-security-system-customers-can-now-control-their-security-systems-with-alexa-using-the-security-panel-controller-api Alexa, Arm My Security System. 
Now You Can Connect Security Systems with Alexa Using the Security Panel Controller API Brian Crum 2018-12-13T14:27:47+00:00 2018-12-13T15:53:04+00:00 <p>We are excited to announce the <a href="">Security Panel Controller API</a>, enabling your customers to control their security systems via Alexa-enabled devices. Once you’ve implemented the&nbsp;<a href="" target="_blank">Security Panel Controller API</a>, your customers can arm, disarm, and query their security systems with Alexa. Alexa supports arming in away, home or stay, and night modes. The Security Panel Controller API is available today in the US, and security system providers such as ADT, Ring, Honeywell Home, abode, and Scout Alarm are already leveraging these new smart home capabilities.</p> <h2>How the Security Panel Controller API Works</h2> <p>Arming and disarming a security system are done through the Arm and Disarm directives, which signal that a customer has asked Alexa to activate or deactivate their security system. Customer utterances that trigger Arm include “Alexa, arm &lt;device name&gt; in &lt;mode type&gt; mode” and “Alexa, arm.” If a customer doesn’t specify a mode, the default arming mode is stay mode (a.k.a. home mode).
Customer utterances that support Disarm are “Alexa, disarm &lt;device name&gt;,” and “Alexa, disarm.” Here is what the Arm and Disarm requests look like:</p> <p>Arm Directive:</p> <pre> <code>{
  &quot;directive&quot;: {
    &quot;header&quot;: {
      &quot;namespace&quot;: &quot;Alexa.SecurityPanelController&quot;,
      &quot;name&quot;: &quot;Arm&quot;,
      &quot;messageId&quot;: &quot;2bfeb157-89b1-40f3-ba50-677e233b3312&quot;,
      &quot;correlationToken&quot;: &quot;an opaque correlation token&quot;,
      &quot;payloadVersion&quot;: &quot;3&quot;
    },
    &quot;endpoint&quot;: {
      &quot;scope&quot;: {
        &quot;type&quot;: &quot;BearerToken&quot;,
        &quot;token&quot;: &quot;&lt;an OAuth2 bearer token&gt;&quot;
      },
      &quot;endpointId&quot;: &quot;&lt;the identifier of the target endpoint&gt;&quot;,
      &quot;cookie&quot;: {
        // key/value pairs as received during discovery
      }
    },
    &quot;payload&quot;: {
      &quot;armState&quot;: &quot;ARMED_AWAY&quot;,
      &quot;isArmInstant&quot;: true
    }
  }
}</code></pre> <p>Disarm Directive:</p> <pre> <code>{
  &quot;directive&quot;: {
    &quot;header&quot;: {
      &quot;namespace&quot;: &quot;Alexa.SecurityPanelController&quot;,
      &quot;name&quot;: &quot;Disarm&quot;,
      &quot;messageId&quot;: &quot;2bfeb157-89b1-40f3-ba50-677e233b3312&quot;,
      &quot;correlationToken&quot;: &quot;an opaque correlation token&quot;,
      &quot;payloadVersion&quot;: &quot;3&quot;
    },
    &quot;endpoint&quot;: {
      &quot;scope&quot;: {
        &quot;type&quot;: &quot;BearerToken&quot;,
        &quot;token&quot;: &quot;&lt;an OAuth2 bearer token&gt;&quot;
      },
      &quot;endpointId&quot;: &quot;&lt;the identifier of the target endpoint&gt;&quot;,
      &quot;cookie&quot;: {
        // key/value pairs as received during discovery
      }
    },
    &quot;payload&quot;: {
      &quot;authorization&quot;: {
        &quot;type&quot;: &quot;FOUR_DIGIT_PIN&quot;,
        &quot;value&quot;: &quot;1234&quot;
      }
    }
  }
}</code></pre> <p>&nbsp;</p> <p>Security panel customers must enable the disarm-by-voice feature in order to use the Disarm capabilities. 
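On the skill side, a backend dispatches on the directive header's `name` field. The sketch below is illustrative only: `handle_security_panel_directive` and the `DemoPanel` client are hypothetical names, and a real implementation must return the documented Alexa.SecurityPanelController response events rather than these bare dictionaries.

```python
def handle_security_panel_directive(request, panel):
    """Dispatch an Alexa.SecurityPanelController directive to a
    (hypothetical) security-panel cloud client `panel`."""
    header = request["directive"]["header"]
    payload = request["directive"]["payload"]
    if header["namespace"] != "Alexa.SecurityPanelController":
        raise ValueError("unexpected namespace: " + header["namespace"])
    if header["name"] == "Arm":
        # armState is one of ARMED_AWAY, ARMED_STAY, ARMED_NIGHT.
        return panel.arm(payload["armState"])
    if header["name"] == "Disarm":
        # Disarm carries the customer's PIN or Alexa-specific voice code.
        auth = payload.get("authorization", {})
        return panel.disarm(auth.get("type"), auth.get("value"))
    raise ValueError("unsupported directive: " + header["name"])


class DemoPanel:
    """Stand-in for a real security-system cloud API."""
    def arm(self, arm_state):
        return {"armState": arm_state}

    def disarm(self, auth_type, value):
        return {"armState": "DISARMED"}
```

In a real skill the `panel` calls would hit your security-system control cloud, and disarm requests would verify the supplied PIN or voice code before deactivating anything.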
Customers can enable disarm-by-voice during the setup flow or by visiting their panel’s settings page in the Alexa app. If your security system supports 4-digit PIN codes known to your security system control cloud, customers can choose to use either their existing PIN or an Alexa-specific voice code to disarm their system. If your security system does not support 4-digit PIN codes known to your cloud, customers must create an Alexa-specific voice code to disarm.</p> <p>Security panel providers such as ADT, Ring, Honeywell Home, abode, and Scout Alarm are already leveraging the Security Panel Controller API, and customers can use their respective skills today. To get started, you can follow the instructions in the <a href="" target="_blank">Security Panel Controller API</a> documentation.</p> /blogs/alexa/post/f2c93a90-5539-4386-aefb-2342f9b1cc4c/new-approach-to-language-modeling-reduces-speech-recognition-errors-by-up-to-15 New Approach to Language Modeling Reduces Speech Recognition Errors by Up to 15% Larry Hardesty 2018-12-13T13:03:08+00:00 2018-12-13T13:38:58+00:00 <p>New Alexa capabilities are often bootstrapped using &quot;grammars&quot;, formal rules that can generate artificial training examples for machine learning systems. A new method for constructing statistical language models directly from grammars can improve speech recognition on new capabilities by up to 15%.</p> <p>Language models are a key component of automatic speech recognition systems, which convert speech into text. A language model captures the statistical likelihood of any particular string of words, so it can help decide between different interpretations of the same sequence of sounds.</p> <p>Automatic speech recognition <a href="" target="_blank">works better</a> when language models adapt to conversational context: the probabilities of the phrases “Red Sox” and “red sauce”, for instance, are very different if the customer is asking about sports or recipes. 
So when a voice service introduces a new capability, which creates a new set of conversational contexts, it makes sense to update the associated language model.</p> <p>But building language models requires a large body of training data, which may be lacking for newly launched capabilities. New capabilities are frequently bootstrapped with formal grammars, which generate sample sentences by producing variations on a sometimes very complex set of templates.&nbsp;</p> <p>Using a formal grammar to produce enough data to train a language model would be prohibitively time consuming, so instead, AI researchers usually make do with a random sampling of the grammar’s output.&nbsp;</p> <p>In a <a href="" target="_blank">paper</a> we’re presenting at this year’s IEEE Spoken Language Technologies conference, we propose an alternative. We describe an algorithm that can analyze a particular mathematical representation of a grammar’s rules — a graphical representation — and directly calculate the probability that the grammar will produce any given string of words.&nbsp;</p> <p>We also describe a technique for integrating language models generated directly from grammars with existing language models, in a way that doesn’t degrade the performance of established capabilities.</p> <p>In our experiments, language models produced by our method reduced the error rates of speech recognition systems by as much as 15%, relative to language models that sampled the output of the same grammars. We believe that our method could improve the performance of newly launched Alexa capabilities, before their ordinary use begins to generate more realistic training data.</p> <p>In natural-language-understanding (NLU) research, a grammar generally consists of a list of rules governing word or phrase substitutions. For instance, one rule might declare that in the phrase “I want to”, the words “need” or “would like” can be substituted for “want”. 
Other rules might link to catalogues of entity names — a list of song names that can follow the word “play,” for instance.</p> <p><img alt="Sample_grammar.png" src="" style="display:block; height:159px; margin-left:auto; margin-right:auto; width:550px" /></p> <p style="text-align:center"><sup><em>A sample grammar for a recipe application. The variable&nbsp;</em>DISH_NAME<br /> <em>&nbsp;links to a catalogue of entity names.</em></sup></p> <p>NLU researchers usually implement grammars using so-called finite-state transducers, or FSTs. An FST can be represented as what computer scientists call a graph. Graphs, in this sense, are usually depicted as circles, or nodes, that are connected by line segments, or edges. A network diagram is a common example of a graph.</p> <p>In the graph of a formal-grammar FST, the edges (line segments) represent valid linguistic substitutions, and the nodes (circles) represent states of progress in the production of a text string. So, for instance, if a given node of the graph represents the text string “I want,” it might have two edges, one representing the “need”/“want” substitution and the other representing the “would like”/“want” substitution. Traversing one of those edges leads to a different state — either “I need” or “I would like”.</p> <p>When the FST generates sample sentences, it works its way through the graph, building up strings of text one word or phrase at a time. Associated with each edge is (in addition to a pair of substitutable words) a probability, which indicates how likely the FST is to choose one branch or another when constructing a sample sentence.</p> <p>We use these probabilities to construct a language model. Our algorithm first identifies every string of text encoded by the FST, and then, for each of them, it identifies every path through the graph that could lead to it. 
Using the probabilities associated with the edges along all of those paths, it then computes the frequency with which the FST will produce that particular string.</p> <p>To integrate our new language model with the existing one, we use a machine learning system to infer the optimal balance of the probabilities encoded in both. In the paper, we present three different ways of doing this (three different loss functions), depending on the type of training data available for the new NLU capability: no data, synthetic (grammar-generated) data, or real audio transcriptions.</p> <p>We evaluated our system on three different NLU capabilities: one that looks up stock prices, one that finds and dictates recipes, and one that books airline tickets. We found that the improvement offered by our method reflected the complexity of the associated grammar, peaking at 15% for the flight-booking capability.&nbsp;</p> <p>It makes intuitive sense that, the more complex the grammar, the more training data would be required to produce an accurate language model, and the less reliable the data-sampling approach would be. 
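The path-summing idea described above can be sketched on a toy weighted graph. This is a minimal illustration of summing edge probabilities over paths, not the paper's algorithm, which operates directly on the FST representation of the grammar:

```python
from collections import defaultdict

# Toy weighted graph for the grammar "I (want|need|would like) pizza".
# Each edge: (emitted word(s), next state, probability of taking the edge).
EDGES = {
    0: [("i", 1, 1.0)],
    1: [("want", 2, 0.5), ("need", 2, 0.3), ("would like", 2, 0.2)],
    2: [("pizza", 3, 1.0)],
}
FINAL_STATE = 3

def string_probabilities(state=0, prob=1.0, words=()):
    """Enumerate every path to the final state, multiplying edge
    probabilities along the way, and sum the mass per emitted string."""
    if state == FINAL_STATE:
        return {" ".join(words): prob}
    totals = defaultdict(float)
    for word, nxt, p in EDGES[state]:
        for s, sp in string_probabilities(nxt, prob * p, words + (word,)).items():
            totals[s] += sp
    return dict(totals)

probs = string_probabilities()
```

Because every string's probability is computed exactly from the edge weights, the resulting distribution sums to one; in this toy grammar, "i want pizza" receives probability 0.5.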
Consequently, we suspect that our method will provide greater gains on more challenging tasks.</p> <p><em>Ankur Gandhe is a speech scientist in the Alexa Speech group.&nbsp;</em></p> <p><a href="" target="_blank"><strong>Paper</strong></a>: “Scalable Language Model Adaptation for Spoken Dialogue Systems”</p> <p><a href="" target="_blank">Alexa science</a></p> <p><strong>Acknowledgments</strong>: Ariya Rastrow, Bj&ouml;rn Hoffmeister</p> <p><strong>Related</strong>:</p> <ul> <li><a href="" target="_blank">How Voice and Graphics Working Together Enhance the Alexa Experience</a></li> <li><a href="" target="_blank">How Alexa Is Learning to Ignore TV, Radio, and Other Media Players</a></li> <li><a href="" target="_blank">3 Questions about Interspeech 2018 with Bj&ouml;rn Hoffmeister</a></li> <li><a href="" target="_blank">Contextual Clues Can Help Improve Alexa’s Speech Recognizers</a></li> <li><a href="" target="_blank">Amazon Alexa at SLT</a></li> </ul> /blogs/alexa/post/628dfc18-c4e2-4a2f-986c-bf1a62ff9ae2/now-available-enable-reminders-for-your-skills-with-alexa-reminders-api1 Use the Reminders API to Notify Users of Important Tasks and Events Motoko Onitsuka 2018-12-13T01:19:16+00:00 2018-12-13T01:29:49+00:00 <p><img alt="" src="" /></p> <p>The Reminders API, which lets Alexa skills create and manage reminders for users, is now available. The API can be used in all countries and regions where Alexa is supported.</p> <p><img alt="" src="" /></p> <p>The Reminders API, which lets Alexa skills create and manage reminders for users, is now available. The API can be used in all countries and regions where Alexa is supported.</p> <h2><strong>Build Skills That Are More Engaging and Keep Users Coming Back</strong></h2> <p>Reminders have become essential for keeping track of the many events and tasks in our daily lives. Now that Alexa skills can manage reminders, you can deliver more engaging skills to your customers. One example is the <a href="">KAYAK skill</a>, a travel skill in the US that actively supports its users with the Reminders API. “With the Reminders API, customers can be notified as soon as a flight they are tracking arrives. Arrival times can change, but we can adapt to those changes and deliver the reminder at the corrected arrival time, which has greatly improved the user experience,” says Matthias Keller, Chief Scientist at KAYAK.</p> <h2><strong>Create New Skills Dedicated to Reminders</strong></h2> 
<p>Without the Reminders API, users had to remember to invoke the right skill at the right time to get the information they needed. With a skill that uses the Reminders API, once a user enables the skill and grants it permission to access reminders, the skill can manage reminders on the user’s behalf. Now that the API is available, skill developers can offer entirely new kinds of user experiences. For example, a skill could send reminders for an important client meeting, a training class, a restaurant reservation, or any event that would incur a cancellation fee if the user forgot it and failed to show up.</p> <p>The Japanese company MTI used the Reminders API to create <a href="">CARADA 声でおくすり記録</a> (CARADA voice medication log), a new skill dedicated to helping users remember to take their medication at set times. Once a reminder is set, Alexa speaks at the specified time every day, for example, “Good morning. Have you taken your morning medicine?”, so users don’t miss a dose. Even away from home, users can receive the notification on their phone if notifications are turned on in the Alexa app. And if a user goes three days in a row without logging their medication, Alexa delivers a message expressing concern for their well-being. “In Japan, the rate at which patients with chronic conditions stop taking their medication is a recognized problem. For our service, the real challenge was not so much getting users to log their medication as getting them to keep taking it. That required a way for Alexa to proactively reach out to users. The Reminders API met that need and let us offer a feature that prevents missed doses,” says 葛馬凌, an engineer in MTI’s systems architecture department.</p> <p>&nbsp;</p> <h2><strong>How the Reminders API Works</strong></h2> <p>To set reminders from a skill, you must obtain the user’s permission at two points so that users never receive unwanted reminders. The first is general read/write access to reminders, which the user is asked to grant or deny when enabling the skill. The second is a confirmation for each individual reminder: every time the skill creates a new reminder, it must explicitly ask the user for permission.</p> <p>The official NHL (National Hockey League) skill in the US is a good example of a reminder implementation. The skill clearly explains the details of each reminder and asks for the user’s permission. “The official NHL skill uses the new Reminders API to let fans set a reminder 30 minutes before their favorite team’s game starts. With this addition, we can deliver personalized information to hockey fans every night of the season,” says Chris Foster, Director of Digital at the NHL.</p> <h2><strong>Try the Reminders API Today</strong></h2> <p>To add reminders, integrate the Reminders API into your skill from the Alexa developer portal. Chetan Damani, CEO, and Vytas Kancleris, CTO, of a UK company say it was easy to add the Reminders API to <a href="">their skill</a>: “If there’s a show you want to watch, you just set a reminder to be notified a few minutes before it starts. Our development team said the implementation was very straightforward. Once it’s live, we expect usage to grow significantly and the user experience to improve.”</p> <p>To learn more about using the Reminders API, see the <a href="">technical documentation</a>.</p> /blogs/alexa/post/7822f3ee-1735-4eaa-9aa6-5b8e39953c07/proactiveeventsapi-launch-announcement Send Timely Information to Your Alexa Skill Customers with the ProactiveEvents API June Lee 2018-12-12T22:31:09+00:00 
2018-12-13T01:29:31+00:00 <p><img alt="" src="" /></p> <p>We are excited to announce the availability of the ProactiveEvents API. With the API, you can enable your Alexa skills to send notifications to customers who have granted permissions.</p> <p style="text-align:justify"><img alt="" src="" /></p> <p style="text-align:justify">We are excited to announce the availability of the ProactiveEvents API. With the API, you can enable your Alexa skills to send notifications to customers who have granted permissions. By providing timely, relevant information, you can keep your customers engaged and retain them effectively. The <a href=";ie=UTF8&amp;qid=1544651620&amp;sr=1-16&amp;refinements=p_n_date:4843906051?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=proactiveevents&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_proactiveevents_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">Gomi Maru skill</a>, which helps with recycling and garbage collection in Japan, is a great example. “The garbage collection schedules in Japan are different every week, and that makes it difficult for customers to remember when it’s garbage day. Thanks to the ProactiveEvents API, we built a skill that proactively sends notifications to customers on the garbage collection days. It helps customers avoid having to keep garbage in their houses for a long time,” says Shinya Terasaki, Principal Engineer at Shaxware Inc.</p> <h2 style="text-align:justify">How It Works</h2> <p style="text-align:justify">You simply choose the pre-defined schema that best describes your events, declare it as part of your skill manifest via the Skill Management API (SMAPI), and then send event information that conforms to that schema. Each schema has a predefined template that represents the text read back to the end customers by Alexa. 
For example, the <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=proactiveevents&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_proactiveevents_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">TorAlarm Skill</a> in Germany uses the sports event schema to send notifications to customers who are soccer fans whenever their favorite team scores a goal. If you are in retail, you may want to use the order status schema and send notifications to customers about their order status. A notification using the order status schema looks like the following: “Your order from &lt;company&gt; has been shipped and will arrive &lt;date&gt;”. In this example, you simply need to supply the &lt;company&gt; and &lt;date&gt; information. Once the proactive events are created by calling the ProactiveEvents API, Alexa does the heavy lifting of checking which customers are subscribed to receive events, creating notifications, and delivering them to customers’ Alexa-enabled devices. To ensure a great customer experience, make sure your notifications are highly relevant and timely. Our goal is to delight customers with every notification.</p> <p style="text-align:justify">To make sure Alexa notifications provide relevant updates, customers have the ability to enable notifications for each skill, and they can opt out at any time using the Alexa app. When an <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=proactiveevents&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_proactiveevents_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">Alexa notification</a> is sent, customers see a yellow light on devices without screens and an on-screen banner on devices with screens, indicating that they have new notifications. 
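The order status example above would be sent to the API as an event body built roughly like the following. This is a hedged sketch: the field names and the `AMAZON.OrderStatus.Updated` schema shape reflect our reading of the documentation, so verify against the current technical docs before sending anything.

```python
import json
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical order-status event body for the ProactiveEvents API.
event = {
    "timestamp": now.isoformat(),
    "referenceId": "order-12345",  # your key for deduplicating events
    "expiryTime": (now + timedelta(hours=24)).isoformat(),
    "event": {
        "name": "AMAZON.OrderStatus.Updated",
        "payload": {"state": {"status": "ORDER_SHIPPED"}},
    },
    "relevantAudience": {"type": "Multicast", "payload": {}},
}
request_body = json.dumps(event, indent=2)
```

The `referenceId` and `expiryTime` let Alexa drop duplicate or stale events; the `relevantAudience` block controls whether the event goes to all subscribed customers or a specific user.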
Customers can ask Alexa to read their notifications when they want to hear them.</p> <h2 style="text-align:justify">Try the ProactiveEvents API Today</h2> <p style="text-align:justify">Daniel Mittendorf, a skill developer in Germany who integrated the ProactiveEvents API for his <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=proactiveevents&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_proactiveevents_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">Watch TV Stream Player skill</a>, mentioned that it was very easy to implement. &quot;With the ProactiveEvents API, my skill enables customers to get notified about the upcoming shows or movies that are playing on their favorite German TV channels. It was very easy to use the ProactiveEvents API to create notifications,&quot; says Mittendorf.</p> <p style="text-align:justify">The ProactiveEvents API is available in all locales supported by Alexa. Review our <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=proactiveevents&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_proactiveevents_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">technical documentation</a> to learn more. If you have integrated the Notifications API previously offered in extended preview, you can follow the instructions in our <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=proactiveevents&amp;sc_publisher=launch&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_proactiveevents_launch_Content_Discover_WW_unknown&amp;sc_segment=unknown">technical documentation</a> to migrate over to the ProactiveEvents API. 
As we offer more ways for your skills to send notifications in the future, you’ll be able to take advantage of them via ProactiveEvents API without integrating a new API.</p> /blogs/alexa/post/5959e319-1656-40cb-b689-b35c988d6b91/how-to-design-visual-components-for-voice-first-alexa-skills How to Design Visual Components for Voice-First Alexa Skills Jennifer King 2018-12-12T15:00:00+00:00 2018-12-12T15:54:28+00:00 <p><img alt="" src="" style="height:240px; width:954px" /></p> <p>Here are a few things to keep in mind as you start designing visual components to complement your voice-first experience.</p> <p><img alt="" src="" style="height:240px; width:954px" /></p> <p>When we communicate with others, we use a variety of visual cues with our body language to give subtle emphasis to what we're saying. Just like we use body language to convey expression, Alexa can provide rich and engaging experiences to customers by adding visual and touch interactions to responses, in addition to the voice experience.</p> <p>Imagine I was talking to you and your eyes were closed. You would still understand the message I am conveying and would be able to follow along and interact with me without any problems. But now imagine your eyes are open. You can see me as I talk to you, my facial expressions, and my hand movements. These visuals create a richer conversational experience and help you become more engaged in what I'm communicating. That's how multimodal skills can enhance the voice experience.</p> <p>With the recent release of the all-new Echo Show and the introduction of <a href="">Alexa on Fire TV Cube</a>, there are now tens of millions of Alexa-enabled devices with screens available to customers. 
Using the <a href="">Alexa Presentation Language (APL)</a>, developers can easily add visuals to skills and <a href="">build engaging multimodal voice experiences</a> in a responsive way, while tailoring the skill to each device to enhance the customer experience.</p> <p>Here are a few things to keep in mind as you start designing visual components to complement your voice-first Alexa skill.</p> <h2>Start with a Storyboard and Be Mindful with Your Visuals</h2> <p>If you've already designed and built your Alexa skill, you already have a script and voice flow. Storyboarding is a great way to quickly sketch out how to pair visuals to your text-to-speech output to enhance your experience. You can sketch, draw it on a whiteboard, or even use a graphics program. Use anything that allows you to quickly put your thoughts down on paper and visualize how what you show will pair with what is spoken.</p> <p>Think about your visuals carefully. Remember the visuals, or the graphical user interface, you design for your skill should augment your voice experience, adding relevant content and context for the customer. What you display on the screen should always be in harmony with what Alexa is saying and shouldn't distract from the overall voice experience.</p> <p>Remember, the visual display on Alexa-enabled devices should be used to enhance the experience, but it shouldn't be required for the customer to proceed through your skill. Customers are likely to multitask and will often alternate between looking at their device and just listening. Therefore, it is important to make sure your visual designs supplement the overall experience, rather than replace the voice interactions.</p> <h2>Consider Where and How a Customer Might Be Using Your Skill</h2> <p>Customers may engage with their device at different distances, such as casually glancing at the device from across a room (5-7 ft distance) or sitting next to the device (1-3 ft distance) to be able to interact with touch features. 
When designing the visual responses in your skill, determine the level of interaction required from the customer, such as touch or interactive elements. Always keep in mind that Alexa is a voice-first experience with complementary visuals that are delightful for the customer to use.</p> <p>Also, due to the communal nature of Alexa devices, too much interaction could seem demanding to the customer and they may stop using the skill if they need to stay near a device all the time. On the other hand, too little interaction will not keep a customer engaged with the skill for long. It's all about finding that perfect balance for your skill's content and your customer.</p> <h2>Design from the Smallest Form Factor and Work Your Way Up</h2> <p>Devices come in all shapes and sizes, ranging from the small round Echo Spot, up to a 50-inch television with Fire TV, and everything in between. Starting with the smallest device allows you to perfect that core experience, giving the minimum visual information a customer needs to continue to move through a voice interaction. As screen size increases, you can add more contextual information (like text and additional images), but be careful to find a balance between what's needed at that moment and what is extraneous information. In other words, don't add content just to add content or fill up the screen.</p> <h2>Use Text and Images in Meaningful Ways</h2> <p>By pairing text and image components together in APL, you can create visual layouts that add context or content that words alone could not express. Change the visual response with each interaction the customer has with your skill to acknowledge that the customer made a choice and Alexa responded. But be sure to use visual responses consistently for similar functionality or results to add predictability and reduce the learning curve for your customers. 
For example, if you have a recipe skill, the general layout for all recipes should look the same each time a customer requests one, changing only the data that is being displayed. Once you have your basic layouts and visual flows designed, experiment with font sizes and weights to add visual hierarchy to your content. Or include TouchWrappers as another way for customers to interact with richer responses.</p> <h2>Tailor Your Visual Output for Different Device Form Factors</h2> <p>With APL and viewport characteristics, you can adapt your visual experience to fit each device, delivering a skill that feels tailored to that device for the customer. For example, a horizontal list of multiple items may be appropriate for an Echo Show or Fire TV. But given the small screen size of an Echo Spot, that same list may need to be reformatted to show only one item at a time. In addition to different layouts for each device, you can also use the viewport characteristics to send different-resolution images to different-sized devices. This helps cut down on latency for the customer, and gives developers the ability to send higher-quality images to larger devices, like a television screen.</p> <h2>Have Fun</h2> <p>The opportunity to pair visuals with the voice responses in your skill gives you new ways to surprise and delight your customers. Have fun with this; your customers will appreciate it. And remember, while visuals can enhance your voice experience on Alexa-enabled devices with screens, customers will still be able to enjoy your voice-only skill on their screen-based devices.</p> <h2>Enter the Alexa Skills Challenge: Multimodal</h2> <p>In addition to building a visually rich Alexa skill with APL, you can <a href="">enter the Alexa Skills Challenge: Multimodal with Devpost</a> and compete for cash prizes and Amazon devices. 
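The device-tailoring approach described earlier can be sketched as an APL document, built here as a Python dictionary for readability. In APL, the first `mainTemplate` item whose `when` expression evaluates true is rendered, so a round Echo Spot can get a single item while larger screens get a horizontal list. Treat the component usage as illustrative and check the APL reference before shipping:

```python
import json

# Sketch of an APL document with a per-viewport layout choice.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                # Round screen (Echo Spot): show just the first item.
                "when": "${viewport.shape == 'round'}",
                "type": "Text",
                "text": "${payload.items[0]}",
            },
            {
                # Larger screens: horizontally scrolling list of items.
                "type": "Sequence",
                "scrollDirection": "horizontal",
                "data": "${payload.items}",
                "items": [{"type": "Text", "text": "${data}"}],
            },
        ],
    },
}
document_json = json.dumps(apl_document)
```

The same idea extends to images: branch on viewport width or pixel density to supply an appropriately sized image source for each device class.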
We invite you to participate and build voice-first multimodal experiences that customers can enjoy across tens of millions of Alexa-enabled devices with screens. <a href="">Learn more</a>, start building APL skills, and enter the challenge by January 22.</p> <h2>More Resources to Get Started with APL</h2> <ul> <li><a href="">Alexa Design Guide</a></li> <li><a href="">Steven Arkonovich Enhances Voice-First Alexa Skills with Visuals and Touch Using the Alexa Presentation Language</a></li> <li><a href="">Blog: How to Get Started with the Alexa Presentation Language to Build Multimodal Alexa Skills</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=certification&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_certification_Content_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">Webinar: Get Started with the Alexa Presentation Language</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">Webinar: Advanced Template Building with the Alexa Presentation Language</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">APL Sample Skill: Space Explorer</a></li> </ul> /blogs/alexa/post/bba2cad0-f664-406c-b8cc-4007abea9565/2-new-ways-to-enhance-alexa-skill-accuracy-and-deliver-more-natural-customer-engagements 2 New Ways to Enhance Alexa Skill Accuracy and Deliver More Natural Customer Engagements Drew Meyer 2018-12-11T17:36:00+00:00 2018-12-11T18:18:49+00:00 <p><img alt="" src="" style="height:480px; width:1908px" 
/></p> <p>We're excited to announce two new capabilities for Alexa Skill developers that help you quickly improve the performance of your interaction model and build more accurate and engaging skills.</p> <p><img alt="" src="" /></p> <p>We're excited to announce two new capabilities for Alexa Skill developers available in all locales. Both help you quickly improve the performance of your interaction model and build more accurate and engaging skills from the Alexa developer console.</p> <h2>Map Requests to Slots and Intents</h2> <p>The <u><a href="" target="_blank">Intent History</a></u> tab in the developer console provides insight into how customers are engaging with your skill so you can analyze the data, update sample utterances and slots, and deliver more natural experiences. Intent History shows frequent intent requests from anonymous users in the last 30 days. Now the Intent History developer console tab and the Skill Management API (SMAPI) include a new tool that helps you map these utterances to the intents or slots in your interaction model, which makes it easier to adjust the model and correctly resolve more spoken requests. You can use this tool with your skills to deliver more natural dialogs starting today.</p> <h2>See More Information in the Debugging Tools</h2> <p>You can use the developer console Testing tab or the Skill Simulation API to test your skill and see the intent a simulated device returns from your interaction model. Now the device logs also show the intents considered and discarded. This new information in the debugging tools can show you where additional samples might help train your model to resolve utterances to their intended intents and slots, so you can further improve accuracy.</p> <h2>Learn More and Get Started Today</h2> <p>Start using these new capabilities to increase skill accuracy and create more conversational experiences for customers. 
To learn more, read the Alexa Skills Kit (ASK) documentation on <a href="" target="_blank"><u>intent history</u></a> and on the <u><a href="" target="_blank">Skill Simulation API</a></u>.</p>
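The Skill Simulation API mentioned above can also be driven programmatically, which is how automated production tests against live skills are set up. The sketch below only constructs the request: the endpoint path and body shape reflect our reading of the SMAPI v2 documentation (verify before use), the skill ID is a placeholder, and you would still need a Login with Amazon access token to actually POST it.

```python
import json

SMAPI_BASE = "https://api.amazonalexa.com"  # assumed SMAPI host

def build_simulation_request(skill_id, utterance, locale="en-US", stage="live"):
    """Build the URL and JSON body for a skill-simulation call.
    Path and body shape are assumptions based on the SMAPI v2 docs."""
    url = f"{SMAPI_BASE}/v2/skills/{skill_id}/stages/{stage}/simulations"
    body = {
        "input": {"content": utterance},
        "device": {"locale": locale},
    }
    return url, json.dumps(body)

url, body = build_simulation_request("amzn1.ask.skill.example", "open my skill")
```

Sending the request returns a simulation ID that you poll for the skill's response, which a regression suite can then compare against the expected output to catch changes in skill behavior over time.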