Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2018-09-26T10:38:14+00:00 Apache Roller /blogs/alexa/post/aab44ff4-c19a-4862-8014-35c0e8846743/introducing-consumables-a-new-way-to-make-money-with-your-alexa-skill Now You Can Sell Consumables to Enrich Your Voice Experience and Make Money with Alexa Skills Metty Fisseha 2018-09-25T14:33:46+00:00 2018-09-26T03:01:36+00:00 <p>We are excited to announce the release of consumables. Consumables are in-skill products that customers can purchase, use, and then purchase again.</p> <p>Starting today, developers can sell consumables in their Alexa skills in the US to further enrich the skill experience with <a href="">in-skill purchasing</a>. A consumable is an in-skill product that customers can purchase, use, and then purchase again. In addition to one-time purchases and subscriptions, consumables give developers more ways to deliver premium experiences to customers. With consumables, you can sell products that are relevant in the moment to customers as they experience your skill.</p> <p>For example, consider a trivia skill that offers customers a pack of hints. If a customer gets stuck on a question, rather than get the question wrong, they can purchase and use hints to keep the round going. When they run out of hints, they have the option to purchase again. Consumables are a great way to keep your customers engaged during pivotal moments of your skill and drive revenue for your voice business.</p> <h2>How Developers Are Building Skills with Consumables</h2> <p>As part of a private beta, developers used consumables to elevate their voice-first experiences. While consumables are most common in the game and trivia space, Alexa developers are using this feature creatively across various categories to drive engagement with their skills. 
See below for some examples:</p> <ul> <li><a href="" target="_blank">Would You Rather for Family</a> (Voice Games) – In this family-friendly game, you have to make a choice between two lighthearted and silly situations, such as “Would you rather have bad breath or smelly feet?” or “Would you rather forget to pack your neck pillow or your headphones?” The free version of the skill contains general questions and the premium version offers themed questions, such as Superheroes, Travel, and Halloween. You can purchase a 7-day pass to access all of the available themed premium packs. After 7 days, your access returns to the free version.</li> <li><a href="" target="_blank">Yes Sire</a> (Volley) – In this medieval role-play game, you, the Sire, are presented with a series of difficult decisions with consequences that impact your wealth and influence, which are measured on a scale from 0 to 100. If either score reaches zero, the game ends, but you can purchase 50 extra points to keep the game going.</li> <li><a href="" target="_blank">Hypno Therapist</a> (Innomore LLC) – In this relaxing skill, you can access therapy sessions administered by clinical hypnotist Barry Thain. Each therapy session is intended to help you with a specific goal, such as appreciating yourself and feeling fabulous. You can purchase a bundle of 10 hypnotherapy sessions from a catalog of 70+ therapies. Once you use up all 10, you have the option to purchase a new bundle.</li> </ul> <p>Coming Soon: In addition to the above skills that are already live, we are excited to announce that Sony Pictures Television will soon launch a skill for their hit trivia game, Who Wants to Be a Millionaire. In this skill, if you get stuck on a question, you can purchase one-time-use lifelines.</p> <h2>Getting Started with Consumables</h2> <p>Similar to one-time purchases and subscriptions, you can use the <a href="">Alexa Skills Kit (ASK) Command-Line Interface (CLI)</a> to build and manage your consumables. 
An important difference when building consumables versus one-time purchases or subscriptions is the inventory management required to monitor consumable usage. For example, in a game skill where a customer purchases a pack of three additional lives, you will need to know how many of those lives they have left in order to present them with relevant in-skill content. And you will need to know when they’ve run out of lives in order to offer them the product again. To do this, you will use a database and offer an inventory intent. Learn more about inventory management for consumables <a href="">here</a>. You can also check out technical documentation <a href="">here</a>.</p> <p>To reference an example as you get started, check out <a href="" target="_blank">Name The Show</a>, a skill built by Alexa evangelist <a href="" target="_blank">Jeff Blankenburg</a>, and see how he incorporates consumables into the skill experience. This trivia skill tells you the name of an actor and you have to guess the show that Alexa is thinking of. If you’re stuck, you can purchase five hints to keep the round going. Each hint gives you the name of an additional actor from the show. When you run out of hints, you are presented with the option to buy more. Tip: you can access the code for this skill on GitHub, <a href="">here</a>.</p> <h2>Other Ways to Build Skills with Consumables</h2> <p>In addition to using the ASK CLI, you can use tools provided by <a href="" target="_blank">Storyline</a> and <a href="" target="_blank">Voice Apps</a> to build skills with consumables as well as one-time purchases and subscriptions. Storyline and Voice Apps offer a visual design approach to skill building that makes it easy for everyone to build skills, from people with zero coding experience to advanced developers.</p> <h2>Register for Our Webinar to Learn More</h2> <p>Check out technical documentation <a href="">here</a>. 
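<p>The inventory bookkeeping described above can be sketched in a few lines. The class below is a minimal illustration, not the ASK API: the name <code>HintInventory</code>, the pack size of three, and the in-memory dictionary standing in for a database are all assumptions. A production skill would persist this state in a database (for example, DynamoDB) keyed by the Alexa user ID, and credit the pack only after Alexa confirms a completed purchase.</p>

```python
# Minimal sketch of consumable inventory management for a hint/life pack.
# All names here are illustrative assumptions; a real skill would persist
# this state in a database keyed by the Alexa user ID.

class HintInventory:
    PACK_SIZE = 3  # units granted per purchased pack (assumed)

    def __init__(self, store=None):
        # `store` stands in for a persistent database table.
        self.store = store if store is not None else {}

    def credit_purchase(self, user_id):
        """Called after Alexa reports a completed consumable purchase."""
        self.store[user_id] = self.remaining(user_id) + self.PACK_SIZE

    def remaining(self, user_id):
        """Backs an 'inventory intent' such as 'how many hints do I have left?'"""
        return self.store.get(user_id, 0)

    def use_hint(self, user_id):
        """Consume one unit; returns False when the customer has run out,
        which is the moment to offer the product for purchase again."""
        left = self.remaining(user_id)
        if left == 0:
            return False
        self.store[user_id] = left - 1
        return True
```

<p>When <code>use_hint</code> returns <code>False</code>, the skill would respond with an upsell prompt rather than failing silently.</p>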
If you have questions, check out our <a href="">FAQs</a>, attend <a href="">Office Hours</a> (no sign up required) or chime in on our <a href="">developer forums</a>. You can also <a href="">register for our webinar</a>, “Building Alexa Skills with Consumables,” on Tuesday, October 16, from 10-11am PST, to learn more.</p> <p>We can’t wait to see the premium experiences you build with consumables.</p> /blogs/alexa/post/0fecdb38-97c9-48ac-953b-23814a469cfc/skill-discovery Making Alexa Skills More Discoverable: Alexa Technologies That Bring Skills and Customers Together Jennifer King 2018-09-24T16:32:35+00:00 2018-09-25T15:21:33+00:00 <p>There are many ways Alexa technologies help people find engaging skills. Check out this post for a deeper look at those technologies and how you can leverage them to make your skills more discoverable.</p> <p>One of the promises of a voice-first user experience is that it’s natural. People can say what they think and the user interface can understand them, parsing their meaning or intent, rather than simply understanding their words. In this case, technology bends to our way of thinking and acting, rather than making us learn how to use it. Alexa handles <a href="" target="_blank">automatic speech recognition (ASR)</a> and <a href="" target="_blank">natural language understanding (NLU)</a>. (You can read about new research and advancements presented by the Alexa Science team at Interspeech 2018 <a href="" target="_blank">here</a>.)</p> <p>But the challenge of creating <a href="" target="_blank">conversational experiences</a> won’t be solved by Amazon alone. It’s a tough and important problem, and it’ll take the developer community to solve it. Our vision is for Alexa to be able to help you with whatever you need. To make that a reality, we need help from third-party developers who are coming up with engaging voice experiences every day. 
Alexa skills and the developers that build them are incredibly important to that vision.</p> <p>A recent discussion with industry vets and voice-first enthusiasts inspired me to write this post on the Alexa technologies that help people find engaging skills. This includes the technologies that support finding, launching, and re-engaging with skills in natural ways and how you can use them to make your skill more discoverable.</p> <h2>Making Alexa More Friction-Free and Skills More Discoverable</h2> <p>You can use any Alexa skill by saying the skill’s invocation name, and then following up with additional utterances. For example, you might say, “Alexa, open Tide Stain Remover” and then ask for advice on an ink stain. Of course, that’s not how people actually talk. We don’t use the same exact phrases day in and day out; we like variety, we forget the words, we explore new things, and we want to skip steps.</p> <p>You shouldn’t have to remember the name of a skill or think of the proper phrasing to use it. Instead you might say, “Alexa, remove an ink stain.” You are trying to accomplish two things: first, launch the Tide Stain Remover skill; second, go straight to the ink stain features of the skill. Really, what you want is for Alexa to immediately walk you through stain mitigation steps (probably urgently; does the 10-second rule stand for stains?). So, for a customer in that moment, saying “Alexa, remove an ink stain” provides a voice shortcut, linking deep into the best skill for the task. It lets you use the skill even if you forgot the name, don’t know that skill exists, or just want to say fewer words.</p> <h2>Finding the Right Skill Easily with Name-Free Interaction</h2> <p>Millions of customers use skills every month without using the invocation name. If a customer requests something broad like, “Alexa, let’s play a game,” Alexa will pick a skill or suggest a few that are well-suited for the request. 
We test and rotate the suggestions to see which skills customers are responding to and give more developers visibility. When customers use your skill (either through these broad utterances or any other invocation channel), the back end powering this system gathers metrics on the aggregate usage of skills and on the preferences of each individual customer. As customers re-engage with your skill, this will improve the likelihood of it being discovered by others. For a select (and growing) set of utterances, we also provide a rich voice-first conversational discovery experience, which will allow customers to discover your skill based on its capabilities.</p> <h2>Surfacing the Most Relevant Skills via Shortlister and HypRank</h2> <p>It’s important that customers are able to easily reengage with their skills – even if they describe what they want instead of calling the skill by name. Let’s look again at the phrase “Alexa, remove an ink stain,” where we have an utterance, “remove a {stain type} stain,” and a slot value, stain type of “ink,” that can be mapped to a skill’s interaction model. To make this work, two techniques are employed to deliver the right experience. The first is a neural model architecture we call Shortlister, which solves a domain classification problem to find the most statistically significant (k-best) matches of candidate skills (details in a recent <a href="" target="_blank">blog</a> and <a href="" target="_blank">research paper</a>). The goal of Shortlister is to efficiently identify as many relevant skills as possible – it’s high recall. The second is a hypothesis reranking network we call HypRank. It uses contextual signals to select the most relevant skills – it’s high precision (details in a recent <a href="" target="_blank">blog</a> and <a href="" target="_blank">research paper</a>). When combined, these two models identify the skill most likely to provide the correct response to a name-free utterance. 
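<p>The two-stage pattern described above, a cheap recall-oriented shortlist pass followed by a precision-oriented rerank, can be illustrated with a toy sketch. The scoring functions below (keyword overlap, prior-usage preference) are stand-ins chosen for clarity; they are not Amazon’s Shortlister or HypRank models, which are neural networks trained on real usage signals.</p>

```python
# Toy illustration of shortlist-then-rerank. Scoring here is deliberately
# simplistic; the real systems use trained neural models and rich context.

def shortlist(utterance, skills, k=3):
    """Stage 1 (recall): score every skill cheaply via keyword overlap
    and keep the k best candidates."""
    words = set(utterance.lower().split())
    scored = [(len(words & set(s["keywords"])), s) for s in skills]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:k] if score > 0]

def rerank(candidates, context):
    """Stage 2 (precision): reorder the shortlist using a contextual
    signal -- here, whether the customer used the skill before."""
    return sorted(
        candidates,
        key=lambda s: s["name"] in context["previously_used"],
        reverse=True,
    )

skills = [
    {"name": "Stain Remover", "keywords": {"remove", "stain", "ink"}},
    {"name": "Trivia Night", "keywords": {"play", "trivia", "game"}},
    {"name": "Laundry Tips", "keywords": {"stain", "laundry"}},
]
context = {"previously_used": {"Stain Remover"}}
candidates = shortlist("remove an ink stain", skills)
best = rerank(candidates, context)[0]
```

<p>Note the division of labor: stage 1 may keep marginal candidates (here, “Laundry Tips”), and stage 2 is responsible for putting the single best match first.</p>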
Beyond helping people reengage with the skills they’ve manually or automatically enabled, Shortlister and HypRank surface specific and relevant skills to people who have never used them.</p> <h2>Encouraging Skill Use with Voice-First Recommendations</h2> <p>You may have noticed that on detail pages for products on Amazon.com, we include a widget titled “Customers who bought this item also bought” that provides recommendations for similar products. In the Alexa App and the Alexa Skills Store, we do the same for skills. Likewise, Alexa does this via voice. When customers have finished engaging with a skill, they may receive a recommendation from Alexa for other skills they may enjoy. For example, Alexa might say, &quot;Because you used &lt;skill1 name&gt;, Amazon recommends &lt;skill2 name&gt;. Would you like to try it?&quot; If they say yes, they can try it out. To optimize the experience for customers and to expose a wide range of skills, we experiment with which skills to recommend and how frequently. So far, we’re seeing that a lot of people are getting to experience new skills this way and this is improving discovery for skills overall.</p> <h2>Removing Enablement Friction</h2> <p>Enabling a skill is similar to downloading an app. Customers can enable skills in the <a href="" target="_blank">Alexa App</a>, from <a href="" target="_blank"></a>, or simply via voice by saying, “Alexa, enable [the skill].” Because skills live in the cloud, rather than being downloaded to a device, we find that customers want simple, frictionless access. We now automatically enable a list of thousands of the most popular skills – and we’re constantly adding more. Auto-enablement means that you can use a skill immediately when you say, “Alexa, open [the skill].” Some of the criteria we use to choose auto-enabled skills include customer satisfaction signals, such as number of customers, customer engagement, and ratings, as mentioned above. 
When a customer invokes one of these skills, whether through direct or name-free interaction, they can immediately engage with the skill. For other skills, including those that have a <a href="" target="_blank">mature rating</a> or extra steps like <a href="" target="_blank">signing in through account linking</a> or <a href="" target="_blank">parental permission</a>, customers will be directed to use <a href="" target="_blank"></a> or the Alexa App to choose and configure skills to enable before launching.</p> <h2>Optimizing Your Skill for Better Discovery</h2> <p>You might wonder how you can optimize your skill experience for name-free interaction. The machine-learning model takes a lot of factors into account, including the quality and usage of your skill. So start by making sure you deliver a lot of value to customers. Some of the ways that we identify high-quality skills are the number of customers, customer engagement, and ratings and reviews. Beyond ensuring you have an engaging skill, there are things you can do to give the system explicit signals, like adding accurate descriptions and keywords in the skill metadata, selecting the correct category, and implementing CanFulfillIntentRequest.</p> <h2>Implement CanFulfillIntentRequest (Beta)</h2> <p>Alexa is getting smarter every day and the Alexa skills you create play directly into that, providing a broad range of information, experiences, and games that can meet customer requests. We aim to seamlessly route customers to the best response, which is often a skill. Using the <a href="" target="_blank">CanFulfillIntentRequest beta</a>, your skill reports at runtime whether it can fulfill a given customer request, responding with a few explicit signals: whether it can fulfill the intent, whether it can understand the slot(s), and whether it can fulfill the slot(s). 
Alexa combines this information with a machine-learning model to choose the right skill to use when a customer makes a request without an invocation name. As a result, customers find the right skill faster, using the search terms they say most naturally.</p> <p>Boost your skill’s signal to our various systems by implementing the CanFulfillIntentRequest interface in your skill. To enable this, open your skill in the developer console. Go to the <strong>Build &gt; Custom &gt; Interfaces</strong> page, and enable the CanFulfillIntentRequest interface.</p> <p>Then in your code, implement your response. See the <a href="" target="_blank">quick-start guide</a> and <a href="" target="_blank">docs</a> to get started. To enable HypRank to choose the best response for the customer, it’s important that your CanFulfillIntentRequest responses are accurate. A skill’s response should be closely aligned with what the skill can really accomplish. Appropriately saying “Maybe” or “No” is a stronger signal for HypRank than saying “Yes” and failing to deliver. A stronger signal increases the likelihood that your skill is matched to fulfill a customer’s request. To this end, the key concept that you’ll need to think through is how to respond at the intent level and for each of the slots.</p> <h2>Add Accurate Skill Keywords and Categories</h2> <p>When customers make a request, the keywords and categories that you’ve included with your skill submission help us surface appropriate skills. For example, a customer could ask for specific kinds of games: &quot;Alexa, do you have any party games? Kids games? Sports games? {keyword} games?&quot; To select from thousands of games available today, Alexa makes use of the metadata for relevant skills in the games category with that keyword. We use customer engagement to prioritize the skills for these experiences. 
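<p>To make the intent-level and slot-level answers concrete, here is a sketch that builds a response body in the documented CanFulfillIntentRequest shape (a three-valued <code>canFulfill</code> of YES, NO, or MAYBE, plus per-slot <code>canUnderstand</code> and <code>canFulfill</code>). The stain-removal intent, the <code>stainType</code> slot, and the known-stains list are assumptions invented for this example.</p>

```python
# Build a CanFulfillIntentRequest response body. The YES/NO/MAYBE answers
# and the response structure follow the documented interface; the
# RemoveStainIntent and stainType names are assumptions for illustration.

KNOWN_STAINS = {"ink", "wine", "grass"}

def can_fulfill_response(intent_name, slots):
    if intent_name != "RemoveStainIntent":
        # This skill cannot handle the intent at all.
        return _respond("NO", {})

    slot_answers = {}
    all_fulfillable = True
    for name, value in slots.items():
        understood = name == "stainType"
        fulfillable = understood and value in KNOWN_STAINS
        all_fulfillable = all_fulfillable and fulfillable
        slot_answers[name] = {
            "canUnderstand": "YES" if understood else "NO",
            "canFulfill": "YES" if fulfillable else "NO",
        }
    # Answer MAYBE honestly when a slot value is outside what the skill
    # can really accomplish -- a stronger signal than over-claiming YES.
    return _respond("YES" if all_fulfillable else "MAYBE", slot_answers)

def _respond(can_fulfill, slot_answers):
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": can_fulfill,
                "slots": slot_answers,
            }
        },
    }
```

<p>For example, a request for an “ink” stain yields YES, while an unknown stain type yields MAYBE, matching the guidance that honest MAYBE/NO answers beat a YES the skill cannot deliver on.</p>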
We measure how customers are using skills and, depending on the customer’s request, provide skill lists ranked by popularity, recency, and other measures.</p> <p><img alt="" src="" style="display:block; height:203px; margin-left:auto; margin-right:auto; width:600px" /></p> <p style="text-align:center"><em>Assign keywords and the proper category to your skill so that Alexa can offer your skill for broad or topical utterances.</em></p> <p>As customers discover and use your skills through keyword browse experiences, Alexa tracks acceptance/rejection. We use this alongside other data to optimize skill discovery experiences. Ensure that your keywords and categories are chosen carefully. This is critical to connecting customers with the right skill for their request.</p> <h2>Improve Skill Descriptions and Other Metadata</h2> <p>In addition to keywords and categories, we scan description fields to surface the skills that customers are asking for. Make sure that your description (and particularly your detailed description) includes the various things your skill can do. For the one-sentence description, imagine the text being read aloud. It’s a best practice to read it out loud to a friend and make sure it sounds right. We use these descriptions in graphical UIs and when Alexa offers skill suggestions and customers want to learn more. A strong one-sentence description will improve the conversion rate for customers offered your skill.</p> <p><img alt="" src="" style="display:block; height:232px; margin-left:auto; margin-right:auto; width:600px" /></p> <p style="text-align:center"><em>Use both of your description fields. They serve different purposes.</em></p> <h2>Optimize for Voice Discovery: Customer Engagement Will Always Be a Core Signal for Our Systems</h2> <p>Let’s recap. 
To enable people to use your skill in a natural, name-free way, be sure to add descriptive <a href="" target="_blank">keywords, the proper category</a>, and great descriptions and implement <a href="" target="_blank">CanFulfillIntentRequest</a>. The foundation under that, always, is to deliver a <a href="" target="_blank">high-quality</a> product that meets or exceeds your customers’ expectations. Think deeply about things like <a href="" target="_blank">voice design</a>, <a href="" target="_blank">scalability</a>, and <a href="" target="_blank">beta testing</a>.</p> <p>When your skills are exposed to customers through any of our name-free interaction systems or considered for promotion by Amazon, you want to be sure to deliver a great first impression. Over time we will continue to improve skill discovery and enhance our machine learning models to create the best customer experience.</p> <p>Reach out to me on Twitter at <a href="" target="_blank">@PaulCutsinger</a> to continue the conversation.</p> <h2>Related Content</h2> <ul> <li><a href="">Alexa at Interspeech 2018: How Interaction Histories Can Improve Speech Understanding</a></li> <li><a href="">The Scalable Neural Architecture behind Alexa’s Ability to Select Skills</a></li> <li><a href="">HypRank: How Alexa Determines What Skill Can Best Meet a Customer’s Need</a></li> <li><a href="">Implement CanFulfillIntentRequest for Name-free Interaction</a></li> <li><a href="" target="_blank">Guide: How to Shift from Screen-First to Voice-First Design</a></li> <li><a href="" target="_blank">Video: How Building for Voice Differs from Building for the Screen</a></li> <li><a href="" target="_blank">7 Tips for Building Standout Skills Your Customers Will Love</a></li> </ul> <h2>Build Skills, Earn Developer Perks</h2> <p>Bring your big idea to life with Alexa and earn perks through our <a href="">milestone-based developer promotion</a>. 
US developers, publish your first Alexa skill and earn a custom Alexa developer t-shirt. Publish a skill for Alexa-enabled devices with screens and earn an Echo Spot. Publish a skill using the Gadgets Skill API and earn a 2-pack of Echo Buttons. If you're not in the US, check out our promotions in <a href="">Canada</a>, the <a href="" target="_blank">UK</a>, <a href="" target="_blank">Germany</a>, <a href="">Japan</a>, <a href="">France</a>, <a href="">Australia</a>, and <a href="" target="_blank">India</a>. <a href="">Learn more</a> about our promotion and start building today.</p> /blogs/alexa/post/1dee3fa0-8c5f-4179-ab7a-74545ead24ce/introducing-the-alexa-presentation-language-preview Introducing the Alexa Presentation Language (Preview) BJ Haberkorn 2018-09-20T19:02:23+00:00 2018-09-20T19:24:40+00:00 <p>Today we’re excited to announce a preview of a new design language and tools that make it easy to create visually rich Alexa skills for tens of millions of Alexa devices with screens. The Alexa Presentation Language (APL) enables you to build interactive voice experiences that include graphics, images, slideshows, and video, and to customize them for different device types. Echo Show, Echo Spot, Fire TV, and select Fire Tablet devices will support skills built using APL next month, as will third-party devices built using the <a href="" target="_blank">Alexa Smart Screen and TV Device SDK</a> in the coming months. To learn more about building Alexa skills, visit: <a href="" target="_blank"></a>.</p> <h2>Alexa Presentation Language: Purpose-Built for Voice</h2> <p>Customers embrace voice because it’s simple, natural, and conversational. 
When you build a multimodal experience, you combine voice, touch, text, images, graphics, audio, and video in a single user interface. Combining voice and visual experiences can make skills even more delightful, engaging, and simple for customers. You can provide customers with complementary information that’s easily glanceable from across the room. Or you can use the screen to enrich your voice-first experience and reduce friction for customers. For example, you can offer visual cues for confirmation or show lists of items during a content search experience.</p> <p>APL is designed from the ground up for creating voice-first, visual Alexa skills that adapt to different device types. Included in the Alexa Skills Kit, APL gives you flexible tools and resources to translate voice-first experiences to the screen. With APL, you can build skills that are:</p> <ul> <li><strong>Rich: </strong>You can build interactive voice-first experiences that&nbsp;include text, graphics, slideshows, and, soon, video content. You can synchronize on-screen text and images with the associated spoken voice. You can support voice commands as well as touch and remote controls, when available, and take advantage of automatic entity resolution for voice-based selection of on-screen elements.</li> <li><strong>Flexible:</strong> You can control your user experience by defining where visual elements are placed on-screen, and match the visual expression to your brand guidelines. You can reuse designs to deliver a familiar visual experience across multiple skills, and share your designs with others.</li> <li><strong>Adaptable:</strong> You can customize experiences to reach customers anywhere, through an expanding range of Alexa devices with screens. 
The experiences developers create with APL can be tailored according to the unique characteristics of the Alexa device they are being rendered on, and can be targeted to devices with a broad range of memory and processing capabilities.</li> <li><strong>Easy: </strong>To get started quickly, you can take advantage of Amazon-supplied sample APL documents that are designed to work well across a broad range of different device types. You can use these samples as-is, modify them, or build your own from scratch. Although APL is a new language, it adheres to universally understood styling practices, and the syntax will be familiar to anyone with front-end development experience.</li> </ul> <p>“This year alone, customers have interacted with visual skills hundreds of millions of times. You told us you want more design flexibility – in both content and layout – and the ability to optimize experiences for the growing family of Alexa devices with screens,” said Nedim Fresko, Vice President, Alexa Devices and Developer Technologies. “With the Alexa Presentation Language, you can unleash your creativity and build interactive skills that adapt to the unique characteristics of Alexa Smart Screen devices. We can’t wait to see what you create.”</p> <h2>APL Components You Can Use to Build</h2> <p>When you design with APL, you create APL documents, which are JSON files sent from your skill to a device. The device evaluates the APL document, imports images and other data as needed, and renders the experience. In your APL documents, you can use:</p> <ul> <li><strong>Images, Text, and ScrollViews: </strong>You can deliver images and text on-screen, and specify text color, size, and weight for available fonts. You can use ScrollViews to display text that is outside the bounds of the container. 
And, you can make both text and images responsive to touch using TouchWrappers.</li> <li><strong>Pagers and Sequences: </strong>You can use Pagers to show a time-ordered sequence of items that typically advance automatically, such as slideshows. Or you can use Sequences to show a continuous list of choices, such as local restaurants, and allow customers to navigate the list via voice or by touch / remote control.</li> <li><strong>Layouts and Conditional Expressions: </strong>You can use Layouts to group components such as images, text, ScrollViews, Pagers, and Sequences, as well as describe their placement on screen. You can nest layouts, and take advantage of header, footer, and hint layouts provided by Amazon. You can customize by device type using the <em>when</em> property in your layouts; for example, you might conditionally select one nested layout when the device shape is round, and another when the shape is rectangular.</li> <li><strong>Speech Synchronization and Other Commands: </strong>You can send commands that change the audio or visual presentation of on-screen content, or trigger them automatically from within your APL documents. For example, you can highlight the line or block of text currently being read using the SpeakItem command and highlightMode. You can use SetPage and AutoPage commands to control the pages displayed in a Pager component, and the Idle command to insert pauses.</li> <li><strong>Video, Audio, and HTML5 (Coming Soon): </strong>In the coming months, you'll be able to include video and audio content within your APL layouts and continue your skill experience when media playback is done. You’ll also be able to use HTML5 in your skills. 
We’ll share more about these capabilities in the coming months.</li> </ul> <p>You can use the new APL authoring tool and test simulator in the Alexa Developer Console to iterate on your designs, visualize how they’ll render, and test interactions.</p> <h2>Combine APL Components to Create Unique Experiences</h2> <p>APL provides you with the tools you need to combine these components to build rich, flexible, adaptable, and easy-to-use skills for customers. If you have already enhanced your Alexa skill with visuals, then you are familiar with display templates. APL is more powerful and allows greater flexibility. With APL, you can: &nbsp;</p> <ul> <li><strong>Complement voice-first experiences with visuals. </strong>You can display images, text, and overlays to communicate requested information at a glance, and provide additional, relevant details. For example, when a customer requests a stock quote, <strong>CNBC</strong> shows a graph of the stock’s performance, and adds more information based on display size, including a table of stock fundamentals such as market capitalization and dividend. When a customer asks for the forecast, <strong>Big Sky</strong> displays images for sun, clouds, snow, and other conditions, and overlays projected temperature ranges. Both skills use the <em>when</em> property to adjust layout and information density based on display size.</li> <li><strong>Simplify navigation of large data sets.</strong> You can use on-screen lists and sequences to help customers quickly review options, and enable them to choose using either touch or voice. For example, <strong>Demaecan</strong> uses custom layouts and pagers to enable customers to view and order from the menus of thousands of restaurants in Japan. 
<strong>NextThere</strong> enables customers to view public transit schedule information across all of Australia and New Zealand, and uses custom APL layouts with images for route maps and text for scheduled trip times. German skill&nbsp;<strong>kicker</strong> uses APL layouts and TouchWrappers to provide up-to-date Bundesliga soccer scores, share news about the top leagues in Europe, and enable easy navigation.</li> <li><strong>Create immersive and delightful experiences</strong>. You can build highly engaging experiences that your customers can sit back and watch, or lean into to get things done. For example, <strong>Kayak</strong> uses an APL pager to deliver slideshows of iconic images for potential travel destinations, and provides additional information using a multi-row list custom layout on larger displays, such as Fire TV. <strong>Food Network</strong> created a three-column layout with recipe information including difficulty and total time, and complemented this with buttons that provide verbal suggestions and can be touched to take action. <strong>Best Buy</strong>&nbsp;uses APL styles to create an on-brand experience aimed at the goal of having the expertise of Best Buy available in your home, and takes advantage of text highlighting and Idle commands in a new step-by-step tech tips feature.</li> <li><strong>Adapt skills to work on different devices</strong>. Take advantage of conditional expressions and layouts to deliver the right experience on the device a customer is using, from focused personal information delivery on Echo Spot to communal, 7’ experiences on Fire TV. For example, the <strong>FX</strong> experience on Spot focuses primarily on the ability to play a trailer, while the Echo Show provides additional content description and the TV experience is optimized for large-screen viewing and more content/menu options. 
<strong>Who Wants to be a Millionaire</strong> brings the well-known gameshow to Alexa, and uses APL styles and layouts to deliver a familiar game experience that is optimized for Fire TV, Echo Show, and Echo Spot.</li> </ul> <p>Customers can use the skills described above starting next month.</p> <h2>Join Us for a Demonstration and Apply for the APL Preview</h2> <p>Please join us tomorrow, September 21, at 10 am PST&nbsp;<a href="" target="_blank">on Twitch</a> for a brief APL demonstration; the demonstration will also be available to watch later. You can also apply to participate in the APL preview, whether you've built a multimodal skill before or have never designed for Alexa. Tell us about your use case via the short survey <a href="" target="_blank">here</a>. We’ll notify you if your application is selected. Either way, you’ll be able to use APL soon when we release the public beta next month.</p> /blogs/alexa/post/acf7689b-f118-469e-b452-0c4da8d3e61a/new-alexa-smart-home-developer-tools-enable-seamless-voice-control-of-any-device-and-any-feature New Alexa Smart Home Developer Tools Enable Seamless Voice Control of Any Device and Any Feature BJ Haberkorn 2018-09-20T19:02:04+00:00 2018-09-20T20:32:38+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>Today, we’re excited to announce new developer tools in preview that make device setup easier for customers and let you extend voice control to any device—from the simplest wall plug to the most complex appliance—and to any feature of your device.</p> <p><img alt="" src="" /></p> <p style="text-align:justify">Device makers have already connected <a href="" target="_blank">more than 20,000 smart devices, from more than 3,500 brands</a>, to Alexa. Today, we’re excited to announce new developer tools in preview that make device setup easier for customers and let you extend voice control to any device—from the simplest wall plug to the most complex appliance—and to any feature of your device.
With the updated Smart Home Skill API, you can use new toggle, range, and mode APIs as building blocks to model the complete feature set of your device, and combine native Alexa smart home skill utterances and your own custom utterances in a single skill. With Wi-Fi simple setup, a part of the Frustration-Free Setup program, you can simplify customer setup of Wi-Fi devices using the same approach we use on Amazon devices such as Echo and Fire TV. Finally, with the Alexa Connect Kit, you can connect any device to the Internet and Alexa by integrating a hardware module over a simple serial interface, without worrying about managing cloud services, Alexa skills, or complex networking and security firmware.</p> <h2 style="text-align:justify">Use the Updated Smart Home Skill API (Preview) to Add Voice Control for All of Your Features in a Single Skill</h2> <p style="text-align:justify">We’re adding new capabilities for smart home skills&nbsp;that let you take advantage of Alexa’s native voice user interface (VUI) for more of the features of your device, and extend the VUI with your own custom intents. With these new tools, you can support any smart device you can imagine, and deliver the right voice user interface for all of your current features and any you invent in the future. The updated API, available in preview in the US starting today, allows you to:</p> <ul> <li style="text-align:justify"><strong>Use new Toggle, Range, and Mode interfaces to add native voice support for a broad range of smart device features</strong>. Alexa already supports the most common smart device features, such as turning on lights and locking doors; these new primitives give you the flexibility to control many more. You can use <strong>toggle</strong> for features with binary states, <strong>range</strong> for features with numeric values, and <strong>mode</strong> for features with multiple, named states.
In addition, you can use these building block APIs multiple times for a single device. For example, if you build smart guitar amplifiers, you might use toggle to turn on “overdrive,” range for “gain” and “master” dials, and mode for presets such as “jazz,” “blues,” and “rock.” Your customers could say, “Alexa, turn on overdrive,” “Alexa, set the gain to eight,” and “Alexa, change the tone to jazz.”&nbsp;</li> <li style="text-align:justify"><strong>Add custom intents to complete the voice experience for your customers. </strong>You can now combine native Alexa smart home intents with custom intents that you define, in a single skill, to let your device’s features shine through in the VUI just as you designed them. You can support unique utterances and unique features outside Amazon’s native smart home VUI. For example, your guitar amplifier skill can support “Alexa, more bass” or “Alexa, give me that Vox chime” as alternatives to specifying different ranges and modes. You can support utterances like “Alexa, what is a crunch tone?” to provide voice access to help information. You could even add Easter eggs like “Alexa, crank it up to eleven.”</li> <li style="text-align:justify"><strong>Combine all of your native and custom intents in a single skill. </strong>By combining native and custom intents in a single smart home skill, you can enable customers to use their voices to easily interact with your devices in all the ways that you can imagine, including unique utterances and branding.
Setup becomes easier because customers need to enable only a single skill for your device, and usage becomes easier because customers will be able to say “Alexa, &lt;perform action&gt;” instead of “Alexa, ask &lt;skill name&gt; to &lt;perform action&gt;” for your custom intents.</li> </ul> <p style="margin-left:0in; text-align:justify">Several device makers are already developing with the updated smart home skill API:</p> <ul> <li style="text-align:justify"><strong>Moen</strong> is adding the new toggle, mode, and range interfaces to their U by Moen Shower skill to provide support for simple phrases such as, “Alexa, set my shower to 100 degrees.”</li> <li style="text-align:justify"><strong>Ecobee</strong> is moving the intents in their custom Ecobee Plus skill – including equipment status queries and vacation scheduling – into their Ecobee smart home skill. Customers will only have to enable a single skill, and can simply say, “Alexa, is heating enabled?” and “Alexa, return to my schedule.”</li> <li style="text-align:justify"><strong>iRobot</strong> is working to enable name-free Roomba intents like “Alexa, schedule a cleaning job with {robot_name}” and “Alexa, schedule a cleaning job for {time} with all of my robots.”</li> </ul> <p style="text-align:justify">The Updated Smart Home Skill API preview is available in the US. You can read <a href="" target="_blank">Understanding the Smart Home Skill API</a> to learn more, or <a href="" target="_blank">click here</a>&nbsp;to apply for the preview.</p> <p style="text-align:justify">In the coming months, we will also extend our native smart home APIs with additional doorbell, camera, and security system capabilities. You’ll be able to use Alexa to notify customers when someone rings the doorbell, and use Alexa to talk to your visitor. 
You’ll also be able to use the new SecurityPanelController interface to enable customers to arm and disarm your security systems, and query for system status.</p> <h2>Introducing the Frustration-Free Setup Program and the Wi-Fi Simple Setup SDK and Service (Preview)</h2> <p>Getting smart devices connected to the Internet is still too difficult, which is why today we announced the Frustration-Free Setup program&nbsp;to help remove or reduce steps in setting up connected devices. Last year, we made setup easier with Zigbee simple setup, available on Amazon devices with a built-in smart home hub like Echo Plus and the new Echo Show. Today, we are introducing Wi-Fi simple setup, an SDK and service that helps customers connect smart home devices to Wi-Fi. When a new smart home device that incorporates the Wi-Fi simple setup SDK is powered on for the first time, it searches for and connects to the Wi-Fi simple setup network established by other devices in the home. This network is used to authenticate the device, securely connect to the Amazon Wi-Fi locker, an encrypted cloud credential store, and deliver a customer’s stored home network credentials to the new device. Our goal with Wi-Fi simple setup is not just to simplify setup but also to reduce the costs device makers incur through customer support calls and device returns.</p> <p>Any Wi-Fi smart home device can host the Wi-Fi simple setup network, and any smart home device that has implemented the Wi-Fi simple setup SDK can seamlessly connect to it. The SDK, which has been built for embedded Linux and will be available for other embedded platforms in the future, is provided free to device makers. We are already working with partners including Wemo, eero, Kasa Smart by TP-Link, and TP-Link to bring the Wi-Fi simple setup experience to devices launching next year.
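As a rough illustration of the setup sequence just described, here is a minimal sketch in Python. Every function name and the SSID prefix below are hypothetical placeholders for illustration only; none of them come from the actual Wi-Fi simple setup SDK:

```python
# Hypothetical sketch of the Wi-Fi simple setup flow: discover the setup
# network, authenticate the device, fetch the customer's stored home-network
# credentials from the Wi-Fi locker, then join the home network.
# All names here are illustrative, not real SDK APIs.

def simple_setup(scan, authenticate, fetch_credentials, join):
    """Run the setup sequence using injected transport callbacks."""
    # Look for the setup network hosted by other devices in the home.
    setup_net = next((n for n in scan() if n.startswith("WSS-")), None)  # hypothetical SSID prefix
    if setup_net is None:
        return "manual-setup-required"   # fall back to an app-based flow
    if not authenticate(setup_net):      # the device must prove its identity first
        return "auth-failed"
    ssid, psk = fetch_credentials()      # stored home-network credentials from the locker
    return "connected" if join(ssid, psk) else "join-failed"
```

Injecting the scan, authenticate, fetch, and join steps as callbacks keeps the sketch focused on the ordering of the handshake rather than on any particular radio or cloud transport.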
To apply for the Wi-Fi simple setup developer preview, <a href="" target="_blank">click here</a>.</p> <h2 style="text-align:justify">Connect to Alexa via a Serial Port using the Alexa Connect Kit (Preview)</h2> <p style="text-align:justify">The Alexa Connect Kit (Preview), or ACK, gives you the tools to turn any device with a microcontroller (MCU) into a smart&nbsp;device. Simply add an Amazon-managed ACK module to your device, connect it to your device’s MCU over a serial interface, and add ACK interface code to your device’s firmware to interpret control messages from the module. To make the integration as easy as possible, the ACK development kit includes a sample MCU with sample code you can port to your device’s specific MCU. The module uses Wi-Fi to securely connect your device to Amazon-managed cloud services that provide Alexa control, Wi-Fi simple setup, Dash Replenishment Service, and all the device metrics and logs you need to manage devices in the field. With an ACK module on your device, you can focus on building great hardware. ACK provides:</p> <ul> <li style="text-align:justify"><strong>A simple way to connect your device to Alexa.</strong> With ACK, you don’t have to build and manage cloud services or write complex connectivity and security firmware. Instead, you simply integrate the Amazon-managed hardware module and production-ready ACK code into your device. Basic devices like fans and smart plugs require as little as 50 lines of code to handle Alexa directives.</li> <li style="text-align:justify"><strong>Faster time to market.</strong> You can prototype using the ACK development board, sample MCU, and code, and be controlling your device using voice commands in as little as an hour. Once you’ve modeled your device’s physical features with ACK integration code, you’re ready to place production-ready ACK modules into finished devices for testing.
By eliminating the requirement to build your own cloud service and mobile application, ACK enables you to get to market in months instead of years, with a smaller development team.</li> <li style="text-align:justify"><strong>High-quality customer experiences.</strong> With ACK-based devices, you can take advantage of all the capabilities of the Smart Home Skill API, as well as Wi-Fi simple setup, and the Dash Replenishment Service. ACK’s managed cloud service is built on <a href="" target="_blank">AWS IoT</a>, and meets the cloud reliability requirements for the Works with Alexa (WWA) certification program.</li> <li style="text-align:justify"><strong>Cost certainty.</strong> With ACK, you pay for the hardware module and a low, upfront fee that covers your ongoing use of the ACK cloud service. ACK enables you to turn the ongoing variable cost of managing your own cloud service into a fixed, one-time cost.</li> <li style="text-align:justify"><strong>Extensibility</strong>. ACK will offer cloud extensibility points, enabling you to further enrich your customer experience. You can integrate not only with the ACK cloud service, but also with your own mobile application, your own cloud service, and third-party cloud services such as IFTTT.&nbsp;</li> </ul> <p>We used ACK internally to build the AmazonBasics Microwave announced today. In addition, leading device makers and consumer products companies like Procter &amp; Gamble, Hamilton Beach, Tonly, and Midea are already using the Alexa Connect Kit to develop new devices.</p> <p>“We’ve been surprised at how easy it is to use the Alexa Connect Kit to prototype devices and create Alexa commands with just a few lines of code,” said Scott Tidey, Sr. Vice President, North American Sales and Marketing, Hamilton Beach. 
“We look forward to using the Alexa Connect Kit to reduce time to market for new product lines while simultaneously offering more of the features our consumers want.”</p> <p style="text-align:justify">To learn more and apply for the ACK preview, click <a href="" target="_blank">here</a>.</p> <h2 style="text-align:justify">Reach and Delight More Customers via Works with Alexa Certification</h2> <p style="text-align:justify">Regardless of the type of smart device you build, you can submit it for certification via the Works with Alexa (WWA) program. WWA raises the bar on responsiveness, reliability, and functionality, ensuring your customers have a great smart home experience. When your products are certified, they can carry the Works with Alexa badge in the Amazon Smart Home Store and on product packaging. This increases customer confidence that your product integrates seamlessly with Alexa. Visit the <a href="" target="_blank">Works with Alexa program page</a> for the full list of detailed requirements.</p> <h2 style="text-align:justify">Alexa Smart Home: Improved Tools and Customer Experiences</h2> <p style="text-align:justify">Amazon is making it easier every day for developers and device makers to create new experiences for customers, and Alexa smart home development tools now provide everything you need to deliver rich, delightful customer experiences for all your devices. We can’t wait to see what you build. Learn more about <a href="" target="_blank">connecting devices to Alexa</a>.</p> /blogs/alexa/post/043748fd-1ea2-4fb0-98ff-be911ceeb140/introducing-new-tools-for-device-makers-to-create-screen-based-products-with-alexa Introducing New Tools for Device Makers to Create Screen-Based Products with Alexa Gagan Luthra 2018-09-20T19:01:38+00:00 2018-09-20T19:01:38+00:00 <p><img alt="" src="" /></p> <p>Introducing the Alexa Smart Screen and TV Device SDK&nbsp;and dev kit. 
The solutions enable device makers to create voice-forward, screen-based products&nbsp;with rich visual experiences.</p> <p><img alt="" src="" /></p> <p>With the Echo Show, Amazon introduced a new device category that combined the best of voice-forward&nbsp;experiences with a visual display. We’ve brought Alexa to even more screen-based devices in the time since, including Echo Spot and Fire tablets, and today we announced an all-new Echo Show with a beautiful 10” screen and enhanced visuals. Alongside its release, we’re excited to introduce new tools for device makers to create their own screen-based products with Alexa. These tools will be available to commercial device makers through a developer preview.</p> <h2>New SDK to Create Screen-Based Devices with Alexa&nbsp;</h2> <p>The <a href="" target="_blank"><em>Alexa Smart Screen and TV Device SDK</em></a> is a new solution that enables device makers to create beautiful screen-based products with Alexa, combining visual experiences with standard Alexa capabilities like music, information, smart home control, and tens of thousands of Alexa skills. The SDK brings rich visual experiences to voice-forward products and includes support for visual Alexa skills developed using the new <a href="">Alexa Presentation Language (APL)</a>. Device makers can bring differentiated products to market for a variety of form factors, including smart displays, ambient screens, tablets, TVs, and more. The <em>Alexa Smart Screen and TV Device SDK</em> is available for Android- and Linux-based devices.&nbsp;</p> <p>The <em>Alexa Smart Screen and TV Device SDK for Android</em> provides an application package that brings Echo Show-like visual experiences to tablets, TVs, and other screen-based products. 
It offers many of the best Alexa features right out of the box (ambient home screen, video calling and messaging, music with lyrics, video flash briefing, security camera feeds, photos, shopping, timers &amp; alarms, weather, Q&amp;A, to-do lists, movies &amp; showtimes, and more) and includes support for the new APL-based visual skills. The Android variant gives device makers access to the same technology behind the next-generation Echo Show, offering a faster integration path and a consistent user experience.&nbsp;</p> <p>The <em>Alexa Smart Screen and TV Device SDK for Linux</em> includes new capability agents to interface with and parse the new APL-based skills, bringing rich visual experiences to a variety of screen-based products with Alexa, including smart home devices, home appliances, smart mirrors, and more. The SDK offers a new web-based visual runtime that renders the new visual skills and interactions authored using APL.&nbsp;</p> <p>Commercial device makers interested in using the <em>Alexa Smart Screen and TV Device SDK</em> can sign up to request an invite to the developer preview. <a href="" target="_blank">Learn more&nbsp;&raquo;</a></p> <h2>New Hardware Development Kit for Alexa Built-in Devices</h2> <p>We are also excited to introduce the <a href="" target="_blank"><em>Alexa Smart Screen Dev Kit</em></a>, a hardware and software reference solution for screen-based products with Alexa built-in. Featuring a complete system design, the kit is based on a production-ready ARM Cortex-A53 chipset that hosts the Alexa client software and audio front-end technology for ‘Alexa’ wake word detection. It features a 4” touch screen, a 3-mic array for far-field speech recognition, the Amazon wake word engine, and includes the new <em>Alexa Smart Screen and TV Device SDK</em> and a sample voice-forward GUI.
In addition to support for screens with rich visuals, this dev kit features the latest AVS technologies like Alexa Multi-Room Music and Alexa communications.&nbsp;</p> <p>Commercial device makers can request an invite and learn more about the dev kit. <a href="" target="_blank">Learn more&nbsp;&raquo;&nbsp;</a></p> <p style="margin-left:40px"><em>“Customers love placing visual, voice-enabled products at the center of their home, and these new tools will allow device makers to deliver even more choice and flexibility when it comes to visual products with Alexa,” said Pete Thompson, Vice President of Alexa Voice Service. “Because the new SDK and dev kit were built using the same core technology from the all-new Echo Show, device makers can build truly differentiated products while continuing to deliver a more consistent customer experience that includes the latest Alexa features. We can’t wait to see what they create.”&nbsp;</em></p> <h2>Product Spotlight</h2> <p>Lenovo used&nbsp;the <em>Alexa Smart Screen and TV Device SDK for Android</em> in beta to develop the Lenovo Smart Tabs, a new series of incredible Android tablets that feature Alexa built-in and Show Mode&nbsp;for more visual, immersive Alexa experiences. The Smart Tabs will enhance the way customers interact with their music, control smart home devices, and more.</p> <p>Coming soon: Sony collaborated with Amazon to use the SDK to bring audio and visual experiences to select Sony Smart TV models that work with companion Alexa devices, like the Amazon Echo. 
Customers can talk to Alexa through the Echo and see visual responses on the TV for music and camera feeds.&nbsp;</p> /blogs/alexa/post/1bbb6601-4ce4-4a8d-8b72-dbb0b5c100bb/use-the-ask-music-skill-api-to-stream-from-your-music-service-to-alexa-customers Coming Soon: Use the ASK Music Skill API to Stream Music from Your Service to Alexa Customers BJ Haberkorn 2018-09-20T19:01:08+00:00 2018-09-20T19:03:39+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>With the ASK Music Skill API, coming soon, you can enable your customers to access your entire music catalog via voice on tens of millions of Alexa-enabled devices from Amazon and other manufacturers.</p> <p><img alt="" src="" /></p> <p>Soon you will be able to use the Alexa Skills Kit (ASK) Music Skill API to stream from your music service to Alexa customers in the US. The Music Skill API gives you access to the capabilities and features that other music providers such as Spotify, Pandora, Saavn, and Deezer use today. With these APIs, you can enable your customers to access your entire music catalog via voice on tens of millions of Alexa-enabled devices from Amazon and other manufacturers. To learn more and sign up to be notified when the API is available, <a href="">click here</a>.</p> <h2>Take Advantage of Amazon’s Voice User Interface Expertise</h2> <p>Customers are increasingly using voice interfaces as a hands-free way to listen to music. By using Alexa’s built-in music capabilities, you will make it easier for your customers to engage with your service just by asking. With the Music Skill API, you can enable your customers to play music by simply saying, for example, “Alexa, play songs by &lt;artist&gt; from &lt;your music service&gt;.” You do not need to build your own voice user interface; instead, you simply provide your music catalog metadata to Amazon on a regular basis, and we automatically update the Alexa voice model and grammars used by customers.
This enables you to deliver a reliable and accurate voice experience without requiring automatic speech recognition expertise on your team.</p> <h2>Enable Multi-Room Music, Music Alarms, and More&nbsp;</h2> <p>Once you build and publish a music skill, your customers can stream to any Amazon Echo or other Alexa-enabled device,&nbsp;or to multiple Echo devices using multi-room music. They can also use visual search on Echo Show and Echo Spot, and use music in your catalog for alarms. You can control the experience with skip limits and stream protection, and monitor and improve ongoing skill performance using event metrics including skill enablement and disablement, account linking, and music playback. Best of all, once the Music Skill API is available, you can develop your skill in a completely self-service manner using the Alexa Skills Kit.</p> <p>TIDAL and several other music service providers are already using the Music Skill API and expect to launch skills later this year. “It's TIDAL’s priority to provide members with easy access to our platform so they can have a seamless listening experience, wherever they are,” said Lior Tibon, Chief Operating Officer at TIDAL. “We’re excited to be the first streaming platform to take advantage of Amazon’s innovative self-service Music Skill API and bring TIDAL to Alexa-enabled devices.”</p> <h2>Sign Up for More Information</h2> <p>The Music Skill API will be available to developers soon. 
To learn more and sign up to be notified when the API is available, <a href="">click here</a>.</p> /blogs/alexa/post/2ccbb04d-47c0-41c8-92fe-feb581deb676/understanding-the-updated-smart-home-skill-api-preview Understanding the Updated Smart Home Skill API (Preview) Brian Crum 2018-09-20T19:00:37+00:00 2018-09-21T12:49:33+00:00 <p>Today we <a href="" target="_blank">announced</a>&nbsp;two important updates to the Smart Home Skill API, available today in preview to developers in the US, which let you extend voice control to any device—from the simplest wall plug to the most complex appliance—and to any feature of your device.</p> <p>Today we <a href="" target="_blank">announced</a> two important updates to the Smart Home Skill API, available today in preview to developers in the US, which let you extend voice control to any device—from the simplest wall plug to the most complex appliance—and to any feature of your device. First, we are adding Toggle, Range, and Mode capability interfaces, which you can use like building blocks to model the full feature set of your devices. The second update is the ability to use both custom intents and smart home directives in one skill.
You no longer need to create a second custom skill when you want to add voice control for your device’s unique features or have a more conversational experience, and your customers only need to enable a single skill.</p> <p>To get a first look at the Smart Home Skill API updates, check out our <a href="" target="_blank">technical documentation</a> and apply to <a href="" target="_blank">join the Smart Home Skill API (Preview)</a>&nbsp;to begin developing skills using these new capabilities.</p> <h2>Connect Any Device to Alexa with Flexible Building Block APIs</h2> <p>We’re adding three new capability interfaces: <a href="" target="_blank">Alexa.ToggleController</a>, <a href="" target="_blank">Alexa.RangeController</a>, and <a href="" target="_blank">Alexa.ModeController</a> to the Smart Home Skill API (Preview) that allow you to connect the features of any device to Alexa. You can customize these new capabilities to provide native integration for settings or features of a device that follow an on/off, numeric, or enumeration pattern. To provide fine control of your device, you can combine multiple instances of these capabilities as building blocks together with any of the <a href="" target="_blank">dozens of existing capability interfaces</a> like <a href="" target="_blank">Alexa.ContactSensor</a>. This enables you to provide voice control to anything from simple devices like plugs, to complex home appliances like washers and dryers with several toggles, multiple groups of settings, and many modes of operation. 
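As a rough sketch of what combining multiple instances of one interface looks like in practice, the snippet below assembles a discovery endpoint in Python. The field shapes follow the abbreviated dishwasher discovery example later in this post; a real Discovery response carries additional required fields:

```python
# Minimal sketch: a discovery endpoint declaring two Alexa.ToggleController
# instances, mirroring the abbreviated dishwasher example in this post.
# A real response includes more fields (capabilityResources, properties, etc.).

def toggle_capability(instance):
    """Build one ToggleController capability entry for the given feature."""
    return {
        "type": "AlexaInterface",
        "interface": "Alexa.ToggleController",
        "version": "3",
        "instance": instance,  # uniquely identifies which toggle this is
    }

endpoint = {
    "endpointId": "endpoint-dishwasher",
    "friendlyName": "dishwasher",
    "capabilities": [
        toggle_capability("SampleManufacturer.Washer.AutoDry"),
        toggle_capability("SampleManufacturer.Washer.Buzzer"),
    ],
}
```

Because each capability entry carries its own `instance` identifier, Alexa can tell the device's toggles apart even though they all use the same interface.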
Alexa already supports the most common smart device features, such as turning on lights and locking doors; these new primitives give you the flexibility to control many more.</p> <p><strong><a href="" target="_blank">Alexa.ToggleController</a> </strong>- The Alexa.ToggleController interface enables Alexa to control on/off-based settings like “oscillate” on a fan, enabling/disabling the guest network of a network router, or starting/stopping a zone of a sprinkler system. By using the ToggleController interface, your customers can use on/off-based voice utterances like “Alexa, turn on child-lock on the dishwasher,” “Alexa, start the ice maker,” or “Alexa, enable auto-mode on the fan.”</p> <p><strong><a href="" target="_blank">Alexa.RangeController</a> </strong>- The Alexa.RangeController interface provides an integration for numeric settings like the speed of a fan, the temperature of a shower, or other measurement values. By using the Alexa.RangeController interface, your customers can ask “Alexa, turn up the fan speed,” or “Alexa, what is the temperature of the shower?”</p> <p><strong><a href="" target="_blank">Alexa.ModeController</a></strong> - The Alexa.ModeController interface provides an integration for settings that can be one of several named states or modes. These possible modes may be ordered, like low, medium, and high, or unordered, such as wash cycles of “Normal”, “Delicates”, or “Permanent Press”. By using the Alexa.ModeController interface, your customers can use utterances such as “Alexa, set the wash cycle to delicates.”</p> <p>You declare these capabilities the same way as other capability interfaces such as <a href="" target="_blank">Alexa.ColorController</a>, <a href="" target="_blank">Alexa.LockController</a>, or <a href="" target="_blank">Alexa.Speaker</a>.
With these new APIs, however, a single device can include multiple instances of <a href="" target="_blank">Alexa.ToggleController</a>, <a href="" target="_blank">Alexa.RangeController</a>, and <a href="" target="_blank">Alexa.ModeController</a>. This allows you to easily combine collections of settings that map the features of your device, however complicated they might be, to Alexa.</p> <p>For example, a fan implementing the building block APIs could have three toggle settings with friendly names like “Auto Mode,” “Night Mode,” and “Oscillate.” This allows a customer to control each with phrases like, “Start Auto Mode,” “Turn on Night Mode,” or “Enable Oscillate.” These capabilities are defined on the device with three <a href="" target="_blank">Alexa.ToggleControllers</a>, one for each toggle setting.</p> <p>Digging deeper, a Smart Home Discovery response describing a dishwasher appliance with multiple instances of the <a href="" target="_blank">Alexa.ToggleController</a> capability interface in the capabilities array of an endpoint could look something like the following:</p> <pre> <code>&quot;endpoints&quot;: [
  {
    &quot;endpointId&quot;: &quot;endpoint-dishwasher&quot;,
    &quot;friendlyName&quot;: &quot;dishwasher&quot;,
    ...
    &quot;capabilities&quot;: [
      {
        &quot;type&quot;: &quot;AlexaInterface&quot;,
        &quot;interface&quot;: &quot;Alexa.ToggleController&quot;,
        &quot;version&quot;: &quot;3&quot;,
        &quot;instance&quot;: &quot;SampleManufacturer.Washer.AutoDry&quot;,
        ...
      },
      {
        &quot;type&quot;: &quot;AlexaInterface&quot;,
        &quot;interface&quot;: &quot;Alexa.ToggleController&quot;,
        &quot;version&quot;: &quot;3&quot;,
        &quot;instance&quot;: &quot;SampleManufacturer.Washer.Buzzer&quot;,
        ...
      }
    ]
  }
]</code></pre> <p>Toggle, Range, and Mode capabilities also support a new mechanism for specifying a list of capability friendly names, allowing you to customize what words customers can use to control their device.
For example, the SampleManufacturer.Washer.Buzzer toggle instance in the previous example could have a friendly name of “Buzzer”, with alternatives such as “Beeper” or “Alarm”. This enables customers to more easily discover and naturally use this toggle by giving them additional names for the buzzer.</p> <p>The Range and Mode capabilities allow further customization of names, with Range allowing named “presets” corresponding to particular numeric values, and Mode allowing named “modes.” For example, an electric tea kettle with various temperature set points might provide aliases for each type of tea that should steep at a particular temperature, so customers could say “Alexa, set the tea kettle to 212 degrees Fahrenheit,” or simply “Alexa, set the tea kettle to boiling.” In this example, “boiling” is an alias for the range value of 212 degrees Fahrenheit. The Smart Home Discovery response for a device that specifies natural aliases for an <a href="" target="_blank">Alexa.RangeController</a> could include a collection of capability presets that looks like the following:</p> <pre> <code>{
  &quot;rangeValue&quot;: 212,
  &quot;presetResources&quot;: {
    &quot;friendlyNames&quot;: [
      {
        &quot;@type&quot;: &quot;text&quot;,
        &quot;value&quot;: { &quot;text&quot;: &quot;boiling&quot; }
      },
      {
        &quot;@type&quot;: &quot;text&quot;,
        &quot;value&quot;: { &quot;text&quot;: &quot;very hot&quot; }
      }
    ]
  }
}</code></pre> <p>For names of your device features or settings, you will also be able to take advantage of catalogs of pre-defined terms that are automatically localized to new regions. For terms that are specific to your devices or brand, you can provide translations in each language you support and Alexa will take care of choosing the right language.</p> <p>Any functionality modeled using Toggle, Range, and Mode is available for customers to use as part of an Alexa Routine as well.
For example, say “Alexa, good morning,” and the shower automatically adjusts to the customer’s pre-defined settings. Or “Alexa, good night,” and the fan turns down to night mode as the lights blink out. When you model your devices’ features with the native smart home device capabilities, Alexa knows how to make your devices work together to create a smarter, more comfortable, and more productive home. The <a href="">Alexa.ToggleController</a>, <a href="">Alexa.RangeController</a>, and <a href="">Alexa.ModeController</a> capability interfaces make it even easier for you to let Alexa control any smart home device.</p> <p>For more technical information, see <a href="" target="_blank">Connect Any Device to Alexa</a>, the Alexa Skills Kit <a href="" target="_blank">Capability Interface Message Guide</a>, and the <a href="" target="_blank">device templates</a> documentation.</p> <h2>Use Multiple Skill Types in a Single Alexa Skill</h2> <p>Customers no longer need to search for and enable multiple skills to access all the features of their Alexa-connected device. Having to enable multiple skills also meant that you had to configure and maintain separate skills for custom and smart home functionality. Now, you can publish and maintain a single skill that enables both the smart home and custom features of your device. This reduces effort for you and simplifies the customer experience.</p> <p>Implementing multiple skill types in a single skill involves adding an additional model to a base skill either from the <a href="" target="_blank">Alexa Developer Console</a>, through <a href="" target="_blank">SMAPI</a> directly, or via the <a href="" target="_blank">Alexa Skills Kit Command-Line Interface (ASK CLI)</a>, and updating your skill logic to handle requests from those models.
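As a minimal sketch of what that skill logic can look like, the handler below routes on the request envelope: smart home directives arrive wrapped in a top-level "directive" object, while custom interaction model requests carry a top-level "request" object. The two handler functions are hypothetical placeholders; a production skill would return full Alexa response objects:

```python
# Sketch of fan-in routing for a skill combining a smart home model with a
# custom interaction model. The envelope check reflects the two request
# formats; the handlers below are placeholders you would implement.

def handle_smart_home(directive):
    """Placeholder: dispatch a smart home directive by its namespace."""
    namespace = directive["header"]["namespace"]   # e.g. "Alexa.ToggleController"
    return {"handled": "smart-home", "namespace": namespace}

def handle_custom(request):
    """Placeholder: dispatch a custom interaction model request by type."""
    return {"handled": "custom", "type": request["type"]}  # e.g. "IntentRequest"

def lambda_handler(event, context=None):
    # Smart home directives carry a top-level "directive" object; custom
    # session requests carry a top-level "request" object instead.
    if "directive" in event:
        return handle_smart_home(event["directive"])
    return handle_custom(event["request"])
```

From here, each placeholder would typically hand off to an Alexa Skills Kit SDK dispatcher for its model, so the envelope check stays the only routing logic you maintain yourself.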
You can also update existing skills through the <a href="" target="_blank">Alexa Developer Console</a>.</p> <p>For example, adding multiple models from the <a href="" target="_blank">Alexa Developer Console</a> is simply part of the skill configuration workflow.</p> <p><img alt="" src="" style="height:285px; width:768px" /></p> <p>After adding a model to your skill, you will need to handle the functionality of that model in your skill logic, in the same way you would previously handle custom, video, or smart home requests in separate skills.</p> <p>For example, if you have an existing skill that uses a smart home model and you would like to update it to use a custom model, you would need to add skill logic to handle the session requests from the custom interaction model, either manually or with one of the Alexa Skills Kit Software Development Kits (SDKs) for <a href="" target="_blank">Node.js</a>, <a href="" target="_blank">Java</a>, or <a href="" target="_blank">Python</a>. In practical terms, this means evaluating each incoming request and responding appropriately to custom session requests or smart home directives.</p> <p>If you build a skill that combines a video model with smart home or custom models, you will need to start the skill configuration workflow with the video skill. This is because the first-run experience of a video skill requires users to associate a service provider with an AV device.</p> <p>For more information on using multiple models in your skills, see <a href="" target="_blank">Create Skills that Combine Models</a> and the <a href="" target="_blank">Alexa.CustomIntent Interface</a> for declaring custom intents during device discovery.</p> <p>The new updates are backwards-compatible and will be available later this year. To get a first look at the Smart Home Skill API updates, check out our <a href="" target="_blank">technical documentation</a>. 
To begin development using these new capabilities, apply to join the Smart Home Skill API (Preview). Your feedback is important to us and we are excited to see what you will connect to Alexa.</p> /blogs/alexa/post/6d7c726f-eff3-4d0c-95d2-63ced3951263/introducing-the-alexa-connect-kit-connect-devices-to-alexa-without-managing-a-cloud-writing-an-alexa-skill-or-developing-complex-networking-and-security-firmware-apply-today-for-the-preview Introducing the Alexa Connect Kit: Connect Devices to Alexa without Managing Cloud Services, Writing an Alexa Skill, or Developing Complex Networking and Security Firmware Brian Crum 2018-09-20T19:00:08+00:00 2018-09-20T20:16:23+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>Today, we are excited to announce the <a href="" target="_blank">Alexa Connect Kit (Preview)</a>, or ACK. ACK is a new way for device makers to connect devices to Alexa without worrying about managing cloud services, writing an Alexa skill, or developing complex networking and security firmware.</p> <p><img alt="" src="" /></p> <p>ACK enables device makers to make any device an Alexa-connected smart device. You can <a href="" target="_blank">apply today to join the ACK preview</a>.</p> <p>With ACK, you pay for the hardware module and a low, upfront fee that covers your ongoing use of the ACK cloud service. ACK enables you to turn the ongoing and variable cost of managing your own cloud service into a fixed, one-time cost. ACK will also offer cloud extensibility options in addition to ACK cloud services for you to connect your device to your own mobile applications, your own cloud service, and third-party cloud services such as IFTTT. 
While you build and manage devices more quickly and economically, your customers enjoy Alexa control, Wi-Fi simple setup, and the Dash Replenishment Service. Leading device makers and consumer products companies including Procter &amp; Gamble, Hamilton Beach, Tonly, and Midea are using ACK to develop smart devices. Amazon used ACK to build the new <a href="" target="_blank">AmazonBasics Microwave</a>, available for pre-order today.</p> <h2>No Cloud to Manage, No Alexa Skills to Create, Simplified Firmware Development</h2> <p>With ACK, you don’t need to manage cloud services, create an Alexa skill, or write complex connectivity and security firmware. Instead, you simply add an Amazon-managed ACK module to your device, connect it to your device’s microcontroller unit (MCU) over a serial interface, and add interface code to your device’s firmware to interpret control messages from the ACK module. To make the integration as easy as possible, the ACK development kit includes a sample MCU and sample code you can port to your device’s specific MCU. The module uses Wi-Fi to securely connect your device to Amazon-managed cloud services that provide Alexa control, Wi-Fi simple setup, Dash Replenishment Service, and all the device metrics and logs you need to manage devices in the field. With an ACK module on your device, you can focus on building great hardware.</p> <h2>Create High-Quality Alexa-Connected Devices for Your Customers</h2> <p>Simplifying your development process doesn’t mean compromising customer experience. ACK-based devices can take advantage of all the capabilities of the <a href="" target="_blank">updated Smart Home Skill API (Preview)</a>, as well as <a href="" target="_blank">Wi-Fi simple setup</a> and the <a href="" target="_blank">Dash Replenishment Service</a>. 
The ACK managed cloud service is built on AWS IoT and meets the cloud reliability requirements for the <a href="" target="_blank">Works with Alexa (WWA) certification program</a>. After your device is certified, you can feature the WWA badge in the <a href="" target="_blank">Amazon Smart Home Store</a> and on product packaging, increasing customer confidence that your devices integrate seamlessly with Alexa.</p> <h2>Turn Cost Uncertainty into Cost Certainty</h2> <p>Instead of worrying about usage spikes and variable cloud costs, you pay for the hardware module and a low, upfront fee that covers your ongoing use of the ACK cloud service. ACK turns the previously ongoing and variable cost of managing your own cloud service into a fixed, one-time cost, making business planning more predictable.</p> <h2>Extend Your Investment in ACK</h2> <p>In addition to ACK cloud services, ACK will offer options for you to add cloud extensibility points, enabling you to further enrich your customer experience. You can integrate not only with the ACK cloud service, but also with your mobile applications, your own cloud service, and third-party cloud services such as IFTTT. Learn more by applying for the ACK preview.</p> <h2>Get Started Quickly with the ACK Development Board</h2> <p>The ACK development board makes it easy to get started. ACK development is as simple as modeling your device’s features with corresponding Alexa capabilities and integrating lightweight interface code to interpret messages from the ACK module for your device. Basic devices like fans and smart plugs require as little as 50 lines of new code to handle Alexa directives. 
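</p> <p>As a rough illustration of what that interface code does, the sketch below (Python, with an invented message format; the real ACK module speaks a serial protocol defined by the ACK SDK and sample code) maps incoming control messages to device state:</p>

```python
# Illustrative sketch only: interpret control messages from an ACK-style
# module and update device hardware state. The message fields here
# ("capability", "value") are invented for illustration.
class FanDevice:
    def __init__(self):
        self.power = False
        self.speed = 0

    def handle_message(self, message):
        if message["capability"] == "power":
            # Power directives toggle the fan on or off.
            self.power = message["value"] == "ON"
        elif message["capability"] == "range":
            # Range directives set the fan speed, e.g. 1-10.
            self.speed = int(message["value"])
        return {"power": self.power, "speed": self.speed}

fan = FanDevice()
fan.handle_message({"capability": "power", "value": "ON"})
state = fan.handle_message({"capability": "range", "value": 7})
```

<p>On a real product this logic lives in MCU firmware (typically C) behind the serial link to the ACK module, which is why the porting effort stays small.</p> <p>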
Using the included sample MCU and interface code, you can quickly prototype a device and, when you’re ready, port the implementation to your device’s MCU for testing and device production.</p> <p>“We’ve been surprised at how easy it is to use the Alexa Connect Kit to prototype devices and create Alexa commands with just a few lines of code,” says Scott Tidey, Sr. Vice President, North American Sales and Marketing, Hamilton Beach. “We look forward to using the Alexa Connect Kit to reduce time to market for new product lines while simultaneously offering more of the features our consumers want.”</p> <h2>Apply to Join the ACK Preview</h2> <p>Learn more and apply today to join the <a href="" target="_blank">Alexa Connect Kit (Preview)</a>. We can’t wait to see what you build.</p> /blogs/alexa/post/c4bbbbaf-3559-4b5e-b920-83a9d594e732/alexa-play-rolling-in-the-deep-everywhere-announcing-multi-room-music-for-products-with-alexa-built-in &quot;Alexa, Play Rolling in the Deep Everywhere.&quot; Announcing Multi-Room Music for Products with Alexa Built-in Sanjay Ramaswamy 2018-09-20T18:59:27+00:00 2018-09-20T18:59:27+00:00 <p><img alt="" src="" /></p> <p>When we announced Alexa Multi-Room Music (MRM) for Echo devices last year, we introduced a developer preview for third-party device makers to enable them to create similar Alexa MRM experiences for their products. Today, we make the development tools generally available to all device makers.</p> <p><img alt="" src="" /></p> <p>There are millions of people who own multiple devices with Alexa built-in, and these customers want to be able to combine their speakers to create whole-home audio experiences. That’s why we created Alexa Multi-Room Music (MRM), which enables you to play synchronized music on multiple Echo devices throughout your home. 
The Alexa App lets you create a group of two or more Echo devices and give it a name, such as “Living Room.” You can then simply say, “Alexa, play my music in the Living Room.”</p> <p>But customers don’t just have Echo devices in their home. They also have speakers from other brands with Alexa built-in. When we announced Alexa MRM last year, we also announced a developer preview of tools for device makers to create multi-room experiences. Today, we’re excited to make these tools available for all device makers to integrate Alexa MRM into their products. As device makers do so, customers can group Alexa MRM-compatible products across brands (including Echo devices) to create delightful multi-room experiences.</p> <h2>Multiple Options for Developing Devices with Alexa MRM</h2> <p><strong>Prototype with a Development Kit:</strong> You can prototype using the Alexa MRM-qualified <a href="" target="_blank">i.MX 8M development kit from NXP</a>. We have provided the Alexa MRM binaries for this dev kit in the ‘Resources’ section of the AVS Developer Portal.</p> <p><strong>Accelerate with Solution Providers:</strong> Bring your devices to market faster with an Alexa MRM solution from a <a href="" target="_blank">Systems Integrator</a> (Libre LS9 or Linkplay A98) or a turnkey Alexa MRM solution from an <a href="">Original Design Manufacturer</a> (Tonly).</p> <p><strong>Build Your Own with the AVS Device SDK:</strong> If your device meets the minimum hardware and software requirements for Alexa MRM, you can work with us to get binaries built for your platform. You can use these binaries with the <a href="">AVS Device SDK</a> to create multi-room music experiences.</p> <p style="margin-left:40px"><em>“Our customers are passionate about music. 
It is a true delight to be able to play synchronized music in multiple rooms of your house using just your voice, and we’re thrilled to give AVS developers access to the same Alexa MRM experience available on Amazon Echo devices,” said Priya Abani, Director, Amazon Alexa. “Developers, systems integrators, and ODMs can use this generally available Alexa MRM solution, and we can’t wait to see them bring the experience to more devices and customers over time.”</em></p> <h2>Product Spotlights</h2> <p>Many popular brands are integrating Alexa MRM into their speakers so customers can play synchronized music across a multi-brand group of Alexa built-in devices. This will be available to customers of these devices in the coming weeks via an over-the-air update.</p> <p>Harman Kardon, a premier speaker company, will issue a software update to enable Alexa MRM on their existing Alexa built-in Allure speaker, allowing their customers to play synchronized music in a group along with other Alexa devices. “Multi-Room Music allows customers to enjoy crisp, synchronized sound across multiple Allure speakers and Alexa-enabled devices,” said Dave Rogers, President, Consumer Audio at Harman Kardon. “A simplified configuration process done via the Alexa App makes Harman Kardon Allure speakers even more versatile and delightful for customers.”</p> <p>Sound United is adding Alexa MRM functionality to their recently launched Polk Command Bar: “The Polk Command Bar with Alexa built-in already sounds awesome and functions as an incredibly simple and convenient way to access music, movies, TV, and other content in your home entertainment center via voice,” said Brendon Stead, SVP Product Development and Engineering at Sound United. 
“Now with the addition of Alexa Multi-Room Music, users can easily group the Command Bar with other MRM-enabled or Alexa devices, regardless of brand, to distribute great-sounding audio content anywhere throughout the home.”</p> <p>Alexa MRM is just one of several new features available to speaker makers building with Alexa. In addition to today’s release, we are also working to bring additional capabilities to device makers, such as stereo pairs (to play left/right channels separately), subwoofer pairs (to combine speakers for more accurate reproduction of frequencies at the bass and mid/high ranges), and preferred speaker setup (to set a speaker or a multi-room music group for streaming music playback).</p> <h2>Things to Try with Alexa MRM</h2> <p>In the Amazon Alexa App, create a group of devices named <em>Living Room</em>.</p> <p>To initiate playback on the group:</p> <ul> <li>“Alexa, play Rolling Stones in the <em>Living Room</em>.”</li> </ul> <p>After playback begins:</p> <ul> <li>“Alexa, pause music in the <em>Living Room</em>.”</li> <li>“Alexa, pause.” (pauses the active group)</li> <li>“Alexa, next on the <em>Living Room</em>.”</li> <li>“Alexa, next.” (makes the active group play the next song)</li> <li>“Alexa, volume 5 on the <em>Living Room</em>.”</li> <li>“Alexa, volume 5.” (controls volume on the <strong>device you are speaking to</strong>)</li> </ul> <p>Interrupt during playback:</p> <ul> <li>“Alexa, what’s the weather?” (volume drops on the device you are speaking to, and weather info is played)</li> </ul> <h2>Get Started Today</h2> <ul> <li><a href="" target="_blank">Learn about Alexa for Speakers</a></li> <li><a href="" target="_blank">Read the MRM technical blog for details</a></li> <li><a href="" target="_blank">Learn about the NXP i.MX 8M dev kit with MRM</a></li> <li><a href="" target="_blank">Talk to Libre, Linkplay, and Tonly about their MRM solutions</a></li> </ul> <h2>New to AVS?</h2> <p>AVS makes it easy to develop 
products with Alexa built-in and bring voice-forward experiences to your customers. Through AVS, you can add a new natural user interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills. <a href="" target="_blank">Get Started</a>.</p> /blogs/alexa/post/a9acaabd-d5a0-4fdf-b74a-c20cfe4d4ef9/deliver-whole-home-audio-with-alexa-multi-room-music-and-device-targeting Deliver Whole-Home Audio with Alexa Multi-Room Music and Device Targeting Ted Karczewski 2018-09-20T18:59:07+00:00 2018-09-20T18:59:07+00:00 <p><img alt="" src="" /></p> <p>Learn how to get up and running with Alexa Multi-Room Music by understanding the right integration path for your use case. This blog explores two options for building whole-home audio experiences with Alexa.</p> <p><img alt="" src="" /></p> <p>Customers love listening to music on devices through Alexa. In most cases, they’re asking Alexa to play music on the same device where they ask for it. In others,&nbsp;they want to play music in different locations at the same time, or on different devices in the same space. For example, when they say, “Alexa, play jazz in the <em>Living Room</em>,” they expect jazz music to start playing in the living room. “Living Room” could point to a single device or a group of devices. The <a href="" target="_blank">Alexa Voice Service (AVS)</a> has multiple technologies available to help device makers deliver this experience to customers: Named Device Targeting, Alexa Multi-Room Music (MRM), and proprietary cloud-controlled multi-room. The cloud-controlled solution, enabled by a <a href="" target="_blank"><em>Connected Speaker Skill</em></a>, is for device makers that prefer to use their own proprietary multi-room synchronization protocol. 
Below is a simplified diagram showing how they fit together.</p> <p style="text-align:center"><img alt="" src="" style="height:377px; width:954px" /></p> <p>When the consumer says, “Alexa, play jazz in the <em>Living Room</em>,” the Named Device Targeting capability determines the meaning of “Living Room” and routes the play directive to a single device, cloud-controlled targets, or Alexa MRM targets as appropriate.</p> <p>While there are exceptions, the technologies are best summarized as follows:</p> <ul> <li>For synchronized audio playback of Alexa music service provider (MSP) content between AVS devices and Amazon devices, add Alexa Multi-Room Music functionality to the AVS client.</li> <li>For device makers with their own proprietary synchronization protocol to control devices without an AVS client, connect the device-maker cloud with the AVS cloud using a Connected Speaker Skill.</li> <li>For targeted playback on a single Alexa device, no action is needed. That functionality comes standard with Named Device Targeting.</li> </ul> <h2>Alexa Multi-Room Music</h2> <p><a href="" target="_blank">Alexa MRM</a> enables synchronized music playback across multiple devices. In the case above, “Living Room” is the name of an Alexa MRM virtual device group. Those devices can be any mixture of Amazon Echo branded devices and other Alexa MRM-compatible devices. Alexa MRM uses a sender and receiver model to play content. One device in the group is selected as the sender, which connects to the MSP to retrieve the content. The others are assigned as receivers and are instructed to connect to the sender and play the content in sync with it. Alexa MRM is optimized to ensure reliable playback and maintain synchronized clocks between the devices over a standard home Wi-Fi network. The distinction of Alexa MRM device group members as either senders or receivers is invisible to the consumer.</p> <p>Alexa MRM clients receive their directives from AVS. 
Since Alexa MRM group playback uses named targeting, that playback can be initiated from consumer input on one of the member devices in an Alexa MRM group or from a separate Alexa device. For example, in “Alexa, play jazz in the <em>Living Room</em>,” the named target is determined to be an Alexa MRM group and Alexa Named Device Targeting routes the request to be handled by Alexa MRM. The sender is instructed via the AVS downchannel to connect to the content source. Receivers are instructed via the AVS downchannel to connect to the sender to receive the content. All members in the group then play the content in sync.</p> <p><img alt="" src="" style="height:427px; width:954px" /></p> <p>The diagram above shows Alexa MRM playback to a group of three AVS devices different from the AVS device receiving the voice command. The four devices may include any Alexa MRM-compatible product (including Amazon Echo devices).</p> <h2>Getting Up and Running With Alexa MRM</h2> <p>For devices using the <a href="" target="_blank">AVS Device SDK</a>, getting up and running with Alexa MRM is straightforward. Some AVS development kits have Alexa MRM built-in, so you can just download and go.</p> <p><a href="" target="_blank">Learn about the NXP i.MX 8M dev kit with Alexa MRM &raquo;</a></p> <p>For other platforms, work with your Amazon business contact to get your platform evaluated for Alexa MRM compatibility. Once your platform is confirmed, Amazon will provide you with a pre-built Alexa MRM SDK binary, which you can integrate using the Alexa MRM patch for the AVS Device SDK.</p> <p>Music and Alexa go together naturally, and Alexa MRM brings that experience to consumers across their entire home.</p>
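<p>The sender and receiver roles described above can be modeled conceptually. This sketch (Python; the group, roles, and connection fields are illustrative, not an actual MRM API) shows how one group member fetches content from the music service provider while the rest synchronize to it:</p>

```python
# Conceptual sketch of Alexa MRM role assignment in a device group:
# one member (the sender) connects to the music service provider;
# the others (receivers) connect to the sender and play in sync.
def assign_roles(group):
    sender, *receivers = group
    return {
        "sender": {"device": sender, "connects_to": "music service provider"},
        "receivers": [
            {"device": device, "connects_to": sender} for device in receivers
        ],
    }

roles = assign_roles(["Echo Kitchen", "AVS Speaker A", "AVS Speaker B"])
```

<p>Which physical device actually becomes the sender is an internal optimization; consumers only see the whole group play in sync.</p>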