Amazon Developer Blogs Amazon Developer Blogs 2018-07-18T14:00:00+00:00 Apache Roller Weblogger /blogs/alexa/post/5ed87ab4-16ee-4e7d-8f6e-6c45a844b75d/best-practices-for-building-alexa-skills-for-fire-tv-cube Best Practices for Building Alexa Skills for Fire TV Cube Jennifer King 2018-07-18T14:00:00+00:00 2018-07-18T14:00:00+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>Here I’ll walk you through some tips for building an Alexa skill for Fire TV Cube, using core best practices for creating voice-first experiences for devices with screens.</p> <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>There are tens of millions of Alexa-enabled devices out there today that allow you to communicate with Alexa across multiple mediums. Amazon recently <a href="">launched the new Fire TV Cube</a>, which is the first hands-free 4K Ultra HD streaming media player with Alexa—delivering an all-in-one entertainment experience. Now, Alexa skill developers can create a voice-first experience with a large screen in mind.</p> <h2>First, Understand Customer Interactions with Fire TV Cube</h2> <p>Fire TV Cube can enable all published Alexa skills. This means that customers can use the skills you built for any Echo device, including Echo Show and Echo Spot, on Fire TV Cube.</p> <p>For the most part, you can assume that a customer using Fire TV Cube will interact with your custom skill as they would with any other Echo device with a screen. They will primarily use it for voice interactions, reference the Alexa app to see display cards, and then look toward the screen to view the display templates.</p> <p>As with any device, consider where and how a customer might be using your skill when crafting visual responses.
With Echo Spot, for example, we previously advised making the primary content of each template visible and recognizable from up to five feet away. For the Echo Show display, we advised designing templates that are visible from seven feet away. With Fire TV Cube, we suggest designing for viewing distances of 10 feet or more.</p> <p>So, how should you optimize your skill design across devices? The best approach is to keep it simple. Regardless of the visual response, the display should always be second to what Alexa is saying. Rely on your voice experience, and use the screens to complement that experience.</p> <h2>Set Up Your VUI to Support Your Fire TV Cube GUI</h2> <p>As with any multimodal skill, you can choose which interfaces—voice user interfaces (VUI) or graphical user interfaces (GUI)—you want to support. Regardless of the interface, you want to ensure you are delivering an experience that works well on any device.</p> <p>Developing an Alexa skill for a TV could easily be interpreted as creating a skill that relies heavily on visuals, as if they were necessary for a standout skill. This is not the case for voice-first experiences: if you do not incorporate screens into your custom skill, Fire TV Cube will simply display the cards you provide for the Alexa app.</p> <p>To add displays as a supported interface in your Alexa skill, you need to edit your skill’s manifest. There are two easy ways to do this—programmatically or through the Alexa Developer Console.</p> <p>In your skill.json, under apis.custom.interfaces, add a type RENDER_TEMPLATE.
This is essentially saying that you want your skill to support rendering templates, so a response including a render template directive should be interpreted as valid.</p> <pre> <code class="language-javascript">&quot;apis&quot;: { &quot;custom&quot;: { &quot;endpoint&quot;: { &quot;sourceDir&quot;: &quot;lambda/custom&quot; }, &quot;interfaces&quot;: [ { &quot;type&quot;: &quot;RENDER_TEMPLATE&quot; } ] } },</code></pre> <p>Once you have added the interface, you need to add the required display intents to your interaction model.</p> <pre> <code class="language-javascript">{ &quot;interactionModel&quot;: { &quot;languageModel&quot;: { &quot;invocationName&quot;: &quot;my custom skill&quot;, &quot;intents&quot;: [ { &quot;name&quot;: &quot;AMAZON.MoreIntent&quot;, &quot;samples&quot;: [] }, { &quot;name&quot;: &quot;AMAZON.NavigateHomeIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.NavigateSettingsIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.NextIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.PageUpIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.PageDownIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.PreviousIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.ScrollRightIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.ScrollDownIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.ScrollLeftIntent&quot;, &quot;samples&quot;: [] },{ &quot;name&quot;: &quot;AMAZON.ScrollUpIntent&quot;, &quot;samples&quot;: [] }, ...</code></pre> <p>Another way to do this is through the <a href="">developer console</a>. Navigate to your skill and scroll to interfaces. Once there, toggle “Display Interface,” save and build your skill. 
This will enable rendering templates in your skill and automatically add all of the required intents to your interaction model.</p> <p><img alt="" src="" style="display:block; margin-left:auto; margin-right:auto" /></p> <h2>Check for Display Support</h2> <p>Now that your skill supports display templates, you need to ensure you are incorporating them into your response at the appropriate time. Before adding a template to your response, check that the current device supports the display interface. To do so, you can use the supportsDisplay() function in the <a href="" target="_blank">Alexa Skill-Building Cookbook</a>. Then, within your skill code, you can use the function to determine whether you should include the template in your responseBuilder.</p> <pre> <code class="language-javascript">if (supportsDisplay(handlerInput)) { // insert render template code here }</code></pre> <p>We include this condition because your code needs to handle both cases: devices that do and do not support a screen.</p> <h2>Add Render Templates</h2> <p>There are two categories of display templates that you can use within your custom skill: Body Templates and List Templates. A great example of how both template types are handled can be seen in the <a href="">demo display directive</a>. Note that the skill was built with an Echo Show in mind, but the use of <a href="">Body Templates</a> and <a href="">List Templates</a> renders similarly across devices.</p> <p>Remember that each template is displayed alongside a speech response. Currently, you cannot render a template without a user prompting a response, and it is bad practice to render a template without Alexa saying something to go with it. Along with that, you can only render one template per response. The templates should help direct the conversation.</p> <p>At its core, a display template is just a JSON file. Each template has fields you can specify, such as title, backgroundImage, backButton, etc.
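</p> <p>The supportsDisplay() check referenced above boils down to inspecting the device object in the request envelope. Here is a minimal sketch of such a helper (a simplified stand-in for the cookbook version, not its exact code):</p>

```javascript
// Minimal sketch of a display-support helper (assumed envelope shape;
// the Alexa Skill-Building Cookbook version may differ in details).
function supportsDisplay(handlerInput) {
  const context = handlerInput.requestEnvelope.context;
  const hasDisplay =
    context &&
    context.System &&
    context.System.device &&
    context.System.device.supportedInterfaces &&
    context.System.device.supportedInterfaces.Display;
  return Boolean(hasDisplay);
}
```

<p>Devices without a screen simply omit the Display entry, so the helper returns false and your handler can fall back to a voice-only response.</p> <p>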
Regardless of which template you are using, at minimum, to create the display you need to follow three steps:</p> <ol> <li>Ensure the display is supported by calling supportsDisplay.</li> <li>Include a Display.RenderTemplate directive in your skill response.</li> <li>Set the type of your template to whatever you are trying to render.</li> </ol> <p>From there, you can specify various fields to build a complementary display. Here is an example of rendering ListTemplate2 in the <a href="">Alexa Skills Kit (ASK) Software Development Kit (SDK) for Node.js</a>.&nbsp;</p> <pre> <code class="language-javascript">if (supportsDisplay(handlerInput)) { handlerInput.responseBuilder.addRenderTemplateDirective({ type: 'ListTemplate2', backButton: 'visible', backgroundImage: '', title: 'This is my title', listItems: [ { image: '', textContent: 'This is my list item 1', }, { image: '', textContent: 'This is my list item 2', }, { image: '', textContent: 'This is my list item 3', } ]}); }</code></pre> <p>Templates can become large. To reduce the size of your skill code file, consider decoupling your template code from your skill code and hosting it or organizing it in a separate file. You can use data-binding logic to incorporate speech text into your templates. Doing so allows for quick editing, reuse of your templates across multiple skills, and more readable code.</p> <h2>Interpret Touches and Clicks</h2> <p>Remember that Fire TV Cube is not touch-enabled; the customer will be using a remote control. The touch components on the display templates are translated into selectable elements. Ensure that you are using the templates in a way that intuitively handles both touch and click. See the <a href="">Display Interface Reference</a> for more information.</p> <p>To accommodate clicks within your template, all you have to do is add a token attribute to whatever you want to be selectable.
Following our previous example, here is each list item with a token attribute, making the image and text together selectable.</p> <pre> <code class="language-javascript"> listItems: [ { token: &quot;listItem1&quot;, image: '', textContent: 'This is my list item 1', }, { token: &quot;listItem2&quot;, image: '', textContent: 'This is my list item 2', }, { token: &quot;listItem3&quot;, image: '', textContent: 'This is my list item 3', } ]</code></pre> <p>Every touchable or clickable item should lead to a response that is already incorporated into your voice interaction. In other words, every event from one of these items should be discoverable via voice. For example, when a customer clicks on a list item, the display might show them more information about that particular entry. Another way to navigate there would be for the customer to say, “Tell me more about list item one.”</p> <p>To handle the touch event fired from selecting one of these items, you need to incorporate the events in the canHandle of the appropriate intent. Doing so is a simple Boolean statement. Here is an example using the ASK SDK for Node.js:</p> <pre> <code class="language-javascript">canHandle(handlerInput) { const request = handlerInput.requestEnvelope.request; const hasBeenClicked = request.type === 'Display.ElementSelected' &amp;&amp; request.token === 'listItem1'; return hasBeenClicked || (request.type === 'IntentRequest' &amp;&amp; === 'MoreInfoIntent'); }</code></pre> <p>You can have multiple tokens navigate to the same intent and adjust the response according to whichever token was selected. Tokens can also be used alongside slot logic. For example, if a customer says, “Tell me more about list item one,” then “list item one” could be the slot value for the MoreInfoIntent.
You could use the same slot evaluation logic to evaluate what to do when the token has been clicked.</p> <h2>Video and Audio App Directives with Fire TV Cube</h2> <p>Particularly with Fire TV Cube, you will likely encounter customers who want to view something as part of the conversation. A <a href="">Video</a> or <a href="">Audio</a> App directive is easy to handle in your skill and to navigate to from within your templates. Both of these directives can be initiated via voice or click.</p> <p>To use the directives, you will also need to incorporate the Video Player or Audio Player interface into your skill manifest. Declaring these interfaces works the same way as declaring the Render Template interface.</p> <p>Both of these directives also have required built-in intents: AMAZON.PauseIntent and AMAZON.ResumeIntent.</p> <p>Beyond that, the implementation of both directives is very similar. You can reference the <a href="">Alexa Audio Player skill sample</a> to view the implementation. The sample demonstrates how to use single or multiple streams in your skill.</p> <h2>Next Steps and Helpful Links</h2> <p>When building a custom skill with display for Fire TV Cube, remember to build a voice-first experience with templates that are in harmony with what Alexa is saying. Cater your experience to a large audience and a wide set of Alexa-enabled devices.
And remember to keep it simple.</p> <p>Check out these resources to find out more about Fire TV Cube and how to build multimodal skills:</p> <ul> <li><a href="">Creating Skills for Alexa-Enabled Devices with Screens</a></li> <li><a href="">Learn More about the Video Skill API</a></li> <li><a href="">VUI &amp; GUI Best Practices</a></li> <li><a href="">How Hulu Built Its Living Room Experience with Amazon Alexa</a></li> </ul> /blogs/alexa/post/f7aad965-76f5-435f-86de-7861ece48709/how-to-create-a-self-test-room-and-evaluate-your-alexa-voice-service-integration How to Create a Self-Test Room and Evaluate Your Alexa Voice Service Integration Ted Karczewski 2018-07-17T15:01:09+00:00 2018-07-17T15:01:09+00:00 <p><img alt="" src="" /></p> <p>Testing your Alexa-enabled product in a variety of scenarios will help ensure that you bring a delightful hands-free Alexa experience to your customers.&nbsp;In this blog post, we’ll cover the minimum requirements for a self-test room that you can construct at your workplace.</p> <p><img alt="" src="" /></p> <p>Alexa offers your customers a new way to interface with technology – a convenient UI that enables them to plan their day, control their smart home devices, and access news and information. If you’re already building a voice-forward product with the <a href="">Alexa Voice Service (AVS)</a>, you’ll want to ensure you have the right setup to effectively test your integration prior to launch. Testing your Alexa-enabled product in a variety of scenarios will help ensure that you bring a delightful hands-free Alexa experience to your customers and encourage habitual usage of Alexa capabilities.</p> <h2>Preparing for Product Testing</h2> <p>We recommend a two-phased approach to testing your integration: self-tests, followed by product submission to Amazon. During self-tests, you will complete a series of exercises to ensure your product meets functional and user experience (UX) requirements.
If you’re building a far-field device or want to enable certain domains like music, you’ll also want to execute additional testing prior to launch. Before you start, make sure your integration adheres to our <a href="">Functional Requirements</a>, review our <a href="">UX Guidelines</a>, and set up your self-test environment.</p> <p>In this blog post, we’ll cover the minimum requirements for a self-test room that you can construct at your workplace.</p> <h2>Setting Up a Self-Test Room</h2> <p>Our experience has helped us home in on some standard characteristics that make a self-test room effective for voice performance evaluation. The room requirements are similar to those defined by the <a href="">European Telecommunications Standards Institute (ETSI)</a>, with some customization. Specifically, we are leveraging <a href="">ETSI EG 202 396-1 V1.2.2, Section 6.1</a>. Best of all, our recommendation doesn’t require you to build a specific type of room – you can enable self-testing in a typical office room. These are the minimum requirements:</p> <p><strong>Room Size</strong></p> <p>We recommend a room of at least 2.5 m x 3.5 m x 2.2 m. These dimensions refer to usable space.</p> <p><strong>Room Treatment</strong></p> <p>Look for a room with wall-to-wall carpet and some acoustical damping in the ceiling, such as ceiling tiles. If the room has a lot of windows or a large whiteboard, consider covering these with a set of curtains to avoid strong reflections off hard surfaces. We target a reverberation time between 0.2 s and 0.7 s over the frequency range captured by Alexa, 100 Hz to 8 kHz. For technical specifics we defer to ISO 3382-3:2012.
If you are building a test chamber, consider 10 Hz neoprene isolators and bass trap panels in the corners.</p> <p><strong>Noise Floor</strong></p> <p>To reduce the influence of unwanted noise on your results, the background noise of your test room should be less than 35 dBA.</p> <p><strong>Equipment</strong></p> <p>Your setup requires one noise speaker, four speech speakers, and your device under test, with at least 0.5 m of clearance between the walls, the test speakers, and your device. To control the output of the speech speakers and noise speaker, you will want a multi-channel sound card, such as the Roland Octa-Capture or RME Fireface.</p> <p><img alt="" src="" style="display:block; height:434px; margin-left:auto; margin-right:auto; width:500px" /></p> <h2>Getting Started with Self-Tests and Placement</h2> <p>The image above shows the placement and angles used for testing. After you’ve completed the self-test room setup, you can evaluate your Alexa integration and correct bugs uncovered during the process. For access to Amazon self-tests and the audio files required for testing, reach out to your Amazon point of contact or follow the instructions for how to launch with AVS. Refer to our <a href="">Product Testing overview</a> for more information on self-tests and how to submit your device for additional Amazon review.</p> <h2>New to AVS?</h2> <p>AVS makes it easy to integrate Alexa directly into your products and bring voice-forward experiences to customers. Through AVS, you can add a new natural user interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills.
<a href="">Get Started</a>.</p> /blogs/alexa/post/1a4f4efb-21e1-445b-af3e-1b740a12e6fd/things-every-alexa-skill-should-do-handle-the-unexpected-gracefully Things Every Alexa Skill Should Do: Handle the Unexpected Gracefully Jennifer King 2018-07-17T14:00:00+00:00 2018-07-17T14:00:00+00:00 <p><img alt="" src="" /></p> <p>Learn how to handle unexpected requests from customers, or instances when a customer says something that doesn’t map to any intents in your skill.</p> <p><img alt="" src="" /></p> <p><em>Editor's Note: This is an installment of our new series called </em><a href=""><em>Things Every Alexa Skill Should Do</em></a><em>, which highlights the important features and lessons that every skill builder can use to make their skills more engaging for customers. Follow the series to learn, get inspired, and build engaging Alexa skills.</em></p> <p>When building a skill, you might find yourself making assumptions about what a customer might say. It is important to make sure that you’re anticipating something completely outside your expected set of responses, and handling it in a way that allows the customer to get back on the rails.</p> <p>In the <a href="" target="_blank">Dev Tips skill</a>, for example, we encourage developers to ask the skill about the issue or topic that they want to know more about. This also means that there will be times that a customer says something we didn’t expect.</p> <p>When testing how a skill handles unexpected utterances, we like to use the phrase “pizza pie.” We use this to see if the skill handles words the developer didn’t plan for, and see how the skill responds. In the case of Dev Tips, “pizza pie” will deliver a response similar to this one:</p> <p style="margin-left:40px"><em>“I heard you say pizza pie. I’m sorry, I don’t know how to help you with that.”</em></p> <p>The skill acknowledges that it heard the customer, and it even repeats the words captured in the slot value so that the user understands why it missed. 
This gives the customer an opportunity to try their question again, or ask a different one.</p> <p>By handling these errors gracefully, the customer understands that what they requested wasn’t available, but they can continue to interact with the skill.</p> <p>To learn more, check out the <a href="" target="_blank">Alexa Voice Design Guide</a>. Or enable the new <a href="">Fallback Intent</a> in the Alexa Skills Kit built-in library to respond gracefully to out-of-domain requests to your Alexa skill.</p> <h2>Get the Guide: 10 Things Every Alexa Skill Should Do</h2> <p>With more than 40,000 skills in the Alexa Skills Store, we’ve learned a lot about what makes a skill great and what you can do to create incredible voice experiences for your customers. Download the complete guide about <a href=";sc_channel=SM&amp;sc_details=Blog1" target="_blank">10 Things Every Alexa Skill Should Do</a> for more tips, code samples, and best practices to build engaging skills.</p> /blogs/alexa/post/5f9f44b4-8e1c-499a-94e5-1b113c6e42a7/mario-johansson-begeistert-mit-seinen-ambient-sound-skills-f%C3%BCr-alexa Mario Johansson Delights with His Ambient Sound Skills for Alexa Kristin Fritsche 2018-07-17T08:00:00+00:00 2018-07-17T08:15:20+00:00 <p><img alt="DE_Mario_Johansson.jpg" src="" /></p> <p>Mario Johansson builds, among other things, ambient sound skills for Alexa. In this developer story, he explains how he got into skill development, shares his best practices, and offers advice to new skill developers.</p> <h2><img alt="DE_Mario_Johansson.jpg" src="" /></h2> <h2>Mario Johansson Delights with His Ambient Sound Skills for Alexa</h2> <p>Mario Johansson, owner and developer of <a href="" target="_blank">Envy Eden</a>, takes a professional interest in new technologies. As an IT consultant, he has to stay current in order to advise his customers well.
He first tried Alexa as a user before getting started with skill development himself. “I eased into it slowly, looked at the Alexa Skills Kit, and browsed through tutorials and the documentation. It isn’t hard; everything is explained really well,” says Mario.</p> <p>His first skill was also supposed to be useful, which gave Mario the idea of providing prayer times in his skill <a href=";ie=UTF8&amp;qid=1527585746&amp;sr=1-26&amp;keywords=envy+eden" target="_blank">Muslim</a>. “As a developer, you are well served by a skill that users open every day and that brightens or simplifies something in their daily routine,” Mario finds. His success proves him right: his users are enthusiastic, as the positive reviews in the <a href="" target="_blank">Alexa Skill Store</a> show.</p> <p>Soon afterwards he launched his successful ambient sound skills, which let users listen to sounds that match their mood. His skill with sounds for a <a href=";ref-suffix=ss_rw&amp;ref_=cm_sw_r_tw_a2s_a2s_ZXesBbRZP1X8N785&amp;sr=1-2&amp;keywords=guten+morgen" target="_blank">Guten Morgen</a> (good morning) even won the developer promotion in February, attracting the most users of all participants within its first 30 days.</p> <h2>Developing the Voice User Interface (VUI)</h2> <p>A good use case alone isn’t enough, Mario knows: “You have to think carefully about how users will want to use the skill. Before the actual development, I sketch out the different paths and options in the skill, and my VUI grows out of that.” Even so, you can’t foresee every way a skill will be used.</p> <p>“I actually work on my skills a lot, even if I only have time for them outside of work.
I regularly look at users’ reviews and feedback and improve the skills,” Mario reports, adding: “I think the new <a href="">Intent Request History API</a> in the Alexa Skills Kit is really cool; it gives you more information about how a skill is used.” The new API gives developers insight into frequently used phrasings and surfaces common user interactions with a skill. The data is provided in anonymized form.</p> <h2>The Development Process</h2> <p>These days Mario programs in Python locally on his machine and then uploads the code with the <a href="">ASK Command Line Interface</a>. He uses <a href="" target="_blank">AWS Lambda</a> as his backend.</p> <p>“I kept learning throughout development. For example, my streams for the ambient sound skills were too long at first, which left no further opportunity for interaction with the user; I have since changed that. In my skill <a href=";ref-suffix=ss_rw&amp;ref_=cm_sw_r_tw_a2s_a2s_JZesBbPGDXGDH" target="_blank">Quiz Million&auml;r</a> I later built in session management so that users can always continue playing from their last saved game,” says Mario.</p> <h2>How Do You Attract New Users?</h2> <p>An important question that occupies many skill developers: how do you make your skill known? Mario worked with keywords to increase visibility. “Organic search still works best. You should think carefully about the skill’s name and description and include sensible keywords; that way, search engines find the skill more easily. Before development I always do a keyword search and look at what performs well. The skill icon should be appealing, too.
I have also tried Facebook ads, which can be an additional option.”</p> <h2>Tips from Developer to Developer</h2> <p>“When you start out, it’s good to get a basic understanding of the subject. What is a Lambda function, how do I build a first skill, and so on. The <a href="">Alexa Blog</a> is a useful source; I actually stop by daily. The <a href="">free events</a> are also recommended, since you meet other developers there and can exchange ideas,” Mario advises.</p> <p>Mario has several plans for his skills. Besides optimizing for the screens of Echo Show and Echo Spot, a <a href="">skill for kids</a> is also on the agenda.</p> <h2>Resources</h2> <ul> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_Devs&amp;sc_segment=DEDevs" target="_blank">Voice Design Guide</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_DEDevs&amp;sc_segment=DEDevs">Alexa Events</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Start&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Start_DE_DEDevs&amp;sc_segment=DEDevs">Video Tutorial: Build Your First Skill</a></li> </ul> <h2>Build a Skill, Get a Developer Goodie</h2> <p>Make your Alexa skill idea a reality and take part in our <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_Devs&amp;sc_segment=DEDevs"
target="_blank">developer promotion</a>. All developers in Germany, Austria, and Luxembourg who build a German-language Alexa skill between July 1 and July 31, 2018, publish it in the Skill Store, and meet the <a href="" target="_blank">terms and conditions</a> will receive an Alexa developer shirt. If your skill reaches more than 100 unique users in the first 30 days after publication, it also qualifies for a two-pack of Echo Buttons. One skill developer additionally has the chance to win an Echo Spot. As soon as your skill is published, you can start spreading the word. <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=BlogDevStory&amp;sc_publisher=BL&amp;sc_content=Content&amp;sc_funnel=Visit&amp;sc_country=DE&amp;sc_medium=Owned_WB_BlogDevStory_BL_Content_Visit_DE_Devs&amp;sc_segment=DEDevs" target="_blank">Get started now and build your skill!</a></p> /blogs/alexa/post/fc3a3e60-30d7-45d7-8416-af264ff45169/alexa-kids-skills-now-available-in-australia-and-new-zealand Alexa Kid Skills Now Available in Australia and New Zealand James Ang 2018-07-17T00:00:00+00:00 2018-07-17T00:55:24+00:00 <p><img alt="" src="" style="height:240px; width:954px" /></p> <p>We are excited to announce that Alexa now supports kid skills in Australia and New Zealand. Aussie and Kiwi developers can now use the <a href="" target="_blank">Australia and New Zealand Alexa Skills Kit</a> to create skills that educate, entertain, and engage kids.</p> <p><img alt="" src="" style="height:240px; width:954px" /></p> <p>We are excited to announce that Alexa now supports kid skills in Australia and New Zealand. Aussie and Kiwi developers can now use the <a href="" target="_blank">Australia and New Zealand Alexa Skills Kit</a> to create skills that educate, entertain, and engage kids.
From interactive games to instructional skills, you can create engaging kid-friendly voice experiences that will entertain the whole family, and make them available in the <a href=";node=4931595051" target="_blank">Australia Alexa Skills Store</a>.</p> <h2>How It Works</h2> <p>Parents can turn on kid skills using the <a href=";nodeId=201549920" target="_blank">Alexa app</a>. When a kid skill is enabled for the first time, Alexa will ask the parent to turn on access to all kid skills by enabling the “Allow Kids Skills” toggle under Settings / Kids Skills in the Alexa app. Access to kid skills only needs to be turned on once to enable and use all kid skills on the parent’s account. As with any other Alexa skill, each kid skill will still need to be enabled separately. Parents can turn off all kid skills at any time by disabling the “Allow Kids Skills” toggle in the Settings menu of their Alexa app. When turned off, kid skills cannot be used on the parent’s account.</p> <p>Join developers around the world who have already created engaging and award-winning Alexa kid skills for the whole family. Get inspired by the winners of the <a href="" target="_blank">Alexa Skills Challenge: Kids</a>.</p> <h2>3 Resources to Get Started</h2> <p>You have a unique opportunity to reach a whole new audience with your skills, as kids are quick to take to voice experiences and talk to Alexa in a way that’s uniquely their own. This means there are a few key things to keep in mind as you build kid skills—from vocabulary to style of speech to areas of interest.
Here are 3 recommended resources to help you start building engaging kid skills:</p> <ol> <li>Download our guide on <a href="" target="_blank">6 Tips for Building Stellar Kid Skills</a> and <a href="" target="_blank">watch our video</a> for step-by-step tips and examples.</li> <li>Check out our tutorials and templates to get inspired, including the <a href="" target="_blank">Mix Master Kid Skill</a>, <a href="" target="_blank">Interactive Adventure Games</a>, <a href="" target="_blank">Quiz Games</a>, and <a href="" target="_blank">Trivia Games</a>.</li> <li>Review the <a href="" target="_blank">Certification Guidelines for Kid Skills</a> for best practices to ensure your skill is published smoothly.</li> </ol> <h2>Build Engaging Skills, Earn Developer Perks</h2> <p>Publish a new skill this month, including your engaging kid skill, and earn an exclusive Alexa developer t-shirt. If more than 30 unique users use your skill in the first 30 days, you could also earn an Echo Dot. <a href="" target="_blank">Learn more</a> about our promotion and start building today.</p> /blogs/appstore/post/ca8ce405-5674-4528-a81d-0584ff1d8bfb/announcing-monthly-gameon-developer-office-hours Announcing Monthly GameOn Developer Office Hours Tess Selim 2018-07-16T21:08:54+00:00 2018-07-16T21:26:01+00:00 <p><img alt="" src="" style="display:block; height:350px; margin-left:auto; margin-right:auto; width:700px" /></p> <p>During these office hours, you will be able to ask your technical questions, view live code demos, and discuss your GameOn use case. We will also explore best practices for competition management, real-world prizes, and more.<br /> &nbsp;</p> <p><img alt="" src="" style="display:block; height:350px; margin-left:auto; margin-right:auto; width:700px" />Over the past few weeks, I've had the opportunity to chat with Amazon GameOn developers at events like Nordic Game Jam and Develop: Brighton. 
The direct feedback and questions I receive from you help influence the GameOn product roadmap.<br /> <br /> I'm thrilled to extend this conversation to GameOn developers everywhere with the addition of monthly GameOn office hours. Our goal is to help you integrate GameOn seamlessly and offer the best competitive-play experiences to your players.<br /> <br /> During these office hours, you will be able to ask your technical questions, view live code demos, and discuss your GameOn use case. We will also explore best practices for competition management, real-world prizes, and more.<br /> <br /> Office hours are offered on the second Thursday of every month. Register to reserve your spot for August 9 and send in your questions.</p> <p><a href="" target="_blank"><img alt="" src="" style="display:block; height:50px; margin-left:auto; margin-right:auto; width:200px" /></a></p> <h2>Get started with GameOn</h2> <p>Are you new to GameOn? <a href="" target="_blank">Get started today</a>; it's quick and easy. Some developers have even been able to set up competitions in their games with GameOn in as little as one day. You can also learn more about how to integrate competitive play into your game by watching <a href="" target="_blank">this webinar</a> on-demand.</p> <h2>About GameOn</h2> <p>GameOn is a set of flexible APIs built on AWS cloud infrastructure that works on any operating system, giving you the ability to scale quickly, while allowing you to invest more time in what you do best—designing great games. With GameOn, you have an easy tool to bring more players in on the action—allowing them to compete for real-world prizes fulfilled by Amazon or other in-game rewards.
Drive engagement and increase monetization by adding leaderboards, leagues, and multi-round competitions to your games, or strengthen your fanbase by allowing players and streamers to create their own user-generated competitions.</p> /blogs/alexa/post/bc619d2d-53ce-42c4-a169-f9aec5bd4c12/alerts-interface-version-1-3-brings-new-capabilities-to-alexa-enabled-products Alerts Interface Version 1.3 Brings New Capabilities to Alexa-Enabled Products Ted Karczewski 2018-07-16T16:34:12+00:00 2018-07-16T16:34:12+00:00 <p><img alt="" src="" /></p> <p>Today we’re excited to announce Alerts interface version 1.3, which allows Alexa-enabled products to support several new features, including the deletion of multiple alerts, volume control using the Amazon Alexa App, and the selection of custom alert tones using the Amazon Alexa App.</p> <p><img alt="" src="" /></p> <p>The <a href="" target="_blank">Alerts interface</a> of the <a href="" target="_blank">Alexa Voice Service (AVS)</a> has enabled device makers to build voice-based alerts such as timers, alarms, and reminders into connected products. Today we’re excited to announce Alerts interface version 1.3, which allows Alexa-enabled products to support several new features:</p> <ul> <li>Deletion of multiple alerts (via the <em>DeleteAlerts</em> directive, and the <em>DeleteAlertsFailed</em> and <em>DeleteAlertsSucceeded</em> events)</li> <li>Control of alert volume using the Amazon Alexa App (via the <em>SetVolume</em> and <em>AdjustVolume</em> directives, and the <em>VolumeChanged</em> event)</li> <li>Selection of custom alert tones using the Amazon Alexa App (no API change)</li> </ul> <p><strong>Device makers who want to upgrade their Alexa-enabled products to support the new alerts features can follow these steps:</strong></p> <p><em>Note: If you are using the AVS Device SDK v1.8.1 or later, no further action is required.</em></p> <h2>1.
Familiarize yourself with the new Alerts interface version</h2> <p>For more details, please see the Alerts interface version 1.3 API documentation on the AVS Developer Portal.</p> <h2>2. Update your AVS client code to support Alerts 1.3</h2> <p>Update your device software to support the Alerts 1.3 interface version and use the Capabilities API to provide the complete list of interfaces and interface versions that your product supports.</p> <p>Please see the <a href="" target="_blank">Capabilities API documentation</a> for a more detailed discussion of the API and a complete set of use cases.</p> <h2>AVS Sample App on GitHub</h2> <p>The AVS Device SDK v1.8.1 and later supports Alerts version 1.3,&nbsp;the Capabilities API, and interface versioning. <a href="" target="_blank">Build your first prototype</a> with Raspberry Pi or <a href="" target="_blank">download the latest code</a> to see the new features.</p> <h2>What Is AVS?</h2> <p>AVS is a customizable suite of development tools and resources that make it easy to integrate Alexa directly into your products and bring voice-forward experiences to customers. Through AVS, device makers can add a new natural interface to their products and offer customers access to a growing number of Alexa features, smart home integrations, and skills. Visit the <a href="" target="_blank">AVS Developer Portal</a> to get started.</p> /blogs/alexa/post/9930d39c-1e06-4e9c-9e99-08ff011a50ff/recordings-and-resources-discovering-the-joy-of-voice Recordings and Resources: Discovering the Joy of Voice Jennifer King 2018-07-16T14:00:00+00:00 2018-07-16T17:34:55+00:00 <p>I recently concluded a Twitch series called the Joy of Voice, during which I built a Star Wars database Alexa skill from start to finish. 
Here is a quick recap of each episode and what viewers learned.</p> <p>Every week the Alexa team streams unique content on the <a href="" target="_blank">Amazon Alexa Twitch channel</a> to share voice design best practices, build skills from start to finish, and interact with the skill-building community. We stream shows covering most of the topics developers are interested in, and we also broadcast our <a href="">weekly developer office hours</a>.</p> <p>Some of our most popular Twitch streams are ones where we build voice experiences from scratch, giving viewers an end-to-end look at the skill-building process. I recently concluded a series of episodes for a show called the Joy of Voice, during which I built a <em>Star Wars</em> database Alexa skill from start to finish. I did all of the development on the stream, so you can watch the episodes in order and see how everything was created.</p> <p>Here is a quick recap of each episode and what viewers learned. Watch the on-demand videos below to check out the full series.</p> <h2>Episode 1: The One Where We Get Started</h2> <p>In this opening episode, we start creating the <em>Star Wars</em> database skill, which will allow a customer to ask about a droid, person, planet, vehicle, or weapon from the <em>Star Wars</em> universe. The episode covers invocation naming, creating and testing intents, and how to build a new AWS Lambda function to catch the user interactions from those intents. 
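To give a flavor of what that handler code looks like, here is a minimal sketch in the shape of an ASK SDK v2 intent handler. The intent and slot names ("LookupIntent", "entity") are hypothetical, and the request envelope and response builder are mocked inline so the sketch stands alone; a real skill would register the handler through the ask-sdk-core package and deploy it as an AWS Lambda function.

```javascript
// Sketch of an intent handler for a Star Wars database skill.
// "LookupIntent" and the "entity" slot are illustrative names only.
const LookupIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'LookupIntent';
  },
  handle(handlerInput) {
    const slot = handlerInput.requestEnvelope.request.intent.slots.entity;
    const speech = `You asked about ${slot.value}.`;
    return handlerInput.responseBuilder.speak(speech).getResponse();
  },
};

// Minimal stand-ins for the pieces Lambda and the SDK normally provide,
// so the handler can be exercised without a live Alexa request.
const handlerInput = {
  requestEnvelope: {
    request: {
      type: 'IntentRequest',
      intent: { name: 'LookupIntent', slots: { entity: { value: 'R2-D2' } } },
    },
  },
  responseBuilder: {
    speak(text) { this._speech = text; return this; },
    getResponse() { return { outputSpeech: this._speech }; },
  },
};

console.log(LookupIntentHandler.canHandle(handlerInput)); // true
console.log(LookupIntentHandler.handle(handlerInput).outputSpeech);
// You asked about R2-D2.
```

The canHandle/handle split is the core pattern: the SDK walks its registered handlers and dispatches the incoming request to the first one whose canHandle returns true.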
Watch the video below:</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related resources:</p> <ul> <li><a href="">Tips for Choosing an Invocation Name for Your Alexa Custom Skill</a></li> <li><a href="">Things Every Alexa Skill Should Do: Use a Memorable Invocation Name and Utterances</a></li> <li><a href="">Things Every Alexa Skill Should Do: Respond to Intents, Not Just Commands</a></li> </ul> <h2>Episode 2: The One With Custom Slots</h2> <p>Each of our intents has its own custom slot for the different types of data in our <em>Star Wars</em> skill, but managing that data can become a substantial task. We introduce an external data service called <a href="" target="_blank">Airtable</a>, which makes it easier for us to create content and manage skill data. Watch the video below:</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related resources:</p> <ul> <li><a href="">Why a Custom Slot is the Literal Solution</a></li> <li><a href="">How to Make It Easy for Teams to Contribute Content to Your Alexa Skill</a></li> </ul> <h2>Episode 3: The One With Entity Resolution</h2> <p>Now that we have a full database and customers can make requests for content from our skill, we need to be able to match a customer’s input to one of our records. This episode covers how to use entity resolution to accomplish this.
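To make that matching concrete, here is a hedged sketch of how a handler might walk the entity-resolution section of an incoming slot. The resolutionsPerAuthority structure mirrors the shape Alexa sends when a custom slot type defines synonyms; the slot value, canonical name, and id below are illustrative stand-ins, not values from the actual skill.

```javascript
// Resolve a spoken slot value to a canonical record via entity resolution.
// Falls back to the raw spoken value when no authority reports a match.
function resolveSlot(slot) {
  const authorities = (slot.resolutions || {}).resolutionsPerAuthority || [];
  for (const authority of authorities) {
    if (authority.status.code === 'ER_SUCCESS_MATCH') {
      // The matched canonical value, e.g. the record key in our database.
      return authority.values[0].value;
    }
  }
  // No match: fall back to whatever the customer literally said.
  return { name: slot.value, id: null };
}

// A slot payload in the shape Alexa produces for a synonym match
// (hypothetical ids; "artoo" is a synonym for the canonical "R2-D2").
const slot = {
  value: 'artoo',
  resolutions: {
    resolutionsPerAuthority: [{
      status: { code: 'ER_SUCCESS_MATCH' },
      values: [{ value: { name: 'R2-D2', id: 'droid-r2d2' } }],
    }],
  },
};

console.log(resolveSlot(slot)); // { name: 'R2-D2', id: 'droid-r2d2' }
```

The returned id can then be used directly as the lookup key into the skill's data store, so the rest of the skill never has to care which synonym the customer actually said.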
Watch the video below:</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related resources:</p> <ul> <li><a href="">Entity Resolution in Skill Builder</a></li> <li><a href="">Alexa Skill Teardown: Understanding Entity Resolution with the Pet Match Skill</a></li> <li><a href="">New Training Course: Learn How to Create Conversational Skills on Codecademy</a></li> </ul> <h2>Episode 4: The One Where We Change the SDK Version</h2> <p>While we were filming this Twitch series, we released version 2 of the Alexa Skills Kit (ASK) Software Development Kit (SDK) for Node.js. This episode focuses on porting our entire skill from version 1 to version 2 of the SDK. Watch the video below:</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related resources:</p> <ul> <li><a href="">Now Available: Version 2 of the ASK Software Development Kit for Node.js</a></li> </ul> <h2>Episode 5: The One With In-Skill Purchases</h2> <p>With in-skill purchasing, you can now make money with your Alexa skills. This episode shows how to monetize your skill by creating and adding in-skill products. We put some of our skill content behind a paywall and walk through the upsell experience that offers premium content to a customer. Watch the video below:</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="360" src="//" width="640"></iframe></p> <p>Related resources:</p> <ul> <li><a href="">Understanding the In-Skill Purchasing Experience</a></li> <li><a href="">How to Build an Alexa Skill with In-Skill Purchasing</a></li> <li><a href="">New Alexa Skill Sample: Add In-Skill Products with One-Time Purchases and Subscriptions</a></li> </ul> <p>We are currently planning our next series of episodes for the Joy of Voice (in addition to a number of other shows), and we would love your feedback.
Reach out to me at @jeffblankenburg on Twitter to let us know what you would like to see!</p> <h2>Join Us Every Week on Twitch</h2> <p>You can find the Alexa team on Twitch every week at <a href="" target="_blank"></a>. If you would like to get notifications every time we stream, make sure to follow our channel by clicking the purple heart icon. You can also tune in to Twitch every Tuesday at 1 p.m. PST for developer office hours. During these weekly one-hour sessions, a rotating cast of Alexa evangelists is available to answer your skill-building questions.</p> /blogs/alexa/post/df460d13-612d-4a33-b4b4-547ab119d99d/how-alexa-can-use-song-playback-duration-to-learn-customers-preferences How Alexa Can Use Song-Playback Duration to Learn Customers’ Preferences Larry Hardesty 2018-07-16T12:30:00+00:00 2018-07-16T14:43:19+00:00 <p>Bo Xiao, a machine learning scientist in the Alexa AI organization, describes his and his colleagues' Interspeech paper about using playback duration to gauge customer affinity for audio content.</p> <p>To be as useful as possible to customers, Alexa should be able to make educated guesses about the meanings of ambiguous utterances. If, for instance, a customer says, “Alexa, play the song ‘Hello’”, Alexa should be able to infer from the customer’s listening history whether the song requested is the one by Adele or the one by Lionel Richie.</p> <p><img alt="Hello.jpg" src="" style="border-style:solid; border-width:0px; float:left; height:217px; margin-left:10px; margin-right:10px; width:400px" />One natural way to resolve such ambiguities is through collaborative filtering, the technique that Amazon.com uses to recommend products: Alexa would simply choose the song that the customer is likely to enjoy more. But voice-service customers tend not to explicitly rate individual instances of the content they receive, in the way that customers rate individual products.
So a collaborative-filtering algorithm would have little data from which to deduce customer preferences. Moreover, customers on Amazon.com click to view items proactively, whereas voice-service customers often receive resolutions of requests passively. So a playback record does not necessarily indicate a customer preference for the played item.</p> <p>In a <a href="" target="_blank">paper</a> titled “Play Duration based User-Entity Affinity Modeling in Spoken Dialog System”, which we’re presenting at Interspeech 2018, my colleagues and I demonstrate how to use song-play duration as an implicit rating system, on the assumption that customers will frequently cancel the playback of songs they don’t want while permitting the playback of songs they do.</p> <p>We use machine learning to analyze playback duration data to infer song preference, and we use collaborative-filtering techniques to estimate how a particular customer might rate a song that he or she has never requested. Although we tested our approach on music-streaming records, it generalizes easily to any other streaming-data service, such as video or audiobooks.</p> <p>Amazon has long been a leader in the field of collaborative filtering. Indeed, last year, as part of its 20th-anniversary celebration, the journal <em>IEEE Internet Computing</em> chose a 2003 paper on collaborative filtering by three Amazon scientists as the one paper in its publication history that had best withstood the “<a href="" target="_blank">test of time</a>”.</p> <p>In our work, we frame the problem in the same way that the earlier paper does, but we use more contemporary techniques for solving it.</p> <p>Customers’ ratings for all the songs in a music service’s catalogue could be represented as an enormous grid, or matrix.
Each row represents a customer, each column represents a song, and the value entered at the intersection of any row and any column indicates the customer’s “affinity” for a particular song.</p> <p>With tens of millions of customers and tens of millions of songs, however, that matrix swells to unmanageable size. So our system learns how to <em>factorize</em> the matrix into two much smaller matrices, one with only about 50 columns and the other with only about 50 rows. Multiplying the two matrices, however, yields a good approximation of the values in the full matrix.</p> <p>As we train our machine-learning model, we divide playbacks into two categories: those that lasted less than 30 seconds and those that lasted more. The short playbacks are assigned a score of -1, the long playbacks a score of 1.</p> <p>Of course, playback duration is not a perfect proxy for affinity: a knock at the door might force a customer to turn off a beloved song just as it’s beginning, while a child’s shouts might pull a customer out of the room even though the song that’s playing is the wrong one. So we also add a weighting function, which gives the binary scores more or less weight depending on playback duration. For instance, during training, a score of -1 will receive a greater weight if it’s based on a one-second playback than if it’s based on a 25-second playback, and a score of 1 will receive a greater weight if it’s based on a three-minute playback than if it’s based on a 35-second playback.</p> <p>We experimented with several different weighting functions and found that a convex quadratic function gave better results than a concave quadratic function, a linear function, or a logarithmic function. 
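As an illustrative sketch only (the paper's exact function and coefficients are not reproduced here), such a scoring-and-weighting scheme might look like the following. The 240-second normalization cap and the squared-distance curve are assumptions made to keep the example self-contained.

```javascript
// Label a playback with an implicit score and a convex quadratic weight:
// playbacks under 30 seconds score -1, longer ones score +1, and weights
// grow with the duration's distance from the 30-second threshold.
const THRESHOLD = 30;      // seconds; the short/long playback cutoff
const MAX_DURATION = 240;  // assumed cap used only to normalize distances

function labelAndWeight(durationSeconds) {
  const score = durationSeconds < THRESHOLD ? -1 : 1;
  // Normalized distance from the threshold, clipped to [0, 1].
  const span = score < 0 ? THRESHOLD : MAX_DURATION - THRESHOLD;
  const d = Math.min(Math.abs(durationSeconds - THRESHOLD) / span, 1);
  // Convex in the distance: weights stay small near the threshold and
  // rise sharply for very short or very long playbacks.
  return { score, weight: d * d };
}

console.log(labelAndWeight(1));    // score -1, weight close to 1
console.log(labelAndWeight(25));   // score -1, small weight
console.log(labelAndWeight(180));  // score  1, larger weight
```

A one-second cancellation thus counts as strong evidence of dislike, while a 25-second playback barely moves the model, matching the intuition described in the surrounding text.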
That is, a function that gives extra weight to particularly short durations below the 30-second threshold and particularly long durations above it works better than those that spread weights out more evenly.</p> <p><img alt="weight_curve.jpg" src="" style="display:block; height:27px; margin-left:auto; margin-right:auto; width:500px" /></p> <p style="text-align:center"><em><sup>A graph of the weighting functions we evaluated</sup></em></p> <p>Another wrinkle in our system is that it doesn’t try to learn a matrix factorization that will exactly reproduce the values in the giant affinity grid; instead, it learns the factorization that will best preserve the <em>relative</em> values of any two entries in the grid. This approach is known to help the system generalize better and avoid fixating on noisy observations.</p> <p>Because we don’t have a ground truth against which to measure our system’s predictions — such as customers’ ratings of the audio content they’ve streamed — we evaluated its performance by correlating the inferred affinity scores with the playback durations. For evaluation, we used data collected on dates other than those we had used to train the model. The correlation was strong enough to demonstrate the effectiveness of the modeling approach, given the challenge of working only with implicit observations. In the future, we plan to incorporate lexical information about customer requests into the model and to move the technique into production.</p> <p><em>Bo Xiao is a machine learning scientist in the Alexa AI organization.
He and colleagues will present their work at the Interspeech conference in September.</em></p> <p><a href="" target="_blank">Paper</a>: “Play Duration based User-Entity Affinity Modeling in Spoken Dialog System”<br /> Acknowledgements: Nicholas Monath, Shankar Ananthakrishnan, Abishek Ravi</p> <p>Related:</p> <p><a href="" target="_blank">HypRank: How Alexa Determines What Skill Can Best Meet a Customer’s Need</a><br /> <a href="">The Scalable Neural Architecture behind Alexa’s Ability to Select Skills</a><br /> <a href="" target="_blank">Where computer science and linguistics meet</a><br /> <a href="" target="_blank">Making Alexa More Friction-Free</a></p> <p><sub><em>Photo credits: DFree / (Adele); Anthony Mooney / (Lionel Richie)</em></sub></p> /blogs/alexa/post/894cc4cd-59e7-4782-af41-85c2d5d5cb97/now-your-voice-first-skills-can-shine-on-fire-tablets Now Your Voice-First Skills Can Shine on Fire Tablets Metty Fisseha 2018-07-13T21:37:46+00:00 2018-07-13T21:37:46+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>The Show Mode update allows US customers to use their Fire Tablet as an Echo Show. The Show Mode Charging Dock props up the Fire Tablet while charging and defaults to Show Mode, allowing for even greater ease in hands-free use of voice-first skills on Alexa-enabled devices in the US.</p> <p><img alt="" src="" /></p> <p>The new Show Mode and <a href=";ie=UTF8&amp;qid=1531337378&amp;sr=8-1&amp;keywords=echo+show+mode" target="_blank">Show Mode Charging Dock</a> for Fire HD 8 and 10 tablets deliver a full-screen Alexa experience identical to that of Echo Show to customers in the US.
When you enhance your voice-first skills in the US for devices with screens, customers can enjoy them on Echo Show, Echo Spot, Fire TV Cube, and now Fire HD 8 and 10 tablets.</p> <h2>How Customers Experience Alexa Skills on Fire Tablets</h2> <p>Through Show Mode, a software update for Fire HD 8 and 10 tablets, customers can toggle between an Echo Show-style interface and the standard tablet interface. When in Show Mode, customers experience skills the same way they would on an Echo Show. The Show Mode Charging Dock props up the Fire tablet and defaults it to Show Mode while charging, making it even easier for customers to enjoy hands-free Alexa skills.</p> <h2>Design Engaging Voice-First Experiences for Devices with Screens</h2> <p>If your Alexa skill already supports screen displays, it will work in Show Mode with no changes required. If your skill does not yet support displays, here are some tips for designing voice-first experiences that prove engaging across all Alexa-enabled devices:</p> <ol> <li>Voice needs to be the primary interaction method with Alexa, even when designing for devices with screens. Consider the display as a way to enhance your skill. Design your voice interaction first, then think about how you can <a href="">add visual elements to</a> support display on Alexa-enabled devices with a screen.</li> <li>It is essential that you choose the <a href="" target="_blank">right templates</a> to develop visual experiences that work across devices.</li> <li>When designing for screen devices, it is important that your content is easy to consume.
Consider brevity, arrangement, and pacing when you are writing your dialogue and designing your visuals.</li> </ol> <p>Learn more about <a href="" target="_blank">best practices to build skills for Echo devices with screens</a>.</p> <h2>Make Money by Creating Engaging Skills Customers Love</h2> <p>You can make money through Alexa skills using <a href="">in-skill purchasing</a> or <a href="">Amazon Pay for Alexa Skills</a>. You can also earn rewards for eligible skills that drive some of the highest customer engagement through <a href="">Alexa Developer Rewards</a>. <a href="" target="_blank">Download our guide</a> to learn which product best meets your needs.</p>