Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2019-01-19T00:14:49+00:00
/blogs/alexa/post/0a9d8388-ef40-4da0-ad22-fa7d08ee867c/alexa-gadgets-toolkit-beta-how-it-makes-the-twerking-bear-groove-to-amazon-music Alexa Gadgets Toolkit (Beta): How It Makes the Twerking Bear Groove to Amazon Music Karen Yue 2019-01-17T18:07:11+00:00 2019-01-17T18:11:40+00:00 <p><img alt="Twerking Bear" src="" style="height:480px; width:1908px" /></p> <h2>Alexa Gadgets Toolkit (Beta): How It Makes the Twerking Bear Groove to Amazon Music</h2> <p>Today, we are excited to announce a new addition to the <a href="" target="_blank">Alexa Gadgets Toolkit (Beta)</a> – the <a href="" target="_blank">MusicData Interface</a>, which enables you to build Bluetooth-connected gadgets that respond to Amazon Music playing on an Echo device. Tens of millions of Alexa customers enjoy playing music on their Echo devices, and we’re excited to give you the ability to make that experience even more enjoyable with accompanying gadgets.</p> <p>So how does it work? When a customer says “Alexa, play popular songs on Amazon Music” and the first song begins, a single value representing the average tempo of the song (expressed in beats per minute) is sent to the gadget in the form of a <a href="" target="_blank">Tempo</a> directive. Another directive is sent when a song stops or transitions to another song on the Echo device. With these directives, you can choose how you want your gadget to react to the shared BPM data. For example, you can map the data to control the speed of motors, flash colors based on predefined “fast” songs, and more. There are endless possibilities for you to build gadget experiences for Alexa customers to enjoy with music.</p> <p>The functionality provided through the MusicData Interface will continue to grow, but here are some ways you can use tempo data to get started on your first gadget: build a hula girl that sways its hips, a disco ball that lights the room, or even a bear that twerks when Amazon Music is playing on an Echo device.</p> <h2>Alexa Gadgets Spotlight: MusicData Interface + Twerking Bear, by Gemmy Industries</h2> <p>Gemmy Industries introduced its customers to the world of twerking plush animatronics in 2015. Now, it has extended the fun to Alexa customers with <a href="" target="_blank">Twerking Bear</a> – an animated plush that brings fun into the home of Echo device owners. Twerking Bear is the first commercial product that uses the MusicData Interface.</p> <p>The diagram below illustrates how all of this works. First, a customer requests a song from Amazon Music on their Echo device, and the command is processed in the cloud. Then, the Alexa Gadgets tempo directive with the BPM value is sent from the Echo device to Twerking Bear via Bluetooth. Based on the BPM value shared, Twerking Bear moves its body motors in time with the song, pauses its motors when the song stops, and adjusts the speed of its motors when a new song begins.</p>
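<p>To make this concrete, here is a minimal sketch, in Python, of how a gadget might react to a Tempo directive. The payload shape follows the flow described above (a BPM value per song), but treat the exact field names as illustrative, and the motor-control function is a hypothetical stand-in for your own hardware code:</p> <pre> <code>def set_motor_speed(duty_cycle):
    """Hypothetical motor driver; replace with your gadget's PWM code."""
    print("motor duty cycle -> {:.2f}".format(duty_cycle))

def on_tempo_directive(payload):
    # Illustrative payload: {"tempoData": [{"value": 120}]}
    for tempo in payload.get("tempoData", []):
        bpm = tempo["value"]
        if bpm == 0:
            # Treat a zero tempo as "music stopped" (illustrative assumption).
            set_motor_speed(0.0)
        else:
            # Map roughly 60-180 BPM onto a 0.2-1.0 duty-cycle range, clamped.
            duty = 0.2 + 0.8 * min(max((bpm - 60) / 120.0, 0.0), 1.0)
            set_motor_speed(duty)
</code></pre>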
<p><img alt="MusicData Interface" src="" style="display:block; height:434px; margin-left:auto; margin-right:auto; width:800px" /></p> <h2>Other Gadget Interfaces Used by Twerking Bear</h2> <p>In addition to music, Twerking Bear responds to speech, timers, alarms, reminders, wake word, and notifications – all of which are available in the Alexa Gadgets Toolkit. You can check out <a href="" target="_blank">Alexa Gadgets Toolkit (Beta): How It Makes the Echo Wall Clock Tick</a> for more detail on these Gadget Interfaces.</p> <p>Let’s take a look at how Twerking Bear responds to the following directives that are sent from an Echo device:</p> <h3>Speech</h3> <p>When Alexa starts speaking on an Echo device, <a href="" target="_blank">Speechmarks</a> Directives (SpeechData Interface) are sent to Twerking Bear with speechmark data. The Speechmarks Directives contain visemes, which are mouth positions that correspond to spoken sounds. Twerking Bear moves its mouth motors to visualize Alexa's text-to-speech (TTS) output, so that when a customer asks &quot;Alexa, what's the weather,&quot; they can watch Twerking Bear move its mouth in sync with Alexa's response.</p> <h3>Wake Word Detection</h3> <p>When the wake word “Alexa” is detected on an Echo device, a <a href="" target="_blank">StateUpdate</a> Directive (StateListener Interface) is sent to Twerking Bear, which includes the name and value of the state type. For the wakeword state, the valid values are active and cleared. When active, Twerking Bear tilts its head and moves its body motors in a dance movement that acknowledges the customer’s intent. This directive is also used for Twerking Bear to react to timers, alarms, and reminders on an Echo device.</p> <h2>Get Started with the Alexa Gadgets Toolkit</h2> <p>The Alexa Gadgets Toolkit is available in the US, UK, and Germany, and you can create your own gadget using one or more of the available <a href="" target="_blank">Gadget Interfaces</a>. The possibilities for designing delightful gadgets for customers to enjoy are endless. Visit our resources below to get started:</p> <ol> <li><a href="" target="_blank">Alexa Gadgets technical documentation</a></li> <li><a href="" target="_blank">Sample code</a> on GitHub</li> <li><a href="" target="_blank">Sign up for our email list</a> to get updates</li> </ol> <p>We can’t wait to see what you build!</p> /blogs/alexa/post/6d4927d1-0b13-470f-8ab8-0e3aa128a5f4/best-practices-for-using-imagery-and-text-in-a-multimodal-alexa-skill Best Practices for Using Imagery and Text in a Multimodal Alexa Skill Jennifer King 2019-01-17T15:00:00+00:00 2019-01-17T15:00:00+00:00 <p><img alt="" src="" style="display:block; height:368px; margin-left:auto; margin-right:auto; width:900px" />Now, with the Alexa Presentation Language, you can enhance voice-first experiences using imagery, typography, and text within a skill. Follow these best practices for using graphics and text within your multimodal skill.</p> <p>Visuals play an important part in our daily lives, whether we are conscious of it or not. Every day we pick up on visual cues from our surroundings–the environment, coworkers, friends, and families–that help us navigate through the world. Maybe it's the flashing of a “Don't Walk” sign to warn us of oncoming traffic, or the look on a family member's face after a long day that indicates they need a little extra attention.
Regardless of the context or situation, we use these visuals and visual cues to guide and enhance our daily experiences.<br /> <br /> Similarly, in the world of Alexa skills, we can enhance voice-first experiences using imagery, typography, and text within a skill to provide these types of visual cues to customers and help them navigate through a skill experience on devices with screens. And when developers add visual and touch interactions to a skill, it enables Alexa to provide a richer, more engaging voice-first experience. By combining voice with complementary visuals, you can deepen the connection and engagement between your skill and your target audience.<br /> <br /> To achieve this, follow these best practices for using imagery and text within your <a href="">multimodal skill</a>.</p> <h2>Using Imagery in Your Alexa Skills</h2> <p><img alt="" src="" style="display:block; height:245px; margin-left:auto; margin-right:auto; width:900px" /></p> <p>When adding images and other visual elements, like backgrounds and icons, ask yourself the following:<br /> <br /> <strong>Does this image complement the voice response and add additional clarity or context for the customer?</strong> An image is worth a thousand words, and contextually relevant images are a great way to bring visuals to your skill while providing information that may be overwhelming to hear by voice. Use images thoughtfully to enhance your voice response by adding complementary metadata and cues for customers as they move through the experience. Just make sure the images you show the customer in your visual response are in harmony with what Alexa is saying, and with the customer's current request.<br /> <br /> <strong>Can a customer differentiate this image from others offered in the same visual response?</strong> Long lists of titles or similar-looking search results can lead to high cognitive load or additional friction for customers. Displaying differentiating images for each item in a search result will help simplify the customer's choices and allow them to easily scan and select the item they are looking for.<br /> <br /> <strong>Do the background images enrich the visual experience without interfering with, or distracting from, the primary content layered on top? </strong>Layering text over images in the background of your layout is an easy way to provide texture to the content shown on the screen. But resist the urge to overly complicate your visual responses. Also make sure to apply a colored opacity layer, or scrim, over your image to help with the legibility and accessibility of the foreground text.<br /> <br /> <strong>Is the imagery device specific?</strong> Using the Alexa Presentation Language (APL) and conditional logic, it's possible to send different variations of your experience to different devices, meaning you can now tailor your imagery per device. This could include sending appropriate image resolutions, or even different background art that scales correctly according to the <a href="">Viewport Property</a> of the device, as in the sketch below.</p>
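<p>Here is an illustrative sketch of that kind of conditional image selection in an APL document, shown as a Python literal for readability. The <em>@viewportProfile</em> and <em>@hubRoundSmall</em> resources come from the alexa-styles package; the image URLs are placeholders:</p> <pre> <code># Illustrative APL fragment (as a Python dict): swap the background image
# based on the device's viewport profile. URLs are placeholders.
background = {
    "type": "Container",
    "items": [
        {
            # Small round hubs get a square-cropped, lower-resolution asset.
            "when": "${@viewportProfile == @hubRoundSmall}",
            "type": "Image",
            "source": "https://example.com/background-small.jpg",
            "width": "100vw",
            "height": "100vh",
            "scale": "best-fill",
        },
        {
            # All other viewports get the full-resolution landscape art.
            "when": "${@viewportProfile != @hubRoundSmall}",
            "type": "Image",
            "source": "https://example.com/background-large.jpg",
            "width": "100vw",
            "height": "100vh",
            "scale": "best-fill",
        },
    ],
}
</code></pre>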
<p>Remember, if an image is large, it may be slow to load, or may not load entirely on the screen, leading to a broken experience for your customer.<br /> <br /> Learn more about <a href="">images</a> and <a href="">imagery</a> in the Alexa Design Guide.</p> <h2>Using Text in Your Alexa Skills</h2> <p><img alt="" src="" style="display:block; height:539px; margin-left:auto; margin-right:auto; width:700px" /></p> <p>When adding text and applying styling to the text in your multimodal responses, consider the following:<br /> <br /> <strong>Is the text legible at different distances?</strong> <a href="">Text size</a> is critical to the readability and accessibility of your visual experience. Because a customer may not always be near a device, we recommend using type sizes that are larger than what you may be accustomed to. This will ensure that if a customer is far from their device, they can still quickly read the content being shown on screen.<br /> <br /> <strong>Has the text been styled appropriately to add visual and information hierarchy?</strong> By taking advantage of a combination of font sizes, weights, and markup tags (such as italics, strong, underline, etc.), developers can create visual and information hierarchy within their responses. This level of detail helps customers identify important information at a glance and distinguish between different pieces of information, improving recognition and comprehension of what they read. To make typography easier for developers, APL offers a variety of <a href="">type scales</a> to help.<br /> <br /> <strong>Are the line lengths of your text responses appropriate for the device and for your content?</strong> The right <a href="">line length</a> can decrease eye fatigue and increase a customer's comprehension of the text being presented by Alexa on their screen. When addressed successfully, line length can also drastically change the look and feel of each response, creating a higher level of visual polish for the skill. For text within your multimodal skill, we recommend a maximum line length of 40 characters for shorter text, such as headlines or titles, and 60 characters for longer-form text, such as body copy.<br /> <br /> <strong>Have hints been used for visual and contextual help?</strong> Styling text as hints is the ideal way to provide additional contextual help to customers. Hints are a great way not only to help customers identify and learn what to say as they interact with your skill, but also to help customers discover new functionality. Remember, whenever possible, hints should be contextually relevant, and the hint style should be used exclusively for showing a customer what they can say to invoke Alexa as they use your skill.<br /> <br /> Learn more about <a href="">text</a> and <a href="">typography</a> in the Alexa Design Guide.</p> <h2>Last Chance to Enter the Alexa Skills Challenge: Multimodal</h2> <p>In addition to building a visually rich Alexa skill with APL, you can <a href="">enter the Alexa Skills Challenge: Multimodal with Devpost</a> and compete for cash prizes and Amazon devices. We invite you to participate and build voice-first multimodal experiences that customers can enjoy across tens of millions of Alexa devices with screens.
<a href="">Learn more</a>, start building APL skills, and enter the challenge by January 22.</p> <h2>Related Content</h2> <ul> <li><a href="">Alexa Design Guide: Visual Experiences</a></li> <li><a href="">Alexa Design Guide: Presentation </a></li> <li><a href="">New Alexa Skill Sample: Learn Multimodal Skill Design with Space Explorer</a></li> <li><a href="">Alexa Skill Teardown: Building the Interaction Model for the Space Explorer Skill</a></li> <li><a href="">How to Quickly Update Your Existing Multimodal Alexa Skills with the Alexa Presentation Language</a></li> </ul> <p>&nbsp;</p> /blogs/alexa/post/b46d8c0a-acf8-4360-9efc-59d314847d8b/amazon-alexa-at-ces-20191 What's New with Amazon Alexa at CES 2019 Adam Vavrek 2019-01-17T13:57:51+00:00 2019-01-18T19:37:39+00:00 <p><a href="" target="_self"><img alt="CES 2019 Recap: Amazon Alexa" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></a></p> <p>CES is officially a wrap for 2019. More than 100 Alexa products were announced at the four-day event, where new voice experiences spanned a variety of categories, including smart home devices, TVs, tablets, and even cars with Alexa built-in.</p> <p><img alt="Event Recap: Amazon Alexa at CES 2019" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <p>CES is officially a wrap for 2019. More than 100 Alexa products were announced at the four-day event, where new voice experiences spanned a variety of categories, including smart home devices, TVs, tablets, and even cars with Alexa built-in. Tens of thousands of attendees visited Amazon’s four&nbsp;<a href="" target="_blank">public exhibits</a>, where they interacted with technologies that show how Alexa makes life easier when you’re at home, work, and on-the-go. Hundreds of devices were on display at our booth from companies like LG, Lenovo, Audi, Bosch, North, Razer, and many more.&nbsp;</p> <p>Leading into the show, Dave Limp, SVP of Amazon Devices, revealed that more than <a href="" target="_blank">100 million devices with Alexa</a> have been sold. As witnessed at CES 2019, Amazon is already busy supporting the next generation of voice experiences, enabling companies from around the world to add Alexa into their products.</p> <p><img alt="Alexa Voice Service New Solutions Unveiled CES 2019" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>New Solutions for Device Makers</strong></h2> <p>Amazon Alexa debuted many solutions designed to help developers and device makers create products with Alexa.</p> <p>Two new <a href="" target="_blank">Original Device Manufacturers</a> (ODMs) were announced: <strong>SGW Global’s</strong> Mono wireless home telephone with Alexa built-in, and <strong>Sirena Vood</strong>, the world’s smallest portable speaker with Alexa built-in.</p> <p>Three new <a href="" target="_blank">Systems Integrators</a> (SIs) also debuted. <strong>C-Chip</strong> provides device makers with customized hardware and software solutions for creating products with Alexa. <strong>Sagemcom</strong> provides audio/video solutions for service providers. 
<strong>StreamUnlimited’s</strong> StreamSDK is a high-level, multi-dimensional software solution featuring AVS and qualified by Amazon.</p> <p>Updates to two development kits were also announced: <a href="" target="_blank">Microsemi’s AcuEdge</a>, with multiple mic-array configurations, and <a href="" target="_blank">Synaptics’ AudioSmart</a>, which is now able to support development of far-field devices with only two microphones.</p> <p>In Alexa’s public exhibit in the Venetian, developers spoke with Solutions Architects from Amazon. The area, called the <em>Speakeasy</em>, was focused on solutions that make it easier for developers and device makers to build with Alexa. There, hundreds of attendees received technical guidance and learned about integrating Alexa into their products.</p> <p><img alt="New Alexa Devices CES 2019" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>New Alexa Devices</strong></h2> <p>Device makers introduced many new products with Alexa, ranging from pianos to bikes to bed frames to routers to vehicle navigation systems. Some new products are listed below.</p> <p><strong>Alexa Built-in Devices</strong>: <strong>Lenovo </strong>kicked off CES by announcing Smart Tabs, which cleverly function as 2-in-1 tablets with Alexa built-in and Show-Mode visuals when docked as a smart screen. <strong>LG</strong> announced the roll-up OLED TV with Alexa built-in. <strong>Razer </strong>announced that it will be integrating Alexa into its gaming platform via Razer Synapse 3 to let users control aspects of the gaming experience by voice.&nbsp;<strong>Kohler </strong>revealed several new products for the smart home, including bathroom devices and lighting arrangements. The <strong>Vuzix</strong> Blade AR headset is the first pair of Alexa-compatible smart glasses. <strong>First Alert </strong>unveiled the Onelink Bell, a new smart doorbell with Alexa built-in. <strong>ASUS Networking </strong>announced the Asus Lyra Voice, a mesh Wi-Fi router with Alexa built-in. <strong>Jabra </strong>announced the Elite 85h headphones with hands-free access to Alexa.<strong> JBL </strong>LIVE Series headphones include four models with Alexa. <strong>Halfords</strong> announced the Cybic Legend bike, the first smart bike with Alexa built-in for voice-based GPS, intelligent cycling performance data, and more. <strong>Archos</strong> announced its Mate smart display. <strong>Petcube </strong>announced two new smart pet cameras and a treat-flinging mechanism.<strong> Legrand/Bticino</strong> announced a far-field wall-mounted switch with Alexa built-in. <strong>Sony </strong>announced its over-ear, noise-canceling wireless headphones. <strong>Roland Corporation </strong>announced the GO:PIANO with Alexa built-in – the first instrument with Alexa functionality. <strong>Robotemi </strong>announced they are bringing Alexa to their Temi robot, which includes a screen and can control lights, make video calls, and more. <strong>Dux </strong>announced they’re working with Stelle to create a new bed frame with Alexa built-in. <strong>Cleer </strong>announced its Mirage Smart Display Speaker. <strong>Acuity Brands </strong>featured the Juno AI smart home downlight, the first in-ceiling lighting retrofit kit with Alexa built-in. <strong>Leviton </strong>announced the development of the new Decora Voice Dimmer light switch with Alexa built-in.
<strong>HiMirror </strong>announced mirrors with an entertainment center and Alexa built-in.</p> <p><strong>Smart Home Devices with Alexa:</strong> <strong>LG </strong>announced their connected TWINWash washing machines and dryers will work with Alexa. <strong>Ikea </strong>announced new Kadrilj and Fyrtur smart roller blinds, both of which can be controlled through Alexa. <strong>Hamilton Beach </strong>announced a new smart coffee maker with Alexa. <strong>EZVIZ </strong>announced a new video doorbell and camera kit, both of which are compatible with Alexa. <strong>Nanoleaf </strong>announced light panels that can be controlled by Alexa. <strong>Somfy </strong>introduced its Tahoma gateway hub, which has a Zigbee 3.0 radio built in to customize and automate motorized shades. <strong>Panasonic </strong>introduced the GZ2000, a custom-made Professional Edition 4K OLED panel TV that works with Alexa. <strong>C by GE</strong>, a family of smart wall switches, full-color lights, and accessories compatible with Alexa, was also announced. <strong>Moen </strong>announced upgrades to its Alexa voice control. <strong>Samsung</strong> announced its expanded vision of connected living with improvements in smart technology and international expansion. <strong>Hisense </strong>announced it would add Alexa into its smart TVs. <strong>Lutron</strong> is adding fan control to its Caseta smart lighting system. <strong>Gourmia</strong> announced a deluxe multicooker that works with Alexa.<strong> ShadeCraft</strong> added Alexa voice controls to its Bloom parasol. <strong>Daikin </strong>introduced the Daikin One smart thermostat. <strong>Trifo </strong>unveiled its first home robot, a smart vacuum with Alexa. <strong>Robotics </strong>announced a new line of cleaning robots, including an AI-powered floor cleaning robot and a smart autonomous air purifying robot, both of which work with Alexa. <strong>iHome </strong>debuted two new portable, battery-powered Alexa AMA smart speakers. <strong>Tuya Smart </strong>announced a new home security system that uses facial recognition technology to identify each member of a family using just a photograph. <strong>Klipsch</strong> unveiled five new soundbars.<strong> </strong><strong>Altro</strong> showcased a new smart lock that will enable users to control their deadbolts through Alexa.</p> <p><strong>Alexa Auto</strong>: <strong>Byton </strong>announced its new electric SUV will incorporate Alexa for voice control. <strong>iOttie </strong>announced the Easy One Touch Connect car phone mount with Alexa built-in.<strong> Luxoft </strong>announced Alexa will be integrated into vehicle dashboards, infotainment, and navigation systems.<strong> Telenav </strong>announced Alexa will be integrated into its automotive navigation system. <strong>BOSS Audio Systems </strong>demonstrated the LXA5, a Double Din head unit with Alexa built-in. <strong>Qualcomm </strong>showcased in-vehicle experiences for next-generation vehicles using Alexa. <strong>HERE Technologies </strong>will integrate Alexa with their navigation and location services; Alexa will come pre-integrated within HERE Navigation On-Demand. <strong>Pioneer </strong>announced plans to add Alexa to its multimedia receivers. <strong>Abalta Technologies </strong>announced plans to integrate Alexa into its WebLink connected car platform, enabling drivers to access Alexa on WebLink-enabled in-vehicle infotainment systems. <strong>Nextbase </strong>showcased two models of dashcams with Alexa built-in.
<strong>Elektrobit</strong> is demoing its Android-based cockpit software platform integration with Alexa. <strong>P3</strong> announced a new software integrator for Alexa Auto.<strong> ICS </strong>announced that it has created three Automotive Grade Linux demo applications for Alexa in collaboration with The Qt Company. <strong>MOLEX </strong>announced it’s working with Accenture and AWS to bring edge computing and voice service to the Molex Automotive Ethernet Network Platform.</p> <p><img alt="Build with Alexa" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>Build with Alexa </strong></h2> <p>Start building today to reimagine your customer experience for voice and reach customers where they are. Hundreds of thousands of device makers and developers are constructing devices with Alexa using the <a href="" target="_blank">Alexa Voice Service</a> or teaching Alexa new capabilities with the <a href="" target="_blank">Alexa Skills Kit</a>. Get started by <a href=";;;marketPlaceId=ATVPDKIKX0DER&amp;language=en_US&amp;pageId=amzn_developer_portal&amp;;prevRID=HY52MN1HTHQE7CE7VKHT&amp;openid.assoc_handle=mas_dev_portal&amp;openid.mode=checkid_setup&amp;prepopulatedLoginId=&amp;failedSignInCount=0&amp;;" target="_blank">creating your developer account</a>.</p> /blogs/alexa/post/a7bb4a16-c86b-4019-b3f9-b0d663b87d30/new-method-for-compressing-neural-networks-better-preserves-accuracy New Method for Compressing Neural Networks Better Preserves Accuracy Larry Hardesty 2019-01-15T14:00:00+00:00 2019-01-15T14:04:56+00:00 <p>By compressing the huge lookup tables that list &quot;embeddings&quot;, or vector representations of individual words, a new system can shrink neural-network&nbsp;models by up to 90%, with minimal effect on accuracy.</p> <p><sup><em>Rahul Goel cowrote this post with&nbsp;Anish Acharya</em></sup></p> <p>Neural networks have been responsible for most of the top-performing AI systems of the past decade, but they tend to be big, which means they tend to be slow. That’s a problem for systems like Alexa, which depend on neural networks to process spoken requests in real time.</p> <p>In natural-language-understanding (NLU) applications, most of a neural network’s size comes from a huge lookup table that correlates input words with “embeddings.” An embedding is a large vector (usually a sequence of 300 numbers) that captures information about a word’s meaning.</p> <p>In a <a href="" target="_blank">paper</a> that we and our colleagues are presenting at the 33rd conference of the Association for the Advancement of Artificial Intelligence (AAAI), we describe a new method for compressing embedding tables that compromises the NLU network’s performance less than competing methods do.</p> <p>In one set of experiments, for instance, we showed that our system could shrink a neural network by 90 percent while reducing its accuracy by less than 1%. At the same compression rate, the best prior method reduced the accuracy by about 3.5%.</p> <p>The ability to compress NLU models means that, as Alexa learns to perform more and more complex tasks, she can continue to deliver responses in milliseconds. It also means that Alexa’s skill base can continue to expand unfettered. Alexa currently supports more than 70,000 third-party skills, with thousands more being added every month. Compression means that those skills’ NLU models can be stored efficiently.</p> <p>In our experiments, we used a set of pretrained word embeddings called Glove. 
Like other popular embeddings, Glove assesses words’ meanings on the basis of their co-occurrence with other words in huge bodies of training data. It then represents each word as a single point in a 300-dimensional space, such that words with similar meanings (similar co-occurrence profiles) are grouped together.</p> <p>NLU systems often benefit from using such pretrained embeddings, because doing so lets them generalize across conceptually related terms. (It could, for instance, help a music service learn that the comparatively rare instruction “Play the track ‘Roadrunner’” should be handled the same way as the more common instruction “Play the song ‘Roadrunner’”.) But it’s usually possible to improve performance still further by fine-tuning the embeddings on training data specific to the task the system is learning to perform.</p> <p>In previous work, NLU researchers had taken a huge lookup table, which listed embeddings for about 100,000 words, reduced the dimension of the embeddings from 300 to about 30, and used the smaller embeddings as NLU system inputs.</p> <p>We improve on this approach by integrating the embedding table into the neural network in such a way that it can use task-specific training data not only to fine-tune the embeddings but also to customize the compression scheme.</p> <p>To reduce the embeddings’ dimensionality, we use a technique called singular-value decomposition. Singular-value decomposition (SVD) produces a lower-dimensional projection of points in a higher-dimensional space, kind of the way a line drawing is a two-dimensional projection of objects in three-dimensional space.</p> <p><img alt="Projection.jpg" src="" style="display:block; height:333px; margin-left:auto; margin-right:auto; width:500px" /></p> <p style="text-align:center"><sub><em>Singular-value decomposition projects high-dimensional data into a lower-dimensional space, much the way a three-dimensional object can be projected onto a two-dimensional plane.</em></sub></p> <p>The key is to orient the lower-dimensional space so as to minimize the distance between the points and their projections. Imagine, for instance, trying to fit a two-dimensional plane to a banana so as to minimize the distance between the points on the banana’s surface and the plane. A plane oriented along the banana’s long axis would obviously work better than one that cut the banana in half at the middle. Of course, when you’re projecting 300-dimensional points onto a 30-dimensional surface, the range of possible orientations is much greater.</p> <p>We use SVD to break our initial embedding matrix into two smaller embedding matrices. Suppose you have a matrix that is 10,000 rows long (representing a lexicon of 10,000 words) and 300 columns wide (representing a 300-dimensional vector for each word). You can break it into two matrices, one of which is 10,000 rows long and 30 columns wide, and the other of which is 30 rows long and 300 columns wide. This results in a reduction of parameters, from 10,000 x 300 to ((10,000 x 30) + (30 x 300)), or almost 90%.</p>
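<p>To make the arithmetic concrete, here is a minimal numpy sketch of that factorization. It is illustrative only: the paper integrates the two factors into the network and continues training them, rather than fixing them after the decomposition:</p> <pre> <code>import numpy as np

# Stand-in for a pretrained embedding table: 10,000 words x 300 dimensions.
vocab, dim, rank = 10_000, 300, 30
E = np.random.randn(vocab, dim)

# Truncated SVD: keep only the top `rank` singular directions.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]   # 10,000 x 30
B = Vt[:rank, :]             # 30 x 300

print(E.size)                     # 3,000,000 parameters
print(A.size + B.size)            # 309,000 parameters, roughly 90% fewer
print(np.linalg.norm(E - A @ B))  # rank-30 reconstruction error
</code></pre>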
<p>We represent one of these matrices as one layer of a neural network and the second matrix as the layer above it. Between the layers are connections that have associated “weights,” which determine how much influence the outputs of the lower layer have on the computations performed by the higher one. The training process keeps readjusting those weights, trying to find settings that reduce the projection distance still further.</p> <p>In our paper, we also describe a new procedure for selecting the network’s “learning rate.” The relationship between the weight settings of the entire network and the network’s error rate can be imagined as a landscape with peaks and valleys. Each point in the landscape represents a group of weight settings, and its altitude represents the corresponding error rate.</p> <p>The goal is to find a group of weights that corresponds to the bottom of one of the deepest valleys, but we can’t view the landscape as a whole; all we can do is examine individual points. At each point, however, we can calculate the slope of the landscape, and the standard procedure for training a neural network is to continually examine points that lie in the downhill direction from the last point examined.</p> <p>Every time you select a new point, the question is how far in the downhill direction to leap, a metric called the learning rate. A recent approach to choosing the learning rate is the cyclical learning rate, which steadily increases the leap length until it hits a maximum, then steadily steps back down to a minimum, then back up to the maximum, and so on, until further exploration no longer yields performance improvements.</p> <p>We vary this procedure by decreasing the maximum leap distance at each cycle, then pumping it back up and decreasing it again. The idea is that the large leaps help you escape local minima — basins at the tops of mountains rather than true valleys. But tapering the maximum leap distance reduces the chance that when you’ve found a true valley and have started down its slope, you’ll inadvertently leap out of it.</p> <p><img alt="Learning_rate_comparison_(1).jpg" src="" style="display:block; height:169px; margin-left:auto; margin-right:auto; width:550px" /></p> <p style="text-align:center"><sub><em>A comparison of the learning-rate-selection strategies adopted<br /> in the cyclical learning rate (left) and the cyclically annealed learning rate (right).</em></sub></p> <p>We call this technique the cyclically annealed learning rate, and in our experiments, we found that it led to better performance than either the cyclical learning rate or a fixed learning rate.</p>
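<p>The schedule can be sketched in a few lines of Python. This is an illustrative reconstruction of the idea described above, not the paper’s exact schedule or hyperparameters:</p> <pre> <code>def cyclically_annealed_lr(step, min_lr=1e-4, max_lr=1e-2,
                           cycle_len=1000, decay=0.5):
    """Triangular cyclical learning rate whose peak shrinks each cycle."""
    cycle = step // cycle_len                            # current cycle index
    peak = min_lr + (max_lr - min_lr) * decay ** cycle   # annealed maximum
    phase = (step % cycle_len) / cycle_len               # progress in cycle, 0..1
    triangle = 1.0 - abs(2.0 * phase - 1.0)              # 0 -> 1 at mid-cycle -> 0
    return min_lr + (peak - min_lr) * triangle
</code></pre>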
<p>To evaluate our compression scheme, we compared it to two alternatives. One is the scheme we described before, in which the embedding table is compressed before network training begins. The other is simple quantization, in which all of the values in the embedding vector — in this case, 300 — are rounded to a limited number of reference values. So, for instance, the numbers 75, 83, and 87 might all become 80. This can reduce, say, 32-bit vector values to 16 or 8 bits each.</p> <p>We tested all three approaches across a range of compression rates, on different types of neural networks, using different data sets, and we found that in all instances, our approach outperformed the others.</p> <p><em>Anish Acharya is an applied scientist, and Rahul Goel is a machine learning scientist, both in the Alexa AI group.</em></p> <p><a href="" target="_blank"><strong>Paper</strong></a>: &quot;Online Embedding Compression for Text Classification using Low Rank Matrix Factorization&quot;</p> <p><strong><a href="" target="_blank">Alexa science</a></strong></p> <p><strong>Acknowledgments</strong>: <a href="">Angeliki Metallinou</a>, Inderjit Dhillon</p> <p><strong>Related</strong>:</p> <ul> <li><a href="" target="_blank">With New Data Representation Scheme, Alexa Can Better Match Skills to Customer Requests</a></li> <li><a href="" target="_blank">Shrinking Machine Learning Models for Offline Use</a></li> <li><a href="" target="_blank">How Alexa Can Use Song-Playback Duration to Learn Customers’ Preferences</a></li> <li><a href="" target="_blank">Amazon at AAAI</a></li> </ul> <p><em><sub>Projection image adapted from <a href="" target="_blank">Michael Horvath</a> under the <a href="">CC BY-SA 4.0</a> license</sub></em></p> /blogs/alexa/post/02732c1d-bab8-41fa-8afe-30d02d9a4280/hear-it-from-a-skill-builder-how-to-design-and-validate-an-alexa-skill-idea-in-5-days Hear It from a Skill Builder: How to Design and Validate an Alexa Skill Idea in 5 Days Jennifer King 2019-01-14T15:00:00+00:00 2019-01-14T15:00:00+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p><strong><em>Editor's Note:</em></strong><em> What if you’re tasked with prototyping a potential skill idea and you only have five business days to get it done? I’ve asked Alex Baxevanis, Experience Director at Webcredible, to share how he and his team have designed a sprint structure that condenses the prototyping phase of a skill down to a five-day process. While there is no “correct” process to prototype, hopefully the below will help focus your efforts next time you want to validate a new voice idea. </em></p> <p>Designing for a new technology can always bring a load of exciting ideas, alongside many questions and unknowns. Many people have asked us where to start with designing an Alexa skill, and we think we’ve found a great method that anyone can use to design a voice experience, from ideation to validation.</p> <p>You’ve probably heard of the “design sprint” method popularized by venture capital firm GV. A design sprint is a time-boxed, five-day process aimed at refining an idea and increasing its chance of success when it hits the market. It felt like just the right fit for exploring voice interactions.
Whether you’re a skill-building hobbyist, a professional developer, or part of a seasoned development team, this process should help you structure your prototyping phase and consider the various steps involved.</p> <p>Here’s an overview of the voice design sprint and how we’ve used it to help clients design a new skill idea.</p> <h2>Day 1: Understand and Ideate</h2> <p>The first day of a voice design sprint starts by making sure that the team you’re working with, whether that’s a team within your own company or a group of people you’ve brought together for a brainstorm, understands how voice services like Alexa work in practice. Bring Alexa-enabled devices for participants to play with and get familiar with Alexa skills that would be relevant to the experience you're trying to build.</p> <p>During our workshops with clients, we’ll also hear from subject matter experts on the customer journey, and the information and interactions that they could deliver through voice.</p> <p>Where possible, we’ll always look for examples where people are already interacting with a brand through voice. This includes listening in to customer service calls, or shadowing staff as they talk to customers. For example, when we worked with the Virgin Trains team on their Alexa skill, we went to train stations to hear first-hand (and note down) exactly how customers were wording their questions, and how Virgin Trains staff were responding.</p> <p>We close the day by writing out as many ideas as possible, inspired both by the possibilities of voice and our learnings from customers. At this stage, we don’t set any restrictions. All we ask is that participants note down for each idea:</p> <ul> <li>Who their user might be (e.g. a train traveler)</li> <li>What voice could offer (e.g. purchasing tickets)</li> <li>In what context might people use voice (e.g. at home)</li> <li>What the final outcome or benefit for the customer is (e.g. catching a last-minute train)</li> </ul> <h2>Day 2: Narrow Down the Idea and Start Mapping</h2> <p>Armed with an initial set of ideas, the second day is focused on whittling down the list to those that might best work for voice. We’ve developed a checklist based on our experience and <a href="" target="_blank">working with the Alexa team</a>. We get all sprint participants to go through <a href="">the checklist</a> and see how their ideas fare. In some cases, it’s a clear “yes” or “no.” In others it’s a “maybe,” which means we should definitely test our assumptions when we prototype.</p> <p><img alt="" src="" style="display:block; height:325px; margin-left:auto; margin-right:auto; width:576px" /></p> <p>The team gets to vote and collectively agree on one or two ideas to pursue. Then we get to work, <a href="">writing down scripts and mapping the flows of completing a task through voice</a>. To get people used to the format, we usually present a couple of ready-made examples for interactions that everyone can imagine, such as buying cinema tickets, or a food recommendation service like The Foodie below.</p> <p><img alt="" src="" style="display:block; height:393px; margin-left:auto; margin-right:auto; width:600px" /></p> <p>Before long, we get an idea for how simple or complex each use case can be, and how the ideal scenario might differ from an edge case. For example, in the case of a food recommendation skill, how will the experience differ if users ask for something that the skill supports (e.g.
filtering by dietary constraints), versus something not supported (e.g. getting the calorie count per person).</p> <p>However, words on paper never give an accurate view of how the same words might sound when spoken aloud. With that in mind, as soon as people have completed their first scenario, we get them to “role play” it. One person plays the role of the user and the other pretends to be “Alexa,” taking turns reading their part of the script aloud.</p> <p>When people hear themselves saying what they’ve written down, they quickly understand what <a href="">sounds like a real-life conversation</a> and what sounds unnatural. They then spend the rest of the day iterating on their script and role-playing it again, until it sounds engaging and conversational.</p> <h2>Day 3: Prototype the Voice Experience</h2> <p>With a few scenarios mapped out, it’s then time to scale up and build a working prototype of the ideas we’re exploring.</p> <p>Whilst it’s certainly possible to continue testing and iterating by role-playing alone, we’ve always learnt even more by trying our ideas on a real Alexa-enabled device. For example, we get a feeling for what to do when <a href="">something isn’t recognized</a>, and how our answers sound when read in the voice used by Alexa (is it too fast, too slow, or is something harder to understand when read out by a synthetic voice?).</p> <p>Fortunately, there are now many <a href="">prototyping tools</a> that make it really easy to turn an idea into a working Alexa skill, without doing any coding, including services like <a href="" target="_blank">Voice Apps</a> and <a href="" target="_blank">Voiceflow</a>. Whenever you’re prototyping, make sure to keep the <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_guide-page_text-link&amp;sc_segment=visitors&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">situational design guidelines</a> in mind.</p> <p>Again, we walk the sprint team through a working prototype so that they can get a glimpse of the tool’s capabilities before they get going. Be wary of just using flow charts when prototyping for voice, as conversations don’t always flow as smoothly as you anticipate. This is where situational design comes in handy. Watch <a href="" target="_blank">this recording</a> from the Alexa team for a primer on situational design.</p> <p>We’ve found that even for complete beginners, one day is enough to learn how Voice Apps or Voiceflow work and create a testable prototype of one or more flows. It helps if people work in parallel, with one person creating the skill in the tool and others supporting by collecting sample data to use in the prototype and thinking of all the possible synonyms and <a href="">sample utterances</a> that users might want to say.</p> <p>Toward the end of the day, the team also creates a discussion guide, listing all the questions and scenarios to be used when testing the idea with real customers.</p> <h2>Day 4: User Testing</h2> <p>We dedicate the fourth day of the sprint to getting our idea in front of real customers. This means people who haven’t been involved in the development of the prototype, but could use our idea in real life.</p> <p>Before we even start with the sprint, we’ll have recruited and lined up around six people for that day.
We may not know exactly what we’ll show them, but we can at least find people with some affinity to the domain we’re exploring. For example, if we’re prototyping voice interactions around buying cinema tickets, we’ll recruit a number of regular cinema-goers with a variety of film preferences.</p> <p>We also make sure everyone we recruit has used voice technology, like an Echo device, before, so we can spend the time testing our idea rather than bringing them up to speed on how the service works.</p> <p>On the user testing day, we’ll bring people into a usability testing lab (or similar quiet room) and ask them to try out interacting with our prototype Alexa skill on a real device. We’re experts at running usability testing on a variety of platforms, but we’ve noticed that when testing with voice we had to slightly adapt our approach. For example:</p> <ul> <li>Whilst people can try a lot of things on a prototype of a website or app, voice interactions tend to be quite short. We schedule shorter sessions or we use the extra time to probe more into how participants use voice services in real life.</li> <li>Whilst we normally ask people to “think aloud” and explain what they’re doing while they use a website or app, they obviously can’t do the same while also talking to a voice service. We get them to tell us how the experience felt once they’ve finished a conversation.</li> <li>When testing a website, if a participant feels lost or clicks on the “wrong” button, we can easily intervene and put them back on track. It’s almost impossible for a moderator to intervene and take over a conversation with a voice service. If we see people repeatedly fail at something, we’ll give them a hint on what to say.</li> </ul> <p>We take lots of notes and record all the sessions (with participants’ permission), so we get a clear record of how easy or hard our prototypes were to use.</p> <h2>Day 5: Analyze and Plan Next Steps</h2> <p>We start the final day by going through all our notes, reflecting on what puzzled participants, and what they said that our prototype skill couldn’t handle. We’ll go through our notes and recordings and pick out the exact words and phrases that people used. Where possible, we use our findings to get the prototype to understand more real-life scenarios and give clearer responses.</p> <p>We then make a roadmap for future work required to properly build the skill and bring the voice experience to life. We’ll discuss, for example, what APIs are needed to get live data integrated and how we might keep testing it with customers to ensure we stay on the right track.</p> <p>Finally, we have a go at sketching a “landing page” for our skill, showing how we’d promote it to customers. As there’s no way to “screenshot” a voice interaction, we think carefully about <a href="">how we can best sell the idea</a>, both internally and to customers browsing the Alexa Skills Store.</p> <p>There you have it! We’ve gone from zero to a validated Alexa skill idea in just five days. <a href="" target="_blank">Contact me</a> to learn more about the voice design sprint.
To learn more about designing for voice, check out the <a href="">Alexa Design Guide</a>.</p> <h2>Related Content</h2> <ul> <li><a href="">Blog Series: 10 Things Every Alexa Skill Should Do</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_guide-page_text-link&amp;sc_segment=visitors&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">Guide: How to Shift from Screen-First to Voice-First Design</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_standoutskill_guide-page_text-link&amp;sc_segment=visitors&amp;sc_keywords=standoutskill&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">Guide: How to Design a Voice User Interface</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_standoutskill_guide-page_text-link&amp;sc_segment=visitors&amp;sc_keywords=standoutskill&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">Guide: Tried-and-Tested Skill-Building Tips from Top Alexa Developers</a></li> <li><a href="">Situational Design: Build Adaptable Voice-First Interactions</a></li> <li><a href="">Situational Design: Individualize Your Entire Interaction</a></li> <li><a href="">Situational Design: Make Your Voice-First Interactions Accessible</a></li> <li><a href="">Situational Design: Talk with Your Customers, Not at Them</a></li> </ul> /blogs/alexa/post/55a49999-21b1-4f66-a80a-cc9034ccc82e/alexa-skill-teardown-building-the-interaction-model-for-the-space-explorer-skill Alexa Skill Teardown: Building the Interaction Model for the Space Explorer Skill Jennifer King 2019-01-11T16:05:03+00:00 2019-01-11T21:11:08+00:00 <p style="text-align:center"><img alt="" src="" style="height:357px; width:800px" /></p> <p>Get a deep dive on our new multimodal sample skill called Space Explorer. We’ll walk you through how we built the interaction model using the Alexa Presentation Language.</p> <p>In my previous post about the <a href="">Space Explorer sample Alexa skill</a>, I talked about how we approached the design for Space Explorer. I also discussed the overall goal of the project, the philosophy that guided our decision making, why we started with voice, and our thoughts on adapting the experience to suit the device.</p> <p>This time around, I'll talk more about how we turned that design into reality using the new <a href="">Alexa Presentation Language (APL)</a>, the Alexa Developer Portal, and AWS Lambda.</p> <h2>Building the Interaction Model</h2> <p>We started off by crafting our interaction model in the Alexa Developer Portal. Using the scripts we created as our guide (covered in the <a href="">first post in this series</a>), we started to create the various intents we knew we needed for users to navigate through the skill.</p> <p>Before building out the rich visuals you see in the final experience, we started by scaffolding all of the layouts using simple text-based labels for each of our target views. 
We created a minimal set of utterances to support our intended navigation, and confirmed that the correct views were being served.</p> <p style="text-align:center"><img alt="" src="" style="display:block; height:500px; margin-left:auto; margin-right:auto; width:800px" /></p> <p style="text-align:center"><em>Example of the basic layouts used early in development.</em></p> <p>Once the flows were complete, we spent some time expanding the utterances. We knew these basic utterances were only a starting point, so we added as many logical variations as we could think of to ensure we were covering as many scenarios as possible. For example, in addition to handling <em>“Take me to Jupiter,”</em> we account for <em>“Go to Jupiter”</em> and <em>“Jupiter”</em> as well. But we also knew we would never be able to think of all the possibilities on our own. This is where user testing is a great tool. We reached out to some of our colleagues and asked them to play with the voice interactions in the skill and try to navigate around. Their feedback led to us handling a few more utterances than the original set we considered, resulting in a skill that is more resilient than our initial implementation.</p> <p>When the utterances were robust enough, we looked at how we could refine them and make them easier to use in our back end when the time came. Enter slots. Slots are a great way to reduce the number of intents you need to handle on the back end, and they make handling the target intent more convenient. Essentially, slots work like variables, with SlotTypes that map to predefined datasets (e.g. movies, actors, cities). Additionally, you can define custom SlotTypes that allow you to limit the set of accepted values for a given slot.</p> <p><img alt="" src="" style="display:block; height:530px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>For example, we created a custom slot type called celestial_objects and filled it with all the available planets and dwarf planets we wanted to make navigable. When a customer says either <em>“Alexa, take me to Jupiter”</em> or <em>“Alexa, what's in Jupiter's atmosphere,”</em> Alexa knows the slot value, and will always return the single, lowercase value “jupiter” from the celestial_objects type. By predefining a collection of available slot values, we have limited the set of terms that Alexa has to map to, increasing the odds of a meaningful utterance.</p>
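<p>Here is a sketch of how that slot type and a navigation intent can be declared in the skill’s interaction model, shown as a Python literal mirroring the JSON built in the developer portal. The celestial_objects type comes from the skill; the intent name, samples, and synonyms are illustrative:</p> <pre> <code># Illustrative fragment of the interaction model's languageModel section.
language_model = {
    "intents": [
        {
            "name": "GoToObjectIntent",  # hypothetical intent name
            "slots": [
                {"name": "celestial_object", "type": "celestial_objects"}
            ],
            "samples": [
                "take me to {celestial_object}",
                "go to {celestial_object}",
                "{celestial_object}",
            ],
        }
    ],
    "types": [
        {
            "name": "celestial_objects",
            "values": [
                {"name": {"value": "jupiter"}},
                {"name": {"value": "saturn"}},
                {"name": {"value": "pluto", "synonyms": ["the dwarf planet"]}},
            ],
        }
    ],
}
</code></pre>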
<p>The last component of the voice design we implemented was the screen-based intents. These are the intents that let a customer navigate the screen content using their voice, such as titles or ordinals. Since these are not natively handled by APL yet, we had to implement them ourselves. For example, when presenting customers with lists of data, native Alexa experiences allow for selection using the item number or title, so we created custom intents to mimic that functionality.</p> <p>If you need help getting started with your interaction model, take a look at the Related Resources at the end of this post.</p> <h2>Translating Designs into APL</h2> <p>With our scaffolded skill functioning, it was time to turn our attention to the visuals. APL, as we've mentioned before, gave us the freedom to be as creative with the layouts as we wanted, which meant we needed to figure out how to translate that creativity into actual code. We also needed to make sure that the designs were clear enough to guarantee we used the right components.</p> <p>Just like with any UI development, our designs resulted in a series of redline-style documents to help guide the process. In addition to standard font-sizing and spacing guidelines, we made sure that we specifically included the touch target boundaries. This ensured we started off on the right track with components and minimized the amount of backtracking we had to do later on.</p> <p style="text-align:center"><img alt="" src="" style="display:block; height:450px; margin-left:auto; margin-right:auto; width:800px" /></p> <p style="text-align:center"><em>Example of the redline layers indicating touch targets</em></p> <h2>Importing Pre-Defined Style Packages</h2> <p>Throughout this skill, we're importing the <strong>alexa-styles</strong> and <strong>alexa-layouts</strong> packages from Alexa, as well as two additional custom packages served from our own CDN. The styles package provides developers with a number of pre-built styles for text, spacing, colors, and more that have been developed to adapt to different viewport resolutions and viewing distances. In the layouts package, developers can find pre-built layout components developed by Amazon with the same adaptability as the styles package. We've used both extensively to make our development easier, and we strongly recommend every developer do so as well. For more information on what’s available, take a look at the <a href="">Alexa Packages Overview documentation</a>.</p> <p>Below is an example import block using the Alexa packages and custom packages:</p> <pre> <code>... &quot;import&quot;: [ { &quot;name&quot;: &quot;alexa-styles&quot;, &quot;version&quot;: &quot;1.0.0&quot; }, { &quot;name&quot;: &quot;alexa-layouts&quot;, &quot;version&quot;: &quot;1.0.0&quot; }, { &quot;name&quot;: &quot;layouts&quot;, &quot;version&quot;: &quot;1.0.0&quot;, &quot;source&quot;: &quot;; }, { &quot;name&quot;: &quot;styles&quot;, &quot;version&quot;: &quot;1.0.0&quot;, &quot;source&quot;: &quot;; } ] ... </code></pre> <p>For example, in the following snippet from our <a href="" target="_blank">custom layout package</a> you can see how we use the AlexaHeader and AlexaFooter throughout the skill:</p> <pre> <code>... &quot;ZoneList&quot;: { &quot;parameters&quot;: [ &quot;backgroundImage&quot;, &quot;title&quot;, &quot;logo&quot;, &quot;hintText&quot;, &quot;listData&quot; ], &quot;items&quot;: [ { &quot;when&quot;: &quot;${@viewportProfile != @hubRoundSmall}&quot;, &quot;type&quot;: &quot;Container&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;direction&quot;: &quot;column&quot;, &quot;items&quot;: [ ... { &quot;type&quot;: &quot;AlexaHeader&quot;, &quot;headerTitle&quot;: &quot;${title}&quot;, &quot;headerBackButton&quot;: 1, &quot;headerNavigationAction&quot;: &quot;backEvent&quot; }, ... { &quot;type&quot;: &quot;AlexaFooter&quot;, &quot;hintText&quot;: &quot;${hintText}&quot; } ] }, ... ] } ... </code></pre> <p>Notice the <strong>hintText</strong> property on the AlexaFooter component. Using this property with a data transform, we can easily create a properly-formatted Alexa hint that references the device's active wake word.
Here's an example of how to use the textToHint transform in your APL datasources block:</p> <pre> <code>&quot;datasources&quot;: { &quot;data&quot;: { &quot;type&quot;: &quot;object&quot;, &quot;properties&quot;: { &quot;hintText&quot;: &quot;take me to Venus.&quot; }, &quot;transformers&quot;: [ { &quot;inputPath&quot;: &quot;hintText&quot;, &quot;outputName&quot;: &quot;hint&quot;, &quot;transformer&quot;: &quot;textToHint&quot; } ] } } } </code></pre> <p>If the active wake word was “Alexa,” this would output the property <strong>hint</strong>, with the value <em>'Try, “Alexa, take me to Venus.”'</em> For more information on this and other transforms, check out the <a href="">tech docs</a>.</p> <p>We've also created our own custom packages for this skill. This gave us more freedom to reuse the same code across the skill and allowed us to circumvent the directive size limit for skills. This was especially important because the size cap includes datasources, and a response can quickly outgrow the 24 KB ceiling.</p> <h2>Accommodating Different Viewports</h2> <p>APL is designed to minimize the number of layouts you need to create for your skills, but there are some key things we needed to do to make that as simple as possible. First, we primarily used percentage- or viewport-based units for most of our dimensions. That ensures that spacing and positioning aren't adversely impacted when the viewport dimensions are changed.</p> <p>Second, we took advantage of APL's built-in conditional evaluation to show or hide elements, change dimension values, or swap layouts entirely based on certain characteristics. This meant that we could show more information on larger displays, free up space on smaller displays, and drastically alter the layout for specific devices only. For instance, here's what the APL for the main solar system screen looks like:</p> <pre> <code>... &quot;mainTemplate&quot;: { &quot;parameters&quot;: [&quot;payload&quot;], &quot;item&quot;: { &quot;type&quot;: &quot;Frame&quot;, &quot;backgroundColor&quot;: &quot;black&quot;, &quot;items&quot;: [ { &quot;when&quot;: &quot;${@viewportProfile == @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystemSmallRoundHub&quot;, &quot;data&quot;: &quot;${}&quot; }, { &quot;when&quot;: &quot;${@viewportProfile != @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystem&quot;, &quot;data&quot;: &quot;${}&quot; } ] } } ... </code></pre> <p>In the above example, we use conditional statements to determine which layout to display based on a resource called viewportProfile, found in the <strong>alexa-styles</strong> package. This resource also uses conditional evaluation to change its value based on the viewport characteristics sent by the device.</p> <h2>Using APL Components to Create Scalable Graphic Elements</h2> <p>One of the exciting things about APL is the flexibility to look beyond traditional layouts. Much like HTML and CSS, the possibilities for creating truly dynamic and interesting elements are endless. For Space Explorer, there were a handful of screens that challenged us to use APL in more interesting ways. Among those were the size comparison, distance, and element views.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>The size comparison view uses variably sized circles to represent the comparative sizing of different planets in our solar system.
This effect could have been achieved using images, but that would not have given us the flexibility we needed to scale (and could have introduced latency). As an alternative, we created the circles using APL Frames, dynamically sizing, coloring, and positioning them based on the characteristics of each planet.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>The distance screen uses a similar methodology. To create the comparison graphics in this view, we built the circles and bars using the following layout from our custom layout package:</p> <pre> <code>&quot;DistanceGraphic&quot;: { &quot;parameters&quot;: [&quot;color&quot;, &quot;name&quot;, &quot;width&quot;, &quot;active&quot;, &quot;test&quot;], &quot;items&quot;: [ { &quot;type&quot;: &quot;TouchWrapper&quot;, &quot;width&quot;: &quot;${width + '%'}&quot;, &quot;height&quot;: &quot;@indicatorSize&quot;, &quot;spacing&quot;: &quot;@indicatorSpacing&quot;, &quot;onPress&quot;: { &quot;type&quot;: &quot;SendEvent&quot;, &quot;arguments&quot;: [&quot;distanceEvent&quot;, &quot;${name}&quot;] }, &quot;items&quot;: [ { &quot;type&quot;: &quot;Frame&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;borderRadius&quot;: &quot;10dp&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;inheritParentState&quot;: true, &quot;style&quot;: &quot;backgroundWithFocusPress&quot;, &quot;item&quot;: { &quot;type&quot;: &quot;Container&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;direction&quot;: &quot;row&quot;, &quot;alignItems&quot;: &quot;center&quot;, &quot;opacity&quot;: &quot;${active ? 1 : 0.3}&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Frame&quot;, &quot;height&quot;: &quot;@indicatorStroke&quot;, &quot;grow&quot;: 1, &quot;backgroundColor&quot;: &quot;${color}&quot; }, { &quot;type&quot;: &quot;Frame&quot;, &quot;height&quot;: &quot;@indicatorSize&quot;, &quot;width&quot;: &quot;@indicatorSize&quot;, &quot;borderRadius&quot;: &quot;@indicatorRadius&quot;, &quot;borderWidth&quot;: &quot;@indicatorStroke&quot;, &quot;borderColor&quot;: &quot;${color}&quot;, &quot;backgroundColor&quot;: &quot;${active ? color : 'transparent'}&quot; } ] } } ] } ] } </code></pre> <p>As you can see, the elements rely on percentage units to scale accordingly, which made both responsive layouts and dynamic sizing easier. We also use conditional statements to fill in the circles and raise the opacity of the active elements.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>The atmospheric composition designs used a periodic-table element style. Again, we could have achieved this with images, but APL allowed us to ensure the scaling, placement, and crispness of the graphics would be consistent across all devices.</p> <pre> <code>&quot;Element&quot;: { &quot;parameters&quot;: [&quot;element&quot;, &quot;notation&quot;, &quot;title&quot;, &quot;percentage&quot;, &quot;color&quot;, &quot;spacing&quot;], &quot;items&quot;: [ ... { &quot;type&quot;: &quot;Container&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Container&quot;, &quot;width&quot;: &quot;${@isHubLandscapeSmall ? '18vw' : '200dp'}&quot;, &quot;height&quot;: &quot;${@isHubLandscapeSmall ? '18vw' : '200dp'}&quot;, &quot;alignItems&quot;: &quot;center&quot;, &quot;justifyContent&quot;: &quot;spaceAround&quot;, &quot;spacing&quot;: &quot;${spacing}&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Frame&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;top&quot;: 0, &quot;right&quot;: 0, &quot;bottom&quot;: 0, &quot;left&quot;: 0, &quot;borderWidth&quot;: &quot;2dp&quot;, &quot;borderColor&quot;: &quot;#FAFAFA&quot;, &quot;backgroundColor&quot;: &quot;black&quot;, &quot;borderRadius&quot;: &quot;8dp&quot;, &quot;opacity&quot;: 0.4 }, { &quot;when&quot;: &quot;${element != 'other'}&quot;, &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;${@viewportProfile == @hubRoundSmall || @viewportProfile == @hubLandscapeSmall ? 'textStyleDisplay3Alt' : 'textStyleDisplay4Alt' }&quot;, &quot;color&quot;: &quot;${color}&quot;, &quot;text&quot;: &quot;${notation}&quot;, &quot;height&quot;: &quot;120dp&quot;, &quot;textAlignVertical&quot;: &quot;center&quot; }, { &quot;when&quot;: &quot;${element == 'other'}&quot;, &quot;type&quot;: &quot;Image&quot;, &quot;source&quot;: &quot;&quot;, &quot;width&quot;: &quot;49dp&quot;, &quot;height&quot;: &quot;83dp&quot;, &quot;scale&quot;: &quot;best-fit&quot; }, { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleDetail&quot;, &quot;textAlign&quot;: &quot;center&quot;, &quot;text&quot;: &quot;${title}&quot; } ] }, { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleDisplay4&quot;, &quot;textAlign&quot;: &quot;center&quot;, &quot;spacing&quot;: 8, &quot;text&quot;: &quot;${percentage + '%'}&quot; } ] } ] } </code></pre> <p>To make sure the same component would adapt appropriately for larger displays, we created the elements to change form when the viewport characteristics call for it. You can see the full layout on our <a href="" target="_blank">GitHub repo</a>. Unfortunately, it just wasn't possible to create the donut graphs using APL elements alone, so we had to fall back to images for those assets.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>For some screens, we had to be even more creative to achieve the effect the designs called for. The best example of that is the skill's launch screen. By creating a custom splash screen, we were able to launch the skill with a unique, branded experience while simultaneously masking the latency of loading images for our solar system view in the background.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" />To do that with APL's current features, we created a layout that layers an Image component on top of the main solar system layout, which itself sits atop a ScrollView with a single Text component positioned off screen. When we handle the LaunchRequest, a RenderDocument directive is returned to display the launch layout, accompanied by an ExecuteCommands directive with a SpeakItem command targeting the hidden ScrollView's Text component. 
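</p> <p>As a rough sketch of that response (hedged: the token name and delay value here are illustrative, and the document and datasources bodies are elided), the directive pair could look something like this, with SpeakItem targeting the hidden Text component by its id and both directives sharing the same token:</p> <pre> <code>&quot;directives&quot;: [ { &quot;type&quot;: &quot;Alexa.Presentation.APL.RenderDocument&quot;, &quot;token&quot;: &quot;launchScreenToken&quot;, &quot;document&quot;: { ... }, &quot;datasources&quot;: { ... } }, { &quot;type&quot;: &quot;Alexa.Presentation.APL.ExecuteCommands&quot;, &quot;token&quot;: &quot;launchScreenToken&quot;, &quot;commands&quot;: [ { &quot;type&quot;: &quot;SpeakItem&quot;, &quot;componentId&quot;: &quot;splashScroller&quot;, &quot;delay&quot;: 3000 } ] } ] </code></pre> <p>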
This command has a delay built in, so that any loading that needs to occur happens before the command is sent.</p> <p>Finally, we used the OnScroll property of the ScrollView to tie the scroll position to the Image component's opacity, which resulted in the smooth fade effect we were after.</p> <p>Here's the final layout:</p> <pre> <code>{ &quot;parameters&quot;: [&quot;payload&quot;], &quot;item&quot;: { &quot;type&quot;: &quot;Container&quot;, &quot;direction&quot;: &quot;column&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;top&quot;: 0, &quot;bottom&quot;: 0, &quot;items&quot;: [ { &quot;type&quot;: &quot;ScrollView&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;onScroll&quot;: [ { &quot;type&quot;: &quot;SetValue&quot;, &quot;componentId&quot;: &quot;splashImage&quot;, &quot;property&quot;: &quot;opacity&quot;, &quot;value&quot;: &quot;${1 - (event.source.value * 2)}&quot; } ], &quot;item&quot;: [ { &quot;type&quot;: &quot;Container&quot;, &quot;paddingTop&quot;: &quot;100vh&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Text&quot;, &quot;text&quot;: &quot;What would you like to explore?&quot;, &quot;opacity&quot;: &quot;0&quot;, &quot;id&quot;: &quot;splashScroller&quot;, &quot;paddingTop&quot;: &quot;100vh&quot;, &quot;speech&quot;: &quot;${}&quot; } ] } ] }, { &quot;type&quot;: &quot;Container&quot;, &quot;items&quot;: [ { &quot;when&quot;: &quot;${@viewportProfile == @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystemSmallRoundHub&quot;, &quot;data&quot;: &quot;${}&quot; }, { &quot;when&quot;: &quot;${@viewportProfile != @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystem&quot;, &quot;data&quot;: &quot;${}&quot; } ] }, { &quot;type&quot;: &quot;Frame&quot;, &quot;id&quot;: &quot;splashImage&quot;, &quot;backgroundColor&quot;: &quot;black&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;top&quot;: 0, &quot;right&quot;: 0, &quot;bottom&quot;: 0, &quot;left&quot;: 0, &quot;item&quot;: [ { &quot;type&quot;: &quot;Container&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;justifyContent&quot;: &quot;center&quot;, &quot;alignItems&quot;: &quot;center&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleDisplay1Alt&quot;, &quot;fontSize&quot;: &quot;20vh&quot;, &quot;fontWeight&quot;: &quot;100&quot;, &quot;text&quot;: &quot;SPACE&quot;, &quot;letterSpacing&quot;: &quot;6.6vw&quot; }, { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleHeadline&quot;, &quot;fontSize&quot;: &quot;5.5vh&quot;, &quot;text&quot;: &quot;EXPLORER&quot;, &quot;fontWeight&quot;: &quot;800&quot; }, { &quot;type&quot;: &quot;Image&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;scale&quot;: &quot;best-fill&quot;, &quot;source&quot;: &quot;@landingImage&quot;, &quot;position&quot;: &quot;absolute&quot; } ] } ] } ] } } </code></pre> <h2>What's Next</h2> <p>With our voice and visual interactions built out, the next step is tying it all together. In our next post, we'll wrap up the Space Explorer deep dive by looking at how we used AWS Lambda to handle intents and user events, deliver directives, and manipulate our APL. 
Stay tuned.</p> <h2>Related Resources</h2> <ul> <li><a href="" target="_blank">Space Explorer Sample Code</a></li> <li><a href="">Alexa Presentation Language Technical Documentation</a></li> <li><a href="">10 Tips for Designing Alexa Skills with Visual Responses</a></li> <li><a href="">4 Tips for Designing Voice-First Alexa Skills for Different Alexa-Enabled Devices</a></li> <li><a href="">How to Design Visual Components for Voice-First Alexa Skills</a></li> <li><a href="">How to Get Started with the Alexa Presentation Language to Build Multimodal Alexa Skills</a></li> </ul> /blogs/alexa/post/73df5551-ad93-401c-8b57-d8a2c56c5ac4/localizing-your-alexa-skills-how-to-tailor-your-voice-experience-for-global-audiences Localizing Your Alexa Skills: How to Tailor Your Voice Experience for Global Audiences Jennifer King 2019-01-09T15:00:00+00:00 2019-01-09T15:00:00+00:00 <p>As Alexa expands to more countries and languages, you have more opportunities to make your skills available to a growing audience around the world. If you're ready to take your skill global, you'll first want to consider the best way to localize, or internationalize, the experience.</p> <p>As <a href="">Alexa expands to more countries and languages</a>, you have more opportunities to make your skills available to a growing audience around the world. If you're ready to take your skill global, you'll first want to consider the best way to localize, or internationalize, the experience.</p> <p>We've all had the experience of reading the instructions for a product made in another country or language that was translated poorly. Oftentimes, these products are challenging to use, which may negatively impact your experience with, and trust in, that product or brand. Effectively handling translation and cultural differences when designing and building your Alexa skills for multiple regions is key to creating a positive and engaging experience for customers everywhere.</p> <p>The most important thing to recognize is that localization isn’t limited to just language. Localizing the experience means shifting how Alexa converses with the customers using your skill, and using imagery and phrases appropriate for each country. When localizing your voice experience, consider language-specific features, regional differences, and the technical requirements of your target audiences. Think beyond your own native culture and language. Consider not only which countries you plan to make your skill available in, but also which languages you will need to support for those countries, and what level of translation or localization will be required.</p> <p>Designing and building your skill with the following best practices in mind will help reduce the resources required to localize your skill for new countries, and help your skill have broader appeal.</p> <h2>When Designing the <u>Voice Output</u> for Your Skill</h2> <ul> <li>Be mindful of long strings of nouns or adjectives, and of very long sentences that would work better as short ones. Long, complex sentences are difficult to translate and difficult for customers to understand.</li> <li>Avoid colloquialisms, puns, or local jargon when they are not critical to the content of your skill. This general rule is especially important for localization, since other spoken languages may have no equivalent jargon.</li> <li>Make sure to define terms, and use them consistently throughout your skill. 
If you present terms inconsistently, or don’t provide proper term definitions to those helping translate your voice experience, it will be difficult to deliver quality translations to your customers.</li> <li>Keep in mind that different languages have different word orders; each language's grammatical rules dictate the order in which words must appear.</li> </ul> <h2>When Designing the <u>Visual Output</u> for Your Skill</h2> <ul> <li>Remember that most languages require more room than English, with longer words and sentences, and possibly larger characters. Make sure your visual layouts account for this, and have room to scale when required.</li> <li>Be sure to define line-wrap and truncation behavior for all visual layouts using text components. Text in your layouts should be allowed to wrap and flow to as many lines as needed. Consider accounting for at least 30% extra space within your GUI beyond what the English source requires to accommodate this.</li> <li>Translate any text in the graphics you select. The best way to avoid dealing with localizing graphics is to minimize or avoid using text in graphics. But if you must use text in your images, make sure to verify the images are displaying properly in each locale and that the right image is being displayed.</li> <li>Use general images that are appropriate and easily understood in your intended countries and marketplaces. Not all cultural references will be global, so try to use general images that are appropriate for a worldwide audience.</li> <li>If you're using dates, times, phone numbers, and other general number formatting, make sure to follow local custom. For example, dates in the US are generally written month, day, year, but in most of Europe dates are written day, month, year.</li> </ul> <p>With Alexa's availability expanding to countries <a href="">all over the world</a>, it's important to remember that the more localized your skill is, the more customers you will reach. And those customers will appreciate an experience tailored to their culture and language, leading to higher engagement and happier customers. For more examples of how you can localize your voice experience for a global audience, see the <a href="">Alexa Design Guide</a>.</p> <h2>Related Content</h2> <ul> <li><a href="">Alexa Design Guide: Internationalization</a></li> <li><a href="">How to Localize Your Alexa Skills</a></li> <li><a href="">5 Tips for Building Multi-Language Alexa Skills</a></li> </ul> /blogs/alexa/post/f73c5010-5866-4281-90fa-8c9f85fee2e7/alexa-are-you-going-to-ces Alexa, Are You Going to CES? Adam Vavrek 2019-01-07T23:12:05+00:00 2019-01-07T23:14:51+00:00 <p><a href="" target="_self"><img alt="" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></a></p> <p>The Consumer Electronics Show&nbsp;starts tomorrow, January 8, in Las Vegas. More than 180,000 people from over 155 countries will be in attendance showcasing the latest in consumer technologies.</p> <p><img alt="Amazon Alexa CES 2019" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" />Alexa had a busy year. The number of customers interacting with Alexa each day doubled <a href="" target="_blank">in 2018</a>. 
Equally exciting is the momentum we’ve seen among Alexa developers and device makers: The number of Alexa-compatible smart home devices increased 6x to 28,000 products from more than 4,500 unique brands; the number of Alexa skills increased to more than 70,000; and the number of products with Alexa built-in more than doubled. In fact, more than 90% of the Alexa devices launched last year were built by someone other than Amazon.</p> <p>These developers help make Alexa smarter, more useful, and more accessible to customers around the world, and we’re excited to showcase what they’ve built this week at the Consumer Electronics Show (CES). The four-day event starts tomorrow, January 8, in Las Vegas. More than 180,000 people from over 155 countries will be in attendance showcasing the latest in consumer technologies.</p> <p><img alt="Amazon Alexa CES Public Exhibits" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>Alexa, Where Can I Meet You at CES?</strong></h2> <p>Amazon will have a public presence in several locations:</p> <ul> <li>The Venetian (Ballrooms C and D) – Alexa public exhibit</li> <li>Las Vegas Convention Center (North Hall - 7506) – Alexa Auto</li> <li>The Sands Convention Center (Lobby) – Amazon Key and Ring</li> </ul> <p>In the Venetian, attendees can experience products and services from across Amazon, including Alexa, Fire TV, AWS, Dash Replenishment Services (DRS), and more. At the center of the Alexa public exhibit is the all-new Audi e-tron SUV, which is surrounded by other technologies that showcase how Alexa makes life easier when you’re at home, at work, and on the go. The fully electric Audi e-tron features Alexa built directly into the vehicle, so customers can ask her to play music, locate points of interest, control smart home devices, and access thousands of Alexa skills.</p> <p>There will also be the <em>speakeasy</em>, an area focused on solutions for device makers building with Alexa. Amazon Solution Architects will be on site to help educate developers on integrating Alexa into their products, and to showcase the newest development kits, systems integrators, and original design manufacturer solutions.</p> <p><img alt="Amazon Alexa CES 2019 What's New" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>What’s New</strong></h2> <p>CES has a history of debuting the latest and greatest innovations in consumer electronics, and this year is no different.</p> <p>There are already more than 150 different products with Alexa built-in, from headphones and PCs to cars and smart home devices. You’ll see dozens of products with Alexa announced at CES: televisions from LG and Samsung; headphones from Jabra and JBL; smart home devices from Kohler and First Alert; automotive products from iOttie and BOSS Audio; and much, much more.</p> <p><img alt="Amazon Alexa CES 2019 Panels and Talks" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>Panels and Talks</strong></h2> <p>Amazon will be participating in several sessions and panels you won’t want to miss. If you’re going to CES, click the links below to add them to your agenda.</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">AI Forecasting Famine</a></strong><br /> Tuesday, January 8 | 11:30 a.m. 
- 12:30 p.m.<br /> Westgate, Level 1, Ballroom F</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">Technology Deployment into the Home</a></strong><br /> Wednesday, January 9 | 11:30 a.m. - 12:30 p.m.<br /> Venetian, Level 4, Marcello 4406</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">Vehicle Tech’s Next Big Thing</a></strong><br /> Wednesday, January 9 | 11:30 a.m. - 12:30 p.m.<br /> Las Vegas Convention Center, North Hall, N262</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">Go Big or Go Home – The IdeaMakers</a></strong><br /> Wednesday, January 9 | 2:40 p.m. - 3:20 p.m.<br /> Aria, Level 1, Joshua 9</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">IoT Software Platforms: Measure Twice, Cut Once</a></strong><br /> Thursday, January 10 | 9:00 a.m. – 10:00 a.m.<br /> Las Vegas Convention Center, North Hall, N253</p> <p style="margin-left:.5in"><strong>Alexa Auto Fireside Chat</strong><br /> Thursday, January 10 | 11:00 a.m. - 11:30 a.m.<br /> Engadget Stage: Las Vegas Convention Center, Central Hall, Grand Lobby</p> <h2><img alt="Amazon Alexa CES 2019 Social Media #ASKALEXA" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></h2> <h2><strong>#AskAlexa</strong></h2> <p>Follow #AskAlexa, #AlexaAuto, and #CES2019 on social media for real-time updates from the show. On Twitter, follow <a href="" target="_blank">@AlexaDevs</a>, <a href="" target="_blank">@AmazonEcho</a>, and <a href="" target="_blank">@AmazonNews</a>, where we will be sharing news and announcements with videos, photos, and more.&nbsp;</p> <p><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=CES2019PreEventBlogVavrek2&amp;sc_publisher=WB&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_CES2019PreEventBlogVavrek2_WB_Content_Discover_WW_AllDevs&amp;sc_segment=AllDevs" target="_blank">Subscribe here</a>&nbsp;for email updates during and after CES and get the latest information delivered straight to your inbox.</p> /blogs/alexa/post/68025d70-83c2-4251-9e1b-4c7145218a66/consejos-para-crear-frases-de-ejemplo-certificables-en-tu-skill-de-alexa Tips for Creating Certifiable Example Phrases in Your Alexa Skill German Viscuso 2019-01-07T12:00:00+00:00 2019-01-10T16:42:35+00:00 <p>Most Alexa skills submitted for certification run into a set of common problems, and incorrect example phrases are the most frequent cause. To help you avoid this problem, we review the requirements and best practices for creating example phrases.</p> <p>In <a href="">our previous blog post</a> we mentioned that most Alexa skills submitted for certification run into a set of common problems. Incorrect example phrases are the most frequent reason Alexa skills fail the certification process. 
To help you avoid this problem, we are going to review the requirements for example phrases and share some best practices.</p> <h2>What Are Example Phrases?</h2> <p>To submit your skill to us and begin the certification process, you have to provide at least one example phrase in the <em>Distribution</em> tab of the Alexa Skills developer console.</p> <p><img alt="example_phrases_spanish.png" src="" style="display:block; margin-left:auto; margin-right:auto" /></p> <p>Users can see these example phrases in the skill's descriptive listing once they discover it. We like to think of this set of phrases as a guide that shows users how to easily start using the skill on their Alexa devices. It is also a good opportunity to showcase your skill's key functionality.</p> <p><img alt="cookpad_app_spanish.jpg" src="" style="display:block; margin-left:auto; margin-right:auto" /></p> <p>The basic structure that example phrases use to open skills is described in our <a href="">documentation</a>, and we summarize it below:</p> <p style="margin-left:.5in; margin-right:0in"><strong>[Wake Word], [Launch Word] [Invocation Name] [Connecting Word] [Utterance] </strong></p> <p style="margin-left:.5in; margin-right:0in"><strong>Wake word</strong> (<em>palabra de activaci&oacute;n</em>)<strong>: </strong>&quot;<em>Alexa</em>&quot; is used by default on Alexa devices, but customers can change it in their preferences. You must use &quot;<em>Alexa</em>&quot; as the wake word in your example phrases. Don't forget to start your first example phrase with &quot;<em>Alexa</em>&quot; and put a comma after the wake word.</p> <p style="margin-left:.5in; margin-right:0in"><strong>Launch word</strong> (<em>palabra de lanzamiento</em>): As specified in the <a href="">documentation</a>, this includes several launch phrases such as &quot;<em>abre</em>,&quot; &quot;<em>preg&uacute;ntale</em>,&quot; &quot;<em>empieza</em>,&quot; &quot;<em>lanza</em>,&quot; &quot;<em>comienza</em>,&quot; &quot;<em>corre</em>,&quot; &quot;<em>jugar</em>,&quot; &quot;<em>dile</em>,&quot; &quot;<em>dame</em>,&quot; &quot;<em>pide</em>,&quot; and more. When these phrases combine well with your invocation name (see below), it will be easier for users to remember how to open your skill.</p> <p style="margin-left:.5in; margin-right:0in"><strong>Invocation name</strong> (<em>nombre de apertura</em>): This is the invocation name you assigned to your skill when creating it in <a href=""></a>. You can consult <a href="">our documentation</a> for the requirements. Also, if you use a proper name, make sure you have the right to use it (beware of trademarks).</p> <p style="margin-left:.5in; margin-right:0in"><strong>Connecting word</strong> (<em>conector</em>): These are words used to connect the launch word with the utterance, and they include &quot;<em>y</em>,&quot; &quot;<em>de</em>,&quot; &quot;<em>desde</em>,&quot; &quot;<em>usando</em>,&quot; &quot;<em>que</em>,&quot; &quot;<em>sobre</em>,&quot; &quot;<em>por</em>,&quot; &quot;<em>si</em>,&quot; and more. For a complete list, take a look at our <a href="">documentation</a>. 
Although this component can be omitted, including it will make the phrase easier for users to understand and say.</p> <p style="margin-left:.5in; margin-right:0in"><strong>Utterance</strong> (<em>enunciado</em>): Utterances are mandatory and must appear among the sample utterances of your interaction model (that is, the utterances in your example phrases must be present in your interaction model and be identical).</p> <p>In the example phrase &quot;<em>Alexa, abre cookpad y busca una receta de tortilla de patatas</em>&quot; (&quot;Alexa, open cookpad and find a Spanish omelette recipe&quot;), &quot;<em>Alexa</em>&quot; is the wake word, &quot;<em>abre</em>&quot; is the launch word, &quot;<em>cookpad</em>&quot; is the invocation name, &quot;<em>y</em>&quot; is the connecting word, and &quot;<em>busca una receta de tortilla de patatas</em>&quot; is the utterance.</p> <p>And here is a valid example that uses no connecting word: &quot;<em>Alexa, preg&uacute;ntale a cookpad como hacer tortitas</em>&quot; (&quot;Alexa, ask cookpad how to make pancakes&quot;), where &quot;<em>Alexa</em>&quot; is the wake word, &quot;<em>preg&uacute;ntale a</em>&quot; is the launch word (in this case a launch phrase), &quot;<em>cookpad</em>&quot; is the invocation name, and &quot;<em>como hacer tortitas</em>&quot; is the utterance.</p> <h2>Common Problems with Example Phrases</h2> <p>Below is a list of the most common issues we see with <a href="">example phrases and their requirements</a>:</p> <ol> <li><strong>Missing components:</strong> In many cases example phrases lack the correct invocation name or launch word. For example: &quot;<em>Alexa, pide revisar mi balance</em>&quot; (&quot;Alexa, ask to check my balance&quot;). Without an invocation name specified after &quot;<em>pide a</em>&quot; and before the utterance that begins with &quot;<em>revisar</em>,&quot; Alexa will not respond appropriately. We also sometimes see intent names used here in place of the invocation name (this is also incorrect).<br /> <br /> Here is another example we see in submitted skills: &quot;<em>Alexa, Voz Social los t&oacute;picos m&aacute;s importantes</em>&quot; (&quot;Alexa, Voz Social the most important topics&quot;). In this case Alexa may not respond properly because the launch word is missing.</li> <br /> <li><strong>Not based on sample utterances:</strong> Each example phrase must be built from the sample utterances present in your interaction model. For example, &quot;<em>Alexa, preg&uacute;ntale a Registro de Mareas cuando hay marea alta en Barcelona</em>&quot; (&quot;Alexa, ask Registro de Mareas when there is high tide in Barcelona&quot;) must have an identical utterance in order to provide a valid response: <pre> <code class="language-javascript">&quot;samples&quot;: [ &quot;cuando hay marea alta en {ciudad}&quot;, &quot;...&quot; ]</code></pre> If the utterance doesn't exist, Alexa won't be able to map the example phrase to the correct intent. The skill won't know how to respond, and the user experience will be poor. Also, as you can see in the example above, if the phrase you use contains slots, you must make sure the slot value you use (e.g., Barcelona) is a valid value for the slot type (and if it is a custom slot, the value must match one of the values you have assigned to the type). 
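As a rough sketch (hedged: this assumes {ciudad} is backed by a custom slot type, whose name here is purely illustrative), that type would need to list Barcelona among its values in the interaction model: <pre> <code class="language-javascript">&quot;types&quot;: [ { &quot;name&quot;: &quot;CIUDAD&quot;, &quot;values&quot;: [ { &quot;name&quot;: { &quot;value&quot;: &quot;Barcelona&quot; } }, { &quot;name&quot;: { &quot;value&quot;: &quot;...&quot; } } ] } ]</code></pre> 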
We see this problem in a large percentage of the skills submitted to us.</li> <br /> <li><strong>Wrong responses:</strong> Make sure that when a user says an example phrase, they get a relevant response. In many of the skills submitted for certification we see problems with potentially confusing responses:<br /> <br /> User: &quot;<em>Alexa, preg&uacute;ntale a busca recetas como hacer tortitas.</em>&quot; (&quot;Alexa, ask busca recetas how to make pancakes.&quot;)<br /> Skill: &quot;<em>Bienvenido a Busca Recetas. Puedes hacerme preguntas como, cual es la receta para tortitas. &iquest;C&oacute;mo te puedo ayudar?</em>&quot; (&quot;Welcome to Busca Recetas. You can ask me questions like, what is the recipe for pancakes. How can I help you?&quot;)<br /> Instead of answering the question the user just asked, the skill replies with a generic welcome message.</li> </ol> <p>We hope these tips have been useful for creating and certifying your skills. If your example phrases are structured correctly, are based on sample utterances, and provide relevant responses, you will have a better chance of passing the certification process quickly. Take a look at <a href="">our previous post</a> for more certification tips.</p> <h2>Send Us Your Feedback</h2> <p>As always, we want to hear about your experience with certification so we can improve the process. Please send us your feedback via <a href=";amp;sc_channel=website&amp;amp;sc_publisher=devportal&amp;amp;sc_campaign=Conversion_Contact-Us&amp;amp;sc_assettype=conversion&amp;amp;sc_team=us&amp;amp;sc_traffictype=organic&amp;amp;sc_country=united-states&amp;amp;sc_segment=all&amp;amp;sc_itrackingcode=100020_us_website&amp;amp;sc_detail=blog-alexa">this form</a>.</p> <h2>Related Resources</h2> <p>For more resources on distributing and certifying your skill, check out the following links:</p> <ul> <li><a href="">Las Claves para Certificar con &Eacute;xito tu Skill Alexa (The Keys to Successfully Certifying Your Alexa Skill)</a></li> <li><a href="">Certification Requirements for Custom Skills</a></li> <li><a href="">Review and Test Example Phrases</a></li> <li><a href="">Alexa Developer Blog: Certification tag</a></li> <li><a href="">Pregunta al Experto - Alexa Office Hours en Espa&ntilde;ol (Twitch)</a></li> </ul> /blogs/alexa/post/4506e350-1e7a-4ba3-b54c-8abf000d7236/how-to-optimize-your-upsell-strategy-for-your-monetized-alexa-skills How to Optimize Your Upsell Strategy for Your Monetized Alexa Skills Metty Fisseha 2019-01-04T18:20:03+00:00 2019-01-04T18:20:03+00:00 <p><img alt="Can-Handle-Intent_Blog_(1).png" src="" /></p> <p>If you have published a monetized skill, ensure you optimize your upsell strategy to help drive more customers to your premium content.</p> <p><img alt="Can-Handle-Intent_Blog_(1).png" src="" /></p> <p>If you have published a monetized skill, your next step is to optimize your upsell strategy to help accelerate your sales. An effective upsell should present customers with the option to engage even deeper with your skill, at the right time and in the right context, compelling them to make a purchase.</p> <p>Cracking the code on upsell strategy is critical for the success of your monetized skill. And, because each skill will have a unique upsell strategy, it’s important that you test your skill to find what works best. For this reason, we added enhanced reporting on upsell metrics in the Alexa Developer Console to help you better track the performance of your monetized skill. To access these new tools, log in to your developer account and click on “Analytics” next to your premium skill. 
In the left-hand toolbar, click on “In-Skill Purchases.” Learn more about how to use these metrics <a href="">here</a>.</p> <h2>What is an Upsell? And Why is It Important?</h2> <p>An upsell is when you surface an in-skill product to your customer. The first upsell is important because it is your opportunity to let customers know that your skill offers premium content. Your goal with any upsell is to capture your customer’s attention and encourage them to learn more about the product. To do this, you’ll want to be thoughtful about the upsell placement, frequency, and messaging to achieve optimal conversion.</p> <p>If the customer says “yes” to your upsell, they are led to the offer. The offer, which contains important transactional details such as price, ends with Alexa asking the customer “…would you like to buy it?” Amazon handles the voice interaction model and all the mechanics of the offer and the transaction.</p> <p>Steven Arkonovich of <a href=";field-keywords=%22philosophical+creations%22" target="_blank">Philosophical Creations</a>, creator of Big Sky, reports that by presenting his premium content to the customer at just the right time, <strong>50% of people who are offered his in-skill product convert to make the purchase</strong>.</p> <p>“The strength of voice is that it is a very personal experience,” says Arkonovich. “Just as personalizing the experience to each user sets your skill apart from the rest, tailoring your upsell message and its timing to what your customer is looking for at that very moment is key to higher conversion rates.”</p> <p>Read more about Steven’s journey to optimize his monetized skill <a href="">here</a>.</p> <p>To get you started, we’ve compiled a few upsell best practices based on our observations and tips from developers who are making money with in-skill purchasing. There are three key components of an upsell: placement, frequency, and messaging.</p> <h2>1. Placement: Upsell Early</h2> <p>Upselling early allows you to proactively showcase your skill’s premium content to customers, rather than expecting them to discover it on their own. Use the data available to you in the developer console, such as average skill utterances and number of dialogs, to inform where you place the upsell in your premium skill.</p> <p>Sampat Biswas, developer of the <a href=";ie=UTF8&amp;qid=1545173577&amp;sr=1-3&amp;keywords=world+of+words" target="_blank">World of Words Game</a> skill, says, “Initially, my upsell was placed in level three of my game. However, I saw from average utterances and dialogs per customer data that customers were most engaged around level two. After moving my upsell placement to level two, I’ve seen more offers being delivered and an improvement in my conversion rate.”</p> <p>Similarly, Sanasar Hovsepian, developer of the <a href="" target="_blank">Smarty Pants Trivia</a> skill, discovered that notifying customers about his skill’s premium content early on helped drive higher sales.</p> <p>“Thinking about where I placed upsells within my skill has helped me increase the amount of in-skill purchases. As an example, the simple act of mentioning to users that there are premium options to purchase later on in my skill has helped drive my sales by an extra 21%.”</p> <h2>2. Frequency: Upsell Often</h2> <p>Unlike mobile, where customers can see a menu of in-app products at any time, a voice-first skill needs to remind customers that premium content is available for purchase. 
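</p> <p>Mechanically, each of those reminders is a single upsell directive returned from your skill code, which hands the conversation over to Alexa’s built-in purchase flow. As a rough, hedged sketch (the product ID, message, and token below are placeholders, not values from any real skill), an in-skill purchasing upsell is typically sent like this:</p> <pre> <code>{ &quot;type&quot;: &quot;Connections.SendRequest&quot;, &quot;name&quot;: &quot;Upsell&quot;, &quot;payload&quot;: { &quot;InSkillProduct&quot;: { &quot;productId&quot;: &quot;amzn1.adg.product.your-product-id&quot; }, &quot;upsellMessage&quot;: &quot;The premium pack adds ten more categories. Want to learn more?&quot; }, &quot;token&quot;: &quot;correlationToken&quot; } </code></pre> <p>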
Through our work with developers, we learned that one way to address this challenge is to increase your upsell frequency.</p> <p>While you might be concerned about negative customer experiences caused by frequent upsells, we’ve found that customers respond positively when products are presented to them frequently, as long as the placement and context are appropriate. It also helps to diversify your premium offerings: by offering different types of premium content, upselling often allows customers to choose which product is right for them.</p> <h2>3. Messaging: Upsell In-Context</h2> <p>A customer should know what they are being asked to purchase and why they’ll be delighted by the purchase. Contextualize both the type of product you’re offering and the wording you use to offer it. This will ensure that customers remain engaged with your skill, making their decision to purchase your product a seamless and natural experience within the context of your skill.</p> <h2>New Reporting and Upsell Metrics for Your Monetized Skills</h2> <p>We recently added two new upsell metrics to the developer console to help you optimize your upsell strategy: Upsell to Offer Conversion, which measures the percentage of customers who heard your upsell and agreed to hear the offer, and Upsell to Purchase Conversion, which measures the percentage of customers who heard your upsell and agreed to make a purchase. You can use these metrics to gauge the effectiveness of your upsell strategy and make enhancements. Learn more about how to use these metrics <a href="">here</a>.</p> <h2>More Resources – How to Promote Your Monetized Skill</h2> <p>To help customers discover delightful voice experiences, Amazon promotes high-quality monetized skills in the US Alexa Skills Store via Amazon marketing channels. To be eligible for this promotional placement, ensure your monetized skill meets our <a href="">eligibility requirements</a>. This valuable exposure could help accelerate the revenue you earn. Follow the guidelines in <a href="">this checklist</a> to ensure your monetized Alexa skill is eligible for Amazon promotion.</p> <p>In addition to Amazon-owned marketing channels, we encourage you to promote your skills within your own networks. <a href="">Follow these tips</a> to make your skill more discoverable both in the Alexa Skills Store and through your existing network.</p> <p>Questions? Attend our <a href="" target="_blank">Office Hours</a> on Twitch (no sign-up required) or chime in on our <a href="" target="_blank">developer forums</a>.</p>