Alexa Blogs Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2019-01-15T18:59:13+00:00 Apache Roller /blogs/alexa/post/a7bb4a16-c86b-4019-b3f9-b0d663b87d30/new-method-for-compressing-neural-networks-better-preserves-accuracy New Method for Compressing Neural Networks Better Preserves Accuracy Larry Hardesty 2019-01-15T14:00:00+00:00 2019-01-15T14:04:56+00:00 <p>By compressing the huge lookup tables that list &quot;embeddings&quot;, or vector representations of individual words, a new system can shrink neural-network&nbsp;models by up to 90%, with minimal effect on accuracy.</p> <p><sup><em>Rahul Goel cowrote this post with&nbsp;Anish Acharya</em></sup></p> <p>Neural networks have been responsible for most of the top-performing AI systems of the past decade, but they tend to be big, which means they tend to be slow. That’s a problem for systems like Alexa, which depend on neural networks to process spoken requests in real time.</p> <p>In natural-language-understanding (NLU) applications, most of a neural network’s size comes from a huge lookup table that correlates input words with “embeddings.” An embedding is a large vector (usually a sequence of 300 numbers) that captures information about a word’s meaning.</p> <p>In a <a href="" target="_blank">paper</a> that we and our colleagues are presenting at the 33rd conference of the Association for the Advancement of Artificial Intelligence (AAAI), we describe a new method for compressing embedding tables that compromises the NLU network’s performance less than competing methods do.</p> <p>In one set of experiments, for instance, we showed that our system could shrink a neural network by 90 percent while reducing its accuracy by less than 1%. At the same compression rate, the best prior method reduced the accuracy by about 3.5%.</p> <p>The ability to compress NLU models means that, as Alexa learns to perform more and more complex tasks, she can continue to deliver responses in milliseconds. 
It also means that Alexa’s skill base can continue to expand unfettered. Alexa currently supports more than 70,000 third-party skills, with thousands more being added every month. Compression means that those skills’ NLU models can be stored efficiently.</p> <p>In our experiments, we used a set of pretrained word embeddings called GloVe. Like other popular embeddings, GloVe assesses words’ meanings on the basis of their co-occurrence with other words in huge bodies of training data. It then represents each word as a single point in a 300-dimensional space, such that words with similar meanings (similar co-occurrence profiles) are grouped together.</p> <p>NLU systems often benefit from using such pretrained embeddings, because doing so lets them generalize across conceptually related terms. (It could, for instance, help a music service learn that the comparatively rare instruction “Play the track ‘Roadrunner’” should be handled the same way as the more common instruction “Play the song ‘Roadrunner’.”) But it’s usually possible to improve performance still further by fine-tuning the embeddings on training data specific to the task the system is learning to perform.</p> <p>In previous work, NLU researchers had taken a huge lookup table, which listed embeddings for about 100,000 words, reduced the dimension of the embeddings from 300 to about 30, and used the smaller embeddings as NLU system inputs.</p> <p>We improve on this approach by integrating the embedding table into the neural network in such a way that it can use task-specific training data not only to fine-tune the embeddings but to customize the compression scheme as well.</p> <p>To reduce the embeddings’ dimensionality, we use a technique called singular-value decomposition.
Singular-value decomposition (SVD) produces a lower-dimensional projection of points in a higher-dimensional space, kind of the way a line drawing is a two-dimensional projection of objects in three-dimensional space.</p> <p><img alt="Projection.jpg" src="" style="display:block; height:333px; margin-left:auto; margin-right:auto; width:500px" /></p> <p style="text-align:center"><sub><em>Singular-value decomposition projects high-dimensional data into a lower-dimensional space, much the way a three-dimensional object can be projected onto a two-dimensional plane.</em></sub></p> <p>The key is to orient the lower-dimensional space so as to minimize the distance between the points and their projections. Imagine, for instance, trying to fit a two-dimensional plane to a banana so as to minimize the distance between the points on the banana’s surface and the plane. A plane oriented along the banana’s long axis would obviously work better than one that cut the banana in half at the middle. Of course, when you’re projecting 300-dimensional points onto a 30-dimensional surface, the range of possible orientations is much greater.</p> <p>We use SVD to break our initial embedding matrix into two smaller embedding matrices. Suppose you have a matrix that is 10,000 rows long (representing a lexicon of 10,000 words) and 300 columns wide (representing a 300-dimensional vector for each word). You can break it into two matrices, one of which is 10,000 rows long and 30 columns wide, and the other of which is 30 rows long and 300 columns wide. This reduces the parameter count from 10,000 x 300 to ((10,000 x 30) + (30 x 300)), a reduction of almost 90%.</p> <p>We represent one of these matrices as one layer of a neural network and the second matrix as the layer above it. Between the layers are connections that have associated “weights,” which determine how much influence the outputs of the lower layer have on the computations performed by the higher one.
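To make the arithmetic concrete, here is a minimal NumPy sketch of the factorization step; it is our own illustration of the idea rather than the code from the paper, with matrix sizes matching the example above:

```python
import numpy as np

vocab_size, embed_dim, rank = 10_000, 300, 30

# Stand-in for a pretrained embedding table (e.g., GloVe).
rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, embed_dim))

# Truncated SVD: keep only the top `rank` singular directions.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]  # 10,000 x 30 matrix
B = Vt[:rank, :]            # 30 x 300 matrix

# A @ B is the best rank-30 approximation of E, with far fewer parameters.
original = vocab_size * embed_dim  # 3,000,000
compressed = A.size + B.size       # 309,000, an ~90% reduction
```

In the network, A then plays the role of the first (embedding) layer's weights and B the layer above it, which is what allows training to keep adjusting them.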
The training process keeps readjusting those weights, trying to find settings that reduce the projection distance still further.</p> <p>In our paper, we also describe a new procedure for selecting the network’s “learning rate”. The relationship between the weight settings of the entire network and the network’s error rate can be imagined as a landscape with peaks and valleys. Each point in the landscape represents a group of weight settings, and its altitude represents the corresponding error rate.</p> <p>The goal is to find a group of weights that correspond to the bottom of one of the deepest valleys, but we can’t view the landscape as a whole; all we can do is examine individual points. At each point, however, we can calculate the slope of the landscape, and the standard procedure for training a neural network is to continually examine points that lie in the downhill direction from the last point examined.&nbsp;</p> <p>Every time you select a new point, the question is how far in the downhill direction to leap, a metric called the learning rate. A recent approach to choosing the learning rate is the cyclical learning rate, which steadily increases the leap length until it hits a maximum, then steadily steps back down to a minimum, then back up to the maximum, and so on, until further exploration no longer yields performance improvements.</p> <p>We vary this procedure by decreasing the maximum leap distance at each cycle, then pumping it back up and decreasing it again. The idea is that the large leaps help you escape local minima — basins at the tops of mountains rather than true valleys. 
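As a sketch, the schedule can be written as a function of the training step. This is our own illustrative implementation: the triangular shape, cycle length, and halving factor are assumptions, not the paper's exact recipe.

```python
def cyclically_annealed_lr(step, cycle_len=1000, lr_min=1e-4,
                           lr_max=1e-2, decay=0.5):
    """Triangular cyclical learning rate whose peak shrinks each cycle."""
    cycle = step // cycle_len
    peak = lr_min + (lr_max - lr_min) * decay ** cycle  # annealed maximum
    pos = (step % cycle_len) / cycle_len                # position within cycle
    tri = 2 * pos if pos < 0.5 else 2 * (1 - pos)       # rise, then fall
    return lr_min + (peak - lr_min) * tri
```

The first cycle still reaches the full maximum, while each later cycle peaks at a progressively smaller value; the `decay` factor implements the tapering.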
But tapering the maximum leap distance reduces the chance that when you’ve found a true valley and have started down its slope, you’ll inadvertently leap out of it.</p> <p><img alt="Learning_rate_comparison_(1).jpg" src="" style="display:block; height:169px; margin-left:auto; margin-right:auto; width:550px" /></p> <p style="text-align:center"><sub><em>A comparison of the learning-rate-selection strategies adopted<br /> in the cyclical learning rate (left) and the cyclically annealed learning rate (right).</em></sub></p> <p>We call this technique the cyclically annealed learning rate, and in our experiments, we found that it led to better performance than either the cyclical learning rate or a fixed learning rate.</p> <p>To evaluate our compression scheme, we compared it to two alternatives. One is the scheme we described before, in which the embedding table is compressed before network training begins. The other is simple quantization, in which all of the values in the embedding vector — in this case, 300 — are rounded to a limited number of reference values. So, for instance, the numbers 75, 83, and 87 might all become 80. 
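A minimal sketch of this kind of rounding (our own illustration, using a deliberately coarse 4-bit grid so that nearby values collapse onto the same reference value):

```python
import numpy as np

def quantize(values, bits=4):
    """Round floats onto 2**bits evenly spaced reference values."""
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / (2 ** bits - 1)
    codes = np.round((values - lo) / scale).astype(np.uint8)  # small int codes
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return lo + codes * scale

v = np.array([75.0, 83.0, 87.0, 10.0, 200.0], dtype=np.float32)
codes, lo, scale = quantize(v)
approx = dequantize(codes, lo, scale)  # 83 and 87 land on the same value
```

Each entry is now stored as a small integer code plus a shared offset and scale, at the cost of a bounded rounding error.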
This can reduce, say, 32-bit vector values to 16 or 8 bits each.</p> <p>We tested all three approaches across a range of compression rates, on different types of neural networks, using different data sets, and we found that in all instances, our approach outperformed the others.</p> <p><em>Anish Acharya is an applied scientist, and Rahul Goel is a machine learning scientist, both in the Alexa AI group.</em></p> <p><a href="" target="_blank"><strong>Paper</strong></a>: &quot;Online Embedding Compression for Text Classification using Low Rank Matrix Factorization&quot;</p> <p><strong><a href="" target="_blank">Alexa science</a></strong></p> <p><strong>Acknowledgments</strong>: <a href="">Angeliki Metallinou</a>, Inderjit Dhillon</p> <p><strong>Related</strong>:</p> <ul> <li><a href="" target="_blank">With New Data Representation Scheme, Alexa Can Better Match Skills to Customer Requests</a></li> <li><a href="" target="_blank">Shrinking Machine Learning Models for Offline Use</a></li> <li><a href="" target="_blank">How Alexa Can Use Song-Playback Duration to Learn Customers’ Preferences</a></li> <li><a href="" target="_blank">Amazon at AAAI</a></li> </ul> <p><em><sub>Projection image adapted from <a href="" target="_blank">Michael Horvath</a> under the&nbsp;<a href="">CC BY-SA 4.0</a>&nbsp;license</sub></em></p> /blogs/alexa/post/02732c1d-bab8-41fa-8afe-30d02d9a4280/hear-it-from-a-skill-builder-how-to-design-and-validate-an-alexa-skill-idea-in-5-days Hear It from a Skill Builder: How to Design and Validate an Alexa Skill Idea in 5 Days Jennifer King 2019-01-14T15:00:00+00:00 2019-01-14T15:00:00+00:00 <p><strong><em>Editor's Note:</em></strong><em> 
What if you’re tasked with prototyping a potential skill idea and you only have five business days to get it done? I’ve asked Alex Baxevanis, Experience Director at Webcredible, to share how he and his team have designed a sprint structure that condenses the prototyping phase of a skill down to a five-day process. While there is no “correct” process to prototype, hopefully the below will help focus your efforts next time you want to validate a new voice idea. </em></p> <p>Designing for a new technology can always bring a load of exciting ideas, alongside many questions and unknowns. Many people have asked us where to start with designing an Alexa skill, and we think we’ve found a great method that anyone can use to design a voice experience, from ideation to validation.</p> <p>You’ve probably heard of the “design sprint” method popularized by venture capital firm GV. A design sprint is a time-boxed, five-day process aimed at refining an idea and increasing its chance of success when it hits the market. It felt like just the right fit for exploring voice interactions. Whether you’re a skill-building hobbyist, a professional developer, or part of a seasoned development team, this process should help you structure your prototyping phase and consider the various steps involved.</p> <p>Here’s an overview of the voice design sprint and how we’ve used it to help clients design a new skill idea.</p> <h2>Day 1: Understand and Ideate</h2> <p>The first day of a voice design sprint starts by making sure that the team you’re working with, whether that’s a team within your own company or a group of people you’ve brought together for a brainstorm, understands how voice services like Alexa work in practice. 
Bring Alexa-enabled devices for participants to play with and get familiar with Alexa skills that would be relevant to the experience you're trying to build.</p> <p>During our workshops with clients, we’ll also hear from subject matter experts on the customer journey, and the information and interactions that they could deliver through voice.</p> <p>Where possible, we’ll always look for examples where people are already interacting with a brand through voice. This includes listening in to customer service calls, or shadowing staff as they talk to customers. For example, when we worked with the Virgin Trains team on their Alexa skill, we went to train stations to hear first-hand (and note down) how exactly customers were wording their questions, and how Virgin Trains staff were responding.</p> <p>We close the day by writing out as many ideas as possible, inspired both by the possibilities of voice and our learnings from customers. At this stage, we don’t set any restrictions. All we ask is that participants note down for each idea:</p> <ul> <li>Who their user might be (e.g. a train traveler)</li> <li>What voice could offer (e.g. purchasing tickets)</li> <li>In what context people might use voice (e.g. at home)</li> <li>What the final outcome or benefit for the customer is (e.g. catching a last-minute train)</li> </ul> <h2>Day 2: Narrow Down the Idea and Start Mapping</h2> <p>Armed with an initial set of ideas, the second day is focused on whittling down the list to those that might best work for voice. We’ve developed a checklist based on our experience and <a href="" target="_blank">working with the Alexa team</a>. We get all sprint participants to go through <a href="">the checklist</a> and see how their ideas fare.
In some cases, it’s a clear “yes” or “no.” In others it’s a “maybe,” which means we should definitely test our assumptions when we prototype.</p> <p><img alt="" src="" style="display:block; height:325px; margin-left:auto; margin-right:auto; width:576px" /></p> <p>The team gets to vote and collectively agree on one or two ideas to pursue. Then we get to work, <a href="">writing down scripts and mapping the flows of completing a task through voice</a>. To get people used to the format, we usually present a couple of ready-made examples for interactions that everyone can imagine, such as buying cinema tickets, or a food recommendation service like The Foodie below.</p> <p><img alt="" src="" style="display:block; height:393px; margin-left:auto; margin-right:auto; width:600px" /></p> <p>Before long, we get an idea for how simple or complex each use case can be, and how the ideal scenario might differ from an edge case. For example, in the case of a food recommendation skill, we consider how the experience will differ if users ask for something that the skill supports (e.g. filtering by dietary constraints) versus something not supported (e.g. getting the calorie count per person).</p> <p>However, words on paper never give an accurate view of how the same words might sound when spoken aloud. With that in mind, as soon as people have completed their first scenario we get them to “role play” it. One person plays the role of the user and the other pretends to be “Alexa,” taking turns to read their part of the script aloud.</p> <p>When people hear themselves saying what they’ve written down, they quickly understand what <a href="">sounds like a real-life conversation</a> and what sounds unnatural.
They then spend the rest of the day iterating on their script and role-playing it again, until it sounds engaging and conversational.</p> <h2>Day 3: Prototype the Voice Experience</h2> <p>With a few scenarios mapped out, it’s then time to scale up and build a working prototype of the ideas we’re exploring.</p> <p>Whilst it’s certainly possible to continue testing and iterating by role-playing alone, we’ve always learnt even more by trying our ideas on a real Alexa-enabled device. For example, we get a feel for what to do when <a href="">something isn’t recognized</a>, and for how our answers sound when read in the voice used by Alexa (is it too fast, too slow, or harder to understand when read out by a synthetic voice?).</p> <p>Fortunately, there are now many <a href="">prototyping tools</a> that make it really easy to turn an idea into a working Alexa skill, without doing any coding, including services like <a href="" target="_blank">Voice Apps</a> and <a href="" target="_blank">Voiceflow</a>. Whenever you’re prototyping, make sure to keep the <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_guide-page_text-link&amp;sc_segment=visitors&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">situational design guidelines</a> in mind.</p> <p>Again, we walk the sprint team through a working prototype so that they can get a glimpse of the tool’s capabilities before they get going. Be wary of just using flow charts when prototyping for voice, as conversations don’t always flow as smoothly as you anticipate. This is where situational design comes in handy.
Watch <a href="" target="_blank">this recording</a> from the Alexa team for a primer on situational design.</p> <p>We’ve found that even for complete beginners, one day is enough to learn how Voice Apps or Voiceflow work and create a testable prototype of one or more flows. It helps if people work in parallel, with one person creating the skill in the tool and others supporting by collecting sample data to use in the prototype and thinking of all the possible synonyms and <a href="">sample utterances</a> that users might want to say.</p> <p>Toward the end of the day, the team also creates a discussion guide, listing all the questions and scenarios to be used when testing the idea with real customers.&nbsp;</p> <h2>Day 4: User Testing</h2> <p>We dedicate the fourth day of the sprint to getting our idea in front of real customers. This means people who haven’t been involved in the development of the prototype, but could use our idea in real life.</p> <p>Before we even start with the sprint, we’ll have recruited and lined up around six people for that day. We may not know exactly what we’ll show them, but we can at least find people with some affinity to the domain we’re exploring. For example, if we’re prototyping voice interactions around buying cinema tickets, we’ll recruit a number of regular cinema-goers with a variety of film preferences.</p> <p>We also make sure everyone we recruit has used voice technology, like an Echo device, before so we spend the time testing our idea, not bringing them up to speed with how the service works.</p> <p>On the user testing day, we’ll bring people into a usability testing lab (or similar quiet room) and ask them to try out interacting with our prototype Alexa skill on a real device. We’re experts at running usability testing on a variety of platforms, but we’ve noticed that when testing with voice we had to slightly adapt our approach. 
For example:</p> <ul> <li>Whilst people can try a lot of things on a prototype of a website or app, voice interactions tend to be quite short. We schedule shorter sessions or we use the extra time to probe more into how participants use voice services in real life.</li> <li>Whilst we normally ask people to “think aloud” and explain what they’re doing while they use a website or app, they obviously can’t do the same while also talking to a voice service. We get them to tell us how the experience felt once they’ve finished a conversation.</li> <li>When testing a website, if a participant feels lost or clicks on the “wrong” button, we can easily intervene and put them back on track. It’s almost impossible for a moderator to intervene and take over a conversation with a voice service. If we see people repeatedly fail at something, we’ll give them a hint on what to say.</li> </ul> <p>We take lots of notes and record all the sessions (with participants’ permission), so we get a clear record of how easy or hard our prototypes were to use.</p> <h2>Day 5: Analyze and Plan Next Steps</h2> <p>We start the final day by going through all our notes, reflecting on what puzzled participants, and what they said that our prototype skill couldn’t handle. We’ll go through our notes and recordings and pick out the exact words and phrases that people used. Where possible, we use our findings to get the prototype to understand more real-life scenarios and give clearer responses.</p> <p>We then make a roadmap for future work required to properly build the skill and bring the voice experience to life. We’ll discuss, for example, what APIs are needed to get live data integrated and how we might keep testing it with customers to ensure we stay on the right track.</p> <p>Finally, we have a go at sketching a “landing page” for our skill, showing how we’d promote it to customers.
As there’s no way to “screenshot” a voice interaction, we think carefully about <a href="">how we can best sell the idea</a>, both internally and to customers browsing the Alexa Skills Store.</p> <p>There you have it! We’ve gone from zero to a validated Alexa skill idea in just five days. <a href="" target="_blank">Contact me</a> to learn more about the voice design sprint. To learn more about designing for voice, check out the <a href="">Alexa Design Guide</a>.</p> <h2>Related Content</h2> <ul> <li><a href="">Blog Series: 10 Things Every Alexa Skill Should Do</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_guide-page_text-link&amp;sc_segment=visitors&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">Guide: How to Shift from Screen-First to Voice-First Design</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_standoutskill_guide-page_text-link&amp;sc_segment=visitors&amp;sc_keywords=standoutskill&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">Guide: How to Design a Voice User Interface</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=wb_acquisition&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_detail=Guide&amp;sc_funnel=Convert&amp;sc_country=WW&amp;sc_medium=Owned_WB_wb_acquisition_ASK_Content_Guide_Convert_WW_visitors_standoutskill_guide-page_text-link&amp;sc_segment=visitors&amp;sc_keywords=standoutskill&amp;sc_place=guide-page&amp;sc_trackingcode=text-link" target="_blank">Guide: Tried-and-Tested Skill-Building Tips from Top Alexa Developers</a></li> 
<li><a href="">Situational Design: Build Adaptable Voice-First Interactions</a></li> <li><a href="">Situational Design: Individualize Your Entire Interaction</a></li> <li><a href="">Situational Design: Make Your Voice-First Interactions Accessible</a></li> <li><a href="">Situational Design: Talk with Your Customers, Not at Them</a></li> </ul> /blogs/alexa/post/55a49999-21b1-4f66-a80a-cc9034ccc82e/alexa-skill-teardown-building-the-interaction-model-for-the-space-explorer-skill Alexa Skill Teardown: Building the Interaction Model for the Space Explorer Skill Jennifer King 2019-01-11T16:05:03+00:00 2019-01-11T21:11:08+00:00 <p style="text-align:center"><img alt="" src="" style="height:357px; width:800px" /></p> <p>Get a deep dive on our new multimodal sample skill called Space Explorer. We’ll walk you through how we built the interaction model using the Alexa Presentation Language.</p> <p>In my previous post about the <a href="">Space Explorer sample Alexa skill</a>, I talked about how we approached the design for Space Explorer. I also discussed the overall goal of the project, the philosophy that guided our decision making, why we started with voice, and our thoughts on adapting the experience to suit the device.</p> <p>This time around, I'll talk more about how we turned that design into reality using the new <a href="">Alexa Presentation Language (APL)</a>, the Alexa Developer Portal, and AWS Lambda.</p> <h2>Building the Interaction Model</h2> <p>We started off by crafting our interaction model in the Alexa Developer Portal. Using the scripts we created as our guide (covered in the <a href="">first post in this series</a>), we started to create the various intents we knew we needed for users to navigate through the skill.</p> <p>Before building out the rich visuals you see in the final experience, we started by scaffolding all of the layouts using simple text-based labels for each of our target views. 
We created a minimal set of utterances to support our intended navigation, and confirmed that the correct views were being served.</p> <p style="text-align:center"><img alt="" src="" style="display:block; height:500px; margin-left:auto; margin-right:auto; width:800px" /></p> <p style="text-align:center"><em>Example of the basic layouts used early in development.</em></p> <p>Once the flows were complete, we spent some time expanding the utterances. We knew these basic utterances were only a starting point, so we added as many logical variations as we could think of to ensure we were covering as many scenarios as possible. For example, in addition to handling <em>“Take me to Jupiter,”</em> we accounted for <em>“Go to Jupiter”</em> and <em>“Jupiter”</em> as well. But we also knew we would never be able to think of all the possibilities on our own. This is where user testing is a great tool. We reached out to some of our colleagues and asked them to play with the voice interactions in the skill and try to navigate around. Their feedback led us to handle a few more utterances than the original set we considered, resulting in a skill that is more resilient than our initial implementation.</p> <p>When the utterances were robust enough, we looked at how we could refine and make them easier to use in our back end when the time came. Enter slots. Slots are a great way to reduce the number of intents you need to handle on the back end, and make handling the target intent more convenient. Essentially, slots work like variables, with SlotTypes that map to predefined datasets (e.g. movies, actors, cities).
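For instance, an intent that captures a movie title with a built-in slot type might be declared in the interaction model roughly like this (a hand-written sketch for illustration, not a snippet from the Space Explorer skill):

```json
{
  "name": "PlayMovieIntent",
  "slots": [
    { "name": "movie", "type": "AMAZON.Movie" }
  ],
  "samples": [
    "play {movie}",
    "play the movie {movie}"
  ]
}
```

On the back end, one handler can then read the movie slot value instead of needing a separate intent per title.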
Additionally, you can define custom SlotTypes that allow you to limit the set of accepted values for a given slot.</p> <p><img alt="" src="" style="display:block; height:530px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>For example, we created a custom slot type called celestial_objects and filled it with all the available planets and dwarf planets we wanted to make navigable. When a customer says either <em>“Alexa, take me to Jupiter,”</em> or <em>“Alexa, what's in Jupiter's atmosphere,”</em> Alexa knows the slot value, and will always return the single, lowercase value “jupiter” from the celestial_objects type. By predefining a collection of available slot values, we have limited the set of terms that Alexa has to map to, increasing the odds of correctly matching an utterance.</p> <p>The last component of the voice design we implemented was the screen-based intents. These are the intents that let a customer navigate the screen content, such as titles or ordinals, using their voice. Since these are not natively handled by APL yet, we had to implement them ourselves. For example, when presenting customers with lists of data, native Alexa experiences allow for selection using the item number or title, so we created custom intents to mimic that functionality.</p> <p>If you need help getting started with your interaction model, take a look at the Related Resources at the end of this post.</p> <h2>Translating Designs into APL</h2> <p>With our scaffolded skill functioning, it was time to turn our attention to the visuals. APL, as we've mentioned before, gave us the freedom to be as creative with the layouts as we wanted, which meant we needed to figure out how to translate that creativity into actual code. We also needed to make sure that the designs were clear enough to guarantee we used the right components.</p> <p>Just like with any UI development, our designs resulted in a series of redline-style documents to help guide the process.
In addition to standard font-sizing and spacing guidelines, we made sure that we specifically included the touch target boundaries. This ensured we started off on the right track with components and minimized the amount of backtracking we had to do later on.</p> <p style="text-align:center"><img alt="" src="" style="display:block; height:450px; margin-left:auto; margin-right:auto; width:800px" /></p> <p style="text-align:center"><em>Example of the redline layers indicating touch targets</em></p> <h2>Importing Pre-Defined Style Packages</h2> <p>Throughout this skill, we're importing the <strong>alexa-styles</strong> and <strong>alexa-layouts</strong> packages from Alexa, as well as two additional custom packages served from our own CDN. The styles package provides developers a number of pre-built styles for text, spacing, colors, and more that have been developed to adapt to different viewport resolutions and viewing distances. In the layouts package, developers can find pre-built layout components developed by Amazon with the same adaptability as the styles package. We've used both extensively to make our development easier and we strongly recommend every developer do so, as well. For more information on what’s available, take a look at the <a href="">Alexa Packages Overview documentation</a>.</p> <p>Below is an example import block using the Alexa packages and custom packages:</p> <pre> <code>... &quot;import&quot;: [ { &quot;name&quot;: &quot;alexa-styles&quot;, &quot;version&quot;: &quot;1.0.0&quot; }, { &quot;name&quot;: &quot;alexa-layouts&quot;, &quot;version&quot;: &quot;1.0.0&quot; }, { &quot;name&quot;: &quot;layouts&quot;, &quot;version&quot;: &quot;1.0.0&quot;, &quot;source&quot;: &quot;; }, { &quot;name&quot;: &quot;styles&quot;, &quot;version&quot;: &quot;1.0.0&quot;, &quot;source&quot;: &quot;; } ] ... 
</code></pre> <p>For example, in the following snippet from our <a href="" target="_blank">custom layout package</a> you can see how we use the AlexaHeader and AlexaFooter throughout the skill:</p> <pre> <code>... &quot;ZoneList&quot;: { &quot;parameters&quot;: [ &quot;backgroundImage&quot;, &quot;title&quot;, &quot;logo&quot;, &quot;hintText&quot;, &quot;listData&quot; ], &quot;items&quot;: [ { &quot;when&quot;: &quot;${@viewportProfile != @hubRoundSmall}&quot;, &quot;type&quot;: &quot;Container&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;direction&quot;: &quot;column&quot;, &quot;items&quot;: [ ... { &quot;type&quot;: &quot;AlexaHeader&quot;, &quot;headerTitle&quot;: &quot;${title}&quot;, &quot;headerBackButton&quot;: 1, &quot;headerNavigationAction&quot;: &quot;backEvent&quot; }, ... { &quot;type&quot;: &quot;AlexaFooter&quot;, &quot;hintText&quot;: &quot;${hintText}&quot; } ] }, ... ] } ... </code></pre> <p>Notice the <strong>hintText</strong> property on the AlexaFooter component. Using this property with a data transform, we can easily create a properly formatted Alexa hint that references the device's active wake word. Here's an example of how to use the textToHint transform in your APL datasources block:</p> <pre> <code>&quot;datasources&quot;: { &quot;data&quot;: { &quot;type&quot;: &quot;object&quot;, &quot;properties&quot;: { &quot;hintText&quot;: &quot;take me to Venus.&quot; }, &quot;transformers&quot;: [ { &quot;inputPath&quot;: &quot;hintText&quot;, &quot;outputName&quot;: &quot;hint&quot;, &quot;transformer&quot;: &quot;textToHint&quot; } ] } } </code></pre> <p>If the active wake word was “Alexa,” this would output the property <strong>hint</strong>, with the value <em>'Try, “Alexa, take me to Venus.”'</em> For more information on this and other transforms, check out the <a href="">tech docs</a>.</p> <p>We've also created our own custom packages for this skill.
This gave us more freedom to reuse the same code across the skill and allowed us to circumvent the directive size limit for skills. This was especially important because the size cap includes datasources, and a response can quickly outgrow the 24 KB ceiling.</p> <h2>Accommodating Different Viewports</h2> <p>APL is designed to minimize the number of layouts you need to create for your skills, but there are some key things we needed to do to make that as simple as possible. First, we primarily used percentage- or viewport-based units for most of our dimensions. That ensures that spacing and positioning aren't adversely impacted when the viewport dimensions change.</p> <p>Second, we took advantage of APL's built-in conditional evaluation to show or hide elements, change dimension values, or swap layouts entirely based on certain characteristics. This meant that we could show more information on larger displays, free up space on smaller displays, and drastically alter the layout for specific devices only. For instance, here's what the APL for the main solar system screen looks like:</p> <pre> <code>... &quot;mainTemplate&quot;: { &quot;parameters&quot;: [&quot;payload&quot;], &quot;item&quot;: { &quot;type&quot;: &quot;Frame&quot;, &quot;backgroundColor&quot;: &quot;black&quot;, &quot;items&quot;: [ { &quot;when&quot;: &quot;${@viewportProfile == @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystemSmallRoundHub&quot;, &quot;data&quot;: &quot;${}&quot; }, { &quot;when&quot;: &quot;${@viewportProfile != @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystem&quot;, &quot;data&quot;: &quot;${}&quot; } ] } } ... </code></pre> <p>In the above example, we use conditional statements to determine which layout to display based on a resource called viewportProfile, found in the <strong>alexa-styles</strong> package.
This resource also uses conditional evaluation to change its value based on the viewport characteristics sent by the device.</p> <h2>Using APL Components to Create Scalable Graphic Elements</h2> <p>One of the exciting things about APL is the flexibility to look beyond traditional layouts. Much like HTML and CSS, it offers endless possibilities for creating truly dynamic and interesting elements. For Space Explorer, there were a handful of screens that challenged us to use APL in more interesting ways. Among those were the size comparison, distance, and element views.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>The size comparison view uses variably sized circles to represent the comparative sizing of different planets in our solar system. This effect could have been achieved using images, but that would not have given us the flexibility we needed to scale (and could have introduced latency). As an alternative, we created the circles using APL Frames, dynamically sizing, coloring, and positioning them based on the characteristics of each planet.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>The distance screen uses a similar methodology.
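</p> <p>Both screens boil down to drawing circles without images. In APL, a circle is simply a Frame whose borderRadius is at least half its width, so each planet can be drawn with something like the following sketch (the dimensions and data-binding names here are illustrative, not taken from the skill's source):</p> <pre> <code>{ &quot;type&quot;: &quot;Frame&quot;, &quot;width&quot;: &quot;${planet.relativeSize * 40 + 'vh'}&quot;, &quot;height&quot;: &quot;${planet.relativeSize * 40 + 'vh'}&quot;, &quot;borderRadius&quot;: &quot;${planet.relativeSize * 20 + 'vh'}&quot;, &quot;backgroundColor&quot;: &quot;${planet.color}&quot; } </code></pre> <p>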
To create the comparison graphics in this view, we built the circles and bars using the following layout from our custom layout package:</p> <pre> <code>&quot;DistanceGraphic&quot;: { &quot;parameters&quot;: [&quot;color&quot;, &quot;name&quot;, &quot;width&quot;, &quot;active&quot;, &quot;test&quot;], &quot;items&quot;: [ { &quot;type&quot;: &quot;TouchWrapper&quot;, &quot;width&quot;: &quot;${width + '%'}&quot;, &quot;height&quot;: &quot;@indicatorSize&quot;, &quot;spacing&quot;: &quot;@indicatorSpacing&quot;, &quot;onPress&quot;: { &quot;type&quot;: &quot;SendEvent&quot;, &quot;arguments&quot;: [&quot;distanceEvent&quot;, &quot;${name}&quot;] }, &quot;items&quot;: [ { &quot;type&quot;: &quot;Frame&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;borderRadius&quot;: &quot;10dp&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;inheritParentState&quot;: true, &quot;style&quot;: &quot;backgroundWithFocusPress&quot;, &quot;item&quot;: { &quot;type&quot;: &quot;Container&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;direction&quot;: &quot;row&quot;, &quot;alignItems&quot;: &quot;center&quot;, &quot;opacity&quot;: &quot;${active ? 1 : 0.3}&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Frame&quot;, &quot;height&quot;: &quot;@indicatorStroke&quot;, &quot;grow&quot;: 1, &quot;backgroundColor&quot;: &quot;${color}&quot; }, { &quot;type&quot;: &quot;Frame&quot;, &quot;height&quot;: &quot;@indicatorSize&quot;, &quot;width&quot;: &quot;@indicatorSize&quot;, &quot;borderRadius&quot;: &quot;@indicatorRadius&quot;, &quot;borderWidth&quot;: &quot;@indicatorStroke&quot;, &quot;borderColor&quot;: &quot;${color}&quot;, &quot;backgroundColor&quot;: &quot;${active ? color : 'transparent'}&quot; } ] } } ] } ] } </code></pre> <p>As you can see, the elements rely on percentage units to scale accordingly, which made both responsive layouts and dynamic sizing easier. 
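</p> <p>The width parameter itself can be derived with APL data-binding arithmetic when the layout is instantiated. For example, a caller might pass a percentage computed from each planet's distance (the property names here are hypothetical, not from the skill's data source):</p> <pre> <code>{ &quot;type&quot;: &quot;DistanceGraphic&quot;, &quot;color&quot;: &quot;${planet.color}&quot;, &quot;name&quot;: &quot;${planet.name}&quot;, &quot;active&quot;: &quot;${planet.name == selectedPlanet}&quot;, &quot;width&quot;: &quot;${(planet.distance / maxDistance) * 100}&quot; } </code></pre> <p>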
We also use conditional statements to fill in the circles and raise the opacity of the active elements.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>The atmospheric composition designs used a periodic table element style. Again, we could have achieved this with images, but APL allowed us to ensure the scaling, placement and crispness of the graphics would be consistent across all devices.</p> <pre> <code>&quot;Element&quot;: { &quot;parameters&quot;: [&quot;element&quot;, &quot;notation&quot;, &quot;title&quot;, &quot;percentage&quot;, &quot;color&quot;, &quot;spacing&quot;], &quot;items&quot;: [ ... { &quot;type&quot;: &quot;Container&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Container&quot;, &quot;width&quot;: &quot;${@isHubLandscapeSmall ? '18vw' : '200dp'}&quot;, &quot;height&quot;: &quot;${@isHubLandscapeSmall ? '18vw' : '200dp'}&quot;, &quot;alignItems&quot;: &quot;center&quot;, &quot;justifyContent&quot;: &quot;spaceAround&quot;, &quot;spacing&quot;: &quot;${spacing}&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Frame&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;top&quot;: 0, &quot;right&quot;: 0, &quot;bottom&quot;: 0, &quot;left&quot;: 0, &quot;borderWidth&quot;: &quot;2dp&quot;, &quot;borderColor&quot;: &quot;#FAFAFA&quot;, &quot;backgroundColor&quot;: &quot;black&quot;, &quot;borderRadius&quot;: &quot;8dp&quot;, &quot;opacity&quot;: 0.4 }, { &quot;when&quot;: &quot;${element != 'other'}&quot;, &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;${@viewportProfile == @hubRoundSmall || @viewportProfile == @hubLandscapeSmall? 
'textStyleDisplay3Alt' : 'textStyleDisplay4Alt' }&quot;, &quot;color&quot;: &quot;${color}&quot;, &quot;text&quot;: &quot;${notation}&quot;, &quot;height&quot;: &quot;120dp&quot;, &quot;textAlignVertical&quot;: &quot;center&quot; }, { &quot;when&quot;: &quot;${element == 'other'}&quot;, &quot;type&quot;: &quot;Image&quot;, &quot;source&quot;: &quot;;, &quot;width&quot;: &quot;49dp&quot;, &quot;height&quot;: &quot;83dp&quot;, &quot;scale&quot;: &quot;best-fit&quot; }, { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleDetail&quot;, &quot;textAlign&quot;: &quot;center&quot;, &quot;text&quot;: &quot;${title}&quot; } ] }, { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleDisplay4&quot;, &quot;textAlign&quot;: &quot;center&quot;, &quot;spacing&quot;: 8, &quot;text&quot;: &quot;${percentage + '%'}&quot; } ] } ] } </code></pre> <p>To make sure the same component would adapt appropriately for larger displays, we created the elements to change form when the viewport characteristics were correct. You can see the full layout on our <a href="" target="_blank">GitHub repo</a>. Unfortunately, it just wasn't possible to create the donut graphs using APL elements alone, so we had to fall back to images for those assets.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" /></p> <p>For some screens, we had to be even more creative to achieve the effect the designs called for. The best example of that is the skill's launch screen. 
By creating a custom splash screen, we were able to launch the skill with a unique, branded experience while simultaneously masking the latency of loading images for our solar system view in the background.</p> <p><img alt="" src="" style="display:block; height:357px; margin-left:auto; margin-right:auto; width:800px" />To do that with APL's current features, we created a layout that layers an Image component on top of the main solar system layout, which itself sits atop a ScrollView with a single Text component positioned off screen. When we handle the LaunchRequest, a RenderDocument directive is returned to display the launch layout, accompanied by an ExecuteCommands directive with a SpeakItem command targeting the hidden ScrollView's Text component. This command has a delay built in, so that any loading that needs to occur happens before the command runs.</p> <p>Finally, we used the onScroll property of the ScrollView to tie the scroll position to the Image component's opacity, which resulted in the smooth fade effect we were after.</p> <p>Here's the final layout:</p> <pre> <code>{ &quot;parameters&quot;: [&quot;payload&quot;], &quot;item&quot;: { &quot;type&quot;: &quot;Container&quot;, &quot;direction&quot;: &quot;column&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;top&quot;: 0, &quot;bottom&quot;: 0, &quot;items&quot;: [ { &quot;type&quot;: &quot;ScrollView&quot;, &quot;width&quot;: &quot;100%&quot;, &quot;height&quot;: &quot;100%&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;onScroll&quot;: [ { &quot;type&quot;: &quot;SetValue&quot;, &quot;componentId&quot;: &quot;splashImage&quot;, &quot;property&quot;: &quot;opacity&quot;, &quot;value&quot;: &quot;${1 - (event.source.value * 2)}&quot; } ], &quot;item&quot;: [ { &quot;type&quot;: &quot;Container&quot;, &quot;paddingTop&quot;: &quot;100vh&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Text&quot;,
&quot;text&quot;: &quot;What would you like to explore?&quot;, &quot;opacity&quot;: &quot;0&quot;, &quot;id&quot;: &quot;splashScroller&quot;, &quot;paddingTop&quot;: &quot;100vh&quot;, &quot;speech&quot;: &quot;${}&quot; } ] } ] }, { &quot;type&quot;: &quot;Container&quot;, &quot;items&quot;: [ { &quot;when&quot;: &quot;${@viewportProfile == @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystemSmallRoundHub&quot;, &quot;data&quot;: &quot;${}&quot; }, { &quot;when&quot;: &quot;${@viewportProfile != @hubRoundSmall}&quot;, &quot;type&quot;: &quot;SolarSystem&quot;, &quot;data&quot;: &quot;${}&quot; } ] }, { &quot;type&quot;: &quot;Frame&quot;, &quot;id&quot;: &quot;splashImage&quot;, &quot;backgroundColor&quot;: &quot;black&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;top&quot;: 0, &quot;right&quot;: 0, &quot;bottom&quot;: 0, &quot;left&quot;: 0, &quot;item&quot;: [ { &quot;type&quot;: &quot;Container&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;justifyContent&quot;: &quot;center&quot;, &quot;alignItems&quot;: &quot;center&quot;, &quot;items&quot;: [ { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleDisplay1Alt&quot;, &quot;fontSize&quot;: &quot;20vh&quot;, &quot;fontWeight&quot;: &quot;100&quot;, &quot;text&quot;: &quot;SPACE&quot;, &quot;letterSpacing&quot;: &quot;6.6vw&quot; }, { &quot;type&quot;: &quot;Text&quot;, &quot;style&quot;: &quot;textStyleHeadline&quot;, &quot;fontSize&quot;: &quot;5.5vh&quot;, &quot;text&quot;: &quot;EXPLORER&quot;, &quot;fontWeight&quot;: &quot;800&quot; }, { &quot;type&quot;: &quot;Image&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;scale&quot;: &quot;best-fill&quot;, &quot;source&quot;: &quot;@landingImage&quot;, &quot;position&quot;: &quot;absolute&quot; } ] } ] } ] } } </code></pre> <h2>What's Next</h2> <p>With our voice and visual interactions built out, the next step is tying it all together. 
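</p> <p>As a preview, the launch behavior described above comes down to two directives in the LaunchRequest response: a RenderDocument for the splash layout, and an ExecuteCommands whose SpeakItem targets the hidden Text component after a delay. A trimmed sketch of that response fragment (the token and delay values are illustrative):</p> <pre> <code>&quot;directives&quot;: [ { &quot;type&quot;: &quot;Alexa.Presentation.APL.RenderDocument&quot;, &quot;token&quot;: &quot;launchToken&quot;, &quot;document&quot;: { ... }, &quot;datasources&quot;: { ... } }, { &quot;type&quot;: &quot;Alexa.Presentation.APL.ExecuteCommands&quot;, &quot;token&quot;: &quot;launchToken&quot;, &quot;commands&quot;: [ { &quot;type&quot;: &quot;SpeakItem&quot;, &quot;componentId&quot;: &quot;splashScroller&quot;, &quot;delay&quot;: 3000 } ] } ] </code></pre> <p>The ExecuteCommands token must match the RenderDocument token, and the componentId matches the id of the hidden Text component in the layout above.</p> <p>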
In our next post, we'll wrap up the Space Explorer deep dive by looking at how we used AWS Lambda to handle intents and user events, deliver directives, and manipulate our APL. Stay tuned.</p> <h2>Related Resources</h2> <ul> <li><a href="" target="_blank">Space Explorer Sample Code</a></li> <li><a href="">Alexa Presentation Language Technical Documentation</a></li> <li><a href="">10 Tips for Designing Alexa Skills with Visual Responses</a></li> <li><a href="">4 Tips for Designing Voice-First Alexa Skills for Different Alexa-Enabled Devices</a></li> <li><a href="">How to Design Visual Components for Voice-First Alexa Skills</a></li> <li><a href="">How to Get Started with the Alexa Presentation Language to Build Multimodal Alexa Skills</a></li> </ul> /blogs/alexa/post/73df5551-ad93-401c-8b57-d8a2c56c5ac4/localizing-your-alexa-skills-how-to-tailor-your-voice-experience-for-global-audiences Localizing Your Alexa Skills: How to Tailor Your Voice Experience for Global Audiences Jennifer King 2019-01-09T15:00:00+00:00 2019-01-09T15:00:00+00:00 <p>As Alexa expands to more countries and languages, you have more opportunities to make your skills available to a growing audience around the world. If you're ready to take your skill global, you'll first want to consider the best way to localize, or internationalize, the experience.</p> <p>As <a href="">Alexa expands to more countries and languages</a>, you have more opportunities to make your skills available to a growing audience around the world. If you're ready to take your skill global, you'll first want to consider the best way to localize, or internationalize, the experience.</p> <p>We've all had the experience of reading poorly translated instructions for a product made in another country. Oftentimes, these products are challenging to use, which may negatively impact your experience and trust in that product or brand.
Effectively handling translation and cultural differences when designing and building your Alexa skills for multiple regions is key to creating a positive and engaging experience for customers everywhere.</p> <p>The most important thing to recognize is that localization isn’t limited to just language. Localizing the experience means shifting how Alexa converses with the different customers using your skill, and using imagery and phrases appropriate for each country. When localizing your voice experience, consider features for different languages, regional differences, and the technical requirements of different target audiences. Think beyond your own native culture and language. Not only should you consider which countries you are planning to make your skill available in, but also which languages you will need to support for those countries. Also consider what level of translation or localization will be required.</p> <p>Designing and building your skill with the following best practices in mind will help reduce the resources required to localize your skill for new countries, and help your skill have broader appeal.</p> <h2>When Designing the <u>Voice Output</u> for Your Skill</h2> <ul> <li>Be mindful of long strings of nouns or adjectives, and of very long sentences that would work better as short ones. Long, complex sentences are difficult to translate, and difficult for customers to understand.</li> <li>Avoid colloquialisms, puns, or local jargon when they are not critical to the content of your skill. This general rule is especially important for localization, since other spoken languages may have no equivalent jargon.</li> <li>Make sure to define terms, and use them consistently throughout your skill.
If you present certain terms inconsistently, or if you don’t provide proper term definitions to those assisting in translating your voice experience, it will be difficult to provide quality translations for your customers.</li> <li>Keep in mind that different languages have different word orders. Each language's grammatical rules dictate the order in which words must appear.</li> </ul> <h2>When Designing the <u>Visual Output</u> for Your Skill</h2> <ul> <li>Remember that most languages require more room than English, with longer words and sentences, and possibly larger characters. Make sure your visual layouts account for this, and have room to scale when required.</li> <li>Be sure to define line wrap and truncation behavior for all visual layouts that use text components. Text in your layouts should be allowed to wrap and flow to as many lines as needed. Consider allowing at least 30% extra space within your GUI beyond what the English source requires to accommodate this.</li> <li>Translate any text in the graphics you select. The best way to avoid localizing graphics is to minimize or avoid using text in graphics. But if you must use text in your images, make sure to verify that the images display properly in each locale and that the right image is being displayed.</li> <li>Use general images that are appropriate and easily understood in your intended countries and marketplaces. Not all cultural references will be global, so try to use general images that are appropriate for a worldwide audience.</li> <li>If you're using dates, times, phone numbers, and other general number formatting, make sure to follow local custom.
For example, dates in the US are generally written as month, day, year, but in most of Europe they are written as day, month, year.</li> </ul> <p>With Alexa's availability expanding to countries <a href="">all over the world</a>, it's important to remember that the more localized your skill is, the more customers you will reach. And those customers will appreciate an experience tailored to their culture and language, leading to higher engagement and happier customers. For more examples of how you can localize your voice experience for a global audience, see the <a href="">Alexa Design Guide</a>.</p> <h2>Related Content</h2> <ul> <li><a href="">Alexa Design Guide: Internationalization</a></li> <li><a href="">How to Localize Your Alexa Skills</a></li> <li><a href="">5 Tips for Building Multi-Language Alexa Skills</a></li> </ul> /blogs/alexa/post/f73c5010-5866-4281-90fa-8c9f85fee2e7/alexa-are-you-going-to-ces Alexa, Are You Going to CES? Adam Vavrek 2019-01-07T23:12:05+00:00 2019-01-07T23:14:51+00:00 <p><a href="" target="_self"><img alt="" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></a></p> <p>The Consumer Electronics Show&nbsp;starts tomorrow, January 8, in Las Vegas. More than 180,000 people from over 155 countries will be in attendance showcasing the latest in consumer technologies.</p> <p><img alt="Amazon Alexa CES 2019" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" />Alexa had a busy year. The number of customers interacting with Alexa each day doubled <a href="" target="_blank">in 2018</a>. Equally exciting is the momentum we’ve seen among Alexa developers and device makers: The number of Alexa-compatible smart home devices increased 6x to 28,000 products from more than 4,500 unique brands; the number of Alexa skills increased to more than 70,000; and the number of products with Alexa built-in more than doubled.
In fact, more than 90% of the Alexa devices launched last year were built by someone other than Amazon.</p> <p>These developers help make Alexa smarter, more useful, and more accessible to customers around the world, and we’re excited to showcase what they’ve built this week at the Consumer Electronics Show (CES). The four-day event starts tomorrow, January 8, in Las Vegas. More than 180,000 people from over 155 countries will be in attendance showcasing the latest in consumer technologies.</p> <p><img alt="Amazon Alexa CES Public Exhibits" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>Alexa, Where Can I Meet You at CES?</strong></h2> <p>Amazon will have a public presence in several locations:</p> <ul> <li>The Venetian (Ballrooms C and D) – Alexa public exhibit</li> <li>Las Vegas Convention Center (North Hall - 7506) – Alexa Auto</li> <li>The Sands Convention Center (Lobby) – Amazon Key and Ring</li> </ul> <p>In the Venetian, attendees can experience products and services from across Amazon, including Alexa, Fire TV, AWS, Dash Replenishment Services (DRS), and more. At the center of the Alexa public exhibit is the all-new Audi e-tron SUV, which is surrounded by other technologies that showcase how Alexa makes life easier when you’re at home, at work, and on the go. The fully electric Audi e-tron features Alexa built directly into the vehicle, so customers can ask her to play music, locate points of interest, control smart home devices, and access thousands of Alexa skills.</p> <p>There will also be the <em>speakeasy</em>, an area focused on solutions for device makers building with Alexa.
Amazon Solution Architects will be on site to help educate developers on integrating Alexa into their products, and to showcase the newest development kits, systems integrators, and original design manufacturer solutions.</p> <p><img alt="Amazon Alexa CES 2019 What's New" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>What’s New</strong></h2> <p>CES has a history of debuting the latest and greatest innovations in consumer electronics and this year is no different.</p> <p>There are already more than 150 different products with Alexa built-in, from headphones and PCs to cars and smart home devices. You’ll see dozens of products with Alexa announced at CES: televisions from LG and Samsung; headphones from Jabra and JBL; smart home devices from Kohler and First Alert; automotive products from iOttie and BOSS Audio; and much, much more.</p> <p><img alt="Amazon Alexa CES 2019 Panels and Talks" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></p> <h2><strong>Panels and Talks</strong></h2> <p>Amazon will be participating in several sessions and panels you won’t want to miss. If you’re going to CES, click the links below to add them to your agenda.</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">AI Forecasting Famine</a></strong><br /> Tuesday, January 8 | 11:30 a.m. - 12:30 p.m.<br /> Westgate, Level 1, Ballroom F</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">Technology Deployment into the Home</a></strong><br /> Wednesday, January 9 | 11:30 a.m. - 12:30 p.m.<br /> Venetian, Level 4, Marcello 4406</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">Vehicle Tech’s Next Big Thing</a></strong><br /> Wednesday, January 9 | 11:30 a.m. 
- 12:30 p.m.<br /> Las Vegas Convention Center, North Hall, N262</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">Go Big or Go Home – The IdeaMakers</a></strong><br /> Wednesday, January 9 | 2:40 p.m. - 3:20 p.m.<br /> Aria, Level 1, Joshua 9</p> <p style="margin-left:.5in"><strong><a href="" target="_blank">IoT Software Platforms: Measure Twice, Cut Once</a></strong><br /> Thursday, January 10 | 9:00 a.m. – 10:00 a.m.<br /> Las Vegas Convention Center, North Hall, N253</p> <p style="margin-left:.5in"><strong>Alexa Auto Fireside Chat</strong><br /> Thursday, January 10 | 11:00 a.m. - 11:30 a.m.<br /> Engadget Stage: Las Vegas Convention Center, Central Hall, Grand Lobby</p> <h2><img alt="Amazon Alexa CES 2019 Social Media #ASKALEXA" src="" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></h2> <h2><strong>#AskAlexa</strong></h2> <p>Follow #AskAlexa, #AlexaAuto, and #CES2019 on social media for real-time updates from the show. On Twitter, follow <a href="" target="_blank">@AlexaDevs</a>, <a href="" target="_blank">@AmazonEcho</a>, and <a href="" target="_blank">@AmazonNews</a>, where we will be sharing news and announcements with videos, photos, and more.&nbsp;</p> <p><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=CES2019PreEventBlogVavrek2&amp;sc_publisher=WB&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_CES2019PreEventBlogVavrek2_WB_Content_Discover_WW_AllDevs&amp;sc_segment=AllDevs" target="_blank">Subscribe here</a>&nbsp;for email updates during and after CES and get the latest information delivered straight to your inbox.</p> /blogs/alexa/post/68025d70-83c2-4251-9e1b-4c7145218a66/consejos-para-crear-frases-de-ejemplo-certificables-en-tu-skill-de-alexa Tips for Creating Certifiable Example Phrases for Your Alexa Skill German Viscuso 2019-01-07T12:00:00+00:00 2019-01-10T16:42:35+00:00 <p>Most Alexa skills submitted for certification run into a handful of common problems, and incorrect example phrases are the most frequent cause. To help you avoid this problem, we review the requirements and best practices for creating example phrases.</p> <p>In <a href="">our previous blog post</a> we mentioned that most Alexa skills submitted for certification run into a series of common problems. Incorrect example phrases are the most frequent reason Alexa skills fail the certification process. To help you avoid this problem, let's review the requirements for example phrases and share some best practices.</p> <h2>What Are Example Phrases?</h2> <p>To submit your skill to us and begin the certification process, you must provide at least one example phrase on the <em>Distribution</em> tab of the Alexa Skills developer console.</p> <p><img alt="example_phrases_spanish.png" src="" style="display:block; margin-left:auto; margin-right:auto" /></p> <p>Users can see these example phrases in the skill's listing once they discover it. We like to think of this set of phrases as a guide that shows users how to easily start using the skill on their Alexa-enabled devices.
It's also a good opportunity to showcase your skill's key features.</p> <p><img alt="cookpad_app_spanish.jpg" src="" style="display:block; margin-left:auto; margin-right:auto" /></p> <p>The basic structure that example phrases use to open skills is described in our <a href="">documentation</a>, and is summarized below:</p> <p style="margin-left:.5in; margin-right:0in"><strong>[Wake Word], [Launch Word] [Invocation Name] [Connecting Word] [Utterance] </strong></p> <p style="margin-left:.5in; margin-right:0in"><strong>Wake word</strong> (<em>palabra de activaci&oacute;n</em>)<strong>: </strong>“<em>Alexa</em>” is used by default on Alexa devices, but customers can change it in their preferences. You should use “<em>Alexa</em>” as the wake word in your example phrases. Don't forget to begin your first example phrase with &quot;<em>Alexa</em>&quot; and to place a comma after the wake word.</p> <p style="margin-left:.5in; margin-right:0in"><strong>Launch word</strong> (<em>palabra de lanzamiento</em>): As specified in the <a href="">documentation</a>, this includes several launch phrases such as &quot;<em>abre</em>,&quot; &quot;<em>preg&uacute;ntale</em>,&quot; &quot;<em>empieza</em>,&quot; &quot;<em>lanza</em>,&quot; &quot;<em>comienza</em>,&quot; &quot;<em>corre</em>,&quot; &quot;<em>jugar</em>,&quot; &quot;<em>dile</em>,&quot; &quot;<em>dame</em>,&quot; “<em>pide</em>,” and more. When these phrases pair well with your invocation name (see below), it will be easier for users to remember how to open your skill.</p> <p style="margin-left:.5in; margin-right:0in"><strong>Invocation name</strong> (<em>nombre de apertura</em>): This is the invocation name you assigned to your skill when you created it at <a href=""></a>.
You can consult <a href="">our documentation</a> for the requirements. Also, if you use a proper name, make sure you have the right to use it (watch out for trademarks).</p> <p style="margin-left:.5in; margin-right:0in"><strong>Connecting word </strong>(<em>conector</em>): These are words used to connect the launch word to the utterance, and they include &quot;<em>y</em>,&quot; &quot;<em>de</em>,&quot; &quot;<em>desde</em>,” “<em>usando</em>,” &quot;<em>que</em>,&quot; &quot;<em>sobre</em>,&quot; &quot;<em>por</em>,&quot; &quot;<em>si</em>,&quot; and more. For a complete list, take a look at our <a href="">documentation</a>. Although this component can be omitted, including it will help users understand and say the phrase better.</p> <p style="margin-left:.5in; margin-right:0in"><strong>Utterance</strong> (<em>enunciado</em>): Utterances are mandatory and must appear in your interaction model (that is, the utterance in each example phrase must exist in your interaction model and be identical).</p> <p>In the example phrase &quot;<em>Alexa, abre cookpad y busca una receta de tortilla de patatas</em>&quot; (“Alexa, open cookpad and find a Spanish omelette recipe”), &quot;<em>Alexa</em>&quot; is the wake word, &quot;<em>abre</em>&quot; is the launch word, &quot;<em>cookpad</em>&quot; is the invocation name, &quot;<em>y</em>&quot; is the connecting word, and &quot;<em>busca una receta de tortilla de patatas</em>&quot; is the utterance.</p> <p>And here is a valid example that doesn't use a connecting word: &quot;<em>Alexa, preg&uacute;ntale a cookpad como hacer tortitas</em>&quot; (“Alexa, ask cookpad how to make pancakes”), where &quot;<em>Alexa</em>&quot; is the wake word, &quot;<em>preg&uacute;ntale a</em>&quot; is the launch word (in this case a phrase), &quot;<em>cookpad</em>&quot; is the invocation name, and &quot;<em>como hacer tortitas</em>&quot; is the utterance.</p> <h2>Common Problems with Example Phrases</h2> <p>Below is a list of the most common pitfalls we see in <a href="">example phrases and their requirements</a>:</p> <ol> <li><strong>Missing components:</strong> In many cases, example phrases are missing the correct invocation name or launch word. For example: &quot;<em>Alexa, pide revisar mi balance</em>&quot;. Without an invocation name specified after &quot;<em>pide a</em>&quot; and before the utterance beginning with &quot;<em>revisar</em>,&quot; Alexa will not respond appropriately. Sometimes we also see intent names used in place of the invocation name (this is also incorrect).<br /> <br /> Here is another example we often see in submitted skills: &quot;<em>Alexa, Voz Social los t&oacute;picos m&aacute;s importantes</em>&quot;. In this case Alexa may not respond properly because the launch word is missing.</li> <br /> <li><strong>Not basing phrases on sample utterances:</strong> Each example phrase must be built from the sample utterances present in your interaction model. For example, &quot;<em>Alexa, preg&uacute;ntale a Registro de Mareas cuando hay marea alta en Barcelona</em>&quot; (“Alexa, ask Registro de Mareas when there is high tide in Barcelona”) must have an identical utterance in order to provide a valid response: <pre> <code class="language-javascript">&quot;samples&quot;: [ &quot;cuando hay marea alta en {ciudad}&quot;, &quot;...&quot; ]</code></pre> If the utterance doesn't exist, Alexa won't be able to map the example phrase to the correct intent. The skill won't know how to respond, and the user experience will be poor. Also, as you can see in the example above, if the phrase you use contains slots, you must make sure that the slot value you use (e.g. Barcelona) is a valid value for the slot type (and if it's a custom slot, the value must match one of the values you assigned to the type). We see this problem in a large percentage of the skills submitted to us.</li> <br /> <li><strong>Incorrect responses:</strong> Make sure that when a user says an example phrase, they get a relevant response. In many of the skills submitted for certification, we see problems with potentially confusing responses:<br /> <br /> User: &quot;<em>Alexa, preg&uacute;ntale a busca recetas como hacer tortitas.</em>&quot;<br /> Skill: &quot;<em>Bienvenido a Busca Recetas. Puedes hacerme preguntas como, cual es la receta para tortitas. &iquest;C&oacute;mo te puedo ayudar?</em>&quot;</li> </ol> <p>We hope these tips have been useful for creating and certifying your skills. If your example phrases are structured correctly, are based on sample utterances, and provide relevant responses, you'll have a better chance of getting through the certification process quickly. Take a look at <a href="">our previous post</a> for more certification tips.</p> <h2>Send Us Your Feedback</h2> <p>As usual, we want to hear about your experience with certification so that we can improve the process.
Please send us your feedback using <a href=";amp;sc_channel=website&amp;amp;sc_publisher=devportal&amp;amp;sc_campaign=Conversion_Contact-Us&amp;amp;sc_assettype=conversion&amp;amp;sc_team=us&amp;amp;sc_traffictype=organic&amp;amp;sc_country=united-states&amp;amp;sc_segment=all&amp;amp;sc_itrackingcode=100020_us_website&amp;amp;sc_detail=blog-alexa">this form</a>.</p> <h2>Related Resources</h2> <p>For more resources on distributing and certifying your skill, take a look at the following links:</p> <ul> <li><a href="">The Keys to Successfully Certifying Your Alexa Skill</a></li> <li><a href="">Certification Requirements for Custom Skills</a></li> <li><a href="">Review and Test Example Phrases</a></li> <li><a href="">Alexa Developer Blog: Certification tag</a></li> <li><a href="">Ask the Expert - Alexa Office Hours in Spanish (Twitch)</a></li> </ul> /blogs/alexa/post/4506e350-1e7a-4ba3-b54c-8abf000d7236/how-to-optimize-your-upsell-strategy-for-your-monetized-alexa-skills How to Optimize Your Upsell Strategy for Your Monetized Alexa Skills Metty Fisseha 2019-01-04T18:20:03+00:00 2019-01-04T18:20:03+00:00 <p><img alt="Can-Handle-Intent_Blog_(1).png" src="" /></p> <p>If you have published a monetized skill, ensure you optimize your upsell strategy to help drive more customers to your premium content.</p> <p><img alt="Can-Handle-Intent_Blog_(1).png" src="" /></p> <p>If you have published a monetized skill, your next step is to optimize your upsell strategy to help accelerate your sales. An effective upsell should present customers with the option to engage even deeper with your skill, at the right time and in the right context, compelling them to make a purchase.</p> <p>Cracking the code on upsell strategy is critical for the success of your monetized skill. And, because each skill will have a unique upsell strategy, it’s important that you test your skill to find what works best. 
For this reason, we added enhanced reporting on upsell metrics in the Alexa Developer Console to help you better track performance of your monetized skill. To access these new tools, log in to your developer account and click “Analytics” next to your premium skill. In the left-hand toolbar, click on “In-Skill Purchases.” Learn more about how to use these metrics <a href="">here</a>.</p> <h2>What is an Upsell? And Why is It Important?</h2> <p>An upsell is when you surface an in-skill product to your customer. The first upsell is important because it is your opportunity to let customers know that your skill offers premium content. Your goal with any upsell is to capture your customer’s attention and encourage them to learn more about the product. To do this, you’ll want to be thoughtful about the upsell placement, frequency, and messaging to achieve optimal conversion.</p> <p>If the customer says “yes” to your upsell, they are led to the offer. The offer, which contains important transactional details such as price, ends with Alexa asking the customer “…would you like to buy it?” Amazon handles the voice interaction model and all the mechanics of the offer and the transaction.</p> <p>By presenting his premium content to the customer at just the right time, Steven Arkonovich of <a href=";field-keywords=%22philosophical+creations%22" target="_blank">Philosophical Creations</a>, creator of Big Sky, reports that <strong>50% of people who are offered his in-skill product convert to make the purchase</strong>.</p> <p>“The strength of voice is that it is a very personal experience,” says Arkonovich. 
“Just as personalizing the experience to each user sets your skill apart from the rest, tailoring your upsell message and its timing to what your customer is looking for at that very moment is key to higher conversion rates.”</p> <p>Read more about Steven’s journey to optimize his monetized skill <a href="">here</a>.</p> <p>To get you started, we’ve compiled a few upsell best practices based on our observations and tips from developers who are making money with in-skill purchasing. There are three key components of an upsell: placement, frequency, and messaging.<strong> </strong></p> <h2>1. Placement: Upsell Early</h2> <p>Upselling early allows you to proactively showcase your skill’s premium content to customers, rather than expecting them to discover it on their own. Use data available to you in the developer console, such as average skill utterances and number of dialogs, to inform where you place the upsell in your premium skill.</p> <p>Sampat Biswas, developer of <a href=";ie=UTF8&amp;qid=1545173577&amp;sr=1-3&amp;keywords=world+of+words" target="_blank">World of Words Game</a> skill, says, “Initially, my upsell was placed in level three of my game. However, I saw from average utterances and dialogs per customer data that customers were most engaged around level two. After moving my upsell placement to level two, I’ve seen more offers being delivered and an improvement in my conversion rate.”</p> <p>Similarly, Sanasar Hovsepian, developer of the <a href="" target="_blank">Smarty Pants Trivia</a> skill, discovered that notifying customers about his skill’s premium content early on helped to drive higher sales.</p> <p>“Thinking about where I placed upsells within my skill has helped me increase the amount of in-skill purchases. As an example, the simple act of mentioning to users that there are premium options to purchase later on in my skill has helped drive my sales by an extra 21%.”</p> <h2>2. 
Frequency: Upsell Often</h2> <p>Unlike mobile apps, where customers can pull up a menu of in-app products at any time, voice-first experiences require your skill to remind customers that premium content is available for purchase. Through our work with developers, we learned that one way to address this challenge is to increase your upsell frequency.</p> <p>While you might be concerned about negative customer experiences caused by frequent upsells, we’ve found that customers respond positively when products are presented to them frequently, as long as the placement and context are appropriate. It also helps to diversify your premium offerings. By offering different types of premium content, upselling often allows customers to choose which product is right for them.</p> <h2>3. Messaging: Upsell In-Context</h2> <p>A customer should know what they are being asked to purchase and why they’ll be delighted by the purchase. Contextualize the type of product you’re offering and the wording you use to offer it. This will ensure that customers remain engaged with your skill, making their decision to purchase your product a seamless and natural experience within the context of your skill.</p> <h2>New Reporting and Upsell Metrics for Your Monetized Skills</h2> <p>We recently added two new upsell metrics to the developer console to help you optimize your upsell strategy: Upsell to Offer Conversion, which measures what percentage of customers who heard your upsell agreed to hear the offer, and Upsell to Purchase Conversion, which measures what percentage of customers who heard your upsell agreed to make a purchase. You can use these metrics to gauge the effectiveness of your upsell strategy and make enhancements. 
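<p>Concretely, both metrics reduce to simple ratios over the number of customers who heard the upsell. A quick sketch of the arithmetic (the function and field names below are ours, not the console's):</p>

```javascript
// Hypothetical helper: derive the two upsell conversion metrics from
// raw counts, as percentages rounded to one decimal place.
function upsellConversion(upsellsHeard, offersAccepted, purchases) {
  if (upsellsHeard === 0) {
    return { upsellToOffer: 0, upsellToPurchase: 0 };
  }
  const pct = (n) => Math.round((n / upsellsHeard) * 1000) / 10;
  return {
    upsellToOffer: pct(offersAccepted), // % who agreed to hear the offer
    upsellToPurchase: pct(purchases), // % who completed a purchase
  };
}

// Example: 200 customers heard the upsell, 90 agreed to hear the offer,
// and 48 went on to buy.
console.log(upsellConversion(200, 90, 48)); // { upsellToOffer: 45, upsellToPurchase: 24 }
```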
Learn more about how to use these metrics <a href="">here</a>.</p> <h2>More Resources – How to Promote Your Monetized Skill</h2> <p>To help customers discover delightful voice experiences, Amazon promotes high-quality monetized skills in the US Alexa Skills Store via Amazon marketing channels. To be eligible for this promotional placement, ensure your monetized skill meets our <a href="">eligibility requirements</a>. This valuable exposure could help accelerate your revenue. Follow the guidelines in <a href="">this checklist</a> to ensure your monetized Alexa skill is eligible for Amazon promotion.</p> <p>In addition to Amazon-owned marketing channels, we encourage you to promote your skills within your own networks. <a href="">Follow these tips</a> to make your skill more discoverable both in the Alexa Skills Store and through your existing network.</p> <p>Questions? Attend our <a href="" target="_blank">Office Hours</a> on Twitch (no sign-up required) or chime in on our <a href="" target="_blank">developer forums</a>.</p> /blogs/alexa/post/7c9b6bea-0d82-4482-96ba-d1935c2617b9/how-to-quickly-update-your-existing-multimodal-alexa-skills-with-the-alexa-presentation-language How to Quickly Update Your Existing Multimodal Alexa Skills with the Alexa Presentation Language Jennifer King 2019-01-04T15:00:00+00:00 2019-01-04T15:00:00+00:00 <p>If you have already built multimodal Alexa skills using the <a href="">Display Directive</a> interface, you can still use the Alexa Presentation Language (APL) to create similar displays of those templates. Here's a quick overview of how you can migrate your display templates over to APL.</p> <p>Using the <a href="">new Alexa Presentation Language (APL)</a>, you can deliver richer, more engaging voice-first interactions to your customers across tens of millions of Alexa-enabled devices with screens. Before APL, you could use our <a href="">Display Directive</a> interface to create a skill that supports screen display. 
While display templates allow you to support visual experiences, APL is more flexible, giving you the ability to enhance your skill experience for different device types, control your user experience by defining where visual elements are placed on screens, and choose from a variety of components available with APL.</p> <p>If you have already built multimodal Alexa skills with visuals using our <a href="">Display Directive</a> interface, you can still use APL to create similar displays of those templates. In today’s blog, we share a quick overview of how you can migrate your display templates over to APL.</p> <h2>First, a Few APL Reminders</h2> <p>APL is JSON that is compiled to multimodal components to be inflated and rendered on your device. APL is composed of <a href="">components</a>, which are reusable, self-contained artifacts used to display elements on the screen such as text, images, sequences, and frames. Please note that APL is currently in public beta, and we are continually adding components to our visual reference that you can use in your APL documents.</p> <p>Alongside components, APL incorporates <a href="">styles</a>, <a href="">resources</a>, and <a href="">layouts</a>. You can apply styles to components to add defined visual properties that can be extended or inherited. Resources are named global values in your APL document denoted by the “@” symbol. Finally, layouts are composite components you create and can reuse throughout the main template of your APL document.</p> <h2>The Differences Between APL and Display Directives</h2> <p>When you begin to integrate visuals into your Alexa skills, it is crucial that your skill can adapt to work on different devices. 
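<p>One practical aspect of adapting to different devices is first checking whether the requesting device supports APL at all, since the same skill can be invoked from headless devices. A minimal guard might look like the sketch below (the helper name is ours; the request-envelope path is the standard one in skill requests):</p>

```javascript
// Sketch of a device-capability guard. Every skill request lists the
// interfaces the device supports; APL appears under the
// 'Alexa.Presentation.APL' key on devices with screens.
function supportsAPL(handlerInput) {
  const { supportedInterfaces } =
    handlerInput.requestEnvelope.context.System.device;
  return supportedInterfaces['Alexa.Presentation.APL'] !== undefined;
}

// In a handler, only attach the RenderDocument directive when the
// device can render it; headless devices get a voice-only response:
//   if (supportsAPL(handlerInput)) { responseBuilder.addDirective({ ... }); }
```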
Previously with the display directives interface, serving information to a body or list template would guarantee scaling to a round or landscape rectangular display, according to the <a href="">GUI specifications</a> of that template.</p> <p>With APL, we offer you the JSON code to achieve a similar experience to the body and list display directive templates in your skill. However, with APL, we advocate that you customize your visual experience even further and tailor it according to a set of <a href="">viewport</a> characteristic specifications.</p> <p>When you enter the <a href="">APL authoring tool</a>, you can select samples representing the following display templates: BodyTemplate1, BodyTemplate2, BodyTemplate3, BodyTemplate6, BodyTemplate7, ListTemplate1, and ListTemplate2. You can hover over each document to read about its intended purpose and which template it most closely relates to.</p> <p><img alt="" src="" style="display:block; height:272px; margin-left:auto; margin-right:auto; width:1000px" />The main difference between the previous display interface and APL is how these visuals are served to the customer. When you are building your response in your skill code, with APL you use a different type of <a href="">directive</a>. The directive specifies how the compiler translates the corresponding input. Previously, you would use a DisplayDirective that would tell the compiler to inject the information you provide into a static, inflexible template.</p> <pre> <code>handlerInput.responseBuilder .speak(speechText) .addDirective({ type: 'Alexa.Presentation.APL.RenderDocument', document: require('./mainScreen.json'), datasources: require('./datasources.json'), }) .getResponse(); </code></pre> <p>With APL, you use an Alexa.Presentation.APL.RenderDocument directive. 
In this example, you are instructing the compiler to interpret the document containing your APL JSON as components to be inflated on the display, and the datasource to be sent in parallel to the document as a payload that holds information to be <a href="">data-bound</a> within the main template. This datasource should contain any information from your skill that you want to incorporate in your display. This could include information from your request, such as slot values or skill states, profile or account linking data, static variable datasets, or web service or API responses. In short, the information contained in your datasource is completely up to you as the developer, and can be anything you want to display on the device. It is important, however, that you do not display any private or sensitive information about the customer without their consent. This data is then inflated with the document on the device. You can utilize data from the datasource directly or conditionally in your document.</p> <h2>Migrating Your Display Directive Templates to APL</h2> <p>Throughout the rest of this blog post, we will examine the APL document that resembles BodyTemplate1. First, let’s break down the BodyTemplate1 APL document.</p> <p><img alt="" src="" style="display:block; height:528px; margin-left:auto; margin-right:auto; width:1000px" /></p> <p>At the top of the APL mainTemplate lives a <a href="">Container</a>. This is the highest-level attribute of the template, and it is responsible for being the parent of all of the components within the rest of the template. Associated with this container is a when clause. This is a statement that allows you to conditionally inflate components. In this case, the when clause checks the viewport profile to see if the device is round or otherwise, and changes the layout of the template accordingly.</p> <p>The first child of the container is an <a href="">Image</a> component. 
This Image component is positioned absolutely so it appears as a background image and the succeeding child components will appear atop the image.</p> <p>The next component is an <a href="">AlexaHeader</a>. This is a layout that we have created for you to use by importing alexa-layouts in your APL document. Essentially, AlexaHeader is composed of <a href="">Text</a> and Image components to resemble the experience of headers in the display directive templates. We have included named parameters in the AlexaHeader layout so you can intuitively place your title, subtitle, skill icon, etc.</p> <p>The final component is a Text component. This is a block to show the primary text on the display.</p> <pre> <code>{ &quot;type&quot;: &quot;APL&quot;, &quot;version&quot;: &quot;1.0&quot;, &quot;theme&quot;: &quot;dark&quot;, &quot;import&quot;: [ { &quot;name&quot;: &quot;alexa-layouts&quot;, &quot;version&quot;: &quot;1.0.0&quot; } ], &quot;resources&quot;: [ ... ], &quot;styles&quot;: { ... }, &quot;layouts&quot;: {}, &quot;mainTemplate&quot;: { &quot;parameters&quot;: [ &quot;payload&quot; ], &quot;items&quot;: [ { &quot;type&quot;: &quot;Container&quot;, ... &quot;items&quot;: [ { &quot;type&quot;: &quot;Image&quot;, ... }, { &quot;type&quot;: &quot;AlexaHeader&quot;, ... }, { &quot;type&quot;: &quot;Text&quot;, ... } ] } ] } } </code></pre> <p>When you select Long Text Sample in the APL authoring tool, you will notice there are two tabs, one with the name of the template and then the Data JSON tab.</p> <p><img alt="" src="" style="display:block; height:564px; margin-left:auto; margin-right:auto; width:1000px" /></p> <p>The data that lives in Data JSON is the datasource. To make this data accessible in your APL mainTemplate, you need to include a parameter that unlocks the information. You will notice the parameters field includes this variable, called payload.</p> <p>Within the Data JSON, there is a JSON object entitled bodyTemplate1Data. 
Attributes within bodyTemplate1Data allow you to edit the data of various attributes in the APL document.</p> <pre> <code>{ &quot;bodyTemplate1Data&quot;: { &quot;type&quot;: &quot;object&quot;, &quot;objectId&quot;: &quot;bt1Sample&quot;, &quot;backgroundImage&quot;: { &quot;contentDescription&quot;: null, &quot;smallSourceUrl&quot;: null, &quot;largeSourceUrl&quot;: null, &quot;sources&quot;: [ { &quot;url&quot;: &quot;;, &quot;size&quot;: &quot;small&quot;, &quot;widthPixels&quot;: 0, &quot;heightPixels&quot;: 0 }, { &quot;url&quot;: &quot;;, &quot;size&quot;: &quot;large&quot;, &quot;widthPixels&quot;: 0, &quot;heightPixels&quot;: 0 } ] }, &quot;title&quot;: &quot;Did You Know?&quot;, &quot;textContent&quot;: { &quot;primaryText&quot;: { &quot;type&quot;: &quot;PlainText&quot;, &quot;text&quot;: &quot;But in reality, mice prefer grains, fruits, and manmade foods that are high in sugar, and tend to turn up their noses at very smelly foods, like cheese. In fact, a 2006 study found that mice actively avoid cheese and dairy in general.&quot; } }, &quot;logoUrl&quot;: &quot;; } } </code></pre> <p>To edit the image source of the Image component, you will update the url attribute living under sources. There are two URLs of the same image; the first is intended for smaller hubs, the second for larger ones. It is important to include varying sizes of images to ensure that the image renders appropriately on everything from a small round hub like the Echo Spot to an extra-large landscape TV with Fire TV Cube. 
These attributes are accessed in the mainTemplate from the payload via direct databinding.</p> <pre> <code>{ &quot;type&quot;: &quot;Image&quot;, &quot;source&quot;: &quot;${payload.bodyTemplate1Data.backgroundImage.sources[0].url}&quot;, &quot;position&quot;: &quot;absolute&quot;, &quot;width&quot;: &quot;100vw&quot;, &quot;height&quot;: &quot;100vh&quot;, &quot;scale&quot;: &quot;best-fill&quot; }, </code></pre> <p>The appropriate image is selected via the when clause that lives on the parent container: &quot;when&quot;: &quot;${viewport.shape == 'round'}&quot;</p> <p>To edit the title and icon of the AlexaHeader layout, you will update the title and logoUrl attributes living under bodyTemplate1Data. These attributes are accessed in the mainTemplate from the payload via direct databinding.</p> <pre> <code>{ &quot;type&quot;: &quot;AlexaHeader&quot;, &quot;headerTitle&quot;: &quot;${payload.bodyTemplate1Data.title}&quot;, &quot;headerAttributionImage&quot;: &quot;${payload.bodyTemplate1Data.logoUrl}&quot; }, </code></pre> <p>Finally, to edit the content of the large Text component, you will edit the text attribute living under textContent.primaryText. This attribute is accessed in the mainTemplate from the payload via direct databinding.</p> <pre> <code>{ &quot;type&quot;: &quot;Text&quot;, &quot;text&quot;: &quot;${payload.bodyTemplate1Data.textContent.primaryText.text}&quot;, &quot;fontSize&quot;: &quot;@textSizeBody&quot;, &quot;spacing&quot;: &quot;@spacingSmall&quot;, &quot;style&quot;: &quot;textStyleBody&quot; } </code></pre> <p>This approach of updating the Data JSON is similar for each of the Body and List templates in the Authoring Tool.</p> <h2>Build APL Skills, Then Enter the Alexa Skills Challenge: Multimodal</h2> <p>Consider using these examples as a starting point to create your own unique APL documents. 
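<p>Once you are comfortable with the sample documents, the same building blocks let you go beyond the migrated templates. The sketch below is our own illustration (the layout, parameter, and datasource names are hypothetical, not authoring-tool samples): it defines a reusable CaptionedImage layout and uses a when clause so that, of the two mainTemplate items, the first one whose condition evaluates true is inflated, giving round screens a shorter caption:</p>

```json
{
  "type": "APL",
  "version": "1.0",
  "layouts": {
    "CaptionedImage": {
      "parameters": ["imageUrl", "caption"],
      "items": [
        {
          "type": "Container",
          "items": [
            {
              "type": "Image",
              "source": "${imageUrl}",
              "width": "100vw",
              "height": "70vh",
              "scale": "best-fill"
            },
            {
              "type": "Text",
              "text": "${caption}",
              "textAlign": "center"
            }
          ]
        }
      ]
    }
  },
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "when": "${viewport.shape == 'round'}",
        "type": "CaptionedImage",
        "imageUrl": "${payload.data.url}",
        "caption": "${payload.data.shortCaption}"
      },
      {
        "type": "CaptionedImage",
        "imageUrl": "${payload.data.url}",
        "caption": "${payload.data.caption}"
      }
    ]
  }
}
```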
In addition to being a more powerful tool that allows greater flexibility in creating interactive voice experiences, APL was developed to make it easy to design and build visually rich Alexa skills for tens of millions of Alexa-enabled devices with screens.</p> <p>Start building with APL and then enter your creation in the <a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown">Alexa Skills Challenge: Multimodal</a> and compete for $150k in total prizes. You can also earn a new Amazon device just by publishing an eligible APL skill.</p> <h2>More Resources to Get Started with APL</h2> <ul> <li><a href="">Alexa Design Guide</a></li> <li><a href="">Steven Arkonovich Enhances Voice-First Alexa Skills with Visuals and Touch Using the Alexa Presentation Language</a></li> <li><a href="">Blog: How to Get Started with the Alexa Presentation Language to Build Multimodal Alexa Skills</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=certification&amp;sc_content=Content&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_certification_Content_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">Webinar: Get Started with the Alexa Presentation Language</a></li> <li><a href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">Webinar: Advanced Template Building with the Alexa Presentation Language</a></li> <li><a href="">Twitch: Steps to Build a Standout Skill with the Alexa Presentation Language</a></li> <li><a
href=";sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=apl&amp;sc_publisher=skillschallenge&amp;sc_content=Promotion&amp;sc_funnel=Discover&amp;sc_country=WW&amp;sc_medium=Owned_WB_apl_skillschallenge_Promotion_Discover_WW_unknown&amp;sc_segment=unknown" target="_blank">APL Sample Skill for NodeJS: Space Explorer</a></li> <li><a href="">APL Sample Skill for NodeJS: FireTV Vlogs</a></li> <li><a href="">APL Sample Skill for Java: Pager Karaoke</a></li> <li><a href="">APL Sample Skill for Python: Pager Karaoke</a></li> </ul> /blogs/alexa/post/f64b2dda-a2b1-4563-ae81-cc3163d9d3a8/new-aws-certification-and-training-opportunities-for-alexa-developers New AWS Certification and Training Opportunities for Alexa Developers Jennifer King 2019-01-03T15:55:39+00:00 2019-01-10T17:18:34+00:00 <p><img alt="" src="" style="height:480px; width:1908px" /></p> <p>AWS Training and Certification is now offering a new AWS Certified Alexa Skill Builder – Specialty beta exam and four self-paced, digital training courses to help you enhance your voice design skills.</p> <p><img alt="" src="" /></p> <p>If you’re looking to gain recognition for your skill-building experience, there’s no better way than with AWS Training and Certification. We’re excited to announce that AWS Training and Certification is offering a new AWS Certified Alexa Skill Builder – Specialty beta exam and new self-paced, digital training courses to help you enhance your voice design skills. This new beta exam and training courses are available globally, so don’t miss out on this opportunity to validate your knowledge and showcase your skills.</p> <h2>Become a Certified Alexa Skill Builder to Validate Your Experience and Advance Your Career</h2> <p>Become among the first Alexa developers to validate your skill-building experience and receive an industry-recognized certification you can use to enhance your voice design career. 
The <a href="" target="_blank">AWS Certified Alexa Skill Builder – Specialty beta exam</a> validates what you’ve learned about building, testing, publishing, and certifying Amazon Alexa skills. You’ll also have an opportunity to provide feedback on the exam prior to its general availability later in 2019.</p> <p>“Any AWS certification carries a level of integrity and credibility that can be trusted and distinguishes talent. It helps verify that the developer is a credible expert,” says Bob Stolzberg, Alexa Champion and founder and CEO of VoiceXP. “We plan to only hire certified developers to work on our customers’ voice experiences.”</p> <p>We recommend that developers taking the exam have at least six months of hands-on experience designing and building Alexa skills, have proficiency with a programming language, and have published at least one Alexa skill. When you successfully pass the beta exam, you will hold an AWS Certified Alexa Skill Builder – Specialty Certification and have access to other AWS Training and Certification benefits and resources, including digital badges to use on your social media profiles and email signatures. <a href="" target="_blank">Click here</a> to learn more and register for the certification exam.</p> <h2>Get Alexa Skills Training with AWS</h2> <p>To help you prepare for the exam, AWS Training and Certification is releasing <a href="" target="_blank">new on-demand training courses</a>. These free, self-paced digital courses include:</p> <ul> <li><strong>Getting in the Voice Mindset: </strong>Get a quick introduction to voice-based applications. 
Learn why voice-based applications have become the ubiquitous expectation and how they can add value to a business.</li> <li><strong>Introduction to Skill Concepts: </strong>In this short interactive course, you'll learn about different components of an utterance and examples of how one might interact with Alexa to invoke a skill.</li> <li><strong>Designing for Conversation: </strong>Learn about the design methods we recommend you use to develop engaging conversational voice user interfaces (VUI). You'll follow the process we used to design “The Foodie” skill for Alexa, imagining and analyzing user interactions through a series of exercises aimed at exposing and designing around the nuances of human conversation.</li> </ul> <p>These courses, which provide foundational through intermediate-level knowledge, are great for those beginning their skill-building journey. They are also impactful for decision makers exploring voice for their businesses. <a href="" target="_blank">Click here</a> to learn more about these AWS training courses.</p> <p>Your ability to build Alexa skills opens up many opportunities for new business ideas, a new career in voice, and new customer experiences. With the new <a href="" target="_blank">online training</a> and <a href="" target="_blank">specialty certification</a>, you can accelerate your skill-building journey. We can’t wait to see what you build.</p> /blogs/alexa/post/5e4f3bb2-6ada-4121-bf97-347eb78f92fd/new-alexa-skill-sample-learn-multimodal-skill-design-with-space-explorer New Alexa Skill Sample: Learn Multimodal Skill Design with Space Explorer Jennifer King 2019-01-03T15:00:00+00:00 2019-01-03T15:00:00+00:00 <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" /></p> <p>Get a deep dive on our new sample skill called Space Explorer. 
We’ll walk you through the skill experience and share best practices for building voice-first visual experiences with the Alexa Presentation Language.</p> <p>With the <a href="">Alexa Presentation Language (APL)</a> you can build interactive, <a href="">multimodal Alexa skills</a> with more customization than ever before. APL is a JSON-based markup language influenced by front-end technologies and native development practices. It lets you completely control your layouts and presentation, with the flexibility to scale across multiple devices or create a custom layout that highlights the strengths of a specific class of device.</p> <p>Some of the key ingredients of APL include:</p> <ul> <li><strong>Components:</strong> Building blocks, including containers, text, and images</li> <li><strong>Layouts:</strong> Composed components with their own properties that can be reused across multiple screens</li> <li><strong>Packages:</strong> Importable files containing styles, resources, and layouts</li> <li><strong>Viewport Characteristics:</strong> Information about the device passed directly to APL</li> <li><strong>Conditional Clauses:</strong> Using properties and/or characteristics to determine the state of a component or layout</li> </ul> <p>We’ve built several new Alexa skill samples to help you get started using APL and gain some first-hand multimodal skill-building experience. In this post, we do a deep dive on one of the samples called <a href="" target="_top">Space Explorer</a>. We’ll walk you through the sample and some best practices for building voice-first visual experiences.</p> <h2>Why We Created the Space Explorer Sample Skill</h2> <p>Images of space are visually engaging. Vast expanses of black punctuated by bright planets and gorgeous star clusters make for some truly striking visuals. It was easy to envision a skill that would benefit from having a screen. 
But beyond that, NASA has done an amazing job of aggregating those visuals and releasing them to the public domain.</p> <p>We always recommend that developers lead with voice when designing an Alexa skill. For Space Explorer, though, we wanted to create a skill that leans heavily on delivering beautiful visuals and clear interaction patterns, but still complements the voice experience and avoids anti-patterns like requiring touch for navigation.</p> <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" /></p> <h2>Creating a Voice-Forward Experience with Space Explorer</h2> <p>Multimodal design is a fast-growing discipline within Amazon. Since the original Echo Show was in development, we've spent countless hours creating a system of design for this new class of devices. For Space Explorer, we looked at the list of tenets that have been created and cherry-picked a few that had special relevance for our purpose. When designing Space Explorer, we kept all of our <a href="">voice design tenets</a> in mind, primarily:</p> <ol> <li><strong>Be Voice-Forward, but Not Voice-Only: </strong>Anything a customer can touch should have a voice counterpart. However, not everything said in voice needs a touch input. What Alexa says should be relevant to what it shows and vice versa. Customers will likely alternate between looking at a device and looking away throughout the experience. Be sure the voice flow and screen flow are comprehensive on their own and complementary to each other. Screens should provide additional context when Alexa is speaking.</li> <li><strong>Honor User Modality:</strong> If a customer speaks to Alexa, then Alexa should respond with voice. 
If a customer touches the screen, then Alexa should not respond with voice.</li> <li><strong>Emphasize Patterns &amp; Consistency: </strong>Adherence to common patterns will reduce cognitive overhead for customers.</li> </ol> <h2>Designing the Voice Interaction for Space Explorer</h2> <p>Screens give Alexa a new medium to deliver information, provide context, and enhance the voice experience. As with any other voice-driven experience, we <a href="">started with a script</a> to explore the many ways a customer could interact with Space Explorer.</p> <p style="margin-left:40px"><strong>Customer:</strong> Alexa, open Space.</p> <p style="margin-left:40px"><strong>Alexa:</strong> Welcome to the solar system. What would you like to explore?</p> <p style="margin-left:40px"><em>[The screen shows the Solar System with touchable planets and large celestial bodies, like comets and asteroids.]</em></p> <p style="margin-left:40px"><strong>Customer:</strong> Let's go to Saturn.</p> <p style="margin-left:40px"><strong>Alexa:</strong> Approaching Saturn, the sixth planet from the sun. What would you like to learn?</p> <p style="margin-left:40px"><strong>Customer:</strong> Tell me about the rings.</p> <p style="margin-left:40px"><strong>Alexa:</strong> The <strong>rings of Saturn</strong> are the most extensive ring system of any planet in the Solar System. They consist of countless small particles, ranging from micrometers to meters in size, that orbit about Saturn. The ring particles are made almost entirely of water ice, with a trace component of rocky material. 
There is still no consensus as to their mechanism of formation; some features of the rings suggest a relatively recent origin, but theoretical models indicate they are likely to have formed early in the Solar System's history.</p> <p style="margin-left:40px"><strong>Customer: </strong>Alexa, what is Saturn made of?</p> <p style="margin-left:40px"><strong>Alexa:</strong> Standard planetary models suggest that the interior of Saturn is similar to that of Jupiter, having a small rocky core surrounded by hydrogen and helium with trace amounts of various volatiles. This core is similar in composition to the Earth, but more dense.</p> <p style="margin-left:40px"><em>[The screen shows a split screen image and text.]</em></p> <p>Early design exercises focused on creating a voice interaction model that felt natural. We wanted customers to go on an adventure, so we started with flows that took them to different places within the solar system. That meant context would be important, as well.</p> <p>Imagine you’re flying to Mars. When you step off the shuttle, you'll probably say something like, “Wow. What's in the atmosphere?” not “Wow. What's in Mars' atmosphere?” You might follow it up by asking about another planet's atmosphere, though, so it was important to maintain that ability to dive straight into other content without the preamble of traveling there first.</p> <p>One of the strengths of voice interactions is the ability to completely bypass the navigational necessities present in traditional interfaces. So we designed the voice interaction model to allow customers to travel anywhere in the solar system and ask contextual questions about their current location as well as non-contextual questions about any other location.</p> <p>Conversely, one downside of voice interactions is the lack of way-finding tools. Once customers get to a location, they either need to be told what they can say, or the developer needs to handle all potential scenarios. 
The former can be tedious and long, and the latter is almost impossible. That's where having a screen becomes a huge advantage.</p> <h2>Designing the Visuals for Space Explorer</h2> <p>Once we had an idea of how the user would navigate through the skill, it was time to figure out how to bring that to life visually. The first step was to consider what data we were trying to present. Because the screen should be viewed as a companion to the voice experience, we needed it to complement the audio, not mimic it. Just like in a presentation, the goal wasn't to have Alexa simply read out what's on the screen.</p> <p>If the data was visual, it should be shown. A great pattern for presenting multiple visual data points is the horizontal list, such as in the inner and outer solar system and the list of notable moons.</p> <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" />If the data was driven by numbers, we considered how that data could be represented in graphs or other formats to make the numbers compelling. Distance, for example, shows the expanse of our solar system without displaying a table of numbers.</p> <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" />It was also important to use the visuals to enhance the user's understanding of the data. In the above screens, we provide some context to help ground the massive distances by showing that data relative to the other planets, as well as presenting additional information about the time it takes sunlight to reach that distance. 
We used similar thinking to create the size views.</p> <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" />The composition of the atmosphere was specifically designed to map to the periodic table of elements.</p> <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" />Another important factor we designed for was the variable nature of the displays. What works for presenting information on one device doesn't necessarily work on another. In the above example, we used periodic-table-style element blocks for most displays but changed the number of elements on screen for the smaller Echo Spot display. We also used the much larger television screen to include a donut chart, which didn't fit well on the smaller displays.</p> <h2>Creating Touchable and Voice-Driven Elements for Space Explorer</h2> <p>We’re all familiar with the interaction patterns that come with smartphone apps. Our challenge was to design screens that users could touch <em>and </em>speak to. A great example of that is the planet details page.</p> <p><img alt="" src="" style="display:block; height:447px; margin-left:auto; margin-right:auto; width:1000px" /></p> <p>As we've already mentioned, screens are great for adding way-finding context to your skill. Using lists to present options to users helps them better navigate, and including ordinals on those lists makes that even easier. For example, on this screen a user can say, “Overview,” or “Tell me about it,” or “Number one.”</p> <h2>Taking Screens Out of the Equation</h2> <p>With the interaction model planned and screens designed, it seemed logical to start development right away. However, we were forgetting an important part of the puzzle. Alexa is, first and foremost, a voice interface. Designing an interaction model that relies on a screen doesn't work for Alexa-enabled devices without screens. 
We needed to go back and create an alternative interaction model that did all the things our screens did with voice alone.</p> <p>For instance, instead of simply welcoming a user to the solar system and asking them what they wanted to do, we needed to provide some examples and options. We also realized we needed to make it possible for a user to ask for help, and that Alexa's response should be specific to the user's current location.</p> <p>In future blog posts, we'll dive into how we used APL to achieve our vision and how AWS Lambda played an integral part in that. We'll also talk about fleshing out the interaction model and using the developer portal to test and iterate quickly. Stay tuned.</p> <h2>Related Resources</h2> <ul> <li><a href="" target="_blank">Space Explorer Sample Code</a></li> <li><a href="">Alexa Presentation Language Technical Documentation</a></li> <li><a href="">10 Tips for Designing Alexa Skills with Visual Responses</a></li> <li><a href="">4 Tips for Designing Voice-First Alexa Skills for Different Alexa-Enabled Devices</a></li> <li><a href="">How to Design Visual Components for Voice-First Alexa Skills</a></li> <li><a href="">How to Get Started with the Alexa Presentation Language to Build Multimodal Alexa Skills</a></li> </ul>
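<p>To make the APL ingredients discussed above a little more concrete before the next post, here is a minimal sketch of what an APL document with viewport-conditional layouts can look like. The top-level structure (<code>type</code>, <code>version</code>, <code>mainTemplate</code>), the <code>viewport.shape</code> property, and the <code>when</code> conditional are standard APL; the specific layout, the <code>payload</code> shape, and property names like <code>topElements</code> are illustrative assumptions, not the actual Space Explorer source.</p>

```javascript
// Illustrative sketch only -- not the Space Explorer source code.
// APL evaluates each child's "when" expression against the device's
// viewport characteristics, so a round screen (like the Echo Spot) can
// receive a trimmed-down atmosphere view while rectangular screens get
// the full periodic-table-style layout described above.
const atmosphereDocument = {
  type: 'APL',
  version: '1.0',
  mainTemplate: {
    parameters: ['payload'],
    items: [
      {
        // Round viewport: fewer element blocks fit on screen.
        when: "${viewport.shape == 'round'}",
        type: 'Container',
        items: [
          { type: 'Text', text: '${payload.planet.name} Atmosphere' },
          { type: 'Text', text: '${payload.planet.topElements}' }
        ]
      },
      {
        // Rectangular viewports: show the full element grid.
        when: "${viewport.shape == 'rectangle'}",
        type: 'Container',
        direction: 'row',
        items: [
          { type: 'Text', text: '${payload.planet.name} Atmosphere' },
          { type: 'Text', text: '${payload.planet.allElements}' }
        ]
      }
    ]
  }
};

// A skill's backend would attach a document like this to its response
// via the Alexa.Presentation.APL.RenderDocument directive.
console.log(JSON.stringify(atmosphereDocument, null, 2));
```

<p>Because both branches live in one document, the skill code stays device-agnostic: the APL runtime on each device picks the matching branch, which is the same mechanism we used to vary the number of element blocks between the Echo Spot and larger screens.</p>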