Alexa Blogs Alexa Developer Blogs /blogs/alexa/feed/entries/atom 2019-05-24T23:15:00+00:00 Apache Roller /blogs/alexa/post/c291e779-366b-4e7f-93e2-782b220d4142/introducing-announcements-on-alexa-built-in-devices Introducing Announcements on Alexa Built-in Devices Sanjay Ramaswamy 2019-05-24T21:53:56+00:00 2019-05-24T23:15:00+00:00 <p><a href="https://developer.amazon.com/blogs/alexa/post/c291e779-366b-4e7f-93e2-782b220d4142/introducing-announcements-on-alexa-built-in-devices" target="_self"><img alt="Alexa Announcements" src="https://m.media-amazon.com/images/G/01/mobile-apps/dex/alexa/auto/AnnouncementsBlog.png" style="display:block; height:240px; margin-left:auto; margin-right:auto; width:954px" /></a></p> <p>Device makers can now implement Alexa Announcements on all Alexa built-in devices. This feature enables customers to make and receive one-way announcements to and from compatible Alexa built-in devices in their household.</p> <p>Today, Alexa Voice Service (AVS) is making Announcements available to already-certified Alexa built-in products, as well as to new products that pass the provided self-tests and certification. Alexa Announcements enables customers to make and receive one-way announcements to and from compatible Alexa built-in devices in their household. Similar to a one-way intercom, Alexa Announcements is great for quick, broad communications you want to share with your whole family. Customers can announce from a device in the kitchen that “dinner is ready,” or that “it’s time to wake up,” or from their Fire TV Cube that “the movie is starting.” The audio message is then “announced,” playing back simultaneously through the other Alexa built-in devices in the home. 
Customers can also use the Alexa app to make an Announcement when they are away from home.</p> <p>Amazon will continuously improve the Announcements feature to create better customer experiences and to ensure consistency across all Alexa built-in devices.</p> <p style="text-align:center"><iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/o7EEXGYSxwk" width="560"></iframe></p> <h2>How to Enable Announcements</h2> <p><strong>If you already have a product in market,</strong> you can find the documentation for enabling Announcements on your device <a href="https://developer.amazon.com/docs/acm/announcements-acm.html" target="_blank">here</a>. <strong>If you are developing a new product,</strong> you can add Announcements as part of your AVS implementation, and AVS certification will cover testing of this feature.</p> <h2>New to AVS?</h2> <p>AVS makes it easy to develop products with Alexa built-in and bring voice-forward experiences to your customers. Through AVS, you can add a new, natural user interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills. 
<a href="https://developer.amazon.com/docs/alexa-voice-service/get-started-with-alexa-voice-service.html" target="_blank">Get started</a>.</p> /blogs/alexa/post/a35ad62f-dafa-4f4b-9648-a77913129871/alexatraining-advanceddevelopment Alexa Skill Development Training Series, Part 5: Toward Full-Scale Skill Development Kazushige Yoshida 2019-05-24T06:09:55+00:00 2019-05-24T06:09:55+00:00 <div> <p>In the basic installments of this training series, we introduced the simplest way to build a skill so you could understand the flow of Alexa skill development. This time, we introduce more advanced development techniques for full-scale skill development.</p> </div> <div> <h3>Using AWS</h3> <p>Up to now, the basic training series has actually been using a mechanism called <a href="https://developer.amazon.com/ja/docs/hosted-skills/build-a-skill-end-to-end-using-an-alexa-hosted-skill.html">Alexa-hosted skills</a> to build the backend code. Alexa-hosted skills have the advantage of being easy for newcomers to skill development to try, but they also come with the following limitations:</p> <ul> <li> <p>Code is edited in the code editor inside the developer console. If you want to use an external editor, manage history and share code with your team via Git, or use a build process (for example, TypeScript), you have to edit the code locally and paste it into the developer console each time.</p> </li> <li> <p>Alexa skills can be written in languages with an SDK, such as Node.js, Python, and Java, as well as in any programming language that can handle JSON, but Alexa-hosted skills support only Node.js.</p> </li> <li> <p>You cannot use the ASK CLI, introduced below.</p> </li> <li> <p>Transactions, storage capacity, and so on are limited to the <a href="https://aws.amazon.com/jp/free/">AWS Free Tier</a>. When you publish a skill, you may need more headroom than these limits allow.</p> </li> </ul> <p>For full-scale development, we recommend using a service such as AWS. Even on AWS, no charges apply as long as you stay within the <a href="https://aws.amazon.com/jp/free/">Free Tier</a>. In addition, you can receive further free credits through <a href="https://developer.amazon.com/ja/alexa-skills-kit/alexa-aws-credits">Alexa AWS promotional credits</a>.</p> <p>If you do not have an AWS account, see <a href="https://aws.amazon.com/jp/register-flow/">here</a>. <a href="https://developer.amazon.com/ja/docs/custom-skills/host-a-custom-skill-as-an-aws-lambda-function.html">Hosting a Custom Skill as an AWS Lambda Function</a> explains how to implement a backend service using AWS Lambda.</p> </div> <div> <h3>ASK CLI</h3> <p>The Alexa Skills Kit Command Line Interface (ASK CLI) is a tool for managing Alexa skills, and the AWS Lambda functions they work with, from the command line. With the ASK CLI, you keep the skill's interaction model, configuration files, and backend Lambda function code in local storage, and you deploy to the Alexa service and AWS and perform other operations from the command line. This gives you advantages such as:</p> <ul> <li> <p>Sharing source with your team and managing history using a source control tool.</p> </li> <li> <p>Using the code editor of your choice.</p> </li> <li> <p>Running standardized source deployments and tests via batch processing.</p> </li> </ul> <p>For example, deploying local files looks like this:</p> <pre>
&gt; ask deploy
-------------------- Update Skill Project --------------------
Profile for the deployment: [default]
Skill Id: ...
Skill deployment finished.
Model deployment finished.
Lambda deployment finished.</pre> <p>Beyond deployment, <a href="https://developer.amazon.com/ja/docs/smapi/ask-cli-command-reference.html">a wide range of advanced operations</a> is possible, such as creating new skills, cloning existing skills, and testing skills.</p> <p>To start using the CLI, see the <a href="https://developer.amazon.com/ja/docs/smapi/quick-start-alexa-skills-kit-command-line-interface.html">ASK CLI quick start</a>.</p> <p>In addition, with the <a href="https://developer.amazon.com/ja/docs/ask-toolkit/get-started-with-the-ask-toolkit-for-visual-studio-code.html">ASK Toolkit for Visual Studio Code</a> you can use the ASK CLI integrated into Visual Studio Code, generate code snippets, and validate JSON schemas. It is very convenient, so if you use Visual Studio Code, be sure to try it as well.</p> </div> <div> <h3>Publishing Your Skill</h3> <p>When you publish a skill, it is listed in the <a href="https://www.amazon.co.jp/b?ie=UTF8&amp;node=4788676051">skill store</a> and becomes available to Alexa users in general. Applying to publish a custom skill is free (fees for backend services and the like are separate), and individuals can apply as well. To publish, you must follow the prescribed procedure and pass skill certification. The four main points of the review are:</p> <ul> <li> <p>The skill must not violate Amazon's <a href="https://developer.amazon.com/ja/docs/custom-skills/policy-testing-for-an-alexa-skill.html">policy guidelines</a> for published skills. There are several rules concerning trademarks, advertising, audience age, and more.</p> </li> <li> <p>The skill must comply with the security requirements. There are several rules concerning the hosting environment, account linking, and more.</p> </li> <li> <p>The skill must pass functional testing. There are checklist items for the interaction model and for each feature, and all of them must be satisfied.</p> </li> <li> <p>The skill must pass VUI and user-experience testing. There are checklist items on conversational variety and contextual consistency that make the skill comfortable to use, and all of them must be satisfied.</p> </li> </ul> <p>Another requirement that reviews frequently flag concerns the <a href="https://developer.amazon.com/ja/docs/custom-skills/choose-the-invocation-name-for-a-custom-skill.html#invocation-name-requirements">invocation name requirements</a>, so check these too. The detailed requirements are described in the linked documentation; read through them before submitting and make sure your skill meets them.</p> <p>Once you have confirmed that the requirements are met, fill in the required fields under "Skill Preview," "Privacy &amp; Compliance," and "Availability" on the "Distribution" tab of the developer console. For sample entries and notes on each field, see the <a href="https://developer.amazon.com/ja/docs/devconsole/launch-your-skill.html">technical documentation</a>. You need to prepare icons in two sizes for publication, but the Alexa skill icon builder below the input form makes it easy to create original icons.</p> <p><img alt="IconBuilder" src="https://m.media-amazon.com/images/G/09/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/trainingv3/AdvancedDevelopment/0/IconBuilder.png" /></p> <p>When you have completed the input, run "Validation" and "Functional test" on the "Certification" tab, and submit once all issues are resolved.</p> <p>After you submit, the skill is reviewed, and you are notified of the result when the review is complete. If the skill does not pass, the result and the reasons are sent by email; address them, make corrections, and submit again. If the skill passes, you receive an email with an estimate of when it will go live.</p> <p>When the skill is published, a <strong>live skill</strong> with the same name as the skill you developed is added to the skill list in the developer console. The live skill's Status column shows "Live"; its settings can only be viewed, not changed. To keep adding features and improvements to the skill, modify the one whose Status column shows "In development."</p> <p><img alt="screenshot published skill" src="https://m.media-amazon.com/images/G/09/mobile-apps/dex/alexa/alexa-skills-kit/jp/blog/trainingv3/AdvancedDevelopment/0/screenshot-published-skill.png" /></p> <p>You can check the skill's invocation counts, user numbers, and more from <a href="https://developer.amazon.com/ja/blogs/alexa/post/fa20b61e-daa2-4ffa-9761-9571f619baf7/jp-enablement-and-account-linking-metrics">"Reports"</a> in the developer console skill list. Use this information to improve your skill so that more users will enjoy it.</p> <p>Please also see the following blog posts:</p> <ul> <li> <p><a href="https://developer.amazon.com/ja/blogs/alexa/post/f396a024-1a74-4869-899d-81269bb806e2/certification-jp-6th">Tips for Alexa Skill Certification: Detailed Descriptions</a></p> </li> <li> <p><a href="https://developer.amazon.com/ja/blogs/alexa/post/3e2a13b3-3435-414e-8b3f-79220fd42dd4/certification-jp-5th">Tips for Skill Certification: The Help Intent</a></p> </li> <li> <p><a href="https://developer.amazon.com/ja/blogs/alexa/post/a0a3bc83-2247-48e3-a2a4-cf7af7fec843/certification-jp-2nd">Tips for Alexa Skill Certification: Sample Phrases</a></p> </li> <li> <p><a href="https://developer.amazon.com/ja/blogs/alexa/post/0ce8abf3-2b86-4a1d-9c6f-cee639d8bddf/certification-jp">A Collection of Tips for Alexa Skill Certification</a></p> </li> <li> <p><a href="https://developer.amazon.com/ja/blogs/alexa/post/de085f2a-3cfb-4549-9f23-52cdef6f263a/certification-jp-3rd">About Advertising in Alexa Skills</a></p> </li> <li> <p><a href="https://developer.amazon.com/ja/blogs/alexa/post/32a17821-afac-4fa1-a08f-b8df3169ce12/manage-your-skills-through-the-alexa-developer-console-or-smapi-jp">You Can Now See Review Status and Skill Publication Status</a></p> </li> </ul> <p><a href="https://developer.amazon.com/ja/alexa-skills-kit/training/building-a-skill">■ Training Course Table of Contents ■</a></p> </div> /blogs/alexa/post/28368692-a0b9-4579-b129-e6793bef7848/alexa-skill-recipe-update-making-http-requests-to-get-data-from-an-external-api-using-the-ask-software-development-kit-for-node-js-version-2 Alexa Skill Recipe Update: Making HTTP Requests to Get Data from an External API Using the ASK Software Development Kit for Node.js Version 2 Jennifer King 2019-05-23T16:22:04+00:00 2019-05-23T16:22:04+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/skill_recipe_blog._CB499280997_.png" style="height:240px; width:954px" /></p> <p>In this skill recipe update, we look at two of the most popular methods you can use to call an external API through your Alexa skill using version 2 of the Node.js Software Development
Kit.</p> <p>I love interacting with the Alexa developer community. Your thirst for knowledge constantly challenges me to experiment and dive deep into many voice-related topics. One question I’m often asked at hackathons and training sessions is how to get data from an external API from the AWS Lambda function. It’s a best practice to separate your data from your business logic. Alexa evangelist <a href="https://twitter.com/ajot" target="_blank">Amit Jotwani</a> wrote a <a href="https://developer.amazon.com/blogs/alexa/post/a9ef18b2-ef68-44d4-86eb-dbdb293853bb/alexa-skill-recipe-making-http-requests-to-get-data-from-an-external-api">previous Alexa skill recipe</a> about this topic using version 1 (v1) of the Alexa Skills Kit (ASK) Software Development Kit (SDK) for Node.js. Since then, we’ve updated the <a href="https://developer.amazon.com/blogs/alexa/post/decb3931-2c81-497d-85e4-8fbb5ffb1114/now-available-version-2-of-the-ask-software-development-kit-for-node-js">SDK to version 2 (v2)</a>. V2 offers new capabilities to think about, including new features that enable you to build skills faster and reduce complexity in your code. This updated skill recipe shows how to make external API calls using v2 of the ASK SDK for Node.js.</p> <h2>Adding an HTTP Request to a Fact Skill</h2> <p>To keep it simple, we’re going to use the fact skill as our base. Our customers will interact with the skill by asking it for a fact about a number. 
Let’s take a look at a simple interaction between our skill and our customer:</p> <p style="margin-left:40px"><em><strong>Customer: </strong>Alexa, ask number facts to give me a fact about seven.<br /> <br /> <strong>Alexa:</strong> Seven is the number of main stars in the constellations of the Big Dipper and Orion.</em></p> <p>When our customer says, “give me a fact about seven,” the <strong>GetNumberFactIntent</strong> is triggered and the <strong>number</strong> slot is set to ‘7’. Our skill code makes an HTTP GET request to <a href="http://numbersapi.com" target="_blank">numbersapi.com</a> to get a random fact about the number ‘7’. You can test the API by typing http://numbersapi.com/7 into your browser.</p> <p>When we first built our number facts skill, we hardcoded our facts straight into the code. Let’s take a moment to understand how the code works before we update it to get the facts from <a href="http://numbersapi.com" target="_blank">numbersapi.com</a>.</p> <h2>Understanding the Hard-Coded Facts</h2> <p>First, we defined an object called <strong>numberFacts</strong>. It functions as a lookup table. 
Take a moment to look through the table:</p> <pre> <code>const numberFacts = {
  &quot;1&quot;: &quot;is the number of moons orbiting the earth.&quot;,
  &quot;2&quot;: &quot;is the number of stars in a binary star system (a stellar system consisting of two stars orbiting around their center of mass).&quot;,
  &quot;3&quot;: &quot;is the number of consecutive successful attempts in a hat trick in sports.&quot;,
  &quot;4&quot;: &quot;is the number of movements in a symphony.&quot;,
  &quot;5&quot;: &quot;is the number of basic tastes (sweet, salty, sour, bitter, and umami).&quot;,
  &quot;6&quot;: &quot;is the number of fundamental flight instruments lumped together on a cockpit display.&quot;,
  &quot;7&quot;: &quot;is the number of main stars in the constellations of the Big Dipper and Orion.&quot;,
  &quot;8&quot;: &quot;is the number of bits in a byte.&quot;,
  &quot;9&quot;: &quot;is the number of innings in a regulation, non-tied game of baseball.&quot;,
  &quot;10&quot;: &quot;is the number of hydrogen atoms in butane, a hydrocarbon.&quot;,
  &quot;11&quot;: &quot;is the number of players in a football team.&quot;
};</code></pre> <p>If our customer asked for a fact about ‘7’, we can pass it to <strong>numberFacts</strong> to get the fact. 
The code to do that would look like:</p> <pre> <code>numberFacts[&quot;7&quot;]</code></pre> <p>Since we don’t want to hardcode the number 7, let’s take a look at how we’d look up the fact for our <strong>number</strong> slot.</p> <pre> <code>const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
const theFact = numberFacts[theNumber];</code></pre> <p>Below is the complete code for <strong>GetNumberFactIntentHandler</strong> before we make any updates.</p> <pre> <code>const GetNumberFactIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      &amp;&amp; handlerInput.requestEnvelope.request.intent.name === 'GetNumberFactIntent';
  },
  handle(handlerInput) {
    const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
    const theFact = numberFacts[theNumber];
    const speakOutput = theNumber + ' ' + theFact;
    const repromptOutput = &quot; Would you like another fact?&quot;;
    return handlerInput.responseBuilder
      .speak(speakOutput)
      .reprompt(speakOutput + repromptOutput)
      .getResponse();
  }
};</code></pre> <p>We simply access the <strong>number</strong> slot, look up the fact using the <strong>numberFacts</strong> object, and then have our skill tell the user the fact by returning a response built with the <strong>responseBuilder</strong>.</p> <h2>Using Promises to Make Asynchronous Calls</h2> <p>Before we make any updates to our code, we need to understand that when we request data from an external API, there will be a slight delay before we receive a response. Node.js has only one main code-execution thread, so if you do anything that takes a long time, no other code can run, which hurts performance. To prevent this, most operations that can potentially take a long time are written to be asynchronous, which allows the code defined after them to run. 
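<p>Node’s non-blocking behavior is easy to see outside of a skill. The following is a minimal, generic sketch (it is not part of the skill code) showing that code defined after an asynchronous call runs without waiting for it:</p>

```javascript
// Generic Node.js sketch (not part of the skill code): setTimeout stands in
// for any slow asynchronous operation, such as an HTTP request.
setTimeout(() => {
  console.log('second: the asynchronous work finished');
}, 100);

// This line is defined after the asynchronous call, but it runs first,
// because the asynchronous call does not block the main thread.
console.log('first: the synchronous code keeps running');
```

<p>Running this prints the “first” line before the “second” one, which is exactly why a skill cannot simply read the result of an asynchronous request on the very next line.</p>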
To demonstrate how this works when the user says “give me a fact about 7,” take a look at the following code:</p> <pre> <code>const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
// making an http get request is slow, so getHttp is asynchronous
const theFact = getHttp(&quot;http://numbersapi.com/&quot; + theNumber);
console.log(&quot;The fact is: &quot;, theFact);
const speakOutput = theNumber + theFact;
return handlerInput.responseBuilder
  .speak(speakOutput)
  .getResponse();</code></pre> <p>The getHttp function is asynchronous, so it won’t pause execution, and it won’t be complete before the console.log on the line below it executes. If you check your logs, you will see <strong>“The fact is: undefined”</strong>, which is not good. Making things worse, Alexa will say, “7 undefined.” This will confuse our customer, so let’s keep working at this.</p> <p>How do we wait for the response without blocking the main execution thread? We don’t. We pass getHttp a block of code to execute when it finishes! We’re basically saying, “hey getHttp, once you finish getting that stuff for us, follow these instructions to process it.” Let’s take a look at how that works:</p> <pre> <code>const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
// making an http get request is slow, so getHttp is asynchronous
getHttp(&quot;http://numbersapi.com/&quot; + theNumber, response =&gt; {
  console.log(&quot;The fact is: &quot;, response);
  const speakOutput = theNumber + &quot; &quot; + response;
  return handlerInput.responseBuilder
    .speak(speakOutput)
    .getResponse();
});</code></pre> <p>Here we are passing an <a href="https://codeburst.io/javascript-arrow-functions-for-beginners-926947fc0cdc">arrow function</a> to getHttp, which is the code we want to execute when we get a response back from <strong>numbersapi.com</strong>.</p> <p>Now we understand how asynchronous functions work. 
Let’s take a look at how we’ll define the getHttp function. We’ll be using the built-in <a href="https://nodejs.org/api/http.html">http module</a> to make the request, so we’ll need to require the <strong>http</strong> module in order to use it:</p> <pre> <code>const http = require('http');</code></pre> <p>The <a href="https://nodejs.org/api/http.html#http_http_get_options_callback">get</a> function is asynchronous, so getHttp will need to be asynchronous as well. To do this, we are going to use promises. As we did above, we could use a plain callback, but using a promise now will enable us to cut a lot of complexity out of our code when we get to recipe 2.</p> <p>A promise is an object that represents the eventual success or failure of an asynchronous task. When you define a promise, you define a closure that the promise executes. In that code you signal success or failure by calling either <strong>resolve</strong> or <strong>reject</strong>, respectively. To consume the result, you call the <strong>then</strong> method, which takes a block of code (like an arrow function, which we used above) that runs upon success. Any value you pass to <strong>resolve</strong> will be available in <strong>then</strong>. To handle failures, you can provide a block of code to <strong>catch</strong>; likewise, any value you pass to <strong>reject</strong> will be available within <strong>catch</strong>.</p> <p>Let’s take a look at how you’d create a promise. We’re going to have <strong>getHttp</strong> return a promise that wraps the <strong>http.get</strong> function, but for now let’s take a high-level look at how the function would create it.</p> <pre> <code>function getHttp(url, query) {
  return new Promise((resolve, reject) =&gt; {
    // set up the http request
    ...
    if (error) {
      reject(error);
    } else {
      resolve(response);
    }
  });
}</code></pre> <p>The function creates and returns a new Promise. 
If there’s an error, we call <strong>reject</strong>; if the request completes without any errors, we call <strong>resolve</strong> and pass it the response so the arrow function we pass to <strong>then</strong> can access it.</p> <p><strong>getHttp</strong> returns a promise, which is an object. The following code creates the promise, but doesn’t yet do anything with the result of the asynchronous work:</p> <pre> <code>const URL = &quot;http://numbersapi.com/&quot;;
const theNumber = &quot;7&quot;;
getHttp(URL, theNumber) // returns a promise; nothing handles the result yet</code></pre> <p>To consume the result, we call <strong>then</strong> and pass it an arrow function:</p> <pre> <code>getHttp(URL, theNumber)
  .then(response =&gt; {
    // the code here will run when the request completes.
  })
  .catch(error =&gt; {
    // the code here will run if there's an error.
  });</code></pre> <p>Now let’s take a look at our getHttp function. It takes two parameters, <strong>url</strong> and <strong>query</strong>. The <strong>url</strong> is the address of the API we want to call and <strong>query</strong> is the value we want to look up. It uses the <strong>http.get</strong> function, which takes a URL and an arrow function. We combine the <strong>url</strong> and <strong>query</strong> together to form the full address and pass it to <strong>http.get</strong>. In the arrow function we subscribe to events called <strong>data</strong>, <strong>end</strong>, and <strong>error</strong>.</p> <p>When you make a request you might not get all the data in one piece, so you’ll need to keep appending the chunks of data until the request ends. Take a look at the block below to see how we are appending the chunks of data:</p> <pre> <code>...
let returnData = '';
response.on('data', chunk =&gt; {
  returnData += chunk;
});
...</code></pre> <p>If there’s an error, we’ll receive an <strong>error</strong> event. 
This is where we’ll want to call <strong>reject</strong>.</p> <pre> <code>response.on('error', error =&gt; {
  reject(error);
});</code></pre> <p>There’s another place to check for errors. If we get a status code from the server that indicates an error, we’ll also want to call reject.</p> <pre> <code>if (response.statusCode &lt; 200 || response.statusCode &gt;= 300) {
  return reject(new Error(`${response.statusCode}: ${response.req.getHeader('host')} ${response.req.path}`));
}</code></pre> <p>If the request completes without an error, we’ll receive an <strong>end</strong> event. In that case, we’ll want to call <strong>resolve</strong>.</p> <pre> <code>response.on('end', () =&gt; {
  resolve(returnData);
});</code></pre> <p>Zooming out, the entire function appears below:</p> <pre> <code>const getHttp = function(url, query) {
  return new Promise((resolve, reject) =&gt; {
    const request = http.get(`${url}/${query}`, response =&gt; {
      response.setEncoding('utf8');
      let returnData = '';
      if (response.statusCode &lt; 200 || response.statusCode &gt;= 300) {
        return reject(new Error(`${response.statusCode}: ${response.req.getHeader('host')} ${response.req.path}`));
      }
      response.on('data', chunk =&gt; {
        returnData += chunk;
      });
      response.on('end', () =&gt; {
        resolve(returnData);
      });
      response.on('error', error =&gt; {
        reject(error);
      });
    });
    request.end();
  });
};</code></pre> <h2>Using the Promise from Our Handler</h2> <p>Now that we’ve made our <strong>getHttp</strong> function return a promise, we’ll use it from our <strong>handle</strong> function to make the http request:</p> <pre> <code>getHttp(URL, theNumber).then(response =&gt; {
  speakOutput += &quot; &quot; + response;
  return handlerInput.responseBuilder
    .speak(speakOutput + repromptOutput)
    .reprompt(repromptOutput)
    .getResponse();
}).catch(error =&gt; {
  console.log('Error with HTTP Request:', error);
  repromptOutput = &quot;&quot;;
  return handlerInput.responseBuilder
    .speak(`I wasn't able to find a fact for ${theNumber}. 
${repromptOutput}`)
    .reprompt(repromptOutput)
    .getResponse();
});</code></pre> <p>Our <strong>then</strong> method takes an arrow function that handles the response that comes back from numbersapi.com and adds it to the <strong>speakOutput</strong>. It then builds the response using the <strong>responseBuilder</strong> and returns it. The <strong>catch</strong> method takes an arrow function that logs the error and tells the customer that there was an error. Putting it all together, our handle function looks like:</p> <pre> <code>handle(handlerInput) {
  const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
  let speakOutput = theNumber;
  const repromptOutput = &quot; Would you like another fact?&quot;;
  getHttp(URL, theNumber).then(response =&gt; {
    speakOutput += &quot; &quot; + response;
    return handlerInput.responseBuilder
      .speak(speakOutput + repromptOutput)
      .reprompt(repromptOutput)
      .getResponse();
  }).catch(error =&gt; {
    return handlerInput.responseBuilder
      .speak(`I wasn't able to find a fact for ${theNumber}`)
      .reprompt(repromptOutput)
      .getResponse();
  });
}</code></pre> <p>If you were to use this code, however, your skill would say, “There was a problem with the requested skill’s response.” What happened?</p> <h2>I Used a Promise. Why Is There a Problem?</h2> <p>Once you go asynchronous, everything that consumes the result must also be asynchronous. Our <strong>getHttp</strong> function returns a Promise object. We call <strong>then</strong> and the code executes, but execution of <strong>handle</strong> hasn’t been paused, so the <strong>handle</strong> function completes. When we return the response using the <strong>responseBuilder</strong> in our arrow function, the response, which is what your skill will say, isn’t being returned from your <strong>handle</strong> function. 
To demonstrate, add the following code below the <strong>catch</strong> and your skill will say: “We spoke before the request completed.”</p> <pre> <code>return handlerInput.responseBuilder
  .speak(&quot;We spoke before the request completed&quot;)
  .getResponse();</code></pre> <p>Your code should now look like:</p> <pre> <code>handle(handlerInput) {
  const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
  let speakOutput = theNumber;
  const repromptOutput = &quot; Would you like another fact?&quot;;
  getHttp(URL, theNumber).then(response =&gt; {
    speakOutput += &quot; &quot; + response;
    return handlerInput.responseBuilder
      .speak(speakOutput + repromptOutput)
      .reprompt(repromptOutput)
      .getResponse();
  }).catch(error =&gt; {
    return handlerInput.responseBuilder
      .speak(`I wasn't able to find a fact for ${theNumber}`)
      .reprompt(repromptOutput)
      .getResponse();
  });
  return handlerInput.responseBuilder
    .speak(&quot;We spoke before the request completed&quot;)
    .getResponse();
}</code></pre> <p>So how do we make sure our handle function returns the response from our promise? We return a promise! We can wrap the <strong>getHttp</strong> call in a new promise and call <strong>resolve</strong> and <strong>reject</strong> instead of return.</p> <pre> <code>return new Promise((resolve, reject) =&gt; {
  getHttp(URL, theNumber).then(response =&gt; {
    speakOutput += &quot; &quot; + response;
    resolve(handlerInput.responseBuilder
      .speak(speakOutput + repromptOutput)
      .reprompt(repromptOutput)
      .getResponse());
  }).catch(error =&gt; {
    reject(handlerInput.responseBuilder
      .speak(`I wasn't able to find a fact for ${theNumber}`)
      .getResponse());
  });
});</code></pre> <h2>Recipe 1: Code Complete with Promises</h2> <p>Once we have returned our promise, the SDK will call <strong>then</strong> on it and use the response it receives. This response is the one we built using the <strong>responseBuilder</strong>. It includes the data we requested from numbersapi.com. 
Let’s take a look at the whole <strong>handle</strong> function:</p> <pre> <code>handle(handlerInput) {
  const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
  let speakOutput = theNumber;
  const repromptOutput = &quot; Would you like another fact?&quot;;
  return new Promise((resolve, reject) =&gt; {
    getHttp(URL, theNumber).then(response =&gt; {
      speakOutput += &quot; &quot; + response;
      resolve(handlerInput.responseBuilder
        .speak(speakOutput + repromptOutput)
        .reprompt(repromptOutput)
        .getResponse());
    }).catch(error =&gt; {
      reject(handlerInput.responseBuilder
        .speak(`I wasn't able to find a fact for ${theNumber}`)
        .getResponse());
    });
  });
}

const getHttp = function(url, query) {
  return new Promise((resolve, reject) =&gt; {
    const request = http.get(`${url}/${query}`, response =&gt; {
      response.setEncoding('utf8');
      let returnData = '';
      if (response.statusCode &lt; 200 || response.statusCode &gt;= 300) {
        return reject(new Error(`${response.statusCode}: ${response.req.getHeader('host')} ${response.req.path}`));
      }
      response.on('data', chunk =&gt; {
        returnData += chunk;
      });
      response.on('end', () =&gt; {
        resolve(returnData);
      });
      response.on('error', error =&gt; {
        reject(error);
      });
    });
    request.end();
  });
};</code></pre> <h2>Using Async/Await to Clean Things Up</h2> <p>Promises are exciting, and I’ve only scratched the surface of how they can be used. Our code has become a little hard to read because we’ve nested our <strong>getHttp</strong> call, which returns a promise, inside a new promise. Let’s use two relatively new JavaScript keywords, <strong>async</strong> and <strong>await</strong>, to simplify our code.</p> <p>The <strong>async</strong> keyword denotes that the following function returns a promise. 
The cool thing is that even if the function returns a non-promise value, JavaScript will wrap it in a resolved promise so you don’t have to.</p> <p>We can use this wonderful keyword to declare that our <strong>handle</strong> function returns a promise, and instead of nesting our call to <strong>getHttp</strong> in a new promise, we’ll let JavaScript do the hard work.</p> <pre> <code>async handle(handlerInput) {
  ...
}</code></pre> <p>Now let’s talk about <strong>await</strong>. This keyword pauses function execution until the promise it precedes has settled. This doesn’t mean that we’re pausing execution of the main thread; we’re simply pausing execution of our <strong>async</strong> function. In our case, we’re going to pause <strong>handle</strong> until <strong>getHttp</strong> has been resolved or rejected. Let’s take a look at how we’d use the <strong>await</strong> keyword.</p> <pre> <code>const response = await getHttp(URL, theNumber);</code></pre> <p>Isn’t that much cleaner! We don’t have to call <strong>then</strong>, nor do we have to pass in an arrow function; JavaScript pauses the execution for us. The response will be the value that we passed to the <strong>resolve</strong> function when we defined our promise in <strong>getHttp</strong>.</p> <p>So what happens if there’s an error? With our promise we called <strong>catch</strong> to handle the error case. When we use the <strong>await</strong> keyword, a fulfilled promise returns its resolved value, but a rejected promise throws its error. So we need to wrap our code in a <strong>try/catch</strong> block.</p> <pre> <code>try {
  const response = await getHttp(URL, theNumber);
} catch (error) {
  // handle the error
}</code></pre> <p>If there’s an error, the code in our <strong>catch</strong> block will execute, so we can log the error and ask our customer for another number to look up. 
We can build our response with the response builder inside the <strong>try</strong> and <strong>catch</strong> blocks, which allows us to have only one <strong>return</strong> statement at the end of our <strong>handle</strong> function and makes our code easier to read. Our complete function that uses <strong>async</strong> and <strong>await</strong> appears below.</p> <h2>Recipe 2: Code Complete Using Async and Await</h2> <pre> <code>async handle(handlerInput) {
  const theNumber = handlerInput.requestEnvelope.request.intent.slots.number.value;
  let speakOutput = theNumber;
  const repromptOutput = &quot; Would you like another fact?&quot;;
  try {
    const response = await getHttp(URL, theNumber);
    speakOutput += &quot; &quot; + response;
    handlerInput.responseBuilder
      .speak(speakOutput + repromptOutput)
      .reprompt(repromptOutput);
  } catch (error) {
    handlerInput.responseBuilder
      .speak(`I wasn't able to find a fact for ${theNumber}`)
      .reprompt(repromptOutput);
  }
  return handlerInput.responseBuilder.getResponse();
}

const getHttp = function(url, query) {
  return new Promise((resolve, reject) =&gt; {
    const request = http.get(`${url}/${query}`, response =&gt; {
      response.setEncoding('utf8');
      let returnData = '';
      if (response.statusCode &lt; 200 || response.statusCode &gt;= 300) {
        return reject(new Error(`${response.statusCode}: ${response.req.getHeader('host')} ${response.req.path}`));
      }
      response.on('data', chunk =&gt; {
        returnData += chunk;
      });
      response.on('end', () =&gt; {
        resolve(returnData);
      });
      response.on('error', error =&gt; {
        reject(error);
      });
    });
    request.end();
  });
}</code></pre> <p>Our <strong>handle</strong> function is much cleaner, and we didn’t even have to make any updates to the <strong>getHttp</strong> function to take advantage of <strong>async/await</strong>, because we made it return a promise from the beginning. The <strong>await</strong> keyword works only with promises. 
If you’re using an asynchronous function that doesn’t return a promise, you can wrap it in a promise so you can use <strong>await</strong>. This is why we made our <strong>getHttp</strong> function return a promise.</p> <h2>Conclusion</h2> <p>Go ahead and try out this new skill recipe. You can replace the Numbers API with your own external API. The internet is your oyster. Putting this technique to use will open up your skill to access data from outside of the values you’ve defined in code.</p> <p>I hope this post has inspired you to think about ways that you can augment the data that your skill uses with external APIs. I would love to hear your ideas. Please share them with me on Twitter at <a href="https://twitter.com/sleepydeveloper" target="_blank">@SleepyDeveloper</a>.</p> <p>For more recipes, visit the <a href="https://github.com/alexa/alexa-cookbook" target="_blank">Alexa Skill-Building Cookbook</a> on GitHub.</p> <h2>More Resources</h2> <ul> <li><a href="https://developer.amazon.com/post/Tx1UE9W1NQ0GYII/Publishing-Your-Skill-Code-to-Lambda-via-the-Command-Line-Interface">Publishing Your Skill Code to Lambda via the Command Line Interface</a></li> <li><a href="https://developer.amazon.com/docs/smapi/quick-start-alexa-skills-kit-command-line-interface.html">Quick Start Alexa Skills Kit Command Line Interface (ASK CLI)</a></li> <li><a href="https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs" target="_blank">Alexa Skills Kit SDK for Node.js</a></li> <li><a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank">AWS SDK for JavaScript in Node.js</a></li> <li><a href="https://www.npmjs.com/package/request" target="_blank">Request Simplified HTTP client</a></li> <li><a href="https://developer.amazon.com/docs/custom-skills/send-the-user-a-progressive-response.html">Send the User a Progressive Response</a></li> </ul> 
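<p>The wrapping technique mentioned in the conclusion above can be sketched in a few lines. This is a generic illustration, not part of the skill: <code>setTimeout</code> stands in for any callback-based API that you want to use with <strong>await</strong>:</p>

```javascript
// Wrap a callback-style API in a promise so it can be awaited.
// setTimeout stands in for any Node.js API that takes a callback
// instead of returning a promise.
const delay = (ms, value) => new Promise((resolve) => {
  setTimeout(() => resolve(value), ms);
});

const run = async () => {
  const result = await delay(50, 'done'); // pauses only this async function
  console.log(result);                    // done
};

run();
```

<p>For Node.js APIs that follow the standard <code>(error, result)</code> callback convention, the built-in <code>util.promisify</code> helper does this wrapping for you.</p>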
/blogs/alexa/post/896a5310-4189-4b8c-bc33-5610728019da/how-to-get-started-with-amazon-pay-to-sell-goods-and-services-from-your-alexa-skills How to Get Started with Amazon Pay to Sell Goods and Services from Your Alexa Skills Kristin Fritsche 2019-05-22T08:30:00+00:00 2019-05-22T15:24:58+00:00 <p><img alt="b895dd2c0d1ae02f997ffbec94e9f036cf943a3f171b8a7911e538623f37de8b_c95fe012-f03e-4fd7-8add-70cf1b8958d4.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/b895dd2c0d1ae02f997ffbec94e9f036cf943a3f171b8a7911e538623f37de8b_c95fe012-f03e-4fd7-8add-70cf1b8958d4.png._CB464480825_.png?t=true" /></p> <p>With Amazon Pay for Alexa Skills, you can sell real-world goods and services such as tickets for movies or concerts, car pick up services, food, and more. This post will show you how to add Amazon Pay to your skill in just a few simple steps.</p> <p>With Amazon Pay for Alexa Skills, you can sell real-world goods and services such as tickets for movies or concerts, car pick up services, food, and more. You can reach customers around the world through an interaction as natural as voice, powered by a seamless payment processing flow handled by Amazon Pay.</p> <p>Developers are already using <a href="https://developer.amazon.com/docs/amazon-pay/alexa-amazon-pay-faq.html#what_products_and_services" target="_blank">Amazon Pay</a> to bring a variety of real-world products to voice. 
For example, the British rail operator <a href="https://www.virgintrains.co.uk/alexa" target="_blank">Virgin Trains</a> is able to sell train tickets to customers directly through their Alexa-enabled device.</p> <p>After building an engaging voice experience, you’re ready to learn more about monetizing your Alexa skill using Amazon Pay for Alexa Skills. This post will show you how to add Amazon Pay to your skill in just a few simple steps. Before you start, sign up as an Amazon Pay merchant. Learn more in <a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html#before_you_begin" target="_blank">our guide</a>.</p> <p>The Amazon Pay for Alexa Skills APIs consist of only two operations - <em>Setup</em> and <em>Charge</em>. We will walk you through both below.</p> <h2>Setup</h2> <p><em>Setup</em> will create an agreement between your merchant account and the buyer, called a <em>BillingAgreement</em>, which will be used to charge the customer in a later step. Amazon Pay uses <a href="https://developer.amazon.com/blogs/alexa/post/7b332b32-893e-4cad-be07-a5877efcbbb4/skill-connections-preview-now-skills-can-work-together-to-help-customers-get-more-done">Alexa Skill Connections</a> to have your skill interact with the Amazon Pay services. 
To initiate the creation of the agreement, we create a matching <em>Connections directive</em> to call the setup operation.</p> <pre> <code>let setupDirective = {
  'type': 'Connections.SendRequest',
  'name': 'Setup',
  'payload': {
    '@type': 'SetupAmazonPayRequest',
    '@version': '2',
    'sellerId': 'AEMGQXXXKD154',
    'countryOfEstablishment': 'US',
    'ledgerCurrency': 'USD',
    'checkoutLanguage': 'en-US',
    'needAmazonShippingAddress': true,
    'billingAgreementAttributes': {
      '@type': 'BillingAgreementAttributes',
      '@version': '2',
      'sellerNote': 'Thanks for shaving with No Nicks',
      'sellerBillingAgreementAttributes': {
        '@type': 'SellerBillingAgreementAttributes',
        '@version': '2'
      }
    }
  },
  'token': 'IK1yRWd8VWfF'
};</code></pre> <p>First, we define the <em>Connections.SendRequest</em> directive for the Amazon Pay <em>Setup</em> operation. The payload inside the directive defines all Amazon Pay relevant information. The most essential fields are the <em>sellerId</em>, which defines <em>who</em> is initiating the charge, and the <em>countryOfEstablishment</em> and <em>ledgerCurrency</em>, which define <em>how</em> to charge the customer. For definitions of all other fields, refer to our <a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html" target="_blank">comprehensive guide</a> linked to in the resources section.</p> <p>You'll notice we did not define <em>how much</em> to charge yet. That is handled by the <em>Charge</em> operation if you charge inside your skill, or by any other service using our backend APIs if you are charging “offline”.</p> <p>Adding the directive to your response is fairly simple:</p> <pre> <code>return handlerInput.responseBuilder
  .addDirective(setupDirective)
  .withShouldEndSession(true)
  .getResponse();</code></pre> <p>Note: we end the session because the <em>Connections.SendRequest</em> directive will terminate your skill session and invoke it again with a <em>Connections.Response</em>. 
If you do not end your session or add a re-prompt, it will result in an error.</p> <p>To catch the response, simply define a handler for the <em>Connections.Response</em> request:</p> <pre> <code>canHandle(handlerInput) {
  return handlerInput.requestEnvelope.request.type === &quot;Connections.Response&quot;
    &amp;&amp; handlerInput.requestEnvelope.request.name === &quot;Setup&quot;;
}</code></pre> <p>The payload of the response will contain the <em>billingAgreementId</em> needed to charge the customer.</p> <h2>Charge</h2> <p>Amazon Pay can help you with a variety of use cases. We classify them as the payment workflows <em>Charge Now</em> and <em>Charge Later</em>.</p> <p><a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html#workflow1" target="_blank">Charge Now</a> allows you to sell real-world goods (e.g. tickets, clothing, etc.) and charge the buyer while they are still interacting with your skill. It's a perfect match for one-time purchases where you know the exact charge amount. The <em>starter kit</em> in the “No Nicks” demo skill is an example of Charge Now.</p> <p><a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html#workflow2" target="_blank">Charge Later</a> allows you to set up a <em>BillingAgreement</em>, which represents the buyer's payment and delivery address preferences, if available, and use this agreement to charge the customer at a later point in time via Amazon Pay <a href="https://pay.amazon.com/uk/developer/documentation/apireference/201751630" target="_blank">backend APIs</a>. It's the perfect match when you don't know the exact order total yet - e.g. 
for up-sell opportunities, pay-as-you-go scenarios, or subscriptions, where a buyer will be charged at regular intervals.</p> <p>In the <strong><em>chargeNow</em></strong> workflow, you can similarly execute a <em>charge</em> request, using the <em>billingAgreementId</em> received in the <em>setup</em> response.</p> <pre> <code>const billingAgreementId = responsePayload.billingAgreementDetails.billingAgreementId;

let directiveObject = {
  'type': 'Connections.SendRequest',
  'name': 'Charge',
  'payload': {
    '@type': 'ChargeAmazonPayRequest',
    '@version': '2',
    'sellerId': 'AEMGQXXXKD154',
    'billingAgreementId': billingAgreementId,
    'paymentAction': 'AuthorizeAndCapture',
    'authorizeAttributes': {
      '@type': 'AuthorizeAttributes',
      '@version': '2',
      'authorizationReferenceId': 'ml3qPJG3nC6c65UE',
      'authorizationAmount': {
        '@type': 'Price',
        '@version': '2',
        'amount': '9',
        'currencyCode': 'USD'
      },
      'transactionTimeout': 0,
      'sellerAuthorizationNote': '',
      'softDescriptor': 'No Nicks'
    },
    'sellerOrderAttributes': {
      '@type': 'SellerOrderAttributes',
      '@version': '2',
      'storeName': 'No Nicks',
      'sellerNote': 'Thanks for shaving with No Nicks'
    }
  },
  'token': 'WASv2lk4pdfI'
};</code></pre> <p>The <em>charge</em> operation requires you to at least specify the total amount and currency to request from the customer. 
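<p>The <code>responsePayload</code> used above comes out of the <em>Setup</em> response. As a minimal sketch (the handler structure follows the <em>canHandle</em> shown earlier; the session-attribute bookkeeping and names are illustrative, not prescribed by the API), a Setup response handler could extract the <em>billingAgreementId</em> like this:</p>

```javascript
// Sketch: handle the Connections.Response for Setup and keep the
// billingAgreementId around for the Charge step. Attribute handling
// here is illustrative; a real skill might persist it differently.
const SetupResponseHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'Connections.Response'
      && handlerInput.requestEnvelope.request.name === 'Setup';
  },
  handle(handlerInput) {
    const responsePayload = handlerInput.requestEnvelope.request.payload;
    const billingAgreementId = responsePayload.billingAgreementDetails.billingAgreementId;

    // Stash the id so the Charge directive can reference it.
    const attributes = handlerInput.attributesManager.getSessionAttributes();
    attributes.billingAgreementId = billingAgreementId;
    handlerInput.attributesManager.setSessionAttributes(attributes);

    // Here you would add the Charge directive shown above before returning.
    return handlerInput.responseBuilder.getResponse();
  }
};
```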
For a full reference, refer to the <a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html" target="_blank">comprehensive guide</a> in the resources below.</p> <p>Just like with the <em>setup</em> phase, we'll add the directive to the <em>responseBuilder</em> when preparing the response.</p> <pre> <code>return handlerInput.responseBuilder
  .addDirective(directiveObject)
  .withShouldEndSession(true)
  .getResponse();</code></pre> <p>Once again, define a handler for the <em>Connections.Response</em> request:</p> <pre> <code>canHandle(handlerInput) {
  return handlerInput.requestEnvelope.request.type === &quot;Connections.Response&quot;
    &amp;&amp; handlerInput.requestEnvelope.request.name === &quot;Charge&quot;;
}</code></pre> <p>The response of the Connections request will tell you if the charge was successful or if there was an issue taking payments.</p> <p>After a successful purchase, you should send a card to the customer’s Alexa app as an order confirmation, including the order details.</p> <pre> <code>const confirmationCardResponse = 'Your order has been placed.\n' +
  'Products: 1 Starter Kit \n' +
  'Total amount: $9.00\n' +
  'Thanks for shaving with No Nicks\n' +
  'www.nonicks.com';

return handlerInput.responseBuilder
  .speak( config.confirmationIntentResponse )
  .withStandardCard( 'Order Confirmation Details', confirmationCardResponse, config.logoURL )
  .withShouldEndSession( true )
  .getResponse( );</code></pre> <p>With just a few simple steps, you’re able to take payments for real-world products or services in an Alexa skill.</p> <p>Get started today with integrating Amazon Pay into your Alexa skill and join the growing list of voice-first merchants. 
We can’t wait to see what you build!</p> <h2>Resources</h2> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/80c551eb-5303-4ade-9942-e83d55d1904f/best-practices-to-create-a-delightful-voice-commerce-experience-for-your-customers">Best Practices to Create a Delightful Voice Commerce Experience for Your Customers</a></li> <li><a href="https://developer.amazon.com/alexa-skills-kit/make-money/amazon-pay?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=DELaunch&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_funnel=Publish&amp;sc_country=DE&amp;sc_medium=Owned_WB_DELaunch_ASK_Content_Publish_DE_DEDevs&amp;sc_segment=DEDevs">Amazon Pay for Alexa Skills</a></li> <li><a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html" target="_blank">Technical Documentation: Integrate a Skill with Amazon Pay</a></li> <li><a href="https://developer.amazon.com/de/docs/amazon-pay/alexa-amazon-pay-faq.html" target="_blank">Amazon Pay FAQs</a></li> <li><a href="https://pay.amazon.com/us/developer/documentation/apireference/201751630" target="_blank">Amazon Pay API Reference Guide</a></li> <li><a href="https://github.com/alexa/skill-sample-nodejs-demo-store-amazon-pay" target="_blank">Amazon Pay Sample Skill</a></li> </ul> /blogs/alexa/post/f91afab2-22e8-44cb-8c34-5d9aaaf55463/how-to-leverage-presets-with-alexa-cooking-apis How to Leverage Presets with Alexa Cooking APIs Ahmed El Araby 2019-05-21T21:16:51+00:00 2019-05-21T21:16:51+00:00 <p>If your business offers a connected microwave, this blog post will help you create an easy-to-consume food preset catalog that you can associate with your microwave Alexa skill.</p> <p>Even as Amazon Alexa now appears on over 30,000 Alexa-compatible smart home devices, Alexa is also helping families to do more in the kitchen. With new innovative microwave products, Alexa can control the appliance from anywhere in the house with simple voice commands. 
This new functionality is available in an expanded Alexa Smart Home Skill API and helps customers prepare meals by replacing cooking controls like defrost, popcorn mode, time, and power, which would normally require 5 to 10 button presses, with a simple voice command. Additionally, the hands-free ability for a consumer to pause and resume cooking in an oven while they take a call or handle another event is exceptionally useful.</p> <p>In 2018, Amazon released its first <a href="https://amzn.to/2UvWzkL" target="_blank">voice-controlled microwave</a>, and <a href="https://amzn.to/2UwloNs" target="_blank">GE</a> followed suit. Both microwaves utilize Alexa voice capabilities and provide consumers with easy-to-remember voice commands to prepare common food items like popcorn and frozen pizza.</p> <p>If your business offers a connected microwave, this blog post will help you create an easy-to-consume food preset catalog that you can associate with your microwave Alexa skill.</p> <h2>The Cooking Interface</h2> <p>To understand how to implement commands for cooking, we will share the steps and best practices to add voice support to cooking devices. As developers integrate cooking-centered voice commands into connected microwaves, one of the first challenges for providing a great user experience is that packaged food items have complicated names. Variations on sub-brands, sizes, and flavors all lead to voice commands that might be challenging for the customer and Alexa. To help simplify and standardize this interface for developers, Alexa defines the Alexa.Cooking Interface. This interface is common to all cooking endpoints and describes the available operations for the device.</p> <p>The basic voice operation of a microwave would be something like “Alexa, two minutes on my microwave.” This command assumes that the customer has already placed a food item inside the microwave, and that the customer knows the cooking time required. 
In case the customer didn’t specify the time, Alexa would ask about the time required to cook the item.</p> <p>What if customers didn’t know the correct mode and the right amount of time to cook an item? In this case, Alexa preset cooking comes in handy. If the microwave manufacturer has created a preset catalog, customers can simply ask Alexa to cook by preset name, without needing to know the mode or the time required. In some cases, preset cooking requires either weight, volume, or quantity (count) to perfectly cook the food item. This is determined by the preset catalog author. The author can specify that one or more of these food properties are required to fulfill the request. If so, Alexa will ask the customer about count, volume, or weight if the customer didn’t specify them in the request.</p> <p>For cooking with preset settings, the Alexa.Cooking.PresetController helps developers define custom cooking settings appropriate for a manufacturer's appliance.</p> <h2>Using a Preset Catalog</h2> <p>If a microwave has an often-used or common preset, the developer should consider making it controllable with voice commands. Specifically, Alexa-enabled microwaves can provide customers with the ability to cook most of the commonly used recipes and packaged foods by simply providing the name of the food. (The food name will be resolved to a slot value, or catalog item, and will be sent to your skill within the Alexa directive.) Using voice means fewer buttons for the customer to press and convenient cooking control while their hands are busy or messy. Beyond stopping and starting cooking, adjustments to power levels and duration are also available. For example, a customer can stop cooking and then instruct Alexa to set the microwave at 80% power for three minutes.</p> <p>To understand how a preset is used for a cooking device, let’s look at an example. 
To support a preset for “PRIMO Mango Chicken with Coconut Rice,” the flow from configuration to handling a PresetController directive from Alexa looks like the following:</p> <p style="text-align:center"><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/primo_image_1.jpg._CB462828109_.jpg" style="display:block; height:343px; margin-left:auto; margin-right:auto; width:400px" /></p> <ol> <li>The developer provides Amazon with a <a href="https://developer.amazon.com/docs/device-apis/alexa-cooking-presetcontroller.html#preset-catalogs">Preset Catalog</a> of supported, custom cooking settings. This includes an entry for “PRIMO Mango Chicken with Coconut Rice.”</li> <li>After the catalog is ingested by Amazon, the developer will receive a unique preset catalog ID to be used in the discovery response of a cooking device.</li> <li>The developer builds an Alexa skill that supports the cooking endpoint.</li> <li>The discovery response returned by the cooking endpoint skill defines the required <a href="https://developer.amazon.com/docs/device-apis/alexa-cooking-presetcontroller.html#configuration-object">presetCatalogId</a> (received from catalog ingestion) and <a href="https://developer.amazon.com/docs/device-apis/alexa-cooking-presetcontroller.html#configuration-object">supportedCookingModes</a>.</li> <li>A customer enables the cooking endpoint skill through account linking.</li> <li>The customer says, “Alexa, microwave the Primo mango chicken with coconut rice.”</li> <li>Alexa interprets the food and cooking verb from the customer and sends a <a href="https://developer.amazon.com/docs/device-apis/alexa-cooking-presetcontroller.html#cookbypreset">CookByPreset</a> directive to the cooking endpoint skill.</li> <li>Using the preset directive information, the skill instructs the endpoint to cook on High for two minutes.</li> </ol> <p>To provide cooking by name of an item, it is necessary for Amazon to train Alexa to 
understand the items in a provided preset catalog to offer the best customer experience.</p> <h2>Using the Supported Cooking Modes</h2> <p>A required element of a cooking endpoint is the Supported Cooking Modes. These modes describe the configuration settings for a defined mode.</p> <p>The current CookingMode values are as follows:</p> <ul> <li>Defrost - Cooking appliance is automatically configured to defrost mode</li> <li>Off - Switches off the device</li> <li>Preset - Cooks using one of the device’s preset settings</li> <li>Reheat - Sets the device to reheating mode</li> <li>TimeCook - Sets the time and power level for cooking</li> </ul> <p>For example, an endpoint with the Defrost cooking mode defined could support the following user utterance: “Alexa, defrost two pounds of chicken.” In this example, the preset is chicken, while the cooking mode will be set to DEFROST. Alexa supports food quantity, count, or weight when using the preset cooking functionality.</p> <h2>Best Practices for Authoring the Preset Catalog</h2> <p>For most customers, it can be tedious to say the full name of the item they want to cook in the microwave. For instance, consider having to say “Alexa, microwave PRIMO Frozen Sandwiches Four Meat and Four Cheese Pizza” or “Alexa, cook ALWAYS-FRESH Frozen Sandwiches Pepperoni and Sausage Pizza.” Those two items have the same cooking instructions, and customers end up omitting the brand name. This omission might cause the preset name to fail to match.</p> <p>To overcome this problem, avoid repeating the same words in more than one item, since repeated words make detection more difficult. Understanding that only the Preset Name and Cooking Mode are required, it is recommended to group similar items by the mode of cooking and not the item name. 
For example, the following two items share the same cooking mode and time settings, as well as the general item name, but are from different brands:</p> <p style="text-align:center"><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/primo_image_2.PNG._CB462828111_.png" style="display:block; height:288px; margin-left:auto; margin-right:auto; width:600px" /></p> <p>In this scenario, group these two items into one preset record: HERB ROASTED CHICKEN. Users can say, “Alexa, cook Herb Roasted Chicken” or “Alexa, microwave Herb Roasted Chicken.” In this case, there is a preset on this microwave for Herb Roasted Chicken, and Alexa sends a Cook By Preset directive with details about the food.</p> <h2>Conclusion</h2> <p>Developers should understand the challenges of voice recognition when undertaking Preset Cooking and should ensure that Preset Catalog Item names reflect the most common way people identify or describe the cooked item. This is not necessarily the actual name on the box label. You should research how your customers name or refer to the supported food items that are to be used by your endpoint. 
Additionally, focus on how an item is cooked using your device and not the preset name.</p> <h2>Additional Resources</h2> <ul> <li><a href="https://developer.amazon.com/docs/device-apis/alexa-cooking.html">Alexa.Cooking Interface</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/a143dc87-5070-4158-bd0d-5777faa3a46c/introducing-cooking-capabilities-in-the-alexa-smart-home-skill-api">Introducing Cooking Capabilities in the Alexa Smart Home Skill API</a></li> </ul> /blogs/alexa/post/2d8c2128-eec9-44cc-9274-444940eb0a4d/using-adversarial-training-to-recognize-speakers-emotions Using Adversarial Training to Recognize Speakers’ Emotions Larry Hardesty 2019-05-21T13:20:57+00:00 2019-05-21T14:22:53+00:00 <p>The combination of an autoencoder, which is trained to output the same data it takes as input, and adversarial training, which pits two neural networks against each other, confers modest performance gains but opens the door to extensive training with unannotated data.&nbsp;</p> <p>A person’s tone of voice can tell you a lot about how they’re feeling. Not surprisingly, emotion recognition is an increasingly popular conversational-AI research topic.&nbsp;</p> <p>Emotion recognition has a wide range of applications: it can aid in health monitoring; it can make conversational-AI systems more engaging; and it can provide implicit customer feedback that could help voice agents like Alexa learn from their mistakes.</p> <p>Typically, emotion classification systems are neural networks trained in a supervised fashion: training data is labeled according to the speaker’s emotional state, and the network learns to predict the labels from the data. 
At this year’s International Conference on Acoustics, Speech, and Signal Processing, my colleagues and I <a href="https://ieeexplore.ieee.org/document/8682823" target="_blank">presented</a> an alternative approach, in which we used a publicly available data set to train a neural network known as an adversarial autoencoder.</p> <p>An adversarial autoencoder is an encoder-decoder neural network: one component of the network, the encoder, learns to produce a compact representation of input speech; the decoder reconstructs the input from the compact representation. The adversarial learning forces the encoder’s representations to conform to a desired probability distribution.</p> <p>The compact representation — or “latent” representation — encodes all properties of the training example. In our model, we explicitly dedicate part of the latent representation to the speaker’s emotional state and assume that the remaining part captures all other input characteristics.&nbsp;</p> <p>Our latent emotion representation consists of three network nodes, one for each of three emotional measures: <em>valence</em>, or whether the speaker’s emotion is positive or negative; <em>activation</em>, or whether the speaker is alert and engaged or passive; and <em>dominance</em>, or whether the speaker feels in control of the situation. The remaining part of the latent representation is much larger, 100 nodes.</p> <p style="text-align:center"><img alt="Adversarial_autoencoder.jpg" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/Adversarial_autoencoder.jpg._CB462489265_.jpg?t=true" style="display:block; height:309px; margin-left:auto; margin-right:auto; width:600px" />&nbsp;<br /> <em><sup>The architecture of our adversarial autoencoder. The latent representation has two components (emotion classes and style), whose outputs feed into two adversarial discriminators.</sup></em></p> <p>We conduct training in three phases. 
In the first phase, we train the encoder and decoder using data without labels. In the second phase, we use adversarial training to tune the encoder.</p> <p>Each latent representation — the three-node representation and the 100-node representation — passes to an adversarial discriminator. The adversarial discriminators are neural networks that attempt to distinguish real data representations, produced by the encoder, from artificial representations generated in accord with particular probability distributions. The encoder, in turn, attempts to fool the adversarial discriminator.&nbsp;</p> <p>In so doing, the encoder learns to produce representations that fit the probability distributions. This ensures that it will not overfit the training data, or rely too heavily on statistical properties of the training data that don’t represent speech data in general.</p> <p>In the third phase, we tune the encoder to ensure that the latent emotion representation predicts the emotional labels of the training data. We repeat all three training phases until we converge on the model with the best performance.&nbsp;</p> <p>For training, we used a public data set containing 10,000 utterances from 10 different speakers, labeled according to valence, activation, and dominance. We compared the performance of the proposed learning method and the fully supervised learning baseline and observed marginal improvements.</p> <p>In tests in which the inputs to our network were sentence-level feature vectors hand-engineered to capture relevant information about a speech signal, our network was 3% more accurate than a conventionally trained network in assessing valence.</p> <p>When the input to the network was a sequence of vectors representing the acoustic characteristics of 20-millisecond <em>frames</em>, or audio snippets, the improvement was 4%. 
This suggests that our approach could be useful for end-to-end spoken-language-understanding systems, which dispense with hand-engineered features and rely entirely on neural networks.</p> <p>Moreover, unlike conventional neural nets, adversarial autoencoders can benefit from training with unlabeled data. In our tests, for purposes of benchmarking, we used the same data sets to train both our network and the baseline network. But it’s likely that using additional unlabeled data in the first and second training phases can improve the network’s performance.</p> <p><em>Viktor Rozgic is a senior applied scientist in the Alexa Speech group.</em></p> <p><a href="https://ieeexplore.ieee.org/document/8682823" target="_blank"><strong>Paper</strong></a>: “Improving Emotion Classification through Variational Inference of Latent Variables”</p> <p><a href="https://developer.amazon.com/alexa/science" target="_blank"><strong>Alexa science</strong></a></p> <p><strong>Acknowledgments</strong>: Srinivas Parthasarathy, Ming Sun, Chao Wang</p> <p><strong>Related</strong>:</p> <ul> <li><a href="https://developer.amazon.com/blogs/alexa/post/9436a0fd-34d1-4121-8479-074e6a8c7c0f/two-new-papers-discuss-how-alexa-recognizes-sounds" target="_blank">Two New Papers Discuss How Alexa Recognizes Sounds</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/4969ab72-8137-4cbd-826b-420f5bc4516a/adversarial-training-produces-synthetic-data-for-machine-learning" target="_blank">Adversarial Training Produces Synthetic Data for Machine Learning</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/e5f41b49-0b4d-4aef-9b90-2bb7f68c0705/to-correct-imbalances-in-training-data-don-t-oversample-cluster" target="_blank">To Correct Imbalances in Training Data, Don’t Oversample: Cluster</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/8a7980f4-340c-4275-9575-509255617b04/how-alexa-is-learning-to-ignore-tv-radio-and-other-media-players" target="_blank">How Alexa Is 
Learning to Ignore TV, Radio, and Other Media Players</a></li> </ul> /blogs/alexa/post/e7d13044-cb20-4e78-8b3e-260a54034287/alexa-fund-invests-in-unruly-studios-and-zoobean-to-boost-learning-with-alexa Alexa Fund invests in Unruly Studios and Zoobean to boost learning with Alexa Brian Adams 2019-05-21T13:00:00+00:00 2019-05-21T15:33:36+00:00 <p>Today, we are excited to announce two more investments in education companies exploring integrations with Alexa – Unruly Studios and Zoobean.</p> <p>Since launching Alexa in 2014, we have seen more and more customers make Alexa part of their daily lives. The number of customers interacting with Alexa on a daily basis more than doubled last year, and we’re encouraged by the ways in which voice is making their lives easier, more productive and more entertaining.&nbsp;</p> <p>Education and learning is a great example of that, and we see lots of innovation in this category. Families, in particular, love interacting with Alexa because she introduces new ways to learn about the world around them – from animals and science to math, spelling and more. And new skills allow them to put a new twist on game night by offering fun, educational games the whole family can enjoy. Learning and education come to life in this communal setting, and we’ve seen a number of developers introduce new skills that allow parents and kids to share in the learning experience. In fact, there are already thousands of education and reference skills in the Alexa Skills Store.</p> <p>The Alexa Fund has helped support this category by investing in several promising edtech startups. 
Last fall, the Alexa Fund invested in <a href="https://bamboolearning.com/">Bamboo Learning</a>, a voice-based software and services company with a mission to provide interactive teaching through voice-first education applications and content. Sphero, another Alexa Fund investment, has continued to see positive momentum for <a href="https://edu.sphero.com/">Sphero Edu</a>, which combines coding and robotics to make STEM education even more engaging for students.</p> <p>Today, we are excited to announce two more investments in education companies exploring integrations with Alexa – Unruly Studios and Zoobean.</p> <p><a href="https://www.unrulysplats.com/">Unruly Studios</a> is an alum of the 2018 Alexa Accelerator, and we are thrilled to be reinvesting in the company as part of its seed round. Unruly is led by Bryanne Leeming, who founded the company with a compelling mission: to get more kids involved with STEM by combining coding with active, physical play through their first product, Unruly Splats. Unruly is exploring ways to connect Splats with Alexa to make the entire experience even more fun and engaging, while giving kids a glimpse into the basics of programming and voice design.</p> <p>Zoobean is the company behind <a href="https://www.beanstack.com/">Beanstack</a>, software that allows schools and libraries to facilitate reading and lets people of all ages track their reading progress. The company was founded by Jordan Lloyd Bookey and Felix Brandon Lloyd, who got their start in 2014 with an appearance on Shark Tank. Mark Cuban invested in Zoobean following that appearance, and has continued to back the company in the time since. 
Jordan and Felix share <a href="https://developer.amazon.com/blogs/alexa/post/0902e3c5-9649-47e5-b705-984666b85125/mark-cuban-voice-ambient-computing-are-the-future-and-why-developers-should-get-in-now">Mark’s optimism</a> about voice technology and its potential to make learning easier and more fun, and they’re exploring ways to integrate Alexa into Beanstack, allowing readers to ask Alexa to track their progress or send reminders about reading time.</p> <p>“One of the reasons I’m so optimistic about voice technology is because it creates this communal experience where multiple people can share in the interaction,” said Mark Cuban. “Every startup founder should be looking at how voice services like Alexa fit into their business model, and it’s great to see companies like Zoobean and Unruly take that to heart. I’m excited to see them evolve their products and use voice to make reading and STEM accessible to more people.”</p> <p>Like us, the founders of Unruly and Zoobean see voice as a way to make learning easier, more fun and more engaging for people of all ages. 
As part of the Alexa Fund portfolio, they’ll continue to explore opportunities to integrate Alexa into their products and services -- we can’t wait to see what they build in the future!</p> /blogs/alexa/post/fc82ccb8-c204-46d9-a4e0-5fc22a84e040/voice-expert-q-a-how-discovery-designs-multimodal-alexa-skills Voice Expert Q&amp;A: How Discovery Designs Multimodal Alexa Skills Jennifer King 2019-05-20T14:00:00+00:00 2019-05-20T14:00:00+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaSkillsKit/1blog.png._CB480627403_.png" style="height:240px; width:954px" /></p> <p>We recently spoke with Tim McElreath, director of technology for mobile and emerging platforms at Discovery, to learn how Discovery is leveraging voice, explore his team’s process for building multimodal skills, and dive deep into their Food Network Alexa skill.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/AlexaSkillsKit/1blog.png._CB480627403_.png" style="height:240px; width:954px" /></p> <p>Customers embrace voice because it’s simple, natural, and conversational. Adding visual elements and touch to deliver multimodal, voice-first experiences can make your Alexa skill even more engaging and easy to use. 
Developers are already building <a href="https://developer.amazon.com/alexa-skills-kit/multimodal">multimodal skills</a> using the <a href="https://developer.amazon.com/blogs/alexa/post/0d2ad283-b7c3-48ba-8313-40f2b5fdc19d/alexa-presentation-language-now-available">Alexa Presentation Language (APL)</a>, creating immersive visuals with information that complements the voice experience.</p> <p>We had the opportunity to speak with one voice leader—<a href="https://twitter.com/timmcelreath?lang=en" target="_blank">Tim McElreath</a>, director of technology for mobile and emerging platforms at Discovery, Inc.—to learn more about how Discovery is leveraging voice, explore his team’s process for building multimodal skills, and dive deep into their Food Network Alexa skill.</p> <p><a href="https://twitter.com/theonlyakersh?lang=en" target="_blank">Senior Solutions Architect Akersh Srivastava </a>and I sat down with Tim during <a href="https://developer.amazon.com/alexa/alexa-live">Alexa Live</a>, a free online conference for voice developers. Below is a recap of our discussion, which has been edited for brevity and clarity. You can also watch the full 45-minute interview below.</p> <p style="text-align:center"><a href="https://youtu.be/8tkNna9mqC8"><iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/8tkNna9mqC8" width="640"></iframe></a></p> <p><strong>Akersh Srivastava: </strong>Tim, tell us about what you do. If you had to pitch yourself to the community, how would you do it?</p> <p><strong>Tim McElreath:</strong> I come from both a design and an engineering background. I'm a graduate of an art and design school but I also grew up around computers. The way I see myself is trying to bridge that gap between product design and engineering, and developing a user experience with a focus on how users really want to interact with digital interfaces. Now, at Discovery, I work very closely with the Food Network and HGTV. 
We also have brands like Motor Trend and Animal Planet, brands that allow people to build their lives around the things that are important to them, like how they eat, how they create their home, and their pastimes. There's a lot of content and experiences to play with.</p> <p><strong>Akersh: </strong>How did you discover Alexa and the Alexa community?</p> <p><strong>Tim:</strong> I started working with Alexa back in 2016, so fairly early on. We have a great product team at Discovery and they recognized, because of the rate of adoption of Amazon Echo devices, that voice was going to be much more than just a novelty. This was a new way that customers could engage with our content, and we didn’t want to wait around to see how the technology would evolve and jump in later. We wanted to start exploring how we could use voice interfaces and conversational interfaces to deliver our content, our experiences, our personalities, and our information in a more direct way to our customers. We started building a Food Network skill back in 2016 and we've been expanding on that ever since.</p> <p><strong>Cami Williams: </strong>Here at Alexa, we’re spearheading a voice-first initiative, but many skills also include some sort of component that would require you to think about multimodal experiences. I think it depends on the brand, the brand’s content, and how their customers typically engage. It's important to not only consider your voice-first approach but also previous generations of technology, like web and mobile, and recognize their influence within the voice community. With that in mind, what makes you most excited about voice?</p> <p><strong>Tim:</strong> We're at the beginning of a shift in the way humans interact with digital interfaces. We went from the early days of PC into the web into mobile 10 years ago. When you see that shift, we have to re-teach ourselves how to interact with digital interfaces. The expectation is that digital interfaces are going to understand us. 
But as engineers and designers, we're going to do the heavy lifting so that users can talk in their most natural language. For me, it's really an entirely new way of connecting with customers and users, and we're still figuring it out. That's really the exciting part. We don't know exactly what those expectations are going to be in the future, so being involved in it now feels very exploratory and very innovative.</p> <p><strong>Cami: </strong>Interacting with touch- and screen-based devices has become second nature. With the Alexa Presentation Language, we're excited to see how developers marry touch, screens, and voice, bringing conversation to this second-nature touch-and-screen experience. When you think about developing multimodal skills for Discovery, how can you marry the voice experience with the visual experience? And what's your perspective on multiple modalities for voice interfaces?</p> <p><strong>Tim:</strong> I think it's a fascinating challenge because one of the shifts in application design is that you're creating a single application that is meant to be delivered on anything, from an Echo Dot, to a small speaker, to a smart screen on your counter, to a connected TV, to auto, to headphones, and the list goes on. It's all the same experience but you have to tailor that to not only the device capabilities and the device modality, but the way the users are expected to be using that device in their current situation.</p> <p>When you're thinking about delivering a response through Alexa to a customer on a particular device, how do you change that response to make it fit their situation if it's on their night table or if they're standing six feet away from it on a kitchen counter? And how much attention are they going to be paying to that screen? For example, if you're delivering a response to a connected TV, you can expect that they're going to be actually paying attention to that screen because they're in &quot;lean back&quot; mode. 
However, if it's a smart screen on a kitchen counter, they may not be looking at that screen at all. You have to make sure that you're giving the information through your speech response, just in case they're not fully engaged with that screen in that particular context. If there's no screen at all, you have to be able to give them the complete information of what they're looking for via voice alone. You have to pay attention to what the user is asking for and what the device is capable of presenting. It's about adapting your interface to the user to make it as easy as possible for the user to get what they need.</p> <p><strong>Cami:</strong> What’s the skill-building process like for you and your team?</p> <p><strong>Tim:</strong> We start by approaching every interface as a conversational interface. Meaning, if we’re building a system, we think of every interaction as part of an ongoing conversation with context and history. We start by designing every interaction from that point of view, rather than starting with the visual UI or system design. We actually get people into a room and we role play. One person will be the application and knows a certain set of information and can communicate it. How would you talk to that application, that person, in a way that most naturally gives you that information using the minimum visual feedback that's necessary to give you what you need? With the minimal text input and the minimal haptic input, what is the easiest way to use people's natural language to fulfill some utility, entertainment, or need?</p> <p>Our engineers participate in the process as well. They're closest to how the technology can actually work and how we can design it from a technical point of view. They have more insight on some of the features that could assist with some of those conversational patterns. 
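The device-adaptation approach Tim describes — send visuals only when a screen is present, and keep the speech response self-sufficient — starts with a capability check in the skill's back end. Here is a rough sketch based on the standard Alexa request envelope (the function name is ours, not Discovery's code):

```javascript
// Returns true when the requesting device can render APL visuals.
// Sketch only: checks the supportedInterfaces map in the standard
// Alexa request envelope (context.System.device.supportedInterfaces).
function supportsAPL(requestEnvelope) {
  const device =
    (requestEnvelope.context &&
      requestEnvelope.context.System &&
      requestEnvelope.context.System.device) ||
    {};
  return 'Alexa.Presentation.APL' in (device.supportedInterfaces || {});
}
```

A handler can then attach an APL RenderDocument directive only when this returns true, while the speech output carries the complete answer either way — so the customer who never glances at the kitchen-counter screen still gets everything they asked for.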
It's a combination of engineering, interaction design, the language being used in order to fulfill requests, and how we break those requests up into intents and slot values.</p> <p>During the second half of the interview (starting at <a href="https://www.youtube.com/watch?v=8tkNna9mqC8&amp;feature=youtu.be&amp;t=1402" target="_blank">23:22 in the video</a>), we asked Tim to walk us through how his team designed the voice, visual, and touch experience for the Food Network skill. We loved having a chance to chat with Tim and enjoyed learning how a large brand is getting in early with voice to further engage and delight customers.</p> <p>If you’re excited to start building multimodal voice experiences, check out our resources below.</p> <h2>Related Content</h2> <ul> <li><a href="https://developer.amazon.com/alexa-skills-kit/multimodal#See%20What%20Others%20Have%20Built%20with%20APL">See What Others Have Built with APL</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/12d662bb-1b98-45d0-a462-e8b309503f13/hear-it-from-a-skill-builder-going-from-voice-only-to-voice-first-with-multimodal-alexa-skills">Hear It from a Skill Builder: Going from Voice-Only to Voice-First with Multimodal Alexa Skills</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/5a5197fc-90ea-4ef3-9c40-18da72d29886/how-to-design-with-apl-components-for-new-voice-first-experiences-in-your-alexa-skill">How to Design with the Alexa Presentation Language Components to Create New Voice-First Experiences in Your Alexa Skill</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/c99e2446-0f47-42e6-b59d-e9e901d688b4/10-tips-for-designing-alexa-skills-with-visual-responses">10 Tips for Designing Alexa Skills with Visual Responses</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/50f20c85-f23c-4e40-bbfe-1cfe27eb95e5/4-tips-for-designing-voice-first-alexa-skills-for-different-alexa-enabled-devices">4 Tips for Designing Voice-First Alexa Skills for 
Different Alexa-Enabled Devices</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/5959e319-1656-40cb-b689-b35c988d6b91/how-to-design-visual-components-for-voice-first-alexa-skills">How to Design Visual Components for Voice-First Alexa Skills</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/7c9b6bea-0d82-4482-96ba-d1935c2617b9/how-to-quickly-update-your-existing-multimodal-alexa-skills-with-the-alexa-presentation-language">How to Quickly Update Your Existing Multimodal Alexa Skills with the Alexa Presentation Language</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/5e4f3bb2-6ada-4121-bf97-347eb78f92fd/new-alexa-skill-sample-learn-multimodal-skill-design-with-space-explorer">New Alexa Skill Sample: Learn Multimodal Skill Design with Space Explorer</a></li> <li><a href="https://developer.amazon.com/blogs/alexa/post/551d2f3c-b182-408b-8ccc-431fa6620f38/new-apl-sample-skill-sauce-boss">New Alexa Skill Sample: Learn Multimodal Skill Design with Sauce Boss</a></li> </ul> /blogs/alexa/post/7e3376cf-97d7-41d6-86a6-afcdf1ca1379/new-alexa-skills-training-course-build-your-first-alexa-skill-with-cake-walk New Alexa Skills Training Course: Build an Engaging Alexa Skill with Cake Walk Jennifer King 2019-05-17T16:38:24+00:00 2019-05-17T16:38:24+00:00 <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_design-guide_954x240.png._CB464487126_.png" /></p> <p>We’re excited to introduce our new self-paced skill-building course called Cake Walk: Build an Engaging Alexa Skill. This free course offers step-by-step guidance on how to build a high-quality Alexa skill from start to finish. 
Learn about the course and dive in.</p> <p><img alt="" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/blog_design-guide_954x240.png._CB464487126_.png" /></p> <p>We’re excited to introduce our new self-paced skill-building course called <a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk">Cake Walk: Build an Engaging Alexa Skill</a>. This free course offers step-by-step guidance on how to build a high-quality Alexa skill from start to finish. New skill builders will learn how to build their first skill in 5 minutes. Experienced developers will learn how to add advanced features like memory to deliver a more personalized and conversational voice experience. When you complete the course, you'll have the foundational knowledge of voice design, skill programming, and development tools to help you build high-quality Alexa skills your customers will enjoy.</p> <h2>Learn How to Design and Implement a Voice Experience</h2> <p>While anyone can quickly build an Alexa skill, there’s a lot to consider to build an engaging voice experience. Having a compelling voice idea is important, but so is implementation. A great skill idea implemented poorly will make it challenging for your skill to gain traction and retain customers. Before you start turning your voice idea into an Alexa skill, we recommend taking the time to learn how to design a voice experience, how to build a voice user interface, and how to leverage skill-building tools. We designed the Cake Walk course to teach you these concepts so you can design and implement an engaging skill.&nbsp;</p> <p>Cake Walk is a simple sample skill that enables Alexa to count down the days until your birthday. Cake Walk will also deliver a happy birthday message on your special day. 
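At its core, that countdown is a small piece of date arithmetic. Here is a standalone sketch in Node.js (the language the course's SDK targets); the function name is ours, not the course's actual code:

```javascript
// Count whole days until the user's next birthday.
// Standalone sketch of the countdown idea behind Cake Walk,
// not the course's actual implementation.
function daysUntilBirthday(month, day, today = new Date()) {
  // Normalize "today" to midnight so partial days don't skew the count.
  const midnight = new Date(today.getFullYear(), today.getMonth(), today.getDate());
  let next = new Date(today.getFullYear(), month - 1, day);
  if (next < midnight) {
    // The birthday already passed this year; count toward next year's.
    next = new Date(today.getFullYear() + 1, month - 1, day);
  }
  // Math.round absorbs any daylight-saving offset in the difference.
  return Math.round((next - midnight) / (24 * 60 * 60 * 1000));
}

// e.g. on May 17, 2019, a June 15 birthday is 29 days away.
daysUntilBirthday(6, 15, new Date(2019, 4, 17)); // → 29
```

The course's "Add memory" module then shows how to keep the month and day as persistent attributes, so returning users don't have to repeat them on every launch.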
Throughout the course, you’ll learn how to build your own version of Cake Walk, from the basic voice design and implementation to adding advanced features like persistence and memory.</p> <h2>Course Components</h2> <p>The course offers an introduction to voice design concepts and four skill-programming modules:</p> <ul> <li><a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk-3" target="_blank">Create a skill in 5 minutes</a></li> <li><a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk-4" target="_blank">Collect slots turn by turn</a></li> <li><a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk-5" target="_blank">Add memory to Cake Walk</a></li> <li><a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk-6" target="_blank">Use the Settings API to get the time zone</a></li> </ul> <p>If you’re new to skill building, we recommend starting from the <a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk-1" target="_blank">introduction</a>. If you already know the basics and want to add memory to your skill, you can skip ahead to <a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk-5" target="_blank">the section on using persistent attributes.</a> Each module includes the code you need to get started and step-by-step instructions to apply the code.</p> <h2>What You'll Learn</h2> <p>By completing the course, you’ll understand the components of voice design, skill programming, and tooling to help you build engaging skills. You’ll learn how to use the <a href="https://developer.amazon.com/alexa/console/ask">Alexa Developer Console</a> to create and test your skill. You’ll also learn how to use <a href="https://developer.amazon.com/docs/hosted-skills/build-a-skill-end-to-end-using-an-alexa-hosted-skill.html">Alexa-hosted skills</a> to host your skill’s back end. 
The course introduces the core concepts of voice design and how to program your back end using the <a href="https://ask-sdk-for-nodejs.readthedocs.io/en/latest/">Alexa Skills Kit Software Development Kit for Node.js</a>.</p> <p>You’ll also learn how to leverage important Alexa Skills Kit (ASK) features like:</p> <ul> <li><a href="https://developer.amazon.com/docs/custom-skills/create-intents-utterances-and-slots.html">Intents, utterances, and slots</a> to build a voice user interface</li> <li><a href="https://developer.amazon.com/docs/custom-skills/delegate-dialog-to-alexa.html#automatically-delegate-simple-dialogs-to-alexa">Auto delegation</a> to have the skill automatically prompt for missing information</li> <li><a href="https://ask-sdk-for-nodejs.readthedocs.io/en/latest/" target="_blank">ASK Software Development Kit for Node.js</a> to handle requests sent to your skill</li> <li><a href="https://ask-sdk-for-nodejs.readthedocs.io/en/latest/Managing-Attributes.html" target="_blank">Persistent attributes</a> with <a href="https://aws.amazon.com/s3/" target="_blank">Amazon S3</a> to remember information</li> <li><a href="https://developer.amazon.com/docs/smapi/alexa-settings-api-reference.html">Alexa Settings API</a> to look up the time zone</li> </ul> <h2>More Training Opportunities to Enhance Your Alexa Skills</h2> <p>Once you’ve completed this course, we recommend you continue your learning by checking out these additional training materials:</p> <ul> <li><a href="http://alexa.design/cdw" target="_blank">Designing for Conversation Course</a>: Learn how to design more dynamic and conversational experiences.</li> <li><a href="https://developer.amazon.com/docs/alexa-design/get-started.html">Alexa Design Guide</a>: Learn the principles of situational voice design so that you can create voice-first skills that are natural and user-centric.</li> <li><a href="https://www.twitch.tv/videos/409503308?filter=archives&amp;sort=time" target="_blank">How to Shift from 
Screen-First to Voice-First Design</a>: Learn about the four design patterns that make voice-first experiences engaging.</li> </ul> <h2>Get Started with Cake Walk</h2> <p>The self-paced course is free and available for anyone ready to build Alexa skills. <a href="https://developer.amazon.com/alexa-skills-kit/courses/cake-walk">Click here</a> to get started. And please tell us what you think! Reach out to me on Twitter at <a href="http://twitter.com/SleepyDeveloper" target="_blank">@SleepyDeveloper</a> to share your comments and feedback.</p> /blogs/alexa/post/80c551eb-5303-4ade-9942-e83d55d1904f/best-practices-to-create-a-delightful-voice-commerce-experience-for-your-customers Best Practices to Create a Delightful Voice Commerce Experience for Your Customers Kristin Fritsche 2019-05-17T08:30:00+00:00 2019-05-17T12:10:51+00:00 <p><img alt="b895dd2c0d1ae02f997ffbec94e9f036cf943a3f171b8a7911e538623f37de8b_c95fe012-f03e-4fd7-8add-70cf1b8958d4.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/b895dd2c0d1ae02f997ffbec94e9f036cf943a3f171b8a7911e538623f37de8b_c95fe012-f03e-4fd7-8add-70cf1b8958d4.png._CB464480825_.png?t=true" /></p> <p>Today, developers and businesses can leverage Alexa to reach customers across over 100 million Alexa-enabled devices, engage with customers, and sell products and services using <a href="https://developer.amazon.com/alexa-skills-kit/make-money/in-skill-purchasing">in-skill purchasing (ISP)</a> and <a href="https://developer.amazon.com/alexa-skills-kit/make-money/amazon-pay">Amazon Pay for Alexa Skills</a>.</p> <p><img alt="b895dd2c0d1ae02f997ffbec94e9f036cf943a3f171b8a7911e538623f37de8b_c95fe012-f03e-4fd7-8add-70cf1b8958d4.png" src="https://m.media-amazon.com/images/G/01/DeveloperBlogs/AlexaBlogs/default/b895dd2c0d1ae02f997ffbec94e9f036cf943a3f171b8a7911e538623f37de8b_c95fe012-f03e-4fd7-8add-70cf1b8958d4.png._CB464480825_.png?t=true" /></p> <p>Voice is the next frontier for developers and merchants to reach 
new customers, extend their brand presence, and generate revenue. Today, developers and businesses can leverage Alexa to reach customers across over 100 million Alexa-enabled devices, engage with customers, and sell products and services using <a href="https://developer.amazon.com/alexa-skills-kit/make-money/in-skill-purchasing">in-skill purchasing (ISP)</a> and <a href="https://developer.amazon.com/alexa-skills-kit/make-money/amazon-pay">Amazon Pay for Alexa Skills</a>.</p> <p>If you are offering goods or services through your website or your mobile app, you might think about using the same approach for Alexa. But building and designing for voice technology is different than for screen-based devices. While selling your product or service might be your ultimate goal, you first have to build a valuable and convenient voice experience for your customers.</p> <p>If you’re ready to learn how you can leverage voice to build your business, follow these best practices for creating a delightful voice commerce experience for your customers.</p> <h2>Think Voice-First</h2> <p>As you are designing your voice experience, think about how voice can help customers solve a problem or simplify a task. Which components of the customer journey through your current digital channels are cumbersome or tedious, and how can voice make the experience better? For example, how many clicks or taps does it take to check an order status on desktop or mobile, respectively? How can you use Alexa to make that task more convenient—faster, easier, and more natural—for customers? Here are some other related questions to consider:</p> <ul> <li>How can you enhance your current offerings via voice? What is the value proposition of the voice-first purchasing flow? Example: Make your FAQs accessible via voice and help customers get their questions answered in the most natural way.</li> <li>What are top reasons that customers reach out to your customer support? 
Example: If one of the most frequent questions for your support team is “When will my order arrive?” consider supporting this utterance in your skill.</li> <li>Are there habitual tasks you can simplify, like subscription renewals? Example: Habitual tasks more often than not come with habitual products. Make it easy for customers to reorder them via voice. Use your order history to identify the right products automatically and simplify the checkout experience.</li> <li>What about special deals for Alexa on a daily, weekly, or monthly basis? Example: A “Deal of the Day” is a nice way to put the most interesting products into focus and curate the selection for your customers.</li> </ul> <h2>Consider Supporting Multimodal Experiences</h2> <p>With the <a href="https://developer.amazon.com/docs/alexa-presentation-language/apl-overview.html" target="_blank">Alexa Presentation Language (APL)</a>, you can build multimodal voice experiences that are compatible with Alexa-enabled devices with screens. Customers embrace voice because it’s simple, natural, and conversational. When you build a <a href="https://developer.amazon.com/alexa-skills-kit/multimodal">multimodal</a> experience, you combine voice, touch, text, images, graphics, audio, and video in a single user interface. The result is a voice-first experience complemented by visuals. You can provide customers with complementary information that’s easily glanceable from across the room. You can build immersive experiences that customers can sit back and watch, or lean into to get things done. And, you can optimize skills to deliver the best experience on whatever device a customer is using. Example: If you are selling apparel, a multimodal experience will help customers see the product before they buy it.</p> <h2>Keep It Simple</h2> <p>Simplicity and convenience are critical for experiences on Alexa. Don’t try to do everything with your skill; instead, create a seamless customer interaction. 
Your voice user interface should be simple and easy to interact with, and so should the items you are selling via voice. Example: Product searches can return a high number of results. Instead of reading a long list of results to the customer, provide only a smaller selection (e.g., three to five) at once. This makes it easier for the customer to follow. A multimodal experience can complement the voice experience by providing a visual list for the search results. Read more about building voice-first experiences for Alexa-enabled devices with screens below.</p> <h2>Limit Your Selection</h2> <p>While your first intention might be to offer as wide a selection as possible within your skill, constrain and curate what you offer at first. For example, only offer customers their most frequent purchases or your business’s most popular products. This will help reduce the paradox of choice for customers. You can widen your selection over time by learning from customer feedback and by leveraging <a href="https://developer.amazon.com/docs/devconsole/measure-skill-usage.html" target="_blank">skill usage analytics</a>. Example: Offer one to two products at first and refer to the skill usage analytics dashboard to see how customers are interacting with your upsell. Use this data to determine which products to add and when in the customer journey to upsell them.</p> <h2>Think Multichannel</h2> <p>When designing your voice experience, the customer experience doesn’t need to start and end with voice. Enable your customers to start checkout on your website or mobile app and complete the purchase via Alexa, or vice versa. To help make this vision a reality, we have created <a href="https://developer.amazon.com/de/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html#buyer_id" target="_blank">Amazon Pay Buyer ID</a>, which identifies your customers across channels so you can personalize the experience for them. 
With the help of the Amazon Pay <a href="https://pay.amazon.com/us/developer/documentation/automatic/201752090" target="_blank">Automatic Payments API</a>, customers can pre-authorize payments for future purchases. This enables you to charge a customer's Amazon account on a regular basis for subscriptions and usage-based billing without requiring the customer to start a new voice checkout each time. Example: Use the knowledge you’ve gained about your customers over time. Any time customers interact with you via a new channel, delight them with a pleasant, personalized experience. For example, let’s say you have a candy subscription service that sends a care package to subscribers every month. By leveraging the Automatic Payments API in combination with Amazon Pay Buyer ID, you can create an Alexa skill to allow customers to manage their care packages (e.g. change the size, order an extra two for a month) and bill accordingly.</p> <h2>Resources</h2> <ul> <li><a href="http://developer.amazon.com/alexa-skills-kit/make-money/amazon-pay?&amp;sc_category=Owned&amp;sc_channel=WB&amp;sc_campaign=DELaunch&amp;sc_publisher=ASK&amp;sc_content=Content&amp;sc_funnel=Publish&amp;sc_country=DE&amp;sc_medium=Owned_WB_DELaunch_ASK_Content_Publish_DE_DEDevs&amp;sc_segment=DEDevs">Amazon Pay for Alexa Skills</a></li> <li><a href="https://developer.amazon.com/docs/amazon-pay/integrate-skill-with-amazon-pay-v2.html" target="_blank">Technical Documentation: Integrate a Skill with Amazon Pay</a></li> <li><a href="https://developer.amazon.com/de/docs/amazon-pay/alexa-amazon-pay-faq.html" target="_blank">Amazon Pay FAQs</a></li> <li><a href="https://pay.amazon.com/us/developer/documentation/apireference/201751630" target="_blank">Amazon Pay API Reference Guide</a></li> <li><a href="https://github.com/alexa/skill-sample-nodejs-demo-store-amazon-pay" target="_blank">Amazon Pay Sample Skill</a></li> </ul>