Using the ASK CLI v2.0 to Continuously Deploy Your Skill

Peter Moon Jun 19, 2020

The Alexa Skills Kit Command Line Interface (ASK CLI), which has recently been updated to v2.0, allows you to easily manage your skill and its related resources from the command line. If you haven’t used the CLI yet, we encourage you to follow the quick start documentation to install the CLI and try it out before reading this blog post. Many customers have told us about their experience using the CLI interactively on their development machines to streamline their workflows, and we continue to work hard to improve that experience. On the other hand, customers have also asked us about using the CLI in a more automated manner, particularly in the context of continuous integration and continuous deployment (CI/CD). This blog post will show how you can set up a simple CI/CD pipeline for your skill using the ASK CLI, AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild. While the details of configuring a pipeline will vary depending on which workflow automation service you’re using, understanding what’s explained here should enable you to replicate a similar setup in the tool of your choice and tweak it to fit your needs. So let’s get started!

Step 1: Create Your Source Control Repository

First, we will create a CodeCommit repository. You can do this using the AWS Console, which will walk you through creating an empty repository, setting up credentials, and cloning the repository to your local file system.
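If you prefer to script this step rather than click through the console, the same repository can be created with the AWS SDK. A minimal sketch in Python (the repository name and the use of boto3 here are my own choices, not part of the console flow):

```python
def create_skill_repo(codecommit, name):
    """Create a CodeCommit repository and return its HTTPS clone URL.

    `codecommit` is an AWS CodeCommit client, e.g. boto3.client("codecommit").
    """
    response = codecommit.create_repository(repositoryName=name)
    return response["repositoryMetadata"]["cloneUrlHttp"]

# Usage (requires AWS credentials with CodeCommit permissions):
#   import boto3
#   url = create_skill_repo(boto3.client("codecommit"), "skill-pipeline-demo")
#   # then clone it: git clone <url>
```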

Step 2: Put Your Skill Project into the Repository

Now, you can copy your existing skill project files into this directory. In this blog post we will simply copy a hello-world skill created by the ASK CLI’s `ask new` command.

$ ask new
Please follow the wizard to start your Alexa skill project ->
? Choose the programming language you will use to code your skill:  NodeJS
? Choose a method to host your skill's backend resources:  AWS with CloudFormation
? Choose a template to start with:  Hello world
? Please type in your skill name:  skill-sample-nodejs-hello-world
? Please type in your folder name for the skill project (alphanumeric):  test-pipe-template
Project for skill "skill-sample-nodejs-hello-world" is successfully created at ./test-pipe-template

Project initialized with deploy delegate "@ask-cli/cfn-deployer" successfully.

Now that the skill is created, we will copy the contents over to the directory cloned in the previous step.

$ cd test-pipe-template
$ cp -R * [cloned directory location]/

Let’s verify the copy actually happened.

$ cd [cloned directory location]
$ ls -al
.  ..  .git  ask-resources.json  infrastructure  lambda  skill-package

It looks like the hello-world skill got there okay. We need to make one small change to tell the CLI to use credentials that will be later configured as environment variables in the pipeline. Open up the ask-resources.json file, and replace the profile name “default” with “__ENVIRONMENT_ASK_PROFILE__” as seen below.

{
  "askcliResourcesVersion": "2020-03-31",
  "profiles": {
    "__ENVIRONMENT_ASK_PROFILE__": {
      ...

Now let’s push this as our initial commit to CodeCommit.

$ git add *
$ git commit -m 'initial commit'
$ git push

Step 3: Prepare Your Credentials and Secure Them in AWS Systems Manager Parameter Store

The credentials needed by the ASK CLI are highly sensitive data and can provide broad access to your Alexa skill developer account and AWS account. Therefore, we will make sure these are stored and made available to our pipeline securely, using the SecureString feature of AWS Systems Manager Parameter Store.

The first two credentials you need are the AWS Access Key ID and AWS Secret Access Key of an IAM user. Since our hello-world skill’s infrastructure will be deployed using AWS CloudFormation, this IAM user requires an IAM policy that allows those CloudFormation operations. The exact permissions needed depend on the content of the CloudFormation template being deployed. We will not go into the details here, but you can learn more about access control with CloudFormation in the AWS documentation.

Once you have your IAM credentials, it’s time to collect the other two credentials you’ll need: the OAuth 2.0 refresh token from Login with Amazon and the vendor ID of your Alexa skill developer account. If you are already using the ASK CLI on your development machine, you can find both in the ~/.ask/cli_config file: note the refresh_token and vendor_id fields under the profile that represents the account that will contain your skill. If you haven’t configured the CLI yet, run `ask configure` first; the credentials will be written to the cli_config file once configuration completes.
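If you’d like to extract those two values programmatically, here is a small sketch. The key paths below (profiles.<name>.token.refresh_token and profiles.<name>.vendor_id) reflect the cli_config layout I’ve seen with CLI v2; verify them against your own file:

```python
import json
from pathlib import Path

def read_ask_credentials(config_path, profile="default"):
    """Return (refresh_token, vendor_id) for an ASK CLI profile.

    Assumes the v2 cli_config layout: profiles.<name>.token.refresh_token
    and profiles.<name>.vendor_id -- check these paths in your own file.
    """
    config = json.loads(Path(config_path).read_text())
    prof = config["profiles"][profile]
    return prof["token"]["refresh_token"], prof["vendor_id"]

# Usage:
#   token, vendor = read_ask_credentials(Path.home() / ".ask" / "cli_config")
```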

Now navigate to the Parameter Store page in the AWS Console and create the four parameters, making sure each is of type “SecureString”. Note the parameter names down for later use when configuring access permissions for your pipeline. For this blog post, we’ll use the names “/skill-test/accesskey”, “/skill-test/secretkey”, “/skill-test/refreshtoken”, and “/skill-test/vendorid”.
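If you’d rather script the parameter creation than use the console, here is a sketch using the SSM PutParameter API (the helper name and the injected client are my own conventions):

```python
def store_ask_parameters(ssm, secrets, prefix="/skill-test"):
    """Store each credential as an encrypted SecureString parameter.

    `ssm` is an AWS SSM client, e.g. boto3.client("ssm"); `secrets` maps
    short names (e.g. "accesskey") to the secret values collected earlier.
    """
    for name, value in secrets.items():
        ssm.put_parameter(
            Name=f"{prefix}/{name}",
            Value=value,
            Type="SecureString",  # encrypted at rest
            Overwrite=True,
        )

# Usage (requires AWS credentials with SSM permissions):
#   import boto3
#   store_ask_parameters(boto3.client("ssm"), {
#       "accesskey": "...", "secretkey": "...",
#       "refreshtoken": "...", "vendorid": "...",
#   })
```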

Step 4: Set Up Your Pipeline with AWS CodePipeline

With your credentials securely stored, let’s move on to creating your pipeline. Navigate to the AWS CodePipeline console and follow the prompts to create your pipeline, set the CodeCommit repository you created earlier as the source stage, and add a build stage using CodeBuild, which is where the ASK CLI will run to build and deploy your skill. In the “Create Build Project” dialog, pick the desired operating system (both Amazon Linux and Ubuntu come with a pre-installed Node.js runtime and NPM, which are required to install the ASK CLI), create a “New service role”, and choose “Use a buildspec file”. We’ll modify this service role and add a buildspec.yml file to our source in the following steps. Skip the deploy stage since ASK CLI will be handling deployments directly from the build stage.

Once finished, this simple two-stage pipeline will look like this:

[Pipeline diagram: a Source stage (CodeCommit) feeding a Build stage (CodeBuild)]

The first run of the pipeline will actually fail, since the build stage has no instructions (buildspec.yml) to follow at this point. But don’t worry—we will be fixing that soon, after we...

Step 5: Give the CodeBuild Environment Permission to Access Your Credentials

As promised, we now come back to the new service role that we created when adding the build stage to your pipeline. We have to modify it to allow the build stage, and the CLI running inside it, to access the credentials we previously put in the parameter store. Navigate to the AWS IAM console and find the role associated with your CodeBuild environment. The role name should contain the name of your pipeline (mine looked like this: arn:aws:iam::123456789012:role/service-role/codebuild-test-pipeline-build-service-role). If it’s hard to find the role based on the name, you can look at your build environment settings to get the role’s name and search for it in IAM. Now open the policy document attached to the role, and add the following permission item with the correct parameter name path you noted in Step 3. Below I’m granting access to all parameters whose names follow the “/skill-test/*” pattern.

{
    "Effect": "Allow",
    "Action": [
        "ssm:GetParameters"
    ],
    "Resource": [
        "arn:aws:ssm:[your_region]:[your_account_number]:parameter/skill-test/*"
    ]
}
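If you manage the role in code instead of through the console, the same statement can be generated and attached programmatically; a sketch (the role and policy names in the usage comment are made-up examples):

```python
def ssm_read_policy(region, account_id, path_prefix="/skill-test"):
    """Build the inline policy that lets CodeBuild read the parameters."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["ssm:GetParameters"],
                "Resource": [
                    f"arn:aws:ssm:{region}:{account_id}:parameter{path_prefix}/*"
                ],
            }
        ],
    }

# Attaching it to the CodeBuild service role:
#   import json, boto3
#   boto3.client("iam").put_role_policy(
#       RoleName="codebuild-test-pipeline-build-service-role",
#       PolicyName="skill-test-parameter-access",
#       PolicyDocument=json.dumps(ssm_read_policy("us-east-1", "123456789012")),
#   )
```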

Step 6: Add a buildspec.yml File to Your Project

We will now go back to your locally cloned CodeCommit repository and add a buildspec.yml file, so that we can kick off the first (successful) run of the pipeline. At the root of your repository, create a file named “buildspec.yml” and add the following lines (modify the parameter names if you used something other than /skill-test/*):

version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /skill-test/accesskey
    AWS_SECRET_ACCESS_KEY: /skill-test/secretkey
    ASK_VENDOR_ID: /skill-test/vendorid
    ASK_REFRESH_TOKEN: /skill-test/refreshtoken

phases:
  install:
    commands:
       - npm install -g ask-cli
  build:
    commands:
       - ask deploy
       - cat .ask/ask-states.json

The “env:” section lists the credentials that will be securely loaded from AWS’s encrypted store and made available as environment variables in the build stage. The “install:” phase will install the ASK CLI, and the “build:” phase will build and deploy your skill. Now that the pipeline knows what to do, commit and push the buildspec.yml file, and this first successful run of the pipeline will create a new skill ID and CloudFormation stack. Note the last line of the buildspec.yml file—this command will print the results of the deployment into your build logs, which can be viewed in the AWS CodeBuild Console. The logs will contain several identifiers and should look like this:

{
  "askcliStatesVersion": "2020-03-31",
  "profiles": {
    "__ENVIRONMENT_ASK_PROFILE__": {
      "skillId": "amzn1.ask.skill.ID",
      "skillInfrastructure": {
        "@ask-cli/cfn-deployer": {
          "deployState": {
            "default": {
              "s3": {
                "bucket": "ask-src-awscreden-useast1-12345678",
                "key": "endpoint/build.zip",
                "objectVersion": "some_version_id"
              },
              "stackId": "arn:aws:cloudformation:us-east-1:account_num:stack/stack-name"
            }
          }
        }
      },
      "skillMetadata": {
        "lastDeployHash": "some_hash"
      },
      "code": {
        "default": {
          "lastDeployHash": "some_hash_2"
        }
      }
    }
  }
}

In the next step, we’ll feed a reduced version of this back into our source to guide subsequent deployments to the same destinations, and not create a new skill and CloudFormation stack with each deployment.

Step 7: Add .ask/ask-states.json File to Drive Subsequent Re-deployments

The ask-states.json file we saw printed in build logs contains permanent identifiers such as skill IDs and short-lived identifiers such as version IDs and file hashes. For pipeline purposes, we only need to grab permanent identifiers, so we’ll remove the unnecessary short-lived information and end up with a minimized ask-states.json that looks like:

{
    "askcliStatesVersion": "2020-03-31",
    "profiles": {
        "__ENVIRONMENT_ASK_PROFILE__": {
            "skillId": "amzn1.ask.skill.ID",
            "skillInfrastructure": {
                "@ask-cli/cfn-deployer": {
                    "deployState": {
                        "default": {
                            "s3": {
                                "bucket": "ask-src-awscreden-useast1-12345678",
                                "key": "endpoint/build.zip"
                            },
                            "stackId": "arn:aws:cloudformation:us-east-1:account_num:stack/stack-name"
                        }
                    }
                }
            }
        }
    }
}
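Hand-editing works fine for a file this small, but the pruning can also be scripted. A sketch that drops the short-lived keys shown above (objectVersion and lastDeployHash) and any sections they leave empty:

```python
SHORT_LIVED_KEYS = {"objectVersion", "lastDeployHash"}

def prune_states(node):
    """Recursively drop short-lived identifiers from ask-states data,
    along with any sections (e.g. skillMetadata, code) left empty."""
    if isinstance(node, dict):
        pruned = {
            key: prune_states(value)
            for key, value in node.items()
            if key not in SHORT_LIVED_KEYS
        }
        return {key: value for key, value in pruned.items() if value != {}}
    return node

# Usage:
#   import json
#   with open(".ask/ask-states.json") as f:
#       states = json.load(f)
#   with open(".ask/ask-states.json", "w") as f:
#       json.dump(prune_states(states), f, indent=4)
```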

Now save this file to [repository root]/.ask/ask-states.json, commit the change, and push. The new run should deploy to the same skill ID and CloudFormation stack created in the first successful run, and your first CI/CD pipeline for Alexa skills is now up and running. You can go on to write more code and continue pushing changes, or you can start customizing the pipeline with new features, if desired. Some ideas from me:

  • Add some tests to be run in the pipeline. You can run unit tests to test your code prior to `ask deploy`, or you may choose to automate testing simulated skill dialogs after deployment using the CLI’s `ask dialog --replay {test-scenarios.json}` command. This will reduce the need for manual testing, which can become time-consuming as your skill grows in size and complexity.
  • Add more stages. You can introduce multiple versions of ask-states.json (e.g. ask-states-dev.json, ask-states-live.json) containing different skill IDs and CloudFormation Stack IDs, and copy (or symlink) the stage-specific file to ask-states.json in the buildspec.yml file, to dynamically deploy your skill content to stage-specific destinations. This will be helpful if you want to separate your development infrastructure (databases, etc.) from your live/production infrastructure, or run different sets of tests (for example, full simulation tests for live/production, but only fast-running unit-tests in development).
  • Add a manual approval step for skill certification. The pipeline is now fully automated to push updates to the development stage of your skill, but you may want to manage the full lifecycle of your skill from your pipeline, including certification for publishing to the Alexa Skill Store. Most pipeline-style products offer manual approval capabilities that work well for this purpose. If you’re using AWS CodePipeline, you can find relevant topics in AWS’s documentation.
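As a sketch of the multi-stage idea above, the stage-specific copy could look like this (the STAGE environment variable is my own convention; you would invoke this, or an equivalent cp command, early in buildspec.yml):

```python
import os
import shutil

def select_stage_states(stage=None, ask_dir=".ask"):
    """Copy the stage-specific state file over the one `ask deploy` reads.

    The STAGE environment variable is an assumption -- set it per pipeline
    in the CodeBuild project's environment configuration.
    """
    stage = stage or os.environ.get("STAGE", "dev")
    shutil.copyfile(
        os.path.join(ask_dir, f"ask-states-{stage}.json"),
        os.path.join(ask_dir, "ask-states.json"),
    )
```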

We hope you found this walkthrough helpful. We’re always eager to hear about your experience using the CLI and ideas about how we can improve it. You can always find us and share your thoughts with us on GitHub. Happy coding!
