Danny Dong, co-founder and CEO of iMCO, wondered what life would be like if we didn’t have to rely on screens to get the information we want and need. After witnessing countless people checking their phones while on the go, he began to redefine how people carry technology with them every day.
He set out to create the CoWatch, the world’s first Amazon Alexa-integrated smartwatch, to help break the habit of constantly checking a screen for updates. Now, users can ask for the latest weather reports, check the time, and set reminders with their voice and Alexa.
By early 2016, Dong had a working smartwatch prototype. It had a beautiful design, but it needed an OS to bring it to life. That’s when Dong contacted Leor Stern, co-founder and CEO of Cronologics Corporation.
Stern’s team, assembled from Android and Google veterans, had built a new software platform for smart wearables that amplifies and supports smartphone features. “We built the Cronologics platform to unlock the true potential of the smartwatch,” Stern said. “We don’t see [a smartwatch] as a tiny, limited extension of a smartphone. It’s an independent device with smart features of its own, not dependent on a smartphone.”
Shortly after Dong partnered with the Cronologics team, Stern began discussions with Amazon about how to bring the power of Alexa to the smartwatch. Dong knew he wanted to create a completely new type of smartwatch—one that gives users a choice of how they consume information, either on a screen or by voice.
“We’d been talking to Amazon for a few months, bouncing around ideas like how cool it would be to have Alexa on a watch,” Stern said. “We were also talking with iMCO about the CoWatch when AVS came out. It fit beautifully into our plans.”
The Cronologics platform makes it easy to scroll through messages with a few taps, no phone required. And with Alexa, customers can ask for the latest scores and weather reports, or set reminders and alarms. Now, almost anything you can do with Alexa on an Echo, you can do with CoWatch.
For inspiration on developing innovative Alexa skills, check out the Wayne Investigation, a skill developed by Warner Bros. to promote the recently released Batman v Superman: Dawn of Justice feature film. In this audio-only, interactive adventure game, you’re transported to Gotham City a few days after the murder of Bruce Wayne’s parents. You play the part of a detective, investigating the crime and interrogating interesting characters, with Alexa guiding you through multiple virtual rooms, giving you choices, and helping you find important clues.
The game, created using the Alexa Skills Kit, is a collaboration between Amazon, Warner Bros., head writers at DC Comics, and Cruel & Unusual Films (the production house run by Batman v Superman director Zack Snyder and executive producers Debbie Snyder and Wes Coller). With these companies behind the game and its affiliation with a superhero film franchise, it’s not surprising that The Wayne Investigation was a big hit.
But it’s become enormously popular of its own accord. Launched on March 1, this was the first Alexa skill to combine Alexa technology with produced audio assets—namely, compelling music and sound effects—and the response has been extraordinary. During its first week, The Wayne Investigation was engaged with 7x more (per weekly average) than all other skills combined. Currently The Wayne Investigation rates in the top 5% of skills (earning 4.8 out of 5 stars) and is the #1 skill for both total time spent engaging with the skill and average time spent per user.
The team scripted the experience by building it around a gaming map with directions and actions in each room. Once the script was finalized, they used a decision tree model to translate the experience into code, which is hosted in AWS. From three starting actions, users can make up to 37 decisions, each taking the user down paths that lead to new and iconic Gotham characters and locations before completing the game. An efficient (and lucky) walkthrough of the Wayne Investigation takes 5 to 10 minutes, but fans who want to explore every nook and cranny can spend as long as 40 minutes in this Gotham City.
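A map-plus-decision-tree design of this kind can be sketched in miniature in Python. Every class name, room, and line of narration below is invented for illustration; this is not Warner Bros.’ actual code:

```python
# Hypothetical sketch of a decision-tree adventure skill's core data
# structure. Each node holds the narration Alexa speaks and the
# choices (utterance keyword -> next node) available to the player.

class Node:
    def __init__(self, narration, choices=None):
        self.narration = narration      # text Alexa reads aloud
        self.choices = choices or {}    # e.g. {"alley": alley_node}

# A tiny three-room map (the real skill offered up to 37 decisions).
crime_scene = Node("You stand in the alley where the Waynes fell.")
precinct = Node("The precinct is buzzing with detectives.")
start = Node(
    "You arrive in Gotham City. Investigate the alley or the precinct?",
    {"alley": crime_scene, "precinct": precinct},
)

def handle_choice(current, utterance):
    """Advance the game based on the player's spoken choice."""
    for keyword, nxt in current.choices.items():
        if keyword in utterance.lower():
            return nxt
    return current  # unrecognized input: re-prompt from the same room

room = handle_choice(start, "Let's check out the alley")
print(room.narration)
```

Hosting such a tree behind an AWS Lambda handler, as the post describes, then amounts to storing the player’s current node in session state and calling a function like `handle_choice` on each request.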
An added benefit of creating the Wayne Investigation skill is that it led to the creation of a tool that allows developers to graphically design interactive adventure games. Today, we’re pleased to announce that we’ve made a tool with source code available to make it easier for the Alexa community to create similar games.
To experience the skill, simply enable it in your Alexa companion app and then say, “Alexa, open the Wayne Investigation.”
Today’s guest blog post is from Troy Petrunoff, content strategist at AngelHack. Amazon works with companies like AngelHack who are dedicated to advancing the art of voice user experience through hackathons.
This year Amazon Alexa teamed up with AngelHack, the pioneers of global hackathons, for their ninth Global Hackathon Series. Since 2011, the series has exposed over 100,000 developers from around the world to new technologies from sponsors ranging from small startups to large corporations. Amazon Alexa joined the fun this year at nine AngelHack events, sending Solutions Architects and Amazon Echo devices to give talented developers, designers, and entrepreneurs the chance to learn about Alexa technology. Thirty-two teams incorporated Alexa technology into their projects.
Of the nine events Amazon Alexa sponsored, three of the grand prize winners won using Alexa. Winning the AngelHack Grand Prize earned these teams an exclusive invite to the AngelHack HACKcelerator program. AngelHack’s invite-only HACKcelerator program connects ambitious developers with thought leaders and experienced entrepreneurs to help them become more versatile, entrepreneurial, and successful. The program gives developers of promising hackathon projects the opportunity to listen and talk to some of the biggest players in the Silicon Valley tech scene on a weekly basis, all while providing the resources to successfully transition their hackathon project into a viable startup with early traction. In addition to the grand prize, the Amazon Alexa team offered a challenge at each AngelHack event: best voice user experience using Amazon Alexa. Beyond the three grand prize winning teams, two Alexa Challenge winners will also receive an invite into the HACKcelerator program. Participating HACKcelerator teams will be provided with mentorship and other resources to prepare them for the Global Demo Day in San Francisco.
Earlier this year, Paul Cutsinger, Evangelist at Amazon Alexa, joined a team of developers and designers from Capital One at SXSW in Austin to launch the new Capital One skill for Alexa. The launch of the new skill garnered national attention, as Capital One was the first company to give customers the ability to interact with their credit card and bank accounts through Alexa-enabled devices. This week at the Amazon Developer Education Conference in NYC, Capital One announced another industry first by expanding the skill to enable its customers to access their auto and home loan accounts through Alexa.
"The Capital One skill for Alexa is all part of our efforts to help our customers manage their money on their terms – anytime and anywhere," said Ken Dodelin, Vice President, Digital Product Management, Capital One. “Now, you can access in real time all of your Capital One accounts—from credit cards to bank accounts to home and auto loans—using nothing but your voice with the Capital One skill.”
The skill is one of the top-rated Alexa skills, with 4.5/5 stars across 47 reviews. It enables Capital One customers to stay on top of their credit card, auto loan, mortgage, and home equity accounts by checking their balance, reviewing recent transactions, or making payments, as well as get real-time access to checking and savings account information to understand their available funds.
“Capital One has a state-of-the-art technology platform that allows us to quickly leverage emerging technologies, like Alexa,” said Scott Totman, Vice President of Digital Products Engineering, Capital One. “We were excited about the opportunity to provide a secure, convenient, and hands-free experience for our customers.”
To bring the new skill to life, the Capital One team – comprised of engineers, designers, and product managers – kicked off a two-phase development process.
“Last summer a few developers started experimenting with Echo devices, and, ultimately, combined efforts to scope out a single feature: fetching a customer’s credit card balance. That exercise quickly familiarized the team with the Alexa Skills Kit (ASK) and helped them determine the level of effort required to produce a full public offering,” said Totman. “The second phase kicked off in October and involved defining and building the initial set of skill capabilities, based on customer interviews and empathy based user research. Less than six months later we launched the first version of the Capital One skill for Alexa.”
The team also spent a lot of time balancing customers’ need for both convenience and security. In the end, Capital One worked with Amazon to strike that balance, giving customers the option of adding a four-digit PIN to access the skill for an additional layer of security. The PIN can be changed or removed at the customer’s discretion.
“The Alexa Skills Kit is very straightforward. However, it is evolving quickly, so developers need to pay close attention to online documentation, webinars, and other learning opportunities in order to stay on top of new features and capabilities as they are released,” Totman said.
“We dedicated a lot of time to getting the conversation right from the start,” said Totman. “This meant we not only had to anticipate the questions customers were going to ask, but also how they were going to ask them.”
This was a really interesting challenge for Capital One’s design team. In order to make the skill feel like a personalized conversation, the team had to identify exactly where and how to inject personality and humor, while carefully considering customers’ priorities and the language they use to discuss finances.
“A lot goes into making sure our customers get what they expect from our personality, as well as what they expect from Alexa’s personality. That becomes especially visible when injecting humor, because what looks great on paper doesn’t always transition to the nuance of voice inflection, cadence, or the context of banking,” said Stephanie Hay, head of Capital One’s content strategy team. “But that’s the joy of design > build > iterate in a co-creation method; product, design, and engineering design the conversation together, hear Alexa say it, react, iterate, test it with actual customers, iterate further, and then get it to a point we all feel excited about.”
Capital One’s Alexa skill represents just the starting lineup of features. Capital One’s team continues to test, learn, and explore new features by focusing on customer needs and continually refining the experience.
“As customers become more familiar using voice technologies, we anticipate growing demand for feature capabilities, as well as increased expectations regarding the sophistication of the conversation,” Totman said. “With voice technologies, we get to learn firsthand how customers are attempting to talk to us, which allows us to continually refine the conversation.”
“The possibilities with the Alexa Skills Kit are nearly endless, but I advise developers to be very thoughtful about the value of their skill,” said Totman. “Leveraging voice-activated technology is only worthwhile if you can clearly define how your solution will go above and beyond your existing digital offerings.”
Stay tuned to part two to learn how Capital One built their Alexa skill and added new capabilities.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
In our first post, we shared why Discovery decided to build an Alexa skill and what requirements they outlined as they thought through what the voice experience should look like. In this post, we’ll share how they built and tested their Alexa skill and their tips for other Alexa developers.
When Stephen Garlick, Lead Development and Operations Engineer at Discovery Channel, took the lead in developing the Alexa skill, it was a chance to learn how to design a new experience for customers. He had no prior experience with AWS Lambda or the Alexa Skills Kit (ASK). To start, he spent some time digging into the online technical documentation and code samples provided in the Alexa GitHub repo. This helped him gain a deeper understanding of how to build the foundation of the Alexa skill and handle basic tasks.
By using AWS Lambda and ASK, Stephen and team were able to keep things simple and quickly deploy the code without setting up additional infrastructure to support the skill. Additionally, they were easily able to extend the Node.js sample skill without having to create a skill from scratch.
Initially, Discovery used Alexa’s voice to respond with facts; later, they decided to customize the voice by using mp3 playback. To accomplish this, Stephen used SSML support for mp3 playback and Amazon S3 with CloudFront for hosting the files reliably. Each mp3 was less than 90 seconds in length, encoded at 48 kbps, and adhered to the MPEG version 2 specification. All the resources were created and deployed using the AWS CloudFormation service.
For the countdown feature, Stephen pulled the moment.js dependency into the Node.js project to help simplify some time-based calculations. The countdown now combines mp3 playback for everything except the actual time, which is spoken by Alexa.
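The pattern Stephen describes—recorded audio for the framing, synthesized speech for the live time—can be sketched as follows. Only the kickoff date comes from the post; the mp3 URL and message wording are invented for illustration:

```python
# Hedged sketch (not Discovery's actual code): build an SSML response
# that plays a recorded mp3 lead-in and lets Alexa speak the live
# countdown, mirroring the mp3-plus-spoken-time approach described.
from datetime import datetime, timezone

KICKOFF = datetime(2016, 6, 26, 0, 0, tzinfo=timezone.utc)  # Shark Week start
INTRO_MP3 = "https://example.com/shark-week-intro.mp3"      # hypothetical URL

def countdown_ssml(now):
    remaining = KICKOFF - now
    if remaining.total_seconds() <= 0:
        return "<speak>Shark Week has already started!</speak>"
    days, rem = divmod(int(remaining.total_seconds()), 86400)
    hours = rem // 3600
    # Recorded audio for the lead-in, synthesized speech for the time.
    return (
        "<speak>"
        f'<audio src="{INTRO_MP3}"/>'
        f"Shark Week starts in {days} days and {hours} hours."
        "</speak>"
    )

print(countdown_ssml(datetime(2016, 6, 20, 0, 0, tzinfo=timezone.utc)))
```

A real skill would also need the post-kickoff and post-conclusion branches described in Tip #2 below, and the mp3 would have to meet the bitrate and duration limits noted above.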
To test the skill, they used the skill test pane within the Alexa app. The testing tool made it easy to quickly test various scenarios without an Alexa-enabled device. Once the skill was operating as expected (and desired) in the test pane, Stephen asked other people to test the Shark Week skill on Alexa-enabled devices. This allowed them to collect additional feedback and iterate accordingly.
Overall, the entire process of learning these new technologies, coding, and building the skill took no more than 12 hours. This included a few iterations of the Alexa skill as well.
Tip #1: Make The Skill As Human As Possible: Initially, Discovery had the Alexa voice state each of the randomized facts. In an attempt to assist with the pronunciation, they spelled a few of the words and numbers phonetically. However, in doing so, the cards displayed in the Alexa app weren't correct. It quickly became apparent that a recorded reading of each fact eliminated the pronunciation issues, enabled proper spelling of facts for the cards in the Alexa app, and made the entire experience more personal.
Tip #2: Plan for Time-Sensitive Coding: If you're building time-specific functionality (e.g., a countdown timer to a specific time), make sure you think about what happens when that time arrives. The team at Discovery accounted for the Shark Week kickoff by providing three different countdown messages based on the time in each specific time zone. The first was the countdown lead-in, the second was a message indicating that Shark Week had already started, and the third indicated that Shark Week had concluded and that the Shark Week website provides other shark-related information year-round.
Tip #3: Control for Volume: If you're using a combination of recordings and Alexa powered speech, make sure the volume levels are consistent throughout the experience.
Tip #4: Be Creative with Your Intent Schema and Utterances: People think, act, and speak differently. Therefore, it's important that you account for as many different phrasings as possible. For example, after you ask for a Shark Week fact, the skill will ask if you would like to hear another. Just a few of Discovery’s "no" utterances include "no," "nope," "no thanks," "no thank you," "not really," "definitely not," "no way," "nah," "negative," "no sir," "maybe another time," and many more. It's better to be as inclusive as possible, rather than having Alexa unable to understand.
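As a toy illustration of Tip #4 (in a published skill these variants belong in the interaction model's sample utterances, not application code), inclusive matching of the "no" variants might look like:

```python
# Illustrative only: map many spoken variants onto one logical "no"
# answer, as Tip #4 recommends.
NO_UTTERANCES = {
    "no", "nope", "no thanks", "no thank you", "not really",
    "definitely not", "no way", "nah", "negative", "no sir",
    "maybe another time",
}

def is_no(utterance):
    """True if the (normalized) utterance is any recognized 'no' variant."""
    return utterance.strip().lower() in NO_UTTERANCES

print(is_no("No thanks"))
```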
Tip #5: Take Chances: Push your limits and think big when it comes to building your Alexa skill. Discovery started the project with a broad scope in mind and was able to quickly iterate and resubmit the skill for certification.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
Craig Johnson, president of Emerson’s Residential Solutions business, claims it was inevitable. “Thermostats are no longer just passive HVAC controllers hanging on your wall. The convergence of wireless and mobile technologies allowed us to develop a thermostat that allows better temperature control, programmability and scheduling, as well as remote access.”
Even before Amazon’s Smart Home Skill API was publicly released, Johnson was excited about the smart home. “Prior to Smart Home, Emerson had a fully functional mobile app and internet portal our customers could use to control their Sensi thermostat remotely. But integration of Alexa is a natural extension of that remote access and remote functionality.”
In February 2016, Johnson’s software development manager, Joe Mahari, jumped on board the Smart Home beta program. In just four weeks’ time—and by the time Amazon officially launched the Smart Home Skill API—Mahari’s team had built and tested its Sensi Smart Home skill and passed certification.
The Smart Home Skill API converts a voice command, such as “Alexa, increase my first floor by 2 degrees,” to directives (JSON messages). The directive includes:
The Smart Home Skill API then sends the directive to the methods implemented in the Sensi skill.
According to Mahari, Emerson implemented three main directives. Examples of these are:
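As a hedged illustration only—the field names follow the general shape of the 2016-era Smart Home Skill API as best I can reconstruct it, and every value is a placeholder, not Emerson's actual payload—a directive for "increase my first floor by 2 degrees" might look like this, modeled as a Python dict mirroring the JSON message:

```python
# Hypothetical directive shape; all values are placeholders.
directive = {
    "header": {
        "namespace": "Alexa.ConnectedHome.Control",
        "name": "IncrementTargetTemperatureRequest",
        "payloadVersion": "2",
        "messageId": "example-message-id",            # placeholder
    },
    "payload": {
        "accessToken": "example-oauth-token",         # placeholder
        "appliance": {"applianceId": "first-floor"},  # placeholder
        "deltaTemperature": {"value": 2.0},
    },
}

# A skill adapter would dispatch on the header name to its handler:
handlers = {
    "IncrementTargetTemperatureRequest":
        lambda payload: payload["deltaTemperature"]["value"],
}
delta = handlers[directive["header"]["name"]](directive["payload"])
print(delta)
```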
The Emerson team agrees the skill and API were well packaged and supported, end-to-end. “Amazon defined the use case very crisply,” said Johnson. “We received a deck of scenarios to achieve, plus integrated logging, systems’ checks and documentation. These were essential to our success.”
Mahari says it was invaluable that the Amazon team connected with them daily. “For example, we had some concerns about how to increase or decrease the temperature during auto-schedules. But working directly with the Alexa team, we figured out how to make it work.”
So, if working with Amazon’s support and the API itself went so smoothly, what were some challenges the Emerson team faced over the four-week project?
Shark Week, television's most anticipated week of shark-filled programming, kicked off this year on Sunday, June 26. A celebration of all things shark-related, this year's week delivers more hours of all-new, visually stunning, and informative shark specials than ever before.
Since the Discovery Channel audience is tech-savvy and forward-thinking, Discovery wanted to expand Shark Week's reach. To do this, Discovery decided to test the water with a Shark Week skill.
Given this was Discovery's first Alexa skill, they wanted to familiarize the team with Alexa skill development. Adam Zuckerman, Director of Ventures & Innovation at Discovery Channel, says, “The requirements for the Alexa skill were three-fold: 1) build an Alexa skill that the Discovery audience would find useful and informative; 2) add a time-relevant component for Shark Week; and 3) remain relevant after Shark Week.”
With these requirements in mind, they built an Alexa skill with a real-time countdown timer and a voice activated “Sharkopedia” fact engine. The skill was a collaboration between Adam Zuckerman and Stephen Garlick, Lead Development and Operations Engineer at Discovery Channel.
See the skill in action by first enabling the skill via the Alexa app or simply saying, “Alexa, enable Shark Week skill.” Then say, “Alexa, ask Shark Week for a fact.”
Stay tuned for part two to learn how Discovery built and tested their Alexa skill and their tips for other Alexa developers.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
Imagine that a group of you gathers for an impromptu meeting, and Alexa not only tells you which conference rooms are available but also schedules the room of your choice. That’s the vision behind an Alexa skill in development at Beco (check out the proof-of-concept demo), and it demonstrates the enormous potential for Alexa to deliver new experiences, efficiencies, and value in the workplace.
The Beco skill is a location-aware office assistant that combines the natural ease of a voice user interface with the building intelligence of Beco, a mobile platform that uses existing light fixtures to power low-cost iBeacons, a mobile SDK, and cloud services that connect to enterprise systems. For clients across sectors, Beco provides indoor positioning, location analytics, and the ability to search for people and places in real time. Learn more about Beco (pronounced “Bee-Co,” short for “Be Connected”) here.
The Beco Alexa skill communicates with a Node.js application deployed on AWS Lambda. Given a query from Alexa, the Node.js application maps a person’s name to an email address using a lookup table. Beco provides extensive People vs. Place vs. Time query functionality via a real-time Occupancy API, a RESTful web service that allows introspection of a variety of hyper-local data.
The skill requires the use of custom slots in its intents, because people typically give numbers as ordinals (“the 16th floor”) rather than cardinals (“floor 16”). Following are the intents available now and some of their corresponding sample utterances.
Uses the “find by email address” endpoint to find the “Place” where the mobile device of the person-to-be-found is currently located, then speaks the name of the Place.
Uses the “what spaces are free/utilized” endpoint and speaks back the names of those free Places.
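The two intents above, plus the ordinal-to-cardinal slot handling, can be sketched as follows. All endpoint behavior, lookup data, and ordinal values here are invented for illustration; this is not Beco's actual Occupancy API:

```python
# Illustrative stand-ins for the skill's backing data (hypothetical).
PEOPLE = {"alice": "alice@example.com"}                # name -> email
LOCATIONS = {"alice@example.com": "16th Floor Cafe"}   # email -> Place
FREE_SPACES = ["Room Kepler", "Room Hubble"]

# Custom slot values: people say ordinals ("the 16th floor"), so the
# skill normalizes them to cardinal floor numbers.
ORDINALS = {"first": 1, "second": 2, "sixteenth": 16}

def floor_from_slot(word):
    """Normalize a spoken ordinal slot value to a floor number."""
    return ORDINALS.get(word.lower())

def find_person(name):
    """'Find by email address' endpoint: name -> email -> current Place."""
    email = PEOPLE.get(name.lower())
    place = LOCATIONS.get(email, "an unknown location")
    return f"{name} is at {place}."

def free_rooms():
    """'What spaces are free/utilized' endpoint: speak the free Places."""
    return "Free right now: " + ", ".join(FREE_SPACES) + "."

print(find_person("Alice"))
print(free_rooms())
```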
The Beco team envisions expanding Alexa integration to include these capabilities:[Read More]
This post introduces Flask-Ask, a new Python micro-framework that significantly lowers the bar for developing Alexa skills. Flask-Ask is a Flask extension that makes building voice user interfaces with the Alexa Skills Kit easy and fun. We'll learn it by building a simple game that asks you to repeat three numbers backwards. Knowing Python and Flask is not required, but some programming experience will help.
If you prefer the video walkthrough of this post, check it out here.
To start, you'll need Python installed. If you're on a recent version of OS X or Linux, Python comes preinstalled; on Mac, you can also find installation instructions here. You may also need to install pip, which can be found here. On Windows, follow these installation instructions. Once Python and pip are installed, open a terminal and type the command below to install Flask-Ask. Note: you might need to precede it with sudo on Unix if you get permission errors.
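Once Flask-Ask is installed, the repeat-backwards game's core logic is small enough to sketch on its own. The function names below are invented for illustration, and the comments note where Flask-Ask's actual decorators (`@ask.launch`, `@ask.intent`) and `session.attributes` would take over:

```python
import random

def new_round(rng=random):
    """Pick three numbers for the player to repeat backwards.
    In Flask-Ask this logic would run inside the @ask.launch handler,
    with the numbers stashed in session.attributes between turns."""
    numbers = [rng.randint(0, 9) for _ in range(3)]
    prompt = "Repeat these backwards: " + ", ".join(map(str, numbers))
    return numbers, prompt

def check_answer(numbers, first, second, third):
    """Compare the spoken answer to the reversed sequence. In Flask-Ask
    this would be an @ask.intent('AnswerIntent') handler, with the
    three slots converted to ints via its convert= mapping."""
    if [first, second, third] == numbers[::-1]:
        return "Correct, nice memory!"
    return "Sorry, that's not it."

numbers, prompt = new_round()
print(prompt)
print(check_answer([3, 1, 4], 4, 1, 3))
```

Wrapping these two functions in Flask-Ask handlers that return `question(...)` and `statement(...)` responses is essentially all the framework asks of you.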
When Sébastien de la Bastie, Managing Director at Invoxia, walked the exposition floor at CES 2015, he knew his company had a hit on its hands. That’s when the French technology start-up previewed Triby—a new smart speaker for the kitchen.
Triby is a portable yet connected device for your smart home. Triby’s integration of AVS gives it voice control over a growing world of digital devices and services. It’s an incredible wireless music system for Spotify, FM and internet radio, and Bluetooth streaming. And it’s also a family’s home communications hub. Triby provides a connected message board, free internet calls and conference-quality, hands-free mobile calls. It also has an e-paper screen to display digital sticky notes and other information.
Paul Bernard, Director, Corporate Development and Head of the Alexa Fund, also saw the CES demo and was impressed with Invoxia’s vision for reinventing home communications with the Triby. He saw the device as an ideal candidate for Alexa Voice Service (AVS) integration. Soon afterward, Amazon approached de la Bastie about adding new capabilities to his device.
With AVS, Triby would have access to the world of voice-enabled services offered by Amazon. “Amazon had the idea of making Alexa available on non-Amazon devices,” said de la Bastie. “They spotted us because we had developed innovative technology for far-field voice capture. That’s important for any device sending acoustic data to Alexa.”
In late 2015, Amazon’s Alexa Fund made Invoxia its first AVS-related investment, showing its support for the company’s vision of bringing voice to Triby. Thus began a strong, collaborative partnership to create the first non-Amazon, Alexa-enabled device. The new Triby debuted in April 2016.
de la Bastie recalls that the project involved three distinct phases.
When I was first introduced to Zach Feldman, Chief Academic Officer and Co-Founder of The New York Code + Design Academy, I knew I was talking with an Alexa connoisseur. Before Amazon publicly released the Alexa Skills Kit, Zach was talking about how to add capabilities to Alexa. He went on to publish alexa-home, a popular GitHub project for controlling home automation software with Amazon Echo, before we even released the Smart Home Skill API. Zach has always shown a keen interest in the voice space, so a year later it made complete sense to bring his knowledge of Alexa development to The New York Code + Design Academy.
Today, I’m excited to announce a collaboration between The New York Code + Design Academy (NYCDA) and Amazon Alexa. NYCDA has been training developers – at all levels – with hands-on, intensive workshops in web and mobile app development for the past three years.
This summer, NYCDA students will be able to attend the first in-person training on building Alexa skills with Ruby and Sinatra as the language and framework of choice. Students will begin by gaining an understanding of the Alexa Skills Kit (ASK). From there, they’ll move on to building an Alexa skill together as a class with both a simple skill and one that accesses an external API. They’ll be able to test their voice user experiences with Amazon Tap speakers, Alexa-enabled devices provided by the school. The course will wrap up with an independent final project and will walk students through the process of certification and publication of their first Alexa skill. Classes will run from August 9, 2016 through September 27, 2016. To enroll, students can apply here.
Wait, there’s more. Zach will be hosting a free lecture on Alexa skill infrastructure and what goes into building your first skill on June 21, 2016 at 6:30 p.m. ET at NYCDA’s headquarters in New York City. If you’re in the area, don’t miss this opportunity to meet him, learn more about Alexa skill development, and ask questions about NYCDA’s 8-week Alexa course. Save your spot.
“Amazon Alexa is one of the most compelling new software and hardware integrations I've seen in a while! I can't wait to bring the power of Alexa to our students and the Ruby development community.” - Zach Feldman, Chief Academic Officer and Co-Founder of The New York Code + Design Academy
Learn more about the Alexa course from NYCDA here.
Experienced Alexa developer Eric Olson (Galactoise in the Amazon developer forums) set out to determine whether you could really create a custom skill from scratch within 24 hours. Eric did it in less than 12—and did it well—with DiceBot, a weighted random-number-generator skill that he developed using the Alexa Skills Kit (ASK) and a Lambda proxy.
By day Eric is an engineer for Disney, but he and his friends at DERP Group also build things on their own for fun and profit. The dice-rolling DiceBot is their third Alexa skill, and Eric shares his experience about the process in this informative blog post. His vision was:
In DiceBot, you can invoke a different intent by changing the invocation phrase. For example, by prepending the word “me” to your dice-set description, you can tip DiceBot off to weight things a bit differently:
You can also append “for me” to the end of the dice set description to weight the rolls downward:
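The prepend/append behavior can be sketched as a weighted roll. The weighting scheme below is purely hypothetical and not DiceBot's actual implementation:

```python
import random

def roll(sides, bias=None, rng=random):
    """Roll one die. bias='high' nudges results upward (the 'me'
    prefix), bias='low' nudges them downward (the 'for me' suffix).
    The linear weights here are hypothetical, for illustration only."""
    faces = list(range(1, sides + 1))
    if bias == "high":
        weights = [face for face in faces]              # favor big faces
    elif bias == "low":
        weights = [sides + 1 - face for face in faces]  # favor small faces
    else:
        weights = [1] * sides                           # fair die
    return rng.choices(faces, weights=weights, k=1)[0]

print(roll(20))           # fair d20
print(roll(20, "high"))   # weighted toward high faces
```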
Read more about Eric's experience building DiceBot and give it a try yourself. Simply enable the DiceBot skill in the Alexa app and say one of these:
On April 5, 2016, Amazon announced the Smart Home Skill API, the public, self-service version of the Alexa Lighting API, which was introduced as a beta in August 2015. As part of the beta program, we worked with a number of companies to gather developer feedback, while extending the Alexa smart home capabilities to work with their devices.
In 2015, the Alexa team wanted to make it fast and easy for developers to create skills that connect devices directly to its lighting and thermostat capabilities so that customers can control their lights, switches, smart plugs, or thermostats—without lifting a finger. The beta program brought in experts in thermostats and home comfort, which naturally led the team to ecobee, creator of a smart thermostat that uses remote sensors to optimize the temperature in specific rooms. The engineers at ecobee jumped at the chance to help Amazon define the integration and product requirements for the new feature.
Why? ecobee was the first to let iOS users control their thermostats with the Siri voice interface when it integrated Apple’s HomeKit API into its smartphone app. “But Alexa’s way different,” said Hesham Fahmy, Vice President of Technology for ecobee. “One of our biggest product benefits is ‘Comfort where it matters,’ which is especially true with our remote sensor capabilities.” To Fahmy, it made perfect sense to connect your ecobee device to Alexa and say, “Alexa, turn up the temperature,” without needing to find your phone.
Recently an entrepreneur approached software and design firm Macadamian with a unique product concept: an interactive NHL scoreboard. That WiFi-connected, voice-controlled gadget is enough to make any hockey fan drool. And while it was the company’s first foray into the world of Echo and Alexa, it was certainly not the last.
Now Macadamian has launched an Alexa skill to bring “hands-free” to an action performed 6 billion times each day in the U.S. alone: sending a text message. What could have more mass-market appeal? Yet the company says it created the skill to showcase its expertise, not to gain millions of users.
They call their skill Scryb (pronounced “scribe”). To use it, enable Scryb in the Alexa app, and simply say, “Alexa, Scryb your-message-here.”
Ed Sarfeld, UX architect at Macadamian, explains the twofold reason behind the name. "As UX designers, we wanted to make the skill simple and natural to use. The word ‘scribe’ means to write, so it's easy to remember. We changed the spelling because of existing trademarks and wordmarks. But this is voice, and it’s still pronounced ‘scribe’.”
Further, “scribe” is also the skill's main verb, and there’s no need to repeat it. Scryb needs only a single, simple statement: “Alexa, tell Scryb I’m on my way.” Less to remember means it’s simpler for the user.
By design, users have few other commands to worry about. One lets you set or change the recipient—Scryb stores only one number at a time. If that seems odd, remember there’s no smartphone screen of contacts to tap on here. And having a single, primary recipient is right in line with the expected uses for the skill:
The Alexa Skills Kit is a collection of self-service APIs, tools, documentation, and code samples that make it fast and easy for developers to add skills to Alexa. Justin Kovac, developer of 7-Minute Workout and Technical Program Manager for the Alexa Skills Kit, shares his experience and tips for diving head-first into building your own skills.
Prior to his current role, Justin was a Developer Advocate for multiple services across Amazon, where his core responsibility was to serve as a voice of the developer community. This included gathering community feedback to help guide initiatives and providing technical guidance to anyone seeking help via Amazon's Developer Forums and Contact Us support channels. "When I began supporting Alexa, I needed to get my bearings quickly," Justin remembers. “How can you advocate on behalf of a new developer community if you haven’t been in their shoes?”
To get started, Justin attended a hackathon – the perfect opportunity to learn the whole process, from concept to certification.
"The 7-Minute Workout skill is extremely simple in concept," Justin believes. "After some brainstorming, I remembered an iOS app I used based on a New York Times article. It worked, but it felt awkward to have my phone on the table or floor while looking for the next exercise in the routine." That's when Justin began creating a proof of concept of his skill using Node.js and AWS Lambda, an Amazon Web Service where you can run code for virtually any type of application or backend service with zero administration.
“To me, the most important benefit of 7-Minute Workout was getting hands-on knowledge of how to develop an Alexa skill, end to end. Knowing that, I was able to better support the developers who are just joining our community.”
Below Justin discusses the top seven lessons he learned while developing the 7-Minute Workout.
One of the things the hackathon experience made very clear to me was the need to start with the voice experience, not the code. While skills are developed using the same tools and resources you would use when creating an app, designing for voice feels distinctly different, which makes it essential to understand VUI concepts first. The idea of triggering an action, like you traditionally would by pressing a button in an app, now spans hundreds of potential phrasings of the customer’s request. So a customer could potentially say, “start a new workout” or “begin a workout” or “let’s exercise.” This guide is a great starting point to help you better understand the Alexa Skills Kit, VUI, and how to keep users on the "happy path" when interacting with your skill via voice.
With no prior experience building an Alexa skill, I needed the ability to dive right in. What I quickly realized was that there was no need to reinvent the wheel. Amazon’s included samples provide a great variety of functional building blocks to kick-start your skill, including DynamoDB integration, multi-stage conversations, RESTful requests to third-party APIs, and more. Personally, I used 'Wiseguy' as a starting point for the 7-Minute Workout skill because of its simplicity and intent structure. For each sample, read the overview of features and don't forget to follow the README.md files for step-by-step instructions.