Eric Olson and David Phillips, co-founders of 3PO-Labs, are both “champs” when it comes to building and testing Alexa skills. The two met while working together at a Seattle company in 2015. Finding they had common interests, they soon combined forces to “start building awesome things”—including Alexa skills and tools.
Eric, an official Alexa Champion, is primarily responsible for the Bot family of skills. These include CompliBot and InsultiBot (both co-written with David), as well as DiceBot and AstroBot. David created and maintains the Alexa Skills Kit (ASK) Responder. The two do almost everything as a team, though, and together built the underlying framework for all their Alexa skills.
This fall, they’re unveiling prototyping and testing tools that will enable developers to build high-quality Alexa skills faster than ever.
Eric and David first got involved with Alexa when Eric proposed an Amazon Echo project for a company hackathon. The two dove into online documentation and started experimenting—and having fun. “After the hackathon, we just kind of kept going,” Eric said. “We weren’t planning to get serious about it.”
But over the past year, they grew more involved with the Alexa community and created tools that could benefit everyone. “We wrote these tools to solve problems we ran into ourselves. We ended up sharing them with other people and they became popular,” David said.
The first of these, the Alexa Skills Kit Responder, grew from David’s attempt to speed up testing of different card response formats. Getting a response just right meant repeatedly modifying and re-deploying code for every change. Instead, this new tool lets developers test mock skill responses without writing or deploying a single line of code. Follow the documentation to set up an Alexa skill that interfaces with ASK Responder, then upload any response you’d like. The ASK Responder will return it when invoked.
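Because an Alexa skill response is plain JSON, a mock response like the ones Responder serves is easy to assemble by hand. The sketch below shows the shape of a minimal response with a simple card; the helper name is ours, and the exact payload ASK Responder expects should be checked against its documentation.

```python
import json

def mock_response(speech_text, card_title, card_content):
    """Assemble a minimal Alexa skill response with a simple card.

    mock_response is a hypothetical helper; the JSON shape follows
    the standard ASK response format.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "card": {
                "type": "Simple",
                "title": card_title,
                "content": card_content,
            },
            "shouldEndSession": True,
        },
    }

payload = mock_response("Hello from the Responder!", "Test Card",
                        "This text appears on the card in the Alexa app.")
print(json.dumps(payload, indent=2))
```

Uploading a series of payloads like this lets you iterate on card wording and speech output without touching deployed skill code.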
And that’s just the beginning. The ASK Responder’s usefulness is about to explode.
David created Responder for testing mock responses. But the two soon discovered a home automation group using the tool in an unexpected way.
Instead of a skill called “Responder,” they’ll create a skill named My Home Temp, for example. They’ll map an intent like “What is the temperature?” and have their smart home device upload a response to the ASK Responder with the temperature of the house. When the user says “Alexa, ask My Home Temp what is the temperature?” Alexa plays the uploaded response through the Echo. This creates the seamless illusion of a fully operating skill.
Dave Grossman, chief creative officer at Earplay, says his wife is early to bed and early to rise. That’s not surprising when you have to keep up with an active two-year-old. After everyone else is off to bed, Grossman stays up to clean the kitchen and put the house in order. Such chores require your eyes and hands, but they don’t engage the mind.
“You can’t watch a movie or read a book while doing these things,” says Grossman. “I needed something more while doing repetitious tasks like scrubbing dishes and folding clothes.”
He first turned to audiobooks and podcasts to fill the void. Today, though, he’s found the voice interactivity of Alexa is a perfect fit. That’s also why he’s excited to be part of Earplay. With the new Earplay Alexa skill, you can enjoy Grossman’s latest masterpieces: Earplays. Earplays are interactive audio stories you control with your voice. And they all feature voice acting and sound effects like those in an old-time radio drama.
Jonathon Myers, today Earplay’s CEO, co-founded Reactive Studio in 2013 with CTO Bruno Batarelo. The company pioneered the first interactive radio drama, complete with full cast recording, sound effects and music.
Myers started prototyping in a rather non-digital way. Armed with a bunch of plot options on note cards, he asked testers to respond to his prompts by voice. Myers played out scenes like a small, intimate live theater, rearranging the note cards per the users’ responses. When it was time to design the code, Myers says he’d already worked out many of the pitfalls inherent to branching story plots.
They took a digital prototype (dubbed Cygnus) to the 2013 Game Developers Conference in San Francisco. Attendees gave the idea a hearty thumbs-up, and the real work began, leading to a successful Kickstarter campaign and a release showcased at PAX Prime 2013 in Seattle.
Grossman later joined the team as head story creator, after a decade at Telltale Games. Grossman had designed interactive story experiences for years, including the enduring classic The Secret of Monkey Island at LucasArts. Most gamers credit him with creating the first video game to feature voice acting.
Together they re-branded the company as Earplay in 2015. “We were working in a brand new medium, interactive audio entertainment. We called our product Earplay, because you're playing out stories with your voice,” Myers says.
The team first produced stories—including Codename Cygnus—as separate standalone iOS and Android apps. They then decided to build a new singular user experience. That lets users access all their stories— past, present and future—within a single app.
When Alexa came along, she changed everything.
The rapid adoption of the Amazon Echo and growth of the Alexa skills library excited the Earplay team. The company shifted its direction from mobile-first to a home entertainment-first focus. “It was almost as though Amazon designed the hardware specifically for what we were doing.”
Though not a developer, Myers started tinkering with Alexa using the Java SDK. He dug into online documentation and examples and created a working prototype over a single weekend. The skill had just a few audio prompts and responses from existing Earplay content, but it worked. He credits the rapid development, testing and deployment to the Alexa Skills Kit (ASK) and AWS Lambda.
Over several weeks, Myers developed the Earplay menu system to suit the Alexa voice-control experience. By then, the code had diverged quite a bit from what they used on other services. “When I showed it to Bruno, it was like ‘Oh Lord, this looks ugly!’” As CTO, Bruno Batarelo is in charge of Earplay’s platform architecture.
An intense six-week period followed. Batarelo helped Myers port the Earplay mechanics and data structures so the new skill could handle the Earplay demo stories. On August 26, they launched Earplay, version 1.0.
With thousands of skills, Alexa is in the Halloween spirit, and we’ve rounded up a few spooky skills for you to try. See what others are building, get inspired, and build your own Alexa skill.
Magic Door added a brand-new Halloween-themed story. Complete with a spooky mansion and lots of scary sound effects, you’re bound to enjoy the adventure. Ask Alexa to enable the Magic Door skill and start your Halloween adventure.
Are you worried about some restless spirits? Use Ghost Detector to detect nearby ghosts and attempt to catch them. The ghosts are randomly generated with almost 3000 possible combinations and you can catch one ghost per day to get Ghost Bux. Ask Alexa to enable Ghost Detector skill so you can catch your ghost for the day.
Horror movie buffs can put themselves to the test with the Horror Movie Taglines skill. Taglines are the words or phrases used on posters, ads, and other marketing materials for horror movies. Alexa keeps score while you guess over 100 horror movie taglines. Put your thinking cap on and ask Alexa to enable the Horror Movie Taglines skill.
Let this noise maker join your Halloween party this year. These spooky air horn sounds are the perfect background music for Halloween night. Listen for yourself by enabling Spooky Air Horns skill.
Scary, spooky haunted houses define Halloween and this interactive story is no different. The Haunted House skill lets you experience a stormy Halloween night and lets you pick your journey by presenting several options. The choice is yours. Start your adventure by enabling Haunted House skill.
This Halloween, you can follow Bryant’s tutorial and learn how to turn your Amazon Echo into a ghost with two technologies: the Photon and Alexa. With an MP3 and NeoPixel lights, you’ll be ready for Halloween. Dress up your own Echo with this tutorial.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
Landon Borders, Director of Connected Devices at Big Ass Solutions, still chuckles when he tells customers how the company got its name. Founder Carey Smith started his company back in 1999, naming it HVLS Fan Company. Its mission was to produce a line of high-volume, low-speed (HVLS) industrial fans. HVLS Fan Company sold fans up to 24-feet in diameter for warehouses and fabrication mills.
“People would always say to him ‘Wow, that’s a big-ass fan.’ They wanted more information, but they never knew how to reach us,” says Borders. So the founder listed the company in the phone book twice, both as HVLS Fan Company and Big Ass Fans. Guess which phone rang more often? “In essence, our customers named the company.”
Today the parent company is Big Ass Solutions. It still owns Big Ass Fans. It also builds Big Ass Lights and Haiku Home, a line of smart residential lighting and fans. Now with an Alexa skill, the company’s customers can control their devices using only their voice.
Haiku Home is where Alexa comes into the picture.
Big Ass Fans (BAF) is a direct-sales company. As such, it gets constant and direct feedback about customers' satisfaction and product applications. BAF found people were using its industrial-grade products in interesting commercial and home applications. It saw an exciting new opportunity. So in 2012, BAF purchased a unique motor technology, allowing it to create a sleek, low-profile residential fan.
That was just the starting point for BAF’s line of home products. The next year, BAF introduced Haiku with SenseME, the world’s first smart fan.
What’s a smart fan? Borders says it first has to have cutting-edge technology. Haiku Home fans include embedded motion, temperature and humidity sensors. A microprocessor uses that data to adjust the fan and light kits to the user's tastes. The device also has to be connected, so it includes a Wi-Fi radio.
The microprocessor and Wi-Fi radio make the SenseME fan a true IoT device. Customers use a smartphone app to configure the fan’s set-it-and-go preferences. But after that, why should you need an app?
Borders remembers discussions in early 2015 centered on people getting tired of smartphone apps. Apps were a good starting point, but the company found some users didn’t want to control their fan with their smartphone. BAF felt voice was definitely the user interface of the future. When they saw Amazon heavily investing in the technology, they knew what the next step would be.
They would let customers control their fans and lights simply by talking to Alexa.
Brian Donohue, New Jersey-born software engineer and former CEO of Instapaper, wasn't an immediate Alexa fan. In fact, his first reaction to the 2014 announcement of the Amazon Echo was "That's cool, but why would I buy one?"
All that changed over the course of one whirlwind weekend in March 2016. Almost overnight, Brian went from almost indifferent to being one of the most active developers in the Alexa community. Today he’s recognized as an Alexa Champion and a master organizer of Alexa meetups.
We sat down with Brian to find out how Alexa changed his entire view of voice technology... and why he wanted to share his excitement with other Alexa developers.
Brian has led Instapaper for the last two and a half years. Its former owner, Betaworks, always encouraged employees—including Brian—to experiment and innovate with new technology. Brian has built apps for Google Glass and other devices, just because the company had them lying around the office.
When the company bought an Echo device in March, Brian had to take another look. He took it home one Friday night and decided to try building a skill using the Alexa Skills Kit (ASK). He selected something simple, inspirational and personal to him. The skill—which later became Nonsmoker—keeps track of when you stopped smoking and tells you how long it's been since your final cigarette.
The first version took Brian half a day to create. It was full of hardcoded values, but it was empowering. Then, in playing with this and other Alexa skills, Brian recognized something exciting. A fundamental technology shift was staring right at him. When he returned the Echo to the office on Monday, he was hooked.
“Interacting with Alexa around my apartment showed me the real value proposition of voice technology,” says Brian. “I realized it’s magical. I think it’s the first time in my life that I’d interacted with technology without using my hands.”
Brian wanted immediate and more active involvement in Alexa development. The following day he was searching meetup.com for Alexa user gatherings in New York City. He found none, so Brian did what always came naturally. He did it himself.
His goal was to find 20 or so interested people before going to the effort of creating a meetup. The demand was far greater than he expected. By the third week of March, he was hosting 70 people at the first-ever NYC Amazon Alexa Meetup, right in the Betaworks conference room.
After a short presentation about Echo, Tap and Dot, Brian did the rest of the program solo. He created a step-by-step tutorial with slides and code snippets, all to explain how to create a simple Alexa skill. He walked attendees through the tutorial, then let them test and demo their skills on his own Echo, in front of the class.
“A lot of them weren’t developers, but they could cut and paste code,” says Brian. “About half completed the skill, and some even customized the output a bit.” Brian helped one add a random number generator, so her skill could simulate rolling a pair of dice.
In 2012, a “Down Under” team from Melbourne, Australia recognized LED lighting had finally reached a tipping point. LED technology was the most efficient way to create light, and affordable enough to pique consumers’ interest in bringing colored lighting to the home. And LIFX was born.
John Cameron, vice president, says LIFX launched as a successful Kickstarter campaign. From its crowd-funded beginnings, it has grown into a leading producer and seller of smart LED light bulbs. With headquarters in Melbourne and Silicon Valley, its bulbs brighten households in 80 countries around the globe.
Cameron says LIFX makes the world’s brightest, most efficient and versatile Wi-Fi LED light bulbs. The bulbs fit standard light sockets, are dimmable and can emit 1,000 shades of white light. The color model adds 16 million colors to accommodate a customer’s every mood.
Until 2015, LIFX customers controlled their smart bulbs using smartphone apps. Customers could turn them on or off by name, dim or brighten them, and select the color of light. They could also group the devices to control an entire room of lights at once. Advanced features let customers create schedules, custom color themes, even romantic flickering candle effects.
Without the phone, though, customers had no control.
Like Amazon, the LIFX team knew the future of customer interfaces lay in voice control. “We’re always looking for ways to let customers control [their lights] without hauling out their phone,” said Cameron. “When Alexa came along, it took everybody by storm.”
“That drove us to join Amazon's beta program for the Alexa Skills Kit (ASK),” says Daniel Hall, LIFX’s lead cloud engineer. Hall says the ASK documentation and APIs were easy to understand, making it possible for them to implement the first version of the LIFX skill in just two weeks. By the end of March 2015, LIFX had certified the skill and was ready to publish. The skill let customers control their lights just by saying “Alexa, tell ‘Life-ex’ to…”
Since the LIFX skill launch, ASK has added custom slots, a simpler and more accurate way of conveying customer-defined names for bulbs and groups of bulbs. Hall says custom slots are something LIFX would be interested in implementing in the future.
In the latest headlines from KIRO7:
[stirring theme music begins] Hello from KIRO7 in Seattle. I’m Michelle Millman…
And I’m John Knicely. Here are the top stories we’re following on this Friday.
A car erupted in flames around 5:30 this morning on northbound I-5. This was just south of downtown and caused a major traffic backup, but you can get around it by…
This might sound like a local daybreak newscast blaring from the TV in the kitchen or the bedroom, as you rush around trying to get ready for work – but it isn’t.
It’s actually an Alexa Flash Briefing skill. Flash Briefing streams today’s top news stories to your Alexa-enabled device on demand. To hear the most current news stories from whatever sources you choose, just say “Alexa, play my flash briefing” or “Alexa, what’s the news?”
The particular Flash Briefing skill in question, though, is rather unique. With all its realism and personality, you might be fooled into thinking it’s an actual news desk, complete with bantering anchors, perky weather forecast, and the day’s top local headlines.
That’s because it is—and that’s what sets KIRO7 apart from the rest.
Using the Alexa app, you can select different skills for your Flash Briefing from a number of news sources. These include big-name outlets like NPR, CNN, NBC, Bloomberg, The Wall Street Journal, and more. These all give you snapshots of global news. Now more and more local stations are creating their own Flash Briefing skills for Alexa.
The Flash Briefing Skill API, a new addition to the Alexa Skills Kit, enables developers to add feeds to Alexa’s Flash Briefing, delivering pre-recorded audio and text-to-speech (TTS) updates to customers. When using the Flash Briefing Skill API, you no longer need to build a voice interaction model to handle customer requests. You configure a compatible RSS feed and build skills that connect directly to Flash Briefing, so customers can simply ask “Alexa, what’s my Flash Briefing?” to hear your content.
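A Flash Briefing feed is essentially a list of JSON items. The sketch below builds one item in Python; the field names (uid, updateDate, titleText, mainText, streamUrl) follow the Flash Briefing JSON feed format as we understand it and should be verified against the API documentation, and the URLs are placeholders. When streamUrl points at pre-recorded audio, Alexa plays it instead of reading mainText aloud, which is how a station can deliver real anchors' voices.

```python
import json
from datetime import datetime, timezone

def feed_item(uid, title, text, stream_url=None):
    """Build one Flash Briefing feed item.

    Field names follow the Flash Briefing JSON feed format; check
    them against the Flash Briefing Skill API reference.
    """
    item = {
        "uid": uid,  # unique per item so Alexa can track what's new
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": text,  # read by TTS when no audio stream is given
    }
    if stream_url:
        # Pre-recorded audio (e.g. an MP3), played instead of TTS
        item["streamUrl"] = stream_url
    return item

item = feed_item("urn:example:briefing:1", "Top local stories", "",
                 stream_url="https://example.com/briefing.mp3")
print(json.dumps(item, indent=2))
```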
If you’ve activated Flash Briefing before, you know that several content providers leverage Alexa to read text in her normal voice. That’s because most skills in Flash Briefing repurpose content that is already available in an RSS-style feed. They plug the same text into the feed for Alexa to ingest.
Jake Milstein, news director for KIRO7, said KIRO7 was one of the first local news channels to create a Flash Briefing. While Alexa has a wonderful reading voice, the KIRO7 team wanted to do something a bit more personal for its listeners. Working with the Alexa team, they discovered they could upload MP3 files as an alternative to text. Instead of reading from canned text files, Alexa would play the audio files.
Milstein said using real people’s voices was an obvious choice, because “We have such great personalities here at KIRO7.” The station tested various formats, but eventually settled on using two of its morning news anchors. Christine Borrmann, KIRO7 Producer, says, “We tinkered with the format until Michelle and John just started talking about the news in a very conversational way. Then we added a little music in the background. It felt right.”
KIRO7 started out with a single daily feed but now has three. The morning anchors, Michelle Millman and John Knicely, record the first ‘cast around 4 a.m. and the second shortly after their live broadcast at 8 a.m. Other news anchors record the third feed in late afternoon, so it captures the evening news topics. Each ‘cast’ is roughly two minutes long and ends by encouraging listeners to consume more KIRO7 content through the app on Amazon FireTV.
The whole KIRO7 team is proud to be the first local news station to produce a studio-quality audio experience in a Flash Briefing. The KIRO7 skill launched alongside several established networks with national scale.
Early feedback on Facebook showed KIRO7 listeners loved the skill and wanted even more. Now that Flash Briefings are skills, though, the KIRO7 team can start collecting its own reviews and star-ratings.
Milstein says it is important that KIRO7 stay at the forefront of delivering Seattle-area news the way people want to get their news. “Having our content broadcast on Alexa-enabled devices and available on Amazon Fire TV is something we're really proud of. For sure, as Amazon develops more exciting ways to deliver the news, we'll be there.”
Need a ride? Lyft is an on-demand transportation platform that lets you book a ride in minutes. It’s as easy as opening up the Lyft app, tapping a button and a driver arrives to get you where you need to go. Now, they’ve made it even easier. Simply say, “Alexa, ask Lyft to get me a ride to work.”
Roy Williams, the Lyft engineer who built the Alexa skill, said it started with a company hackathon.
Lyft has a long culture of hackathons. Each quarter, the San Francisco company invites employees to experiment with new ideas. The story goes that Lyft itself was born at such a hackathon, with someone’s idea for an “instant” ride service.
“It took about three weeks to go from the original prototype to a finished app,” Williams said. Lyft has been going strong ever since.
That wasn’t the last innovation to spring from a Lyft hackathon.
Williams said he purchased an Amazon Echo during the 2015 Black Friday sale. He immediately knew he wanted to create an Alexa skill to let Echo users order a “lyft.” Williams dove into the Alexa Skills Kit (ASK) documentation, and he started building his prototype at the January hackathon. It was a hit.
Beyond the prototype, Williams estimates the project took three weeks of solid engineering time. The team spent one week working on the core functionality, including adding some workflow to their own API. They spent another week working through edge cases and complex decision trees, so the skill would never leave a user confused or at a dead end. Finally, they spent a third week on testing and analytics before releasing it for an internal beta with 30 users.
Williams says ASK is very comprehensive, and because it is JSON-based, it makes testing easy. He admits having to add some edge-case testing to account for requests like asking Lyft for “a banana to work.” (Bananas are a favorite test fruit during certification.) In the end, he knew Lyft had a high-quality skill with nearly one hundred percent test coverage.
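Because skill requests arrive as JSON, edge cases like the banana ride are easy to reproduce in a unit test: you hand a mock IntentRequest to your handler and assert on the reply. The sketch below is not Lyft's actual code; the intent and slot names are hypothetical, and it shows one way a handler might validate a ride-type slot so the user never hits a dead end.

```python
def handle_ride_intent(request):
    """Validate the ride-type slot of a mock IntentRequest.

    A hedged sketch, not Lyft's implementation: RequestRideIntent
    and RideType are hypothetical names.
    """
    valid_ride_types = {"lyft", "lyft line", "lyft plus"}
    slots = request["request"]["intent"]["slots"]
    ride_type = slots.get("RideType", {}).get("value", "").lower()
    if ride_type not in valid_ride_types:
        # Graceful re-prompt instead of a dead end
        return ("Sorry, I didn't catch which ride you want. "
                "You can ask for a Lyft, Lyft Line, or Lyft Plus.")
    return "Requesting a {} for you now.".format(ride_type)

# A mock request, trimmed to the fields the handler reads
banana_request = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "RequestRideIntent",
            "slots": {"RideType": {"name": "RideType", "value": "banana"}},
        },
    }
}
print(handle_ride_intent(banana_request))
```

Building a suite of such mock requests, one per odd utterance, is a cheap way to approach the full coverage Williams describes.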
Amazon published the final Lyft skill in July.
Megan Robershotte is a member of Lyft’s partner marketing team. She explained the Alexa skill fit well with the company’s primary goal: to get people to take their first ride with Lyft.
When Belkin International launched its WeMo line of connected devices in 2012, it wasn’t its first foray into consumer electronics. Belkin has been around for 30 years, transforming its business from cabling to connectivity, wireless networking, and eventually into home automation.
According to CJ Pipkin, Belkin’s national account manager for WeMo, the further the company delved into wireless networking, the more it realized people wanted to remote-control devices of all kinds around the home. So Belkin transformed its Zensi energy-monitoring devices into what became WeMo—a line of smart, remote-controlled and remotely monitored switches.
“We built a smart ecosystem of connected devices as early as anyone in the industry,” Pipkin says.
Belkin makes a variety of devices, but high-quality switches dominate its WeMo home automation lineup.
But since Amazon Echo and Alexa came on the scene, they’ve completely changed Belkin’s way of thinking. The team realized one household user—the techiest one—had previously dominated WeMo usage. With Alexa, though, anyone can operate a connected device with ease.
Tom Hudson, software product manager for WeMo, says smartphones were a natural way to control home devices at first, especially lighting. They are handy for configuring set-it-and-forget-it automations to respond to specific events. For more immediate actions, though, voice actuation is so much better. “It’s a lot easier to just say, ‘Turn that light on’ than it is to pull out your phone, find and load up the app, then locate and tap the right command.”
Do you develop in Amazon Web Services (AWS), have an Echo, and want the latest service availability details without having to open your laptop and scroll through dozens of green checkmarks? A home-schooled student named Kira Hammond has the solution with her newly-released CloudStatus Alexa skill.
CloudStatus summarizes the info on the AWS Service Health Dashboard, both current issues and recent problems. On a challenging day, Alexa’s conversation might start out like this:
“Hello! 3 out of 11 AWS regions are experiencing service issues—Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved—Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”
Interested? Listen to a recording of an example session or try it for yourself: say, “Alexa, enable the CloudStatus skill.”
Kira wrote CloudStatus with AWS Lambda, using Amazon EC2 to build Python modules for Requests and LXML. The modules download and parse the AWS status page to provide the desired data. The Python packages and the skill’s code files are zipped and uploaded to AWS Lambda.
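The fetch-and-parse step at the heart of CloudStatus can be sketched in pure standard-library Python. Kira's version uses the third-party Requests and LXML packages against the live AWS status page; the stand-in below instead parses a toy status table with the stdlib html.parser so it stays self-contained, and the real dashboard's markup is considerably more involved.

```python
from html.parser import HTMLParser

class StatusTableParser(HTMLParser):
    """Collect (service, status) pairs from table rows.

    A simplified stand-in for CloudStatus's Requests + LXML
    pipeline, assuming a plain <table> layout.
    """
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_cell = True
        elif tag == "tr":
            self._row = []

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(tuple(self._row))

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())

# A toy stand-in for the downloaded dashboard HTML
page = """
<table>
  <tr><td>Amazon EC2 (us-east-1)</td><td>Service is operating normally</td></tr>
  <tr><td>Amazon S3 (eu-west-1)</td><td>Increased error rates</td></tr>
</table>
"""
parser = StatusTableParser()
parser.feed(page)
issues = [svc for svc, status in parser.rows if "normally" not in status]
print(issues)  # → ['Amazon S3 (eu-west-1)']
```

From a list like `issues`, the skill can compose the spoken summary of which regions are having problems.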
Kira created this skill because her father, Eric Hammond, an AWS Community Hero and Internet startup technologist, wanted a simpler, easier way to access the service availability info himself. He figured having Kira create the skill would enable her to learn about retrieving and parsing web pages in Python—and being a good parent, he wanted to foster her creativity. And Kira is very enthusiastic about the creative process of development. “Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy. Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!”
In April 2016, developer Aaron Roberts put the finishing touches on Alarm.com’s custom Alexa Skill. That wrapped up almost three months of development and internal and beta testing. All that testing led to a smooth certification process.
Rebecca Davenport, Director of Product Management at Alarm.com, says the Alarm.com skill controls more than just home security. It also controls almost every other device that’s part of the company’s home automation ecosystem. That includes security equipment, door locks, garage doors, video cameras, lights and thermostats.
The company's founders recognized the limitations of traditional landline-based alarm systems. Besides relying on phone wires—which can be tampered with and are unreliable—customers often forgot to arm their systems. The company saw a unique opportunity to let customers arm and disarm the system and know what’s happening at their home from anywhere.
Alarm.com enhanced its offering with its first mobile app. At the same time, it started expanding its core platform beyond security into home automation and video. Today over two million Alarm.com customers control their smart home devices from their phones, tablets, TVs, and more.
When Amazon Echo and Alexa debuted, Alarm.com saw another huge opportunity. With the launch of the Alexa Skills Kit (ASK), the company knew voice technology’s time had come. “We had voice technology on our radar,” Davenport says. “Voice control is a compelling way for customers to interact with their devices from within their homes.”
The software team didn’t start developing a custom Alexa skill right away. Instead, Roberts started his own early exploration and prototype during the ASK beta. When the integration project got the green light, he was ready.
Roberts said using the ASK API was straightforward. He found mapping the API responses to Alarm.com’s existing web services was the simplest part of the project. As for the rest, he recalls the major components:
The team members brainstormed all the ways they thought users would request a command. Like many developers new to voice applications, they found customers don’t always say what you expect.
When April L. Hamilton first saw the Amazon Echo in 2014, she knew it was the future. As an Amazon Prime member, April had an early preview of the new device. She immediately knew Echo would be the theme of her next website and blog.
Amazon Echo users and Alexa skill developers will likely know April from her website, www.lovemyecho.com. The site has become a collection of Alexa developer resources, “how-to” articles, news features, and even downloadable Bingo-cards (more on that later).
How does someone become a noted authority on a new, rapidly-evolving technology like Alexa, in such a short time? We sat down with April to find out.
The internet of things enthralled both the technologist and app developer in April. As a blogger, she wanted in on the ground floor with something that had real potential. When Amazon offered its Prime members the chance to pre-order Echo, she knew the internet of things had finally arrived.
“I saw Amazon getting on board with the release of Echo, and I said this is it. Amazon is one of the only companies with the vision, consumer knowledge, tech resources, and dedication to really make it happen.”
Next, she had to figure out how to set up a consumer blog for something so new. April knew Echo would excite consumers and developers alike, and they’d have plenty of questions. With her programming and writing background, she wanted to be the one to answer them.
April knew it would be tough writing with authority about something so cutting-edge, but she wasn’t afraid to learn. In fact, April says her prime motivator was the sheer joy of learning about a new technology. So she signed up for Amazon’s Developer Day in early 2015 to get some hands-on experience with the device.
She likened the thrill she felt to when she developed apps for the first smartphones and tablets—but with a twist. “I was a mobile app developer before. Echo needed a unique type of ‘app’. So I thought, what better way to learn about it than to develop skills for it?”
She wondered what skill she could create that consumers would enjoy. Beyond that, she wanted to build an Alexa skill that would intrigue her colleagues, so they too could see the potential of Amazon Echo.
Earlier this year, Paul Cutsinger, Evangelist at Amazon Alexa, joined a team of developers and designers from Capital One at SXSW in Austin to launch the new Capital One skill for Alexa. The launch of the new skill garnered national attention, as Capital One was the first company to give customers the ability to interact with their credit card and bank accounts through Alexa-enabled devices. This week at the Amazon Developer Education Conference in NYC, Capital One announced another industry first by expanding the skill to enable its customers to access their auto and home loan accounts through Alexa.
"The Capital One skill for Alexa is all part of our efforts to help our customers manage their money on their terms – anytime and anywhere," said Ken Dodelin, Vice President, Digital Product Management, Capital One. “Now, you can access in real time all of your Capital One accounts—from credit cards to bank accounts to home and auto loans—using nothing but your voice with the Capital One skill.”
The skill is one of the top-rated Alexa skills, with 4.5 out of 5 stars across 47 reviews. It enables Capital One customers to stay on top of their credit card, auto loan, mortgage, and home equity accounts by checking their balance, reviewing recent transactions, or making payments, as well as getting real-time access to checking and savings account information to understand their available funds.
“Capital One has a state-of-the-art technology platform that allows us to quickly leverage emerging technologies, like Alexa,” said Scott Totman, Vice President of Digital Products Engineering, Capital One. “We were excited about the opportunity to provide a secure, convenient, and hands-free experience for our customers.”
To bring the new skill to life, the Capital One team – comprised of engineers, designers, and product managers – kicked off a two-phase development process.
“Last summer a few developers started experimenting with Echo devices, and, ultimately, combined efforts to scope out a single feature: fetching a customer’s credit card balance. That exercise quickly familiarized the team with the Alexa Skills Kit (ASK) and helped them determine the level of effort required to produce a full public offering,” said Totman. “The second phase kicked off in October and involved defining and building the initial set of skill capabilities, based on customer interviews and empathy-based user research. Less than six months later we launched the first version of the Capital One skill for Alexa.”
The team also spent a lot of time weighing customers’ need for both convenience and security. In the end, Capital One worked with Amazon to strike the right balance, giving customers the option of adding a four-digit PIN for an additional layer of security when accessing the skill. The PIN can be changed or removed at the customer’s discretion.
“The Alexa Skills Kit is very straightforward. However, it is evolving quickly, so developers need to pay close attention to online documentation, webinars, and other learning opportunities in order to stay on top of new features and capabilities as they are released,” Totman said.
“We dedicated a lot of time to getting the conversation right from the start,” said Totman. “This meant we not only had to anticipate the questions customers were going to ask, but also how they were going to ask them.”
This was a really interesting challenge for Capital One’s design team. In order to make the skill feel like a personalized conversation, the team had to identify exactly where and how to inject personality and humor, while carefully considering customers’ priorities and the language they use to discuss finances.
“A lot goes into making sure our customers get what they expect from our personality, as well as what they expect from Alexa’s personality. That becomes especially visible when injecting humor, because what looks great on paper doesn’t always transition to the nuance of voice inflection, cadence, or the context of banking,” said Stephanie Hay, head of Capital One’s content strategy team. “But that’s the joy of design > build > iterate in a co-creation method; product, design, and engineering design the conversation together, hear Alexa say it, react, iterate, test it with actual customers, iterate further, and then get it to a point we all feel excited about.”
Capital One’s Alexa skill represents just the starting lineup of features. Capital One’s team continues to test, learn, and explore new features by focusing on customer needs and continually refining the experience.
“As customers become more familiar using voice technologies, we anticipate growing demand for feature capabilities, as well as increased expectations regarding the sophistication of the conversation,” Totman said. “With voice technologies, we get to learn firsthand how customers are attempting to talk to us, which allows us to continually refine the conversation.”
“The possibilities with the Alexa Skills Kit are nearly endless, but I advise developers to be very thoughtful about the value of their skill,” said Totman. “Leveraging voice-activated technology is only worthwhile if you can clearly define how your solution will go above and beyond your existing digital offerings.”
Stay tuned to part two to learn how Capital One built their Alexa skill and added new capabilities.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
In our first post, we shared why Discovery decided to build an Alexa skill and what requirements they outlined as they thought through what the voice experience should look like. In this post, we’ll share how they built and tested their Alexa skill and their tips for other Alexa developers.
When Stephen Garlick, Lead Development and Operations Engineer at Discovery Channel, took the lead in developing the Alexa skill, it was a chance to learn how to design a new experience for customers. He had no prior experience with AWS Lambda or the Alexa Skills Kit (ASK). To start, he spent some time digging into the online technical documentation and code samples provided in the Alexa GitHub repo. This helped him gain a deeper understanding of how to build the foundation of the Alexa skill and handle basic tasks.
By using AWS Lambda and ASK, Stephen and team were able to keep things simple and quickly deploy the code without the need to set up additional infrastructure to support the skill. Additionally, they were easily able to extend the node.js skill without having to create a skill from scratch.
Initially, Discovery used Alexa to respond with facts; later, they decided to customize the voice by using mp3 playback. To accomplish this, Stephen used SSML’s support for mp3 playback, with Amazon S3 and CloudFront hosting the files reliably. Each mp3 was less than 90 seconds in length, encoded at 48 kbps, and adhered to MPEG version 2 specifications. All the resources were created and deployed using the AWS CloudFormation service.
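A response like this is typically assembled as an SSML string containing an `<audio>` tag. The sketch below shows one way that might look; the helper name, bucket URL, and fallback text are illustrative, not Discovery's actual code or assets.

```javascript
// Sketch: building an SSML response that plays a hosted mp3 clip.
// Alexa's <audio> tag requires an HTTPS source, and the clip must be
// under 90 seconds, encoded at 48 kbps to MPEG version 2 specifications.
function buildAudioSsml(clipUrl, fallbackText) {
  if (!clipUrl.startsWith('https://')) {
    // Non-HTTPS audio sources are rejected, so fall back to plain speech.
    return `<speak>${fallbackText}</speak>`;
  }
  return `<speak><audio src="${clipUrl}"/></speak>`;
}

const ssml = buildAudioSsml(
  'https://example.cloudfront.net/facts/shark-fact-01.mp3',
  'Sharks have been around for over four hundred million years.'
);
console.log(ssml);
```

Keeping a plain-speech fallback means a misconfigured asset URL degrades gracefully instead of failing the response.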
For the countdown feature, Stephen pulled the moment.js dependency into the node.js code to help simplify some time-based calculations. The countdown now combines mp3 playback for everything except the actual time, which is spoken by Alexa.
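The kind of duration math moment.js simplifies can be sketched with the standard Date API alone; the dates below are illustrative placeholders, not the actual Shark Week schedule.

```javascript
// Sketch: breaking a countdown into days/hours/minutes for Alexa to speak.
function countdownParts(nowMs, kickoffMs) {
  const totalSeconds = Math.max(0, Math.floor((kickoffMs - nowMs) / 1000));
  return {
    days: Math.floor(totalSeconds / 86400),
    hours: Math.floor((totalSeconds % 86400) / 3600),
    minutes: Math.floor((totalSeconds % 3600) / 60),
  };
}

const now = Date.UTC(2016, 5, 24, 12, 0, 0);     // 2016-06-24 12:00 UTC (hypothetical)
const kickoff = Date.UTC(2016, 5, 26, 21, 0, 0); // 2016-06-26 21:00 UTC (hypothetical)
const parts = countdownParts(now, kickoff);
console.log(`${parts.days} days, ${parts.hours} hours, and ${parts.minutes} minutes`);
// → 2 days, 9 hours, and 0 minutes
```

A library like moment.js wraps exactly this arithmetic, which is why pulling it in shortened the implementation.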
To test the skill, they used the skill test pane in the Alexa developer portal. The testing tool made it easy to quickly test various scenarios without an Alexa-enabled device. Once the skill was operating as expected (and desired) in the test pane, Stephen asked other people to test the Shark Week skill on Alexa-enabled devices. This allowed them to collect additional feedback and iterate accordingly.
Overall, the entire process of learning these new technologies, coding, and building the skill took no more than 12 hours. This included a few iterations of the Alexa skill as well.
Tip #1: Make The Skill As Human As Possible: Initially, Discovery had the Alexa voice state each of the randomized facts. In an attempt to assist with the pronunciation, they spelled a few of the words and numbers phonetically. However, in doing so, the cards displayed in the Alexa app weren't correct. It quickly became apparent that a recorded reading of each fact eliminated the pronunciation issues, enabled proper spelling of facts for the cards in the Alexa app, and made the entire experience more personal.
Tip #2: Plan for Time Sensitive Coding: If you're building time-specific functionality (e.g., a countdown timer to a specific time), make sure you think about what happens when that time arrives. The team at Discovery accounted for the Shark Week kickoff by providing three different countdown messages based on the time in each time zone. The first was the countdown lead-in, the second was a message indicating that Shark Week had already started, and the third indicated that Shark Week had concluded and that the Shark Week website provides other shark-related information year-round.
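That three-state logic (before kickoff, during the event, after it ends) boils down to two timestamp comparisons. This sketch uses hypothetical wording and dates, not Discovery's actual copy:

```javascript
// Sketch: selecting one of three messages around an event window.
function countdownMessage(nowMs, startMs, endMs) {
  if (nowMs < startMs) {
    return 'Shark Week starts soon. Get ready!';
  }
  if (nowMs < endMs) {
    return 'Shark Week has already started. Tune in now!';
  }
  return 'Shark Week has concluded, but the Shark Week website has shark facts year-round.';
}

// Example: evaluate against a hypothetical 2016 event window.
console.log(countdownMessage(Date.now(), Date.UTC(2016, 5, 26), Date.UTC(2016, 6, 3)));
```

Evaluating the window per time zone, as Discovery did, just means computing `startMs`/`endMs` from the user's local kickoff time.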
Tip #3: Control for Volume: If you're using a combination of recordings and Alexa powered speech, make sure the volume levels are consistent throughout the experience.
Tip #4: Be Creative with Your Intent Schema and Utterances: People think, act, and speak differently. Therefore, it's important that you account for as many different utterances as possible. For example, after you ask for a Shark Week fact, the skill will ask if you would like to hear another. Just a few of Discovery's "no" utterances include "no," "nope," "no thanks," "no thank you," "not really," "definitely not," "no way," "nah," "negative," "no sir," "maybe another time," and many more. It's better to be as inclusive as possible, rather than having Alexa unable to understand.
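In an interaction model, those phrasings become sample utterances attached to a single intent. The fragment below is illustrative (it is not Discovery's actual model), with a small sanity check of the kind that's handy during development:

```javascript
// Sketch: one intent, many ways to say "no". The samples list mirrors the
// variety described above; the exact model shape is illustrative.
const noIntent = {
  name: 'AMAZON.NoIntent',
  samples: [
    'no', 'nope', 'no thanks', 'no thank you', 'not really',
    'definitely not', 'no way', 'nah', 'negative', 'no sir',
    'maybe another time',
  ],
};

// Duplicate utterances cause interaction-model validation errors,
// so a quick check before uploading saves a round trip.
function hasDuplicates(samples) {
  return new Set(samples).size !== samples.length;
}
console.log(hasDuplicates(noIntent.samples)); // → false
```

The point of the tip stands regardless of format: every phrasing you enumerate is one fewer "Sorry, I didn't get that."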
Tip #5: Take Chances: Push your limits and think big when it comes to building your Alexa skill. Discovery started the project with a broad scope in mind and was able to quickly iterate and resubmit the skill for certification.
Craig Johnson, president of Emerson’s Residential Solutions business, claims it was inevitable. “Thermostats are no longer just passive HVAC controllers hanging on your wall. The convergence of wireless and mobile technologies allowed us to develop a thermostat that allows better temperature control, programmability and scheduling, as well as remote access.”
Even before Amazon’s Smart Home Skill API was publicly released, Johnson was excited about smart home. “Prior to Smart Home, Emerson had a fully functional mobile app and internet portal our customers could use to control their Sensi thermostat remotely. But integration of Alexa is a natural extension of that remote access and remote functionality.”
In February 2016, Johnson’s software development manager, Joe Mahari, jumped on board the Smart Home beta program. In just four weeks’ time—and by the time Amazon officially launched the Smart Home Skill API—Mahari’s team had built and tested its Sensi Smart Home skill and passed certification.
The Smart Home Skill API converts a voice command, such as “Alexa, increase my first floor by 2 degrees,” into a directive (a JSON message), which it then sends to the methods implemented in the Sensi skill.
According to Mahari, Emerson implemented three main directives.
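For a thermostat, those directives typically cover setting, raising, and lowering the target temperature. The dispatcher below is a sketch only: the directive names follow the original Smart Home Skill API as commonly documented, but the message shapes and handler bodies are illustrative, not Emerson's implementation.

```javascript
// Sketch: routing Smart Home thermostat directives to handler logic.
function handleDirective(directive) {
  const { name } = directive.header;
  switch (name) {
    case 'SetTargetTemperatureRequest':
      return { setpoint: directive.payload.targetTemperature.value };
    case 'IncrementTargetTemperatureRequest':
      return { adjust: directive.payload.deltaTemperature.value };
    case 'DecrementTargetTemperatureRequest':
      return { adjust: -directive.payload.deltaTemperature.value };
    default:
      throw new Error(`Unsupported directive: ${name}`);
  }
}

// "Alexa, increase my first floor by 2 degrees" might arrive roughly as:
const result = handleDirective({
  header: { namespace: 'Alexa.ConnectedHome.Control',
            name: 'IncrementTargetTemperatureRequest' },
  payload: { appliance: { applianceId: 'first-floor-thermostat' },
             deltaTemperature: { value: 2.0 } },
});
console.log(result); // → { adjust: 2 }
```

Because each directive is plain JSON, the skill's job reduces to mapping directive names onto the same operations the Sensi mobile app already exposes.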
The Emerson team agrees the skill and API were well packaged and supported, end-to-end. “Amazon defined the use case very crisply,” said Johnson. “We received a deck of scenarios to achieve, plus integrated logging, systems’ checks and documentation. These were essential to our success.”
Mahari says it was invaluable that the Amazon team connected with them daily. “For example, we had some concerns about how to increase or decrease the temperature during auto-schedules. But working directly with the Alexa team, we figured out how to make it work.”
So, if working with Amazon’s support and the API itself went so smoothly, what were some challenges the Emerson team faced over the four-week project?