On November 18, the premiere of The Grand Tour became the most-watched series debut in the history of Amazon’s video streaming service. British car enthusiasts Jeremy Clarkson, Richard Hammond, and James May returned to the screen for an all-new series of globetrotting adventures, with each episode taking Amazon Prime Video viewers to another exotic location.
For Amazon Alexa users, watching The Grand Tour is only half the fun. Prior to the series premiere, Amazon debuted a companion skill built by PullString on the Alexa Store, available to its US and UK customers.
Each Thursday, prior to the show’s Friday airtime, The Grand Tour skill provides a new clue about what to watch for in the upcoming video episode. On Saturday, if viewers are truly “on the tour” and answer three trivia questions correctly, they’ll unlock exclusive video content.
The fun aside, what makes the skill unique is another first: the PullString Platform on which it was developed.
Mike Houlahan, head of PullString’s enterprise partner program, explains that Oren Jacob and Martin Reddy co-founded the company in 2011. The two Pixar Animation Studios veterans’ vision was to build lasting emotional connections between characters and audiences through two-way computer conversations. Noting the absence of professional toolsets for building conversational experiences between a character and its audience, they set about filling that gap.
Now, the company makes the power of the PullString Platform available to Alexa developers. “We are very excited to launch The Grand Tour skill,” Houlahan said. “We are simultaneously announcing the availability of PullString for the Alexa Developer Community to build their own Alexa skills.”
The PullString Platform includes:
With the PullString Platform, a creative writer can prototype, develop, test and deploy an entire skill without writing a single line of code. That’s just what Danielle Frimer did.
Frimer is the creative writer who scripted the voice user interface (VUI) for The Grand Tour Alexa skill using PullString. She worked with Amazon Prime Video to get the show’s actors into the recording booth to record dialog, then put it all together on the PullString Platform.
“I am not a developer in any way,” says Frimer. “With the platform, I could focus my attention on the creative aspects of it—the lines, the flow of things, the overall design—not on the underlying nuts and bolts of it.”
The skill’s design mimics the flow of The Grand Tour’s episode rollout. The voice interaction, of course, is peppered with the recorded dialog, making the experience even more engaging.
Frimer says PullString’s templates and documentation give developers a quick start on different types of conversation projects. In all cases, the platform relieves both authors and developers of the complicated logic involved in a complex VUI model.
Tushar Chugh is a graduate student at the Robotics Institute at Carnegie Mellon University (CMU). There he studies the latest in robotics, particularly how computer vision devices perceive the world around them.
One of his favorite projects was a robot named Andy. Besides having arms, Andy could discern colors and understand spatial arrangement. Andy could also respond to voice commands like “pick up the red block and place it on top of the blue block.” Andy’s speech recognition, built on a CMU framework, was about to change.
When Amazon came to give some lectures at CMU, they held a raffle, and Chugh took home a new Amazon Echo as the prize. Over three days and nights without sleep, he fully integrated Andy with Alexa using the Alexa Skills Kit (ASK).
When he saw Hackster’s 2016 Internet of Voice challenge, he knew he had to enter. And in August 2016, Chugh’s Smart Cap won the prize in the Best Alexa Skills Kit with Raspberry Pi category.
According to Chugh, there are about 285 million visually-impaired people in the world. In 2012, he worked on a project to help the visually impaired navigate inside a building. His device, a belt with embedded sensing tiles, won a couple of prizes, including a Wall Street Journal Technology Innovation Award. It was ahead of its time, though, and it wasn’t yet practical to develop the technology into a commercial product.
A lot can change in four years, including Chugh’s discovery of Alexa. Besides dabbling with Alexa and Andy the robot, he has also worked with Microsoft Cognitive Services for image recognition. Chugh now saw a chance to bring a new and better “seeing device” to light.
“When I saw Alexa, I thought we can extend it and integrate [Alexa] as a separate component,” says Chugh. “I talked with a couple of organizations for the blind in India, and they agreed this kind of system would be very, very useful. That was my main motivation.”
Chugh says the hardware for the Smart Cap is basic. He used a Raspberry Pi (RPi), a battery pack, a camera and a cap on which to mount it. As for the software, it included:
The goal was straightforward. A visually-impaired user could ask Alexa what is in front of them. Alexa would vocalize the scene, allowing the person to navigate safely wherever he or she may be.
How do the pieces all fit together?
Chugh says there are two distinct parts.
First, the image capture and analysis:
Now comes the Alexa skill:
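A minimal sketch of that two-part flow might look like the following. The function names, the placeholder caption, and the in-memory caption store are all illustrative assumptions, not Chugh’s actual code; in a real build, the store would be a cloud database and the analysis would call a vision API (Chugh mentions Microsoft Cognitive Services).

```python
import time

# --- Part 1: image capture and analysis (runs on the Raspberry Pi) ---

CAPTION_STORE = {}  # stand-in for a shared cloud store

def analyze_image(image_bytes):
    """Placeholder for a vision API call that returns a caption."""
    return "a person walking a dog on a sidewalk"

def capture_and_publish(user_id, image_bytes):
    """Analyze the latest camera frame and publish its caption."""
    caption = analyze_image(image_bytes)
    CAPTION_STORE[user_id] = {"caption": caption, "ts": time.time()}
    return caption

# --- Part 2: the Alexa skill (runs in the cloud, e.g. AWS Lambda) ---

def handle_whats_in_front(user_id):
    """Intent handler: fetch the latest caption and speak it."""
    entry = CAPTION_STORE.get(user_id)
    if entry:
        text = f"I see {entry['caption']}."
    else:
        text = "I don't have a recent picture yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

The key design point is the decoupling: the cap publishes captions continuously, while the Alexa skill only reads the most recent one when asked, keeping the voice response fast.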
Barely a year later, Mandy attended the Technica Hack ‘15—and won the JP Morgan Chase prize for best mobile app. For Mandy, that event started a new career. After participating alongside so many eager and helpful programmers, she knew what she wanted to do.
Just 11 months later, Mandy has won three more hackathons, including NY TechCrunch ‘16 and Manhattan AngelHack ’16. But unlike her first contest, all these prizes were for Alexa skills.
Mandy first discovered Amazon Echo while attending a 2016 developer conference in San Francisco. She watched a team working with an Echo, and it instantly appealed to her interests in both back-end software development and artificial intelligence. “It was like having my code right in front of me. I talked to my code, and the code kind of talked back to me,” says Mandy.
The day before TechCrunch New York in May, Mandy dove into all the online Alexa Skills Kit documentation she could find. The next day, nervous but determined, she created the prototype that became the Dr. Speech skill—and won the Best Use of Alexa prize.
Mandy, originally from Hong Kong, wanted to help others improve their pronunciation of challenging words. Her skill, Dr. Speech, helps non-native English speakers pronounce words accurately, giving them more confidence without expensive speech therapy sessions.
Mandy gets tweets from people around the world thanking her for how Dr. Speech has improved their pronunciation, and the skill has inspired other developers to build self-improvement skills of their own. Similarly, a user of Mood Journal, another of Mandy’s Alexa skills, wrote to say it helped him battle anxiety and depression.
Humbled, Mandy repeats that she loves to write software that helps people. “Every skill I write is an extension of me. Dr. Speech is about improving speech, because I strive to be a great speaker. I never imagined how my skills would have touched so many people.”
Developers have created thousands of skills for Alexa, expanding Alexa’s capabilities and offering consumers novel voice experiences. We recently unveiled a new way for customers to browse the breadth of the Alexa skills catalog by surfacing Alexa skills on Amazon.com.
Today we are introducing a new program that allows you to nominate your favorite Alexa skills to be featured in our Community Favorites campaign. Skills that are nominated and meet the selection criteria will be featured in the Alexa app and on Amazon.com in December. This is a great way to help customers everywhere discover new, intriguing and innovative skills on their Alexa-enabled devices.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
Magic mirror, on the wall—who is the fairest one of all?
Probably the most memorable line from Disney’s 1937 classic, Snow White and the Seven Dwarfs, it may soon become a household phrase again. Modern-day magic mirrors are taking a number of forms, from toys to high tech devices offering useful information to their masters. Now, Darian Johnson has taken that concept an enormous step farther.
Darian, a technology architect with Accenture, has worked in software solution design for 17 years. Today he helps clients move their on-premises IT infrastructure into the cloud. With a recent focus solely on Amazon Web Services (AWS), it’s only natural that other Amazon technologies like Alexa would pique his interest.
One night, Darian was pondering what he might build for Hackster’s 2016 Internet of Voice Challenge. He was surfing the web when he happened on an early Magic Mirror concept and realized he could do even better. He did. In August 2016, Darian’s new Mystic Mirror won a prize in the Best Alexa Voice Service with Raspberry Pi category.
Darian says his morning routine consists of running between bedroom and bathroom, trying to get ready for work. He doesn’t have an Amazon Echo in either room, but he does have mirrors there. That’s another reason an Alexa Voice Service (AVS)-enabled mirror made sense.
He set his budget at a mere $100. That covered a Raspberry Pi (RPi), a two-way mirror, a refurbished monitor and speaker, some wood planks and a few other assorted items. He determined that his device would:
You can build your own Mystic Mirror using the details on the Hackster site. But it was his software and Alexa that brought it to life.
Darian decided to voice-enable his Raspberry Pi, microphone and speaker with the Alexa Voice Service (AVS). That meant the Mystic Mirror’s master would have access to the built-in power of Alexa and over 4,000 third-party skills, developed using the Alexa Skills Kit (ASK). With just a word, they could control smart home devices, ask for a Lyft ride, play music from Amazon Prime accounts and much more. Best of all, since Alexa is getting smarter all the time, the mirror’s capabilities would constantly evolve, too.
Eric Olson and David Phillips, co-founders of 3PO-Labs, are both “champs” when it comes to building and testing Alexa skills. The two met while working together at a Seattle company in 2015. Finding they had common interests, they soon combined forces to “start building awesome things”—including Alexa skills and tools.
Eric, an official Alexa Champion, is primarily responsible for the Bot family of skills. These include CompliBot and InsultiBot (both co-written with David), as well as DiceBot and AstroBot. David created and maintains the Alexa Skills Kit (ASK) Responder. The two do most everything as a team, though, and together built the underlying framework for all their Alexa skills.
This fall, they’re unveiling prototyping and testing tools that will enable developers to build high-quality Alexa skills faster than ever.
Eric and David first got involved with Alexa when Eric proposed an Amazon Echo project for a company hackathon. The two dove into online documentation and started experimenting—and having fun. “After the hackathon, we just kind of kept going,” Eric said. “We weren’t planning to get serious about it.”
But over the past year, they grew more involved with the Alexa community. They ended up creating tools that could benefit the whole community. “We wrote these tools to solve problems we ran into ourselves. We ended up sharing them with other people and they became popular,” David said.
The first of these, the Alexa Skills Kit Responder, grew from David’s attempt to speed up testing of different card response formats. Getting a response just right used to mean modifying and re-deploying code for every change. Instead, this tool lets developers test mock skill responses without writing or deploying a single line of code: follow the documentation to set up an Alexa skill that interfaces with ASK Responder, then upload any response you’d like, and the ASK Responder will return it when invoked.
And that’s just the beginning. The ASK Responder’s usefulness is about to explode.
David created Responder for testing mock responses. But the two soon discovered a home automation group using the tool in an unexpected way.
Instead of a skill called “Responder,” they’ll create a skill named My Home Temp, for example. They’ll map an intent like “What is the temperature?” and have their smart home device upload a response to the ASK Responder with the temperature of the house. When the user says “Alexa, ask My Home Temp what is the temperature?” Alexa plays the uploaded response through the Echo. This creates the seamless illusion of a fully operating skill.
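The mock-response pattern behind that home-automation trick can be sketched as follows. This is purely illustrative and is not the ASK Responder’s real API; the function names and the in-memory store are assumptions for the sake of the example.

```python
# A store of uploaded responses, keyed by skill; stands in for
# whatever backing store the real tool uses.
UPLOADED = {}

def upload_response(skill_id, speech_text):
    """A device (e.g. a thermostat) pushes the next spoken response."""
    UPLOADED[skill_id] = speech_text

def on_intent(skill_id):
    """When the user invokes the skill, play whatever was uploaded."""
    text = UPLOADED.get(skill_id, "No response has been uploaded yet.")
    return {"outputSpeech": {"type": "PlainText", "text": text}}

# The smart home device uploads the house temperature ahead of time...
upload_response("my-home-temp", "It is 72 degrees inside.")
# ...so asking "Alexa, ask My Home Temp what is the temperature?"
# simply returns the uploaded text.
```

Because the skill itself never changes, the device can update the answer as often as it likes without anyone touching or redeploying skill code.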
Dave Grossman, chief creative officer at Earplay, says his wife is early-to-bed and early-to-rise. That’s not surprising when you have to keep up with an active two-year-old. After everyone else is off to bed, Grossman stays up to clean the kitchen and put the house in order. Such chores require your eyes and hands, but they don’t engage the mind.
“You can’t watch a movie or read a book while doing these things,” says Grossman. “I needed something more while doing repetitious tasks like scrubbing dishes and folding clothes.”
He first turned to audiobooks and podcasts to fill the void. Today, though, he’s found that Alexa’s voice interactivity is a perfect fit. That’s also why he’s excited to be part of Earplay. With the new Earplay Alexa skill, you can enjoy Grossman’s latest masterpieces: Earplays, interactive audio stories you control with your voice, complete with voice acting and sound effects like those in an old-time radio drama.
Jonathon Myers, today Earplay’s CEO, co-founded Reactive Studio in 2013 with CTO Bruno Batarelo. The company pioneered the first interactive radio drama, complete with full cast recording, sound effects and music.
Myers started prototyping in a rather non-digital way. Armed with a bunch of plot options on note cards, he asked testers to respond to his prompts by voice. Myers played out scenes like a small, intimate live theater, rearranging the note cards per the users’ responses. When it was time to design the code, Myers says he’d already worked out many of the pitfalls inherent to branching story plots.
They took a digital prototype (dubbed Cygnus) to the 2013 Game Developers Conference in San Francisco. Attendees gave the idea a hearty thumbs-up, and the real work began, leading to a successful Kickstarter campaign and a release showcased at PAX Prime 2013 in Seattle.
Grossman later joined the team as head story creator after a decade at Telltale Games. He had designed interactive story experiences for years, including the enduring classic The Secret of Monkey Island at LucasArts, and is often credited with creating the first video game to feature voice acting.
Together they re-branded the company as Earplay in 2015. “We were working in a brand new medium, interactive audio entertainment. We called our product Earplay, because you're playing out stories with your voice,” Myers says.
The team first produced stories—including Codename Cygnus—as separate standalone iOS and Android apps. They then decided to build a new, singular user experience that lets users access all their stories—past, present and future—within a single app.
When Alexa came along, she changed everything.
The rapid adoption of the Amazon Echo and growth of the Alexa skills library excited the Earplay team. The company shifted its direction from mobile-first to a home entertainment-first focus. “It was almost as though Amazon designed the hardware specifically for what we were doing.”
Though not a developer, Myers started tinkering with Alexa using the Java SDK. He dug into online documentation and examples and created a working prototype over a single weekend. The skill had just a few audio prompts and responses from existing Earplay content, but it worked. He credits the rapid development, testing and deployment to the Alexa Skills Kit (ASK) and AWS Lambda.
Over several weeks, Myers developed the Earplay menu system to suit the Alexa voice-control experience. By then, the code had diverged quite a bit from what they used on other services. “When I showed it to Bruno, it was like ‘Oh Lord, this looks ugly!’” As CTO, Bruno Batarelo is in charge of Earplay’s platform architecture.
An intense six-week period followed. Batarelo helped Myers port the Earplay mechanics and data structures so the new skill could handle the Earplay demo stories. On August 26, they launched Earplay, version 1.0.
With thousands of skills, Alexa is in the Halloween spirit, and we’ve rounded up a few spooky skills for you to try. See what others are building, get inspired, and build your own Alexa skill.
Magic Door added a brand-new Halloween-themed story. Complete with a spooky mansion and lots of scary sound effects, you’re bound to enjoy the adventure. Ask Alexa to enable the Magic Door skill and start your Halloween adventure.
Are you worried about some restless spirits? Use Ghost Detector to detect nearby ghosts and attempt to catch them. The ghosts are randomly generated with almost 3,000 possible combinations, and you can catch one ghost per day to earn Ghost Bux. Ask Alexa to enable the Ghost Detector skill so you can catch your ghost for the day.
Horror movie buffs can put themselves to the test with the Horror Movie Taglines skill. Taglines are the words or phrases used on posters, ads, and other marketing materials for horror movies. Alexa keeps score while you guess over 100 horror movie taglines. Put your thinking cap on and ask Alexa to enable the Horror Movie Taglines skill.
Let this noise maker join your Halloween party this year. These spooky air horn sounds are the perfect backdrop for Halloween night. Listen for yourself by enabling the Spooky Air Horns skill.
Scary, spooky haunted houses define Halloween, and this interactive story is no different. The Haunted House skill lets you experience a stormy Halloween night and pick your own journey from several options. The choice is yours. Start your adventure by enabling the Haunted House skill.
This Halloween, you can follow Bryant’s tutorial and learn how to turn your Amazon Echo into a ghost with two technologies: the Photon and Alexa. With an MP3 and NeoPixel lights, you’ll be ready for Halloween. Dress up your own Echo with this tutorial.
Landon Borders, Director of Connected Devices at Big Ass Solutions, still chuckles when he tells customers how the company got its name. Founder Carey Smith started his company back in 1999, naming it HVLS Fan Company. Its mission was to produce a line of high-volume, low-speed (HVLS) industrial fans. HVLS Fan Company sold fans up to 24 feet in diameter for warehouses and fabrication mills.
“People would always say to him ‘Wow, that’s a big-ass fan.’ They wanted more information, but they never knew how to reach us,” says Borders. So the founder listed the company in the phone book twice, both as HVLS Fan Company and Big Ass Fans. Guess which phone rang more often? “In essence, our customers named the company.”
Today the parent company is Big Ass Solutions. It still owns Big Ass Fans. It also builds Big Ass Lights and Haiku Home, a line of smart residential lighting and fans. Now with an Alexa skill, the company’s customers can control their devices using only their voice.
Haiku Home is where Alexa comes into the picture.
Big Ass Fans (BAF) is a direct-sales company. As such, it gets constant and direct feedback about customers' satisfaction and product applications. BAF found people were using its industrial-grade products in interesting commercial and home applications. It saw an exciting new opportunity. So in 2012, BAF purchased a unique motor technology, allowing it to create a sleek, low-profile residential fan.
That was just the starting point for BAF’s line of home products. The next year, BAF introduced Haiku with SenseME, the world’s first smart fan.
What’s a smart fan? Borders says it first has to have cutting-edge technology. Haiku Home fans include embedded motion, temperature and humidity sensors. A microprocessor uses that data to adjust the fan and light kits to the user's tastes. The device also has to be connected, so it includes a Wi-Fi radio.
The microprocessor and Wi-Fi radio make the SenseME fan a true IoT device. Customers use a smartphone app to configure the fan’s set-it-and-go preferences. But after that, why should you need an app?
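The set-it-and-go behavior Borders describes can be illustrated with a toy control loop. This is illustrative logic only, not BAF firmware; the comfort threshold, speed steps, and function name are all assumptions.

```python
def adjust_fan_speed(temp_f, motion_detected, comfort_temp=72, max_speed=7):
    """Toy SenseME-style rule: spin up one speed step per degree
    above the user's comfort temperature, but only when the room
    is occupied (motion detected)."""
    if not motion_detected:
        return 0  # room is empty: save energy, stop the fan
    excess = max(0, temp_f - comfort_temp)
    return min(max_speed, round(excess))
```

The fan re-evaluates rules like this continuously from its onboard sensors, which is why no app interaction is needed after the initial setup.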
Borders remembers discussions in early 2015 centered on people growing tired of smartphone apps. Apps were a good starting point, but the company found some users didn’t want to control their fan from their phone. BAF felt voice was the user interface of the future, and when they saw Amazon investing heavily in the technology, they knew what the next step would be.
They would let customers control their fans and lights simply by talking to Alexa.
Brian Donohue, New Jersey-born software engineer and former CEO of Instapaper, wasn't an immediate Alexa fan. In fact, his first reaction to the 2014 announcement of the Amazon Echo was "That's cool, but why would I buy one?"
All that changed over the course of one whirlwind weekend in March 2016. Almost overnight, Brian went from near indifference to being one of the most active developers in the Alexa community. Today he’s recognized as an Alexa Champion and a master organizer of Alexa meetups.
We sat down with Brian to find out how Alexa changed his entire view of voice technology... and why he wanted to share his excitement with other Alexa developers.
Brian has led Instapaper for the last two and a half years. Its former owner, Betaworks, always encouraged employees—including Brian—to explore and innovate with new technology. Brian has built apps for Google Glass and other devices, just because the company had them lying around the office.
When the company bought an Echo device in March, Brian had to take another look. He took it home one Friday night and decided to try building a skill using the Alexa Skills Kit (ASK). He selected something simple, inspirational and personal to him. The skill—which later became Nonsmoker—keeps track of when you stopped smoking and tells you how long it's been since your final cigarette.
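The core of a skill like Nonsmoker is a small date calculation: store the quit date, then report the elapsed time on request. A minimal sketch, with an assumed function name (this is not Brian’s actual code):

```python
from datetime import datetime, timezone

def time_since_quit(quit_at, now=None):
    """Build the spoken answer: how long since the final cigarette.

    quit_at: an aware datetime recording when the user quit.
    now: injectable for testing; defaults to the current UTC time.
    """
    now = now or datetime.now(timezone.utc)
    delta = now - quit_at
    days = delta.days
    hours = delta.seconds // 3600  # leftover hours beyond whole days
    return f"It has been {days} days and {hours} hours since your last cigarette."
```

In a deployed skill, the quit date would persist per user (e.g. in DynamoDB) rather than being hardcoded, which is exactly the limitation Brian's first weekend version had.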
The first version took Brian half a day to create. It was full of hardcoded values, but it was empowering. Then, in playing with this and other Alexa skills, Brian recognized something exciting. A fundamental technology shift was staring right at him. When he returned the Echo to the office on Monday, he was hooked.
“Interacting with Alexa around my apartment showed me the real value proposition of voice technology,” says Brian. “I realized it’s magical. I think it’s the first time in my life that I’d interacted with technology without using my hands.”
Brian wanted immediate and more active involvement in Alexa development. The following day he was searching meetup.com for Alexa user gatherings in New York City. He found none, so he did what came naturally: he created one himself.
His goal was to find 20 or so interested people before going to the effort of creating a meetup. The demand was far greater than he expected. By the third week of March, he was hosting 70 people at the first-ever NYC Amazon Alexa Meetup, right in the Betaworks conference room.
After a short presentation about the Echo, Tap and Dot, Brian ran the rest of the program solo. He created a step-by-step tutorial with slides and code snippets explaining how to create a simple Alexa skill, then walked attendees through it and let them test and demo their skills on his own Echo, in front of the class.
“A lot of them weren’t developers, but they could cut and paste code,” says Brian. “About half completed the skill, and some even customized the output a bit.” Brian helped one attendee add a random number generator so her skill could simulate rolling a pair of dice.
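A dice-rolling customization like the one Brian helped with can be as small as this. The handler shape and function name are illustrative, not the attendee's actual code:

```python
import random

def roll_dice_response():
    """Intent handler body: roll two six-sided dice and speak the result."""
    d1 = random.randint(1, 6)  # randint bounds are inclusive
    d2 = random.randint(1, 6)
    text = f"You rolled a {d1} and a {d2}, for a total of {d1 + d2}."
    return {
        "outputSpeech": {"type": "PlainText", "text": text},
        "shouldEndSession": True,
    }
```

Small tweaks like this are why copy-paste-capable non-developers could still personalize their skills during the meetup.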