On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. We received applications from leading universities across 22 countries. Each application was carefully reviewed by senior Amazon personnel against a rigorous set of criteria covering scientific contribution, technical merit, novelty, and ability to execute. Teams of scientists, engineers, user experience designers, and product managers read, evaluated, discussed, argued, and finally selected the ten teams who would be invited to participate in the competition. Wait, make that twelve; we received so many good applications from graduate and undergraduate students that we decided to sponsor two additional teams.
Today, we’re excited to announce the 12 teams selected to compete with an Amazon sponsorship. In alphabetical order, they are:
These teams will each receive a $100,000 research grant as a stipend, Alexa-enabled devices, free Amazon Web Services (AWS) services to support their development efforts, access to new Alexa Skills Kit (ASK) APIs, and support from the Alexa team. Teams invited to participate without sponsorship will be announced on December 12, 2016.
Eric Olson and David Phillips, co-founders of 3PO-Labs, are both “champs” when it comes to building and testing Alexa skills. The two met while working together at a Seattle company in 2015. Finding they had common interests, they soon combined forces to “start building awesome things”—including Alexa skills and tools.
Eric, an official Alexa Champion, is primarily responsible for the Bot family of skills. These include CompliBot and InsultiBot (both co-written with David), as well as DiceBot and AstroBot. David created and maintains the Alexa Skills Kit (ASK) Responder. The two do almost everything as a team, though, and together built the underlying framework for all their Alexa skills.
This fall, they’re unveiling prototyping and testing tools that will enable developers to build high-quality Alexa skills faster than ever.
Eric and David first got involved with Alexa when Eric proposed an Amazon Echo project for a company hackathon. The two dove into online documentation and started experimenting—and having fun. “After the hackathon, we just kind of kept going,” Eric said. “We weren’t planning to get serious about it.”
But over the past year, they grew more involved with the Alexa community. They ended up creating tools that could benefit the whole community. “We wrote these tools to solve problems we ran into ourselves. We ended up sharing them with other people and they became popular,” David said.
The first of these, the Alexa Skills Kit Responder, grew from David’s attempt to speed up the process of testing different card response formats. Getting a response just right meant repeatedly modifying and re-deploying code for every change. Instead, this new tool lets developers test mock skill responses without writing or deploying a single line of code. Follow the documentation to set up an Alexa skill to interface with ASK Responder, then upload any response you’d like. The ASK Responder will return it when invoked.
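For example, a developer iterating on card layouts might upload a mock response like the sketch below (the field values are placeholders, and the upload mechanics are described in the ASK Responder documentation):

```javascript
// Illustrative mock skill response a developer might upload to ASK Responder
// to preview a card layout without deploying any skill code.
const mockResponse = {
  version: '1.0',
  response: {
    outputSpeech: { type: 'PlainText', text: 'Here is your test card.' },
    card: {
      type: 'Simple',
      title: 'Card Layout Test',       // placeholder title
      content: 'Line one\nLine two',   // tweak and re-upload to compare layouts
    },
    shouldEndSession: true,
  },
};

console.log(JSON.stringify(mockResponse, null, 2));
```

Tweaking the `content` string and re-uploading lets you compare card renderings in the Alexa app without touching any deployed code.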
And that’s just the beginning. The ASK Responder’s usefulness is about to explode.
David created Responder for testing mock responses. But the two soon discovered a home automation group using the tool in an unexpected way.
Instead of a skill called “Responder,” they’ll create a skill named My Home Temp, for example. They’ll map an intent like “What is the temperature?” and have their smart home device upload a response to the ASK Responder with the temperature of the house. When the user says “Alexa, ask My Home Temp what is the temperature?” Alexa plays the uploaded response through the Echo. This creates the seamless illusion of a fully operating skill.
This new technical tutorial by Sebastien Stormacq, Sr. Solutions Architect for Amazon Alexa, shows you how to use Amazon API Gateway and configure it to act as an HTTP proxy, sitting between Alexa and your OAuth server.
Have you ever developed an Alexa skill that uses account linking? Do you remember the first time you tried to click the “Link Account” button and feared for the result? I bet you first saw the dreadful error message: “Unable to Link your skill.” Sometimes figuring out the cause of an error is like searching for a needle in a haystack. You have no clue.
Most of the errors that I have seen when working with developers fall into two categories:
When you have access to the OAuth server logs, debugging the error message you see in the Alexa App is relatively easy. You just enable full HTTP trace on the server side and search for the error or the misconfiguration on the server. Full HTTP trace includes the full HTTP headers, query string and body passed by the Alexa service to your server.
With a bit of experience, catching an OAuth error in HTTP stack trace takes only a few minutes.
The problem is that most developers we work with have no access to the OAuth server or its logs. Either they are using a third-party OAuth server (Login with Amazon, Login with Facebook, Login with Google, and the like), or they are working in a large enterprise where another team operates the OAuth server. Meeting that team and asking them to change the logging level, or requesting access to the logs, can take weeks, or may not be possible at all.
This article explains how to set up an HTTP proxy between the Alexa skill service and your OAuth server to capture and log all HTTP traffic. By analyzing the logs, you can inspect the HTTP URLs, query strings, headers, and full bodies exchanged. Setting up such a proxy normally requires infrastructure to host it: a networked server with a runtime to deploy your code, and so on. This is unnecessary heavy lifting where Amazon Web Services can help.
We will use Amazon API Gateway instead and will configure it to act as an HTTP Proxy, sitting between Amazon’s Alexa Skill Service and your OAuth server.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services.
API Gateway HTTP Proxy Integration mode is a new feature of API Gateway that launched on September 20, 2016. You can read the post by AWS Director of Evangelism Jeff Barr if you want to learn more about it.
The diagram below shows where API Gateway, with HTTP Proxy Integration, fits in the OAuth Architecture.
Today, we unveiled a new way for customers to browse the breadth of the Alexa skills catalog and discover new Alexa skills on Amazon.com. See the experience.
Now every Alexa skill has an Amazon.com detail page. On-Amazon detail pages improve discovery, so a customer can quickly find skills on Amazon, and enable developers to link customers directly to their skill with a single click. This is the first time we are offering a pre-login discovery experience for Alexa skills; before now, customers needed to log in to the Alexa app on their mobile device or browser. Developers can also improve organic discovery by search engines by optimizing their skill detail pages.
You can now link directly to your skill’s page on Amazon.com. On the page, customers can take actions, like enabling and disabling the skill and linking their accounts. For the first time, you can drive customers directly to your skill detail page to increase discovery of and engagement with your own skill. To link directly to your skill, simply navigate to your skill’s page and grab the URL from your browser.
Dave Grossman, chief creative officer at Earplay, says his wife is early-to-bed and early-to-rise. That’s not surprising when you have to keep up with an active two-year-old. After everyone else is off to bed, Grossman stays up to clean the kitchen and put the house in order. Such chores require your eyes and hands, but they don’t engage the mind.
“You can’t watch a movie or read a book while doing these things,” says Grossman. “I needed something more while doing repetitious tasks like scrubbing dishes and folding clothes.”
He first turned to audiobooks and podcasts to fill the void. Today, though, he’s found the voice interactivity of Alexa a perfect fit. That’s also why he’s excited to be part of Earplay. With the new Earplay Alexa skill, you can enjoy Grossman’s latest masterpieces: Earplays, interactive audio stories you play through with your voice, all featuring voice acting and sound effects like those in an old-time radio drama.
Jonathon Myers, now Earplay’s CEO, co-founded Reactive Studio in 2013 with CTO Bruno Batarelo. The company pioneered the first interactive radio drama, complete with full cast recording, sound effects, and music.
Myers started prototyping in a rather non-digital way. Armed with a bunch of plot options on note cards, he asked testers to respond to his prompts by voice. Myers played out scenes like a small, intimate live theater, rearranging the note cards per the users’ responses. When it was time to design the code, Myers says he’d already worked out many of the pitfalls inherent to branching story plots.
They took a digital prototype (dubbed Cygnus) to the 2013 Game Developers Conference in San Francisco. Attendees gave the idea a hearty thumbs-up, and the real work began, leading to a successful Kickstarter campaign and a subsequent release showcased at PAX Prime 2013 in Seattle.
Grossman later joined the team as head story creator, after a decade at Telltale Games. Grossman had designed interactive story experiences for years, including the enduring classic The Secret of Monkey Island at LucasArts. Most gamers credit him with creating the first video game to feature voice acting.
Together they re-branded the company as Earplay in 2015. “We were working in a brand new medium, interactive audio entertainment. We called our product Earplay, because you're playing out stories with your voice,” Myers says.
The team first produced stories—including Codename Cygnus—as separate standalone iOS and Android apps. They then decided to build a new, singular user experience that lets users access all their stories—past, present, and future—within a single app.
When Alexa came along, she changed everything.
The rapid adoption of the Amazon Echo and growth of the Alexa skills library excited the Earplay team. The company shifted its direction from mobile-first to a home entertainment-first focus. “It was almost as though Amazon designed the hardware specifically for what we were doing.”
Though not a developer, Myers started tinkering with Alexa using the Java SDK. He dug into online documentation and examples and created a working prototype over a single weekend. The skill had just a few audio prompts and responses from existing Earplay content, but it worked. He credits the rapid development, testing and deployment to the Alexa Skills Kit (ASK) and AWS Lambda.
Over several weeks, Myers developed the Earplay menu system to suit the Alexa voice-control experience. By then, the code had diverged quite a bit from what they used on other services. “When I showed it to Bruno, it was like ‘Oh Lord, this looks ugly!’” As CTO, Bruno Batarelo is in charge of Earplay’s platform architecture.
An intense six-week period followed. Batarelo helped Myers port the Earplay mechanics and data structures so the new skill could handle the Earplay demo stories. On August 26, they launched Earplay, version 1.0.
With thousands of skills, Alexa is in the Halloween spirit, and we’ve rounded up a few spooky skills for you to try. See what others are building, get inspired, and build your own Alexa skill.
Magic Door added a brand new Halloween-themed story. Complete with a spooky mansion and lots of scary sound effects, you’re bound to enjoy the adventure. Ask Alexa to enable the Magic Door skill and start your Halloween adventure.
Are you worried about some restless spirits? Use Ghost Detector to detect nearby ghosts and attempt to catch them. The ghosts are randomly generated with almost 3000 possible combinations and you can catch one ghost per day to get Ghost Bux. Ask Alexa to enable Ghost Detector skill so you can catch your ghost for the day.
Horror movie buffs can put themselves to the test with the Horror Movie Taglines skill. Taglines are the words or phrases used on posters, ads, and other marketing materials for horror movies. Alexa keeps score while you guess over 100 horror movie taglines. Put your thinking cap on and ask Alexa to enable the Horror Movie Taglines skill.
Let this noise maker join your Halloween party this year. These spooky air horn sounds are the perfect background music for Halloween night. Listen for yourself by enabling Spooky Air Horns skill.
Scary, spooky haunted houses define Halloween and this interactive story is no different. The Haunted House skill lets you experience a stormy Halloween night and lets you pick your journey by presenting several options. The choice is yours. Start your adventure by enabling Haunted House skill.
This Halloween, you can follow Bryant’s tutorial and learn how to turn your Amazon Echo into a ghost with two technologies: the Photon and Alexa. With an MP3 and NeoPixel lights, you’ll be ready for Halloween. Dress up your own Echo with this tutorial.
Share other innovative ways you’re using Alexa in your life. Tweet us @alexadevs with hashtag #AlexaDevStory.
People love that they can dim their lights, turn up the heat, and more just by asking Alexa on their Amazon Echo. Now Belkin Wemo has launched new capabilities through the existing Alexa Voice Service (AVS) API, making the same smart home voice controls found on the Echo available on all third-party products with Alexa. Best of all, your customers can enable the Wemo skill on your device today—no additional development work needed.
Because Alexa is cloud-based, it’s always getting smarter with new capabilities, services, and a growing library of third-party skills from the Alexa Skills Kit (ASK). As an AVS developer, your product gains access to these growing capabilities through regular API updates, feature launches, and custom skills built by our active developer community.
Belkin makes a variety of high-quality Wemo switches that consumers use to control a number of devices in the home, from floor lamps and ceiling bulbs to fans and home audio speakers. The switches are perfect for beginners and early adopters alike, and now with third-party integration across the family of Amazon and third-party devices with Alexa, your users can have even greater control of their smart homes without lifting a finger. Read more about how Wemo is building a smart ecosystem of connected devices for the home.
Belkin Wemo joins other Amazon Alexa Smart Home partners, such as Philips Hue, SmartThings, Insteon, and Wink, in enabling voice control in third-party devices with Alexa.
Learn more about the Alexa Voice Service, its features, and design use cases.
AVS is coming soon to the UK and Germany. Read the full announcement here.
We recently announced support for Alexa in two new languages, English (UK) and German. In order to easily add all three supported languages to your skills, we have updated the Alexa SDK for Node.js. We’ve also updated our Fact, Trivia and How To skill samples to include support for all three languages using the new SDK feature. You can find these updated samples over at the Alexa GitHub.
Fact – This template helps you create a skill similar to “Fact of the Day,” “Joke of the Day,” etc. You just need to come up with a fact idea (like “Food Facts”) and then plug your fact list into the sample provided.
Trivia – With this template you can create your own trivia skill. You just need to come up with a content idea (like “Santa Claus Trivia”) and plug your content into the sample provided.
How To – This skill enables you to parameterize what the user says and map it to a content catalog. For example, a user might say "Alexa, Ask Aromatherapy for a recipe for focus" and Alexa would map the word "focus" to the correct oil combination in the content catalog.
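The mapping itself can be as simple as a dictionary lookup keyed by the slot value. A minimal sketch (the aromatherapy skill name, slot values, and oil recipes here are purely illustrative):

```javascript
// Hypothetical content catalog for an aromatherapy how-to skill:
// the slot value spoken by the user keys into the recipe text.
const recipes = {
  focus: 'Combine two drops of rosemary oil with one drop of peppermint oil.',
  sleep: 'Combine three drops of lavender oil with one drop of chamomile oil.',
};

// Resolve the spoken slot value to catalog content, with a fallback
// prompt when the item is not in the catalog.
function handleRecipeIntent(slotValue) {
  const item = slotValue.toLowerCase();
  return recipes[item] || "I don't know a recipe for " + slotValue + ' yet.';
}

console.log(handleRecipeIntent('focus'));
```

Adding new content then means growing the catalog object, not writing new handler code.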
Let’s take a look at the new version of the fact skill, and walk through the added multi-language support. You can find the entire skill code here.
The first thing you will notice is that we now define a resource object when configuring the Alexa SDK. We do this by adding a single line within our skill handler.
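In the SDK, that line assigns a per-locale string table to `alexa.resources`, and handlers then fetch strings with `this.t()`. A condensed, self-contained sketch of the idea (the locale strings below are illustrative, and `t()` is a stand-in for the SDK's helper):

```javascript
// Sketch of the per-locale resource object the Alexa SDK for Node.js
// accepts via `alexa.resources = languageStrings;`.
const languageStrings = {
  'en-US': { translation: { SKILL_NAME: 'American Space Facts',
                            GET_FACT_MESSAGE: "Here's your fact: " } },
  'en-GB': { translation: { SKILL_NAME: 'British Space Facts',
                            GET_FACT_MESSAGE: "Here's your fact: " } },
  'de-DE': { translation: { SKILL_NAME: 'Weltraumwissen',
                            GET_FACT_MESSAGE: 'Hier sind deine Fakten: ' } },
};

// Stand-in for the SDK's this.t() helper: resolve a key for the request's
// locale, falling back to en-US for unknown locales.
function t(locale, key) {
  const strings = languageStrings[locale] || languageStrings['en-US'];
  return strings.translation[key];
}

console.log(t('de-DE', 'SKILL_NAME')); // prints "Weltraumwissen"
```

The SDK picks the table matching the request's locale automatically, so the same handler code serves all three languages.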
Today’s guest post is from Joel Evans from Mobiquity, a professional services firm trusted by hundreds of leading brands to create compelling digital engagements for customers across all channels. Joel writes about how Mobiquity built a portable voice controlled drone for under $500 using Amazon Alexa.
As Mobiquity’s innovation evangelist, I regularly give presentations and tech sessions for clients and at tradeshows on emerging technology and how to integrate it into a company’s offerings. I usually show off live demos and videos of emerging tech during these presentations, and one video, in particular, features a flying drone controlled via Alexa. Obviously, a flying object commanded by voice is an attention getter, so this led me to thinking that maybe I could do a live demo of the drone actually flying.
While there have been a number of articles detailing how to build your own voice-controlled drone, the challenge remains the same: how do you make it mobile, since most solutions require you to be tethered to a home network?
I posed the challenge of building a portable voice-controlled drone to our resident drone expert and head of architecture, Dom Profico. Dom has been playing with drones since they were called Unmanned Aerial Vehicles (UAVs) and has a knack for making things talk to each other, even when they aren’t designed to do so.
Dom accepted my challenge and even upped the ante. He was convinced he could build the portable drone and accomplish the task for under $500. To make the magic happen, he chose to use a Raspberry Pi 2 as the main device, a Bebop Drone, and an Amazon Echo Dot.
To introduce another way to help you build useful and meaningful skills for Alexa quickly, we’ve launched a calendar reader skill template. This new Alexa skill template makes it easy for developers to create a skill like an “Event Calendar,” or “Community Calendar,” etc. The template leverages AWS Lambda, the Alexa Skills Kit (ASK), and the Alexa SDK for Node.js, while providing the business logic, use cases, error handling and help functions for your skill.
For this tutorial, we'll be working with the calendar from Stanford University. The user of this skill will be able to ask things like:
You will be able to plug your own public calendar feed (an .ICS file) into the sample provided, so that you can interact with your calendar in the same way. This could be useful for small businesses, community leaders, event planners, realtors, or anyone that wants to share a calendar with their audience.
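Under the hood, an .ICS feed is plain text organized into `BEGIN:VEVENT`/`END:VEVENT` blocks. The toy parser below illustrates the structure the template consumes (a real feed needs a full iCalendar library; this sketch handles only `SUMMARY` and `DTSTART`, and the sample event is made up):

```javascript
// Minimal illustrative parser for iCalendar (.ICS) event blocks.
function parseEvents(ics) {
  const events = [];
  let current = null;
  for (const rawLine of ics.split(/\r?\n/)) {
    const line = rawLine.trim();
    if (line === 'BEGIN:VEVENT') current = {};
    else if (line === 'END:VEVENT') { events.push(current); current = null; }
    else if (current && line.startsWith('SUMMARY:')) current.summary = line.slice(8);
    else if (current && line.startsWith('DTSTART')) current.start = line.split(':').pop();
  }
  return events;
}

// A tiny hypothetical feed with one event.
const sample = [
  'BEGIN:VCALENDAR',
  'BEGIN:VEVENT',
  'SUMMARY:Homecoming Football Game',
  'DTSTART:20161022T190000Z',
  'END:VEVENT',
  'END:VCALENDAR',
].join('\n');

console.log(parseEvents(sample));
```

Once events are in this shape, the skill's intent handlers only have to filter by date and read the summaries back to the user.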
Using the Alexa Skills Kit, you can build an application that can receive and respond to voice requests made on the Alexa service. In this tutorial, you’ll build a web service to handle requests from Alexa and map this service to a skill in the Amazon Developer Portal, making it available on your device and to all Alexa users after certification.
After completing this tutorial, you'll know how to do the following:
All developers who publish skills will receive an Alexa dev t-shirt. Quantities are limited. See Terms and Conditions.
In September, Amazon announced the availability of Amazon Echo outside the US, in the UK and Germany. At the same time, Amazon announced the all-new version of the groundbreaking Echo Dot for under $50, so customers can add Alexa to any room in their homes. Recently, Forrester reported on this expansion and shared the importance of expanding to voice as a customer interaction channel. Companies across the world have fair warning: voice-based intelligent agents (IAs) are here to stay.
“CMOs who don’t already have a plan for dealing with the expanding influence of voice as a customer interaction channel now have fair warning: Voice-based intelligent agents (IAs) are here to stay.” – "Quick Take: Amazon Extends Its Lead By Taking Alexa Intelligent Agent Global", by James McQuivey, Forrester Research, Inc., September 14, 2016
The Alexa team is excited to be collaborating with Udacity on a new Artificial Intelligence Nanodegree program. Udacity is a leading provider of cutting-edge online learning, with a focus on in-demand skills in innovative fields such as Machine Learning, Self-Driving Cars, Virtual Reality, and Artificial Intelligence.
“The Alexa team is dedicated to accelerating the field of conversational artificial intelligence. Udacity’s new nanodegree for AI engineers is aligned with our vision to advance the industry. We’re excited for students to learn about our work at Amazon and to build new skills for Alexa as part of the course.”
– Rohit Prasad, VP & Head Scientist, Alexa
Learn more about the Artificial Intelligence Nanodegree program in this guest post by Christopher Watkins, Senior Writer at Udacity.
Few topics today are as compelling as artificial intelligence. From ethicists to artists, physicians to statisticians, roboticists to linguists, everyone is talking about it, and there is virtually no field that stands apart from its influence. That said, there is still so much we don’t know about the future of artificial intelligence. But, that is honestly part of the excitement!
What we DO know is that world-class, affordable AI education is still very hard to come by, which means unless something changes, and unless new learning opportunities emerge, the field will suffer for a lack of diverse, global talent.
Fortunately, something IS changing. We are so excited to announce the newest offering from Udacity, the Artificial Intelligence Nanodegree program!
“This is truly a global effort, with global potential. We believe AI will serve everyone best if it’s built by a diverse range of people.” —Sebastian Thrun (Founder, Udacity)
With the launch of this program, virtually anyone on the planet with an Internet connection (and the relevant background and skills) will be able to study to become an AI engineer. If AI is the future of computer science—and it is—then our goal is to ensure that everyone who wishes to be a part of this future can do so. We want to see every aspiring AI engineer find a job and advance their career in this extraordinary field.
Apply to the Artificial Intelligence Nanodegree program today!
To help achieve these goals, we are collaborating with an amazing roster of industry-leading companies, including Amazon Alexa, IBM Watson, and Didi Chuxing. In order to provide our students with the highest quality, most cutting-edge curriculum possible, we are building the Artificial Intelligence Nanodegree program in close partnership with IBM Watson. To support the career goals of our students, we have also established hiring partnerships with both IBM Watson and Didi Chuxing.
Amazon Alexa is the voice service that powers Amazon Echo and enables people to interact with the world around them in a more intuitive way using only their voice. Through a series of free, self-service, public APIs, developers, companies, and hobbyists can integrate Alexa into their products and services, and build new skills for Alexa, creating a seamless way for people to interact with technology on a daily basis.
We are happy to announce the Amazon Alexa API Mashup Contest, our newest challenge with Hackster.io. To compete, you’ll build a compelling new voice experience by connecting your favorite public API to Alexa, the brain behind millions of Alexa-enabled devices, including Amazon Echo. The contest will award prizes for the most creative and most useful API mashups.
Create great skills that report on ski conditions, connect to local businesses, or even read recent messages from your Slack channel. If you have an idea for something that should be powered by voice, build the Alexa skill to make it happen. APIs used in the contest should be public; if you’re not sure where to start, check out this list of public APIs on GitHub.
Need Real-World Examples?
Submit your API-combo projects to the Alexa API Mashup Contest on Hackster for a chance to win. You don’t need an Echo (or any other hardware) to participate. And if you place in the contest, we’ll give you an Echo (plus a bunch of other stuff!).
We’re looking for the most creative and most useful API mashups. A great contest submission will tell a great story, have a target audience in mind, and make people smile.
There will be three winners in each of two categories: 1) the most creative API mashup and 2) the most useful API mashup.
The first 50 people to publish skills in both Alexa and the Hackster contest page (other than winners of this contest) will receive a $100 gift card. And everyone who publishes an Alexa skill can get a limited edition Alexa developer t-shirt.
The Alexa Skills Kit (ASK) enables developers to easily build capabilities, called skills, for Alexa. ASK includes self-service APIs, documentation, templates and code samples to get developers on a rapid road to publishing their Alexa skills. For the Amazon Alexa API Mashup Contest, we will award developers who make the most creative and the most useful API mashups using ASK components.
Today, we’re excited to announce that Alexa VP and Head Scientist Rohit Prasad will present a State of the Union on Alexa and recent advances in conversational AI at AWS re:Invent 2016. The Alexa team will also offer six hands-on workshops to teach developers how to build voice experiences. AWS re:Invent 2016 is the largest gathering of the global Amazon developer community and runs November 28 through December 2, 2016.
AWS re:Invent registered attendees can now reserve spots in sessions and workshops online. You can register for Alexa sessions now.
Alexa VP and Head Scientist Rohit Prasad will present the state of the union for Amazon Alexa at AWS re:Invent 2016. He’ll address advances in spoken language understanding and machine learning in Alexa, and share how Amazon thinks about building the next generation of user experiences. Learn how Amazon is using machine learning and cloud computing to help fuel innovation in AI, making Alexa smarter every day. The session is on Wednesday, November 30, 2016 from 1-2 pm.
We also today announced that the Alexa team will run six workshops to teach developers how to build Alexa experiences with the Alexa Skills Kit and the Alexa Voice Service.
Workshop: Creating Voice Experiences with Alexa Skills: From Idea to Testing in Two Hours (3 sessions)
This workshop teaches you how to build your first voice skill with Alexa. You bring a skill idea and we’ll show you how to bring it to life. This workshop will walk you through how to build an Alexa skill, including Node.js setup, how to implement an intent, deploying to AWS Lambda, and how to register and test a skill. You’ll walk out of the workshop with a working prototype of your skill idea.
Workshop: Build an Alexa-Enabled Product with Raspberry Pi (3 sessions)
Fascinated by Alexa, and want to build your own device with Alexa built in? This workshop will walk you through how to build your first Alexa-powered device step by step, using a Raspberry Pi. No experience with Raspberry Pi or the Alexa Voice Service is required. We will provide you with a Raspberry Pi and the software required to build this project, and at the end of the workshop you will be able to walk out with a working prototype of Alexa on a Pi. Please bring a WiFi-capable laptop.
The Alexa track at AWS re:Invent will dive deep into the technology behind the Alexa Skills Kit and the Alexa Voice Service, with a special focus on using AWS Services to enable voice experiences. We’ll cover AWS Lambda, DynamoDB, CloudFormation, Cognito, Elastic Beanstalk and more. You’ll hear from senior engineers, solution architects and Alexa evangelists and learn best practices from early Alexa developers.
As an Alexa developer, you have the ability to provide Alexa skill cards that contain text and/or images (see Including a Card in Your Skill's Response). There are two main types of cards: simple cards, which display plain text, and standard cards, which can also include an image.
Customers interacting with your skill can then view these cards via the Alexa app or on Fire TV. While voice experiences allow customers to break from their screens, graphical interfaces complement and can enhance the experience users have with your skill.
In our new guide, Best Practices for Skill Card Design, you can learn how to best present information on cards for easy consumption by customers. Skill cards contain the same information (image and text) everywhere they appear, but have differing layouts depending on the access point, the Alexa app or Fire TV.
To drive engagement with your Alexa skill, we’ve compiled the top 10 tips for effective Alexa skill card design.
Cards do not replace the voice experience; instead, they deliver value-added content. Customers should not need to rely on cards to enjoy your voice experience, and cards should never be required to use an Alexa skill. They should simply provide additional information.
For example, imagine a customer asks for a recipe and you want to share details of the recipe. The skill card could add additional context by providing the recipe category, recipe description, cook time, prep time, and number of ingredients, while Alexa may simply say, “Try chicken parmesan accented by a homemade tomato sauce.”
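In response JSON terms, that division of labor might look like the following sketch (the recipe details and image URLs are placeholders, not a real catalog):

```javascript
// Illustrative skill response: short speech, with the richer detail
// carried on a Standard card for the Alexa app.
function buildRecipeResponse() {
  return {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'PlainText',
        text: 'Try chicken parmesan accented by a homemade tomato sauce.',
      },
      card: {
        type: 'Standard',
        title: 'Chicken Parmesan',
        text: 'Category: Italian mains\nPrep time: 20 min\nCook time: 40 min\nIngredients: 9',
        image: {
          smallImageUrl: 'https://example.com/chicken-parm-small.png', // placeholder URL
          largeImageUrl: 'https://example.com/chicken-parm-large.png', // placeholder URL
        },
      },
      shouldEndSession: true,
    },
  };
}

console.log(JSON.stringify(buildRecipeResponse(), null, 2));
```

The speech stays conversational while the card carries the glanceable details, each channel doing what it does best.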
Cards can be a great way to get a lost user back on track, or enable self-service to show users what they can do. Give enough detail for the user to move forward when lost – without going overboard. Suggest sample utterances when they need help, or when AMAZON.HelpIntent is triggered. Always keep the utterances relevant and in context of the current situation. For example, don't suggest an utterance on how to check your previous scores when the user is in the middle of the game.
Structure the copy for cards in brief, informative sentences or lines of text and avoid unstructured product details. Don’t rely on large blocks of text and keep details to a minimum so that users can quickly evaluate the card at a glance. For example, show a stock symbol and the current stock quote instead of a full sentence describing the change, which is more difficult to quickly grasp.
Use line breaks (\n) to help format individual lines of addresses, product details, or other information. Again, this makes it easier to quickly scan for key information. However, don’t double line break when separating parts of a street address.
Since URLs in cards are not clickable links, don’t only show URLs to direct users to other sites. Instead, provide clear direction on how to get to more information (e.g., “Go to giftsgalore.com and head to ‘My Account’”). While we don’t encourage the use of URLs in cards, if you do include them, make it easy for the user to consume and remember.
A general guideline for card content is to keep it short and easy to read. Cards should provide quick bits of content that users can consume at a glance. Providing images is a helpful way to quickly convey key information (e.g., images of a cheese pizza vs. a pepperoni pizza are instantly distinguishable). The card shouldn’t include everything that Alexa says, but instead highlight just the key information (e.g., a bulleted list of product details vs. the full description).