AVS enables you to create devices with Alexa built-in that your customers can control by saying “Alexa.” Designing devices with a voice user interface (VUI) requires you to first determine how you'd like customers to interact with your device, and then utilize the functional and user experience guidelines to ensure they have a consistent experience across all devices with Alexa built-in.
Any connected device with a microphone and speaker can be voice-enabled with AVS. Devices typically fall into two categories: headless and screen-based. AVS makes it easy for developers to add voice as a modality to devices in either category.
Here's a scenario that illustrates how customers interact with AVS products:
Emma recently purchased a tablet with Alexa built-in. To access Alexa, Emma uses the “Alexa” wake word, and then asks a question or issues a command. AVS captures the audio, sends it to the cloud for processing, and Alexa delivers an audible response with a complementary Display Card on the screen. The Display Card reinforces what Alexa says and provides additional information not delivered by voice.
AVS provides the flexibility to choose the best experience for your device with Alexa built-in. Whether you’re building a tap-to-talk or hands-free device, AVS offers Automatic Speech Recognition (ASR) Profiles tailored for specific types of user interaction. The first step is to envision how you want your customers to interact with Alexa.
| | Hands-Free (Far-Field) | Hands-Free (Near-Field) | Tap-to-Talk | Push-to-Talk |
| --- | --- | --- | --- | --- |
| User Interaction | User wakes the device with the wake word, and the cloud instructs the device to stop listening when the user stops speaking. | User wakes the device with the wake word, and the cloud instructs the device to stop listening when the user stops speaking. | User taps a button to open the microphone, and the cloud instructs the device to stop listening when the user stops speaking. | User pushes and holds a button to keep the microphone open until the entire utterance is captured. |
| Use Case | For devices that are activated from across the room using the wake word “Alexa”, and where voice is the primary user interface even in noisy environments, such as the living room, kitchen, and other communal spaces in the home. | For devices that are activated by the wake word “Alexa” and used in environments where voice is the primary user interface and the user is within arm’s length, such as the bedroom, hallway, and smaller areas in the home. | For devices used in medium-ambient-noise environments where the user is in close proximity to the device, such as in the car or at work. | For devices used in high-ambient-noise environments where the microphone array might struggle to pick up the wake word “Alexa”, such as mass transportation, public spaces, and noisy home entertainment areas. |
| Development Tools | AVS Device SDK » | AVS Device SDK » | AVS Device SDK » | AVS Device SDK » |
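These interaction styles correspond to the ASR profiles (`CLOSE_TALK`, `NEAR_FIELD`, `FAR_FIELD`) and initiator types (`PRESS_AND_HOLD`, `TAP`, `WAKEWORD`) used by the AVS SpeechRecognizer interface. The sketch below shows one plausible mapping and a simplified `Recognize` event body; the helper function and the exact profile-to-interaction pairing are illustrative assumptions, so consult the SpeechRecognizer documentation for your device's configuration.

```python
import json
import uuid

# One plausible mapping from the interaction styles in the table above
# to AVS ASR profiles and initiator types. The profile and initiator
# names follow the AVS SpeechRecognizer interface; the pairing chosen
# here is an illustrative assumption, not official guidance.
INTERACTIONS = {
    "hands_free_far":  {"profile": "FAR_FIELD",  "initiator": "WAKEWORD"},
    "hands_free_near": {"profile": "NEAR_FIELD", "initiator": "WAKEWORD"},
    "tap_to_talk":     {"profile": "NEAR_FIELD", "initiator": "TAP"},
    "push_to_talk":    {"profile": "CLOSE_TALK", "initiator": "PRESS_AND_HOLD"},
}

def build_recognize_event(interaction: str) -> dict:
    """Build a simplified SpeechRecognizer.Recognize event body
    for the chosen interaction style (hypothetical helper)."""
    choice = INTERACTIONS[interaction]
    return {
        "event": {
            "header": {
                "namespace": "SpeechRecognizer",
                "name": "Recognize",
                "messageId": str(uuid.uuid4()),
                "dialogRequestId": str(uuid.uuid4()),
            },
            "payload": {
                "profile": choice["profile"],
                "format": "AUDIO_L16_RATE_16000_CHANNELS_1",
                "initiator": {"type": choice["initiator"]},
            },
        }
    }

print(json.dumps(build_recognize_event("push_to_talk")["event"]["payload"], indent=2))
```

The key design point is that the profile tells the cloud how far the speaker is from the microphone, while the initiator tells it how the interaction started and therefore how the microphone should be closed.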
Customers who purchase a product with Alexa expect a familiar experience. To help you design the best possible user experience for your voice-forward device, we’ve assembled a package of design requirements and recommendations to help you develop, prototype, and launch with AVS.
Whether you’re building a tap-to-talk or a hands-free device, there are specific cues to design and hardware choices to make in order to provide customers with a familiar experience. For example, we recommend that every device with Alexa built-in incorporate LEDs or GUI elements to provide feedback to the customer on Alexa’s state.
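In practice, this feedback is often implemented as a simple lookup from Alexa's state to an LED cue. The state names below mirror Alexa's attention states (idle, listening, thinking, speaking); the specific colors and animations are placeholder assumptions, not official guidance:

```python
from enum import Enum

class AlexaState(Enum):
    """Attention states a device surfaces to the customer."""
    IDLE = "idle"
    LISTENING = "listening"
    THINKING = "thinking"
    SPEAKING = "speaking"

# Hypothetical LED cues per state. A shipping product should follow
# the AVS UX guidelines for attention-state feedback instead of
# inventing its own colors and animations.
LED_CUES = {
    AlexaState.IDLE:      {"color": "off",  "animation": "none"},
    AlexaState.LISTENING: {"color": "blue", "animation": "solid"},
    AlexaState.THINKING:  {"color": "cyan", "animation": "spinning"},
    AlexaState.SPEAKING:  {"color": "cyan", "animation": "pulsing"},
}

def on_state_change(state: AlexaState) -> dict:
    """Return the LED cue the device driver should render."""
    return LED_CUES[state]
```

Keeping the cue table separate from the driver code makes it easy to swap LEDs for GUI elements on screen-based devices while preserving the same state model.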
Voice is a new way to interact with technology for many customers. A successful Alexa integration makes it easy to use a device by voice and helps customers discover new reasons to use Alexa every day. Alexa is always getting smarter with new capabilities and services, and the easier it is for customers to set up and access Alexa, the more they’ll use your product.