Development Kits for the Alexa Voice Service (AVS) are complete reference solutions for building products with Alexa. They include chipsets, voice processing technologies, and client software that leverages the AVS APIs to help you easily build commercial-grade, voice-activated products while reducing development costs and accelerating the integration process.
Designed to help you quickly and easily create far-field audio front end systems for your Alexa-enabled products, this development kit features Conexant's AudioSmart™ CX20924 Voice Input Processor with a 4-microphone board and Sensory's TrulyHandsFree™ wake word engine tuned to "Alexa". Prototype using the AVS sample app on a Raspberry Pi.
Designed to help commercial device manufacturers easily create far-field voice experiences, this development kit features the same 7-mic circular array and technology for “Alexa” wake word recognition, beam forming, noise reduction, acoustic echo cancellation, and barge-in capabilities found in the Amazon Echo. This solution is supported by leading chipset providers, enabling device manufacturers to quickly integrate Alexa voice capabilities into their products.
Designed to help you quickly and easily create hands-free audio front end systems for your Alexa-enabled products, this development kit features Conexant's AudioSmart™ CX20921 Voice Input Processor with a dual-microphone board and Sensory's TrulyHandsFree™ wake word engine tuned to "Alexa". Prototype using the AVS sample app on a Raspberry Pi.
This 2-mic dev kit is designed to help consumer audio OEMs and ODMs quickly bring Alexa-enabled smart speakers, portable speakers, and compact audio devices to market. It uses Cirrus Logic algorithms for voice control, noise suppression, and echo cancellation, enabling high-accuracy wake-word triggering and command interpretation. Prototype using the AVS sample app on the Raspberry Pi 3 included in the kit. Features include Cirrus Logic’s:
Designed to recognize the “Alexa” wake word and deliver audio-enhanced speech requests for cloud processing in adverse audio environments, this 2-microphone hands-free dev kit enables ODMs and OEMs to build Alexa-enabled products with high-quality voice recognition interfaces. Prototype using the AVS sample app on a Raspberry Pi. Features include:
How you want your users to interact with your product determines the number of microphones you select. Voice-enabled devices designed for closer, hands-free interaction can use 1- or 2-mic solutions, whereas far-field products with listening ranges from across the room can benefit from a 4- or 7-mic array. Keep in mind that additional mics may take up more physical space and add incremental cost to your product.
Your product’s form factor determines the arrangement of microphones. Square or circular arrays in a horizontal plane are better for 360-degree, omni-directional listening often utilized in tabletop products like the Amazon Echo or Echo Dot. Linear arrays are better suited for uni-directional listening or wall-mounted products such as connected light switches and Alexa-enabled thermostats like the ecobee4.
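To make the geometry concrete: a common omni-directional layout places microphones evenly around a circle, optionally with one mic at the center. The helper below is an illustrative sketch (the function name, parameters, and default radius are assumptions, not part of any kit's API); it just computes the xy-coordinates such a layout implies.

```python
import numpy as np

def circular_array(n_ring_mics, radius, center_mic=True):
    """Return xy-coordinates (meters) for a circular mic array:
    n_ring_mics evenly spaced on a circle of the given radius,
    optionally plus one microphone at the center."""
    angles = 2 * np.pi * np.arange(n_ring_mics) / n_ring_mics
    ring = np.column_stack([radius * np.cos(angles),
                            radius * np.sin(angles)])
    if center_mic:
        return np.vstack([[0.0, 0.0], ring])
    return ring
```

For example, `circular_array(6, 0.04)` yields a 6-plus-center layout with a 4 cm ring radius; a linear array for wall-mounted products would instead space its mics along a single axis.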
Voice processing algorithms enable your device to leverage the full capabilities of the mic array. Noise reduction improves speech recognition in noisy environments, beam forming helps locate the direction of speech, and acoustic echo cancellation allows the user to barge in even when your device is playing loud music. These algorithms, combined with wake word engines, allow voice-initiated interactions and send clear, processed audio to the cloud.
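To illustrate how a mic array reinforces speech from one direction, here is a minimal delay-and-sum beamformer, the simplest form of beam forming. This is a sketch for intuition only, not the processing used in any of the kits above; the function name, arguments, and the far-field assumption are all illustrative.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Delay-and-sum beamformer (far-field assumption).

    signals:       array of shape (num_mics, num_samples)
    mic_positions: array of shape (num_mics, 3), in meters
    direction:     unit vector pointing from the array toward the talker
    fs:            sample rate in Hz; c is the speed of sound in m/s

    Each mic signal is time-aligned toward the look direction and
    averaged, so speech from that direction adds coherently while
    sound from other directions partially cancels.
    """
    # Per-mic arrival-time offset: projection of position onto look direction
    delays = mic_positions @ direction / c
    delays -= delays.min()  # shift so all applied delays are non-negative
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # Fractional delay applied as a phase shift in the frequency domain
        spec = np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(signals)
```

Production front ends combine this kind of spatial filtering with noise reduction and acoustic echo cancellation before the wake word engine and the cloud ever see the audio.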