Voice at the Edge: How Syntiant’s Low-Power AI Semiconductors Are Transforming Computing

Brian Lee Feb 19, 2021
Spotlight

“Starting a chip company is sort of like starting a car company,” says Kurt Busch, the company’s co-founder and CEO, recalling what he jokingly told his friends when he first floated the idea of launching Syntiant. “The odds are going to be stacked against you.”

Based in Irvine, California, Syntiant produces ultra-low-power semiconductors that enable speech interfaces on battery-powered consumer devices, from as small as earbuds to as large as laptops. This includes smartphones, smart speakers, AR glasses, wearables, and smart home entertainment and security devices.

While the odds might have been stacked against it, just over three years after its 2017 founding, Syntiant reports that it has shipped more than 10 million units to customers around the world and has raised $65 million in funding from leading tech companies, including the Amazon Alexa Fund.

At CES 2021, Syntiant announced that its newest Neural Decision Processor, the NDP120, was named an Innovation Awards Honoree. This is the second year in a row Syntiant has received such a distinction: The company’s NDP100 won the CES 2020 Best of Innovation Award in the Embedded Technologies category.

Syntiant’s other recent accolades include being named to Gartner’s April 2020 Cool Vendors in AI Semiconductors, making Fast Company’s list of the World’s Most Innovative Companies for 2020, and winning the Linley Group’s 2019 Analysts’ Choice Award in the Best Embedded Processor category.

Voice at the Edge

According to the company, Syntiant’s tiny (1.4 by 1.8 millimeters) NDP100 processor is roughly one hundred times more power-efficient than traditional CPUs and DSPs, delivering ten times the throughput while consuming just 140 microwatts.

Syntiant says it started from a clean sheet, co-designing its hardware and software and simultaneously optimizing silicon and deep learning models to develop its first proprietary deep learning architecture, the Syntiant Core 1. By computing near memory, the architecture exploits the large-scale parallelism inherent in deep learning to reduce power consumption.

“We combined semiconductor design with deep learning,” Busch said. “So, we deliver the software, the data, the training pipeline and the chip—the entire solution.”

To get a sense of the breakthrough performance Syntiant’s technology can offer, consider wireless stereo earbuds with a voice interface.

Among the most popular headphones on the market, such earbuds need to work in voice-challenging environments, from noisy gyms to traffic-filled streets. That way, if a runner on a treadmill says “Alexa, volume up,” the earbuds understand the request rather than asking for clarification or ignoring it.

This means the earbuds must not only listen continuously for a wake word, such as “Alexa,” but also recognize the command word and execute the relevant function, like increasing the volume. On top of that, they must do it all with little space for bulky batteries or control buttons.
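Conceptually, that means chaining a tiny always-on wake-word detector to a command classifier that runs only after the wake word fires. The Python sketch below illustrates this two-stage flow with stubbed-in classifiers; the function names and labels are hypothetical stand-ins for the on-device neural networks.

```python
# Hypothetical two-stage voice pipeline: an always-on wake-word detector
# gates a command classifier, which in turn triggers a device action.
# Model calls are stubbed out; labels and function names are illustrative.

WAKE_WORD = "alexa"
COMMANDS = {
    "volume up": lambda: print("volume +1"),
    "volume down": lambda: print("volume -1"),
}

def detect_wake_word(frame):
    """Stage 1: tiny always-on classifier that only answers 'wake word or not'."""
    return frame.get("label") == WAKE_WORD          # stand-in for a neural net

def classify_command(frames):
    """Stage 2: slightly larger classifier, run only after the wake word fires."""
    for frame in frames:
        if frame.get("label") in COMMANDS:
            return frame["label"]
    return None

def run_pipeline(stream):
    """Consume an audio stream frame by frame; act on wake word + command."""
    frames = iter(stream)
    for frame in frames:
        if detect_wake_word(frame):
            command = classify_command(frames)      # listen briefly for a command
            if command:
                COMMANDS[command]()                 # execute the mapped action

# Simulated treadmill scenario: background noise, then "Alexa, volume up".
run_pipeline([{"label": "noise"}, {"label": "alexa"}, {"label": "volume up"}])
```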

To perform such tasks, engineers have been training deep learning algorithms to accurately hear wake words. But how do those algorithms run inside tiny earbuds at the edge?

Software, Data, and the Chip

Back in 2017, Busch was working with one of Syntiant’s co-founders, Jeremy Holleman, a professor of electrical and computer engineering at The University of North Carolina at Charlotte. Together, they were trying to find an answer to the question, “How do you optimize both machine learning and hardware?”

They started with memory. “Traditional computing is optimized for logic, while machine learning is memory dominated,” Busch said.

“This combination of focus on memory, parallel operation, and modest precision is effectively the opposite of where processor design has been for the last 40 years,” Busch continued. “We decided to focus on power and make machine learning pervasive.

“The whole idea is to deploy machine learning in everything, similar to how a microprocessor is everywhere, even in greeting cards.” 

Reimagining a Processor for Voice

Even in those early design days, Busch was certain that he wanted to build a chip for always-on voice applications. “Voice is the most effective form of communication,” he said. “More people are opting to use speech to control their devices, and machine learning is good at pattern matching, making it a natural fit for voice.”

Syntiant focused on collecting training data for its neural networks: The company amassed millions of utterances across multiple accents and languages. With this data and its training pipeline, Syntiant reports that its products have been helping customers of any size bring deep learning to all kinds of voice applications in edge devices.
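For a flavor of what such a pipeline produces, the sketch below trains a deliberately tiny keyword classifier on synthetic spectrogram-like features in PyTorch. It is a generic illustration, not Syntiant’s proprietary pipeline, and every size and label in it is invented.

```python
# Generic keyword-spotting training loop on synthetic data (PyTorch).
# Not Syntiant's pipeline; sizes, labels, and features are made up for illustration.

import torch
import torch.nn as nn

NUM_KEYWORDS = 4     # e.g. "alexa", "volume up", "volume down", "other"
FEATURE_DIM = 40     # e.g. 40 mel-filterbank energies per audio frame
FRAMES = 49          # roughly half a second of audio at a 10 ms frame hop

# A deliberately small classifier: edge models must be compact enough for their
# weights to live in on-chip memory next to the compute units.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(FEATURE_DIM * FRAMES, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_KEYWORDS),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-ins for labeled utterances (spectrogram-like features).
features = torch.randn(256, FRAMES, FEATURE_DIM)
labels = torch.randint(0, NUM_KEYWORDS, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```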

Even though voice activation is always on, devices don’t drain substantial power. In the earbuds example, according to Syntiant, the chips draw so little power that a month-long Alexa experience is possible from a single coin cell battery. This innovation is already transforming computing, bringing voice interfaces to smart devices without compromising battery life.
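That claim is easy to sanity-check with back-of-the-envelope arithmetic. The figures below assume a typical CR2032 coin cell and count only the NDP100’s own draw (not the radio or speaker), so they are a rough upper bound rather than a measured result.

```python
# Rough sanity check of the coin-cell claim. The cell capacity and voltage are
# typical CR2032 figures (assumptions); only the chip's 140 uW draw is counted.

capacity_mah = 225                               # typical CR2032 capacity
voltage_v = 3.0                                  # nominal cell voltage
energy_wh = capacity_mah / 1000 * voltage_v      # ~0.675 Wh of stored energy

chip_power_w = 140e-6                            # NDP100 draw cited above: 140 microwatts

runtime_days = energy_wh / chip_power_w / 24
print(f"Runtime at 140 uW: about {runtime_days:.0f} days")   # roughly 200 days
```

Even with radios and audio playback drawing far more than the chip itself, there is ample headroom for the month-long figure the company cites.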

Innovating for the Future of Voice

The company recently introduced the Syntiant NDP120, the latest generation of its special-purpose chips for audio and sensor processing in always-on, battery-powered applications.

Built using the Syntiant Core 2 deep learning architecture, the NDP120 applies neural processing to run multiple audio applications simultaneously with minimal power consumption. These include advanced audio features such as echo cancellation, beamforming, noise suppression, speaker identification and verification, keyword spotting, multiple wake words, event detection, and local command recognition.

“Our new NDP120 enables multi-processing functionality normally found in devices that need to be plugged into an electrical outlet,” Busch said.

Syntiant is working on emerging customer applications for its tiny machine learning (TinyML) devices. According to a recent study by tech market advisory firm ABI Research, 2.5 billion TinyML devices are expected to ship in 2030, propelled by the increasing focus on low latency, advanced automation, and the availability of low-cost, ultra-power-efficient AI chipsets.

Busch added that Syntiant’s deep learning intelligent voice solution is bringing the voice experience into reach for practically any tech company. “Our roadmap is very straightforward—we think voice is the next user interface,” Busch said. “I call voice the touch screen of the future, an always-on battery-powered interface that one can just talk to.”
