Design considerations for a multi-modal approach in the vehicle: An interview with Emi MacLeod, UX Designer for Alexa Auto, and Arianne Walker, Chief Evangelist for Alexa Auto

Arianne Walker Dec 08, 2020

Walker: Today we’re going to talk to Emi MacLeod, UX designer on the Alexa Auto UX Team. Emi, can you tell us a little bit about what you do at Alexa Auto?

MacLeod: As a UX designer, I focus on creating the experiences you see and hear from Alexa in the car. I specifically focus on mobile experiences like Auto Mode.

Walker: Can you describe what Auto Mode is?

MacLeod: Auto Mode is part of the Alexa app. It uses the customer’s smartphone as a companion visual interface to complement the Alexa voice experience in the vehicle when paired with a compatible Alexa automotive device.

Walker: Alexa is a voice-first experience. Can you share more about how adding visuals helps customers while they are driving?

MacLeod: Our goal with Auto Mode is to make voice the primary way customers interact with Alexa in the vehicle. To do that, it is important to think through the best way for Alexa to respond given all of the options at her disposal. Turning the phone into a driver-friendly visual interface gives customers an easy way to remember the options in a list, quick taps to pause music or hang up a phone call, and an at-a-glance view of what Alexa can do for them – all while keeping their hands on the wheel and their focus on the road. The technical term for this is redundancy gain.

Walker: What does that mean for customers?

MacLeod: Research suggests that people comprehend multi-modal information more effectively, and that reinforcing voice with visuals may help with memory and with processing auditory information. Our investigation, conducted with an independent third party, indicated that tasks were completed in less time with a multi-modal interface than with a voice-only one. Ultimately, this means that customers can get information, make a decision, and move on even more quickly with a glanceable multi-modal interface, for a low-distraction experience.

Walker: What are the most important design considerations for a multi-modal approach in the vehicle?

MacLeod: There are three key design considerations that we applied for Auto Mode. First, it was important to give customers an easy way to use visuals to refresh their memories and reduce their cognitive load while they drive. For example, when you ask Alexa for the nearest coffee shops, it might be difficult to remember all of the options she reads out to you. We provide a glanceable visual aid so that a customer can tap or ask for their choice without having to remember every option on the list, what order they were in, how far away they are, and so on.

[Image: Coffee shop results in Auto Mode]

Second, when designing these visual aids, we also make them very easy to see and touch. In practice, that means bigger, simplified tap targets and surfacing the options a customer is likely to use frequently. For example, rather than asking Alexa to provide directions to work, drivers can quickly tap on the work card and start navigating. We keep these visual elements simple so that, at a quick glance, the customer knows exactly what they are.

[Image: Auto Mode home screen]

And third, it is important to keep in mind that voice isn’t necessarily the best choice for everything. For example, it can be challenging to ask Alexa to skip a song with your voice while music is playing, or to answer an incoming call by speaking. With big touch targets in places like the Now Playing screen, we can give drivers another option for controlling their in-vehicle experience. This also lets drivers avoid awkward interactions, like asking Alexa to hang up a call while the person on the other end can still hear them.

[Image: Auto Mode music playback screen]
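To make the first consideration concrete, here is a minimal sketch of the tap-or-ask pattern, assuming a single selection handler that both the touch layer and the voice layer dispatch to. All types and names below are illustrative only; they are not part of the Alexa Auto SDK or the Alexa app.

```kotlin
// Hypothetical sketch: one shared selection path for tap and voice, so a
// driver can either tap a result card or say "the second one". All names
// here are illustrative; this is not Amazon's implementation.

data class PlaceResult(val name: String, val distanceMiles: Double)

class NearbyListController(
    private val results: List<PlaceResult>,
    private val onSelect: (PlaceResult) -> Unit
) {
    // Driver taps a result card on the screen (zero-based index).
    fun onCardTapped(index: Int) = select(index)

    // Voice service resolves an utterance such as "the second one"
    // to a one-based ordinal.
    fun onVoiceOrdinal(ordinal: Int) = select(ordinal - 1)

    private fun select(index: Int) {
        results.getOrNull(index)?.let(onSelect)
    }
}

fun main() {
    val controller = NearbyListController(
        listOf(
            PlaceResult("Bean There Coffee", 0.4),
            PlaceResult("Daily Grind", 1.1),
            PlaceResult("Roast House", 2.3)
        )
    ) { println("Navigating to ${it.name} (${it.distanceMiles} mi)") }

    controller.onCardTapped(0)   // tap on the first card
    controller.onVoiceOrdinal(2) // "the second one"
}
```

Because both modalities funnel into the same handler, the list the driver sees always matches what Alexa read aloud, which is the redundancy gain MacLeod describes.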

Walker: What challenges still exist for a multi-modal approach?

MacLeod: One of the biggest challenges in designing multi-modal interfaces for the vehicle is that multi-modality is still relatively new in the car. Screens and basic voice recognition software have been in vehicles for quite some time, but robust voice services are much more recent, so pairing the two is still a young practice. Thoroughly testing these experiences ensures that we provide solutions that deliver value while minimizing distraction, and it deepens our understanding of these challenges.

Walker: How does the UX team think about best practices for a multi-modal experience for the automotive industry?

MacLeod: We start by benchmarking against NHTSA and JAMA standards when designing any automotive experience. Additionally, we apply our own extensive testing using a variety of methods. In terms of best practices, we publish guidelines for voice-first and multi-modal experiences in the car using Alexa in our own HMI Guidelines. One example from the guidelines focuses on voice-forward design. The guiding principle is to help drivers complete tasks without taking their eyes off the road or hands off the wheel. In practice, that means information on the screen (whether in Auto Mode or on the infotainment screen embedded in the vehicle) should augment what Alexa is saying, but the customer shouldn’t have to look away from the road to complete a task. Any control on the screen should also be operable by voice. For example, if the media player shows a skip-forward-30-seconds control for Audible books, then the customer should be able to say “Alexa, skip forward.”
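The voice-parity guideline MacLeod describes can be sketched as a small dispatch table in which every on-screen control registers one action that both the tap handler and the voice layer invoke. This is a minimal illustration under that assumption, not the HMI Guidelines’ or the Alexa app’s actual API.

```kotlin
// Hypothetical sketch of voice/touch parity: each on-screen control maps to
// one action, reachable by tap or by a resolved voice intent. Identifiers
// are illustrative, not an Amazon API.

class MediaPlayerActions {
    private val actions = mutableMapOf<String, () -> Unit>()

    // Register one action for both input modalities.
    fun register(id: String, action: () -> Unit) {
        actions[id] = action
    }

    // Driver taps the large on-screen control.
    fun onTap(id: String) = actions[id]?.invoke()

    // Voice service resolves "Alexa, skip forward" to the same id.
    fun onVoiceIntent(id: String) = actions[id]?.invoke()
}

fun main() {
    val player = MediaPlayerActions()
    player.register("skip_forward_30") { println("Skipping forward 30 seconds") }

    player.onTap("skip_forward_30")         // big tap target on the screen
    player.onVoiceIntent("skip_forward_30") // "Alexa, skip forward"
}
```

Registering a single action per control is one simple way to guarantee that nothing on the screen is reachable only by touch, which is the heart of the guideline.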

For more information about multi-modal design, explore the HMI Guidelines and read the Auto Mode blog post.
