Top 3 TU-Detroit Takeaways for Improving UX in Connected Cars
As an innovator at heart, I'm passionate about advancing automotive technology and collaborating with the most innovative minds in connected cars, mobility and autonomous vehicles. Most recently, I had the opportunity to speak on a TU-Automotive Detroit panel discussing “Optimum Strategies for Improved User Experience.” The session gave me invaluable insights and predictions from my fellow panelists, a talented group of disruptors who left us inspired and eager for what's to come in the next year and beyond. Below are some of the key takeaways from the panel session.
The Driver-Centric Evolution
The time is now. Americans collectively spend 70 billion hours behind the wheel every year and are demanding seamless connectivity to the outside world from their vehicles. Connectivity and digitalization are the key disruptors driving change in the automotive industry, and they fueled much of the discovery, debate and conversation at TU-Automotive Detroit 2019. To meet the demand for in-vehicle connectivity, the technology has gone through distinct phases that balance safety, the driver experience and users' digital demands.
- First was Generation 1 UX in the form of the touchscreen. As much as the touchscreen helps keep the driver informed and focused, it requires a touch, a glance and a quick moment of distraction to operate. To increase safety by reducing the need to take eyes off the road, even for just a few seconds, the industry evolved quickly to the latest interface: voice.
- Generation 2 is the current platform: a voice-centric user interface with a mission to further reduce the driver's cognitive load and mitigate distractions, doing a better job of keeping passengers safe. With the transition to voice, there is also less need for “real estate” wars on the touchscreen/infotainment interface. Voice offers greater convenience for the driver and opens up more opportunities to embed services within the vehicle.
Reducing driver distraction and cognitive load continues to drive the rapid evolution from touchscreens, which focused on driver-centric apps and reduced driver distraction, to voice and eventually predictive automation. My fellow panelist Jacek Spiewla of Mitsubishi Electric Automotive America Inc. projected that “speech will become the primary modality with which we'll interact with the car.” However, there are many barriers to overcome before speech technology fully matures in the automotive space.
Lost in Translation
Panelist David Holecek from Volvo Car Corporation noted, "voice is fantastic when it works, but the barrier—the threshold you'll need to reach—is much higher than with an on-screen user interface, which, even if it's not the best, you can still use it. But with voice, either it works, or it doesn't."
The way I see it, there are two key building blocks for creating intuitive UX based on voice control: high-quality microphone arrays paired with algorithms that can accurately detect speech and words in a challenging vehicle environment, and natural language algorithms that make sense of what you are actually saying. However, so much can get lost in translation (and does), as voice-activated interfaces and personal conversations with AI are received very differently around the world.
Voice adoption is heavily shaped by culture. For example, users in the United States are already familiar with digital assistants like Alexa™, Google Assistant™ and Apple™ Siri®, but tend to be hesitant to use them in public spaces. By contrast, China has seen an explosion of these digital assistants, and users there are extremely comfortable talking to their devices in public. In Europe and Japan, meanwhile, attitudes are more reserved, and users are largely unaccustomed to digital assistants of this kind.
Automakers are now faced with the task of providing the best UX, along with visual feedback that fits each market's needs, across multiple different cultures. This will take a significant investment in superior technology to ensure reliability and functionality for a digitally diverse, privacy-conscious and culturally varied user base.
Predictive Intelligence—The Next Frontier
As audio algorithms improve, we will also see a move to intelligent automation using predictive intelligence to further reduce cognitive load and provide a new level of experience for the driver—what I call Generation 3.
This will begin as vehicle manufacturers migrate from reactive to proactive inputs, delivering an interface built on suggestive automation, such as the vehicle prompting an action without the driver's request. One can argue that a voice assistant that does exactly what you ask is very obedient but not particularly smart. Technology that can predict your actions and act on them, however, is not only smart but can even feel somewhat magical.
My fellow panelist, Ford Motor Company UX Evangelist Shyamala Prayaga, offered the example of a person running late to work because of traffic on their daily route. With proactive inputs, the vehicle will be able to recognize that the driver is running late and conveniently suggest a faster route, saving the driver time and delivering efficiency without the AI being intrusive. Or say you get a coffee at Starbucks every morning, but today your usual location is unexpectedly closed. Instead of letting you drive there and find out, the vehicle will proactively tell you that this location is closed today and opportunely suggest another location or route.
So, where do we go from here?
David Holecek observed that “we have failed as an industry” when it comes to thinking of the vehicle as an integrated piece of the customer's digital lifestyle. And he couldn't be more right.
UX extends beyond the vehicle; it should be a focal point at every step of the driver's journey. Predictive automations, or proactive inputs, are contextual awareness with intent, and at Chamberlain Group we take that seriously. Recognizing the driver's intent matters because it increases your true positive rate and decreases your false positive rate. Though a driver's intent is not always easy to predict, small actions can signal intent and build a prediction confidence score. The most obvious signal is the destination set in the navigation system, but small gestures from drivers, such as their turn signals, their speed as they approach their home, and even their calendar appointments, can all contribute to the prediction confidence score.
While at TU-Automotive Detroit 2019, we partnered with Mitsubishi Electric Automotive America to unveil the Chamberlain myQ® Auto™ infotainment-based access solution. In a concept vehicle retrofitted with a Mitsubishi Electric Android-based head unit, we showcased how myQ Auto lets drivers use the touchscreen or voice commands to check the status of their garage door, and to monitor, open and close it from wherever they are on their journey. As vehicle connectivity continues to grow, though, opening and closing the garage door is just the start of myQ Auto's capabilities.
As vehicles become more autonomous, they will open up more opportunities to connect drivers not only to their cars, but to their daily lives as well. And I will be along for this wild and magical ride!