In his enthusiastically received tutorial at the 2019 Vehicle Displays and Interfaces Symposium, held September 24 to 25 at the Burton Manor Conference Center in Livonia, Michigan, Bryan Reimer of the MIT AgeLab (Cambridge, MA) said “it could be 100 years before automated cars can be built profitably. We’re better off looking at driver assistance.”
That may be a bit extreme, but after Doug Patton (Principal at Jupiter Consulting LLC and former Executive Vice President of Engineering at Denso) delivered his keynote address on human-machine interfaces (HMI) and autonomous driving, he received this one-word question: “When?” To which he responded,
“When will Level 5 [complete automation] become common? A guess: 2050.”
These comments are remarkable because, three years ago, Ford was predicting with great apparent confidence that it would have substantial numbers of fully autonomous ride-sharing vehicles on the road by 2021, and similar predictions were common throughout the industry. What happened? Self-driving (full automation under all conditions) turned out to be far more difficult than engineers had imagined, a fact that became clear as the hoped-for easy solutions to critical problems failed to materialize. In just a few years, corporate enthusiasm has given way to the recognition that a very long slog lies ahead. Reimer:
“The trough of disillusionment [for full automation] will last for decades, not years.”
Last week, the estimable Junko Yoshida reported in EET Asia that Daimler chief executive Ola Källenius had revealed plans to scale back the automaker’s investment in robo-taxis. This more recent data point supports the positions of Reimer and Patton, but Yoshida explored a different angle. Not only is complete automation a long way off and the further development of existing advanced driver assistance systems (ADAS) a more productive approach, but
“the development track for assisted driving is different from the development track for autonomous driving, and car OEMs are going to have to pick a lane.”
Yoshida quotes Ian Riches (Executive Director for the Global Automotive Practice at Strategy Analytics) as saying
“ADAS and autonomous have been, of necessity, on very different development paths, typically with different teams.”
Riches explained that the sensor suites, processing power, and vehicle architectures required for autonomous driving are much too expensive for mass-market ADAS features, and it is not practical to scale down a system designed for L4 or L5 autonomy to meet ADAS cost points. Nor is it practical to go the other way, to scale up
“the typically stand-alone, discrete architectures, modest processing power and limited sensor suites required by ADAS [to realize] an L4 solution.”
Not everyone agrees. Phil Magney (Founder and Principal of VSI Labs) told Yoshida he doesn’t believe ADAS and robotaxis are mutually exclusive.
“Incremental automation is what Tesla is doing. You build up the fundamentals and collect as much data as you can. Eventually you will have gained enough know-how to apply it to robotaxis.”
Even Riches at Strategy Analytics is hopeful that
“the cross-over point where a combined development team and approach may make sense is perhaps coming into sight. The growing interest in so-called ‘L2+’ features could be the point that these two worlds start to come together.”
At this level, we could see an L4 style of architecture with centralized control, but
“without the redundancy across sensors, processing and actuation needed to make the system sufficiently robust for L4 operation.”
But others still maintain that vehicles designed for a human driver and vehicles with no human driver at all rely on fundamentally different design principles. Carlos Holguin (co-founder of AutoKAB) told Yoshida,
“As the famous saying goes, ‘the electric light did not come from the continuous improvement of candles.’ Better ADAS would not make a fully autonomous vehicle.”
Three things are clear:
1. ADAS continue to become more effective, to cover more functions, and to be incorporated in an ever-widening range of vehicles.
2. As ADAS become more sophisticated so do HMI systems, including displays, cameras, touch controls, haptics, gesture-control, and audio communication.
3. Commercial implementation of L4 and L5 autonomous systems is receding toward the horizon. I will still have to teach my granddaughter how to drive, but she might not have to do that for her granddaughter. (KW)
Ken Werner is Principal of Nutmeg Consultants, specializing in the display industry, manufacturing, technology, and applications, including mobile devices, automotive, and television. He consults for attorneys, investment analysts, and companies re-positioning themselves within the display industry or using displays in their products. He is the 2017 recipient of the Society for Information Display’s Lewis and Beatrice Winner Award. You can reach him at [email protected] or www.nutmegconsultants.com.