What it takes to achieve automotive's “Vision Zero”
How high-integrity sensor technologies help build the cognitive vehicles of the future
Chris Jacobs joined ADI in 1995. During his tenure at Analog Devices, he has held a number of design engineering, design management, and business leadership positions across the Consumer, Communications, Industrial, and Automotive teams. He is currently Vice President of the Autonomous Transportation & Automotive Safety business unit at Analog Devices. Prior to this, he was General Manager of Automotive Safety, Product and Technology Director of Precision Converters, and Product Line Director of High Speed Converters & Isolation Products.
Traditional driving may soon be viewed as archaic. A disruptive shift is under way from human-steered vehicles to autonomous ones, and it requires a holistic ecosystem to spur development, a structural transformation touching a large share of the global economy. Still, safety remains the paramount hurdle this ecosystem must clear before driverless travel becomes a true reality.
More than 3,000 road crash deaths occur worldwide every day. Removing humans from the equation is one way to address this, and as a result, technology providers, Tier-1 suppliers, original equipment manufacturers (OEMs), and automakers are embracing new business models and making big bets to accelerate the maturation of key autonomous driving technologies. The aim is to achieve Vision Zero, the goal of no loss of life caused by vehicles, because autonomous deployment cannot reach its fullest potential without it.
Core sensor technologies help attain higher-level vehicle autonomy
Vehicle intelligence is typically expressed in terms of autonomy levels. Levels 1 and 2 are largely warning systems, whereas at Level 3 and above, the vehicle can act to avoid accidents. By Level 5, the steering wheel is removed and the car operates entirely on its own. In these first few system generations, as vehicles take on Level 2 functionality, sensor systems operate independently. To reach fully cognitive autonomous vehicles, the number of sensors must rise significantly, and their performance and response times must vastly improve.
Vehicles with more external sensors can become more fully aware of their surroundings and prove safer as a result. Technologies critical to the AI systems capable of navigating an autonomous vehicle include cameras, LiDAR, RADAR, inertial measurement units (IMUs) built on microelectromechanical systems (MEMS), ultrasound, and GPS. Along with supporting an autonomous vehicle’s perception and navigation systems, these sensors can better monitor mechanical conditions (e.g. tire pressure, changes in weight) and other maintenance factors that might affect motor functions such as braking and handling.
While such sensors and sensor fusion algorithms may help achieve Vision Zero, several factors must be considered, the first of which is object classification. Current systems cannot achieve the resolution required for object classification, but RADAR, given its micro-Doppler capabilities, is more capable in this area. Although currently a premium feature in autonomous vehicles, RADAR will become more common as the AEB (automatic emergency braking) mandate becomes a reality in the early 2020s.
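To see why micro-Doppler helps with classification, consider the frequency shift that small limb or wheel motions produce in a radar return. The sketch below is a back-of-the-envelope illustration only, assuming a 77 GHz automotive radar (a common band, though one the article does not specify):

```python
C = 3.0e8         # speed of light, m/s
F_CARRIER = 77e9  # assumed automotive radar carrier frequency, Hz

def doppler_shift_hz(radial_speed_mps, f_carrier=F_CARRIER):
    """Doppler shift of a reflection from a part moving at the given radial speed."""
    return 2.0 * radial_speed_mps * f_carrier / C

# A pedestrian's torso drifting at ~1 m/s and a swinging arm at ~2 m/s produce
# distinct, measurable shifts; that spread of shifts is the micro-Doppler
# signature a classifier can use to tell a pedestrian from a static object.
print(doppler_shift_hz(1.0))  # ~513 Hz
print(doppler_shift_hz(2.0))  # ~1027 Hz
```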
LiDAR, meanwhile, is not a standard feature in cars today, as it is not yet at the right cost or performance point to warrant broader adoption. Yet LiDAR can provide roughly 10 times the image resolution of RADAR, which is needed to discern even more nuanced scenes. Getting to a high-quality solution, meaning high sensitivity with low dark current and low capacitance, is the key to enabling the 1,500 nm LiDAR market and may drive its wider adoption. Solid-state beam steering is another key capability, and a high-sensitivity, lower-cost photodetector technology is needed to push the market to 1,500 nm.
Camera systems, common in new vehicles today, are a cornerstone of Level 2 autonomy. However, they do not work well in all use cases (e.g. at night or in inclement weather). Ultimately, these perception technologies are needed to provide the most comprehensive data set to the systems designed to keep vehicle occupants safe.
Though often overlooked, IMUs depend upon gravity, which is constant regardless of environmental conditions. As such, they are very useful for dead-reckoning. In the temporary absence of a GPS signal, dead-reckoning uses data from sources such as the speedometer and IMUs to estimate distance traveled and direction, and overlays this data onto high-definition maps. This keeps a cognitive vehicle on the right trajectory until a GPS signal can be recovered.
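The mechanics of dead-reckoning are straightforward to sketch. The snippet below is a minimal illustration, not any particular production implementation: it assumes only a speedometer signal and a yaw rate from the IMU's gyroscope, whereas a real system would also estimate sensor biases and snap the propagated position onto the high-definition map.

```python
import math

def dead_reckon(x, y, heading, samples, dt=0.01):
    """Propagate a 2D position estimate while GPS is unavailable.

    x, y     -- last known position in metres (e.g. the last good GPS/map fix)
    heading  -- last known heading in radians
    samples  -- iterable of (speed_mps, yaw_rate_radps) pairs taken every dt seconds:
                speed from the speedometer, yaw rate from the MEMS gyro in the IMU
    """
    for speed, yaw_rate in samples:
        heading += yaw_rate * dt             # integrate gyro output to track direction
        x += speed * math.cos(heading) * dt  # project distance travelled onto the map frame
        y += speed * math.sin(heading) * dt
    return x, y, heading
```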
Sensor fusion can compensate for the shortcomings of individual perception sensing systems. What is required here is an intelligent balance between central and edge processing to drive data to the fusion engine. Cameras and LiDAR sensors provide excellent lateral resolution, but even the best machine learning algorithms require roughly 300 ms to detect lateral movement with sufficiently low false alarm rates. In today’s systems, around 10 or more successive frames are needed for reliable detection with low enough false alarm rates. This needs to be lowered to one or two successive frames to give the vehicle more time to take the necessary evasive action.
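The arithmetic behind those figures is easy to reproduce. The sketch below assumes a 30 frames-per-second sensor (the article does not state a frame rate) and simply converts the number of frames needed for confirmation into latency and into distance covered at highway speed:

```python
FRAME_RATE_HZ = 30.0  # assumed sensor frame rate; not stated in the article

def detection_latency_s(frames_required, frame_rate_hz=FRAME_RATE_HZ):
    """Time spent accumulating successive frames before a detection is confirmed."""
    return frames_required / frame_rate_hz

def distance_travelled_m(speed_kmh, latency_s):
    """How far the vehicle moves while the fusion engine is still deciding."""
    return (speed_kmh / 3.6) * latency_s

for frames in (10, 2):
    t = detection_latency_s(frames)
    print(f"{frames} frames: {t * 1000:.0f} ms, "
          f"{distance_travelled_m(120, t):.1f} m travelled at 120 km/h")
# 10 frames: 333 ms, 11.1 m travelled at 120 km/h
#  2 frames:  67 ms,  2.2 m travelled at 120 km/h
```

Under that assumed frame rate, cutting confirmation from ten frames to two saves roughly a quarter of a second, or about 9 m of travel at highway speed, which is consistent with the ~300 ms figure above.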
New technology that enables advanced perception at high speeds needs to be developed to support fully autonomous driving in both highway and city conditions. However, the more this work progresses, the more complex use cases will be identified that need to be addressed. Furthermore, inertial navigation will be a critical aspect of future autonomous vehicles, as these systems are impervious to environmental conditions and are needed to complement perception sensors, which can be impaired in certain situations.
The role of ADAS and full autonomy
Another major, non-technical factor one must consider in the goal of achieving Vision Zero is finding a balance between what technology can do and what legislation will allow.
Currently, industry leaders follow two tracks: advanced driver assistance systems (ADAS) and fully autonomous vehicles. While the automotive industry feels more assured about ADAS than about fully autonomous vehicles, ADAS technology is still not perfect.
OEMs and Tier 1 automotive suppliers are currently focused on Level 2 and Level 3 autonomy, which they view as good business opportunities. Legislation associated with highly autonomous vehicles isn’t firm yet, and other areas such as insurance and regulation need to be explored further to put a proper framework in place. Robo-taxis, for example, are poised to debut in several US cities. These vehicles will likely arrive on top of the broader Level 2 and Level 3 applications already in place.
Much more work is also needed to improve the performance of specific sensing technologies such as radar and LiDAR, and of the various algorithms that actuate the vehicle under different conditions. In 2020 and beyond, as AEB becomes more of a standard feature in cars, the industry formally starts shifting toward Level 3 autonomy. However, further improvements are required to get from where automakers are today to where they need to be to achieve this.
OEMs have embraced this two-track dynamic. With robo-taxis, for example, they recognize that the economics of the business are entirely different from mass-market automotive, as it is built around ride-sharing services. That market also lets OEMs place advanced technology in these vehicles to mature the hardware, software, and sensor fusion framework. Even though OEMs have more faith in ADAS, they are increasingly creating separate companies to pursue greater levels of vehicle autonomy. There are also OEMs that do not have the research and development capital to follow this course and instead partner with companies that specialize in autonomous driving technologies.
In the middle of this two-track system lies Level 3+ autonomy. Though not fully autonomous, Level 3+ is more advanced than existing ADAS systems and combines premium performance features with practical functions. Much higher-performance sensors are needed to support Level 3+ applications such as full-speed highway autopilot and AEB+, in which the vehicle not only brakes but also swerves to avoid an accident. Level 3+ features highly autonomous technologies, including a critical sensor framework that lays the foundation for future fully autonomous vehicles.
Although we are not yet at the point of full autonomy, Level 3+ automation gets us closer to achieving the goal of Vision Zero, as it balances practicality and performance and combines developments from the two tracks into a safe transportation ecosystem. This is the inflection point at which autonomous technology becomes much more capable and available to the public.
Journey to Vision Zero
Regardless of industry leaders’ different approaches to reaching Vision Zero, a diversity of high-performance perception and navigation sensors will help get us there. High-quality data generated from these sensors also helps ensure that decision-making software makes the correct decision, every time. The journey to Vision Zero and the journey to full autonomy follow the same road. Every player in the ecosystem must keep that top of mind in the coming years, given that the goal of autonomous vehicle development is to usher in a new technological and business-model era as well as to save lives.
Chris Jacobs, Vice President of Autonomous Transportation and Automotive Safety at Analog Devices