Just like with any new technology, autonomous cars have faced their fair share of speed bumps from concept to product, especially when it comes to navigating in inclement weather. Recent developments in autonomous car technology, however, could be the solution to this problem.
Similar to human drivers, self-driving vehicles can have trouble "seeing" in inclement weather such as rain or fog. The car's sensors can be blocked by snow, ice or torrential downpours, and their ability to "read" road signs and markings can be impaired.
The vehicle relies on two technologies, LiDAR and radar, for visibility and navigation, but each has its shortcomings. LiDAR works by bouncing laser beams off surrounding objects and can produce a high-resolution 3D picture on a clear day, but it cannot see in fog, dust, rain or snow.
"A lot of automatic vehicles these days are using LiDAR, and these LiDAR are basically lasers, that shoot lasers that keep rotating to create points for a particular object," Kshitiz Bansal, a computer science and engineering Ph.D. student at University of California San Diego, told AccuWeather in an interview.
However, Bansal said, all of those lasers bounce off fog, rain or snow particles and cannot deliver the perception the car needs. It has been a challenge for these advanced cars to drive when their sensors cannot sense the road and other objects through snow, or when visibility is limited by rain or fog.
Thanks to a team of electrical engineers at the University of California San Diego, self-driving cars are one step closer to navigating safely in inclement weather of all types.
The team has spent more than a year and a half developing a new way to improve the imaging capability of existing radar sensors so that they accurately predict the shape and size of objects in an autonomous car's view. Radar, which transmits radio waves, can see in all weather, but it captures only a partial picture of the road scene. This is where the team came in to improve how radar sees.
"It's a LiDAR-like radar," said Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. It's an inexpensive approach to achieving bad weather perception in self-driving cars, he noted. "Fusing LiDAR and radar can also be done with our techniques, but radars are cheap. This way, we don't need to use expensive LiDARs."
The system consists of two radar sensors placed on the hood. The arrangement is key: together, the two sensors can see more space and detail than a single radar sensor could on its own.
"Say a car is coming toward you, what I would want is not just detecting that there is a car, but I would also want to know at what speed that car is coming toward me, what is the dimension of that car, particularly the length, width and height. And where the car is positioned. So, basically, the difference is that just a point detection is not enough," Bansal said. "We need a lot of points on a particular car so that we can estimate all these dimensions and high-quality features of the object."
To test the concept, the engineers completed test drives on clear days and nights, showing their system performed as well as a LiDAR sensor at determining the dimensions of cars moving in traffic. And when the team added simulated fog, the system's performance did not change.
When the team "hid" another vehicle using a fog machine, their system accurately predicted its 3D geometry while the LiDAR sensor essentially failed the test.
"So, for example, a car that has LiDAR, if it's going in an environment where there is a lot of fog, it won't be able to see anything through that fog. But at the same time, with our radar, they can actually pass through all these bad weather conditions and can even see through fog or snow. And this is something that we also show in our book. And has been established with some past tests," Bansal said.
The team uses millimeter-wave radar, which operates at wavelengths of just a few millimeters. That frequency sits at a sweet spot that yields many more points for a particular object.
"For the radar that is used for automatic vehicles, we want a very high-resolution detection. So, for example, if you're doing a test, as long as you see a dot followed by a particular object, you say okay, I have detected this object. Like when we're talking about automatic vehicles, it's not just about detecting the presence of a vehicle, it's also about actually estimating the dimensions of the object," Bansal said.
Traditional radar has typically suffered from poor imaging quality because when radio waves are transmitted and bounced off objects, only a small fraction of signals get reflected back to the sensor. As a result, vehicles, pedestrians and other objects appear as a sparse set of points.
"This is the problem with using a single radar for imaging. It receives just a few points to represent the scene, so the perception is poor. There can be other cars in the environment that you don't see," Bansal said. "So if a single radar is causing this blindness, a multi-radar setup will improve perception by increasing the number of points that are reflected back."
The team found that two eyes are better than one, which is why they spaced the two radar sensors 1.5 meters apart on the hood of the car.
"By having two radars at different vantage points with an overlapping field of view, we create a region of high-resolution, with a high probability of detecting the objects that are present," Bansal said.
"What happens is that by using these multiple radars, we can generate more points than the LiDAR would generate. With this, we finally create a system that can dig into these points generated by the radar and really give out the parameters that we've been talking about this object, the length, the width and all the other estimates that we want for a particular object," Bansal said.
However, it wasn't that simple. More radars also mean more noise, Bharadia noted.
When radars pick up noise, random points that do not belong to any object commonly appear in the radar images. According to the team, the sensors can also pick up what are called echo signals: reflections of radio waves that do not come directly from the objects being detected.
To fix this problem, the team developed new algorithms that can fuse the information from two different radar sensors together and produce a new image free of noise. This led the team to one of their other innovations -- the first dataset combining data from two radars.
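One simple way to picture that kind of cross-radar noise rejection is to keep only the points that both sensors agree on. The sketch below is a hypothetical heuristic, not the team's published fusion algorithm, and the 0.5-meter agreement threshold is an assumed value.

```python
import numpy as np
from scipy.spatial import cKDTree

def cross_validate_points(points_a, points_b, max_dist=0.5):
    """Keep only points from radar A that are corroborated by radar B.

    points_a, points_b: (N, 3) arrays in a common frame (e.g. after merging
    the two clouds). Random noise and echo returns tend to appear in only one
    sensor's view, so any point with no nearby counterpart in the other
    radar's cloud is dropped. max_dist is the agreement threshold in meters.
    """
    tree = cKDTree(points_b)
    dists, _ = tree.query(points_a, k=1)
    return points_a[dists <= max_dist]
```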
"There are currently no publicly available datasets with this kind of data, from multiple radars with an overlapping field of view," Bharadia said. "We collected our own data and built our own dataset for training our algorithms and for testing."
The dataset consists of 54,000 radar frames of driving scenes during the day and night in live traffic, as well as in simulated fog conditions. Future work will include collecting more data in the rain. To do this, the team will first need to build better protective covers for their hardware.
The team's success has opened the door to working with Toyota to fuse the new radar technology with cameras. The researchers say the combination could potentially replace LiDAR in the future.
"Radar alone cannot tell us the color, make or model of a car. These features are also important for improving perception in self-driving cars," Bharadia said.