Using several bands of radar at once can give cars a kind of second sight
Seeing around the corner is simulated by modeling an autonomous vehicle approaching an urban intersection with four high-rise concrete buildings at the corners. A second vehicle is approaching the center via a crossing road, out of the AV’s line of sight, but it can be detected nonetheless through the processing of signals that return either by reflecting along multiple paths or by passing directly through the buildings.
An autonomous car needs to do many things to make the grade, but without a doubt, sensing and understanding its environment are the most critical. A self-driving vehicle must track and identify many objects and targets, whether they’re in clear view or hidden, whether the weather is fair or foul.
Today’s radar alone is nowhere near good enough to handle the entire job—cameras and lidars are also needed. But if we could make the most of radar’s particular strengths, we might dispense with at least some of those supplementary sensors.
Conventional cameras in stereo mode can indeed detect objects, gauge their distance, and estimate their speeds, but they don’t have the accuracy required for fully autonomous driving. In addition, cameras do not work well at night, in fog, or in direct sunlight, and systems that use them are prone to being fooled by optical illusions. Laser scanning systems, or lidars, supply their own illumination and thus are often superior to cameras at night and in low light. Nonetheless, they can see only straight ahead, along a clear line of sight, and will therefore not be able to detect a car approaching an intersection while hidden from view by buildings or other obstacles.
Radar is worse than lidar in range accuracy and in angular resolution—the smallest angular separation between two distinct targets at which a system can still tell them apart. But we have devised a novel radar architecture that overcomes these deficiencies, making it much more effective in augmenting lidars and cameras.
Our proposed architecture employs what’s called a sparse, wide-aperture multiband radar. The basic idea is to use a variety of frequencies, exploiting the particular properties of each one, to free the system from the vicissitudes of the weather and to see through and around corners. That system, in turn, employs advanced signal processing and sensor-fusion algorithms to produce an integrated representation of the environment.
We have experimentally verified the theoretical performance limits of our radar system—its range, angular resolution, and accuracy. Right now, we’re building hardware for various automakers to evaluate, and recent road tests have been successful. We plan to conduct more elaborate tests to demonstrate around-the-corner sensing in early 2022.
Each frequency band has its strengths and weaknesses. The band at 77 gigahertz and below can pass through 1,000 meters of dense fog without losing more than a fraction of a decibel of signal strength. Contrast that with lidars and cameras, which lose 10 to 15 decibels in just 50 meters of such fog.
Rain, however, is another story. Even light showers will attenuate 77-GHz radar as much as they would lidar. No problem, you might think—just go to lower frequencies. Rain is, after all, transparent to radar at, say, 1 GHz or below.
This works, but you want the high bands as well, because the low bands provide poorer range and angular resolution. Although high frequency does not by itself guarantee a narrow beam, you can use an antenna array, or a highly directive antenna, to project the millimeter waves of the higher bands in a narrow beam, like a laser. Such a radar can compete with lidar systems, although it would still suffer from the same inability to see outside a line of sight.
For an antenna of given size—that is, of a given array aperture—the angular resolution of the beam is inversely proportional to the frequency of operation. Similarly, to achieve a given angular resolution, the required frequency is inversely proportional to the antenna size. So to achieve some desired angular resolution from a radar system at relatively low UHF frequencies (0.3 to 1 GHz), for example, you’d need an antenna array tens of times as large as the one you’d need for a radar operating in the K (18- to 27-GHz) or W (75- to 110-GHz) bands.
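To make that scaling concrete, here is a minimal sketch in Python using the textbook approximation that beamwidth in radians is roughly wavelength divided by aperture. Real arrays differ by a design-dependent factor, and the band center frequencies chosen here are merely illustrative.

```python
import math

C = 3e8  # speed of light, m/s

def aperture_for_resolution(freq_hz, resolution_deg):
    """Aperture (meters) needed for a given beamwidth, using theta ~ wavelength / aperture."""
    wavelength_m = C / freq_hz
    return wavelength_m / math.radians(resolution_deg)

# Illustrative center frequencies for each band.
for label, freq in [("UHF (0.5 GHz)", 0.5e9), ("K (24 GHz)", 24e9), ("W (77 GHz)", 77e9)]:
    print(f"{label}: {aperture_for_resolution(freq, 1.0):.2f} m for a 1-degree beam")
# UHF needs roughly 34 m of aperture; K needs ~0.7 m; W needs ~0.2 m.
```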
Even though lower frequencies don’t help much with resolution, they bring other advantages. Electromagnetic waves tend to diffract at sharp edges; when they encounter curved surfaces, they can diffract right around them as “creeping” waves. These effects are too weak to be useful at the higher frequencies of the K band and, especially, the W band, but they can be substantial in the UHF and C (4- to 8-GHz) bands. This diffraction behavior, together with lower penetration loss, allows such radars to detect objects around a corner.
Multipath reflections and through-building transmission allow the autonomous vehicle [red circle, at right in each diagram] to begin detecting the second vehicle [red rectangle, bottom of each diagram] around the 0.45-second mark, at which time the second vehicle remains firmly occluded by the bottom-left building. Because both frequency bands produce “ghost targets” [blue circles] due to reflections and multiple paths, the system employs a Bayesian algorithm to determine the true targets and remove the ghosts. The algorithm uses a combination of ray tracing and fusion of the results over time across both the UHF and C bands.
One weakness of radar is that it follows many paths, bouncing off innumerable objects, on its way to and from the object being tracked. These radar returns are further complicated by the presence of many other automotive radars on the road. But the tangle also brings a strength: The widely ranging ricochets can provide a computer with information about what’s going on in places that a beam projected along the line of sight can’t reach—for instance, revealing cross traffic that is obscured from direct detection.
To see far and in detail—to see sideways and even directly through obstacles—is a promise that radar has not yet fully realized. No one radar band can do it all, but a system that can operate simultaneously at multiple frequency bands can come very close. For instance, high-frequency bands, such as K and W, can provide high resolution and can accurately estimate the location and speed of targets. But they can’t penetrate the walls of buildings or see around corners; what’s more, they are vulnerable to heavy rain, fog, and dust.
Lower frequency bands, such as UHF and C, are much less vulnerable to these problems, but they require larger antenna elements and have less available bandwidth, which reduces range resolution—the ability to distinguish two objects of similar bearing but different ranges. These lower bands also require a large aperture for a given angular resolution. By putting together these disparate bands, we can balance the vulnerabilities of one band with the strengths of the others.
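Range resolution, unlike angular resolution, is set by signal bandwidth rather than by carrier frequency: the standard limit is the speed of light divided by twice the bandwidth. A quick sketch with illustrative bandwidths (the exact figures depend on spectrum regulations and hardware) shows why the lower bands, with less spectrum available, resolve range more coarsely.

```python
C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Standard limit on separating targets in range: c / (2 * bandwidth)."""
    return C / (2 * bandwidth_hz)

# Illustrative bandwidths: spectrum is scarce at UHF, plentiful at W band.
print(range_resolution_m(50e6))  # ~3 m with 50 MHz of bandwidth
print(range_resolution_m(1e9))   # ~0.15 m with 1 GHz of bandwidth
```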
Different targets pose different challenges for our multiband solution. The front of a car presents a smaller radar cross section—or effective reflectivity—to the UHF band than to the C and K bands. This means that an approaching car will be easier to detect using the C and K bands. Further, a pedestrian’s cross section exhibits much less variation with respect to changes in his or her orientation and gait in the UHF band than it does in the C and K bands. This means that people will be easier to detect with UHF radar.
Furthermore, the radar cross section of an object decreases when there is water on the scatterer's surface. This diminishes the radar reflections measured in the C and K bands, although this phenomenon does not notably affect UHF radars.
Another important difference arises from the fact that a signal of a lower frequency can penetrate walls and pass through buildings, whereas higher frequencies cannot. Consider, for example, a 30-centimeter-thick concrete wall. The ability of a radar wave to pass through the wall, rather than reflect off it, is a function of the wavelength, the polarization of the incident field, and the angle of incidence. For the UHF band, the transmission coefficient is around –6.5 dB over a large range of incident angles. For the C and K bands, that value falls to –35 dB and –150 dB, respectively, meaning that very little energy can make it through.
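Decibel figures can be hard to picture, so here is a quick conversion of those one-way transmission coefficients into the fraction of power that survives a pass through the wall, and through a round trip, since a return echo must cross the wall twice.

```python
# One-way wall-transmission coefficients quoted above, in dB.
wall_loss_db = {"UHF": -6.5, "C": -35.0, "K": -150.0}

for band, db in wall_loss_db.items():
    one_way = 10 ** (db / 10)  # dB to linear power fraction
    round_trip = one_way ** 2  # the echo crosses the wall twice
    print(f"{band}: {one_way:.3e} of power per pass, {round_trip:.3e} round trip")
# UHF keeps about 22 percent of its power per pass; K band keeps essentially nothing.
```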
A radar’s angular resolution, as we noted earlier, is proportional to the wavelength used; but it is also inversely proportional to the width of the aperture—or, for a linear array of antennas, to the physical length of the array. This is one reason why millimeter waves, such as the W and K bands, may work well for autonomous driving. A commercial radar unit based on two 77-GHz transceivers, with an aperture of 6 cm, gives you about 2.5 degrees of angular resolution, more than an order of magnitude worse than a typical lidar system, and too coarse for autonomous driving. Achieving lidar-standard resolution at 77 GHz requires a much wider aperture—1.2 meters, say, about the width of a car.
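As a rough check on those numbers, the same wavelength-over-aperture rule can be applied directly; commercial units, with MIMO processing and different beamwidth conventions, can do somewhat better than this raw estimate suggests.

```python
import math

C = 3e8  # speed of light, m/s

def beamwidth_deg(freq_hz, aperture_m):
    """Approximate angular resolution in degrees: wavelength / aperture."""
    return math.degrees((C / freq_hz) / aperture_m)

print(beamwidth_deg(77e9, 0.06))  # ~3.7 degrees for a 6-cm aperture
print(beamwidth_deg(77e9, 1.2))   # ~0.19 degrees for a 1.2-m aperture, lidar-class
```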
Besides range and angular resolution, a car’s radar system must also keep track of a lot of targets, sometimes hundreds of them at once. It can be difficult to distinguish targets by range when their ranges to the car differ by just a few meters. And for any given range, a uniform linear array—one whose transmitting and receiving elements are spaced equidistantly—can distinguish only as many targets as the number of antennas it has. In cluttered environments where there may be a multitude of targets, this might seem to indicate the need for hundreds of such transmitters and receivers, a problem made worse by the need for a very large aperture. That much hardware would be costly.
One way to circumvent the problem is to use an array in which the elements are placed at only a few of the positions they normally occupy. If we design such a “sparse” array carefully, so that each mutual geometrical distance is unique, we can make it behave as well as the nonsparse, full-size array. For instance, if we begin with a 1.2-meter-aperture radar operating at the K band and put in an appropriately designed sparse array having just 12 transmitting and 16 receiving elements, it would behave like a standard array having 192 elements. The reason is that a carefully designed sparse array can have up to 12 × 16, or 192, pairwise distances between each transmitter and receiver. Using 12 different signal transmissions, the 16 receive antennas will receive 192 signals. Because of the unique pairwise distance between each transmit/receive pair, the resulting 192 received signals can be made to behave as if they were received by a 192-element, nonsparse array. Thus, a sparse array allows one to trade off time for space—that is, signal transmissions with antenna elements.
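The counting argument can be illustrated with a toy calculation. In MIMO radar processing, each transmit/receive pair contributes one “virtual” element whose position is the sum of the transmitter and receiver positions; when all 12 × 16 position sums are distinct, the sparse array mimics a 192-element filled array. The layout below is a standard textbook construction, not our actual design.

```python
# Toy sparse-MIMO illustration: each transmit/receive pair yields a virtual
# element at the sum of the two positions. These element positions are a
# standard textbook layout, chosen so all pairwise sums are distinct.

tx = [i * 16 for i in range(12)]  # 12 transmitters, coarsely spaced
rx = list(range(16))              # 16 receivers, finely spaced

virtual = {t + r for t in tx for r in rx}  # positions of virtual elements
print(len(virtual))  # 192: behaves like a filled 192-element array
```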
Seeing in the rain is generally much easier for radar than for light-based sensors, notably lidar. At relatively low frequencies, a radar signal’s loss of strength is orders of magnitude lower.
In principle, separate radar units placed along an imaginary array on a car should operate as a single phased-array unit of larger aperture. However, this scheme would require the joint transmission of every transmit antenna of the separate subarrays, as well as the joint processing of the data collected by every antenna element of the combined subarrays, which in turn would require that the phases of all subarray units be perfectly synchronized.
None of this is easy. But even if it could be implemented, the performance of such a perfectly synchronized distributed radar would still fall well short of that of a carefully designed, fully integrated, wide-aperture sparse array.
Consider two radar systems at 77 GHz, each with an aperture length of 1.2 meters and with 12 transmit and 16 receive elements. The first is a carefully designed sparse array; the second places two 14-element standard arrays on the extreme sides of the aperture. Both systems have the same aperture and the same number of antenna elements. But while the integrated sparse design performs equally well no matter where it scans, the divided version has trouble looking straight ahead, from the front of the array. That’s because the two clumps of antennas are widely separated, producing a blind spot in the center.
In the widely separated scenario, we assume two cases. In the first, the two standard radar arrays at either end of a divided system are somehow perfectly synchronized. This arrangement fails to detect objects 45 percent of the time. In the second case, we assume that each array operates independently and that the objects they’ve each independently detected are then fused. This arrangement fails almost 60 percent of the time. In contrast, the carefully designed sparse array has only a negligible chance of failure.
The truck and the car are fitted with wide-aperture multiband radar from Neural Propulsion Systems, the authors’ company. Note the very wide antenna above the windshield of the truck.
Seeing around the corner can be depicted easily in simulations. We considered an autonomous vehicle, equipped with our system, approaching an urban intersection with four high-rise concrete buildings, one at each corner. At the beginning of the simulation the vehicle is 35 meters from the center of the intersection and a second vehicle is approaching the center via a crossing road. The approaching vehicle is not within the autonomous vehicle’s line of sight and so cannot be detected without a means of seeing around the corner.
At each of the three frequency bands, the radar system can estimate the range and bearing of the targets that are within the line of sight. In that case, the range of the target is equal to the speed of light multiplied by half the time it takes the transmitted electromagnetic wave to return to the radar. The bearing of a target is determined from the incident angle of the wavefronts received at the radar. But when the targets are not within the line of sight and the signals return along multiple routes, these methods cannot directly measure either the range or the position of the target.
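For the line-of-sight case, both relations are simple enough to write down directly. In this sketch the numbers are illustrative: 233 nanoseconds of round-trip time corresponds to the 35-meter standoff in our simulation, and the phase-difference method shown is the textbook two-antenna case.

```python
import math

C = 3e8  # speed of light, m/s

def range_m(round_trip_s):
    """Line-of-sight range: speed of light times half the round-trip time."""
    return C * round_trip_s / 2

def bearing_deg(delta_phi_rad, wavelength_m, spacing_m):
    """Angle of arrival from the phase difference between two antennas
    (narrowband, far-field approximation)."""
    return math.degrees(math.asin(delta_phi_rad * wavelength_m /
                                  (2 * math.pi * spacing_m)))

print(range_m(233e-9))                  # ~35 m
print(bearing_deg(1.0, 0.0039, 0.002))  # ~18 degrees for a 77-GHz antenna pair
```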
We can, however, infer the range and position of targets. First we need to distinguish between line-of-sight, multipath, and through-the-building returns. For a given range, multipath returns are typically weaker (due to multiple reflections) and have different polarization. Through-the-building returns are also weaker. If we know the basic environment—the position of buildings and other stationary objects—we can construct a framework to find the possible positions of the true target. We then use that framework to estimate how likely it is that the target is at this or that position.
As the autonomous vehicle and the various targets move and as more data is collected by the radar, each new piece of evidence is used to update the probabilities. This is Bayesian logic, familiar from its use in medical diagnosis. Does the patient have a fever? If so, is there a rash? Here, each time the car’s system updates the estimate, it narrows the range of possibilities until at last the true target positions are revealed and the “ghost targets” vanish. The performance of the system can be significantly enhanced by fusing information obtained from multiple bands.
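A stripped-down version of that update loop shows why the ghosts fade. The two-cell world and the per-frame likelihoods here are toy values standing in for the ray-tracing computation; only the Bayesian bookkeeping is meant to be faithful.

```python
def bayes_update(prior, likelihood):
    """One Bayes step over a discrete set of candidate target positions."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Two candidate positions: index 0 is the true target, index 1 a ghost.
belief = [0.5, 0.5]

# Each new frame of returns fits the true position slightly better, because
# the expected multipath geometry and polarization match it more closely.
for _ in range(6):
    belief = bayes_update(belief, [0.7, 0.3])

print(belief)  # ~[0.99, 0.01]: belief in the ghost decays toward zero
```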
We have used experiments and numerical simulations to evaluate the theoretical performance limits of our radar system under various operating conditions. Road tests confirm that the radar can detect signals coming through occlusions. In the coming months we plan to demonstrate around-the-corner sensing.
The performance of our system in terms of range, angular resolution, and ability to see around a corner should be unprecedented. We expect it will enable a form of driving safer than we have ever known.
Behrooz Rezvani, a solid-state physicist, is the CEO of Neural Propulsion Systems, in Pleasanton, Calif., a driving-technology startup he cofounded in 2017.
Babak Hassibi is cofounder and chief technologist at driving technology startup Neural Propulsion Systems, in Pleasanton, Calif. He is also a professor of electrical engineering and computer science at California Institute of Technology, in Pasadena.
Fredrik Brännström is head of the communications systems group at Chalmers University of Technology, in Gothenburg, Sweden. He is also a research scientist at Neural Propulsion Systems, in Pleasanton, Calif.
Majid Manteghi is professor of electrical engineering at Virginia Tech, in Blacksburg.
Liberty Lifter X-plane will leverage ground effect
Arguably, the primary job of any military organization is moving enormous amounts of stuff from one place to another as quickly and efficiently as possible. Some of that stuff is weaponry, but the vast majority are things that support that weaponry—fuel, spare parts, personnel, and so on. At the moment, the U.S. military has two options when it comes to transporting large amounts of payload. Option one is boats (a sealift), which are efficient, but also slow and require ports. Option two is planes (an airlift), which are faster by a couple of orders of magnitude, but also expensive and require runways.
To solve this, the Defense Advanced Research Projects Agency (DARPA) wants to combine traditional sealift and airlift with the Liberty Lifter program, which aims to “design, build, and flight test an affordable, innovative, and disruptive seaplane” that “enables efficient theater-range transport of large payloads at speeds far exceeding existing sea lift platforms.”
DARPA is asking for a design like this to take advantage of ground effect, which occurs when a wing flies close to a surface, within roughly its own span: the ground constrains the wing’s downwash and tip vortices, boosting lift and reducing induced drag to yield a substantial overall improvement in efficiency. Ground effect works on both water and land, but you can take advantage of it for only so long on land before your aircraft runs into something. That’s why oceans are the ideal place for these aircraft—or ships, depending on your perspective.
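For a rough sense of the payoff, one widely cited approximation, often attributed to aerodynamicist Barnes McCormick, ties the induced-drag reduction to the ratio of height above the surface to wingspan. Plugging in the Lun-class ekranoplan’s approximate figures, described below, is purely illustrative.

```python
def induced_drag_ratio(height_m, wingspan_m):
    """Induced drag in ground effect divided by induced drag in free flight,
    per a commonly cited approximation attributed to McCormick."""
    x = (16 * height_m / wingspan_m) ** 2
    return x / (1 + x)

# Lun-class ekranoplan: ~44-m wingspan, flying ~4 m above the water.
print(induced_drag_ratio(4, 44))  # ~0.68: roughly a third less induced drag
```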
During the late 1980s, the Soviets (and later the Russians) leveraged ground effect in the design of a handful of awesomely bizarre ships and aircraft. There’s the VVA-14, which was also an airplane, along with the vehicle shown in DARPA’s video above, the Lun-class ekranoplan, which operated until the late 1990s. The video clip really does not do this thing justice, so here’s a better picture, taken a couple of years ago:
The Lun (only one was ever made) had a wingspan of 44 meters and was powered by eight turbojet engines. It flew about 4 meters above the water at speeds of up to 550 kilometers per hour, and could transport almost 100,000 kilograms of cargo for 2,000 km. It was based on an earlier, even larger prototype (the largest aircraft in the world at the time) that the CIA spotted in satellite images in 1967 and which seems to have seriously freaked them out. It was nicknamed the Caspian Sea Monster, and it wasn’t until the 1980s that the West understood what it was and how it worked.
In the mid 1990s, DARPA itself took a serious look at a stupendously large ground-effect vehicle of its own, the Aerocon Dash 1.6 wingship. The concept image below is of a 4.5-million-kg vehicle, 175 meters long with a 100-meter wingspan, powered by 20 (!) jet engines:
With a range of almost 20,000 km at over 700 km/h, the wingship could have carried 3,000 passengers or 1.4 million kg of cargo. By 1994, though, DARPA had decided that the potential billion-dollar project to build a wingship like this was too risky, and canceled the whole thing.
Less than 10 years later, Boeing’s Phantom Works started exploring an enormous ground-effect aircraft, the Pelican Ultra Large Transport Aircraft. The Pelican would have been even larger than the Aerocon wingship, with a wingspan of 152 meters and a payload of 1.2 million kg—that’s about 178 shipping containers’ worth. Unlike the wingship, the Pelican would take advantage of ground effect to boost efficiency only in transit above water, but would otherwise use runways like a normal aircraft and be able to reach flight altitudes of 7,500 meters. Operating as a traditional aircraft and with an optimal payload, the Pelican would have a range of about 12,000 km. In ground effect, however, the range would have increased to 18,500 km, illustrating the appeal of designs like these. But Boeing dropped the project in 2005 to focus on lower cost, less risky options.
We’d be remiss if we didn’t at least briefly mention two other massive aircraft: the H-4 Hercules, the cargo seaplane built by Hughes Aircraft Co. in the 1940s, and the Stratolaunch carrier aircraft, which features a twin-fuselage configuration that DARPA seems to be favoring in its concept video for some reason.
From the sound of DARPA’s announcement, they’re looking for something a bit more like the Pelican than the Aerocon Dash or the Lun. DARPA wants the Liberty Lifter to be able to sustain flight out of ground effect if necessary, although it’s expected to spend most of its time over water for efficiency. It won’t use runways on land at all, though, and should be able to stay out on the water for 4 to 6 weeks at a time, operating even in rough seas—a significant challenge for ground-effect aircraft.
DARPA is looking for an operational range of 7,500 km, with a maximum payload of at least 90,000 kg, including the ability to launch and recover amphibious vehicles. The hardest thing DARPA is asking for could be that, unlike most other X-planes, the Liberty Lifter should incorporate a “low cost design and construction philosophy” inspired by the mass-produced Liberty ships of World War II.
With US $15 million to be awarded to up to two Liberty Lifter concepts, DARPA is hoping that at least one of those concepts will pass a system-level critical design review in 2025. If everything goes well after that, the first flight of a full-scale prototype vehicle could happen as early as 2027.
The publication was recognized for its editorial excellence, website, and art direction
The IEEE editorial and art team show off two of their five awards.
IEEE Spectrum garnered top honors at this year’s annual Jesse H. Neal Awards ceremony, held on 26 April. Known as the “Pulitzer Prizes” of business-to-business journalism, the Neal Awards recognize editorial excellence. The awards are given by the SIIA (Software and Information Industry Association).
For the fifth year in a row, IEEE Spectrum was named Best Media Brand, an award given for overall editorial excellence.
IEEE Spectrum also received four other awards.
“The talented, dedicated team that produces the world’s best tech magazine day in and day out deserved to win Best Media Brand for the fifth year running,” says Harry Goldstein, IEEE Spectrum’s acting editor in chief. “We’re also delighted that Evan Ackerman’s astounding body of work on robotics earned an award, along with our Hands On column, written and curated by Stephen Cass and David Schneider, assisted by online art director Erik Vrielink.
“And speaking of art direction,” Goldstein says, “our other two Neals, for Best Cover and Best Single Article Treatment, came through the efforts of our staff including Brandon Palacio, Randi Klett, and Mark Montgomery.”
Register for this webinar to enhance your modeling and design processes for microfluidic organ-on-a-chip devices using COMSOL Multiphysics
If you want to enhance your modeling and design processes for microfluidic organ-on-a-chip devices, tune into this webinar.
You will learn methods for simulating the performance and behavior of microfluidic organ-on-a-chip devices and microphysiological systems in COMSOL Multiphysics. Additionally, you will see how to couple multiple physical effects in your model, including chemical transport, particle tracing, and fluid–structure interaction. You will also learn how to distill simulation output to find key design parameters and obtain a high-level description of system performance and behavior.
There will also be a live demonstration of how to set up a model of a microfluidic lung-on-a-chip device with two-way coupled fluid–structure interaction. The webinar will conclude with a Q&A session. Register now for this free webinar!