The following abstract is taken from an Electronics Weekly article discussing how RADAR technology for autonomous vehicles and ADAS is challenging the role of LIDAR in vehicle safety. The article was first published on 13th June 2018 and was authored by Dr David Wheeler, Technical Director, EnSilica.

Radar Technology Challenges Lidar

Automotive RADAR is in the ascendancy again, with ever higher demands for data processing leading to large numbers of tracked objects and detailed point clouds that drive Artificial Intelligence autopilot decisions.

This RADAR imaging capability is challenging LIDAR to the extent that level 5 autonomous driving systems may not need LIDAR at all.

RADAR is undergoing a double step change in specification. The first step change is integrated, multi-sensor aggregation providing 360-degree vehicle coverage, to address the limited-sector Field of View (FOV) of an individual (non-rotating) RADAR. Such arrays create challenges in fitting the sensors into the bodywork, transporting the data and combining it in a central ECU.
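
For illustration, here is a minimal sketch (in Python, with assumed mounting geometry and a function name of our own invention, not EnSilica's API) of how a central ECU might map each corner radar's detections into a common vehicle frame before combining them:

    import numpy as np

    def to_vehicle_frame(rng, azimuth, mount_xy, mount_yaw):
        """Convert one sensor's polar detections (range in m, azimuth in rad,
        measured in the sensor's own frame) into Cartesian vehicle coordinates."""
        # Polar to Cartesian in the sensor frame.
        x_s = rng * np.cos(azimuth)
        y_s = rng * np.sin(azimuth)
        # Rotate by the sensor's mounting yaw, then translate by its position.
        c, s = np.cos(mount_yaw), np.sin(mount_yaw)
        x_v = c * x_s - s * y_s + mount_xy[0]
        y_v = s * x_s + c * y_s + mount_xy[1]
        return np.column_stack((x_v, y_v))

    # Example: a front-left corner radar mounted 45 degrees off the vehicle axis.
    pts = to_vehicle_frame(np.array([12.0, 30.0]),
                           np.radians([0.0, 10.0]),
                           mount_xy=(3.6, 0.8),
                           mount_yaw=np.radians(45.0))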

The second step change is the level of detail that each RADAR sensor provides. By adding more transmit and receive antennas, a virtual array can be formed using MIMO signal processing techniques. The spatial resolution of a 256-element virtual array can be as fine as 0.1 degrees, equivalent to the best LIDAR but at a fraction of the cost. Equally, new sawtooth Frequency Modulated Continuous Wave (FMCW) modulation techniques help to determine object range and speed unambiguously, even in a very dense scattering scene.
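
As a toy illustration of how such a virtual array is formed (the 16x16 element counts, half-wavelength spacing and 77 GHz carrier are assumptions chosen to reach the 256-element figure above, not EnSilica's design), each transmit/receive pair contributes one virtual element at the sum of the two physical positions:

    import numpy as np

    wavelength = 3e8 / 77e9          # 77 GHz automotive band
    d = wavelength / 2               # assumed half-wavelength spacing

    # Assumed physical apertures: 16 Tx and 16 Rx elements on a line.
    # Spacing the Tx elements 16 positions apart makes the 16 x 16 = 256
    # virtual elements fall on a uniform, gap-free linear array.
    tx = np.arange(16) * 16 * d
    rx = np.arange(16) * d

    # Each Tx/Rx pair contributes one virtual element at position tx + rx.
    virtual = (tx[:, None] + rx[None, :]).ravel()
    print(len(np.unique(np.round(virtual / d))))   # 256 distinct positions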

Furthermore, RF modulation sample rates after down-conversion are being pushed beyond 40 megasamples per second, so that long Fast Fourier Transforms (FFTs) of up to 4K points can divide the range span into ever smaller cells, revealing previously hidden detail. And finally, RADARs have taken on 4D (range, speed, azimuth, elevation) as standard, further eroding any advantages that LIDAR provided. Each of these dimensions involves calculating thousands of large FFTs and is computationally intensive.
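
As a rough sketch of the first two of those dimensions (all parameters here are illustrative assumptions, not the figures of any particular device): range comes from an FFT along each chirp and speed from an FFT across chirps:

    import numpy as np

    n_samples, n_chirps = 4096, 128               # 4K-point range FFT, 128 chirps
    adc = np.random.randn(n_chirps, n_samples)    # stand-in for one ADC frame

    # Window then FFT along fast time (per chirp) to resolve range cells...
    win = np.hanning(n_samples)
    range_fft = np.fft.rfft(adc * win, axis=1)

    # ...then FFT along slow time (across chirps) to resolve Doppler cells.
    range_doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    power = np.abs(range_doppler) ** 2            # input to CFAR detection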

The 4D FFT data needs further processing before it can be called a RADAR image point cloud. Such a point cloud is typically of the order of 100K points, which can be reduced to 30K points per 40 ms frame after removing noise, i.e. around 750K points per second. This level of detail is akin to a LIDAR: the Velodyne HDL-32E, for instance, produces 700K points per second.

Advanced signal processing techniques such as Capon or MUSIC, known collectively as Super-Resolution, must be applied to test for and resolve objects within a beamwidth that would otherwise remain hidden. These techniques require floating-point singular value decomposition of large (>= 16x16) complex-valued matrices and have consequently only been applied sparingly, but the extra point cloud detail now means this is necessary on 100+ range/Doppler bins.
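
A minimal MUSIC sketch (assuming a 16-element uniform linear array with half-wavelength spacing and random stand-in snapshots; real implementations work on calibrated per-bin data): the antenna covariance is decomposed and the pseudospectrum peaks where a candidate steering vector is orthogonal to the noise subspace:

    import numpy as np

    def music_spectrum(snapshots, n_sources, scan_deg):
        """snapshots: (n_antennas, n_snapshots) complex array for one
        range/Doppler bin; returns the MUSIC pseudospectrum over scan_deg."""
        n_ant = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        # Eigendecomposition; for this Hermitian, positive semi-definite
        # covariance it coincides with the SVD mentioned above. np.linalg.eigh
        # returns eigenvalues in ascending order.
        _, vecs = np.linalg.eigh(R)
        noise = vecs[:, : n_ant - n_sources]        # noise subspace
        theta = np.radians(scan_deg)
        # Steering vectors for a half-wavelength-spaced uniform linear array.
        a = np.exp(-1j * np.pi * np.outer(np.arange(n_ant), np.sin(theta)))
        proj = noise.conj().T @ a
        return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

    spectrum = music_spectrum(np.random.randn(16, 64)
                              + 1j * np.random.randn(16, 64),
                              n_sources=2,
                              scan_deg=np.linspace(-60, 60, 601))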

Of course, the traditional RADAR processing that tracks extended objects is still an essential part of post-processing and forms an additional output alongside the imaging point cloud. Indeed, an extended object is, in a simplified sense, a cluster of co-located points sharing the same speed. Clustering these points into a centroid is computationally demanding, and made all the more so by a high-density point cloud. The extended-object measurements are finally associated with tracks maintained by a Kalman filter, and the closest measurement to each track is used to update that track.
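
The following sketch (a naive single-linkage clusterer and a textbook Kalman update on invented toy data, standing in for the hardware-accelerated stages, not EnSilica's algorithms) shows that cluster-to-centroid-to-track flow:

    import numpy as np

    def cluster_centroids(points, eps=1.5):
        """Greedy single-linkage clustering of (x, y, speed) detections into
        one centroid per cluster."""
        remaining = list(range(len(points)))
        centroids = []
        while remaining:
            members = [remaining.pop(0)]
            for i in members:                       # grows as neighbours join
                near = [j for j in remaining
                        if np.linalg.norm(points[i] - points[j]) < eps]
                members.extend(near)
                remaining = [j for j in remaining if j not in near]
            centroids.append(points[members].mean(axis=0))
        return np.array(centroids)

    def kf_update(x, P, z, H, R):
        """Standard Kalman measurement update for one track."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

    # Toy frame: two objects, each a blob of co-located, same-speed points.
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal([20.0, 5.0, 14.0], 0.3, (30, 3)),
                     rng.normal([45.0, -2.0, -9.0], 0.3, (25, 3))])
    centroids = cluster_centroids(pts)

    # Associate one existing track with its closest centroid, then update it.
    H, R = np.eye(3), np.eye(3) * 0.1
    x, P = np.array([19.5, 5.2, 13.0]), np.eye(3)
    z = centroids[np.argmin(np.linalg.norm(centroids - H @ x, axis=1))]
    x, P = kf_update(x, P, z, H, R)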

In addition to all this, a new generation of RADAR needs to be multi-modal, which requires the ability to sequence and process a number of frames, each designed to accomplish a different objective, such as a short-range wide field of view, a long-range narrow field of view, a squint view, and frames designed to correct for Doppler ambiguity at a low sample rate.
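
One way to picture such a mode schedule (the field names and values here are purely illustrative assumptions, not the eSi-ADAS programming interface):

    from dataclasses import dataclass
    from itertools import cycle

    @dataclass
    class FrameMode:
        name: str
        max_range_m: float
        fov_deg: float
        n_chirps: int
        sample_rate_msps: float

    # Hypothetical schedule cycling through the frame types named above.
    schedule = cycle([
        FrameMode("short_range_wide",   60.0, 120.0, 128, 40.0),
        FrameMode("long_range_narrow", 250.0,  20.0, 256, 40.0),
        FrameMode("squint",            150.0,  40.0, 128, 40.0),
        FrameMode("doppler_disambig",  100.0,  90.0,  64, 10.0),
    ])

    for _, mode in zip(range(8), schedule):     # two full cycles of frames
        print(f"configure RF front end for {mode.name}: "
              f"{mode.fov_deg:.0f} deg FOV, {mode.max_range_m:.0f} m")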

EnSilica eSi-ADAS™ Radar Co-Processor

EnSilica has been deeply involved in RADAR for over 10 years, with many engagements in defence, commercial roadside and on-board automotive systems. Our knowledge and experience in this area, together with parallel internal developments, have built up a formidable RADAR signal processing chain that addresses precisely the challenges automotive imaging RADAR faces, as outlined above. It can complement and, in many cases, replace the need for LIDAR in future autonomous vehicles at level 4 and level 5.

Our eSi-ADAS solution, now in its third generation, is a highly configurable MIMO virtual array imaging RADAR processor built around extensive hardware acceleration of key algorithms, including calibration, windowing, FFT, digital beamforming, power spectrum generation, CFAR detection, clustering, super-resolution, kinematics, co-ordinate conversion, measurement-to-track association and tracking filters. Its programming interface allows flexible sequencing and custom extensions to commercially available RF devices, allowing them to be repurposed for RADAR imaging. The high-density point cloud can be formatted in a number of popular standards, including the LIDAR formats LAS, LAZ and PCD.
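
As an illustration of that last step, here is a minimal writer for one of those formats, the ASCII variant of PCD (the helper name and toy data are our own; the header layout is the standard PCD v0.7 one used by the Point Cloud Library):

    import numpy as np

    def write_pcd_ascii(path, xyz):
        """Write an Nx3 point array as an ASCII .pcd file (PCD v0.7)."""
        header = (
            "# .PCD v0.7 - Point Cloud Data file format\n"
            "VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\n"
            "COUNT 1 1 1\n"
            f"WIDTH {len(xyz)}\nHEIGHT 1\nVIEWPOINT 0 0 0 1 0 0 0\n"
            f"POINTS {len(xyz)}\nDATA ascii\n"
        )
        with open(path, "w") as f:
            f.write(header)
            np.savetxt(f, xyz, fmt="%.3f")

    # Toy 30K-point frame, matching the point budget discussed above.
    write_pcd_ascii("frame0001.pcd", np.random.rand(30000, 3) * 100.0)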

The digital processing is well suited to integration on-die next to an RFCMOS transceiver, providing a class-leading single-chip solution, in terms of power and area, for RF-to-point-cloud generation and tracking.

Read the Full Article at Electronics Weekly online.

###

Media Contacts

  • Akheleash Raghuram, Marketing Specialist, EnSilica
    Tel: +91 80 2258 4450. Email: akheleash.raghuram@ensilica.com