Coherent, super-resolved radar beamforming using self-supervised learning
Abstract
High-resolution automotive radar sensors are required to meet the demands of autonomous vehicle needs and regulations. However, current radar systems are limited in their angular resolution, creating a technological gap. The prevailing industry and academic trend of improving angular resolution by increasing the number of physical channels also increases system complexity, requires sensitive calibration processes, lowers robustness to hardware malfunctions, and drives up costs. We offer an alternative approach, named Radar signal Reconstruction using Self Supervision (R2S2), which substantially improves the angular resolution of a given radar array without increasing the number of physical channels. R2S2 is a family of algorithms that use a deep neural network (DNN) that takes complex range-Doppler radar data as input and is trained in a self-supervised manner using a loss function that operates in multiple data representation spaces. A 4× improvement in angular resolution was demonstrated on a real-world dataset collected in urban and highway environments under clear and rainy weather conditions.
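The abstract mentions a loss function that operates in multiple data representation spaces. As a minimal illustration of that idea (not the authors' actual R2S2 implementation), the sketch below combines a reconstruction error in the complex channel domain with an error in the angular (beamspace) domain obtained by an FFT across array channels; the function names, weights, and toy plane-wave signal are all hypothetical choices for this example:

```python
import numpy as np

def angular_spectrum(x):
    # Beamform by a zero-padded FFT across the array-channel axis.
    return np.abs(np.fft.fftshift(np.fft.fft(x, n=4 * len(x))))

def multi_domain_loss(pred, target, w_signal=1.0, w_spectrum=1.0):
    """Hypothetical multi-representation loss: penalizes errors both in the
    complex channel (signal) domain and in the angular (beamspace) domain."""
    signal_err = np.mean(np.abs(pred - target) ** 2)
    spec_err = np.mean((angular_spectrum(pred) - angular_spectrum(target)) ** 2)
    return w_signal * signal_err + w_spectrum * spec_err

# Toy example: a single plane-wave snapshot on a 16-element uniform array.
n = 16
theta = np.deg2rad(10.0)
target = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

rng = np.random.default_rng(0)
noisy = target + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

print(multi_domain_loss(target, target))  # zero for a perfect reconstruction
print(multi_domain_loss(noisy, target))   # positive for a noisy reconstruction
```

In a self-supervised training loop, `pred` would be the DNN's reconstruction and `target` a signal derived from the data itself (e.g., held-out channels), so no external labels are needed.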
Supplementary Materials
This PDF file includes:
Figs. S1 to S11
Tables S1 to S3
Published In
Science Robotics
Volume 6 | Issue 61
December 2021
Copyright
Copyright © 2021 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
This is an article distributed under the terms of the Science Journals Default License.
Submission history
Received: 17 June 2021
Accepted: 19 November 2021
Acknowledgments
We thank Y. Avargel, Z. Iluz, L. Korkidi, K. Twizer, H. Omer, E. Cohen, and N. Orr for their advice and help in revising the manuscript.
Funding: This research did not receive any specific grant from funding agencies.
Author contributions: Conceptualization: I.O. Methodology: I.O. and M.C. Data collection: H.D. Investigation: I.O., M.C., H.D., M.H., M.R., and Z.Z. Supervision: M.C. and Z.Z. Writing, review, and editing: I.O., M.C., H.D., M.H., M.R., and Z.Z.
Competing interests: I.O., H.D., and M.C. are inventors on a patent application (US-17/205,283) held/submitted by WiSense Technologies Ltd.
Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.