US 11,836,852 B2
Neural network-based millimeter-wave imaging system
Junfeng Guan, Champaign, IL (US); Seyedsohrab Madani, Champaign, IL (US); Suraj S. Jog, Champaign, IL (US); Haitham Al Hassanieh, Champaign, IL (US); and Saurabh Gupta, Champaign, IL (US)
Assigned to Board of Trustees of the University of Illinois, Urbana, IL (US)
Filed by Board of Trustees of the University of Illinois, Urbana, IL (US)
Filed on Dec. 17, 2020, as Appl. No. 17/124,637.
Claims priority of provisional application 62/951,388, filed on Dec. 20, 2019.
Prior Publication US 2021/0192762 A1, Jun. 24, 2021
Int. Cl. G06T 17/00 (2006.01); G06T 7/593 (2017.01); G06T 15/06 (2011.01); G06T 15/04 (2011.01); G06V 20/64 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 10/44 (2022.01); G06V 20/56 (2022.01)
CPC G06T 17/00 (2013.01) [G06T 7/593 (2017.01); G06T 15/04 (2013.01); G06T 15/06 (2013.01); G06V 10/454 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/56 (2022.01); G06V 20/64 (2022.01); G06V 20/647 (2022.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, by a processing device operatively coupled to memory, data comprising a plurality of data items, each data item of the plurality of data items comprising a three-dimensional (3D) radar heat map of an object and a corresponding two-dimensional (2D) image of the object captured by a stereo camera;
inputting, by the processing device, the plurality of data items into a machine learning model comprising a generative adversarial network (GAN) comprising:
a generator network that generates a 2D depth map for the object by encoding voxels within the 3D radar heat map into a first one-dimensional (1D) vector and decoding the first 1D vector into the 2D depth map, wherein the 2D depth map comprises pixels that each represent a respective distance from a respective location to a radar imaging sub-system; and
a discriminator network that generates, based on the 3D radar heat map and the 2D depth map, an output comprising a probability that the 2D depth map is the corresponding 2D image of the object; and
training, by the processing device, the machine learning model based on the plurality of data items to generate a trained machine learning model that iteratively learns, based on the probability, to generate an updated 2D depth map that approximates the corresponding 2D image more closely than the 2D depth map, wherein the training comprises at least one of: training the generator network to generate the 2D depth map or training the discriminator network to generate the output.
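The data flow recited in claim 1 can be sketched at shape level: the generator flattens the voxels of the 3D radar heat map into a 1D vector, encodes it, and decodes it into a 2D depth map of distances; the discriminator consumes the heat map together with the depth map and emits a probability. The sketch below is illustrative only, assuming small made-up dimensions, linear encoder/decoder weights, and random (untrained) parameters; none of these specifics appear in the claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (the claim does not specify any): an 8x8x8
# voxel radar heat map, a 16x16 depth map, and a 64-element 1D encoding.
D = H = W = 8          # voxel grid of the 3D radar heat map
PH = PW = 16           # pixels of the 2D depth map
LATENT = 64            # length of the intermediate 1D vector

def generator(heat_map, w_enc, w_dec):
    """Encode heat-map voxels into a 1D vector, then decode that vector
    into a 2D depth map of non-negative distances."""
    voxels = heat_map.reshape(-1)              # flatten 3D voxels to 1D
    latent = np.tanh(w_enc @ voxels)           # first 1D encoding
    depth = np.maximum(w_dec @ latent, 0.0)    # distances cannot be negative
    return depth.reshape(PH, PW)

def discriminator(heat_map, depth_map, w_disc):
    """Score how likely the depth map is the corresponding camera image
    for this heat map, as a probability in (0, 1)."""
    features = np.concatenate([heat_map.reshape(-1), depth_map.reshape(-1)])
    logit = float(w_disc @ features)
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid -> probability

# Randomly initialised weights stand in for trained GAN parameters.
w_enc = rng.normal(scale=0.05, size=(LATENT, D * H * W))
w_dec = rng.normal(scale=0.05, size=(PH * PW, LATENT))
w_disc = rng.normal(scale=0.05, size=D * H * W + PH * PW)

heat_map = rng.random((D, H, W))               # one 3D radar heat map
depth_map = generator(heat_map, w_enc, w_dec)  # generated 2D depth map
p_real = discriminator(heat_map, depth_map, w_disc)

print(depth_map.shape, 0.0 < p_real < 1.0)
```

During training, the probability from the discriminator would drive gradient updates so that successive depth maps approximate the stereo-camera image more closely, per the adversarial scheme in the claim; the linear layers here merely illustrate the tensor shapes involved.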