US 12,243,325 B2
Self-calibration for decoration based sensor fusion method
Jie Li, Ann Arbor, MI (US); Vitor Guizilini, Santa Clara, CA (US); and Adrien Gaidon, San Jose, CA (US)
Assigned to Toyota Research Institute, Inc., Los Altos, CA (US); and Toyota Jidosha Kabushiki Kaisha, Toyota (JP)
Filed by Toyota Research Institute, Inc., Los Altos, CA (US)
Filed on Apr. 29, 2022, as Appl. No. 17/733,101.
Prior Publication US 2023/0351768 A1, Nov. 2, 2023
Int. Cl. G06V 20/58 (2022.01); G06T 7/50 (2017.01); G06T 7/80 (2017.01)
CPC G06V 20/58 (2022.01) [G06T 7/50 (2017.01); G06T 7/80 (2017.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, with an electronic control unit, image data from a vision sensor and point cloud data from a depth sensor; and
implementing, with the electronic control unit, a machine learning model comprising:
a first branch having a first set of layers and a first set of weights associated with respective outputs of the first set of layers;
a second branch having a second set of layers and a second set of weights associated with respective outputs of the second set of layers, wherein the machine learning model is trained to:
align the point cloud data and the image data based on a current calibration, wherein the current calibration defines respective values for the first set of weights and the second set of weights,
detect a difference in alignment of the point cloud data and the image data,
adjust the current calibration based on the difference in the alignment, wherein to adjust the current calibration, one or more of the respective values for the first set of weights and the second set of weights is changed to a different value, and
output a calibrated embedding feature map based on adjustments to the current calibration.
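The mechanism claimed above — two branches whose per-layer output weights constitute the current calibration, a misalignment signal between the fused vision and depth embeddings, and weight updates driven by that misalignment — can be illustrated with a minimal toy sketch. Everything below (function names, layer shapes, the gradient-style update rule, averaging the two aligned embeddings into the output map) is an illustrative assumption, not the patented implementation:

```python
import math
import random

random.seed(0)

def matvec(m, v):
    # Plain matrix-vector product for the toy layers.
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def branch_features(x, layers):
    # Run the input through each layer, keeping every layer's output
    # (the claim associates one weight with each layer's output).
    feats = []
    for layer in layers:
        x = [math.tanh(s) for s in matvec(layer, x)]
        feats.append(x)
    return feats

def fuse(feats, weights):
    # Weighted sum of per-layer outputs; the weight values are the
    # "current calibration" in this sketch.
    dim = len(feats[0])
    return [sum(w * f[i] for w, f in zip(weights, feats)) for i in range(dim)]

def self_calibrate(img_feats, pc_feats, steps=200, lr=0.01):
    img_w = [1.0] * len(img_feats)   # first set of weights
    pc_w = [1.0] * len(pc_feats)     # second set of weights
    for _ in range(steps):
        # Detect the difference in alignment of the two embeddings.
        diff = [a - b for a, b in
                zip(fuse(img_feats, img_w), fuse(pc_feats, pc_w))]
        # Adjust the calibration: step each per-layer weight along the
        # gradient of 0.5 * ||diff||^2 (an assumed update rule).
        img_w = [w - lr * sum(d * fi for d, fi in zip(diff, f))
                 for w, f in zip(img_w, img_feats)]
        pc_w = [w + lr * sum(d * fi for d, fi in zip(diff, f))
                for w, f in zip(pc_w, pc_feats)]
    # Output a calibrated embedding feature map (here: the mean of the
    # two aligned embeddings) plus the adjusted weights.
    emb = [0.5 * (a + b) for a, b in
           zip(fuse(img_feats, img_w), fuse(pc_feats, pc_w))]
    return emb, img_w, pc_w

# Demo with random toy vectors standing in for encoded camera / LiDAR data.
dim, depth = 8, 2
rand_mat = lambda: [[random.gauss(0, 0.5) for _ in range(dim)] for _ in range(dim)]
image = [random.gauss(0, 1) for _ in range(dim)]
points = [random.gauss(0, 1) for _ in range(dim)]
img_feats = branch_features(image, [rand_mat() for _ in range(depth)])
pc_feats = branch_features(points, [rand_mat() for _ in range(depth)])

def misalignment(iw, pw):
    return math.sqrt(sum((a - b) ** 2 for a, b in
                         zip(fuse(img_feats, iw), fuse(pc_feats, pw))))

before = misalignment([1.0] * depth, [1.0] * depth)
emb, iw, pw = self_calibrate(img_feats, pc_feats)
after = misalignment(iw, pw)
```

Because the misalignment loss is quadratic in the weights and the step size is small, the adjusted calibration brings the two embeddings closer than the initial unit weights did (`after < before`), mirroring the align / detect / adjust / output loop of the claim.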