CPC G06T 7/55 (2017.01) [G01S 17/86 (2020.01); G01S 17/894 (2020.01); G06T 7/20 (2013.01); G06T 7/70 (2017.01); G06T 17/00 (2013.01); G06T 19/20 (2013.01); G06V 10/22 (2022.01); G06V 10/82 (2022.01); G06V 20/647 (2022.01); H04N 23/45 (2023.01); G06T 2200/08 (2013.01); G06T 2207/10021 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2210/56 (2013.01); G06T 2219/2004 (2013.01); G06T 2219/2016 (2013.01)]
18 Claims
1. A method comprising:
obtaining image data comprising a color image and a depth image captured by an image capture device configured with a color sensor and a time-of-flight depth sensor;
detecting, using a first neural network, an object in the color image that is known to include a surface that is composed of material that at least partly absorbs light emitted by the time-of-flight depth sensor, resulting in corrupted or missing depth values for pixels in the depth image that are associated with the surface;
accessing a three-dimensional model of the object, wherein the three-dimensional model of the object defines three-dimensional points associated with at least one of edges or corners of the surface;
in response to detecting the object, predicting, using a second neural network, two-dimensional points on the color image that correspond to the three-dimensional points associated with the at least one of the edges or the corners of the surface;
applying a prediction algorithm to compute a three-dimensional pose of the object in a color space of the color image, wherein application of the prediction algorithm computes the three-dimensional pose of the object in the color space of the color image by at least one of positioning or rotating the three-dimensional model of the object until projections of the three-dimensional points defined in the three-dimensional model of the object align with the corresponding two-dimensional points on the color image;
applying, to the three-dimensional pose of the object in the color space of the color image, a transform between the color space and a depth space of the depth image to compute a three-dimensional pose of the object in the depth space, wherein the transform between the color space and the depth space is defined via a calibration function for the color sensor and the time-of-flight depth sensor; and
repairing, using the three-dimensional pose of the object in the depth space of the depth image, the corrupted or missing depth values for the pixels in the depth image that are associated with the surface.
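The pose-computation step recited in the claim amounts to a perspective-n-point (PnP) alignment of the model's edge/corner points with the predicted two-dimensional keypoints. Below is a minimal sketch, assuming OpenCV's cv2.solvePnP as the prediction algorithm and a known color-camera intrinsic matrix color_K; the claim does not name a particular solver, and all identifiers here are illustrative.

```python
# Minimal sketch of the claimed pose computation, assuming OpenCV's solvePnP
# as the prediction algorithm; the claim does not name a specific solver, and
# all identifiers below are illustrative.
import numpy as np
import cv2

def estimate_pose_in_color_space(model_points_3d, image_points_2d, color_K):
    """Rotate/position the 3D model until its edge/corner points project onto
    the 2D keypoints predicted by the second neural network.

    model_points_3d: (N, 3) edge/corner points from the 3D model of the object
    image_points_2d: (N, 2) keypoints predicted on the color image
    color_K:         (3, 3) color-camera intrinsic matrix
    """
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        color_K.astype(np.float32),
        distCoeffs=None,                 # assumes an undistorted color image
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP alignment did not converge")
    R, _ = cv2.Rodrigues(rvec)           # 3x3 rotation, color-camera frame
    return R, tvec.reshape(3)            # 3D pose of the object in color space
```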
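The final two steps, re-expressing that pose in the depth space and repairing the affected depth pixels, can be sketched as follows under stated assumptions: the calibration function yields a rigid color-to-depth extrinsic (R_c2d, t_c2d), the depth camera has intrinsic matrix depth_K, and the light-absorbing surface is densely sampled from the three-dimensional model. None of these names are recited in the claim.

```python
# Minimal sketch of the transform and repair steps, assuming the calibration
# function yields a rigid color-to-depth extrinsic (R_c2d, t_c2d); identifiers
# are illustrative, not recited in the claim.
import numpy as np

def pose_color_to_depth(R_color, t_color, R_c2d, t_c2d):
    """Compose the calibrated color->depth transform with the object pose."""
    return R_c2d @ R_color, R_c2d @ t_color + t_c2d

def repair_depth(depth_image, surface_points_model, R_depth, t_depth,
                 depth_K, invalid_mask):
    """Overwrite corrupted/missing depth pixels with values rendered from the
    posed model surface.

    surface_points_model: (N, 3) dense samples of the light-absorbing surface
    invalid_mask:         boolean image, True where depth is corrupted/missing
    """
    pts = (R_depth @ surface_points_model.T).T + t_depth  # depth-camera frame
    pts = pts[pts[:, 2] > 0]             # keep points in front of the camera
    z = pts[:, 2]
    uv = (depth_K @ pts.T).T             # pinhole projection
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)

    h, w = depth_image.shape
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    repaired = depth_image.astype(np.float64)
    repaired[invalid_mask] = np.inf      # discard corrupted values first
    for ui, vi, zi in zip(u[keep], v[keep], z[keep]):
        if invalid_mask[vi, ui] and zi < repaired[vi, ui]:
            repaired[vi, ui] = zi        # z-buffer: nearest sample wins
    repaired[np.isinf(repaired)] = 0.0   # uncovered invalid pixels stay empty
    return repaired
```

A point-splat z-buffer stands in here for a full mesh render; rasterizing the model's triangles would cover the surface more densely, but the per-pixel nearest-depth logic is the same.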