US 12,423,847 B1
Method and apparatus for combining data to construct a floor plan
Ali Ebrahimi Afrouzi, San Diego, CA (US); Chen Zhang, Richmond (CA); and Sebastian Schweigert, Sunnyvale, CA (US)
Assigned to AI Incorporated, Toronto (CA)
Filed by Ali Ebrahimi Afrouzi, San Diego, CA (US); Chen Zhang, Richmond (CA); and Sebastian Schweigert, Sunnyvale, CA (US)
Filed on Jan. 6, 2021, as Appl. No. 17/142,909.
Application 17/142,909 is a continuation of application No. 16/048,185, filed on Jul. 27, 2018, granted, now 10,915,114.
Claims priority of provisional application 62/618,964, filed on Jan. 18, 2018.
Claims priority of provisional application 62/591,219, filed on Nov. 28, 2017.
Claims priority of provisional application 62/537,858, filed on Jul. 27, 2017.
Int. Cl. G06T 7/593 (2017.01); G01C 21/32 (2006.01); G01S 17/86 (2020.01); G01S 17/89 (2020.01); G05D 1/00 (2024.01); G06T 7/00 (2017.01); G06T 7/13 (2017.01); G06T 7/30 (2017.01); G06T 7/33 (2017.01); G06T 7/73 (2017.01); G06T 17/05 (2011.01); G06V 10/75 (2022.01); G06V 20/10 (2022.01); G06V 20/58 (2022.01); G06V 20/64 (2022.01)
CPC G06T 7/593 (2017.01) [G01C 21/32 (2013.01); G01S 17/86 (2020.01); G01S 17/89 (2013.01); G05D 1/0227 (2013.01); G05D 1/0253 (2013.01); G05D 1/0274 (2013.01); G06T 7/0002 (2013.01); G06T 7/13 (2017.01); G06T 7/30 (2017.01); G06T 7/344 (2017.01); G06T 7/73 (2017.01); G06T 17/05 (2013.01); G06V 10/751 (2022.01); G06V 20/10 (2022.01); G06V 20/58 (2022.01); G06V 20/64 (2022.01); G06T 2207/10028 (2013.01); Y10S 901/47 (2013.01)] 18 Claims
OG exemplary drawing
 
1. One or more tangible, non-transitory, machine-readable media storing instructions that when executed by one or more processors of a robot effectuate operations comprising:
capturing, with a camera of the robot, a plurality of images of a working environment of the robot, wherein:
each captured image comprises pixel data corresponding to the field of view of the camera at the position in the working environment from which that image was captured, the pixel data being indicative of the presence of objects in the working environment where the respective image was captured;
capturing, with an optical sensor paired with a source of illumination, a plurality of depth data of the working environment of the robot;
aligning, with the one or more processors of the robot, the depth data based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a feature in the first image;
detecting the feature in the second image;
determining a first value indicative of a difference in position of the feature in the first and second images in a first frame of reference of one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between capture of the data from which the first image is obtained and capture of the data from which the second image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining, with the one or more processors of the robot and based on the alignment of the data, at least one spatial model of the working environment.
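The overlap-determination steps recited in claim 1 (detect a feature in both images, take the pixel-position difference as a first value, take the sensor-pose difference as a second value, and combine the two to bound the area of overlap) can be sketched in simplified form. The sketch below is illustrative only, not the patented method: it assumes a 1-D horizontal image model and a pinhole approximation relating yaw change to pixel shift, and all names (`detect_feature`, `pixel_shift_from_pose`, `overlap`) are hypothetical.

```python
def detect_feature(image, feature):
    """Naive 1-D template match: return the first column where `feature` occurs."""
    n = len(feature)
    for i in range(len(image) - n + 1):
        if image[i:i + n] == feature:
            return i
    raise ValueError("feature not found")


def pixel_shift_from_pose(yaw_delta_deg, fov_deg, width):
    """Pixel shift implied by a yaw change, under a small-angle pinhole model."""
    return yaw_delta_deg / fov_deg * width


def overlap(width, shift):
    """Half-open column range [lo, hi) of image 1 also visible in image 2."""
    lo = max(0, min(width, round(shift)))
    return lo, width


# Two toy 1-D "images" (rows of pixel intensities) offset by 3 columns.
img1 = [0, 1, 2, 9, 9, 1, 4, 4, 7, 5]
img2 = img1[3:] + [6, 6, 6]
feat = [9, 9, 1]

x1 = detect_feature(img1, feat)                       # feature column in image 1
x2 = detect_feature(img2, feat)                       # feature column in image 2
first_value = x1 - x2                                 # disparity of the feature
second_value = pixel_shift_from_pose(10.8, 36.0, 10)  # odometry-implied shift
fused_shift = 0.5 * (first_value + second_value)      # combine the two estimates
lo, hi = overlap(len(img1), fused_shift)              # estimated overlap columns
```

In this toy run the feature disparity and the pose-implied shift agree (both 3 columns), so the fused estimate places the overlap at columns 3 through 9 of the first image. A real system would match many features, weigh the two values by their uncertainties, and fuse the aligned depth data into the spatial model.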