US 11,657,506 B2
Systems and methods for autonomous robot navigation
Huan Tan, Clifton Park, NY (US); Isabella Heukensfeldt Jansen, Niskayuna, NY (US); Gyeong Woo Cheon, Clifton Park, NY (US); and Li Zhang, Clifton Park, NY (US)
Assigned to General Electric Company, Schenectady, NY (US)
Filed by General Electric Company, Schenectady, NY (US)
Filed on Mar. 6, 2019, as Appl. No. 16/293,972.
Prior Publication US 2020/0285247 A1, Sep. 10, 2020
Int. Cl. G06F 19/00 (2018.01); G06T 7/11 (2017.01); G05D 1/02 (2020.01); G06T 7/73 (2017.01); G05D 1/00 (2006.01); G06N 3/08 (2023.01); G06T 7/194 (2017.01); G06F 18/2413 (2023.01); G06N 3/045 (2023.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 10/44 (2022.01); G06V 20/10 (2022.01)
CPC G06T 7/11 (2017.01) [G05D 1/0088 (2013.01); G05D 1/0246 (2013.01); G06F 18/24133 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 7/194 (2017.01); G06T 7/74 (2017.01); G06T 7/75 (2017.01); G06V 10/454 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/10 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30244 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method of robot autonomous navigation in an environment, the method comprising:
capturing an image of the environment;
segmenting the captured image to identify one or more foreground objects and one or more background objects;
determining a match between one or more of the foreground objects and one or more predefined image files, at least one of the one or more predefined image files comprising a three-dimensional (3D) computer-aided design (CAD) model comprising a wireframe model or a solid model;
estimating an initial object pose for the one or more foreground objects by implementing a first iterative estimation loop, the first iterative estimation loop comprising:
projecting the 3D CAD model onto the captured image, computing an error between the projected 3D CAD model and the captured image, refining the determination of the match based on the computed error, and re-estimating the initial object pose;
determining a robot pose estimate by applying a robot-centric environmental model to the initial object pose estimate via a second iterative estimation loop;
determining a subsequent object pose estimate based on an updated segmented image comprising the initial object pose estimate and the robot pose estimate via a third iterative estimation loop until an error between the projected 3D CAD model and the updated segmented image is less than a threshold value;
associating semantic labels with the matched foreground objects;
compiling a semantic map containing the semantic labels and the subsequent object pose estimate in the updated segmented image; and
providing localization information to the robot based on the semantic map and the robot pose estimate.
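
Editor's note: The pose-estimation loops recited in the claim follow a standard model-based refinement pattern: project the 3D CAD model into the image under the current pose guess, measure the error against the segmented observation, and refine until the error falls below a threshold. The sketch below is a minimal illustration of one such loop, assuming point correspondences between CAD-model vertices and the segmented image and using OpenCV's PnP solvers; the function name, error metric, and parameters are assumptions for illustration, not the patented implementation.

    import numpy as np
    import cv2

    def refine_object_pose(model_pts, image_pts, K, max_iters=50, err_thresh=2.0):
        # model_pts: (N, 3) float32 vertices sampled from the 3D CAD model
        # image_pts: (N, 2) float32 matched points in the segmented image
        # K: (3, 3) camera intrinsic matrix; lens distortion ignored here
        ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
        if not ok:
            raise RuntimeError("initial pose estimate failed")
        err = np.inf
        for _ in range(max_iters):
            # Project the CAD model into the image with the current pose.
            proj, _ = cv2.projectPoints(model_pts, rvec, tvec, K, None)
            proj = proj.reshape(-1, 2)
            # Reprojection error between projected model and observed points.
            err = float(np.sqrt(np.mean(np.sum((proj - image_pts) ** 2, axis=1))))
            if err < err_thresh:
                break  # claim's stopping criterion: error below a threshold
            # Refine the pose (Levenberg-Marquardt on reprojection error).
            rvec, tvec = cv2.solvePnPRefineLM(model_pts, image_pts, K, None,
                                              rvec, tvec)
        return rvec, tvec, err

In the claimed method the match itself is also re-derived each iteration ("refining the determination of the match based on the computed error"), a step this sketch omits by holding the correspondences fixed.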
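The final limitations, attaching semantic labels to matched objects, compiling them into a semantic map, and returning that map together with the robot pose estimate as localization information, reduce to a small data structure. The SemanticMap type and its fields below are an assumed illustration and do not appear in the patent.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Landmark:
        label: str    # semantic label assigned to a matched foreground object
        pose: Tuple   # refined (rvec, tvec) object pose

    @dataclass
    class SemanticMap:
        landmarks: List[Landmark] = field(default_factory=list)

        def add(self, label, pose):
            # Associate a semantic label with a matched object pose.
            self.landmarks.append(Landmark(label, pose))

        def localization_info(self, robot_pose):
            # Per the claim, localization information is derived from the
            # semantic map together with the robot pose estimate.
            return {"robot_pose": robot_pose, "landmarks": self.landmarks}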