US 12,135,563 B1
System and method for guiding heading of a mobile robotic device
Ali Ebrahimi Afrouzi, Henderson, NV (US); Lukas Robinson, York (CA); Chen Zhang, Redmond, WA (US); and Brian Highfill, Castro Valley, CA (US)
Assigned to AI Incorporated, Toronto (CA)
Filed by Ali Ebrahimi Afrouzi, Henderson, NV (US); Lukas Robinson, York (CA); Chen Zhang, Redmond, WA (US); and Brian Highfill, Castro Valley, CA (US)
Filed on Jun. 28, 2023, as Appl. No. 18/215,194.
Application 18/215,194 is a continuation of application No. 17/127,849, filed on Dec. 18, 2020, granted, now 11,726,490.
Application 17/127,849 is a continuation of application No. 16/504,012, filed on Jul. 5, 2019, granted, now 10,901,431, issued on Jan. 26, 2021.
Application 16/504,012 is a continuation-in-part of application No. 15/410,624, filed on Jan. 19, 2017, granted, now 10,386,847, issued on Aug. 20, 2019.
Claims priority of provisional application 62/746,688, filed on Oct. 17, 2018.
Claims priority of provisional application 62/740,580, filed on Oct. 3, 2018.
Claims priority of provisional application 62/740,558, filed on Oct. 3, 2018.
Claims priority of provisional application 62/740,573, filed on Oct. 3, 2018.
Claims priority of provisional application 62/736,676, filed on Sep. 26, 2018.
Claims priority of provisional application 62/735,137, filed on Sep. 23, 2018.
Claims priority of provisional application 62/720,521, filed on Aug. 21, 2018.
Claims priority of provisional application 62/720,478, filed on Aug. 21, 2018.
Claims priority of provisional application 62/702,148, filed on Jul. 23, 2018.
Claims priority of provisional application 62/699,101, filed on Jul. 17, 2018.
Claims priority of provisional application 62/699,367, filed on Jul. 17, 2018.
Claims priority of provisional application 62/699,582, filed on Jul. 17, 2018.
Claims priority of provisional application 62/696,723, filed on Jul. 11, 2018.
Claims priority of provisional application 62/297,403, filed on Feb. 19, 2016.
This patent is subject to a terminal disclaimer.
Int. Cl. G05D 1/02 (2020.01); G05D 1/00 (2006.01); G06T 7/521 (2017.01); G06T 7/68 (2017.01); G06T 7/70 (2017.01)
CPC G05D 1/0246 (2013.01) [G06T 7/521 (2017.01); G06T 7/68 (2017.01); G06T 7/70 (2017.01)] 17 Claims
OG exemplary drawing
 
1. A robotic device, comprising:
a chassis;
a set of wheels;
a battery;
a plurality of sensors; and
a tangible, non-transitory, machine-readable medium storing instructions that, when executed by a processor of the robotic device, effectuate operations comprising:
capturing, with an image sensor disposed on the robotic device, one or more images of an environment of the robotic device as the robotic device drives back and forth in straight lines;
capturing, with at least one sensor of the plurality of sensors, sensor data of the environment as the robotic device drives back and forth in the straight lines;
generating or updating, with the processor, a map of the environment based on at least one of the one or more images and the sensor data;
recognizing, with the processor, one or more rooms in the map based on at least one of the one or more images and the sensor data;
determining, with the processor, at least one of a position and an orientation of the robotic device relative to its environment based on at least one of the one or more images and the sensor data;
actuating, with the processor, the robotic device to adjust a heading of the robotic device based on the at least one of the position and the orientation of the robotic device relative to its environment;
emitting, with at least one light emitter disposed on the robotic device, a light structure onto objects within the environment;
capturing, with the image sensor, images of the light structure emitted onto the objects;
determining, with the processor, positions of elements of the light structure within the images;
identifying, with the processor, a feature relating to at least one object in the images based on the positions of the elements of the light structure within the images;
determining, with the processor, an object type of the at least one object within the images based on a comparison between the feature of the at least one object extracted from the images and an object dictionary comprising various object types and their associated features; and
instructing, with the processor, the robotic device to execute at least one action based on the object type identified.
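The claimed pipeline — emitting a light structure, locating its elements in captured images, matching extracted features against an object dictionary, and adjusting the device's heading — can be sketched in simplified form. This is an illustrative sketch only, not the patent's disclosed implementation: the pinhole-triangulation depth model, the Euclidean nearest-neighbor dictionary lookup, the example dictionary contents, and all function names are assumptions introduced here for clarity.

```python
import math

def depth_from_offset(pixel_offset, focal_px, baseline_m):
    # Assumed pinhole-triangulation model: the offset (in pixels) of a
    # projected light element, together with the emitter-to-camera
    # baseline and focal length, yields range to the illuminated object.
    if pixel_offset <= 0:
        raise ValueError("pixel offset must be positive")
    return focal_px * baseline_m / pixel_offset

def classify(feature, object_dictionary):
    # Assumed nearest-neighbor match of an extracted feature vector
    # against a dictionary of object types and associated features.
    best_type, best_dist = None, float("inf")
    for obj_type, ref_feature in object_dictionary.items():
        d = math.dist(feature, ref_feature)
        if d < best_dist:
            best_type, best_dist = obj_type, d
    return best_type

def heading_correction(current_rad, desired_rad):
    # Wrap the heading error into (-pi, pi] so the device turns the
    # shorter way toward the desired orientation.
    return (desired_rad - current_rad + math.pi) % (2 * math.pi) - math.pi

# Hypothetical object dictionary: object types mapped to feature vectors.
dictionary = {"chair_leg": (0.9, 0.1), "cable": (0.1, 0.95)}

print(depth_from_offset(40, 600.0, 0.05))  # range in meters
print(classify((0.85, 0.2), dictionary))
print(heading_correction(0.0, 3 * math.pi / 2))
```

Under these assumptions, a 40-pixel element offset with a 600-pixel focal length and 5 cm baseline corresponds to 0.75 m of range, and a desired heading of 270° from a current heading of 0° produces a short-way turn of -90° rather than +270°.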