CPC G05D 1/0246 (2013.01) [G06T 7/521 (2017.01); G06T 7/68 (2017.01); G06T 7/70 (2017.01)] | 17 Claims |
1. A robotic device, comprising:
a chassis;
a set of wheels;
a battery;
a plurality of sensors; and
a tangible, non-transitory, machine-readable medium storing instructions that, when executed by a processor of the robotic device, effectuate operations comprising:
capturing, with an image sensor disposed on the robotic device, one or more images of an environment of the robotic device as the robotic device drives back and forth in straight lines;
capturing, with at least one sensor of the plurality of sensors, sensor data of the environment as the robotic device drives back and forth in the straight lines;
generating or updating, with the processor, a map of the environment based on at least one of the one or more images and the sensor data;
recognizing, with the processor, one or more rooms in the map based on at least one of the one or more images and the sensor data;
determining, with the processor, at least one of a position and an orientation of the robotic device relative to its environment based on at least one of the one or more images and the sensor data;
actuating, with the processor, the robotic device to adjust a heading of the robotic device based on the at least one of the position and the orientation of the robotic device relative to its environment;
emitting, with at least one light emitter disposed on the robotic device, a light structure onto objects within the environment;
capturing, with the image sensor, images of the light structure emitted onto the objects;
determining, with the processor, positions of elements of the light structure within the images;
identifying, with the processor, a feature relating to at least one object in the images based on the positions of the elements of the light structure within the images;
determining, with the processor, an object type of the at least one object within the images based on a comparison between the feature of the at least one object extracted from the images and an object dictionary comprising various object types and their associated features; and
instructing, with the processor, the robotic device to execute at least one action based on the identified object type.
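The claim's localization and heading-adjustment steps amount to estimating the device's orientation from image and sensor data and steering back onto the intended straight-line track. The following is a minimal illustrative sketch, not the claimed implementation; the proportional gain, units, and function names are assumptions introduced here for illustration only.

```python
# Minimal sketch (assumption, not the claimed implementation): correcting the robotic
# device's heading from an estimated orientation so it continues driving in straight,
# parallel lines. Gain, units, and names are hypothetical.
import math

def heading_correction(current_heading_rad, target_heading_rad, gain=1.0):
    """Proportional steering command from the heading error, wrapped to [-pi, pi]."""
    error = math.atan2(
        math.sin(target_heading_rad - current_heading_rad),
        math.cos(target_heading_rad - current_heading_rad),
    )
    return gain * error  # angular velocity command (rad/s)

if __name__ == "__main__":
    # In practice the heading estimate would come from fusing the captured images
    # with the other sensor data, per the claim; here it is simply hard-coded.
    estimated_heading = math.radians(93.0)   # slightly off the intended 90-degree track
    command = heading_correction(estimated_heading, math.radians(90.0), gain=0.8)
    print(f"angular velocity command: {command:.3f} rad/s")
```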
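The structured-light steps of the claim (emitting a light structure, locating its elements in captured images, extracting a feature, and comparing it against an object dictionary of object types and associated features) follow the general pattern sketched below. This is a hedged illustration under assumed geometry, not the patented method; the baseline, focal length, pattern layout, feature definition, and dictionary entries are all hypothetical.

```python
# Minimal sketch (assumptions throughout): triangulating depths from the observed
# positions of structured-light elements, summarizing them into a feature, and
# matching the feature against a simple object dictionary.
from dataclasses import dataclass
import math

BASELINE_M = 0.05          # assumed emitter-to-camera baseline (meters)
FOCAL_PX = 600.0           # assumed camera focal length (pixels)
EXPECTED_COLS = [i * 8.0 for i in range(40)]  # pattern element columns at infinity

def depths_from_pattern(observed_cols):
    """Triangulate a depth for each detected pattern element:
    depth = f * B / disparity (standard structured-light relation)."""
    depths = []
    for obs, ref in zip(observed_cols, EXPECTED_COLS):
        disparity = obs - ref
        if disparity > 1e-6:
            depths.append(FOCAL_PX * BASELINE_M / disparity)
    return depths

def extract_feature(depths):
    """Summarize the depth profile as (mean depth, depth spread, element count)."""
    if not depths:
        return (0.0, 0.0, 0)
    mean_d = sum(depths) / len(depths)
    return (mean_d, max(depths) - min(depths), len(depths))

@dataclass
class ObjectEntry:
    object_type: str
    feature: tuple      # reference (mean depth, spread, count) for this object type
    action: str         # action associated with detecting this object type

# Hypothetical object dictionary of object types, reference features, and actions.
OBJECT_DICTIONARY = [
    ObjectEntry("wall",      (1.5, 0.05, 35), "adjust_heading"),
    ObjectEntry("chair_leg", (0.8, 0.10, 4),  "steer_around"),
    ObjectEntry("cable",     (0.3, 0.02, 6),  "avoid"),
]

def classify(feature):
    """Return the dictionary entry whose reference feature is closest (L2 distance)."""
    return min(OBJECT_DICTIONARY, key=lambda entry: math.dist(entry.feature, feature))

if __name__ == "__main__":
    # Simulated observations: four elements shifted by a disparity consistent with ~0.8 m.
    observed = [c + FOCAL_PX * BASELINE_M / 0.8 for c in EXPECTED_COLS[:4]]
    feat = extract_feature(depths_from_pattern(observed))
    entry = classify(feat)
    print(f"feature={feat} -> type={entry.object_type}, action={entry.action}")
```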