CPC G06T 7/593 (2017.01) [G01S 7/4804 (2013.01); G01S 17/48 (2013.01); G01S 17/86 (2020.01); G01S 17/89 (2013.01); G05D 1/0253 (2013.01); G05D 1/0274 (2013.01); G06N 5/047 (2013.01); G06T 7/0002 (2013.01); G06T 7/13 (2017.01); G06T 7/136 (2017.01); G06T 7/30 (2017.01); G06T 7/344 (2017.01); G06V 10/10 (2022.01); G06V 10/16 (2022.01); G06V 10/751 (2022.01); G06V 20/10 (2022.01); G06V 20/64 (2022.01); G05D 2201/0203 (2013.01); G06T 2207/10028 (2013.01)] | 50 Claims |
1. A robot configured to perceive a model of an environment, comprising:
a chassis;
a set of wheels;
a plurality of sensors;
a processor; and
memory storing instructions that when executed by the processor effectuate operations comprising:
capturing, with the plurality of sensors, a plurality of data while the robot moves within the environment, wherein:
the plurality of data comprises at least a first data and a second data captured by a first sensor of a first sensor type and a second sensor of a second sensor type, respectively;
the first sensor type is an imaging sensor;
the second sensor type captures movement data;
an active source of illumination is positioned adjacent to the first sensor such that, upon incidence of illumination light with an object in a path of the robot, reflections of the illumination light fall within a field of view of the first sensor;
perceiving, with the processor, the model of the environment based on at least a portion of the plurality of data, the model being a top view of the environment;
storing, with the processor, the model of the environment in a memory accessible to the processor; and
transmitting, with the processor, the model of the environment and a status of the robot to an application of a smartphone previously paired with the robot;
wherein:
the application is configured to display:
the model of the environment in the current work session or a subsequent work session;
historical information relating to a previous work session comprising at least areas within which debris was detected, areas cleaned, and a total cleaning time;
a robot status;
a total area cleaned after completion of a work session;
a battery level;
a current cleaning duration;
an estimated total cleaning duration required to complete a work session;
an image of an object and an object type of the object;
maintenance information;
firmware information; and
customer service information;
and is configured to
receive at least one user input designating:
a modification to a divider dividing at least a portion of the model of the environment;
a deletion of a divider to merge at least two subareas within the model of the environment;
an addition of a divider to divide an area within the model of the environment;
a selection, an addition, or a modification of a label of a subarea within the model of the environment;
an addition, a modification, or a deletion of a subarea which the robot is not permitted to enter;
scheduling information corresponding to different subareas;
a number of coverage repetitions of a subarea or the environment by the robot during a work session;
a vacuum power of the robot to use in a subarea or the environment;
a vacuuming task to be performed within a subarea or the environment;
a deletion or an addition of a robot paired with the application;
an instruction to find the robot;
an instruction for the docking station of the robot to empty a bin of the robot into a bin of the docking station;
an instruction to dock at the docking station; and
an instruction to navigate to a particular location to perform work;
the model of the environment is stored in the memory of the robot or on a cloud storage system and is accessible in a subsequent work session for use in autonomously navigating the environment; and
the robot displays at least one status of the robot using a combination of LEDs disposed on the robot.
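For illustration only (not part of the claim language): the perceiving step of claim 1 fuses depth readings derived from the imaging sensor and active illumination with pose estimates derived from the movement data to build a top-view model. A minimal sketch of one such fusion, an occupancy grid, is below; all names, grid dimensions, and the ray-marking scheme are hypothetical and chosen only to make the claimed data flow concrete.

```python
# Hypothetical sketch: a top-view model of the environment built as a
# 2-D occupancy grid. Depth readings stand in for the first (imaging)
# sensor plus active illumination; the pose stands in for odometry
# integrated from the second (movement) sensor.
import numpy as np

class TopViewModel:
    """Top-view occupancy grid: 0 = unknown, 1 = free, 2 = occupied."""

    def __init__(self, size_cells=100, resolution_m=0.05):
        self.res = resolution_m
        self.grid = np.zeros((size_cells, size_cells), dtype=np.uint8)

    def _to_cell(self, x, y):
        # World coordinates (meters) to grid indices (column, row).
        return int(round(x / self.res)), int(round(y / self.res))

    def integrate(self, pose, depth_m, bearing_rad):
        """Fuse one depth reading (reflection of the illumination light
        seen by the imaging sensor) with the robot pose (x, y, heading)
        estimated from the movement data."""
        rx, ry, rtheta = pose
        # Project the reflection point into the top view.
        ox = rx + depth_m * np.cos(rtheta + bearing_rad)
        oy = ry + depth_m * np.sin(rtheta + bearing_rad)
        # Mark the cell containing the reflection as occupied ...
        ci, cj = self._to_cell(ox, oy)
        self.grid[cj, ci] = 2
        # ... and cells along the ray from the robot as free space.
        for t in np.linspace(0.0, 0.95, 20):
            fi, fj = self._to_cell(rx + t * (ox - rx), ry + t * (oy - ry))
            if self.grid[fj, fi] == 0:
                self.grid[fj, fi] = 1

model = TopViewModel()
# Robot at (1 m, 1 m) facing +x; obstacle reflection 1.5 m straight ahead.
model.integrate(pose=(1.0, 1.0, 0.0), depth_m=1.5, bearing_rad=0.0)
```

Repeating `integrate` as the robot moves yields the top-view model that the claim stores in memory or cloud storage and transmits to the paired smartphone application.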