US 11,657,531 B1
Method and apparatus for combining data to construct a floor plan
Ali Ebrahimi Afrouzi, Henderson, NV (US); Chen Zhang, Redmond, WA (US); Sebastian Schweigert, Sunnyvale, CA (US); and Lukas Robinson, York (CA)
Assigned to AI Incorporated, Toronto (CA)
Filed by Ali Ebrahimi Afrouzi, Henderson, NV (US); Chen Zhang, Redmond, WA (US); Sebastian Schweigert, Sunnyvale, CA (US); and Lukas Robinson, York (CA)
Filed on Jul. 29, 2022, as Appl. No. 17/876,634.
Application 17/876,634 is a continuation of application No. 17/582,512, filed on Jan. 24, 2022, granted, now 11,481,918.
Application 17/582,512 is a continuation of application No. 16/920,328, filed on Jul. 2, 2020, granted, now 11,348,269, issued on May 31, 2022.
Application 16/920,328 is a continuation in part of application No. 16/594,923, filed on Oct. 7, 2019, granted, now 10,740,920, issued on Aug. 11, 2020.
Application 16/594,923 is a continuation of application No. 16/048,179, filed on Jul. 27, 2018, granted, now 10,482,619, issued on Nov. 19, 2019.
Claims priority of provisional application 63/037,465, filed on Jun. 10, 2020.
Claims priority of provisional application 62/986,946, filed on Mar. 9, 2020.
Claims priority of provisional application 62/952,384, filed on Dec. 22, 2019.
Claims priority of provisional application 62/952,376, filed on Dec. 22, 2019.
Claims priority of provisional application 62/942,237, filed on Dec. 2, 2019.
Claims priority of provisional application 62/933,882, filed on Nov. 11, 2019.
Claims priority of provisional application 62/914,190, filed on Oct. 11, 2019.
Claims priority of provisional application 62/618,964, filed on Jan. 18, 2018.
Claims priority of provisional application 62/591,219, filed on Nov. 28, 2017.
Claims priority of provisional application 62/537,858, filed on Jul. 27, 2017.
Int. Cl. G06T 7/593 (2017.01); G05D 1/02 (2020.01); G01S 17/89 (2020.01); G06V 20/64 (2022.01); G06T 7/13 (2017.01); G06T 7/30 (2017.01); G01S 17/86 (2020.01); G06V 10/75 (2022.01); G06N 5/04 (2006.01); G01S 17/48 (2006.01); G06T 7/33 (2017.01); G01S 7/48 (2006.01); G06T 7/00 (2017.01); G06V 20/10 (2022.01); G06V 10/10 (2022.01); G06T 7/136 (2017.01); G06N 5/047 (2023.01)
CPC G06T 7/593 (2017.01) [G01S 7/4804 (2013.01); G01S 17/48 (2013.01); G01S 17/86 (2020.01); G01S 17/89 (2013.01); G05D 1/0253 (2013.01); G05D 1/0274 (2013.01); G06N 5/047 (2013.01); G06T 7/0002 (2013.01); G06T 7/13 (2017.01); G06T 7/136 (2017.01); G06T 7/30 (2017.01); G06T 7/344 (2017.01); G06V 10/10 (2022.01); G06V 10/751 (2022.01); G06V 20/10 (2022.01); G06V 20/64 (2022.01); G05D 2201/0203 (2013.01); G06T 2207/10028 (2013.01); G06V 10/16 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A robot configured to perceive a model of an environment, comprising:
a chassis;
a set of wheels coupled to the chassis comprising at least a right wheel and a left wheel;
a plurality of sensors coupled with the robot;
a processor; and
memory storing instructions that when executed by the processor effectuate operations comprising:
capturing, with the plurality of sensors, a plurality of data while the robot moves within the environment, wherein:
the plurality of data comprises at least a first data and a second data captured by a first sensor of a first sensor type and a second sensor of a second sensor type, respectively;
the first sensor type is an imaging sensor;
the second sensor type captures movement data;
the first sensor is coupled with an active source of illumination positioned adjacent to the first sensor such that, upon incidence of illumination light with an object in a path of the robot, reflections of the illumination light fall within a field of view of the first sensor;
perceiving, with the processor, the model of the environment based on at least a portion of the plurality of data, the model being a top view of the environment;
storing, with the processor, the model of the environment in a memory accessible to the processor; and
transmitting, with the processor, the model of the environment and a status of the robot to an application of a smartphone previously paired with the robot;
wherein:
the application is configured to display:
the model of the environment in the current work session or a subsequent work session;
historical information relating to a previous work session comprising at least areas within which debris was detected, areas cleaned, and a total cleaning time;
a robot status;
a total area cleaned after completion of a work session;
a battery level;
a current cleaning duration;
an estimated total cleaning duration required to complete a work session;
an image of an object and an object type of the object;
maintenance information; and
firmware information;
and is configured to receive at least one user input designating:
a modification to a divider dividing at least a portion of the model of the environment;
a deletion of a divider to merge at least two subareas within the model of the environment;
an addition of a divider to divide an area within the model of the environment;
a selection, an addition, or a modification of a label of a subarea within the model of the environment;
an addition, a modification, or a deletion of a subarea within which the robot is not permitted to enter;
scheduling information corresponding to different subareas;
a number of coverage repetitions of a subarea or the environment by the robot during a work session;
a vacuum power of the robot to use in a subarea or the environment;
a vacuuming task to be performed within a subarea or the environment;
a deletion or an addition of a robot paired with the application;
an instruction to find the robot;
an instruction to contact customer service;
an instruction for the docking station of the robot to empty a bin of the robot into a bin of the docking station;
an instruction to dock at the docking station;
and an instruction to navigate to a particular location to perform work;
the model of the environment is stored in the memory of the robot or on a cloud storage system and is accessible in a subsequent work session for use in autonomously navigating the environment; and
the robot displays at least one status of the robot using a combination of LEDs disposed on the robot.
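The claim's pairing of an imaging sensor with an active illumination source mounted adjacent to it, so that reflections of the emitted light fall within the camera's field of view, is the geometry used in structured-light ranging. As a rough illustration only — the patent does not disclose this formula, and `depth_from_offset` is a hypothetical helper — the distance to the reflecting object can be recovered by triangulation from the pixel offset of the reflected spot:

```python
import math

def depth_from_offset(focal_length_px: float, baseline_m: float,
                      offset_px: float) -> float:
    """Estimate range to an object from the pixel offset of a projected
    light spot, by simple triangulation.

    The emitter sits at a fixed baseline from the camera; the closer the
    object, the farther the reflected spot shifts in the image from the
    position it would occupy for an object at infinity (offset_px).
    """
    if offset_px <= 0:
        raise ValueError("offset must be positive for an object at finite range")
    return focal_length_px * baseline_m / offset_px

# Example: a spot shifted 50 px, with a 600 px focal length and a 5 cm
# emitter-camera baseline, corresponds to an object 0.6 m away.
d = depth_from_offset(600.0, 0.05, 50.0)  # -> 0.6
```

The inverse relationship between offset and range is why the emitter must be offset from, yet adjacent to, the sensor: with zero baseline no offset occurs and no range can be triangulated.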
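The claim combines first-sensor imaging data with second-sensor movement data to perceive a top-view model of the environment. A minimal sketch of one way such a combination can work — assuming a differential-drive odometry model and a dictionary-based occupancy grid, both illustrative choices not stated in the patent:

```python
import math

def integrate_pose(x, y, theta, d_left, d_right, wheel_base):
    """Advance a 2-D pose (x, y, heading) from left/right wheel travel,
    using a differential-drive motion model."""
    d_center = (d_left + d_right) / 2.0          # forward travel
    d_theta = (d_right - d_left) / wheel_base    # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

def mark_obstacle(grid, x, y, theta, rng, cell_size):
    """Project a forward range reading from the current pose into the
    top-view grid and mark the hit cell as occupied."""
    ox = x + rng * math.cos(theta)
    oy = y + rng * math.sin(theta)
    grid[(round(ox / cell_size), round(oy / cell_size))] = 1
    return grid

grid = {}
pose = (0.0, 0.0, 0.0)
# Drive straight 0.2 m (both wheels travel equally, 0.25 m wheel base),
# then record an obstacle sensed 1.0 m ahead into 0.1 m cells.
pose = integrate_pose(*pose, 0.2, 0.2, 0.25)
grid = mark_obstacle(grid, *pose, 1.0, 0.1)
# grid now holds {(12, 0): 1}: an occupied cell at x = 1.2 m, y = 0.
```

Repeating these two steps over a work session accumulates a top-view occupancy map of the kind the claim stores and transmits to the paired application; a production system would additionally correct odometry drift, which this sketch omits.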