US 12,422,853 B2
Apparatus, system, and method of using depth assessment for autonomous robot navigation
Howard Cochran, St. Petersburg, FL (US); and Charles Martin, St. Petersburg, FL (US)
Assigned to JABIL INC.
Filed by JABIL INC., St. Petersburg, FL (US)
Filed on May 17, 2024, as Appl. No. 18/667,289.
Application 18/667,289 is a continuation of application No. 17/042,840, granted, now 12,019,452, previously published as PCT/US2019/023981, filed on Mar. 26, 2019.
Claims priority of provisional application 62/648,005, filed on Mar. 26, 2018.
Prior Publication US 2024/0370020 A1, Nov. 7, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G05D 1/00 (2024.01); B25J 9/16 (2006.01); B25J 19/02 (2006.01); G05D 1/20 (2024.01); G05D 1/22 (2024.01); G05D 1/249 (2024.01); G05D 1/628 (2024.01); G06T 7/593 (2017.01); G06V 20/10 (2022.01); G05D 1/247 (2024.01)
CPC G05D 1/0238 (2013.01) [B25J 9/1666 (2013.01); B25J 9/1676 (2013.01); B25J 19/023 (2013.01); G05D 1/00 (2013.01); G05D 1/0246 (2013.01); G05D 1/20 (2024.01); G05D 1/22 (2024.01); G05D 1/249 (2024.01); G05D 1/628 (2024.01); G06T 7/593 (2017.01); G06V 20/10 (2022.01); G05D 1/0248 (2013.01); G05D 1/247 (2024.01); G06T 2207/10028 (2013.01)] 16 Claims
OG exemplary drawing
 
1. An autonomous mobile robot, comprising:
a robot body;
a first three-dimensional depth camera sensor and a second three-dimensional depth camera sensor affixed on opposing sides of the robot body, each having an equal angle of incidence relative to a major floor surface directed along a first forward motion axis and a second backward motion axis, respectively, of the robot body, providing a 360-degree field of view of the major floor surface proximate to the robot body; and
a processing system communicative with the first and second three-dimensional depth camera sensors and comprising non-transitory computing code which, when executed by at least one processor associated with the processing system, causes to be executed the steps of:
receiving pixel data of the field of view;
obtaining missing or erroneous pixels from the pixel data;
comparing the missing or erroneous pixels to at least one template; and
outputting an indication of obstacles in or near the field of view based on the comparing.
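The four steps recited in claim 1 can be sketched as a short routine. This is only an illustrative sketch, not the patent's disclosed implementation: the function name `detect_obstacles`, the convention that missing or erroneous depth pixels are marked by NaN or non-positive values, and the use of intersection-over-union as the template comparison are all assumptions made for the example.

```python
import numpy as np

def detect_obstacles(depth, templates, threshold=0.6):
    """Hypothetical sketch of the claimed steps.

    depth: 2D depth-pixel array; NaN or <= 0 marks pixels the sensor
           failed to resolve (an assumed convention, not from the claim).
    templates: list of 2D boolean arrays, each an expected pattern of
               missing/erroneous pixels for a known obstacle type.
    Returns the index of the best-matching template, or None.
    """
    # Steps 1-2: receive pixel data and obtain the missing or
    # erroneous pixels from it, as a boolean mask.
    bad = np.isnan(depth) | (depth <= 0)

    # Step 3: compare the mask to at least one template; IoU is one
    # plausible similarity measure (an assumption for this sketch).
    best, best_score = None, 0.0
    for i, template in enumerate(templates):
        union = np.logical_or(bad, template).sum()
        iou = np.logical_and(bad, template).sum() / union if union else 0.0
        if iou > best_score:
            best, best_score = i, iou

    # Step 4: output an indication of an obstacle based on the comparing.
    return best if best_score >= threshold else None
```

A usage sketch: a depth frame with a 2x2 hole of unresolved pixels matches a template of the same shape, while a fully resolved frame matches nothing.

```python
depth = np.ones((4, 4))
depth[1:3, 1:3] = 0                       # unresolved region
template = np.zeros((4, 4), dtype=bool)
template[1:3, 1:3] = True
detect_obstacles(depth, [template])       # matches template index 0
detect_obstacles(np.ones((4, 4)), [template])  # no obstacle indicated
```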