US 11,992,950 B2
Mobile robot control apparatus for three dimensional modeling, three dimensional modeling system having the same and method of three dimensional modeling using the same
Sungho Jo, Daejeon (KR); and Soohwan Song, Daejeon (KR)
Assigned to Korea Advanced Institute of Science and Technology, Daejeon (KR)
Filed by Korea Advanced Institute of Science and Technology, Daejeon (KR)
Filed on Jun. 29, 2021, as Appl. No. 17/362,072.
Claims priority of application No. 10-2020-0080531 (KR), filed on Jun. 30, 2020.
Prior Publication US 2021/0402599 A1, Dec. 30, 2021
Int. Cl. B25J 9/16 (2006.01); G06T 7/55 (2017.01); G06T 17/10 (2006.01)
CPC B25J 9/1664 (2013.01) [B25J 9/162 (2013.01); B25J 9/1679 (2013.01); B25J 9/1697 (2013.01); G06T 7/55 (2017.01); G06T 17/10 (2013.01)] 26 Claims
OG exemplary drawing
 
1. A mobile robot control apparatus for three dimensional (“3D”) modeling, comprising:
an online 3D modeler configured to receive an image sequence from a mobile robot and to generate a first map and a second map different from the first map based on the image sequence; and
a path planner configured to generate a global path based on the first map, to extract a target surface based on the second map and to generate a local inspection path having a movement unit smaller than a movement unit of the global path based on the global path and the target surface,
wherein the online 3D modeler comprises:
a pose determiner configured to receive the image sequence and to determine a pose of a reference image; and
a depth estimator configured to estimate a depth of the reference image based on the reference image, the pose of the reference image, source images adjacent to the reference image and poses of the source images,
wherein the online 3D modeler further comprises a volumetric mapper configured to generate a volumetric map for determining whether or not an obstacle exists on a trajectory of the mobile robot, based on the reference image, the pose of the reference image and the depth of the reference image,
wherein the first map is the volumetric map,
wherein the online 3D modeler further comprises a dense surfel mapper configured to generate a surfel map, which is a result of 3D modeling, based on the reference image, the pose of the reference image and the depth of the reference image,
wherein the second map is the surfel map,
wherein the depth estimator is configured to output the reference image, the pose of the reference image and the depth of the reference image to both the volumetric mapper and the dense surfel mapper,
wherein the volumetric mapper is configured to generate the volumetric map based on the reference image, the pose of the reference image and the depth of the reference image received from the depth estimator, and
wherein the dense surfel mapper is configured to generate the surfel map based on the reference image, the pose of the reference image and the depth of the reference image received from the depth estimator.
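The data flow recited in the claim can be sketched as follows. This is a minimal illustrative sketch only: all class names, method names, and placeholder values (`Frame`, `DepthEstimator`, `Online3DModeler`, `PathPlanner`, the step sizes, etc.) are hypothetical and are not taken from the patent; the sketch shows only the claimed structure, namely that one depth estimator feeds both mappers, the global path is planned over the first (volumetric) map, and the local inspection path over the second (surfel) map uses a smaller movement unit than the global path.

```python
# Hypothetical sketch of the claimed architecture; names and values
# are illustrative, not from the patent specification.
from dataclasses import dataclass

@dataclass
class Frame:
    image: str            # reference image (placeholder identifier)
    pose: tuple           # pose from the pose determiner (placeholder)
    depth: float = 0.0    # per-frame depth estimate (placeholder scalar)

class DepthEstimator:
    def estimate(self, ref: Frame, sources: list) -> Frame:
        # Estimate the depth of the reference image from adjacent
        # source images and their poses (placeholder computation).
        ref.depth = 1.0
        return ref

class VolumetricMapper:
    """First map: occupancy volume used for obstacle checking."""
    def __init__(self):
        self.voxels = []
    def integrate(self, f: Frame):
        self.voxels.append((f.pose, f.depth))

class SurfelMapper:
    """Second map: dense surfel map, the 3D modeling result."""
    def __init__(self):
        self.surfels = []
    def integrate(self, f: Frame):
        self.surfels.append((f.pose, f.depth))

class Online3DModeler:
    def __init__(self):
        self.depth_estimator = DepthEstimator()
        self.volumetric = VolumetricMapper()
        self.surfel = SurfelMapper()
    def process(self, ref: Frame, sources: list):
        ref = self.depth_estimator.estimate(ref, sources)
        # Per the claim, the depth estimator's output goes to BOTH
        # the volumetric mapper and the dense surfel mapper.
        self.volumetric.integrate(ref)
        self.surfel.integrate(ref)

class PathPlanner:
    def global_path(self, volumetric_map, step=1.0):
        # Coarse waypoints over the first (volumetric) map.
        return [i * step for i in range(3)]
    def local_inspection_path(self, global_path, target_surface, step=0.1):
        # Finer waypoints around the target surface extracted from the
        # second (surfel) map; movement unit smaller than the global step.
        return [p + k * step for p in global_path for k in range(2)]
```

A caller would run `Online3DModeler.process` per incoming reference frame, then plan the global path from the volumetric map and refine it into the local inspection path around surfaces taken from the surfel map.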