US 11,941,760 B2
Method and system for generating 3D mesh of a scene using RGBD image sequence
Swapna Agarwal, Kolkata (IN); Soumyadip Maity, Kolkata (IN); Hrishav Bakul Barua, Kolkata (IN); and Brojeshwar Bhowmick, Kolkata (IN)
Assigned to TATA CONSULTANCY SERVICES LIMITED, Mumbai (IN)
Filed by Tata Consultancy Services Limited, Mumbai (IN)
Filed on Jun. 16, 2022, as Appl. No. 17/807,339.
Claims priority of application No. 202121033776 (IN), filed on Jul. 27, 2021.
Prior Publication US 2023/0063722 A1, Mar. 2, 2023
Int. Cl. G06T 17/20 (2006.01); G06T 7/70 (2017.01)
CPC G06T 17/20 (2013.01) [G06T 7/70 (2017.01); G06T 2207/10024 (2013.01)] 9 Claims
OG exemplary drawing
 
1. A processor-implemented method of mesh surface reconstruction of a scene comprising:
fetching a sequence of RGBD images of the scene, via one or more hardware processors;
generating a mesh representation of each RGBD image in the sequence of RGBD images, via the one or more hardware processors, wherein generating the mesh representation for each of the RGBD images comprises:
segregating a planar point cloud and a non-planar point cloud from the RGBD image by performing a plane segregation on the RGBD image, wherein the planar point cloud comprises objects that lie in the same plane, and the non-planar point cloud comprises objects that do not lie in the same plane;
generating a planar mesh from the planar point cloud;
generating a non-planar mesh from the non-planar point cloud, wherein generating the non-planar mesh from the non-planar point cloud comprises:
localizing a point cloud of a non-planar object from the RGBD image;
generating a mesh for the point cloud of the non-planar object;
segregating boundary nodes and inner nodes of the mesh;
segregating a plurality of points in the point cloud of the object into boundary points and inner points, wherein for each of the boundary points the nearest mesh node of the generated mesh is a boundary node and for each of the inner points the nearest mesh node is an inner node; and
extending the boundary nodes of the mesh near the boundary of the object to connect with the boundary points of the object; and
merging the planar mesh and the non-planar mesh to generate a combined mesh, wherein the combined mesh acts as a mesh representation of the RGBD image;
estimating camera pose information by performing plane matching and pose estimation on each two consecutive images in the sequence of RGBD images, via the one or more hardware processors, wherein performing the plane matching and pose estimation comprises:
matching planes between each two consecutive images in the sequence of RGBD images, based on the segregated planar point clouds;
determining a relative pose between the two consecutive images; and
performing an incremental merging of the generated mesh representations of the sequence of RGBD images using the estimated camera pose information, to generate a representation of the scene captured in the sequence of RGBD images, via the one or more hardware processors.
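The plane-segregation step of the claim, which splits each RGBD frame's point cloud into a planar and a non-planar part, can be sketched with a generic RANSAC plane fit. This is an illustrative sketch, not the patented method: the function name `segregate_planes`, the distance threshold, and the iteration count are assumptions introduced here for illustration.

```python
import random

def segregate_planes(points, dist_thresh=0.02, iters=200, seed=0):
    """Illustrative RANSAC-style plane segregation (not the claimed
    method): repeatedly fit a plane through 3 random points, keep the
    plane with the most inliers; inliers form the planar point cloud,
    the remaining points the non-planar point cloud."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = (p2 - p1) x (p3 - p1)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        n = [c / norm for c in n]
        d = -sum(n[i] * p1[i] for i in range(3))
        # Inliers: points within dist_thresh of the candidate plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < dist_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    planar_set = set(map(tuple, best_inliers))
    non_planar = [p for p in points if tuple(p) not in planar_set]
    return best_inliers, non_planar
```

A practical pipeline would run this repeatedly to extract several planes per frame; a single dominant plane is shown here to keep the sketch short.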
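The step that segregates mesh nodes into boundary nodes and inner nodes can be illustrated with standard mesh topology: an edge that belongs to exactly one triangle is a boundary edge, and its endpoints are boundary nodes. The helper name `boundary_nodes` and the triangle-list mesh representation are assumptions of this sketch, not taken from the patent.

```python
from collections import Counter

def boundary_nodes(triangles):
    """Illustrative segregation of mesh nodes into boundary and inner
    nodes: count how many triangles share each undirected edge; an
    edge counted once lies on the mesh boundary, and its two endpoint
    nodes are boundary nodes. All remaining nodes are inner nodes."""
    edge_count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    boundary = {v for e, n in edge_count.items() if n == 1 for v in e}
    inner = {v for t in triangles for v in t} - boundary
    return boundary, inner
```

With this segregation in hand, the claim's next steps assign each point of the object's point cloud to its nearest mesh node and extend the boundary nodes outward toward the object's boundary points.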
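The plane-matching step between consecutive frames can be sketched as a greedy association of plane parameters: representing each plane as a unit normal n and offset d (points x on the plane satisfy n·x + d = 0), two planes match when their normals are nearly parallel and their offsets nearly equal. The thresholds and the function `match_planes` are illustrative assumptions, not the patented matching criterion.

```python
import math

def match_planes(planes_a, planes_b, angle_thresh_deg=10.0, offset_thresh=0.05):
    """Illustrative greedy plane matching between two consecutive
    frames. Each plane is (unit_normal, offset); a pair matches when
    the angle between normals is below angle_thresh_deg and the
    offsets differ by less than offset_thresh."""
    matches, used = [], set()
    for i, (na, da) in enumerate(planes_a):
        best, best_angle = None, angle_thresh_deg
        for j, (nb, db) in enumerate(planes_b):
            if j in used:
                continue
            dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(na, nb))))
            angle = math.degrees(math.acos(dot))
            if angle < best_angle and abs(da - db) < offset_thresh:
                best, best_angle = j, angle
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches
```

Given such plane correspondences, the relative pose between the two frames can then be estimated (e.g., by aligning the matched normals and offsets), which is the pose used in the incremental merging step.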
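Finally, the incremental-merging step can be sketched as transforming each frame's mesh vertices into a common world frame with that frame's estimated camera pose and fusing vertices that land on (approximately) the same location. The 4x4 pose convention, the rounding-based deduplication, and the names `transform`/`merge_meshes` are assumptions of this sketch, not the claimed merging procedure.

```python
def transform(pose, p):
    """Apply a 4x4 camera-to-world pose matrix to a 3D point p."""
    hp = p + (1.0,)  # homogeneous coordinates
    return tuple(sum(pose[r][c] * hp[c] for c in range(4)) for r in range(3))

def merge_meshes(frames, round_digits=4):
    """Illustrative incremental mesh merging: each frame is
    (pose, (vertices, triangles)). Vertices are mapped into the world
    frame, deduplicated by rounded position, and triangles re-indexed
    into the growing combined mesh."""
    verts, tris, index = [], [], {}
    for pose, (frame_verts, frame_tris) in frames:
        remap = {}
        for i, v in enumerate(frame_verts):
            w = tuple(round(c, round_digits) for c in transform(pose, v))
            if w not in index:        # new world-space vertex
                index[w] = len(verts)
                verts.append(w)
            remap[i] = index[w]       # local index -> global index
        for a, b, c in frame_tris:
            tris.append((remap[a], remap[b], remap[c]))
    return verts, tris
```

Rounding-based fusion is a deliberately crude stand-in for the nearest-neighbor or zipper-style stitching a real pipeline would use; it keeps the sketch self-contained.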