US 12,154,190 B2
2D and 3D floor plan generation
Shreya Goyal, Jaipur (IN); Chiranjoy Chattopadhyay, Jodhpur (IN); Naimul Mefraz Khan, Toronto (CA); Gaurav Bhatnagar, Jodhpur (IN); Srinivas Krishna, Toronto (CA); Laura Thomas, Toronto (CA); and Daniel Chantal Mills, Toronto (CA)
Assigned to AWE Company Limited, Toronto (CA)
Filed by AWE Company Limited, Toronto (CA)
Filed on Jun. 16, 2023, as Appl. No. 18/336,319.
Application 18/336,319 is a continuation of application No. 17/482,111, filed on Sep. 22, 2021, granted, now 11,734,861.
Prior Publication US 2023/0334727 A1, Oct. 19, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 11/00 (2006.01); G06F 18/23 (2023.01); G06F 30/13 (2020.01); G06T 3/06 (2024.01); G06T 7/00 (2017.01); G06T 7/13 (2017.01); G06T 7/30 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06V 10/44 (2022.01); G06V 20/10 (2022.01)
CPC G06T 11/00 (2013.01) [G06F 18/23 (2023.01); G06F 30/13 (2020.01); G06T 3/06 (2024.01); G06T 7/13 (2017.01); G06T 7/30 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06T 7/97 (2017.01); G06V 10/44 (2022.01); G06V 20/10 (2022.01); G06T 2207/10016 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30244 (2013.01); G06T 2210/04 (2013.01)] 27 Claims
OG exemplary drawing
 
1. A modelling method comprising:
receiving two-dimensional (2D) images of at least corners of an interior space;
generating, using a positioning module, a corresponding camera position and camera orientation in a three-dimensional (3D) coordinate system in the interior space for each 2D image;
generating a corresponding depth map for each 2D image by using a depth module to estimate depth for each pixel in each 2D image;
generating a corresponding edge map for each 2D image by using an edge module to identify whether each pixel in each 2D image is a wall or an edge;
generating, using a reconstruction module, a 3D point cloud for each 2D image using the corresponding depth map and a camera focal length and camera center coordinates;
transforming, using a transformation module, the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system from a camera perspective;
regularizing, using a regularization module, the 3D point clouds in the 2D space into boundary lines;
generating a 2D plan of the interior space from the boundary lines;
detecting, using an object detecting module, a presence of an object and an object position of the object in one or more of the 2D images; and
generating an object symbol in the object position in the 2D plan of the interior space using the following equations:

RatioD = dist(CBBI, WI) / LWI
dist(CBBF, WIF) = RatioD × LWF
wherein CBBI is a centroid of a bounding box of the object in the corresponding 2D image,
dist(CBBI, WI) is a distance between CBBI and a wall WI in the corresponding 2D image,
LWI is a distance between two corners of the walls in the corresponding 2D image,
RatioD is the ratio between dist(CBBI, WI) and LWI,
LWF is a distance between the two corners of the walls in the 2D plan of the interior space,
dist(CBBF, WIF) is a distance between a centroid of the object symbol (CBBF) and the wall (WIF) in the 2D plan of the interior space.
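The point-cloud step of the claim ("generating, using a reconstruction module, a 3D point cloud for each 2D image using the corresponding depth map and a camera focal length and camera center coordinates") matches standard pinhole back-projection. The following Python sketch is illustrative only and not taken from the patent; it assumes a pinhole camera with focal lengths fx, fy and camera center (cx, cy), and the function name is hypothetical.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project a per-pixel depth map into camera-frame 3D points,
    # assuming a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Return an (H*W, 3) array of (X, Y, Z) coordinates.
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)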
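The transformation step ("transforming, using a transformation module, the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system") can be pictured as moving camera-frame points into the shared coordinate system with the pose from the positioning module and then dropping the vertical axis. The sketch below is one plausible realization, not the patent's implementation; it assumes a camera-to-world rotation matrix, a camera position, and that the third world axis is vertical.

import numpy as np

def to_floor_plane(points_cam, rotation_cam_to_world, camera_position):
    # Move camera-frame points into the shared 3D coordinate system.
    points_world = points_cam @ rotation_cam_to_world.T + camera_position
    # Keep the two horizontal coordinates as 2D floor-plan points
    # (assumes the third axis is vertical).
    return points_world[:, :2]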
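The object-symbol equations preserve, in the 2D plan, the image-space ratio of the object-to-wall distance to the wall length between two corners. A minimal Python sketch follows, assuming a unit direction vector pointing from the wall into the room (a convention the claim does not specify) and hypothetical argument names.

import numpy as np

def place_object_symbol(c_bb_image, wall_point_image, l_w_image,
                        wall_point_plan, wall_direction_plan, l_w_plan):
    # RatioD = dist(CBBI, WI) / LWI, measured in the 2D image.
    ratio_d = np.linalg.norm(np.asarray(c_bb_image) - np.asarray(wall_point_image)) / l_w_image
    # dist(CBBF, WIF) = RatioD * LWF, applied in the 2D plan.
    dist_plan = ratio_d * l_w_plan
    # Offset the symbol centroid from the wall along the given direction.
    return np.asarray(wall_point_plan) + dist_plan * np.asarray(wall_direction_plan)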