US 12,254,569 B2
Digital reality platform providing data fusion for generating a three-dimensional model of the environment
Burkhard Böckem, Jonen (CH); Jürgen Dold, Sempach (CH); Pascal Strupler, Ennetbaden (CH); Joris Schouteden, Kessel-Lo (BE); and Daniel Balog, Merchtem (BE)
Assigned to LEICA GEOSYSTEMS AG, Heerbrugg (CH); HEXAGON GEOSYSTEMS SERVICES AG, Heerbrugg (CH); and LUCIAD NV, Leuven (BE)
Appl. No. 17/790,061
Filed by LEICA GEOSYSTEMS AG, Heerbrugg (CH); HEXAGON GEOSYSTEMS SERVICES AG, Heerbrugg (CH); and LUCIAD NV, Leuven (BE)
PCT Filed Dec. 30, 2019, PCT No. PCT/EP2019/087150
§ 371(c)(1), (2) Date Jun. 29, 2022,
PCT Pub. No. WO2021/136583, PCT Pub. Date Jul. 8, 2021.
Prior Publication US 2023/0042369 A1, Feb. 9, 2023
Int. Cl. G06T 17/05 (2011.01); G01C 15/00 (2006.01); G06F 3/04815 (2022.01); G06F 16/29 (2019.01); G06F 30/13 (2020.01); G06T 15/04 (2011.01); G06T 17/20 (2006.01); G06T 19/20 (2011.01)
CPC G06T 17/05 (2013.01) [G01C 15/00 (2013.01); G06F 3/04815 (2013.01); G06F 16/29 (2019.01); G06F 30/13 (2020.01); G06T 15/04 (2013.01); G06T 17/205 (2013.01); G06T 19/20 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
reading input data providing a translocal 3D mesh of an environment and a local 3D mesh of an item within the environment, wherein the input data providing the translocal 3D mesh comprise aerial surveying data of a surveying device specifically designed to be carried by at least one of an aircraft, a satellite, and a surveying balloon, and the input data providing the local 3D mesh comprise data provided by a surveying station specifically designed to be stationary during data acquisition or data provided by a portable surveying device specifically designed to be carried by at least one of a human operator, an automated guided vehicle, and an unmanned aerial vehicle;
generating, on an electronic graphical display, a 3D environment visualization of the translocal 3D mesh;
inserting a 3D item visualization of the local 3D mesh into the 3D environment visualization, wherein the 3D item visualization is movable within the 3D environment visualization by means of touchscreen or mouse input, such that a pre-final placement of the 3D item visualization within the 3D environment visualization can be set by user input;
using the pre-final placement to automatically incorporate the local 3D mesh into the translocal 3D mesh to form a combined 3D mesh (illustrative sketches of the following steps are given after the claim), wherein:
a section of the local 3D mesh corresponding to a spatial border part of the 3D item visualization, as positioned by the pre-final placement, is compared to a section of the translocal 3D mesh corresponding to an adjacent part of the 3D environment visualization, the adjacent part being adjacent to the spatial border part;
based on said comparison, a snapping-in is carried out such that a final placement of the 3D item visualization within the 3D environment visualization, and accordingly a final incorporation of the local 3D mesh into the translocal 3D mesh, is automatically set by refining the pre-final placement such that a spatial discrepancy between the spatial border part of the 3D item visualization and the adjacent part of the 3D environment visualization is minimized (see the snapping-in sketch below);
for the snapping-in, a correspondence between the local 3D mesh and the translocal 3D mesh is automatically determined by means of a feature matching algorithm identifying corresponding features within the local 3D mesh and a section of the translocal 3D mesh corresponding to a current overlap area between the 3D item visualization and the 3D environment visualization (see the feature-matching sketch below); and
for the snapping-in, a geometric distortion between the local 3D mesh and the translocal 3D mesh is corrected (see the distortion-correction sketch below).
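
The snapping-in of claim 1 refines the user's pre-final placement so that the border of the inserted item lines up with the surrounding terrain. Below is a minimal sketch of one way such a refinement could work, assuming the border and adjacent sections are available as (N, 3) NumPy vertex arrays and using a point-to-point ICP-style iteration; the function names, the rigid-motion model, and the convergence test are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the "snapping-in" refinement: rigid ICP between the
# border vertices of the local mesh and the adjacent translocal vertices.
# Assumes vertices are (N, 3) NumPy arrays; not the patented implementation.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src + t ~= dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def snap_in(border_vertices, adjacent_vertices, iters=30, tol=1e-6):
    """Refine the pre-final placement so the spatial discrepancy between the
    local mesh's border and the adjacent translocal region is minimized."""
    tree = cKDTree(adjacent_vertices)
    moved = border_vertices.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(moved)            # closest translocal points
        R, t = best_rigid_transform(moved, adjacent_vertices[idx])
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:            # converged: snap is stable
            break
        prev_err = err
    return R_total, t_total
```

The accumulated rotation and translation would then be applied to every vertex of the local 3D mesh, yielding the final placement.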
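The claim leaves the feature matching algorithm open. As one illustrative choice, the sketch below restricts the translocal geometry to the current overlap area and matches FPFH descriptors with RANSAC using the Open3D library; the inputs are assumed to be point clouds sampled from the two meshes, and the cropping box, voxel size, and all thresholds are assumptions rather than values from the patent.

```python
# Hypothetical sketch of the feature-matching step: find corresponding
# features between the local mesh and the translocal section under the
# current overlap area. Descriptor choice (FPFH) and thresholds are
# illustrative assumptions, not taken from the patent.
import open3d as o3d

def match_in_overlap(local_pcd, translocal_pcd, overlap_aabb, voxel=0.5):
    """Return a coarse 4x4 transform aligning local_pcd to translocal_pcd,
    estimated only from geometry inside the overlap bounding box."""
    # Restrict the translocal cloud to the part under the 3D item visualization.
    translocal_part = translocal_pcd.crop(overlap_aabb)

    clouds, feats = [], []
    for pcd in (local_pcd, translocal_part):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        clouds.append(down)
        feats.append(fpfh)

    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        clouds[0], clouds[1], feats[0], feats[1],
        mutual_filter=True,
        max_correspondence_distance=voxel * 1.5,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation
```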
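Correcting a geometric distortion between the two meshes can be illustrated with the simplest distortion model, a uniform scale difference (e.g., between the aerial and the terrestrial survey), estimated from matched point pairs with the Umeyama method. The patent does not prescribe this model, so treat the sketch as one possible realization.

```python
# Hypothetical distortion correction: estimate scale s, rotation R and
# translation t such that s * R @ x + t ~= y over matched vertex pairs
# (Umeyama, 1991). Modeling the distortion as a uniform scale is an
# illustrative assumption; the patent does not fix the distortion model.
import numpy as np

def similarity_from_correspondences(X, Y):
    """X, Y: (N, 3) matched points in the local and translocal meshes."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    cov = Yc.T @ Xc / len(X)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                      # keep a proper rotation
    R = U @ S @ Vt
    var_x = (Xc ** 2).sum() / len(X)
    s = np.trace(np.diag(D) @ S) / var_x  # uniform scale factor
    t = my - s * R @ mx
    return s, R, t

# The correction is applied to every local-mesh vertex before merging:
# corrected = s * (vertices @ R.T) + t
```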
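Finally, forming the combined 3D mesh amounts to appending the aligned local mesh to the translocal mesh. A minimal sketch, assuming indexed triangle meshes given as (vertices, faces) NumPy arrays; stitching the seam along the border (vertex welding, re-triangulation) is omitted.

```python
# Hypothetical final incorporation: concatenate the aligned local mesh into
# the translocal mesh to form the combined 3D mesh.
import numpy as np

def incorporate(translocal, local):
    """translocal, local: tuples of (vertices (N,3) float, faces (M,3) int)."""
    tv, tf = translocal
    lv, lf = local
    vertices = np.vstack([tv, lv])
    faces = np.vstack([tf, lf + len(tv)])  # offset local face indices
    return vertices, faces
```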