CPC G06T 17/10 (2013.01) [G06T 19/20 (2013.01); G06T 2207/10028 (2013.01)] | 20 Claims |
1. A method comprising:
generating, based on first sensor data captured by a depth sensor on a mobile device, three-dimensional data representing a physical space that includes a real-world asset, wherein:
the real-world asset generates raw machine data, and
a data source stores a mapping between an asset identifier and the raw machine data;
generating, based on second sensor data captured by an image sensor on the mobile device, two-dimensional data representing the physical space;
generating an adaptable three-dimensional (3D) representation of the physical space based on the three-dimensional data and the two-dimensional data, wherein:
the adaptable 3D representation includes a plurality of coordinates representing different positions in a 3D coordinate space corresponding to the physical space, and
the plurality of coordinates encapsulates a digital representation of the real-world asset;
transforming the adaptable 3D representation into geometry data comprising (i) a set of vertices, (ii) a set of faces comprising edges between pairs of vertices, and (iii) texture data;
generating a host extended reality (XR) environment including the geometry data and at least one visualization based on at least a portion of the raw machine data;
applying, based on a first input, a first color along a specified path in the host XR environment that appears on at least one face included in the set of faces to generate a first paint path; and
transmitting, to a remote device, data corresponding to the first input.
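The geometry and painting steps recited in the claim can be sketched in code. This is an illustrative sketch only, not the patented implementation; every name here (`Mesh`, `paint_path`, the per-face texture layout) is a hypothetical choice made for demonstration. It models geometry data as (i) vertices, (ii) faces whose edges connect pairs of vertices, and (iii) texture data, then applies a color along a path of faces and returns the input data that would be transmitted to a remote device.

```python
# Hypothetical sketch of the claimed geometry data and paint-path steps.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list               # (x, y, z) coordinates in the 3D coordinate space
    faces: list                  # each face is a tuple of vertex indices; its edges
                                 # connect consecutive pairs of those vertices
    texture: dict                # per-face texture data, e.g. an RGB color
    paint: dict = field(default_factory=dict)  # per-face paint overlay

def paint_path(mesh, face_path, color):
    """Apply `color` to each face along `face_path` (the "first paint path")
    and return a record of the first input, suitable for transmission."""
    for face_index in face_path:
        mesh.paint[face_index] = color
    return {"path": list(face_path), "color": color}

# Two triangular faces sharing an edge, standing in for the digital asset:
mesh = Mesh(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    faces=[(0, 1, 2), (0, 2, 3)],
    texture={0: (200, 200, 200), 1: (200, 200, 200)},
)

# Paint red along both faces; the returned dict is what a client might
# send to a remote device so it can replay the same paint path.
first_input = paint_path(mesh, [0, 1], color=(255, 0, 0))
```

Keeping the paint overlay separate from the base texture mirrors the claim's distinction between the generated geometry data and the later-applied first paint path.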