US 12,299,819 B1
Mesh updates via mesh frustum cutting
Devin Bhushan, San Jose, CA (US); Seunghee Han, San Jose, CA (US); Caelin Thomas Jackson-King, Santa Clara, CA (US); Jamie Kuppel, Sunnyvale, CA (US); Stanislav Yazhenskikh, Santa Clara, CA (US); and Jim Jiaming Zhu, Scarborough (CA)
Assigned to SPLUNK Inc., San Francisco, CA (US)
Filed by SPLUNK INC., San Francisco, CA (US)
Filed on Dec. 19, 2022, as Appl. No. 18/068,471.
Application 18/068,471 is a continuation of application No. 17/086,307, filed on Oct. 30, 2020, granted, now Pat. No. 11,551,421.
Claims priority of provisional application 63/093,123, filed on Oct. 16, 2020.
Claims priority of provisional application 63/093,111, filed on Oct. 16, 2020.
Claims priority of provisional application 63/093,143, filed on Oct. 16, 2020.
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/20 (2011.01); G01B 11/22 (2006.01); G01S 17/89 (2020.01); G06T 17/20 (2006.01); G06T 19/00 (2011.01); H04L 67/10 (2022.01)
CPC G06T 17/205 (2013.01) [G01B 11/22 (2013.01); G01S 17/89 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); H04L 67/10 (2013.01); G06T 2219/024 (2013.01); G06T 2219/2004 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for scanning a three-dimensional (3D) environment, comprising:
generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space;
determining that a mesh in the one or more 3D meshes is visible in a current frame captured by an image sensor;
determining a portion of the mesh that lies within a view frustum associated with the current frame; and
texturing the portion of the mesh with one or more pixels in the current frame onto which the portion of the mesh is projected.
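
The claimed steps amount to a frustum cull followed by projective texturing: project mesh vertices into the current frame, keep the portion inside the view frustum, and sample frame pixels where that portion lands. Below is a minimal NumPy sketch of that pipeline, assuming a pinhole camera model and a conservative cut that drops, rather than clips, triangles straddling the frustum boundary; every function and parameter name (cut_mesh_to_frustum, world_to_cam, and so on) is illustrative and does not come from the patent.

    # Illustrative sketch of claim 1 under a pinhole-camera assumption.
    # Names and parameters are hypothetical, not taken from the patent.
    import numpy as np

    def cut_mesh_to_frustum(vertices, faces, world_to_cam,
                            fx, fy, cx, cy, width, height,
                            near=0.1, far=10.0):
        """Keep only the faces whose vertices all lie inside the view frustum.

        Conservative approximation: triangles that straddle the frustum
        boundary are dropped rather than clipped against its planes.
        """
        # World space -> camera space (4x4 extrinsic matrix).
        homo = np.hstack([vertices, np.ones((len(vertices), 1))])
        cam = (world_to_cam @ homo.T).T[:, :3]

        # Pinhole projection; guard against division by z near zero.
        z = cam[:, 2]
        safe_z = np.where(np.abs(z) > 1e-9, z, 1e-9)
        u = fx * cam[:, 0] / safe_z + cx
        v = fy * cam[:, 1] / safe_z + cy

        # A vertex is inside the frustum when its depth falls in [near, far]
        # and its projection lands inside the image bounds.
        inside = ((z >= near) & (z <= far) &
                  (u >= 0) & (u < width) & (v >= 0) & (v < height))
        kept_faces = faces[inside[faces].all(axis=1)]
        return kept_faces, np.stack([u, v], axis=1), inside

    def texture_from_frame(uv, inside, frame):
        """Per-vertex nearest-pixel sampling from the current frame, a simple
        stand-in for projective texture mapping of the visible mesh portion."""
        colors = np.zeros((len(uv), frame.shape[2]), dtype=frame.dtype)
        px = uv[inside].round().astype(int)
        px[:, 0] = np.clip(px[:, 0], 0, frame.shape[1] - 1)  # column (u)
        px[:, 1] = np.clip(px[:, 1], 0, frame.shape[0] - 1)  # row (v)
        colors[inside] = frame[px[:, 1], px[:, 0]]
        return colors

    # Example: one triangle a meter in front of an identity camera pose.
    verts = np.array([[-0.1, -0.1, 1.0], [0.1, -0.1, 1.0], [0.0, 0.1, 1.0]])
    faces = np.array([[0, 1, 2]])
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
    kept, uv, inside = cut_mesh_to_frustum(
        verts, faces, np.eye(4), fx=500, fy=500, cx=320, cy=240,
        width=640, height=480)
    colors = texture_from_frame(uv, inside, frame)

A production implementation would clip straddling triangles against the six frustum planes and rasterize the texture rather than sample it per vertex, but the cull, project, and sample structure above mirrors the determining and texturing steps recited in the claim.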