US 12,444,136 B2
Scene understanding using occupancy grids
Divya Ramnath, Sunnyvale, CA (US); Shiyu Dong, Belmont, CA (US); Siddharth Choudhary, San Jose, CA (US); Siddharth Mahendran, Mountain View, CA (US); Arumugam Kalai Kannan, Sunnyvale, CA (US); Prateek Singhal, Mountain View, CA (US); and Khushi Gupta, Mountain View, CA (US)
Assigned to Magic Leap, Inc., Plantation, FL (US)
Appl. No. 18/275,468
Filed by Magic Leap, Inc., Plantation, FL (US)
PCT Filed Feb. 3, 2022, PCT No. PCT/US2022/015056
§ 371(c)(1), (2) Date Aug. 2, 2023.
PCT Pub. No. WO2022/169938, PCT Pub. Date Aug. 11, 2022.
Claims priority of provisional application 63/145,868, filed on Feb. 4, 2021.
Prior Publication US 2024/0127538 A1, Apr. 18, 2024
Int. Cl. G06T 17/20 (2006.01); G06V 20/64 (2022.01)
CPC G06T 17/20 (2013.01) [G06V 20/64 (2022.01); G06T 2210/12 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method performed by one or more data processing apparatus, the method, comprising:
recognizing one or more objects in a model of a physical environment generated using images of the physical environment;
for each object of the one or more objects:
fitting a bounding box around each object;
generating an occupancy grid within the bounding box around each object, wherein the occupancy grid includes a plurality of cells;
assigning a value to each cell of the occupancy grid based on whether the cell includes a portion of each object; and
generating an object representation that includes information describing the occupancy grid for each object; and
sending the object representations to one or more devices.
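A minimal sketch of the claimed flow, assuming each recognized object is available as a set of 3D surface points sampled from the model of the physical environment. The function name, the fixed grid resolution, and the dictionary-based object representation below are illustrative assumptions, not the patented implementation.

# Illustrative sketch only: builds a per-object occupancy grid inside an
# axis-aligned bounding box from the object's surface points. All names and
# parameters here are assumptions for illustration.
import numpy as np

def build_object_representation(points: np.ndarray, label: str, cells_per_axis: int = 8) -> dict:
    """Fit a bounding box around `points` (an (N, 3) array of 3D points from one
    recognized object), fill an occupancy grid inside it, and return a compact
    object representation describing that grid."""
    bbox_min = points.min(axis=0)
    bbox_max = points.max(axis=0)

    # Avoid zero-size cells for degenerate (flat) objects.
    extent = np.maximum(bbox_max - bbox_min, 1e-6)
    cell_size = extent / cells_per_axis

    # Map every point to the index of the grid cell that contains it.
    indices = np.floor((points - bbox_min) / cell_size).astype(int)
    indices = np.clip(indices, 0, cells_per_axis - 1)

    # Assign a value to each cell: 1 if it includes a portion of the object, else 0.
    grid = np.zeros((cells_per_axis,) * 3, dtype=np.uint8)
    grid[indices[:, 0], indices[:, 1], indices[:, 2]] = 1

    # A compact representation that could be serialized and sent to other devices.
    return {
        "label": label,
        "bbox_min": bbox_min.tolist(),
        "bbox_max": bbox_max.tolist(),
        "cells_per_axis": cells_per_axis,
        "occupancy": grid.flatten().tolist(),
    }

if __name__ == "__main__":
    # Example: points roughly sampled from a flat, table-top-like slab.
    rng = np.random.default_rng(0)
    table_points = rng.uniform([0.0, 0.0, 0.7], [1.2, 0.6, 0.75], size=(500, 3))
    rep = build_object_representation(table_points, label="table")
    print(rep["label"], sum(rep["occupancy"]), "occupied cells")

Flattening the grid into a list of per-cell values keeps each object representation small and easy to serialize, which is consistent with the claim's final step of sending the object representations to one or more devices.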