US 12,411,212 B1
Mobile safety systems
Sandeep Pandya, Irvine, CA (US); Sundeep Ahluwalia, San Diego, CA (US); Changsoo Jeong, Rancho Palos Verdes, CA (US); Michael Korkin, Glendale, CA (US); Kyungsuk Lee, Rancho Palos Verdes, CA (US); Christopher Ro, Aliso Viejo, CA (US); and Yong Wu, Signal Hill, CA (US)
Assigned to Everguard, Inc., Irvine, CA (US)
Filed by Everguard, Inc., Irvine, CA (US)
Filed on May 4, 2021, as Appl. No. 17/307,542.
Claims priority of provisional application 63/022,964, filed on May 11, 2020.
Int. Cl. G01S 7/48 (2006.01); G01S 7/4915 (2020.01); G01S 17/42 (2006.01); G01S 17/894 (2020.01); G06N 20/00 (2019.01)
CPC G01S 7/4802 (2013.01) [G01S 7/4915 (2013.01); G01S 17/42 (2013.01); G01S 17/894 (2020.01); G06N 20/00 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A mobile system for managing safety in an environment, comprising:
a mobile platform configured to move the mobile system in the environment;
a computer vision component for generating computer vision output data of the environment;
a real-time locating component for generating location data about an object within the environment, the location data including an identity of the object and a location of the object within the environment, wherein the location of the object is determined based on location tracking performed by a mobile tag device carried by the object, further wherein the mobile tag device identifies the object;
a light detection and ranging (LIDAR) component for generating 3D point cloud data of the environment; and
an edge computing device coupled to the computer vision component, the real-time locating component and the LIDAR component and configured to:
(i) receive a data stream including the computer vision output data, the location data and the 3D point cloud data to generate an input feature dataset, wherein the generation of the input feature dataset includes use of complementary information in the location data, the computer vision output data, and the 3D point cloud data for multimodal sensor fusion, further wherein:
the identity of the object depicted within the computer vision output data is determined based on the identity of the object and the location of the object included in the location data; and
the 3D point cloud data and the location data are registered to a 3D scene map of the environment, the 3D scene map of the environment being generated based on the computer vision output data; and
(ii) process the input feature dataset using a machine learning algorithm trained model to generate a safety related result, wherein the safety related result indicates a detected or predicted adverse event involving the object and one or more other objects within the environment.
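The two claimed steps, (i) fusing modalities by associating a tag's identity and location with a computer-vision detection, and (ii) running a trained model over the fused features to flag an adverse event, can be sketched roughly as follows. Everything here is an illustrative assumption, not the patentee's implementation: the names (`associate`, `predict_adverse_event`), the association radius, and the simple proximity rule standing in for the machine-learning-trained model are all hypothetical, and the LIDAR-to-scene-map registration step is omitted for brevity.

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TagReading:
    """Output of the real-time locating component (mobile tag device)."""
    object_id: str
    x: float  # metres, in a shared scene frame (assumed already registered)
    y: float

@dataclass
class VisionDetection:
    """Output of the computer vision component."""
    label: str
    x: float
    y: float

def associate(tag: TagReading, detections: List[VisionDetection],
              radius_m: float = 1.0) -> Optional[VisionDetection]:
    """Claim step (i), sketched: resolve the identity of a detected object
    by matching the tag's reported location against the nearest
    computer-vision detection within a (hypothetical) association radius."""
    best, best_d = None, radius_m
    for det in detections:
        d = math.hypot(det.x - tag.x, det.y - tag.y)
        if d <= best_d:
            best, best_d = det, d
    return best

def predict_adverse_event(a: TagReading, b: TagReading,
                          threshold_m: float = 2.0) -> bool:
    """Claim step (ii), sketched: a proximity rule stands in for the
    trained model, flagging an adverse event when two tracked objects
    come within an unsafe distance of each other."""
    return math.hypot(a.x - b.x, a.y - b.y) < threshold_m

worker = TagReading("worker-17", 4.0, 2.0)
forklift = TagReading("forklift-3", 5.2, 2.5)
dets = [VisionDetection("person", 4.1, 2.1), VisionDetection("vehicle", 5.3, 2.4)]
print(associate(worker, dets).label)        # → person
print(predict_adverse_event(worker, forklift))  # → True
```

In the claimed system the proximity rule would be replaced by a model trained on the full multimodal input feature dataset (vision, location, and point-cloud features), and the coordinates would come from registering all three streams to the 3D scene map rather than being assumed pre-aligned.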