CPC G06V 20/52 (2022.01) [G06F 18/214 (2023.01); G06F 18/2148 (2023.01); G06F 18/251 (2023.01); G06N 20/00 (2019.01); G06V 10/225 (2022.01); G06V 10/74 (2022.01); G06V 10/82 (2022.01); G06V 20/46 (2022.01); G06V 40/103 (2022.01); H04N 7/183 (2013.01); H04N 23/80 (2023.01); G01J 5/0025 (2013.01); G01S 17/86 (2020.01)] | 20 Claims |
1. A computer-implemented method comprising:
receiving sensor data captured by a plurality of sensors placed in a location, wherein the plurality of sensors capture sensor data representing a person being monitored and an environment surrounding the person, the plurality of sensors comprising at least a camera and a second sensor, wherein the plurality of sensors comprise a non-visual sensor capturing non-visual sensor data and a visual sensor capturing visual sensor data;
performing object recognition in the visual sensor data to identify a first set of objects visible in the visual sensor data;
labeling each of the first set of objects identified in the visual sensor data;
performing object recognition in the non-visual sensor data to identify a second set of objects in the non-visual sensor data;
correlating objects from the second set of objects identified in the non-visual sensor data with the first set of objects labeled using the visual sensor data, the correlating based on a location of each object in the corresponding sensor data;
labeling objects from the second set of objects identified in the non-visual sensor data based on labels of corresponding objects identified in the visual sensor data;
performing a dignity-preserving transformation of the visual sensor data, wherein the dignity-preserving transformation replaces at least a portion of the visual sensor data with non-visual sensor data along with labels of identified objects in the non-visual sensor data; and
transmitting the visual sensor data transformed by applying the dignity-preserving transformation to a remote monitoring system, for display via a user interface.
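The correlation and label-transfer steps recited above can be illustrated with a minimal sketch. All names here (`DetectedObject`, `correlate_and_label`, `dignity_preserving_transform`, the distance threshold, and the grid-based frames) are hypothetical illustrations, not part of the claimed method: detections from the two modalities are matched by nearest center in a shared coordinate frame, labels flow from the visual detections to the non-visual ones, and a region of the visual frame is then replaced with the corresponding non-visual data.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List


@dataclass
class DetectedObject:
    """A detected object with an optional label and a center in a shared frame."""
    label: Optional[str]
    center: Tuple[float, float]


def correlate_and_label(visual_objs: List[DetectedObject],
                        nonvisual_objs: List[DetectedObject],
                        max_dist: float = 0.5) -> List[DetectedObject]:
    """Transfer labels from visual detections to non-visual detections by
    nearest-center matching; objects farther than max_dist stay unmatched."""
    for nv in nonvisual_objs:
        best, best_d = None, max_dist
        for v in visual_objs:
            dx = nv.center[0] - v.center[0]
            dy = nv.center[1] - v.center[1]
            d = (dx * dx + dy * dy) ** 0.5
            if d < best_d:
                best, best_d = v, d
        if best is not None:
            nv.label = best.label
    return nonvisual_objs


def dignity_preserving_transform(visual_frame, nonvisual_frame, region):
    """Replace the pixels inside region = (x0, y0, x1, y1) of the visual
    frame with the corresponding non-visual values, leaving the rest intact."""
    x0, y0, x1, y1 = region
    out = [row[:] for row in visual_frame]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = nonvisual_frame[y][x]
    return out
```

In this sketch the visual detection supplies the semantic label (e.g., "person") while the non-visual modality supplies the displayed data for the replaced region, so the remote monitoring interface can still show what is happening without exposing the camera imagery of the person.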