US 12,243,315 B2
Dignity preserving transformation of videos for remote monitoring based on visual and non-visual sensor data
Lily Vittayarukskul, San Francisco, CA (US)
Filed by Lily Vittayarukskul, San Francisco, CA (US)
Filed on Jan. 18, 2024, as Appl. No. 18/416,836.
Application 18/416,836 is a continuation of application No. 17/559,510, filed on Dec. 22, 2021, granted, now 11,922,696.
Claims priority of provisional application 63/208,792, filed on Jun. 9, 2021.
Prior Publication US 2024/0203124 A1, Jun. 20, 2024
Int. Cl. G06V 20/52 (2022.01); G01J 5/00 (2022.01); G01S 17/86 (2020.01); G06F 18/214 (2023.01); G06F 18/25 (2023.01); G06N 20/00 (2019.01); G06V 10/22 (2022.01); G06V 10/74 (2022.01); G06V 10/82 (2022.01); G06V 20/40 (2022.01); G06V 40/10 (2022.01); H04N 7/18 (2006.01); H04N 23/80 (2023.01)
CPC G06V 20/52 (2022.01) [G06F 18/214 (2023.01); G06F 18/2148 (2023.01); G06F 18/251 (2023.01); G06N 20/00 (2019.01); G06V 10/225 (2022.01); G06V 10/74 (2022.01); G06V 10/82 (2022.01); G06V 20/46 (2022.01); G06V 40/103 (2022.01); H04N 7/183 (2013.01); H04N 23/80 (2023.01); G01J 5/0025 (2013.01); G01S 17/86 (2020.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
receiving sensor data captured by a plurality of sensors placed in a location, wherein the plurality of sensors capture sensor data comprising a person being monitored and an environment surrounding the person, the plurality of sensors comprising at least a camera and a second sensor, wherein the plurality of sensors comprise a non-visual sensor capturing non-visual sensor data and a visual sensor capturing visual sensor data;
performing object recognition in the visual sensor data to identify a first set of objects visible in the visual sensor data;
labeling each of the first set of objects identified in the visual sensor data;
performing object recognition in the non-visual sensor data to identify a second set of objects in the non-visual sensor data;
correlating objects from the second set of objects identified in the non-visual sensor data with the first set of objects labeled using the visual sensor data, the correlating based on a location of each object in the corresponding sensor data;
labeling objects from the second set of objects identified in the non-visual sensor data based on labels of corresponding objects identified in the visual sensor data;
performing a dignity preserving transformation of the visual sensor data, wherein the dignity preserving transformation replaces at least a portion of the visual sensor data with non-visual sensor data along with labels of identified objects in the non-visual sensor data; and
transmitting the visual sensor data transformed by applying the dignity preserving transformation to a remote monitoring system, for display via a user interface.
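The steps of claim 1 can be sketched as a toy pipeline. Everything below (the function names, the data shapes, and the nearest-neighbor distance-threshold rule used for correlating detections) is an illustrative assumption for readability, not the patent's actual implementation: object recognition is stubbed out with pre-made detections, and a "frame" is a dict of named regions.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """An object found in one sensor's data, in a shared room coordinate frame."""
    location: tuple              # (x, y) position
    label: Optional[str] = None  # set for visual detections, copied to non-visual

def correlate_and_label(visual, nonvisual, max_dist=1.0):
    """Match each non-visual detection to the nearest visual detection by
    location and copy its label over (the claim's correlating/labeling steps)."""
    for nv in nonvisual:
        nearest = min(visual, key=lambda v: math.dist(v.location, nv.location),
                      default=None)
        if nearest and math.dist(nearest.location, nv.location) <= max_dist:
            nv.label = nearest.label
    return nonvisual

def dignity_preserving_transform(visual_frame, nonvisual_frame, labeled_nv, region):
    """Replace one region of the visual frame with the corresponding
    non-visual data plus the labels of objects identified in it."""
    out = dict(visual_frame)
    out[region] = {
        "data": nonvisual_frame.get(region),
        "labels": sorted(nv.label for nv in labeled_nv if nv.label),
    }
    return out

# Toy data: a camera sees a labeled person and bed; a thermal sensor sees
# two unlabeled heat signatures at nearby locations in the same frame.
visual = [Detection((0.0, 0.0), "person"), Detection((3.0, 1.0), "bed")]
thermal = [Detection((0.2, 0.1)), Detection((2.9, 1.1))]

labeled = correlate_and_label(visual, thermal)
frame = dignity_preserving_transform(
    {"bed_area": "<rgb pixels>"}, {"bed_area": "<thermal map>"},
    labeled, region="bed_area")
print(frame["bed_area"]["labels"])   # ['bed', 'person']
```

The transformed `frame` is what would be transmitted to the remote monitoring system: the sensitive visual region is gone, while the non-visual representation and its object labels remain available for display.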