US 12,236,713 B2
System and method for identifying a person in a video
David Mendlovic, Tel Aviv (IL); Dan Raviv, Tel Aviv (IL); Lior Gelberg, Tel Aviv (IL); Khen Cohen, Tel Aviv (IL); Mor-Avi Azulay, Tel Aviv (IL); and Menahem Koren, Tel Aviv (IL)
Assigned to Ramot at Tel-Aviv University Ltd., Tel Aviv (IL)
Appl. No. 18/713,653
Filed by Ramot at Tel-Aviv University Ltd., Tel-Aviv (IL)
PCT Filed Nov. 30, 2022, PCT No. PCT/IB2022/061602
§ 371(c)(1), (2) Date May 26, 2024,
PCT Pub. No. WO2023/100105, PCT Pub. Date Jun. 8, 2023.
Claims priority of provisional application 63/284,643, filed on Dec. 1, 2021.
Prior Publication US 2024/0420504 A1, Dec. 19, 2024
Int. Cl. G06V 40/16 (2022.01); G06V 10/82 (2022.01); G06V 20/40 (2022.01)
CPC G06V 40/176 (2022.01) [G06V 10/82 (2022.01); G06V 20/41 (2022.01); G06V 40/161 (2022.01); G06V 40/171 (2022.01)] 11 Claims
OG exemplary drawing
 
1. A system for identifying a person in a video, comprising:
a computing device configured to generate a spatiotemporal emotion data compendium (STEM-DC) from the video; and to process the STEM-DC using a deep fully adaptive graph convolutional network (FAGC) to determine a first person representation vector that represents the person in the video,
wherein the generating the STEM-DC includes generating an iterated feature vector (IFV), and wherein the generating the IFV includes iterating a series of landmark feature vectors weighted by functions of transition probabilities between basic emotional states of the person detected in subsequent frames of the video.
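As a reading aid only, the sketch below illustrates one way the claimed IFV step could be realized: per-frame landmark feature vectors are accumulated ("iterated"), each weighted by a scalar function of the transition probabilities between basic emotional states detected in consecutive frames. The function names, the emotion list, the outer-product transition model, the decay factor, and the dimensions are illustrative assumptions and are not details taken from the patent.

```python
# Hypothetical sketch (not the patented implementation): building an iterated
# feature vector (IFV) from per-frame landmark features weighted by transition
# probabilities between basic emotional states in consecutive frames.
import numpy as np

BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def transition_probability(p_prev, p_curr):
    """Joint probability of moving from each emotional state in the previous
    frame to each state in the current frame (outer product of the two
    per-frame emotion distributions)."""
    return np.outer(p_prev, p_curr)  # shape: (E, E)

def iterate_feature_vector(landmark_vectors, emotion_probs, decay=0.9):
    """Iterate a series of landmark feature vectors, weighting each step by a
    scalar function of the frame-to-frame emotion transition probabilities.

    landmark_vectors: (T, D) array, one landmark feature vector per frame.
    emotion_probs:    (T, E) array, per-frame distribution over basic emotions.
    Returns the accumulated IFV of shape (D,).
    """
    ifv = landmark_vectors[0].copy()
    for t in range(1, len(landmark_vectors)):
        trans = transition_probability(emotion_probs[t - 1], emotion_probs[t])
        # Example weighting function: summarize the transition matrix by its
        # largest entry (probability of the most likely state-to-state move).
        w = float(trans.max())
        ifv = decay * ifv + w * landmark_vectors[t]
    return ifv

# Toy usage: 10 frames, 136-dimensional landmark features (68 points x 2 coords).
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 136))
emotions = rng.dirichlet(np.ones(len(BASIC_EMOTIONS)), size=10)
stem_dc_entry = iterate_feature_vector(frames, emotions)
print(stem_dc_entry.shape)  # (136,)
```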
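A similarly hedged sketch of the second claimed step follows: processing the STEM-DC with a deep fully adaptive graph convolutional network (FAGC) to obtain a person representation vector. Here "fully adaptive" is interpreted as a learned adjacency matrix over facial-landmark nodes; the layer sizes, pooling scheme, and class names are assumptions and do not reproduce the patented FAGC architecture.

```python
# Hypothetical sketch (architecture details are assumptions, not taken from the
# patent): a graph convolution whose adjacency matrix over landmark nodes is a
# learned parameter, followed by pooling into a person representation vector.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        # Learned (adaptive) adjacency over landmark nodes instead of a fixed graph.
        self.adj = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                     # x: (batch, num_nodes, in_dim)
        a = torch.softmax(self.adj, dim=-1)   # row-normalized adjacency
        return torch.relu(self.proj(a @ x))   # message passing + projection

class PersonEmbedder(nn.Module):
    def __init__(self, num_nodes=68, in_dim=32, hidden=64, embed_dim=128):
        super().__init__()
        self.gc1 = AdaptiveGraphConv(num_nodes, in_dim, hidden)
        self.gc2 = AdaptiveGraphConv(num_nodes, hidden, hidden)
        self.head = nn.Linear(hidden, embed_dim)

    def forward(self, stem_dc):               # stem_dc: (batch, num_nodes, in_dim)
        h = self.gc2(self.gc1(stem_dc))
        return self.head(h.mean(dim=1))       # (batch, embed_dim) person vector

# Toy usage: a batch of 4 STEM-DC tensors with 68 landmark nodes, 32 features each.
model = PersonEmbedder()
person_vec = model(torch.randn(4, 68, 32))
print(person_vec.shape)  # torch.Size([4, 128])
```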