CPC A61B 5/165 (2013.01) [A61B 5/0002 (2013.01); A61B 5/0036 (2018.08); A61B 5/0205 (2013.01); A61B 5/1176 (2013.01); A61B 5/4836 (2013.01); A61B 5/6803 (2013.01); A61B 5/681 (2013.01); G06V 10/255 (2022.01); G06V 10/40 (2022.01); G06V 10/764 (2022.01); G06V 10/945 (2022.01); G16H 20/70 (2018.01); G16H 30/40 (2018.01); G16H 40/63 (2018.01); G16H 50/20 (2018.01); G16H 50/30 (2018.01); A61B 5/1114 (2013.01); A61B 5/1126 (2013.01); A61B 5/1128 (2013.01); A61B 5/7405 (2013.01); A61B 5/742 (2013.01); A61B 5/7455 (2013.01)] — 20 Claims
1. An image processing system, comprising:
at least one camera for capturing images of a surrounding environment; and
at least one processor and memory containing software;
wherein the software directs the at least one processor to:
obtain data comprising a sequence of images captured by the at least one camera;
detect a face for at least one person within a plurality of images in the sequence of images, wherein the at least one person is talking in at least one image of the plurality of images;
detect at least one emotional cue in the face based upon the plurality of images using a classifier that is trained using statistically representative social expression data that comprises image data of expressive talking sequences;
identify at least one emotion based on the at least one emotional cue; and
display at least one emotion indicator label in real time to provide therapeutic feedback to a user.
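The steps recited in claim 1 (face detection across a sequence, cue extraction with a classifier trained on expressive talking sequences, emotion identification, and real-time display of an indicator label) can be sketched in outline. The sketch below is purely illustrative, not the patented implementation: the `Frame` structure, the cue-to-emotion table, and the `identify_emotion` function are hypothetical stand-ins for the claimed camera feed, trained classifier, and display component.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    faces: List[str]   # IDs of faces detected in this image
    talking: bool      # whether the person is talking in this image
    cue: str           # emotional cue extracted from the face

# Toy stand-in for a classifier trained on expressive talking sequences.
EMOTION_BY_CUE = {
    "smile": "happiness",
    "frown": "sadness",
    "raised_brows": "surprise",
}

def identify_emotion(frames: List[Frame]) -> str:
    """Mirror the claim's conditions: a face must be detected in a
    plurality of images, with talking in at least one of them; then
    map the dominant cue to an emotion and return an indicator label."""
    with_face = [f for f in frames if f.faces]
    if len(with_face) < 2 or not any(f.talking for f in with_face):
        return "no-emotion-detected"
    cues = [f.cue for f in with_face]
    dominant = max(set(cues), key=cues.count)  # most frequent cue
    emotion = EMOTION_BY_CUE.get(dominant, "neutral")
    return f"emotion: {emotion}"  # label a real system would display live

frames = [
    Frame(faces=["p1"], talking=False, cue="smile"),
    Frame(faces=["p1"], talking=True, cue="smile"),
    Frame(faces=["p1"], talking=True, cue="frown"),
]
print(identify_emotion(frames))  # → emotion: happiness
```

In a deployed system the `Frame` fields would come from an actual face detector and feature extractor operating on camera images, and the dictionary lookup would be replaced by the trained classifier the claim describes.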