US 11,937,929 B2
Systems and methods for using mobile and wearable video capture and feedback platforms for therapy of mental disorders
Catalin Voss, Stanford, CA (US); Nicholas Joseph Haber, Palo Alto, CA (US); Dennis Paul Wall, Palo Alto, CA (US); Aaron Scott Kline, Saratoga, CA (US); and Terry Allen Winograd, Stanford, CA (US)
Assigned to The Board of Trustees of the Leland Stanford Junior University, Stanford, CA (US)
Filed by The Board of Trustees of the Leland Stanford Junior University, Stanford, CA (US); and Catalin Voss, Stanford, CA (US)
Filed on Aug. 9, 2021, as Appl. No. 17/397,675.
Application 17/397,675 is a continuation of application No. 17/066,979, filed on Oct. 9, 2020, granted, now 11,089,985.
Application 17/066,979 is a continuation of application No. 15/589,877, filed on May 8, 2017, granted, now 10,835,167, issued on Nov. 17, 2020.
Claims priority of provisional application 62/333,108, filed on May 6, 2016.
Prior Publication US 2022/0202330 A1, Jun. 30, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. A61B 5/16 (2006.01); A61B 5/00 (2006.01); A61B 5/0205 (2006.01); A61B 5/11 (2006.01); A61B 5/1171 (2016.01); G06V 10/20 (2022.01); G06V 10/40 (2022.01); G06V 10/764 (2022.01); G06V 10/94 (2022.01); G16H 20/70 (2018.01); G16H 30/40 (2018.01); G16H 40/63 (2018.01); G16H 50/20 (2018.01); G16H 50/30 (2018.01)
CPC A61B 5/165 (2013.01) [A61B 5/0002 (2013.01); A61B 5/0036 (2018.08); A61B 5/0205 (2013.01); A61B 5/1176 (2013.01); A61B 5/4836 (2013.01); A61B 5/6803 (2013.01); A61B 5/681 (2013.01); G06V 10/255 (2022.01); G06V 10/40 (2022.01); G06V 10/764 (2022.01); G06V 10/945 (2022.01); G16H 20/70 (2018.01); G16H 30/40 (2018.01); G16H 40/63 (2018.01); G16H 50/20 (2018.01); G16H 50/30 (2018.01); A61B 5/1114 (2013.01); A61B 5/1126 (2013.01); A61B 5/1128 (2013.01); A61B 5/7405 (2013.01); A61B 5/742 (2013.01); A61B 5/7455 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An image processing system, comprising:
at least one camera for capturing images of a surrounding environment; and
at least one processor and memory containing software;
wherein the software directs the at least one processor to:
obtain data comprising a sequence of images captured by the at least one camera;
detect a face for at least one person within a plurality of images in the sequence of images, wherein the at least one person is talking in at least one image of the plurality of images;
detect at least one emotional cue in the face based upon the plurality of images using a classifier that is trained using statistically representative social expression data that comprises image data of expressive talking sequences;
identify at least one emotion based on the at least one emotional cue; and
display at least one emotion indicator label in real time to provide therapeutic feedback to a user.
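The steps recited in claim 1 (detect a face across a sequence of images, classify emotional cues with a trained classifier, identify an emotion, and surface a real-time indicator label) can be sketched as a simple pipeline. The sketch below is purely illustrative and is not the patented implementation: the function names, the cue taxonomy, the cue-to-emotion mapping, and the stubbed classifier are all assumptions introduced for clarity.

```python
# Illustrative sketch of the pipeline recited in claim 1.
# All names, the cue labels, and the stub classifier are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FaceDetection:
    """A face detected in one frame of the captured image sequence."""
    frame_index: int
    bbox: tuple          # (x, y, w, h) in pixels
    is_talking: bool     # whether the person is talking in this frame


def classify_emotional_cue(face: FaceDetection) -> str:
    # Stub standing in for a classifier "trained using statistically
    # representative social expression data" per the claim; here it just
    # maps a mock geometric feature to a cue label.
    return "smile" if face.bbox[2] >= face.bbox[3] else "neutral-mouth"


# Assumed cue-to-emotion mapping for illustration only.
CUE_TO_EMOTION = {"smile": "happy", "neutral-mouth": "neutral"}


def identify_emotion(cues: List[str]) -> Optional[str]:
    # Identify an emotion from the cues, here by majority vote.
    if not cues:
        return None
    top = max(set(cues), key=cues.count)
    return CUE_TO_EMOTION.get(top)


def emotion_indicator_label(faces: List[FaceDetection]) -> Optional[str]:
    # The claim requires that the person is talking in at least one
    # image of the plurality of images; skip sequences that are not.
    if not any(f.is_talking for f in faces):
        return None
    cues = [classify_emotional_cue(f) for f in faces]
    return identify_emotion(cues)


if __name__ == "__main__":
    seq = [
        FaceDetection(0, (10, 10, 64, 48), is_talking=False),
        FaceDetection(1, (10, 10, 66, 48), is_talking=True),
        FaceDetection(2, (10, 10, 40, 60), is_talking=True),
    ]
    print(emotion_indicator_label(seq))  # two "smile" cues vs. one -> "happy"
```

In a real system the detection and classification stages would run on camera frames (the claim recites at least one camera, processor, and memory), and the returned label would drive the real-time therapeutic feedback display.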