US 11,676,732 B2
Machine learning-based diagnostic classifier
Monika Sharma Mellem, Falls Church, VA (US); Yuelu Liu, South San Francisco, CA (US); Parvez Ahammad, San Jose, CA (US); Humberto Andres Gonzalez Cabezas, Santa Clara, CA (US); William J. Martin, San Francisco, CA (US); and Pablo Christian Gersberg, San Francisco, CA (US)
Assigned to NEUMORA THERAPEUTICS, INC., Brisbane, CA (US)
Filed by BlackThorn Therapeutics, Inc., Brisbane, CA (US)
Filed on Sep. 1, 2021, as Appl. No. 17/446,633.
Application 17/446,633 is a continuation of application No. 16/514,879, filed on Jul. 17, 2019, granted, now Pat. No. 11,139,083.
Application 16/514,879 is a continuation of application No. 16/400,312, filed on May 1, 2019.
Claims priority of provisional application 62/665,243, filed on May 1, 2018.
Prior Publication US 2021/0398685 A1, Dec. 23, 2021
Int. Cl. G16H 50/30 (2018.01); G16H 50/20 (2018.01); G16H 10/60 (2018.01)
CPC G16H 50/30 (2018.01) [G16H 10/60 (2018.01); G16H 50/20 (2018.01)] 21 Claims
OG exemplary drawing
 
1. A system for evaluating a user, the system comprising:
a microphone;
a camera positioned to capture an image of the user and configured to output video data;
a memory containing a machine readable medium comprising machine executable code having stored thereon instructions for performing a method of evaluating the user; and
a control system coupled to the memory and comprising one or more processors, the control system configured to execute the machine executable code to cause the control system to:
record, by the camera, a set of test video data during a time period;
record, by the microphone, a set of test audio data during the time period;
process the video data to assign a plurality of pixels to a face of the user;
analyze the plurality of pixels to determine whether the face of the user is within a frame captured by the camera;
in response to determining that the face of the user is within the frame captured by the camera, process the plurality of pixels to output video features associated with the user;
process the audio data to identify sounds representing a voice of the user and output audio features associated with the user;
process, using a machine learning model, the audio and video features, wherein the machine learning model was previously trained with a set of training data comprising audio and video data recorded from a plurality of individuals with labels indicating whether each of the plurality of individuals has one of a plurality of characteristics; and
output an indication of whether the user has at least one of the plurality of characteristics.
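The processing steps recited in the claim (assigning pixels to a face, checking that the face is in frame, extracting video and audio features, and classifying with a previously trained model) can be sketched as minimal stubs. Every function name, threshold, and the stand-in logistic "model" below is an illustrative assumption, not the patent's actual implementation:

```python
# Hypothetical sketch of the claimed evaluation pipeline.
# All detectors, feature extractors, and the "trained model" are stubs.
import numpy as np

def assign_face_pixels(frame, threshold=0.5):
    """Assign a plurality of pixels to the user's face (stub: bright pixels)."""
    return frame > threshold  # boolean mask over the frame

def face_in_frame(mask, min_pixels=10):
    """Determine whether the face is within the frame captured by the camera."""
    return int(mask.sum()) >= min_pixels

def video_features(frame, mask):
    """Output video features for the face pixels (stub: mean intensity, area)."""
    return np.array([frame[mask].mean(), mask.mean()])

def audio_features(samples, voice_threshold=0.1):
    """Identify sounds representing a voice and output audio features
    (stub: RMS energy and fraction of voiced samples)."""
    voiced = np.abs(samples) > voice_threshold
    rms = np.sqrt(np.mean(samples ** 2))
    return np.array([rms, voiced.mean()])

def classify(features, weights, bias=0.0):
    """Stand-in for the previously trained machine learning model:
    a fixed-weight logistic unit producing a yes/no indication."""
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    return score >= 0.5

# Simulated test video and audio data recorded during one time period.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))           # one grayscale video frame
samples = rng.normal(0, 0.2, 16000)    # one second of audio at 16 kHz

mask = assign_face_pixels(frame)
if face_in_frame(mask):
    feats = np.concatenate([video_features(frame, mask),
                            audio_features(samples)])
    indication = classify(feats, weights=np.ones(4))
    print("characteristic indicated:", bool(indication))
```

In a deployed system the face-pixel step would typically be a learned face detector and the classifier a model trained on labeled audio/video recordings from many individuals, as the claim recites; the stubs above only mirror the control flow.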