US 12,230,060 B2
System and method for determining human emotions
Kanat Sultanbekov, New York, NY (US); and Viktor Ivanov, Moscow (RU)
Assigned to Gleenr Inc., New York, NY (US)
Appl. No. 17/794,748
Filed by UTEST APP, INC., New York, NY (US)
PCT Filed Jan. 22, 2021, PCT No. PCT/US2021/014536
§ 371(c)(1), (2) Date Jul. 22, 2022,
PCT Pub. No. WO2021/150836, PCT Pub. Date Jul. 29, 2021.
Claims priority of provisional application 62/964,776, filed on Jan. 23, 2020.
Prior Publication US 2023/0111692 A1, Apr. 13, 2023
Int. Cl. G06V 40/16 (2022.01); A61B 5/026 (2006.01); G06V 40/18 (2022.01); G06V 40/19 (2022.01)
CPC G06V 40/174 (2022.01) [A61B 5/0261 (2013.01); G06V 40/193 (2022.01)] 23 Claims
OG exemplary drawing
 
1. A system comprising:
an input device for receiving at least one image of a human face;
a transdermal imaging processor, executing at least one predetermined software application, for processing the at least one image by: mapping topography of blood vessels and muscles underlying exposed skin and tissues of the human face based on light reflected in the at least one image, applying a digital monochrome filter to remove red, green and blue colors from the at least one image to determine hemoglobin movement intensity, and applying a digital heat filter to generate a heat map of the at least one image as a first identifying data;
a heat map processor, executing the at least one predetermined software application, for processing the heat map by comparing the heat map with an emotional condition database to generate a second identifying data, wherein the emotional condition database comprises a plurality of predetermined emotional conditions corresponding to respective predetermined heat patterns of a human;
a facial muscle processor, executing the at least one predetermined software application, for processing the at least one image by comparing the at least one image with a motion unit database to generate a third identifying data, the motion unit database comprising a plurality of predetermined motion units corresponding to respective predetermined motion unit numbers, and predetermined degrees of intensity;
an oculomotor reaction processor, executing the at least one predetermined software application, for processing the at least one image by comparing the at least one image with an oculomotor database to generate a fourth identifying data, the oculomotor database comprising a plurality of oculomotor parameters corresponding to respective oculomotor characteristics to determine a characteristic of truthfulness;
a main processor, executing the at least one predetermined software application, for receiving and processing the first, second, third and fourth identifying data, and generating a final emotion identifier; and
an output device for receiving and displaying the final emotion identifier.
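The transdermal imaging element of claim 1 describes two filtering steps: a digital monochrome filter that removes the red, green and blue colors to isolate hemoglobin movement intensity, and a digital heat filter that produces a heat map. The sketch below illustrates one plausible reading of those steps; the channel weights and the number of heat bands are illustrative assumptions, not values disclosed in the patent.

```python
import numpy as np

def monochrome_filter(rgb):
    """Collapse an RGB face image of shape (H, W, 3) to one intensity channel.

    Claim 1 recites removing the red, green and blue colors to determine
    hemoglobin movement intensity; the per-channel weights below are a
    hypothetical choice for illustration only.
    """
    weights = np.array([0.5, 0.3, 0.2])  # assumed hemoglobin weighting
    return rgb.astype(np.float64) @ weights

def heat_filter(intensity, levels=8):
    """Quantize the intensity map into discrete heat bands (the 'heat map')."""
    lo, hi = intensity.min(), intensity.max()
    if hi == lo:  # flat image: every pixel falls in the lowest band
        return np.zeros_like(intensity, dtype=int)
    bands = ((intensity - lo) / (hi - lo) * levels).astype(int)
    return np.minimum(bands, levels - 1)  # clamp the maximum pixel into the top band
```

The resulting integer array would serve as the "first identifying data" passed on to the heat map processor.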
 
7. A method comprising the steps of:
capturing at least one image of a human face with an input device;
executing at least one predetermined software application using a transdermal imaging processor for processing the at least one image by: mapping topography of blood vessels and muscles underlying exposed skin and tissues of the human face based on light reflected in the at least one image, applying a digital monochrome filter to remove red, green and blue colors from the at least one image to determine hemoglobin movement intensity, and applying a digital heat filter to generate a heat map of the at least one image as a first identifying data;
executing the at least one predetermined software application using a heat map processor for processing the heat map by comparing the heat map with an emotional condition database to generate a second identifying data, wherein the emotional condition database comprises a plurality of predetermined emotional conditions corresponding to respective predetermined heat patterns of a human;
executing the at least one predetermined software application using a facial muscle processor for processing the at least one image by comparing the at least one image with a motion unit database to generate a third identifying data, the motion unit database comprising a plurality of predetermined motion units corresponding to respective predetermined motion unit numbers, and predetermined degrees of intensity;
executing the at least one predetermined software application using an oculomotor reaction processor, for processing the at least one image by comparing the at least one image with an oculomotor database to generate a fourth identifying data, the oculomotor database comprising a plurality of oculomotor parameters corresponding to respective oculomotor characteristics to determine a characteristic of truthfulness;
executing the at least one predetermined software application using a main processor for receiving and processing the first, second, third and fourth identifying data, and generating a final emotion identifier; and
receiving and displaying the final emotion identifier on an output device.
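The heat map processor of claims 1 and 7 compares the heat map against an emotional condition database of predetermined heat patterns. The patent does not specify the comparison metric; a minimal sketch, assuming nearest-template matching by Euclidean distance over tiny hypothetical templates, might look like this.

```python
import numpy as np

# Hypothetical "emotional condition database": predetermined emotional
# conditions keyed to predetermined heat patterns (toy 2x2 templates).
EMOTION_TEMPLATES = {
    "calm":  np.array([[0, 0], [1, 1]]),
    "anger": np.array([[3, 3], [2, 2]]),
}

def classify_heat_map(heat_map):
    """Return the emotional condition whose stored heat pattern is closest.

    Uses L2 distance as an assumed similarity measure; the patent leaves the
    comparison method unspecified.
    """
    return min(EMOTION_TEMPLATES,
               key=lambda emotion: np.linalg.norm(heat_map - EMOTION_TEMPLATES[emotion]))
```

The winning label would stand in for the "second identifying data" generated by the heat map processor.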
 
14. A non-transitory computer readable medium storing instructions executable by a processor, the stored instructions implementing a method comprising the steps of:
receiving, at an input device, at least one image of a human face;
executing, by a processor, a predetermined program for determining human emotions by:
processing the at least one image by: mapping topography of blood vessels and muscles underlying exposed skin and tissues of the human face based on light reflected in the at least one image, applying a digital monochrome filter to remove red, green and blue colors from the at least one image to determine hemoglobin movement intensity, and applying a digital heat filter to generate a heat map of the at least one image as a first identifying data,
processing the heat map by comparing the heat map with an emotional condition database to generate a second identifying data, wherein the emotional condition database comprises a plurality of predetermined emotional conditions corresponding to respective predetermined heat patterns of a human,
processing the at least one image by comparing the at least one image with a motion unit database to generate a third identifying data, the motion unit database comprising a plurality of predetermined motion units corresponding to respective predetermined motion unit numbers, and predetermined degrees of intensity,
processing the at least one image by comparing the at least one image with an oculomotor database to generate a fourth identifying data, the oculomotor database comprising a plurality of oculomotor parameters corresponding to respective oculomotor characteristics to determine a characteristic of truthfulness, and
receiving and processing the first, second, third and fourth identifying data, and generating a final emotion identifier; and
outputting, at an output device, the final emotion identifier.
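Each independent claim ends with a main processor (or processing step) that receives the first through fourth identifying data and generates a final emotion identifier, without reciting a fusion rule. One simple reading, sketched under the assumption that each identifying datum reduces to a candidate emotion label, is a weighted vote; both the label inputs and the weights are hypothetical.

```python
from collections import Counter

def fuse_identifiers(first, second, third, fourth, weights=(1.0, 1.0, 1.0, 1.0)):
    """Combine the four identifying data into a final emotion identifier.

    Assumes each input is a candidate emotion label (e.g. from the heat map,
    motion unit, and oculomotor comparisons); the claims do not disclose a
    fusion rule, so a weighted vote is shown purely for illustration.
    """
    tally = Counter()
    for label, weight in zip((first, second, third, fourth), weights):
        tally[label] += weight
    # The highest-weighted label becomes the final emotion identifier.
    return tally.most_common(1)[0][0]
```

The returned label is what the claims' output device would receive and display.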