CPC A61B 5/117 (2013.01) [A61B 5/165 (2013.01); A61B 5/6803 (2013.01); A61B 5/6843 (2013.01); A61B 5/7203 (2013.01); A61B 5/725 (2013.01); A61B 5/7267 (2013.01); A61B 7/04 (2013.01); H04R 1/083 (2013.01); H04R 1/1016 (2013.01); H04R 1/1041 (2013.01); H04R 1/1075 (2013.01); H04R 1/1091 (2013.01); G06F 21/32 (2013.01); H04R 2201/10 (2013.01)] — 19 Claims
1. A method for user recognition and emotion monitoring based on a smart headset, wherein the smart headset comprises an earplug part and a main body, the earplug part is provided with a first microphone and a wearing detection sensor, and a housing of the main body is internally provided with a signal amplification circuit, a communication module, and a microcontroller, wherein the method comprises:
detecting, by the wearing detection sensor, whether the user wears the smart headset properly,
obtaining a sound signal in an ear canal by the first microphone,
amplifying the sound signal by the signal amplification circuit to obtain an amplified sound signal,
outputting the amplified sound signal to the microcontroller,
transmitting the amplified sound signal by the microcontroller via the communication module to a smart terminal paired with the smart headset to extract a heart sound signal characteristic,
validating the legality of the identity of the user, and
inferring a current emotional state of the user, according to the heart sound signal characteristic;
wherein the step of validating the legality of the identity of the user and inferring the current emotional state of the user according to the heart sound signal characteristic comprises the steps of:
collecting an original sound signal in the ear canal by using the first microphone arranged in the earplug part;
amplifying the original sound signal to obtain the amplified sound signal in the ear canal;
processing the amplified sound signal in the ear canal and extracting the heart sound signal characteristic, inputting the heart sound signal characteristic into a pre-trained identity recognition model for identity authentication, and inputting the heart sound signal characteristic into a pre-trained emotion recognition model for emotional categorization; and
determining whether to unlock the smart headset and whether to unlock the smart terminal paired with the smart headset according to an identity authentication result, and determining the current emotional state of the user according to an emotional categorization result to generate an emotion monitoring report.
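The pipeline recited in the claim — extracting a heart sound signal characteristic, then feeding it to a pre-trained identity recognition model and a pre-trained emotion recognition model — could be sketched as follows. This is only an illustrative sketch: the feature set (RMS energy, zero-crossing rate, spectral centroid), the nearest-centroid classifier standing in for the "pre-trained models," and all names (`extract_heart_sound_features`, `NearestCentroidModel`) are assumptions, not the implementation disclosed in the patent.

```python
import numpy as np

def extract_heart_sound_features(signal, fs=1000):
    """Toy heart-sound feature vector: RMS energy, zero-crossing
    rate, and spectral centroid of the in-ear sound signal."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid])

class NearestCentroidModel:
    """Stand-in for the claim's pre-trained recognition models:
    each label (enrolled user, or emotion category) is represented
    by one template feature vector; prediction picks the closest."""
    def __init__(self, centroids):
        self.centroids = centroids  # dict: label -> feature vector
    def predict(self, features):
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(features - self.centroids[k]))

if __name__ == "__main__":
    fs = 1000
    t = np.arange(0, 1, 1 / fs)
    # Synthetic in-ear signal: a low-frequency tone as a placeholder
    # for the amplified heart sound picked up by the first microphone.
    sig = np.sin(2 * np.pi * 50 * t)
    feats = extract_heart_sound_features(sig, fs)
    identity_model = NearestCentroidModel(
        {"enrolled_user": feats, "unknown": feats + 10.0})
    print(identity_model.predict(feats))
```

In a deployment matching the claim, the feature extraction and both models would run on the paired smart terminal, and the authentication result would drive the unlock decision for the headset and terminal.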