CPC G06F 16/35 (2019.01) [G06F 40/284 (2020.01); G10L 15/26 (2013.01); G06V 40/176 (2022.01); G06V 40/20 (2022.01)] — 20 Claims

1. A computer-implemented method comprising:
    dividing at least a portion of user input data into at least a first set of text data and at least one set of non-text data;
    converting at least a first portion of the at least one set of non-text data into at least a second set of text data;
    classifying at least a portion of the at least a first set of text data and at least a portion of the at least a second set of text data in accordance with one or more sentiment-related categories using a first set of one or more artificial intelligence techniques;
    classifying at least a second portion of the at least one set of non-text data in accordance with the one or more sentiment-related categories using a second set of one or more artificial intelligence techniques, wherein classifying the at least a second portion of the at least one set of non-text data comprises:
        generating one or more frames from the at least a second portion of the at least one set of non-text data in accordance with at least one designated temporal parameter;
        identifying, in at least a portion of the one or more frames, at least one of one or more facial gestures and one or more body gestures by at least one individual; and
        mapping at least a portion of the at least one of one or more facial gestures and one or more body gestures to one or more feature values using at least one convolutional neural network, wherein each of the one or more feature values is associated with at least one designated sentiment; and
    performing one or more automated actions based at least in part on one or more of the classifying of the text data and the classifying of the non-text data;
    wherein the method is performed by at least one processing device comprising a processor coupled to a memory.
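The claimed method can be sketched as a small end-to-end pipeline. This is a minimal, hypothetical illustration only: a keyword lexicon stands in for the claimed "artificial intelligence techniques" for text, and a fixed gesture-to-feature lookup table stands in for the claimed convolutional neural network; all function names, the frame representation, and the "escalate"/"log" actions are illustrative assumptions, not part of the claim.

```python
# Toy stand-ins for the claimed AI components (illustrative only):
# a sentiment lexicon replaces the text classifier, and a lookup table
# replaces the CNN that maps gestures to sentiment-bearing feature values.
SENTIMENT_LEXICON = {"great": "positive", "happy": "positive",
                     "bad": "negative", "angry": "negative"}
GESTURE_FEATURES = {"smile": ("positive", 0.9), "frown": ("negative", 0.8),
                    "arms_crossed": ("negative", 0.6)}

def classify_text(text):
    """Classify text data into sentiment-related categories (toy model)."""
    votes = [SENTIMENT_LEXICON[w] for w in text.lower().split()
             if w in SENTIMENT_LEXICON]
    return max(set(votes), key=votes.count) if votes else "neutral"

def sample_frames(video_frames, interval):
    """Generate frames per a designated temporal parameter (every Nth frame)."""
    return video_frames[::interval]

def map_gestures(frames):
    """Identify facial/body gestures in frames and map each to a feature
    value associated with a designated sentiment (lookup stands in for CNN)."""
    features = []
    for frame in frames:
        for gesture in frame.get("gestures", []):
            if gesture in GESTURE_FEATURES:
                features.append(GESTURE_FEATURES[gesture])
    return features

def classify_input(text_data, transcript, video_frames, interval=2):
    """End-to-end sketch: classify text (including converted non-text,
    e.g. a speech transcript) and non-text data, then act on the results."""
    text_label = classify_text(text_data + " " + transcript)
    scores = {}
    for label, weight in map_gestures(sample_frames(video_frames, interval)):
        scores[label] = scores.get(label, 0.0) + weight
    non_text_label = max(scores, key=scores.get) if scores else "neutral"
    # Perform an automated action based on one or both classifications
    # (the specific actions here are invented for illustration).
    action = ("escalate" if "negative" in (text_label, non_text_label)
              else "log")
    return {"text": text_label, "non_text": non_text_label, "action": action}
```

For example, calling `classify_input("service was bad", "really angry", frames)` with frames containing a smile and a frown classifies the text as negative and triggers the escalation action; in a real embodiment the lexicon and lookup table would be replaced by trained models.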