| CPC G06F 16/3334 (2019.01) [G06F 16/35 (2019.01)] | 14 Claims |

1. A system for automated measurement of interactive content equivalence comprising:
(a) a computer that includes at least one processor;
(b) a machine-learning software module and training data, wherein the machine-learning software module causes the processor to perform the operations of:
(i) iteratively training, using the training data, a neural network to measure co-occurrence;
(ii) inserting the training data into an iterative training and testing loop to predict a target variable;
(iii) repeatedly determining, during each iteration of the training and testing loop, the target variable, wherein each iteration of the training and testing loop has differing weights assigned to one or more nodes of the neural network, each of the differing weights being updated with each iteration of the training and testing loop to reduce error in predicting the target variable and improve the predictive accuracy of the neural network, thereby creating a trained neural network; and
(iv) deploying the trained neural network;
(c) a memory device storing data and executable code that, when executed, causes the at least one processor to:
(i) run a search on a production set of interactive content files to determine matching interactive content files that comprise one or more user-specified communication elements, wherein the matching interactive content files are used to create an interactive content file subset;
(ii) select matching interactive content files within the interactive content file subset having a subject identification that corresponds to a given subject identification, wherein the selected matching interactive content files are stored as a content file seed set;
(iii) select a plurality of training interactive content files from the production set of interactive content files;
(iv) convert the training interactive content files and the content file seed set into machine encoded n-grams;
(v) generate by the trained neural network an equivalence value threshold by measuring co-occurrence between each individual training interactive content file and the content file seed set;
(vi) deploy automated equivalence detection software that, when executed, causes the processor to:
(A) activate a digital recorder that records a plurality of interactive voice communication sessions between an agent and an end user and that stores the plurality of interactive voice communication sessions to the memory device as audio files;
(B) convert each audio file to a target interactive content file that comprises a transcript of at least part of the interactive voice communication session;
(C) convert each target interactive content file to a plurality of machine encoded n-grams;
(D) measure, by the trained neural network, a co-occurrence of the machine encoded n-grams in each of the target interactive content files with n-grams in a seed set of concentrated target interactive content files, wherein the co-occurrence is converted to an equivalence value; and
(E) determine if the equivalence value is above the equivalence value threshold, wherein when the equivalence value is above the equivalence value threshold, the target interactive content file is stored to a positive match database;
(vii) monitor a volume of target interactive content files stored to the positive match database to detect whether the volume exceeds a positive match threshold;
(viii) determine the subject identification for each of the interactive content files stored to the positive match database and record the subject identification that appears the most frequently; and
(d) an interactive voice response software application configured to output audio data that corresponds to end user selectable options, wherein
(i) when the volume of target interactive content files in the positive match database exceeds the positive match threshold, the processor modifies the interactive voice response software application so that the audio data incorporates the subject identification that appears most frequently into one of the end user selectable options,
(ii) the audio data is transmitted to an end user computing device in response to a phone call from the end user, and
(iii) in response to the end user selecting the option incorporating the most frequent subject identification, the phone call is routed to an agent having training and experience that relates to the subject identification.
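The iterative training and testing loop recited in element (b) can be sketched as follows. This is a minimal illustration only: a single linear node trained by gradient descent stands in for the claimed neural network, and the training data, learning rate, and iteration count are all assumed for demonstration.

```python
# Toy illustration of element (b): iteratively update node weights to
# reduce error in predicting a target variable, yielding a "trained"
# model that can then be deployed. All specifics here are assumptions.
training_data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # (input, target variable)

w = 0.0   # weight assigned to the single node; differs each iteration
lr = 0.1  # learning rate (assumed)
for _ in range(200):  # the training and testing loop
    for x, target in training_data:
        pred = w * x
        error = pred - target
        w -= lr * error * x  # update weight to reduce prediction error
# w converges toward 2.0, the relationship underlying the training data
```

Each pass through the loop re-evaluates the target variable with updated weights, mirroring the claim's repeated determination step.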
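The n-gram co-occurrence measurement of elements (c)(iv)-(vi) can be sketched as below. As an assumption for illustration, a simple overlap ratio between word-level bigrams replaces the claimed trained-neural-network measurement, and the function names and example transcripts are hypothetical.

```python
def ngrams(text, n=2):
    """Split a transcript into word-level n-grams (n=2 is an assumed choice;
    the claim does not fix n or the encoding)."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def cooccurrence(target_text, seed_texts, n=2):
    """Fraction of the target file's n-grams that also occur in the seed
    set -- a stand-in for the claimed neural-network co-occurrence score,
    returned as an equivalence value in [0, 1]."""
    seed_grams = set()
    for text in seed_texts:
        seed_grams.update(ngrams(text, n))
    target_grams = ngrams(target_text, n)
    if not target_grams:
        return 0.0
    hits = sum(1 for g in target_grams if g in seed_grams)
    return hits / len(target_grams)

# Hypothetical seed set of transcripts matching a "billing" subject
seed = ["my bill is wrong this month", "question about my bill"]
score = cooccurrence("I have a question about my bill", seed)  # 0.5
```

Comparing the resulting equivalence value against the threshold generated in element (c)(v) decides whether the target file is stored to the positive match database.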
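The volume monitoring and menu modification of elements (c)(vii)-(viii) and (d) can be sketched as below. The threshold value, data shapes, and prompt wording are all assumptions; the claim does not specify any of them.

```python
from collections import Counter

POSITIVE_MATCH_THRESHOLD = 3  # assumed threshold value

def update_ivr_options(positive_matches, base_options,
                       threshold=POSITIVE_MATCH_THRESHOLD):
    """positive_matches: list of (file_id, subject_identification) rows
    from the positive match database (shape assumed for illustration).
    When the stored volume exceeds the threshold, the most frequent
    subject identification is incorporated into a new selectable option."""
    if len(positive_matches) <= threshold:
        return base_options  # volume not exceeded; menu unchanged
    counts = Counter(subject for _, subject in positive_matches)
    top_subject, _ = counts.most_common(1)[0]
    return base_options + [
        f"For {top_subject}, press {len(base_options) + 1}"
    ]

matches = [("a1", "billing"), ("a2", "billing"),
           ("a3", "outage"), ("a4", "billing")]
options = update_ivr_options(matches, ["For sales, press 1"])
# -> ["For sales, press 1", "For billing, press 2"]
```

A call selecting the appended option would then be routed to an agent associated with that subject identification, per element (d)(iii).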