| CPC G10H 1/0008 (2013.01) [G06F 16/685 (2019.01); G06F 16/686 (2019.01); G10H 2220/106 (2013.01)] | 20 Claims |

1. A system for creating multimedia moments from audio data comprising:
(a) a server comprising one or more processors;
(b) a model database configured to store a plurality of moment models, wherein each moment model of the plurality of moment models is configured to identify a unique moment type, wherein the plurality of moment models comprises a base moment model;
(c) a transcript database configured to store a plurality of transcript datasets, wherein each transcript dataset of the plurality of transcript datasets comprises text derived from corresponding audio data and is time indexed to the corresponding audio data;
wherein the one or more processors are configured to:
(i) receive an episode audio dataset;
(ii) create a transcript dataset based on the episode audio dataset, and add the transcript dataset to the plurality of transcript datasets;
(iii) determine whether the plurality of moment models comprises a focused moment model for the episode audio dataset, and where the focused moment model is within the plurality of moment models, use the focused moment model as a selected moment model;
(iv) where the focused moment model is not within the plurality of moment models, use the base moment model as the selected moment model;
(v) analyze the transcript dataset using the selected moment model to identify a plurality of moments within the transcript dataset, wherein the plurality of moments comprises a set of positive moments that are of high relevance to the unique moment type;
(vi) for at least one positive moment of the set of positive moments, create a multimedia moment based on that positive moment, wherein the multimedia moment comprises a transcript text from the transcript dataset that corresponds to that positive moment, an audio segment from the episode audio dataset that corresponds to the transcript text, and a moment type that describes the unique moment type associated with that positive moment; and
(vii) cause a user interface that is based on the multimedia moment to display on a user device, wherein the user interface is configured to
(1) present the transcript text in synchronized alignment with the audio segment,
(2) accept user feedback regarding relevance of the multimedia moment to the unique moment type, and
(3) update a training dataset associated with the selected moment model based on the user feedback.
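Outside the claim language, the pipeline recited in limitations (iii) through (vii)(3) can be sketched in code. This is an illustrative stand-in, not the patented implementation: the claim does not specify a model architecture or data schema, so every name below (`MomentModel`, `TranscriptSegment`, the keyword-based `score`) is a hypothetical simplification. The sketch shows the focused-model lookup with base-model fallback, positive-moment selection, multimedia moment creation with time-indexed audio spans, and the feedback-to-training-dataset loop.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TranscriptSegment:
    """One span of transcript text, time indexed to the episode audio."""
    text: str
    start_s: float
    end_s: float

@dataclass
class MultimediaMoment:
    """Transcript text, the corresponding audio span, and the moment type."""
    transcript_text: str
    audio_span: Tuple[float, float]
    moment_type: str

@dataclass
class MomentModel:
    """Hypothetical stand-in: scores segments for one unique moment type
    with keyword matching; a real model would be a trained classifier."""
    moment_type: str
    keywords: List[str]
    threshold: float = 0.5
    training_dataset: List[Tuple[str, bool]] = field(default_factory=list)

    def score(self, segment: TranscriptSegment) -> float:
        hits = sum(kw in segment.text.lower() for kw in self.keywords)
        return hits / max(len(self.keywords), 1)

def select_model(models: Dict[str, MomentModel], episode_id: str,
                 base_key: str = "base") -> MomentModel:
    # Limitations (iii)/(iv): prefer a focused model for this episode;
    # fall back to the base moment model when none exists.
    return models.get(episode_id, models[base_key])

def create_moments(model: MomentModel,
                   transcript: List[TranscriptSegment]) -> List[MultimediaMoment]:
    # Limitations (v)/(vi): segments scoring above threshold are the
    # positive moments, each turned into a multimedia moment.
    return [
        MultimediaMoment(seg.text, (seg.start_s, seg.end_s), model.moment_type)
        for seg in transcript
        if model.score(seg) >= model.threshold
    ]

def record_feedback(model: MomentModel, moment: MultimediaMoment,
                    relevant: bool) -> None:
    # Limitation (vii)(3): user feedback updates the training dataset
    # associated with the selected moment model.
    model.training_dataset.append((moment.transcript_text, relevant))
```

A short usage run under the same assumptions: with only a base model registered, `select_model` falls back to it, one segment clears the threshold, and the resulting moment's feedback lands in the model's training dataset.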