CPC G10L 17/02 (2013.01) [G10L 15/04 (2013.01); G10L 15/183 (2013.01); G10L 15/26 (2013.01); G10L 17/00 (2013.01); H04H 20/95 (2013.01); H04L 12/18 (2013.01); H04L 12/1822 (2013.01); H04L 12/1831 (2013.01)]
15 Claims

1. A computer-implemented method for processing and broadcasting one or more moment-associating elements, the method comprising:
connecting with one or more calendar systems containing event information associated with an event;
receiving the event information from the one or more calendar systems, the event information including one or more speaker names associated with one or more speakers, one or more speech titles, one or more starting times, one or more end times, a custom vocabulary, location information, and attendee information associated with one or more attendees;
receiving one or more voiceprints corresponding to one or more voice-generating sources respectively;
granting subscription permission to one or more subscribers;
receiving the one or more moment-associating elements, the one or more moment-associating elements including one or more voice elements of the one or more voice-generating sources;
transforming the one or more moment-associating elements into one or more pieces of moment-associating information based at least in part on the one or more voiceprints and the event information; and
transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers;
wherein the transforming the one or more moment-associating elements into one or more pieces of moment-associating information based at least in part on the one or more voiceprints and the event information includes:
segmenting the one or more moment-associating elements into a plurality of moment-associating segments based at least in part on the one or more voiceprints;
assigning a segment speaker for each segment of the plurality of moment-associating segments based at least in part on the one or more voiceprints;
creating a custom language model based at least in part on the event information;
transcribing the plurality of moment-associating segments into a plurality of transcribed segments based at least in part on the custom language model and the one or more voiceprints; and
generating the one or more pieces of moment-associating information based at least in part on the plurality of transcribed segments and the segment speaker assigned for each segment of the plurality of moment-associating segments;
wherein the creating a custom language model based at least in part on the event information includes:
determining the custom language model based at least in part on one or more points of interest corresponding to the location information;
building the custom language model based at least in part on social information corresponding to the one or more speakers and the one or more attendees; and
integrating the custom vocabulary into the custom language model.
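The transforming limitation of claim 1 (segmenting by voiceprint, assigning a segment speaker, transcribing, and generating speaker-labeled pieces of moment-associating information) can be illustrated with a minimal Python sketch. This is illustrative only and not part of the claim: all names and data shapes are hypothetical, voiceprint matching is reduced to cosine similarity between per-frame embeddings and enrolled voiceprints, and transcription is passed in as a stub.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def segment_and_assign(frames, voiceprints):
    # Group consecutive frames whose embeddings match the same enrolled
    # voiceprint; each run becomes one moment-associating segment with
    # an assigned segment speaker.
    segments = []
    for frame in frames:
        speaker = max(voiceprints,
                      key=lambda s: cosine(frame["embedding"], voiceprints[s]))
        if segments and segments[-1][0] == speaker:
            segments[-1][1].append(frame)
        else:
            segments.append((speaker, [frame]))
    return segments

def transform(frames, voiceprints, transcribe):
    # Transcribe each segment (stubbed here) and emit speaker-labeled
    # pieces of moment-associating information with start/end times.
    pieces = []
    for speaker, seg in segment_and_assign(frames, voiceprints):
        pieces.append({"speaker": speaker,
                       "text": transcribe(seg),
                       "start": seg[0]["t"],
                       "end": seg[-1]["t"]})
    return pieces
```

In practice the per-frame embeddings and the transcription step would come from trained speaker-embedding and speech-recognition models; the sketch only shows how the claimed steps compose.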
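The custom-language-model limitation (points of interest from the location information, social information for the speakers and attendees, and the custom vocabulary) can likewise be sketched as a term-boost table layered onto a base recognizer. Again, this is a hypothetical illustration: the field names, data shapes, and boost weights are assumptions, not the claimed implementation.

```python
def build_custom_language_model(event_info, points_of_interest, social_terms):
    # Return a term -> boost-weight map that a hypothetical decoder could
    # layer onto its base language model. Higher weight = stronger bias.
    boosts = {}

    def boost(term, weight):
        key = term.lower()
        boosts[key] = boosts.get(key, 0.0) + weight

    # Points of interest corresponding to the location information.
    for poi in points_of_interest.get(event_info["location"], []):
        boost(poi, 1.0)
    # Social information corresponding to the speakers and attendees.
    for person in event_info["speakers"] + event_info["attendees"]:
        for term in social_terms.get(person, []):
            boost(term, 0.5)
    # The custom vocabulary from the event information, weighted highest.
    for term in event_info["custom_vocabulary"]:
        boost(term, 2.0)
    return boosts
```

A segment transcriber would then consult this table when scoring candidate hypotheses, preferring event-specific terms over acoustically similar alternatives.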
|