US 12,248,748 B2
Generating encoded text based on spoken utterances using machine learning systems and methods
Peter P. Myron, New Braunfels, TX (US); and Michael Mitchell, North Bend, WA (US)
Assigned to T-Mobile USA, Inc., Bellevue, WA (US)
Filed by T-Mobile USA, Inc., Bellevue, WA (US)
Filed on Jan. 16, 2024, as Appl. No. 18/414,095.
Application 18/414,095 is a continuation of application No. 17/841,518, filed on Jun. 15, 2022, granted, now Pat. No. 11,880,645.
Prior Publication US 2024/0152684 A1, May 9, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 17/00 (2019.01); G06F 40/126 (2020.01); G06F 40/166 (2020.01); G06N 20/00 (2019.01); G10L 15/22 (2006.01)
CPC G06F 40/126 (2020.01) [G06F 40/166 (2020.01); G06N 20/00 (2019.01); G10L 15/22 (2013.01); G10L 2015/226 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A mobile device for generating encoded text to convey nonverbal meaning based on audio inputs, the mobile device comprising:
at least one hardware processor;
at least one hardware display screen; and
at least one non-transitory memory carrying instructions that, when executed by the at least one hardware processor, cause the mobile device to:
analyze audio data for a spoken utterance using a text encoding model to identify a nonverbal characteristic including a sentiment of the spoken utterance;
generate, by the text encoding model, an encoded representation of the spoken utterance, the encoded representation comprising a transcription and a visual representation of the nonverbal characteristic of the spoken utterance;
generate, based on the nonverbal characteristic, a prompt to input a second spoken utterance comprising at least one suggestion for changes to one or more different nonverbal characteristics indicative of a different sentiment; and
cause display, on the at least one hardware display screen, of the encoded representation and the prompt.
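The claimed pipeline (analyze an utterance for a nonverbal characteristic, produce an encoded representation combining a transcription with a visual marker, and prompt for a second utterance with a different sentiment) can be sketched as follows. This is an illustrative toy only: the keyword-based sentiment heuristic, the bracket markup, and all function names are assumptions standing in for the patent's machine-learned text encoding model, which the claim does not specify at this level of detail.

```python
# Toy sketch of the claim-1 pipeline. The sentiment heuristic and the
# visual markup scheme below are illustrative stand-ins, not the
# patented text encoding model.

NEGATIVE_WORDS = {"angry", "terrible", "hate"}
POSITIVE_WORDS = {"great", "happy", "love"}


def classify_sentiment(transcription: str) -> str:
    """Stand-in for analyzing audio data to identify a nonverbal
    characteristic (here, sentiment inferred from a transcription)."""
    words = set(transcription.lower().split())
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return "neutral"


def encode(transcription: str, sentiment: str) -> str:
    """Encoded representation: the transcription plus a visual
    representation of the nonverbal characteristic."""
    marker = {"negative": "[!]", "positive": "[+]", "neutral": "[-]"}[sentiment]
    return f"{marker} {transcription}"


def suggest_prompt(sentiment: str) -> str:
    """Prompt suggesting a second utterance whose nonverbal
    characteristics indicate a different sentiment."""
    if sentiment == "negative":
        return "Try re-recording with a calmer tone to convey a neutral sentiment."
    return "Try re-recording with more emphasis to convey a stronger sentiment."


def process_utterance(transcription: str) -> tuple[str, str]:
    """Run the full pipeline and return what the device would display."""
    sentiment = classify_sentiment(transcription)
    return encode(transcription, sentiment), suggest_prompt(sentiment)


encoded, prompt = process_utterance("I hate waiting on hold")
print(encoded)  # [!] I hate waiting on hold
print(prompt)
```

In a real implementation the sentiment and transcription would come from the audio signal itself (prosody, pitch, volume) via the trained model rather than from transcription keywords; the sketch only mirrors the claim's data flow from analysis to displayed encoded text and prompt.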