CPC G16H 15/00 (2018.01) [G16H 80/00 (2018.01)]
20 Claims

1. A system, comprising:
orchestration logic configured to:
continually receive, from a multi-modal interface, audio data comprising at least a portion of an upstream conversation between a doctor and a patient;
continually provide, to at least one large language model, the audio data and the patient's medical history, wherein the at least one large language model is trained using training data that includes multiple audio conversations between doctors and patients to create at least one trained large language model; and
continually cause the at least one trained large language model to generate raw decision support insights based at least in part on the audio data and the patient's medical history; and
real-time decision support logic, in communication with the orchestration logic, and configured to:
transform the raw decision support insights into conversation-responsive decision support insights;
prioritize the conversation-responsive decision support insights based on medical urgency to create prioritized, conversation-responsive decision support insights;
deliver a text-based presentation of the prioritized, conversation-responsive decision support insights to the multi-modal interface for presentation to the doctor via a continually updated graphical user interface;
determine, based on the continually received audio data, that a condition has been met by a particular insight of the conversation-responsive decision support insights;
based on determining that the condition has been met by the particular insight of the conversation-responsive decision support insights, modify a graphical characteristic of the text-based presentation of the particular insight being presented in the graphical user interface; and
re-train the at least one trained large language model using additional training data comprising the upstream conversation between the doctor and the patient that includes the continually received audio data.
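For illustration only, the urgency-based prioritization and condition-triggered highlighting recited in claim 1 can be sketched as follows. This is a minimal, non-limiting sketch: the names `Insight`, `Urgency`, `prioritize`, and `update_highlights`, the `highlighted` flag standing in for the "graphical characteristic," and the keyword-matching condition are all hypothetical choices, not elements disclosed by the claim.

```python
from dataclasses import dataclass
from enum import IntEnum

class Urgency(IntEnum):
    # Medical urgency levels used to order insights (hypothetical scale)
    ROUTINE = 0
    ELEVATED = 1
    CRITICAL = 2

@dataclass
class Insight:
    text: str
    urgency: Urgency
    highlighted: bool = False  # stands in for the modified graphical characteristic

def prioritize(insights):
    """Order conversation-responsive insights by medical urgency, most urgent first."""
    return sorted(insights, key=lambda i: i.urgency, reverse=True)

def update_highlights(insights, transcript_chunk, triggers):
    """Flag any insight whose trigger phrase appears in the latest continually
    received transcript chunk (one possible form of the claimed 'condition')."""
    chunk = transcript_chunk.lower()
    for ins in insights:
        if any(t in chunk for t in triggers.get(ins.text, [])):
            ins.highlighted = True
    return insights
```

In this sketch the "condition" is simple keyword spotting against the incoming transcript; an actual embodiment could equally use model-scored relevance or any other signal derived from the continually received audio data.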