US 11,875,883 B1
De-duplication and contextually-intelligent recommendations based on natural language understanding of conversational sources
Leo V. Perez, Platte City, MO (US); Justin Morrison, Kansas City, KS (US); Tanuj Gupta, Leawood, KS (US); Joe Geris, Kansas City, KS (US); Rachel Gegen, Overland Park, KS (US); Jacob Geers, Kansas City, KS (US); Gyandeep Singh, Olathe, KS (US); and Emin Agassi, Blue Bell, PA (US)
Assigned to Cerner Innovation, Inc., Kansas City, KS (US)
Filed by CERNER INNOVATION, INC., Kansas City, KS (US)
Filed on Dec. 23, 2020, as Appl. No. 17/132,859.
Application 17/132,859 is a continuation-in-part of application No. 16/720,641, filed on Dec. 19, 2019, granted, now Pat. No. 11,398,232.
Claims priority of provisional application No. 62/783,695, filed on Dec. 21, 2018.
Claims priority of provisional application No. 62/783,688, filed on Dec. 21, 2018.
Int. Cl. G16H 10/60 (2018.01); G10L 15/22 (2006.01); G10L 15/18 (2013.01); G06F 40/174 (2020.01); G06F 40/30 (2020.01); G06F 40/279 (2020.01); G06F 16/215 (2019.01); G16H 70/20 (2018.01); G16H 50/70 (2018.01); G16H 50/20 (2018.01); G16H 40/20 (2018.01); G16H 70/40 (2018.01); G16H 20/10 (2018.01); G06F 3/0482 (2013.01)
CPC G16H 10/60 (2018.01) [G06F 3/0482 (2013.01); G06F 16/215 (2019.01); G06F 40/174 (2020.01); G06F 40/279 (2020.01); G06F 40/30 (2020.01); G10L 15/1815 (2013.01); G10L 15/22 (2013.01); G16H 20/10 (2018.01); G16H 40/20 (2018.01); G16H 50/20 (2018.01); G16H 50/70 (2018.01); G16H 70/20 (2018.01); G16H 70/40 (2018.01)] 19 Claims
OG exemplary drawing
 
1. One or more non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed by one or more hardware processors, cause the one or more hardware processors to perform a method for de-duplication and contextual recommendations using natural language understanding of voice conversations, the method comprising:
identifying one or more clinical concepts using one or more clinical ontologies, each clinical ontology providing contextual relationships between the one or more clinical concepts, wherein the one or more clinical concepts are identified in near real-time with transcription of voice data being captured by a device;
for each of the one or more clinical concepts, identifying one or more classification groups that correspond to the clinical concept;
determining, for each of the one or more clinical concepts, whether the clinical concept is contextually present or contextually absent in a data source by utilizing the contextual relationships provided by the one or more clinical ontologies to contextually match an item in the data source, the item corresponding to the one or more clinical concepts;
for each of the one or more clinical concepts determined to be contextually absent in the data source, providing a primary recommendation for addressing each of the one or more clinical concepts determined to be contextually absent, wherein the primary recommendation includes adding a corresponding clinical concept that was contextually absent to electronic documentation of a particular clinical visit within the data source;
for each of the one or more clinical concepts determined to be contextually present, providing a secondary recommendation for addressing each of the one or more clinical concepts determined to be contextually present, wherein the secondary recommendation includes modifying the corresponding clinical concept that is contextually present in the data source and/or initiating a clinical action that is specific to the clinical concept, and wherein the clinical action includes an electronic medical order for a specific patient, medicine, and dosage; and
populating a graphical user interface with (i) the one or more clinical concepts determined to be contextually absent and the primary recommendation, sorted into the one or more classification groups in the graphical user interface, and (ii) the one or more clinical concepts determined to be contextually present and the secondary recommendation, sorted into the one or more classification groups in the graphical user interface, wherein the graphical user interface is populated in near real-time with the transcription of the voice data, the primary and secondary recommendations, and one or more newly identified clinical concepts.
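Claim 1 describes a pipeline: extract clinical concepts from a live transcript via ontology lookup, test each concept for contextual presence in an existing data source (e.g., a chart or problem list), emit a primary recommendation (add the concept) when it is absent and a secondary recommendation (modify it or initiate a clinical action) when it is present, and group the results for a user interface. The sketch below is a minimal illustration of that flow under stated assumptions, not the patented implementation; the toy ontology, the `Recommendation` class, and the negation heuristic standing in for "contextual" matching are all hypothetical.

```python
# Hypothetical sketch of the pipeline in claim 1. The toy ontology,
# negation heuristic, and all names here are illustrative assumptions,
# not the patent's actual implementation.
from dataclasses import dataclass

# Toy ontology: surface form -> (canonical concept, classification group).
ONTOLOGY = {
    "chest pain":   ("Chest Pain",   "Symptoms"),
    "aspirin":      ("Aspirin",      "Medications"),
    "hypertension": ("Hypertension", "Problems"),
}

NEGATIONS = ("no ", "denies ", "without ")  # crude contextual cue

@dataclass
class Recommendation:
    concept: str
    group: str
    kind: str      # "primary" (add) or "secondary" (modify / clinical action)
    action: str

def extract_concepts(transcript: str) -> list[tuple[str, str]]:
    """Identify clinical concepts in a transcript chunk via ontology lookup."""
    text = transcript.lower()
    return [ONTOLOGY[term] for term in ONTOLOGY if term in text]

def contextually_present(concept: str, data_source: dict) -> bool:
    """Check whether the concept is contextually matched in the data source.

    A real system would use the ontology's contextual relationships
    (negation, temporality, subject); here we match the canonical name
    and reject entries recorded with a negating qualifier.
    """
    entry = data_source.get(concept)
    return entry is not None and not any(
        entry.lower().startswith(n) for n in NEGATIONS
    )

def recommend(transcript: str, data_source: dict) -> dict[str, list[Recommendation]]:
    """Produce primary/secondary recommendations grouped by classification."""
    grouped: dict[str, list[Recommendation]] = {}
    for concept, group in extract_concepts(transcript):
        if contextually_present(concept, data_source):
            rec = Recommendation(concept, group, "secondary",
                                 f"review/modify '{concept}' or place a related order")
        else:
            rec = Recommendation(concept, group, "primary",
                                 f"add '{concept}' to the visit documentation")
        grouped.setdefault(group, []).append(rec)
    return grouped

if __name__ == "__main__":
    chart = {"Hypertension": "hypertension, controlled"}   # existing data source
    chunk = "Patient reports chest pain; currently taking aspirin."
    for group, recs in recommend(chunk, chart).items():
        print(group)
        for r in recs:
            print(f"  [{r.kind}] {r.action}")
```

In the claimed system this loop would run incrementally as each transcription chunk arrives, repainting the grouped recommendations in the graphical user interface in near real-time; the single batch call above stands in for one such update.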