US 11,900,266 B2
Database systems and interactive user interfaces for dynamic conversational interactions
Murray A. Reicher, Rancho Santa Fe, CA (US); Stewart Nickolas, Austin, TX (US); and David Boloker, Brookfield, MA (US)
Assigned to MERATIVE US L.P., Ann Arbor, MI (US)
Filed by MERATIVE US L.P., Ann Arbor, MI (US)
Filed on May 10, 2019, as Appl. No. 16/409,473.
Application 16/409,473 is a continuation of application No. 15/811,526, filed on Nov. 13, 2017.
Prior Publication US 2019/0266495 A1, Aug. 29, 2019
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 5/02 (2023.01); G16H 70/20 (2018.01); G16H 30/20 (2018.01); G06F 9/451 (2018.01); G06N 20/00 (2019.01); G06F 16/25 (2019.01); G06F 16/242 (2019.01); G06F 16/435 (2019.01)
CPC G06N 5/02 (2013.01) [G06F 9/453 (2018.02); G06F 16/243 (2019.01); G06F 16/25 (2019.01); G06F 16/437 (2019.01); G06N 20/00 (2019.01); G16H 30/20 (2018.01); G16H 70/20 (2018.01)] 13 Claims
OG exemplary drawing
 
1. A method for assisting a user employing a system capable of analyzing medical images and responding to inputs from the user, comprising:
receiving a first input from the user by a system capable of retrieving and displaying medical information, the first input including a request for prior images of a patient represented within a currently-displayed image;
applying natural language processing (NLP) to the first input to determine a first intent of the first input;
determining, based on the first input, the first intent, context of viewing activity by the user including medical images and data previously provided to the user as a result of an action, information in a knowledge base regarding a plurality of clinical data elements, including correlations between respective medical symptoms and medical diagnoses, a state of a conversation between the user and the system, and user behavior learned by applying recursively deep analytic analysis to one or more inputs, one or more actions to take,
wherein the one or more actions comprise timed responses including a first action including providing a first speech output to the user and automatically selecting and displaying two or more images of the patient for comparison by the user based on contextual rules for the user,
wherein the two or more images of the patient are selected by:
automatically accessing the knowledge base to determine images of interest to the user,
automatically selecting, from image storage, a plurality of prior images of the patient, and
displaying the plurality of prior images of the patient;
maintaining a prioritized listing of possible conversation actions;
displaying a first portion of the prioritized listing in response to user input;
automatically displaying a second portion of the prioritized listing in response to a risk level to the patient exceeding a predetermined threshold;
receiving second input from the user including a workflow command;
determining a second action based on the second input, the second action including providing second speech output to the user prompting the user to slow down; and
providing the second speech output to the user.
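The claimed method can be read as a pipeline: determine an intent from user input via NLP, select actions using a knowledge base, maintain a prioritized listing of conversation actions, and automatically surface additional actions when patient risk exceeds a predetermined threshold. The following is a minimal illustrative sketch of that flow, not an implementation from the patent; all names (`Assistant`, `KNOWLEDGE_BASE`, `RISK_THRESHOLD`, the keyword-based intent matcher) are hypothetical stand-ins for the claimed components.

```python
# Hypothetical sketch of the claimed conversational flow: NLP intent
# detection, knowledge-base-driven image selection, and a prioritized
# action list gated by a patient risk threshold. Names are illustrative.
from dataclasses import dataclass, field

RISK_THRESHOLD = 0.8  # assumed "predetermined threshold" for patient risk

# Toy knowledge base standing in for stored clinical data elements
# and the prior images of interest to the user.
KNOWLEDGE_BASE = {
    "prior_images": ["chest_ct_2016.dcm", "chest_ct_2018.dcm"],
}

@dataclass(order=True)
class ConversationAction:
    priority: int
    description: str = field(compare=False)

class Assistant:
    def __init__(self):
        # Prioritized listing of possible conversation actions
        # (lowest priority number first).
        self.actions = sorted([
            ConversationAction(1, "display prior images for comparison"),
            ConversationAction(2, "summarize relevant clinical findings"),
            ConversationAction(3, "prompt user to slow down"),
        ])

    def determine_intent(self, text: str) -> str:
        # Minimal keyword matcher standing in for the claimed NLP step.
        lowered = text.lower()
        if "prior" in lowered and "image" in lowered:
            return "request_prior_images"
        if "next" in lowered:
            return "workflow_command"
        return "unknown"

    def handle(self, text: str, patient_risk: float):
        intent = self.determine_intent(text)
        outputs = []
        if intent == "request_prior_images":
            # First action: speech output plus automatic selection of
            # prior images from the knowledge base for side-by-side review.
            images = KNOWLEDGE_BASE["prior_images"]
            outputs.append(f"Displaying {len(images)} prior images for comparison.")
        # Display a first portion of the prioritized listing on user input.
        outputs.append(f"Suggested: {self.actions[0].description}")
        # Automatically display further actions when the risk level
        # to the patient exceeds the predetermined threshold.
        if patient_risk > RISK_THRESHOLD:
            outputs.extend(f"Urgent: {a.description}" for a in self.actions[1:])
        return intent, outputs
```

Run against a sample request such as `Assistant().handle("show prior images of this patient", 0.9)`, the sketch returns the detected intent along with the speech outputs, including the urgent actions triggered by the elevated risk score.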