CPC G10L 15/1815 (2013.01) [G10L 15/1822 (2013.01); G10L 15/22 (2013.01); G10L 15/26 (2013.01); G10L 13/08 (2013.01); G10L 2015/223 (2013.01); G10L 2015/225 (2013.01); G10L 2015/227 (2013.01); G10L 2015/228 (2013.01)]  16 Claims
4. A method comprising:
receiving first input data representing a first natural language declarative statement;
determining that the first input data corresponds to a declarative intent;
determining, based at least in part on the first input data corresponding to the declarative intent, first data representing a first state described by at least a portion of the first input data;
generating, by a first machine learning model, second data based at least in part on the first data, wherein the second data represents a predicted action for the first natural language declarative statement;
generating output data representing a request for clarification based at least in part on the predicted action;
receiving second input data representing a response to the request for clarification;
determining an action intended to be taken in response to the first input data based at least in part on the second input data and the first input data;
storing the first data representing the first state in association with third data representing the action in a first data structure;
receiving third input data representing a second natural language declarative statement different from the first natural language declarative statement;
determining that the third input data corresponds to the declarative intent;
determining fourth data representing a second state described by at least a portion of the second natural language declarative statement;
determining that the second state corresponds to the first state;
receiving, from the first data structure, the third data representing the action; and
generating a natural language prompt as output data, the natural language prompt representing a prompt for executing the action.
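The claimed sequence can be illustrated with a minimal, hypothetical sketch in Python. The helper names (classify_intent, extract_state, predict_action, states_correspond) and the toy word-overlap state matching are illustrative assumptions for readability only; they are not the implementation disclosed in the patent.

```python
from typing import Optional

# Illustrative sketch of the claimed flow; all helpers below are hypothetical
# stand-ins, not the patent's disclosed models or data structures.

def classify_intent(utterance: str) -> str:
    # Toy classifier: treat statements that are not questions as declarative.
    return "declarative" if not utterance.rstrip().endswith("?") else "other"

def extract_state(utterance: str) -> frozenset:
    # Toy state representation ("first data" / "fourth data"): content words.
    stopwords = {"the", "is", "are", "a", "an", "it", "in", "here", "my"}
    return frozenset(
        w.lower().strip(".!,") for w in utterance.split()
        if w.lower().strip(".!,") not in stopwords
    )

def predict_action(state: frozenset) -> str:
    # Stand-in for the first machine learning model's prediction ("second data").
    return "turn_on_lights" if {"dark", "dim"} & state else "unknown_action"

def states_correspond(s1: frozenset, s2: frozenset) -> bool:
    # Toy correspondence test between the first state and the second state.
    return len(s1 & s2) >= max(1, min(len(s1), len(s2)) // 2)

# "First data structure": confirmed state-to-action associations ("third data").
state_action_store: dict = {}

def handle_declarative(utterance: str, clarification: Optional[str] = None) -> str:
    if classify_intent(utterance) != "declarative":
        return "Not a declarative statement."
    state = extract_state(utterance)
    # If a stored state corresponds to this one, retrieve the stored action and
    # generate the natural language prompt for executing it.
    for stored_state, action in state_action_store.items():
        if states_correspond(stored_state, state):
            return f"Would you like me to {action.replace('_', ' ')}?"
    predicted = predict_action(state)
    if clarification is None:
        # Output data representing a request for clarification.
        return f"Did you mean you want me to {predicted.replace('_', ' ')}?"
    # The clarification response confirms the intended action; store it.
    state_action_store[state] = predicted
    return f"Okay, I'll remember that. Executing: {predicted}"

# First declarative statement: clarification is requested, then the confirmed
# action is stored in association with the extracted state.
print(handle_declarative("It is dark in here."))
print(handle_declarative("It is dark in here.", clarification="yes"))
# A different declarative statement whose state corresponds to the first one.
print(handle_declarative("The living room is dark."))
```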