US 12,346,945 B2
System for virtual agents to help customers and businesses
Jagadeshwar Nomula, Sunnyvale, CA (US); and Vinesh Gudla, Pleasanton, CA (US)
Assigned to Voicemonk, Inc., Sunnyvale, CA (US)
Filed by Voicemonk, Inc., Sunnyvale, CA (US)
Filed on May 28, 2024, as Appl. No. 18/676,471.
Application 18/676,471 is a continuation of application No. 18/465,186, filed on Sep. 12, 2023, granted, now 11,995,698.
Application 18/465,186 is a continuation of application No. 17/323,287, filed on May 18, 2021, granted, now 12,039,583.
Application 17/323,287 is a continuation of application No. 15/356,512, filed on Nov. 18, 2016, granted, now 11,068,954.
Claims priority of provisional application 62/318,762, filed on Apr. 5, 2016.
Claims priority of provisional application 62/275,043, filed on Jan. 5, 2016.
Claims priority of provisional application 62/257,722, filed on Nov. 20, 2015.
Prior Publication US 2024/0311888 A1, Sep. 19, 2024
Int. Cl. G06Q 30/00 (2023.01); G06F 16/957 (2019.01); G06Q 30/0601 (2023.01)
CPC G06Q 30/0617 (2013.01) [G06F 16/957 (2019.01); G06Q 30/0629 (2013.01)] 7 Claims
OG exemplary drawing
 
1. A system configured to execute actions based on user input, the system comprising a virtual agent comprising a virtual agent client and a virtual agent server for an application, wherein the virtual agent is configured to function with the application, wherein the virtual agent server is configured to:
receive an input from a user of the application;
identify among a plurality of actions, using the input, an action desired by the user to be performed;
execute at least one of the plurality of actions;
wherein the virtual agent client and server are collectively configured to:
determine a correlation between a first action available in the application and a second action available in the application;
store, in the virtual agent server, the correlation between the first and second actions;
associate the first and second actions with one or more tags;
execute at least one of the first and second actions, based on the desired action and the correlation between the first and second actions, by executing a code snippet in the virtual agent client; and
display an output page to the user based on the executed action,
wherein the desired action is determined using a machine learning model trained on previous user interactions, and
wherein the virtual agent client and server are collectively configured to identify, using a machine learning model, a plurality of interactive elements within an interface of the application, each corresponding to a respective one of the first action and the second action.
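The claimed flow — the server mapping a user's input to a desired action, consulting stored correlations and tags between actions, and having the client execute a code snippet to produce an output page — can be sketched in Python. This is a minimal illustration, not the patented implementation: the class and field names (`Action`, `VirtualAgentServer`, `VirtualAgentClient`, `snippet`, `tags`) are hypothetical, and a tag-overlap heuristic stands in for the machine learning model trained on previous user interactions.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    snippet: str                       # code the virtual agent client will run
    tags: set = field(default_factory=set)

class VirtualAgentServer:
    """Stores actions, their correlations, and tags (hypothetical names)."""

    def __init__(self):
        self.actions = {}
        self.correlations = {}         # action name -> correlated action name

    def register(self, action, correlated_with=None, tags=()):
        self.actions[action.name] = action
        action.tags.update(tags)
        if correlated_with:
            # Store the correlation between the first and second actions.
            self.correlations[action.name] = correlated_with
            self.correlations[correlated_with] = action.name

    def identify_action(self, user_input):
        # Stand-in for the ML intent model: choose the action whose tags
        # overlap the input tokens the most.
        tokens = set(user_input.lower().split())
        return max(self.actions.values(), key=lambda a: len(a.tags & tokens))

class VirtualAgentClient:
    def execute(self, action):
        # The client executes the code snippet and yields the output page.
        namespace = {}
        exec(action.snippet, namespace)
        return namespace["output_page"]

server = VirtualAgentServer()
server.register(Action("search", "output_page = 'results page'"),
                correlated_with="filter", tags={"find", "search"})
server.register(Action("filter", "output_page = 'filtered page'"),
                tags={"narrow", "filter"})

desired = server.identify_action("please search for shoes")
page = VirtualAgentClient().execute(desired)
print(page)  # -> results page
```

In this sketch the correlation store lets the server fall back to a related action (e.g., offering `filter` after `search`), while the tags drive the intent match; a production system would replace `identify_action` with a trained classifier as the claim describes.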