US 11,946,753 B2
Generating digital event recommendation sequences utilizing a dynamic user preference interface
Fan Du, Santa Clara, CA (US); Sana Malik Lee, Cupertino, CA (US); Georgios Theocharous, San Jose, CA (US); and Eunyee Koh, San Jose, CA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Jun. 30, 2021, as Appl. No. 17/364,480.
Application 17/364,480 is a continuation of application No. 16/047,908, filed on Jul. 27, 2018, granted, now Pat. No. 11,085,777.
Prior Publication US 2021/0325193 A1, Oct. 21, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. H04W 4/024 (2018.01); G01C 21/34 (2006.01); G06Q 10/047 (2023.01); H04W 4/021 (2018.01)
CPC G01C 21/343 (2013.01) [G01C 21/3476 (2013.01); G01C 21/3484 (2013.01); G06Q 10/047 (2013.01); H04W 4/021 (2013.01); H04W 4/024 (2018.02)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
generating, for a user of a client device, a popular event sequence based on a frequency of use of event sequences by a plurality of prior users represented in a plurality of historical event sequences;
generating a recommended event sequence by using a recommendation machine learning model to select an event sequence for recommendation based on a reward function having a plurality of parameters that were learned during multiple training iterations to provide expected values of recommendations, the recommended event sequence corresponding to a general recommendation provided to users of client devices as a default;
receiving, from the client device, one or more user preferences with respect to one or more events by receiving at least one user interaction with one or more interactive elements corresponding to the one or more events via a graphical user interface of the client device;
generating a modified recommended event sequence using the recommendation machine learning model by modifying the reward function to include a weighting factor that modifies the plurality of parameters of the reward function via one or more preference weights that represent the one or more user preferences to modify how the recommendation machine learning model selects the event sequence for recommendation without retraining the recommendation machine learning model; and
providing, for simultaneous display within the graphical user interface on the client device, the recommended event sequence, the modified recommended event sequence, and the popular event sequence.
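The following is an illustrative sketch of the claimed flow, not the patent's actual implementation. It assumes a linear reward of the hypothetical form R(s) = Σ_{e∈s} θᵀφ(e) over per-event features φ(e) with learned parameters θ, and it models the claimed weighting factor as multiplicative preference weights w_e received from the interface, giving a modified reward R'(s) = Σ_{e∈s} w_e · θᵀφ(e). All names, data shapes, and the reward form below are assumptions introduced for illustration (Python):

# Illustrative sketch only: the names, toy data, and the linear reward form are
# assumptions for explanation, not the patent's actual implementation.
from collections import Counter
from typing import Dict, Optional, Sequence, Tuple

import numpy as np

EventSequence = Tuple[str, ...]


def popular_event_sequence(historical_sequences: Sequence[EventSequence]) -> EventSequence:
    """Pick the event sequence used most frequently by prior users."""
    counts = Counter(historical_sequences)
    sequence, _ = counts.most_common(1)[0]
    return sequence


def sequence_reward(sequence: EventSequence,
                    features: Dict[str, np.ndarray],
                    params: np.ndarray,
                    preference_weights: Optional[Dict[str, float]] = None) -> float:
    """Score a candidate sequence with a linear reward over per-event features.

    Preference weights (from the user-preference interface) scale each event's
    contribution without changing the learned parameters.
    """
    total = 0.0
    for event in sequence:
        weight = 1.0 if preference_weights is None else preference_weights.get(event, 1.0)
        total += weight * float(params @ features[event])
    return total


def recommend(candidates: Sequence[EventSequence],
              features: Dict[str, np.ndarray],
              params: np.ndarray,
              preference_weights: Optional[Dict[str, float]] = None) -> EventSequence:
    """Select the candidate sequence with the highest (optionally re-weighted) reward."""
    return max(candidates,
               key=lambda seq: sequence_reward(seq, features, params, preference_weights))


if __name__ == "__main__":
    # Toy data standing in for historical event sequences and learned parameters.
    history = [("museum", "cafe"), ("museum", "cafe"), ("park", "concert")]
    candidates = [("museum", "cafe"), ("park", "concert"), ("museum", "concert")]
    features = {"museum": np.array([1.0, 0.0]),
                "cafe": np.array([0.2, 0.1]),
                "park": np.array([0.5, 0.8]),
                "concert": np.array([0.3, 1.0])}
    learned_params = np.array([0.6, 0.9])          # fixed after training
    user_prefs = {"concert": 2.0, "cafe": 0.5}     # from interactive GUI elements

    popular = popular_event_sequence(history)
    default = recommend(candidates, features, learned_params)
    personalized = recommend(candidates, features, learned_params, user_prefs)

    # All three sequences would be provided for simultaneous display on the client.
    print(popular, default, personalized)

Because only the preference weights w_e change while the learned parameters θ stay fixed, the modified recommendation is produced without retraining the model, consistent with the final generating step of the claim.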