US 12,265,593 B2
Providing ambient information based on learned user context and interaction, and associated systems and devices
William Noah Schilit, Mountain View, CA (US); Paige Pritchard, Oakland, CA (US); and Alon Hetzroni, Santa Clara, CA (US)
Assigned to Google LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Jan. 8, 2021, as Appl. No. 17/144,196.
Prior Publication US 2022/0222482 A1, Jul. 14, 2022
Int. Cl. G06N 3/04 (2023.01); G06F 3/04842 (2022.01); G06F 18/2113 (2023.01); G06F 18/2415 (2023.01); G06F 18/40 (2023.01); G06N 5/04 (2023.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01)
CPC G06F 18/2113 (2023.01) [G06F 3/04842 (2013.01); G06F 18/24155 (2023.01); G06F 18/40 (2023.01); G06N 5/04 (2013.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for selecting and presenting glanceable information by a computerized information system, the method comprising:
recording information corresponding to one or more ambient screens displayed in response to a request of a user, wherein each ambient screen comprises a presentation of information;
building a model for a defined timeslot based at least in part on the recorded information corresponding to the one or more ambient screens displayed in response to the request of the user;
ranking the one or more ambient screens based at least in part on the model;
selecting a candidate ambient screen from the ranked one or more ambient screens; and
displaying the candidate ambient screen during an idle timeslot, wherein, during the idle timeslot, use of the display is not actively being directed by the user.
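
The claim above walks through a record / model / rank / select / display loop over ambient screens. The following is a minimal illustrative sketch of that flow, assuming a simple per-timeslot frequency model; the class name, the hour-of-day timeslot granularity, and the scoring are hypothetical conveniences and are not specified by the patent.

```python
from collections import defaultdict
from datetime import datetime


class AmbientScreenSelector:
    """Hypothetical sketch of the claimed flow: record user-requested ambient
    screens, build a per-timeslot model, rank screens, and pick a candidate
    to show during an idle timeslot. The frequency-count "model" is only an
    illustration; the patent does not specify this scoring."""

    def __init__(self, slots_per_day: int = 24):
        self.slots_per_day = slots_per_day
        # counts[timeslot][screen_id] -> how often the user requested that
        # screen during that timeslot (the "recorded information")
        self.counts = defaultdict(lambda: defaultdict(int))

    def _timeslot(self, when: datetime) -> int:
        # Hypothetical granularity: one slot per hour of the day.
        return when.hour * self.slots_per_day // 24

    def record_request(self, screen_id: str, when: datetime) -> None:
        """Record information corresponding to an ambient screen displayed
        in response to a request of the user."""
        self.counts[self._timeslot(when)][screen_id] += 1

    def rank(self, when: datetime) -> list[tuple[str, float]]:
        """Apply the model for the defined timeslot and rank the screens."""
        slot_counts = self.counts[self._timeslot(when)]
        total = sum(slot_counts.values()) or 1
        scores = {screen: c / total for screen, c in slot_counts.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    def select_candidate(self, when: datetime) -> str | None:
        """Select the top-ranked candidate ambient screen, if any exists."""
        ranked = self.rank(when)
        return ranked[0][0] if ranked else None


# Usage: replay a few user requests, then choose a screen for an idle timeslot.
selector = AmbientScreenSelector()
selector.record_request("weather", datetime(2021, 1, 8, 8, 5))
selector.record_request("commute", datetime(2021, 1, 8, 8, 20))
selector.record_request("weather", datetime(2021, 1, 8, 8, 40))
print(selector.select_candidate(datetime(2021, 1, 9, 8, 10)))  # -> "weather"
```

The top-ranked screen would then be displayed during an idle timeslot, i.e., when use of the display is not actively being directed by the user; idle detection itself is outside the scope of this sketch.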