US 11,755,283 B2
Human-machine interfaces for utterance-based playlist selection
Daniel Bromand, Stockholm (SE); Richard Mitic, Stockholm (SE); Horia-Dragos Jurcut, Hägersten (SE); Henriette Susanne Martine Cramer, San Francisco, CA (US); and Ruth Brillman, Somerville, MA (US)
Assigned to Spotify AB, Stockholm (SE)
Filed by Spotify AB, Stockholm (SE)
Filed on Apr. 14, 2022, as Appl. No. 17/720,486.
Application 17/720,486 is a continuation of application No. 16/504,892, filed on Jul. 8, 2019, granted, now Pat. No. 11,334,315.
Claims priority of application No. 18184291 (EP), filed on Jul. 18, 2018.
Prior Publication US 2022/0244909 A1, Aug. 4, 2022
Int. Cl. G06F 16/638 (2019.01); G06F 3/16 (2006.01); G10L 15/22 (2006.01); G10L 15/26 (2006.01)
CPC G06F 3/167 (2013.01) [G10L 15/22 (2013.01); G10L 15/26 (2013.01); G06F 16/639 (2019.01); G10L 2015/223 (2013.01)] 23 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, by a human-machine interface of a device, first input data including first utterance data;
determining the first utterance data includes a request to provide a list of playlists;
receiving the list of playlists, the list of playlists being associated with the first utterance data;
predicting an activity of a user of the device;
determining, for each playlist of the list of playlists, a similarity value describing how related the respective playlist is to the predicted activity;
reordering the list of playlists based on the similarity value, thereby generating a reordered list of playlists;
traversing the list of playlists according to the reordered list of playlists;
audibly outputting an introduction corresponding to descriptor data of the playlist; and
audibly outputting a predetermined portion of each playlist in the reordered list of playlists, the predetermined portion including a playlist trailer generated from a subset of one or more media content items of the playlist.
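
The claimed method can be read as a pipeline: predict the user's activity, score each playlist against that activity, reorder the list by the score, then traverse the reordered list while speaking an introduction from each playlist's descriptor data and playing a short trailer built from a subset of its items. The sketch below is only an illustration of that flow, not the patented implementation: the tag-set representation of an activity, the Jaccard-overlap similarity, and the helper names (`similarity`, `reorder_playlists`, `build_trailer`, `traverse_and_announce`) are assumptions introduced for the example and do not appear in the claim.

```python
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class MediaItem:
    title: str
    preview_uri: str  # hypothetical pointer to a short audio excerpt


@dataclass
class Playlist:
    name: str
    descriptor: str                              # descriptor data used for the spoken introduction
    tags: Set[str] = field(default_factory=set)  # assumed activity-related tags
    items: List[MediaItem] = field(default_factory=list)


def similarity(playlist: Playlist, activity_tags: Set[str]) -> float:
    """Toy similarity value: Jaccard overlap between playlist tags and predicted-activity tags."""
    if not playlist.tags or not activity_tags:
        return 0.0
    return len(playlist.tags & activity_tags) / len(playlist.tags | activity_tags)


def reorder_playlists(playlists: List[Playlist], activity_tags: Set[str]) -> List[Playlist]:
    """Reorder the list so playlists most related to the predicted activity come first."""
    return sorted(playlists, key=lambda p: similarity(p, activity_tags), reverse=True)


def build_trailer(playlist: Playlist, max_items: int = 3) -> List[str]:
    """Assemble a 'playlist trailer' from a subset of the playlist's media content items."""
    return [item.preview_uri for item in playlist.items[:max_items]]


def traverse_and_announce(playlists: List[Playlist], activity_tags: Set[str], speak) -> None:
    """Traverse the reordered list, speaking an introduction and playing each playlist's trailer."""
    for playlist in reorder_playlists(playlists, activity_tags):
        speak(f"Here is {playlist.name}: {playlist.descriptor}")  # introduction from descriptor data
        for uri in build_trailer(playlist):
            speak(f"[playing excerpt {uri}]")                     # stand-in for audible output


if __name__ == "__main__":
    playlists = [
        Playlist("Morning Run", "upbeat tracks to keep your pace",
                 {"running", "energetic"},
                 [MediaItem("Track A", "excerpt:a"), MediaItem("Track B", "excerpt:b")]),
        Playlist("Evening Chill", "slow songs for winding down",
                 {"relaxing", "evening"},
                 [MediaItem("Track C", "excerpt:c")]),
    ]
    # Predicted activity expressed as tags; in the claim this prediction comes from the device.
    traverse_and_announce(playlists, {"running"}, speak=print)
```

In this sketch the scoring, ordering, and trailer assembly are deliberately simple placeholders; the claim itself leaves the similarity measure and the trailer-generation rule unspecified beyond relating playlists to the predicted activity and drawing the trailer from a subset of the playlist's media content items.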