US 12,271,377 B2
Reducing latency in query-based search engines
Jacob Vincent Bouvrie, Arlington, MA (US); Michele Alberti, Bern (CH); and Giorgos Zacharia, Winchester, MA (US)
Assigned to KAYAK Software Corporation, Stamford, CT (US)
Filed by Kayak Software Corporation, Stamford, CT (US)
Filed on May 24, 2022, as Appl. No. 17/664,762.
Prior Publication US 2023/0409572 A1, Dec. 21, 2023
Int. Cl. G06F 16/2453 (2019.01); G06F 16/248 (2019.01); G06Q 10/02 (2012.01)
CPC G06F 16/24539 (2019.01) [G06Q 10/025 (2013.01); G06F 16/248 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
receiving, from a client device, a query specifying one or more criteria for a travel reservation;
transmitting, over a network, one or more requests for live travel data satisfying the one or more criteria;
transmitting a request for first cached travel data in parallel with the one or more requests for live travel data;
determining, by at least one processor, one or more query results that satisfy the one or more criteria based in part on querying a schedule of travel reservations based on the one or more criteria;
receiving the first cached travel data;
generating, based at least in part on the first cached travel data, a list of query results, the list of query results associated with a data record format configured to be input into a prediction engine;
retrieving, by the at least one processor, second cached travel data for at least one of the one or more query results;
receiving, within a predetermined period of time from transmission of the one or more requests for live travel data, a first portion of live travel data satisfying the one or more criteria;
generating, using the prediction engine and while responses to others of the one or more requests for live travel data are still being received, live travel data predictions based at least in part on at least one of the first cached travel data, the second cached travel data and the first portion of live travel data, wherein the prediction engine includes one or more machine learning models configured to receive the first cached travel data, the second cached travel data and the first portion of live travel data as input and generate the live travel data predictions as output; and
transmitting the live travel data predictions to the client device for presentation to a user.
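
The claimed flow (issuing the live-data requests and the cached-data request concurrently, then generating predictions from cached data plus whatever live data arrives within a predetermined period while the remaining live responses are still in flight) can be illustrated with a minimal sketch. The sketch below is not taken from the patent; every name in it (Query, TravelRecord, PredictionEngine, fetch_live_source, fetch_cached_data, handle_query) and the trivial heuristic standing in for the one or more machine learning models are assumptions made only for illustration.

```python
# Illustrative sketch only: all names and the prediction heuristic are
# hypothetical and are not taken from the patent claims.
import asyncio
import random
from dataclasses import dataclass


@dataclass
class Query:
    """Client query specifying criteria for a travel reservation."""
    origin: str
    destination: str
    date: str


@dataclass
class TravelRecord:
    """Record format the (hypothetical) prediction engine accepts as input."""
    provider: str
    price: float
    cached: bool


class PredictionEngine:
    """Stand-in for one or more trained models; here a trivial heuristic."""

    def predict(self, records: list[TravelRecord]) -> list[dict]:
        # A real engine would run ML models over cached and partial live data;
        # this placeholder just projects a price estimate per input record.
        return [
            {"provider": r.provider, "predicted_price": round(r.price * 0.97, 2)}
            for r in records
        ]


async def fetch_live_source(provider: str, q: Query) -> TravelRecord:
    # Simulated live-data provider with variable response latency.
    await asyncio.sleep(random.uniform(0.1, 2.0))
    return TravelRecord(provider=provider, price=random.uniform(200, 400), cached=False)


async def fetch_cached_data(q: Query) -> list[TravelRecord]:
    # Simulated cache lookup, issued in parallel with the live requests.
    await asyncio.sleep(0.05)
    return [TravelRecord(provider="cache", price=310.0, cached=True)]


async def handle_query(q: Query, providers: list[str], deadline: float = 0.5) -> list[dict]:
    # Fire the live-data requests and the cached-data request concurrently.
    live_tasks = [asyncio.create_task(fetch_live_source(p, q)) for p in providers]
    cached = await fetch_cached_data(q)

    # Keep only the live data that arrives within the deadline; the other
    # requests continue running while predictions are generated.
    done, pending = await asyncio.wait(live_tasks, timeout=deadline)
    first_portion = [t.result() for t in done]

    # Generate predictions from cached data plus the first portion of live data.
    engine = PredictionEngine()
    predictions = engine.predict(cached + first_portion)

    # A real system would later reconcile predictions with late live responses;
    # here the outstanding simulated requests are simply cancelled.
    for t in pending:
        t.cancel()
    return predictions


if __name__ == "__main__":
    query = Query(origin="BOS", destination="SFO", date="2024-06-01")
    results = asyncio.run(handle_query(query, ["airlineA", "airlineB", "airlineC"]))
    print(results)
```

In this sketch the deadline argument plays the role of the claim's "predetermined period of time": once it elapses, predictions are produced from the cached data and whatever live data has arrived, rather than waiting for every live response.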