US 12,141,667 B2
Systems and methods implementing an intelligent optimization platform
Patrick Hayes, San Francisco, CA (US); Michael McCourt, San Francisco, CA (US); Alexandra Johnson, San Francisco, CA (US); George Ke, San Francisco, CA (US); and Scott Clark, San Francisco, CA (US)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Dec. 23, 2021, as Appl. No. 17/561,480.
Application 17/561,480 is a continuation of application No. 16/796,489, filed on Feb. 20, 2020, granted, now 11,301,781.
Application 16/796,489 is a continuation of application No. 16/243,361, filed on Jan. 9, 2019, granted, now 10,607,159, issued on Mar. 31, 2020.
Application 16/243,361 is a continuation of application No. 15/977,168, filed on May 11, 2018, granted, now 10,217,061, issued on Feb. 26, 2019.
Claims priority of provisional application 62/608,076, filed on Dec. 20, 2017.
Claims priority of provisional application 62/608,090, filed on Dec. 20, 2017.
Claims priority of provisional application 62/593,785, filed on Dec. 1, 2017.
Claims priority of provisional application 62/578,788, filed on Oct. 30, 2017.
Claims priority of provisional application 62/540,367, filed on Aug. 2, 2017.
Claims priority of provisional application 62/507,503, filed on May 17, 2017.
Prior Publication US 2022/0121993 A1, Apr. 21, 2022
Int. Cl. G06N 20/00 (2019.01); G06F 9/54 (2006.01); G06N 5/01 (2023.01); G06N 7/01 (2023.01); G06N 20/20 (2019.01); G06N 99/00 (2019.01)
CPC G06N 20/00 (2019.01) [G06F 9/54 (2013.01); G06N 5/01 (2023.01); G06N 7/01 (2023.01); G06N 20/20 (2019.01); G06N 99/00 (2013.01)] 22 Claims
OG exemplary drawing
 
1. An apparatus comprising:
at least one memory;
machine-readable instructions in the apparatus; and
at least one processor circuit to execute the machine-readable instructions to:
cause a first machine instance and a second machine instance to operate in parallel to service a work request;
cause the first machine instance to run a first tuning operation to generate a first hyperparameter configuration;
cause the second machine instance to run a second tuning operation to generate a second hyperparameter configuration;
before completion of the work request:
evaluate the first hyperparameter configuration and the second hyperparameter configuration for a first model using a surrogate model of the first model;
generate a first probability that the first hyperparameter configuration improves a performance of a computer that is to execute the first model, and generate a second probability that the second hyperparameter configuration improves the performance of the computer that is to execute the first model; and
generate a ranking of the first hyperparameter configuration and the second hyperparameter configuration based on the first probability and the second probability; and
cause transmission of a partial response to the computer, the partial response to include:
the ranking of the first hyperparameter configuration and the second hyperparameter configuration;
the first probability;
the second probability; and
an indication of time remaining to generate a full response based on the completion of the work request.
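
For readers who want a concrete picture of the flow recited in claim 1, the following is a minimal illustrative sketch, not the patented implementation: two machine instances (modeled here as threads) run tuning operations in parallel, a surrogate of the first model scores both hyperparameter configurations, probabilities of improvement are computed and ranked, and a partial response with a time-remaining estimate is produced before the work request completes. All names (run_tuning_operation, surrogate_predict, service_work_request), the toy surrogate, and the Gaussian probability-of-improvement formula are assumptions introduced for illustration only.

# Illustrative sketch of the claimed flow; hypothetical names and models,
# not the patentee's implementation.
import concurrent.futures
import math
import random
import time


def run_tuning_operation(seed: int) -> dict:
    # Stand-in for a tuning operation on one machine instance: propose a
    # hyperparameter configuration for the first model.
    rng = random.Random(seed)
    return {"learning_rate": 10 ** rng.uniform(-4, -1),
            "num_layers": rng.randint(2, 8)}


def surrogate_predict(config: dict) -> tuple:
    # Hypothetical surrogate of the first model: predicted performance
    # mean and standard deviation for a configuration.
    mean = -abs(math.log10(config["learning_rate"]) + 2.5) + 0.1 * config["num_layers"]
    return mean, 0.5


def probability_of_improvement(config: dict, baseline: float) -> float:
    # Probability that the configuration improves on the baseline
    # performance, under a Gaussian surrogate prediction.
    mean, std = surrogate_predict(config)
    z = (mean - baseline) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def service_work_request(baseline_performance: float, total_budget_s: float = 60.0) -> dict:
    start = time.time()
    # First and second machine instances operate in parallel to service
    # the work request, each generating a hyperparameter configuration.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        first_cfg, second_cfg = pool.map(run_tuning_operation, [0, 1])

    # Before completion of the work request: evaluate both configurations
    # with the surrogate model and rank them by probability of improvement.
    p_first = probability_of_improvement(first_cfg, baseline_performance)
    p_second = probability_of_improvement(second_cfg, baseline_performance)
    ranking = sorted(
        [("first", first_cfg, p_first), ("second", second_cfg, p_second)],
        key=lambda item: item[2],
        reverse=True,
    )

    # Partial response: the ranking, both probabilities, and an estimate
    # of the time remaining until the full response is available.
    return {
        "ranking": [name for name, _, _ in ranking],
        "probabilities": {"first": p_first, "second": p_second},
        "time_remaining_s": max(0.0, total_budget_s - (time.time() - start)),
    }


if __name__ == "__main__":
    print(service_work_request(baseline_performance=0.0))

In this sketch the "partial response" is simply returned to the caller; in the claimed apparatus it would be transmitted to the computer that is to execute the first model, with the full response following once the work request completes.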