US 11,733,758 B2
Processor power management
Altug Koker, El Dorado Hills, CA (US); Abhishek R. Appu, El Dorado Hills, CA (US); Kiran C. Veernapu, Bangalore (IN); Joydeep Ray, Folsom, CA (US); Balaji Vembu, Folsom, CA (US); Prasoonkumar Surti, Folsom, CA (US); Kamal Sinha, Rancho Cordova, CA (US); Eric J. Hoekstra, Latrobe, CA (US); Wenyin Fu, Folsom, CA (US); Nikos Kaburlasos, Lincoln, CA (US); Bhushan M. Borole, Rancho Cordova, CA (US); Travis T. Schluessler, Hillsboro, OR (US); Ankur N. Shah, Folsom, CA (US); and Jonathan Kennedy, Bristol (GB)
Assigned to INTEL CORPORATION, Santa Clara, CA (US)
Filed by INTEL CORPORATION, Santa Clara, CA (US)
Filed on Aug. 25, 2021, as Appl. No. 17/411,878.
Application 17/411,878 is a continuation of application No. 16/805,480, filed on Feb. 28, 2020, granted, now Pat. No. 11,106,264.
Application 16/805,480 is a continuation of application No. 15/477,029, filed on Apr. 1, 2017, granted, now Pat. No. 10,579,121, issued on Mar. 3, 2020.
Prior Publication US 2022/0113783 A1, Apr. 14, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G09G 3/00 (2006.01); G06F 1/3209 (2019.01); H04W 52/02 (2009.01); G06F 1/324 (2019.01); G06F 1/3203 (2019.01); G06F 1/3212 (2019.01); G06F 1/3218 (2019.01); G06F 1/3231 (2019.01); G06F 3/01 (2006.01); G06F 11/07 (2006.01); G06F 11/30 (2006.01); H04M 1/72448 (2021.01)
CPC G06F 1/3209 (2013.01) [G06F 1/3203 (2013.01); G06F 1/324 (2013.01); G06F 1/3212 (2013.01); G06F 1/3218 (2013.01); G06F 1/3231 (2013.01); G06F 3/01 (2013.01); G06F 11/0781 (2013.01); G06F 11/3062 (2013.01); H04W 52/0258 (2013.01); H04M 1/72448 (2021.01); Y02D 10/00 (2018.01); Y02D 30/70 (2020.08)] 17 Claims
OG exemplary drawing
 
1. An apparatus comprising:
one or more processors including a graphics processing unit, the graphics processing unit including a graphics processing pipeline; and
a memory to store data, including graphics data processed by the graphics processing pipeline;
wherein the graphics processing unit is to:
conduct a training session with an application, the training session including a plurality of executions of the application utilizing the graphics processing pipeline, wherein the plurality of executions of the application includes executing the application under a plurality of different operating parameters, a plurality of different hardware configurations, or both;
collect performance data for the application during the plurality of executions of the application;
generate a performance profile for the application as processed in the graphics processing pipeline based on the collected performance data;
train a neural network to configure the graphics processing pipeline based on performance profile data from the performance profile for the application; and
utilize the trained neural network to configure the graphics processing pipeline to execute an instance of the application.
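The claim recites a concrete flow: execute the application repeatedly under different operating parameters or hardware configurations, collect performance data, build a performance profile, train a neural network on that profile, and use the trained network to configure the graphics processing pipeline for a later instance of the application. The Python/PyTorch sketch below is one illustrative reading of that flow, not the patented implementation: the claim does not specify the network's inputs, outputs, or training objective, so this sketch assumes a score-prediction formulation in which the network learns to predict application performance from (profile features, candidate configuration), and "configuring the pipeline" means choosing the candidate with the best predicted score. All identifiers (ConfigScoreNet, run_application, configure_pipeline, the feature and parameter counts) are hypothetical.

# Hedged sketch of the claim-1 flow; identifiers and the training objective are assumptions.
import torch
from torch import nn

N_PROFILE_FEATURES = 16   # e.g., utilization/bandwidth/frame-time counters (assumed)
N_CONFIG_PARAMS = 4       # e.g., clock level, active execution units, ... (assumed)

class ConfigScoreNet(nn.Module):
    """Predicts a performance score for a (profile, pipeline-configuration) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PROFILE_FEATURES + N_CONFIG_PARAMS, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, profile, config):
        return self.net(torch.cat([profile, config], dim=-1)).squeeze(-1)

def training_session(run_application, candidate_configs):
    """'Plurality of executions': run the application under several operating
    parameters / hardware configurations, collecting performance data each time.
    run_application is a hypothetical harness returning (counters, score)."""
    profiles, configs, scores = [], [], []
    for cfg in candidate_configs:
        counters, score = run_application(cfg)
        profiles.append(torch.tensor(counters, dtype=torch.float32))
        configs.append(torch.tensor(cfg, dtype=torch.float32))
        scores.append(score)
    return (torch.stack(profiles), torch.stack(configs),
            torch.tensor(scores, dtype=torch.float32))

def train(model, profiles, configs, scores, epochs=200):
    """Train the network on the performance profile built from the session."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(profiles, configs), scores).backward()
        opt.step()
    return model

def configure_pipeline(model, live_profile, candidate_configs):
    """Use the trained network to configure the pipeline for a new instance of
    the application: pick the candidate with the best predicted score."""
    cands = torch.stack([torch.tensor(c, dtype=torch.float32) for c in candidate_configs])
    prof = live_profile.unsqueeze(0).expand(len(cands), -1)
    with torch.no_grad():
        best_idx = int(torch.argmax(model(prof, cands)))
    return candidate_configs[best_idx]

An equally valid reading of the claim would have the network emit configuration parameters directly from the profile features rather than score candidates; either formulation fits the recited "train a neural network to configure the graphics processing pipeline based on performance profile data."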