US 11,989,595 B2
Disaggregated computing for distributed confidential computing environment
Reshma Lal, Portland, OR (US); Pradeep Pappachan, Tualatin, OR (US); Luis Kida, Beaverton, OR (US); Soham Jayesh Desai, Hillsboro, OR (US); Sujoy Sen, Beaverton, OR (US); Selvakumar Panneer, Portland, OR (US); and Robert Sharp, Austin, TX (US)
Assigned to INTEL CORPORATION, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Nov. 15, 2021, as Appl. No. 17/526,097.
Application 17/526,097 is a continuation of application No. 17/133,066, filed on Dec. 23, 2020.
Claims priority of provisional application 63/083,565, filed on Sep. 25, 2020.
Prior Publication US 2022/0100580 A1, Mar. 31, 2022
Int. Cl. G06F 9/50 (2006.01); G06F 9/38 (2018.01); G06T 1/20 (2006.01); G06T 1/60 (2006.01)
CPC G06F 9/5083 (2013.01) [G06F 9/3814 (2013.01); G06F 9/5027 (2013.01); G06T 1/20 (2013.01); G06T 1/60 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A remote server platform comprising:
one or more processors communicably coupled to host memory and a remote graphics processing unit (GPU) hosted by the remote server platform, the one or more processors to:
provide a remote GPU middleware layer to act as a proxy for an application stack on a client platform that is separate from the remote server platform, wherein the remote GPU middleware layer is to expose an abstraction of the remote GPU to userspace components of a remote GPU stack, the userspace components running on the client platform, and wherein the client platform is to execute a corresponding remote GPU middleware layer to interface with the remote GPU middleware layer of the remote server platform;
communicate, by the remote GPU middleware layer, with a kernel mode driver of the one or more processors to cause the host memory to be allocated for command buffers and data structures received from the client platform for consumption by a command streamer of the remote GPU; and
invoke, by the remote GPU middleware layer, the kernel mode driver to submit a workload generated by the application stack, the workload submitted for processing by the remote GPU using the command buffers and the data structures allocated in the host memory as directed by the command streamer.
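The three claim elements above describe a server-side proxy flow: the middleware layer stands in for the client's application stack, asks the kernel mode driver to allocate host memory for client-supplied command buffers, and then invokes the driver to submit the workload. The following is a minimal illustrative sketch of that flow; all class and method names are hypothetical and do not correspond to any actual Intel API or driver interface.

```python
# Hypothetical sketch of the claimed flow. Names are illustrative only;
# a real kernel mode driver and command streamer are hardware/OS components.

class KernelModeDriver:
    """Stands in for the host's kernel mode driver (hypothetical)."""
    def __init__(self):
        self.host_memory = {}  # host memory backing command buffers

    def allocate(self, buffer_id, payload):
        # Second claim element: allocate host memory for command buffers
        # and data structures received from the client platform.
        self.host_memory[buffer_id] = payload
        return buffer_id

    def submit(self, buffer_id):
        # Third claim element: submit the workload; a command streamer
        # would consume the command buffer from host memory.
        return f"executed:{self.host_memory[buffer_id]}"


class RemoteGpuMiddleware:
    """Server-side proxy for the client's application stack
    (first claim element, hypothetical)."""
    def __init__(self, driver):
        self.driver = driver

    def on_client_command_buffer(self, buffer_id, payload):
        # Forward client-supplied command buffers to the kernel mode driver.
        return self.driver.allocate(buffer_id, payload)

    def on_client_submit(self, buffer_id):
        # Invoke the driver to submit the client's workload for processing.
        return self.driver.submit(buffer_id)


# The corresponding client-side middleware would forward the application
# stack's work to these entry points over the network.
driver = KernelModeDriver()
middleware = RemoteGpuMiddleware(driver)
middleware.on_client_command_buffer("cb0", "draw-calls")
result = middleware.on_client_submit("cb0")
print(result)  # executed:draw-calls
```

The sketch only models control flow; it omits the confidential-computing protections (attestation, encrypted transport) that a disaggregated deployment of this kind would require.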