US 11,972,298 B2
Technologies for data migration between edge accelerators hosted on different edge locations
Evan Custodio, North Attleboro, MA (US); Francesc Guim Bernat, Barcelona (ES); Suraj Prabhakaran, Aachen (DE); Trevor Cooper, Portland, OR (US); Ned M. Smith, Beaverton, OR (US); Kshitij Doshi, Tempe, AZ (US); and Petar Torre, Feldkirchen (DE)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Feb. 7, 2022, as Appl. No. 17/666,366.
Application 17/666,366 is a continuation of application No. 16/369,036, filed on Mar. 29, 2019, granted, now Pat. No. 11,243,817.
Prior Publication US 2022/0237033 A1, Jul. 28, 2022
Int. Cl. G06F 3/06 (2006.01); G06F 8/30 (2018.01); G06F 8/41 (2018.01); G06F 9/50 (2006.01); G06F 11/00 (2006.01); G06F 11/20 (2006.01); G06F 21/62 (2013.01)
CPC G06F 9/505 (2013.01) [G06F 9/5044 (2013.01); G06F 9/5083 (2013.01); G06F 2209/509 (2013.01)] 24 Claims
OG exemplary drawing
 
1. An accelerator device comprising:
first accelerator circuitry to execute a workload offloaded from a client compute device to the accelerator device, the client compute device to communicate with a first compute device at a first network location, the first compute device including the accelerator device; and
acceleration migration logic circuitry to:
convert the workload from a first format to a second format, the first format specific to the first accelerator circuitry; and
cause transmission of the workload to a second compute device at a second network location, the workload in the second format, the transmission to cause the second compute device to convert the workload from the second format to a third format that is specific to second accelerator circuitry of the second compute device.
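The following is a minimal illustrative sketch of the migration flow recited in claim 1, not the patented implementation; all class, function, and format names (Workload, AcceleratorMigrationLogic, RemoteComputeDevice, "portable-ir", etc.) are hypothetical. It models a workload held in a format specific to the source accelerator being converted to an accelerator-agnostic second format, transmitted to a second compute device at another network location, and converted there into the third format specific to that device's accelerator circuitry.

from dataclasses import dataclass


@dataclass
class Workload:
    payload: bytes
    fmt: str  # e.g. "fpga-bitstream-A", "portable-ir", "gpu-binary-B" (hypothetical labels)


class AcceleratorMigrationLogic:
    """Hypothetical migration logic on the first (source) compute device."""

    def __init__(self, native_fmt: str):
        self.native_fmt = native_fmt  # first format, specific to the first accelerator circuitry

    def to_intermediate(self, workload: Workload) -> Workload:
        # Convert from the accelerator-specific first format to a portable second format.
        assert workload.fmt == self.native_fmt
        return Workload(payload=workload.payload, fmt="portable-ir")

    def migrate(self, workload: Workload, target: "RemoteComputeDevice") -> None:
        # Transmit the second-format workload; the target converts it into the
        # third format specific to its own accelerator circuitry.
        target.receive(self.to_intermediate(workload))


class RemoteComputeDevice:
    """Hypothetical second compute device at the second network location."""

    def __init__(self, native_fmt: str):
        self.native_fmt = native_fmt  # third format, specific to the second accelerator circuitry

    def receive(self, workload: Workload) -> None:
        # Convert from the portable second format to the local third format,
        # then resume execution on the local accelerator.
        self.run(Workload(payload=workload.payload, fmt=self.native_fmt))

    def run(self, workload: Workload) -> None:
        print(f"executing workload in format {workload.fmt}")


# Usage: migrate an FPGA-format workload to a GPU-backed edge location.
source = AcceleratorMigrationLogic(native_fmt="fpga-bitstream-A")
target = RemoteComputeDevice(native_fmt="gpu-binary-B")
source.migrate(Workload(payload=b"\x00\x01", fmt="fpga-bitstream-A"), target)

In this sketch the intermediate second format decouples the two edge locations, so neither compute device needs to understand the other's accelerator-specific format.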