US 11,947,928 B2
Multi-die dot-product engine to provision large scale machine learning inference applications
Craig Warner, Plano, TX (US); Eun Sub Lee, Plano, TX (US); Sai Rahul Chalamalasetti, Milpitas, CA (US); and Martin Foltin, Ft. Collins, CO (US)
Assigned to Hewlett Packard Enterprise Development LP, Spring, TX (US)
Filed by HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, Houston, TX (US)
Filed on Sep. 10, 2020, as Appl. No. 17/017,557.
Prior Publication US 2022/0075597 A1, Mar. 10, 2022
Int. Cl. G06F 7/544 (2006.01); G06F 9/38 (2018.01); G06F 9/52 (2006.01); G06F 40/20 (2020.01); G06N 3/063 (2023.01)
CPC G06F 7/5443 (2013.01) [G06F 9/3867 (2013.01); G06F 9/522 (2013.01); G06F 40/20 (2020.01); G06N 3/063 (2013.01)] 10 Claims
OG exemplary drawing
 
1. A multi-chip interface system, comprising:
a plurality of dot-product engine (DPE) chips, wherein each of the plurality of DPE chips performs inference computations for deep learning operations; and
a hardware interface between a memory of a host computer and the plurality of DPE chips, wherein the hardware interface communicatively connects the plurality of DPE chips to the memory of the host computer during an inference operation such that the deep learning operations are spanned across the plurality of DPE chips.
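 
The claim describes spanning a single inference operation across several DPE chips that all access host memory through a shared hardware interface. The Python sketch below is a conceptual, software-level illustration of that arrangement, assuming a simple row-wise split of one dense layer's weights; the class names (DPEChip, MultiChipInterface), the partitioning scheme, and the host-memory model are hypothetical and are not details taken from the patent.

# Conceptual sketch only, not the patented implementation: an inference
# step is spanned across a plurality of DPE chips, each computing dot
# products on its slice of the weights, with activations read from and
# results gathered back into host memory.
import numpy as np


class DPEChip:
    """Models one DPE chip that computes dot products on its weight slice."""

    def __init__(self, weight_slice):
        self.weight_slice = weight_slice  # rows of the layer's weight matrix

    def infer(self, activations):
        # Each chip performs its share of the matrix-vector product.
        return self.weight_slice @ activations


class MultiChipInterface:
    """Models a hardware interface that spans an inference operation
    across several DPE chips and gathers the results for host memory."""

    def __init__(self, weights, num_chips):
        # Split the layer weights row-wise, one slice per chip.
        self.chips = [DPEChip(w) for w in np.array_split(weights, num_chips)]

    def run_inference(self, host_activations):
        # Broadcast activations held in host memory to every chip, then
        # concatenate the partial outputs back into a single result.
        partials = [chip.infer(host_activations) for chip in self.chips]
        return np.concatenate(partials)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((1024, 256))   # one dense layer
    activations = rng.standard_normal(256)       # input vector in host memory

    interface = MultiChipInterface(weights, num_chips=4)
    spanned = interface.run_inference(activations)

    # The spanned result matches a single-device computation.
    assert np.allclose(spanned, weights @ activations)

In this sketch the row-wise split keeps each chip's work independent, so the only coordination needed is the broadcast of activations and the final gather, which is the role the claim assigns to the hardware interface between host memory and the DPE chips.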