CPC G06N 20/00 (2019.01) [G06F 12/0607 (2013.01); G06F 9/3895 (2013.01); G06F 9/3897 (2013.01); G06F 12/0851 (2013.01); G06F 15/17 (2013.01); G06F 15/781 (2013.01); G06F 15/7807 (2013.01); G06F 15/7857 (2013.01); G06F 15/80 (2013.01); G06F 2212/1041 (2013.01)]. 16 Claims.
1. A system to support an operation, comprising:
an inference engine comprising one or more processing tiles, wherein each processing tile comprises at least one of
an on-chip memory (OCM) configured to load and maintain data for local access by components in the processing tile; and
one or more processing units configured to perform one or more computation tasks of the operation on data in the OCM by executing a set of task instructions; and
a data streaming engine configured to stream data between a memory and the OCMs of the one or more processing tiles of the inference engine, wherein the data streaming engine is configured to interleave an address associated with a memory access transaction for accessing the memory, wherein a subset of bits of the interleaved address is used to determine an appropriate communication channel through which to access the memory; and
a network interface controller configured to support address interleaving for a burst length greater than an interleaving granularity of the address.