US 11,734,608 B2
Address interleaving for machine learning
Avinash Sodani, San Jose, CA (US); and Ramacharan Sundararaman, San Jose, CA (US)
Assigned to Marvell Asia Pte Ltd, Singapore (SG)
Filed by Marvell Asia Pte, Ltd., Singapore (SG)
Filed on Dec. 23, 2020, as Appl. No. 17/247,810.
Application 17/247,810 is a continuation of application No. 16/420,078, filed on May 22, 2019, granted, now 10,929,778.
Application 16/420,078 is a continuation in part of application No. 16/226,539, filed on Dec. 19, 2018, granted, now 10,824,433, issued on Nov. 3, 2020.
Claims priority of provisional application 62/675,076, filed on May 22, 2018.
Prior Publication US 2021/0117866 A1, Apr. 22, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 9/38 (2018.01); G06F 12/06 (2006.01); G06N 20/00 (2019.01); G06F 15/78 (2006.01); G06F 15/17 (2006.01); G06F 12/0846 (2016.01); G06F 15/80 (2006.01)
CPC G06N 20/00 (2019.01) [G06F 12/0607 (2013.01); G06F 9/3895 (2013.01); G06F 9/3897 (2013.01); G06F 12/0851 (2013.01); G06F 15/17 (2013.01); G06F 15/781 (2013.01); G06F 15/7807 (2013.01); G06F 15/7857 (2013.01); G06F 15/80 (2013.01); G06F 2212/1041 (2013.01)] 16 Claims
OG exemplary drawing
 
1. A system to support an operation, comprising:
an inference engine comprising one or more processing tiles, wherein each processing tile comprises at least one or more of
an on-chip memory (OCM) configured to load and maintain data for local access by components in the processing tile; and
one or more processing units configured to perform one or more computation tasks of the operation on data in the OCM by executing a set of task instructions; and
a data streaming engine configured to stream data between a memory and the OCMs of the one or more processing tiles of the inference engine, wherein the data streaming engine is configured to interleave an address associated with a memory access transaction for accessing the memory, wherein a subset of bits of the interleaved address is used to determine an appropriate communication channel through which to access the memory; and
a network interface controller configured to support address interleaving for a burst length greater than a burst length of the address.
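The claim's channel-selection step can be illustrated with a minimal sketch: a subset of bits of the (interleaved) address indexes one of several communication channels, so that consecutive memory bursts spread across channels. The bit positions, field width, and function name below are hypothetical choices for illustration only and are not specified by the patent.

```python
def select_channel(address: int, bit_lo: int = 6, num_channel_bits: int = 2) -> int:
    """Select a communication channel from a subset of address bits.

    Hypothetical parameters: bit_lo is the lowest interleave bit position
    (here bit 6, i.e. 64-byte granularity) and num_channel_bits chooses
    among 2**num_channel_bits channels. Real designs would derive these
    from the memory system configuration.
    """
    mask = (1 << num_channel_bits) - 1
    return (address >> bit_lo) & mask


# With these illustrative parameters, successive 64-byte-aligned addresses
# rotate through the four channels:
channels = [select_channel(a) for a in (0x00, 0x40, 0x80, 0xC0, 0x100)]
# channels == [0, 1, 2, 3, 0]
```

Spreading adjacent bursts across channels in this way lets the data streaming engine keep multiple memory channels busy concurrently instead of serializing traffic through one.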