US 12,236,343 B2
Systems and methods for reducing memory requirements in neural networks
Mark Alan Lovell, Lucas, TX (US); and Robert Michael Muchsel, Addison, TX (US)
Assigned to Maxim Integrated Products, Inc., San Jose, CA (US)
Filed by Maxim Integrated Products, Inc., San Jose, CA (US)
Filed on Dec. 21, 2020, as Appl. No. 17/128,219.
Claims priority of provisional application 62/958,666, filed on Jan. 8, 2020.
Prior Publication US 2021/0216868 A1, Jul. 15, 2021
Int. Cl. G06N 20/00 (2019.01); G06N 3/04 (2023.01); G06N 3/08 (2023.01)
CPC G06N 3/08 (2013.01) [G06N 3/04 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for processing large amounts of neural network data, the method comprising:
determining one or more active neural network layers in a neural network using counters, wherein the counters control a set of input data by counting input shifts of data into the one or more active neural network layers relative to at least one shift value to identify the one or more active neural network layers;
using the one or more active neural network layers to process a subset of the set of input data of a first neural network layer, the subset having a data size that is substantially less than the size of the set of input data;
outputting a first set of output data from the first neural network layer;
using the first set of output data in a second neural network layer; and
outputting a second set of output data from the second neural network layer prior to processing all of the set of input data.
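 
The claim describes a streaming, layer-pipelined style of inference: per-layer counters track how many input values have shifted into a layer relative to a shift threshold, a layer is treated as active once that threshold is reached, and a downstream layer can emit output long before the full input set has been consumed. The sketch below is a minimal conceptual illustration of that general idea, not the patented implementation; the class and function names (StreamingConv1D, shift_in, streaming_pipeline), the use of 1-D convolutions, and the choice of the kernel width as the shift value are all illustrative assumptions.

```python
"""Conceptual sketch of counter-driven streaming inference (illustrative only).

Each layer keeps a kernel-sized rolling buffer instead of the whole input,
a counter of input shifts decides when the layer is active, and the second
layer produces output before the full input sequence has been processed.
"""
from collections import deque

import numpy as np


class StreamingConv1D:
    """1-D convolution that consumes one sample per call and holds only a
    kernel-sized rolling window of inputs (hypothetical helper class)."""

    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)
        self.kernel_size = len(self.weights)           # shift threshold
        self.buffer = deque(maxlen=self.kernel_size)   # small rolling window
        self.shift_count = 0                           # counter of input shifts

    @property
    def active(self):
        # The layer is considered active once enough inputs have shifted in.
        return self.shift_count >= self.kernel_size

    def shift_in(self, value):
        """Shift one input value into the layer; return an output sample
        as soon as the layer is active, otherwise None."""
        self.buffer.append(value)
        self.shift_count += 1
        if self.active:
            return float(np.dot(self.weights, np.asarray(self.buffer)))
        return None


def streaming_pipeline(samples, layer1, layer2):
    """Feed input samples one at a time through two chained streaming layers,
    yielding (samples_consumed, layer2_output) pairs as they become available."""
    for i, x in enumerate(samples, start=1):
        y1 = layer1.shift_in(x)
        if y1 is None:
            continue                     # layer 1 not yet active
        y2 = layer2.shift_in(y1)
        if y2 is not None:
            yield i, y2                  # second-layer output before input is exhausted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(1000)   # "large" input; never fully buffered

    conv1 = StreamingConv1D([0.25, 0.5, 0.25])
    conv2 = StreamingConv1D([1.0, -1.0, 1.0])

    consumed, first_output = next(streaming_pipeline(signal, conv1, conv2))
    print(f"First layer-2 output after only {consumed} of {len(signal)} "
          f"input samples: {first_output:.4f}")
```

In this sketch the first second-layer output appears after only five input samples of the thousand-sample signal, and each layer ever holds just a kernel-sized subset of its input, which is the memory-reduction effect the claim is directed to.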