US 11,941,511 B1
Storing of intermediate computed values for subsequent use in a machine trained network
Ryan J. Cassidy, San Diego, CA (US); and Steven L. Teig, Menlo Park, CA (US)
Assigned to PERCEIVE CORPORATION, San Jose, CA (US)
Filed by Perceive Corporation, San Jose, CA (US)
Filed on Nov. 9, 2020, as Appl. No. 17/093,296.
Claims priority of provisional application 62/933,960, filed on Nov. 11, 2019.
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 3/04 (2023.01); G06N 3/048 (2023.01); G06N 3/049 (2023.01); G06N 3/063 (2023.01)
CPC G06N 3/049 (2013.01) [G06N 3/048 (2023.01); G06N 3/063 (2013.01)] 14 Claims
OG exemplary drawing
 
1. A non-transitory machine readable medium storing a program for implementing a temporal convolution network (TCN) comprising a plurality of layers of machine-trained processing nodes, the program comprising sets of instructions for:
configuring a first set of processing nodes (i) to compute a first plurality of activation values while the TCN propagates a first set of input values, provided at a first instance in time, through the layers of the TCN to produce a first output of the TCN and (ii) to store the first plurality of activation values in a set of memories;
configuring a second set of processing nodes (i) to retrieve the first plurality of activation values from the set of memories and (ii) to use the retrieved first plurality of activation values to compute a second plurality of activation values while the TCN propagates a second set of input values, provided to the TCN at a second instance in time, through the layers of the TCN in order to compute a second output of the TCN;
configuring a third set of processing nodes (i) to retrieve the first plurality of activation values from the set of memories and (ii) to use the retrieved first plurality of activation values to compute a third plurality of activation values while the TCN propagates a third set of input values, provided to the TCN at a third instance in time, through the layers of the TCN in order to compute a third output of the TCN,
wherein the second and third sets of machine-trained processing nodes are different sets of processing nodes, such that the first plurality of activation values is used by a different set of machine-trained processing nodes when computing the second output of the TCN for the second set of input values than when computing the third output of the TCN for the third set of input values.
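The claimed technique can be illustrated informally: at each time step a layer computes and caches its activation, and later time steps retrieve that cached value instead of recomputing it, with different weight taps (stand-ins for the claim's different sets of processing nodes) consuming the same stored activation at different steps. The sketch below is a hypothetical minimal model, not the patented implementation; the class name, kernel size, and activation choice are illustrative assumptions.

```python
import numpy as np

class CachedTCNLayer:
    """Illustrative sketch of a causal 1-D convolution step (kernel size 3)
    that stores each time step's activation for reuse at the next two steps.
    This is a hypothetical model of the claim, not the patented design."""

    def __init__(self, rng, dim=4):
        # Three weight taps consume the activations from times t, t-1, t-2.
        # The taps for t-1 and t-2 play the role of the claim's second and
        # third sets of processing nodes: different nodes, same cached values.
        self.w = rng.standard_normal((3, dim, dim)) * 0.1
        self.cache = {}  # time index -> stored activation vector

    def step(self, t, x):
        a = np.maximum(0.0, x)      # compute this step's activation (ReLU)
        self.cache[t] = a           # store it for reuse at t+1 and t+2
        zero = np.zeros_like(a)
        a_prev1 = self.cache.get(t - 1, zero)  # retrieved, not recomputed
        a_prev2 = self.cache.get(t - 2, zero)
        return self.w[0] @ a + self.w[1] @ a_prev1 + self.w[2] @ a_prev2
```

Feeding zero inputs at later steps shows the reuse directly: the activation cached at time 0 is consumed by tap `w[1]` when producing the output at time 1, and by the different tap `w[2]` when producing the output at time 2.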