US 12,229,680 B2
Neural network accelerators resilient to conductance drift
HsinYu Tsai, San Jose, CA (US); Stefano Ambrogio, San Jose, CA (US); Sanjay Kariyappa, Atlanta, GA (US); and Mathieu Gallot, Les Sables-d'Olonne (FR)
Assigned to International Business Machines Corporation, Armonk, NY (US)
Filed by International Business Machines Corporation, Armonk, NY (US)
Filed on Sep. 28, 2020, as Appl. No. 17/035,005.
Prior Publication US 2022/0101142 A1, Mar. 31, 2022
Int. Cl. G06N 3/04 (2023.01); G06N 3/048 (2023.01); G06N 3/063 (2023.01); G06N 3/065 (2023.01); G06N 3/08 (2023.01); G06N 3/084 (2023.01)
CPC G06N 3/084 (2013.01) [G06N 3/048 (2023.01); G06N 3/063 (2013.01)] 25 Claims
OG exemplary drawing
 
1. A method comprising:
receiving an input signal for processing in one or more neurons of a neural network, wherein the neural network includes a plurality of resistive processing unit (RPU) devices, where values of the plurality of RPU devices are RPU weights that connect layers of neurons in the neural network, each neuron of the neural network having an activation function to compensate for conductance drift in the values of the RPU weights, and the neural network does not include bias neurons;
applying an arbitrary amplification factor to activation function outputs of the one or more neurons in the neural network, wherein the arbitrary amplification factor is based on a dynamic range of components in the neural network and compensates for the conductance drift in the values of the RPU weights; and
performing a calculation with the neural network using the amplified activation function outputs of the one or more neurons.
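
The compensation scheme recited in claim 1 can be illustrated with a short, non-authoritative sketch. The power-law drift model, the ReLU activation, the choice of amplification factor alpha = (t/t0)**nu, and the function names below (drifted_weights, forward) are all assumptions made for illustration; the claim itself covers an arbitrary amplification factor chosen based on the dynamic range of the network components.

    import numpy as np

    # Minimal sketch, not the patented implementation. Assumes a power-law
    # conductance drift G(t) = G(t0) * (t/t0)**(-nu) applied uniformly to all
    # RPU weights, ReLU activations, and no bias neurons (as in claim 1).

    def drifted_weights(w0, t, t0=1.0, nu=0.05):
        # Uniform power-law drift shrinks every stored weight by the same factor.
        return w0 * (t / t0) ** (-nu)

    def relu(x):
        return np.maximum(x, 0.0)

    def forward(x, layers, amplification=1.0):
        # Each neuron's activation output is multiplied by the amplification
        # factor before being passed to the next layer.
        h = x
        for w in layers:
            h = amplification * relu(h @ w)
        return h

    rng = np.random.default_rng(0)
    w1 = rng.normal(size=(4, 8))
    w2 = rng.normal(size=(8, 3))
    x = rng.normal(size=(1, 4))

    t, t0, nu = 1.0e6, 1.0, 0.05                    # assumed elapsed time and drift exponent
    drifted = [drifted_weights(w, t, t0, nu) for w in (w1, w2)]
    alpha = (t / t0) ** nu                          # one possible amplification factor

    y_ideal = forward(x, [w1, w2])                  # undrifted reference
    y_comp = forward(x, drifted, amplification=alpha)
    print(np.allclose(y_ideal, y_comp, rtol=1e-6))  # True: drift is cancelled

Because ReLU is positively homogeneous (relu(a*x) = a*relu(x) for a > 0) and the network contains no bias neurons, the scaling introduced by drift and by the amplification factor passes cleanly through every layer; this is why the factor can be chosen freely within the dynamic range of the analog components rather than matched exactly to the drift.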