US 12,307,360 B2
Method and system for optimized spike encoding for spiking neural networks
Dighanchal Banerjee, Kolkata (IN); Sounak Dey, Kolkata (IN); Arijit Mukherjee, Kolkata (IN); and Arun George, Bangalore (IN)
Assigned to Tata Consultancy Services Limited, Mumbai (IN)
Filed by Tata Consultancy Services Limited, Mumbai (IN)
Filed on Mar. 1, 2021, as Appl. No. 17/187,912.
Claims priority of application No. 202021052964 (IN), filed on Dec. 4, 2020.
Prior Publication US 2022/0222522 A1, Jul. 14, 2022
Int. Cl. G06N 3/08 (2023.01); G06N 3/049 (2023.01)
CPC G06N 3/08 (2013.01) [G06N 3/049 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A processor-implemented method for optimized spike encoding for spiking neural networks (300) comprising:
receiving a plurality of input signals from a data source, via one or more hardware processors, wherein an input signal of the plurality of input signals is a time series signal and the data source comprises a plurality of sensors, wherein the time series signal is a Mackey-Glass Time Series (MGS), a non-linear chaotic time series generated by a delay differential equation (302);
introducing Gaussian noise into the plurality of input signals, via the one or more hardware processors, to obtain a plurality of Gaussian noise introduced signals (304);
generating a plurality of initial encoded spike trains from the plurality of Gaussian noise introduced signals, via the one or more hardware processors, based on an encoding technique (306), wherein the encoding technique comprises a non-temporal neural coding scheme that comprises a Poisson encoding technique, and wherein the Poisson encoding technique is expressed mathematically as shown below:

P_{(t_2 - t_1)}[n] = \frac{\langle n \rangle^{n}}{n!}\, e^{-\langle n \rangle}

where <n> is the average spike count given by

\langle n \rangle = \int_{t_1}^{t_2} r(t)\, dt
where,
P is a probability of an event,
n is the number of spikes,
(t2−t1) is the time interval,
<n> is an average spike count,
r(t) is an instantaneous rate of generating spikes during the encoding process, and
dt is a small time sub-interval;
computing mutual information (MI), via the one or more hardware processors, between (i) the plurality of Gaussian noise introduced signals and (ii) the plurality of initial encoded spike trains based on an MI computation technique, wherein the MI is computed across an entire length of an initial encoded spike train from amongst the plurality of initial encoded spike trains (308);
optimizing the mutual information (MI), via the one or more hardware processors, through an optimization technique that maximizes the MI by varying the Gaussian noise in the plurality of Gaussian noise introduced signals (310);
identifying an initial encoded spike train among the plurality of initial encoded spike trains to obtain an optimized spike train, via the one or more hardware processors, wherein the optimized spike train is identified based on a pre-defined criterion of the MI, wherein the mutual information increases with an increase in input noise and decreases after a noise level of 0.07, and the MI at an inflection point corresponding to the noise level is considered as the optimized mutual information available between the input signal and the initial encoded spike train, and the corresponding spike train is considered as the optimized spike train, which carries maximum information about the MGS (312);
feeding the optimized spike train, via the one or more hardware processors, to a spiking neural network (SNN) for training the SNN for at least one task (314);
training, via the one or more hardware processors, the SNN using the optimized spike train, wherein the SNN is trained to learn temporal dynamics of the optimized spike train, and a post-synaptic trace of the SNN is then fed into a Linear Regression module with corresponding real values from the plurality of input signals as labels to learn during training and to reconstruct the plurality of input signals; and
reconstructing the plurality of input signals, via the one or more hardware processors, using the SNN, wherein increasing the noise and thereby changing the MI affects the learning capability of the SNN on the Linear Regression module so as to improve a reconstruction performance of the SNN when compared to a base encoding technique, wherein the reconstruction performance of the SNN is measured using a reconstruction score, and wherein the reconstructed plurality of input signals captures real-world stimuli from the time series signal.
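
The sketches below illustrate the claimed steps in Python with NumPy; every function name, parameter value, and modelling choice is an assumption made for illustration and is not drawn from the patent specification. For the Mackey-Glass Time Series of step (302), a minimal generator for the delay differential equation dx/dt = βx(t−τ)/(1 + x(t−τ)^n) − γx(t), integrated with the Euler method, might look like this:

```python
import numpy as np

def mackey_glass(length=2000, tau=17.0, beta=0.2, gamma=0.1, n=10,
                 dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)**n) - gamma*x(t).

    Parameter values are the commonly used chaotic regime, assumed here
    for illustration only.
    """
    delay = int(tau / dt)                      # delay expressed in integration steps
    x = np.full(delay + length, x0)            # constant history before t = 0
    for t in range(delay, delay + length - 1):
        x_tau = x[t - delay]                   # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[delay:]                           # chaotic time series of requested length
```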
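For steps (304) and (306), a sketch that injects Gaussian noise and performs rate-based Poisson encoding could use the common discrete-time approximation of a Poisson process, emitting a spike in each bin with probability r(t)·dt, consistent with the formula above; the maximum rate and bin width are assumed values:

```python
import numpy as np

def add_gaussian_noise(signal, sigma, rng):
    """Step (304): inject zero-mean Gaussian noise with standard deviation sigma."""
    return signal + rng.normal(0.0, sigma, size=signal.shape)

def poisson_encode(signal, max_rate=100.0, dt=1e-3, rng=None):
    """Step (306): rate-based Poisson encoding; each time bin emits a spike
    with probability r(t)*dt, where r(t) is the signal scaled to [0, max_rate]."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = signal.min(), signal.max()
    rates = max_rate * (signal - lo) / (hi - lo + 1e-12)   # instantaneous rate r(t)
    return (rng.random(size=rates.shape) < rates * dt).astype(np.uint8)
```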
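For step (308), the claim does not fix a particular MI computation technique; one simple option, assumed here, is a histogram estimate of the mutual information between the noise-introduced signal and its spike train over the full signal length:

```python
import numpy as np

def mutual_information(signal, spikes, n_bins=32):
    """Step (308): histogram estimate of MI (in bits) between the real-valued
    noise-introduced signal and its binary spike train."""
    joint, _, _ = np.histogram2d(signal, spikes, bins=[n_bins, 2])
    pxy = joint / joint.sum()                  # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))
```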
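Steps (310) and (312) can be approximated, reusing the helpers sketched above, by sweeping the Gaussian noise level, encoding at each level, and keeping the spike train at which the MI peaks; the claim reports the inflection for the MGS around a noise level of 0.07, so the commented sweep range below brackets that value as an assumption:

```python
import numpy as np

def select_optimized_spike_train(signal, noise_levels, max_rate=100.0,
                                 dt=1e-3, seed=0):
    """Steps (310)-(312): vary the Gaussian noise, encode at each level,
    and keep the spike train whose MI with the noisy signal is maximal."""
    rng = np.random.default_rng(seed)
    best_mi, best_sigma, best_spikes = -np.inf, None, None
    for sigma in noise_levels:
        noisy = add_gaussian_noise(signal, sigma, rng)
        spikes = poisson_encode(noisy, max_rate=max_rate, dt=dt, rng=rng)
        mi = mutual_information(noisy, spikes)
        if mi > best_mi:                       # track the inflection / maximum
            best_mi, best_sigma, best_spikes = mi, sigma, spikes
    return best_mi, best_sigma, best_spikes

# Example sweep bracketing the reported inflection (~0.07):
# mi, sigma, spike_train = select_optimized_spike_train(
#     mackey_glass(), noise_levels=np.linspace(0.0, 0.15, 16))
```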
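For step (314) and the training and reconstruction elements, the claim does not specify the SNN architecture or its learning rule; the sketch below substitutes a small random leaky integrate-and-fire (LIF) population as a stand-in, collects its post-synaptic traces, and feeds them to a least-squares linear readout in the spirit of the Linear Regression module, scoring the result with an R²-style reconstruction score. All of these choices (LIF stand-in, weight distribution, time constants, readout, and score) are assumptions, not the patented method:

```python
import numpy as np

def lif_trace_features(spikes, n_neurons=50, tau_mem=20e-3, tau_syn=10e-3,
                       dt=1e-3, v_th=1.0, seed=0):
    """Stand-in SNN: drive a random LIF population with the optimized spike
    train and collect exponential post-synaptic traces of its output spikes."""
    rng = np.random.default_rng(seed)
    w_in = rng.normal(1.0, 0.5, size=n_neurons)        # input weights (illustrative)
    v = np.zeros(n_neurons)                            # membrane potentials
    trace = np.zeros(n_neurons)                        # post-synaptic traces
    feats = np.zeros((len(spikes), n_neurons))
    for t, s in enumerate(spikes):
        v += dt / tau_mem * (-v) + w_in * s            # leaky integration of input
        fired = v >= v_th
        v[fired] = 0.0                                  # reset neurons that spiked
        trace += dt / tau_syn * (-trace) + fired        # low-pass trace of output spikes
        feats[t] = trace
    return feats

def reconstruct(signal, spikes):
    """Linear Regression readout: least-squares map from traces back to the
    real-valued signal, with an R^2-style reconstruction score (assumed metric)."""
    X = np.c_[lif_trace_features(spikes), np.ones(len(spikes))]   # add bias column
    w, *_ = np.linalg.lstsq(X, signal, rcond=None)
    recon = X @ w
    score = 1.0 - np.sum((signal - recon) ** 2) / np.sum((signal - signal.mean()) ** 2)
    return recon, score
```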