US 11,836,611 B2
Method for meta-level continual learning
Hong Yu, Shrewsbury, MA (US); and Tsendsuren Munkhdalai, Worcester, MA (US)
Assigned to UNIVERSITY OF MASSACHUSETTS, Boston, MA (US)
Filed by University of Massachusetts, Boston, MA (US)
Filed on Jul. 24, 2018, as Appl. No. 16/044,108.
Claims priority of provisional application 62/549,509, filed on Aug. 24, 2017.
Claims priority of provisional application 62/536,945, filed on Jul. 25, 2017.
Prior Publication US 2019/0034798 A1, Jan. 31, 2019
Int. Cl. G06N 3/08 (2023.01); H04L 67/10 (2022.01); G06N 3/04 (2023.01); G06N 3/063 (2023.01); G06N 3/084 (2023.01); G06N 3/045 (2023.01); G06N 3/048 (2023.01); G06N 3/044 (2023.01); G06N 3/047 (2023.01)
CPC G06N 3/08 (2013.01) [G06N 3/04 (2013.01); G06N 3/045 (2023.01); G06N 3/048 (2023.01); G06N 3/063 (2013.01); G06N 3/084 (2013.01); H04L 67/10 (2013.01); G06N 3/044 (2023.01); G06N 3/047 (2023.01)] 19 Claims
OG exemplary drawing
 
1. A method of classifying an input task data set by meta level continual learning, by a processor and an instruction memory with computer code instructions stored thereon, the instruction memory operatively coupled to the processor such that, when executed by the processor, the computer code instructions cause a system to implement the method, the method comprising:
a) analyzing a first training data set to thereby generate a first meta information value in a task space;
b) assigning the first meta information value to the first training data set to generate a first meta weight value in a meta space;
c) analyzing a second training data set that is distinct from the first training data set to generate a second meta information value in the task space;
d) assigning the second meta information value to the second training data set to generate a second meta weight value in the meta space;
e) comparing the first meta weight value and the second meta weight value to generate a slow weight value;
f) storing the slow weight value in a weight memory that is accessible by the task space and the meta space;
g) comparing the input task data set to the slow weight value to generate a third meta information value in the task space;
h) transmitting the third meta information value from the task space to the meta space;
i) comparing the third meta information value to the slow weight value to generate a fast weight value in the meta space;
j) storing the fast weight value in the weight memory; and
k) parameterizing the first and second meta weight values with the fast weight value to update the slow weight value, whereby a value is associated with the input task data set, thereby classifying the input task data set by meta level continual learning.
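The claimed steps (a)–(k) can be read as a slow-weight/fast-weight meta-learning loop: meta information values are derived from two training sets, combined into a slow weight held in a shared weight memory, and then a per-task fast weight is derived from the input task set and used to re-parameterize the slow weight for classification. The sketch below is an illustrative interpretation only, not the patented implementation: the patent does not specify the concrete analysis, assignment, or comparison operations, so every function here (`meta_info`, `meta_weight`, `slow_weight`, `fast_weight`, `classify`) and every numerical choice (mean features, variance scaling, residual comparison, sign threshold) is a hypothetical placeholder keyed to the lettered steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_info(data):
    # (a, c) Analyze a training set in the task space to generate a
    # meta information value (placeholder: its mean feature vector).
    return data.mean(axis=0)

def meta_weight(info, data):
    # (b, d) Assign the meta information value to the training set to
    # generate a meta weight value in the meta space (placeholder:
    # the info vector scaled by the set's overall variance).
    return info * data.var()

def slow_weight(w1, w2):
    # (e, f) Compare the two meta weight values to generate a slow
    # weight value, stored in a weight memory accessible to both spaces.
    return 0.5 * (w1 + w2)

def fast_weight(task_input, slow_w):
    # (g-i) Compare the input task data set to the slow weight to
    # generate a third meta information value, then compare that value
    # to the slow weight to generate a fast weight value (placeholder:
    # a sign-modulated residual).
    third_info = task_input.mean(axis=0) - slow_w
    return third_info * np.sign(slow_w)

def classify(task_input, slow_w, fast_w):
    # (k) Parameterize the slow weight with the fast weight, then score
    # the input task data set against the updated weight to associate a
    # class value with it.
    updated = slow_w + fast_w
    score = task_input.mean(axis=0) @ updated
    return int(score > 0)

# Two distinct training sets and one input task set (4 features each).
train1 = rng.normal(1.0, 0.5, size=(8, 4))
train2 = rng.normal(-0.5, 0.5, size=(8, 4))
task = rng.normal(0.8, 0.5, size=(8, 4))

w1 = meta_weight(meta_info(train1), train1)   # first meta weight value
w2 = meta_weight(meta_info(train2), train2)   # second meta weight value
slow_w = slow_weight(w1, w2)                  # weight memory: slow weight
fast_w = fast_weight(task, slow_w)            # (j) weight memory: fast weight
label = classify(task, slow_w, fast_w)
print(label)
```

The design point the claim hinges on is the two-timescale split: the slow weight summarizes knowledge across training sets and changes only when re-parameterized in step (k), while the fast weight is recomputed per input task set, which is what lets the system adapt to a new task without discarding what the slow weight already encodes.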