| CPC G06N 20/00 (2019.01) [G06N 3/126 (2013.01)] | 20 Claims |

1. A system comprising:
memory circuitry for storing computer instructions;
network interface circuitry; and
processor circuitry in communication with the network interface circuitry and the memory circuitry, the processor circuitry configured to execute the computer instructions to:
receive sharable data from a plurality of local computation nodes;
cluster the plurality of local computation nodes into a plurality of clusters based on a set of clustering features extracted from the sharable data;
select a subset of local computation nodes from the plurality of local computation nodes as representatives of the plurality of clusters to participate in collaborative machine learning; and
iteratively provision the collaborative machine learning by the subset of local computation nodes until a termination condition is met by:
receipt, from the subset of local computation nodes, of sets of model hyper parameters and sets of model metrics associated with machine learning models trained at the subset of local computation nodes using non-sharable datasets of the subset of local computation nodes;
performance of at least one model architectural hyper parameter cross-over of the machine learning models among the subset of local computation nodes to update the sets of model hyper parameters for the subset of local computation nodes, wherein the performance of the at least one model architectural hyper parameter cross-over of the machine learning models among the subset of local computation nodes is limited to intra-cluster cross-over;
elimination of selected local computation nodes of the subset of local computation nodes to obtain a remaining subset of local computation nodes using a multi-dimensional cost/performance function; and
instruction of the remaining subset of local computation nodes to perform a next round of training using the non-sharable datasets based on the updated sets of model hyper parameters.
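The claimed loop amounts to a cluster-constrained, evolutionary hyperparameter search over federated nodes: cluster on sharable features, pick representatives, then repeat intra-cluster cross-over, elimination by a cost/performance score, and local retraining until a termination condition is met. The following Python sketch illustrates that flow under stated assumptions; every name (`Node`, `cost_performance`, the single `layers` hyperparameter, the fixed-round termination condition) is an illustrative assumption, not the claimed implementation.

```python
import random

random.seed(0)  # deterministic behavior for the sketch

class Node:
    """A local computation node; fields here are illustrative assumptions."""
    def __init__(self, feature, layers, accuracy):
        self.feature = feature                 # clustering feature from sharable data
        self.hp = {"layers": layers}           # architectural hyper parameter
        self.metrics = {"accuracy": accuracy}  # model metric reported to the server

def cluster_nodes(nodes, num_clusters):
    """Cluster nodes on a feature extracted from their sharable data."""
    ordered = sorted(nodes, key=lambda n: n.feature)
    size = max(1, len(ordered) // num_clusters)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def intra_cluster_crossover(members):
    """Cross over an architectural hyper parameter, restricted to one cluster."""
    if len(members) >= 2:
        a, b = random.sample(members, 2)
        a.hp["layers"], b.hp["layers"] = b.hp["layers"], a.hp["layers"]

def cost_performance(node):
    """Assumed multi-dimensional cost/performance score (accuracy vs. model size)."""
    return node.metrics["accuracy"] - 0.01 * node.hp["layers"]

def local_train(node):
    """Stand-in for a training round on the node's non-sharable dataset."""
    node.metrics["accuracy"] = min(1.0, node.metrics["accuracy"] + 0.05)

def provision(nodes, num_clusters=2, reps_per_cluster=2, rounds=3, keep=0.75):
    clusters = cluster_nodes(nodes, num_clusters)
    # Representatives of each cluster participate in the collaboration.
    reps = {i: c[:reps_per_cluster] for i, c in enumerate(clusters)}
    survivors = [n for members in reps.values() for n in members]
    for _ in range(rounds):                    # termination: fixed round count
        for members in reps.values():          # cross-over stays intra-cluster
            intra_cluster_crossover(members)
        survivors.sort(key=cost_performance, reverse=True)
        survivors = survivors[:max(1, int(len(survivors) * keep))]  # eliminate
        reps = {i: [n for n in m if n in survivors] for i, m in reps.items()}
        for n in survivors:
            local_train(n)                     # next round on local data
    return survivors
```

Note the server only ever handles hyper parameters and metrics; `local_train` is a placeholder for training that stays on the node, so the non-sharable datasets never leave the local computation nodes.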