US 12,235,930 B2
Graph neural network training methods and systems
Houyi Li, Hangzhou (CN); and Changhua He, Hangzhou (CN)
Assigned to Alipay (Hangzhou) Information Technology Co., Ltd., Zhejiang (CN)
Filed by ALIPAY (HANGZHOU) INFORMATION TECHNOLOGY CO., LTD., Zhejiang (CN)
Filed on Jan. 12, 2022, as Appl. No. 17/574,428.
Application 17/574,428 is a continuation of application No. 17/362,963, filed on Jun. 29, 2021, granted, now Pat. No. 11,227,190.
Claims priority of application No. 202010864281.0 (CN), filed on Aug. 25, 2020.
Prior Publication US 2022/0138502 A1, May 5, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 3/00 (2023.01); G06F 9/50 (2006.01); G06F 18/214 (2023.01); G06N 3/04 (2023.01)
CPC G06F 18/2148 (2023.01) [G06F 9/5094 (2013.01); G06N 3/04 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for training a graph neural network, wherein the method comprises:
obtaining a graph that comprises a plurality of nodes and edges between the plurality of nodes that represent relationships between the plurality of nodes;
dividing the graph into a plurality of subgraphs by using a community discovery algorithm, wherein dividing the graph comprises grouping nodes that are more related to each other into a same subgraph;
obtaining at least one subgraph from the plurality of subgraphs;
obtaining, for each node in the at least one subgraph, a node feature vector;
obtaining, for each node in the at least one subgraph, a node fusion vector based on performing propagation and aggregation using the node feature vector for each node in the at least one subgraph and edges connected to each node in the at least one subgraph; and
training the graph neural network by using the at least one subgraph and based on optimizing a loss function that is computed based on the node fusion vector for each node in the at least one subgraph.
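The following is a minimal, illustrative sketch of the training flow recited in claim 1, not the patented implementation. The specific choices here are assumptions the claim does not fix: Louvain community detection (via networkx) as the community discovery algorithm, randomly initialized node features and labels, a single mean-aggregation propagation step to produce the node fusion vectors, and a cross-entropy loss as the optimized loss function.

    # Sketch of the claimed flow; Louvain, mean aggregation, and
    # cross-entropy over synthetic labels are illustrative assumptions.
    import networkx as nx
    from networkx.algorithms import community
    import torch
    import torch.nn as nn

    # 1. Obtain a graph whose edges represent relationships between nodes.
    g = nx.karate_club_graph()

    # 2. Divide the graph into subgraphs with a community discovery algorithm,
    #    so that more related nodes land in the same subgraph.
    communities = community.louvain_communities(g, seed=0)
    subgraphs = [g.subgraph(c).copy() for c in communities]

    # 3. Obtain at least one subgraph and a feature vector for each of its nodes.
    sub = subgraphs[0]
    nodes = sorted(sub.nodes())
    index = {n: i for i, n in enumerate(nodes)}
    feat_dim, num_classes = 8, 2
    features = torch.randn(len(nodes), feat_dim)            # placeholder node features
    labels = torch.randint(0, num_classes, (len(nodes),))   # placeholder node labels

    # Row-normalized adjacency with self-loops, used for propagation/aggregation.
    adj = torch.eye(len(nodes))
    for u, v in sub.edges():
        adj[index[u], index[v]] = adj[index[v], index[u]] = 1.0
    adj = adj / adj.sum(dim=1, keepdim=True)

    class SimpleGNN(nn.Module):
        """One propagation/aggregation layer producing node fusion vectors,
        followed by a classifier head used to compute the training loss."""
        def __init__(self, in_dim, hid_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, hid_dim)
            self.head = nn.Linear(hid_dim, out_dim)

        def forward(self, x, a):
            fusion = torch.relu(a @ self.lin(x))  # aggregate neighbor features
            return self.head(fusion), fusion

    # 4. Train the graph neural network on the subgraph by optimizing a loss
    #    computed from the node fusion vectors.
    model = SimpleGNN(feat_dim, 16, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(50):
        optimizer.zero_grad()
        logits, _ = model(features, adj)
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()

In practice the node feature vectors, the propagation/aggregation scheme, and the loss function would come from the application domain; the sketch only shows how dividing the graph by community structure lets each subgraph be trained on independently.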