US 12,315,218 B2
Systems and methods for tracking groups of objects in medical images
Yikang Liu, Cambridge, MA (US); Luojie Huang, Baltimore, MD (US); Zhang Chen, Cambridge, MA (US); Xiao Chen, Cambridge, MA (US); and Shanhui Sun, Cambridge, MA (US)
Assigned to Shanghai United Imaging Intelligence Co., Ltd., Shanghai (CN)
Filed by Shanghai United Imaging Intelligence Co., Ltd., Shanghai (CN)
Filed on Jul. 6, 2022, as Appl. No. 17/858,663.
Prior Publication US 2024/0013510 A1, Jan. 11, 2024
Int. Cl. G06V 10/75 (2022.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06T 7/73 (2017.01); G06V 10/764 (2022.01)
CPC G06V 10/751 (2022.01) [G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 7/73 (2017.01); G06V 10/764 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 16 Claims
OG exemplary drawing
 
1. An apparatus, comprising:
a memory and one or more processors, wherein the one or more processors are configured to:
obtain at least a first medical image and a second medical image;
for each of the first medical image and the second medical image:
determine, using a first neural network, a plurality of candidate objects in the medical image, wherein the first neural network is used to predict the probability of each pixel of the medical image belonging to at least one of the plurality of candidate objects and determine respective locations of the plurality of candidate objects in the medical image based on the predicted probability;
group the plurality of candidate objects into at least a first group of two or more candidate objects and a second group of two or more candidate objects;
extract first features from a first region of the medical image that includes the first group of two or more candidate objects; and
extract second features from a second region of the medical image that includes the second group of two or more candidate objects;
determine, using a graphical neural network (GNN), a match between the first group of two or more candidate objects in the first medical image and the first group of two or more candidate objects in the second medical image based on the first features respectively extracted from the first medical image and the second medical image; and
further determine, using the GNN, a match between the second group of two or more candidate objects in the first medical image and the second group of two or more candidate objects in the second medical image based on the second features respectively extracted from the first medical image and the second medical image;
wherein the GNN is configured to generate a graph representation that includes a first node, a second node, and an edge between the first node and the second node, the first node representing the first group of two or more candidate objects in the first medical image, the second node representing the first group of two or more candidate objects in the second medical image, and the edge representing a relationship between the first group of two or more candidate objects in the first medical image and the first group of two or more candidate objects in the second medical image;
wherein the GNN is further configured to calculate, based at least on the probability predicted by the first neural network for each pixel of the first medical image and the second medical image, a first node label for the first node and a second node label for the second node, the first node label indicating whether the first group of two or more candidate objects in the first medical image corresponds to real objects, and the second node label indicating whether the first group of two or more candidate objects in the second medical image corresponds to real objects; and
wherein the GNN is further configured to calculate an edge value for the edge between the first node and the second node, the edge value indicating whether the first group of two or more candidate objects in the first medical image and the first group of two or more candidate objects in the second medical image are associated with a same group of candidate objects.
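The claim describes a pipeline: a first network assigns each pixel a probability of belonging to a candidate object, candidates are grouped spatially, features are extracted per group region, and a graph neural network scores node labels (is the group real?) and edge values (do two groups in different images correspond?). Purely as an illustration of that flow — not the patented implementation — the steps can be sketched with simple stand-ins. All function names, the Chebyshev grouping rule, the toy centroid features, and the similarity-based edge score below are hypothetical choices, not taken from the patent.

```python
# Illustrative sketch of the claimed pipeline (hypothetical stand-ins throughout):
# per-pixel probabilities -> candidate objects -> spatial grouping ->
# per-group features -> graph with node labels and edge values.

def detect_candidates(prob_map, threshold=0.5):
    """Pixels whose predicted probability exceeds the threshold become
    candidate object locations (stand-in for the first neural network)."""
    return [(r, c) for r, row in enumerate(prob_map)
            for c, p in enumerate(row) if p > threshold]

def group_candidates(candidates, max_dist=2):
    """Greedy spatial grouping: a candidate joins an existing group if it
    lies within max_dist (Chebyshev) of any member, else starts a new group."""
    groups = []
    for cand in candidates:
        for g in groups:
            if any(max(abs(cand[0] - r), abs(cand[1] - c)) <= max_dist
                   for r, c in g):
                g.append(cand)
                break
        else:
            groups.append([cand])
    return groups

def group_features(prob_map, group):
    """Toy region features: group centroid plus mean predicted probability."""
    rs = [r for r, _ in group]
    cs = [c for _, c in group]
    mean_p = sum(prob_map[r][c] for r, c in group) / len(group)
    return (sum(rs) / len(rs), sum(cs) / len(cs), mean_p)

def build_match_graph(feats_a, feats_b, real_thresh=0.6):
    """Stand-in for the GNN: one node per group per image, a node label
    derived from the per-pixel probabilities (here: mean probability above
    a threshold means the group looks like real objects), and an edge value
    scoring cross-image correspondence by feature similarity."""
    nodes = ([{"image": 0, "feat": f, "label": f[2] > real_thresh} for f in feats_a]
             + [{"image": 1, "feat": f, "label": f[2] > real_thresh} for f in feats_b])
    edges = {}
    for i, fa in enumerate(feats_a):
        for j, fb in enumerate(feats_b):
            dist = abs(fa[0] - fb[0]) + abs(fa[1] - fb[1]) + abs(fa[2] - fb[2])
            edges[(i, j)] = 1.0 / (1.0 + dist)  # higher = more likely a match
    return nodes, edges
```

Running this on two probability maps containing the same two object clusters, slightly shifted between images, yields two groups of two candidates per image, both node labels marked real, and the highest edge values linking each group to its counterpart in the other image — mirroring the node-label and edge-value outputs the claim attributes to the GNN.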