US 12,456,202 B2
Methods and systems for automated image segmentation of anatomical structure
Aparna Kanakatte Gurumurthy, Bangalore (IN); Avik Ghose, Kolkata (IN); Divya Manoharlal Bhatia, Bangalore (IN); and Jayavardhana Rama Gubbi Lakshminarasimha, Bangalore (IN)
Assigned to TATA CONSULTANCY SERVICES LIMITED, Mumbai (IN)
Filed by Tata Consultancy Services Limited, Mumbai (IN)
Filed on Jun. 26, 2023, as Appl. No. 18/213,931.
Claims priority of application No. 202221037838 (IN), filed on Jun. 30, 2022.
Prior Publication US 2024/0005512 A1, Jan. 4, 2024
Int. Cl. G06T 7/12 (2017.01); G06T 15/00 (2011.01); G06V 10/25 (2022.01); G06V 10/44 (2022.01); G06V 10/764 (2022.01); G06V 10/771 (2022.01)
CPC G06T 7/12 (2017.01) [G06T 15/00 (2013.01); G06V 10/25 (2022.01); G06V 10/44 (2022.01); G06V 10/764 (2022.01); G06V 10/771 (2022.01)] 17 Claims
OG exemplary drawing
 
1. A processor-implemented method for automated image segmentation of an anatomical structure, comprising the steps of:
receiving, via one or more hardware processors, a plurality of 3-dimensional (3-D) training images corresponding to the anatomical structure and a ground-truth 3-D image associated with each of the plurality of 3-D training images, wherein the plurality of 3-D training images is associated with a plurality of classes of the anatomical structure;
pre-processing, via the one or more hardware processors, the plurality of 3-D training images, to obtain a plurality of pre-processed training images;
forming, via the one or more hardware processors, one or more mini-batches from the plurality of pre-processed training images, based on a predefined mini-batch size, wherein each mini-batch comprises one or more pre-processed training images; and
training, via the one or more hardware processors, a segmentation network model, with the one or more pre-processed training images present in each mini-batch at a time, until the one or more mini-batches are completed for a predefined number of training epochs, to obtain a trained segmentation network model, wherein the segmentation network model comprises a generator and a patch-based discriminator, and training the segmentation network model with the one or more pre-processed training images present in each mini-batch comprises:
passing each pre-processed training image present in the mini-batch to an encoder network of the generator, to obtain a set of patched feature maps and a set of encoded feature maps, corresponding to the pre-processed training image;
channel-wise concatenating the set of patched feature maps and the set of encoded feature maps, through a bottleneck network of the generator, to obtain a concatenated feature map corresponding to each pre-processed training image;
passing the concatenated feature map to a decoder network of the generator, to predict a segmented image corresponding to each pre-processed training image;
predicting a probability value corresponding to each pre-processed training image, by using (i) the predicted segmented image corresponding to the pre-processed training image and (ii) the ground-truth 3-D image of the corresponding pre-processed training image, through the patch-based discriminator;
calculating a value of a loss function of the segmentation network model, for the one or more pre-processed training images present in the mini-batch, using the predicted probability value corresponding to each pre-processed training image; and
backpropagating weights of the segmentation network model, based on the calculated value of the loss function of the segmentation network model.
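The training flow recited in claim 1 — pre-processing the 3-D volumes, grouping them into mini-batches of a predefined size, generating a segmented image, scoring it per patch against the ground truth, and computing a loss over the mini-batch — can be sketched as below. This is a minimal, illustrative NumPy sketch only: `preprocess`, `make_minibatches`, `toy_generator`, `toy_patch_discriminator`, and `adversarial_loss` are hypothetical stand-ins chosen for readability, not the patented encoder/bottleneck/decoder generator or its discriminator, and the weight backpropagation step is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(vol):
    """One plausible pre-processing: min-max normalise a 3-D volume to [0, 1]."""
    lo, hi = vol.min(), vol.max()
    return (vol - lo) / (hi - lo + 1e-8)

def make_minibatches(vols, batch_size):
    """Group pre-processed volumes into mini-batches of a predefined size."""
    return [vols[i:i + batch_size] for i in range(0, len(vols), batch_size)]

def toy_generator(vol, threshold=0.5):
    """Stand-in 'generator': predicts a binary segmentation by thresholding.
    The claim's generator is an encoder / bottleneck / decoder network."""
    return (vol > threshold).astype(np.float32)

def toy_patch_discriminator(pred, gt, patch=4):
    """Stand-in patch-based discriminator: one probability per 4x4x4 patch,
    here computed as per-patch agreement between prediction and ground truth."""
    d, h, w = pred.shape
    probs = []
    for z in range(0, d, patch):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                p = pred[z:z + patch, y:y + patch, x:x + patch]
                g = gt[z:z + patch, y:y + patch, x:x + patch]
                probs.append(float((p == g).mean()))
    return np.array(probs)

def adversarial_loss(probs, eps=1e-8):
    """Binary cross-entropy against the 'real' label, averaged over patches."""
    return float(-np.mean(np.log(probs + eps)))

# Toy data: 8 training volumes of shape 8x8x8 with matching ground-truth masks.
train = [rng.random((8, 8, 8)) for _ in range(8)]
gts = [(v > 0.5).astype(np.float32) for v in train]

pre = [preprocess(v) for v in train]          # pre-processed training images
batches = make_minibatches(pre, batch_size=4)  # predefined mini-batch size

for epoch in range(2):                         # predefined number of epochs
    for b, batch in enumerate(batches):        # one mini-batch at a time
        losses = []
        for vol, gt in zip(batch, gts[b * 4:(b + 1) * 4]):
            seg = toy_generator(vol)                       # predicted segmented image
            probs = toy_patch_discriminator(seg, gt)       # per-patch probabilities
            losses.append(adversarial_loss(probs))
        loss = float(np.mean(losses))          # loss over the mini-batch
        # A real implementation would now backpropagate this loss through the
        # generator and discriminator weights; that step is omitted here.
```

In the patented method the per-patch probabilities come from a learned discriminator and the loss drives weight updates in both networks; the sketch only mirrors the data flow of the claimed steps.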