US 11,842,485 B2
System and methods for inferring thickness of anatomical classes of interest in two-dimensional medical images using deep neural networks
Tao Tan, Nuenen (NL); Máté Fejes, Budapest (HU); Gopal Avinash, San Ramon, CA (US); Ravi Soni, San Ramon, CA (US); Bipul Das, Chennai (IN); Rakesh Mullick, Bangalore (IN); Pál Tegzes, Budapest (HU); Lehel Ferenczi, Budapest (HU); Vikram Melapudi, Bangalore (IN); and Krishna Seetharam Shriram, Bangalore (IN)
Assigned to GE PRECISION HEALTHCARE LLC, Wauwatosa, WI (US)
Filed by GE Precision Healthcare LLC, Wauwatosa, WI (US)
Filed on Mar. 4, 2021, as Appl. No. 17/192,804.
Prior Publication US 2022/0284570 A1, Sep. 8, 2022
Int. Cl. G06T 7/00 (2017.01); G06N 3/08 (2023.01); G06T 15/08 (2011.01)
CPC G06T 7/0012 (2013.01) [G06N 3/08 (2013.01); G06T 15/08 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/10104 (2013.01)] 11 Claims
OG exemplary drawing
 
1. A method comprising:
receiving a two-dimensional (2D) medical image of a first region of an imaging subject;
receiving a three-dimensional (3D) medical image of the first region of the imaging subject;
annotating voxels of the 3D medical image with object class labels for a first object class of interest to produce a first plurality of annotated voxels;
projecting the 3D medical image along a plurality of rays onto a plane to produce a synthetic 2D medical image matching the 2D medical image;
projecting the first plurality of annotated voxels along the plurality of rays onto the plane to produce a first plurality of thickness values for the first object class of interest;
producing a first ground truth thickness mask for the first object class of interest from the first plurality of thickness values; and
training a deep neural network to learn a mapping between 2D medical images and thickness masks for the first object class of interest by:
mapping the 2D medical image to a first predicted thickness mask for the first object class of interest;
determining a loss for the first predicted thickness mask based on a difference between the first predicted thickness mask and the first ground truth thickness mask; and
updating parameters of the deep neural network based on the loss.
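The claimed pipeline can be illustrated with a toy, axis-aligned sketch: thickness along each parallel ray is the count of labeled voxels times the voxel spacing, the synthetic 2D image is a parallel projection of the volume, and a per-pixel linear model stands in for the deep neural network in the gradient-update step. This is not the patented method (which projects along the rays of the actual 2D acquisition geometry and trains a deep network); all function names, the spacing value, and the linear stand-in model are illustrative assumptions.

```python
import numpy as np

def make_thickness_mask(labels, spacing_mm, axis=0):
    """Ground-truth thickness per ray: labeled-voxel count times voxel spacing (mm)."""
    return labels.sum(axis=axis) * spacing_mm

def project_volume(volume, axis=0):
    """Parallel-ray projection (mean intensity) as a crude synthetic 2D image."""
    return volume.mean(axis=axis)

# Toy 3D volume with one annotated slab of the object class of interest.
rng = np.random.default_rng(0)
vol = rng.random((8, 16, 16))
labels = np.zeros(vol.shape, dtype=bool)
labels[2:6, 4:12, 4:12] = True          # slab is 4 voxels thick along axis 0

spacing = 1.5                            # assumed mm per voxel along the rays
gt_thickness = make_thickness_mask(labels, spacing)   # 4 * 1.5 = 6.0 mm in the slab
synth2d = project_volume(vol)

# One-parameter-pair "network" pred = w * image + b, trained with the MSE loss
# between predicted and ground-truth thickness masks, as in the claim.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * synth2d + b
    err = pred - gt_thickness
    loss = (err ** 2).mean()
    w -= lr * (2 * err * synth2d).mean()  # gradient step on the loss
    b -= lr * (2 * err).mean()

print(round(float(gt_thickness.max()), 2))  # 6.0
```

In practice the predicted mask would come from a convolutional network and the projection would match the real radiograph's ray geometry; the structure of the loop (predict mask, compare to ground-truth mask, update parameters) is the same.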