CPC G06T 7/0012 (2013.01) [G06N 3/08 (2013.01); G06T 15/08 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/10104 (2013.01)]

11 Claims
1. A method comprising:
receiving a two-dimensional (2D) medical image of a first region of an imaging subject;
receiving a three-dimensional (3D) medical image of the first region of the imaging subject;
annotating voxels of the 3D medical image with object class labels for a first object class of interest to produce a first plurality of annotated voxels;
projecting the 3D medical image along a plurality of rays onto a plane to produce a synthetic 2D medical image matching the 2D medical image;
projecting the first plurality of annotated voxels along the plurality of rays onto the plane to produce a first plurality of thickness values for the first object class of interest;
producing a first ground truth thickness mask for the first object class of interest from the first plurality of thickness values; and
training a deep neural network to learn a mapping between 2D medical images and thickness masks for the first object class of interest by:
mapping the 2D medical image to a first predicted thickness mask for the first object class of interest;
determining a loss for the first predicted thickness mask based on a difference between the first predicted thickness mask and the first ground truth thickness mask; and
updating parameters of the deep neural network based on the loss.
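The ground-truth generation steps of the claim (projecting the 3D image and its annotated voxels along rays onto a plane) can be sketched in code. This is a minimal illustration, not the patented method: it assumes parallel rays aligned with one volume axis, so each "ray" is a sum along that axis, and it assumes isotropic voxel spacing. The function name, arguments, and toy data are all hypothetical.

```python
import numpy as np

def project_volume(volume, labels, voxel_size_mm=1.0, axis=0):
    """Parallel-beam projection of a 3D volume and its voxel annotations.

    volume : 3D array of intensities (e.g. CT values)
    labels : 3D boolean array, True where a voxel is annotated with the
             object class label of interest

    Returns (synthetic_2d, thickness_mask). The synthetic 2D image is the
    ray-sum of intensities; the thickness mask counts annotated voxels on
    each ray and scales by the voxel size, giving a thickness in mm per
    pixel of the projection plane.
    """
    synthetic_2d = volume.sum(axis=axis)
    thickness_mask = labels.sum(axis=axis) * voxel_size_mm
    return synthetic_2d, thickness_mask

# Toy example: a 4x4x4 unit-intensity volume containing a labeled slab
# two voxels thick along the projection axis.
vol = np.ones((4, 4, 4))
lab = np.zeros((4, 4, 4), dtype=bool)
lab[1:3, :, :] = True                       # slab spans indices 1..2 on axis 0
img2d, thick = project_volume(vol, lab, voxel_size_mm=0.5)
print(img2d[0, 0])   # 4.0: four unit-intensity voxels summed per ray
print(thick[0, 0])   # 1.0: two labeled voxels * 0.5 mm spacing
```

A real divergent-beam (cone-beam) geometry, as in radiography, would instead trace each ray from a source point through the volume and accumulate interpolated samples; the axis-aligned sum above is the simplest case of the same idea.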
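The training steps (map the 2D image to a predicted thickness mask, compute a loss against the ground-truth mask, update parameters) follow the standard supervised regression loop. As a sketch, the deep neural network is replaced here by a hypothetical per-pixel linear map `w*x + b` trained with mean-squared-error gradient descent; the claim does not specify the architecture or the loss, so both are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 8))       # stand-in 2D medical image (toy data)
gt = 2.0 * x + 1.0           # stand-in ground truth thickness mask

# Trainable parameters of the per-pixel linear model (a stand-in for
# the deep neural network's parameters).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b                     # map image -> predicted thickness mask
    err = pred - gt
    loss = np.mean(err ** 2)             # MSE between prediction and ground truth
    # gradients of the MSE loss with respect to the parameters
    w -= lr * np.mean(2 * err * x)
    b -= lr * np.mean(2 * err)

print(w, b)   # w and b approach 2.0 and 1.0, recovering the toy relation
```

In practice the mapping would be a convolutional network producing one thickness channel per object class, and the update would be performed by backpropagation over mini-batches; the loop structure is the same.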