CPC G06V 10/7747 (2022.01) [G06V 10/776 (2022.01); G06V 10/82 (2022.01); G06V 10/98 (2022.01); G06V 2201/03 (2022.01)]

21 Claims

1. A system comprising:
a memory to store instructions;
a set of one or more processors to execute the instructions stored in the memory to:
identify a group of unlabeled and unannotated training samples, wherein each unlabeled and unannotated training sample includes a medical image of a selected anatomical region of a body of a patient;
for each unlabeled and unannotated training sample in the group of unlabeled and unannotated training samples:
identify a patch that is a portion of the medical image corresponding to the unlabeled and unannotated training sample;
identify one or more transformations to be applied to the patch; and
generate a transformed patch by applying the one or more transformations to the patch; and
train a source model comprising an encoder-decoder network to learn anatomical patterns from the medical images of the selected anatomical region in a self-supervised manner using a group of transformed patches corresponding to the group of unlabeled and unannotated training samples, and without using labeled or annotated training samples, wherein the encoder-decoder network is trained to generate an approximation of the patch from a corresponding transformed patch, and wherein the encoder-decoder network is trained to minimize a loss function that indicates a difference between the generated approximation of the patch and the patch.
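The following Python sketch illustrates the patch-and-transformation pipeline recited in claim 1. The specific transformations (a random intensity remapping and local pixel shuffling), the patch size, and all function names are illustrative assumptions; the claim itself only requires that one or more transformations be applied to a patch taken from an unlabeled, unannotated medical image.

```python
import numpy as np

def extract_patch(image, top_left, size):
    """Crop a fixed-size patch from a 2-D medical image (illustrative helper)."""
    r, c = top_left
    h, w = size
    return image[r:r + h, c:c + w].copy()

def random_intensity_shift(patch, rng):
    """One example transformation: a random monotonic intensity remapping.
    The claim only requires 'one or more transformations'; this choice is an assumption."""
    p = rng.uniform(0.5, 2.0)               # random exponent for the remapping curve
    return np.clip(patch, 0.0, 1.0) ** p    # assumes intensities normalized to [0, 1]

def local_pixel_shuffle(patch, rng, window=4):
    """Another example transformation: shuffle pixels inside small local windows."""
    out = patch.copy()
    h, w = patch.shape
    for r in range(0, h - window + 1, window):
        for c in range(0, w - window + 1, window):
            block = out[r:r + window, c:c + window].ravel()
            rng.shuffle(block)
            out[r:r + window, c:c + window] = block.reshape(window, window)
    return out

rng = np.random.default_rng(0)
image = rng.random((128, 128)).astype(np.float32)   # stand-in for one unlabeled medical image
patch = extract_patch(image, top_left=(32, 32), size=(64, 64))
transformed = local_pixel_shuffle(random_intensity_shift(patch, rng), rng)
# (transformed, patch) forms one self-supervised training pair: the source model is
# asked to recover `patch` from `transformed`, with no labels or annotations involved.
```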
9. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method comprising:
identifying a group of unlabeled and unannotated training samples, wherein each unlabeled and unannotated training sample includes a medical image of a selected anatomical region;
for each unlabeled and unannotated training sample in the group of unlabeled and unannotated training samples:
identifying a patch that is a portion of the medical image corresponding to the unlabeled and unannotated training sample;
identifying one or more transformations to be applied to the patch; and
generating a transformed patch by applying the one or more transformations to the patch; and
training a source model comprising an encoder-decoder network to learn anatomical patterns from the medical images of the selected anatomical region in a self-supervised manner using a group of transformed patches corresponding to the group of unlabeled and unannotated training samples, and without using labeled or annotated training samples, wherein the encoder-decoder network is trained to generate an approximation of the patch from a corresponding transformed patch, and wherein the encoder-decoder network is trained to minimize a loss function that indicates a difference between the generated approximation of the patch and the patch.
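The sketch below, again hypothetical, shows the corresponding training step from claim 9 in PyTorch: an encoder-decoder network is trained to reconstruct the original patch from its transformed version. The tiny convolutional architecture and the mean-squared-error loss are assumptions; the claim only requires a loss function that indicates the difference between the generated approximation and the original patch.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Minimal illustrative encoder-decoder; not the patented architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # one choice of "difference" loss; an assumption

# Stand-ins for a batch of (transformed patch, original patch) pairs produced
# as in claim 1; no labeled or annotated samples are used.
original = torch.rand(8, 1, 64, 64)
transformed = original + 0.1 * torch.randn_like(original)

for step in range(10):                      # a few self-supervised training steps
    reconstruction = model(transformed)     # approximation of the original patch
    loss = loss_fn(reconstruction, original)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```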