US 11,657,497 B2
Method and apparatus for registration of different mammography image views
William C. Walton, Severn, MD (US); and Seung-Jun Kim, Baltimore, MD (US)
Assigned to The Johns Hopkins University, Baltimore, MD (US)
Filed by The Johns Hopkins University, Baltimore, MD (US)
Filed on Mar. 25, 2020, as Appl. No. 16/829,556.
Claims priority of provisional application 62/823,972, filed on Mar. 26, 2019.
Prior Publication US 2020/0311923 A1, Oct. 1, 2020
Int. Cl. G06T 7/00 (2017.01); G06T 3/00 (2006.01); A61B 6/00 (2006.01)
CPC G06T 7/0012 (2013.01) [A61B 6/502 (2013.01); A61B 6/5217 (2013.01); G06T 3/0093 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30068 (2013.01); G06T 2207/30096 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method of identifying potential lesions in mammographic images, the method comprising:
receiving, by an image processing device, first image data;
receiving, by the image processing device, second image data, one of the first image data or the second image data being two-dimensional Craniocaudal (CC) mammographic image data or two-dimensional Mediolateral Oblique (MLO) mammographic image data;
registering, by the image processing device, the first image data and the second image data by employing an image registration convolutional neural network (CNN) using pixel level registration, wherein registering the first image data with the second image data comprises:
inputting the first image data and the second image data into the image registration CNN;
generating, via convolutions performed by the image registration CNN on the first image data and the second image data, a deformation field of deformation vectors that map pixels of the first image data to pixels of the second image data, the deformation field comprising, to define the deformation vectors,
a vertical deformation data array that defines row-wise relationships between the pixels of the first image data and the pixels of the second image data and
a horizontal deformation data array that defines column-wise relationships between the pixels of the first image data and the pixels of the second image data;
determining, by the image processing device, whether a candidate detection of a lesion exists in both the first image data and the second image data based on the first image data and the second image data and a mapping of the first image data to the second image data provided by the deformation field output from the image registration CNN; and
generating, by the image processing device, display output identifying the lesion.
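
Claim 1 recites a dense, pixel-level registration: the CNN outputs a deformation field consisting of a vertical array of row-wise offsets and a horizontal array of column-wise offsets that together map each pixel of the first view to a location in the second view, and that mapping is then used to test whether a candidate lesion appears in both views. The following is a minimal illustrative sketch of those two operations in NumPy, not the patented implementation: the function names, the forward-scatter warp, the nearest-neighbor rounding, and the correspondence radius are assumptions made for demonstration, and the deformation arrays are supplied directly rather than predicted by a trained registration CNN.

import numpy as np

def warp_with_deformation_field(first_image, vert_def, horiz_def):
    # Apply the deformation field to the first view: vert_def holds the
    # row-wise (vertical) offsets and horiz_def the column-wise
    # (horizontal) offsets, so (rows + vert_def, cols + horiz_def) is the
    # location in the second view's frame that each first-view pixel maps to.
    h, w = first_image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    mapped_rows = np.clip(np.round(rows + vert_def).astype(int), 0, h - 1)
    mapped_cols = np.clip(np.round(cols + horiz_def).astype(int), 0, w - 1)
    # Forward-scatter warp with nearest-neighbor rounding; a production
    # system would typically use interpolated backward warping instead.
    warped = np.zeros_like(first_image)
    warped[mapped_rows, mapped_cols] = first_image
    return warped, mapped_rows, mapped_cols

def candidate_in_both_views(candidate_rc_first, candidates_rc_second,
                            mapped_rows, mapped_cols, radius=25.0):
    # True if the first-view candidate, carried through the deformation
    # field, lands within `radius` pixels of any second-view candidate.
    # The radius is an assumed tolerance, not a value from the patent.
    r, c = candidate_rc_first
    mapped = np.array([mapped_rows[r, c], mapped_cols[r, c]], dtype=float)
    return any(
        np.linalg.norm(mapped - np.array([r2, c2], dtype=float)) <= radius
        for r2, c2 in candidates_rc_second
    )

# Toy example: a 4x4 image whose content shifts one row down and one
# column right; a candidate at (1, 1) in the first view then coincides
# with a candidate at (2, 2) in the second view.
img = np.arange(16, dtype=float).reshape(4, 4)
vert = np.ones((4, 4))
horiz = np.ones((4, 4))
warped, mr, mc = warp_with_deformation_field(img, vert, horiz)
print(candidate_in_both_views((1, 1), [(2, 2)], mr, mc, radius=1.0))  # True

In the claimed method the vertical and horizontal deformation arrays are generated by convolutions inside the image registration CNN rather than supplied by hand, and the cross-view correspondence test drives the display output identifying the lesion.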