CPC A61B 5/4576 (2013.01) [A61B 5/4519 (2013.01); A61B 5/4523 (2013.01); A61B 5/7267 (2013.01); A61B 5/742 (2013.01); A61B 5/7475 (2013.01); G06T 7/10 (2017.01); G06T 7/30 (2017.01); A61B 5/055 (2013.01); A61B 6/032 (2013.01); A61B 8/08 (2013.01); G06T 2207/10081 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/10132 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30004 (2013.01)] | 11 Claims
1. A method comprising:
receiving one or more 3D (three-dimensional) medical images of one or more anatomical objects of a patient;
determining correspondences between 2D (two-dimensional) slices of the one or more 3D medical images and points on a 2D map representing the one or more anatomical objects by:
annotating an atlas of the one or more anatomical objects with anatomical features that correspond to anatomical features in the 2D map,
segmenting anatomical structures, using a trained machine learning based segmentation network, from 1) the one or more 3D medical images and 2) the annotated atlas, and
registering the annotated atlas with the one or more 3D medical images to establish the correspondences between the 2D slices of the one or more 3D medical images and the points on the 2D map based on the anatomical structures segmented from the one or more 3D medical images and the anatomical structures segmented from the annotated atlas;
updating the 2D map with patient information extracted from the one or more 3D medical images;
presenting the updated 2D map with the determined correspondences to a user via a display device;
receiving user input, from the user interacting with the display device, selecting one or more of the points on the updated 2D map; and
in response to receiving the user input, displaying, to the user via the display device, one or more of the 2D slices that correspond to the selected one or more points based on the determined correspondences.
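The segmentation, registration, and correspondence steps recited in the claim can be sketched in simplified form. The sketch below is illustrative only: `segment` is a toy intensity threshold standing in for the trained machine learning based segmentation network, and `register_atlas` approximates registration by maximizing per-slice mask overlap (a Dice-like score) rather than performing full deformable 3D registration. All function and variable names (`segment`, `register_atlas`, `build_correspondences`, `patella_point`) are hypothetical and do not appear in the patent.

```python
import numpy as np

def segment(volume, threshold=0.5):
    """Toy stand-in for the trained segmentation network:
    thresholds voxel intensities to produce a binary anatomical mask."""
    return volume > threshold

def register_atlas(atlas_mask, image_mask):
    """Estimate the slice offset that best aligns the annotated atlas
    with the patient volume by maximizing mask overlap (Dice-like score).
    A real system would use deformable 3D registration."""
    n_img, n_atlas = image_mask.shape[0], atlas_mask.shape[0]
    best_offset, best_score = 0, -1.0
    for offset in range(n_img - n_atlas + 1):
        window = image_mask[offset:offset + n_atlas]
        inter = np.logical_and(window, atlas_mask).sum()
        denom = window.sum() + atlas_mask.sum()
        score = 2.0 * inter / denom if denom else 0.0
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

def build_correspondences(atlas_points, offset):
    """Map each annotated 2D-map point (keyed by its atlas slice index)
    to the corresponding slice index in the patient volume, so that a
    user selecting a map point can be shown the matching 2D slice."""
    return {point: slice_idx + offset for point, slice_idx in atlas_points.items()}

# Demo: a 3-slice atlas whose bright middle slice matches slice 3 of an
# 8-slice patient volume, implying an alignment offset of 2.
atlas = np.zeros((3, 4, 4)); atlas[1] = 1.0
image = np.zeros((8, 4, 4)); image[3] = 1.0
offset = register_atlas(segment(atlas), segment(image))
corr = build_correspondences({"patella_point": 1}, offset)
```

In this sketch the returned `corr` dictionary plays the role of the claimed correspondences: when the user selects a point on the updated 2D map, the system looks up the matching 3D-image slice index and displays that slice.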