CPC G06T 17/20 (2013.01) [A61B 1/00009 (2013.01); A61B 1/3132 (2013.01); A61B 34/10 (2016.02); G06T 7/344 (2017.01); G06T 7/50 (2017.01); G06T 7/75 (2017.01); G06T 7/80 (2017.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); A61B 1/00057 (2013.01); A61B 2034/105 (2016.02); G06T 2207/10016 (2013.01); G06T 2207/10068 (2013.01); G06T 2207/20092 (2013.01); G06T 2207/30004 (2013.01); G06T 2210/41 (2013.01); G06T 2219/2004 (2013.01); G06T 2219/2016 (2013.01)]; 18 Claims
1. A method implemented on at least one processor, a memory, and a communication platform for fusing a three-dimensional (3D) virtual model with a two-dimensional (2D) image associated with an organ of a patient, comprising:
determining a key-pose to represent an approximate position and orientation of a medical instrument with respect to the patient's organ;
generating, based on the key-pose, an overlay on a 2D image of the patient's organ, acquired via the medical instrument, by projecting a 3D virtual model for the patient's organ;
obtaining a first pair of corresponding feature points comprising a first 2D feature point from the organ observed in the 2D image and a first corresponding 3D feature point from the 3D virtual model; and
determining a first 3D coordinate of the first 3D feature point with respect to a camera coordinate system based on a first 2D coordinate of the first 2D feature point with respect to an image coordinate system, wherein a depth of the first 3D coordinate is on a line of sight of the first 2D feature point and is determined so that a projection of the 3D virtual model at the depth creates the overlay approximately matching the organ observed in the 2D image, wherein the depth is determined by:
determining, respectively, a minimum depth value and a maximum depth value to represent a depth range;
projecting, at each depth within the depth range, the 3D virtual model onto an image plane of the 2D image; and
selecting a depth value within the depth range that yields a best match between the projection of the 3D virtual model and the patient's organ observed in the 2D image, resulting in the depth.
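The depth-selection steps recited above can be sketched in code. This is a minimal illustration only, not the patented implementation: it assumes a pinhole camera with focal length f, represents the 3D virtual model as a set of point offsets anchored on the line of sight, and scores each candidate depth by reprojection error against 2D points observed on the organ. All identifiers (project, line_of_sight, match_error, select_depth) are hypothetical names introduced here.

```python
import math

def project(point3d, f=1.0):
    """Pinhole projection of a camera-frame point (X, Y, Z) to image (u, v)."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

def line_of_sight(uv, f=1.0):
    """Unit direction of the camera ray through image point (u, v)."""
    u, v = uv
    n = math.sqrt(u * u + v * v + f * f)
    return (u / n, v / n, f / n)

def match_error(model_offsets, depth, ray, observed_2d, f=1.0):
    """Sum of squared 2D distances between the projected model points and
    the organ points observed in the image, with the model anchored on the
    ray at the candidate depth (a stand-in for the claim's 'best match')."""
    ax, ay, az = (depth * ray[0], depth * ray[1], depth * ray[2])
    err = 0.0
    for (dx, dy, dz), (ou, ov) in zip(model_offsets, observed_2d):
        u, v = project((ax + dx, ay + dy, az + dz), f)
        err += (u - ou) ** 2 + (v - ov) ** 2
    return err

def select_depth(model_offsets, ray, observed_2d, d_min, d_max, step=0.01, f=1.0):
    """Sweep candidate depths in [d_min, d_max] and return the depth whose
    projection best matches the observed organ (minimum reprojection error)."""
    best_d, best_e = d_min, float("inf")
    d = d_min
    while d <= d_max:
        e = match_error(model_offsets, d, ray, observed_2d, f)
        if e < best_e:
            best_d, best_e = d, e
        d += step
    return best_d
```

As a sanity check, synthesizing the observed 2D points from a known depth and running the sweep recovers that depth to within the step size; a real system would replace the squared-distance score with an image-based similarity measure between the rendered overlay and the endoscopic view.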