US 11,989,833 B2
Method and system of model fusion for laparoscopic surgical guidance
Xiaonan Zang, Princeton, NJ (US); Guo-Qing Wei, Plainsboro, NJ (US); Cheng-Chung Liang, West Windsor, NJ (US); Li Fan, Belle Mead, NJ (US); Xiaolan Zeng, Princeton, NJ (US); and Jianzhong Qian, Princeton Junction, NJ (US)
Assigned to EDDA TECHNOLOGY, INC., Princeton, NJ (US)
Filed by EDDA TECHNOLOGY, INC., Princeton, NJ (US)
Filed on May 16, 2022, as Appl. No. 17/745,555.
Claims priority of provisional application 63/188,625, filed on May 14, 2021.
Prior Publication US 2022/0361730 A1, Nov. 17, 2022
Int. Cl. G06T 7/50 (2017.01); A61B 1/00 (2006.01); A61B 1/313 (2006.01); A61B 34/10 (2016.01); G06T 7/33 (2017.01); G06T 7/73 (2017.01); G06T 7/80 (2017.01); G06T 17/20 (2006.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01)
CPC G06T 17/20 (2013.01) [A61B 1/00009 (2013.01); A61B 1/3132 (2013.01); A61B 34/10 (2016.02); G06T 7/344 (2017.01); G06T 7/50 (2017.01); G06T 7/75 (2017.01); G06T 7/80 (2017.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); A61B 1/00057 (2013.01); A61B 2034/105 (2016.02); G06T 2207/10016 (2013.01); G06T 2207/10068 (2013.01); G06T 2207/20092 (2013.01); G06T 2207/30004 (2013.01); G06T 2210/41 (2013.01); G06T 2219/2004 (2013.01); G06T 2219/2016 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method implemented on at least one processor, a memory, and a communication platform for fusing a three-dimensional (3D) virtual model with a two-dimensional (2D) image associated with an organ of a patient, comprising:
determining a key-pose to represent an approximate position and orientation of a medical instrument with respect to the patient's organ;
generating, based on the key-pose, an overlay on a 2D image of the patient's organ, acquired via the medical instrument, by projecting a 3D virtual model for the patient's organ;
obtaining a first pair of corresponding feature points comprising a first 2D feature point from the organ observed in the 2D image and a first corresponding 3D feature point from the 3D virtual model; and
determining a first 3D coordinate of the first 3D feature point with respect to a camera coordinate system based on a first 2D coordinate of the first 2D feature point with respect to an image coordinate system, wherein a depth of the first 3D coordinate is on a line of sight of the first 2D feature point and is determined so that a projection of the 3D virtual model at the depth creates the overlay approximately matching the organ observed in the 2D image, wherein the depth is determined by:
determining, respectively, a minimum depth value and a maximum depth value to represent a depth range;
projecting, at each depth within the range, the 3D virtual model onto the 2D image plane; and
selecting a depth value within the range that yields a best match between the projection of the 3D virtual model and the patient's organ observed in the 2D image, resulting in the depth.
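The depth-search limitation of claim 1 can be illustrated with a minimal sketch. The code below is not the patented implementation; it assumes a simple pinhole camera model, a point-cloud stand-in for the 3D virtual model (expressed relative to the first 3D feature point), and a nearest-neighbor distance as the "best match" criterion, all of which are hypothetical choices for illustration. The model is slid along the line of sight of the first 2D feature point over a depth range, projected at each candidate depth, and the depth whose projection best matches the observed organ outline is selected.

```python
import numpy as np

def project(points_3d, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of camera-frame 3D points to 2D pixel coordinates.
    Intrinsics (fx, fy, cx, cy) are illustrative placeholder values."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

def ray_through_pixel(u, v, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Unit direction of the line of sight through pixel (u, v) --
    the ray on which the first 3D feature point must lie."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def match_score(proj_2d, observed_2d):
    """Mean nearest-neighbor distance from the observed organ points
    to the projected model points (lower is a better match)."""
    d = np.linalg.norm(observed_2d[:, None, :] - proj_2d[None, :, :], axis=2)
    return d.min(axis=1).mean()

def search_depth(model_3d, ray, observed_2d, d_min, d_max, steps=151):
    """Scan candidate depths in [d_min, d_max] along the line of sight;
    return the depth whose projection best matches the observed organ.
    `model_3d` is the virtual model expressed relative to the matched
    3D feature point, so placing that point at depth*ray positions it."""
    best_depth, best_score = None, np.inf
    for depth in np.linspace(d_min, d_max, steps):
        placed = model_3d + depth * ray          # feature point at depth on ray
        score = match_score(project(placed), observed_2d)
        if score < best_score:
            best_depth, best_score = depth, score
    return best_depth
```

As a usage check, one can synthesize an "observed" outline by projecting the model at a known depth and confirm that the search recovers it; in practice the match would be scored against segmented organ contours in the laparoscopic frame rather than known point correspondences.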