US 11,900,541 B2
Method and system of depth determination with closed form solution in model fusion for laparoscopic surgical guidance
Xiaonan Zang, Princeton, NJ (US); Guo-Qing Wei, Plainsboro, NJ (US); Cheng-Chung Liang, West Windsor, NJ (US); Li Fan, Belle Mead, NJ (US); Xiaolan Zeng, Princeton, NJ (US); and Jianzhong Qian, Princeton Junction, NJ (US)
Assigned to EDDA TECHNOLOGY, INC., Princeton, NJ (US)
Filed by EDDA TECHNOLOGY, INC., Princeton, NJ (US)
Filed on May 16, 2022, as Appl. No. 17/745,600.
Claims priority of provisional application 63/188,625, filed on May 14, 2021.
Prior Publication US 2022/0375173 A1, Nov. 24, 2022
Int. Cl. G06T 17/20 (2006.01); A61B 34/10 (2016.01); G06T 19/20 (2011.01); G06T 7/50 (2017.01); G06T 7/33 (2017.01); G06T 7/73 (2017.01); A61B 1/00 (2006.01); A61B 1/313 (2006.01); G06T 7/80 (2017.01); G06T 19/00 (2011.01)
CPC G06T 17/20 (2013.01) [A61B 1/00009 (2013.01); A61B 1/3132 (2013.01); A61B 34/10 (2016.02); G06T 7/344 (2017.01); G06T 7/50 (2017.01); G06T 7/75 (2017.01); G06T 7/80 (2017.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); A61B 1/00057 (2013.01); A61B 2034/105 (2016.02); G06T 2207/10016 (2013.01); G06T 2207/10068 (2013.01); G06T 2207/20092 (2013.01); G06T 2207/30004 (2013.01); G06T 2210/41 (2013.01); G06T 2219/2004 (2013.01); G06T 2219/2016 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method implemented on at least one processor, a memory, and a communication platform for estimating a three-dimensional (3D) coordinate of a 3D virtual model, comprising:
accessing a 3D virtual model constructed for an organ of a patient based on a plurality of images of the organ prior to a medical procedure;
obtaining a first pair of corresponding feature points, with a first two-dimensional (2D) feature point on the organ observed in a 2D image acquired during the medical procedure and a first corresponding 3D feature point from the 3D virtual model;
obtaining a second pair of corresponding feature points, with a second 2D feature point on the organ observed in the 2D image and a second corresponding 3D feature point from the 3D virtual model, wherein the depths of the first and the second 3D feature points are substantially the same; and
automatically determining a first 3D coordinate of the first 3D feature point and a second 3D coordinate of the second 3D feature point based on the first and the second pairs of corresponding feature points so that a first distance between the determined first 3D coordinate and the determined second 3D coordinate is equal to a second distance between a first actual 3D coordinate of the first 3D feature point and a second actual 3D coordinate of the second 3D feature point in the 3D virtual model.
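The closed-form depth determination recited in claim 1 can be illustrated with a minimal sketch. The sketch below assumes a pinhole laparoscope camera with known intrinsics (fx, fy, cx, cy), which the claim does not specify, and the function name closed_form_depth and its parameters are hypothetical. Because both feature points are taken to lie at substantially the same depth Z, the distance between their back-projections grows linearly with Z, so equating that distance to the distance measured between the corresponding feature points on the pre-operative 3D virtual model yields Z directly.

```python
import math

def closed_form_depth(p1_2d, p2_2d, P1_model, P2_model, fx, fy, cx, cy):
    """Recover a common depth Z and camera-frame coordinates of two feature
    points from a single intra-operative 2D image.

    Assumptions (not taken from the patent text): pinhole camera with known
    intrinsics; both points share the same depth Z.

    p1_2d, p2_2d       : (u, v) pixel coordinates of the 2D feature points.
    P1_model, P2_model : (x, y, z) coordinates of the corresponding feature
                         points on the pre-operative 3D virtual model.
    """
    # Normalized image coordinates (viewing rays scaled to unit depth).
    x1, y1 = (p1_2d[0] - cx) / fx, (p1_2d[1] - cy) / fy
    x2, y2 = (p2_2d[0] - cx) / fx, (p2_2d[1] - cy) / fy

    # Target distance: the actual distance between the two feature points
    # measured on the 3D virtual model (preserved under rigid motion).
    d_model = math.dist(P1_model, P2_model)

    # With a common depth Z, the back-projected points are
    #   (Z*x1, Z*y1, Z) and (Z*x2, Z*y2, Z),
    # so their separation is Z * sqrt((x1 - x2)^2 + (y1 - y2)^2).
    # Equating that to d_model gives the closed-form depth.
    ray_gap = math.hypot(x1 - x2, y1 - y2)
    Z = d_model / ray_gap

    P1_cam = (Z * x1, Z * y1, Z)
    P2_cam = (Z * x2, Z * y2, Z)
    return Z, P1_cam, P2_cam


# Hypothetical usage with made-up pixel and model coordinates:
Z, P1_cam, P2_cam = closed_form_depth(
    p1_2d=(812.0, 455.0), p2_2d=(968.0, 430.0),
    P1_model=(12.3, -4.1, 55.0), P2_model=(31.8, -2.7, 54.2),
    fx=1100.0, fy=1100.0, cx=960.0, cy=540.0)
```

The key design point is that the equal-depth constraint reduces the unknowns to a single scalar Z, so no iterative optimization is needed; the depth and both 3D coordinates follow in one step from the two 2D-3D correspondences.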