US 12,380,537 B2
2D and 3D color fusion imaging
Robert Trout, Durham, NC (US); Joseph Izatt, Durham, NC (US); Christian B. Viehland, Durham, NC (US); Cynthia Toth, Durham, NC (US); Anthony Kuo, Durham, NC (US); Jianwei Li, Durham, NC (US); Lejla Vajzovic, Durham, NC (US); and Al-Hafeez Dhalla, Durham, NC (US)
Filed by DUKE UNIVERSITY, Durham, NC (US)
Filed on May 11, 2022, as Appl. No. 17/741,961.
Claims priority of provisional application 63/186,984, filed on May 11, 2021.
Prior Publication US 2022/0366551 A1, Nov. 17, 2022
Int. Cl. G06T 5/50 (2006.01); A61B 6/00 (2006.01); A61B 6/46 (2024.01)
CPC G06T 5/50 (2013.01) [A61B 6/466 (2013.01); A61B 6/481 (2013.01); G06T 2207/20221 (2013.01); G06T 2207/30041 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving 3D imaging data and 2D color imaging data of a region of interest;
segmenting the 3D imaging data to identify anatomical features in the region of interest, including surfaces of the anatomical features and corresponding volumes of the anatomical features;
generating an image by fusing the 2D color imaging data to the 3D imaging data according to the surfaces, the corresponding volumes, and identities of the anatomical features; and
rendering a final image at an output image plane by casting, for each pixel and viewpoint of the output image plane, a ray through the fused 3D imaging data of the generated image,
wherein, for each surface in the fused 3D imaging data that the ray intersects, a surface shader is computed from volume features at the point of surface intersection and its contribution is added for that pixel and viewpoint of the output image plane.
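The claimed pipeline (segment surfaces in the 3D data, fuse the 2D color onto the volume, then ray-cast with a surface shader per intersection) can be illustrated with a minimal sketch. This is not the patented implementation; it assumes, hypothetically, an OCT-like intensity volume, an en face 2D RGB image, rays cast along the depth (z) axis, a crude threshold-based surface segmentation, and a simple Lambertian shader in place of whatever shading model the specification describes.

```python
import numpy as np

def segment_surface(volume, threshold=0.5):
    """Per (x, y) column, return the first z index where intensity exceeds
    the threshold (a crude top-surface segmentation), or -1 if none."""
    mask = volume > threshold                      # (Z, Y, X) boolean
    hit = mask.any(axis=0)
    first = np.argmax(mask, axis=0)                # first True along z
    return np.where(hit, first, -1)                # (Y, X) surface depths

def fuse_color(volume, color2d):
    """Fuse the 2D RGB data onto the 3D data: attach each column's color
    to every voxel in that column, modulated by the local 3D intensity."""
    return volume[..., None] * color2d[None, ...]  # (Z, Y, X, 3)

def render(fused, surface, light=np.array([0.0, 0.0, 1.0])):
    """Cast one ray per output pixel along z; at the surface intersection,
    evaluate a Lambertian surface shader from the local depth gradient."""
    Z, Y, X, _ = fused.shape
    image = np.zeros((Y, X, 3))
    gy, gx = np.gradient(surface.astype(float))    # approximate normals
    for y in range(Y):
        for x in range(X):
            z = surface[y, x]
            if z < 0:
                continue                           # ray misses all surfaces
            normal = np.array([-gx[y, x], -gy[y, x], 1.0])
            normal /= np.linalg.norm(normal)
            shade = max(float(np.dot(normal, light)), 0.0)
            image[y, x] = shade * fused[z, y, x]   # add shaded contribution
    return image

# Toy data: a tilted planar "surface" inside a 16^3 volume, colored green.
Z = Yd = Xd = 16
volume = np.zeros((Z, Yd, Xd))
for yy in range(Yd):
    volume[4 + yy // 4:, yy, :] = 1.0              # surface depth varies with y
color2d = np.zeros((Yd, Xd, 3))
color2d[..., 1] = 1.0                              # green en face image
out = render(fuse_color(volume, color2d), segment_surface(volume))
```

For a single fixed viewpoint the outer loops reduce to one ray per output pixel; the claim's per-viewpoint rendering would repeat this with different ray origins and directions.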