US 12,067,675 B2
Autonomous reconstruction of vessels on computed tomography images
Kris Siemionow, Chicago, IL (US); Paul Lewicki, Tulsa, OK (US); Marek Kraft, Poznan (PL); Dominik Pieczynski, Tulce (PL); Michal Mikolajczak, Poznan (PL); and Jacek Kania, Rogozno (PL)
Assigned to KARDIOLYTICS INC., Tulsa, OK (US)
Filed by Kardiolytics Inc., Tulsa, OK (US)
Filed on Mar. 27, 2022, as Appl. No. 17/705,336.
Claims priority of application No. 21160817 (EP), filed on Mar. 4, 2021.
Prior Publication US 2022/0335687 A1, Oct. 20, 2022
Int. Cl. G06T 17/00 (2006.01); G06T 19/20 (2011.01)
CPC G06T 17/00 (2013.01) [G06T 19/20 (2013.01); G06T 2210/41 (2013.01); G06T 2219/2004 (2013.01)] 7 Claims
OG exemplary drawing
 
1. A computer-implemented method for autonomous reconstruction of vessels on computed tomography images, the method comprising:
providing a reconstruction convolutional neural network (CNN) that is pre-trained with a plurality of batches of training data comprising known 3D models of vessels and their reconstructed 3D models, in order to generate a reconstructed 3D model fragment based on an input 3D model fragment;
receiving an input 3D model of a vessel to be reconstructed;
defining a region of interest (ROI) and a movement step, wherein the ROI is a 3D volume that covers an area to be processed;
defining a starting position and positioning the ROI at the starting position;
reconstructing a shape of the input 3D model within the ROI by inputting a fragment of the input 3D model within the ROI to the reconstruction convolutional neural network (CNN) and receiving the reconstructed 3D model fragment;
moving the ROI by the movement step along a scanning path;
repeating the reconstruction and moving steps to reconstruct a desired portion of the input 3D model at consecutive ROI positions;
combining the reconstructed 3D model fragments from the consecutive ROI positions to obtain a reconstructed 3D model of the vessel; and
comparing the reconstructed 3D model fragment with the input 3D model fragment to determine a difference between the reconstructed 3D model fragment and the input 3D model fragment, wherein if the difference exceeds a threshold, the movement step is decreased.
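For illustration only, the sliding-ROI procedure of claim 1 can be sketched in Python. This is a minimal sketch under stated assumptions: the scanning path is taken as an axis-aligned pass along the volume's first axis, the claimed CNN is replaced by an arbitrary callable `reconstruct_fn`, overlapping fragments are combined by a voxelwise maximum, and the difference measure is a mean absolute voxel difference; none of these specifics are fixed by the claim itself.

```python
import numpy as np

def reconstruct_vessel(model, roi_size, step, start, path_length,
                       reconstruct_fn, diff_threshold, min_step=1):
    """Slide a ROI of `roi_size` slices along the first axis of a binary
    vessel volume, reconstruct each fragment, and combine the results.

    `reconstruct_fn` stands in for the pre-trained reconstruction CNN:
    it maps an input 3D model fragment to a reconstructed fragment.
    When the reconstructed fragment differs from the input fragment by
    more than `diff_threshold`, the movement step is decreased.
    """
    reconstructed = np.zeros_like(model)
    z = start
    while z + roi_size <= path_length:
        fragment = model[z:z + roi_size]            # input fragment within the ROI
        out = reconstruct_fn(fragment)              # CNN inference (hypothetical stub)
        # combine fragments from consecutive ROI positions (voxelwise max)
        reconstructed[z:z + roi_size] = np.maximum(
            reconstructed[z:z + roi_size], out)
        # compare reconstructed fragment with input fragment
        diff = np.abs(out - fragment).mean()
        if diff > diff_threshold:
            step = max(min_step, step // 2)         # decrease movement step
        z += step                                   # move ROI along scanning path
    return reconstructed
```

With an identity `reconstruct_fn` the loop simply copies the input volume; in the claimed system the CNN would instead fill in occluded or poorly segmented vessel regions, and the adaptive step concentrates ROI positions where the model changes most.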