US 11,900,620 B2
Method and system for registering images containing anatomical structures
Florent Lalys, Rennes (FR); Mathieu Colleaux, Rennes (FR); and Vincent Gratsac, Thorigné-Fouillard (FR)
Assigned to THERENVA, Rennes (FR)
Filed by THERENVA, Rennes (FR)
Filed on May 19, 2021, as Appl. No. 17/324,633.
Claims priority of application No. 2005371 (FR), filed on May 20, 2020.
Prior Publication US 2021/0366135 A1, Nov. 25, 2021
Int. Cl. G06F 18/2431 (2023.01); G06T 7/33 (2017.01)
CPC G06T 7/33 (2017.01) [G06F 18/2431 (2023.01); G06T 2200/04 (2013.01); G06T 2207/10064 (2013.01); G06T 2207/10081 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20221 (2013.01); G06T 2207/30101 (2013.01); G06V 2201/03 (2022.01)] 11 Claims
OG exemplary drawing
 
1. A method for registration between a first three-dimensional image acquired according to a first acquisition mode, and comprising anatomical structures of a patient, and a second two-dimensional image, acquired according to a second acquisition mode by an image acquisition device mounted on a scoping arch movable in rotation and in translation, the second image comprising a portion of the anatomical structures of said patient, the registration implementing a rigid spatial transformation defined by rotation and translation parameters,
the method being implemented by a processor of a programmable electronic device and comprising:
automatic detection of the anatomical structures in the two-dimensional image by application of a first detection neural network trained on a generic database,
estimation, from the anatomical structures automatically detected in said second two-dimensional image, by application of at least one classification neural network trained beforehand on a generic database, of the rotation and translation parameters of said rigid spatial transformation, said estimation including:
a first estimation, from said anatomical structures automatically detected in said second two-dimensional image, by application of said at least one classification neural network, of a first angle of rotation and of a second angle of rotation of said rigid spatial transformation, characterizing the position of the scoping arch, and
a second estimation of translational parameters, of a third angle of rotation and of a zoom factor of said rigid spatial transformation, the second estimation using a result of the first estimation, and
3D/2D iconic registration between the first three-dimensional image and the second two-dimensional image starting from an initialization with said rigid spatial transformation.
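
For illustration only, the following PyTorch sketch mirrors the structure of the claimed two-stage estimation: a detection network localizes the anatomical structures in the 2D image, a classification network estimates the two rotation angles characterizing the scoping arch (C-arm) position, and a second stage estimates the translations, third angle and zoom factor using the first-stage result. It is not the patented implementation; the ResNet-18 backbones, the angle binning, the 224x224 ROI size, the torchvision-style detector interface and the `SecondStageEstimator` conditioning scheme are all assumptions introduced for the example.

```python
# Sketch (not the patented implementation) of the two-stage estimation of the
# rigid transformation parameters from a single 2D fluoroscopic image.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class AngleClassifier(nn.Module):
    """Classification network that bins one rotation angle into discrete classes."""
    def __init__(self, num_bins=37, angle_range=(-90.0, 90.0)):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_bins)
        self.net = backbone
        # Bin centres in degrees; the range and bin count are illustrative assumptions.
        self.register_buffer("bin_centers",
                             torch.linspace(angle_range[0], angle_range[1], num_bins))

    def forward(self, x):
        # Expected value over the softmax turns the class scores into an angle.
        return (self.net(x).softmax(dim=1) * self.bin_centers).sum(dim=1)

class SecondStageEstimator(nn.Module):
    """Estimates tx, ty, the third rotation angle and the zoom factor,
    conditioned on the first-stage angles (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(512 + 2, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, x, alpha, beta):
        feats = self.backbone(x)
        cond = torch.stack([alpha, beta], dim=1)
        return self.head(torch.cat([feats, cond], dim=1))

def estimate_rigid_parameters(image_2d, detector, angle_net_1, angle_net_2, second_stage):
    """Two-stage estimation of the rigid transformation parameters.

    image_2d : (1, 3, H, W) tensor, the 2D fluoroscopic image.
    detector : torchvision-style detection network (takes a list of (C, H, W)
               images, returns a list of dicts with a 'boxes' tensor).
    """
    with torch.no_grad():
        # Step 1: automatic detection of the anatomical structures in the 2D image.
        detections = detector([image_2d[0]])
        x0, y0, x1, y1 = detections[0]["boxes"][0].round().int().tolist()
        roi = F.interpolate(image_2d[..., y0:y1, x0:x1], size=(224, 224), mode="bilinear")

        # Step 2a: first estimation - the two rotation angles of the scoping arch.
        alpha = angle_net_1(roi)
        beta = angle_net_2(roi)

        # Step 2b: second estimation - translations, third angle and zoom factor,
        # using the result of the first estimation as conditioning input.
        tx, ty, gamma, zoom = second_stage(roi, alpha, beta).unbind(dim=1)

    return {"angles": (alpha, beta, gamma), "translation": (tx, ty), "zoom": zoom}
```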
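A similarly hedged sketch of the final claimed step, the 3D/2D iconic (intensity-based) registration initialized with the estimated rigid transformation, is given below. The parallel-projection DRR, the normalized cross-correlation metric and the Powell optimizer are simplifying assumptions for the example; the patent does not specify these choices, and a real system would use a calibrated cone-beam projection model.

```python
# Sketch (simplified): intensity-based ("iconic") 3D/2D registration refinement,
# initialized with the rigid transformation estimated by the neural networks.
import numpy as np
from scipy.ndimage import affine_transform, zoom as nd_zoom
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project_drr(volume, angles_deg, translation, scale):
    """Rigidly transform the 3D volume and project it along axis 0 (parallel rays)."""
    rot = Rotation.from_euler("zyx", angles_deg, degrees=True).as_matrix()
    center = (np.array(volume.shape) - 1) / 2.0
    offset = center - rot @ center + np.array([0.0, translation[0], translation[1]])
    moved = affine_transform(volume, rot, offset=offset, order=1)
    drr = moved.sum(axis=0)                      # line integrals -> synthetic radiograph
    return nd_zoom(drr, scale, order=1)          # apply the zoom factor

def ncc(a, b):
    """Normalized cross-correlation between two images of the same shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def iconic_registration(volume, fluoro, init):
    """Refine the angles and in-plane translation, starting from the network estimate."""
    def cost(p):
        drr = project_drr(volume, p[:3], p[3:5], init["zoom"])
        h = min(drr.shape[0], fluoro.shape[0])
        w = min(drr.shape[1], fluoro.shape[1])
        return -ncc(drr[:h, :w], fluoro[:h, :w])  # minimize the negative similarity

    x0 = np.array([*init["angles"], *init["translation"]], dtype=float)
    return minimize(cost, x0, method="Powell").x

# Example use (values from the networks converted to plain floats):
# init = {"angles": (alpha.item(), beta.item(), gamma.item()),
#         "translation": (tx.item(), ty.item()), "zoom": float(zoom)}
# refined_pose = iconic_registration(ct_volume, fluoro_image, init)
```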