US 12,245,825 B2
Systems and methods of using photogrammetry for intraoperatively aligning surgical elements
Brian R. Harris, Jr., Cordova, TN (US); and Fred W. Bowman, Germantown, TN (US)
Assigned to MicroPort Orthopedics Holdings Inc., Arlington, TN (US)
Filed by MicroPort Orthopedics Holdings Inc., Arlington, TN (US)
Filed on Sep. 28, 2022, as Appl. No. 17/936,130.
Claims priority of provisional application 63/250,906, filed on Sep. 30, 2021.
Prior Publication US 2023/0094903 A1, Mar. 30, 2023
Int. Cl. A61B 34/20 (2016.01); A61B 90/00 (2016.01); A61F 2/46 (2006.01); G06T 7/60 (2017.01); G06T 7/70 (2017.01)
CPC A61B 34/20 (2016.02) [A61F 2/4607 (2013.01); A61F 2/4609 (2013.01); G06T 7/60 (2013.01); G06T 7/70 (2017.01); A61B 2090/373 (2016.02); A61F 2002/4668 (2013.01); G06T 2207/10121 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30008 (2013.01); G06T 2207/30052 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A system for ascertaining a position of an orthopedic element and a component of an endoprosthetic implant in space comprising:
a tissue-penetrating imaging machine;
a first two-dimensional input image, the first two-dimensional input image taken by the tissue-penetrating imaging machine from a first reference frame, the first two-dimensional input image depicting a calibration jig;
a second two-dimensional input image, the second two-dimensional input image taken by the tissue-penetrating imaging machine from a second reference frame, the second reference frame being offset from the first reference frame at a first offset angle, the second two-dimensional input image depicting the calibration jig;
a third two-dimensional input image, the third two-dimensional input image taken by the tissue-penetrating imaging machine from a third reference frame, the third reference frame being offset from both the first reference frame and the second reference frame at a second offset angle, the third two-dimensional input image depicting the calibration jig, and the first two-dimensional input image, the second two-dimensional input image, and the third two-dimensional input image comprising spatial data; and
a computational machine configured to project the spatial data from the first two-dimensional input image, the second two-dimensional input image, and the third two-dimensional input image at the first offset angle and the second offset angle to define volume data, the computational machine further configured to run a deep learning network, wherein the deep learning network is configured to identify an orthopedic element from the volume data and a component of an endoprosthetic implant from the volume data to define an identified orthopedic element and an identified component of the endoprosthetic implant, and to map the identified orthopedic element and the identified component of the endoprosthetic implant to the spatial data ascertained by the first two-dimensional input image, the second two-dimensional input image, and the third two-dimensional input image to thereby determine the position of the identified orthopedic element and the identified component of the endoprosthetic implant in three-dimensional space, and wherein the computational machine is further configured to calculate an abduction angle of the identified component of the endoprosthetic implant or an anteversion angle of the identified component of the endoprosthetic implant.
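The final clause of claim 1 recites calculating an abduction angle or an anteversion angle of the identified implant component. As an illustrative sketch only (not part of the patent record, and not the patentee's disclosed method), the standard radiographic definitions of these angles for an acetabular cup can be computed from a unit vector along the cup axis expressed in a pelvic coordinate frame. The coordinate conventions and function names below are assumptions made for illustration.

```python
import math

def cup_angles(axis):
    """Radiographic inclination (abduction) and anteversion of an
    acetabular cup from its face-normal unit vector.

    Assumed pelvic frame: x = lateral, y = anterior, z = superior.
    Returns (inclination_deg, anteversion_deg).
    """
    nx, ny, nz = axis
    # Anteversion: angle between the cup axis and the coronal (x-z) plane.
    anteversion = math.degrees(math.asin(ny))
    # Inclination (abduction): angle between the cup axis projected onto
    # the coronal plane and the longitudinal (z) axis.
    inclination = math.degrees(math.atan2(nx, nz))
    return inclination, anteversion

def axis_from_angles(inclination_deg, anteversion_deg):
    """Inverse: build the unit cup-axis vector from the two angles."""
    ri = math.radians(inclination_deg)
    ra = math.radians(anteversion_deg)
    return (math.sin(ri) * math.cos(ra),  # lateral component
            math.sin(ra),                 # anterior component
            math.cos(ri) * math.cos(ra))  # superior component
```

Round-tripping a nominal placement of 45° inclination and 20° anteversion through `axis_from_angles` and back through `cup_angles` recovers the same angles, which is a simple sanity check for this kind of calculation.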