US 12,251,173 B2
Markerless navigation using AI computer vision
Thomas Harte, London (GB); and Huy Quoc Phan, Fleet (GB)
Assigned to HEALTHCARE OUTCOMES PERFORMANCE COMPANY LIMITED, Altrincham (GB)
Filed by Healthcare Outcomes Performance Company Limited, Altrincham (GB)
Filed on Nov. 14, 2023, as Appl. No. 18/509,079.
Application 18/509,079 is a continuation of application No. 17/243,333, filed on Apr. 28, 2021, granted, now Pat. No. 11,857,271.
Claims priority of provisional application 63/017,447, filed on Apr. 29, 2020, and provisional application 63/074,338, filed on Sep. 3, 2020.
Prior Publication US 2024/0081917 A1, Mar. 14, 2024
Int. Cl. G06T 7/10 (2017.01); A61B 34/20 (2016.01); A61B 90/00 (2016.01); G06N 3/08 (2023.01); G06T 7/73 (2017.01)
CPC A61B 34/20 (2016.02) [A61B 90/39 (2016.02); G06N 3/08 (2013.01); G06T 7/10 (2017.01); G06T 7/74 (2017.01); A61B 2034/2057 (2016.02); A61B 2034/2065 (2016.02); A61B 2090/3945 (2016.02); A61B 2090/3983 (2016.02); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-based method for tracking an object of interest comprising:
(a) inputting data of an image comprising a light beam projected onto a contour of an object of interest into a software module using a processor;
(b) applying a first set of a predetermined number (N) of convolution filters to the data of the image to generate first filtered images and merging the first filtered images into a first merged image using the software module;
(c) quantizing the data of the image by dividing the data of the image into M bins using a comb mask having M teeth and selecting for pixel data above a threshold in the data divided into M bins using the software module;
(d) reconstructing a three-dimensional profile from the image using the software module;
(e) converting the three-dimensional profile to a two-dimensional profile using the software module;
(f) generating a feature vector by normalizing and concatenating the two-dimensional profile using the software module; and
(g) generating a pose vector by inputting the feature vector to a machine learning model, wherein the pose vector provides at least one of location, orientation, and rotation of the object of interest.
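
For orientation, the following Python sketch illustrates one way steps (b) and (c) of the claim could be realized. The Sobel kernels, the per-pixel-maximum merge, the strip-shaped comb teeth, and the threshold value are all assumptions made for illustration; the claim itself does not specify the filters, N, M, or the threshold.

```python
# Hypothetical sketch of claim steps (b)-(c). The kernel choices, N, M,
# and the threshold are illustrative, not taken from the patent.
import numpy as np
from scipy.ndimage import convolve

SOBEL_KERNELS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),   # vertical edges
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),   # horizontal edges
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),   # 45-degree edges
    np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),   # 135-degree edges
]

def filter_and_merge(image: np.ndarray, n_filters: int = 4) -> np.ndarray:
    """Step (b): apply N convolution filters to the image, then merge the
    N filtered images into one -- here by per-pixel maximum response."""
    filtered = [convolve(image, k) for k in SOBEL_KERNELS[:n_filters]]
    return np.maximum.reduce(filtered)

def comb_quantize(image: np.ndarray, m_bins: int, threshold: float) -> np.ndarray:
    """Step (c): split the image into M vertical strips (the comb's M
    'teeth') and keep only pixels above the threshold in each strip."""
    teeth = np.array_split(image, m_bins, axis=1)
    kept = [np.where(tooth > threshold, tooth, 0.0) for tooth in teeth]
    return np.hstack(kept)  # reassembled image, sub-threshold pixels zeroed
```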
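Continuing the sketch, steps (d) through (g) can be read as structured-light triangulation followed by feature extraction and learned pose regression. The triangulation geometry, the (x, z) projection in step (e), the min-max normalization, and the `model.predict` interface are likewise illustrative assumptions rather than the patent's disclosed implementation.

```python
# Hypothetical sketch of claim steps (d)-(g); continues the functions above.
import numpy as np

def reconstruct_3d_profile(beam_image: np.ndarray,
                           baseline: float, focal: float) -> np.ndarray:
    """Step (d): laser-line triangulation -- per image column, take the
    brightest row as the beam position and convert its offset from the
    optical centre into depth."""
    rows = np.argmax(beam_image, axis=0).astype(float)
    offset = rows - beam_image.shape[0] / 2.0
    offset = np.where(np.abs(offset) < 1e-6, 1e-6, offset)  # avoid divide-by-zero
    depth = focal * baseline / offset
    cols = np.arange(beam_image.shape[1], dtype=float)
    return np.stack([cols, rows, depth], axis=1)  # one (x, y, z) per column

def to_2d_profile(profile_3d: np.ndarray) -> np.ndarray:
    """Step (e): reduce the 3-D profile to 2-D, here the (x, z) cross-section."""
    return profile_3d[:, [0, 2]]

def make_feature_vector(profile_2d: np.ndarray) -> np.ndarray:
    """Step (f): min-max normalize each coordinate, then concatenate the
    two coordinate series into a single flat feature vector."""
    lo, hi = profile_2d.min(axis=0), profile_2d.max(axis=0)
    normed = (profile_2d - lo) / np.maximum(hi - lo, 1e-9)
    return normed.T.reshape(-1)

def predict_pose(feature_vec: np.ndarray, model) -> np.ndarray:
    """Step (g): a trained regressor maps the feature vector to a pose
    vector (location / orientation / rotation); 'model' stands in for
    whatever network the implementation actually trains."""
    return model.predict(feature_vec[None, :])[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160))  # stand-in for a camera frame
    beam = comb_quantize(filter_and_merge(frame), m_bins=8, threshold=0.5)
    feature = make_feature_vector(to_2d_profile(
        reconstruct_3d_profile(beam, baseline=0.1, focal=500.0)))
    print(feature.shape)  # the input a trained pose model would consume
```

In practice the random frame would be a calibrated camera image of the projected light beam, and predict_pose would be called with a model trained as described in the specification.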