US 11,883,312 B2
Methods and systems for using computer-vision to enhance surgical tool control during surgeries
Andre Chow, London (GB); Danail Stoyanov, London (GB); Imanol Luengo Muntion, London (GB); Petros Giataganas, London (GB); and Jean Nehme, London (GB)
Assigned to DIGITAL SURGERY LIMITED, London (GB)
Filed by DIGITAL SURGERY LIMITED, London (GB)
Filed on Aug. 31, 2022, as Appl. No. 17/899,687.
Application 17/899,687 is a continuation of application No. 16/933,454, filed on Jul. 20, 2020, granted, now 11,446,092, issued on Sep. 20, 2022.
Application 16/933,454 is a continuation of application No. 16/511,978, filed on Jul. 15, 2019, granted, now 10,758,309, issued on Sep. 1, 2020.
Prior Publication US 2022/0409285 A1, Dec. 29, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. A61F 5/00 (2006.01); A61B 34/10 (2016.01); G06N 20/00 (2019.01); G05B 19/4155 (2006.01); A61B 17/32 (2006.01); A61B 17/068 (2006.01); G06V 20/40 (2022.01); A61B 1/00 (2006.01); G06F 18/21 (2023.01); G06V 10/70 (2022.01); A61B 17/00 (2006.01)
CPC A61F 5/0076 (2013.01) [A61B 1/000096 (2022.02); A61B 17/068 (2013.01); A61B 17/320016 (2013.01); A61B 17/320092 (2013.01); A61B 34/10 (2016.02); G05B 19/4155 (2013.01); G06F 18/217 (2023.01); G06N 20/00 (2019.01); G06V 10/70 (2022.01); G06V 20/41 (2022.01); A61B 2017/00061 (2013.01); A61B 2017/00119 (2013.01); A61B 2017/00199 (2013.01); A61B 2017/00212 (2013.01); A61B 2017/320082 (2017.08); A61B 2017/320095 (2017.08); A61B 2034/107 (2016.02); G05B 2219/36414 (2013.01); G06V 2201/034 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
receiving one or more data streams, each of the one or more data streams having been generated at and received from an electronic device configured and positioned to capture live video within a field of view during a particular surgical procedure being performed using one or more surgical tools, the one or more data streams including a sequence of images of the live video within the field of view;
inputting the one or more data streams into one or more models trained to recognize surgical tools and anatomical structures from image data;
in response to inputting the one or more data streams into the one or more models, detecting a surgical tool and detecting an anatomical structure from the sequence of images of the one or more data streams, the detection of the surgical tool and the anatomical structure being performed by utilizing the one or more models from the sequence of images of the live video, and the detection of the surgical tool indicating an orientation of the surgical tool in relation to the anatomical structure;
in response to the surgical tool being in a specific articulation, position, or orientation in relation to the anatomical structure, regulating performance of at least a first function of the surgical tool; and
in response to the surgical tool not being in the specific articulation, position, or orientation in relation to the anatomical structure, ceasing regulation of the performance of the at least the first function of the surgical tool.
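The control flow recited in claim 1 can be illustrated with a minimal sketch. This is not the patented implementation; the detector stub, class names, thresholds (`max_angle_deg`, `min_distance_mm`), and per-frame inputs are all hypothetical stand-ins for the trained models and pose criteria the claim leaves unspecified:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Output of the (hypothetical) tool/anatomy recognition models."""
    tool: str
    structure: str
    angle_deg: float    # orientation of the tool relative to the structure
    distance_mm: float  # distance of the tool tip from the structure

def detect(frame):
    """Stand-in for the trained model(s) of the claim.

    A real system would run tool and anatomical-structure recognition
    on the image; here the pose values are read straight from the frame.
    """
    return Detection(tool="stapler", structure="duct",
                     angle_deg=frame["angle"], distance_mm=frame["dist"])

def pose_warrants_regulation(d, max_angle_deg=15.0, min_distance_mm=5.0):
    """True when the tool's articulation/position/orientation relative
    to the anatomy matches the condition for regulating its function."""
    return d.angle_deg > max_angle_deg or d.distance_mm < min_distance_mm

def process_stream(frames):
    """Per-frame decision: regulate, or cease regulating, the first
    function of the surgical tool (the two response branches of claim 1)."""
    decisions = []
    for frame in frames:
        d = detect(frame)
        decisions.append("regulate" if pose_warrants_regulation(d) else "cease")
    return decisions
```

For example, a stream of three frames in which only the first shows a safe pose would yield `["cease", "regulate", "regulate"]` from `process_stream`. The design choice of evaluating every frame independently mirrors the claim's symmetric structure: regulation is applied while the pose condition holds and ceased as soon as it no longer does.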