US 12,263,043 B2
Method of graphically tagging and recalling identified structures under visualization for robotic surgery
Kevin Andrew Hufford, Durham, NC (US); and David J Meagher, Durham, NC (US)
Assigned to Asensus Surgical US, Inc., Durham, NC (US)
Filed by Asensus Surgical US, Inc., Durham, NC (US)
Filed on Oct. 3, 2023, as Appl. No. 18/480,321.
Application 18/480,321 is a continuation of application No. 17/499,822, filed on Oct. 12, 2021, granted, now 11,771,518.
Application 17/499,822 is a continuation of application No. 16/018,037, filed on Jun. 25, 2018, granted, now 11,141,226, issued on Oct. 12, 2021.
Claims priority of provisional application 62/524,154, filed on Jun. 23, 2017.
Claims priority of provisional application 62/524,143, filed on Jun. 23, 2017.
Claims priority of provisional application 62/524,133, filed on Jun. 23, 2017.
Prior Publication US 2024/0024064 A1, Jan. 25, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G09G 5/00 (2006.01); A61B 1/00 (2006.01); A61B 34/00 (2016.01); A61B 34/10 (2016.01); A61B 90/00 (2016.01); G06T 7/00 (2017.01); G06T 7/40 (2017.01); G06T 7/50 (2017.01); G06T 7/90 (2017.01); G06T 11/00 (2006.01)
CPC A61B 90/361 (2016.02) [A61B 1/000094 (2022.02); A61B 1/0005 (2013.01); A61B 1/00055 (2013.01); A61B 34/10 (2016.02); A61B 34/25 (2016.02); G06T 7/0012 (2013.01); G06T 7/40 (2013.01); G06T 7/50 (2017.01); G06T 7/90 (2017.01); G06T 11/00 (2013.01); A61B 2034/105 (2016.02); A61B 2034/254 (2016.02); G06T 2207/10068 (2013.01); G06T 2210/41 (2013.01); G06T 2210/62 (2013.01)] 13 Claims
OG exemplary drawing
 
1. A method of tagging regions of interest on displayed images during a medical procedure, comprising:
positioning an endoscope in a body cavity;
positioning a surgical instrument in the body cavity;
capturing images of a surgical site within the body cavity and displaying the images on a display, the displayed images including images of the surgical instrument;
displaying a graphical pointer as an overlay on the images displayed on the display;
receiving input from an eye tracker in response to a user directing the user's gaze towards the displayed images;
in response to a user directing the user's gaze towards a first region of the surgical site as displayed in the displayed images, positioning the graphical pointer at the first region; and
in response to user selection input, displaying a graphical tag at the first region of the displayed images.
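The claimed method can be summarized as an event loop: eye-tracker input repositions an on-screen pointer over the displayed surgical-site images, and a separate selection input drops a persistent graphical tag at the pointer's current region. The following is a minimal sketch of that overlay logic only; the class and method names (`TagOverlay`, `on_gaze`, `on_select`) are illustrative assumptions, not part of the patent or any real surgical-system API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TagOverlay:
    """Gaze-driven pointer and user-placed tags overlaid on displayed images."""
    pointer: Tuple[int, int] = (0, 0)            # current pointer position, in display pixels
    tags: List[Tuple[int, int]] = field(default_factory=list)

    def on_gaze(self, x: int, y: int) -> None:
        # Eye-tracker input: move the graphical pointer to the gazed region.
        self.pointer = (x, y)

    def on_select(self) -> Tuple[int, int]:
        # User selection input: display (record) a graphical tag at the
        # region where the pointer currently sits.
        self.tags.append(self.pointer)
        return self.pointer

overlay = TagOverlay()
overlay.on_gaze(320, 240)   # user gazes at a first region of the surgical site
tag = overlay.on_select()   # selection input tags that region
print(tag)                  # (320, 240)
```

This separates the continuous gaze stream from the discrete selection event, mirroring the claim's distinction between pointer positioning and tag placement.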