US 12,236,630 B2
Robotic surgery depth detection and modeling
Xing Jin, San Jose, CA (US); Caitlin Donhowe, Mountain View, CA (US); Martin Habbecke, Palo Alto, CA (US); and Joëlle Barral, Mountain View, CA (US)
Assigned to Verily Life Sciences LLC, Dallas, TX (US)
Filed by Verily Life Sciences LLC, South San Francisco, CA (US)
Filed on Nov. 13, 2020, as Appl. No. 16/949,786.
Claims priority of provisional application 62/935,824, filed on Nov. 15, 2019.
Prior Publication US 2021/0145523 A1, May 20, 2021
Int. Cl. G06T 7/73 (2017.01); A61B 1/00 (2006.01); A61B 34/00 (2016.01); A61B 34/10 (2016.01); A61B 34/20 (2016.01); A61B 34/30 (2016.01); A61B 90/00 (2016.01)
CPC G06T 7/73 (2017.01) [A61B 1/00149 (2013.01); A61B 1/00193 (2013.01); A61B 34/20 (2016.02); A61B 2034/105 (2016.02); A61B 2034/2051 (2016.02); A61B 2034/2057 (2016.02); A61B 2034/2063 (2016.02); A61B 2034/2065 (2016.02); A61B 2034/252 (2016.02); A61B 2034/301 (2016.02); A61B 2090/365 (2016.02); A61B 2090/374 (2016.02); A61B 2090/3762 (2016.02); A61B 2090/378 (2016.02)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
receiving procedure type data describing a surgical procedure to be performed;
receiving first image data corresponding to preoperative imaging of a surgical location, the surgical location based on the procedure type data;
receiving second image data from a camera connected to a robotic arm of a robotic surgical system;
receiving kinematic data describing a position and orientation of the robotic arm in relation to the surgical location;
generating a feature model of the surgical location based on the second image data and the kinematic data;
generating an image-mapped feature model by mapping the first image data onto the feature model;
determining a sequence of procedure steps based on the procedure type data and the image-mapped feature model;
determining a current status of the surgical procedure based on the image-mapped feature model and current kinematic data using a pattern recognition algorithm, wherein the image-mapped feature model represents a first portion of anatomy yet to be removed and a second portion of anatomy that has already been removed; and
providing, for display at a display device, (i) the sequence of procedure steps, and (ii) the current status of the surgical procedure, using the image-mapped feature model, by showing a first representation of the first portion of anatomy yet to be removed and a second representation of the second portion of anatomy that has already been removed, wherein the second representation of the second portion of anatomy that has already been removed comprises a shadow portion of the image-mapped feature model.
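The following is a minimal, hypothetical Python sketch of the claimed method flow, included for orientation only; it is not the patented implementation. Every name and data structure in it (FeatureModel, generate_feature_model, map_preoperative_images, determine_procedure_steps, current_status) and the placeholder fusion, registration, step-library, and status logic are assumptions introduced for illustration and do not appear in the specification.

    from dataclasses import dataclass, field
    from typing import Dict, List

    # All names and logic below are illustrative placeholders, not taken from the patent.

    @dataclass
    class FeatureModel:
        # 3D feature model of the surgical location (hypothetical structure).
        points: List[tuple]                          # (x, y, z) surface points
        removed: set = field(default_factory=set)    # indices of points already resected

    def generate_feature_model(camera_frames, kinematics) -> FeatureModel:
        # Fuse camera-derived depth points with arm kinematics into a common frame.
        # Placeholder fusion: translate each depth point by the reported camera position.
        points = []
        for frame, pose in zip(camera_frames, kinematics):
            cx, cy, cz = pose["camera_position"]
            points.extend((x + cx, y + cy, z + cz) for x, y, z in frame["depth_points"])
        return FeatureModel(points=points)

    def map_preoperative_images(model: FeatureModel, preop_images) -> Dict[int, str]:
        # Associate each model point with a preoperative image label.
        # A trivial round-robin lookup stands in for real image-to-model registration.
        return {i: preop_images[i % len(preop_images)]["label"]
                for i in range(len(model.points))}

    def determine_procedure_steps(procedure_type: str, mapping: Dict[int, str]) -> List[str]:
        # Look up a canned step sequence for the procedure type (illustrative only).
        library = {"partial_nephrectomy": ["expose", "clamp", "resect", "close"]}
        steps = list(library.get(procedure_type, ["expose", "resect", "close"]))
        if any(label == "tumor" for label in mapping.values()):
            steps.insert(steps.index("resect") + 1, "verify_margins")
        return steps

    def current_status(model: FeatureModel, mapping: Dict[int, str],
                       current_kinematics) -> Dict:
        # Split the model into anatomy yet to be removed vs. anatomy already removed;
        # the removed portion would be rendered as a shadow overlay on the display.
        remaining = [i for i in range(len(model.points)) if i not in model.removed]
        return {
            "remaining": [(i, mapping[i]) for i in remaining],  # first representation
            "shadow": sorted(model.removed),                    # second (shadow) representation
            "tool_pose": current_kinematics["tool_pose"],
        }

    if __name__ == "__main__":
        frames = [{"depth_points": [(0.0, 0.0, 0.1), (0.0, 0.01, 0.1)]}]
        kin = [{"camera_position": (0.2, 0.0, 0.3)}]
        preop = [{"label": "kidney"}, {"label": "tumor"}]
        model = generate_feature_model(frames, kin)
        mapping = map_preoperative_images(model, preop)
        print(determine_procedure_steps("partial_nephrectomy", mapping))
        model.removed.add(1)
        print(current_status(model, mapping, {"tool_pose": (0.21, 0.0, 0.32)}))

The sketch preserves the claim's two-part display split, returning the portion of anatomy yet to be removed separately from the already-removed (shadow) portion; an actual system would instead drive a display device from the image-mapped feature model and live kinematic data.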