US 12,150,713 B2
Confidence-based robotically-assisted surgery system
Hamed Saeidi, College Park, MD (US); Axel Krieger, Alexandria, VA (US); Simon Leonard, Baltimore, MD (US); Justin Opfermann, Washington, DC (US); and Michael Kam, Greenbelt, MD (US)
Assigned to UNIVERSITY OF MARYLAND, COLLEGE PARK, College Park, MD (US); THE JOHNS HOPKINS UNIVERSITY, Baltimore, MD (US); and CHILDREN'S NATIONAL MEDICAL CENTER, Washington, DC (US)
Filed by University of Maryland, College Park, College Park, MD (US); The Johns Hopkins University, Baltimore, MD (US); and Children's National Medical Center, Washington, DC (US)
Filed on Nov. 16, 2020, as Appl. No. 17/098,990.
Application 17/098,990 is a continuation of application No. PCT/US2020/033270, filed on May 15, 2020.
Application PCT/US2020/033270 is a continuation-in-part of application No. PCT/US2019/032635, filed on May 16, 2019.
Claims priority of provisional application 62/907,872, filed on Sep. 30, 2019.
Claims priority of provisional application 62/848,979, filed on May 16, 2019.
Claims priority of provisional application 62/672,485, filed on May 16, 2018.
Prior Publication US 2021/0077195 A1, Mar. 18, 2021
Int. Cl. A61B 34/10 (2016.01); A61B 18/14 (2006.01); A61B 34/00 (2016.01); A61B 34/32 (2016.01); A61B 90/00 (2016.01); A61B 18/00 (2006.01); A61B 18/16 (2006.01)
CPC A61B 34/10 (2016.02) [A61B 18/14 (2013.01); A61B 34/25 (2016.02); A61B 34/32 (2016.02); A61B 90/39 (2016.02); A61B 2018/00595 (2013.01); A61B 18/16 (2013.01); A61B 2034/107 (2016.02); A61B 34/76 (2016.02); A61B 2090/371 (2016.02); A61B 2090/3979 (2016.02)] 20 Claims
OG exemplary drawing
 
1. A system comprising:
a camera system that includes a first camera and a second camera;
an articulating member that includes a tool;
a computer comprising:
at least one processor; and
a non-transitory memory configured to store computer-readable instructions which, when executed, cause the at least one processor to:
receive image data from the first camera;
receive point cloud image data from the second camera,
wherein the image data and the point cloud image data correspond to a tissue on which markers are disposed;
identify marker positions of the markers based on the image data and the point cloud image data;
generate a path between a first point on the point cloud image data and a second point on the point cloud image data based at least on the marker positions;
filter the path;
receive real-time position data corresponding to the articulating member;
generate a three-dimensional (3D) trajectory based on the filtered path and the real-time position data;
generate control commands based on the 3D trajectory; and
control the articulating member and the tool to follow the 3D trajectory based on the control commands.
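Claim 1 recites a vision pipeline in which markers disposed on the tissue are located from both an RGB image and a point cloud. Below is a minimal sketch of one way the marker-identification step could be implemented, assuming an organized point cloud registered pixel-for-pixel to the RGB frame and color-coded fiducials; the HSV thresholds, blob-area cutoff, and array layout are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: locate fiducial markers in an RGB frame, then look
# up their 3D positions in a registered, organized point cloud.
import numpy as np
import cv2

def find_marker_pixels(image_bgr, lower=(35, 80, 80), upper=(85, 255, 255)):
    """Segment green fiducial markers by HSV threshold and return the
    pixel centroid (u, v) of each sufficiently large connected blob."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background); keep blobs above a minimal pixel area.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 20]

def markers_to_3d(pixel_centroids, point_cloud):
    """Map pixel centroids to 3D marker positions, assuming the point
    cloud is an organized HxWx3 array registered to the RGB frame."""
    positions = []
    for (u, v) in pixel_centroids:
        p = point_cloud[int(round(v)), int(round(u))]
        if np.isfinite(p).all():        # drop invalid depth returns
            positions.append(p)
    return np.array(positions)
```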
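The path-generation and path-filtering steps can be read as building a waypoint sequence through the identified marker positions and smoothing it. The sketch below orders the markers by their projection onto the start-to-end line, densifies each segment by linear interpolation, and smooths the result with a moving average; the patent does not commit to these particular choices.

```python
# Hypothetical sketch of path generation between two points on the point
# cloud, guided by marker positions, followed by a smoothing filter.
import numpy as np

def generate_path(start, end, marker_positions, samples_per_segment=20):
    direction = (end - start) / np.linalg.norm(end - start)
    # Order the intermediate markers along the start->end direction.
    order = np.argsort((marker_positions - start) @ direction)
    waypoints = np.vstack([start, marker_positions[order], end])
    # Densify each segment with evenly spaced interpolated points.
    path = [waypoints[0]]
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        t = np.linspace(0.0, 1.0, samples_per_segment + 1)[1:, None]
        path.extend(a + t * (b - a))
    return np.asarray(path)

def filter_path(path, window=5):
    """Smooth each coordinate with a centered moving average (odd window)."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(path, ((pad, pad), (0, 0)), mode="edge")
    return np.column_stack(
        [np.convolve(padded[:, k], kernel, mode="valid") for k in range(3)]
    )
```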
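Finally, the filtered path and the real-time position data yield a 3D trajectory that is converted into control commands for the articulating member and tool. Below is a hedged sketch using a saturated proportional velocity controller; `read_tool_position` and `send_velocity_command` are hypothetical placeholders for the robot interface, which the claim does not specify at this level.

```python
# Hypothetical sketch of trajectory tracking: step through filtered-path
# waypoints, commanding velocities that drive the tool toward each one.
import numpy as np
import time

def track_trajectory(path, read_tool_position, send_velocity_command,
                     speed=0.005, gain=2.0, tolerance=1e-3, dt=0.01):
    """Track the path waypoint by waypoint (speed in m/s, dt in seconds)."""
    for target in path:
        while True:
            position = read_tool_position()        # real-time arm feedback
            error = target - position
            if np.linalg.norm(error) < tolerance:  # waypoint reached
                break
            # Proportional command, saturated at the commanded tool speed.
            velocity = gain * error
            norm = np.linalg.norm(velocity)
            if norm > speed:
                velocity = velocity * (speed / norm)
            send_velocity_command(velocity)
            time.sleep(dt)
    send_velocity_command(np.zeros(3))              # stop at path end
```

In the system of the title, a confidence-based supervisory layer would sit above a loop of this kind, determining when autonomous execution proceeds and when control reverts to the operator.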