| CPC B25J 9/1689 (2013.01) [B25J 9/1653 (2013.01); B25J 9/1697 (2013.01); B25J 13/006 (2013.01); G05D 1/0038 (2013.01); G05D 1/0044 (2013.01)] | 20 Claims |

1. A system for providing virtual presence for telerobotics in a dynamic scene, the system comprising:
a remote viewing device and a remote controller coupled to the remote viewing device;
a sensor device that captures one or more frames of a scene comprising one or more objects, each frame comprising (i) one or more color images of the scene and the one or more objects and (ii) one or more depth maps of the scene and the one or more objects;
a computing device coupled to the sensor device, the computing device comprising a memory that stores computer-executable instructions and a processor that executes the instructions to:
receive a first set of frames from the sensor device;
for each frame in the first set of frames:
generate a set of feature points corresponding to the one or more objects in the scene,
match the set of feature points to one or more corresponding 3D points in a depth map from the frame, and
construct a dense mesh of the scene and the one or more objects using the matched feature points;
receive a second set of frames from the sensor device;
for each frame in the second set of frames:
a) calculate a geometric error between one or more segments of the dense mesh and corresponding 3D points in a depth map from the frame,
b) classify one or more segments of the dense mesh as one or more dynamic segments based on the geometric error calculated for the one or more segments,
c) non-rigidly deform only the one or more segments of the dense mesh that are classified as the one or more dynamic segments using data associated with the corresponding 3D points in the depth map from the frame,
d) calculate a second geometric error between the deformed one or more dynamic segments and corresponding 3D points in the depth map from the frame,
e) modify the deformed dense mesh using a non-linear optimization if the second geometric error does not converge,
f) repeat steps d) and e) until the second geometric error converges, and
g) update a texture of the deformed dense mesh by aligning a color image from the frame to the deformed dense mesh; and
determine net changes to the dense mesh resulting from the non-rigid deformation and transmit (i) the net changes to the dense mesh of the scene and the one or more objects and (ii) the frame to the remote viewing device,
wherein the remote viewing device is configured to:
update an existing 3D representation of the scene and the one or more objects using the net changes to the dense mesh and the frame for display to a user,
receive one or more commands from the user via the remote controller, the one or more commands corresponding to interaction with the one or more objects in the updated 3D representation of the scene, and
transmit the one or more commands to a robot device, and
wherein the robot device is configured to:
execute the one or more commands received from the remote viewing device to perform one or more operations.
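The reconstruction phase recited in the claim (generating feature points, matching them to 3D points in a depth map, and building the dense mesh from the matched points) can be sketched as follows. This is a minimal illustration assuming a pinhole camera model; the function name, intrinsic parameters, and matching strategy are hypothetical and not taken from the claim.

```python
import numpy as np

def backproject_features(features_px, depth_map, fx, fy, cx, cy):
    """Match 2D feature points to corresponding 3D points using the
    frame's depth map (hypothetical helper; pinhole intrinsics assumed).
    Returns the 3D points from which a dense mesh could be constructed."""
    pts3d = []
    for (u, v) in features_px:
        z = depth_map[v, u]
        if z > 0:                       # skip pixels with no valid depth reading
            x = (u - cx) * z / fx       # back-project through the camera model
            y = (v - cy) * z / fy
            pts3d.append((x, y, z))
    return np.array(pts3d)
```

In practice the feature points would come from a detector run on the frame's color image, and the matched 3D points would seed a surface-reconstruction step that produces the dense mesh.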
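Steps a) through f) of the claim describe an iterate-until-convergence loop: compute a geometric error per mesh segment, classify high-error segments as dynamic, deform only those segments toward the depth-map points, and repeat until the error converges. A minimal sketch of that loop is below; the claim recites a non-rigid, non-linear optimization, whereas this illustration uses a simple rigid-translation update per iteration, and the threshold and tolerance values are assumptions.

```python
import numpy as np

def classify_dynamic(segment_errors, threshold):
    """Step b): segments whose geometric error exceeds a threshold are
    treated as dynamic (threshold choice is an assumption)."""
    return [i for i, e in enumerate(segment_errors) if e > threshold]

def deform_dynamic_segment(segment_pts, target_pts, tol=1e-6, max_iter=50):
    """Steps c)-f): iteratively deform a dynamic segment toward the
    corresponding depth-map 3D points until the geometric error converges.
    (Rigid-translation sketch standing in for the non-rigid optimization.)"""
    pts = segment_pts.astype(float).copy()
    prev_err = np.inf
    for _ in range(max_iter):
        residuals = target_pts - pts
        err = np.mean(np.linalg.norm(residuals, axis=1))  # geometric error, step d)
        if abs(prev_err - err) < tol:                     # convergence check, step f)
            break
        pts += residuals.mean(axis=0)                     # optimization update, step e)
        prev_err = err
    return pts, err
```

A real system would replace the translation update with a deformation-graph or embedded-deformation solve, but the control flow (error, classify, deform, re-check) mirrors the claimed steps.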
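The transmission step determines the net changes to the dense mesh so that only deformed geometry, rather than the full mesh, is sent to the remote viewing device. One way to compute such a delta is to compare vertex positions before and after deformation; the function and delta format here are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def mesh_delta(old_vertices, new_vertices, eps=1e-9):
    """Net changes to the dense mesh: indices and new positions of the
    vertices moved by the non-rigid deformation (hypothetical encoding).
    Only these entries would be transmitted to the remote viewing device."""
    moved = np.linalg.norm(new_vertices - old_vertices, axis=1) > eps
    idx = np.nonzero(moved)[0]
    return idx, new_vertices[idx]
```

On the receiving side, the remote viewing device would apply the delta by writing `new_positions` back into its existing 3D representation at `idx`, which keeps per-frame bandwidth proportional to the dynamic portion of the scene.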