| CPC B25J 9/1692 (2013.01) [B25J 9/1697 (2013.01); G05B 2219/39016 (2013.01); G05B 2219/39022 (2013.01)] | 5 Claims |

1. A method for improved hand-eye calibration based on a structured light camera, comprising:
step 1: establishing a pinhole camera model, and using a depth camera to detect three-dimensional (3D) coordinates, such that for each point in an image coordinate system with a known depth, a physical coordinate relative to a camera coordinate system is obtained;
step 2: establishing a Denavit-Hartenberg (DH) model of a robotic arm, and moving the robotic arm to a determined coordinate using inverse kinematics;
step 3: collecting n sets of point cloud data, applying depth scaling coefficients to the n sets of point cloud data to perform Singular Value Decomposition (SVD), and solving for an optimal depth scaling coefficient using a Nelder-Mead algorithm; wherein in the step 3, the robotic arm is driven using a given set of known joint angles, an end effector position Pbasetool of the robotic arm is obtained using a kinematic model of the robotic arm, an axial difference Ptoolcal of a checkerboard corner point at a specified position relative to the end effector of the robotic arm is manually measured, and Pbasetool and Ptoolcal are combined to obtain Pbasecal; the above operation is repeated to obtain a set of point cloud data relative to a base coordinate system of the robotic arm; the depth camera obtains, through corner detection, a pixel coordinate of the checkerboard corner point at the specified position relative to the image coordinate system of the depth camera and a depth value of the checkerboard corner point relative to the camera coordinate system; the pixel coordinate and the depth value are substituted into the pinhole camera model to obtain a 3D coordinate Pcameracal of the checkerboard corner point relative to the depth camera; the above operation is repeated to obtain a set of point clouds relative to the camera coordinate system; depth values of all point cloud data collected by the depth camera are processed through a first-order Kalman filter, and after the depth values measured by the depth camera are filtered and converged, complete point cloud data are calculated using an intrinsic parameter matrix of the depth camera;
in the step 3, the performing SVD includes: de-centering two sets of point cloud data, constructing a covariance matrix based on normalized two sets of point cloud data, decomposing the covariance matrix to obtain a matrix U, a matrix S, and a matrix V, and obtaining a calibration result by determining a rotation matrix R and a translation vector t based on a decomposition result;
analyzing and compensating for axial scaling of a depth of the depth camera; multiplying the depth value obtained by the depth camera by a scaling coefficient, such that the point cloud data of the checkerboard corner point relative to the depth camera is changed accordingly; multiplying the point cloud data collected by the depth camera and a mean value of the point cloud data by the scaling coefficient, whereby, according to properties of the SVD, U and V remain unchanged, and S is scaled by the same factor as the covariance matrix:
z_scale*(U*S*V^T)=U*(z_scale*S)*V^T;
in the step 3, the depth scaling coefficient, denoted as z_scale, is optimized based on the calibration result, and an error function is defined using the scaling coefficient as an independent variable:
e(z_scale)=Pbasecal−(R*Pcameracal′+t′);
designating a coordinate of the calibration plate relative to the base coordinate system of the robotic arm as a true value, and determining a calibration error by subtracting, from the true value, the coordinate computed from the measured coordinate of the depth camera using the calibration result; taking the independent variable of the error function as the depth scaling coefficient, and combining the above equations with Pcameracal′=Pcameracal*z_scale and t′=Y−R*X*z_scale, where X and Y denote the centroids of the point clouds in the camera coordinate system and the base coordinate system respectively, to obtain an equation:
e(z_scale)=Pbasecal−(R*Pcameracal*z_scale+Y−R*X*z_scale);
determining a relationship between the error function and the depth scaling coefficient based on the combined equation, and calculating a magnitude of an error, wherein an optimization objective is to identify a minimum value of the error; and using the Nelder-Mead algorithm to identify the minimum value and solve for the optimal depth scaling coefficient; and
step 4: completing the hand-eye calibration of the robotic arm based on the solved optimal depth scaling coefficient.
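The back-projection of step 1 can be sketched as follows; the intrinsic matrix K and its focal-length and principal-point values are illustrative assumptions, not values taken from the claim:

```python
import numpy as np

def backproject(u, v, depth, K):
    # Pinhole model: a pixel (u, v) with a measured depth z maps to the
    # camera-frame point (x, y, z) via the intrinsic parameter matrix K.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])

# Hypothetical intrinsics for illustration only.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
p = backproject(320.0, 240.0, 0.5, K)  # principal point -> (0, 0, 0.5)
```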
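The first-order Kalman filtering of depth values in step 3 might look like the following scalar sketch; the process-noise and measurement-noise variances q and r are assumed values, not parameters from the claim:

```python
import numpy as np

def kalman_depth(measurements, q=1e-4, r=1e-2):
    # Scalar (first-order) Kalman filter with a constant-depth state model:
    # q is the assumed process-noise variance, r the measurement-noise variance.
    x = measurements[0]   # state estimate, seeded with the first reading
    p = 1.0               # estimate variance
    filtered = []
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new depth reading
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

# Noisy readings around a true depth of 0.5 m converge after filtering.
readings = 0.5 + 0.01 * np.array([(-1.0) ** i for i in range(200)])
depth_est = kalman_depth(readings)
```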
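The SVD alignment and z_scale optimization of step 3 can be sketched together as below. The synthetic point clouds, the true scale of 1.05, and the hand-rolled one-dimensional Nelder-Mead (in practice a library routine such as scipy.optimize.minimize(method='Nelder-Mead') would serve) are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def svd_calibrate(P_cam, P_base):
    # De-center both point sets, build the covariance matrix, decompose it,
    # and recover the rotation R and translation t (t = Y - R*X).
    X, Y = P_cam.mean(axis=0), P_base.mean(axis=0)
    H = (P_cam - X).T @ (P_base - Y)        # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, Y - R @ X

def calibration_error(z_scale, P_cam, P_base):
    # e(z_scale) = ||Pbasecal - (R*Pcameracal*z_scale + t')||, averaged.
    R, t = svd_calibrate(P_cam * z_scale, P_base)
    resid = P_base - ((P_cam * z_scale) @ R.T + t)
    return np.linalg.norm(resid, axis=1).mean()

def nelder_mead_1d(f, x0, step=0.1, tol=1e-8, max_iter=200):
    # Minimal 1-D Nelder-Mead (reflect / expand / contract on a 2-point simplex).
    a, b = x0, x0 + step
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        if fb < fa:
            a, b, fa, fb = b, a, fb, fa     # keep a as the best vertex
        if abs(b - a) < tol:
            break
        r = a + (a - b)                     # reflect worst through best
        fr = f(r)
        if fr < fa:
            e = a + 2.0 * (a - b)           # expand
            fe = f(e)
            b, fb = (e, fe) if fe < fr else (r, fr)
        elif fr < fb:
            b, fb = r, fr
        else:
            c = a + 0.5 * (b - a)           # contract toward the best vertex
            fc = f(c)
            if fc >= fb:
                break
            b, fb = c, fc
    return a if fa < fb else b

# Synthetic check: camera points whose depths are under-scaled by 1.05.
rng = np.random.default_rng(0)
P_true = rng.uniform(-1.0, 1.0, (20, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
P_base = P_true @ R_true.T + np.array([0.5, -0.2, 1.0])
P_cam = P_true / 1.05
z_opt = nelder_mead_1d(lambda s: calibration_error(s, P_cam, P_base), 1.0)
```

Scaling the depth scales all three camera-frame coordinates under the pinhole model, so multiplying the whole point cloud by z_scale, as the claim does, is equivalent to scaling the depth channel before back-projection.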