A detailed explanation for class FeatureDetection
This section provides a detailed description of some of the functions declared within the FeatureDetection
class, which encapsulates computer vision algorithms used in this project. Descriptions for the other functions within this class can be found in the Documentation section.
Private variables & functions
1. ContourDetection
void ContourDetection(const cv::Mat image, int min_area, std::vector<cv::Mat> &contour_img_list)
This function reads the input image and detects circular contours whose area is larger than min_area. The output contour_img_list is a vector of images, each overlaid with one detected circular contour, sorted in descending order of contour area.
[Figure] Fig 1. Init_IR photo | Fig 2. Init_depth photo
First, we leverage the built-in OpenCV function cv::findContours to detect all external contours in the image.
contour_img_list.clear();
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(image, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
Then we rearrange the detected contours in descending order of area, and keep those whose area exceeds min_area.
std::sort(contours.begin(), contours.end(),[](const std::vector<cv::Point>& c1, const std::vector<cv::Point>& c2)
{return cv::contourArea(c1,false) > cv::contourArea(c2, false);});
for(int i = 0; i < (int)contours.size(); i++)
{
    if(cv::contourArea(contours[i]) > min_area)
    {
        // Draw the filled contour (thickness -1) onto a blank single-channel image
        cv::Mat image_with_contour = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
        cv::drawContours(image_with_contour, contours, i, cv::Scalar(255), -1, 8, hierarchy);
        contour_img_list.push_back(image_with_contour);
    }
}
[Figure] Fig 3. Contour 0 detected | Fig 4. Contour 1 detected
2. Fit3DSphere
void Fit3DSphere(const std::vector<cv::Point3d> &pt_list, double &xc, double &yc, double &zc, double &rc);
This function aims to find the 3D position of a spherical ball centre and the radius of the ball, given 3D positions of points on the spherical surface.
Assume there are \(n\) points on the surface, with 3D positions \(P_i = (x_i, y_i, z_i)\). The centre of the spherical ball with radius \(r_c\) is \(P_c = (x_c, y_c, z_c)\). Every surface point satisfies \((x_i - x_c)^2 + (y_i - y_c)^2 + (z_i - z_c)^2 = r_c^2\). Expanding and rearranging gives the linear equation \(x_i c_0 + y_i c_1 + z_i c_2 + c_3 = x_i^2 + y_i^2 + z_i^2\), where \(c_0 = 2x_c\), \(c_1 = 2y_c\), \(c_2 = 2z_c\), and \(c_3 = r_c^2 - x_c^2 - y_c^2 - z_c^2\). Hence, after collecting a list of \(P_i\), we can solve for \(c_0, \dots, c_3\) in the least-squares sense and recover \(x_c, y_c, z_c\) and \(r_c\). The code goes as follows.
int n_pts = pt_list.size();
Eigen::MatrixXd A(n_pts, 4), b(n_pts,1);
for(size_t i=0; i<n_pts; i++)
{
double x = pt_list[i].x, y = pt_list[i].y, z = pt_list[i].z;
A(i,0) = x; A(i,1) = y; A(i,2) = z; A(i,3) = 1;
b(i,0) = x*x + y*y + z*z;
}
Eigen::VectorXd c = A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
// Eigen::VectorXd c = A.colPivHouseholderQr().solve(b);
xc = c[0]/2; yc = c[1]/2; zc = c[2]/2;
rc = sqrt(c[3]+xc*xc+yc*yc+zc*zc);
3. FindBallCentres
void FindBallCentres(cv::Mat img, PointCloudT::Ptr cloud, double radius_ball_1, double radius_ball_2, Eigen::Vector3d &centre_ball_1, Eigen::Vector3d &centre_ball_2)
This function builds upon the previous functions ContourDetection and Fit3DSphere, aiming to find the centre positions of both red spherical markers given the depth image of the current scene.
First, we use ContourDetection to generate binary images with overlaid contours, where pixels within each contour are coloured white. We also conduct a sanity check to make sure at least two circular contours are detected in each frame.
// Step 1, find contours
std::vector<cv::Mat> contour_img_list;
std::vector<double> radius_list = {radius_ball_1, radius_ball_2}; // unit (mm), r1 > r2
std::vector<cv::Point3d> ball_centre_list(2);
int min_area = 400;
this->ContourDetection(img, min_area, contour_img_list);
assert(contour_img_list.size() >= 2 && "Not enough ball contours detected");
Second, we extract the 3D positions of points on the spherical surface, whose 2D projections are the pixels confined within each detected contour. Our method is to look up their 3D positions in the organised point cloud aligned with the depth image. The detected surface points are stored in contour_3d_positions, which is then passed to Fit3DSphere to calculate the centre position of each spherical marker. The block below runs once for each of the two largest contours (index i).
cv::Mat contour_img = contour_img_list[i];
std::vector<cv::Point3d> contour_3d_positions;
for(size_t j=0; j<contour_img.rows; j++)
{
for(size_t k=0; k<contour_img.cols; k++)
{
int index = j * contour_img.cols + k;
int pixelValue = (int)contour_img.at<uchar>(j,k);
if(pixelValue == 255)
{
cv::Point3d pt;
pt.x = cloud->points[index].x;
pt.y = cloud->points[index].y;
pt.z = cloud->points[index].z;
if (pt.z > 0.0) contour_3d_positions.push_back(pt);
}
}
}
// find the central position of the ball
double xc, yc, zc, rc;
this->Fit3DSphere(contour_3d_positions, xc, yc, zc, rc);
To ensure the accuracy of the calculation, we add a sanity-check block. The criterion is that the reconstructed radius of each marker must be within 2 mm of its ground-truth value.
if(fabs(rc - radius_list[i])<=2.0)
{
cv::Point3d pt_centre;
pt_centre.x = xc; pt_centre.y=yc; pt_centre.z=zc;
ball_centre_list[i] = pt_centre;
}
else
{
    std::cout << "Ball central position reconstruction failure" << std::endl;
}
Finally, centre_ball_1 and centre_ball_2 hold the reconstructed centre positions of both markers.
Public functions
1. ReconstructJ4Position
Eigen::Vector3d ReconstructJ4Position(const cv::Mat &img_depth, const PointCloudT::Ptr &cloud, const double &ball_1_radius, const double &ball_2_radius)
This public function serves as a wrapper of several private functions for users to reconstruct the 3D position of joint 4.
The pipeline goes as follows
- Calculate the position of both markers in the depth camera frame through FindBallCentres.
- Find out joint 4 position via simple linear algebra.
2. drawShaftAxisColourAcusense
cv::Mat drawShaftAxisColourAcusense(cv::Mat img, std::string window_name, const Eigen::Vector3d &origin, const Eigen::Vector3d &end_pt)
This function overlays the back-projected tool axis onto the image plane. The central tool axis is defined by two points on the axis; in this project we selected the RCM point and joint 4. The idea is to first convert both the RCM point and joint 4 from the robot base frame to the depth frame of the camera via cvt2cameraFrame. Then we transform both points to the colour frame of the camera via cvtDepth2Colour_Acusense. Finally, we back-project both points onto the colour image plane via world2pixel and connect them using a dashed line.