Creating a pcl grabber without Ros #21

Closed
giacomodabisias opened this issue Nov 17, 2014 · 34 comments
@giacomodabisias

Hello,
I am working on a simple wrapper that uses libfreenect2 to get a PCL point cloud without ROS.
I used part of your code for registration and did the following:


  typename pcl::PointCloud<PointT>::Ptr
  GetCloud(){

    frames_ =  *GetRawFrames();
    rgb_ = frames_[libfreenect2::Frame::Color];
    depth_ = frames_[libfreenect2::Frame::Depth];
    tmp_depth_ = cv::Mat(depth_->height, depth_->width, CV_32FC1, depth_->data);
    tmp_rgb_ = cv::Mat(rgb_->height, rgb_->width, CV_8UC3, rgb_->data);

    typename pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());
    cloud->height = tmp_rgb_.rows;
    cloud->width = tmp_rgb_.cols;
    cloud->is_dense = false;
    cloud->points.resize(cloud->height * cloud->width);

    if(depthReg_ == 0){
      cameraMatrixColor_ = (cv::Mat_<double>(3,3) <<  
        1.0660120360637927e+03,  0,                      9.4558752751085558e+02,
        0,                       1.0688708399911650e+03, 5.2006994012356529e+02,
        0,                       0,                      1                      );
      cameraMatrixDepth_ = (cv::Mat_<double>(3,3) <<
        3.6638346288148574e+02,  0,                      2.5564531890330468e+02,
        0,                       3.6714380707081017e+02, 2.0398020160452000e+02,
        0,                       0,                      1                      );

      createLookup(tmp_rgb_.cols, tmp_rgb_.rows);

      depthReg_ = new DepthRegistrationCPU(cv::Size(tmp_rgb_.cols, tmp_rgb_.rows),
                    cv::Size(tmp_depth_.cols, tmp_depth_.rows),
                    cv::Size(tmp_depth_.cols, tmp_depth_.rows),
                    0.5f, 20.0f);
      depthReg_->init(cameraMatrixColor_, cameraMatrixDepth_, 
                    cv::Mat::eye(3, 3, CV_64F), cv::Mat::zeros(1, 3, CV_64F),
                    cv::Mat::zeros(tmp_depth_.rows, tmp_depth_.cols, CV_32F),
                    cv::Mat::zeros(tmp_depth_.rows, tmp_depth_.cols, CV_32F));
    }

    depthReg_->depthToRGBResolution(tmp_depth_, scaled_);
    depthReg_->registerDepth(scaled_, scaled_registered_ );
    createCloud(scaled_registered_, tmp_rgb_, cloud);

    return cloud;
  }

I am not sure about the registration API and I am not getting a nice cloud. Is the call order correct?
Thanks

@kohrt

kohrt commented Nov 18, 2014

Hi,
first you have to register the depth image, and then you can upscale it. You also need to provide a translation and rotation.
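
A minimal sketch of that order, reusing the calls from the code above (a hedged reading of this answer, not the project's documented API):

  // Register first, while the depth image is still at its native resolution,
  // then upscale the registered depth to the color resolution.
  depthReg_->registerDepth(tmp_depth_, registered_);
  depthReg_->depthToRGBResolution(registered_, scaled_);
  createCloud(scaled_, tmp_rgb_, cloud);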

@giacomodabisias

Thanks,
so the order should be:
new....
init...
registerDepth...
depthToRGBResolution...
right?
What do you mean by "give a translation and rotation"?
Thanks!

@giacomodabisias

OK, now I understand what you mean.
I used the parameters from the calibration you provided, but I still get a very bad result:
(screenshot: kinect2)

This is the new code:

  typename pcl::PointCloud<PointT>::Ptr
  GetCloud(){

    frames_ =  *GetRawFrames();
    rgb_ = frames_[libfreenect2::Frame::Color];
    depth_ = frames_[libfreenect2::Frame::Depth];
    tmp_depth_ = cv::Mat(depth_->height, depth_->width, CV_32FC1, depth_->data);
    tmp_rgb_ = cv::Mat(rgb_->height, rgb_->width, CV_8UC3, rgb_->data);

    typename pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());
    cloud->height = tmp_rgb_.rows;
    cloud->width = tmp_rgb_.cols;
    cloud->is_dense = false;
    cloud->points.resize(cloud->height * cloud->width);

    if(depthReg_ == 0){

        cameraMatrixColor_ = (cv::Mat_<double>(3,3) <<  
            1.0607072507083330e+03, 0., 9.5635447181548398e+02,
            0., 1.0586083263054650e+03, 5.1897844298824486e+02,
            0., 0., 1.  );

        cameraMatrixDepth_ = (cv::Mat_<double>(3,3) <<
            3.6753450559472907e+02, 0., 2.4449142359870606e+02,
            0.,3.6659938826108794e+02, 2.0783007950245891e+02,
            0., 0., 1. );

        createLookup(tmp_rgb_.cols, tmp_rgb_.rows);

        rotation_ = (cv::Mat_<double>(3,3) <<
            9.9983695759005575e-01, -1.6205694274052811e-02,-7.9645282444769407e-03, 
            1.6266694212988934e-02, 9.9983838631845712e-01, 7.6547961099503467e-03,
            7.8391897822575607e-03, -7.7831045990485416e-03, 9.9993898333166209e-01 );

        translation_ = (cv::Mat_<double>(1,3) <<
            -5.1927840124258939e-02, -4.5307585220976776e-04, 7.0571985343338605e-05 );

        depthReg_ = new DepthRegistrationCPU(cv::Size(tmp_rgb_.cols, tmp_rgb_.rows),
                                             cv::Size(tmp_depth_.cols, tmp_depth_.rows),
                                             cv::Size(tmp_depth_.cols, tmp_depth_.rows),
                                             0.5f, 20.0f);
        depthReg_->init(cameraMatrixColor_, cameraMatrixDepth_, 
                        rotation_, translation_,
                        cv::Mat::zeros(tmp_depth_.rows, tmp_depth_.cols, CV_32F),
                        cv::Mat::zeros(tmp_depth_.rows, tmp_depth_.cols, CV_32F));
    }

    depthReg_->registerDepth(tmp_depth_, registered_ );
    depthReg_->depthToRGBResolution(registered_, scaled_);
    createCloud(scaled_, tmp_rgb_, cloud);

    return cloud;
  }

Any idea?

@kohrt kohrt added the question label Nov 19, 2014
@kohrt

kohrt commented Nov 19, 2014

I don't know. It looks like errors from interpolated depth points.

@giacomodabisias

Ok, thanks. Still having problems.
Just to be sure I understood correctly, these are the steps to get a registered point cloud:
1) Create a lookup table
2) Read the intrinsic and extrinsic parameters
3) Create a new DepthRegistration object (CPU or OpenCL)
4) Initialize the object with the intrinsic and extrinsic parameters
5) Register the depth image (which is still low-res) using registerDepth()
6) Upscale the depth image to the RGB resolution using depthToRGBResolution()
7) Create the cloud from the upscaled depth and the RGB mat

Am I correct? If so, I can focus on the single parts. I have seen that even passing the camera's depth and RGB directly to createCloud gives bad results, but that function just uses the lookup table created with createLookup(). Just visualizing the depth cloud (setting all points to the same color) shows problems. I presume there is some fault in my camera parameters. The parameters are the following:

color:
fx 1081.37
fy 1081.37
cx 959.5
cy 539.5
ir:
fx 365.478
fy 365.478
cx 258.937
cy 207.594

rotation_
9.9983695759005575e-01, -1.6205694274052811e-02,-7.9645282444769407e-03,
1.6266694212988934e-02, 9.9983838631845712e-01, 7.6547961099503467e-03,
7.8391897822575607e-03, -7.7831045990485416e-03, 9.9993898333166209e-01

translation -5.1927840124258939e-02, -4.5307585220976776e-04, 7.0571985343338605e-05

Could you just confirm that you are using the same, or almost the same, values?

Thanks

@kohrt

kohrt commented Nov 25, 2014

Hi,
I also followed your conversation on the PCL mailing list. The OpenCL implementation is not limited to NVIDIA; it also works with AMD, and on Macs with Intel GPUs.

In theory you just have to do the following:

  • undistort the depth image
  • create 3d points from the depth image using the camera parameters of the depth sensor
  • project all points to the color camera frame, using translation and rotation
  • project them back to the image plane

But since the resolutions are different, I did some extra steps:

  • create a mapping between the depth and color image using cv::initUndistortRectifyMap. This map gives, for each pixel of the color image, the corresponding coordinates in the depth image. It also takes the distortion coefficients into account.
  • with this map the depth image is scaled and interpolated to match the color image, so the result is a depth image with the same intrinsics as the color image. The interpolation only takes into account depth values that are not far away from each other, so as not to introduce false depth values.
  • then 3d points are created from the depth image using the intrinsics of the color sensor
  • all points are then projected to the color camera frame using translation and rotation
  • and at the end projected back to the image plane

In the https://github.com/code-iai/iai_kinect2/tree/libfreenect2-upstream branch I have modified the depth registration to be easier to use. That one just needs to be initialized with the camera parameters and image resolutions of both sensors, the distortion coefficients of the depth sensor, and the translation and rotation between the depth and color sensor. Then you simply have to give it the raw depth image as input.

So if you want to create a lower-resolution depth image, you can simply halve the intrinsics and the registered image size.

Right now the rotation and translation have to be computed for each sensor. That can be done with the calibration tool I provided. Or you can simply use the values that are in the repository, but those might not be perfect for other sensors.
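
A minimal sketch of the mapping step described above, assuming the camera matrices, depth distortion coefficients, and color image size are already set up; the actual implementation replaces the plain remap with a depth-aware interpolation:

  #include <opencv2/opencv.hpp>

  // Build a color-sized depth image: for each color pixel, look up the corresponding
  // depth-image coordinates (undistorting the depth image along the way).
  cv::Mat mapDepthToColor(const cv::Mat &depth,              // CV_16U depth in millimeters
                          const cv::Mat &cameraMatrixDepth,  // 3x3 depth intrinsics
                          const cv::Mat &distortionDepth,    // 1x5 {k1, k2, p1, p2, k3}
                          const cv::Mat &cameraMatrixColor,  // 3x3 color intrinsics
                          const cv::Size &sizeColor)         // e.g. 1920x1080
  {
    cv::Mat map1, map2, scaled;
    cv::initUndistortRectifyMap(cameraMatrixDepth, distortionDepth, cv::Mat(),
                                cameraMatrixColor, sizeColor, CV_32FC1, map1, map2);
    // Nearest-neighbour lookup avoids blending foreground and background depths;
    // the real code only interpolates between depth values that are close together.
    cv::remap(depth, scaled, map1, map2, cv::INTER_NEAREST);
    return scaled;
  }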

@giacomodabisias

Perfect, thank you for the exhaustive answer.
Just to be sure, could you quickly check whether my parameters are in a plausible range?
I will have a look at the new branch, but the previous one should also work; I just need to figure out where the problem is.

@kohrt

kohrt commented Nov 25, 2014

Those look fine. In https://github.com/code-iai/iai_kinect2/tree/master/kinect2_bridge/data are the parameters I got from calibrating my sensors.

@giacomodabisias

Ok, thanks.
The new interface should use the following sequence if I am correct:
1) init(...)
2) registerDepth(...)

Nice improvement :)

@kohrt

kohrt commented Nov 25, 2014

yep.
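
A minimal sketch of that two-call sequence, with the init() signature as it appears in the code later in this thread:

  // Once, at startup: intrinsics, image sizes, depth distortion, extrinsics.
  depthReg_->init(cameraMatrixColor_, size_registered_,
                  cameraMatrixDepth_, size_depth_,
                  distorsion_, rotation_, translation_);

  // Per frame: raw depth image in, registered depth image out.
  depthReg_->registerDepth(tmp_depth_, registered_);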

@giacomodabisias

Thanks,
a small suggestion: consider renaming cameraMatrixRegistered. The name does not really bring the camera parameter matrix to mind.

@kohrt

kohrt commented Nov 25, 2014

It should mean something like "camera matrix of the sensor to which the depth image should be registered". What would you suggest instead?

@giacomodabisias

If I understood correctly, it's the camera matrix you previously called cameraMatrixColor, right? I would keep that name, or use something like cameraParameterMatrixColor.

@kohrt

kohrt commented Nov 25, 2014

Yes, but you can register it to any other camera, so it does not have to be a color camera... that's why I changed it.

@giacomodabisias

Ok, that seems logical.

@giacomodabisias

Maybe this could be useful for other people using this code with the Kinect v2 (correct me if I am wrong):
cameraMatrixRegistered: the color camera sensor parameter matrix
sizeRegistered: 3x3
cameraMatrixDepth: the depth camera sensor parameter matrix
sizeDepth: 3x3
distortionDepth: distortion matrix (1x5)
rotation: the rotation between the sensors (3x3)
translation: the translation between the sensors (1x3)
zNear: the near cut-off limit
zFar: the far cut-off limit

@kohrt

kohrt commented Nov 25, 2014

sizeRegistered: cv::Size(1920,1080)
sizeDepth: cv::Size(512,424)
distortionDepth: cv::Mat 1x5 {k1, k2, p1, p2, k3} (used for undistorting the depth image; there are default values for each sensor, readable through libfreenect2)
translation: 3x1
camera matrix:

[ fx  0 cx ]
[  0 fy cy ]
[  0  0  1 ]
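
A sketch of constructing arguments with these shapes, using the factory color intrinsics quoted earlier in this thread; dev is assumed to be a libfreenect2::Freenect2Device*, and the identity/zero extrinsics are placeholders to replace with real calibration:

  cv::Size sizeRegistered(1920, 1080), sizeDepth(512, 424);

  cv::Mat cameraMatrixColor = (cv::Mat_<double>(3, 3) <<
      1081.37, 0,       959.5,
      0,       1081.37, 539.5,
      0,       0,       1);

  // The depth sensor's default intrinsics and distortion are readable from the device:
  libfreenect2::Freenect2Device::IrCameraParams ir = dev->getIrCameraParams();
  cv::Mat cameraMatrixDepth = (cv::Mat_<double>(3, 3) <<
      ir.fx, 0,     ir.cx,
      0,     ir.fy, ir.cy,
      0,     0,     1);
  cv::Mat distortionDepth = (cv::Mat_<double>(1, 5) << ir.k1, ir.k2, ir.p1, ir.p2, ir.k3);

  cv::Mat rotation    = cv::Mat::eye(3, 3, CV_64F);    // identity until calibrated
  cv::Mat translation = cv::Mat::zeros(3, 1, CV_64F);  // note: 3x1, not 1x3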

@giacomodabisias

Ok thanks

@giacomodabisias

Today I tested your new version, but there is a problem.

    frames_ =  *GetRawFrames();
    rgb_ = frames_[libfreenect2::Frame::Color];
    depth_ = frames_[libfreenect2::Frame::Depth];
    tmp_depth_ = cv::Mat(depth_->height, depth_->width, CV_32FC1, depth_->data);
    tmp_rgb_ = cv::Mat(rgb_->height, rgb_->width, CV_8UC3, rgb_->data);

    if(depthReg_ == 0){

        cloud_->height = tmp_rgb_.rows;
        cloud_->width = tmp_rgb_.cols;
        cloud_->is_dense = false;
        cloud_->points.resize(cloud_->height * cloud_->width);

        cameraMatrixColor_ = (cv::Mat_<double>(3,3) <<  
            color_camera_params_.fx,                0.,               color_camera_params_.cx,
                    0.,                  color_camera_params_.fy,     color_camera_params_.cy,
                    0.,                             0.,                         1.             );

        cameraMatrixDepth_ = (cv::Mat_<double>(3,3) <<
            ir_camera_params_.fx,                   0.,               ir_camera_params_.cx,
                    0.,                  ir_camera_params_.fy,        ir_camera_params_.cy,
                    0.,                             0.,                         1.             );

        rotation_ = (cv::Mat_<double>(3,3) <<
            9.9983695759005575e-01, -1.6205694274052811e-02,-7.9645282444769407e-03, 
            1.6266694212988934e-02, 9.9983838631845712e-01, 7.6547961099503467e-03,
            7.8391897822575607e-03, -7.7831045990485416e-03, 9.9993898333166209e-01  );

        translation_ = (cv::Mat_<double>(3,1) <<
            -5.1927840124258939e-02, -4.5307585220976776e-04, 7.0571985343338605e-05 );

        distorsion_ = (cv::Mat_<double>(1,5) <<
            1.0290630865336201e-01, -2.8973989337475542e-01, 1.0698071918753883e-03,
            1.1438966542906426e-04,  1.1137787974461313e-01                          );             

        depthReg_ = new DepthRegistrationCPU();

        size_registered_ = cv::Size(1920, 1080); 
        size_depth_ = cv::Size(512, 424);


        depthReg_->init(cameraMatrixColor_,
                        size_registered_,
                        cameraMatrixDepth_,
                        size_depth_,
                        distorsion_,
                        rotation_, 
                        translation_ );

    }
    depthReg_->registerDepth(tmp_depth_, registered_ );

with the following parameters taken from the camera:

color:
fx 1081.37
fy 1081.37
cx 959.5
cy 539.5
ir:
fx 365.478
fy 365.478
cx 258.937
cy 207.594

registered_ is just a sequence of zero values when printed out as uint16_t.

@kohrt

kohrt commented Nov 26, 2014

First of all, the images are mirrored, so you need to flip them:

cv::flip(tmp_depth_, tmp_depth_, 1);
cv::flip(tmp_rgb_, tmp_rgb_, 1);

The depth registration needs a depth image with uint16_t values. Try converting it first: tmp_depth_.convertTo(tmp_depth_, CV_16U);

@giacomodabisias

Added that. Still having problems, but I will debug a little and see.
Thanks!

@kohrt

kohrt commented Nov 26, 2014

But the depth image is not 0, right?

@giacomodabisias

No, I was wrong; the interpolation is not always 0. I will continue investigating and let you know as soon as I find something strange.

@giacomodabisias

After fixing some problems I arrived at this result. Any ideas?
(screenshot: screen)
The depth seems ok, but everything is curved.

@kohrt

kohrt commented Nov 26, 2014

How do you compute the 3d points? I do it in the viewer like this:

  void createCloud(const cv::Mat &depth, const cv::Mat &color, pcl::PointCloud<pcl::PointXYZRGBA>::Ptr &cloud) const
  {
    const float badPoint = std::numeric_limits<float>::quiet_NaN();

    #pragma omp parallel for
    for(int r = 0; r < depth.rows; ++r)
    {
      pcl::PointXYZRGBA *itP = &cloud->points[r * depth.cols];
      const uint16_t *itD = depth.ptr<uint16_t>(r);
      const cv::Vec3b *itC = color.ptr<cv::Vec3b>(r);
      const float y = lookupY.at<float>(0, r);
      const float *itX = lookupX.ptr<float>();

      for(size_t c = 0; c < (size_t)depth.cols; ++c, ++itP, ++itD, ++itC, ++itX)
      {
        register const float depthValue = *itD / 1000.0f;
        // Check for invalid measurements
        if(isnan(depthValue) || depthValue <= 0.001)
        {
          // not valid
          itP->x = itP->y = itP->z = badPoint;
          itP->rgba = 0;
          continue;
        }
        itP->z = depthValue;
        itP->x = *itX * depthValue;
        itP->y = y * depthValue;
        itP->b = itC->val[0];
        itP->g = itC->val[1];
        itP->r = itC->val[2];
        itP->a = 0;
      }
    }
  }

@giacomodabisias

I have fixed something and it looks better, but it is not registered. I am using code similar to yours; I tried yours, but it is not working...
My code looks like this:

void createCloud(const cv::Mat &depth, const cv::Mat &color, typename pcl::PointCloud<PointT>::Ptr &cloud) const
    {
        const float badPoint = std::numeric_limits<float>::quiet_NaN();

        #pragma omp parallel for
        for(int r = 0; r < depth.rows; ++r)
        {
          PointT *itP = &cloud->points[r * depth.cols];
          const uint16_t *itD = depth.ptr<uint16_t>(r);
          const cv::Vec3b *itC = color.ptr<cv::Vec3b>(r);

          for(size_t c = 0; c < (size_t)depth.cols; ++c, ++itP, ++itD, ++itC )
          {
            register const float depthValue = *itD / 1000.0f;
            // Check for invalid measurements

            const double invZ = 1 / depthValue;
            const int xP = (color_camera_params_.fx * c) * depthValue + color_camera_params_.cx;
            const int yP = (color_camera_params_.fy * r) * depthValue + color_camera_params_.cy;

            if(isnan(depthValue) || depthValue <= 0.001)
            {
              // not valid
              itP->x = itP->y = itP->z = badPoint;
              itP->rgba = 0;
              continue;
            }
            itP->z = depthValue * 1000 ;
            itP->x = xP * 0.001;
            itP->y = yP * 0.001;
            itP->b = itC->val[0];
            itP->g = itC->val[1];
            itP->r = itC->val[2];
          }
        }
    }

And I get the following cloud:
(screenshot: screen2)

@giacomodabisias

Fixed something else and it looks better.
With the previous code I was projecting everything a second time, which gave strange results.
This code works almost fine, but the cloud is still not registered, and it looks stretched in depth and not straight (the wall on the right).

(screenshot: screen3)

(screenshot: screen4)

    void createCloud(const cv::Mat &depth, const cv::Mat &color, typename pcl::PointCloud<PointT>::Ptr &cloud) const
    {
        const float badPoint = std::numeric_limits<float>::quiet_NaN();

        #pragma omp parallel for
        for(int r = 0; r < depth.rows; ++r)
        {
          PointT *itP = &cloud->points[r * depth.cols];
          const uint16_t *itD = depth.ptr<uint16_t>(r);
          const cv::Vec3b *itC = color.ptr<cv::Vec3b>(r);

          for(size_t c = 0; c < (size_t)depth.cols; ++c, ++itP, ++itD, ++itC )
          {
            register const float depthValue = *itD / 1000.0f;
            // Check for invalid measurements

            const double invZ = 1 / depthValue;
            const int xP = (color_camera_params_.fx * c)  + color_camera_params_.cx;
            const int yP = (color_camera_params_.fy * r)  + color_camera_params_.cy;

            if(isnan(depthValue) || depthValue <= 0.001)
            {
              // not valid
              itP->x = itP->y = itP->z = badPoint;
              itP->rgba = 0;
              continue;
            }
            itP->z = depthValue * 1000 ;
            itP->x = xP * 0.001;
            itP->y = yP * 0.001;
            itP->b = itC->val[0];
            itP->g = itC->val[1];
            itP->r = itC->val[2];
          }
        }
    }

@kohrt

kohrt commented Nov 26, 2014

Try this:

register const float depthValue = *itD / 1000.0f;
// Check for invalid measurements

if(isnan(depthValue) || depthValue <= 0.001)
{
   // not valid
   itP->x = itP->y = itP->z = badPoint;
   itP->rgba = 0;
   continue;
}
itP->z = depthValue;
itP->x = ((c - color_camera_params_.cx) / color_camera_params_.fx) * depthValue;
itP->y = ((r - color_camera_params_.cy) / color_camera_params_.fy) * depthValue;
itP->b = itC->val[0];
itP->g = itC->val[1];
itP->r = itC->val[2];
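
(For reference, this is the standard pinhole back-projection: X = (c - cx) · Z / fx, Y = (r - cy) · Z / fy, Z = depthValue in meters, where (c, r) is the pixel column/row and fx, fy, cx, cy are the intrinsics of the camera the depth is registered to.)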

@giacomodabisias

Ok, the cloud looks perfect now, but it is not aligned:
(screenshot: screen5)

@kohrt

kohrt commented Nov 26, 2014

The alignment is a calibration problem, so if you calibrate your sensor you will get better results.

@giacomodabisias

Ok, I will try to calibrate it. You have been really helpful; thank you again.

@kohrt

kohrt commented Nov 26, 2014

You're welcome. Could this be closed now?

@giacomodabisias

I would suggest just waiting 1-2 days to be sure everything is fixed, if that is ok with you.

@giacomodabisias

You can close, thanks
