Getting the point cloud with a custom wrapper #44

Closed
v0n0 opened this issue Dec 9, 2014 · 9 comments

v0n0 commented Dec 9, 2014

Hi, I have been testing your library. As an extension, I wanted to write my own output wrapper and visualize the point cloud with my own code. When I take every idepth value that is > 0 and backproject it, this is what I get:

[screenshot of the backprojected point cloud, 2014-12-08]

Is something going wrong? I don't get any errors from the console, even with debug activated. My publishKeyframe is this:

  int width = kf->width();

  // extract inverse depth map and its per-pixel variance
  const float *depthMap = kf->idepth();
  const float *depthMapValidity = kf->idepthVar();

  // extract camera
  Sim3 camToWorld = kf->getScaledCamToWorld();
  double fx = kf->fx();
  double fy = kf->fy();
  double cx = kf->cx();
  double cy = kf->cy();
  double fxi = 1/fx;
  double fyi = 1/fy;
  double cxi = -cx / fx;
  double cyi = -cy / fy;

  // extract image
  float *image = kf->image(0);
  int minNearSupport = 5;
  std::vector<Sophus::Vector3d> transformedPoints;
  std::vector<uint8_t> intensities;

  for(int y=1; y<kf->height()-1; y++) {
    for(int x=1; x<width-1; x++) {
      double d = depthMap[x+y*width];

      // skip all invalid points
      if (d <= 0) {
        continue;
      }

      // calculate the depth of the point
      float depth = 1 / d;

      int nearSupport = 0;

      for(int dx=-1;dx<2;dx++) {
        for(int dy=-1;dy<2;dy++) {
          int idx = x+dx+(y+dy)*width;
          double neighbour = depthMap[idx];

          if (neighbour > 0){
            float diff = neighbour - d;  // both are inverse depths
            if (diff * diff < 2 * depthMapValidity[x+y*width])
              nearSupport++;
          }
        }
      }

      // this check must come after both neighbour loops: placed inside the
      // dx loop, "continue" only advances dx and never skips the pixel
      if (nearSupport < minNearSupport) {
        continue;
      }

      // backproject to world
      Sophus::Vector3d homogeneous((x*fxi + cxi), (y*fyi + cyi), 1);
      Sophus::Vector3d transformedPoint = camToWorld * (homogeneous * depth);

      // add point to pointcloud
      transformedPoints.push_back(transformedPoint);
      intensities.push_back(static_cast<uint8_t>(image[y*width+x]));  // narrow the float intensity explicitly
    }
  }

The depth map looks something like this for the first keyframe:

[photo of the rendered depth map for the first keyframe]

Some stats:

Upd 984.9ms (0.4Hz); Create 0.0ms (0.0Hz); Final 0.0ms (0.0Hz) // Obs 564.8ms (0.4Hz); Reg 336.8ms (0.4Hz); Prop 0.0ms (0.0Hz); Fill 58.8ms (0.4Hz); Set 24.4ms (0.4Hz)
MapIt: 984.9ms (0.4Hz); Track: 880.3ms (0.5Hz); Create: 0.0ms (0.0Hz); FindRef: 0.0ms (0.0Hz); PermaTrk: 0.0ms (0.0Hz); Opt: 0.0ms (0.0Hz); FindConst: 0.0ms (0.0Hz);
FINALIZING KF 0
TRACKING 11 on 0
(3-0): ACCEPTED increment of 0.012268 with lambda 0.0, residual: 3.680577 -> 3.656424
         p=-0.0345 0.0244 0.0029 -0.0175 0.0251 -0.0019
(3-1): ACCEPTED increment of 0.011236 with lambda 0.0, residual: 3.656424 -> 3.254479
         p=-0.0421 0.0243 0.0005 -0.0200 0.0325 -0.0028
(3-2): ACCEPTED increment of 0.005422 with lambda 0.0, residual: 3.254479 -> 3.244507
         p=-0.0448 0.0256 -0.0027 -0.0208 0.0355 -0.0029
(3-3): ACCEPTED increment of 0.007936 with lambda 0.0, residual: 3.244507 -> 3.012756
         p=-0.0495 0.0287 -0.0054 -0.0201 0.0403 -0.0026
(3-4): ACCEPTED increment of 0.015319 with lambda 0.0, residual: 3.012756 -> 2.754024
         p=-0.0577 0.0366 -0.0078 -0.0146 0.0485 -0.0014
(3-5): ACCEPTED increment of 0.028333 with lambda 0.0, residual: 2.754024 -> 2.345105
         p=-0.0727 0.0510 -0.0113 -0.0029 0.0632 0.0008
(3-6): ACCEPTED increment of 0.031557 with lambda 0.0, residual: 2.345105 -> 2.063821
         p=-0.0858 0.0701 -0.0151 0.0134 0.0765 0.0009
(3-7): ACCEPTED increment of 0.032163 with lambda 0.0, residual: 2.063821 -> 1.858116
         p=-0.0931 0.0928 -0.0183 0.0330 0.0846 0.0025
(3-8): ACCEPTED increment of 0.027517 with lambda 0.0, residual: 1.858116 -> 1.716052
         p=-0.0987 0.1124 -0.0183 0.0506 0.0908 0.0011
(3-9): ACCEPTED increment of 0.020318 with lambda 0.0, residual: 1.716052 -> 1.595070
         p=-0.1083 0.1240 -0.0158 0.0599 0.1008 -0.0006
(3-10): ACCEPTED increment of 0.009325 with lambda 0.0, residual: 1.595070 -> 1.585776
         p=-0.1122 0.1200 -0.0128 0.0543 0.1050 0.0000
(3-11): ACCEPTED increment of 0.003826 with lambda 0.0, residual: 1.585776 -> 1.565200
         p=-0.1144 0.1203 -0.0109 0.0540 0.1075 0.0001
(3-12): REJECTED increment of 0.005886 with lambda 0.0, (residual: 1.565200 -> 1.572315)
(3-12): ACCEPTED increment of 0.001558 with lambda 0.2, residual: 1.565200 -> 1.561690
         p=-0.1143 0.1206 -0.0095 0.0537 0.1078 -0.0002
(3-13): ACCEPTED increment of 0.002274 with lambda 0.1, residual: 1.561690 -> 1.555817
         p=-0.1142 0.1209 -0.0074 0.0532 0.1083 0.0001
(3-14): REJECTED increment of 0.015002 with lambda 0.0, (residual: 1.555817 -> 1.595475)
(3-14): REJECTED increment of 0.002837 with lambda 0.2, (residual: 1.555817 -> 1.556309)
(3-14): ACCEPTED increment of 0.001844 with lambda 0.8, residual: 1.555817 -> 1.554043
         p=-0.1140 0.1212 -0.0056 0.0529 0.1086 0.0001
(3-15): REJECTED increment of 0.002041 with lambda 0.4, (residual: 1.554043 -> 1.559329)
(3-15): REJECTED increment of 0.001638 with lambda 0.8, (residual: 1.554043 -> 1.558871)
(3-15): REJECTED increment of 0.000783 with lambda 3.2, (residual: 1.554043 -> 1.555813)
(3-15): ACCEPTED increment of 0.000142 with lambda 25.6, residual: 1.554043 -> 1.553711
         p=-0.1139 0.1213 -0.0055 0.0529 0.1086 -0.0000
(3-15): FINISHED pyramid level (last residual reduction too small).
JakobEngel (Member) commented:

Hmm, I cannot see anything in the code that looks wrong - however, the depth map looks very strange.
Maybe that is the initial, random depth map? It is quite normal for the first couple of keyframes to be very noisy due to the random initialization; this should improve after a few seconds of video.


v0n0 commented Dec 11, 2014

If so, how would I know when each depth-map pixel is accurate enough to triangulate and display? I thought a keyframe would have a solid hypothesis for each valid pixel, but maybe I misunderstood.

JakobEngel (Member) commented:

In this context, accuracy is always relative:
The pixels in a keyframe are as accurate as they are going to get with respect to that frame.
However, if the camera moves a long distance, tracking error accumulates. If there is then a loop closure, this accumulated error is removed - in the process, the poses of keyframes, and with them the positions of the corresponding points, can change drastically. This happens e.g. at 2:48 in the YouTube video ( http://youtu.be/GnuQzP3gty4 ): large parts of the point cloud are moved significantly.
Of course this is an extreme example, but on a small scale this happens all the time when new loop closures are found (see e.g. the foodcourt sequence: http://youtu.be/aBVXfqumTXc). You could update points only if their position changes by more than a threshold, but that is no longer straightforward to compute efficiently.


v0n0 commented Dec 11, 2014

Do you think a depth-map fusion approach could be used then: when three or more keyframes agree on the same candidate point depth, create the point in the visualization?

The other thing I don't understand is: if the inverse depth is computed only for high-gradient image areas, how come almost all of the depth map is marked as valid and has a variance greater than zero?

Thanks


v0n0 commented Dec 12, 2014

The example that comes to mind is figure 6, "Variational inpainting", of your latest paper - how did you produce the full-resolution depth there?

amiltonwong commented:

@v0n0 ,
Where is figure 6, "Variational inpainting"? Where can I find that paper?


v0n0 commented Dec 12, 2014

You can find all of the papers here: https://vision.in.tum.de/research/lsdslam?redirect=1

v0n0 closed this as completed Dec 14, 2014
romulogcerqueira commented:

@v0n0, are you using MeshLab as your custom point cloud viewer?

tsingjinyun commented:

@romulogcerqueira, it's MeshLab.
