
Add Service to get depth from Image Pixel Location using the Lidar #21

Closed
zachgoins opened this issue Aug 1, 2016 · 5 comments

Comments

@zachgoins
Contributor

Once we have found the object we are looking for in the image, we typically want to make some maneuver based on that object; i.e., we need to know its location in 3D space relative to the boat.

I will make a service that, given a pixel location and the camera's extrinsics relative to the boat, uses the lidar to tell where the object is. That way, in the mission system we just use one function to get the distance rather than interacting with the lidar directly.
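
A minimal sketch of what the client side of such a service could look like, assuming a hypothetical `navigator_msgs/DepthFromPixel` service type and `/camera/get_depth` service name (these identifiers are placeholders for illustration, not the final API):

```cpp
// Hypothetical usage sketch -- the srv type, service name, and field names
// below are illustrative assumptions, not the actual interface.
#include <ros/ros.h>
#include <navigator_msgs/DepthFromPixel.h>  // hypothetical srv type

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_client_example");
  ros::NodeHandle nh;
  ros::ServiceClient client =
      nh.serviceClient<navigator_msgs::DepthFromPixel>("/camera/get_depth");

  navigator_msgs::DepthFromPixel srv;
  srv.request.x = 320;  // pixel column of the detected object
  srv.request.y = 240;  // pixel row of the detected object

  if (client.call(srv))
  {
    // The server handles the lidar lookup; the mission code just gets a point.
    ROS_INFO("Object at (%f, %f, %f) relative to the boat",
             srv.response.point.x, srv.response.point.y, srv.response.point.z);
  }
  else
  {
    ROS_WARN("Pixel had no lidar correspondence (e.g. outside the FOV)");
  }
  return 0;
}
```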

@DSsoto
Member

DSsoto commented Aug 4, 2016

What will the service return if the pixel in the undistorted image is outside of the Velodyne's field of view (it only has a 15-degree vertical field of view)?

Also, if we want to get the range for a large number of pixels at a time, is the overhead of that many service calls worth the convenience?

@zachgoins
Contributor Author

Both are good things to think about. We can throw out data that doesn't have a mapping; that's easy. As for using service calls to get the range for a bunch of pixels and then doing the smoothing and averaging on the client side, I think that should be handled on the server side. The client will call something like Eigen::Vector3f p = server.get_depth_from_image(x, y) and can assume that the server has already done the smoothing and the returned value is safe. That way service overhead stays low.
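
To make the division of labor concrete, here is a rough sketch of the server-side smoothing being described, assuming the server has already gathered the lidar returns that project near the requested pixel (the helper name and the non-empty precondition are assumptions for illustration):

```cpp
#include <Eigen/Core>
#include <vector>

// Hypothetical server-side helper: average the lidar returns that project
// near the requested pixel so the client receives one stable 3D point.
// Precondition (assumed): candidates is non-empty, i.e. the pixel had a
// lidar mapping; otherwise the service would report "no correspondence".
Eigen::Vector3f smoothed_depth(const std::vector<Eigen::Vector3f>& candidates)
{
  Eigen::Vector3f mean = Eigen::Vector3f::Zero();
  for (const Eigen::Vector3f& pt : candidates)
    mean += pt;
  return mean / static_cast<float>(candidates.size());
}
```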

@kev-the-dev
Contributor

I'm currently working on this for the right camera. I could probably make it more generic to work for other cameras as well. dev...ironmig:detect-deliver-dev

@kev-the-dev
Contributor

Here's my current plan:

The service request has the following fields:

  • Header (for time and camera frame)
  • 2D point in the camera frame (pixel coordinates)
  • a tolerance (in pixels) for how far away a projected lidar point can be and still be considered a corresponding point

The response includes

  • a dynamically sized array of 3D points, in the camera frame, taken from the transformed lidar point cloud, each lying within tolerance pixels of the requested point

Returning an array rather than a single point allows for more use cases, like finding the normal to a plane (@RustyBamboo and I will be using this in the detect-deliver mission); see the sketch below.
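
As an illustration of that plane-normal use case, here is a minimal sketch of how the returned point array could be consumed. The fitting approach (centroid plus smallest singular vector) is just one standard way to get a plane normal from 3D points, not necessarily what detect-deliver will use:

```cpp
#include <Eigen/Dense>
#include <vector>

// Least-squares plane fit: stack the returned camera-frame points into a
// 3xN matrix, subtract the centroid, and take the left singular vector
// associated with the smallest singular value as the plane normal.
Eigen::Vector3f plane_normal(const std::vector<Eigen::Vector3f>& points)
{
  Eigen::Matrix3Xf m(3, points.size());
  for (size_t i = 0; i < points.size(); ++i)
    m.col(i) = points[i];

  const Eigen::Vector3f centroid = m.rowwise().mean();
  m.colwise() -= centroid;

  // Singular values are sorted in decreasing order, so column 2 of U is
  // the direction of least variance, i.e. the plane normal.
  Eigen::JacobiSVD<Eigen::Matrix3Xf> svd(m, Eigen::ComputeFullU);
  return svd.matrixU().col(2);
}
```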

NOTE: I'm not sure how this would work with stereo, @DSsoto.

@kev-the-dev
Contributor

This is now available for the right camera as of #81. If it needs to be added for another camera, it can easily be configured to do so.
