camera_info topic? #9

Open
jumarini opened this issue Oct 28, 2015 · 7 comments

@jumarini

Perhaps I'm stuck in stereo camera land, but it seems there should be a camera_info topic to go along with the depth image (http://www.ros.org/reps/rep-0118.html).

I'm trying to convert depth images to 3D coordinates in a standardized way for multiple kinds of sensors - Kinect, CRL stereo, O3D. I can do the projection on the other units using their camera_info camera matrix, but o3d3xx isn't publishing one.

Am I missing something? How do I get X and Y scale from the depth image?
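
For reference, here is a minimal sketch of the standard pinhole back-projection I use with the other sensors, driven by the K matrix from a camera_info message (the function and variable names here are just illustrative):

```cpp
// Back-project a REP 118 depth image (CV_32FC1, meters) to X,Y,Z using
// the pinhole intrinsics from camera_info: K = [fx 0 cx; 0 fy cy; 0 0 1].
#include <opencv2/core/core.hpp>

cv::Mat_<cv::Vec3f> backProject(const cv::Mat_<float>& depth,
                                double fx, double fy, double cx, double cy)
{
  cv::Mat_<cv::Vec3f> xyz(depth.size());
  for (int v = 0; v < depth.rows; ++v)
  {
    for (int u = 0; u < depth.cols; ++u)
    {
      float z = depth(v, u); // NaN marks invalid pixels per REP 118
      xyz(v, u) = cv::Vec3f(static_cast<float>((u - cx) * z / fx),  // X
                            static_cast<float>((v - cy) * z / fy),  // Y
                            z);                                     // Z
    }
  }
  return xyz;
}
```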

@tpanzarella

Hi @jumarini, your question is a good one; it has been discussed a bit here. I'd suggest reading that as a starting point.

In the meantime, here is some other info:

First, the unit vector data are exposed by the O3D303 camera, but as of right now, per @graugans at IFM, those data are not correct in the current firmware and will not properly compute the cartesian data from depth. My understanding is that this will be fixed in the next firmware release.

However, I do not think you are stuck if what you really want is the cartesian data. We expose that in two ways: via the cloud topic (in meters), which is registered to the depth data, or via the xyz_image topic (in mm), which is just a 3-channel cv::Mat where each of the three image planes is the cartesian x, y, and z respectively (as opposed to color planes). Again, this is registered to the depth data.
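
To make that concrete, here is a rough sketch of consuming the xyz_image topic from a ROS node and splitting out the three planes. The topic name and the int16/mm encoding below are assumptions; check them against the driver version you are running:

```cpp
// Sketch: subscribe to the xyz_image topic and split the 3-channel image
// into its x, y, z planes. Topic name and int16/mm encoding are assumed.
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <opencv2/core/core.hpp>
#include <vector>

void xyzCallback(const sensor_msgs::ImageConstPtr& msg)
{
  cv::Mat xyz = cv_bridge::toCvShare(msg)->image; // 3-channel cv::Mat
  std::vector<cv::Mat> planes;
  cv::split(xyz, planes); // planes[0], planes[1], planes[2] = x, y, z (mm)

  const cv::Mat& z = planes[2];
  ROS_INFO("center z = %d mm",
           static_cast<int>(z.at<int16_t>(z.rows / 2, z.cols / 2)));
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "xyz_image_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/o3d3xx/xyz_image", 1, xyzCallback);
  ros::spin();
  return 0;
}
```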

Even once the unit vector data are being exposed properly by the camera, I am still unsure whether we really need to expose them via the ROS interface. This is really something that I'd like to throw out there for discussion, to see what our user base needs. Right now, as the data come off the camera, the cartesian data are transformed from the IFM optical frame to a coordinate frame consistent with ROS conventions in a just-in-time manner. This happens here. Of course, the unit vector data would convert the depth data to cartesian coordinates in the IFM optical frame, so another transform would be needed to make those data consistent with the cloud and xyz_image coordinates. Again, a use case may be out there that would benefit from that, but right now it is not clear.
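
For concreteness, that change of basis amounts to something like the sketch below, assuming the optical frame follows the usual camera convention (z forward, x right, y down) and the target frame follows REP 103 (x forward, y left, z up); the driver code linked above is the authoritative version:

```cpp
// Sketch: map a point from a camera optical frame (z forward, x right,
// y down) into a REP 103 frame (x forward, y left, z up).
struct Point { float x, y, z; };

inline Point opticalToRos(const Point& p)
{
  Point q;
  q.x = p.z;   // forward
  q.y = -p.x;  // left
  q.z = -p.y;  // up
  return q;
}
```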

I'd like to work toward a "version 1.0.0" of both libo3d3xx and o3d3xx-ros before the end of the year (???), and this is something I'd like to have resolved. I'd be interested to hear your opinion and learn more about your use case.

Sorry for the long-winded note, but I thought the background would be useful.

@graugans
Member

We have fixed the issues with unit vectors and extrinsic calibration. I guess the new firmware will be released when we officially release the O3D300 smart sensor at SPS Drives in Nürnberg, Germany. I hope we will soon release a document describing how to deal with the unit vectors and extrinsic calibration.

@tpanzarella

Thanks Christian. Is the planned release version 1.2.533?

@jumarini
Author

Thanks for your explanation, Tom.

Issue #14 was an interesting read, though I'm not familiar with the PCIC interface. The unit vectors sound like the intrinsic calibration and camera matrix I'm looking for to project the depth map to X,Y,Z.

I'm working on a sensor accuracy & noise quantification task for a commercial indoor mobile robot project: sensing different materials, glossiness levels, and sizes under different lighting and polarization conditions.
As stated, I am trying to use a standardized, lower-bandwidth method of grabbing frames, and the ROS depth image is the way to do it. I know the O3D has low bandwidth requirements right now ("just use the cloud, dummy!"), but the Kinect 2 and stereo camera I'm using are much more bandwidth-intensive. They all have a depth map in common. Just think - you could tell all those Kinect depth map users out there to simply swap in the O3D and compute the same XYZ data in the same way. Nice.

I've already grabbed cloud data, and it certainly sounds like I could use this xyz_image and get the data I need, but they're just not the standard low-bandwidth representation for 3D sensors I'm looking for.

@graugans
Member

@tpanzarella It looks like management will skip the 1.2.xxx series and head straight to 1.3.xxx. But for the O3D3xx not much has changed, except the 100K parameter, which will be replaced by a more generic resolution parameter that defaults to 23K.

@tpanzarella

@jumarini Got it. I understand where you are coming from now. As you can see from above, it looks like @graugans and team have all of this ready to go now and will be officially shipping sensors with the new firmware by the end of November (we will also make it available for download here once the code is officially released by IFM so you can flash your existing sensors).

I cannot promise you a firm date right now for when all of this will be implemented and pushed out to the libo3d3xx and o3d3xx-ros repos, but we will come up with a solution. Once I get working on it, I can start pushing my development branch to GitHub if you want to work with an unstable/work-in-progress version until the release is ready. I hope that in the meantime you can work with the xyz_image to keep moving forward with your application.

@graugans Thanks for the info. Based on my current work queue, I may be better suited to jump right to 1.3.xxx as well. Likely, IFM would not want sensors out there with the 1.2.xxx firmware if it is not officially supported anyway.

@tpanzarella

Computing the 3D coords from the radial distance image, unit vectors, and extrinsics is now demonstrated in the examples pseudo-module of the libo3d3xx repository. The file ex-cartesian.cpp shows how to do this. For a ROS version of this, see the test/test_camera.py file in o3d3xx-ros -- while the test script is in Python, it should be straightforward enough to translate it to C++ if that is desired. At the very least you can glean the topics on which the pertinent data are published.
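
In essence, the computation scales each per-pixel unit vector by the radial distance and adds the extrinsic translation. The sketch below is only illustrative (names and units are assumed); ex-cartesian.cpp is the authoritative version:

```cpp
// Sketch: cartesian = unit_vector * radial_distance + extrinsic translation.
// rdis: CV_32FC1 radial distance image; uvec: CV_32FC3 unit vectors;
// tx, ty, tz: extrinsic translation (same units as rdis).
#include <opencv2/core/core.hpp>

cv::Mat_<cv::Vec3f> toCartesian(const cv::Mat_<float>& rdis,
                                const cv::Mat_<cv::Vec3f>& uvec,
                                float tx, float ty, float tz)
{
  cv::Mat_<cv::Vec3f> xyz(rdis.size());
  for (int r = 0; r < rdis.rows; ++r)
  {
    for (int c = 0; c < rdis.cols; ++c)
    {
      const cv::Vec3f& e = uvec(r, c);
      const float d = rdis(r, c);
      xyz(r, c) = cv::Vec3f(e[0] * d + tx,
                            e[1] * d + ty,
                            e[2] * d + tz);
    }
  }
  return xyz;
}
```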
