camera_info topic? #9
Hi @jumarini, your question is a good one and has been discussed a bit here; I'd suggest reading that as a starting point. In the meantime, here is some other info.

First, the unit vector data are exposed by the O3D303 camera, but as of right now, per @graugans at IFM, in the current firmware implementation those data are not correct and will not compute the cartesian data from depth properly. My understanding is that this will be fixed in the next firmware release. However, I do not think you are stuck if what you really want is the cartesian data; we expose that in two ways.

Even once the unit vector data are exposed by the camera properly, I am still unsure whether we really need to expose them via the ROS interface. This is something I'd like to throw out there for discussion, to see what our user base needs. Right now, as the data come off the camera, the cartesian data are transformed from the IFM optical frame to a coordinate frame consistent with ROS conventions in a just-in-time manner; this happens here. Of course, using the unit vector data would convert the depth data to cartesian coordinates in the IFM optical frame, and another transform would then be needed to make those data consistent with the ROS conventions.

I'd like to work toward a "version 1.0.0" of both. Sorry for the long-winded note, but I thought the background would be useful.
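The just-in-time frame transform described above can be sketched as follows. This is a minimal illustration, not the driver's actual code: it assumes the usual ROS convention where the optical frame is z-forward/x-right/y-down and the body frame is x-forward/y-left/z-up (REP 103), and the function name is made up.

```python
import numpy as np

# Hypothetical sketch: convert points from a camera optical frame
# (z forward, x right, y down) to the ROS body-frame convention
# (x forward, y left, z up). Assumed axis mapping:
#   x_ros = z_opt, y_ros = -x_opt, z_ros = -y_opt
def optical_to_ros(points_opt):
    """points_opt: (N, 3) array of (x, y, z) in the optical frame."""
    x, y, z = points_opt[:, 0], points_opt[:, 1], points_opt[:, 2]
    return np.column_stack((z, -x, -y))

# A point 2 m straight ahead of the lens ends up 2 m along ROS +x:
pts = optical_to_ros(np.array([[0.0, 0.0, 2.0]]))
print(pts)
```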
We have fixed the issues with unit vectors and extrinsic calibration. I guess the new firmware will be released when we officially release the O3D300 smart sensor at SPS Drives in Nürnberg, Germany. I hope we will soon release a document describing how to deal with the unit vectors and extrinsic calibration.
Thanks Christian. Is the planned release version 1.2.533?
Thanks for your explanation, Tom. Issue #14 was an interesting read, though I'm not familiar with the PCIC interface. The unit vectors sound like the intrinsic calibration and camera matrix I'm looking for to project the depth map to X, Y, Z. I'm working on a sensor accuracy and noise quantification task for a commercial indoor mobile robot project: sensing different materials, gloss levels, and sizes under different lighting and polarization conditions. I've already grabbed
@tpanzarella It looks like management is skipping the 1.2.xxx series and heading straight to 1.3.xxx. For the O3D3xx, not much changes, except that the 100K parameter will be replaced by a more generic resolution parameter which defaults to 23K.
@jumarini Got it, I understand where you are coming from now. As you can see from above, it looks like @graugans and team have all of this ready to go and will be officially shipping sensors with the new firmware by the end of November (we will also make it available for download here once the code is officially released by IFM, so you can flash your existing sensors). I cannot promise you a firm date as of now regarding when all of this will be implemented and pushed out.

@graugans Thanks for the info. Based on my current work queue, I may be better suited to jump right to 1.3.xxx as well. Likely, IFM would not want sensors out there with the 1.2.xxx firmware if it is not officially supported anyway.
Computing the 3D coords from the radial distance image, unit vectors, and extrinsics is now demonstrated in the
Perhaps I'm stuck in stereo camera land, but it seems there should be a camera_info topic to go along with the depth image (http://www.ros.org/reps/rep-0118.html).
I'm trying to convert depth images to 3D coordinates in a standardized way for multiple kinds of sensors (Kinect, CRL stereo, O3D). I can do the projection on the other units using the camera matrix from their camera_info, but o3d3xx isn't publishing one.
Am I missing something? How do I get X and Y scale from the depth image?
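For reference, the back-projection the question describes is the standard pinhole model: with the camera matrix `K` from a `camera_info` message, each pixel `(u, v)` with depth `Z` maps to `X = (u - cx) * Z / fx`, `Y = (v - cy) * Z / fy`. A minimal sketch, with made-up intrinsic values:

```python
import numpy as np

# Pinhole back-projection sketch. The intrinsics below are placeholders,
# not values from any real camera_info message.
fx, fy, cx, cy = 570.0, 570.0, 320.0, 240.0  # assumed K entries

def backproject(depth):
    """depth: (H, W) Z values in meters; returns (H, W, 3) XYZ."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

# The principal-point pixel back-projects onto the optical axis,
# i.e. to the point (0, 0, 1.5) for a flat 1.5 m depth image.
xyz = backproject(np.full((480, 640), 1.5))
print(xyz[240, 320])
```

This is exactly the scale information the poster is after: without `fx`, `fy`, `cx`, `cy` (or equivalent unit vectors), the depth image alone does not determine X and Y.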