
Added URDF/XACRO for the Zivid One+ 3D Camera #17

Open
wants to merge 12 commits into master

Conversation

@dave992 commented Mar 5, 2020

I've created a description package for the Zivid One+ 3D Camera which could be useful for others using the Zivid camera. I already saw #16 mentioning that there was a need for this.

Origin of the meshes:

  • The visual geometry is taken directly from the downloads page of the Zivid website.
  • The collision geometry is generated from the visual geometry, following the steps for creating collision STLs in the "Create a URDF for an Industrial Robot" tutorial.

Location of the links/frames:

  • I've placed the base_link at the center of the tripod mount screw hole.
  • I've placed the optical_frame at the lens opening of the camera, angled 8.5 degrees toward the projector. This orientation and location were the result of a discussion with Zivid support about where the measurement frame is located with respect to the tripod mount screw hole. I made this description for the Zivid One+ M; it could be that the angle of the optical_frame differs per model. I have not yet validated this location using measurement data, so a confirmation would be great!
  • I've placed the projector_frame at the original origin of the visual geometry file, as the orientation of the origin seemed to indicate it was the projector origin or really close to it. Some feedback on this location would be appreciated.

To view the URDF and TF frames, build the package and run:
roslaunch zivid_description test_zivid_camera.launch
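
For reference, a minimal sketch of the frame layout described above (the xyz/rpy values are the Zivid One+ M values that appear in this PR's diff; the macro, link, and joint names are illustrative and the projector offset is a placeholder):

<?xml version="1.0"?>
<robot xmlns:xacro="http://wiki.ros.org/xacro" name="zivid_camera_sketch">
  <xacro:macro name="zivid_camera" params="prefix:=zivid_">
    <!-- base_link at the tripod mount screw hole; visual/collision meshes omitted for brevity. -->
    <link name="${prefix}base_link"/>
    <link name="${prefix}optical_frame"/>
    <link name="${prefix}projector_frame"/>

    <!-- optical_frame at the lens opening, tilted 8.5 degrees toward the projector
         (xyz/rpy values as they appear in the diff discussed later in this PR). -->
    <joint name="${prefix}optical_joint" type="fixed">
      <parent link="${prefix}base_link"/>
      <child link="${prefix}optical_frame"/>
      <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi + 8.5/180*pi}"/>
    </joint>

    <!-- projector_frame at the original origin of the visual geometry file;
         the zero offset below is a placeholder, not a measured value. -->
    <joint name="${prefix}projector_joint" type="fixed">
      <parent link="${prefix}base_link"/>
      <child link="${prefix}projector_frame"/>
      <origin xyz="0 0 0" rpy="0 0 0"/>
    </joint>
  </xacro:macro>

  <xacro:zivid_camera prefix="zivid_"/>
</robot>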

@dave992 (Author) commented Mar 5, 2020

Some images to show the URDF, frame locations and collision geometry:
rviz_screenshot_2020_03_05-14_31_06
rviz_screenshot_2020_03_05-14_31_35
rviz_screenshot_2020_03_05-14_31_48

@nedrebo (Contributor) commented Mar 5, 2020

This looks nice. We do not have a contributor license agreement (CLA) in place for this repository. I will try to get one in place so we can process this PR. Thx

@dave992 (Author) commented Mar 6, 2020

This looks nice. We do not have a contributor license agreement (CLA) in place for this repository. I will try to get one in place so we can process this PR. Thx

Sure, keep me posted! Just happy that this is useful to others as well.

@dave992 (Author) commented Mar 6, 2020

The checks seem to fail, but looking at the log this is due to the Zivid driver, and as a result the nodelet, not being able to load. I am not sure how this is related to the added zivid_description package.

@dave992 (Author) commented Apr 24, 2020

Any progress on getting the CLA in place? Alternatively, would you consider this PR without the CLA?

@nedrebo (Contributor) commented Apr 24, 2020

Thanks for the reminder. I was in the process of implementing a CLA, but got stuck with other tasks. I will resurrect the efforts here.

@nedrebo (Contributor) commented Apr 24, 2020

We'll also look into the CI error.

@eskaur (Member) commented Jun 6, 2020

Should we allocate some time to this in June perhaps, @nedrebo ?

@nedrebo (Contributor) commented Jun 8, 2020

I would very much like that. It is already on our list of candidates for short-term prioritization.

@dave992 (Author) commented Jan 12, 2021

Friendly ping :)

@apartridge (Collaborator) commented:

Hi @dave992

Thanks for this PR, and sorry that it has taken so long for us to follow up on this.

I have asked a colleague to follow up with you regarding your specific questions on the camera/projector angle and the location of the optical center and the projector center. He should reach out to you soon.

We agree that adding these URDF/XACRO/STL definitions for the Zivid cameras to this repo is a good idea, and would be valuable for other users as well.

In order to merge this PR so that others could use them, we think we would need to have these definitions/files for all the Zivid camera models. At least One+ Small and One+ Large in addition to One+ Medium, in the first round. It should also be expandable, so that we can add Zivid Two eventually as well.

Currently the file is just named zivid_camera.xacro, so there should probably be one file per camera model, appropriately named, with the appropriate camera-specific angles and coordinates. Probably some XML could be shared, using a macro.

In addition to these changes, we would also need to do some testing on our side to verify that this is working as expected, before we can take it in and officially support it. For the next few months we are a bit too busy with the release of our next camera model, so we will unfortunately not be able to follow up on this for some time. We would like to keep this PR open so that others can use this until then.
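
A sketch of the file-per-model layout with a shared macro suggested above; the zivid_camera_macro.xacro file name and the optical_tilt_deg parameter are illustrative assumptions, not the actual contents of this PR:

<?xml version="1.0"?>
<!-- zivid_one_plus_m.xacro (illustrative): a thin per-model wrapper -->
<robot xmlns:xacro="http://wiki.ros.org/xacro" name="zivid_one_plus_m">
  <!-- Shared geometry, links and joints live in one macro file... -->
  <xacro:include filename="$(find zivid_description)/urdf/zivid_camera_macro.xacro"/>
  <!-- ...and only the model-specific numbers are passed in here. -->
  <xacro:zivid_camera prefix="zivid_" optical_tilt_deg="8.5"/>
</robot>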

Review thread on the package's CMakeLists.txt:

project(zivid_description)
find_package(catkin REQUIRED COMPONENTS)
catkin_package()
include_directories(${catkin_INCLUDE_DIRS})


Copy-pasta, @dave992?


I'd also suggest adding install(..):

Suggested change:
- include_directories(${catkin_INCLUDE_DIRS})
+ install(DIRECTORY config launch meshes urdf
+   DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})

@dave992 (Author) commented Jan 24, 2021

In order to merge this PR so that others could use them, we think we would need to have these definitions/files for all the Zivid camera models. At least One+ Small and One+ Large in addition to One+ Medium, in the first round. It should also be expandable, so that we can add Zivid Two eventually as well.

If the frames are different across models, then I agree. The geometry itself looked identical (correct me if I am wrong here), which is why I only implemented one version for all Zivid One+ variants. The naming can indeed be changed to leave room for other models and variants in the future :).

Let me know if I can do anything here.

@runenordmo commented:

@dave992, it would be nice if you could check in one of the samples that the pointcloud is located correctly relative to the frames you have added, for instance by adding <include file="$(find zivid_description)/launch/load_zivid_camera.launch" /> to a sample launch file.
I can also check this on my side.
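
A sketch of what such a check could look like, assuming load_zivid_camera.launch from this PR sets up robot_description; the RViz steps are spelled out as comments and everything except that include is illustrative:

<?xml version="1.0"?>
<!-- Illustrative sample launch: load the description next to a running driver
     and compare the point cloud against the meshes and TF frames in RViz. -->
<launch>
  <!-- Presumably loads robot_description (and a state publisher) from this PR. -->
  <include file="$(find zivid_description)/launch/load_zivid_camera.launch" />

  <!-- Start the Zivid driver as in the existing samples, then in RViz set the
       fixed frame to the camera's optical frame and add a PointCloud2 display
       to check that the cloud lines up with the model. -->
  <node name="rviz" pkg="rviz" type="rviz" />
</launch>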

Updated minimum CMake version to match other zivid packages. Improve readability and add newlines. Add install command.
@aashish-tud commented:

@dave992 @runenordmo any progress?

Very interested in this.

@dave992 (Author) commented Oct 25, 2022

I do not know what the status of the CLA is. Other than that, not much has changed in the meantime, but I can have another look at incorporating the outstanding points:

  • Add support for the different Zivid One versions, e.g. via a parameterized macro and additional launch files
  • Remove copypasta
  • Add install() instruction to CMakeLists

@runenordmo commented:

@aashish-tud, @dave992: we'll work together with you to review and get this in.

Regarding the CLA: @nedrebo has let me know that the decision is that we do not need one for this BSD-3-Clause licensed repository.

@@ -32,13 +32,13 @@

 <!-- Zivid Optical (Measurement) and Projector Joints -->
 <joint name="${prefix}optical_joint" type="fixed">
-  <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*M_PI} 0 -${0.5*M_PI + 8.5/180*M_PI}"/>
+  <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi + 8.5/180*pi}"/>


I am not sure we need an optical joint that is not the same as the base_link?
The optical joint's frame should ideally just be the same frame as the points in the pointcloud are given in, which is a fixed point in the camera - I can figure out exactly how it's specified.

Then the only frame that is essential is that optical frame, and a hand-eye transform will be used to get the pointcloud into a robot's frame.
I think it might be useful to also have a rough estimate of the projector coordinate system relative to the optical frame, like you have added (discussed in 624a977#r563312884).

@dave992 (Author) replied:

The optical joint's frame should ideally just be the same frame as the points in the pointcloud are given in, which is a fixed point in the camera - I can figure out exactly how it's specified.

joint != frame.
A frame is a coordinate frame. A joint is a connection between frames defining the transformation between them.

The measurement frame (optical_frame) is indeed defined by the frame in which the camera outputs the captures. This may or may not coincide with another frame, but it is definitely a distinct frame (even if only for semantics). The joint just connects the two links together.

Having a confirmation on the location of the optical_frame relative to the mounting hole (/base_link) would be very helpful indeed. In our usage (attached to a robot manipulator), this location does appear to be correct, or at least really close to the actual measurement frame. We often use this description "as is" without calibration for some quick captures.

Then the only frame that is essential is that optical frame

If looking at the camera in isolation, yes, but my intent behind making this package is to actually connect it to other hardware. Then the base_link is essential as well, even if only by convention, expectations, and ease of use.

The base_link is located such that the geometry can easily be attached; it is the "starting point" of the geometry. In this case, I picked the center mounting hole, as I saw this as a convenient location for attaching the camera to, for example, a robot or end-effector. All description packages should start with a base_link.

and a hand-eye transform will be used to get the pointcloud into a robot's frame

I would say that calibration is indeed needed for real-world applications, but it is not part of the scope of this package. Description packages are just there to give the ideal geometry and required frames of hardware. This can then be used for simulations or as a first best guess for your real-world counterpart.

Typically calibrations will result in a new frame, for example: calibrated_optical_frame, that is then separately attached to the description by the user.
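
As a purely illustrative example of that last point, a user could attach a hand-eye calibration result next to the description with a static transform; the tool0 parent frame and all numbers below are hypothetical:

<launch>
  <!-- args: x y z yaw pitch roll parent_frame child_frame; the numbers would come
       from a hand-eye calibration, not from this package. tool0 stands in for a
       robot flange frame. -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="calibrated_optical_frame_broadcaster"
        args="0.05 0.06 0.04 0.0 0.0 0.0 tool0 calibrated_optical_frame" />
</launch>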

Reply:

Let's keep the base_link link and optical joint. I see your point on that being useful for simulation and as a first best guess or starting point.

Typically calibrations will result in a new frame, for example: calibrated_optical_frame, that is then separately attached to the description by the user.

Yes, I agree: in a real-world application the hand-eye calibration will take over, to be able to know how the point cloud is related to the robot's base. And then the transformation between the base_link frame and the optical_frame is mostly useful for simulations and for verifying that the robot-camera calibration is sound.

Reply:

Having a confirmation on the location of the optical_frame relative to the mounting hole (/base_link) would be very helpful indeed.

Yes, I will get this information

Reply:

Ok, so the point cloud is given relative to a location that is the optical center at a certain temperature + aperture calibration point. So this will vary for each camera, even within the same model, for instance Zivid One+ M.

So I think we can communicate, through the naming of the joints and frames, that the transformation between the mounting hole and the camera's optical center at the given calibration point (a certain temperature + aperture) is an approximation.
And then we can use the fixed approximate values provided in the datasheet.

Reply:

So this will vary for each camera, even within the same model, for instance Zivid One+ M.

Would the driver have a way of retrieving that information?

There's no requirement for the xacro:macro to contain that link.

If the driver could publish it (as a TF frame), that would work just as well.

@dave992 (Author) replied:

So this will vary for each camera, even within the same model, for instance Zivid One+ M.

Would the driver have a way of retrieving that information?

I would also be interested in this, especially if it moves between usages (e.g. due to temperature differences).

Reply:

(forgive me if this has been discussed before / is generally known about Zivid devices)

Unless the pointcloud / depth images are automatically transformed by the driver to have their origins at a fixed point (so the driver / embedded firmware compensates for the offsets/variation due to temperature/other factors), not having the precise location of the optical frame significantly complicates using Zivid cameras for really precise/accurate work.

Extrinsic calibrations could likely compensate for that offset (they would incorporate it into whatever transform they determine between the camera itself and the mounting link), but IIUC from the comments by @runenordmo, that would essentially only be the extrinsic calibration for one particular 'state' of the sensor.

If the camera itself already compensates for this, a static link in the URDF/xacro:macro would seem to suffice. If not, the driver would ideally publish the transform itself -- perhaps not continuously, but at least the one associated with a particular capture. The rest of the system could then match it based on time from the header.stamp and the TF buffer.

@dave992 (Author) commented Nov 24, 2022

@aashish-tud, @dave992: we'll work together with you to review and get this in.

Regarding the CLA: @nedrebo has let me know that the decision is that we do not need one for this BSD-3-Clause licensed repository.

Ah great!

The only outstanding comment would be supporting the different variants of the Zivid One+. Could you confirm whether the optical_frame and (optionally) projector_frame are located differently for the Zivid One+ S, Zivid One+ M, and Zivid One+ L? And if so, how are they located relative to the mounting hole (base_link)? The current configuration seems correct for the Zivid One+ M, based on captures taken with a robot manipulator.

As the Zivid Two has launched, it is necessary to differentiate between the types.
@runenordmo commented:

Approximate transformation from mounting hole to camera optical center:
Zivid One+: [image]

Zivid Two: [image]

https://www.zivid.com/downloads

Requested by Zivid as it might cause confusion
The macro now supports the S, M, and L types of the Zivid One+. Launch files to load and/or view the different variants have been included.
@dave992 (Author) commented Dec 2, 2022

I've updated the XACRO macro to change the optical_frame angle based on the type/variant used and added launch files for each Zivid One+ variation.

I did not change the position of the frame; it is still obtained as described in the PR. The drawing you shared only shows the projector frame; if I understand correctly, the optical frame is at the other lens opening.

Please let me know if additional changes are needed, or if this suits your needs.

@BrettRD commented Jan 12, 2023

A project I'm working on needs URDFs of the Zivid Two under ROS 2.
Can we split the description package into a separate repo, like Universal Robots does?
Description packages are vastly easier to port and maintain than the rest of the driver.

MShields1986 added a commit to UoS-EEE-Automation/zivid-ros that referenced this pull request Feb 1, 2023
@iosb-ina-mr (Contributor) commented Jun 29, 2023

I have added the URDF for the Zivid Two camera to the zivid_description package provided by @dave992. Is there any time estimate for when this will be merged, so I can contribute?

@apartridge (Collaborator) commented:

Hi, sorry for the late response on this PR. This PR, as well as the feedback/ideas in it, is something we will address the next time we do a round of improvements/extensions to our ROS wrapper. I don't have an exact timeline for when we will do this. For now, we would like to keep the PR open so that others can find this more easily.

@iosb-ina-mr could you provide a link to your fork/branch here so that others could take a look at it if they need zivid_two URDF files?

Thanks for your inputs and contributions on the Zivid ROS wrapper.

@iosb-ina-mr (Contributor) commented:

@apartridge You are welcome. The URDF of the Zivid Two can be found on the urdf-branch of my fork:
https://github.com/iosb-ina-mr/zivid-ros/tree/urdf

@dave992 (Author) commented Sep 12, 2023

I have added the Zivid Two (and Zivid Two Plus) descriptions to the zivid_description package. This is based on the fork of @iosb-ina-mr.

As the number of variants was rising, and with it the number of launch files and xacro files, I now use a type argument that can be passed to the launch files and xacro files to indicate the variant. For the Zivid One Plus this is fully implemented for the S, M, and L variants.

For the Zivid Two and Zivid Two Plus, these arguments are defined and passed to the underlying macro, but nothing is actually done with the information at the moment. I opted for this as I do not know whether the differences between the variants actually affect the geometry and URDF of these models.

@apartridge Can you tell me if there are actual differences relevant to the URDF for the Zivid Two and Zivid Two Plus series?
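
A sketch of how the type argument described above might be forwarded from a launch file into the xacro file; the argument default and the robot_state_publisher wiring are illustrative, and only the zivid_camera.xacro file name is taken from earlier comments:

<?xml version="1.0"?>
<launch>
  <!-- Pick the variant at launch time and hand it to xacro as an argument. -->
  <arg name="type" default="zivid_one_plus_m" />
  <param name="robot_description"
         command="$(find xacro)/xacro $(find zivid_description)/urdf/zivid_camera.xacro type:=$(arg type)" />
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />
</launch>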

Include the materials when the macro is used in other URDFs.
@apartridge (Collaborator) commented:

Hi @dave992, sorry for the late response and thanks for your contribution.

I can confirm that there is a small geometry change on the 2+ cameras compared to the 2 cameras (the front cover extends 1-2 mm further forward). The CAD files here should be correct: https://www.zivid.com/downloads.

In addition, there are also differences in the angles between the camera and the projector, as well as in the optical center of the camera, across these models. This can be seen in Figure 5/6/7 in the data sheets for the products (https://www.zivid.com/downloads).

I do see that the data sheets for the 2+ cameras are missing the outgoing angle of the camera/projector, which is visible in the data sheets for the Zivid 2 M70 and L100 (in Figure 5). I will request that this information be included in the data sheet. Note that the data sheets for the 2+ M60/L110 are still preliminary.

I am not sure exactly what information you need for the URDFs; is this information sufficient (if we get the angles as well)?

The optical center will vary a bit between units of the same model, due to unit variations, but I think it should be good enough for visualization purposes. For more accurate results one would need to use hand-eye calibration.
