Port of point_cloud2.py from ROS1 to ROS2. #128
Conversation
I'm not really a big fan of making mixed C++/Python packages. It tends to make things not work that great for either, and it is hard to install one without installing the other. There are at least two ways we can handle this:
I'd personally go with the second option, since I don't think we need to host this code in the core here. But I'd like to hear opinions from @ros2/team.
I think I agree with the second option, but let's wait and see what the others think.
As long as you don't want to add an entry point to the Python part, I am not aware of any limitation in mixing Python modules with a CMake package.
I don't see any issue in this case with keeping the Python module in the same package, but if there's a particular issue that requires it to be moved to a separate package, I'm indifferent as to whether it's in this repo or another.
I think a big drawback is making it harder to separate the Python and C++ parts of our code base, e.g. if you only want to install C++ stuff and actually don't want a runtime dependency on Python or Python libraries. As you go up in the stack that becomes less important, but it matters more in a core package like this one. That all being said, I'm fine with leaving it as-is, but if there's motivation or another reason to separate the Python code out, then I'd agree with @jacobperron in that I have no preference as to whether it is in this repository or not.
I think this functionality is needed by just about anyone who uses ROS 2 Python to subscribe to and process point clouds -- which I see as a big use case for ROS 2. So ideally it will be part of ROS 2.
Yeah, that's the downside I was thinking about. I guess it doesn't apply here, but I don't see a reason to prevent that, and it just seems cleaner to have separate packages for Python and C++.
Exactly for this reason.
While I agree that a lot of ROS users use PCL things, I still don't think it should be in the core like this. Most of the C++ code to deal with PCL in C++ lives in the https://github.com/ros-perception/perception_pcl repository, for instance. I think this fits in better there, so my suggestion is to create a new Python package over there and port this functionality there. It may be worthwhile to open an issue there first to make sure the maintainers agree with that plan.
+1, it would be great to have these utilities available in ROS 2. In general it makes sense to have the utilities that operate on a data structure close to the definition of the data structure itself, so I'd find it sensible to have this in this package or this repo (akin to the similar C++ utilities that live within sensor_msgs). IIUC, the perception_pcl packages serve a different purpose, which is to easily connect the ROS PointCloud2 data structure to the PCL back-end for processing. Since these utilities are independent of PCL, moving them there would break that abstraction between the ROS data structure and the library used for processing. It could be confusing to have the utilities live there and imply, to some extent, that the PointCloud2 data structure and its utilities cannot be used with other processing approaches (be it custom code or other point cloud libraries like Open3D).
Trying to push this forward. I see that two things are being discussed:
As for number 1, as mikaelarguedas pointed out and linked to, C++ utilities that iterate through PointCloud2 already live within sensor_msgs.

Regarding number 2, I count 1 vote for moving this to a separate Python package and 3 votes indifferent. As wjwwood and clalancette pointed out, keeping it here has a drawback, while separating it out has no drawback. So in the long run, it seems more optimal to go the way without a drawback. That does not involve too much change in this PR, just creating a new package in this repo. Looking at the list of packages in this repo, I don't know what package makes sense though, as everything in this repo is message packages.
@mabelzhang Thanks for pushing forward on this!
That's a fair argument, so I think we can move forward with putting it in here.
OK, sounds good to me.
For the sake of argument, let's propose two different Python packages:
I don't have a strong preference, though I favor sensor_msgs_py. How much do we expect other packages to need a Python counterpart? Those are my considerations. I'll wait to hear more votes before starting to review this.
👍 for sensor_msgs_py
@SebastianGrans Per discussion above, could you please refactor the code such that it resides in a new Python-only package named sensor_msgs_py?
Hm... This is embarrassing. I reverted my fork and refactored the code into a separate package. It seems like that is not the way you are supposed to do it, since it auto-closed this PR. How do I do this cleanly? I'm sorry for any inconvenience. 😳
It looks like you got your commits sorted, so that is good. You are currently failing both of the automated checks. The first one is the DCO bot, which you have to sign-off all of your commits to pass. I'll suggest that you rebase and squash all of the current commits together, and just sign-off the one commit. The second one is failing to complete the tests for the new package. The error message is:
This looks like you need to update the tests for the new name of the package. There are also Python warnings in there; I'll suggest you run the tests locally and resolve all of those. |
That required a bit of learning. Feels great to get to know Git a bit better though. I think everything should be in order. Though I still get an error from the copyright test.
Hm, yeah, that's unfortunate. I think the problem is that our copyright checker doesn't support BSD properly at the moment. I'll suggest just removing that test for now. Also, the DCO check is still failing. I believe that is because the email address you have configured in your Git client doesn't match the one in your Signed-off-by lines.
A few things to fix in here. You'll also need to add a pytest.ini file to the top-level of the package with these contents:
[pytest]
junit_family=xunit2
Thanks!
The code is looking good. PR job is passing, DCO check is passing, things are starting to shape up. Thanks for iterating so far.
I've got a few more comments in here, but they are all small things. Once those are all fixed up, we'll be ready to run CI.
Is the empty text file resource/sensor_msgs_py necessary? Or is there supposed to be something in it? I see it referenced in setup.py, but I'm not sure what it is being used for.
@clalancette Just for my understanding, is there a main reason to do this? I use that backwards-compatibility code.
The main reason is that on master/Rolling, it is dead code. It will never be executed there, so there is no reason to have it. If we end up backporting this to distributions that don't have it, the situation is different: if you have a package that is meant to work across Dashing, Eloquent, Foxy, and Rolling, then having the backwards compatibility code is perfectly reasonable. It will be executed on at least one of them.

@SebastianGrans CI is yellow because flake8 doesn't seem to like single-letter variable names. Can you take a look and clean it up?
Looks reasonable to me @clalancette, with one nitpick that shouldn't block it.
coordinates. [default: empty list]
@type uvs: iterable
@return: Generator which yields a list of values for each point.
@rtype: generator
Nitpick: we use a different style when documenting parameters, return types, etc. For example:
Shall I change it while I'm at it?
Like this:
"""
Read points from a sensor_msgs.PointCloud2 message.
:param cloud: The point cloud to read from sensor_msgs.PointCloud2.
:param field_names: The names of fields to read. If None, read all fields.
(Type: Iterable, Default: None)
:param skip_nans: If True, then don't return any point with a NaN value.
(Type: Bool, Default: False)
:param uvs: If specified, then only return the points at the given
coordinates. (Type: Iterable, Default: empty list)
:return: Generator which yields a list of values for each point.
"""
Yeah, if you don't mind changing it, that would be great. Thanks.
And with the linter failures addressed, of course.
@clalancette I am unable to replicate the flake8 error.

EDIT: I figured it out! It is the use of the lowercase letter 'l' that is forbidden, as described here. Still can't figure out why it doesn't warn me when I build locally:

./sensor_msgs_py/point_cloud2.py:151:32: E741 ambiguous variable name 'l'
return [Point._make(l) for l in read_points(cloud, field_names,
                               ^
1     E741 ambiguous variable name 'l'

End edit.

I did a fresh clone of my repo and ran the exact build and test commands that are in the CI log here. Any recommendations?

grans@workhorse ~/ws/src rm -rf common_interfaces
grans@workhorse ~/ws/src git clone https://github.com/SebastianGrans/common_interfaces.git
Cloning into 'common_interfaces'...
remote: Enumerating objects: 23, done.
remote: Counting objects: 100% (23/23), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 2100 (delta 9), reused 22 (delta 9), pack-reused 2077
Receiving objects: 100% (2100/2100), 326.81 KiB | 405.00 KiB/s, done.
Resolving deltas: 100% (1395/1395), done.
grans@workhorse ~/ws/src cd ..
grans@workhorse ~/ws colcon build --base-paths "src" --build-base "build" --install-base "install" --event-handlers console_cohesion+ console_package_list+ --cmake-args -DBUILD_TESTING=ON --no-warn-unused-cli -DINSTALL_EXAMPLES=OFF -DSECURITY=ON --packages-up-to sensor_msgs_py
Topological order
- std_msgs (ros.ament_cmake)
- geometry_msgs (ros.ament_cmake)
- sensor_msgs (ros.ament_cmake)
- sensor_msgs_py (ros.ament_python)
Starting >>> std_msgs
--- output: std_msgs
[omitted]
Finished <<< std_msgs [6.98s]
Starting >>> geometry_msgs
[omitted]
Finished <<< geometry_msgs [17.3s]
Starting >>> sensor_msgs
[omitted]
Finished <<< sensor_msgs [28.0s]
Starting >>> sensor_msgs_py
[omitted]
Finished <<< sensor_msgs_py [1.03s]
Summary: 4 packages finished [53.5s]
grans@workhorse ~/ws colcon test --base-paths "src" --build-base "build" --install-base "install" --event-handlers console_direct+ --executor sequential --retest-until-pass 2 --ctest-args -LE xfail --pytest-args -m "not xfail" --packages-select sensor_msgs_py
Starting >>> sensor_msgs_py
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.0
cachedir: /home/grans/ws/build/sensor_msgs_py/.pytest_cache
rootdir: /home/grans/ws/src/common_interfaces/sensor_msgs_py, configfile: pytest.ini
plugins: ament-copyright-0.9.5, launch-testing-ros-0.10.3, ament-pep257-0.9.5, ament-lint-0.9.5, launch-testing-0.10.3, ament-xmllint-0.9.5, ament-flake8-0.9.5, rerunfailures-9.0, repeat-0.8.0, colcon-core-0.6.1, cov-2.8.1
collecting ...
collected 9 items
test/test_copyright.py . [ 11%]
test/test_flake8.py . [ 22%]
test/test_pep257.py . [ 33%]
test/test_point_cloud2.py ...... [100%]
------ generated xml file: /home/grans/ws/build/sensor_msgs_py/pytest.xml ------
============================== 9 passed in 0.46s ===============================
Finished <<< sensor_msgs_py [1.22s]
Summary: 1 package finished [1.44s]
grans@workhorse ~/ws

The CI run complains about this line:

return [Point._make(l) for l in read_points(cloud, field_names,
                                skip_nans, uvs)]

So it could be the use of the single-letter iteration variable, but then why isn't it complaining about the following line, which is just four lines above?

field_names = [f.name for f in cloud.fields]
Yeah, I'm not sure why you couldn't reproduce locally.
Yes, I would guess that it is complaining about the single-letter variable. In point of fact, the rule that you link to explains exactly why: a lowercase 'l' is easily confused with the digit '1'. In any case, I think once you fix the parameter comment style like @wjwwood suggests, and fix that flake8 warning, things are ready for another CI run here.
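The fix under discussion is just a rename. A minimal sketch, with a hypothetical `Point` namedtuple and sample rows standing in for the real ones in `point_cloud2.py`:

```python
from collections import namedtuple

# Hypothetical stand-in for the namedtuple that point_cloud2.py builds
Point = namedtuple('Point', ['x', 'y', 'z'])

rows = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]

# Before: [Point._make(l) for l in ...]  <- flake8 E741 flags the ambiguous 'l'
# After: any unambiguous name (here 'p') satisfies the check
points = [Point._make(p) for p in rows]

print(points[0].x)  # 1.0
```

Any name other than 'l', 'I', or 'O' avoids E741; the behavior is unchanged.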
@clalancette Is there a way to check what source code the CI is checking out? I couldn't find it anywhere on the Jenkins dashboard.
@flynneva It's in the console output, if you look there. Looking at the configuration of the job, you can also find the file @clalancette is using; it contains:
So it's using master on your fork. If you want the commit hash being used, look in the console output of any of the jobs. |
Thanks @wjwwood. Any ideas how to replicate the CI errors locally? I couldn't either, and it's a little hard to tell whether they are solved if we can't test locally... My output:
One of the issues with flake8 is that it only runs the checks whose plugins are currently installed. So if you are missing one of the plugins that we use in CI, it will just silently skip that check. That's my guess as to what is going on here. If you look at https://ci.ros2.org/job/ci_linux/13226/consoleFull and search for the flake8 plugins being installed, you can compare them against your local environment.
One of those is probably the problem.
@clalancette That was it! Thank you so much! Always a good day when you learn something new 😄

@SebastianGrans, can confirm that just changing that variable name fixes it.
Signed-off-by: Sebastian Grans <sebastian.grans@ntnu.no>
All right, CI is all green here. Thanks for the contribution, and for all of the iterations. I'm going to merge this now.
Signed-off-by: Sebastian Grans <sebastian.grans@ntnu.no> Co-authored-by: Sebastian Grans <sebastian.grans@ntnu.no>
Signed-off-by: Sebastian Grans <sebastian.grans@ntnu.no> Co-authored-by: Sebastian Grans <sebastian.grans@ntnu.no> Signed-off-by: flynneva <evanflynn.msu@gmail.com>
Backports #128 * Port of point_cloud2.py from ROS1 to ROS2. As seperate pkg. (#128) Signed-off-by: Sebastian Grans <sebastian.grans@ntnu.no> Co-authored-by: Sebastian Grans <sebastian.grans@ntnu.no> Signed-off-by: flynneva <evanflynn.msu@gmail.com> * align version number with rest of foxy packages Signed-off-by: flynneva <evanflynn.msu@gmail.com> Co-authored-by: Sebastian Grans <sebastian.grans@gmail.com> Co-authored-by: Sebastian Grans <sebastian.grans@ntnu.no>
@SebastianGrans @jacobperron @flynneva @clalancette @mabelzhang I was thinking that maybe it would be nice to have a numpy decoder for PointCloud2 as part of ROS to make things more efficient. I wrote one in ROSboard that reads the PointCloud2 entirely in a vectorized fashion, with no Python loops over the points. See https://github.com/dheera/rosboard/blob/dev/rosboard/message_helper.py

Obviously there are some extraneous things in the above code, such as checking for the presence of "x" and "y" fields and all the int16 compression logic, but that's only for websocket streaming and visualization; the decoder wouldn't need any of that. The magic is simply reinterpreting the raw data buffer with a structured numpy dtype, with no per-point loop.

Would you mind if I submit a pull request with the generalized form of the above? Would we consider replacing the current implementation with it?
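The vectorized approach can be sketched roughly like this; `decode_cloud` is a hypothetical helper, not the ROSboard code itself, and the datatype map is trimmed to FLOAT32 (code 7 in sensor_msgs/PointField):

```python
import numpy as np

# Maps sensor_msgs/PointField datatype codes to numpy types (trimmed to one entry)
DATATYPE_MAP = {7: np.float32}  # PointField.FLOAT32 == 7

def decode_cloud(data, fields, point_step):
    """Reinterpret a PointCloud2-style byte buffer in one vectorized call.

    fields: list of (name, offset, datatype) tuples, as in PointField.
    """
    dtype = np.dtype({
        'names': [name for name, _, _ in fields],
        'formats': [DATATYPE_MAP[dt] for _, _, dt in fields],
        'offsets': [off for _, off, _ in fields],
        'itemsize': point_step,
    })
    # No Python loop over points: numpy views the buffer as an array of structs
    return np.frombuffer(data, dtype=dtype)

# Two fake xyz points packed as little-endian float32 (12-byte point step)
raw = np.array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
               dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4')]).tobytes()
pts = decode_cloud(raw, [('x', 0, 7), ('y', 4, 7), ('z', 8, 7)], 12)
print(pts['x'])  # [1. 4.]
```

Passing `itemsize=point_step` lets the dtype skip any padding bytes between fields, which is what makes a single `frombuffer` call sufficient.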
If the inputs to the new numpy 'read_points' version are the same, and the 'numpy' dependency can be easily handled, then I don't see why not to replace the logic with a faster alternative. We would just have to be careful not to affect end-users.
I believe rosidl_generated_py itself depends on numpy already. I've gotten ROS 2 message data sometimes as array.array and sometimes as np.ndarray (I can't quite tell why rosidl picks one or the other instead of just using one all the time), but I'm pretty sure numpy is already a dependency of ROS 2 because of this.

In terms of not breaking logic, one issue could be that trying to return a list of namedtuples would be counterproductive to numpy-ifying this, because that causes a memory reallocation and a Python list comprehension, which is slow -- as opposed to directly returning the numpy array of structs. I believe the only difference is that when returning the numpy array, the end user would have to do points[0]['x'], points[0]['y'], etc. instead of points[0].x, points[0].y, etc. -- I'll look into seeing if the latter is possible for numpy to support. I feel like it didn't work the last time I tried it, but if numpy can be convinced to do getattr, then it would fix the logic-breaking issue. :)

Specifically, here is a minimal version of the issue:
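The mismatch can be reproduced with plain numpy; this is a hypothetical minimal example, not the author's exact snippet:

```python
import numpy as np

points = np.array([(1.0, 2.0, 3.0)],
                  dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4')])

print(points[0]['x'])  # field-name indexing works: 1.0

# ...but attribute access fails on a plain structured array
try:
    points[0].x
except AttributeError:
    print('no attribute access')
```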
But at the end of the day, if this is a new library, is it a hard requirement to have the same API as ROS 1? Considering ROS 2 already returns numpy arrays in messages, one would think that keeping everything in numpy and avoiding reallocations or ever converting to a Python list would best embrace ROS 2's advantages. More than likely, if the user gets a list of namedtuples from read_points, the first thing they'll do is convert it back to a numpy array to do simple things like rotation matrices or filtering, or to pass it into a neural net framework that wants numpy input.
Okay you know what, I figured out how to make it work. This is what is needed:
So I think backwards compatibility with namedtuple users is possible! I'll work on a PR for this that supports skip_nans, uvs, etc.; I think those should be easy to include.
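Assuming the trick being referred to is numpy's `recarray`, a zero-copy view restores the attribute-style access while keeping the data vectorized:

```python
import numpy as np

points = np.array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
                  dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4')])

# A recarray view reinterprets the same memory; no copy is made
rec = points.view(np.recarray)

print(rec[0].x)  # 1.0 -- namedtuple-style access preserved
print(rec.x)     # [1. 4.] -- and whole columns stay vectorized
```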
PR created here. I couldn't get the DCO thing working, though.
So how do we extract points from a PointCloud2 msg in ROS 2? It is said to be merged, but I am still unable to import read_from_points in the sensor_msgs library. Thank you for any support.
Please open questions like this on https://answers.ros.org, which is our central Question and Answer site. You'll get a better answer there, and it will be searchable for the future. Make sure to include a lot of information on what platform you are using, which ROS distribution you are using, and the exact steps you took.
I'll answer this, but please tell me if I should avoid answering questions in old PRs. @an99990: I made a small demo a while ago on how to use this small utility package. Check it out here: https://github.com/SebastianGrans/ROS2-Point-Cloud-Demo
Thank you @SebastianGrans :)
Hi,

The Python submodule point_cloud2 in ROS 1 was useful when working with PointCloud2 messages. People, including me, would like this feature in ROS 2, so I took the opportunity to "port" it.

This is the first time I have ever made a PR to a significant project, and I've tried to follow the developer guidelines to the best of my abilities. Nevertheless, I've probably made some mistakes.

Tests:
Not all tests are passing. The two former seem not to be related to this package per se, but rather to flake8 (link). Here's the error:

And the copyright test gives me:

point_cloud2.py has the original copyright header from here. (Apparently not.) For test/test_point_cloud2.py I simply copied the Apache License header from test_pointcloud_conversion.cpp.

Signed-off-by: Sebastian Grans sebastian.grans@gmail.com