
Point cloud feature extraction for LOAM based localization #6741

Closed
6 tasks done
ataparlar opened this issue Apr 4, 2024 · 12 comments
Assignees
Labels
component:localization Vehicle's position determination in its environment. (auto-assigned) type:new-feature New functionalities or additions, feature requests.

Comments

@ataparlar
Contributor

ataparlar commented Apr 4, 2024

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

We are planning to establish LOAM-based localization. For this purpose, we need feature-extracted point cloud maps. This issue is open to provide those point clouds.

Will be used for:

This task is the mapping part of the LOAM-based localization issue. For this part, a PCAP file containing position and point cloud packets, together with ground truth pose data, will be published.

For the development and testing of the feature, YTU Campus area PCAP and Ground Truth pose data will be used. This link includes the data: https://drive.google.com/file/d/1ivVL4hYuqqzlTSMTbJV7gEvlbPvMJ7-M/view?usp=drive_link

Purpose

Being able to create feature-extracted point clouds with LOAM. These will be used for LOAM-based localization.

Possible approaches

  • A distortion corrector must be used to deskew the point clouds in the given PCAP data. A voxel-based downsampling method may be used to leave out unnecessary data. Implement feature extraction.
    • We are using the timestamp data of each point, so that is not needed.
  • Ground truth poses can be used instead of LOAM output poses.

Definition of done (in order)

@YamatoAndo YamatoAndo added type:new-feature New functionalities or additions, feature requests. component:localization Vehicle's position determination in its environment. (auto-assigned) labels Apr 4, 2024
@ataparlar
Contributor Author

The file in the drive link contains the following files:

  • PCAP file that contains point cloud data packages and position packages.
  • A text file that contains ground truth poses of the data collection.

For associating the points with ground truth poses, we can use time. Each data packet is saved with a timestamp inside it. However, a data packet contains many points but only one time tag, so there is a timestamp per point cloud packet rather than per point. We need per-point times for deskewing. Additionally, each ground truth position has 0.005-second precision.

The point cloud data was collected with a Velodyne VLP-16 sensor. Position packets hold GPRMC messages, and the PPS signal was used for time syncing during data collection. We can look into the sensor's manual to parse the data and position packets.

As can be seen in the following image, data packets are 1248 bytes long. Each data packet has 12 data blocks, and each data block has 2 firing sequences. During a firing sequence, all of the sensor's lasers fire in sequence. Because we have only one time tag in each data packet, we need to associate this time with all the data inside the firing sequences.

Screenshot from 2024-04-04 16-21-19

The time is filled in via the position packets in the PCAP, so the position packets need to be parsed first. Position packets are 554 bytes long. The NMEA sentence starts at the 207th byte and covers 128 bytes. We used GPRMC here, and here are the GPRMC message details. According to the details, the message can be parsed and the time taken from there.

To fill all the points with a time tag, the firing sequence timing needs to be calculated and the points must be associated with it. The manual has the following image for firing sequence timing. Each firing takes 2.304 microseconds and there are 16 firings. After 16 firings, there is 18.432 microseconds of recharge time, then a new firing sequence starts. The total firing sequence lasts 55.296 microseconds.
Screenshot from 2024-04-04 16-35-49

An example data point timing calculation can be seen in the following image:
Screenshot from 2024-04-04 16-40-04

@ataparlar ataparlar self-assigned this Apr 4, 2024
@congphase

Hi @ataparlar, thanks for your detailed description. I understand the issue, but there's one thing I find unclear: the example of the time offset at the end of firing sequence 22.

I understand the (22 * 55.296us) part (each data block from 0 to 11 contains two firing sequences, and each firing sequence consumes 55.296us, so at the end of firing sequence 22, the time offset from the start of firing sequence 0 would be equal to 22 * 55.296us plus the 55.296us consumed by firing sequence 0). With that, shouldn't the equation be Time Offset = (22 * 55.296us) + 55.296us?

Please clarify for me.

@YamatoAndo
Contributor

@congphase Hi. My understanding is as shown in the figure below.

Screenshot from 2024-04-08 14-56-28

So, the time offset for channel 15 of firing sequence 22 is (22 * 55.296us) + (15 * 2.304us),
and the time offset for channel 0 of firing sequence 23 is (23 * 55.296us) = (22 * 55.296us) + (16 * 2.304us) + 18.432us.

@congphase

Hi @YamatoAndo,
Thanks for your explanation.

I was interpreting the Time Offset as referring to the green line, since only 15 channels of sequence 22 were accounted for in the equation.

image

image

After taking a look at the VLP-16 manual, I figured it out. Thank you again.

image

@ataparlar
Contributor Author

Hi @congphase
Thank you for your interest and thanks to @YamatoAndo for the answer.

Here is the repository that I created for parsing the PCAP files and placing the points at their corresponding positions.

You can see the parsing code in points_provider.cpp. I haven't added the voxel-based downsampling method to the code yet, so the exported PCDs are very dense right now. I will keep you posted with updates.

Here is a visualization:
Screenshot from 2024-04-08 18-53-14

@congphase

Cool.

Would love to hear from you about how I might join hands working on this, as an Autoware Labs member, discussing more on each item in the Definition of done of this task. Would you talk about this in the next Localization & Mapping WG meeting?

@ataparlar
Contributor Author

Of course. The next one will be in 2 weeks. Before that, we can discuss here what to do. In the end, we are trying to reach LOAM-based localization. I am researching that topic right now. We can discuss it after Ramadan.

@congphase

Read through your repo. Probably C++17 or higher? Can we have a call of about 1-2 hours so that you can help me set up the dev environment? I need to know how to start the program and set breakpoints to debug.

@ataparlar
Contributor Author

Hi @congphase

It is a CMake environment. PCL and PcapPlusPlus are required. There is a FindPcapPlusPlus.cmake file inside the cmake folder of the repository to help find PcapPlusPlus.

loam_mapper::TransformProvider

This class reads the ground truth poses from the .txt file and holds them in the poses_ variable. The get_pose_at() function is used for matching each point cloud point to a ground truth pose.

loam_mapper::PointsProvider

This class holds the point types that we need; there is only PointXYZIT for now. Additionally, the structures and functions for parsing the PCAP data and extracting point clouds are stored here. DataPoint, DataBlock, and DataPacket are the structs. The process_packet and process_pcaps functions are used for parsing.

Each point extracted from the PCAP file is tagged with the timestamp calculated in the class according to the time tag in each DataPacket.

This class needs the path of the folder that stores the PCAP files. I suggest you split the big PCAP file into pieces with

editcap -c 100000 ytu_map_2_08_04_23.pcap pcaps/ytu_campus.pcap

You can change the -c parameter for different sizes. If you skip that step, the program will need a very large amount of RAM.

loam_mapper::LoamMapper

The mapping process is implemented here. The class needs a TransformProvider object and a PointsProvider object. In the constructor, each point cloud point is matched with a ground truth pose, then transformed using that pose and a previously found LiDAR-IMU calibration matrix. In the end, a downsampled point cloud is extracted right now.

@ataparlar
Contributor Author

The repo is updated. We are able to export a downsampled full point cloud. The next steps will be related to the LOAM part. Here are some results. As you can see in the right part of the first image, the point cloud sizes are very low, and we can lower them further if needed.
Screenshot from 2024-04-15 20-57-42
Screenshot from 2024-04-15 20-59-19
Screenshot from 2024-04-15 21-00-04

@ataparlar
Contributor Author

Hello guys,

I opened 3 different issues as subtasks of this issue. I added them to the task definition. Here they are:

We are developing the code for this issue in this repository:

@ataparlar
Contributor Author

Hi guys,

I am finalizing the first version of the loam_mapper. Here is the repository: https://github.com/ataparlar/loam-mapper

Additionally, here are the videos showing how the process goes:

So, this task is done with the comment:
