
How to reproduce results for NCLT #28

Closed
nachovizzo opened this issue Jul 22, 2022 · 15 comments · Fixed by #36

nachovizzo commented Jul 22, 2022

Hello @jedeschaud, sorry to disturb you.

We are trying to reproduce the results for ct-icp on the NCLT dataset, and we haven't succeeded so far. How can I get this number?

[image: CT-ICP results table for NCLT from the paper]

Checking the implementation, and using the original velodyne_hits.bin file, I also see that you are processing 42764 scans in total, whereas the velodyne_sync folder contains only 28127 scans. How did you evaluate the results of the system?

Moreover, which ground truth poses were used to carry out the evaluation? According to the implementation:

// Returns the Path to the Ground Truth file for the given sequence
// Note: The name of the sequence is not checked
inline std::string ground_truth_path(const DatasetOptions &options,
                                     const std::string &sequence_name) {
    std::string ground_truth_path = options.root_path;
    if (ground_truth_path.size() > 0 && ground_truth_path[ground_truth_path.size() - 1] != '/')
        ground_truth_path += '/';

    switch (options.dataset) {
        case NCLT:
            throw std::runtime_error("Not Implemented!");
    }
    return ground_truth_path;
}

Thanks a lot in advance

@pdell-kitware (Collaborator)

Hi @nachovizzo,

We are indeed using the velodyne_hits.bin file, which we decode into a sequence of scans, as seen below:

std::vector<ct_icp::Point3D> DoNext(bool jump_frame = false) {
    std::vector<ct_icp::Point3D> points;
    // Track the min / max timestamp over the aggregated batches
    double min_timestamp = std::numeric_limits<double>::infinity(),
           max_timestamp = std::numeric_limits<double>::lowest();
    for (int iter(0); iter < num_aggregated_pc_; ++iter) {
        if (!HasNext())
            break;

        auto next_batch = NextBatch(jump_frame);

        if (jump_frame)
            continue;

        auto old_size = points.size();

        // Update the frame's timestamp bounds from the batch's first point
        if (!next_batch.empty()) {
            auto timestamp = next_batch[0].timestamp;
            if (timestamp < min_timestamp)
                min_timestamp = timestamp;
            if (timestamp > max_timestamp)
                max_timestamp = timestamp;
        }

        // Append the batch to the aggregated frame
        points.resize(old_size + next_batch.size());
        std::copy(next_batch.begin(), next_batch.end(), points.begin() + old_size);
    }
    // Normalize each point's timestamp to [0, 1] within the frame
    for (auto &point: points)
        point.alpha_timestamp = (point.timestamp - min_timestamp) / (max_timestamp - min_timestamp);
    return points;
}

The DoNext method aggregates a number of batches (which probably correspond to velodyne packets) into a frame.
To get almost a full revolution, we set the number of batches per frame to 220.

To answer the evaluation part of your question: I think we used pyLiDAR-SLAM for the evaluation in the paper.

But it would be much more convenient to integrate the NCLT ground truth into this code base.
I'll add this to the code with the commit for issue #29.

@nachovizzo (Author)

Thanks for such a detailed (and fast) response, highly appreciated.

May I ask why you chose 220 velodyne packets? I'm looking forward to reproducing this experiment :)

Thanks!

@pdell-kitware (Collaborator)

Well, first, I am not 100% sure that a batch corresponds exactly to a velodyne packet,

But this is the separation of the file into batches as described by the authors in http://robots.engin.umich.edu/nclt/python/read_vel_hits.py

220 is simply the number we found convenient to get almost a full revolution of the sensor.
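
For reference, here is a rough C++ sketch of that batch layout, translated from read_vel_hits.py (this is only an illustration of the format, not our actual decoder; it assumes a little-endian host):

#include <cstdint>
#include <fstream>
#include <vector>

struct VelHit {
    double x, y, z;     // meters
    uint8_t intensity;
    uint8_t laser_id;
};

// Reads one batch from velodyne_hits.bin, following the layout in
// read_vel_hits.py: an 8-byte magic (4 x uint16, each equal to 44444),
// the number of hits (uint32), a utime (uint64), 4 bytes of padding,
// then the packed hits. Returns false at end of file or on a bad magic.
bool ReadBatch(std::ifstream &in, uint64_t &utime, std::vector<VelHit> &hits) {
    uint16_t magic[4];
    if (!in.read(reinterpret_cast<char *>(magic), 8))
        return false;
    for (uint16_t m : magic)
        if (m != 44444)
            return false;  // "Could not verify magic"

    uint32_t num_hits, padding;
    in.read(reinterpret_cast<char *>(&num_hits), sizeof(num_hits));
    in.read(reinterpret_cast<char *>(&utime), sizeof(utime));
    in.read(reinterpret_cast<char *>(&padding), sizeof(padding));

    hits.resize(num_hits);
    for (auto &hit : hits) {
        uint16_t xyz[3];
        uint8_t extra[2];
        in.read(reinterpret_cast<char *>(xyz), sizeof(xyz));
        in.read(reinterpret_cast<char *>(extra), sizeof(extra));
        // Scaling (0.005 m) and offset (-100 m) come from read_vel_hits.py
        hit.x = xyz[0] * 0.005 - 100.0;
        hit.y = xyz[1] * 0.005 - 100.0;
        hit.z = xyz[2] * 0.005 - 100.0;
        hit.intensity = extra[0];
        hit.laser_id = extra[1];
    }
    return static_cast<bool>(in);
}

Calling ReadBatch 220 times and concatenating the hits gives roughly one frame, which is what DoNext above does with its batches.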

@nachovizzo (Author)

Ok, thanks. By the way, I love this line:

    if not verify_magic(magic):
        print "Could not verify magic"

@pdell-kitware (Collaborator)

Hi,

I committed major changes on both branches, dev and master.

Now you should be able to run ct_icp on NCLT and compute the metrics with the ground truth poses. See the readme for more details:

run_odometry -c config/odometry/nclt_config.yaml

(You should either create a symbolic link to the .nclt dataset root or change the parameter in the config.)

I haven't had a chance to run the full sequence, so don't hesitate to tell me if there are any problems / failures.

I should say that the code differs quite a bit from the code of the article (from the ICRA-2022 release, for example).

@nachovizzo (Author)

Hello, thanks for the new contributions!

I'm trying to build the code from a fresh clone, and I'm getting an endless stream of errors.

Should I open a new issue about this?

[image: list of compiler errors]

@pdell-kitware (Collaborator)

Hmm, that is strange; the build works cleanly with GitHub Actions (https://github.com/jedeschaud/ct_icp/runs/7555918389?check_suite_focus=true).

I created issue #32 for this.

Can you post your initial error on that issue (the first one in the list)?

@pdell-kitware (Collaborator)

I just ran NCLT with the config config/odometry/nclt_config.yaml and the KITTI metrics, and obtained a score of 0.87.

@nachovizzo (Author)

After running for hours, the pipeline did finish, but I can't see any metrics or any trajectory.
This is the last message I got:

/* ------------------------------------------------------------------------ */
REGISTRATION OF FRAME number 42759 (Fid:42759) with CERES solver
Number of points in sub-sampled frame: 638
Initial ego-motion distance: 0.00479226
Trajectory correction [begin(t) - end(t-1)]: 0
Final ego-motion distance: 0.0128866
Elapsed Time: 189.805 (ms)
Inserting points in the Map:  Skipped Frames [0 / 5]  Total Insertions: 40903
Cumulative Orientation: 0.253408°
Cumulative Distance: 0.0210918m
Ego Orientation: 0.860687°
[CT-ICP] Logged Values:
 -- icp_duration_neighborhood: 53.51
 -- icp_duration_solve: 127.073
 -- icp_num_iters: 20
 -- icp_total_duration: 187.021
 -- odometry_duration_sampling: 0.28555
 -- odometry_initialization: 0.339338
 -- odometry_initialization(ms): 2.11022
 -- odometry_map_update(ms): 8.70454
 -- odometry_num_keypoints: 527
 -- odometry_total: 200.214
 -- odometry_total_duration(ms): 189.805
 -- odometry_transform(ms): 1.36332
 -- odometry_try_register: 187.668

Saving Trajectory to /home/ivizzo/dev/slam/ct_icp/install/CT_ICP/bin/.outputs/2022-07-29_16:10:21/NCLT_2012-01-08.PLY

The trajectory can't be opened with standard tools. How can I run the system?


pdell-kitware commented Jul 29, 2022

Well, I am pretty sure that this means it worked!

The trajectory is output as a PLY file, which can be opened in CloudCompare (see the last line of your log).

It's a pretty long sequence, more than 40,000 frames, so in real time it would be more than an hour of acquisition anyway. Because it is a challenging sequence, we set conservative parameters (which yield better results at the cost of running slower), but if you want, I can try to find parameters to run it faster.

The runtime on your PC seems slower than on mine, but again, the set of parameters was chosen for precision, not online processing, as mentioned in the paper.

To compute the metrics, you can use the library with a ground truth CSV file provided by the dataset.

Either make sure that the ground truth poses file is located on disk at the right spot and rerun the SLAM,

Or look at SlamCore/io.h and SlamCore/eval.h to compute the metrics in your own C++ main (using the SlamCore library you just installed).
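
For that second route, here is a generic sketch of one such metric (plain Eigen, not the SlamCore/eval.h API; the KITTI metric used above is more involved, averaging relative errors over fixed-length segments. This sketch assumes the two trajectories are already time-aligned, one pose per frame):

#include <Eigen/Geometry>
#include <algorithm>
#include <vector>

// Mean Absolute Trajectory Error between two time-aligned trajectories,
// after aligning the first poses so both start in the same frame.
double MeanATE(const std::vector<Eigen::Isometry3d> &estimated,
               const std::vector<Eigen::Isometry3d> &ground_truth) {
    const Eigen::Isometry3d alignment =
        ground_truth.front() * estimated.front().inverse();
    const size_t n = std::min(estimated.size(), ground_truth.size());
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum += ((alignment * estimated[i]).translation() -
                ground_truth[i].translation()).norm();
    return sum / static_cast<double>(n);
}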

@pdell-kitware (Collaborator)

Note again that if you want direct feedback on the SLAM, you can either use ROS or the visualization (WITH_VIZ3D), our own ugly little GUI.

@nachovizzo (Author)

Thanks again for the reply. I still can't run the system and get the poses file :(

I20220730 09:53:47.427917 1915860 dataset.cpp:1089] Could not load ground truth for sequence 2012-01-08. Expected at location "/home/ivizzo/data/NCLT/2012-01-08_vel/groundtruth_2012-01-08$ .csv"

But the file is there:

$  cat /home/ivizzo/data/NCLT/2012-01-08_vel/groundtruth_2012-01-08.csv | wc -l
835469


pdell-kitware commented Jul 30, 2022

Oh, my bad, it should be located at

/home/ivizzo/data/NCLT/2012-01-08_vel/2012-01-08/groundtruth_2012-01-08.csv

The error message is confusing; I'll fix it.
Can you try this?

Also, in the nclt_config.yaml file, set the variable progress_bar=true

This will make the console output more readable (in particular, the metrics should be computed at regular intervals).

What confuses me is the filename "groundtruth_2012-01-08$ .csv", which should be groundtruth_2012-01-08.csv, as can be seen here in dataset.cpp:

 filename = "groundtruth_" + sequence_info.sequence_name + ".csv";

@nachovizzo (Author)

Hi, thanks for all the support. I can run the dataset, but I still need more help from your side :)

Is there any way to use the "normal" velodyne data? As far as I read in the NCLT paper, that data is "motion-compensated", although they say later that, due to the max 1 m/s speed of the Segway, there is no motion distortion...

I would like to have ground truth poses that match your pose estimates, which indeed double the number of scans in the sync folder from the original dataset.

Apparently, you are interpolating the estimated poses to match the number of poses available in the ground truth... I'm still confused about this particular evaluation.

To summarize:

Velodyne data

Is there any way to run CT-ICP on the original data, without using this gigantic velodyne_hits.bin file?

GT poses

Is there any way to obtain a gt_poses.txt that matches the timestamp selection you use to generate the point clouds from the velodyne_hits.bin file?

I'm trying to evaluate the results (which look amazing, by the way) with some scripts that cannot handle a mismatched number of gt poses. Other baselines I'm running can process the "normal" velodyne point clouds, and I'm actually trying to replicate the way you load data here.

Basically, I would like to have a gt_poses.txt with 42760 poses, corresponding to the timestamps you selected with the 220 accumulated velodyne packets, to do a 1-to-1 evaluation.

Thanks a lot!


pdell-kitware commented Aug 4, 2022

Original Velodyne Data

I wouldn't use the sync point clouds from NCLT.
I am pretty sure that, due to the high-frequency rotations, the motion compensation is not great at all.
From memory, when using the ground truth poses with these point clouds, there were multiple double walls, trailing trees, etc...

Now, there is indeed a way to run the SLAM on the sync files, if that is what you call the original data,
But you would have to define a new sequence which can decode the .bin files (in your own cpp script).
Simply extend ADatasetSequence, or better AFileSequence, from dataset.h, and build a Dataset using the Dataset::BuildCustomDataset function (a skeleton is sketched below).

Also, when running with the sync folder, you should not use the continuous-time aspect of CT-ICP, but rather the standard rigid transform (see OdometryOptions::motion_compensation).

Then you can write a script very similar to command/cmd_run_odometry.cpp, using notably odometry_runner.h and odometry_runner.cpp, and instead of building the dataset from the config, build your custom dataset.
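
Something like the following skeleton (to be clear: the method names HasNext / NextFrame below are illustrative guesses, not the actual virtual interface; check ADatasetSequence / AFileSequence in dataset.h before implementing):

#include <string>
#include <vector>

// Illustrative skeleton only: a sequence over the velodyne_sync .bin files.
class SyncVelodyneSequence : public ct_icp::ADatasetSequence {
public:
    explicit SyncVelodyneSequence(std::vector<std::string> scan_files)
        : scan_files_(std::move(scan_files)) {}

    bool HasNext() const { return index_ < scan_files_.size(); }

    std::vector<ct_icp::Point3D> NextFrame() {
        std::vector<ct_icp::Point3D> points;
        // Decode scan_files_[index_] (one velodyne_sync .bin file) into points.
        // The sync scans are already motion compensated, so give every point
        // the same timestamp, and set OdometryOptions::motion_compensation to
        // the standard rigid transform rather than the continuous-time mode.
        ++index_;
        return points;
    }

private:
    std::vector<std::string> scan_files_;
    size_t index_ = 0;
};

// Then build the dataset with Dataset::BuildCustomDataset and run it in a
// main modeled on command/cmd_run_odometry.cpp.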

GT Poses

In this library, if you want to interpolate the poses at the timestamps of the frames, you can:

1. Load the original poses from NCLT (which should have a higher frequency) using, from dataset.h:

std::vector<slam::Pose> ReadNCLTPoses(const std::string &file_path);

2. Build a linear trajectory from the poses (look at include/SlamCore/trajectory.h):

std::vector<double> timestamps; //< The timestamps you want to sample

// ... Select the timestamps to sample, for example by reading all frames from the
//     NCLT Dataset Iterator (taking the min / max timestamp of each frame),
//     or by using the poses saved by the SLAM as PLY files, which contain a timestamp for each pose

auto nclt_poses = ct_icp::ReadNCLTPoses(nclt_poses_file);
auto linear_trajectory = slam::LinearContinuousTrajectory::Create(std::move(nclt_poses));

std::vector<slam::Pose> poses_sampled;
for (double t : timestamps)
    poses_sampled.push_back(linear_trajectory.InterpolatePose(t));
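
If you then want those sampled poses as a gt_poses.txt (in the KITTI text format: 12 values per line, the first three rows of the 4x4 pose matrix), a small self-contained helper like the one below works. I leave out the conversion from slam::Pose to Eigen::Matrix4d; check Pose's accessors in SlamCore for that:

#include <Eigen/Dense>
#include <fstream>
#include <iomanip>
#include <string>
#include <vector>

// Writes poses in the KITTI text format: one pose per line, the first
// 3 rows of the 4x4 matrix in row-major order, 12 space-separated values.
void WriteKittiPoses(const std::string &path,
                     const std::vector<Eigen::Matrix4d> &poses) {
    std::ofstream out(path);
    out << std::setprecision(12);
    for (const auto &pose : poses)
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 4; ++c)
                out << pose(r, c) << (r == 2 && c == 3 ? '\n' : ' ');
}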
