Stop adding duplicate points to the clusters. #308

Merged: christian-rauch merged 2 commits into AprilRobotics:master from AustinSchuh:dedup_at_source on Jan 22, 2024
Conversation

@AustinSchuh (Contributor)

The segmentation system was adding duplicate points to the clusters list before removing them later.

This has two problems.

  1. It is more expensive to remove them later than not to add them in the first place.
  2. When sorting, multiple points can end up with the same calculated slope value. Depending on the initial order, the duplicates can land in non-adjacent slots in the cluster and won't get de-duplicated later (see the sketch at the end of this description).

This takes my sample image on my test box from:
 0                             init        0.000000 ms        0.000000 ms
 1                         decimate        0.292000 ms        0.292000 ms
 2                       blur/sharp        0.000000 ms        0.292000 ms
 3                        threshold        0.617000 ms        0.909000 ms
 4                        unionfind        3.746000 ms        4.655000 ms
 5                    make clusters        9.461000 ms       14.116000 ms
 6            fit quads to clusters       16.310000 ms       30.426000 ms
 7                            quads        0.087000 ms       30.513000 ms
 8                decode+refinement        0.869000 ms       31.382000 ms
 9                        reconcile        0.001000 ms       31.383000 ms
10                     debug output        0.001000 ms       31.384000 ms
11                          cleanup        0.003000 ms       31.387000 ms

to

 0                             init        0.000000 ms        0.000000 ms
 1                         decimate        0.303000 ms        0.303000 ms
 2                       blur/sharp        0.001000 ms        0.304000 ms
 3                        threshold        0.640000 ms        0.944000 ms
 4                        unionfind        3.645000 ms        4.589000 ms
 5                    make clusters        8.593000 ms       13.182000 ms
 6            fit quads to clusters       13.935000 ms       27.117000 ms
 7                            quads        0.084000 ms       27.201000 ms
 8                decode+refinement        0.827000 ms       28.028000 ms
 9                        reconcile        0.002000 ms       28.030000 ms
10                     debug output        0.000000 ms       28.030000 ms
11                          cleanup        0.004000 ms       28.034000 ms

(across 10 runs, picking one of the faster runs for the original and the slowest run for the new)
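
To make problem 2 concrete, here is a small, self-contained C sketch. It is not the library's code; the pt struct and the values are invented for illustration. When the sort key ties, the sort is free to leave duplicate points in non-adjacent slots, and a de-duplication pass that only compares neighbours never sees them:

    #include <stdio.h>

    /* Illustrative stand-in for a cluster point; not the library's struct. */
    struct pt {
        int x, y;
        float slope;  /* sort key used before de-duplication; ties are possible */
    };

    int main(void) {
        /* All three points tie on slope == 1.0, so a sort by slope may leave
         * the two copies of (3, 4) in non-adjacent slots, as shown here. */
        struct pt cluster[] = {
            {3, 4, 1.0f},
            {7, 2, 1.0f},
            {3, 4, 1.0f},  /* duplicate of cluster[0], separated by a tie */
        };
        int n = (int)(sizeof(cluster) / sizeof(cluster[0]));

        /* Adjacent-only de-duplication never sees the duplicate pair. */
        int found = 0;
        for (int i = 1; i < n; i++) {
            if (cluster[i].x == cluster[i - 1].x && cluster[i].y == cluster[i - 1].y) {
                found++;
            }
        }
        printf("duplicates caught by the adjacent-only pass: %d (one exists)\n", found);
        return 0;
    }

Adding each point to the cluster only once up front avoids both the extra removal work and this failure mode.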

@christian-rauch (Collaborator)

Can you add the example data and the comparison of the results to the true poses?

@AustinSchuh (Contributor, Author)

Took me a bit to get my results into a form that is easy to share.

I've got an algorithm in CUDA that I'm just getting working, and I've been testing it against all the intermediate steps of this C implementation. This change saved quite a bit of time there (removing duplicates is harder on a GPU), and I used that algorithm to verify these changes too: the lists of de-duplicated points for all the images matched across both implementations before and after this change. From that, I was able to confirm that the de-duplicated list of points across my test image set matched before and after this change as well.
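
For reference, the cross-check between the two implementations is conceptually just a canonical-order comparison of the de-duplicated point lists. A hypothetical sketch of that idea; the point2 type and function names are mine, not from either implementation:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical point type used only for this sketch. */
    struct point2 { uint16_t x, y; };

    int cmp_point(const void *a, const void *b) {
        const struct point2 *pa = a, *pb = b;
        if (pa->x != pb->x) return (pa->x > pb->x) - (pa->x < pb->x);
        return (pa->y > pb->y) - (pa->y < pb->y);
    }

    /* Sort both de-duplicated point lists into a canonical order and compare
     * them element by element; equal lists mean the two paths agree. */
    bool same_point_set(struct point2 *a, size_t na,
                        struct point2 *b, size_t nb) {
        if (na != nb) return false;
        qsort(a, na, sizeof(*a), cmp_point);
        qsort(b, nb, sizeof(*b), cmp_point);
        for (size_t i = 0; i < na; i++) {
            if (a[i].x != b[i].x || a[i].y != b[i].y) return false;
        }
        return true;
    }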

Here are the 12 images I've been using.
images.zip

Here's the output of my test before and after the change.
original.txt
deduplicated.txt


    // Snippet from my test harness; needs <iostream> and <iomanip>.
    // tag_detector_ and aprilrobotics_detections_ come from the surrounding
    // test code.
    timeprofile_display(tag_detector_->tp);

    // std::setprecision is sticky, so set it once for all corner coordinates.
    std::cerr << std::setprecision(20);

    for (int i = 0; i < zarray_size(aprilrobotics_detections_); i++) {
      apriltag_detection_t *det;
      zarray_get(aprilrobotics_detections_, i, &det);

      std::cerr << "Found tag number " << det->id
                << " hamming: " << det->hamming
                << " margin: " << det->decision_margin << " with corners: [("
                << det->p[0][0] << ", " << det->p[0][1] << "), ("
                << det->p[1][0] << ", " << det->p[1][1] << "), ("
                << det->p[2][0] << ", " << det->p[2][1] << "), ("
                << det->p[3][0] << ", " << det->p[3][1] << ")]" << std::endl;
    }

Image 7 produced one extra quad:

Found tag number 4 hamming: 1 margin: 0.926475 with corners: [(1102.6893310546875, 487.68725585937505684), (1116.0167236328127274, 497.47357177734375), (1126.0665283203125, 486.05984497070301131), (1103.8942871093747726, 468.698699951171875)]

The margin is tiny so, at least for us, any reasonable margin threshold would reject it. We use a threshold of 50 ourselves.
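
For reference, rejecting such quads is just a filter on decision_margin over the returned detections. A minimal sketch against the apriltag C API; the filter_by_margin name is mine, and the 50.0 threshold is the one we use, not a library default:

    #include "apriltag.h"

    /* Drop detections whose decision margin is below a caller-chosen
     * threshold (e.g. 50.0).  `detections` is the zarray returned by
     * apriltag_detector_detect(); iterate backwards so removals don't
     * disturb the indices still to be visited. */
    void filter_by_margin(zarray_t *detections, float min_margin) {
        for (int i = zarray_size(detections) - 1; i >= 0; i--) {
            apriltag_detection_t *det;
            zarray_get(detections, i, &det);
            if (det->decision_margin < min_margin) {
                zarray_remove_index(detections, i, 0);
                apriltag_detection_destroy(det);
            }
        }
    }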

Images 1 and 2 have small deviations.

Found tag number 7 hamming: 0 margin: 49.7088 with corners: [(991.11315917968738631, 252.836517333984375), (954.3212890625, 116.88887786865231533), (824.03729248046875, 158.61482238769528408), (854.81817626953125, 285.75524902343755684)]          

vs

Found tag number 7 hamming: 0 margin: 49.6801 with corners: [(991.11163330078136369, 252.83062744140625), (954.3212890625, 116.88887786865231533), (824.03564453124977263, 158.6153717041015625), (854.83093261718738631, 285.76669311523443184)]              

The largest change here is 0.012 pixels.

and for image 2,

Found tag number 12 hamming: 1 margin: 2.10535 with corners: [(359.837554931640625, 712.52160644531284106), (392.29718017578125, 704.33892822265625), (401.3436279296875, 649.40826416015613631), (384.111663818359375, 625.49871826171875)]

vs

Found tag number 12 hamming: 1 margin: 1.81046 with corners: [(359.78314208984375, 712.76220703125022737), (392.22988891601556816, 704.54791259765625), (401.34921264648443184, 649.37542724609375), (384.11029052734386369, 625.50805664062511369)]

Here we see a max change of 0.24 pixels, but I'm not worried about this one: as the low margin shows, it isn't a real detection.
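
The per-corner numbers above are just the largest absolute difference over the eight corner coordinates. A small sketch of that comparison, using the tag 7 corners from image 1 pasted from the logs above:

    #include <math.h>
    #include <stdio.h>

    /* Largest absolute per-coordinate difference between two sets of four
     * tag corners, as used for the "largest change" numbers above. */
    double max_corner_delta(const double a[4][2], const double b[4][2]) {
        double worst = 0.0;
        for (int c = 0; c < 4; c++) {
            for (int k = 0; k < 2; k++) {
                double d = fabs(a[c][k] - b[c][k]);
                if (d > worst) worst = d;
            }
        }
        return worst;
    }

    int main(void) {
        /* Image 1, tag 7: corners before and after the change (from the logs above). */
        const double before[4][2] = {
            {991.11315917968738631, 252.836517333984375},
            {954.3212890625, 116.88887786865231533},
            {824.03729248046875, 158.61482238769528408},
            {854.81817626953125, 285.75524902343755684},
        };
        const double after[4][2] = {
            {991.11163330078136369, 252.83062744140625},
            {954.3212890625, 116.88887786865231533},
            {824.03564453124977263, 158.6153717041015625},
            {854.83093261718738631, 285.76669311523443184},
        };
        printf("max corner delta: %.5f pixels\n", max_corner_delta(before, after));
        return 0;
    }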

(Re-reading my commit, I see I need to update some comments in addition to the code change.)

@christian-rauch (Collaborator)

@AustinSchuh You mentioned that you "need to update some comments in addition to the code change". Do you still plan to update the comments?

@AustinSchuh (Contributor, Author)

@christian-rauch Thanks for the prod. I did another pass through the code, and I think I updated all the comments in the "Fix perimeter threshold" commit. I can't find any more that talk about the duplication.

@christian-rauch (Collaborator)

Is this then ready from your side?

@AustinSchuh (Contributor, Author)

> Is this then ready from your side?

Whoops, thought I hit reply already. Yes, I believe this is ready.

@christian-rauch (Collaborator)

> Is this then ready from your side?
>
> Whoops, thought I hit reply already. Yes, I believe this is ready.

Cool. Can you rebase and (force) push? I updated the CI in the meantime and would like to have this tested again on the latest CI before merging.

The two commits in this pull request:

Commit 1: Stop adding duplicate points to the clusters.
(commit message identical to the pull request description above)
Signed-off-by: Austin Schuh <austin.linux@gmail.com>

Commit 2:
April tags will only have straight sides.  With the deduplication
happening earlier, each pixel along that boundary will now only
have 2 neighbors (one above, one at 45 degrees).  We should adjust the
max blob length threshold accordingly.

Signed-off-by: Austin Schuh <austin.linux@gmail.com>
@AustinSchuh (Contributor, Author)

> Cool. Can you rebase and (force) push? I updated the CI in the meantime and would like to have this tested again on the latest CI before merging.

Done!

christian-rauch merged commit 9b586ba into AprilRobotics:master on Jan 22, 2024.
mailmindlin added a commit to mailmindlin/apriltag-rs that referenced this pull request on May 7, 2024.