
Set error status when duplicate markers are in the same MarkerArray #891

Merged

Conversation

sloretz
Contributor

@sloretz sloretz commented Aug 29, 2022

This pull request adds an error status to the MarkerArray display when a single MarkerArray message containing duplicate markers is received. This can happen when the publisher's author forgot to set id or ns. Assuming all the Marker messages have the same action (add/modify), only the last of the duplicate markers ends up being displayed.

[Screen capture: duplicate_marker_check]

Script for checking
import copy
import time

import rclpy
from visualization_msgs.msg import Marker, MarkerArray


def frame_markers():
    tpl = Marker()
    tpl.header.frame_id = 'map'
    tpl.type = Marker.CYLINDER
    tpl.scale.x = 0.25
    tpl.scale.y = 0.25
    tpl.scale.z = 0.5
    tpl.color.a = 1.0
    tpl.ns = 'my_pose'

    msg_x = copy.deepcopy(tpl)
    msg_x.color.r = 1.0
    msg_x.pose.position.x = -0.5
    msg_x.id = 0

    msg_y = copy.deepcopy(tpl)
    msg_y.color.g = 1.0
    msg_y.pose.position.x = 0.0
    msg_y.id = 1

    msg_z = copy.deepcopy(tpl)
    msg_z.color.b = 1.0
    msg_z.pose.position.x = 0.5
    # msg_z.id = 2
    msg_z.id = 1   # Duplicate!
    return [msg_x, msg_y, msg_z]


def main():
    rclpy.init()

    node = rclpy.create_node("rviz_pub")
    marker_pub = node.create_publisher(MarkerArray, "via_marker_array", 10)

    while True:
        message = MarkerArray()
        message.markers = frame_markers()
        marker_pub.publish(message)
        time.sleep(0.1)  # throttle publishing instead of spinning the loop flat out


if __name__ == "__main__":
    main()

@sloretz sloretz added the enhancement New feature or request label Aug 29, 2022
@sloretz sloretz self-assigned this Aug 29, 2022
@EricCousineau-TRI

EricCousineau-TRI commented Aug 30, 2022

nice! one concern though is performance - are there any simple performance tests that can be done to check that this processing doesn't blow the compute budget for 30 FPS when using something like 1000 markers?

Also, is there perhaps a more efficient way to bookkeep? e.g. std::set<PairType>, vs. searching in std::vector<>?
also, any chance this can just be a warning, with no other behavior changes? e.g.

// Sketch; PairType, addMessage(), copy_of(), and warn() are placeholders.
std::set<PairType> existing;
bool found_duplicates = false;
for (const auto & marker : markers) {
  addMessage(copy_of(marker));
  if (!found_duplicates) {
    PairType pair{marker.ns, marker.id};
    const bool is_new = existing.insert(pair).second;
    found_duplicates = !is_new;
  }
}
if (found_duplicates) {
  warn();
}

@sloretz
Contributor Author

sloretz commented Aug 30, 2022

any chance this can just be a warning, with no other behavior changes? e.g.

I don't think it changes behavior, at least that was my intent since I'd like to backport this.

@EricCousineau-TRI

gotcha, missed that. and yeah, your setup looks great, but still, perhaps it'd be good to use something with overall comparison time better than O(N^2) just in case marker.id is not strictly increasing?

also, is it necessary to have one loop with the full critical section, or might it be best to process the message / check for duplicates before obtaining the lock?

soz for asking dumb questions, just want to check
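
For concreteness, the two lock-scope options being asked about might look roughly like the following sketch; the names (MarkerKey, queue_mutex, queue) are hypothetical stand-ins, not the actual rviz members.

#include <cstdint>
#include <mutex>
#include <set>
#include <string>
#include <utility>
#include <vector>

using MarkerKey = std::pair<int32_t, std::string>;  // (id, ns) stand-in

// Option A: one critical section around the whole loop.
bool addAllUnderOneLock(
  const std::vector<MarkerKey> & keys, std::mutex & queue_mutex,
  std::vector<MarkerKey> & queue)
{
  std::set<MarkerKey> seen;
  bool found_duplicate = false;
  std::lock_guard<std::mutex> lock(queue_mutex);  // held for the whole loop
  for (const auto & key : keys) {
    found_duplicate = !seen.insert(key).second || found_duplicate;
    queue.push_back(key);
  }
  return found_duplicate;
}

// Option B: check for duplicates outside the lock, lock only while enqueueing.
bool checkThenAddPerMessage(
  const std::vector<MarkerKey> & keys, std::mutex & queue_mutex,
  std::vector<MarkerKey> & queue)
{
  std::set<MarkerKey> seen;
  bool found_duplicate = false;
  for (const auto & key : keys) {
    found_duplicate = !seen.insert(key).second || found_duplicate;  // no lock needed here
    std::lock_guard<std::mutex> lock(queue_mutex);  // held only for the push
    queue.push_back(key);
  }
  return found_duplicate;
}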

@sloretz
Contributor Author

sloretz commented Aug 30, 2022

is there perhaps a more efficient way to bookkeep? e.g. std::set<PairType>, vs. searching in std::vector<>?

46382a0 adds a temporary benchmark looking at how ids affect performance.

56: 2022-08-29T18:17:01-07:00                      
56: Running /home/osrf/ws/ros2/build/rviz_default_plugins/bm891
56: Run on (24 X 4950.19 MHz CPU s)
56: CPU Caches:
56:   L1 Data 32 KiB (x12)
56:   L1 Instruction 32 KiB (x12)
56:   L2 Unified 512 KiB (x12)
56:   L3 Unified 32768 KiB (x2)
56: Load Average: 2.42, 2.64, 2.63
56: ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
56: ***WARNING*** Library was built as DEBUG. Timings may be affected.
56: ---------------------------------------------------------------------------------
56: Benchmark                                       Time             CPU   Iterations
56: ---------------------------------------------------------------------------------
56: no_check_10                                  7703 ns         7702 ns        91065
56: no_check_100                                69445 ns        69436 ns        10029
56: no_check_1000                              683527 ns       683407 ns         1024
56: no_check_10000                            6919471 ns      6918240 ns           99
56: increasing_ids__vector_of_pairs__10          8011 ns         8010 ns        87359
56: increasing_ids__vector_of_pairs__100        70618 ns        70608 ns         9884
56: increasing_ids__vector_of_pairs__1000      692104 ns       691993 ns         1014
56: increasing_ids__vector_of_pairs__10000    7022189 ns      7020993 ns           98
56: decreasing_ids__vector_of_pairs__10          8612 ns         8611 ns        81219
56: decreasing_ids__vector_of_pairs__100       103880 ns       103866 ns         6773
56: decreasing_ids__vector_of_pairs__1000     3669271 ns      3668772 ns          191
56: decreasing_ids__vector_of_pairs__10000  302649791 ns    302603300 ns            2
56: increasing_ids__set_insertion__10           10192 ns        10191 ns        68646
56: increasing_ids__set_insertion__100         104252 ns       104239 ns         6709
56: increasing_ids__set_insertion__1000       1133052 ns      1132858 ns          617
56: increasing_ids__set_insertion__10000     12522444 ns     12520509 ns           56

The best case of the current method (called vector_of_pairs) adds just a few percent of overhead, but the worst case is about 43 times slower than not checking for duplicates at 10,000 elements. The implementation using a std::set performs worse than the vector_of_pairs method up to 100 elements, but does better beyond that; it stays roughly 50% to 80% slower than not checking at all.
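
For readers skimming the thread, here is a rough sketch of the two strategies being benchmarked. This is not the PR's actual code; the vector version shown is just one plausible implementation that is consistent with the increasing/decreasing asymmetry above (inserting into the middle of a sorted vector is O(N), so decreasing ids degrade the whole loop to O(N^2), while a std::set stays O(N log N)).

#include <algorithm>
#include <cstdint>
#include <set>
#include <string>
#include <utility>
#include <vector>

using MarkerKey = std::pair<std::string, int32_t>;  // (ns, id) stand-in for the pair type

// vector_of_pairs style: keep a sorted vector, binary-search each key, insert in place.
bool hasDuplicateSortedVector(const std::vector<MarkerKey> & keys)
{
  std::vector<MarkerKey> seen;
  bool found_duplicate = false;
  for (const auto & key : keys) {
    auto it = std::lower_bound(seen.begin(), seen.end(), key);
    if (it != seen.end() && *it == key) {
      found_duplicate = true;
    } else {
      seen.insert(it, key);  // O(N) shift when the key lands near the front
    }
  }
  return found_duplicate;
}

// set_insertion style: O(log N) insert per key regardless of id ordering.
bool hasDuplicateSet(const std::vector<MarkerKey> & keys)
{
  std::set<MarkerKey> seen;
  bool found_duplicate = false;
  for (const auto & key : keys) {
    found_duplicate = !seen.insert(key).second || found_duplicate;
  }
  return found_duplicate;
}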

@EricCousineau-TRI

EricCousineau-TRI commented Aug 30, 2022

Thanks! Given that (10, 10000) markers can take:

  • (7us, 7ms) without checks
  • (8us, 300ms) (!!!) for checks w/ std::vector<>
  • (10us, 12ms) for checks w/ std::set<>

I think std::set<> for checks makes the most sense, since you might end up taking the whole 33.3ms budget for 30 FPS just processing markers if using std::vector<>.
WDYT?

@EricCousineau-TRI

(as an aside, mayhaps std::unordered_set<> could help the performance at cost of memory; but it may be marginal in this case - https://stackoverflow.com/a/1349883/7829525 )

@sloretz sloretz force-pushed the sloretz_error_when_duplicate_markers_in_marker_array branch from 66aa859 to 3662e20 on August 30, 2022 16:06
@sloretz
Contributor Author

sloretz commented Aug 30, 2022

I think std::set<> for checks makes the most sense, since you might end up taking the whole 33.3ms budget for 30 FPS just processing markers if using std::vector<>.

I added some more benchmarks and switched to std::set with id now first in the pair type in 3662e20.
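
The gist of the id-first ordering, as a minimal sketch (not the exact diff in 3662e20): std::pair's lexicographic operator< compares the cheap int32 id before ever touching the ns string, so most lookups avoid a string comparison.

#include <cstdint>
#include <set>
#include <string>
#include <utility>

// id first, ns second: the int compares before the string in operator<.
using pair_type = std::pair<int32_t, std::string>;

// Returns true if (id, ns) was already seen in this MarkerArray message.
bool isDuplicate(std::set<pair_type> & unique_markers, int32_t id, const std::string & ns)
{
  pair_type pair(id, ns);
  return !unique_markers.insert(pair).second;  // insert().second is false on duplicates
}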

56: 2022-08-30T09:02:03-07:00
56: Running /home/osrf/ws/ros2/build/rviz_default_plugins/bm891
56: Run on (24 X 4950.19 MHz CPU s)
56: CPU Caches:
56:   L1 Data 32 KiB (x12)
56:   L1 Instruction 32 KiB (x12)
56:   L2 Unified 512 KiB (x12)
56:   L3 Unified 32768 KiB (x2)
56: Load Average: 3.25, 5.05, 3.74
56: ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
56: ***WARNING*** Library was built as DEBUG. Timings may be affected.
56: ----------------------------------------------------------------------------------------
56: Benchmark                                              Time             CPU   Iterations
56: ----------------------------------------------------------------------------------------
56: no_check_10                                         7594 ns         7593 ns        92133
56: no_check_100                                       66750 ns        66737 ns        10404
56: no_check_1000                                     652384 ns       651472 ns         1018
56: no_check_10000                                   6640213 ns      6627208 ns          103
56: increasing_ids__vector_of_pairs__10                 7890 ns         7888 ns        88519
56: increasing_ids__vector_of_pairs__100               69045 ns        69036 ns         9576
56: increasing_ids__vector_of_pairs__1000             669675 ns       669581 ns         1043
56: increasing_ids__vector_of_pairs__10000           6984555 ns      6983393 ns          100
56: decreasing_ids__vector_of_pairs__10                 8390 ns         8389 ns        80266
56: decreasing_ids__vector_of_pairs__100               98628 ns        98618 ns         7103
56: decreasing_ids__vector_of_pairs__1000            3421370 ns      3420919 ns          205
56: decreasing_ids__vector_of_pairs__10000         280956496 ns    280915720 ns            2
56: increasing_ids__set_insertion__10                  10226 ns        10225 ns        68416
56: increasing_ids__set_insertion__100                102784 ns       102769 ns         6771
56: increasing_ids__set_insertion__1000              1101864 ns      1101723 ns          634
56: increasing_ids__set_insertion__10000            12099845 ns     12097916 ns           58
56: decreasing_ids__set_insertion__10                  10138 ns        10137 ns        69105
56: decreasing_ids__set_insertion__100                 99622 ns        99606 ns         6971
56: decreasing_ids__set_insertion__1000              1083470 ns      1083285 ns          639
56: decreasing_ids__set_insertion__10000            11988269 ns     11986155 ns           57
56: increasing_ids__set_insertion_id_first__10          9983 ns         9982 ns        70204
56: increasing_ids__set_insertion_id_first__100        96213 ns        96202 ns         7279
56: increasing_ids__set_insertion_id_first__1000     1004277 ns      1004132 ns          697
56: increasing_ids__set_insertion_id_first__10000   10756909 ns     10755110 ns           65
56: decreasing_ids__set_insertion_id_first__10          9953 ns         9951 ns        69744
56: decreasing_ids__set_insertion_id_first__100        96422 ns        96412 ns         7265
56: decreasing_ids__set_insertion_id_first__1000     1011766 ns      1011640 ns          692
56: decreasing_ids__set_insertion_id_first__10000   10752093 ns     10750556 ns           65

as an aside, mayhaps std::unordered_set<> could help the performance at cost of memory;

I didn't try this one because it requires implementing a hash function for the pair.
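
For reference, the extra boilerplate would look roughly like this — a sketch of the kind of hasher that would be needed, not something benchmarked in this PR.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_set>
#include <utility>

// std::unordered_set has no std::hash specialization for std::pair,
// so a custom hasher must combine the two member hashes.
struct PairHash
{
  std::size_t operator()(const std::pair<int32_t, std::string> & key) const
  {
    const std::size_t h1 = std::hash<int32_t>{}(key.first);
    const std::size_t h2 = std::hash<std::string>{}(key.second);
    return h1 ^ (h2 + 0x9e3779b9 + (h1 << 6) + (h1 >> 2));  // boost::hash_combine-style mix
  }
};

using UnorderedMarkerKeys =
  std::unordered_set<std::pair<int32_t, std::string>, PairHash>;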

@EricCousineau-TRI

Yup, those results look like a convincing case for a reasonable trade-off between best- and worst-case using set + id-first, thanks!

@sloretz sloretz force-pushed the sloretz_error_when_duplicate_markers_in_marker_array branch from 3662e20 to 4e01043 on September 1, 2022 21:49
@sloretz
Contributor Author

sloretz commented Sep 1, 2022

CI (build: --packages-above-and-dependencies rviz_default_plugins test: --packages-above rviz_default_plugins)

  • Linux Build Status
  • Linux-aarch64 Build Status
  • Windows Build Status

@EricCousineau-TRI EricCousineau-TRI left a comment

Approved from my perspective. Any chance there's a maintainer available to review?

Member

@jacobperron jacobperron left a comment

LGTM, one comment below; take it or leave it.

Comment on lines +167 to +171
pair_type pair(marker.id, marker.ns);
found_duplicate = !unique_markers.insert(pair).second;
Member

Consider using emplace if performance is a concern. I'm not sure if it moves the needle on your benchmark though:

Suggested change
pair_type pair(marker.id, marker.ns);
found_duplicate = !unique_markers.insert(pair).second;
found_duplicate = !unique_markers.emplace(marker.id, marker.ns).second;

Contributor Author

I tried it out in a benchmark, but I can't see a difference between emplace and insert within the noise.

One of the runs
56: 2022-09-02T13:58:15-07:00                      
56: Running /home/osrf/ws/ros2/build/rviz_default_plugins/bm891
56: Run on (24 X 4950.19 MHz CPU s)
56: CPU Caches:
56:   L1 Data 32 KiB (x12)
56:   L1 Instruction 32 KiB (x12)
56:   L2 Unified 512 KiB (x12)
56:   L3 Unified 32768 KiB (x2)
56: Load Average: 1.67, 2.49, 1.99
56: ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
56: ------------------------------------------------------------------------------------------------------------
56: Benchmark                                                                  Time             CPU   Iterations
56: ------------------------------------------------------------------------------------------------------------
56: no_check_10                                                              494 ns          494 ns      1390157
56: no_check_100                                                            5035 ns         5034 ns       139124
56: no_check_1000                                                          52140 ns        52134 ns        13403
56: no_check_10000                                                        522287 ns       522195 ns         1330
56: increasing_ids__vector_of_pairs__10                                      540 ns          540 ns      1293151
56: increasing_ids__vector_of_pairs__100                                    5126 ns         5124 ns       136431
56: increasing_ids__vector_of_pairs__1000                                  53451 ns        53444 ns        13273
56: increasing_ids__vector_of_pairs__10000                                526744 ns       526664 ns         1291
56: decreasing_ids__vector_of_pairs__10                                      552 ns          552 ns      1266562
56: decreasing_ids__vector_of_pairs__100                                    7073 ns         7072 ns        97768
56: decreasing_ids__vector_of_pairs__1000                                 187947 ns       187922 ns         3712
56: decreasing_ids__vector_of_pairs__10000                              11610110 ns     11608107 ns           60
56: increasing_ids__set_insertion__10                                        730 ns          730 ns       954514
56: increasing_ids__set_insertion__100                                     10345 ns        10341 ns        67748
56: increasing_ids__set_insertion__1000                                   121661 ns       121645 ns         5752
56: increasing_ids__set_insertion__10000                                 1360934 ns      1360740 ns          516
56: decreasing_ids__set_insertion__10                                        738 ns          738 ns       947607
56: decreasing_ids__set_insertion__100                                      9665 ns         9664 ns        72394
56: decreasing_ids__set_insertion__1000                                   120601 ns       120586 ns         5779
56: decreasing_ids__set_insertion__10000                                 1340419 ns      1340184 ns          526
56: increasing_ids__set_insertion_id_first__10                               708 ns          708 ns       987930
56: increasing_ids__set_insertion_id_first__100                             8838 ns         8837 ns        79525
56: increasing_ids__set_insertion_id_first__1000                          105766 ns       105751 ns         6617
56: increasing_ids__set_insertion_id_first__10000                        1141860 ns      1141685 ns          615
56: decreasing_ids__set_insertion_id_first__10                               724 ns          724 ns       967446
56: decreasing_ids__set_insertion_id_first__100                             8674 ns         8673 ns        80745
56: decreasing_ids__set_insertion_id_first__1000                          105557 ns       105542 ns         6615
56: decreasing_ids__set_insertion_id_first__10000                        1128819 ns      1128658 ns          631
56: increasing_ids__set_insertion_id_first_always_lock__10                   752 ns          752 ns       930742
56: increasing_ids__set_insertion_id_first_always_lock__100                 9304 ns         9303 ns        75296
56: increasing_ids__set_insertion_id_first_always_lock__1000              109530 ns       109515 ns         6393
56: increasing_ids__set_insertion_id_first_always_lock__10000            1175368 ns      1175185 ns          596
56: decreasing_ids__set_insertion_id_first_always_lock__10                   722 ns          722 ns       968217
56: decreasing_ids__set_insertion_id_first_always_lock__100                 9042 ns         9041 ns        77217
56: decreasing_ids__set_insertion_id_first_always_lock__1000              108964 ns       108949 ns         6395
56: decreasing_ids__set_insertion_id_first_always_lock__10000            1185452 ns      1185020 ns          598
56: increasing_ids__set_insertion_id_first_always_lock_emplace__10           757 ns          757 ns       921752                 
56: increasing_ids__set_insertion_id_first_always_lock_emplace__100         9250 ns         9249 ns        75718
56: increasing_ids__set_insertion_id_first_always_lock_emplace__1000      109513 ns       109499 ns         6318
56: increasing_ids__set_insertion_id_first_always_lock_emplace__10000    1190149 ns      1189968 ns          591
56: decreasing_ids__set_insertion_id_first_always_lock_emplace__10           759 ns          759 ns       920237
56: decreasing_ids__set_insertion_id_first_always_lock_emplace__100         9133 ns         9132 ns        76482
56: decreasing_ids__set_insertion_id_first_always_lock_emplace__1000      109207 ns       109191 ns         6436
56: decreasing_ids__set_insertion_id_first_always_lock_emplace__10000    1139710 ns      1139538 ns          613

Member

@wjwwood wjwwood left a comment

LGTM, aside from my comment about providing more debug info.

display_->setStatusStd(
  rviz_common::properties::StatusProperty::Error,
  kDuplicateStatus,
  "Multiple Markers in the same MarkerArray message had the same namespace and id");
Member

Is this enough information for someone to debug this problem?

Maybe it should include how many are duplicates and/or which namespace/id are offending.

Contributor Author

I made the error message include the first offending namespace and id in ba16715
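
Roughly along these lines — a sketch of the idea, not the exact text of ba16715 (marker is assumed to be the offending visualization_msgs Marker in scope):

// Sketch: report the first offending namespace and id in the status text.
const std::string text =
  "Multiple Markers in the same MarkerArray message had the same ns and id: ns = '" +
  marker.ns + "', id = " + std::to_string(marker.id);
display_->setStatusStd(
  rviz_common::properties::StatusProperty::Error, kDuplicateStatus, text);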

@sloretz sloretz force-pushed the sloretz_error_when_duplicate_markers_in_marker_array branch from 4e01043 to 9bc888b on September 2, 2022 21:24
@sloretz
Contributor Author

sloretz commented Sep 2, 2022

CI re-run (this time just FastRTPS because I don't think this change needs testing on multiple RMW implementations)

  • Linux Build Status
  • Linux-aarch64 Build Status
  • Windows Build Status

@sloretz
Contributor Author

sloretz commented Sep 3, 2022

The CMake warning on Windows is unrelated to this PR.

@sloretz sloretz requested a review from wjwwood September 3, 2022 00:13
@sloretz sloretz merged commit e069694 into rolling Sep 6, 2022
@delete-merged-branch delete-merged-branch bot deleted the sloretz_error_when_duplicate_markers_in_marker_array branch September 6, 2022 16:49
@sloretz
Contributor Author

sloretz commented Sep 12, 2022

@Mergifyio backport humble galactic foxy

mergify bot pushed a commit that referenced this pull request Sep 12, 2022
…891)

* Set error status when duplicate markers are in the same MarkerArray

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

* Use std::set with id before ns

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

* Lock/Unlock for every message

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

* Output first offending namespace and id

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

* Add benchmark

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

More benchmarks

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

* Revert "Add benchmark"

This reverts commit 8aeea4c.

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>

Signed-off-by: Shane Loretz <sloretz@osrfoundation.org>
(cherry picked from commit e069694)
mergify bot pushed a commit that referenced this pull request Sep 12, 2022
…891)
@mergify

mergify bot commented Sep 12, 2022

backport humble galactic foxy

✅ Backports have been created

mergify bot pushed a commit that referenced this pull request Sep 12, 2022
…891)
sloretz added a commit that referenced this pull request Sep 12, 2022
…891) (#901)
sloretz added a commit that referenced this pull request Sep 12, 2022
…891) (#899)
sloretz added a commit that referenced this pull request Sep 12, 2022
…891) (#900)