
Clean up the GSym error aggregation code, and pass the aggregator by reference #89688

Merged: 3 commits into llvm:main on Apr 30, 2024

Conversation

kevinfrei
Contributor

There was a problem with llvm-gsymutil's error aggregation code not properly collecting aggregated errors. The issue was that the output aggregator collecting errors from worker threads wasn't being passed by reference, so each thread merged its results into a copy of the app-wide output aggregator.

While I was at it, I added a better comment above the Merge code and made it a bit more efficient, after learning more about the differences between emplace, insert, and operator[] on std::map.
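The pattern the new Merge code relies on can be sketched in isolation. This is a minimal standalone illustration (not the LLVM classes themselves): std::map::emplace returns a {iterator, bool} pair, so a single lookup either inserts a new key or, when the key already exists, hands back the existing entry to accumulate into — avoiding the separate find-then-insert the old code did.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>

// Merge per-name counts from Src into Dst with one map lookup per key.
// emplace inserts {Name, Count} if Name is absent; otherwise Inserted is
// false and It points at the existing entry, which we accumulate into.
void Merge(std::map<std::string, std::size_t> &Dst,
           const std::map<std::string, std::size_t> &Src) {
  for (const auto &[Name, Count] : Src) {
    auto [It, Inserted] = Dst.emplace(Name, Count);
    if (!Inserted)
      It->second += Count; // key already present: add instead of insert
  }
}
```

Note that `operator[]` would also work (`Dst[Name] += Count;`) but default-constructs the value before adding, whereas emplace constructs it exactly once with the final count when the key is new.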

@kevinfrei kevinfrei marked this pull request as ready for review April 22, 2024 23:46
@llvmbot
Collaborator

llvmbot commented Apr 22, 2024

@llvm/pr-subscribers-debuginfo

Author: Kevin Frei (kevinfrei)

Changes

There was a problem with llvm-gsymutil's error aggregation code not properly collecting aggregated errors. The issue was that the output aggregator collecting errors from worker threads wasn't being passed by reference, so each thread merged its results into a copy of the app-wide output aggregator.

While I was at it, I added a better comment above the Merge code and made it a bit more efficient, after learning more about the differences between emplace, insert, and operator[] on std::map.


Full diff: https://github.com/llvm/llvm-project/pull/89688.diff

2 Files Affected:

  • (modified) llvm/include/llvm/DebugInfo/GSYM/OutputAggregator.h (+4-5)
  • (modified) llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp (+1-1)
diff --git a/llvm/include/llvm/DebugInfo/GSYM/OutputAggregator.h b/llvm/include/llvm/DebugInfo/GSYM/OutputAggregator.h
index 8deea3bff1ed76..35ef0a8bc89085 100644
--- a/llvm/include/llvm/DebugInfo/GSYM/OutputAggregator.h
+++ b/llvm/include/llvm/DebugInfo/GSYM/OutputAggregator.h
@@ -57,13 +57,12 @@ class OutputAggregator {
   }
 
   // For multi-threaded usage, we can collect stuff in another aggregator,
-  // then merge it in here
+  // then merge it in here. Note that this is *not* thread safe. It is up to
+  // the caller to ensure that this is only called from one thread at a time.
   void Merge(const OutputAggregator &other) {
     for (auto &&[name, count] : other.Aggregation) {
-      auto it = Aggregation.find(name);
-      if (it == Aggregation.end())
-        Aggregation.emplace(name, count);
-      else
+      auto [it, inserted] = Aggregation.emplace(name, count);
+      if (!inserted)
         it->second += count;
     }
   }
diff --git a/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp b/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
index ff6b560d11726b..601686fdd3dd51 100644
--- a/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
+++ b/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
@@ -612,7 +612,7 @@ Error DwarfTransformer::convert(uint32_t NumThreads, OutputAggregator &Out) {
       DWARFDie Die = getDie(*CU);
       if (Die) {
         CUInfo CUI(DICtx, dyn_cast<DWARFCompileUnit>(CU.get()));
-        pool.async([this, CUI, &LogMutex, Out, Die]() mutable {
+        pool.async([this, CUI, &LogMutex, &Out, Die]() mutable {
           std::string storage;
           raw_string_ostream StrStream(storage);
           OutputAggregator ThreadOut(Out.GetOS() ? &StrStream : nullptr);
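The one-character fix in the second hunk (`Out` to `&Out` in the lambda capture list) is the whole bug. A self-contained sketch of the behavior, using a hypothetical Counter type in place of OutputAggregator: a lambda that captures by value mutates its own private copy, so the caller's object never sees the update, while a by-reference capture mutates the original.

```cpp
// Hypothetical Counter standing in for OutputAggregator: the point is the
// capture semantics, not the aggregation logic.
struct Counter {
  int Total = 0;
};

// Capture by value: the lambda gets its own copy of Out ("mutable" is
// required even to modify the copy), and the update is lost on return.
inline int totalAfterByValueCapture() {
  Counter Out;
  auto Task = [Out]() mutable { Out.Total += 1; };
  Task();
  return Out.Total; // the caller's Out was never touched
}

// Capture by reference: the lambda mutates the caller's Out directly,
// which is what the worker threads in DwarfTransformer::convert need.
inline int totalAfterByReferenceCapture() {
  Counter Out;
  auto Task = [&Out]() { Out.Total += 1; };
  Task();
  return Out.Total;
}
```

With a thread pool, capturing `&Out` only makes the object reachable; as the new comment in the first hunk notes, the calls into it (here, Merge) still have to be serialized by the caller, which DwarfTransformer does under LogMutex.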

@kevinfrei
Contributor Author

@jeffreytan81 could you take a look? @clayborg is out for the week (and this is very simple)

@clayborg clayborg merged commit 6566ffd into llvm:main Apr 30, 2024
4 checks passed
@kevinfrei kevinfrei deleted the gsym-aggregate-bug branch July 2, 2024 21:02