
Conversation

@oontvoo (Member) commented Oct 20, 2025

No description provided.

[lld-macho]Define a macro to allow specifying the slop size at build time.

We're seeing thunk size overrun in our large builds and it would be helpful to be able to increase the slop size.
However, this should probably not be changed between different runs of LLD, hence making it a build option so that it's fixed per lld binary.
@llvmbot (Member) commented Oct 20, 2025

@llvm/pr-subscribers-lld-macho

@llvm/pr-subscribers-lld

Author: Vy Nguyen (oontvoo)

Changes

We're seeing thunk size overrun in our large builds and it would be helpful to be able to increase the slop size. However, this should probably not be changed between different runs of LLD, hence making it a build option so that it's fixed per lld binary.


Full diff: https://github.com/llvm/llvm-project/pull/164295.diff

2 Files Affected:

  • (modified) lld/MachO/CMakeLists.txt (+5)
  • (modified) lld/MachO/ConcatOutputSection.cpp (+6-1)
diff --git a/lld/MachO/CMakeLists.txt b/lld/MachO/CMakeLists.txt
index 3cd94ced75cc0..5ad12eb04f4c3 100644
--- a/lld/MachO/CMakeLists.txt
+++ b/lld/MachO/CMakeLists.txt
@@ -1,3 +1,8 @@
+
+option(LLD_MACHO_SLOP_FACTOR
+    "Specify the slop factor. For very large build, the default of 256 might be too small, which can cause thunk-range overrun. Recommend increasing this value to 1024 or higher as needed.", 256)
+add_definitions("-DLLD_MACHO_SLOP_FACTOR=${LLD_MACHO_SLOP_FACTOR}")
+    
 set(LLVM_TARGET_DEFINITIONS Options.td)
 tablegen(LLVM Options.inc -gen-opt-parser-defs)
 add_public_tablegen_target(MachOOptionsTableGen)
diff --git a/lld/MachO/ConcatOutputSection.cpp b/lld/MachO/ConcatOutputSection.cpp
index 8067d23fa6faf..7a3476a145f0b 100644
--- a/lld/MachO/ConcatOutputSection.cpp
+++ b/lld/MachO/ConcatOutputSection.cpp
@@ -306,7 +306,12 @@ void TextOutputSection::finalize() {
     // contains several branch instructions in succession, then the distance
     // from the current position to the position where the thunks are inserted
     // grows. So leave room for a bunch of thunks.
-    unsigned slop = 256 * thunkSize;
+#ifdef LLD_MACHO_SLOP_FACTOR
+    const unsigned kSlopFact = LLD_MACHO_SLOP_FACTOR;
+#else
+    const unsigned kSlopFact = 256;
+#endif
+    unsigned slop = kSlopFact * thunkSize;
     while (finalIdx < endIdx) {
       uint64_t expectedNewSize =
           alignToPowerOf2(addr + size, inputs[finalIdx]->align) +
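As background for the slop computation in the hunk above, here is a minimal, self-contained C++ sketch of the idea. This is not lld's actual code: the branch range, thunk size, and section sizes are illustrative assumptions, and the real loop in TextOutputSection::finalize() tracks more state.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  // Illustrative assumptions, not lld's real values: an ARM64-style
  // +/-128 MiB direct-branch range and a 12-byte thunk.
  const uint64_t branchRange = 128ULL * 1024 * 1024;
  const uint64_t thunkSize = 12;
  const uint64_t slopScale = 256; // lld's current hard-coded default
  const uint64_t slop = slopScale * thunkSize;

  // Hypothetical input-section sizes being laid out into __text.
  std::vector<uint64_t> sectionSizes = {60ULL << 20, 48ULL << 20, 32ULL << 20};

  // Simplified version of the finalize() loop: keep finalizing sections
  // while the running size *plus slop* still fits in the branch range.
  // The slop reserves room for thunks inserted later, which would
  // otherwise push call sites out of range of their targets.
  uint64_t addr = 0;
  size_t finalIdx = 0;
  while (finalIdx < sectionSizes.size() &&
         addr + sectionSizes[finalIdx] + slop < branchRange) {
    addr += sectionSizes[finalIdx];
    ++finalIdx;
  }
  std::printf("finalized %zu of %zu sections before thunk insertion\n",
              finalIdx, sectionSizes.size());
  return 0;
}
```

A larger slop factor (e.g. configuring the build with `cmake -DLLD_MACHO_SLOP_FACTOR=1024`, per the option added above) makes this estimate more conservative, trading a little layout tightness for extra headroom in very large builds.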

Review thread on lld/MachO/CMakeLists.txt:

@@ -1,3 +1,8 @@

option(LLD_MACHO_SLOP_FACTOR
"Specify the slop factor. For very large build, the default of 256 might be too small, which can cause thunk-range overrun. Recommend increasing this value to 1024 or higher as needed.", 256)
Contributor commented:
Is there any better documentation we can provide as to what this actually is/does? For example, do we know what is causing a thunk-range overrun? Are there coding patterns that may cause it? This seems like we are just putting a band-aid on what may be a bad coding practice.

Also

"For very large builds" vs "For very large build".

Is this something that should maybe be specified as a build flag vs. a macro?

@oontvoo (Member, Author) replied:

Is there any better documentation we can provide as to what this actually is/does? For example, do we know what is causing a thunk-range overrun? Are there coding patterns that may cause it? This seems like we are just putting a band-aid on what may be a bad coding practice.

Ok, how about describing the root cause (i.e., too many consecutive branches)?
As for coding patterns that might cause it, one of them could be not using subsections_via_symbols (or anything that prevents dead-stripping), which would create a giant input section, increasing your chance of hitting this thunk-range overrun error.

Updated the code + docs to reflect that.
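To make the subsections point concrete, here is a hedged, self-contained C++ sketch, with hypothetical sizes, of why input-section granularity matters. Roughly speaking, thunks can only be placed in the gaps between input sections, so a monolithic section larger than the branch range leaves no usable insertion point; lld's real placement logic is more involved than this model.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Model: a thunk slot exists at each boundary between consecutive input
// sections. We need at least one boundary to fall inside the branch
// range, otherwise a far call has nowhere nearby to reach a thunk.
static bool hasInsertionPointInRange(const std::vector<uint64_t> &sections,
                                     uint64_t branchRange) {
  uint64_t addr = 0;
  for (size_t i = 0; i + 1 < sections.size(); ++i) {
    addr += sections[i];
    if (addr < branchRange)
      return true; // a boundary is reachable before the range runs out
  }
  return false;
}

int main() {
  const uint64_t branchRange = 128ULL * 1024 * 1024; // illustrative

  // With subsections_via_symbols: many small subsections, so plenty of
  // thunk slots land inside the branch range.
  std::vector<uint64_t> fineGrained(1000, 1ULL << 20); // 1000 x 1 MiB

  // Without it: one giant 256 MiB section followed by a small one; the
  // only boundary lies beyond the branch range.
  std::vector<uint64_t> monolithic = {256ULL << 20, 16ULL << 20};

  std::printf("fine-grained: %s\n",
              hasInsertionPointInRange(fineGrained, branchRange) ? "ok"
                                                                 : "overrun");
  std::printf("monolithic:   %s\n",
              hasInsertionPointInRange(monolithic, branchRange) ? "ok"
                                                                : "overrun");
  return 0;
}
```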

@ellishg (Contributor) commented Oct 20, 2025

However, this should probably not be changed between different runs of LLD

What do you mean by this? Why not create a new LLD flag? Maybe -thunk-scale or -slop-scale?

@DataCorrupted (Member) commented Oct 20, 2025

@ellishg I think what she meant is that for a given lld (binary executable), it should produce the same result; if you want something different, build a separate lld. But I agree with you: I don't see the motivation in this; a flag would be better IMO

Edit: Pronoun, my mistake for not paying more attention

@oontvoo (Member, Author) commented Oct 22, 2025

What do you mean by this? Why not create a new LLD flag? Maybe -thunk-scale or -slop-scale?

Yes, what @DataCorrupted said. (I was trying to avoid putting too many knobs in users' hands; instead, I preferred the option to be set by the toolchain maintainer.) But sure, I can see your argument for it being a normal linker option.
Updated the PR.

@oontvoo oontvoo changed the title [lld-macho]Define a macro to allow specifying the slop size at build time. [lld-macho]Define a flag for adjusting slop scale Oct 23, 2025
oontvoo and others added 3 commits October 28, 2025 13:11
Co-authored-by: Ellis Hoag <ellis.sparky.hoag@gmail.com>
Co-authored-by: Ellis Hoag <ellis.sparky.hoag@gmail.com>
@github-actions bot commented Oct 28, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@ellishg (Contributor) commented Oct 29, 2025

Can we add a test? Maybe we could have --slop-scale=1 hit the failure and show that increasing it fixes it.

@oontvoo (Member, Author) commented Oct 29, 2025

Can we add a test? Maybe we could have --slop-scale=1 hit the failure and show that increasing it fixes it.

Well, I don't have a small enough test case to check in.
But in principle, this PR isn't changing the thunk allocation algorithm; it only makes it a bit more configurable. So I think the test should simply be that the new option can be set.
