
Conversation


@xacrimon commented Apr 26, 2025

This is the last PR of the trio of changes I wanted that I've had on my own fork for a little while now. I suspect that inlining attributes have been added over time haphazardly to random functions, without really considering the effects or checking that the result is desirable.

So, I've gone through every method in the public API, looked at its disassembly at various opt levels ("3", "s", "z", 0), and tweaked things function by function so that only code that needs to be inlined for performance is, and an appropriate amount is outlined at each level. This also included some miscellaneous fixes like swapping the hasher wrapper methods to #[inline(always)], since not inlining them generates slower and larger code at every opt level except 0.
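As a rough illustration of the hasher wrapper point, here is a minimal sketch with made-up names, not hashlink's actual wrapper: each forwarding method is a single delegating call, so forcing inlining just strips the wrapper layer out of the generated code.

```rust
use std::hash::Hasher;

// Hypothetical newtype wrapper around any Hasher; not hashlink's real type.
struct WrappedHasher<H: Hasher>(H);

impl<H: Hasher> Hasher for WrappedHasher<H> {
    // Each method is a one-line delegation, so #[inline(always)] removes the
    // wrapper entirely at every opt level above 0.
    #[inline(always)]
    fn write(&mut self, bytes: &[u8]) {
        self.0.write(bytes)
    }

    #[inline(always)]
    fn finish(&self) -> u64 {
        self.0.finish()
    }
}
```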

Generally, the changes and thought process can be summed up as follows (a brief sketch illustrating the policy appears after the list):

  • Don't force inlining on large/complex/expensive/infrequent API methods like clear() or shrink_to_fit; call overhead for these is not a real concern, but the compile-time drawbacks of including and recompiling those extra #[inline] functions in every codegen unit are.
  • Keep/add #[inline] on outer API-layer functions that are mostly just a shim over a couple of internal fns.
  • Remove the attribute from large internal items that implement big, complex bits of logic. This lets inlining of these adapt to the opt level and callsite, and can significantly reduce the code size of crates using hashlink.
  • Keep/add inlining for hot but small/simple internals like hash_key, where inlining into the API caller can be highly beneficial for out-of-order execution reasons.
  • Remove where nonsensical, like Debug::fmt implementations.
  • Otherwise, default public generic functions to no attribute. They can still be inlined into callers in other crates without LTO since they're generic, but the inline-threshold-reducing effect of #[inline] is avoided; applied randomly, it just prevents LLVM's heuristics from making good decisions.
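A minimal sketch of how these rules land on a hypothetical map type (names like hash_key, get_inner, and Map are illustrative, not hashlink's actual internals):

```rust
use std::fmt;
use std::hash::{BuildHasher, Hash, Hasher};

pub struct Map<K, V, S> {
    entries: Vec<(K, V)>,
    hash_builder: S,
}

// Hot, tiny internal helper: keep #[inline] so it folds into API callers.
#[inline]
fn hash_key<S: BuildHasher, K: Hash + ?Sized>(s: &S, k: &K) -> u64 {
    let mut hasher = s.build_hasher();
    k.hash(&mut hasher);
    hasher.finish()
}

impl<K: Hash + Eq, V, S: BuildHasher> Map<K, V, S> {
    // Thin public shim over internals: keep #[inline].
    #[inline]
    pub fn get(&self, key: &K) -> Option<&V> {
        let hash = hash_key(&self.hash_builder, key);
        self.get_inner(hash, key)
    }

    // Large/cold API method: no attribute, call overhead is irrelevant here.
    pub fn shrink_to_fit(&mut self) {
        self.entries.shrink_to_fit();
    }

    // Big internal routine: no attribute, let LLVM decide per opt level.
    fn get_inner(&self, _hash: u64, key: &K) -> Option<&V> {
        self.entries.iter().find(|(k, _)| k == key).map(|(_, v)| v)
    }
}

// Never worth forcing inlining: Debug::fmt stays attribute-free.
impl<K, V, S> fmt::Debug for Map<K, V, S> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Map").field("len", &self.entries.len()).finish()
    }
}
```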

My testing of this in real-world applications hasn't shown any performance regression, but it has noticeably improved build time, total generated LLVM IR lines, and binary size on my test binary projects. I also haven't been able to create a microbenchmark that regresses on these changes; I haven't observed any differences above statistical error.

@xacrimon xacrimon changed the title WIP: more sane usage of inline attributes WIP: Rework all inlining attribute decisions with a cohesive plan Apr 26, 2025
@xacrimon xacrimon marked this pull request as ready for review April 26, 2025 07:37
@xacrimon xacrimon changed the title WIP: Rework all inlining attribute decisions with a cohesive plan Rework all inlining attribute decisions with a cohesive plan Apr 26, 2025