
[CodeGen] Port AtomicExpand to new Pass Manager #71220

Merged: 1 commit, Feb 25, 2024

Conversation

Ris-Bali
Contributor

@Ris-Bali Ris-Bali commented Nov 3, 2023

Port the AtomicExpand pass to the new Pass Manager.
Fixes #64559
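
For context, a port of this kind turns the pass into a PassInfoMixin class with a run(Function &, FunctionAnalysisManager &) entry point (the header added by this patch appears further down), while a thin legacy wrapper keeps the old pipeline working. A minimal sketch of driving the ported pass through the new Pass Manager, assuming the class and header names used in this revision of the patch (ExpandAtomicPass in llvm/CodeGen/ExpandAtomic.h):

// Sketch only: run the ported pass through the new Pass Manager.
// The names ExpandAtomicPass and llvm/CodeGen/ExpandAtomic.h follow this
// revision of the patch and are not guaranteed to match what finally lands.
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/LoopAnalysisManager.h"
#include "llvm/CodeGen/ExpandAtomic.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"

using namespace llvm;

void expandAtomicsIn(Function &F, const TargetMachine *TM) {
  // Standard new-PM boilerplate: build the analysis managers and let
  // PassBuilder register the default analyses and cross-wire the proxies.
  LoopAnalysisManager LAM;
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;
  PassBuilder PB;
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  // The pass is parameterized on the TargetMachine, mirroring the legacy pass.
  FunctionPassManager FPM;
  FPM.addPass(ExpandAtomicPass(TM));
  FPM.run(F, FAM);
}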


github-actions bot commented Nov 3, 2023

⚠️ C/C++ code formatter, clang-format found issues in your code. ⚠️

You can test this locally with the following command:
git-clang-format --diff 3ee8c93769cd094ea0748b4a446a475160c0f51f 8cebd08be8baa2a743a4f186a09edb317ab65a3a -- llvm/include/llvm/CodeGen/AtomicExpand.h llvm/include/llvm/CodeGen/Passes.h llvm/include/llvm/InitializePasses.h llvm/include/llvm/LinkAllPasses.h llvm/lib/CodeGen/AtomicExpandPass.cpp llvm/lib/CodeGen/CodeGen.cpp llvm/lib/Passes/PassBuilder.cpp llvm/lib/Target/AArch64/AArch64TargetMachine.cpp llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp llvm/lib/Target/ARC/ARCTargetMachine.cpp llvm/lib/Target/ARM/ARMTargetMachine.cpp llvm/lib/Target/BPF/BPFTargetMachine.cpp llvm/lib/Target/CSKY/CSKYTargetMachine.cpp llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp llvm/lib/Target/Lanai/LanaiTargetMachine.cpp llvm/lib/Target/LoongArch/LoongArchTargetMachine.cpp llvm/lib/Target/M68k/M68kTargetMachine.cpp llvm/lib/Target/MSP430/MSP430TargetMachine.cpp llvm/lib/Target/Mips/MipsTargetMachine.cpp llvm/lib/Target/NVPTX/NVPTXTargetMachine.cpp llvm/lib/Target/PowerPC/PPCTargetMachine.cpp llvm/lib/Target/RISCV/RISCVTargetMachine.cpp llvm/lib/Target/Sparc/SparcTargetMachine.cpp llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp llvm/lib/Target/VE/VETargetMachine.cpp llvm/lib/Target/WebAssembly/WebAssemblyTargetMachine.cpp llvm/lib/Target/X86/X86TargetMachine.cpp llvm/lib/Target/XCore/XCoreTargetMachine.cpp llvm/tools/opt/optdriver.cpp
View the diff from clang-format here.
diff --git a/llvm/include/llvm/CodeGen/Passes.h b/llvm/include/llvm/CodeGen/Passes.h
index 3f0d81fa1d..b6cf8a9f29 100644
--- a/llvm/include/llvm/CodeGen/Passes.h
+++ b/llvm/include/llvm/CodeGen/Passes.h
@@ -44,562 +44,562 @@ namespace llvm {
   /// AtomicExpandPass - At IR level this pass replace atomic instructions with
   /// __atomic_* library calls, or target specific instruction which implement the
   /// same semantics in a way which better fits the target backend.
-  FunctionPass *createAtomicExpandLegacyPass();
-
-  /// createUnreachableBlockEliminationPass - The LLVM code generator does not
-  /// work well with unreachable basic blocks (what live ranges make sense for a
-  /// block that cannot be reached?).  As such, a code generator should either
-  /// not instruction select unreachable blocks, or run this pass as its
-  /// last LLVM modifying pass to clean up blocks that are not reachable from
-  /// the entry block.
-  FunctionPass *createUnreachableBlockEliminationPass();
+FunctionPass *createAtomicExpandLegacyPass();
+
+/// createUnreachableBlockEliminationPass - The LLVM code generator does not
+/// work well with unreachable basic blocks (what live ranges make sense for a
+/// block that cannot be reached?).  As such, a code generator should either
+/// not instruction select unreachable blocks, or run this pass as its
+/// last LLVM modifying pass to clean up blocks that are not reachable from
+/// the entry block.
+FunctionPass *createUnreachableBlockEliminationPass();
 
-  /// createGCEmptyBasicblocksPass - Empty basic blocks (basic blocks without
-  /// real code) appear as the result of optimization passes removing
-  /// instructions. These blocks confuscate profile analysis (e.g., basic block
-  /// sections) since they will share the address of their fallthrough blocks.
-  /// This pass garbage-collects such basic blocks.
-  MachineFunctionPass *createGCEmptyBasicBlocksPass();
-
-  /// createBasicBlockSections Pass - This pass assigns sections to machine
-  /// basic blocks and is enabled with -fbasic-block-sections.
-  MachineFunctionPass *createBasicBlockSectionsPass();
-
-  MachineFunctionPass *createBasicBlockPathCloningPass();
+/// createGCEmptyBasicblocksPass - Empty basic blocks (basic blocks without
+/// real code) appear as the result of optimization passes removing
+/// instructions. These blocks confuscate profile analysis (e.g., basic block
+/// sections) since they will share the address of their fallthrough blocks.
+/// This pass garbage-collects such basic blocks.
+MachineFunctionPass *createGCEmptyBasicBlocksPass();
+
+/// createBasicBlockSections Pass - This pass assigns sections to machine
+/// basic blocks and is enabled with -fbasic-block-sections.
+MachineFunctionPass *createBasicBlockSectionsPass();
+
+MachineFunctionPass *createBasicBlockPathCloningPass();
 
-  /// createMachineFunctionSplitterPass - This pass splits machine functions
-  /// using profile information.
-  MachineFunctionPass *createMachineFunctionSplitterPass();
+/// createMachineFunctionSplitterPass - This pass splits machine functions
+/// using profile information.
+MachineFunctionPass *createMachineFunctionSplitterPass();
 
-  /// MachineFunctionPrinter pass - This pass prints out the machine function to
-  /// the given stream as a debugging tool.
-  MachineFunctionPass *
-  createMachineFunctionPrinterPass(raw_ostream &OS,
-                                   const std::string &Banner ="");
+/// MachineFunctionPrinter pass - This pass prints out the machine function to
+/// the given stream as a debugging tool.
+MachineFunctionPass *
+createMachineFunctionPrinterPass(raw_ostream &OS,
+                                 const std::string &Banner = "");
 
-  /// StackFramePrinter pass - This pass prints out the machine function's
-  /// stack frame to the given stream as a debugging tool.
-  MachineFunctionPass *createStackFrameLayoutAnalysisPass();
+/// StackFramePrinter pass - This pass prints out the machine function's
+/// stack frame to the given stream as a debugging tool.
+MachineFunctionPass *createStackFrameLayoutAnalysisPass();
 
-  /// MIRPrinting pass - this pass prints out the LLVM IR into the given stream
-  /// using the MIR serialization format.
-  MachineFunctionPass *createPrintMIRPass(raw_ostream &OS);
+/// MIRPrinting pass - this pass prints out the LLVM IR into the given stream
+/// using the MIR serialization format.
+MachineFunctionPass *createPrintMIRPass(raw_ostream &OS);
 
-  /// This pass resets a MachineFunction when it has the FailedISel property
-  /// as if it was just created.
-  /// If EmitFallbackDiag is true, the pass will emit a
-  /// DiagnosticInfoISelFallback for every MachineFunction it resets.
-  /// If AbortOnFailedISel is true, abort compilation instead of resetting.
-  MachineFunctionPass *createResetMachineFunctionPass(bool EmitFallbackDiag,
-                                                      bool AbortOnFailedISel);
+/// This pass resets a MachineFunction when it has the FailedISel property
+/// as if it was just created.
+/// If EmitFallbackDiag is true, the pass will emit a
+/// DiagnosticInfoISelFallback for every MachineFunction it resets.
+/// If AbortOnFailedISel is true, abort compilation instead of resetting.
+MachineFunctionPass *createResetMachineFunctionPass(bool EmitFallbackDiag,
+                                                    bool AbortOnFailedISel);
 
-  /// createCodeGenPrepareLegacyPass - Transform the code to expose more pattern
-  /// matching during instruction selection.
-  FunctionPass *createCodeGenPrepareLegacyPass();
+/// createCodeGenPrepareLegacyPass - Transform the code to expose more pattern
+/// matching during instruction selection.
+FunctionPass *createCodeGenPrepareLegacyPass();
 
-  /// This pass implements generation of target-specific intrinsics to support
-  /// handling of complex number arithmetic
-  FunctionPass *createComplexDeinterleavingPass(const TargetMachine *TM);
+/// This pass implements generation of target-specific intrinsics to support
+/// handling of complex number arithmetic
+FunctionPass *createComplexDeinterleavingPass(const TargetMachine *TM);
 
-  /// AtomicExpandID -- Lowers atomic operations in terms of either cmpxchg
-  /// load-linked/store-conditional loops.
-  extern char &AtomicExpandID;
+/// AtomicExpandID -- Lowers atomic operations in terms of either cmpxchg
+/// load-linked/store-conditional loops.
+extern char &AtomicExpandID;
 
-  /// MachineLoopInfo - This pass is a loop analysis pass.
-  extern char &MachineLoopInfoID;
-
-  /// MachineDominators - This pass is a machine dominators analysis pass.
-  extern char &MachineDominatorsID;
+/// MachineLoopInfo - This pass is a loop analysis pass.
+extern char &MachineLoopInfoID;
+
+/// MachineDominators - This pass is a machine dominators analysis pass.
+extern char &MachineDominatorsID;
 
-  /// MachineDominanaceFrontier - This pass is a machine dominators analysis.
-  extern char &MachineDominanceFrontierID;
-
-  /// MachineRegionInfo - This pass computes SESE regions for machine functions.
-  extern char &MachineRegionInfoPassID;
-
-  /// EdgeBundles analysis - Bundle machine CFG edges.
-  extern char &EdgeBundlesID;
-
-  /// LiveVariables pass - This pass computes the set of blocks in which each
-  /// variable is life and sets machine operand kill flags.
-  extern char &LiveVariablesID;
-
-  /// PHIElimination - This pass eliminates machine instruction PHI nodes
-  /// by inserting copy instructions.  This destroys SSA information, but is the
-  /// desired input for some register allocators.  This pass is "required" by
-  /// these register allocator like this: AU.addRequiredID(PHIEliminationID);
-  extern char &PHIEliminationID;
-
-  /// LiveIntervals - This analysis keeps track of the live ranges of virtual
-  /// and physical registers.
-  extern char &LiveIntervalsID;
-
-  /// LiveStacks pass. An analysis keeping track of the liveness of stack slots.
-  extern char &LiveStacksID;
-
-  /// TwoAddressInstruction - This pass reduces two-address instructions to
-  /// use two operands. This destroys SSA information but it is desired by
-  /// register allocators.
-  extern char &TwoAddressInstructionPassID;
-
-  /// ProcessImpicitDefs pass - This pass removes IMPLICIT_DEFs.
-  extern char &ProcessImplicitDefsID;
-
-  /// RegisterCoalescer - This pass merges live ranges to eliminate copies.
-  extern char &RegisterCoalescerID;
-
-  /// MachineScheduler - This pass schedules machine instructions.
-  extern char &MachineSchedulerID;
-
-  /// PostMachineScheduler - This pass schedules machine instructions postRA.
-  extern char &PostMachineSchedulerID;
+/// MachineDominanaceFrontier - This pass is a machine dominators analysis.
+extern char &MachineDominanceFrontierID;
+
+/// MachineRegionInfo - This pass computes SESE regions for machine functions.
+extern char &MachineRegionInfoPassID;
+
+/// EdgeBundles analysis - Bundle machine CFG edges.
+extern char &EdgeBundlesID;
+
+/// LiveVariables pass - This pass computes the set of blocks in which each
+/// variable is life and sets machine operand kill flags.
+extern char &LiveVariablesID;
+
+/// PHIElimination - This pass eliminates machine instruction PHI nodes
+/// by inserting copy instructions.  This destroys SSA information, but is the
+/// desired input for some register allocators.  This pass is "required" by
+/// these register allocator like this: AU.addRequiredID(PHIEliminationID);
+extern char &PHIEliminationID;
+
+/// LiveIntervals - This analysis keeps track of the live ranges of virtual
+/// and physical registers.
+extern char &LiveIntervalsID;
+
+/// LiveStacks pass. An analysis keeping track of the liveness of stack slots.
+extern char &LiveStacksID;
+
+/// TwoAddressInstruction - This pass reduces two-address instructions to
+/// use two operands. This destroys SSA information but it is desired by
+/// register allocators.
+extern char &TwoAddressInstructionPassID;
+
+/// ProcessImpicitDefs pass - This pass removes IMPLICIT_DEFs.
+extern char &ProcessImplicitDefsID;
+
+/// RegisterCoalescer - This pass merges live ranges to eliminate copies.
+extern char &RegisterCoalescerID;
+
+/// MachineScheduler - This pass schedules machine instructions.
+extern char &MachineSchedulerID;
+
+/// PostMachineScheduler - This pass schedules machine instructions postRA.
+extern char &PostMachineSchedulerID;
 
-  /// SpillPlacement analysis. Suggest optimal placement of spill code between
-  /// basic blocks.
-  extern char &SpillPlacementID;
+/// SpillPlacement analysis. Suggest optimal placement of spill code between
+/// basic blocks.
+extern char &SpillPlacementID;
 
-  /// ShrinkWrap pass. Look for the best place to insert save and restore
-  // instruction and update the MachineFunctionInfo with that information.
-  extern char &ShrinkWrapID;
+/// ShrinkWrap pass. Look for the best place to insert save and restore
+// instruction and update the MachineFunctionInfo with that information.
+extern char &ShrinkWrapID;
 
-  /// LiveRangeShrink pass. Move instruction close to its definition to shrink
-  /// the definition's live range.
-  extern char &LiveRangeShrinkID;
+/// LiveRangeShrink pass. Move instruction close to its definition to shrink
+/// the definition's live range.
+extern char &LiveRangeShrinkID;
 
-  /// Greedy register allocator.
-  extern char &RAGreedyID;
-
-  /// Basic register allocator.
-  extern char &RABasicID;
-
-  /// VirtRegRewriter pass. Rewrite virtual registers to physical registers as
-  /// assigned in VirtRegMap.
-  extern char &VirtRegRewriterID;
-  FunctionPass *createVirtRegRewriter(bool ClearVirtRegs = true);
+/// Greedy register allocator.
+extern char &RAGreedyID;
+
+/// Basic register allocator.
+extern char &RABasicID;
+
+/// VirtRegRewriter pass. Rewrite virtual registers to physical registers as
+/// assigned in VirtRegMap.
+extern char &VirtRegRewriterID;
+FunctionPass *createVirtRegRewriter(bool ClearVirtRegs = true);
 
-  /// UnreachableMachineBlockElimination - This pass removes unreachable
-  /// machine basic blocks.
-  extern char &UnreachableMachineBlockElimID;
+/// UnreachableMachineBlockElimination - This pass removes unreachable
+/// machine basic blocks.
+extern char &UnreachableMachineBlockElimID;
 
-  /// DeadMachineInstructionElim - This pass removes dead machine instructions.
-  extern char &DeadMachineInstructionElimID;
+/// DeadMachineInstructionElim - This pass removes dead machine instructions.
+extern char &DeadMachineInstructionElimID;
 
-  /// This pass adds dead/undef flags after analyzing subregister lanes.
-  extern char &DetectDeadLanesID;
+/// This pass adds dead/undef flags after analyzing subregister lanes.
+extern char &DetectDeadLanesID;
 
-  /// This pass perform post-ra machine sink for COPY instructions.
-  extern char &PostRAMachineSinkingID;
+/// This pass perform post-ra machine sink for COPY instructions.
+extern char &PostRAMachineSinkingID;
 
-  /// This pass adds flow sensitive discriminators.
-  extern char &MIRAddFSDiscriminatorsID;
+/// This pass adds flow sensitive discriminators.
+extern char &MIRAddFSDiscriminatorsID;
 
-  /// This pass reads flow sensitive profile.
-  extern char &MIRProfileLoaderPassID;
+/// This pass reads flow sensitive profile.
+extern char &MIRProfileLoaderPassID;
 
-  /// FastRegisterAllocation Pass - This pass register allocates as fast as
-  /// possible. It is best suited for debug code where live ranges are short.
-  ///
-  FunctionPass *createFastRegisterAllocator();
-  FunctionPass *createFastRegisterAllocator(RegClassFilterFunc F,
-                                            bool ClearVirtRegs);
+/// FastRegisterAllocation Pass - This pass register allocates as fast as
+/// possible. It is best suited for debug code where live ranges are short.
+///
+FunctionPass *createFastRegisterAllocator();
+FunctionPass *createFastRegisterAllocator(RegClassFilterFunc F,
+                                          bool ClearVirtRegs);
 
-  /// BasicRegisterAllocation Pass - This pass implements a degenerate global
-  /// register allocator using the basic regalloc framework.
-  ///
-  FunctionPass *createBasicRegisterAllocator();
-  FunctionPass *createBasicRegisterAllocator(RegClassFilterFunc F);
+/// BasicRegisterAllocation Pass - This pass implements a degenerate global
+/// register allocator using the basic regalloc framework.
+///
+FunctionPass *createBasicRegisterAllocator();
+FunctionPass *createBasicRegisterAllocator(RegClassFilterFunc F);
 
-  /// Greedy register allocation pass - This pass implements a global register
-  /// allocator for optimized builds.
-  ///
-  FunctionPass *createGreedyRegisterAllocator();
-  FunctionPass *createGreedyRegisterAllocator(RegClassFilterFunc F);
+/// Greedy register allocation pass - This pass implements a global register
+/// allocator for optimized builds.
+///
+FunctionPass *createGreedyRegisterAllocator();
+FunctionPass *createGreedyRegisterAllocator(RegClassFilterFunc F);
 
-  /// PBQPRegisterAllocation Pass - This pass implements the Partitioned Boolean
-  /// Quadratic Prograaming (PBQP) based register allocator.
-  ///
-  FunctionPass *createDefaultPBQPRegisterAllocator();
+/// PBQPRegisterAllocation Pass - This pass implements the Partitioned Boolean
+/// Quadratic Prograaming (PBQP) based register allocator.
+///
+FunctionPass *createDefaultPBQPRegisterAllocator();
 
-  /// PrologEpilogCodeInserter - This pass inserts prolog and epilog code,
-  /// and eliminates abstract frame references.
-  extern char &PrologEpilogCodeInserterID;
-  MachineFunctionPass *createPrologEpilogInserterPass();
-
-  /// ExpandPostRAPseudos - This pass expands pseudo instructions after
-  /// register allocation.
-  extern char &ExpandPostRAPseudosID;
+/// PrologEpilogCodeInserter - This pass inserts prolog and epilog code,
+/// and eliminates abstract frame references.
+extern char &PrologEpilogCodeInserterID;
+MachineFunctionPass *createPrologEpilogInserterPass();
+
+/// ExpandPostRAPseudos - This pass expands pseudo instructions after
+/// register allocation.
+extern char &ExpandPostRAPseudosID;
 
-  /// PostRAHazardRecognizer - This pass runs the post-ra hazard
-  /// recognizer.
-  extern char &PostRAHazardRecognizerID;
+/// PostRAHazardRecognizer - This pass runs the post-ra hazard
+/// recognizer.
+extern char &PostRAHazardRecognizerID;
 
-  /// PostRAScheduler - This pass performs post register allocation
-  /// scheduling.
-  extern char &PostRASchedulerID;
+/// PostRAScheduler - This pass performs post register allocation
+/// scheduling.
+extern char &PostRASchedulerID;
 
-  /// BranchFolding - This pass performs machine code CFG based
-  /// optimizations to delete branches to branches, eliminate branches to
-  /// successor blocks (creating fall throughs), and eliminating branches over
-  /// branches.
-  extern char &BranchFolderPassID;
+/// BranchFolding - This pass performs machine code CFG based
+/// optimizations to delete branches to branches, eliminate branches to
+/// successor blocks (creating fall throughs), and eliminating branches over
+/// branches.
+extern char &BranchFolderPassID;
 
-  /// BranchRelaxation - This pass replaces branches that need to jump further
-  /// than is supported by a branch instruction.
-  extern char &BranchRelaxationPassID;
+/// BranchRelaxation - This pass replaces branches that need to jump further
+/// than is supported by a branch instruction.
+extern char &BranchRelaxationPassID;
 
-  /// MachineFunctionPrinterPass - This pass prints out MachineInstr's.
-  extern char &MachineFunctionPrinterPassID;
+/// MachineFunctionPrinterPass - This pass prints out MachineInstr's.
+extern char &MachineFunctionPrinterPassID;
 
-  /// MIRPrintingPass - this pass prints out the LLVM IR using the MIR
-  /// serialization format.
-  extern char &MIRPrintingPassID;
+/// MIRPrintingPass - this pass prints out the LLVM IR using the MIR
+/// serialization format.
+extern char &MIRPrintingPassID;
 
-  /// TailDuplicate - Duplicate blocks with unconditional branches
-  /// into tails of their predecessors.
-  extern char &TailDuplicateID;
+/// TailDuplicate - Duplicate blocks with unconditional branches
+/// into tails of their predecessors.
+extern char &TailDuplicateID;
 
-  /// Duplicate blocks with unconditional branches into tails of their
-  /// predecessors. Variant that works before register allocation.
-  extern char &EarlyTailDuplicateID;
+/// Duplicate blocks with unconditional branches into tails of their
+/// predecessors. Variant that works before register allocation.
+extern char &EarlyTailDuplicateID;
 
-  /// MachineTraceMetrics - This pass computes critical path and CPU resource
-  /// usage in an ensemble of traces.
-  extern char &MachineTraceMetricsID;
+/// MachineTraceMetrics - This pass computes critical path and CPU resource
+/// usage in an ensemble of traces.
+extern char &MachineTraceMetricsID;
 
-  /// EarlyIfConverter - This pass performs if-conversion on SSA form by
-  /// inserting cmov instructions.
-  extern char &EarlyIfConverterID;
+/// EarlyIfConverter - This pass performs if-conversion on SSA form by
+/// inserting cmov instructions.
+extern char &EarlyIfConverterID;
 
-  /// EarlyIfPredicator - This pass performs if-conversion on SSA form by
-  /// predicating if/else block and insert select at the join point.
-  extern char &EarlyIfPredicatorID;
+/// EarlyIfPredicator - This pass performs if-conversion on SSA form by
+/// predicating if/else block and insert select at the join point.
+extern char &EarlyIfPredicatorID;
 
-  /// This pass performs instruction combining using trace metrics to estimate
-  /// critical-path and resource depth.
-  extern char &MachineCombinerID;
+/// This pass performs instruction combining using trace metrics to estimate
+/// critical-path and resource depth.
+extern char &MachineCombinerID;
 
-  /// StackSlotColoring - This pass performs stack coloring and merging.
-  /// It merges disjoint allocas to reduce the stack size.
-  extern char &StackColoringID;
+/// StackSlotColoring - This pass performs stack coloring and merging.
+/// It merges disjoint allocas to reduce the stack size.
+extern char &StackColoringID;
 
-  /// StackFramePrinter - This pass prints the stack frame layout and variable
-  /// mappings.
-  extern char &StackFrameLayoutAnalysisPassID;
+/// StackFramePrinter - This pass prints the stack frame layout and variable
+/// mappings.
+extern char &StackFrameLayoutAnalysisPassID;
 
-  /// IfConverter - This pass performs machine code if conversion.
-  extern char &IfConverterID;
+/// IfConverter - This pass performs machine code if conversion.
+extern char &IfConverterID;
 
-  FunctionPass *createIfConverter(
-      std::function<bool(const MachineFunction &)> Ftor);
-
-  /// MachineBlockPlacement - This pass places basic blocks based on branch
-  /// probabilities.
-  extern char &MachineBlockPlacementID;
-
-  /// MachineBlockPlacementStats - This pass collects statistics about the
-  /// basic block placement using branch probabilities and block frequency
-  /// information.
-  extern char &MachineBlockPlacementStatsID;
-
-  /// GCLowering Pass - Used by gc.root to perform its default lowering
-  /// operations.
-  FunctionPass *createGCLoweringPass();
-
-  /// GCLowering Pass - Used by gc.root to perform its default lowering
-  /// operations.
-  extern char &GCLoweringID;
+FunctionPass *
+createIfConverter(std::function<bool(const MachineFunction &)> Ftor);
+
+/// MachineBlockPlacement - This pass places basic blocks based on branch
+/// probabilities.
+extern char &MachineBlockPlacementID;
+
+/// MachineBlockPlacementStats - This pass collects statistics about the
+/// basic block placement using branch probabilities and block frequency
+/// information.
+extern char &MachineBlockPlacementStatsID;
+
+/// GCLowering Pass - Used by gc.root to perform its default lowering
+/// operations.
+FunctionPass *createGCLoweringPass();
+
+/// GCLowering Pass - Used by gc.root to perform its default lowering
+/// operations.
+extern char &GCLoweringID;
 
-  /// ShadowStackGCLowering - Implements the custom lowering mechanism
-  /// used by the shadow stack GC.  Only runs on functions which opt in to
-  /// the shadow stack collector.
-  FunctionPass *createShadowStackGCLoweringPass();
+/// ShadowStackGCLowering - Implements the custom lowering mechanism
+/// used by the shadow stack GC.  Only runs on functions which opt in to
+/// the shadow stack collector.
+FunctionPass *createShadowStackGCLoweringPass();
 
-  /// ShadowStackGCLowering - Implements the custom lowering mechanism
-  /// used by the shadow stack GC.
-  extern char &ShadowStackGCLoweringID;
+/// ShadowStackGCLowering - Implements the custom lowering mechanism
+/// used by the shadow stack GC.
+extern char &ShadowStackGCLoweringID;
 
-  /// GCMachineCodeAnalysis - Target-independent pass to mark safe points
-  /// in machine code. Must be added very late during code generation, just
-  /// prior to output, and importantly after all CFG transformations (such as
-  /// branch folding).
-  extern char &GCMachineCodeAnalysisID;
+/// GCMachineCodeAnalysis - Target-independent pass to mark safe points
+/// in machine code. Must be added very late during code generation, just
+/// prior to output, and importantly after all CFG transformations (such as
+/// branch folding).
+extern char &GCMachineCodeAnalysisID;
 
-  /// MachineCSE - This pass performs global CSE on machine instructions.
-  extern char &MachineCSEID;
+/// MachineCSE - This pass performs global CSE on machine instructions.
+extern char &MachineCSEID;
 
-  /// MIRCanonicalizer - This pass canonicalizes MIR by renaming vregs
-  /// according to the semantics of the instruction as well as hoists
-  /// code.
-  extern char &MIRCanonicalizerID;
+/// MIRCanonicalizer - This pass canonicalizes MIR by renaming vregs
+/// according to the semantics of the instruction as well as hoists
+/// code.
+extern char &MIRCanonicalizerID;
 
-  /// ImplicitNullChecks - This pass folds null pointer checks into nearby
-  /// memory operations.
-  extern char &ImplicitNullChecksID;
+/// ImplicitNullChecks - This pass folds null pointer checks into nearby
+/// memory operations.
+extern char &ImplicitNullChecksID;
 
-  /// This pass performs loop invariant code motion on machine instructions.
-  extern char &MachineLICMID;
+/// This pass performs loop invariant code motion on machine instructions.
+extern char &MachineLICMID;
 
-  /// This pass performs loop invariant code motion on machine instructions.
-  /// This variant works before register allocation. \see MachineLICMID.
-  extern char &EarlyMachineLICMID;
+/// This pass performs loop invariant code motion on machine instructions.
+/// This variant works before register allocation. \see MachineLICMID.
+extern char &EarlyMachineLICMID;
 
-  /// MachineSinking - This pass performs sinking on machine instructions.
-  extern char &MachineSinkingID;
+/// MachineSinking - This pass performs sinking on machine instructions.
+extern char &MachineSinkingID;
 
-  /// MachineCopyPropagation - This pass performs copy propagation on
-  /// machine instructions.
-  extern char &MachineCopyPropagationID;
+/// MachineCopyPropagation - This pass performs copy propagation on
+/// machine instructions.
+extern char &MachineCopyPropagationID;
 
-  MachineFunctionPass *createMachineCopyPropagationPass(bool UseCopyInstr);
+MachineFunctionPass *createMachineCopyPropagationPass(bool UseCopyInstr);
 
-  /// MachineLateInstrsCleanup - This pass removes redundant identical
-  /// instructions after register allocation and rematerialization.
-  extern char &MachineLateInstrsCleanupID;
+/// MachineLateInstrsCleanup - This pass removes redundant identical
+/// instructions after register allocation and rematerialization.
+extern char &MachineLateInstrsCleanupID;
 
-  /// PeepholeOptimizer - This pass performs peephole optimizations -
-  /// like extension and comparison eliminations.
-  extern char &PeepholeOptimizerID;
+/// PeepholeOptimizer - This pass performs peephole optimizations -
+/// like extension and comparison eliminations.
+extern char &PeepholeOptimizerID;
 
-  /// OptimizePHIs - This pass optimizes machine instruction PHIs
-  /// to take advantage of opportunities created during DAG legalization.
-  extern char &OptimizePHIsID;
+/// OptimizePHIs - This pass optimizes machine instruction PHIs
+/// to take advantage of opportunities created during DAG legalization.
+extern char &OptimizePHIsID;
 
-  /// StackSlotColoring - This pass performs stack slot coloring.
-  extern char &StackSlotColoringID;
+/// StackSlotColoring - This pass performs stack slot coloring.
+extern char &StackSlotColoringID;
 
-  /// This pass lays out funclets contiguously.
-  extern char &FuncletLayoutID;
+/// This pass lays out funclets contiguously.
+extern char &FuncletLayoutID;
 
-  /// This pass inserts the XRay instrumentation sleds if they are supported by
-  /// the target platform.
-  extern char &XRayInstrumentationID;
+/// This pass inserts the XRay instrumentation sleds if they are supported by
+/// the target platform.
+extern char &XRayInstrumentationID;
 
-  /// This pass inserts FEntry calls
-  extern char &FEntryInserterID;
+/// This pass inserts FEntry calls
+extern char &FEntryInserterID;
 
-  /// This pass implements the "patchable-function" attribute.
-  extern char &PatchableFunctionID;
-
-  /// createStackProtectorPass - This pass adds stack protectors to functions.
-  ///
-  FunctionPass *createStackProtectorPass();
-
-  /// createMachineVerifierPass - This pass verifies cenerated machine code
-  /// instructions for correctness.
-  ///
-  FunctionPass *createMachineVerifierPass(const std::string& Banner);
-
-  /// createDwarfEHPass - This pass mulches exception handling code into a form
-  /// adapted to code generation.  Required if using dwarf exception handling.
-  FunctionPass *createDwarfEHPass(CodeGenOptLevel OptLevel);
-
-  /// createWinEHPass - Prepares personality functions used by MSVC on Windows,
-  /// in addition to the Itanium LSDA based personalities.
-  FunctionPass *createWinEHPass(bool DemoteCatchSwitchPHIOnly = false);
-
-  /// createSjLjEHPreparePass - This pass adapts exception handling code to use
-  /// the GCC-style builtin setjmp/longjmp (sjlj) to handling EH control flow.
-  ///
-  FunctionPass *createSjLjEHPreparePass(const TargetMachine *TM);
-
-  /// createWasmEHPass - This pass adapts exception handling code to use
-  /// WebAssembly's exception handling scheme.
-  FunctionPass *createWasmEHPass();
-
-  /// LocalStackSlotAllocation - This pass assigns local frame indices to stack
-  /// slots relative to one another and allocates base registers to access them
-  /// when it is estimated by the target to be out of range of normal frame
-  /// pointer or stack pointer index addressing.
-  extern char &LocalStackSlotAllocationID;
-
-  /// This pass expands pseudo-instructions, reserves registers and adjusts
-  /// machine frame information.
-  extern char &FinalizeISelID;
+/// This pass implements the "patchable-function" attribute.
+extern char &PatchableFunctionID;
+
+/// createStackProtectorPass - This pass adds stack protectors to functions.
+///
+FunctionPass *createStackProtectorPass();
+
+/// createMachineVerifierPass - This pass verifies cenerated machine code
+/// instructions for correctness.
+///
+FunctionPass *createMachineVerifierPass(const std::string &Banner);
+
+/// createDwarfEHPass - This pass mulches exception handling code into a form
+/// adapted to code generation.  Required if using dwarf exception handling.
+FunctionPass *createDwarfEHPass(CodeGenOptLevel OptLevel);
+
+/// createWinEHPass - Prepares personality functions used by MSVC on Windows,
+/// in addition to the Itanium LSDA based personalities.
+FunctionPass *createWinEHPass(bool DemoteCatchSwitchPHIOnly = false);
+
+/// createSjLjEHPreparePass - This pass adapts exception handling code to use
+/// the GCC-style builtin setjmp/longjmp (sjlj) to handling EH control flow.
+///
+FunctionPass *createSjLjEHPreparePass(const TargetMachine *TM);
+
+/// createWasmEHPass - This pass adapts exception handling code to use
+/// WebAssembly's exception handling scheme.
+FunctionPass *createWasmEHPass();
+
+/// LocalStackSlotAllocation - This pass assigns local frame indices to stack
+/// slots relative to one another and allocates base registers to access them
+/// when it is estimated by the target to be out of range of normal frame
+/// pointer or stack pointer index addressing.
+extern char &LocalStackSlotAllocationID;
+
+/// This pass expands pseudo-instructions, reserves registers and adjusts
+/// machine frame information.
+extern char &FinalizeISelID;
 
-  /// UnpackMachineBundles - This pass unpack machine instruction bundles.
-  extern char &UnpackMachineBundlesID;
+/// UnpackMachineBundles - This pass unpack machine instruction bundles.
+extern char &UnpackMachineBundlesID;
 
-  FunctionPass *
-  createUnpackMachineBundles(std::function<bool(const MachineFunction &)> Ftor);
+FunctionPass *
+createUnpackMachineBundles(std::function<bool(const MachineFunction &)> Ftor);
 
-  /// FinalizeMachineBundles - This pass finalize machine instruction
-  /// bundles (created earlier, e.g. during pre-RA scheduling).
-  extern char &FinalizeMachineBundlesID;
+/// FinalizeMachineBundles - This pass finalize machine instruction
+/// bundles (created earlier, e.g. during pre-RA scheduling).
+extern char &FinalizeMachineBundlesID;
 
-  /// StackMapLiveness - This pass analyses the register live-out set of
-  /// stackmap/patchpoint intrinsics and attaches the calculated information to
-  /// the intrinsic for later emission to the StackMap.
-  extern char &StackMapLivenessID;
+/// StackMapLiveness - This pass analyses the register live-out set of
+/// stackmap/patchpoint intrinsics and attaches the calculated information to
+/// the intrinsic for later emission to the StackMap.
+extern char &StackMapLivenessID;
 
-  // MachineSanitizerBinaryMetadata - appends/finalizes sanitizer binary
-  // metadata after llvm SanitizerBinaryMetadata pass.
-  extern char &MachineSanitizerBinaryMetadataID;
+// MachineSanitizerBinaryMetadata - appends/finalizes sanitizer binary
+// metadata after llvm SanitizerBinaryMetadata pass.
+extern char &MachineSanitizerBinaryMetadataID;
 
-  /// RemoveRedundantDebugValues pass.
-  extern char &RemoveRedundantDebugValuesID;
+/// RemoveRedundantDebugValues pass.
+extern char &RemoveRedundantDebugValuesID;
 
-  /// MachineCFGPrinter pass.
-  extern char &MachineCFGPrinterID;
+/// MachineCFGPrinter pass.
+extern char &MachineCFGPrinterID;
 
-  /// LiveDebugValues pass
-  extern char &LiveDebugValuesID;
+/// LiveDebugValues pass
+extern char &LiveDebugValuesID;
 
-  /// InterleavedAccess Pass - This pass identifies and matches interleaved
-  /// memory accesses to target specific intrinsics.
-  ///
-  FunctionPass *createInterleavedAccessPass();
+/// InterleavedAccess Pass - This pass identifies and matches interleaved
+/// memory accesses to target specific intrinsics.
+///
+FunctionPass *createInterleavedAccessPass();
 
-  /// InterleavedLoadCombines Pass - This pass identifies interleaved loads and
-  /// combines them into wide loads detectable by InterleavedAccessPass
-  ///
-  FunctionPass *createInterleavedLoadCombinePass();
+/// InterleavedLoadCombines Pass - This pass identifies interleaved loads and
+/// combines them into wide loads detectable by InterleavedAccessPass
+///
+FunctionPass *createInterleavedLoadCombinePass();
 
-  /// LowerEmuTLS - This pass generates __emutls_[vt].xyz variables for all
-  /// TLS variables for the emulated TLS model.
-  ///
-  ModulePass *createLowerEmuTLSPass();
+/// LowerEmuTLS - This pass generates __emutls_[vt].xyz variables for all
+/// TLS variables for the emulated TLS model.
+///
+ModulePass *createLowerEmuTLSPass();
 
-  /// This pass lowers the \@llvm.load.relative and \@llvm.objc.* intrinsics to
-  /// instructions.  This is unsafe to do earlier because a pass may combine the
-  /// constant initializer into the load, which may result in an overflowing
-  /// evaluation.
-  ModulePass *createPreISelIntrinsicLoweringPass();
+/// This pass lowers the \@llvm.load.relative and \@llvm.objc.* intrinsics to
+/// instructions.  This is unsafe to do earlier because a pass may combine the
+/// constant initializer into the load, which may result in an overflowing
+/// evaluation.
+ModulePass *createPreISelIntrinsicLoweringPass();
 
-  /// GlobalMerge - This pass merges internal (by default) globals into structs
-  /// to enable reuse of a base pointer by indexed addressing modes.
-  /// It can also be configured to focus on size optimizations only.
-  ///
-  Pass *createGlobalMergePass(const TargetMachine *TM, unsigned MaximalOffset,
-                              bool OnlyOptimizeForSize = false,
-                              bool MergeExternalByDefault = false);
+/// GlobalMerge - This pass merges internal (by default) globals into structs
+/// to enable reuse of a base pointer by indexed addressing modes.
+/// It can also be configured to focus on size optimizations only.
+///
+Pass *createGlobalMergePass(const TargetMachine *TM, unsigned MaximalOffset,
+                            bool OnlyOptimizeForSize = false,
+                            bool MergeExternalByDefault = false);
 
-  /// This pass splits the stack into a safe stack and an unsafe stack to
-  /// protect against stack-based overflow vulnerabilities.
-  FunctionPass *createSafeStackPass();
+/// This pass splits the stack into a safe stack and an unsafe stack to
+/// protect against stack-based overflow vulnerabilities.
+FunctionPass *createSafeStackPass();
 
-  /// This pass detects subregister lanes in a virtual register that are used
-  /// independently of other lanes and splits them into separate virtual
-  /// registers.
-  extern char &RenameIndependentSubregsID;
+/// This pass detects subregister lanes in a virtual register that are used
+/// independently of other lanes and splits them into separate virtual
+/// registers.
+extern char &RenameIndependentSubregsID;
 
-  /// This pass is executed POST-RA to collect which physical registers are
-  /// preserved by given machine function.
-  FunctionPass *createRegUsageInfoCollector();
+/// This pass is executed POST-RA to collect which physical registers are
+/// preserved by given machine function.
+FunctionPass *createRegUsageInfoCollector();
 
-  /// Return a MachineFunction pass that identifies call sites
-  /// and propagates register usage information of callee to caller
-  /// if available with PysicalRegisterUsageInfo pass.
-  FunctionPass *createRegUsageInfoPropPass();
+/// Return a MachineFunction pass that identifies call sites
+/// and propagates register usage information of callee to caller
+/// if available with PysicalRegisterUsageInfo pass.
+FunctionPass *createRegUsageInfoPropPass();
 
-  /// This pass performs software pipelining on machine instructions.
-  extern char &MachinePipelinerID;
+/// This pass performs software pipelining on machine instructions.
+extern char &MachinePipelinerID;
 
-  /// This pass frees the memory occupied by the MachineFunction.
-  FunctionPass *createFreeMachineFunctionPass();
+/// This pass frees the memory occupied by the MachineFunction.
+FunctionPass *createFreeMachineFunctionPass();
 
-  /// This pass performs outlining on machine instructions directly before
-  /// printing assembly.
-  ModulePass *createMachineOutlinerPass(bool RunOnAllFunctions = true);
+/// This pass performs outlining on machine instructions directly before
+/// printing assembly.
+ModulePass *createMachineOutlinerPass(bool RunOnAllFunctions = true);
 
-  /// This pass expands the reduction intrinsics into sequences of shuffles.
-  FunctionPass *createExpandReductionsPass();
+/// This pass expands the reduction intrinsics into sequences of shuffles.
+FunctionPass *createExpandReductionsPass();
 
-  // This pass replaces intrinsics operating on vector operands with calls to
-  // the corresponding function in a vector library (e.g., SVML, libmvec).
-  FunctionPass *createReplaceWithVeclibLegacyPass();
+// This pass replaces intrinsics operating on vector operands with calls to
+// the corresponding function in a vector library (e.g., SVML, libmvec).
+FunctionPass *createReplaceWithVeclibLegacyPass();
 
-  /// This pass expands the vector predication intrinsics into unpredicated
-  /// instructions with selects or just the explicit vector length into the
-  /// predicate mask.
-  FunctionPass *createExpandVectorPredicationPass();
+/// This pass expands the vector predication intrinsics into unpredicated
+/// instructions with selects or just the explicit vector length into the
+/// predicate mask.
+FunctionPass *createExpandVectorPredicationPass();
 
-  // Expands large div/rem instructions.
-  FunctionPass *createExpandLargeDivRemPass();
+// Expands large div/rem instructions.
+FunctionPass *createExpandLargeDivRemPass();
 
-  // Expands large div/rem instructions.
-  FunctionPass *createExpandLargeFpConvertPass();
+// Expands large div/rem instructions.
+FunctionPass *createExpandLargeFpConvertPass();
 
-  // This pass expands memcmp() to load/stores.
-  FunctionPass *createExpandMemCmpLegacyPass();
+// This pass expands memcmp() to load/stores.
+FunctionPass *createExpandMemCmpLegacyPass();
 
-  /// Creates Break False Dependencies pass. \see BreakFalseDeps.cpp
-  FunctionPass *createBreakFalseDeps();
+/// Creates Break False Dependencies pass. \see BreakFalseDeps.cpp
+FunctionPass *createBreakFalseDeps();
 
-  // This pass expands indirectbr instructions.
-  FunctionPass *createIndirectBrExpandPass();
+// This pass expands indirectbr instructions.
+FunctionPass *createIndirectBrExpandPass();
 
-  /// Creates CFI Fixup pass. \see CFIFixup.cpp
-  FunctionPass *createCFIFixup();
+/// Creates CFI Fixup pass. \see CFIFixup.cpp
+FunctionPass *createCFIFixup();
 
-  /// Creates CFI Instruction Inserter pass. \see CFIInstrInserter.cpp
-  FunctionPass *createCFIInstrInserter();
+/// Creates CFI Instruction Inserter pass. \see CFIInstrInserter.cpp
+FunctionPass *createCFIInstrInserter();
 
-  /// Creates CFGuard longjmp target identification pass.
-  /// \see CFGuardLongjmp.cpp
-  FunctionPass *createCFGuardLongjmpPass();
+/// Creates CFGuard longjmp target identification pass.
+/// \see CFGuardLongjmp.cpp
+FunctionPass *createCFGuardLongjmpPass();
 
-  /// Creates EHContGuard catchret target identification pass.
-  /// \see EHContGuardCatchret.cpp
-  FunctionPass *createEHContGuardCatchretPass();
+/// Creates EHContGuard catchret target identification pass.
+/// \see EHContGuardCatchret.cpp
+FunctionPass *createEHContGuardCatchretPass();
 
-  /// Create Hardware Loop pass. \see HardwareLoops.cpp
-  FunctionPass *createHardwareLoopsLegacyPass();
+/// Create Hardware Loop pass. \see HardwareLoops.cpp
+FunctionPass *createHardwareLoopsLegacyPass();
 
-  /// This pass inserts pseudo probe annotation for callsite profiling.
-  FunctionPass *createPseudoProbeInserter();
+/// This pass inserts pseudo probe annotation for callsite profiling.
+FunctionPass *createPseudoProbeInserter();
 
-  /// Create IR Type Promotion pass. \see TypePromotion.cpp
-  FunctionPass *createTypePromotionLegacyPass();
+/// Create IR Type Promotion pass. \see TypePromotion.cpp
+FunctionPass *createTypePromotionLegacyPass();
 
-  /// Add Flow Sensitive Discriminators. PassNum specifies the
-  /// sequence number of this pass (starting from 1).
-  FunctionPass *
-  createMIRAddFSDiscriminatorsPass(sampleprof::FSDiscriminatorPass P);
+/// Add Flow Sensitive Discriminators. PassNum specifies the
+/// sequence number of this pass (starting from 1).
+FunctionPass *
+createMIRAddFSDiscriminatorsPass(sampleprof::FSDiscriminatorPass P);
 
-  /// Read Flow Sensitive Profile.
-  FunctionPass *
-  createMIRProfileLoaderPass(std::string File, std::string RemappingFile,
-                             sampleprof::FSDiscriminatorPass P,
-                             IntrusiveRefCntPtr<vfs::FileSystem> FS);
+/// Read Flow Sensitive Profile.
+FunctionPass *
+createMIRProfileLoaderPass(std::string File, std::string RemappingFile,
+                           sampleprof::FSDiscriminatorPass P,
+                           IntrusiveRefCntPtr<vfs::FileSystem> FS);
 
-  /// Creates MIR Debugify pass. \see MachineDebugify.cpp
-  ModulePass *createDebugifyMachineModulePass();
+/// Creates MIR Debugify pass. \see MachineDebugify.cpp
+ModulePass *createDebugifyMachineModulePass();
 
-  /// Creates MIR Strip Debug pass. \see MachineStripDebug.cpp
-  /// If OnlyDebugified is true then it will only strip debug info if it was
-  /// added by a Debugify pass. The module will be left unchanged if the debug
-  /// info was generated by another source such as clang.
-  ModulePass *createStripDebugMachineModulePass(bool OnlyDebugified);
+/// Creates MIR Strip Debug pass. \see MachineStripDebug.cpp
+/// If OnlyDebugified is true then it will only strip debug info if it was
+/// added by a Debugify pass. The module will be left unchanged if the debug
+/// info was generated by another source such as clang.
+ModulePass *createStripDebugMachineModulePass(bool OnlyDebugified);
 
-  /// Creates MIR Check Debug pass. \see MachineCheckDebugify.cpp
-  ModulePass *createCheckDebugMachineModulePass();
+/// Creates MIR Check Debug pass. \see MachineCheckDebugify.cpp
+ModulePass *createCheckDebugMachineModulePass();
 
-  /// The pass fixups statepoint machine instruction to replace usage of
-  /// caller saved registers with stack slots.
-  extern char &FixupStatepointCallerSavedID;
+/// The pass fixups statepoint machine instruction to replace usage of
+/// caller saved registers with stack slots.
+extern char &FixupStatepointCallerSavedID;
 
-  /// The pass transforms load/store <256 x i32> to AMX load/store intrinsics
-  /// or split the data to two <128 x i32>.
-  FunctionPass *createX86LowerAMXTypePass();
+/// The pass transforms load/store <256 x i32> to AMX load/store intrinsics
+/// or split the data to two <128 x i32>.
+FunctionPass *createX86LowerAMXTypePass();
 
-  /// The pass transforms amx intrinsics to scalar operation if the function has
-  /// optnone attribute or it is O0.
-  FunctionPass *createX86LowerAMXIntrinsicsPass();
+/// The pass transforms amx intrinsics to scalar operation if the function has
+/// optnone attribute or it is O0.
+FunctionPass *createX86LowerAMXIntrinsicsPass();
 
-  /// When learning an eviction policy, extract score(reward) information,
-  /// otherwise this does nothing
-  FunctionPass *createRegAllocScoringPass();
+/// When learning an eviction policy, extract score(reward) information,
+/// otherwise this does nothing
+FunctionPass *createRegAllocScoringPass();
 
-  /// JMC instrument pass.
-  ModulePass *createJMCInstrumenterPass();
+/// JMC instrument pass.
+ModulePass *createJMCInstrumenterPass();
 
-  /// This pass converts conditional moves to conditional jumps when profitable.
-  FunctionPass *createSelectOptimizePass();
+/// This pass converts conditional moves to conditional jumps when profitable.
+FunctionPass *createSelectOptimizePass();
 
-  FunctionPass *createCallBrPass();
+FunctionPass *createCallBrPass();
 
-  /// Lowers KCFI operand bundles for indirect calls.
-  FunctionPass *createKCFIPass();
+/// Lowers KCFI operand bundles for indirect calls.
+FunctionPass *createKCFIPass();
 } // End llvm namespace
 
 #endif

@Ris-Bali changed the title from "[AtomicExpand] New PM support" to "[CodeGen] Port AtomicExpand to new PM" on Nov 3, 2023
@Ris-Bali changed the title from "[CodeGen] Port AtomicExpand to new PM" to "[CodeGen] Port AtomicExpand to new Pass Manager" on Nov 4, 2023
@boomanaiden154
Contributor

@Ris-Bali Are you still working on this?

@Ris-Bali
Contributor Author

Ris-Bali commented Dec 1, 2023

Yes I am!!

Two review threads on llvm/include/llvm/CodeGen/AtomicExpand.h (outdated, resolved)
@llvmbot
Collaborator

llvmbot commented Jan 14, 2024

@llvm/pr-subscribers-backend-systemz
@llvm/pr-subscribers-backend-powerpc
@llvm/pr-subscribers-clang
@llvm/pr-subscribers-backend-msp430
@llvm/pr-subscribers-backend-amdgpu
@llvm/pr-subscribers-backend-m68k
@llvm/pr-subscribers-backend-sparc
@llvm/pr-subscribers-backend-risc-v
@llvm/pr-subscribers-backend-x86

@llvm/pr-subscribers-backend-loongarch

Author: Rishabh Bali (Ris-Bali)

Changes

Port the AtomicExpand pass to the new Pass Manager.
Fixes #64559


Patch is 66.81 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/71220.diff

71 Files Affected:

  • (added) llvm/include/llvm/CodeGen/ExpandAtomic.h (+30)
  • (modified) llvm/include/llvm/CodeGen/MachinePassRegistry.def (+1-1)
  • (modified) llvm/include/llvm/CodeGen/Passes.h (+3-3)
  • (modified) llvm/include/llvm/CodeGen/TargetSubtargetInfo.h (+1-1)
  • (modified) llvm/include/llvm/InitializePasses.h (+1-1)
  • (modified) llvm/lib/CodeGen/CMakeLists.txt (+1-1)
  • (modified) llvm/lib/CodeGen/CodeGen.cpp (+1-1)
  • (renamed) llvm/lib/CodeGen/ExpandAtomicPass.cpp (+92-57)
  • (modified) llvm/lib/CodeGen/TargetSubtargetInfo.cpp (+1-1)
  • (modified) llvm/lib/Passes/PassBuilder.cpp (+2-1)
  • (modified) llvm/lib/Passes/PassRegistry.def (+1)
  • (modified) llvm/lib/Target/AArch64/AArch64TargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/ARM/ARMTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/CSKY/CSKYTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/LoongArch/LoongArchTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/M68k/M68kTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/Mips/MipsTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/NVPTX/NVPTXTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/PowerPC/PPCExpandAtomicPseudoInsts.cpp (+1-1)
  • (modified) llvm/lib/Target/PowerPC/PPCTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/RISCV/RISCVTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/Sparc/SparcTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/VE/VETargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/WebAssembly/WebAssemblySubtarget.cpp (+1-1)
  • (modified) llvm/lib/Target/WebAssembly/WebAssemblySubtarget.h (+1-1)
  • (modified) llvm/lib/Target/WebAssembly/WebAssemblyTargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/X86/X86TargetMachine.cpp (+1-1)
  • (modified) llvm/lib/Target/XCore/XCoreTargetMachine.cpp (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/idemponent-atomics.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/AArch64/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/AArch64/expand-atomicrmw-xchg-fp.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AArch64/pcsections.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-i16-system.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-i16.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-i8-system.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-i8.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-rmw-fadd-flat-specialization.ll (+4-4)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-rmw-fadd.ll (+6-6)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-rmw-fmax.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-rmw-fmin.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-rmw-fsub.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-rmw-nand.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/expand-atomic-simplify-cfg-CAS-block.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/AMDGPU/unaligned-atomic.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/ARM/atomic-expansion-v7.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/ARM/atomic-expansion-v8.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/ARM/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/ARM/cmpxchg-weak.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/Hexagon/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/LoongArch/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/LoongArch/load-store-atomic.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/Mips/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/PowerPC/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/PowerPC/cfence-double.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/PowerPC/cfence-float.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/PowerPC/cmpxchg.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/PowerPC/issue55983.ll (+2-2)
  • (modified) llvm/test/Transforms/AtomicExpand/RISCV/atomicrmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/SPARC/libcalls.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/SPARC/partword.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/X86/expand-atomic-libcall.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/X86/expand-atomic-non-integer.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/X86/expand-atomic-rmw-fp.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/X86/expand-atomic-rmw-initial-load.ll (+1-1)
  • (modified) llvm/test/Transforms/AtomicExpand/X86/expand-atomic-xchg-fp.ll (+1-1)
  • (modified) llvm/tools/opt/opt.cpp (-2)
  • (modified) llvm/utils/gn/secondary/llvm/lib/CodeGen/BUILD.gn (+1-1)
diff --git a/llvm/include/llvm/CodeGen/ExpandAtomic.h b/llvm/include/llvm/CodeGen/ExpandAtomic.h
new file mode 100644
index 00000000000000..4ba49f8886ca94
--- /dev/null
+++ b/llvm/include/llvm/CodeGen/ExpandAtomic.h
@@ -0,0 +1,30 @@
+//===-- ExpandAtomic.h - Expand Atomic Instructions -------------*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_EXPANDATOMIC_H
+#define LLVM_CODEGEN_EXPANDATOMIC_H
+
+#include "llvm/IR/PassManager.h"
+
+namespace llvm {
+
+class Function;
+class TargetMachine;
+
+class ExpandAtomicPass : public PassInfoMixin<ExpandAtomicPass> {
+private:
+  const TargetMachine *TM;
+
+public:
+  ExpandAtomicPass(const TargetMachine *TM) : TM(TM) {}
+  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM);
+};
+
+} // end namespace llvm
+
+#endif // LLVM_CODEGEN_EXPANDATOMIC_H
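
A brief structural note: the class above is only the new-PM half. The createExpandAtomicLegacyPass() entry point renamed in Passes.h keeps the legacy pipeline working, and such ports conventionally pair the new-PM class with a thin FunctionPass wrapper along the following lines. This is a hypothetical sketch; expandAtomicsImpl() is a stand-in for the shared expansion logic, which in this patch lives in llvm/lib/CodeGen/ExpandAtomicPass.cpp.

// Hypothetical sketch (not part of the patch as shown): the legacy wrapper
// that typically accompanies such a port so the old pass-manager pipeline
// keeps working. expandAtomicsImpl() is a stand-in for the shared logic.
#include "llvm/CodeGen/TargetPassConfig.h"
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
#include "llvm/Target/TargetMachine.h"

using namespace llvm;

// Hypothetical shared helper; both entry points would call something like it.
bool expandAtomicsImpl(Function &F, const TargetMachine *TM);

namespace {
class ExpandAtomicLegacy : public FunctionPass {
public:
  static char ID;
  ExpandAtomicLegacy() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    // Legacy passes recover the TargetMachine through TargetPassConfig
    // instead of taking it as a constructor argument.
    auto *TPC = getAnalysisIfAvailable<TargetPassConfig>();
    if (!TPC)
      return false;
    const TargetMachine *TM = &TPC->getTM<TargetMachine>();
    return expandAtomicsImpl(F, TM);
  }
};
} // namespace

char ExpandAtomicLegacy::ID = 0;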
diff --git a/llvm/include/llvm/CodeGen/MachinePassRegistry.def b/llvm/include/llvm/CodeGen/MachinePassRegistry.def
index a29269644ea1dc..a52e7d04912f31 100644
--- a/llvm/include/llvm/CodeGen/MachinePassRegistry.def
+++ b/llvm/include/llvm/CodeGen/MachinePassRegistry.def
@@ -116,7 +116,7 @@ DUMMY_FUNCTION_PASS("wasmehprepare", WasmEHPass, ())
 DUMMY_FUNCTION_PASS("codegenprepare", CodeGenPreparePass, ())
 DUMMY_FUNCTION_PASS("safe-stack", SafeStackPass, ())
 DUMMY_FUNCTION_PASS("stack-protector", StackProtectorPass, ())
-DUMMY_FUNCTION_PASS("atomic-expand", AtomicExpandPass, ())
+DUMMY_FUNCTION_PASS("expand-atomic", ExpandAtomicPass, ())
 DUMMY_FUNCTION_PASS("interleaved-access", InterleavedAccessPass, ())
 DUMMY_FUNCTION_PASS("indirectbr-expand", IndirectBrExpandPass, ())
 DUMMY_FUNCTION_PASS("cfguard-dispatch", CFGuardDispatchPass, ())
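
One usage note: the string registered in this .def file is the name the new pass-manager pipeline parser exposes, so with this revision the pass would be reachable as expand-atomic from a -passes= pipeline rather than through the legacy -atomic-expand flag. The matching PassRegistry.def change is not shown in the truncated patch; a hypothetical registration line, following the form used by other TargetMachine-parameterized function passes, might look like this:

// Hypothetical (the actual PassRegistry.def hunk is truncated out of this
// page): register the pass under its new-PM name, constructed with the
// TargetMachine pointer that PassBuilder holds.
FUNCTION_PASS("expand-atomic", ExpandAtomicPass(TM))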
diff --git a/llvm/include/llvm/CodeGen/Passes.h b/llvm/include/llvm/CodeGen/Passes.h
index 712048017bca1a..b9f9ce936dbb7f 100644
--- a/llvm/include/llvm/CodeGen/Passes.h
+++ b/llvm/include/llvm/CodeGen/Passes.h
@@ -41,10 +41,10 @@ class FileSystem;
 // List of target independent CodeGen pass IDs.
 namespace llvm {
 
-  /// AtomicExpandPass - At IR level this pass replace atomic instructions with
+  /// ExpandAtomicPass - At IR level this pass replace atomic instructions with
   /// __atomic_* library calls, or target specific instruction which implement the
   /// same semantics in a way which better fits the target backend.
-  FunctionPass *createAtomicExpandPass();
+  FunctionPass *createExpandAtomicLegacyPass();
 
   /// createUnreachableBlockEliminationPass - The LLVM code generator does not
   /// work well with unreachable basic blocks (what live ranges make sense for a
@@ -103,7 +103,7 @@ namespace llvm {
 
   /// AtomicExpandID -- Lowers atomic operations in terms of either cmpxchg
   /// load-linked/store-conditional loops.
-  extern char &AtomicExpandID;
+  extern char &ExpandAtomicID;
 
   /// MachineLoopInfo - This pass is a loop analysis pass.
   extern char &MachineLoopInfoID;
diff --git a/llvm/include/llvm/CodeGen/TargetSubtargetInfo.h b/llvm/include/llvm/CodeGen/TargetSubtargetInfo.h
index 55ef95c2854319..da1fd6737b796f 100644
--- a/llvm/include/llvm/CodeGen/TargetSubtargetInfo.h
+++ b/llvm/include/llvm/CodeGen/TargetSubtargetInfo.h
@@ -215,7 +215,7 @@ class TargetSubtargetInfo : public MCSubtargetInfo {
   virtual bool enablePostRAMachineScheduler() const;
 
   /// True if the subtarget should run the atomic expansion pass.
-  virtual bool enableAtomicExpand() const;
+  virtual bool enableExpandAtomic() const;
 
   /// True if the subtarget should run the indirectbr expansion pass.
   virtual bool enableIndirectBrExpand() const;
diff --git a/llvm/include/llvm/InitializePasses.h b/llvm/include/llvm/InitializePasses.h
index fafae8b5ecd7a7..04365af3d1a74e 100644
--- a/llvm/include/llvm/InitializePasses.h
+++ b/llvm/include/llvm/InitializePasses.h
@@ -54,7 +54,7 @@ void initializeAlwaysInlinerLegacyPassPass(PassRegistry&);
 void initializeAssignmentTrackingAnalysisPass(PassRegistry &);
 void initializeAssumeBuilderPassLegacyPassPass(PassRegistry &);
 void initializeAssumptionCacheTrackerPass(PassRegistry&);
-void initializeAtomicExpandPass(PassRegistry&);
+void initializeExpandAtomicLegacyPass(PassRegistry &);
 void initializeBasicBlockPathCloningPass(PassRegistry &);
 void initializeBasicBlockSectionsProfileReaderPass(PassRegistry &);
 void initializeBasicBlockSectionsPass(PassRegistry &);
diff --git a/llvm/lib/CodeGen/CMakeLists.txt b/llvm/lib/CodeGen/CMakeLists.txt
index df2d1831ee5fdb..12bd12bc49caeb 100644
--- a/llvm/lib/CodeGen/CMakeLists.txt
+++ b/llvm/lib/CodeGen/CMakeLists.txt
@@ -40,7 +40,7 @@ add_llvm_component_library(LLVMCodeGen
   AllocationOrder.cpp
   Analysis.cpp
   AssignmentTrackingAnalysis.cpp
-  AtomicExpandPass.cpp
+  ExpandAtomicPass.cpp
   BasicTargetTransformInfo.cpp
   BranchFolding.cpp
   BranchRelaxation.cpp
diff --git a/llvm/lib/CodeGen/CodeGen.cpp b/llvm/lib/CodeGen/CodeGen.cpp
index 79a95ee0d747a1..eacd8b36b610c8 100644
--- a/llvm/lib/CodeGen/CodeGen.cpp
+++ b/llvm/lib/CodeGen/CodeGen.cpp
@@ -19,7 +19,7 @@ using namespace llvm;
 /// initializeCodeGen - Initialize all passes linked into the CodeGen library.
 void llvm::initializeCodeGen(PassRegistry &Registry) {
   initializeAssignmentTrackingAnalysisPass(Registry);
-  initializeAtomicExpandPass(Registry);
+  initializeExpandAtomicLegacyPass(Registry);
   initializeBasicBlockPathCloningPass(Registry);
   initializeBasicBlockSectionsPass(Registry);
   initializeBranchFolderPassPass(Registry);
diff --git a/llvm/lib/CodeGen/AtomicExpandPass.cpp b/llvm/lib/CodeGen/ExpandAtomicPass.cpp
similarity index 95%
rename from llvm/lib/CodeGen/AtomicExpandPass.cpp
rename to llvm/lib/CodeGen/ExpandAtomicPass.cpp
index ccf3e9ec649210..5f8e069bafc7a6 100644
--- a/llvm/lib/CodeGen/AtomicExpandPass.cpp
+++ b/llvm/lib/CodeGen/ExpandAtomicPass.cpp
@@ -1,4 +1,4 @@
-//===- AtomicExpandPass.cpp - Expand atomic instructions ------------------===//
+//===- ExpandAtomicPass.cpp - Expand atomic instructions ------------------===//
 //
 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
 // See https://llvm.org/LICENSE.txt for license information.
@@ -19,6 +19,7 @@
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Analysis/InstSimplifyFolder.h"
 #include "llvm/Analysis/OptimizationRemarkEmitter.h"
+#include "llvm/CodeGen/ExpandAtomic.h"
 #include "llvm/CodeGen/AtomicExpandUtils.h"
 #include "llvm/CodeGen/RuntimeLibcalls.h"
 #include "llvm/CodeGen/TargetLowering.h"
@@ -55,23 +56,14 @@
 
 using namespace llvm;
 
-#define DEBUG_TYPE "atomic-expand"
+#define DEBUG_TYPE "expand-atomic"
 
 namespace {
 
-class AtomicExpand : public FunctionPass {
+class ExpandAtomicImpl {
   const TargetLowering *TLI = nullptr;
   const DataLayout *DL = nullptr;
 
-public:
-  static char ID; // Pass identification, replacement for typeid
-
-  AtomicExpand() : FunctionPass(ID) {
-    initializeAtomicExpandPass(*PassRegistry::getPassRegistry());
-  }
-
-  bool runOnFunction(Function &F) override;
-
 private:
   bool bracketInstWithFences(Instruction *I, AtomicOrdering Order);
   IntegerType *getCorrespondingIntegerType(Type *T, const DataLayout &DL);
@@ -124,6 +116,20 @@ class AtomicExpand : public FunctionPass {
   friend bool
   llvm::expandAtomicRMWToCmpXchg(AtomicRMWInst *AI,
                                  CreateCmpXchgInstFun CreateCmpXchg);
+
+public:
+  bool run(Function &F, const TargetMachine *TM);
+};
+
+class ExpandAtomicLegacy : public FunctionPass {
+public:
+  static char ID; // Pass identification, replacement for typeid
+
+  ExpandAtomicLegacy() : FunctionPass(ID) {
+    initializeExpandAtomicLegacyPass(*PassRegistry::getPassRegistry());
+  }
+
+  bool runOnFunction(Function &F) override;
 };
 
 // IRBuilder to be used for replacement atomic instructions.
@@ -138,14 +144,15 @@ struct ReplacementIRBuilder : IRBuilder<InstSimplifyFolder> {
 
 } // end anonymous namespace
 
-char AtomicExpand::ID = 0;
+char ExpandAtomicLegacy::ID = 0;
 
-char &llvm::AtomicExpandID = AtomicExpand::ID;
+char &llvm::ExpandAtomicID = ExpandAtomicLegacy::ID;
 
-INITIALIZE_PASS(AtomicExpand, DEBUG_TYPE, "Expand Atomic instructions", false,
-                false)
-
-FunctionPass *llvm::createAtomicExpandPass() { return new AtomicExpand(); }
+INITIALIZE_PASS_BEGIN(ExpandAtomicLegacy, DEBUG_TYPE,
+                      "Expand Atomic instructions", false, false)
+INITIALIZE_PASS_DEPENDENCY(TargetPassConfig)
+INITIALIZE_PASS_END(ExpandAtomicLegacy, DEBUG_TYPE,
+                    "Expand Atomic instructions", false, false)
 
 // Helper functions to retrieve the size of atomic instructions.
 static unsigned getAtomicOpSize(LoadInst *LI) {
@@ -179,14 +186,9 @@ static bool atomicSizeSupported(const TargetLowering *TLI, Inst *I) {
          Size <= TLI->getMaxAtomicSizeInBitsSupported() / 8;
 }
 
-bool AtomicExpand::runOnFunction(Function &F) {
-  auto *TPC = getAnalysisIfAvailable<TargetPassConfig>();
-  if (!TPC)
-    return false;
-
-  auto &TM = TPC->getTM<TargetMachine>();
-  const auto *Subtarget = TM.getSubtargetImpl(F);
-  if (!Subtarget->enableAtomicExpand())
+bool ExpandAtomicImpl::run(Function &F, const TargetMachine *TM) {
+  const auto *Subtarget = TM->getSubtargetImpl(F);
+  if (!Subtarget->enableExpandAtomic())
     return false;
   TLI = Subtarget->getTargetLowering();
   DL = &F.getParent()->getDataLayout();
@@ -340,7 +342,39 @@ bool AtomicExpand::runOnFunction(Function &F) {
   return MadeChange;
 }
 
-bool AtomicExpand::bracketInstWithFences(Instruction *I, AtomicOrdering Order) {
+bool ExpandAtomicLegacy::runOnFunction(Function &F) {
+  if (skipFunction(F))
+    return false;
+
+  auto *TPC = getAnalysisIfAvailable<TargetPassConfig>();
+  if (!TPC)
+    return false;
+
+  auto *TM = &TPC->getTM<TargetMachine>();
+
+  ExpandAtomicImpl AE;
+  return AE.run(F, TM);
+}
+
+FunctionPass *llvm::createExpandAtomicLegacyPass() {
+  return new ExpandAtomicLegacy();
+}
+
+PreservedAnalyses ExpandAtomicPass::run(Function &F,
+                                        FunctionAnalysisManager &AM) {
+  ExpandAtomicImpl AE;
+
+  bool Changed = AE.run(F, TM);
+  if (!Changed)
+    return PreservedAnalyses::all();
+
+  PreservedAnalyses PA;
+  PA.preserveSet<CFGAnalyses>();
+  return PA;
+}
+
+bool ExpandAtomicImpl::bracketInstWithFences(Instruction *I,
+                                             AtomicOrdering Order) {
   ReplacementIRBuilder Builder(I, *DL);
 
   auto LeadingFence = TLI->emitLeadingFence(Builder, I, Order);
@@ -355,8 +389,8 @@ bool AtomicExpand::bracketInstWithFences(Instruction *I, AtomicOrdering Order) {
 }
 
 /// Get the iX type with the same bitwidth as T.
-IntegerType *AtomicExpand::getCorrespondingIntegerType(Type *T,
-                                                       const DataLayout &DL) {
+IntegerType *
+ExpandAtomicImpl::getCorrespondingIntegerType(Type *T, const DataLayout &DL) {
   EVT VT = TLI->getMemValueType(DL, T);
   unsigned BitWidth = VT.getStoreSizeInBits();
   assert(BitWidth == VT.getSizeInBits() && "must be a power of two");
@@ -366,7 +400,7 @@ IntegerType *AtomicExpand::getCorrespondingIntegerType(Type *T,
 /// Convert an atomic load of a non-integral type to an integer load of the
 /// equivalent bitwidth.  See the function comment on
 /// convertAtomicStoreToIntegerType for background.
-LoadInst *AtomicExpand::convertAtomicLoadToIntegerType(LoadInst *LI) {
+LoadInst *ExpandAtomicImpl::convertAtomicLoadToIntegerType(LoadInst *LI) {
   auto *M = LI->getModule();
   Type *NewTy = getCorrespondingIntegerType(LI->getType(), M->getDataLayout());
 
@@ -387,7 +421,7 @@ LoadInst *AtomicExpand::convertAtomicLoadToIntegerType(LoadInst *LI) {
 }
 
 AtomicRMWInst *
-AtomicExpand::convertAtomicXchgToIntegerType(AtomicRMWInst *RMWI) {
+ExpandAtomicImpl::convertAtomicXchgToIntegerType(AtomicRMWInst *RMWI) {
   auto *M = RMWI->getModule();
   Type *NewTy =
       getCorrespondingIntegerType(RMWI->getType(), M->getDataLayout());
@@ -414,7 +448,7 @@ AtomicExpand::convertAtomicXchgToIntegerType(AtomicRMWInst *RMWI) {
   return NewRMWI;
 }
 
-bool AtomicExpand::tryExpandAtomicLoad(LoadInst *LI) {
+bool ExpandAtomicImpl::tryExpandAtomicLoad(LoadInst *LI) {
   switch (TLI->shouldExpandAtomicLoadInIR(LI)) {
   case TargetLoweringBase::AtomicExpansionKind::None:
     return false;
@@ -436,7 +470,7 @@ bool AtomicExpand::tryExpandAtomicLoad(LoadInst *LI) {
   }
 }
 
-bool AtomicExpand::tryExpandAtomicStore(StoreInst *SI) {
+bool ExpandAtomicImpl::tryExpandAtomicStore(StoreInst *SI) {
   switch (TLI->shouldExpandAtomicStoreInIR(SI)) {
   case TargetLoweringBase::AtomicExpansionKind::None:
     return false;
@@ -451,7 +485,7 @@ bool AtomicExpand::tryExpandAtomicStore(StoreInst *SI) {
   }
 }
 
-bool AtomicExpand::expandAtomicLoadToLL(LoadInst *LI) {
+bool ExpandAtomicImpl::expandAtomicLoadToLL(LoadInst *LI) {
   ReplacementIRBuilder Builder(LI, *DL);
 
   // On some architectures, load-linked instructions are atomic for larger
@@ -467,7 +501,7 @@ bool AtomicExpand::expandAtomicLoadToLL(LoadInst *LI) {
   return true;
 }
 
-bool AtomicExpand::expandAtomicLoadToCmpXchg(LoadInst *LI) {
+bool ExpandAtomicImpl::expandAtomicLoadToCmpXchg(LoadInst *LI) {
   ReplacementIRBuilder Builder(LI, *DL);
   AtomicOrdering Order = LI->getOrdering();
   if (Order == AtomicOrdering::Unordered)
@@ -496,7 +530,7 @@ bool AtomicExpand::expandAtomicLoadToCmpXchg(LoadInst *LI) {
 /// instruction select from the original atomic store, but as a migration
 /// mechanism, we convert back to the old format which the backends understand.
 /// Each backend will need individual work to recognize the new format.
-StoreInst *AtomicExpand::convertAtomicStoreToIntegerType(StoreInst *SI) {
+StoreInst *ExpandAtomicImpl::convertAtomicStoreToIntegerType(StoreInst *SI) {
   ReplacementIRBuilder Builder(SI, *DL);
   auto *M = SI->getModule();
   Type *NewTy = getCorrespondingIntegerType(SI->getValueOperand()->getType(),
@@ -514,7 +548,7 @@ StoreInst *AtomicExpand::convertAtomicStoreToIntegerType(StoreInst *SI) {
   return NewSI;
 }
 
-void AtomicExpand::expandAtomicStore(StoreInst *SI) {
+void ExpandAtomicImpl::expandAtomicStore(StoreInst *SI) {
   // This function is only called on atomic stores that are too large to be
   // atomic if implemented as a native store. So we replace them by an
   // atomic swap, that can be implemented for example as a ldrex/strex on ARM
@@ -561,7 +595,7 @@ static void createCmpXchgInstFun(IRBuilderBase &Builder, Value *Addr,
     NewLoaded = Builder.CreateBitCast(NewLoaded, OrigTy);
 }
 
-bool AtomicExpand::tryExpandAtomicRMW(AtomicRMWInst *AI) {
+bool ExpandAtomicImpl::tryExpandAtomicRMW(AtomicRMWInst *AI) {
   LLVMContext &Ctx = AI->getModule()->getContext();
   TargetLowering::AtomicExpansionKind Kind = TLI->shouldExpandAtomicRMWInIR(AI);
   switch (Kind) {
@@ -843,7 +877,7 @@ static Value *performMaskedAtomicOp(AtomicRMWInst::BinOp Op,
 /// way as a typical atomicrmw expansion. The only difference here is
 /// that the operation inside of the loop may operate upon only a
 /// part of the value.
-void AtomicExpand::expandPartwordAtomicRMW(
+void ExpandAtomicImpl::expandPartwordAtomicRMW(
     AtomicRMWInst *AI, TargetLoweringBase::AtomicExpansionKind ExpansionKind) {
   AtomicOrdering MemOpOrder = AI->getOrdering();
   SyncScope::ID SSID = AI->getSyncScopeID();
@@ -887,7 +921,7 @@ void AtomicExpand::expandPartwordAtomicRMW(
 }
 
 // Widen the bitwise atomicrmw (or/xor/and) to the minimum supported width.
-AtomicRMWInst *AtomicExpand::widenPartwordAtomicRMW(AtomicRMWInst *AI) {
+AtomicRMWInst *ExpandAtomicImpl::widenPartwordAtomicRMW(AtomicRMWInst *AI) {
   ReplacementIRBuilder Builder(AI, *DL);
   AtomicRMWInst::BinOp Op = AI->getOperation();
 
@@ -922,7 +956,7 @@ AtomicRMWInst *AtomicExpand::widenPartwordAtomicRMW(AtomicRMWInst *AI) {
   return NewAI;
 }
 
-bool AtomicExpand::expandPartwordCmpXchg(AtomicCmpXchgInst *CI) {
+bool ExpandAtomicImpl::expandPartwordCmpXchg(AtomicCmpXchgInst *CI) {
   // The basic idea here is that we're expanding a cmpxchg of a
   // smaller memory size up to a word-sized cmpxchg. To do this, we
   // need to add a retry-loop for strong cmpxchg, so that
@@ -1047,7 +1081,7 @@ bool AtomicExpand::expandPartwordCmpXchg(AtomicCmpXchgInst *CI) {
   return true;
 }
 
-void AtomicExpand::expandAtomicOpToLLSC(
+void ExpandAtomicImpl::expandAtomicOpToLLSC(
     Instruction *I, Type *ResultType, Value *Addr, Align AddrAlign,
     AtomicOrdering MemOpOrder,
     function_ref<Value *(IRBuilderBase &, Value *)> PerformOp) {
@@ -1059,7 +1093,7 @@ void AtomicExpand::expandAtomicOpToLLSC(
   I->eraseFromParent();
 }
 
-void AtomicExpand::expandAtomicRMWToMaskedIntrinsic(AtomicRMWInst *AI) {
+void ExpandAtomicImpl::expandAtomicRMWToMaskedIntrinsic(AtomicRMWInst *AI) {
   ReplacementIRBuilder Builder(AI, *DL);
 
   PartwordMaskValues PMV =
@@ -1085,7 +1119,8 @@ void AtomicExpand::expandAtomicRMWToMaskedIntrinsic(AtomicRMWInst *AI) {
   AI->eraseFromParent();
 }
 
-void AtomicExpand::expandAtomicCmpXchgToMaskedIntrinsic(AtomicCmpXchgInst *CI) {
+void ExpandAtomicImpl::expandAtomicCmpXchgToMaskedIntrinsic(
+    AtomicCmpXchgInst *CI) {
   ReplacementIRBuilder Builder(CI, *DL);
 
   PartwordMaskValues PMV = createMaskInstrs(
@@ -1112,7 +1147,7 @@ void AtomicExpand::expandAtomicCmpXchgToMaskedIntrinsic(AtomicCmpXchgInst *CI) {
   CI->eraseFromParent();
 }
 
-Value *AtomicExpand::insertRMWLLSCLoop(
+Value *ExpandAtomicImpl::insertRMWLLSCLoop(
     IRBuilderBase &Builder, Type *ResultTy, Value *Addr, Align AddrAlign,
     AtomicOrdering MemOpOrder,
     function_ref<Value *(IRBuilderBase &, Value *)> PerformOp) {
@@ -1168,7 +1203,7 @@ Value *AtomicExpand::insertRMWLLSCLoop(
 /// way to represent a pointer cmpxchg so that we can update backends one by
 /// one.
 AtomicCmpXchgInst *
-AtomicExpand::convertCmpXchgToIntegerType(AtomicCmpXchgInst *CI) {
+ExpandAtomicImpl::convertCmpXchgToIntegerType(AtomicCmpXchgInst *CI) {
   auto *M = CI->getModule();
   Type *NewTy = getCorrespondingIntegerType(CI->getCompareOperand()->getType(),
                                             M->getDataLayout());
@@ -1201,7 +1236,7 @@ AtomicExpand::convertCmpXchgToIntegerType(AtomicCmpXchgInst *CI) {
   return NewCI;
 }
 
-bool AtomicExpand::expandAtomicCmpXchg(AtomicCmpXchgInst *CI) {
+bool ExpandAtomicImpl::expandAtomicCmpXchg(AtomicCmpXchgInst *CI) {
   AtomicOrdering SuccessOrder = CI->getSuccessOrdering();
   AtomicOrdering FailureOrder = CI->getFailureOrdering();
   Value *Addr = CI->getPointerOperand();
@@ -1447,7 +1482,7 @@ bool AtomicExpand::expandAtomicCmpXchg(AtomicCmpXchgInst *CI) {
   return true;
 }
 
-bool AtomicExpand::isIdempotentRMW(AtomicRMWInst *RMWI) {
+bool ExpandAtomicImpl::isIdempotentRMW(AtomicRMWInst *RMWI) {
   auto C = dyn_cast<ConstantInt>(RMWI->getValOperand());
   if (!C)
     return false;
@@ -1467,7 +1502,7 @@ bool AtomicExpand::isIdempotentRMW(AtomicRMWInst *RMWI) {
   }
 }
 
-bool AtomicExpand::simplifyIdempotentRMW(AtomicRMWInst *RMWI) {
+bool ExpandAtomicImpl::simplifyIdempotentRMW(AtomicRMWInst *RMWI) {
   if (auto ResultingLoad = TLI->lowerIdempotentRMWIntoFencedLoad(RMWI)) {
     tryExpandAtomicLoad(ResultingLoad);
     return true;
@@ -1475,7 +1510,7 @@ bool AtomicExpand::simplifyIdempotentRMW(AtomicRMWInst *RMWI) {
   return false;
 }
 
-Value *AtomicExpand::insertRMWCmpXchgLoop(
+Value *ExpandAtomicImpl::insertRMWCmpXchgLoop(
     IRBuilderBase &Builder, Type *ResultTy, Value *Addr, Align AddrAlign,
     AtomicOrdering MemOpOrder, SyncScope::ID SSID,
     function_ref<Value *(IRBuilderBase &, Value *)> PerformOp,
@@ -1536,7 +1571,7 @@ Value *AtomicExpand::insertRMWCmpXchgLoop(
   return NewLoaded;
 }
 
-bool AtomicExpand::tryExpandAtomicCmpXchg(AtomicCmpXchgInst *CI) {
+bool ExpandAtomicImpl::tryExpandAtomicCmpXchg(AtomicCmpXchgInst *CI) {
   unsigned MinCASSize = TLI->getMinCmpXchgSizeInBits() / 8;
   unsigned ValueSize = getAtomicOpSize(CI);
 
@@ -1567,7 +1602,7 @@ bool llvm::expandAtomicRMWToCmpXchg(AtomicRMWInst *AI,
 
   // FIXME: If FP exceptions are observable, we should force them off for the
   // loop for the FP atomics.
-  Value *Loaded = AtomicExpand::insertRMWCmpXchgLoop(
+  Value *Loaded = ExpandAtomicImpl::insertRMWCmpXchgLoop(
       Builder, AI->getType(), AI->...
[truncated]
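
The truncation cuts the diff off before llvm/lib/Passes/PassRegistry.def, which the file list above shows gaining a single line. A plausible shape for that registration, assuming the TargetMachine is reachable as TM where PassBuilder.cpp expands the .def file (an assumption, not taken from the visible patch), would be:

// Hypothetical sketch only -- the actual line added to PassRegistry.def is not
// visible in the truncated diff above.
FUNCTION_PASS("expand-atomic", ExpandAtomicPass(TM))

An entry of this shape is what makes the ported pass reachable from the new-PM pipeline syntax (opt -passes=expand-atomic), which is presumably what the one-line RUN-line updates in the llvm/test/Transforms/AtomicExpand tests switch over to.
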

@llvmbot
Collaborator

llvmbot commented Jan 14, 2024

@llvm/pr-subscribers-backend-webassembly

Author: Rishabh Bali (Ris-Bali)

Changes

Port the atomicexpand pass to the new Pass Manager.
Fixes #64559


Patch is 66.81 KiB, truncated to 20.00 KiB (full version: https://github.com/llvm/llvm-project/pull/71220.diff).


@MaskRay MaskRay requested a review from jyknight January 14, 2024 08:21
@MaskRay
Member

MaskRay commented Jan 14, 2024

What's the rationale behind the pass renaming?

@paperchalice
Contributor

The naming convention in PassRegistry.def is inconsistent, and there is no pass naming convention in the LLVM coding standards.
In PassRegistry.def, five pass names contain "expand", and three of them follow the expand-* convention.

@llvmbot llvmbot added the backend:MSP430 and clang (Clang issues not falling into any other category) labels on Jan 14, 2024
@Ris-Bali
Contributor Author

All the tests pass now.

@paperchalice
Contributor

Could you also fix the format issue? Thanks!

@Ris-Bali
Contributor Author

Ris-Bali commented Feb 19, 2024

Should I revert the formatting changes in llvm/include/llvm/CodeGen/Passes.h? Most of the formatting changes are unrelated to this PR, but they were still being flagged by check_code_formatter, so I had to comply.

@paperchalice
Contributor

paperchalice commented Feb 20, 2024

Sorry for the unclear request; just use git clang-format to format the original patch.

@aeubanks
Contributor

in the future it would be nice to split out the renaming into a separate PR

@Ris-Bali
Contributor Author

Ris-Bali commented Feb 20, 2024

The problem is that the formatting check only passes when Passes.h is fully formatted (as in the previous commit); it fails on the latest commit, where I reverted the formatting changes. I originally changed only a single line in that file, but git clang-format is formatting the entire file. It may be a bug, although I am not sure.

@paperchalice
Contributor

You can squash the change here.

@Ris-Bali
Contributor Author

Have squashed the changes.

@arsenm arsenm merged commit fe42e72 into llvm:main Feb 25, 2024
3 of 4 checks passed