
[RISCV] Omit "@plt" in assembly output "call foo@plt" #72467

Merged
merged 1 commit into llvm:main from rv-atplt on Jan 7, 2024

Conversation

@MaskRay (Member) commented on Nov 16, 2023

R_RISCV_CALL/R_RISCV_CALL_PLT distinction is not necessary and
R_RISCV_CALL has been deprecated. Since https://reviews.llvm.org/D132530
`call foo` assembles to R_RISCV_CALL_PLT. The `@plt` suffix is not
useful and can be removed now (matching AArch64 and PowerPC).

GNU assembler assembles `call foo` to RISCV_CALL_PLT since 2022-09
(70f35d72ef04cd23771875c1661c9975044a749c).

Without this patch, unconditionally changing MO_CALL to MO_PLT could
create `jump .L1@plt, a0`, which is invalid in LLVM integrated assembler
and GNU assembler.
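As a minimal illustration of the output change (assuming a scratch file named plt.s; the llvm-mc/llvm-readobj invocation is just one way to inspect the result):

    # plt.s: what the compiler now emits for an external call
    call foo            # previously printed as "call foo@plt"; assembles to R_RISCV_CALL_PLT
    # jump .L1@plt, a0  # the form that would be invalid in both assemblers

    # Inspect the relocation:
    #   llvm-mc -triple=riscv64 -filetype=obj plt.s -o plt.o && llvm-readobj -r plt.o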

@llvmbot (Collaborator) commented on Nov 16, 2023

@llvm/pr-subscribers-llvm-globalisel
@llvm/pr-subscribers-mc

@llvm/pr-subscribers-backend-risc-v

Author: Fangrui Song (MaskRay)

Changes

R_RISCV_CALL/R_RISCV_CALL_PLT distinction is not necessary and
R_RISCV_CALL has been deprecated. Since https://reviews.llvm.org/D132530
call foo assembles to R_RISCV_CALL_PLT. The @plt suffix is not
useful and can be removed now (matching AArch64 and PowerPC).

Without this patch, unconditionally changing MO_CALL to MO_PLT could
create jump .L1@plt, a0, which is invalid in LLVM integrated assembler
and GNU assembler.


Patch is 2.11 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/72467.diff

203 Files Affected:

  • (modified) llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp (+2-3)
  • (modified) llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp (-2)
  • (modified) llvm/test/CodeGen/RISCV/addrspacecast.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/aext-to-sext.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/alloca.ll (+3-3)
  • (modified) llvm/test/CodeGen/RISCV/analyze-branch.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll (+90-90)
  • (modified) llvm/test/CodeGen/RISCV/atomic-load-store.ll (+72-72)
  • (modified) llvm/test/CodeGen/RISCV/atomic-rmw-discard.ll (+9-9)
  • (modified) llvm/test/CodeGen/RISCV/atomic-rmw-sub.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/atomic-rmw.ll (+535-535)
  • (modified) llvm/test/CodeGen/RISCV/atomic-signext.ll (+139-139)
  • (modified) llvm/test/CodeGen/RISCV/atomicrmw-uinc-udec-wrap.ll (+18-18)
  • (modified) llvm/test/CodeGen/RISCV/bf16-promote.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/bfloat-br-fcmp.ll (+34-34)
  • (modified) llvm/test/CodeGen/RISCV/bfloat-convert.ll (+37-37)
  • (modified) llvm/test/CodeGen/RISCV/bfloat-frem.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/bfloat-mem.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/bfloat.ll (+38-38)
  • (modified) llvm/test/CodeGen/RISCV/bittest.ll (+100-100)
  • (modified) llvm/test/CodeGen/RISCV/byval.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/callee-saved-fpr32s.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/callee-saved-fpr64s.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/callee-saved-gprs.ll (+16-16)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-half.ll (+48-48)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+24-24)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32d.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32f-ilp32d-common.ll (+5-5)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-common.ll (+3-3)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-lp64d-common.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-lp64.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-rv32f-ilp32.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-sext-zext.ll (+18-18)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-vector-on-stack.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/calls.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/cm_mvas_mvsa.ll (+24-24)
  • (modified) llvm/test/CodeGen/RISCV/condops.ll (+18-18)
  • (modified) llvm/test/CodeGen/RISCV/copysign-casts.ll (+5-5)
  • (modified) llvm/test/CodeGen/RISCV/ctlz-cttz-ctpop.ll (+30-30)
  • (modified) llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll (+25-25)
  • (modified) llvm/test/CodeGen/RISCV/div-by-constant.ll (+5-5)
  • (modified) llvm/test/CodeGen/RISCV/div.ll (+56-56)
  • (modified) llvm/test/CodeGen/RISCV/double-arith-strict.ll (+48-48)
  • (modified) llvm/test/CodeGen/RISCV/double-arith.ll (+80-80)
  • (modified) llvm/test/CodeGen/RISCV/double-br-fcmp.ll (+68-68)
  • (modified) llvm/test/CodeGen/RISCV/double-calling-conv.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/double-convert-strict.ll (+46-46)
  • (modified) llvm/test/CodeGen/RISCV/double-convert.ll (+132-132)
  • (modified) llvm/test/CodeGen/RISCV/double-fcmp-strict.ll (+64-64)
  • (modified) llvm/test/CodeGen/RISCV/double-fcmp.ll (+32-32)
  • (modified) llvm/test/CodeGen/RISCV/double-frem.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/double-intrinsics-strict.ll (+142-142)
  • (modified) llvm/test/CodeGen/RISCV/double-intrinsics.ll (+115-115)
  • (modified) llvm/test/CodeGen/RISCV/double-mem.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/double-previous-failure.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/double-round-conv-sat.ll (+40-40)
  • (modified) llvm/test/CodeGen/RISCV/double-round-conv.ll (+50-50)
  • (modified) llvm/test/CodeGen/RISCV/double-stack-spill-restore.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/eh-dwarf-cfa.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/emutls.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/exception-pointer-register.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/fastcc-float.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/fastcc-int.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/fastcc-without-f-reg.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/fli-licm.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/float-arith-strict.ll (+48-48)
  • (modified) llvm/test/CodeGen/RISCV/float-arith.ll (+82-82)
  • (modified) llvm/test/CodeGen/RISCV/float-bit-preserving-dagcombines.ll (+24-24)
  • (modified) llvm/test/CodeGen/RISCV/float-br-fcmp.ll (+80-80)
  • (modified) llvm/test/CodeGen/RISCV/float-convert-strict.ll (+42-42)
  • (modified) llvm/test/CodeGen/RISCV/float-convert.ll (+128-128)
  • (modified) llvm/test/CodeGen/RISCV/float-fcmp-strict.ll (+64-64)
  • (modified) llvm/test/CodeGen/RISCV/float-fcmp.ll (+32-32)
  • (modified) llvm/test/CodeGen/RISCV/float-frem.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/float-intrinsics-strict.ll (+142-142)
  • (modified) llvm/test/CodeGen/RISCV/float-intrinsics.ll (+110-110)
  • (modified) llvm/test/CodeGen/RISCV/float-mem.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/float-round-conv-sat.ll (+20-20)
  • (modified) llvm/test/CodeGen/RISCV/float-round-conv.ll (+20-20)
  • (modified) llvm/test/CodeGen/RISCV/float-zfa.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/fmax-fmin.ll (+18-18)
  • (modified) llvm/test/CodeGen/RISCV/fold-addi-loadstore.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/forced-atomics.ll (+292-292)
  • (modified) llvm/test/CodeGen/RISCV/fp128.ll (+3-3)
  • (modified) llvm/test/CodeGen/RISCV/fp16-promote.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/fpclamptosat.ll (+106-106)
  • (modified) llvm/test/CodeGen/RISCV/frame-info.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/frame.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/frameaddr-returnaddr.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/ghccc-rv32.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/ghccc-rv64.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/ghccc-without-f-reg.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/half-arith.ll (+352-352)
  • (modified) llvm/test/CodeGen/RISCV/half-br-fcmp.ll (+136-136)
  • (modified) llvm/test/CodeGen/RISCV/half-convert-strict.ll (+44-44)
  • (modified) llvm/test/CodeGen/RISCV/half-convert.ll (+437-437)
  • (modified) llvm/test/CodeGen/RISCV/half-frem.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/half-intrinsics.ll (+268-268)
  • (modified) llvm/test/CodeGen/RISCV/half-mem.ll (+16-16)
  • (modified) llvm/test/CodeGen/RISCV/half-round-conv-sat.ll (+40-40)
  • (modified) llvm/test/CodeGen/RISCV/half-round-conv.ll (+40-40)
  • (modified) llvm/test/CodeGen/RISCV/hoist-global-addr-base.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/interrupt-attr-callee.ll (+9-9)
  • (modified) llvm/test/CodeGen/RISCV/interrupt-attr-nocall.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/interrupt-attr.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/intrinsic-cttz-elts-vscale.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/libcall-tail-calls.ll (+68-68)
  • (modified) llvm/test/CodeGen/RISCV/llvm.exp10.ll (+106-106)
  • (modified) llvm/test/CodeGen/RISCV/llvm.frexp.ll (+126-126)
  • (modified) llvm/test/CodeGen/RISCV/machine-outliner-and-machine-copy-propagation.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/machine-outliner-throw.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/machinelicm-address-pseudos.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/macro-fusion-lui-addi.ll (+3-3)
  • (modified) llvm/test/CodeGen/RISCV/mem.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/mem64.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/memcpy.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/miss-sp-restore-eh.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/mul.ll (+27-27)
  • (modified) llvm/test/CodeGen/RISCV/nest-register.ll (+2-4)
  • (modified) llvm/test/CodeGen/RISCV/nomerge.ll (+5-5)
  • (modified) llvm/test/CodeGen/RISCV/out-of-reach-emergency-slot.mir (+3-1)
  • (modified) llvm/test/CodeGen/RISCV/overflow-intrinsics.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/pr51206.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/pr63816.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/push-pop-popret.ll (+72-72)
  • (modified) llvm/test/CodeGen/RISCV/regalloc-last-chance-recoloring-failure.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rem.ll (+36-36)
  • (modified) llvm/test/CodeGen/RISCV/remat.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/rv32i-rv64i-float-double.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/rv32i-rv64i-half.ll (+14-14)
  • (modified) llvm/test/CodeGen/RISCV/rv32xtheadbb.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/rv32zbb.ll (+15-15)
  • (modified) llvm/test/CodeGen/RISCV/rv64-large-stack.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rv64-legal-i32/div.ll (+25-25)
  • (modified) llvm/test/CodeGen/RISCV/rv64-legal-i32/mem64.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rv64-legal-i32/rem.ll (+16-16)
  • (modified) llvm/test/CodeGen/RISCV/rv64-legal-i32/rv64xtheadbb.ll (+15-15)
  • (modified) llvm/test/CodeGen/RISCV/rv64-legal-i32/rv64zbb.ll (+14-14)
  • (modified) llvm/test/CodeGen/RISCV/rv64-legal-i32/rv64zbs.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rv64i-complex-float.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rv64i-double-softfloat.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/rv64i-single-softfloat.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rv64xtheadbb.ll (+11-11)
  • (modified) llvm/test/CodeGen/RISCV/rv64zbb.ll (+18-18)
  • (modified) llvm/test/CodeGen/RISCV/rv64zbs.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/rvv/calling-conv.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll (+20-20)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-emergency-slot.mir (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-llrint.ll (+50-50)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int-vp.ll (+14-14)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll (+240-240)
  • (modified) llvm/test/CodeGen/RISCV/rvv/large-rvv-stack-size.mir (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/localvar.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/memory-args.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/no-reserved-frame.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/pr63596.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/rvv/reg-alloc-reserve-bp.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll (+3-3)
  • (modified) llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/rvv-stack-align.mir (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/rvv/scalar-stack-align.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vxrm-insert.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/select-and.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/select-cc.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/select-or.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/setcc-logic.ll (+56-56)
  • (modified) llvm/test/CodeGen/RISCV/sextw-removal.ll (+30-30)
  • (modified) llvm/test/CodeGen/RISCV/shadowcallstack.ll (+14-14)
  • (modified) llvm/test/CodeGen/RISCV/shifts.ll (+3-3)
  • (modified) llvm/test/CodeGen/RISCV/short-foward-branch-opt.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/shrinkwrap-jump-table.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/shrinkwrap.ll (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/split-sp-adjust.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/split-udiv-by-constant.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/split-urem-by-constant.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/srem-lkk.ll (+15-15)
  • (modified) llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll (+16-16)
  • (modified) llvm/test/CodeGen/RISCV/srem-vector-lkk.ll (+55-55)
  • (modified) llvm/test/CodeGen/RISCV/stack-protector-target.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/stack-realignment-with-variable-sized-objects.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/stack-realignment.ll (+32-32)
  • (modified) llvm/test/CodeGen/RISCV/stack-slot-size.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/stack-store-check.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/tls-models.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/unfold-masked-merge-scalar-variablemask.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/urem-lkk.ll (+11-11)
  • (modified) llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/urem-vector-lkk.ll (+51-51)
  • (modified) llvm/test/CodeGen/RISCV/vararg.ll (+30-30)
  • (modified) llvm/test/CodeGen/RISCV/vlenb.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/zbb-cmp-combine.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/zcmp-with-float.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/zfh-half-intrinsics-strict.ll (+48-48)
  • (modified) llvm/test/CodeGen/RISCV/zfhmin-half-intrinsics-strict.ll (+48-48)
  • (modified) llvm/test/MC/RISCV/function-call.s (+2-2)
  • (modified) llvm/test/MC/RISCV/tail-call.s (+1-1)
diff --git a/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp b/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp
index e9264a5d851c0b5..f34b288fd00f3d3 100644
--- a/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp
+++ b/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp
@@ -2042,9 +2042,8 @@ ParseStatus RISCVAsmParser::parseCallSymbol(OperandVector &Operands) {
 
   SMLoc E = SMLoc::getFromPointer(S.getPointer() + Identifier.size());
 
-  RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL;
-  if (Identifier.consume_back("@plt"))
-    Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
+  RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
+  (void)Identifier.consume_back("@plt");
 
   MCSymbol *Sym = getContext().getOrCreateSymbol(Identifier);
   Res = MCSymbolRefExpr::create(Sym, MCSymbolRefExpr::VK_None, getContext());
diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp
index d67351102bc1cde..64ddae61b1bc159 100644
--- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp
+++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp
@@ -41,8 +41,6 @@ void RISCVMCExpr::printImpl(raw_ostream &OS, const MCAsmInfo *MAI) const {
   if (HasVariant)
     OS << '%' << getVariantKindName(getKind()) << '(';
   Expr->print(OS, MAI);
-  if (Kind == VK_RISCV_CALL_PLT)
-    OS << "@plt";
   if (HasVariant)
     OS << ')';
 }
diff --git a/llvm/test/CodeGen/RISCV/addrspacecast.ll b/llvm/test/CodeGen/RISCV/addrspacecast.ll
index 7fe041a8ac6a10a..e55a57a51678222 100644
--- a/llvm/test/CodeGen/RISCV/addrspacecast.ll
+++ b/llvm/test/CodeGen/RISCV/addrspacecast.ll
@@ -26,7 +26,7 @@ define void @cast1(ptr %ptr) {
 ; RV32I-NEXT:    .cfi_def_cfa_offset 16
 ; RV32I-NEXT:    sw ra, 12(sp) # 4-byte Folded Spill
 ; RV32I-NEXT:    .cfi_offset ra, -4
-; RV32I-NEXT:    call foo@plt
+; RV32I-NEXT:    call foo
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -37,7 +37,7 @@ define void @cast1(ptr %ptr) {
 ; RV64I-NEXT:    .cfi_def_cfa_offset 16
 ; RV64I-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    .cfi_offset ra, -8
-; RV64I-NEXT:    call foo@plt
+; RV64I-NEXT:    call foo
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/aext-to-sext.ll b/llvm/test/CodeGen/RISCV/aext-to-sext.ll
index 09803012001c26c..888ea666d713167 100644
--- a/llvm/test/CodeGen/RISCV/aext-to-sext.ll
+++ b/llvm/test/CodeGen/RISCV/aext-to-sext.ll
@@ -19,7 +19,7 @@ define void @quux(i32 signext %arg, i32 signext %arg1) nounwind {
 ; RV64I-NEXT:    subw s0, a1, a0
 ; RV64I-NEXT:  .LBB0_2: # %bb2
 ; RV64I-NEXT:    # =>This Inner Loop Header: Depth=1
-; RV64I-NEXT:    call hoge@plt
+; RV64I-NEXT:    call hoge
 ; RV64I-NEXT:    addiw s0, s0, -1
 ; RV64I-NEXT:    bnez s0, .LBB0_2
 ; RV64I-NEXT:  # %bb.3:
diff --git a/llvm/test/CodeGen/RISCV/alloca.ll b/llvm/test/CodeGen/RISCV/alloca.ll
index 34cac429f30533c..bcb0592c18f59f1 100644
--- a/llvm/test/CodeGen/RISCV/alloca.ll
+++ b/llvm/test/CodeGen/RISCV/alloca.ll
@@ -18,7 +18,7 @@ define void @simple_alloca(i32 %n) nounwind {
 ; RV32I-NEXT:    andi a0, a0, -16
 ; RV32I-NEXT:    sub a0, sp, a0
 ; RV32I-NEXT:    mv sp, a0
-; RV32I-NEXT:    call notdead@plt
+; RV32I-NEXT:    call notdead
 ; RV32I-NEXT:    addi sp, s0, -16
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    lw s0, 8(sp) # 4-byte Folded Reload
@@ -45,7 +45,7 @@ define void @scoped_alloca(i32 %n) nounwind {
 ; RV32I-NEXT:    andi a0, a0, -16
 ; RV32I-NEXT:    sub a0, sp, a0
 ; RV32I-NEXT:    mv sp, a0
-; RV32I-NEXT:    call notdead@plt
+; RV32I-NEXT:    call notdead
 ; RV32I-NEXT:    mv sp, s1
 ; RV32I-NEXT:    addi sp, s0, -16
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
@@ -91,7 +91,7 @@ define void @alloca_callframe(i32 %n) nounwind {
 ; RV32I-NEXT:    li a6, 7
 ; RV32I-NEXT:    li a7, 8
 ; RV32I-NEXT:    sw t0, 0(sp)
-; RV32I-NEXT:    call func@plt
+; RV32I-NEXT:    call func
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    addi sp, s0, -16
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/analyze-branch.ll b/llvm/test/CodeGen/RISCV/analyze-branch.ll
index e33e6b65c6f14d7..768a11a717063d5 100644
--- a/llvm/test/CodeGen/RISCV/analyze-branch.ll
+++ b/llvm/test/CodeGen/RISCV/analyze-branch.ll
@@ -20,13 +20,13 @@ define void @test_bcc_fallthrough_taken(i32 %in) nounwind {
 ; RV32I-NEXT:    li a1, 42
 ; RV32I-NEXT:    bne a0, a1, .LBB0_3
 ; RV32I-NEXT:  # %bb.1: # %true
-; RV32I-NEXT:    call test_true@plt
+; RV32I-NEXT:    call test_true
 ; RV32I-NEXT:  .LBB0_2: # %true
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
 ; RV32I-NEXT:  .LBB0_3: # %false
-; RV32I-NEXT:    call test_false@plt
+; RV32I-NEXT:    call test_false
 ; RV32I-NEXT:    j .LBB0_2
   %tst = icmp eq i32 %in, 42
   br i1 %tst, label %true, label %false, !prof !0
@@ -52,13 +52,13 @@ define void @test_bcc_fallthrough_nottaken(i32 %in) nounwind {
 ; RV32I-NEXT:    li a1, 42
 ; RV32I-NEXT:    beq a0, a1, .LBB1_3
 ; RV32I-NEXT:  # %bb.1: # %false
-; RV32I-NEXT:    call test_false@plt
+; RV32I-NEXT:    call test_false
 ; RV32I-NEXT:  .LBB1_2: # %true
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
 ; RV32I-NEXT:  .LBB1_3: # %true
-; RV32I-NEXT:    call test_true@plt
+; RV32I-NEXT:    call test_true
 ; RV32I-NEXT:    j .LBB1_2
   %tst = icmp eq i32 %in, 42
   br i1 %tst, label %true, label %false, !prof !1
diff --git a/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll b/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll
index eea4cb72938af23..46ed01b11584f9c 100644
--- a/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll
+++ b/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll
@@ -21,7 +21,7 @@ define void @cmpxchg_i8_monotonic_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 0
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -57,7 +57,7 @@ define void @cmpxchg_i8_monotonic_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 0
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -97,7 +97,7 @@ define void @cmpxchg_i8_acquire_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 2
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -156,7 +156,7 @@ define void @cmpxchg_i8_acquire_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 2
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -219,7 +219,7 @@ define void @cmpxchg_i8_acquire_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 2
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -278,7 +278,7 @@ define void @cmpxchg_i8_acquire_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 2
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -341,7 +341,7 @@ define void @cmpxchg_i8_release_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 3
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -400,7 +400,7 @@ define void @cmpxchg_i8_release_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 3
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -463,7 +463,7 @@ define void @cmpxchg_i8_release_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 3
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -522,7 +522,7 @@ define void @cmpxchg_i8_release_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 3
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -585,7 +585,7 @@ define void @cmpxchg_i8_acq_rel_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 4
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -644,7 +644,7 @@ define void @cmpxchg_i8_acq_rel_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 4
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -707,7 +707,7 @@ define void @cmpxchg_i8_acq_rel_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 4
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -766,7 +766,7 @@ define void @cmpxchg_i8_acq_rel_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 4
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -829,7 +829,7 @@ define void @cmpxchg_i8_seq_cst_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 5
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -865,7 +865,7 @@ define void @cmpxchg_i8_seq_cst_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 5
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -905,7 +905,7 @@ define void @cmpxchg_i8_seq_cst_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 5
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -941,7 +941,7 @@ define void @cmpxchg_i8_seq_cst_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 5
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -981,7 +981,7 @@ define void @cmpxchg_i8_seq_cst_seq_cst(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV32I-NEXT:    addi a1, sp, 11
 ; RV32I-NEXT:    li a3, 5
 ; RV32I-NEXT:    li a4, 5
-; RV32I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_1
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1017,7 +1017,7 @@ define void @cmpxchg_i8_seq_cst_seq_cst(ptr %ptr, i8 %cmp, i8 %val) nounwind {
 ; RV64I-NEXT:    addi a1, sp, 7
 ; RV64I-NEXT:    li a3, 5
 ; RV64I-NEXT:    li a4, 5
-; RV64I-NEXT:    call __atomic_compare_exchange_1@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_1
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1057,7 +1057,7 @@ define void @cmpxchg_i16_monotonic_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounw
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 0
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1094,7 +1094,7 @@ define void @cmpxchg_i16_monotonic_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounw
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 0
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1135,7 +1135,7 @@ define void @cmpxchg_i16_acquire_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 2
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1196,7 +1196,7 @@ define void @cmpxchg_i16_acquire_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 2
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1261,7 +1261,7 @@ define void @cmpxchg_i16_acquire_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 2
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1322,7 +1322,7 @@ define void @cmpxchg_i16_acquire_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 2
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1387,7 +1387,7 @@ define void @cmpxchg_i16_release_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 3
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1448,7 +1448,7 @@ define void @cmpxchg_i16_release_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 3
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1513,7 +1513,7 @@ define void @cmpxchg_i16_release_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 3
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1574,7 +1574,7 @@ define void @cmpxchg_i16_release_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 3
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1639,7 +1639,7 @@ define void @cmpxchg_i16_acq_rel_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 4
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1700,7 +1700,7 @@ define void @cmpxchg_i16_acq_rel_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 4
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1765,7 +1765,7 @@ define void @cmpxchg_i16_acq_rel_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 4
 ; RV32I-NEXT:    li a4, 2
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1826,7 +1826,7 @@ define void @cmpxchg_i16_acq_rel_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 4
 ; RV64I-NEXT:    li a4, 2
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -1891,7 +1891,7 @@ define void @cmpxchg_i16_seq_cst_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV32I-NEXT:    addi a1, sp, 10
 ; RV32I-NEXT:    li a3, 5
 ; RV32I-NEXT:    li a4, 0
-; RV32I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV32I-NEXT:    call __atomic_compare_exchange_2
 ; RV32I-NEXT:    lw ra, 12(sp) # 4-byte Folded Reload
 ; RV32I-NEXT:    addi sp, sp, 16
 ; RV32I-NEXT:    ret
@@ -1928,7 +1928,7 @@ define void @cmpxchg_i16_seq_cst_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
 ; RV64I-NEXT:    addi a1, sp, 6
 ; RV64I-NEXT:    li a3, 5
 ; RV64I-NEXT:    li a4, 0
-; RV64I-NEXT:    call __atomic_compare_exchange_2@plt
+; RV64I-NEXT:    call __atomic_compare_exchange_2
 ; RV64I-NEXT:    ld ra, 8(sp) # 8-byte Folded Reload
 ; RV64I-NEXT:    addi sp, sp, 16
 ; RV64I-NEXT:    ret
@@ -...
[truncated]

Inline review comments on llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp:

-  RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL;
-  if (Identifier.consume_back("@plt"))
-    Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
+  RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
Collaborator:

We should be able to ditch VK_RISCV_CALL_PLT and only have a VK_RISCV_CALL that means @plt

Contributor:

Should we ditch VK_RISCV_CALL and only keep VK_RISCV_CALL_PLT instead? The ABI decided to keep R_RISCV_CALL_PLT, not R_RISCV_CALL, so it may be clearer to stay analogous to the ABI, especially since we now always use the PLT variant.

Member Author:

> We should be able to ditch VK_RISCV_CALL_PLT and only have a VK_RISCV_CALL that means @plt

This patch will make this possible. I am happy to remove either one as a follow-up (which will update very few tests).
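For context, a minimal sketch of why the follow-up is mechanical (hypothetical file variants.s; the commands are just one way to compare): after this patch both spellings parse to the PLT-capable variant kind and produce identical output.

    call foo         # parsed with the PLT-capable variant kind
    call foo@plt     # "@plt" is consumed and ignored; identical encoding and relocation
    tail bar         # tail calls go through the same call-symbol parsing (see tail-call.s above)

    # Compare the two call sites:
    #   llvm-mc -triple=riscv64 -filetype=obj variants.s -o variants.o
    #   llvm-objdump -d -r variants.o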

@MaskRay changed the title from [RISCV] Omit "@plt" in assembler output "call foo@plt" to [RISCV] Omit "@plt" in assembly output "call foo@plt" on Jan 7, 2024
@MaskRay merged commit eabaee0 into llvm:main on Jan 7, 2024
4 of 5 checks passed
@MaskRay deleted the rv-atplt branch on January 7, 2024 at 20:10
MaskRay added a commit that referenced this pull request Jan 7, 2024
Since #72467, `@plt` in assembly output "call foo@plt" is omitted. We
can trivially merge MO_PLT and MO_CALL without any functional change to
assembly/relocatable file output.

Earlier architectures use different call relocation types depending on whether a PLT
is potentially needed: R_386_PLT32/R_386_PC32, R_68K_PLT32/R_68K_PC32,
R_SPARC_WDISP30/R_SPARC_WPLT30. However, as the PLT property is
per-symbol instead of per-call-site and linkers can optimize out a PLT,
the distinction has been confusing.

Arm made good names R_ARM_CALL/R_AARCH64_CALL. Let's use MO_CALL instead
of MO_PLT.

As follow-ups, we can merge fixup_riscv_call/fixup_riscv_call_plt and
VK_RISCV_CALL/VK_RISCV_CALL_PLT.
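To illustrate the per-symbol point, a small sketch (hypothetical file calls.s): both call sites carry the same R_RISCV_CALL_PLT relocation, and only at link time, depending on where each symbol ends up, does the linker create a PLT entry or bind the call directly.

    # calls.s
        .globl local_func
    local_func:
        ret

    caller:
        call local_func   # R_RISCV_CALL_PLT, but the linker binds this directly
        call ext_func     # R_RISCV_CALL_PLT; a PLT entry only if ext_func remains external in a shared-library link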
justinfargnoli pushed two commits to justinfargnoli/llvm-project that referenced this pull request Jan 28, 2024