[RISCV] Omit "@plt" in assembly output "call foo@plt" #72467
Conversation
@llvm/pr-subscribers-llvm-globalisel @llvm/pr-subscribers-backend-risc-v

Author: Fangrui Song (MaskRay)

Changes: The R_RISCV_CALL/R_RISCV_CALL_PLT distinction is not necessary, and R_RISCV_CALL has been deprecated. Since https://reviews.llvm.org/D132530, `call foo` assembles to R_RISCV_CALL_PLT, so the `@plt` suffix is no longer useful and can be removed (matching AArch64 and PowerPC). Without this patch, unconditionally changing MO_CALL to MO_PLT could create `jump .L1@plt, a0`, which is invalid in the LLVM integrated assembler and GNU assembler.

Patch is 2.11 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/72467.diff

203 Files Affected:
diff --git a/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp b/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp
index e9264a5d851c0b5..f34b288fd00f3d3 100644
--- a/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp
+++ b/llvm/lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp
@@ -2042,9 +2042,8 @@ ParseStatus RISCVAsmParser::parseCallSymbol(OperandVector &Operands) {
SMLoc E = SMLoc::getFromPointer(S.getPointer() + Identifier.size());
- RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL;
- if (Identifier.consume_back("@plt"))
- Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
+ RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
+ (void)Identifier.consume_back("@plt");
MCSymbol *Sym = getContext().getOrCreateSymbol(Identifier);
Res = MCSymbolRefExpr::create(Sym, MCSymbolRefExpr::VK_None, getContext());
diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp
index d67351102bc1cde..64ddae61b1bc159 100644
--- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp
+++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMCExpr.cpp
@@ -41,8 +41,6 @@ void RISCVMCExpr::printImpl(raw_ostream &OS, const MCAsmInfo *MAI) const {
if (HasVariant)
OS << '%' << getVariantKindName(getKind()) << '(';
Expr->print(OS, MAI);
- if (Kind == VK_RISCV_CALL_PLT)
- OS << "@plt";
if (HasVariant)
OS << ')';
}
diff --git a/llvm/test/CodeGen/RISCV/addrspacecast.ll b/llvm/test/CodeGen/RISCV/addrspacecast.ll
index 7fe041a8ac6a10a..e55a57a51678222 100644
--- a/llvm/test/CodeGen/RISCV/addrspacecast.ll
+++ b/llvm/test/CodeGen/RISCV/addrspacecast.ll
@@ -26,7 +26,7 @@ define void @cast1(ptr %ptr) {
; RV32I-NEXT: .cfi_def_cfa_offset 16
; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill
; RV32I-NEXT: .cfi_offset ra, -4
-; RV32I-NEXT: call foo@plt
+; RV32I-NEXT: call foo
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -37,7 +37,7 @@ define void @cast1(ptr %ptr) {
; RV64I-NEXT: .cfi_def_cfa_offset 16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: .cfi_offset ra, -8
-; RV64I-NEXT: call foo@plt
+; RV64I-NEXT: call foo
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/aext-to-sext.ll b/llvm/test/CodeGen/RISCV/aext-to-sext.ll
index 09803012001c26c..888ea666d713167 100644
--- a/llvm/test/CodeGen/RISCV/aext-to-sext.ll
+++ b/llvm/test/CodeGen/RISCV/aext-to-sext.ll
@@ -19,7 +19,7 @@ define void @quux(i32 signext %arg, i32 signext %arg1) nounwind {
; RV64I-NEXT: subw s0, a1, a0
; RV64I-NEXT: .LBB0_2: # %bb2
; RV64I-NEXT: # =>This Inner Loop Header: Depth=1
-; RV64I-NEXT: call hoge@plt
+; RV64I-NEXT: call hoge
; RV64I-NEXT: addiw s0, s0, -1
; RV64I-NEXT: bnez s0, .LBB0_2
; RV64I-NEXT: # %bb.3:
diff --git a/llvm/test/CodeGen/RISCV/alloca.ll b/llvm/test/CodeGen/RISCV/alloca.ll
index 34cac429f30533c..bcb0592c18f59f1 100644
--- a/llvm/test/CodeGen/RISCV/alloca.ll
+++ b/llvm/test/CodeGen/RISCV/alloca.ll
@@ -18,7 +18,7 @@ define void @simple_alloca(i32 %n) nounwind {
; RV32I-NEXT: andi a0, a0, -16
; RV32I-NEXT: sub a0, sp, a0
; RV32I-NEXT: mv sp, a0
-; RV32I-NEXT: call notdead@plt
+; RV32I-NEXT: call notdead
; RV32I-NEXT: addi sp, s0, -16
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: lw s0, 8(sp) # 4-byte Folded Reload
@@ -45,7 +45,7 @@ define void @scoped_alloca(i32 %n) nounwind {
; RV32I-NEXT: andi a0, a0, -16
; RV32I-NEXT: sub a0, sp, a0
; RV32I-NEXT: mv sp, a0
-; RV32I-NEXT: call notdead@plt
+; RV32I-NEXT: call notdead
; RV32I-NEXT: mv sp, s1
; RV32I-NEXT: addi sp, s0, -16
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
@@ -91,7 +91,7 @@ define void @alloca_callframe(i32 %n) nounwind {
; RV32I-NEXT: li a6, 7
; RV32I-NEXT: li a7, 8
; RV32I-NEXT: sw t0, 0(sp)
-; RV32I-NEXT: call func@plt
+; RV32I-NEXT: call func
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: addi sp, s0, -16
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/analyze-branch.ll b/llvm/test/CodeGen/RISCV/analyze-branch.ll
index e33e6b65c6f14d7..768a11a717063d5 100644
--- a/llvm/test/CodeGen/RISCV/analyze-branch.ll
+++ b/llvm/test/CodeGen/RISCV/analyze-branch.ll
@@ -20,13 +20,13 @@ define void @test_bcc_fallthrough_taken(i32 %in) nounwind {
; RV32I-NEXT: li a1, 42
; RV32I-NEXT: bne a0, a1, .LBB0_3
; RV32I-NEXT: # %bb.1: # %true
-; RV32I-NEXT: call test_true@plt
+; RV32I-NEXT: call test_true
; RV32I-NEXT: .LBB0_2: # %true
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
; RV32I-NEXT: .LBB0_3: # %false
-; RV32I-NEXT: call test_false@plt
+; RV32I-NEXT: call test_false
; RV32I-NEXT: j .LBB0_2
%tst = icmp eq i32 %in, 42
br i1 %tst, label %true, label %false, !prof !0
@@ -52,13 +52,13 @@ define void @test_bcc_fallthrough_nottaken(i32 %in) nounwind {
; RV32I-NEXT: li a1, 42
; RV32I-NEXT: beq a0, a1, .LBB1_3
; RV32I-NEXT: # %bb.1: # %false
-; RV32I-NEXT: call test_false@plt
+; RV32I-NEXT: call test_false
; RV32I-NEXT: .LBB1_2: # %true
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
; RV32I-NEXT: .LBB1_3: # %true
-; RV32I-NEXT: call test_true@plt
+; RV32I-NEXT: call test_true
; RV32I-NEXT: j .LBB1_2
%tst = icmp eq i32 %in, 42
br i1 %tst, label %true, label %false, !prof !1
diff --git a/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll b/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll
index eea4cb72938af23..46ed01b11584f9c 100644
--- a/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll
+++ b/llvm/test/CodeGen/RISCV/atomic-cmpxchg.ll
@@ -21,7 +21,7 @@ define void @cmpxchg_i8_monotonic_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 0
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -57,7 +57,7 @@ define void @cmpxchg_i8_monotonic_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 0
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -97,7 +97,7 @@ define void @cmpxchg_i8_acquire_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 2
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -156,7 +156,7 @@ define void @cmpxchg_i8_acquire_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 2
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -219,7 +219,7 @@ define void @cmpxchg_i8_acquire_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 2
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -278,7 +278,7 @@ define void @cmpxchg_i8_acquire_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 2
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -341,7 +341,7 @@ define void @cmpxchg_i8_release_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 3
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -400,7 +400,7 @@ define void @cmpxchg_i8_release_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 3
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -463,7 +463,7 @@ define void @cmpxchg_i8_release_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 3
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -522,7 +522,7 @@ define void @cmpxchg_i8_release_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 3
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -585,7 +585,7 @@ define void @cmpxchg_i8_acq_rel_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 4
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -644,7 +644,7 @@ define void @cmpxchg_i8_acq_rel_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 4
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -707,7 +707,7 @@ define void @cmpxchg_i8_acq_rel_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 4
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -766,7 +766,7 @@ define void @cmpxchg_i8_acq_rel_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 4
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -829,7 +829,7 @@ define void @cmpxchg_i8_seq_cst_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 5
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -865,7 +865,7 @@ define void @cmpxchg_i8_seq_cst_monotonic(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 5
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -905,7 +905,7 @@ define void @cmpxchg_i8_seq_cst_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 5
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -941,7 +941,7 @@ define void @cmpxchg_i8_seq_cst_acquire(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 5
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -981,7 +981,7 @@ define void @cmpxchg_i8_seq_cst_seq_cst(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV32I-NEXT: addi a1, sp, 11
; RV32I-NEXT: li a3, 5
; RV32I-NEXT: li a4, 5
-; RV32I-NEXT: call __atomic_compare_exchange_1@plt
+; RV32I-NEXT: call __atomic_compare_exchange_1
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1017,7 +1017,7 @@ define void @cmpxchg_i8_seq_cst_seq_cst(ptr %ptr, i8 %cmp, i8 %val) nounwind {
; RV64I-NEXT: addi a1, sp, 7
; RV64I-NEXT: li a3, 5
; RV64I-NEXT: li a4, 5
-; RV64I-NEXT: call __atomic_compare_exchange_1@plt
+; RV64I-NEXT: call __atomic_compare_exchange_1
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1057,7 +1057,7 @@ define void @cmpxchg_i16_monotonic_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounw
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 0
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1094,7 +1094,7 @@ define void @cmpxchg_i16_monotonic_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounw
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 0
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1135,7 +1135,7 @@ define void @cmpxchg_i16_acquire_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 2
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1196,7 +1196,7 @@ define void @cmpxchg_i16_acquire_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 2
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1261,7 +1261,7 @@ define void @cmpxchg_i16_acquire_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 2
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1322,7 +1322,7 @@ define void @cmpxchg_i16_acquire_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 2
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1387,7 +1387,7 @@ define void @cmpxchg_i16_release_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 3
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1448,7 +1448,7 @@ define void @cmpxchg_i16_release_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 3
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1513,7 +1513,7 @@ define void @cmpxchg_i16_release_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 3
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1574,7 +1574,7 @@ define void @cmpxchg_i16_release_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 3
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1639,7 +1639,7 @@ define void @cmpxchg_i16_acq_rel_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 4
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1700,7 +1700,7 @@ define void @cmpxchg_i16_acq_rel_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 4
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1765,7 +1765,7 @@ define void @cmpxchg_i16_acq_rel_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 4
; RV32I-NEXT: li a4, 2
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1826,7 +1826,7 @@ define void @cmpxchg_i16_acq_rel_acquire(ptr %ptr, i16 %cmp, i16 %val) nounwind
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 4
; RV64I-NEXT: li a4, 2
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -1891,7 +1891,7 @@ define void @cmpxchg_i16_seq_cst_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV32I-NEXT: addi a1, sp, 10
; RV32I-NEXT: li a3, 5
; RV32I-NEXT: li a4, 0
-; RV32I-NEXT: call __atomic_compare_exchange_2@plt
+; RV32I-NEXT: call __atomic_compare_exchange_2
; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload
; RV32I-NEXT: addi sp, sp, 16
; RV32I-NEXT: ret
@@ -1928,7 +1928,7 @@ define void @cmpxchg_i16_seq_cst_monotonic(ptr %ptr, i16 %cmp, i16 %val) nounwin
; RV64I-NEXT: addi a1, sp, 6
; RV64I-NEXT: li a3, 5
; RV64I-NEXT: li a4, 0
-; RV64I-NEXT: call __atomic_compare_exchange_2@plt
+; RV64I-NEXT: call __atomic_compare_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
; RV64I-NEXT: ret
@@ -...
[truncated]
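The parser change at the top of the diff can be sketched as follows. This is an illustrative Python model, not LLVM's actual API: after the patch, `parseCallSymbol` still consumes an `@plt` suffix for compatibility, but the variant kind is unconditionally `VK_RISCV_CALL_PLT`.

```python
# Illustrative model of the new parseCallSymbol behavior in
# RISCVAsmParser.cpp (a sketch; the real code operates on StringRef).
VK_RISCV_CALL_PLT = "VK_RISCV_CALL_PLT"

def parse_call_symbol(identifier: str) -> tuple[str, str]:
    """Return (symbol name, variant kind) for a call operand."""
    suffix = "@plt"
    # The "@plt" suffix is still accepted and stripped, but it no longer
    # selects a different variant kind: every call is a PLT-style call.
    if identifier.endswith(suffix):
        identifier = identifier[: -len(suffix)]
    return identifier, VK_RISCV_CALL_PLT
```

Both `parse_call_symbol("foo")` and `parse_call_symbol("foo@plt")` yield `("foo", "VK_RISCV_CALL_PLT")`, which is why the hundreds of CHECK lines above can drop the suffix without any behavioral change.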
-  RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL;
-  if (Identifier.consume_back("@plt"))
-    Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
+  RISCVMCExpr::VariantKind Kind = RISCVMCExpr::VK_RISCV_CALL_PLT;
We should be able to ditch VK_RISCV_CALL_PLT and only have a VK_RISCV_CALL that means @plt
Should we ditch VK_RISCV_CALL and only have a VK_RISCV_CALL_PLT? The ABI decided to keep R_RISCV_CALL_PLT not R_RISCV_CALL. It may be more clear to be analogous to the ABI and since we want to always emit the @plt suffix.
> We should be able to ditch VK_RISCV_CALL_PLT and only have a VK_RISCV_CALL that means @plt
This patch will make this possible. I am happy to remove either one as a follow-up (which will update very few tests).
The R_RISCV_CALL/R_RISCV_CALL_PLT distinction is not necessary, and R_RISCV_CALL has been deprecated. Since https://reviews.llvm.org/D132530, `call foo` assembles to R_RISCV_CALL_PLT, so the `@plt` suffix is no longer useful and can be removed (matching AArch64 and PowerPC). GNU assembler has assembled `call foo` to R_RISCV_CALL_PLT since 2022-09 (70f35d72ef04cd23771875c1661c9975044a749c). Without this patch, unconditionally changing MO_CALL to MO_PLT could create `jump .L1@plt, a0`, which is invalid in the LLVM integrated assembler and GNU assembler.
Since #72467, the `@plt` in assembly output `call foo@plt` is omitted, so we can trivially merge MO_PLT and MO_CALL without any functional change to assembly or relocatable file output. Earlier architectures used different call relocation types depending on whether a PLT might be needed: R_386_PLT32/R_386_PC32, R_68K_PLT32/R_68K_PC32, R_SPARC_WDISP30/R_SPARC_WPLT30. However, since the PLT property is per-symbol rather than per-call-site, and linkers can optimize out a PLT, the distinction has been confusing. Arm chose better names: R_ARM_CALL and R_AARCH64_CALL. Let's use MO_CALL instead of MO_PLT. As follow-ups, we can merge fixup_riscv_call/fixup_riscv_call_plt and VK_RISCV_CALL/VK_RISCV_CALL_PLT.
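Why this merge cannot change the output can be sketched as a lookup table. This is an illustrative model (names only; not LLVM code): per the commit message, since D132530 both flavors of call operand already produce the same ELF relocation, so collapsing the flags is observationally equivalent.

```python
# Illustrative sketch: both machine-operand target flags already emit the
# same relocation, so merging MO_PLT into MO_CALL is a no-op for output.
FLAG_TO_RELOC = {
    "MO_CALL": "R_RISCV_CALL_PLT",  # since D132530, "call foo" -> R_RISCV_CALL_PLT
    "MO_PLT": "R_RISCV_CALL_PLT",
}

def reloc_for(flag: str) -> str:
    """ELF relocation ultimately emitted for a call machine operand."""
    return FLAG_TO_RELOC[flag]
```

Since `reloc_for("MO_CALL") == reloc_for("MO_PLT")`, keeping only MO_CALL loses nothing, mirroring how linkers treat PLT use as a per-symbol decision anyway.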