
CodeGen: Fix implementation of __builtin_trivially_relocate. #140312


Merged
8 changes: 8 additions & 0 deletions clang/lib/CodeGen/CGBuiltin.cpp
@@ -4425,6 +4425,14 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl GD, unsigned BuiltinID,
Address Dest = EmitPointerWithAlignment(E->getArg(0));
Address Src = EmitPointerWithAlignment(E->getArg(1));
Value *SizeVal = EmitScalarExpr(E->getArg(2));
if (BuiltinIDIfNoAsmLabel == Builtin::BI__builtin_trivially_relocate)
SizeVal = Builder.CreateMul(
Collaborator:

Should this multiply trigger some sort of UBSan check if it overflows?

Contributor (author):

I think an overflow here can only result from a call to std::trivially_relocate(first, last, result) with first > last. It would probably be better to report the error at the caller, so that we can provide a better error message.

Contributor:

More generally, there is the question of whether we want UBSan to enforce the preconditions of trivially_relocate (object completeness, valid ranges). However, because the intent is for that builtin to be wrapped in a standard function, it is probably better to do the checks only there: that would integrate better with hardening/contracts, and we don't (AFAIK) have precondition checks for builtins.

SizeVal,
ConstantInt::get(
SizeVal->getType(),
getContext()
.getTypeSizeInChars(E->getArg(0)->getType()->getPointeeType())
.getQuantity()));
EmitArgCheck(TCK_Store, Dest, E->getArg(0), 0);
EmitArgCheck(TCK_Load, Src, E->getArg(1), 1);
Builder.CreateMemMove(Dest, Src, SizeVal, false);
14 changes: 10 additions & 4 deletions clang/test/CodeGenCXX/cxx2c-trivially-relocatable.cpp
@@ -1,5 +1,7 @@
// RUN: %clang_cc1 -std=c++26 -triple x86_64-linux-gnu -emit-llvm -o - %s | FileCheck %s

typedef __SIZE_TYPE__ size_t;

struct S trivially_relocatable_if_eligible {
S(const S&);
~S();
@@ -8,9 +10,13 @@ struct S trivially_relocatable_if_eligible {
};

// CHECK: @_Z4testP1SS0_
// CHECK: call void @llvm.memmove.p0.p0.i64
// CHECK-NOT: __builtin
// CHECK: ret
void test(S* source, S* dest) {
void test(S* source, S* dest, size_t count) {
// CHECK: call void @llvm.memmove.p0.p0.i64({{.*}}, i64 8
// CHECK-NOT: __builtin
__builtin_trivially_relocate(dest, source, 1);
// CHECK: [[A:%.*]] = load i64, ptr %count.addr
// CHECK: [[M:%.*]] = mul i64 [[A]], 8
// CHECK: call void @llvm.memmove.p0.p0.i64({{.*}}, i64 [[M]]
__builtin_trivially_relocate(dest, source, count);
// CHECK: ret
};