106 changes: 106 additions & 0 deletions clang/docs/StandardCPlusPlusModules.rst
Expand Up @@ -520,6 +520,112 @@ is attached to the global module fragments. For example:

Now the linkage name of ``NS::foo()`` will be ``_ZN2NS3fooEv``.

Reduced BMI
-----------

To support the two-phase compilation model, Clang chose to put everything needed to
produce an object into the BMI. However, consumers of the BMI other than the compilation
of the module unit itself don't need that information. This makes the BMI larger and may
introduce unnecessary dependencies for consumers of the BMI. To mitigate the problem, we
decided to reduce the information contained in the BMI.

To be clear, we call the default BMI the Full BMI and the newly introduced BMI the Reduced
BMI.

Users can use the ``-fexperimental-modules-reduced-bmi`` flag to enable the Reduced BMI.

For the one-phase compilation model (CMake implements this model), with
``-fexperimental-modules-reduced-bmi``, the generated BMI will be a Reduced BMI
automatically. (The output path of the BMI is specified by ``-fmodule-output=``, as is
usual for the one-phase compilation model.)
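
For illustration, a one-phase invocation may look like the following sketch (the module
and file names here are only examples):

.. code-block:: console

  $ clang++ -std=c++20 M.cppm -fexperimental-modules-reduced-bmi -fmodule-output=M.pcm -c -o M.o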

It is still possible to support the Reduced BMI in the two-phase compilation model. With
``-fexperimental-modules-reduced-bmi``, ``--precompile``, and ``-fmodule-output=`` specified,
the BMI specified by ``-o`` will be the Full BMI and the BMI specified by
``-fmodule-output=`` will be the Reduced BMI. The dependency graph may look like:

.. code-block:: none

  module-unit.cppm --> module-unit.full.pcm -> module-unit.o
                                             |
                                             -> module-unit.reduced.pcm -> consumer1.cpp
                                                                        -> consumer2.cpp
                                                                        -> ...
                                                                        -> consumer_n.cpp

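A two-phase invocation may look like the following sketch (again, the file names are only
examples):

.. code-block:: console

  $ clang++ -std=c++20 module-unit.cppm --precompile -fexperimental-modules-reduced-bmi \
      -o module-unit.full.pcm -fmodule-output=module-unit.reduced.pcm
  $ clang++ -std=c++20 module-unit.full.pcm -c -o module-unit.o
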
We don't emit diagnostics if ``-fexperimental-modules-reduced-bmi`` is used with a non-module
unit. This design lets end users of the one-phase compilation model experiment with the
feature early without requiring support from their build systems. Users of build systems
that implement the two-phase compilation model still need support from those build systems.

Within the Reduced BMI, we don't write unreachable entities from the GMF, or the definitions
of non-inline functions and non-inline variables. This may not be a transparent change.
Example 2 of `[module.global.frag] <https://eel.is/c++draft/module.global.frag#example-2>`_
is a good illustration:

.. code-block:: c++

  // foo.h
  namespace N {
    struct X {};
    int d();
    int e();
    inline int f(X, int = d()) { return e(); }
    int g(X);
    int h(X);
  }

  // M.cppm
  module;
  #include "foo.h"
  export module M;
  template<typename T> int use_f() {
    N::X x;             // N::X, N, and :: are decl-reachable from use_f
    return f(x, 123);   // N::f is decl-reachable from use_f,
                        // N::e is indirectly decl-reachable from use_f
                        //   because it is decl-reachable from N::f, and
                        // N::d is decl-reachable from use_f
                        //   because it is decl-reachable from N::f
                        //   even though it is not used in this call
  }
  template<typename T> int use_g() {
    N::X x;             // N::X, N, and :: are decl-reachable from use_g
    return g((T(), x)); // N::g is not decl-reachable from use_g
  }
  template<typename T> int use_h() {
    N::X x;             // N::X, N, and :: are decl-reachable from use_h
    return h((T(), x)); // N::h is not decl-reachable from use_h, but
                        // N::h is decl-reachable from use_h<int>
  }
  int k = use_h<int>();
    // use_h<int> is decl-reachable from k, so
    // N::h is decl-reachable from k

  // M-impl.cpp
  module M;
  int a = use_f<int>(); // OK
  int b = use_g<int>(); // error: no viable function for call to g;
                        // g is not decl-reachable from purview of
                        // module M's interface, so is discarded
  int c = use_h<int>(); // OK

In the above example, the function definition of ``N::g`` is elided from the Reduced
BMI of ``M.cppm``, so the instantiation of ``use_g<int>`` in ``M-impl.cpp`` fails. For
such issues, users can add a reference to ``N::g`` in the module purview of ``M.cppm`` to
make sure it is reachable, e.g., ``using N::g;``.
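
A sketch of that workaround in ``M.cppm`` (only the relevant lines are shown) might be:

.. code-block:: c++

  // M.cppm
  module;
  #include "foo.h"
  export module M;
  using N::g; // Reference N::g in the module purview so that it stays
              // reachable and is kept in the Reduced BMI.
  template<typename T> int use_g() {
    N::X x;
    return g((T(), x));
  }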

We think the Reduced BMI is the correct direction, but given that it is a drastic change,
we'd like to keep it experimental first to avoid breaking existing users. The roadmap for
the Reduced BMI may be:

1. ``-fexperimental-modules-reduced-bmi`` is opt-in for 1~2 releases. The period depends
   on testing feedback.
2. We then announce that the Reduced BMI is no longer experimental, introduce
   ``-fmodules-reduced-bmi``, and suggest users enable this mode. This may take 1~2
   releases as well.
3. Finally, we will enable this by default. When that time comes, the term BMI will refer
   to what is today the Reduced BMI, and the Full BMI will only be meaningful to build
   systems that choose to support two-phase compilation.

Performance Tips
----------------

2 changes: 2 additions & 0 deletions clang/docs/tools/clang-formatted-files.txt
Expand Up @@ -123,6 +123,7 @@ clang/include/clang/Analysis/Analyses/CalledOnceCheck.h
clang/include/clang/Analysis/Analyses/CFGReachabilityAnalysis.h
clang/include/clang/Analysis/Analyses/ExprMutationAnalyzer.h
clang/include/clang/Analysis/FlowSensitive/AdornedCFG.h
clang/include/clang/Analysis/FlowSensitive/ASTOps.h
clang/include/clang/Analysis/FlowSensitive/DataflowAnalysis.h
clang/include/clang/Analysis/FlowSensitive/DataflowAnalysisContext.h
clang/include/clang/Analysis/FlowSensitive/DataflowEnvironment.h
Expand Down Expand Up @@ -307,6 +308,7 @@ clang/lib/Analysis/CalledOnceCheck.cpp
clang/lib/Analysis/CloneDetection.cpp
clang/lib/Analysis/CodeInjector.cpp
clang/lib/Analysis/FlowSensitive/AdornedCFG.cpp
clang/lib/Analysis/FlowSensitive/ASTOps.cpp
clang/lib/Analysis/FlowSensitive/DataflowAnalysisContext.cpp
clang/lib/Analysis/FlowSensitive/DataflowEnvironment.cpp
clang/lib/Analysis/FlowSensitive/DebugSupport.cpp
65 changes: 18 additions & 47 deletions clang/include/clang/AST/OpenACCClause.h
Expand Up @@ -145,6 +145,17 @@ class OpenACCIfClause : public OpenACCClauseWithCondition {
SourceLocation EndLoc);
};

/// A 'self' clause, which has an optional condition expression.
class OpenACCSelfClause : public OpenACCClauseWithCondition {
OpenACCSelfClause(SourceLocation BeginLoc, SourceLocation LParenLoc,
Expr *ConditionExpr, SourceLocation EndLoc);

public:
static OpenACCSelfClause *Create(const ASTContext &C, SourceLocation BeginLoc,
SourceLocation LParenLoc,
Expr *ConditionExpr, SourceLocation EndLoc);
};

template <class Impl> class OpenACCClauseVisitor {
Impl &getDerived() { return static_cast<Impl &>(*this); }

Expand All @@ -159,53 +170,13 @@ template <class Impl> class OpenACCClauseVisitor {
return;

switch (C->getClauseKind()) {
case OpenACCClauseKind::Default:
VisitDefaultClause(*cast<OpenACCDefaultClause>(C));
return;
case OpenACCClauseKind::If:
VisitIfClause(*cast<OpenACCIfClause>(C));
return;
case OpenACCClauseKind::Finalize:
case OpenACCClauseKind::IfPresent:
case OpenACCClauseKind::Seq:
case OpenACCClauseKind::Independent:
case OpenACCClauseKind::Auto:
case OpenACCClauseKind::Worker:
case OpenACCClauseKind::Vector:
case OpenACCClauseKind::NoHost:
case OpenACCClauseKind::Self:
case OpenACCClauseKind::Copy:
case OpenACCClauseKind::UseDevice:
case OpenACCClauseKind::Attach:
case OpenACCClauseKind::Delete:
case OpenACCClauseKind::Detach:
case OpenACCClauseKind::Device:
case OpenACCClauseKind::DevicePtr:
case OpenACCClauseKind::DeviceResident:
case OpenACCClauseKind::FirstPrivate:
case OpenACCClauseKind::Host:
case OpenACCClauseKind::Link:
case OpenACCClauseKind::NoCreate:
case OpenACCClauseKind::Present:
case OpenACCClauseKind::Private:
case OpenACCClauseKind::CopyOut:
case OpenACCClauseKind::CopyIn:
case OpenACCClauseKind::Create:
case OpenACCClauseKind::Reduction:
case OpenACCClauseKind::Collapse:
case OpenACCClauseKind::Bind:
case OpenACCClauseKind::VectorLength:
case OpenACCClauseKind::NumGangs:
case OpenACCClauseKind::NumWorkers:
case OpenACCClauseKind::DeviceNum:
case OpenACCClauseKind::DefaultAsync:
case OpenACCClauseKind::DeviceType:
case OpenACCClauseKind::DType:
case OpenACCClauseKind::Async:
case OpenACCClauseKind::Tile:
case OpenACCClauseKind::Gang:
case OpenACCClauseKind::Wait:
case OpenACCClauseKind::Invalid:
#define VISIT_CLAUSE(CLAUSE_NAME) \
case OpenACCClauseKind::CLAUSE_NAME: \
Visit##CLAUSE_NAME##Clause(*cast<OpenACC##CLAUSE_NAME##Clause>(C)); \
return;
#include "clang/Basic/OpenACCClauses.def"

default:
llvm_unreachable("Clause visitor not yet implemented");
}
llvm_unreachable("Invalid Clause kind");
Expand Down
4 changes: 1 addition & 3 deletions clang/include/clang/AST/StmtOpenACC.h
Expand Up @@ -142,9 +142,7 @@ class OpenACCComputeConstruct final
Stmt *StructuredBlock)
: OpenACCAssociatedStmtConstruct(OpenACCComputeConstructClass, K, Start,
End, StructuredBlock) {
assert((K == OpenACCDirectiveKind::Parallel ||
K == OpenACCDirectiveKind::Serial ||
K == OpenACCDirectiveKind::Kernels) &&
assert(isOpenACCComputeDirectiveKind(K) &&
"Only parallel, serial, and kernels constructs should be "
"represented by this type");

136 changes: 93 additions & 43 deletions clang/include/clang/Analysis/Analyses/ExprMutationAnalyzer.h
Expand Up @@ -8,11 +8,9 @@
#ifndef LLVM_CLANG_ANALYSIS_ANALYSES_EXPRMUTATIONANALYZER_H
#define LLVM_CLANG_ANALYSIS_ANALYSES_EXPRMUTATIONANALYZER_H

#include <type_traits>

#include "clang/AST/AST.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "llvm/ADT/DenseMap.h"
#include <memory>

namespace clang {

Expand All @@ -21,75 +19,127 @@ class FunctionParmMutationAnalyzer;
/// Analyzes whether any mutative operations are applied to an expression within
/// a given statement.
class ExprMutationAnalyzer {
friend class FunctionParmMutationAnalyzer;

public:
struct Memoized {
using ResultMap = llvm::DenseMap<const Expr *, const Stmt *>;
using FunctionParaAnalyzerMap =
llvm::SmallDenseMap<const FunctionDecl *,
std::unique_ptr<FunctionParmMutationAnalyzer>>;

ResultMap Results;
ResultMap PointeeResults;
FunctionParaAnalyzerMap FuncParmAnalyzer;

void clear() {
Results.clear();
PointeeResults.clear();
FuncParmAnalyzer.clear();
}
};
struct Analyzer {
Analyzer(const Stmt &Stm, ASTContext &Context, Memoized &Memorized)
: Stm(Stm), Context(Context), Memorized(Memorized) {}

const Stmt *findMutation(const Expr *Exp);
const Stmt *findMutation(const Decl *Dec);

const Stmt *findPointeeMutation(const Expr *Exp);
const Stmt *findPointeeMutation(const Decl *Dec);
static bool isUnevaluated(const Stmt *Smt, const Stmt &Stm,
ASTContext &Context);

private:
using MutationFinder = const Stmt *(Analyzer::*)(const Expr *);

const Stmt *findMutationMemoized(const Expr *Exp,
llvm::ArrayRef<MutationFinder> Finders,
Memoized::ResultMap &MemoizedResults);
const Stmt *tryEachDeclRef(const Decl *Dec, MutationFinder Finder);

bool isUnevaluated(const Expr *Exp);

const Stmt *findExprMutation(ArrayRef<ast_matchers::BoundNodes> Matches);
const Stmt *findDeclMutation(ArrayRef<ast_matchers::BoundNodes> Matches);
const Stmt *
findExprPointeeMutation(ArrayRef<ast_matchers::BoundNodes> Matches);
const Stmt *
findDeclPointeeMutation(ArrayRef<ast_matchers::BoundNodes> Matches);

const Stmt *findDirectMutation(const Expr *Exp);
const Stmt *findMemberMutation(const Expr *Exp);
const Stmt *findArrayElementMutation(const Expr *Exp);
const Stmt *findCastMutation(const Expr *Exp);
const Stmt *findRangeLoopMutation(const Expr *Exp);
const Stmt *findReferenceMutation(const Expr *Exp);
const Stmt *findFunctionArgMutation(const Expr *Exp);

const Stmt &Stm;
ASTContext &Context;
Memoized &Memorized;
};

ExprMutationAnalyzer(const Stmt &Stm, ASTContext &Context)
: Stm(Stm), Context(Context) {}
: Memorized(), A(Stm, Context, Memorized) {}

bool isMutated(const Expr *Exp) { return findMutation(Exp) != nullptr; }
bool isMutated(const Decl *Dec) { return findMutation(Dec) != nullptr; }
const Stmt *findMutation(const Expr *Exp);
const Stmt *findMutation(const Decl *Dec);
const Stmt *findMutation(const Expr *Exp) { return A.findMutation(Exp); }
const Stmt *findMutation(const Decl *Dec) { return A.findMutation(Dec); }

bool isPointeeMutated(const Expr *Exp) {
return findPointeeMutation(Exp) != nullptr;
}
bool isPointeeMutated(const Decl *Dec) {
return findPointeeMutation(Dec) != nullptr;
}
const Stmt *findPointeeMutation(const Expr *Exp);
const Stmt *findPointeeMutation(const Decl *Dec);
const Stmt *findPointeeMutation(const Expr *Exp) {
return A.findPointeeMutation(Exp);
}
const Stmt *findPointeeMutation(const Decl *Dec) {
return A.findPointeeMutation(Dec);
}

static bool isUnevaluated(const Stmt *Smt, const Stmt &Stm,
ASTContext &Context);
ASTContext &Context) {
return Analyzer::isUnevaluated(Smt, Stm, Context);
}

private:
using MutationFinder = const Stmt *(ExprMutationAnalyzer::*)(const Expr *);
using ResultMap = llvm::DenseMap<const Expr *, const Stmt *>;

const Stmt *findMutationMemoized(const Expr *Exp,
llvm::ArrayRef<MutationFinder> Finders,
ResultMap &MemoizedResults);
const Stmt *tryEachDeclRef(const Decl *Dec, MutationFinder Finder);

bool isUnevaluated(const Expr *Exp);

const Stmt *findExprMutation(ArrayRef<ast_matchers::BoundNodes> Matches);
const Stmt *findDeclMutation(ArrayRef<ast_matchers::BoundNodes> Matches);
const Stmt *
findExprPointeeMutation(ArrayRef<ast_matchers::BoundNodes> Matches);
const Stmt *
findDeclPointeeMutation(ArrayRef<ast_matchers::BoundNodes> Matches);

const Stmt *findDirectMutation(const Expr *Exp);
const Stmt *findMemberMutation(const Expr *Exp);
const Stmt *findArrayElementMutation(const Expr *Exp);
const Stmt *findCastMutation(const Expr *Exp);
const Stmt *findRangeLoopMutation(const Expr *Exp);
const Stmt *findReferenceMutation(const Expr *Exp);
const Stmt *findFunctionArgMutation(const Expr *Exp);

const Stmt &Stm;
ASTContext &Context;
llvm::DenseMap<const FunctionDecl *,
std::unique_ptr<FunctionParmMutationAnalyzer>>
FuncParmAnalyzer;
ResultMap Results;
ResultMap PointeeResults;
Memoized Memorized;
Analyzer A;
};

// A convenient wrapper around ExprMutationAnalyzer for analyzing function
// params.
class FunctionParmMutationAnalyzer {
public:
FunctionParmMutationAnalyzer(const FunctionDecl &Func, ASTContext &Context);
static FunctionParmMutationAnalyzer *
getFunctionParmMutationAnalyzer(const FunctionDecl &Func, ASTContext &Context,
ExprMutationAnalyzer::Memoized &Memorized) {
auto it = Memorized.FuncParmAnalyzer.find(&Func);
if (it == Memorized.FuncParmAnalyzer.end())
it =
Memorized.FuncParmAnalyzer
.try_emplace(&Func, std::unique_ptr<FunctionParmMutationAnalyzer>(
new FunctionParmMutationAnalyzer(
Func, Context, Memorized)))
.first;
return it->getSecond().get();
}

bool isMutated(const ParmVarDecl *Parm) {
return findMutation(Parm) != nullptr;
}
const Stmt *findMutation(const ParmVarDecl *Parm);

private:
ExprMutationAnalyzer BodyAnalyzer;
ExprMutationAnalyzer::Analyzer BodyAnalyzer;
llvm::DenseMap<const ParmVarDecl *, const Stmt *> Results;

FunctionParmMutationAnalyzer(const FunctionDecl &Func, ASTContext &Context,
ExprMutationAnalyzer::Memoized &Memorized);
};

} // namespace clang
98 changes: 98 additions & 0 deletions clang/include/clang/Analysis/FlowSensitive/ASTOps.h
@@ -0,0 +1,98 @@
//===-- ASTOps.h -------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// Operations on AST nodes that are used in flow-sensitive analysis.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_CLANG_ANALYSIS_FLOWSENSITIVE_ASTOPS_H
#define LLVM_CLANG_ANALYSIS_FLOWSENSITIVE_ASTOPS_H

#include "clang/AST/Decl.h"
#include "clang/AST/Expr.h"
#include "clang/AST/Type.h"
#include "clang/Analysis/FlowSensitive/StorageLocation.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/SetVector.h"

namespace clang {
namespace dataflow {

/// Skip past nodes that the CFG does not emit. These nodes are invisible to
/// flow-sensitive analysis, and should be ignored as they will effectively not
/// exist.
///
/// * `ParenExpr` - The CFG takes the operator precedence into account, but
/// otherwise omits the node afterwards.
///
/// * `ExprWithCleanups` - The CFG will generate the appropriate calls to
/// destructors and then omit the node.
///
const Expr &ignoreCFGOmittedNodes(const Expr &E);
const Stmt &ignoreCFGOmittedNodes(const Stmt &S);

/// A set of `FieldDecl *`. Use `SmallSetVector` to guarantee deterministic
/// iteration order.
using FieldSet = llvm::SmallSetVector<const FieldDecl *, 4>;

/// Returns the set of all fields in the type.
FieldSet getObjectFields(QualType Type);

/// Returns whether `Fields` and `FieldLocs` contain the same fields.
bool containsSameFields(const FieldSet &Fields,
const RecordStorageLocation::FieldToLoc &FieldLocs);

/// Helper class for initialization of a record with an `InitListExpr`.
/// `InitListExpr::inits()` contains the initializers for both the base classes
/// and the fields of the record; this helper class separates these out into two
/// different lists. In addition, it deals with special cases associated with
/// unions.
class RecordInitListHelper {
public:
// `InitList` must have record type.
RecordInitListHelper(const InitListExpr *InitList);

// Base classes with their associated initializer expressions.
ArrayRef<std::pair<const CXXBaseSpecifier *, Expr *>> base_inits() const {
return BaseInits;
}

// Fields with their associated initializer expressions.
ArrayRef<std::pair<const FieldDecl *, Expr *>> field_inits() const {
return FieldInits;
}

private:
SmallVector<std::pair<const CXXBaseSpecifier *, Expr *>> BaseInits;
SmallVector<std::pair<const FieldDecl *, Expr *>> FieldInits;

// We potentially synthesize an `ImplicitValueInitExpr` for unions. It's a
// member variable because we store a pointer to it in `FieldInits`.
std::optional<ImplicitValueInitExpr> ImplicitValueInitForUnion;
};

/// A collection of several types of declarations, all referenced from the same
/// function.
struct ReferencedDecls {
/// Non-static member variables.
FieldSet Fields;
/// All variables with static storage duration, notably including static
/// member variables and static variables declared within a function.
llvm::DenseSet<const VarDecl *> Globals;
/// Free functions and member functions which are referenced (but not
/// necessarily called).
llvm::DenseSet<const FunctionDecl *> Functions;
};

/// Returns declarations that are declared in or referenced from `FD`.
ReferencedDecls getReferencedDecls(const FunctionDecl &FD);

} // namespace dataflow
} // namespace clang

#endif // LLVM_CLANG_ANALYSIS_FLOWSENSITIVE_ASTOPS_H
Expand Up @@ -18,6 +18,7 @@
#include "clang/AST/Decl.h"
#include "clang/AST/Expr.h"
#include "clang/AST/TypeOrdering.h"
#include "clang/Analysis/FlowSensitive/ASTOps.h"
#include "clang/Analysis/FlowSensitive/AdornedCFG.h"
#include "clang/Analysis/FlowSensitive/Arena.h"
#include "clang/Analysis/FlowSensitive/Solver.h"
Expand All @@ -30,38 +31,11 @@
#include <cassert>
#include <memory>
#include <optional>
#include <type_traits>
#include <utility>
#include <vector>

namespace clang {
namespace dataflow {
class Logger;

/// Skip past nodes that the CFG does not emit. These nodes are invisible to
/// flow-sensitive analysis, and should be ignored as they will effectively not
/// exist.
///
/// * `ParenExpr` - The CFG takes the operator precedence into account, but
/// otherwise omits the node afterwards.
///
/// * `ExprWithCleanups` - The CFG will generate the appropriate calls to
/// destructors and then omit the node.
///
const Expr &ignoreCFGOmittedNodes(const Expr &E);
const Stmt &ignoreCFGOmittedNodes(const Stmt &S);

/// A set of `FieldDecl *`. Use `SmallSetVector` to guarantee deterministic
/// iteration order.
using FieldSet = llvm::SmallSetVector<const FieldDecl *, 4>;

/// Returns the set of all fields in the type.
FieldSet getObjectFields(QualType Type);

/// Returns whether `Fields` and `FieldLocs` contain the same fields.
bool containsSameFields(const FieldSet &Fields,
const RecordStorageLocation::FieldToLoc &FieldLocs);

struct ContextSensitiveOptions {
/// The maximum depth to analyze. A value of zero is equivalent to disabling
/// context-sensitive analysis entirely.
36 changes: 0 additions & 36 deletions clang/include/clang/Analysis/FlowSensitive/DataflowEnvironment.h
Expand Up @@ -775,42 +775,6 @@ RecordStorageLocation *getImplicitObjectLocation(const CXXMemberCallExpr &MCE,
RecordStorageLocation *getBaseObjectLocation(const MemberExpr &ME,
const Environment &Env);

/// Returns the fields of a `RecordDecl` that are initialized by an
/// `InitListExpr`, in the order in which they appear in
/// `InitListExpr::inits()`.
/// `Init->getType()` must be a record type.
std::vector<const FieldDecl *>
getFieldsForInitListExpr(const InitListExpr *InitList);

/// Helper class for initialization of a record with an `InitListExpr`.
/// `InitListExpr::inits()` contains the initializers for both the base classes
/// and the fields of the record; this helper class separates these out into two
/// different lists. In addition, it deals with special cases associated with
/// unions.
class RecordInitListHelper {
public:
// `InitList` must have record type.
RecordInitListHelper(const InitListExpr *InitList);

// Base classes with their associated initializer expressions.
ArrayRef<std::pair<const CXXBaseSpecifier *, Expr *>> base_inits() const {
return BaseInits;
}

// Fields with their associated initializer expressions.
ArrayRef<std::pair<const FieldDecl *, Expr *>> field_inits() const {
return FieldInits;
}

private:
SmallVector<std::pair<const CXXBaseSpecifier *, Expr *>> BaseInits;
SmallVector<std::pair<const FieldDecl *, Expr *>> FieldInits;

// We potentially synthesize an `ImplicitValueInitExpr` for unions. It's a
// member variable because we store a pointer to it in `FieldInits`.
std::optional<ImplicitValueInitExpr> ImplicitValueInitForUnion;
};

/// Associates a new `RecordValue` with `Loc` and returns the new value.
RecordValue &refreshRecordValue(RecordStorageLocation &Loc, Environment &Env);

6 changes: 6 additions & 0 deletions clang/include/clang/Basic/Builtins.td
Expand Up @@ -1164,6 +1164,12 @@ def Unreachable : Builtin {
let Prototype = "void()";
}

def AllowRuntimeCheck : Builtin {
let Spellings = ["__builtin_allow_runtime_check"];
let Attributes = [NoThrow, Pure, Const];
let Prototype = "bool(char const*)";
}

def ShuffleVector : Builtin {
let Spellings = ["__builtin_shufflevector"];
let Attributes = [NoThrow, Const, CustomTypeChecking];
8 changes: 3 additions & 5 deletions clang/include/clang/Basic/Cuda.h
Expand Up @@ -50,17 +50,15 @@ const char *CudaVersionToString(CudaVersion V);
// Input is "Major.Minor"
CudaVersion CudaStringToVersion(const llvm::Twine &S);

// We have a name conflict with sys/mac.h on AIX
#ifdef SM_32
#undef SM_32
#endif
enum class CudaArch {
UNUSED,
UNKNOWN,
// TODO: Deprecate and remove GPU architectures older than sm_52.
SM_20,
SM_21,
SM_30,
SM_32,
// This has a name conflict with sys/mac.h on AIX, rename it as a workaround.
SM_32_,
SM_35,
SM_37,
SM_50,
4 changes: 4 additions & 0 deletions clang/include/clang/Basic/DiagnosticSemaKinds.td
Expand Up @@ -12274,4 +12274,8 @@ def note_acc_branch_into_compute_construct
: Note<"invalid branch into OpenACC Compute Construct">;
def note_acc_branch_out_of_compute_construct
: Note<"invalid branch out of OpenACC Compute Construct">;
def warn_acc_if_self_conflict
: Warning<"OpenACC construct 'self' has no effect when an 'if' clause "
"evaluates to true">,
InGroup<DiagGroup<"openacc-self-if-potential-conflict">>;
} // end of sema component.
1 change: 1 addition & 0 deletions clang/include/clang/Basic/OpenACCClauses.def
Expand Up @@ -17,5 +17,6 @@

VISIT_CLAUSE(Default)
VISIT_CLAUSE(If)
VISIT_CLAUSE(Self)

#undef VISIT_CLAUSE
6 changes: 6 additions & 0 deletions clang/include/clang/Basic/OpenACCKinds.h
Expand Up @@ -146,6 +146,12 @@ inline llvm::raw_ostream &operator<<(llvm::raw_ostream &Out,
return printOpenACCDirectiveKind(Out, K);
}

inline bool isOpenACCComputeDirectiveKind(OpenACCDirectiveKind K) {
return K == OpenACCDirectiveKind::Parallel ||
K == OpenACCDirectiveKind::Serial ||
K == OpenACCDirectiveKind::Kernels;
}

enum class OpenACCAtomicKind {
Read,
Write,
2 changes: 1 addition & 1 deletion clang/include/clang/Basic/arm_fp16.td
Expand Up @@ -14,7 +14,7 @@
include "arm_neon_incl.td"

// ARMv8.2-A FP16 intrinsics.
let ArchGuard = "defined(__aarch64__)", TargetGuard = "fullfp16" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "fullfp16" in {

// Negate
def VNEGSH : SInst<"vneg", "11", "Sh">;
58 changes: 29 additions & 29 deletions clang/include/clang/Basic/arm_neon.td
Expand Up @@ -605,11 +605,11 @@ def VQDMULL_LANE : SOpInst<"vqdmull_lane", "(>Q)..I", "si", OP_QDMULL_LN>;
def VQDMULH_N : SOpInst<"vqdmulh_n", "..1", "siQsQi", OP_QDMULH_N>;
def VQRDMULH_N : SOpInst<"vqrdmulh_n", "..1", "siQsQi", OP_QRDMULH_N>;

let ArchGuard = "!defined(__aarch64__)" in {
let ArchGuard = "!defined(__aarch64__) && !defined(__arm64ec__)" in {
def VQDMULH_LANE : SOpInst<"vqdmulh_lane", "..qI", "siQsQi", OP_QDMULH_LN>;
def VQRDMULH_LANE : SOpInst<"vqrdmulh_lane", "..qI", "siQsQi", OP_QRDMULH_LN>;
}
let ArchGuard = "defined(__aarch64__)" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)" in {
def A64_VQDMULH_LANE : SInst<"vqdmulh_lane", "..(!q)I", "siQsQi">;
def A64_VQRDMULH_LANE : SInst<"vqrdmulh_lane", "..(!q)I", "siQsQi">;
}
Expand Down Expand Up @@ -686,7 +686,7 @@ multiclass REINTERPRET_CROSS_TYPES<string TypesA, string TypesB> {

// E.3.31 Vector reinterpret cast operations
def VREINTERPRET : REINTERPRET_CROSS_SELF<"csilUcUsUiUlhfPcPsQcQsQiQlQUcQUsQUiQUlQhQfQPcQPs"> {
let ArchGuard = "!defined(__aarch64__)";
let ArchGuard = "!defined(__aarch64__) && !defined(__arm64ec__)";
let BigEndianSafe = 1;
}

Expand Down Expand Up @@ -714,7 +714,7 @@ def VADDP : WInst<"vadd", "...", "PcPsPlQPcQPsQPl">;
////////////////////////////////////////////////////////////////////////////////
// AArch64 Intrinsics

let ArchGuard = "defined(__aarch64__)" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)" in {

////////////////////////////////////////////////////////////////////////////////
// Load/Store
Expand Down Expand Up @@ -1091,14 +1091,14 @@ let isLaneQ = 1 in {
def VQDMULH_LANEQ : SInst<"vqdmulh_laneq", "..QI", "siQsQi">;
def VQRDMULH_LANEQ : SInst<"vqrdmulh_laneq", "..QI", "siQsQi">;
}
let ArchGuard = "defined(__aarch64__)", TargetGuard = "v8.1a" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "v8.1a" in {
def VQRDMLAH_LANEQ : SOpInst<"vqrdmlah_laneq", "...QI", "siQsQi", OP_QRDMLAH_LN> {
let isLaneQ = 1;
}
def VQRDMLSH_LANEQ : SOpInst<"vqrdmlsh_laneq", "...QI", "siQsQi", OP_QRDMLSH_LN> {
let isLaneQ = 1;
}
} // ArchGuard = "defined(__aarch64__)", TargetGuard = "v8.1a"
} // ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "v8.1a"

// Note: d type implemented by SCALAR_VMULX_LANE
def VMULX_LANE : IOpInst<"vmulx_lane", "..qI", "fQfQd", OP_MULX_LN>;
Expand Down Expand Up @@ -1143,7 +1143,7 @@ def SHA256H2 : SInst<"vsha256h2", "....", "QUi">;
def SHA256SU1 : SInst<"vsha256su1", "....", "QUi">;
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "sha3" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "sha3" in {
def BCAX : SInst<"vbcax", "....", "QUcQUsQUiQUlQcQsQiQl">;
def EOR3 : SInst<"veor3", "....", "QUcQUsQUiQUlQcQsQiQl">;
def RAX1 : SInst<"vrax1", "...", "QUl">;
Expand All @@ -1153,14 +1153,14 @@ def XAR : SInst<"vxar", "...I", "QUl">;
}
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "sha3" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "sha3" in {
def SHA512SU0 : SInst<"vsha512su0", "...", "QUl">;
def SHA512su1 : SInst<"vsha512su1", "....", "QUl">;
def SHA512H : SInst<"vsha512h", "....", "QUl">;
def SHA512H2 : SInst<"vsha512h2", "....", "QUl">;
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "sm4" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "sm4" in {
def SM3SS1 : SInst<"vsm3ss1", "....", "QUi">;
def SM3TT1A : SInst<"vsm3tt1a", "....I", "QUi">;
def SM3TT1B : SInst<"vsm3tt1b", "....I", "QUi">;
Expand All @@ -1170,7 +1170,7 @@ def SM3PARTW1 : SInst<"vsm3partw1", "....", "QUi">;
def SM3PARTW2 : SInst<"vsm3partw2", "....", "QUi">;
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "sm4" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "sm4" in {
def SM4E : SInst<"vsm4e", "...", "QUi">;
def SM4EKEY : SInst<"vsm4ekey", "...", "QUi">;
}
Expand All @@ -1193,7 +1193,7 @@ def FCVTAS_S32 : SInst<"vcvta_s32", "S.", "fQf">;
def FCVTAU_S32 : SInst<"vcvta_u32", "U.", "fQf">;
}

let ArchGuard = "defined(__aarch64__)" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)" in {
def FCVTNS_S64 : SInst<"vcvtn_s64", "S.", "dQd">;
def FCVTNU_S64 : SInst<"vcvtn_u64", "U.", "dQd">;
def FCVTPS_S64 : SInst<"vcvtp_s64", "S.", "dQd">;
Expand All @@ -1217,7 +1217,7 @@ def FRINTZ_S32 : SInst<"vrnd", "..", "fQf">;
def FRINTI_S32 : SInst<"vrndi", "..", "fQf">;
}

let ArchGuard = "defined(__aarch64__) && defined(__ARM_FEATURE_DIRECTED_ROUNDING)" in {
let ArchGuard = "(defined(__aarch64__) || defined(__arm64ec__)) && defined(__ARM_FEATURE_DIRECTED_ROUNDING)" in {
def FRINTN_S64 : SInst<"vrndn", "..", "dQd">;
def FRINTA_S64 : SInst<"vrnda", "..", "dQd">;
def FRINTP_S64 : SInst<"vrndp", "..", "dQd">;
Expand All @@ -1227,7 +1227,7 @@ def FRINTZ_S64 : SInst<"vrnd", "..", "dQd">;
def FRINTI_S64 : SInst<"vrndi", "..", "dQd">;
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "v8.5a" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "v8.5a" in {
def FRINT32X_S32 : SInst<"vrnd32x", "..", "fQf">;
def FRINT32Z_S32 : SInst<"vrnd32z", "..", "fQf">;
def FRINT64X_S32 : SInst<"vrnd64x", "..", "fQf">;
Expand All @@ -1247,7 +1247,7 @@ def FMAXNM_S32 : SInst<"vmaxnm", "...", "fQf">;
def FMINNM_S32 : SInst<"vminnm", "...", "fQf">;
}

let ArchGuard = "defined(__aarch64__) && defined(__ARM_FEATURE_NUMERIC_MAXMIN)" in {
let ArchGuard = "(defined(__aarch64__) || defined(__arm64ec__)) && defined(__ARM_FEATURE_NUMERIC_MAXMIN)" in {
def FMAXNM_S64 : SInst<"vmaxnm", "...", "dQd">;
def FMINNM_S64 : SInst<"vminnm", "...", "dQd">;
}
Expand Down Expand Up @@ -1289,7 +1289,7 @@ def VQTBX4_A64 : WInst<"vqtbx4", "..(4Q)U", "UccPcQUcQcQPc">;
// itself during generation so, unlike all other intrinsics, this one should
// include *all* types, not just additional ones.
def VVREINTERPRET : REINTERPRET_CROSS_SELF<"csilUcUsUiUlhfdPcPsPlQcQsQiQlQUcQUsQUiQUlQhQfQdQPcQPsQPlQPk"> {
let ArchGuard = "defined(__aarch64__)";
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)";
let BigEndianSafe = 1;
}

Expand Down Expand Up @@ -1401,15 +1401,15 @@ def SCALAR_SQDMULH : SInst<"vqdmulh", "111", "SsSi">;
// Scalar Integer Saturating Rounding Doubling Multiply Half High
def SCALAR_SQRDMULH : SInst<"vqrdmulh", "111", "SsSi">;

let ArchGuard = "defined(__aarch64__)", TargetGuard = "v8.1a" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "v8.1a" in {
////////////////////////////////////////////////////////////////////////////////
// Signed Saturating Rounding Doubling Multiply Accumulate Returning High Half
def SCALAR_SQRDMLAH : SInst<"vqrdmlah", "1111", "SsSi">;

////////////////////////////////////////////////////////////////////////////////
// Signed Saturating Rounding Doubling Multiply Subtract Returning High Half
def SCALAR_SQRDMLSH : SInst<"vqrdmlsh", "1111", "SsSi">;
} // ArchGuard = "defined(__aarch64__)", TargetGuard = "v8.1a"
} // ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "v8.1a"

////////////////////////////////////////////////////////////////////////////////
// Scalar Floating-point Multiply Extended
Expand Down Expand Up @@ -1651,7 +1651,7 @@ def SCALAR_VDUP_LANEQ : IInst<"vdup_laneq", "1QI", "ScSsSiSlSfSdSUcSUsSUiSUlSPcS
let isLaneQ = 1;
}

} // ArchGuard = "defined(__aarch64__)"
} // ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)"

// ARMv8.2-A FP16 vector intrinsics for A32/A64.
let TargetGuard = "fullfp16" in {
Expand Down Expand Up @@ -1775,7 +1775,7 @@ def VEXTH : WInst<"vext", "...I", "hQh">;
def VREV64H : WOpInst<"vrev64", "..", "hQh", OP_REV64>;

// ARMv8.2-A FP16 vector intrinsics for A64 only.
let ArchGuard = "defined(__aarch64__)", TargetGuard = "fullfp16" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "fullfp16" in {

// Vector rounding
def FRINTIH : SInst<"vrndi", "..", "hQh">;
Expand Down Expand Up @@ -1856,7 +1856,7 @@ let ArchGuard = "defined(__aarch64__)", TargetGuard = "fullfp16" in {
def FMINNMVH : SInst<"vminnmv", "1.", "hQh">;
}

let ArchGuard = "defined(__aarch64__)" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)" in {
// Permutation
def VTRN1H : SOpInst<"vtrn1", "...", "hQh", OP_TRN1>;
def VZIP1H : SOpInst<"vzip1", "...", "hQh", OP_ZIP1>;
Expand All @@ -1876,15 +1876,15 @@ let TargetGuard = "dotprod" in {
def DOT : SInst<"vdot", "..(<<)(<<)", "iQiUiQUi">;
def DOT_LANE : SOpInst<"vdot_lane", "..(<<)(<<q)I", "iUiQiQUi", OP_DOT_LN>;
}
let ArchGuard = "defined(__aarch64__)", TargetGuard = "dotprod" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "dotprod" in {
// Variants indexing into a 128-bit vector are A64 only.
def UDOT_LANEQ : SOpInst<"vdot_laneq", "..(<<)(<<Q)I", "iUiQiQUi", OP_DOT_LNQ> {
let isLaneQ = 1;
}
}

// v8.2-A FP16 fused multiply-add long instructions.
let ArchGuard = "defined(__aarch64__)", TargetGuard = "fp16fml" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "fp16fml" in {
def VFMLAL_LOW : SInst<"vfmlal_low", ">>..", "hQh">;
def VFMLSL_LOW : SInst<"vfmlsl_low", ">>..", "hQh">;
def VFMLAL_HIGH : SInst<"vfmlal_high", ">>..", "hQh">;
Expand Down Expand Up @@ -1918,7 +1918,7 @@ let TargetGuard = "i8mm" in {
def VUSDOT_LANE : SOpInst<"vusdot_lane", "..(<<U)(<<q)I", "iQi", OP_USDOT_LN>;
def VSUDOT_LANE : SOpInst<"vsudot_lane", "..(<<)(<<qU)I", "iQi", OP_SUDOT_LN>;

let ArchGuard = "defined(__aarch64__)" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)" in {
let isLaneQ = 1 in {
def VUSDOT_LANEQ : SOpInst<"vusdot_laneq", "..(<<U)(<<Q)I", "iQi", OP_USDOT_LNQ>;
def VSUDOT_LANEQ : SOpInst<"vsudot_laneq", "..(<<)(<<QU)I", "iQi", OP_SUDOT_LNQ>;
Expand Down Expand Up @@ -1986,7 +1986,7 @@ let TargetGuard = "v8.3a" in {

defm VCMLA_F32 : VCMLA_ROTS<"f", "uint64x1_t", "uint64x2_t">;
}
let ArchGuard = "defined(__aarch64__)", TargetGuard = "v8.3a" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "v8.3a" in {
def VCADDQ_ROT90_FP64 : SInst<"vcaddq_rot90", "QQQ", "d">;
def VCADDQ_ROT270_FP64 : SInst<"vcaddq_rot270", "QQQ", "d">;

Expand Down Expand Up @@ -2058,14 +2058,14 @@ let TargetGuard = "bf16" in {
def SCALAR_CVT_F32_BF16 : SOpInst<"vcvtah_f32", "(1F>)(1!)", "b", OP_CVT_F32_BF16>;
}

let ArchGuard = "!defined(__aarch64__)", TargetGuard = "bf16" in {
let ArchGuard = "!defined(__aarch64__) && !defined(__arm64ec__)", TargetGuard = "bf16" in {
def VCVT_BF16_F32_A32_INTERNAL : WInst<"__a32_vcvt_bf16", "BQ", "f">;
def VCVT_BF16_F32_A32 : SOpInst<"vcvt_bf16", "BQ", "f", OP_VCVT_BF16_F32_A32>;
def VCVT_LOW_BF16_F32_A32 : SOpInst<"vcvt_low_bf16", "BQ", "Qf", OP_VCVT_BF16_F32_LO_A32>;
def VCVT_HIGH_BF16_F32_A32 : SOpInst<"vcvt_high_bf16", "BBQ", "Qf", OP_VCVT_BF16_F32_HI_A32>;
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "bf16" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "bf16" in {
def VCVT_LOW_BF16_F32_A64_INTERNAL : WInst<"__a64_vcvtq_low_bf16", "BQ", "Hf">;
def VCVT_LOW_BF16_F32_A64 : SOpInst<"vcvt_low_bf16", "BQ", "Qf", OP_VCVT_BF16_F32_LO_A64>;
def VCVT_HIGH_BF16_F32_A64 : SInst<"vcvt_high_bf16", "BBQ", "Qf">;
Expand All @@ -2077,22 +2077,22 @@ let ArchGuard = "defined(__aarch64__)", TargetGuard = "bf16" in {
def COPYQ_LANEQ_BF16 : IOpInst<"vcopy_laneq", "..I.I", "Qb", OP_COPY_LN>;
}

let ArchGuard = "!defined(__aarch64__)", TargetGuard = "bf16" in {
let ArchGuard = "!defined(__aarch64__) && !defined(__arm64ec__)", TargetGuard = "bf16" in {
let BigEndianSafe = 1 in {
defm VREINTERPRET_BF : REINTERPRET_CROSS_TYPES<
"csilUcUsUiUlhfPcPsPlQcQsQiQlQUcQUsQUiQUlQhQfQPcQPsQPl", "bQb">;
}
}

let ArchGuard = "defined(__aarch64__)", TargetGuard = "bf16" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "bf16" in {
let BigEndianSafe = 1 in {
defm VVREINTERPRET_BF : REINTERPRET_CROSS_TYPES<
"csilUcUsUiUlhfdPcPsPlQcQsQiQlQUcQUsQUiQUlQhQfQdQPcQPsQPlQPk", "bQb">;
}
}

// v8.9a/v9.4a LRCPC3 intrinsics
let ArchGuard = "defined(__aarch64__)", TargetGuard = "rcpc3" in {
let ArchGuard = "defined(__aarch64__) || defined(__arm64ec__)", TargetGuard = "rcpc3" in {
def VLDAP1_LANE : WInst<"vldap1_lane", ".(c*!).I", "QUlQlUlldQdPlQPl">;
def VSTL1_LANE : WInst<"vstl1_lane", "v*(.!)I", "QUlQlUlldQdPlQPl">;
}
15 changes: 8 additions & 7 deletions clang/include/clang/Parse/Parser.h
Expand Up @@ -18,6 +18,7 @@
#include "clang/Lex/CodeCompletionHandler.h"
#include "clang/Lex/Preprocessor.h"
#include "clang/Sema/Sema.h"
#include "clang/Sema/SemaOpenMP.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Frontend/OpenMP/OMPContext.h"
#include "llvm/Support/SaveAndRestore.h"
Expand Down Expand Up @@ -2537,7 +2538,7 @@ class Parser : public CodeCompletionHandler {
/// Returns true for declaration, false for expression.
bool isForInitDeclaration() {
if (getLangOpts().OpenMP)
Actions.startOpenMPLoop();
Actions.OpenMP().startOpenMPLoop();
if (getLangOpts().CPlusPlus)
return Tok.is(tok::kw_using) ||
isCXXSimpleDeclaration(/*AllowForRangeDecl=*/true);
Expand Down Expand Up @@ -3396,7 +3397,7 @@ class Parser : public CodeCompletionHandler {
SourceLocation Loc);

/// Parse clauses for '#pragma omp [begin] declare target'.
void ParseOMPDeclareTargetClauses(Sema::DeclareTargetContextInfo &DTCI);
void ParseOMPDeclareTargetClauses(SemaOpenMP::DeclareTargetContextInfo &DTCI);

/// Parse '#pragma omp end declare target'.
void ParseOMPEndDeclareTargetDirective(OpenMPDirectiveKind BeginDKind,
Expand Down Expand Up @@ -3486,7 +3487,7 @@ class Parser : public CodeCompletionHandler {
/// Parses indirect clause
/// \param ParseOnly true to skip the clause's semantic actions and return
// false;
bool ParseOpenMPIndirectClause(Sema::DeclareTargetContextInfo &DTCI,
bool ParseOpenMPIndirectClause(SemaOpenMP::DeclareTargetContextInfo &DTCI,
bool ParseOnly);
/// Parses clause with a single expression and an additional argument
/// of a kind \a Kind.
Expand Down Expand Up @@ -3556,24 +3557,24 @@ class Parser : public CodeCompletionHandler {

/// Parses a reserved locator like 'omp_all_memory'.
bool ParseOpenMPReservedLocator(OpenMPClauseKind Kind,
Sema::OpenMPVarListDataTy &Data,
SemaOpenMP::OpenMPVarListDataTy &Data,
const LangOptions &LangOpts);
/// Parses clauses with list.
bool ParseOpenMPVarList(OpenMPDirectiveKind DKind, OpenMPClauseKind Kind,
SmallVectorImpl<Expr *> &Vars,
Sema::OpenMPVarListDataTy &Data);
SemaOpenMP::OpenMPVarListDataTy &Data);
bool ParseUnqualifiedId(CXXScopeSpec &SS, ParsedType ObjectType,
bool ObjectHadErrors, bool EnteringContext,
bool AllowDestructorName, bool AllowConstructorName,
bool AllowDeductionGuide,
SourceLocation *TemplateKWLoc, UnqualifiedId &Result);

/// Parses the mapper modifier in map, to, and from clauses.
bool parseMapperModifier(Sema::OpenMPVarListDataTy &Data);
bool parseMapperModifier(SemaOpenMP::OpenMPVarListDataTy &Data);
/// Parses map-type-modifiers in map clause.
/// map([ [map-type-modifier[,] [map-type-modifier[,] ...] map-type : ] list)
/// where, map-type-modifier ::= always | close | mapper(mapper-identifier)
bool parseMapTypeModifiers(Sema::OpenMPVarListDataTy &Data);
bool parseMapTypeModifiers(SemaOpenMP::OpenMPVarListDataTy &Data);

//===--------------------------------------------------------------------===//
// OpenACC Parsing.
1,504 changes: 57 additions & 1,447 deletions clang/include/clang/Sema/Sema.h

Large diffs are not rendered by default.

18 changes: 15 additions & 3 deletions clang/include/clang/Sema/SemaOpenACC.h
Expand Up @@ -44,7 +44,8 @@ class SemaOpenACC : public SemaBase {
Expr *ConditionExpr;
};

std::variant<DefaultDetails, ConditionDetails> Details;
std::variant<std::monostate, DefaultDetails, ConditionDetails> Details =
std::monostate{};

public:
OpenACCParsedClause(OpenACCDirectiveKind DirKind,
Expand Down Expand Up @@ -72,8 +73,17 @@ class SemaOpenACC : public SemaBase {
}

Expr *getConditionExpr() {
assert(ClauseKind == OpenACCClauseKind::If &&
assert((ClauseKind == OpenACCClauseKind::If ||
(ClauseKind == OpenACCClauseKind::Self &&
DirKind != OpenACCDirectiveKind::Update)) &&
"Parsed clause kind does not have a condition expr");

// 'self' has an optional ConditionExpr, so be tolerant of that. This will
// assert in variant otherwise.
if (ClauseKind == OpenACCClauseKind::Self &&
std::holds_alternative<std::monostate>(Details))
return nullptr;

return std::get<ConditionDetails>(Details).ConditionExpr;
}

Expand All @@ -87,7 +97,9 @@ class SemaOpenACC : public SemaBase {
}

void setConditionDetails(Expr *ConditionExpr) {
assert(ClauseKind == OpenACCClauseKind::If &&
assert((ClauseKind == OpenACCClauseKind::If ||
(ClauseKind == OpenACCClauseKind::Self &&
DirKind != OpenACCDirectiveKind::Update)) &&
"Parsed clause kind does not have a condition expr");
// In C++ we can count on this being a 'bool', but in C this gets left as
// some sort of scalar that codegen will have to take care of converting.
1,447 changes: 1,447 additions & 0 deletions clang/include/clang/Sema/SemaOpenMP.h

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion clang/include/clang/Serialization/ModuleFileExtension.h
Expand Up @@ -9,7 +9,6 @@
#ifndef LLVM_CLANG_SERIALIZATION_MODULEFILEEXTENSION_H
#define LLVM_CLANG_SERIALIZATION_MODULEFILEEXTENSION_H

#include "llvm/ADT/IntrusiveRefCntPtr.h"
#include "llvm/Support/ExtensibleRTTI.h"
#include "llvm/Support/HashBuilder.h"
#include "llvm/Support/MD5.h"
2 changes: 1 addition & 1 deletion clang/include/clang/Serialization/PCHContainerOperations.h
Expand Up @@ -12,7 +12,7 @@
#include "clang/Basic/Module.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringMap.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/MemoryBufferRef.h"
#include <memory>

namespace llvm {
59 changes: 53 additions & 6 deletions clang/lib/AST/Interp/ByteCodeExprGen.cpp
Expand Up @@ -398,6 +398,35 @@ bool ByteCodeExprGen<Emitter>::VisitCastExpr(const CastExpr *CE) {
return true;
}

case CK_VectorSplat: {
assert(!classify(CE->getType()));
assert(classify(SubExpr->getType()));
assert(CE->getType()->isVectorType());

if (DiscardResult)
return this->discard(SubExpr);

assert(Initializing); // FIXME: Not always correct.
const auto *VT = CE->getType()->getAs<VectorType>();
PrimType ElemT = classifyPrim(SubExpr);
unsigned ElemOffset = allocateLocalPrimitive(
SubExpr, ElemT, /*IsConst=*/true, /*IsExtended=*/false);

if (!this->visit(SubExpr))
return false;
if (!this->emitSetLocal(ElemT, ElemOffset, CE))
return false;

for (unsigned I = 0; I != VT->getNumElements(); ++I) {
if (!this->emitGetLocal(ElemT, ElemOffset, CE))
return false;
if (!this->emitInitElem(ElemT, I, CE))
return false;
}

return true;
}

case CK_ToVoid:
return discard(SubExpr);

Expand Down Expand Up @@ -1267,10 +1296,30 @@ template <class Emitter>
bool ByteCodeExprGen<Emitter>::VisitMemberExpr(const MemberExpr *E) {
// 'Base.Member'
const Expr *Base = E->getBase();
const ValueDecl *Member = E->getMemberDecl();

if (DiscardResult)
return this->discard(Base);

// MemberExprs are almost always lvalues, in which case we don't need to
// do the load. But sometimes they aren't.
const auto maybeLoadValue = [&]() -> bool {
if (E->isGLValue())
return true;
if (std::optional<PrimType> T = classify(E))
return this->emitLoadPop(*T, E);
return false;
};

if (const auto *VD = dyn_cast<VarDecl>(Member)) {
// I am almost confident in saying that a var decl must be static
// and therefore registered as a global variable. But this will probably
// turn out to be wrong some time in the future, as always.
if (auto GlobalIndex = P.getGlobal(VD))
return this->emitGetPtrGlobal(*GlobalIndex, E) && maybeLoadValue();
return false;
}

if (Initializing) {
if (!this->delegate(Base))
return false;
Expand All @@ -1280,16 +1329,14 @@ bool ByteCodeExprGen<Emitter>::VisitMemberExpr(const MemberExpr *E) {
}

// Base above gives us a pointer on the stack.
// TODO: Implement non-FieldDecl members.
const ValueDecl *Member = E->getMemberDecl();
if (const auto *FD = dyn_cast<FieldDecl>(Member)) {
const RecordDecl *RD = FD->getParent();
const Record *R = getRecord(RD);
const Record::Field *F = R->getField(FD);
// Leave a pointer to the field on the stack.
if (F->Decl->getType()->isReferenceType())
return this->emitGetFieldPop(PT_Ptr, F->Offset, E);
return this->emitGetPtrField(F->Offset, E);
return this->emitGetFieldPop(PT_Ptr, F->Offset, E) && maybeLoadValue();
return this->emitGetPtrField(F->Offset, E) && maybeLoadValue();
}

return false;
Expand Down Expand Up @@ -1624,7 +1671,7 @@ bool ByteCodeExprGen<Emitter>::VisitCompoundAssignOperator(
return false;
if (!this->emitLoad(*LT, E))
return false;
if (*LT != *LHSComputationT) {
if (LT != LHSComputationT) {
if (!this->emitCast(*LT, *LHSComputationT, E))
return false;
}
Expand Down Expand Up @@ -1680,7 +1727,7 @@ bool ByteCodeExprGen<Emitter>::VisitCompoundAssignOperator(
}

// And now cast from LHSComputationT to ResultT.
if (*ResultT != *LHSComputationT) {
if (ResultT != LHSComputationT) {
if (!this->emitCast(*LHSComputationT, *ResultT, E))
return false;
}
9 changes: 8 additions & 1 deletion clang/lib/AST/Interp/ByteCodeExprGen.h
Expand Up @@ -148,13 +148,20 @@ class ByteCodeExprGen : public ConstStmtVisitor<ByteCodeExprGen<Emitter>, bool>,
return Ctx.classify(Ty);
}

/// Classifies a known primitive type
/// Classifies a known primitive type.
PrimType classifyPrim(QualType Ty) const {
if (auto T = classify(Ty)) {
return *T;
}
llvm_unreachable("not a primitive type");
}
/// Classifies a known primitive expression.
PrimType classifyPrim(const Expr *E) const {
if (auto T = classify(E))
return *T;
llvm_unreachable("not a primitive type");
}

/// Evaluates an expression and places the result on the stack. If the
/// expression is of composite type, a local variable will be created
/// and a pointer to said variable will be placed on the stack.
4 changes: 2 additions & 2 deletions clang/lib/AST/Interp/Disasm.cpp
Expand Up @@ -140,7 +140,7 @@ LLVM_DUMP_METHOD void Program::dump(llvm::raw_ostream &OS) const {
const Descriptor *Desc = G->block()->getDescriptor();
Pointer GP = getPtrGlobal(GI);

OS << GI << ": " << (void *)G->block() << " ";
OS << GI << ": " << (const void *)G->block() << " ";
{
ColorScope SC(OS, true,
GP.isInitialized()
Expand Down Expand Up @@ -268,7 +268,7 @@ LLVM_DUMP_METHOD void Record::dump(llvm::raw_ostream &OS, unsigned Indentation,
LLVM_DUMP_METHOD void Block::dump(llvm::raw_ostream &OS) const {
{
ColorScope SC(OS, true, {llvm::raw_ostream::BRIGHT_BLUE, true});
OS << "Block " << (void *)this << "\n";
OS << "Block " << (const void *)this << "\n";
}
unsigned NPointers = 0;
for (const Pointer *P = Pointers; P; P = P->Next) {
1 change: 1 addition & 0 deletions clang/lib/AST/Interp/FunctionPointer.h
Expand Up @@ -32,6 +32,7 @@ class FunctionPointer final {

const Function *getFunction() const { return Func; }
bool isZero() const { return !Func; }
bool isValid() const { return Valid; }
bool isWeak() const {
if (!Func || !Valid)
return false;
4 changes: 4 additions & 0 deletions clang/lib/AST/Interp/Interp.h
Expand Up @@ -2236,6 +2236,10 @@ inline bool CallPtr(InterpState &S, CodePtr OpPC, uint32_t ArgSize,
<< const_cast<Expr *>(E) << E->getSourceRange();
return false;
}

if (!FuncPtr.isValid())
return false;

assert(F);

// Check argument nullability state.
118 changes: 118 additions & 0 deletions clang/lib/AST/Interp/InterpBuiltin.cpp
Expand Up @@ -977,6 +977,117 @@ static bool interp__builtin_complex(InterpState &S, CodePtr OpPC,
return true;
}

/// __builtin_is_aligned()
/// __builtin_align_up()
/// __builtin_align_down()
/// The first parameter is either an integer or a pointer.
/// The second parameter is the requested alignment as an integer.
static bool interp__builtin_is_aligned_up_down(InterpState &S, CodePtr OpPC,
const InterpFrame *Frame,
const Function *Func,
const CallExpr *Call) {
unsigned BuiltinOp = Func->getBuiltinID();
unsigned CallSize = callArgSize(S, Call);

PrimType AlignmentT = *S.Ctx.classify(Call->getArg(1));
const APSInt &Alignment = peekToAPSInt(S.Stk, AlignmentT);

if (Alignment < 0 || !Alignment.isPowerOf2()) {
S.FFDiag(Call, diag::note_constexpr_invalid_alignment) << Alignment;
return false;
}
unsigned SrcWidth = S.getCtx().getIntWidth(Call->getArg(0)->getType());
APSInt MaxValue(APInt::getOneBitSet(SrcWidth, SrcWidth - 1));
if (APSInt::compareValues(Alignment, MaxValue) > 0) {
S.FFDiag(Call, diag::note_constexpr_alignment_too_big)
<< MaxValue << Call->getArg(0)->getType() << Alignment;
return false;
}

// The first parameter is either an integer or a pointer (but not a function
// pointer).
PrimType FirstArgT = *S.Ctx.classify(Call->getArg(0));

if (isIntegralType(FirstArgT)) {
const APSInt &Src = peekToAPSInt(S.Stk, FirstArgT, CallSize);
APSInt Align = Alignment.extOrTrunc(Src.getBitWidth());
if (BuiltinOp == Builtin::BI__builtin_align_up) {
APSInt AlignedVal =
APSInt((Src + (Align - 1)) & ~(Align - 1), Src.isUnsigned());
pushInteger(S, AlignedVal, Call->getType());
} else if (BuiltinOp == Builtin::BI__builtin_align_down) {
APSInt AlignedVal = APSInt(Src & ~(Align - 1), Src.isUnsigned());
pushInteger(S, AlignedVal, Call->getType());
} else {
assert(*S.Ctx.classify(Call->getType()) == PT_Bool);
S.Stk.push<Boolean>((Src & (Align - 1)) == 0);
}
return true;
}

assert(FirstArgT == PT_Ptr);
const Pointer &Ptr = S.Stk.peek<Pointer>(CallSize);

unsigned PtrOffset = Ptr.getByteOffset();
PtrOffset = Ptr.getIndex();
CharUnits BaseAlignment =
S.getCtx().getDeclAlign(Ptr.getDeclDesc()->asValueDecl());
CharUnits PtrAlign =
BaseAlignment.alignmentAtOffset(CharUnits::fromQuantity(PtrOffset));

if (BuiltinOp == Builtin::BI__builtin_is_aligned) {
if (PtrAlign.getQuantity() >= Alignment) {
S.Stk.push<Boolean>(true);
return true;
}
// If the alignment is not known to be sufficient, some cases could still
// be aligned at run time. However, if the requested alignment is less or
// equal to the base alignment and the offset is not aligned, we know that
// the run-time value can never be aligned.
if (BaseAlignment.getQuantity() >= Alignment &&
PtrAlign.getQuantity() < Alignment) {
S.Stk.push<Boolean>(false);
return true;
}

S.FFDiag(Call->getArg(0), diag::note_constexpr_alignment_compute)
<< Alignment;
return false;
}

assert(BuiltinOp == Builtin::BI__builtin_align_down ||
BuiltinOp == Builtin::BI__builtin_align_up);

// For align_up/align_down, we can return the same value if the alignment
// is known to be greater or equal to the requested value.
if (PtrAlign.getQuantity() >= Alignment) {
S.Stk.push<Pointer>(Ptr);
return true;
}

// The alignment could be greater than the minimum at run-time, so we cannot
// infer much about the resulting pointer value. One case is possible:
// For `_Alignas(32) char buf[N]; __builtin_align_down(&buf[idx], 32)` we
// can infer the correct index if the requested alignment is smaller than
// the base alignment so we can perform the computation on the offset.
if (BaseAlignment.getQuantity() >= Alignment) {
assert(Alignment.getBitWidth() <= 64 &&
"Cannot handle > 64-bit address-space");
uint64_t Alignment64 = Alignment.getZExtValue();
CharUnits NewOffset =
CharUnits::fromQuantity(BuiltinOp == Builtin::BI__builtin_align_down
? llvm::alignDown(PtrOffset, Alignment64)
: llvm::alignTo(PtrOffset, Alignment64));

S.Stk.push<Pointer>(Ptr.atIndex(NewOffset.getQuantity()));
return true;
}

// Otherwise, we cannot constant-evaluate the result.
S.FFDiag(Call->getArg(0), diag::note_constexpr_alignment_adjust) << Alignment;
return false;
}

bool InterpretBuiltin(InterpState &S, CodePtr OpPC, const Function *F,
const CallExpr *Call) {
const InterpFrame *Frame = S.Current;
Expand Down Expand Up @@ -1291,6 +1402,13 @@ bool InterpretBuiltin(InterpState &S, CodePtr OpPC, const Function *F,
return false;
break;

case Builtin::BI__builtin_is_aligned:
case Builtin::BI__builtin_align_up:
case Builtin::BI__builtin_align_down:
if (!interp__builtin_is_aligned_up_down(S, OpPC, Frame, F, Call))
return false;
break;

default:
S.FFDiag(S.Current->getLocation(OpPC),
diag::note_invalid_subexpr_in_const_expr)
7 changes: 7 additions & 0 deletions clang/lib/AST/Interp/InterpFrame.cpp
Expand Up @@ -152,6 +152,13 @@ void print(llvm::raw_ostream &OS, const Pointer &P, ASTContext &Ctx,
}

void InterpFrame::describe(llvm::raw_ostream &OS) const {
// We create frames for builtin functions as well, but we can't reliably
// diagnose them. The 'in call to' diagnostics for them add no value to the
// user _and_ it doesn't generally work since the argument types don't always
// match the function prototype. Just ignore them.
if (const auto *F = getFunction(); F && F->isBuiltin())
return;

const FunctionDecl *F = getCallee();
if (const auto *M = dyn_cast<CXXMethodDecl>(F);
M && M->isInstance() && !isa<CXXConstructorDecl>(F)) {
5 changes: 3 additions & 2 deletions clang/lib/AST/Interp/State.cpp
Expand Up @@ -155,7 +155,8 @@ void State::addCallStack(unsigned Limit) {
SmallString<128> Buffer;
llvm::raw_svector_ostream Out(Buffer);
F->describe(Out);
addDiag(CallRange.getBegin(), diag::note_constexpr_call_here)
<< Out.str() << CallRange;
if (!Buffer.empty())
addDiag(CallRange.getBegin(), diag::note_constexpr_call_here)
<< Out.str() << CallRange;
}
}
26 changes: 26 additions & 0 deletions clang/lib/AST/OpenACCClause.cpp
Expand Up @@ -48,6 +48,26 @@ OpenACCIfClause::OpenACCIfClause(SourceLocation BeginLoc,
"Condition expression type not scalar/dependent");
}

OpenACCSelfClause *OpenACCSelfClause::Create(const ASTContext &C,
SourceLocation BeginLoc,
SourceLocation LParenLoc,
Expr *ConditionExpr,
SourceLocation EndLoc) {
void *Mem = C.Allocate(sizeof(OpenACCIfClause), alignof(OpenACCIfClause));
return new (Mem)
OpenACCSelfClause(BeginLoc, LParenLoc, ConditionExpr, EndLoc);
}

OpenACCSelfClause::OpenACCSelfClause(SourceLocation BeginLoc,
SourceLocation LParenLoc,
Expr *ConditionExpr, SourceLocation EndLoc)
: OpenACCClauseWithCondition(OpenACCClauseKind::Self, BeginLoc, LParenLoc,
ConditionExpr, EndLoc) {
assert((!ConditionExpr || ConditionExpr->isInstantiationDependent() ||
ConditionExpr->getType()->isScalarType()) &&
"Condition expression type not scalar/dependent");
}

OpenACCClause::child_range OpenACCClause::children() {
switch (getClauseKind()) {
default:
Expand All @@ -72,3 +92,9 @@ void OpenACCClausePrinter::VisitDefaultClause(const OpenACCDefaultClause &C) {
void OpenACCClausePrinter::VisitIfClause(const OpenACCIfClause &C) {
OS << "if(" << C.getConditionExpr() << ")";
}

void OpenACCClausePrinter::VisitSelfClause(const OpenACCSelfClause &C) {
OS << "self";
if (const Expr *CondExpr = C.getConditionExpr())
OS << "(" << CondExpr << ")";
}
5 changes: 5 additions & 0 deletions clang/lib/AST/StmtProfile.cpp
@@ -2491,6 +2491,11 @@ void OpenACCClauseProfiler::VisitIfClause(const OpenACCIfClause &Clause) {
"if clause requires a valid condition expr");
Profiler.VisitStmt(Clause.getConditionExpr());
}

void OpenACCClauseProfiler::VisitSelfClause(const OpenACCSelfClause &Clause) {
if (Clause.hasConditionExpr())
Profiler.VisitStmt(Clause.getConditionExpr());
}
} // namespace

void StmtProfiler::VisitOpenACCComputeConstruct(
1 change: 1 addition & 0 deletions clang/lib/AST/TextNodeDumper.cpp
@@ -398,6 +398,7 @@ void TextNodeDumper::Visit(const OpenACCClause *C) {
OS << '(' << cast<OpenACCDefaultClause>(C)->getDefaultClauseKind() << ')';
break;
case OpenACCClauseKind::If:
case OpenACCClauseKind::Self:
// The condition expression will be printed as a part of the 'children',
// but print 'clause' here so it is clear what is happening from the dump.
OS << " clause";
125 changes: 72 additions & 53 deletions clang/lib/Analysis/ExprMutationAnalyzer.cpp
@@ -186,9 +186,10 @@ template <> struct NodeID<Decl> { static constexpr StringRef value = "decl"; };
constexpr StringRef NodeID<Expr>::value;
constexpr StringRef NodeID<Decl>::value;

template <class T, class F = const Stmt *(ExprMutationAnalyzer::*)(const T *)>
template <class T,
class F = const Stmt *(ExprMutationAnalyzer::Analyzer::*)(const T *)>
const Stmt *tryEachMatch(ArrayRef<ast_matchers::BoundNodes> Matches,
ExprMutationAnalyzer *Analyzer, F Finder) {
ExprMutationAnalyzer::Analyzer *Analyzer, F Finder) {
const StringRef ID = NodeID<T>::value;
for (const auto &Nodes : Matches) {
if (const Stmt *S = (Analyzer->*Finder)(Nodes.getNodeAs<T>(ID)))
@@ -199,33 +200,37 @@ const Stmt *tryEachMatch(ArrayRef<ast_matchers::BoundNodes> Matches,

} // namespace

const Stmt *ExprMutationAnalyzer::findMutation(const Expr *Exp) {
return findMutationMemoized(Exp,
{&ExprMutationAnalyzer::findDirectMutation,
&ExprMutationAnalyzer::findMemberMutation,
&ExprMutationAnalyzer::findArrayElementMutation,
&ExprMutationAnalyzer::findCastMutation,
&ExprMutationAnalyzer::findRangeLoopMutation,
&ExprMutationAnalyzer::findReferenceMutation,
&ExprMutationAnalyzer::findFunctionArgMutation},
Results);
const Stmt *ExprMutationAnalyzer::Analyzer::findMutation(const Expr *Exp) {
return findMutationMemoized(
Exp,
{&ExprMutationAnalyzer::Analyzer::findDirectMutation,
&ExprMutationAnalyzer::Analyzer::findMemberMutation,
&ExprMutationAnalyzer::Analyzer::findArrayElementMutation,
&ExprMutationAnalyzer::Analyzer::findCastMutation,
&ExprMutationAnalyzer::Analyzer::findRangeLoopMutation,
&ExprMutationAnalyzer::Analyzer::findReferenceMutation,
&ExprMutationAnalyzer::Analyzer::findFunctionArgMutation},
Memorized.Results);
}

const Stmt *ExprMutationAnalyzer::findMutation(const Decl *Dec) {
return tryEachDeclRef(Dec, &ExprMutationAnalyzer::findMutation);
const Stmt *ExprMutationAnalyzer::Analyzer::findMutation(const Decl *Dec) {
return tryEachDeclRef(Dec, &ExprMutationAnalyzer::Analyzer::findMutation);
}

const Stmt *ExprMutationAnalyzer::findPointeeMutation(const Expr *Exp) {
return findMutationMemoized(Exp, {/*TODO*/}, PointeeResults);
const Stmt *
ExprMutationAnalyzer::Analyzer::findPointeeMutation(const Expr *Exp) {
return findMutationMemoized(Exp, {/*TODO*/}, Memorized.PointeeResults);
}

const Stmt *ExprMutationAnalyzer::findPointeeMutation(const Decl *Dec) {
return tryEachDeclRef(Dec, &ExprMutationAnalyzer::findPointeeMutation);
const Stmt *
ExprMutationAnalyzer::Analyzer::findPointeeMutation(const Decl *Dec) {
return tryEachDeclRef(Dec,
&ExprMutationAnalyzer::Analyzer::findPointeeMutation);
}

const Stmt *ExprMutationAnalyzer::findMutationMemoized(
const Stmt *ExprMutationAnalyzer::Analyzer::findMutationMemoized(
const Expr *Exp, llvm::ArrayRef<MutationFinder> Finders,
ResultMap &MemoizedResults) {
Memoized::ResultMap &MemoizedResults) {
const auto Memoized = MemoizedResults.find(Exp);
if (Memoized != MemoizedResults.end())
return Memoized->second;
@@ -241,8 +246,9 @@ const Stmt *ExprMutationAnalyzer::findMutationMemoized(
return MemoizedResults[Exp] = nullptr;
}

const Stmt *ExprMutationAnalyzer::tryEachDeclRef(const Decl *Dec,
MutationFinder Finder) {
const Stmt *
ExprMutationAnalyzer::Analyzer::tryEachDeclRef(const Decl *Dec,
MutationFinder Finder) {
const auto Refs = match(
findAll(
declRefExpr(to(
@@ -261,8 +267,9 @@ const Stmt *ExprMutationAnalyzer::tryEachDeclRef(const Decl *Dec,
return nullptr;
}

bool ExprMutationAnalyzer::isUnevaluated(const Stmt *Exp, const Stmt &Stm,
ASTContext &Context) {
bool ExprMutationAnalyzer::Analyzer::isUnevaluated(const Stmt *Exp,
const Stmt &Stm,
ASTContext &Context) {
return selectFirst<Stmt>(
NodeID<Expr>::value,
match(
@@ -293,33 +300,36 @@ bool ExprMutationAnalyzer::isUnevaluated(const Stmt *Exp, const Stmt &Stm,
Stm, Context)) != nullptr;
}

bool ExprMutationAnalyzer::isUnevaluated(const Expr *Exp) {
bool ExprMutationAnalyzer::Analyzer::isUnevaluated(const Expr *Exp) {
return isUnevaluated(Exp, Stm, Context);
}

const Stmt *
ExprMutationAnalyzer::findExprMutation(ArrayRef<BoundNodes> Matches) {
return tryEachMatch<Expr>(Matches, this, &ExprMutationAnalyzer::findMutation);
ExprMutationAnalyzer::Analyzer::findExprMutation(ArrayRef<BoundNodes> Matches) {
return tryEachMatch<Expr>(Matches, this,
&ExprMutationAnalyzer::Analyzer::findMutation);
}

const Stmt *
ExprMutationAnalyzer::findDeclMutation(ArrayRef<BoundNodes> Matches) {
return tryEachMatch<Decl>(Matches, this, &ExprMutationAnalyzer::findMutation);
ExprMutationAnalyzer::Analyzer::findDeclMutation(ArrayRef<BoundNodes> Matches) {
return tryEachMatch<Decl>(Matches, this,
&ExprMutationAnalyzer::Analyzer::findMutation);
}

const Stmt *ExprMutationAnalyzer::findExprPointeeMutation(
const Stmt *ExprMutationAnalyzer::Analyzer::findExprPointeeMutation(
ArrayRef<ast_matchers::BoundNodes> Matches) {
return tryEachMatch<Expr>(Matches, this,
&ExprMutationAnalyzer::findPointeeMutation);
return tryEachMatch<Expr>(
Matches, this, &ExprMutationAnalyzer::Analyzer::findPointeeMutation);
}

const Stmt *ExprMutationAnalyzer::findDeclPointeeMutation(
const Stmt *ExprMutationAnalyzer::Analyzer::findDeclPointeeMutation(
ArrayRef<ast_matchers::BoundNodes> Matches) {
return tryEachMatch<Decl>(Matches, this,
&ExprMutationAnalyzer::findPointeeMutation);
return tryEachMatch<Decl>(
Matches, this, &ExprMutationAnalyzer::Analyzer::findPointeeMutation);
}

const Stmt *ExprMutationAnalyzer::findDirectMutation(const Expr *Exp) {
const Stmt *
ExprMutationAnalyzer::Analyzer::findDirectMutation(const Expr *Exp) {
// LHS of any assignment operators.
const auto AsAssignmentLhs =
binaryOperator(isAssignmentOperator(), hasLHS(canResolveToExpr(Exp)));
@@ -426,7 +436,7 @@ const Stmt *ExprMutationAnalyzer::findDirectMutation(const Expr *Exp) {
const auto AsNonConstRefReturn =
returnStmt(hasReturnValue(canResolveToExpr(Exp)));

// It is used as a non-const-reference for initalizing a range-for loop.
// It is used as a non-const-reference for initializing a range-for loop.
const auto AsNonConstRefRangeInit = cxxForRangeStmt(hasRangeInit(declRefExpr(
allOf(canResolveToExpr(Exp), hasType(nonConstReferenceType())))));

@@ -443,7 +453,8 @@ const Stmt *ExprMutationAnalyzer::findDirectMutation(const Expr *Exp) {
return selectFirst<Stmt>("stmt", Matches);
}

const Stmt *ExprMutationAnalyzer::findMemberMutation(const Expr *Exp) {
const Stmt *
ExprMutationAnalyzer::Analyzer::findMemberMutation(const Expr *Exp) {
// Check whether any member of 'Exp' is mutated.
const auto MemberExprs = match(
findAll(expr(anyOf(memberExpr(hasObjectExpression(canResolveToExpr(Exp))),
Expand All @@ -456,7 +467,8 @@ const Stmt *ExprMutationAnalyzer::findMemberMutation(const Expr *Exp) {
return findExprMutation(MemberExprs);
}

const Stmt *ExprMutationAnalyzer::findArrayElementMutation(const Expr *Exp) {
const Stmt *
ExprMutationAnalyzer::Analyzer::findArrayElementMutation(const Expr *Exp) {
// Check whether any element of an array is mutated.
const auto SubscriptExprs = match(
findAll(arraySubscriptExpr(
@@ -469,7 +481,7 @@ const Stmt *ExprMutationAnalyzer::findArrayElementMutation(const Expr *Exp) {
return findExprMutation(SubscriptExprs);
}

const Stmt *ExprMutationAnalyzer::findCastMutation(const Expr *Exp) {
const Stmt *ExprMutationAnalyzer::Analyzer::findCastMutation(const Expr *Exp) {
// If the 'Exp' is explicitly casted to a non-const reference type the
// 'Exp' is considered to be modified.
const auto ExplicitCast =
@@ -504,7 +516,8 @@ const Stmt *ExprMutationAnalyzer::findCastMutation(const Expr *Exp) {
return findExprMutation(Calls);
}

const Stmt *ExprMutationAnalyzer::findRangeLoopMutation(const Expr *Exp) {
const Stmt *
ExprMutationAnalyzer::Analyzer::findRangeLoopMutation(const Expr *Exp) {
// Keep the ordering for the specific initialization matches to happen first,
// because it is cheaper to match all potential modifications of the loop
// variable.
@@ -567,7 +580,8 @@ const Stmt *ExprMutationAnalyzer::findRangeLoopMutation(const Expr *Exp) {
return findDeclMutation(LoopVars);
}

const Stmt *ExprMutationAnalyzer::findReferenceMutation(const Expr *Exp) {
const Stmt *
ExprMutationAnalyzer::Analyzer::findReferenceMutation(const Expr *Exp) {
// Follow non-const reference returned by `operator*()` of move-only classes.
// These are typically smart pointers with unique ownership so we treat
// mutation of pointee as mutation of the smart pointer itself.
@@ -599,7 +613,8 @@ const Stmt *ExprMutationAnalyzer::findReferenceMutation(const Expr *Exp) {
return findDeclMutation(Refs);
}

const Stmt *ExprMutationAnalyzer::findFunctionArgMutation(const Expr *Exp) {
const Stmt *
ExprMutationAnalyzer::Analyzer::findFunctionArgMutation(const Expr *Exp) {
const auto NonConstRefParam = forEachArgumentWithParam(
canResolveToExpr(Exp),
parmVarDecl(hasType(nonConstReferenceType())).bind("parm"));
@@ -637,10 +652,9 @@ const Stmt *ExprMutationAnalyzer::findFunctionArgMutation(const Expr *Exp) {
if (const auto *RefType = ParmType->getAs<RValueReferenceType>()) {
if (!RefType->getPointeeType().getQualifiers() &&
RefType->getPointeeType()->getAs<TemplateTypeParmType>()) {
std::unique_ptr<FunctionParmMutationAnalyzer> &Analyzer =
FuncParmAnalyzer[Func];
if (!Analyzer)
Analyzer.reset(new FunctionParmMutationAnalyzer(*Func, Context));
FunctionParmMutationAnalyzer *Analyzer =
FunctionParmMutationAnalyzer::getFunctionParmMutationAnalyzer(
*Func, Context, Memorized);
if (Analyzer->findMutation(Parm))
return Exp;
continue;
@@ -653,13 +667,15 @@ const Stmt *ExprMutationAnalyzer::findFunctionArgMutation(const Expr *Exp) {
}

FunctionParmMutationAnalyzer::FunctionParmMutationAnalyzer(
const FunctionDecl &Func, ASTContext &Context)
: BodyAnalyzer(*Func.getBody(), Context) {
const FunctionDecl &Func, ASTContext &Context,
ExprMutationAnalyzer::Memoized &Memorized)
: BodyAnalyzer(*Func.getBody(), Context, Memorized) {
if (const auto *Ctor = dyn_cast<CXXConstructorDecl>(&Func)) {
// CXXCtorInitializer might also mutate Param but they're not part of
// function body, check them eagerly here since they're typically trivial.
for (const CXXCtorInitializer *Init : Ctor->inits()) {
ExprMutationAnalyzer InitAnalyzer(*Init->getInit(), Context);
ExprMutationAnalyzer::Analyzer InitAnalyzer(*Init->getInit(), Context,
Memorized);
for (const ParmVarDecl *Parm : Ctor->parameters()) {
if (Results.contains(Parm))
continue;
@@ -675,11 +691,14 @@ FunctionParmMutationAnalyzer::findMutation(const ParmVarDecl *Parm) {
const auto Memoized = Results.find(Parm);
if (Memoized != Results.end())
return Memoized->second;

// To handle recursion such as call A -> call B -> call A, assume the
// parameters of A are not mutated before analyzing them. Then, when the
// second "call A" is analyzed, FunctionParmMutationAnalyzer can use this
// memoized value to avoid infinite recursion.
Results[Parm] = nullptr;
if (const Stmt *S = BodyAnalyzer.findMutation(Parm))
return Results[Parm] = S;

return Results[Parm] = nullptr;
return Results[Parm];
}

} // namespace clang
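For orientation, a minimal usage sketch of the analyzer being restructured above; this is an illustration rather than code from the patch, but the public surface (construction from a statement plus ASTContext, and isMutated/findMutation) is unchanged by moving the logic into the nested Analyzer class, so existing callers such as clang-tidy checks keep working. The helper name below is invented:

#include "clang/Analysis/Analyses/ExprMutationAnalyzer.h"

using namespace clang;

// Returns true if `E`, or something it refers to, is mutated anywhere in `Scope`.
static bool isMutatedIn(const Expr &E, const Stmt &Scope, ASTContext &Ctx) {
  ExprMutationAnalyzer Analyzer(Scope, Ctx);
  return Analyzer.isMutated(&E);
}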
249 changes: 249 additions & 0 deletions clang/lib/Analysis/FlowSensitive/ASTOps.cpp
@@ -0,0 +1,249 @@
//===-- ASTOps.cpp ------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// Operations on AST nodes that are used in flow-sensitive analysis.
//
//===----------------------------------------------------------------------===//

#include "clang/Analysis/FlowSensitive/ASTOps.h"
#include "clang/AST/ComputeDependence.h"
#include "clang/AST/Decl.h"
#include "clang/AST/DeclBase.h"
#include "clang/AST/DeclCXX.h"
#include "clang/AST/Expr.h"
#include "clang/AST/ExprCXX.h"
#include "clang/AST/Stmt.h"
#include "clang/AST/Type.h"
#include "clang/Analysis/FlowSensitive/StorageLocation.h"
#include "clang/Basic/LLVM.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/STLExtras.h"
#include <cassert>
#include <iterator>
#include <vector>

#define DEBUG_TYPE "dataflow"

namespace clang::dataflow {

const Expr &ignoreCFGOmittedNodes(const Expr &E) {
const Expr *Current = &E;
if (auto *EWC = dyn_cast<ExprWithCleanups>(Current)) {
Current = EWC->getSubExpr();
assert(Current != nullptr);
}
Current = Current->IgnoreParens();
assert(Current != nullptr);
return *Current;
}

const Stmt &ignoreCFGOmittedNodes(const Stmt &S) {
if (auto *E = dyn_cast<Expr>(&S))
return ignoreCFGOmittedNodes(*E);
return S;
}

// FIXME: Does not precisely handle non-virtual diamond inheritance. A single
// field decl will be modeled for all instances of the inherited field.
static void getFieldsFromClassHierarchy(QualType Type, FieldSet &Fields) {
if (Type->isIncompleteType() || Type->isDependentType() ||
!Type->isRecordType())
return;

for (const FieldDecl *Field : Type->getAsRecordDecl()->fields())
Fields.insert(Field);
if (auto *CXXRecord = Type->getAsCXXRecordDecl())
for (const CXXBaseSpecifier &Base : CXXRecord->bases())
getFieldsFromClassHierarchy(Base.getType(), Fields);
}

/// Gets the set of all fields in the type.
FieldSet getObjectFields(QualType Type) {
FieldSet Fields;
getFieldsFromClassHierarchy(Type, Fields);
return Fields;
}

bool containsSameFields(const FieldSet &Fields,
const RecordStorageLocation::FieldToLoc &FieldLocs) {
if (Fields.size() != FieldLocs.size())
return false;
for ([[maybe_unused]] auto [Field, Loc] : FieldLocs)
if (!Fields.contains(cast_or_null<FieldDecl>(Field)))
return false;
return true;
}

/// Returns the fields of a `RecordDecl` that are initialized by an
/// `InitListExpr`, in the order in which they appear in
/// `InitListExpr::inits()`.
/// `Init->getType()` must be a record type.
static std::vector<const FieldDecl *>
getFieldsForInitListExpr(const InitListExpr *InitList) {
const RecordDecl *RD = InitList->getType()->getAsRecordDecl();
assert(RD != nullptr);

std::vector<const FieldDecl *> Fields;

if (InitList->getType()->isUnionType()) {
Fields.push_back(InitList->getInitializedFieldInUnion());
return Fields;
}

// Unnamed bitfields are only used for padding and do not appear in
// `InitListExpr`'s inits. However, those fields do appear in `RecordDecl`'s
// field list, and we thus need to remove them before mapping inits to
// fields to avoid mapping inits to the wrong fields.
llvm::copy_if(
RD->fields(), std::back_inserter(Fields),
[](const FieldDecl *Field) { return !Field->isUnnamedBitfield(); });
return Fields;
}

RecordInitListHelper::RecordInitListHelper(const InitListExpr *InitList) {
auto *RD = InitList->getType()->getAsCXXRecordDecl();
assert(RD != nullptr);

std::vector<const FieldDecl *> Fields = getFieldsForInitListExpr(InitList);
ArrayRef<Expr *> Inits = InitList->inits();

// Unions initialized with an empty initializer list need special treatment.
// For structs/classes initialized with an empty initializer list, Clang
// puts `ImplicitValueInitExpr`s in `InitListExpr::inits()`, but for unions,
// it doesn't do this -- so we create an `ImplicitValueInitExpr` ourselves.
SmallVector<Expr *> InitsForUnion;
if (InitList->getType()->isUnionType() && Inits.empty()) {
assert(Fields.size() == 1);
ImplicitValueInitForUnion.emplace(Fields.front()->getType());
InitsForUnion.push_back(&*ImplicitValueInitForUnion);
Inits = InitsForUnion;
}

size_t InitIdx = 0;

assert(Fields.size() + RD->getNumBases() == Inits.size());
for (const CXXBaseSpecifier &Base : RD->bases()) {
assert(InitIdx < Inits.size());
Expr *Init = Inits[InitIdx++];
BaseInits.emplace_back(&Base, Init);
}

assert(Fields.size() == Inits.size() - InitIdx);
for (const FieldDecl *Field : Fields) {
assert(InitIdx < Inits.size());
Expr *Init = Inits[InitIdx++];
FieldInits.emplace_back(Field, Init);
}
}

static void insertIfGlobal(const Decl &D,
llvm::DenseSet<const VarDecl *> &Globals) {
if (auto *V = dyn_cast<VarDecl>(&D))
if (V->hasGlobalStorage())
Globals.insert(V);
}

static void insertIfFunction(const Decl &D,
llvm::DenseSet<const FunctionDecl *> &Funcs) {
if (auto *FD = dyn_cast<FunctionDecl>(&D))
Funcs.insert(FD);
}

static MemberExpr *getMemberForAccessor(const CXXMemberCallExpr &C) {
// Use getCalleeDecl instead of getMethodDecl in order to handle
// pointer-to-member calls.
const auto *MethodDecl = dyn_cast_or_null<CXXMethodDecl>(C.getCalleeDecl());
if (!MethodDecl)
return nullptr;
auto *Body = dyn_cast_or_null<CompoundStmt>(MethodDecl->getBody());
if (!Body || Body->size() != 1)
return nullptr;
if (auto *RS = dyn_cast<ReturnStmt>(*Body->body_begin()))
if (auto *Return = RS->getRetValue())
return dyn_cast<MemberExpr>(Return->IgnoreParenImpCasts());
return nullptr;
}

static void getReferencedDecls(const Decl &D, ReferencedDecls &Referenced) {
insertIfGlobal(D, Referenced.Globals);
insertIfFunction(D, Referenced.Functions);
if (const auto *Decomp = dyn_cast<DecompositionDecl>(&D))
for (const auto *B : Decomp->bindings())
if (auto *ME = dyn_cast_or_null<MemberExpr>(B->getBinding()))
// FIXME: should we be using `E->getFoundDecl()`?
if (const auto *FD = dyn_cast<FieldDecl>(ME->getMemberDecl()))
Referenced.Fields.insert(FD);
}

/// Traverses `S` and inserts into `Referenced` any declarations that are
/// declared in or referenced from sub-statements.
static void getReferencedDecls(const Stmt &S, ReferencedDecls &Referenced) {
for (auto *Child : S.children())
if (Child != nullptr)
getReferencedDecls(*Child, Referenced);
if (const auto *DefaultArg = dyn_cast<CXXDefaultArgExpr>(&S))
getReferencedDecls(*DefaultArg->getExpr(), Referenced);
if (const auto *DefaultInit = dyn_cast<CXXDefaultInitExpr>(&S))
getReferencedDecls(*DefaultInit->getExpr(), Referenced);

if (auto *DS = dyn_cast<DeclStmt>(&S)) {
if (DS->isSingleDecl())
getReferencedDecls(*DS->getSingleDecl(), Referenced);
else
for (auto *D : DS->getDeclGroup())
getReferencedDecls(*D, Referenced);
} else if (auto *E = dyn_cast<DeclRefExpr>(&S)) {
insertIfGlobal(*E->getDecl(), Referenced.Globals);
insertIfFunction(*E->getDecl(), Referenced.Functions);
} else if (const auto *C = dyn_cast<CXXMemberCallExpr>(&S)) {
// If this is a method that returns a member variable but does nothing else,
// model the field of the return value.
if (MemberExpr *E = getMemberForAccessor(*C))
if (const auto *FD = dyn_cast<FieldDecl>(E->getMemberDecl()))
Referenced.Fields.insert(FD);
} else if (auto *E = dyn_cast<MemberExpr>(&S)) {
// FIXME: should we be using `E->getFoundDecl()`?
const ValueDecl *VD = E->getMemberDecl();
insertIfGlobal(*VD, Referenced.Globals);
insertIfFunction(*VD, Referenced.Functions);
if (const auto *FD = dyn_cast<FieldDecl>(VD))
Referenced.Fields.insert(FD);
} else if (auto *InitList = dyn_cast<InitListExpr>(&S)) {
if (InitList->getType()->isRecordType())
for (const auto *FD : getFieldsForInitListExpr(InitList))
Referenced.Fields.insert(FD);
}
}

ReferencedDecls getReferencedDecls(const FunctionDecl &FD) {
ReferencedDecls Result;
// Look for global variable and field references in the
// constructor-initializers.
if (const auto *CtorDecl = dyn_cast<CXXConstructorDecl>(&FD)) {
for (const auto *Init : CtorDecl->inits()) {
if (Init->isMemberInitializer()) {
Result.Fields.insert(Init->getMember());
} else if (Init->isIndirectMemberInitializer()) {
for (const auto *I : Init->getIndirectMember()->chain())
Result.Fields.insert(cast<FieldDecl>(I));
}
const Expr *E = Init->getInit();
assert(E != nullptr);
getReferencedDecls(*E, Result);
}
// Add all fields mentioned in default member initializers.
for (const FieldDecl *F : CtorDecl->getParent()->fields())
if (const auto *I = F->getInClassInitializer())
getReferencedDecls(*I, Result);
}
getReferencedDecls(*FD.getBody(), Result);

return Result;
}

} // namespace clang::dataflow
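A rough consumer-side sketch (not part of the patch) of the new getReferencedDecls entry point, mirroring how Environment::initFieldsGlobalsAndFuncs uses it later in this diff; the function name below is hypothetical, and the declaration passed in must have a body:

#include "clang/Analysis/FlowSensitive/ASTOps.h"
#include "llvm/Support/raw_ostream.h"

using namespace clang;
using namespace clang::dataflow;

// Prints a one-line summary of the fields, globals, and functions referenced by `FD`.
static void summarizeReferencedDecls(const FunctionDecl &FD, llvm::raw_ostream &OS) {
  ReferencedDecls Referenced = getReferencedDecls(FD);
  OS << Referenced.Fields.size() << " fields, " << Referenced.Globals.size()
     << " globals, " << Referenced.Functions.size() << " functions\n";
}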
1 change: 1 addition & 0 deletions clang/lib/Analysis/FlowSensitive/CMakeLists.txt
@@ -1,6 +1,7 @@
add_clang_library(clangAnalysisFlowSensitive
AdornedCFG.cpp
Arena.cpp
ASTOps.cpp
DataflowAnalysisContext.cpp
DataflowEnvironment.cpp
Formula.cpp
53 changes: 1 addition & 52 deletions clang/lib/Analysis/FlowSensitive/DataflowAnalysisContext.cpp
@@ -14,6 +14,7 @@

#include "clang/Analysis/FlowSensitive/DataflowAnalysisContext.h"
#include "clang/AST/ExprCXX.h"
#include "clang/Analysis/FlowSensitive/ASTOps.h"
#include "clang/Analysis/FlowSensitive/DebugSupport.h"
#include "clang/Analysis/FlowSensitive/Formula.h"
#include "clang/Analysis/FlowSensitive/Logger.h"
@@ -359,55 +360,3 @@ DataflowAnalysisContext::~DataflowAnalysisContext() = default;

} // namespace dataflow
} // namespace clang

using namespace clang;

const Expr &clang::dataflow::ignoreCFGOmittedNodes(const Expr &E) {
const Expr *Current = &E;
if (auto *EWC = dyn_cast<ExprWithCleanups>(Current)) {
Current = EWC->getSubExpr();
assert(Current != nullptr);
}
Current = Current->IgnoreParens();
assert(Current != nullptr);
return *Current;
}

const Stmt &clang::dataflow::ignoreCFGOmittedNodes(const Stmt &S) {
if (auto *E = dyn_cast<Expr>(&S))
return ignoreCFGOmittedNodes(*E);
return S;
}

// FIXME: Does not precisely handle non-virtual diamond inheritance. A single
// field decl will be modeled for all instances of the inherited field.
static void getFieldsFromClassHierarchy(QualType Type,
clang::dataflow::FieldSet &Fields) {
if (Type->isIncompleteType() || Type->isDependentType() ||
!Type->isRecordType())
return;

for (const FieldDecl *Field : Type->getAsRecordDecl()->fields())
Fields.insert(Field);
if (auto *CXXRecord = Type->getAsCXXRecordDecl())
for (const CXXBaseSpecifier &Base : CXXRecord->bases())
getFieldsFromClassHierarchy(Base.getType(), Fields);
}

/// Gets the set of all fields in the type.
clang::dataflow::FieldSet clang::dataflow::getObjectFields(QualType Type) {
FieldSet Fields;
getFieldsFromClassHierarchy(Type, Fields);
return Fields;
}

bool clang::dataflow::containsSameFields(
const clang::dataflow::FieldSet &Fields,
const clang::dataflow::RecordStorageLocation::FieldToLoc &FieldLocs) {
if (Fields.size() != FieldLocs.size())
return false;
for ([[maybe_unused]] auto [Field, Loc] : FieldLocs)
if (!Fields.contains(cast_or_null<FieldDecl>(Field)))
return false;
return true;
}
188 changes: 15 additions & 173 deletions clang/lib/Analysis/FlowSensitive/DataflowEnvironment.cpp
@@ -17,6 +17,7 @@
#include "clang/AST/DeclCXX.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "clang/AST/Type.h"
#include "clang/Analysis/FlowSensitive/ASTOps.h"
#include "clang/Analysis/FlowSensitive/DataflowLattice.h"
#include "clang/Analysis/FlowSensitive/Value.h"
#include "llvm/ADT/DenseMap.h"
@@ -304,93 +305,6 @@ widenKeyToValueMap(const llvm::MapVector<Key, Value *> &CurMap,
return WidenedMap;
}

/// Initializes a global storage value.
static void insertIfGlobal(const Decl &D,
llvm::DenseSet<const VarDecl *> &Vars) {
if (auto *V = dyn_cast<VarDecl>(&D))
if (V->hasGlobalStorage())
Vars.insert(V);
}

static void insertIfFunction(const Decl &D,
llvm::DenseSet<const FunctionDecl *> &Funcs) {
if (auto *FD = dyn_cast<FunctionDecl>(&D))
Funcs.insert(FD);
}

static MemberExpr *getMemberForAccessor(const CXXMemberCallExpr &C) {
// Use getCalleeDecl instead of getMethodDecl in order to handle
// pointer-to-member calls.
const auto *MethodDecl = dyn_cast_or_null<CXXMethodDecl>(C.getCalleeDecl());
if (!MethodDecl)
return nullptr;
auto *Body = dyn_cast_or_null<CompoundStmt>(MethodDecl->getBody());
if (!Body || Body->size() != 1)
return nullptr;
if (auto *RS = dyn_cast<ReturnStmt>(*Body->body_begin()))
if (auto *Return = RS->getRetValue())
return dyn_cast<MemberExpr>(Return->IgnoreParenImpCasts());
return nullptr;
}

static void
getFieldsGlobalsAndFuncs(const Decl &D, FieldSet &Fields,
llvm::DenseSet<const VarDecl *> &Vars,
llvm::DenseSet<const FunctionDecl *> &Funcs) {
insertIfGlobal(D, Vars);
insertIfFunction(D, Funcs);
if (const auto *Decomp = dyn_cast<DecompositionDecl>(&D))
for (const auto *B : Decomp->bindings())
if (auto *ME = dyn_cast_or_null<MemberExpr>(B->getBinding()))
// FIXME: should we be using `E->getFoundDecl()`?
if (const auto *FD = dyn_cast<FieldDecl>(ME->getMemberDecl()))
Fields.insert(FD);
}

/// Traverses `S` and inserts into `Fields`, `Vars` and `Funcs` any fields,
/// global variables and functions that are declared in or referenced from
/// sub-statements.
static void
getFieldsGlobalsAndFuncs(const Stmt &S, FieldSet &Fields,
llvm::DenseSet<const VarDecl *> &Vars,
llvm::DenseSet<const FunctionDecl *> &Funcs) {
for (auto *Child : S.children())
if (Child != nullptr)
getFieldsGlobalsAndFuncs(*Child, Fields, Vars, Funcs);
if (const auto *DefaultArg = dyn_cast<CXXDefaultArgExpr>(&S))
getFieldsGlobalsAndFuncs(*DefaultArg->getExpr(), Fields, Vars, Funcs);
if (const auto *DefaultInit = dyn_cast<CXXDefaultInitExpr>(&S))
getFieldsGlobalsAndFuncs(*DefaultInit->getExpr(), Fields, Vars, Funcs);

if (auto *DS = dyn_cast<DeclStmt>(&S)) {
if (DS->isSingleDecl())
getFieldsGlobalsAndFuncs(*DS->getSingleDecl(), Fields, Vars, Funcs);
else
for (auto *D : DS->getDeclGroup())
getFieldsGlobalsAndFuncs(*D, Fields, Vars, Funcs);
} else if (auto *E = dyn_cast<DeclRefExpr>(&S)) {
insertIfGlobal(*E->getDecl(), Vars);
insertIfFunction(*E->getDecl(), Funcs);
} else if (const auto *C = dyn_cast<CXXMemberCallExpr>(&S)) {
// If this is a method that returns a member variable but does nothing else,
// model the field of the return value.
if (MemberExpr *E = getMemberForAccessor(*C))
if (const auto *FD = dyn_cast<FieldDecl>(E->getMemberDecl()))
Fields.insert(FD);
} else if (auto *E = dyn_cast<MemberExpr>(&S)) {
// FIXME: should we be using `E->getFoundDecl()`?
const ValueDecl *VD = E->getMemberDecl();
insertIfGlobal(*VD, Vars);
insertIfFunction(*VD, Funcs);
if (const auto *FD = dyn_cast<FieldDecl>(VD))
Fields.insert(FD);
} else if (auto *InitList = dyn_cast<InitListExpr>(&S)) {
if (InitList->getType()->isRecordType())
for (const auto *FD : getFieldsForInitListExpr(InitList))
Fields.insert(FD);
}
}

namespace {

// Visitor that builds a map from record prvalues to result objects.
@@ -505,7 +419,11 @@ class ResultObjectVisitor : public RecursiveASTVisitor<ResultObjectVisitor> {
// below them can initialize the same object (or part of it).
if (isa<CXXConstructExpr>(E) || isa<CallExpr>(E) || isa<LambdaExpr>(E) ||
isa<CXXDefaultArgExpr>(E) || isa<CXXDefaultInitExpr>(E) ||
isa<CXXStdInitializerListExpr>(E)) {
isa<CXXStdInitializerListExpr>(E) ||
// We treat `BuiltinBitCastExpr` as an "original initializer" too as
// it may not even be casting from a record type -- and even if it is,
// the two objects are in general of unrelated type.
isa<BuiltinBitCastExpr>(E)) {
return;
}
if (auto *Op = dyn_cast<BinaryOperator>(E);
@@ -556,6 +474,11 @@ class ResultObjectVisitor : public RecursiveASTVisitor<ResultObjectVisitor> {
return;
}

if (auto *SE = dyn_cast<StmtExpr>(E)) {
PropagateResultObject(cast<Expr>(SE->getSubStmt()->body_back()), Loc);
return;
}

// All other expression nodes that propagate a record prvalue should have
// exactly one child.
SmallVector<Stmt *, 1> Children(E->child_begin(), E->child_end());
@@ -653,36 +576,13 @@ void Environment::initialize() {
void Environment::initFieldsGlobalsAndFuncs(const FunctionDecl *FuncDecl) {
assert(FuncDecl->doesThisDeclarationHaveABody());

FieldSet Fields;
llvm::DenseSet<const VarDecl *> Vars;
llvm::DenseSet<const FunctionDecl *> Funcs;

// Look for global variable and field references in the
// constructor-initializers.
if (const auto *CtorDecl = dyn_cast<CXXConstructorDecl>(FuncDecl)) {
for (const auto *Init : CtorDecl->inits()) {
if (Init->isMemberInitializer()) {
Fields.insert(Init->getMember());
} else if (Init->isIndirectMemberInitializer()) {
for (const auto *I : Init->getIndirectMember()->chain())
Fields.insert(cast<FieldDecl>(I));
}
const Expr *E = Init->getInit();
assert(E != nullptr);
getFieldsGlobalsAndFuncs(*E, Fields, Vars, Funcs);
}
// Add all fields mentioned in default member initializers.
for (const FieldDecl *F : CtorDecl->getParent()->fields())
if (const auto *I = F->getInClassInitializer())
getFieldsGlobalsAndFuncs(*I, Fields, Vars, Funcs);
}
getFieldsGlobalsAndFuncs(*FuncDecl->getBody(), Fields, Vars, Funcs);
ReferencedDecls Referenced = getReferencedDecls(*FuncDecl);

// These have to be added before the lines that follow to ensure that
// `create*` work correctly for structs.
DACtx->addModeledFields(Fields);
DACtx->addModeledFields(Referenced.Fields);

for (const VarDecl *D : Vars) {
for (const VarDecl *D : Referenced.Globals) {
if (getStorageLocation(*D) != nullptr)
continue;

@@ -694,7 +594,7 @@ void Environment::initFieldsGlobalsAndFuncs(const FunctionDecl *FuncDecl) {
setStorageLocation(*D, createObject(*D, nullptr));
}

for (const FunctionDecl *FD : Funcs) {
for (const FunctionDecl *FD : Referenced.Functions) {
if (getStorageLocation(*FD) != nullptr)
continue;
auto &Loc = createStorageLocation(*FD);
Expand Down Expand Up @@ -1354,64 +1254,6 @@ RecordStorageLocation *getBaseObjectLocation(const MemberExpr &ME,
return Env.get<RecordStorageLocation>(*Base);
}

std::vector<const FieldDecl *>
getFieldsForInitListExpr(const InitListExpr *InitList) {
const RecordDecl *RD = InitList->getType()->getAsRecordDecl();
assert(RD != nullptr);

std::vector<const FieldDecl *> Fields;

if (InitList->getType()->isUnionType()) {
Fields.push_back(InitList->getInitializedFieldInUnion());
return Fields;
}

// Unnamed bitfields are only used for padding and do not appear in
// `InitListExpr`'s inits. However, those fields do appear in `RecordDecl`'s
// field list, and we thus need to remove them before mapping inits to
// fields to avoid mapping inits to the wrongs fields.
llvm::copy_if(
RD->fields(), std::back_inserter(Fields),
[](const FieldDecl *Field) { return !Field->isUnnamedBitfield(); });
return Fields;
}

RecordInitListHelper::RecordInitListHelper(const InitListExpr *InitList) {
auto *RD = InitList->getType()->getAsCXXRecordDecl();
assert(RD != nullptr);

std::vector<const FieldDecl *> Fields = getFieldsForInitListExpr(InitList);
ArrayRef<Expr *> Inits = InitList->inits();

// Unions initialized with an empty initializer list need special treatment.
// For structs/classes initialized with an empty initializer list, Clang
// puts `ImplicitValueInitExpr`s in `InitListExpr::inits()`, but for unions,
// it doesn't do this -- so we create an `ImplicitValueInitExpr` ourselves.
SmallVector<Expr *> InitsForUnion;
if (InitList->getType()->isUnionType() && Inits.empty()) {
assert(Fields.size() == 1);
ImplicitValueInitForUnion.emplace(Fields.front()->getType());
InitsForUnion.push_back(&*ImplicitValueInitForUnion);
Inits = InitsForUnion;
}

size_t InitIdx = 0;

assert(Fields.size() + RD->getNumBases() == Inits.size());
for (const CXXBaseSpecifier &Base : RD->bases()) {
assert(InitIdx < Inits.size());
Expr *Init = Inits[InitIdx++];
BaseInits.emplace_back(&Base, Init);
}

assert(Fields.size() == Inits.size() - InitIdx);
for (const FieldDecl *Field : Fields) {
assert(InitIdx < Inits.size());
Expr *Init = Inits[InitIdx++];
FieldInits.emplace_back(Field, Init);
}
}

RecordValue &refreshRecordValue(RecordStorageLocation &Loc, Environment &Env) {
auto &NewVal = Env.create<RecordValue>(Loc);
Env.setValue(Loc, NewVal);
2 changes: 2 additions & 0 deletions clang/lib/Analysis/FlowSensitive/Transfer.cpp
@@ -20,7 +20,9 @@
#include "clang/AST/OperationKinds.h"
#include "clang/AST/Stmt.h"
#include "clang/AST/StmtVisitor.h"
#include "clang/Analysis/FlowSensitive/ASTOps.h"
#include "clang/Analysis/FlowSensitive/AdornedCFG.h"
#include "clang/Analysis/FlowSensitive/DataflowAnalysisContext.h"
#include "clang/Analysis/FlowSensitive/DataflowEnvironment.h"
#include "clang/Analysis/FlowSensitive/NoopAnalysis.h"
#include "clang/Analysis/FlowSensitive/RecordOps.h"
2 changes: 1 addition & 1 deletion clang/lib/Analysis/UnsafeBufferUsage.cpp
@@ -1114,7 +1114,7 @@ class UPCAddressofArraySubscriptGadget : public FixableGadget {
virtual DeclUseList getClaimedVarUseSites() const override {
const auto *ArraySubst = cast<ArraySubscriptExpr>(Node->getSubExpr());
const auto *DRE =
cast<DeclRefExpr>(ArraySubst->getBase()->IgnoreImpCasts());
cast<DeclRefExpr>(ArraySubst->getBase()->IgnoreParenImpCasts());
return {DRE};
}
};
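A hypothetical reproducer for the one-line change above: when the base of the subscript is parenthesized, getBase() yields a ParenExpr, which IgnoreImpCasts() alone would not strip before the cast to DeclRefExpr; IgnoreParenImpCasts() handles both parentheses and implicit casts. Something along these lines would exercise it under -Wunsafe-buffer-usage:

// The parentheses around `p` are the interesting part of this pattern.
void addressOfSubscript(int *p, unsigned long i) {
  int *q = &(p)[i];
  (void)q;
}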
6 changes: 3 additions & 3 deletions clang/lib/Basic/Cuda.cpp
@@ -86,7 +86,7 @@ static const CudaArchToStringMap arch_names[] = {
// clang-format off
{CudaArch::UNUSED, "", ""},
SM2(20, "compute_20"), SM2(21, "compute_20"), // Fermi
SM(30), SM(32), SM(35), SM(37), // Kepler
SM(30), {CudaArch::SM_32_, "sm_32", "compute_32"}, SM(35), SM(37), // Kepler
SM(50), SM(52), SM(53), // Maxwell
SM(60), SM(61), SM(62), // Pascal
SM(70), SM(72), // Volta
@@ -186,7 +186,7 @@ CudaVersion MinVersionForCudaArch(CudaArch A) {
case CudaArch::SM_20:
case CudaArch::SM_21:
case CudaArch::SM_30:
case CudaArch::SM_32:
case CudaArch::SM_32_:
case CudaArch::SM_35:
case CudaArch::SM_37:
case CudaArch::SM_50:
@@ -231,7 +231,7 @@ CudaVersion MaxVersionForCudaArch(CudaArch A) {
case CudaArch::SM_21:
return CudaVersion::CUDA_80;
case CudaArch::SM_30:
case CudaArch::SM_32:
case CudaArch::SM_32_:
return CudaVersion::CUDA_102;
case CudaArch::SM_35:
case CudaArch::SM_37:
2 changes: 1 addition & 1 deletion clang/lib/Basic/Targets/NVPTX.cpp
@@ -239,7 +239,7 @@ void NVPTXTargetInfo::getTargetDefines(const LangOptions &Opts,
return "210";
case CudaArch::SM_30:
return "300";
case CudaArch::SM_32:
case CudaArch::SM_32_:
return "320";
case CudaArch::SM_35:
return "350";
3 changes: 2 additions & 1 deletion clang/lib/Basic/Targets/RISCV.cpp
@@ -353,7 +353,8 @@ bool RISCVTargetInfo::handleTargetFeatures(std::vector<std::string> &Features,
if (ISAInfo->hasExtension("zfh") || ISAInfo->hasExtension("zhinx"))
HasLegalHalfType = true;

FastUnalignedAccess = llvm::is_contained(Features, "+fast-unaligned-access");
FastUnalignedAccess = llvm::is_contained(Features, "+unaligned-scalar-mem") &&
llvm::is_contained(Features, "+unaligned-vector-mem");

if (llvm::is_contained(Features, "+experimental"))
HasExperimental = true;
10 changes: 5 additions & 5 deletions clang/lib/Basic/Targets/SPIR.h
@@ -259,7 +259,7 @@ class LLVM_LIBRARY_VISIBILITY SPIR32TargetInfo : public SPIRTargetInfo {
SizeType = TargetInfo::UnsignedInt;
PtrDiffType = IntPtrType = TargetInfo::SignedInt;
resetDataLayout("e-p:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-"
"v96:128-v192:256-v256:256-v512:512-v1024:1024");
"v96:128-v192:256-v256:256-v512:512-v1024:1024-G1");
}

void getTargetDefines(const LangOptions &Opts,
@@ -276,7 +276,7 @@ class LLVM_LIBRARY_VISIBILITY SPIR64TargetInfo : public SPIRTargetInfo {
SizeType = TargetInfo::UnsignedLong;
PtrDiffType = IntPtrType = TargetInfo::SignedLong;
resetDataLayout("e-i64:64-v16:16-v24:32-v32:32-v48:64-"
"v96:128-v192:256-v256:256-v512:512-v1024:1024");
"v96:128-v192:256-v256:256-v512:512-v1024:1024-G1");
}

void getTargetDefines(const LangOptions &Opts,
@@ -315,7 +315,7 @@ class LLVM_LIBRARY_VISIBILITY SPIRVTargetInfo : public BaseSPIRVTargetInfo {
// SPIR-V IDs are represented with a single 32-bit word.
SizeType = TargetInfo::UnsignedInt;
resetDataLayout("e-i64:64-v16:16-v24:32-v32:32-v48:64-"
"v96:128-v192:256-v256:256-v512:512-v1024:1024");
"v96:128-v192:256-v256:256-v512:512-v1024:1024-G1");
}

void getTargetDefines(const LangOptions &Opts,
@@ -336,7 +336,7 @@ class LLVM_LIBRARY_VISIBILITY SPIRV32TargetInfo : public BaseSPIRVTargetInfo {
SizeType = TargetInfo::UnsignedInt;
PtrDiffType = IntPtrType = TargetInfo::SignedInt;
resetDataLayout("e-p:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-"
"v96:128-v192:256-v256:256-v512:512-v1024:1024");
"v96:128-v192:256-v256:256-v512:512-v1024:1024-G1");
}

void getTargetDefines(const LangOptions &Opts,
@@ -357,7 +357,7 @@ class LLVM_LIBRARY_VISIBILITY SPIRV64TargetInfo : public BaseSPIRVTargetInfo {
SizeType = TargetInfo::UnsignedLong;
PtrDiffType = IntPtrType = TargetInfo::SignedLong;
resetDataLayout("e-i64:64-v16:16-v24:32-v32:32-v48:64-"
"v96:128-v192:256-v256:256-v512:512-v1024:1024");
"v96:128-v192:256-v256:256-v512:512-v1024:1024-G1");
}

void getTargetDefines(const LangOptions &Opts,
9 changes: 9 additions & 0 deletions clang/lib/CodeGen/CGBuiltin.cpp
@@ -3436,6 +3436,15 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl GD, unsigned BuiltinID,
Builder.CreateAssumption(ConstantInt::getTrue(getLLVMContext()), {OBD});
return RValue::get(nullptr);
}
case Builtin::BI__builtin_allow_runtime_check: {
StringRef Kind =
cast<StringLiteral>(E->getArg(0)->IgnoreParenCasts())->getString();
LLVMContext &Ctx = CGM.getLLVMContext();
llvm::Value *Allow = Builder.CreateCall(
CGM.getIntrinsic(llvm::Intrinsic::allow_runtime_check),
llvm::MetadataAsValue::get(Ctx, llvm::MDString::get(Ctx, Kind)));
return RValue::get(Allow);
}
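// (Illustration, not from the patch.) At the source level the builtin guards
// optional runtime checks, e.g.
//   if (__builtin_allow_runtime_check("expensive-check"))
//     run_expensive_check();
// and the code above lowers each such call to the llvm.allow.runtime.check
// intrinsic, with the check kind attached as metadata.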
case Builtin::BI__arithmetic_fence: {
// Create the builtin call if FastMath is selected, and the target
// supports the builtin, otherwise just return the argument.
16 changes: 7 additions & 9 deletions clang/lib/CodeGen/CGCall.cpp
@@ -4124,8 +4124,7 @@ static bool isProvablyNull(llvm::Value *addr) {
}

static bool isProvablyNonNull(Address Addr, CodeGenFunction &CGF) {
return llvm::isKnownNonZero(Addr.getBasePointer(), /*Depth=*/0,
CGF.CGM.getDataLayout());
return llvm::isKnownNonZero(Addr.getBasePointer(), CGF.CGM.getDataLayout());
}

/// Emit the actual writing-back of a writeback.
@@ -4694,11 +4693,11 @@ void CodeGenFunction::EmitCallArg(CallArgList &args, const Expr *E,
AggValueSlot Slot = args.isUsingInAlloca()
? createPlaceholderSlot(*this, type) : CreateAggTemp(type, "agg.tmp");

bool DestroyedInCallee = true, NeedsCleanup = true;
bool DestroyedInCallee = true, NeedsEHCleanup = true;
if (const auto *RD = type->getAsCXXRecordDecl())
DestroyedInCallee = RD->hasNonTrivialDestructor();
else
NeedsCleanup = type.isDestructedType();
NeedsEHCleanup = needsEHCleanup(type.isDestructedType());

if (DestroyedInCallee)
Slot.setExternallyDestructed();
@@ -4707,15 +4706,14 @@ void CodeGenFunction::EmitCallArg(CallArgList &args, const Expr *E,
RValue RV = Slot.asRValue();
args.add(RV, type);

if (DestroyedInCallee && NeedsCleanup) {
if (DestroyedInCallee && NeedsEHCleanup) {
// Create a no-op GEP between the placeholder and the cleanup so we can
// RAUW it successfully. It also serves as a marker of the first
// instruction where the cleanup is active.
pushFullExprCleanup<DestroyUnpassedArg>(NormalAndEHCleanup,
Slot.getAddress(), type);
pushFullExprCleanup<DestroyUnpassedArg>(EHCleanup, Slot.getAddress(),
type);
// This unreachable is a temporary marker which will be removed later.
llvm::Instruction *IsActive =
Builder.CreateFlagLoad(llvm::Constant::getNullValue(Int8PtrTy));
llvm::Instruction *IsActive = Builder.CreateUnreachable();
args.addArgCleanupDeactivation(EHStack.stable_begin(), IsActive);
}
return;