[flang] Added fir.dummy_scope operation to preserve dummy arguments association. #90642

Merged: 5 commits into llvm:main on May 1, 2024

Conversation

vzakhari
Contributor

The new operation produces an abstract handle that is attached to the
[hl]fir.declare operations of the dummy arguments of a subroutine.
Dummy arguments of the same subroutine refer to the same fir.dummy_scope,
so FIR AliasAnalysis can recognize them as such.
Note that the fir.dummy_scope must be specific to a runtime instantiation
of a subroutine, so any MLIR inlining/cloning must duplicate the operation
and keep it unique, rather than reusing the same fir.dummy_scope for
different runtime instantiations. This is why it is an operation rather
than an attribute.
The new operation uses a write effect on DebuggingResource, same as
[hl]fir.declare, to prevent it from being optimized away.
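
The lowering changes are not part of this patch, but a minimal sketch of how a lowering client could wire this up is shown below. It only relies on the builder signatures touched by this diff (the new `fir::DummyScopeOp` and the extra `dummy_scope` operand of `fir::DeclareOp`); the helper name and placeholder values are hypothetical.

```cpp
#include "flang/Optimizer/Dialect/FIRAttr.h"
#include "flang/Optimizer/Dialect/FIROps.h"
#include "llvm/ADT/STLExtras.h"
#include "mlir/IR/Builders.h"

// Hypothetical sketch: create one fir.dummy_scope per runtime instance of a
// subroutine and feed the same handle to the fir.declare of every dummy
// argument, so FIR AliasAnalysis can later group those declarations.
static void declareDummyArgs(mlir::OpBuilder &builder, mlir::Location loc,
                             llvm::ArrayRef<mlir::Value> dummyAddrs,
                             llvm::ArrayRef<mlir::StringAttr> uniqNames) {
  mlir::Value scope =
      builder.create<fir::DummyScopeOp>(loc, builder.getI1Type());
  for (auto [addr, name] : llvm::zip(dummyAddrs, uniqNames))
    builder.create<fir::DeclareOp>(
        loc, addr.getType(), addr,
        /*shape=*/mlir::Value{}, /*typeparams=*/std::nullopt,
        /*dummy_scope=*/scope, name,
        /*fortran_attrs=*/fir::FortranVariableFlagsAttr{},
        /*cuda_attr=*/fir::CUDADataAttributeAttr{});
}
```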

@llvmbot added the flang (Flang issues not falling into any other category), flang:fir-hlfir, and flang:codegen labels on Apr 30, 2024
@llvmbot (Collaborator) commented Apr 30, 2024

@llvm/pr-subscribers-flang-codegen

Author: Slava Zakharin (vzakhari)

Full diff: https://github.com/llvm/llvm-project/pull/90642.diff

10 Files Affected:

  • (modified) flang/include/flang/Optimizer/Dialect/FIROps.td (+83-1)
  • (modified) flang/include/flang/Optimizer/HLFIR/HLFIROps.td (+3-1)
  • (modified) flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp (+17-1)
  • (modified) flang/lib/Optimizer/HLFIR/IR/HLFIROps.cpp (+2-1)
  • (modified) flang/lib/Optimizer/HLFIR/Transforms/ConvertToFIR.cpp (+2-2)
  • (added) flang/test/Fir/dummy-scope-codegen.fir (+9)
  • (added) flang/test/Fir/dummy_scope.fir (+34)
  • (modified) flang/test/HLFIR/declare-codegen.fir (+10)
  • (added) flang/test/HLFIR/dummy_scope.fir (+34)
  • (modified) flang/unittests/Optimizer/FortranVariableTest.cpp (+5-4)
diff --git a/flang/include/flang/Optimizer/Dialect/FIROps.td b/flang/include/flang/Optimizer/Dialect/FIROps.td
index 496193e25cab61..95cce67137890f 100644
--- a/flang/include/flang/Optimizer/Dialect/FIROps.td
+++ b/flang/include/flang/Optimizer/Dialect/FIROps.td
@@ -3074,6 +3074,7 @@ def fir_DeclareOp : fir_Op<"declare", [AttrSizedOperandSegments,
     AnyRefOrBox:$memref,
     Optional<AnyShapeOrShiftType>:$shape,
     Variadic<AnyIntegerType>:$typeparams,
+    Optional<I1>:$dummy_scope,
     Builtin_StringAttr:$uniq_name,
     OptionalAttr<fir_FortranVariableFlagsAttr>:$fortran_attrs,
     OptionalAttr<fir_CUDADataAttributeAttr>:$cuda_attr
@@ -3083,7 +3084,8 @@ def fir_DeclareOp : fir_Op<"declare", [AttrSizedOperandSegments,
 
   let assemblyFormat = [{
     $memref (`(` $shape^ `)`)? (`typeparams` $typeparams^)?
-     attr-dict `:` functional-type(operands, results)
+    (`dummy_scope` $dummy_scope^)?
+    attr-dict `:` functional-type(operands, results)
   }];
 
   let hasVerifier = 1;
@@ -3247,4 +3249,84 @@ def fir_CUDADeallocateOp : fir_Op<"cuda_deallocate",
   let hasVerifier = 1;
 }
 
+def fir_DummyScopeOp : fir_Op<"dummy_scope",
+    [MemoryEffects<[MemWrite<DebuggingResource>]>]> {
+  let summary = "Define a scope for dummy arguments";
+
+  let description = [{
+    An abstract handle used to associate dummy arguments of the same
+    subroutine with each other. During lowering, all [hl]fir.declare
+    operations representing declarations of dummy arguments of a subroutine
+    use the result of this operation. This allows the references to these
+    dummy arguments to be recognized as belonging to the same runtime
+    instance of the subroutine even after MLIR inlining. Thus, the Fortran
+    aliasing rules may be applied to those references based on the original
+    declarations of the dummy arguments.
+    For example:
+    ```
+      subroutine test(x, y)
+        real, target :: x, y
+        x = y ! may alias
+        call inner(x, y)
+      contains
+        subroutine inner(x, y)
+          real :: x, y
+          x = y ! may not alias
+        end subroutine inner
+      end subroutine test
+    ```
+    After MLIR inlining, this may look like:
+    ```
+      func.func @_QPtest(
+          %arg0: !fir.ref<f32> {fir.target},
+          %arg1: !fir.ref<f32> {fir.target}) {
+        %0 = fir.declare %arg0 {fortran_attrs = #fir.var_attrs<target>} :
+            (!fir.ref<f32>) -> !fir.ref<f32>
+        %1 = fir.declare %arg1 {fortran_attrs = #fir.var_attrs<target>} :
+            (!fir.ref<f32>) -> !fir.ref<f32>
+        %2 = fir.load %1 : !fir.ref<f32>
+        fir.store %2 to %0 : !fir.ref<f32>
+        %3 = fir.declare %0 : (!fir.ref<f32>) -> !fir.ref<f32>
+        %4 = fir.declare %1 : (!fir.ref<f32>) -> !fir.ref<f32>
+        %5 = fir.load %4 : !fir.ref<f32>
+        fir.store %5 to %3 : !fir.ref<f32>
+        return
+      }
+    ```
+    Without marking %3 and %4 as declaring the dummy arguments
+    of the same runtime instance of the `inner` subroutine, FIR
+    AliasAnalysis cannot deduce non-aliasing for the second load/store pair.
+    This information may be preserved by using the fir.dummy_scope operation:
+    ```
+      func.func @_QPtest(
+          %arg0: !fir.ref<f32> {fir.target},
+          %arg1: !fir.ref<f32> {fir.target}) {
+        %h1 = fir.dummy_scope : i1
+        %0 = fir.declare %arg0 dummy_scope %h1
+            {fortran_attrs = #fir.var_attrs<target>} :
+            (!fir.ref<f32>, i1) -> !fir.ref<f32>
+        %1 = fir.declare %arg1 dummy_scope %h1
+            {fortran_attrs = #fir.var_attrs<target>} :
+            (!fir.ref<f32>, i1) -> !fir.ref<f32>
+        %2 = fir.load %1 : !fir.ref<f32>
+        fir.store %2 to %0 : !fir.ref<f32>
+        %h2 = fir.dummy_scope : i1
+        %3 = fir.declare %0 dummy_scope %h2 : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+        %4 = fir.declare %1 dummy_scope %h2 : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+        %5 = fir.load %4 : !fir.ref<f32>
+        fir.store %5 to %3 : !fir.ref<f32>
+        return
+      }
+    ```
+    Note that if `inner` is called and inlined twice inside
+    `test`, the two inlined instances of `inner` must use two different
+    fir.dummy_scope operations for their fir.declare ops. These two
+    fir.dummy_scope operations must remain distinct throughout the optimizations.
+    This is guaranteed by the write memory effect on the DebuggingResource.
+  }];
+
+  let results = (outs I1);
+  let assemblyFormat = "attr-dict `:` type(results)";
+}
+
 #endif
diff --git a/flang/include/flang/Optimizer/HLFIR/HLFIROps.td b/flang/include/flang/Optimizer/HLFIR/HLFIROps.td
index 743a6c98ec1a03..be0bb770a3e171 100644
--- a/flang/include/flang/Optimizer/HLFIR/HLFIROps.td
+++ b/flang/include/flang/Optimizer/HLFIR/HLFIROps.td
@@ -87,6 +87,7 @@ def hlfir_DeclareOp : hlfir_Op<"declare", [AttrSizedOperandSegments,
     AnyRefOrBox:$memref,
     Optional<AnyShapeOrShiftType>:$shape,
     Variadic<AnyIntegerType>:$typeparams,
+    Optional<I1>:$dummy_scope,
     Builtin_StringAttr:$uniq_name,
     OptionalAttr<fir_FortranVariableFlagsAttr>:$fortran_attrs,
     OptionalAttr<fir_CUDADataAttributeAttr>:$cuda_attr
@@ -96,7 +97,8 @@ def hlfir_DeclareOp : hlfir_Op<"declare", [AttrSizedOperandSegments,
 
   let assemblyFormat = [{
     $memref (`(` $shape^ `)`)? (`typeparams` $typeparams^)?
-     attr-dict `:` functional-type(operands, results)
+    (`dummy_scope` $dummy_scope^)?
+    attr-dict `:` functional-type(operands, results)
   }];
 
   let builders = [
diff --git a/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp b/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp
index ce7ee22d5d7744..5bd3ec8d18450e 100644
--- a/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp
+++ b/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp
@@ -281,6 +281,20 @@ class DeclareOpConversion : public mlir::OpRewritePattern<fir::DeclareOp> {
   }
 };
 
+class DummyScopeOpConversion
+    : public mlir::OpRewritePattern<fir::DummyScopeOp> {
+public:
+  using OpRewritePattern::OpRewritePattern;
+
+  mlir::LogicalResult
+  matchAndRewrite(fir::DummyScopeOp dummyScopeOp,
+                  mlir::PatternRewriter &rewriter) const override {
+    rewriter.replaceOpWithNewOp<fir::UndefOp>(dummyScopeOp,
+                                              dummyScopeOp.getType());
+    return mlir::success();
+  }
+};
+
 class CodeGenRewrite : public fir::impl::CodeGenRewriteBase<CodeGenRewrite> {
 public:
   void runOnOperation() override final {
@@ -293,6 +307,7 @@ class CodeGenRewrite : public fir::impl::CodeGenRewriteBase<CodeGenRewrite> {
     target.addIllegalOp<fir::ArrayCoorOp>();
     target.addIllegalOp<fir::ReboxOp>();
     target.addIllegalOp<fir::DeclareOp>();
+    target.addIllegalOp<fir::DummyScopeOp>();
     target.addDynamicallyLegalOp<fir::EmboxOp>([](fir::EmboxOp embox) {
       return !(embox.getShape() ||
                mlir::isa<fir::SequenceType>(
@@ -321,5 +336,6 @@ std::unique_ptr<mlir::Pass> fir::createFirCodeGenRewritePass() {
 
 void fir::populatePreCGRewritePatterns(mlir::RewritePatternSet &patterns) {
   patterns.insert<EmboxConversion, ArrayCoorConversion, ReboxConversion,
-                  DeclareOpConversion>(patterns.getContext());
+                  DeclareOpConversion, DummyScopeOpConversion>(
+      patterns.getContext());
 }
diff --git a/flang/lib/Optimizer/HLFIR/IR/HLFIROps.cpp b/flang/lib/Optimizer/HLFIR/IR/HLFIROps.cpp
index 0d62ca4954e6bf..4b586ad1d3a4af 100644
--- a/flang/lib/Optimizer/HLFIR/IR/HLFIROps.cpp
+++ b/flang/lib/Optimizer/HLFIR/IR/HLFIROps.cpp
@@ -133,7 +133,8 @@ void hlfir::DeclareOp::build(mlir::OpBuilder &builder,
   mlir::Type hlfirVariableType =
       getHLFIRVariableType(inputType, hasExplicitLbs);
   build(builder, result, {hlfirVariableType, inputType}, memref, shape,
-        typeparams, nameAttr, fortran_attrs, cuda_attr);
+        typeparams, /*dummy_scope=*/nullptr, nameAttr, fortran_attrs,
+        cuda_attr);
 }
 
 mlir::LogicalResult hlfir::DeclareOp::verify() {
diff --git a/flang/lib/Optimizer/HLFIR/Transforms/ConvertToFIR.cpp b/flang/lib/Optimizer/HLFIR/Transforms/ConvertToFIR.cpp
index 517285dce133de..3570e0011ca7e6 100644
--- a/flang/lib/Optimizer/HLFIR/Transforms/ConvertToFIR.cpp
+++ b/flang/lib/Optimizer/HLFIR/Transforms/ConvertToFIR.cpp
@@ -328,8 +328,8 @@ class DeclareOpConversion : public mlir::OpRewritePattern<hlfir::DeclareOp> {
       cudaAttr = fir::CUDADataAttributeAttr::get(rewriter.getContext(), *attr);
     auto firDeclareOp = rewriter.create<fir::DeclareOp>(
         loc, memref.getType(), memref, declareOp.getShape(),
-        declareOp.getTypeparams(), declareOp.getUniqName(), fortranAttrs,
-        cudaAttr);
+        declareOp.getTypeparams(), declareOp.getDummyScope(),
+        declareOp.getUniqName(), fortranAttrs, cudaAttr);
 
     // Propagate other attributes from hlfir.declare to fir.declare.
     // OpenACC's acc.declare is one example. Right now, the propagation
diff --git a/flang/test/Fir/dummy-scope-codegen.fir b/flang/test/Fir/dummy-scope-codegen.fir
new file mode 100644
index 00000000000000..d8bdd0a822a5f0
--- /dev/null
+++ b/flang/test/Fir/dummy-scope-codegen.fir
@@ -0,0 +1,9 @@
+// RUN: fir-opt --cg-rewrite %s -o - | FileCheck %s
+
+func.func @dummy_scope(%arg0: !fir.ref<f32>) {
+  %scope = fir.dummy_scope : i1
+  %0 = fir.declare %arg0 dummy_scope %scope {uniq_name = "x"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+  return
+}
+// CHECK-LABEL: func.func @dummy_scope(
+// CHECK-NEXT: return
diff --git a/flang/test/Fir/dummy_scope.fir b/flang/test/Fir/dummy_scope.fir
new file mode 100644
index 00000000000000..8def2759fb77d6
--- /dev/null
+++ b/flang/test/Fir/dummy_scope.fir
@@ -0,0 +1,34 @@
+// RUN: fir-opt %s | fir-opt | FileCheck %s
+// RUN: fir-opt %s | fir-opt -cse | FileCheck %s
+
+// CHECK-LABEL:   func.func @dummy_scope(
+// CHECK-SAME:                           %[[VAL_0:.*]]: !fir.ref<f32>) {
+// CHECK:           %[[VAL_1:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_2:.*]] = fir.declare %[[VAL_0]] dummy_scope %[[VAL_1]] {uniq_name = "x"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+// CHECK:           return
+// CHECK:         }
+func.func @dummy_scope(%arg0: !fir.ref<f32>) {
+  %scope = fir.dummy_scope : i1
+  %0 = fir.declare %arg0 dummy_scope %scope {uniq_name = "x"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+  return
+}
+
+// CHECK-LABEL:   func.func @dummy_scopes(
+// CHECK-SAME:                            %[[VAL_0:.*]]: !fir.ref<f32>) {
+// CHECK:           %[[VAL_1:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_2:.*]] = fir.declare %[[VAL_0]] dummy_scope %[[VAL_1]] {uniq_name = "x"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+// CHECK:           %[[VAL_3:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_4:.*]] = fir.declare %[[VAL_0]] dummy_scope %[[VAL_3]] {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+// CHECK:           %[[VAL_5:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_6:.*]] = fir.declare %[[VAL_0]] dummy_scope %[[VAL_5]] {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+// CHECK:           return
+// CHECK:         }
+func.func @dummy_scopes(%arg0: !fir.ref<f32>) {
+  %scope_out = fir.dummy_scope : i1
+  %0 = fir.declare %arg0 dummy_scope %scope_out {uniq_name = "x"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+  %scope_in1 = fir.dummy_scope : i1
+  %1 = fir.declare %arg0 dummy_scope %scope_in1 {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+  %scope_in2 = fir.dummy_scope : i1
+  %2 = fir.declare %arg0 dummy_scope %scope_in2 {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
+  return
+}
diff --git a/flang/test/HLFIR/declare-codegen.fir b/flang/test/HLFIR/declare-codegen.fir
index 3e80a52be45248..d079aa776a3a9c 100644
--- a/flang/test/HLFIR/declare-codegen.fir
+++ b/flang/test/HLFIR/declare-codegen.fir
@@ -200,3 +200,13 @@ func.func @test_optional_declare(%arg0: !fir.box<!fir.array<?xi32>>) {
 // CHECK:    %[[VAL_7:.*]] = fir.absent !fir.box<!fir.array<?xi32>>
 // CHECK:    fir.result %[[VAL_7]] : !fir.box<!fir.array<?xi32>>
 // CHECK:  }
+
+func.func @dummy_scope(%arg0: !fir.ref<f32>) {
+  %scope = fir.dummy_scope : i1
+  %0:2 = hlfir.declare %arg0 dummy_scope %scope {uniq_name = "x"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+  return
+}
+// CHECK-LABEL:   func.func @dummy_scope(
+// CHECK-SAME:    %[[VAL_0:.*]]: !fir.ref<f32>) {
+// CHECK:         %[[SCOPE:.*]] = fir.dummy_scope : i1
+// CHECK:         %[[VAL_1:.*]] = fir.declare %[[VAL_0]] dummy_scope %[[SCOPE]] {uniq_name = "x"} : (!fir.ref<f32>, i1) -> !fir.ref<f32>
diff --git a/flang/test/HLFIR/dummy_scope.fir b/flang/test/HLFIR/dummy_scope.fir
new file mode 100644
index 00000000000000..304ebccfbe3197
--- /dev/null
+++ b/flang/test/HLFIR/dummy_scope.fir
@@ -0,0 +1,34 @@
+// RUN: fir-opt %s | fir-opt | FileCheck %s
+// RUN: fir-opt %s | fir-opt -cse | FileCheck %s
+
+// CHECK-LABEL:   func.func @dummy_scope(
+// CHECK-SAME:                           %[[VAL_0:.*]]: !fir.ref<f32>) {
+// CHECK:           %[[VAL_1:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_2:.*]]:2 = hlfir.declare %[[VAL_0]] dummy_scope %[[VAL_1]] {uniq_name = "x"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+// CHECK:           return
+// CHECK:         }
+func.func @dummy_scope(%arg0: !fir.ref<f32>) {
+  %scope = fir.dummy_scope : i1
+  %0:2 = hlfir.declare %arg0 dummy_scope %scope {uniq_name = "x"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+  return
+}
+
+// CHECK-LABEL:   func.func @dummy_scopes(
+// CHECK-SAME:                            %[[VAL_0:.*]]: !fir.ref<f32>) {
+// CHECK:           %[[VAL_1:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_2:.*]]:2 = hlfir.declare %[[VAL_0]] dummy_scope %[[VAL_1]] {uniq_name = "x"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+// CHECK:           %[[VAL_3:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_4:.*]]:2 = hlfir.declare %[[VAL_0]] dummy_scope %[[VAL_3]] {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+// CHECK:           %[[VAL_5:.*]] = fir.dummy_scope : i1
+// CHECK:           %[[VAL_6:.*]]:2 = hlfir.declare %[[VAL_0]] dummy_scope %[[VAL_5]] {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+// CHECK:           return
+// CHECK:         }
+func.func @dummy_scopes(%arg0: !fir.ref<f32>) {
+  %scope_out = fir.dummy_scope : i1
+  %0:2 = hlfir.declare %arg0 dummy_scope %scope_out {uniq_name = "x"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+  %scope_in1 = fir.dummy_scope : i1
+  %1:2 = hlfir.declare %arg0 dummy_scope %scope_in1 {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+  %scope_in2 = fir.dummy_scope : i1
+  %2:2 = hlfir.declare %arg0 dummy_scope %scope_in2 {uniq_name = "innerEx"} : (!fir.ref<f32>, i1) -> (!fir.ref<f32>, !fir.ref<f32>)
+  return
+}
diff --git a/flang/unittests/Optimizer/FortranVariableTest.cpp b/flang/unittests/Optimizer/FortranVariableTest.cpp
index 790f735a6cf29e..f5f559ef887c85 100644
--- a/flang/unittests/Optimizer/FortranVariableTest.cpp
+++ b/flang/unittests/Optimizer/FortranVariableTest.cpp
@@ -48,7 +48,8 @@ TEST_F(FortranVariableTest, SimpleScalar) {
   mlir::Value addr = builder->create<fir::AllocaOp>(loc, eleType);
   auto name = mlir::StringAttr::get(&context, "x");
   auto declare = builder->create<fir::DeclareOp>(loc, addr.getType(), addr,
-      /*shape=*/mlir::Value{}, /*typeParams=*/std::nullopt, name,
+      /*shape=*/mlir::Value{}, /*typeParams=*/std::nullopt,
+      /*dummy_scope=*/nullptr, name,
       /*fortran_attrs=*/fir::FortranVariableFlagsAttr{},
       /*cuda_attr=*/fir::CUDADataAttributeAttr{});
 
@@ -74,7 +75,7 @@ TEST_F(FortranVariableTest, CharacterScalar) {
       loc, eleType, /*pinned=*/false, typeParams);
   auto name = mlir::StringAttr::get(&context, "x");
   auto declare = builder->create<fir::DeclareOp>(loc, addr.getType(), addr,
-      /*shape=*/mlir::Value{}, typeParams, name,
+      /*shape=*/mlir::Value{}, typeParams, /*dummy_scope=*/nullptr, name,
       /*fortran_attrs=*/fir::FortranVariableFlagsAttr{},
       /*cuda_attr=*/fir::CUDADataAttributeAttr{});
 
@@ -105,7 +106,7 @@ TEST_F(FortranVariableTest, SimpleArray) {
   mlir::Value shape = createShape(extents);
   auto name = mlir::StringAttr::get(&context, "x");
   auto declare = builder->create<fir::DeclareOp>(loc, addr.getType(), addr,
-      shape, /*typeParams*/ std::nullopt, name,
+      shape, /*typeParams*/ std::nullopt, /*dummy_scope=*/nullptr, name,
       /*fortran_attrs=*/fir::FortranVariableFlagsAttr{},
       /*cuda_attr=*/fir::CUDADataAttributeAttr{});
 
@@ -136,7 +137,7 @@ TEST_F(FortranVariableTest, CharacterArray) {
   mlir::Value shape = createShape(extents);
   auto name = mlir::StringAttr::get(&context, "x");
   auto declare = builder->create<fir::DeclareOp>(loc, addr.getType(), addr,
-      shape, typeParams, name,
+      shape, typeParams, /*dummy_scope=*/nullptr, name,
       /*fortran_attrs=*/fir::FortranVariableFlagsAttr{},
       /*cuda_attr=*/fir::CUDADataAttributeAttr{});
 

@vzakhari
Copy link
Contributor Author

The lowering, AliasAnalysis, and TBAA-tag attachment pass changes will come in separate PRs.
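
The AliasAnalysis change is therefore not shown here; the sketch below only illustrates how an analysis could consume the new operand. It assumes the generated `getDummyScope()` accessor on `fir::DeclareOp` (the hlfir equivalent is already used in this diff); the helper name is hypothetical.

```cpp
#include "flang/Optimizer/Dialect/FIROps.h"

// Hypothetical helper: two dummy-argument declarations may be treated as
// belonging to the same runtime instance of a subroutine only when both
// reference the same fir.dummy_scope value. Declarations without a scope
// (e.g. local variables) are conservatively treated as unrelated.
static bool inSameDummyScope(fir::DeclareOp a, fir::DeclareOp b) {
  mlir::Value scopeA = a.getDummyScope();
  mlir::Value scopeB = b.getDummyScope();
  return scopeA && scopeB && scopeA == scopeB;
}
```

Because inlining clones each fir.dummy_scope, two inlined instances of the same subroutine end up with distinct scope values, which is exactly what a check like the one above relies on.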

@@ -3074,6 +3074,7 @@ def fir_DeclareOp : fir_Op<"declare", [AttrSizedOperandSegments,
AnyRefOrBox:$memref,
Optional<AnyShapeOrShiftType>:$shape,
Variadic<AnyIntegerType>:$typeparams,
Optional<I1>:$dummy_scope,
Contributor

Is there a reason it's an I1 type? Could it be its own type, like what is done for the async token, for example?

Contributor Author

I do not have a strong preference for the result type, so I picked an existing one. A dedicated type is probably more robust, e.g. if some code tries to use the dummy-scope op's result in arithmetic, it will be an error. Do you think a dedicated type would be better?

Contributor

I don't have a strong opinion either. If it's not too complicated to add a dedicated type, I think it would make this nicer, but I can live with the I1 too.

Contributor

@clementval left a comment

This looks overall OK to me. Just a question about the result type of the op.

Contributor

@clementval left a comment

LGTM! Thanks for adding the specific type, Slava!

@vzakhari merged commit 986f832 into llvm:main on May 1, 2024
4 of 5 checks passed