
Aten dialect #16

Merged 1 commit into llvm:master on Aug 13, 2020

Conversation

@stephenneuendorffer (Contributor):

Add the Aten Dialect.

Co-authored-by: Jeff Fifield jefff@xilinx.com

@stellaraccident (Collaborator) left a comment:

Thanks for this contribution - as we've discussed, getting this set of things in-tree/public is the priority. I've made a lot of nit/syntax/etc comments that it would be good to address in a followup/cleanup (I don't want to disrupt the "make it work" mode you are in now).

Review comments on:
  include/npcomp/Dialect/ATen/ATen.td
  include/npcomp/Dialect/ATen/ATenDialect.h
  include/npcomp/Dialect/ATen/ATenOpInterface.td (outdated)
  lib/Dialect/ATen/ATenLoweringPass.cpp
  lib/Dialect/ATen/ATenPasses.cpp
  lib/Dialect/ATen/LivenessReport.cpp (outdated)
  lib/Dialect/ATen/ReturnEliminationPass.cpp
@silvasean (Contributor):

There's a lot more in this PR than just a dialect definition, it seems. Can you update the commit messages to make it clearer what functionality is included?

@silvasean (Contributor):

Specifically, I see at least:

  • aten-to-std pass
  • statistics interface
  • return elimination pass
  • some sort of layer name thing
  • liveness report
  • some other report

@silvasean (Contributor) left a comment:

Just a few more comments; adding TODOs to the code is fine to address them.

Review comments on:
  include/npcomp/Dialect/ATen/ATen.td
  include/npcomp/Dialect/ATen/ATenDialect.h
  lib/Dialect/ATen/ReturnEliminationPass.cpp
  test/Dialect/ATen/lenet_fwd.mlir
  include/npcomp/Dialect/ATen/ATenOpStatisticsUtils.h (outdated)
@stephenneuendorffer (Contributor, Author):

Specifically, I see at least:

  • aten-to-std pass
  • statistics interface
  • return elimination pass
  • some sort of layer name thing
  • liveness report
  • some other report

Yes, there are some lowering and report passes here; I'll add better descriptions of them. The short answer is that there is also a bunch of pytorch stuff still to come, which intercepts function calls through the pytorch jit and generates MLIR in this dialect, then lowers the MLIR into function calls through AddLayerNames -> aten-to-std -> ReturnElimination -> std-to-llvm. The result is then jitted using the LLVM jit and linked against a runtime library which makes calls back into pytorch to implement all the layers. So this is a basic pipecleaning flow that integrates with pytorch, very much in the style of pytorch_xla.

The reports are intended to support more interesting optimizations.
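The ordering of the lowering chain described above can be sketched abstractly. The following is purely illustrative (the pass names come from this discussion, but the dict-based "module" and all field names are hypothetical stand-ins, not the actual npcomp implementation):

```python
# Conceptual sketch of the AddLayerNames -> aten-to-std -> ReturnElimination
# chain described above. A dict stands in for an MLIR module.

def aten_layer_name(module):
    # Attach a synthetic layer name to each op (pytorch doesn't provide one).
    for i, op in enumerate(module["ops"]):
        op["layer_name"] = f"L{i}-{op['name']}"
    return module

def aten_to_std(module):
    # Rewrite each ATen op as a standard-dialect call into a runtime library.
    for op in module["ops"]:
        op["lowered_to"] = f"call @aten_{op['name']}"
    return module

def return_elimination(module):
    # Convert the toplevel function to pass results by reference,
    # simplifying the pytorch integration layer.
    module["returns_by_reference"] = True
    return module

def run_pipeline(module):
    # Order matters: names first, then lowering, then the ABI change.
    for p in (aten_layer_name, aten_to_std, return_elimination):
        module = p(module)
    return module

module = {"ops": [{"name": "convolution"}, {"name": "relu"}]}
module = run_pipeline(module)
print(module["ops"][0]["lowered_to"])   # call @aten_convolution
print(module["returns_by_reference"])   # True
```

After this point the real flow hands the lowered module to std-to-llvm and the LLVM jit, which is outside the scope of this sketch.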

Review comments on:
  include/npcomp/Dialect/ATen/ATen.td
  include/npcomp/Dialect/ATen/ATenDialect.h
// The pytorch convolution operator has 9 arguments, but we only have a jit
// library that supports the first six at the moment.
def : Pat<(aten_ConvolutionOverrideableOp $a1, $a2, $a3, $a4, $a5, $a6,
$a7, $a8, $a9),
Contributor:

Formatting/Indentation?

Contributor (Author):

How would you format this?

Review comments on:
  include/npcomp/Dialect/ATen/ATenToStd.td (outdated)
  include/npcomp/Dialect/ATen/CMakeLists.txt (outdated)
  include/npcomp/Dialect/ATen/LivenessReport.h
  lib/Dialect/ATen/ATenLoweringPass.cpp (outdated)
This patch adds a dialect intended to be used as a frontend dialect
to facilitate lowering from "A Tensor Library" in torch/pytorch.

This patch includes several passes that are useful in conjunction with the
dialect:

--aten-layer-name: generate layer names for each operation, which are not
  present in the original pytorch.
--aten-to-std: lower the ATen dialect into standard dialect function calls.
--return-elimination-pass: convert functions (primarily the toplevel function)
  to pass return values by reference. This simplifies pytorch integration.
--aten-op-report: generate a textual report about the model.
--liveness-report: generate a liveness report for the model.

Future patches will implement actual integration with the pytorch jit to
intercept function calls and generate MLIR in this dialect, then lower the
resulting MLIR into function calls through aten-layer-name -> aten-to-std ->
return-elimination -> std-to-llvm. The result would then be jitted using the
LLVM jit and linked against a runtime library which makes calls back into
pytorch to implement all the layers.

Co-authored-by: Jeff Fifield <jeff.fifield@xilinx.com>
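The --aten-op-report pass described in the commit message produces a textual summary of the model. A minimal sketch of that style of statistics pass follows; the op names and the JSON output format here are invented for illustration, not the pass's actual report format:

```python
# Illustrative sketch of an op-report pass: count how often each op kind
# appears in a model and emit a textual summary, in the spirit of the
# --aten-op-report pass described above.
from collections import Counter
import json

def op_report(ops):
    # ops is a flat list of op names, standing in for a walk over a module.
    counts = Counter(ops)
    return json.dumps(dict(counts), indent=2, sort_keys=True)

ops = ["aten.convolution", "aten.relu", "aten.convolution", "aten.max_pool2d"]
print(op_report(ops))
```

Reports like this give later optimization passes (and humans) a cheap overview of where the compute in a model actually is.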
@stephenneuendorffer stephenneuendorffer merged commit bb668e6 into llvm:master Aug 13, 2020
saeta added a commit to saeta/mlir-npcomp that referenced this pull request Dec 31, 2020
Committing this as a snapshot of progress, but this code organization
approach is not scalable.

Output:

Got a dialect for op %0 = rd.range %c1_i64 to %c3_i64 : (i64, i64) -> !rd.Dataset: rd
walkOp name stringref: 'rd.range'
Made a create fn:
llvm.func internal @__rd_create_foo_fix_me(%arg0: !llvm.ptr<struct<(i64, i64)>>) {
  %0 = llvm.mlir.constant(0 : index) : !llvm.i64
  %1 = llvm.mlir.constant(1 : index) : !llvm.i64
  %2 = llvm.getelementptr %arg0[%0, %0] : (!llvm.ptr<struct<(i64, i64)>>, !llvm.i64, !llvm.i64) -> !llvm.ptr<struct<(i64, i64)>>
  %3 = llvm.getelementptr %arg0[%0, %1] : (!llvm.ptr<struct<(i64, i64)>>, !llvm.i64, !llvm.i64) -> !llvm.ptr<struct<(i64, i64)>>
  %c1_i64 = constant 1 : i64
  %c3_i64 = constant 3 : i64
  llvm.store %2, %c1_i64 : i64
  llvm.store %3, %c3_i64 : i64
  return
}
Made a next function:
llvm.func internal @__rd_next_foo_fix_me(%arg0: !llvm.ptr<struct<(i64, i64)>>) -> !llvm.struct<(i1, i64)> {
  %0 = llvm.mlir.constant(0 : index) : !llvm.i64
  %1 = llvm.mlir.constant(1 : index) : !llvm.i64
  %2 = llvm.getelementptr %arg0[%0, %0] : (!llvm.ptr<struct<(i64, i64)>>, !llvm.i64, !llvm.i64) -> !llvm.ptr<struct<(i64, i64)>>
  %3 = llvm.getelementptr %arg0[%0, %1] : (!llvm.ptr<struct<(i64, i64)>>, !llvm.i64, !llvm.i64) -> !llvm.ptr<struct<(i64, i64)>>
  %4 = llvm.load %2 : !llvm.ptr<struct<(i64, i64)>>
  %5 = llvm.load %3 : !llvm.ptr<struct<(i64, i64)>>
  %6 = "llvm.add"(%4, %1) : (!llvm.struct<(i64, i64)>, !llvm.i64) -> !llvm.struct<(i64, i64)>
  %7 = llvm.icmp "ne" %4, %5 : !llvm.struct<(i64, i64)>
  llvm.store %2, %6 : !llvm.struct<(i64, i64)>
  return %7, %6 : !llvm.i1, !llvm.struct<(i64, i64)>
}
Did some sugary! Things now look like:
module  {
  func @main() {
    %c1_i64 = constant 1 : i64
    %c3_i64 = constant 3 : i64
    %0 = llvm.mlir.constant(1 : index) : !llvm.i64
    %1 = llvm.alloca %0 x !llvm.struct<(i64, i64)> : (!llvm.i64) -> !llvm.ptr<struct<(i64, i64)>>
    llvm.call @__rd_create_foo_fix_me(%1) : (!llvm.ptr<struct<(i64, i64)>>) -> ()
    %valid, %value = rd.iterator_next %1 : (!llvm.ptr<struct<(i64, i64)>>) -> (i1, i64)
    "rd.print"(%value) : (i64) -> ()
    return
  }
}

Walking users.... found: rd.iterator_next... MATCHING!
Walking users.... found: llvm.call... didn't match.
Did some more sugary! Things now look like:
module  {
  func @main() {
    %c1_i64 = constant 1 : i64
    %c3_i64 = constant 3 : i64
    %0 = llvm.mlir.constant(1 : index) : !llvm.i64
    %1 = llvm.alloca %0 x !llvm.struct<(i64, i64)> : (!llvm.i64) -> !llvm.ptr<struct<(i64, i64)>>
    llvm.call @__rd_create_foo_fix_me(%1) : (!llvm.ptr<struct<(i64, i64)>>) -> ()
    %2 = llvm.call @__rd_next_foo_fix_me(%1) : (!llvm.ptr<struct<(i64, i64)>>) -> !llvm.struct<(i1, i64)>
    %3 = llvm.extractvalue %2[0 : i32] : !llvm.struct<(i1, i64)>
    %4 = llvm.extractvalue %2[1 : i32] : !llvm.struct<(i1, i64)>
    "rd.print"(%4) : (!llvm.struct<(i1, i64)>) -> ()
    return
  }
}

Stack dump:
0.      Program arguments: /usr/local/google/home/saeta/src/mlir-npcomp/build/bin/npcomp-opt basic.mlir -rd-lower-to-llvm
 #0 0x00007f607c4110b3 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/lib/Support/Unix/Signals.inc:563:13
 #1 0x00007f607c40f330 llvm::sys::RunSignalHandlers() /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/lib/Support/Signals.cpp:72:18
 #2 0x00007f607c411575 SignalHandler(int) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/lib/Support/Unix/Signals.inc:0:3
 #3 0x00007f608108e140 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x14140)
 #4 0x00007f60804fe420 llvm::ilist_node_base<true>::isSentinel() const /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/include/llvm/ADT/ilist_node_base.h:45:36
 #5 0x00007f60804fe420 llvm::ilist_node_base<true>::isKnownSentinel() const /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/include/llvm/ADT/ilist_node_base.h:46:41
 #6 0x00007f60804fe420 llvm::ilist_iterator<llvm::ilist_detail::node_options<mlir::Operation, true, false, void>, false, false>::operator*() const /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/include/llvm/ADT/ilist_iterator.h:138:5
 #7 0x00007f60804fe420 llvm::early_inc_iterator_impl<llvm::ilist_iterator<llvm::ilist_detail::node_options<mlir::Operation, true, false, void>, false, false> >::operator*() /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/llvm/include/llvm/ADT/STLExtras.h:546:12
 #8 0x00007f60804fe420 mlir::detail::walk(mlir::Operation*, llvm::function_ref<void (mlir::Operation*)>) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/IR/Visitors.cpp:41:27
 #9 0x00007f60804fe43c mlir::detail::walk(mlir::Operation*, llvm::function_ref<void (mlir::Operation*)>) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/IR/Visitors.cpp:0:9
 #10 0x00007f6080e7fbf8 std::enable_if<(!(llvm::is_one_of<mlir::NPCOMP::rd::MakeIteratorOp, mlir::Operation*, mlir::Region*, mlir::Block*>::value)) && (std::is_same<void, void>::value), void>::type mlir::detail::walk<(anonymous namespace)::LowerToRuntimePass::runOnOperation()::'lambda'(mlir::NPCOMP::rd::MakeIteratorOp), mlir::NPCOMP::rd::MakeIteratorOp, void>(mlir::Operation*, (anonymous namespace)::LowerToRuntimePass::runOnOperation()::'lambda'(mlir::NPCOMP::rd::MakeIteratorOp)&&) /usr/local/google/home/saeta/src/mlir-npcomp/build/install-mlir/include/mlir/IR/Visitors.h:119:3
 #11 0x00007f6080e7fb90 void mlir::Operation::walk<(anonymous namespace)::LowerToRuntimePass::runOnOperation()::'lambda'(mlir::NPCOMP::rd::MakeIteratorOp), void>((anonymous namespace)::LowerToRuntimePass::runOnOperation()::'lambda'(mlir::NPCOMP::rd::MakeIteratorOp)&&) /usr/local/google/home/saeta/src/mlir-npcomp/build/install-mlir/include/mlir/IR/Operation.h:527:5
 #12 0x00007f6080e7fb03 void mlir::OpState::walk<(anonymous namespace)::LowerToRuntimePass::runOnOperation()::'lambda'(mlir::NPCOMP::rd::MakeIteratorOp), void>((anonymous namespace)::LowerToRuntimePass::runOnOperation()::'lambda'(mlir::NPCOMP::rd::MakeIteratorOp)&&) /usr/local/google/home/saeta/src/mlir-npcomp/build/install-mlir/include/mlir/IR/OpDefinition.h:178:5
 #13 0x00007f6080e7f876 (anonymous namespace)::LowerToRuntimePass::runOnOperation() /usr/local/google/home/saeta/src/mlir-npcomp/build/../lib/Dialect/RD/Transforms/LowerToLLVM.cpp:189:33
 #14 0x00007f6080522617 mlir::detail::OpToOpPassAdaptor::run(mlir::Pass*, mlir::Operation*, mlir::AnalysisManager, bool) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/Pass/Pass.cpp:0:11
 #15 0x00007f6080525917 mlir::failed(mlir::LogicalResult) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/include/mlir/Support/LogicalResult.h:47:23
 #16 0x00007f6080525917 mlir::detail::OpToOpPassAdaptor::runPipeline(llvm::iterator_range<llvm::pointee_iterator<std::unique_ptr<mlir::Pass, std::default_delete<mlir::Pass> >*, mlir::Pass> >, mlir::Operation*, mlir::AnalysisManager, bool) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/Pass/Pass.cpp:402:9
 #17 0x00007f6080525917 mlir::PassManager::run(mlir::Operation*) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/Pass/Pass.cpp:817:13
 #18 0x00007f608055b69f mlir::failed(mlir::LogicalResult) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/include/mlir/Support/LogicalResult.h:47:23
 #19 0x00007f608055b69f performActions(llvm::raw_ostream&, bool, bool, llvm::SourceMgr&, mlir::MLIRContext*, mlir::PassPipelineCLParser const&) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/Support/MlirOptMain.cpp:75:7
 #20 0x00007f608055a26d processBuffer(llvm::raw_ostream&, std::unique_ptr<llvm::MemoryBuffer, std::default_delete<llvm::MemoryBuffer> >, bool, bool, bool, bool, mlir::PassPipelineCLParser const&, mlir::DialectRegistry&) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/Support/MlirOptMain.cpp:109:12
 #21 0x00007f6080559ff5 mlir::MlirOptMain(llvm::raw_ostream&, std::unique_ptr<llvm::MemoryBuffer, std::default_delete<llvm::MemoryBuffer> >, mlir::PassPipelineCLParser const&, mlir::DialectRegistry&, bool, bool, bool, bool, bool) /usr/local/google/home/saeta/src/mlir-npcomp/external/llvm-project/mlir/lib/Support/MlirOptMain.cpp:146:10
 #22 0x000000000040d2ef main /usr/local/google/home/saeta/src/mlir-npcomp/build/../tools/npcomp-opt/npcomp-opt.cpp:91:14
 #23 0x00007f607b688d0a __libc_start_main ./csu/../csu/libc-start.c:308:16
 #24 0x000000000040ceca _start (/usr/local/google/home/saeta/src/mlir-npcomp/build/bin/npcomp-opt+0x40ceca)
Segmentation fault
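The __rd_create/__rd_next functions in the log above implement a simple stateful iterator protocol: create initializes (current, end) state behind a pointer, and next returns a (valid, value) pair while advancing the cursor. A minimal Python analogue of that protocol, for orientation (the function names mirror the log; the dict-based state is a stand-in for the alloca'd struct):

```python
# Python analogue of the iterator protocol in the lowered rd.range output:
# a create function that initializes state, and a next function that
# returns (valid, value) and advances the cursor.

def rd_create(start, end):
    # Mirrors __rd_create_*: initialize the (current, end) iterator state.
    return {"current": start, "end": end}

def rd_next(state):
    # Mirrors __rd_next_*: valid is "current != end"; return the current
    # value and advance the cursor by one.
    value = state["current"]
    valid = value != state["end"]
    state["current"] = value + 1
    return valid, value

state = rd_create(1, 3)
values = []
valid, value = rd_next(state)
while valid:
    values.append(value)
    valid, value = rd_next(state)
print(values)  # [1, 2]
```

Note that the lowered LLVM IR in the log does not quite match this yet (e.g. the llvm.store operand order and types look off), which is consistent with the commit's "snapshot of progress" framing.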
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
* Allow importing variadic inputs/outputs of onnx operators

* Enable testcases for variadic ops

* Modify gen_doc.py
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
* fix issue #15 and #16

* fix format

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
JianzheXiao pushed a commit to JianzheXiao/torch-mlir that referenced this pull request Aug 4, 2023
# This is the 1st commit message:

[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op (#2340)

[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.

# This is the commit message #2:

update PyTorch version to 2.1.0.dev20230729 (#2354)

- torch version: 2.1.0.dev20230729
- torch commit hash: b638df0afb83572724032c824c64e481bb4499a0
- torchvision version: 0.16.0.dev20230729

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

# This is the commit message #3:

update PyTorch version to 2.1.0.dev20230730 (#2356)

- torch version: 2.1.0.dev20230730
- torch commit hash: 0ff243ff350268cc98fe03fa6364375ee2824742
- torchvision version: 0.16.0.dev20230730

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

# This is the commit message #4:

update PyTorch version to 2.1.0.dev20230731 (#2359)

- torch version: 2.1.0.dev20230731
- torch commit hash: 6298ac688f8caafe30d71ff2ea2e20fbb32065c7
- torchvision version: 0.16.0.dev20230731

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

# This is the commit message #5:

LTC->MLIR Debug Info support (#1922)

* LTC->MLIR Debug Info support

* SW-95317 Propagate Lazy->Jit->MLIR scope name.

* Enhance location information based on op names

Currently, the location information attached to the ops just considers
the filename, line number and column number. Attaching operation name
would help identify the type of computation by just looking at the
profile of execution.

* Update locations logic; updated debug-info.py test

* Use {scope}/{op_name} format to track names by default

---------

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>
# This is the commit message #6:

build: update llvm tag to 41895843

Summary of changes:
- Update tags
  llvm: 41895843b5915bb78e9d02aa711fa10f7174db43
  mhlo: 4726d31f7025da66de0dea709bd56c462edb83c2

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>

# This is the commit message #7:

update PyTorch version to 2.1.0.dev20230802 (#2366)

- torch version: 2.1.0.dev20230802
- torch commit hash: c89b16917755c2abbef7b6420e340baf9ae8089e
- torchvision version: 0.16.0.dev20230802

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

# This is the commit message #8:

Change Python version from 3.10 to 3.11 in installation instructions (#2370)

# This is the commit message #9:

Add CITATION file (#2371)

# This is the commit message #10:

Add packaging as an install dependency (#2369)

Needed by `torch_mlir._version`. Resolves #2368.

# This is the commit message #11:

[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)

* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op

# This is the commit message #12:

update PyTorch version to 2.1.0.dev20230803 (#2372)

- torch version: 2.1.0.dev20230803
- torch commit hash: f89c73be3a3e8274d025ac46a33a780853841c9e
- torchvision version: 0.16.0.dev20230803

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

# This is the commit message #13:

Prevent failed stable CI job from cancelling nightly jobs (#2373)

The CI jobs that use stable PyTorch are currently not required to pass
in order for a patch to get merged in `main`. This commit makes sure
that if a CI job for stable PyTorch fails, it does not cancel the
other required jobs.

# This is the commit message #14:

[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355)

# This is the commit message #15:

update

# This is the commit message #16:

update xfail sets

# This is the commit message #17:

update xfail_sets

# This is the commit message #18:

update

# This is the commit message #19:

fix xfail_sets

# This is the commit message #20:

update:

# This is the commit message #21:

update

# This is the commit message #22:

update:
@renxida mentioned this pull request Mar 1, 2024
4 participants